Essential PySpark for Scalable Data Analytics: A beginner's guide to harnessing the power and ease of PySpark 3
| Format | Price | New from | Used from |
| --- | --- | --- | --- |
| Kindle | $28.49 (Read with Our Free App) | | |
| Paperback | $46.99 | 10 from $46.99 | 3 from $56.83 |
Get started with distributed computing using PySpark, a single unified framework to solve end-to-end data analytics at scale
Key Features
- Discover how to convert huge amounts of raw data into meaningful and actionable insights
- Use Spark's unified analytics engine for end-to-end analytics, from data preparation to predictive analytics
- Perform data ingestion, cleansing, and integration for ML, data analytics, and data visualization
Book Description
Apache Spark is a unified data analytics engine designed to process huge volumes of data quickly and efficiently. PySpark is Apache Spark's Python language API, which offers Python developers an easy-to-use scalable data analytics framework.
Essential PySpark for Scalable Data Analytics starts by exploring the distributed computing paradigm and provides a high-level overview of Apache Spark. You'll begin your analytics journey with the data engineering process, learning how to perform data ingestion, cleansing, and integration at scale. The book then shows you how to build real-time analytics pipelines that deliver insights faster. You'll discover methods for building cloud-based data lakes and explore Delta Lake, which brings reliability to data lakes. The book also covers the Data Lakehouse, an emerging paradigm that combines the structure and performance of a data warehouse with the scalability of cloud-based data lakes. Later, you'll perform scalable data science and machine learning tasks using PySpark, such as data preparation, feature engineering, and model training and productionization. Finally, you'll learn ways to scale out standard Python ML libraries, along with a new pandas API on top of PySpark called Koalas.
By the end of this PySpark book, you'll be able to harness the power of PySpark to solve business problems.
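To make the workflow in the description concrete, here is a minimal sketch (not taken from the book; the file path and column names are hypothetical) of ingesting, cleansing, and aggregating data with PySpark, then switching to the pandas API on Spark, the project into which Koalas was merged as of Spark 3.2:

```python
# Minimal PySpark sketch of the ingest -> cleanse -> aggregate flow described
# above. The file path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("essential-pyspark-sketch").getOrCreate()

# Ingest: read raw CSV data, letting Spark infer the schema.
raw = spark.read.csv("/data/retail/transactions.csv", header=True, inferSchema=True)

# Cleanse: drop duplicates and rows missing the key column; normalize a string column.
clean = (raw.dropDuplicates()
            .dropna(subset=["order_id"])
            .withColumn("country", F.upper(F.col("country"))))

# Aggregate at scale with the DataFrame API.
revenue = clean.groupBy("country").agg(F.sum("amount").alias("total_revenue"))
revenue.show()

# pandas API on Spark (formerly Koalas, Spark 3.2+): a pandas-style
# interface over the same distributed data.
pdf = clean.pandas_api()
print(pdf["amount"].describe())
```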
What you will learn
- Understand the role of distributed computing in the world of big data
- Gain an appreciation for Apache Spark as the de facto standard for big data processing
- Scale out your data analytics process using Apache Spark
- Build data pipelines using data lakes, and perform data visualization with PySpark and Spark SQL
- Leverage the cloud to build truly scalable and real-time data analytics applications
- Explore the applications of data science and scalable machine learning with PySpark
- Integrate your clean and curated data with BI and SQL analysis tools (see the Spark SQL sketch after this list)
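As a quick illustration of the last two points, here is a hedged sketch (the table and column names are hypothetical) of registering a curated DataFrame as a view and querying it with Spark SQL, the same interface BI tools can reach over JDBC/ODBC via the Spark Thrift Server:

```python
# Sketch: expose a curated DataFrame to SQL analysis. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Stand-in for a cleansed, curated DataFrame produced upstream.
revenue = spark.createDataFrame(
    [("US", 1200.0), ("DE", 830.5), ("IN", 990.0)],
    ["country", "total_revenue"],
)
revenue.createOrReplaceTempView("revenue")

# The same SQL a BI tool would issue can be run directly with Spark SQL.
spark.sql("""
    SELECT country, total_revenue
    FROM revenue
    ORDER BY total_revenue DESC
    LIMIT 10
""").show()
```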
Who this book is for
This book is for practicing data engineers, data scientists, data analysts, and data enthusiasts who are already using data analytics and want to explore distributed and scalable approaches. Basic to intermediate knowledge of data engineering, data science, and SQL analytics is expected. General proficiency in a programming language, especially Python, and working knowledge of performing data analytics using frameworks such as pandas and SQL will help you get the most out of this book.
Table of Contents
- Distributed Computing Primer
- Data Ingestion
- Data Cleansing and Integration
- Real-time Data Analytics
- Scalable Machine Learning with PySpark
- Feature Engineering – Extraction, Transformation, and Selection
- Supervised Machine Learning
- Unsupervised Machine Learning
- Machine Learning Life Cycle Management
- Scaling Out Single-Node Machine Learning Using PySpark
- Data Visualization with PySpark
- Spark SQL Primer
- Integrating External Tools with Spark SQL
- The Data Lakehouse
Editorial Reviews
Review
"Essential PySpark for Scalable Data Analytics is an outstanding book. It’s tailored to the needs of beginners as well as experienced developers and covers all data pipelines (ingestion, cleansing, and so on). It will help you improve your Spark skills, especially structured streaming and optimization. The explanations are clear enough and the examples are very helpful for understanding the targeted use cases."
--Youssef Mrini, Customer Success Engineer, Databricks
About the Author
Sreeram Nudurupati is a data analytics professional with years of experience in designing and optimizing data analytics pipelines at scale. He has a history of helping enterprises, as well as digital natives, build optimized analytics pipelines by drawing on knowledge of the organization, its infrastructure environment, and current technologies.
Product details
- Publisher : Packt Publishing (October 29, 2021)
- Language : English
- Paperback : 322 pages
- ISBN-10 : 1800568878
- ISBN-13 : 978-1800568877
- Item Weight : 1.23 pounds
- Dimensions : 7.5 x 0.73 x 9.25 inches
- Best Sellers Rank: #242,217 in Books
- #108 in Data Modeling & Design (Books)
- #169 in Data Processing
- #382 in Artificial Intelligence & Semantics
About the author

I am a seasoned data analytics professional with experience in designing and optimizing data analytics pipelines at scale. I have a history of helping enterprises, as well as digital natives, build optimized analytics pipelines by leveraging the knowledge of the organization, infrastructure environment, and current technologies.
I have a master's degree in Computer and Information Sciences and more than thirteen years of experience building scalable data analytics pipelines and managing data analytics projects for businesses across the globe.
My expertise includes data engineering, data science, and big data analytics, and my career has encompassed nearly every aspect of data analytics.
Customer reviews
Top reviews from the United States
Highly recommended!
It is very detailed, with lots of code samples, and all the code is on GitHub. Databricks Spark clusters are used for executing the code provided in the book, but the same code can be run on any Spark cluster running Spark 3.0 or higher. The book is well organized around the different components of Spark, e.g. the introduction, structured APIs, streaming, optimizations, data lakes, the ML life cycle using MLflow, and deployment options. The ML section is thorough and covers feature engineering, supervised and unsupervised machine learning, and ML life cycle management. There is also coverage of how to connect to different applications like Tableau and Thrift.
Overall, the book contains solid information on the inner workings of Spark. I would recommend giving this book a read!
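For readers curious about the MLflow-based life cycle management the reviewer mentions, a minimal, hypothetical tracking sketch looks like this (the run name, parameter, and metric are illustrative, not from the book):

```python
# Minimal MLflow tracking sketch; requires the mlflow package.
# Run name, parameter, and metric values are illustrative only.
import mlflow

with mlflow.start_run(run_name="example-logistic-regression"):
    mlflow.log_param("regParam", 0.01)       # hypothetical hyperparameter
    mlflow.log_metric("areaUnderROC", 0.93)  # hypothetical evaluation metric
```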