Deep Learning from the Basics: Python and Deep Learning: Theory and Implementation
|  | Price | New from | Used from |
| --- | --- | --- | --- |
| Kindle | $17.19 (Read with Our Free App) |  |  |
| Paperback | $39.99 | 9 from $39.99 | 2 from $61.41 |
Discover ways to implement various deep learning algorithms by leveraging Python and other technologies
Key Features
- Learn deep learning models through several activities
- Begin with simple machine learning problems, and finish by building a complex system of your own
- Teach your machines to see by mastering the technologies required for image recognition
Book Description
Deep learning is rapidly becoming the preferred approach to solving data problems, thanks in part to its huge variety of mathematical algorithms and their ability to find patterns that are otherwise invisible to us.
Deep Learning from the Basics begins with a fast-paced introduction to deep learning with Python: its definition, characteristics, and applications. You'll learn how to use the Python interpreter and script files in your applications, and how to utilize NumPy and Matplotlib in your deep learning models. As you progress through the book, you'll discover backpropagation―an efficient way to calculate the gradients of weight parameters―and study multilayer perceptrons and their limitations, before finally implementing a three-layer neural network and working with multidimensional arrays.
By the end of the book, you'll have the knowledge to apply the relevant technologies in deep learning.
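As a rough illustration of the kind of three-layer network the description mentions, here is a minimal NumPy forward pass (the layer sizes, sigmoid activations, and identity output layer are illustrative assumptions, not the book's exact code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    # Layer 1: affine transform followed by sigmoid activation
    z1 = sigmoid(x @ params["W1"] + params["b1"])
    # Layer 2: same pattern
    z2 = sigmoid(z1 @ params["W2"] + params["b2"])
    # Output layer: identity activation
    return z2 @ params["W3"] + params["b3"]

rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((2, 3)), "b1": np.zeros(3),
    "W2": rng.standard_normal((3, 2)), "b2": np.zeros(2),
    "W3": rng.standard_normal((2, 1)), "b3": np.zeros(1),
}
y = forward(np.array([[1.0, 0.5]]), params)
print(y.shape)  # one input row in, one output value per row out
```

The multidimensional-array work the description refers to is exactly this kind of matrix multiplication between batches of inputs and weight matrices.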
What you will learn
- Use Python with minimum external sources to implement deep learning programs
- Study the various deep learning and neural network theories
- Learn how to determine learning coefficients and the initial values of weights
- Implement techniques such as Batch Normalization, Dropout, and Adam
- Explore applications such as autonomous driving, image generation, and reinforcement learning
Who this book is for
Deep Learning from the Basics is designed for data scientists, data analysts, and developers who want to use deep learning techniques to develop efficient solutions. This book is ideal for those who want a deeper understanding as well as an overview of the technologies. Some working knowledge of Python is a must. Knowledge of NumPy and pandas will be beneficial, but not essential.
Table of Contents
- Introduction to Python
- Perceptrons
- Neural Networks
- Neural Network Training
- Backpropagation
- Training Techniques
- Convolutional Neural Networks
- Deep Learning
Editorial Reviews
About the Author
Koki Saitoh was born in Tsushima, Nagasaki, in 1984. He graduated from the engineering department of the Tokyo Institute of Technology and completed a master's course at the Graduate School of Interdisciplinary Information Studies at the University of Tokyo. He currently conducts research and development in computer vision and machine learning. He has worked on Python 3 in Practice, The Elements of Computing Systems, and Building Machine Learning Systems with Python, whose Japanese translations are published by O'Reilly Japan.
Product details
- Publisher : Packt Publishing (March 8, 2021)
- Language : English
- Paperback : 316 pages
- ISBN-10 : 1800206135
- ISBN-13 : 978-1800206137
- Item Weight : 1.2 pounds
- Dimensions : 7.5 x 0.72 x 9.25 inches
- Best Sellers Rank: #531,156 in Books
- #184 in Computer Neural Networks
- #651 in Python Programming
- #756 in Artificial Intelligence & Semantics
Customer reviews
Top reviews from the United States
The one criticism I have: it took me a year and a half to truly master Python when I was first introduced to it, and I consider that a necessary first step before taking a technical dive into any machine learning, artificial intelligence, or data science topic. The book covers Python in only 20 pages. From there, no time is spent on general machine learning concepts; instead the book goes right into perceptrons, which are the fundamental building blocks for neural nets. I do wish some time had been spent in between on concepts from statistics, probability, and ML algorithms such as regression, ensemble models, boosting, and the differences between types of ML use cases (unsupervised vs. supervised, etc.).
All in all, I would recommend it if you're a data analyst or scientist who already knows some Python and statistics and feels ready to take on neural nets and deep learning.
One of the best ways to understand something is to make a simplified version yourself. Koki Saitoh guides you through this process with concise explanations that are straight to the point and reinforced with high-quality illustrations and implementations in Python.
Unlike many basics texts, this one doesn't waste time teaching all the prerequisites from the beginning. The relevant prerequisites are covered just as you need them.
After a brief introduction to Python, Numpy and Matplotlib, the text jumps right into its subject matter, beginning with implementing logic gates, neural networks and training them. Backpropagation is explained exceptionally well, with a complete set of diagrams.
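The logic-gate starting point this review mentions can be sketched in a few lines of NumPy. The particular weights and biases below are one common choice for perceptron gates, not necessarily the book's exact values:

```python
import numpy as np

def gate(w, b):
    """Return a two-input perceptron: fires (1) when w.x + b > 0."""
    def f(x1, x2):
        return int(np.dot(np.array([x1, x2]), w) + b > 0)
    return f

AND  = gate(np.array([0.5, 0.5]), -0.7)
OR   = gate(np.array([0.5, 0.5]), -0.2)
NAND = gate(np.array([-0.5, -0.5]), 0.7)

def XOR(x1, x2):
    # A single perceptron cannot represent XOR (it is not linearly
    # separable); stacking gates into a two-layer network can.
    return AND(NAND(x1, x2), OR(x1, x2))

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The XOR construction is the classic demonstration of why multilayer networks are needed at all, which is why such books place it right before neural networks.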
From this point, the text surveys some additional training techniques before covering CNNs. It ends with a survey of other techniques with links to implementations in the book's associated repo.
This book is, for the most part, an excellent choice for self-study and pedagogically well-conceived. However, the lack of exercises is a major impediment to its use for self-study. In a cookbook, it would make sense to leave them out. However, this is not a cookbook, and it wouldn't make sense to use this code in production settings -- you'd use the relevant libraries. Additionally, excessively detailed explanations of some of the simpler concepts and implementations can distract from the main line of argument.
Unlike many serious books that are hard to understand, this book is really enjoyable to read because it clearly explains how the components work together from beginning to end in concise and clear language.
This book does not describe how to use deep learning frameworks such as TensorFlow and PyTorch, nor does it cover transformer-based architectures.
This book is not a cookbook for designing a DL workflow using TensorFlow or PyTorch. It is a detailed explanation of the basics of DL as advertised, describing the math and theories behind how DL works. It accomplishes this using basic implementation of DL principles starting with perceptrons and building to Neural Networks and Deep Learning.
Visuals and scratch Python code using minimal packages or frameworks (primarily numpy) are used to illustrate the principles of DL using the MNIST dataset. Algorithms are designed from scratch with emphasis on the math behind them, and how they work. The focus is definitely on the basics and inner workings of DL.
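As a taste of the from-scratch, math-first style this review describes, here is a minimal numerical-gradient sketch of the kind such texts typically use to verify backpropagation (the toy objective, step size, and learning rate are illustrative assumptions):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-4):
    """Central-difference gradient of f at x, one coordinate at a time."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        orig = x[i]
        x[i] = orig + h
        fxh1 = f(x)
        x[i] = orig - h
        fxh2 = f(x)
        grad[i] = (fxh1 - fxh2) / (2 * h)
        x[i] = orig  # restore the coordinate
    return grad

# Minimize f(x) = x0^2 + x1^2 with plain gradient descent
f = lambda x: np.sum(x ** 2)
x = np.array([3.0, -4.0])
for _ in range(100):
    x -= 0.1 * numerical_gradient(f, x)
print(x)  # both coordinates shrink toward 0
```

The same routine, applied to a network's loss as a function of its weights, gives a slow but trustworthy gradient to compare against the analytic backpropagation result.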
I found Chapter 6 particularly interesting. Important topics are addressed here including discussing how choice of descent optimization algorithm can affect learning rate and how the distribution of activations in each layer affects training. The author reviews different weight initialization techniques, regularization, and briefly discusses hyperparameter tuning.
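For readers curious what that Chapter 6 material looks like in code, here is a minimal sketch of two update rules plus He initialization for a ReLU layer (the class names, hyperparameters, and toy objective are illustrative assumptions, not the book's exact code):

```python
import numpy as np

class SGD:
    """Plain gradient descent: W <- W - lr * dW."""
    def __init__(self, lr=0.1):
        self.lr = lr
    def update(self, params, grads):
        for k in params:
            params[k] -= self.lr * grads[k]

class Momentum:
    """Momentum: v <- momentum * v - lr * dW; W <- W + v."""
    def __init__(self, lr=0.1, momentum=0.9):
        self.lr, self.momentum, self.v = lr, momentum, {}
    def update(self, params, grads):
        for k in params:
            v = self.v.get(k, np.zeros_like(grads[k]))
            self.v[k] = self.momentum * v - self.lr * grads[k]
            params[k] += self.v[k]

# He initialization: scale by sqrt(2 / fan_in), keeping activation
# variance roughly constant across ReLU layers
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 50)) * np.sqrt(2.0 / 100)

# Minimize f(W) = ||W||^2 (gradient is 2W) with each optimizer
for opt in (SGD(), Momentum()):
    params = {"W": W.copy()}
    for _ in range(100):
        opt.update(params, {"W": 2 * params["W"]})
    print(type(opt).__name__, np.abs(params["W"]).max())
```

Seeing how differently these rules traverse even a simple bowl-shaped loss is a good complement to the chapter's discussion of learning rates and activation distributions.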
It helps to have a good understanding of math, particularly basic calculus such as derivatives in addition to vector/matrix math, but the author makes a good attempt to explain principles for someone with less knowledge in these areas. I think this book offers the opportunity for readers to better understand the math and theories behind DL, to help make better choices (or at least understand choices better) when implementing standard DL frameworks such as TensorFlow.
It covers the basics and the maths; it won't give you guidance on how to use fancy libraries or apply DL in easy steps, but it will build your understanding so you can evaluate and explain things better.