This repository is the companion GitHub project for the book: Ahmed Fawzy Gad & Fatima Ezzahra Jarmouni, Introduction to Deep Learning and Neural Networks with Python™: A Practical Guide, 2020, ISBN 978-0323909334.
Introduction to Deep Learning and Neural Networks with Python™: A Practical Guide is an intensive, step-by-step guide for neuroscientists to fully understand, practice, and build neural networks. It provides math and Python™ code examples to clarify neural network calculations; by the book’s end, readers will fully understand how neural networks work, starting from the simplest model, Y=X, and building from scratch. The book details how a generic gradient descent algorithm works, with mathematical and Python™ examples, and teaches readers to use the gradient descent algorithm to manually perform all calculations in both the forward and backward passes of training a neural network.
- Examines the practical side of deep learning and neural networks
- Provides a problem-based approach to building artificial neural networks using real data
- Describes Python™ functions and features for neuroscientists
- Uses a careful tutorial approach to describe implementation of neural networks in Python™
- Features math and code examples (via companion website) with helpful instructions for easy implementation
The book covers the following topics:

- Preparing the Development Environment
- Introduction to ANN
- ANN with 1 Input and 1 Output
- Working with Any Number of Inputs
- Working with Hidden Layers
- Using Any Number of Hidden Neurons
- ANN with 2 Hidden Layers
- ANN with 3 Hidden Layers
- Any Number of Hidden Layers
- Generic ANN
- Deploying Neural Network to Mobile Devices
Here is a quick summary of the project implementation progress across the chapters:
- Ch03 builds and trains the simplest neural network, with just a single input neuron and a single output neuron. The network has no hidden layers, and there is just a single training sample.
- Ch04 extends the Ch03 implementation to allow the network to work with any number of inputs.
- Ch05 introduces a single hidden layer with just 2 hidden neurons.
- Ch06 differs from Ch05 only in supporting any number of hidden neurons within a single hidden layer.
- Ch07 uses 2 hidden layers with any number of hidden neurons.
- Ch08 adds an additional hidden layer to Ch07, so that there are 3 hidden layers with any number of hidden neurons.
- Ch09 is a generic implementation that allows the network to work with any number of hidden layers and any number of neurons within those layers. After the network is trained, all network parameters are saved. These parameters can be loaded later for making predictions.
- Ch10 generalizes further: the network can work with any number of samples and any number of outputs, and a bias term is introduced. Moreover, both batch and stochastic gradient descent are supported.
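To give a flavor of the Ch03 starting point, here is a minimal sketch (my own illustration, not the book's actual code) of a 1-input, 1-output network with no hidden layers, trained on a single sample with plain gradient descent; the initial weight and hyperparameter values are arbitrary:

```python
# Sketch of the simplest network: one input, one output, no hidden layers,
# no bias, trained on a single (input, target) sample with gradient descent.
def train(x, target, learning_rate=0.01, epochs=1000):
    w = 0.1  # arbitrary initial weight (hypothetical value)
    for _ in range(epochs):
        prediction = w * x                    # forward pass: y = w * x
        grad = 2 * (prediction - target) * x  # d(squared error)/dw via the chain rule
        w -= learning_rate * grad             # backward pass: weight update
    return w

w = train(x=2.0, target=1.0)
print(round(w * 2.0, 3))  # prediction converges to the target: 1.0
```

The later chapters grow this same forward-pass/backward-pass loop into networks with hidden layers, multiple neurons, and eventually a fully generic architecture.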
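The batch versus stochastic distinction mentioned for Ch10 can be sketched as follows (a generic illustration using the same y = w * x toy model, not the book's actual code): batch gradient descent averages the gradients over all samples before making one update, while stochastic gradient descent updates after every individual sample:

```python
# Hypothetical sketch contrasting batch and stochastic gradient descent
# for the toy model y = w * x over several (input, target) samples.
def batch_step(w, samples, lr):
    # Average the gradient over all samples, then update once.
    grad = sum(2 * (w * x - t) * x for x, t in samples) / len(samples)
    return w - lr * grad

def stochastic_step(w, samples, lr):
    # Update the weight after every individual sample.
    for x, t in samples:
        w -= lr * 2 * (w * x - t) * x
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # all consistent with w = 2
w = 0.0
for _ in range(200):
    w = batch_step(w, samples, lr=0.01)
print(round(w, 3))  # approaches 2.0
```

Both variants converge here; in practice, batch updates are smoother while stochastic updates are noisier but cheaper per step.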