This repository contains Python scripts and implementations that build a neural network from scratch. The goal is to understand core concepts such as forward propagation, backpropagation, optimization, loss computation and dropout without relying on external libraries.
This repository offers step-by-step code examples for constructing a neural network, starting from basic concepts and gradually advancing to a complete implementation. It is designed to help both beginners and advanced learners gain a deep understanding of neural networks.
The journey begins with simple concepts like activation functions and progresses through forward and backpropagation, optimization techniques, and concludes with a fully functional neural network capable of handling multi-class classification.
To make the best use of this repository, follow the sequence below:
basic1.py: Introduction to neural network components and basic operations.
basic2.py: Expanding on basic operations and introducing fundamental neural network mechanics.
basic3.py: Further examples and exercises to solidify the foundational concepts.
Softmax_Activation.py: Understand how the softmax activation function works for multi-class classification.
Categorical_Cross_Entropy_Loss.py: Learn how the categorical cross-entropy loss is computed for classification problems.
Forward_Propagation.py: See how input data flows through the network, including the calculations of weighted sums, biases, and activation functions.
Adam_Optimizer.py: Explore the Adam optimization algorithm used to efficiently update weights during training.
Dropout.py: Learn how dropout, a regularization technique, is used in neural networks to prevent overfitting.
Neural_Network.py: The culmination of the entire process, this script contains a complete implementation of a neural network capable of handling multi-class classification tasks.
train_iris.py: A practical example using the Iris dataset to train the neural network and demonstrate its functionality.
- Input Layer: Accepts raw data and passes it to the network.
- Hidden Layers: Intermediate layers where computations occur. Each layer consists of neurons interconnected by weights.
- Output Layer: Produces the final predictions or classifications.
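
Concretely, each layer can be represented as a matrix of weights and a vector of biases. The sketch below is only an illustration (class and variable names are not taken from the repository scripts, and the layer sizes are chosen to match the Iris example: 4 input features, 3 output classes):

```python
import numpy as np

# Minimal sketch of a dense (fully connected) layer as a Python class.
class DenseLayer:
    def __init__(self, n_inputs, n_neurons):
        # Small random weights (one column per neuron) and zero biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        # Weighted sum of inputs plus bias for every neuron in the layer
        self.output = np.dot(inputs, self.weights) + self.biases
        return self.output

# A tiny network: 4 input features -> 8 hidden neurons -> 3 output classes
hidden_layer = DenseLayer(4, 8)
output_layer = DenseLayer(8, 3)
```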
In forward propagation, data flows through the network from the input layer to the output layer:
- Calculate the weighted sum for each neuron: z = inputs * weights + bias
- Apply the activation function:
  - Use ReLU or its variants in the hidden layers.
  - Use Softmax for multi-class classification in the output layer.
  - Use Sigmoid for binary classification problems.
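
Putting these steps together, a minimal forward pass might look like the sketch below. This is a hedged illustration with made-up values; the repository's actual version lives in Forward_Propagation.py and may differ in detail:

```python
import numpy as np

np.random.seed(0)

# A batch of 2 samples with 3 features each (values are made up)
inputs = np.array([[1.0, 2.0, 3.0],
                   [2.0, 5.0, -1.0]])

# Hidden layer: 3 inputs -> 4 neurons
w1 = 0.01 * np.random.randn(3, 4)
b1 = np.zeros((1, 4))
z1 = np.dot(inputs, w1) + b1          # z = inputs * weights + bias
a1 = np.maximum(0, z1)                # ReLU in the hidden layer

# Output layer: 4 neurons -> 3 classes
w2 = 0.01 * np.random.randn(4, 3)
b2 = np.zeros((1, 3))
z2 = np.dot(a1, w2) + b2

# Softmax in the output layer (subtract the row max for numerical stability)
exp_z = np.exp(z2 - np.max(z2, axis=1, keepdims=True))
probabilities = exp_z / np.sum(exp_z, axis=1, keepdims=True)
print(probabilities)                  # each row sums to 1
```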
The choice of loss function depends on the problem type:

- Regression: Use loss functions like MSE, MAE, RMSE, or Huber loss.
- Binary Classification: Use binary cross-entropy loss.
- Multi-class Classification: Use categorical cross-entropy loss.
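
For multi-class classification, categorical cross-entropy takes the negative log of the probability the network assigned to the correct class. Below is a minimal sketch with made-up values; see Categorical_Cross_Entropy_Loss.py for the repository's version:

```python
import numpy as np

# Softmax outputs for a batch of 3 samples over 3 classes (made-up values)
softmax_outputs = np.array([[0.70, 0.20, 0.10],
                            [0.10, 0.50, 0.40],
                            [0.02, 0.90, 0.08]])
class_targets = np.array([0, 1, 1])   # correct class index for each sample

# Clip to avoid log(0), then pick the probability of the correct class
clipped = np.clip(softmax_outputs, 1e-7, 1 - 1e-7)
correct_confidences = clipped[np.arange(len(clipped)), class_targets]

# Loss per sample is -log(p_correct); report the batch mean
losses = -np.log(correct_confidences)
print(losses.mean())
```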
Backpropagation updates the weights in the network by minimizing the error (loss):
- Compute the gradient of the loss function with respect to the weights and biases.
- Update the weights and biases using an optimizer such as Adam.
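
As a rough sketch, one Adam step keeps running estimates of the gradient mean and variance and uses their bias-corrected values to scale the update. The hyperparameter defaults below follow the original Adam paper; Adam_Optimizer.py may differ in detail:

```python
import numpy as np

def adam_update(param, grad, m, v, t,
                lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    m = beta1 * m + (1 - beta1) * grad        # first moment (running mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2     # second moment (running mean of squared gradients)
    m_hat = m / (1 - beta1**t)                # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: keep m, v, and the step counter t between updates
weights = np.zeros((3, 4))
grad = 0.5 * np.ones((3, 4))                  # stand-in gradient for illustration
m = np.zeros_like(weights)
v = np.zeros_like(weights)
weights, m, v = adam_update(weights, grad, m, v, t=1)
```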
Dropout is a regularization technique used to prevent overfitting in neural networks. It works by randomly setting a fraction of input units to zero during training, forcing the network to learn more robust features and dependencies.
For example, with a dropout rate of 0.2, roughly 20% of the neurons are ignored in each training iteration.
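
Below is a minimal sketch of (inverted) dropout with a rate of 0.2, where the surviving activations are rescaled so the expected output stays the same. It is illustrative only; see Dropout.py for the repository's implementation:

```python
import numpy as np

dropout_rate = 0.2
activations = np.array([[0.5, 1.2, 0.0, 2.3, 0.7]])   # made-up layer outputs

# Binomial mask: 1 with probability (1 - rate), 0 with probability rate
mask = np.random.binomial(1, 1 - dropout_rate, size=activations.shape)

# Zero out ~20% of activations and scale the rest (training-time behaviour)
dropped = activations * mask / (1 - dropout_rate)
print(dropped)
# At inference time dropout is disabled and the activations are used as-is.
```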
A special thanks to the following resources and individuals for their invaluable contributions to the understanding of neural networks:
- Krish Naik for his insightful tutorials.
- Harrison Kinsley (Sentdex) for practical coding examples.
- Resources from AIML.com, Medium, GeeksforGeeks, and many more.
