This repository contains compact, educational implementations of neural network building blocks written from scratch in NumPy. The included Python modules provide simple layer, activation, loss, and optimizer classes so you can build, train, and experiment with feedforward and convolutional networks without using high-level frameworks.
Files
- `ANN_v1.py`: Lightweight feedforward (fully connected) neural network building blocks.
  - Layers: `DenseLayer`
  - Activations: `ReLu`, `TanH`, `Sigmoid`, `SoftMax` (the usual formulas are sketched right after this list)
  - Losses: `MeanSquaredError`, `CategoricalCrossEntropy`
  - Output helper: `CategoricalOutput`
  - Optimizer: `Optimizer_SGD` (learning rate, decay, momentum)
- `CNN_v1.py`: Convolutional neural network primitives and pooling utilities.
  - Convolution: `ConvLayer` (forward + backward; supports padding and stride)
  - Pooling: `Max_Pool`, `Min_Pool`, `Avg_Pool` (forward + backward)
  - Flattening: `Flat`
  - Dense + activation + losses + optimizer: same API names as in `ANN_v1.py`
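For orientation, the activations listed above implement the standard element-wise formulas. The sketch below shows that math in plain NumPy; it is illustrative only, not the exact code in `ANN_v1.py`, where each activation is a class with `forward`/`backward` methods.

```python
import numpy as np

# Standard activation formulas (reference sketch, not the module's code).
def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # assumes a 2D (n_samples, n_classes) input; subtract the row-wise
    # max for numerical stability before exponentiating
    e = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)
```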
Notebooks (Table of Contents)
The repository includes 12 Jupyter notebooks that walk through concepts and implementations step by step. Below is a concise table of contents listing each notebook:
- `1-Basics.ipynb`: Basics
- `2-Layers.ipynb`: Layers
- `3-SGD-Optimizers.ipynb`: SGD Optimizers
- `4-Adaptive-Optimizer.ipynb`: Adaptive Optimizer
- `5-Regularizer-Dropout.ipynb`: Regularizer & Dropout
- `6-Convolution-Basics.ipynb`: Convolution Basics
- `7-Conv-Layer.ipynb`: Conv Layer
- `8-Pooling+Activation.ipynb`: Pooling & Activation
- `9-Backpropagation.ipynb`: Backpropagation
- `10-LeNet-Pytorch.ipynb`: LeNet (PyTorch)
- `11-MyLeNet.ipynb`: My LeNet
- `12-RNN-Cell.ipynb`: RNN Cell
Quick start examples
- Minimal feedforward training loop (sketch):
```python
import numpy as np
from ANN_v1 import DenseLayer, ReLu, SoftMax, CategoricalOutput, Optimizer_SGD

# build a tiny network
layer1 = DenseLayer(input_size=784, neurons=128)
act1 = ReLu()
layer2 = DenseLayer(input_size=128, neurons=10)
output = CategoricalOutput()
opt = Optimizer_SGD(learning_rate=0.1, decay=1e-3, momentum=0.9)

# X: shape (n_samples, 784), y: integer labels shape (n_samples,)
# placeholder data so the sketch runs; replace with a real dataset
X = np.random.rand(100, 784)
y = np.random.randint(0, 10, size=100)

for epoch in range(10):
    # forward
    a1 = layer1.forward(X)
    a1 = act1.forward(a1)
    a2 = layer2.forward(a1)
    preds = output.forward(a2)

    # loss (data loss)
    loss = output.calculate(y)

    # backward
    dinputs = output.backward(preds, y)
    dinputs = layer2.backward(dinputs)
    dinputs = act1.backward(dinputs)
    dinputs = layer1.backward(dinputs)

    # update params
    opt.pre_update_params()
    opt.update_params(layer1)
    opt.update_params(layer2)
    opt.post_update_params()

    print(f"Epoch {epoch}, loss={loss:.4f}")
```
- Using `CNN_v1.py` primitives (forward inference sketch):
```python
from CNN_v1 import ConvLayer, Max_Pool, Flat, DenseLayer, ReLu, CategoricalOutput

conv = ConvLayer(Kx=3, Ky=3, Kn=8)
pool = Max_Pool()
flat = Flat()
fc = DenseLayer(input_size=???, neurons=10)  # compute flattened size after conv+pool (see the size helper below)

# M shape expected by ConvLayer: (x, y, channels, n_images)
features = conv.forward(M, pad=1, stride=1)
pooled = pool.forward(features, stride=2, K=2)
f = flat.forward(pooled)
out = fc.forward(f)
preds = CategoricalOutput().forward(out)
```
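The `???` above stands for the flattened feature size, which depends on the conv and pool output dimensions. Assuming the standard convolution arithmetic `out = (in + 2*pad - K) // stride + 1` (verify against the conventions actually used in `CNN_v1.py`), it can be estimated like this:

```python
def conv_out_size(size, K, pad=0, stride=1):
    # standard convolution/pooling output-size formula; double-check against
    # CNN_v1.py's own padding and stride conventions
    return (size + 2 * pad - K) // stride + 1

# hypothetical example: 28x28 single-channel images fed through the sketch above
h = conv_out_size(28, K=3, pad=1, stride=1)  # 28 (3x3 conv, pad=1, stride=1)
w = conv_out_size(28, K=3, pad=1, stride=1)  # 28
h = conv_out_size(h, K=2, stride=2)          # 14 (2x2 max pool, stride=2)
w = conv_out_size(w, K=2, stride=2)          # 14
flattened = h * w * 8                        # Kn=8 filters -> 1568 inputs for the DenseLayer
```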
Notes and usage tips
- The implementations are intentionally simple and educational; they prioritize clarity over performance.
- Shapes: many custom functions expect specific array shapes (e.g., the conv layer takes 4D inputs of shape `(x, y, channels, n_images)`). Check the module docstrings and inline code comments when adapting inputs.
- Backpropagation: each layer implements `forward` and `backward` and stores gradients in attributes like `dweights`, `dbiases`, and `dinputs` for use by the optimizer.
- Optimizer: `Optimizer_SGD` expects layers with `dweights` and `dbiases` attributes; it supports momentum and learning-rate decay. A minimal sketch of these two conventions follows this list.
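To make those conventions concrete, here is a minimal, self-contained sketch of a dense layer that stores `dweights`, `dbiases`, and `dinputs`, together with an SGD optimizer that consumes them and supports learning-rate decay and momentum. The names `TinyDense` and `TinySGD` are hypothetical; this illustrates the pattern rather than reproducing the exact code in `ANN_v1.py`.

```python
import numpy as np

class TinyDense:
    """Illustrative dense layer following the dweights/dbiases/dinputs convention."""
    def __init__(self, input_size, neurons):
        self.weights = 0.01 * np.random.randn(input_size, neurons)
        self.biases = np.zeros((1, neurons))

    def forward(self, inputs):
        self.inputs = inputs
        self.output = inputs @ self.weights + self.biases
        return self.output

    def backward(self, dvalues):
        # gradients stored as attributes for the optimizer to consume
        self.dweights = self.inputs.T @ dvalues
        self.dbiases = np.sum(dvalues, axis=0, keepdims=True)
        self.dinputs = dvalues @ self.weights.T
        return self.dinputs


class TinySGD:
    """Illustrative SGD with learning-rate decay and momentum."""
    def __init__(self, learning_rate=0.1, decay=0.0, momentum=0.0):
        self.learning_rate = learning_rate
        self.current_lr = learning_rate
        self.decay = decay
        self.momentum = momentum
        self.iterations = 0

    def pre_update_params(self):
        # 1/t-style decay of the learning rate
        if self.decay:
            self.current_lr = self.learning_rate / (1.0 + self.decay * self.iterations)

    def update_params(self, layer):
        if self.momentum:
            if not hasattr(layer, 'weight_momentums'):
                layer.weight_momentums = np.zeros_like(layer.weights)
                layer.bias_momentums = np.zeros_like(layer.biases)
            weight_updates = self.momentum * layer.weight_momentums - self.current_lr * layer.dweights
            bias_updates = self.momentum * layer.bias_momentums - self.current_lr * layer.dbiases
            layer.weight_momentums = weight_updates
            layer.bias_momentums = bias_updates
        else:
            weight_updates = -self.current_lr * layer.dweights
            bias_updates = -self.current_lr * layer.dbiases
        layer.weights += weight_updates
        layer.biases += bias_updates

    def post_update_params(self):
        self.iterations += 1


# example usage with random data (purely illustrative)
layer = TinyDense(input_size=4, neurons=3)
out = layer.forward(np.random.rand(5, 4))
layer.backward(np.ones_like(out))  # pretend upstream gradient
opt = TinySGD(learning_rate=0.1, decay=1e-3, momentum=0.9)
opt.pre_update_params()
opt.update_params(layer)
opt.post_update_params()
```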
Contributing
- This project is for learning and experimentation. If you improve the docs or extend the implementation with RNN and LSTM layers, feel free to make changes and open a PR for review.
License
- Use and adapt for educational purposes. No license file included.
Contact
- If you have questions about the code, open an issue.