๐—”๐—ก๐—ก ๐—ฎ๐—ป๐—ฑ ๐—–๐—ก๐—ก ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜€๐—ฐ๐—ฟ๐—ฎ๐˜๐—ฐ๐—ต | ๐—ก๐—ผ ๐—ณ๐—ฟ๐—ฎ๐—บ๐—ฒ๐˜„๐—ผ๐—ฟ๐—ธ๐˜€, ๐—ท๐˜‚๐˜€๐˜ ๐—ฝ๐˜‚๐—ฟ๐—ฒ ๐—ก๐˜‚๐—บ๐—ฝ๐˜†

Neural Networks From Scratch

This repository contains compact, educational implementations of neural network building blocks written from scratch in NumPy. The included Python modules provide simple layer, activation, loss, and optimizer classes so you can build, train, and experiment with feedforward and convolutional networks without using high-level frameworks.

Files

  • ANN_v1.py: Lightweight feedforward (fully connected) neural network building blocks.

    • Layers: DenseLayer
    • Activations: ReLu, TanH, Sigmoid, SoftMax
    • Losses: MeanSquaredError, CategoricalCrossEntropy
    • Output helper: CategoricalOutput
    • Optimizer: Optimizer_SGD (learning rate, decay, momentum; see the update-rule sketch after this list)
  • CNN_v1.py: Convolutional neural network primitives and pooling utilities.

    • Convolution: ConvLayer (forward + backward; supports padding and stride)
    • Pooling: Max_Pool, Min_Pool, Avg_Pool (forward + backward)
    • Flattening: Flat
    • Dense + activation + losses + optimizer (same API names as in ANN_v1.py)
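
The optimizer's parameters (learning rate, decay, momentum) follow the usual SGD recipe. Below is a minimal sketch of what one such update step typically does, assuming layers expose weights, biases, dweights, and dbiases; the exact internals of Optimizer_SGD in ANN_v1.py may differ, and names like weight_momentums are illustrative:

import numpy as np

def sgd_step(layer, lr=0.1, decay=0.0, momentum=0.0, iterations=0):
    """One SGD update for a layer exposing weights, biases, dweights, dbiases."""
    current_lr = lr / (1.0 + decay * iterations) if decay else lr   # 1/t learning-rate decay
    if momentum:
        # keep a running velocity on the layer (attribute names are illustrative)
        if not hasattr(layer, "weight_momentums"):
            layer.weight_momentums = np.zeros_like(layer.weights)
            layer.bias_momentums = np.zeros_like(layer.biases)
        layer.weight_momentums = momentum * layer.weight_momentums - current_lr * layer.dweights
        layer.bias_momentums = momentum * layer.bias_momentums - current_lr * layer.dbiases
        layer.weights += layer.weight_momentums
        layer.biases += layer.bias_momentums
    else:
        # plain SGD step
        layer.weights -= current_lr * layer.dweights
        layer.biases -= current_lr * layer.dbiases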

Notebooks (Table of Contents)

The repository includes 12 Jupyter notebooks that walk through concepts and implementations step by step. Below is a concise table of contents listing each notebook file:

  • 1-Basics.ipynb: Basics
  • 2-Layers.ipynb: Layers
  • 3-SGD-Optimizers.ipynb: SGD Optimizers
  • 4-Adaptive-Optimizer.ipynb: Adaptive Optimizer
  • 5-Regularizer-Dropout.ipynb: Regularizer & Dropout
  • 6-Convolution-Basics.ipynb: Convolution Basics
  • 7-Conv-Layer.ipynb: Conv Layer
  • 8-Pooling+Activation.ipynb: Pooling & Activation
  • 9-Backpropagation.ipynb: Backpropagation
  • 10-LeNet-Pytorch.ipynb: LeNet (PyTorch)
  • 11-MyLeNet.ipynb: My LeNet
  • 12-RNN-Cell.ipynb: RNN Cell

Quick start examples

  • Minimal feedforward training loop (sketch):
import numpy as np
from ANN_v1 import DenseLayer, ReLu, SoftMax, CategoricalOutput, Optimizer_SGD

# build a tiny network
layer1 = DenseLayer(input_size=784, neurons=128)
act1 = ReLu()
layer2 = DenseLayer(input_size=128, neurons=10)
output = CategoricalOutput()

opt = Optimizer_SGD(learning_rate=0.1, decay=1e-3, momentum=0.9)

# X: shape (n_samples, 784), y: integer labels shape (n_samples,)
# illustrative random data so the sketch runs end to end; replace with a real dataset
X = np.random.randn(100, 784)
y = np.random.randint(0, 10, size=100)
for epoch in range(10):
    # forward
    a1 = layer1.forward(X)
    a1 = act1.forward(a1)
    a2 = layer2.forward(a1)
    preds = output.forward(a2)

    # loss (data loss)
    loss = output.calculate(y)

    # backward
    dinputs = output.backward(preds, y)
    dinputs = layer2.backward(dinputs)
    dinputs = act1.backward(dinputs)
    dinputs = layer1.backward(dinputs)

    # update params
    opt.pre_update_params()
    opt.update_params(layer1)
    opt.update_params(layer2)
    opt.post_update_params()

    print(f"Epoch {epoch}, loss={loss:.4f}")
  • Using CNN_v1.py primitives (forward inference sketch):
import numpy as np
from CNN_v1 import ConvLayer, Max_Pool, Flat, DenseLayer, ReLu, CategoricalOutput

conv = ConvLayer(Kx=3, Ky=3, Kn=8)
pool = Max_Pool()
flat = Flat()
fc = DenseLayer(input_size=???, neurons=10)   # compute flattened size after conv+pool (see the sizing sketch below)

# M shape expected by ConvLayer: (x, y, channels, n_images)
M = np.random.randn(28, 28, 1, 4)   # illustrative batch: four 28x28 single-channel images
features = conv.forward(M, pad=1, stride=1)
pooled = pool.forward(features, stride=2, K=2)
f = flat.forward(pooled)
out = fc.forward(f)
preds = CategoricalOutput().forward(out)
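
The flattened size marked ??? above depends on the spatial dimensions coming out of the conv and pooling stages. A minimal sizing sketch, assuming ConvLayer and Max_Pool follow the standard (n + 2*pad - K) // stride + 1 output-size rule (check the module code to confirm):

def conv_out_size(n, K, pad=0, stride=1):
    # standard output-size rule for a K-wide kernel along one spatial dimension
    return (n + 2 * pad - K) // stride + 1

# example: 28x28 input, 3x3 conv with pad=1, stride=1, then 2x2 max pool with stride=2
h = conv_out_size(28, K=3, pad=1, stride=1)   # 28
h = conv_out_size(h, K=2, pad=0, stride=2)    # 14
flattened_size = h * h * 8                    # 8 feature maps (Kn=8) -> input_size = 1568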

Notes and usage tips

  • The implementations are intentionally simple and educational; they prioritize clarity over performance.
  • Shapes: many custom functions expect specific array shapes (e.g., conv takes 4D inputs (x, y, channels, n_images)). Check the module docstrings and inline code comments when adapting inputs.
  • Backpropagation: each layer implements forward and backward and stores gradients in attributes like dweights, dbiases, and dinputs for use by the optimizer.
  • Optimizer: Optimizer_SGD expects layers with dweights and dbiases attributes; it supports momentum and learning-rate decay.
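
A minimal sketch of a layer that satisfies this contract, assuming forward returns the layer output and backward returns dinputs while storing the parameter gradients (the exact return conventions in ANN_v1.py / CNN_v1.py may differ):

import numpy as np

class MyDense:
    """Sketch of the contract Optimizer_SGD relies on: forward/backward plus dweights, dbiases, dinputs."""
    def __init__(self, input_size, neurons):
        self.weights = 0.01 * np.random.randn(input_size, neurons)
        self.biases = np.zeros((1, neurons))

    def forward(self, inputs):
        self.inputs = inputs                      # cached for the backward pass
        return inputs @ self.weights + self.biases

    def backward(self, dvalues):
        # parameter gradients the optimizer reads
        self.dweights = self.inputs.T @ dvalues
        self.dbiases = np.sum(dvalues, axis=0, keepdims=True)
        # gradient handed to the previous layer
        self.dinputs = dvalues @ self.weights.T
        return self.dinputs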

Contributing

  • This project is for learning and experimentation. If you improve the docs or extend the implementations with RNN or LSTM layers, feel free to make changes and open a PR for review.

License

  • Use and adapt for educational purposes. No license file included.

Contact

  • If you have questions about the code, open an issue.
