
⚡ TorchLite: Differentiable Computing Framework

ML Infrastructure | Computational Graphs | Backpropagation Algorithm



💼 Executive Summary

Modern AI product development relies heavily on high-level frameworks like PyTorch and TensorFlow. Treating them as "black boxes" without understanding the underlying calculus, however, leads to inefficient model design and makes vanishing/exploding gradients difficult to diagnose.

TorchLite is a custom-built Deep Learning Framework engineered from scratch using raw NumPy. It mimics the architecture of PyTorch, implementing the core Automatic Differentiation (AutoGrad) engine, forward/backward propagation pipelines, and optimization algorithms. It serves as a lightweight, educational engine for demonstrating the mathematical foundations of neural networks.
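
To make the AutoGrad idea concrete, here is a minimal scalar sketch of how such an engine records operations during the forward pass and replays the chain rule in reverse. TorchLite itself operates on NumPy arrays, and its actual classes may look different; the `Value` class and its methods below are illustrative assumptions, not the framework's API.

```python
# Conceptual sketch of an automatic-differentiation node (scalar version).
# TorchLite works on NumPy arrays; this simplified class only illustrates
# the recording-and-replay idea behind an AutoGrad engine.
class Value:
    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data            # forward value
        self.grad = 0.0             # dL/d(this node), filled in by backward()
        self._parents = parents     # nodes this value was computed from
        self._backward = backward_fn

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            # chain rule: d(out)/d(self) = other.data, d(out)/d(other) = self.data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward_fn
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward_fn
        return out

    def backward(self):
        # topological order so each node's gradient is complete before it is used
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# Example: y = w*x + b, so dy/dw = x and dy/db = 1
w, x, b = Value(2.0), Value(3.0), Value(1.0)
y = w * x + b
y.backward()
print(y.data, w.grad, b.grad)   # 7.0 3.0 1.0
```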


❓ The Business Problem

  • Abstraction Opacity: Engineers often treat model training as magic. When a model fails to converge, they lack the low-level intuition to diagnose the math.
  • Overhead: Standard libraries are bloated with legacy code. Understanding the minimum viable requirements for a neural network is crucial for Edge AI and custom hardware implementations.

💡 The Solution: Matrix Operation Engine

I reverse-engineered the core components of a DL library to build a modular, extensible training system. The table below maps each component to its implementation; a simplified sketch of these components follows the table.

| Component | Technical Implementation | PM Value Proposition |
| --- | --- | --- |
| Computational Graph | Forward/Backward Pass | Manually implemented the chain rule for gradient propagation, ensuring exact mathematical precision. |
| Layer Abstraction | Linear / Dense Classes | Created modular objects that hold state (weights/biases), allowing for "Lego-like" model assembly. |
| Activation Logic | ReLU, Sigmoid, Tanh | Implemented non-linearities and their derivatives to enable the network to learn complex patterns. |
| Optimization | SGD (Stochastic Gradient Descent) | Built the weight update logic to minimize loss functions (MSE, Cross Entropy) over time. |
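
As a rough illustration of the Layer Abstraction, Activation Logic, and Optimization rows above, the sketch below shows what NumPy-backed `Linear`, `ReLU`, and `SGD` components could look like. The names and signatures are assumptions for exposition and are not taken from TorchLite's source.

```python
import numpy as np

# Illustrative NumPy versions of the components listed above. These are
# simplified sketches -- the real TorchLite classes may differ in names,
# signatures, and details.

class Linear:
    """Dense layer: forward computes y = xW + b; backward applies the chain rule."""
    def __init__(self, in_features, out_features):
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x                          # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out       # gradient w.r.t. weights
        self.db = grad_out.sum(axis=0)      # gradient w.r.t. bias
        return grad_out @ self.W.T          # gradient passed to the previous layer

class ReLU:
    """Non-linearity max(0, x); its derivative is 1 where x > 0, else 0."""
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask

    def backward(self, grad_out):
        return grad_out * self.mask

class SGD:
    """Stochastic gradient descent: w <- w - lr * dL/dw."""
    def __init__(self, layers, lr=0.01):
        self.layers, self.lr = layers, lr

    def step(self):
        for layer in self.layers:
            if isinstance(layer, Linear):
                layer.W -= self.lr * layer.dW
                layer.b -= self.lr * layer.db

# "Lego-like" assembly: chain forward calls, then backward in reverse order.
layers = [Linear(3, 4), ReLU(), Linear(4, 1)]
activation = np.random.randn(2, 3)          # toy batch of 2 inputs
for layer in layers:
    activation = layer.forward(activation)
grad = np.ones_like(activation)             # stand-in for dL/d(output)
for layer in reversed(layers):
    grad = layer.backward(grad)
SGD(layers, lr=0.01).step()                 # apply the weight update
```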

🔬 Architecture: The "AutoGrad" Flow

Rather than calling a high-level API, this project handles the raw matrix math directly:

```mermaid
graph LR
    A[Input Vector] --> B(Linear Transform Wx+b)
    B --> C{Activation ReLU}
    C --> D(Output Layer)
    D --> E[Loss Calculation]
    E -- "Backpropagation (Chain Rule)" --> A
```
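
The sketch below walks through the diagram once by hand in plain NumPy: a linear transform, a ReLU activation, an MSE loss, and the chain rule applied right-to-left to recover the weight and bias gradients, finishing with one SGD step. The shapes, variable names, and finite-difference check are illustrative assumptions rather than code from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 input vectors
y = rng.normal(size=(4, 2))          # targets
W = rng.normal(size=(3, 2)) * 0.1
b = np.zeros(2)

def forward(W, b):
    z = x @ W + b                    # linear transform Wx + b
    h = np.maximum(z, 0.0)           # ReLU activation
    return z, h, np.mean((h - y) ** 2)   # MSE loss

z, h, loss = forward(W, b)

# Backward pass: chain rule applied right-to-left through the graph
dL_dh = 2.0 * (h - y) / h.size       # d(loss)/d(activation output)
dL_dz = dL_dh * (z > 0)              # through ReLU: gradient flows only where z > 0
dL_dW = x.T @ dL_dz                  # d(loss)/dW
dL_db = dL_dz.sum(axis=0)            # d(loss)/db

# Finite-difference check on one weight: analytic and numeric values should agree
eps = 1e-6
W_plus = W.copy(); W_plus[0, 0] += eps
numeric = (forward(W_plus, b)[2] - loss) / eps
print(dL_dW[0, 0], numeric)          # nearly identical values

# One SGD step
lr = 0.1
W -= lr * dL_dW
b -= lr * dL_db
```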
