PyTorch from Numpy and Numba

This is a reimplementation of a subset of the torch API. It supports the following:

  • autodifferentiation / backpropagation
  • tensors, views, broadcasting
  • GPU / CUDA programming in Numba (see the kernel sketch after this list)
    • map / zip / reduce
    • batched matrix multiplication
  • 1D / 2D convolution and pooling
  • activation functions
    • ReLU / GeLU / softmax / tanh
  • optimizers
    • stochastic gradient descent
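
To illustrate the map primitive on the CUDA backend, here is a minimal elementwise kernel written with numba.cuda. This is a sketch of the general pattern, not the repo's actual kernel; the ReLU body and launch configuration are illustrative.

import numpy as np
from numba import cuda

@cuda.jit
def relu_map_kernel(out, inp):
    # One thread per element: apply an elementwise function (here, ReLU).
    i = cuda.grid(1)
    if i < out.size:
        out[i] = max(inp[i], 0.0)

x = np.random.randn(1 << 20).astype(np.float32)
d_in = cuda.to_device(x)
d_out = cuda.device_array_like(d_in)
threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
relu_map_kernel[blocks, threads_per_block](d_out, d_in)
result = d_out.copy_to_host()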

Getting Started

To install dependencies, create a virtual environment and install the required packages:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

This will install minitorch in editable mode.

If pip raises an error, it may be necessary to upgrade it before installing dependencies:

pip install --upgrade pip

Examples

Training a MNIST model

python project/run_mnist_multiclass.py 

Creating a custom model

Supported modules and functions are listed in examples/custom.py.
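
Model definitions follow a torch-like Module pattern. The sketch below is hypothetical: the Linear module name and the exact Module/forward API are assumptions, so check examples/custom.py for the supported names.

import minitorch

class MLP(minitorch.Module):
    # Hypothetical two-layer network; the Linear module name is
    # assumed, not taken from examples/custom.py.
    def __init__(self, in_size, hidden_size, out_size):
        super().__init__()
        self.layer1 = minitorch.Linear(in_size, hidden_size)
        self.layer2 = minitorch.Linear(hidden_size, out_size)

    def forward(self, x):
        h = self.layer1.forward(x).relu()
        return self.layer2.forward(h)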

Repo Structure

Files prefixed with an underscore implement abstract base classes and tensor manipulation functions.

Subpackage   Description
autograd     central difference / topological sort of the computational graph
backends     naive / parallel / CUDA implementations of map / zip / reduce / matrix multiply
nn           modules and functions for building networks
optim        optimizers for loss function minimization
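
The central difference rule in the autograd subpackage approximates f'(x) ≈ (f(x + ε) − f(x − ε)) / (2ε) and is typically used to check analytic gradients. A minimal sketch (the exact signature in the repo may differ):

def central_difference(f, *vals, arg=0, epsilon=1e-6):
    # Numerically approximate the derivative of f with respect to
    # its arg-th argument at the point vals.
    up = [v + (epsilon if i == arg else 0.0) for i, v in enumerate(vals)]
    down = [v - (epsilon if i == arg else 0.0) for i, v in enumerate(vals)]
    return (f(*up) - f(*down)) / (2.0 * epsilon)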

Extensions

Features

  • Saving and loading
    • torch state dictionaries
    • ONNX
  • Transformer module
    • tanh, GeLU
  • Embedding module
  • Expanded core tensor operations
    • arange, cat, stack, hstack
  • Adam optimizer (update rule sketched below)
  • Additional loss functions
  • Einsum!
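
For reference, the Adam update combines bias-corrected first and second moment estimates of the gradient. The NumPy sketch below shows the textbook update rule, not the repo's exact implementation:

import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moments (t starts at 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter step scaled by the adaptive per-coordinate learning rate.
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v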

Optimizations

  • Bindings
  • CUDA Convolution

Documentation

  • CUDA usage with Google Colab

Credit

Building this would have been impossible without the original course: Minitorch by Sasha Rush.
