This is a reimplementation of a subset of the torch API (see the usage sketch after the feature list). It supports the following:
- autodifferentiation / backpropagation
- tensors, views, broadcasting
- GPU / CUDA programming in Numba
- map / zip / reduce
- batched matrix multiplication
- 1D / 2D convolution and pooling
- activation functions
  - ReLU / GeLU / softmax / tanh
- optimizers
  - stochastic gradient descent
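As a rough illustration of how these pieces fit together, here is a minimal training-loop sketch written against a torch-style API. The names used (`minitorch.rand`, `requires_grad`, `minitorch.optim.SGD`, tensor methods like `.relu()` and `.backward()`) are assumptions based on the feature list above and may not match the package's actual API.

```python
# Minimal sketch only: the names below (minitorch.rand, requires_grad,
# minitorch.optim.SGD, .relu(), .backward()) are assumed from the feature
# list and may differ from the real API of this package.
import minitorch

w = minitorch.rand((4, 2), requires_grad=True)   # trainable weights
x = minitorch.rand((16, 4))                      # a batch of 16 inputs
y = minitorch.rand((16, 2))                      # dummy targets

opt = minitorch.optim.SGD([w], lr=0.1)

for step in range(10):
    opt.zero_grad()
    pred = (x @ w).relu()            # batched matmul + activation
    loss = ((pred - y) ** 2).sum()   # simple squared-error loss
    loss.backward()                  # backpropagation through the graph
    opt.step()                       # stochastic gradient descent update
```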
To install the dependencies, create a virtual environment and install the required packages:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

This will install minitorch in editable mode.

If pip raises an error, it may be necessary to upgrade pip itself before installing the dependencies:

```bash
pip install --upgrade pip
```
To run the MNIST multiclass example:

```bash
python project/run_mnist_multiclass.py
```
Supported modules and functions are listed in examples/custom.py.
Files prefixed with an underscore implement abstract base classes and tensor manipulation functions.
Subpackage | Description |
---|---|
autograd | central difference (see the sketch below) / topological sort of the computational graph |
backends | naive / parallel / CUDA implementations of map / zip / reduce / matrix multiply |
nn | modules and functions for building networks |
optim | optimizers for loss function minimization |
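To make the central-difference entry in the autograd row concrete: a central difference is the numerical gradient f'(x) ≈ (f(x + ε) − f(x − ε)) / (2ε), typically used to sanity-check the gradients produced by backpropagation. The sketch below is a generic, self-contained illustration of the idea; the function name and signature are not taken from this package.

```python
# Generic illustration of a central-difference gradient check; not necessarily
# the signature used in this package's autograd subpackage.
def central_difference(f, *vals, arg=0, epsilon=1e-6):
    """Approximate the derivative of f with respect to its arg-th argument."""
    up = [v + (epsilon if i == arg else 0.0) for i, v in enumerate(vals)]
    down = [v - (epsilon if i == arg else 0.0) for i, v in enumerate(vals)]
    return (f(*up) - f(*down)) / (2.0 * epsilon)

# Example: d/dx (x * y + x) at x=2, y=3 is y + 1 = 4.
print(central_difference(lambda x, y: x * y + x, 2.0, 3.0, arg=0))  # ~4.0
```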
- Saving and loading
  - torch state dictionaries
  - ONNX
- Transformer module
  - tanh, GeLU
- Embedding module
- Expand core tensor operations
  - arange, cat, stack, hstack
- Adam optimizer
- Additional loss functions
- Einsum!
- Bindings
- CUDA Convolution
- CUDA usage with Google Colab
Building this would have been impossible without the original course: Minitorch by Sasha Rush.