
# tinytorch

The newest ML framework that you probably don't need.
It's really just an autograd engine backed by NumPy.

`tinytorch.py` shall always remain under 1000 lines. If not, we will revert the commit.


$$ f(x) = x^3 + x $$

```python
import tinytorch as tt  # 👀

def f(x):
    return x**3 + x

x = tt.tensor((tt.arange(700) - 400) / 100, requires_grad=True)
z = f(x)
z.sum().backward()
print(x.grad)  # d/dx (x^3 + x) = 3x^2 + 1
```

*(plots: f(x) = x³ + x and its gradient 3x² + 1)*
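Since f'(x) = 3x² + 1, you can sanity-check the autograd output against plain NumPy. A minimal sketch, assuming `x.grad` is (or converts to) a NumPy array, which is an assumption about tinytorch's internals:

```python
import numpy as np
import tinytorch as tt

x = tt.tensor((tt.arange(700) - 400) / 100, requires_grad=True)
z = (x**3 + x).sum()
z.backward()

xs = (np.arange(700) - 400) / 100
expected = 3 * xs**2 + 1  # analytic derivative of x^3 + x

# assumption: x.grad converts cleanly to a NumPy array
print(np.allclose(np.asarray(x.grad), expected))
```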

## What can you do with it?

### Automatic differentiation, yep

```python
import tinytorch as tt

def f(x, y):
    return x**2 + x*y + (y**3 + y) ** 0.5

x = tt.rand((5, 5), requires_grad=True)
y = tt.rand((5, 5), requires_grad=True)
z = f(x, y)
z.sum().backward()
print(x.grad)
print(y.grad)
```
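If you don't trust it, a finite-difference check is a quick sanity test: nudge one input entry by ±h, recompute the loss in plain NumPy, and compare the slope to the autograd gradient. A sketch under the assumption that tensors expose their values via a `.data` NumPy array (common in NumPy-backed engines, but not confirmed by this README):

```python
import numpy as np
import tinytorch as tt

def f(a, b):
    return a**2 + a*b + (b**3 + b) ** 0.5  # works on tensors and ndarrays

x = tt.rand((5, 5), requires_grad=True)
y = tt.rand((5, 5), requires_grad=True)
f(x, y).sum().backward()

# assumption: .data / .grad hold (or convert to) NumPy arrays
xv, yv = np.asarray(x.data), np.asarray(y.data)
h, (i, j) = 1e-5, (2, 3)
xp, xm = xv.copy(), xv.copy()
xp[i, j] += h
xm[i, j] -= h
num_grad = (f(xp, yv).sum() - f(xm, yv).sum()) / (2 * h)
print(num_grad, np.asarray(x.grad)[i, j])  # should roughly agree
```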

### Train MNIST, no problemo

```sh
python mnist.py
```
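`mnist.py` has the full thing, but the core training loop is just: scalar loss → `.backward()` → step against the gradient → repeat. A minimal sketch using only ops shown above; rebuilding the tensor each step and treating `x.grad` as a NumPy array are assumptions, not tinytorch's confirmed update API:

```python
import numpy as np
import tinytorch as tt

# minimize f(x) = sum(x^2) with plain gradient descent
vals = np.random.randn(5)
lr = 0.1

for _ in range(100):
    x = tt.tensor(vals, requires_grad=True)
    loss = (x**2).sum()
    loss.backward()
    vals = vals - lr * np.asarray(x.grad)  # assumption: grad -> NumPy array

print(vals)  # should be close to all zeros
```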


### GPT?? you bet (yes LLM, fr fr)

```sh
GPU=1 python mnist.py
```


Note: NumPy is too slow to train an LLM; install JAX instead (it's used purely as a faster NumPy).
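The README doesn't pin an install command; the usual ones are below (the CUDA extra name tracks recent JAX releases and may change, so check the JAX docs):

```sh
pip install jax               # CPU-only
pip install -U "jax[cuda12]"  # NVIDIA GPU, CUDA 12
```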

## Visualization

If you want to see your computation graph, run `visulize.py`.

### requirements

```sh
pip install graphviz
sudo apt-get install -y graphviz  # IDK what to do for Windows, I use WSL
```
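For the curious, rendering a computation graph with the `graphviz` package looks roughly like this. The traversal below guesses micrograd-style internals (`._prev` parents and `._op` labels); those attribute names are hypothetical, so check `tinytorch.py` for the real ones:

```python
from graphviz import Digraph

def draw(root):
    """Render a tensor's computation graph to graph.png.

    Assumes each tensor has `._prev` (parent tensors) and `._op`
    (the op that produced it) -- hypothetical attribute names.
    """
    dot = Digraph(graph_attr={"rankdir": "LR"})
    seen = set()

    def visit(t):
        if id(t) in seen:
            return
        seen.add(id(t))
        dot.node(str(id(t)), label=getattr(t, "_op", "") or "tensor")
        for parent in getattr(t, "_prev", []):
            dot.edge(str(id(parent)), str(id(t)))
            visit(parent)

    visit(root)
    dot.render("graph", format="png")

# usage sketch:
# z = f(x, y)
# draw(z)
```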

## why this exists

Because I was bored.

## DEV BLOG

## powerlevel

- `1.0` - karpathy micrograd (really simple, not much you can do with it)
- `3.14` - tinytorch (simple, and you can do a lot of things with it) <= ❤️
- `69` - tinygrad (no longer simple, you can do a lot more)
- `∞` - pytorch (GOAT library that makes GPUs go brrr)

## contribution guidelines

- be nice
- performance optimizations / more examples welcome
- document sources, if any
- keep `tinytorch.py` under 1000 lines

## Buy me Chai/Coffee

ko-fi

## License

MIT