
Commit

add graphviz tracing utility which I think is cute
karpathy committed Apr 14, 2020
1 parent 575bc17 commit 9d268f5
Showing 2 changed files with 16 additions and 7 deletions.
14 changes: 11 additions & 3 deletions README.md
@@ -5,7 +5,7 @@

A tiny Autograd engine (with a bite! :D). Implements backpropagation (reverse-mode autodiff) over a dynamically built DAG and a small neural networks library on top of it with a PyTorch-like API. Both are currently about 50 lines of code each.

The DAG only allows individual scalar values, so e.g. we chop up each neuron into all of its individual tiny adds and multiplies. In particular, the current library only supports scalars and three operations over them: (+,*,relu), but in fact these are enough to build up an entire deep neural net doing binary classification, as the demo notebook shows.
The DAG only allows individual scalar values, so e.g. we chop up each neuron into all of its individual tiny adds and multiplies. In particular, the current library only supports scalars and three operations over them: (+,*,relu), but in fact these are enough to build up an entire deep neural net doing binary classification, as the demo notebook shows. Potentially useful for educational purposes. See the notebook `demo.ipynb` for a full demo of training an MLP binary classifier.

### Example usage

@@ -22,8 +22,16 @@ y.backward()
print(x.grad) # prints 62.0 - i.e. the numerical value of dy / dx
```
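
As a quick standalone sketch of the three supported operations (a hypothetical illustration with made-up values, not the README's own example, which is partly elided in the hunk above):

```python
from micrograd.engine import Value

a = Value(2.0)
b = Value(3.0)
c = (a * b + a).relu()  # exercises all three supported ops: *, +, relu
c.backward()
print(a.grad)  # dc/da = b + 1 = 4.0 (the relu is active since a*b + a > 0)
print(b.grad)  # dc/db = a = 2.0
```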

Potentially useful for educational purposes. See the notebook for a full demo of training an MLP binary classifier.

### Tracing / visualization

Have a look at the tracing utility `trace_graph.py` to also produce graphviz visualizations. E.g. this one is of a simple 2D neuron, arrived at by calling `draw_dot` on the code below, and it shows both the data (top number in each node) and the gradient (bottom number in each node).

```python
from micrograd import nn
from micrograd.engine import Value
n = nn.Neuron(2)
x = [Value(1.0), Value(-2.0)]
y = n(x)
dot = draw_dot(y)  # draw_dot is the graphviz tracing helper
```

![2d neuron](gout.png)
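
The graphviz tracing utility itself is not part of this diff. As a rough sketch of what a `draw_dot` built on the new `_prev`/`_op` fields could look like (assuming the `graphviz` Python package; this is an illustration, not necessarily the repository's actual `trace_graph.py`):

```python
from graphviz import Digraph

def trace(root):
    # walk the DAG backwards from the output, collecting all nodes and edges
    nodes, edges = set(), set()
    def build(v):
        if v not in nodes:
            nodes.add(v)
            for child in v._prev:
                edges.add((child, v))
                build(child)
    build(root)
    return nodes, edges

def draw_dot(root):
    nodes, edges = trace(root)
    dot = Digraph(format='svg', graph_attr={'rankdir': 'LR'})  # left-to-right layout
    for n in nodes:
        uid = str(id(n))
        # one record node per Value: data on top, grad underneath
        dot.node(name=uid, label='data %.4f | grad %.4f' % (n.data, n.grad), shape='record')
        if n._op:
            # a small extra node for the op that produced this Value
            dot.node(name=uid + n._op, label=n._op)
            dot.edge(uid + n._op, uid)
    for child, parent in edges:
        # connect each child to the op node of the Value it feeds into
        dot.edge(str(id(child)), str(id(parent)) + parent._op)
    return dot
```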
9 changes: 5 additions & 4 deletions micrograd/engine.py
@@ -2,16 +2,17 @@
class Value:
""" stores a single scalar value and its gradient """

def __init__(self, data, _children=()):
def __init__(self, data, _children=(), _op=''):
self.data = data
self.grad = 0
# internal variables used for autograd graph construction
self._backward = lambda: None
self._prev = set(_children)
self._op = _op # the op that produced this node, for graphviz / debugging / etc

def __add__(self, other):
other = other if isinstance(other, Value) else Value(other)
out = Value(self.data + other.data, (self, other))
out = Value(self.data + other.data, (self, other), '+')

def _backward():
self.grad += out.grad
@@ -22,7 +23,7 @@ def _backward():

def __mul__(self, other):
other = other if isinstance(other, Value) else Value(other)
out = Value(self.data * other.data, (self, other))
out = Value(self.data * other.data, (self, other), '*')

def _backward():
self.grad += other.data * out.grad
@@ -32,7 +33,7 @@ def _backward():
return out

def relu(self):
out = Value(0 if self.data < 0 else self.data, (self,))
out = Value(0 if self.data < 0 else self.data, (self,), 'ReLU')

def _backward():
self.grad += (out.data > 0) * out.grad
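
For reference, a small sketch of what the new `_op` bookkeeping records on a freshly built expression, using only the API visible in the diff above:

```python
from micrograd.engine import Value

a = Value(2.0)
b = Value(3.0)
c = a * b + a          # the graph is recorded as the ops execute

print(c._op)           # '+'  (the op that produced c)
print(len(c._prev))    # 2    (c's children: the a*b node and a itself)
print(c.data)          # 8.0
```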
