nn mnist tutorial (apache#6879)
* mnist tutorial added, autograd modified

* mnist tutorial

* mnist tutorial

* minor change

* fixes

* minor change

* fix

* small fix

* removing dx from autograd

* Delete mnist.ipynb
Roshrini authored and piiswrong committed Jul 12, 2017
1 parent 4ed646b commit 82c3e76
Showing 5 changed files with 345 additions and 10 deletions.
15 changes: 13 additions & 2 deletions docs/tutorials/foo/autograd.md
@@ -18,8 +18,7 @@ attach gradient buffers to them:

```python
x = mx.nd.array([[1, 2], [3, 4]])
-dx = mx.nd.zeros_like(x)
-x.attach_grad(dx)
+x.attach_grad()
```

Now we can define the network while running forward computation by wrapping
@@ -40,3 +39,15 @@ is equivalent to `mx.nd.sum(z).backward()`:
z.backward()
print(x.grad)
```

Now, let's see if this is the expected output.

Here, y = f(x) and z = g(y) = g(f(x)),
which means y = 2 * x and z = 2 * x * x.

After doing backprop with `z.backward()`, we get the gradient dz/dx as follows:

dy/dx = 2,
dz/dx = 4 * x

So `x.grad` should hold the array [[4, 8], [12, 16]].
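
To make this easy to verify, here is a minimal end-to-end sketch assembled from the snippets in this tutorial. The forward expressions `y = x * 2` and `z = y * x` are assumptions (the record block itself is not shown in this diff); they are chosen so that z = 2 * x * x as described above.

```python
import mxnet as mx
from mxnet import autograd

# Create the input and attach a gradient buffer to it.
x = mx.nd.array([[1, 2], [3, 4]])
x.attach_grad()

# Record the forward computation so autograd can build the graph.
# The expressions below are assumed; they give z = 2 * x * x.
with autograd.record():
    y = x * 2
    z = y * x

# Backprop through z; for a non-scalar z this is equivalent to
# mx.nd.sum(z).backward().
z.backward()

# dz/dx = 4 * x, so this prints [[4, 8], [12, 16]].
print(x.grad)
```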
9 changes: 5 additions & 4 deletions docs/tutorials/foo/foo.md
@@ -14,6 +14,7 @@ import mxnet as mx
import mxnet.ndarray as F
import mxnet.foo as foo
from mxnet.foo import nn
from mxnet import autograd
```

Neural networks (and other machine learning models) can be defined and trained
@@ -38,13 +39,13 @@ composing and inheriting `Layer`:
class Net(nn.Layer):
    def __init__(self, **kwargs):
        super(Net, self).__init__(**kwargs)
-        with self.name_scope:
+        with self.name_scope():
            # layers created in name_scope will inherit name space
            # from parent layer.
            self.conv1 = nn.Conv2D(6, kernel_size=5)
-            self.pool1 = nn.Pool2D(kernel_size=2)
+            self.pool1 = nn.MaxPool2D(pool_size=(2,2))
            self.conv2 = nn.Conv2D(16, kernel_size=5)
-            self.pool2 = nn.Pool2D(kernel_size=2)
+            self.pool2 = nn.MaxPool2D(pool_size=(2,2))
            self.fc1 = nn.Dense(120)
            self.fc2 = nn.Dense(84)
            self.fc3 = nn.Dense(10)
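
One detail worth calling out in the hunk above is that `name_scope()` is now called as a method: it returns a context manager that prefixes child layers with the parent layer's name. A minimal sketch of the corrected pattern, using a hypothetical two-layer net rather than the file's actual code:

```python
class TinyNet(nn.Layer):
    def __init__(self, **kwargs):
        super(TinyNet, self).__init__(**kwargs)
        # name_scope() must be called with parentheses: the bare attribute is
        # not a context manager, so `with self.name_scope:` would fail.
        with self.name_scope():
            # Child layers created here inherit this layer's name prefix.
            self.fc1 = nn.Dense(64)
            self.fc2 = nn.Dense(10)
```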
@@ -99,7 +100,7 @@ To compute loss and backprop for one iteration, we do:

```python
label = mx.nd.arange(10)  # dummy label
-with record():
+with autograd.record():
    output = net(data)
    loss = foo.loss.softmax_cross_entropy_loss(output, label)
    loss.backward()
```
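
Putting the pieces together, a sketch of one full iteration might look like the following. The input shape `(10, 1, 32, 32)` and the use of `mx.nd.random_normal` are assumptions, since `data` and the parameter initialization are defined in parts of the tutorial not shown in this diff.

```python
# Assumes `net` is the Net defined above with its parameters already
# initialized; the dummy input shape below is an assumption.
data = mx.nd.random_normal(shape=(10, 1, 32, 32))
label = mx.nd.arange(10)  # one dummy class id per sample

with autograd.record():
    output = net(data)                                          # forward pass
    loss = foo.loss.softmax_cross_entropy_loss(output, label)   # per-sample loss
    loss.backward()                                             # gradients w.r.t. parameters

print(loss)
```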