
Commit 885c82c

goldsborough authored and soumith committed
nn.Function -> autograd.Function (pytorch#254)
1 parent 3dca121 commit 885c82c

File tree

1 file changed (+3, -3)


advanced_source/cpp_extension.rst

+3 -3
@@ -251,7 +251,7 @@ differentiation. This is something the PyTorch team is working on, but it is
 not available yet. As such, we have to also implement the backward pass of our
 LLTM, which computes the derivative of the loss with respect to each input of
 the forward pass. Ultimately, we will plop both the forward and backward
-function into a :class:`torch.nn.Function` to create a nice Python binding. The
+function into a :class:`torch.autograd.Function` to create a nice Python binding. The
 backward function is slightly more involved, so we'll not dig deeper into the
 code (if you are interested, `Alex Graves' thesis
 <http://www.cs.toronto.edu/~graves/phd.pdf>`_ is a good read for more
@@ -415,7 +415,7 @@ matches our C++ code::
     LLTM forward

 Since we are now able to call our C++ functions from Python, we can wrap them
-with :class:`torch.nn.Function` and :class:`torch.nn.Module` to make them first
+with :class:`torch.autograd.Function` and :class:`torch.nn.Module` to make them first
 class citizens of PyTorch::

     import math
@@ -424,7 +424,7 @@ class citizens of PyTorch::
     # Our module!
     import lltm

-    class LLTMFunction(torch.nn.Function):
+    class LLTMFunction(torch.autograd.Function):
         @staticmethod
         def forward(ctx, input, weights, bias, old_h, old_cell):
             outputs = lltm.forward(input, weights, bias, old_h, old_cell)
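
For reference, a minimal sketch of the full new-style wrapper this diff points at: torch.autograd.Function subclasses define static forward/backward methods and communicate through a ctx object, which is why the old torch.nn.Function spelling is being replaced. Only the lines shown in the diff come from the tutorial; the backward body and the save_for_backward selection below are illustrative assumptions about the lltm extension's interface.

import torch

import lltm  # the compiled C++ extension the tutorial builds

class LLTMFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weights, bias, old_h, old_cell):
        # Run the C++ forward kernel (this call appears in the diff above).
        outputs = lltm.forward(input, weights, bias, old_h, old_cell)
        new_h, new_cell = outputs[:2]
        # Assumption: stash the remaining buffers plus weights for backward.
        ctx.save_for_backward(*outputs[1:], weights)
        return new_h, new_cell

    @staticmethod
    def backward(ctx, grad_h, grad_cell):
        # Assumption: the C++ backward kernel returns one gradient per
        # argument of forward(), in the same order forward() received them.
        return lltm.backward(grad_h, grad_cell, *ctx.saved_tensors)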

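And a sketch of the torch.nn.Module wrapper that makes the function a "first class citizen" as the diff's context describes, reusing the LLTMFunction sketch above. The parameter shapes and initialization are assumptions for illustration, not part of this change.

import math

import torch

class LLTM(torch.nn.Module):
    def __init__(self, input_features, state_size):
        super(LLTM, self).__init__()
        # Assumed parameter shapes; the tutorial defines its own.
        self.state_size = state_size
        self.weights = torch.nn.Parameter(
            torch.empty(3 * state_size, input_features + state_size))
        self.bias = torch.nn.Parameter(torch.empty(3 * state_size))
        self.reset_parameters()

    def reset_parameters(self):
        # Uniform init scaled by the state size, a common default.
        stdv = 1.0 / math.sqrt(self.state_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, +stdv)

    def forward(self, input, state):
        # Custom Functions are invoked via .apply, never instantiated.
        return LLTMFunction.apply(input, self.weights, self.bias, *state)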