1 parent d235bc2 commit aea3229
beginner_source/nlp/pytorch_tutorial.py
@@ -270,7 +270,7 @@
 # autograd.Variables (note this is more general than Pytorch. There is an
 # equivalent object in every major deep learning toolkit):
 #
-# **If you want the error from your loss function to backpropogate to a
+# **If you want the error from your loss function to backpropagate to a
 # component of your network, you MUST NOT break the Variable chain from
 # that component to your loss Variable. If you do, the loss will have no
 # idea your component exists, and its parameters can't be updated.**
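The comment being corrected warns against breaking the autograd chain between a network component and the loss. A minimal sketch of what that means in current PyTorch (where `Variable` has since been merged into `Tensor`, and `detach()` is one way to sever the chain):

```python
import torch

# A leaf tensor that should receive gradients from the loss.
x = torch.ones(2, requires_grad=True)

# Intact chain: the loss is connected to x, so backward() updates x.grad.
loss_ok = (x * 3).sum()
loss_ok.backward()
print(x.grad)  # tensor([3., 3.])

# Broken chain: detach() cuts the graph, so a loss built from the
# detached tensor has no idea x exists and cannot backpropagate to it.
detached = (x * 3).detach()
print(detached.requires_grad)  # False
```

The same failure mode occurs with any operation that leaves the graph (e.g. round-tripping through NumPy), not just an explicit `detach()`.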