index.rst (+1 -1)
@@ -18,7 +18,7 @@ Some considerations:
* If you would like the tutorials section improved, please open a github issue
`here <https://github.com/pytorch/tutorials>`_ with your feedback.
-Lastly, some of the tutorials are marked as requiring the *Preview release*. These are tutorials that use the new functionality from the PyTorch 1.0 Preview. Please visit the `Get Started <http://pytorch.org/get-started>`_ section of the PyTorch website for instructions on how to install the latest Preview build before trying these tutorials.
+Lastly, some of the tutorials are marked as requiring the *Preview release*. These are tutorials that use the new functionality from the PyTorch 1.0 Preview. Please visit the `Get Started <https://pytorch.org/get-started>`_ section of the PyTorch website for instructions on how to install the latest Preview build before trying these tutorials.
@@ -447,7 +447,7 @@ there are currently three backends implemented in PyTorch: TCP, MPI, and
Gloo. They each have different specifications and tradeoffs, depending
on the desired use-case. A comparative table of supported functions can
be found
-`here <http://pytorch.org/docs/stable/distributed.html#module-torch.distributed>`__. Note that a fourth backend, NCCL, has been added since the creation of this tutorial. See `this section <https://pytorch.org/docs/stable/distributed.html#multi-gpu-collective-functions>`__ of the ``torch.distributed`` docs for more information about its use and value.
+`here <https://pytorch.org/docs/stable/distributed.html#module-torch.distributed>`__. Note that a fourth backend, NCCL, has been added since the creation of this tutorial. See `this section <https://pytorch.org/docs/stable/distributed.html#multi-gpu-collective-functions>`__ of the ``torch.distributed`` docs for more information about its use and value.
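For context, a minimal sketch of how one of these backends is selected when joining the default process group (the ``"gloo"`` default, address, and port below are illustrative assumptions, not values from this tutorial):

.. code:: python

    import os

    import torch.distributed as dist


    def init_process(rank, world_size, backend="gloo"):
        """Join the default process group with the chosen backend.

        Any of "gloo", "mpi", or "nccl" may be passed, subject to how
        PyTorch was built and what hardware is available.
        """
        os.environ["MASTER_ADDR"] = "127.0.0.1"  # address of the rank-0 process (assumed local run)
        os.environ["MASTER_PORT"] = "29500"      # any free port
        dist.init_process_group(backend, rank=rank, world_size=world_size)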
**TCP Backend**
@@ -552,7 +552,7 @@ Those methods allow you to define how this coordination is done.
Depending on your hardware setup, one of these methods should be
naturally more suitable than the others. In addition to the following
sections, you should also have a look at the `official
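As a rough sketch of such coordination (the file path, address, and port below are placeholders rather than values from the tutorial), the same ``init_process_group`` call can rendezvous the processes through a shared file system, environment variables, or a TCP address:

.. code:: python

    import tempfile

    import torch.distributed as dist

    rank, world_size = 0, 1  # single-process values, only to keep the example runnable

    # Shared-file-system rendezvous: every process must see the same, initially empty, file.
    rendezvous = tempfile.NamedTemporaryFile(delete=False)
    dist.init_process_group(
        "gloo",
        init_method="file://" + rendezvous.name,
        rank=rank,
        world_size=world_size,
    )

    # Alternatives (placeholders; pick one per job):
    # dist.init_process_group("gloo", init_method="env://")  # reads MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE
    # dist.init_process_group("gloo", init_method="tcp://10.1.1.20:23456",
    #                         rank=rank, world_size=world_size)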