Adding back docs folder #59

Merged 3 commits on Sep 18, 2018

Binary file added .DS_Store
Binary file not shown.
Binary file added docs/.DS_Store
Binary file not shown.
4 changes: 4 additions & 0 deletions docs/0.1.12/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: e3c0334253ce75f93e455219de7c5075
tags: 645f666f9bcd5a90fca523b33c5a78b7
636 changes: 636 additions & 0 deletions docs/0.1.12/_modules/index.html

Large diffs are not rendered by default.

925 changes: 925 additions & 0 deletions docs/0.1.12/_modules/torch.html

Large diffs are not rendered by default.

881 changes: 881 additions & 0 deletions docs/0.1.12/_modules/torch/_tensor_str.html

Large diffs are not rendered by default.

684 changes: 684 additions & 0 deletions docs/0.1.12/_modules/torch/_utils.html

Large diffs are not rendered by default.

628 changes: 628 additions & 0 deletions docs/0.1.12/_modules/torch/autograd.html

Large diffs are not rendered by default.

834 changes: 834 additions & 0 deletions docs/0.1.12/_modules/torch/autograd/function.html

Large diffs are not rendered by default.

1,485 changes: 1,485 additions & 0 deletions docs/0.1.12/_modules/torch/autograd/variable.html

Large diffs are not rendered by default.

994 changes: 994 additions & 0 deletions docs/0.1.12/_modules/torch/cuda.html

Large diffs are not rendered by default.

837 changes: 837 additions & 0 deletions docs/0.1.12/_modules/torch/cuda/comm.html

Large diffs are not rendered by default.

788 changes: 788 additions & 0 deletions docs/0.1.12/_modules/torch/cuda/streams.html

Large diffs are not rendered by default.

699 changes: 699 additions & 0 deletions docs/0.1.12/_modules/torch/functional.html

Large diffs are not rendered by default.

651 changes: 651 additions & 0 deletions docs/0.1.12/_modules/torch/multiprocessing.html

Large diffs are not rendered by default.

1,314 changes: 1,314 additions & 0 deletions docs/0.1.12/_modules/torch/nn/functional.html

Large diffs are not rendered by default.

938 changes: 938 additions & 0 deletions docs/0.1.12/_modules/torch/nn/init.html

Large diffs are not rendered by default.

1,207 changes: 1,207 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/activation.html

Large diffs are not rendered by default.

758 changes: 758 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/batchnorm.html

Large diffs are not rendered by default.

787 changes: 787 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/container.html

Large diffs are not rendered by default.

1,204 changes: 1,204 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/conv.html

Large diffs are not rendered by default.

621 changes: 621 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/distance.html

Large diffs are not rendered by default.

727 changes: 727 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/dropout.html

Large diffs are not rendered by default.

750 changes: 750 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/instancenorm.html

Large diffs are not rendered by default.

700 changes: 700 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/linear.html

Large diffs are not rendered by default.

1,071 changes: 1,071 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/loss.html

Large diffs are not rendered by default.

1,070 changes: 1,070 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/module.html

Large diffs are not rendered by default.

627 changes: 627 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/pixelshuffle.html

Large diffs are not rendered by default.

1,435 changes: 1,435 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/pooling.html

Large diffs are not rendered by default.

1,153 changes: 1,153 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/rnn.html

Large diffs are not rendered by default.

696 changes: 696 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/sparse.html

Large diffs are not rendered by default.

698 changes: 698 additions & 0 deletions docs/0.1.12/_modules/torch/nn/modules/upsampling.html

Large diffs are not rendered by default.

690 changes: 690 additions & 0 deletions docs/0.1.12/_modules/torch/nn/parallel/data_parallel.html

Large diffs are not rendered by default.

612 changes: 612 additions & 0 deletions docs/0.1.12/_modules/torch/nn/parameter.html

Large diffs are not rendered by default.

616 changes: 616 additions & 0 deletions docs/0.1.12/_modules/torch/nn/utils/clip_grad.html

Large diffs are not rendered by default.

710 changes: 710 additions & 0 deletions docs/0.1.12/_modules/torch/nn/utils/rnn.html

Large diffs are not rendered by default.

649 changes: 649 additions & 0 deletions docs/0.1.12/_modules/torch/optim/adadelta.html

Large diffs are not rendered by default.

670 changes: 670 additions & 0 deletions docs/0.1.12/_modules/torch/optim/adagrad.html

Large diffs are not rendered by default.

660 changes: 660 additions & 0 deletions docs/0.1.12/_modules/torch/optim/adam.html

Large diffs are not rendered by default.

660 changes: 660 additions & 0 deletions docs/0.1.12/_modules/torch/optim/adamax.html

Large diffs are not rendered by default.

659 changes: 659 additions & 0 deletions docs/0.1.12/_modules/torch/optim/asgd.html

Large diffs are not rendered by default.

832 changes: 832 additions & 0 deletions docs/0.1.12/_modules/torch/optim/lbfgs.html

Large diffs are not rendered by default.

729 changes: 729 additions & 0 deletions docs/0.1.12/_modules/torch/optim/optimizer.html

Large diffs are not rendered by default.

671 changes: 671 additions & 0 deletions docs/0.1.12/_modules/torch/optim/rmsprop.html

Large diffs are not rendered by default.

654 changes: 654 additions & 0 deletions docs/0.1.12/_modules/torch/optim/rprop.html

Large diffs are not rendered by default.

684 changes: 684 additions & 0 deletions docs/0.1.12/_modules/torch/optim/sgd.html

Large diffs are not rendered by default.

971 changes: 971 additions & 0 deletions docs/0.1.12/_modules/torch/serialization.html

Large diffs are not rendered by default.

763 changes: 763 additions & 0 deletions docs/0.1.12/_modules/torch/sparse.html

Large diffs are not rendered by default.

701 changes: 701 additions & 0 deletions docs/0.1.12/_modules/torch/storage.html

Large diffs are not rendered by default.

973 changes: 973 additions & 0 deletions docs/0.1.12/_modules/torch/tensor.html

Large diffs are not rendered by default.

893 changes: 893 additions & 0 deletions docs/0.1.12/_modules/torch/utils/data/dataloader.html

Large diffs are not rendered by default.

621 changes: 621 additions & 0 deletions docs/0.1.12/_modules/torch/utils/data/dataset.html

Large diffs are not rendered by default.

673 changes: 673 additions & 0 deletions docs/0.1.12/_modules/torch/utils/data/sampler.html

Large diffs are not rendered by default.

770 changes: 770 additions & 0 deletions docs/0.1.12/_modules/torch/utils/ffi.html

Large diffs are not rendered by default.

693 changes: 693 additions & 0 deletions docs/0.1.12/_modules/torch/utils/model_zoo.html

Large diffs are not rendered by default.

53 changes: 53 additions & 0 deletions docs/0.1.12/_sources/autograd.rst.txt
@@ -0,0 +1,53 @@
.. role:: hidden
:class: hidden-section

Automatic differentiation package - torch.autograd
==================================================

.. automodule:: torch.autograd
.. currentmodule:: torch.autograd

.. autofunction:: backward

Variable
--------

API compatibility
^^^^^^^^^^^^^^^^^

Variable API is nearly the same as regular Tensor API (with the exception
of a couple of in-place methods that would overwrite inputs required for
gradient computation). In most cases Tensors can be safely replaced with
Variables and the code will continue to work just fine. Because of this,
we don't document every operation on Variables here; refer to the
:class:`torch.Tensor` docs for that.

In-place operations on Variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Supporting in-place operations in autograd is difficult, and we discourage
their use in most cases. Autograd's aggressive buffer freeing and reuse makes
it very efficient, and there are very few occasions when in-place operations
actually lower memory usage by any significant amount. Unless you're operating
under heavy memory pressure, you might never need to use them.

In-place correctness checks
^^^^^^^^^^^^^^^^^^^^^^^^^^^

All :class:`Variable` s keep track of in-place operations applied to them, and
if the implementation detects that a variable was saved for backward in one of
the functions but was modified in-place afterwards, an error will be raised
once the backward pass is started. This ensures that if you're using in-place
functions and not seeing any errors, you can be sure that the computed
gradients are correct.


.. autoclass:: Variable
:members:

:hidden:`Function`
------------------

.. autoclass:: Function
:members:
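
A minimal sketch of the Variable workflow documented above, against the
0.1.x-era API where Variables wrap Tensors; the shapes and values are
illustrative assumptions::

    import torch
    from torch.autograd import Variable

    # Wrap a tensor so autograd records operations on it
    x = Variable(torch.ones(2, 2), requires_grad=True)
    y = (x * x + 3).sum()   # ordinary Tensor-style ops work on Variables
    y.backward()            # run the backward pass
    print(x.grad)           # gradient of y w.r.t. x, here 2 * x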

27 changes: 27 additions & 0 deletions docs/0.1.12/_sources/cuda.rst.txt
@@ -0,0 +1,27 @@
torch.cuda
===================================

.. currentmodule:: torch.cuda

.. automodule:: torch.cuda
:members:

Communication collectives
-------------------------

.. autofunction:: torch.cuda.comm.broadcast

.. autofunction:: torch.cuda.comm.reduce_add

.. autofunction:: torch.cuda.comm.scatter

.. autofunction:: torch.cuda.comm.gather

Streams and events
------------------

.. autoclass:: Stream
:members:

.. autoclass:: Event
:members:
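
A brief sketch of the collectives and stream API listed above; it assumes a
machine with at least two CUDA devices, and the tensor sizes are illustrative::

    import torch
    import torch.cuda.comm as comm

    if torch.cuda.is_available() and torch.cuda.device_count() >= 2:
        t = torch.randn(4).cuda(0)
        copies = comm.broadcast(t, devices=[0, 1])        # one copy per device
        total = comm.reduce_add([c * 2 for c in copies])  # summed on device 0

        s = torch.cuda.Stream()          # queue work on a non-default stream
        with torch.cuda.stream(s):
            u = t * 3
        torch.cuda.synchronize()         # wait for all queued kernels
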
12 changes: 12 additions & 0 deletions docs/0.1.12/_sources/data.rst.txt
@@ -0,0 +1,12 @@
torch.utils.data
===================================

.. automodule:: torch.utils.data
.. autoclass:: Dataset
.. autoclass:: TensorDataset
.. autoclass:: DataLoader
.. autoclass:: torch.utils.data.sampler.Sampler
.. autoclass:: torch.utils.data.sampler.SequentialSampler
.. autoclass:: torch.utils.data.sampler.RandomSampler
.. autoclass:: torch.utils.data.sampler.SubsetRandomSampler
.. autoclass:: torch.utils.data.sampler.WeightedRandomSampler
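
A short sketch of how these classes fit together; the tensor shapes and batch
size are illustrative assumptions::

    import torch
    from torch.utils.data import TensorDataset, DataLoader
    from torch.utils.data.sampler import RandomSampler

    inputs = torch.randn(100, 3)    # 100 samples, 3 features each
    targets = torch.randn(100, 1)
    dataset = TensorDataset(inputs, targets)

    # Either let the loader shuffle for you...
    loader = DataLoader(dataset, batch_size=10, shuffle=True)
    # ...or pass a sampler explicitly
    loader = DataLoader(dataset, batch_size=10, sampler=RandomSampler(dataset))

    for batch_inputs, batch_targets in loader:
        pass  # a training step would go here
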
6 changes: 6 additions & 0 deletions docs/0.1.12/_sources/ffi.rst.txt
@@ -0,0 +1,6 @@
torch.utils.ffi
===============

.. currentmodule:: torch.utils.ffi
.. autofunction:: create_extension

55 changes: 55 additions & 0 deletions docs/0.1.12/_sources/index.rst.txt
@@ -0,0 +1,55 @@
.. PyTorch documentation master file, created by
sphinx-quickstart on Fri Dec 23 13:31:47 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.

:github_url: https://github.com/pytorch/pytorch

PyTorch documentation
===================================

PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.

.. toctree::
:glob:
:maxdepth: 1
:caption: Notes

notes/*


.. toctree::
:maxdepth: 1
:caption: Package Reference

torch
tensors
sparse
storage
nn
optim
torch.autograd <autograd>
torch.multiprocessing <multiprocessing>
torch.legacy <legacy>
cuda
ffi
data
model_zoo

.. toctree::
:glob:
:maxdepth: 1
:caption: torchvision Reference

torchvision/torchvision
torchvision/datasets
torchvision/models
torchvision/transforms
torchvision/utils


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
4 changes: 4 additions & 0 deletions docs/0.1.12/_sources/legacy.rst.txt
@@ -0,0 +1,4 @@
Legacy package - torch.legacy
===================================

.. automodule:: torch.legacy
5 changes: 5 additions & 0 deletions docs/0.1.12/_sources/model_zoo.rst.txt
@@ -0,0 +1,5 @@
torch.utils.model_zoo
===================================

.. automodule:: torch.utils.model_zoo
.. autofunction:: load_url
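
A minimal sketch of ``load_url``; the checkpoint URL is only an example and
should be treated as an assumption::

    import torch.utils.model_zoo as model_zoo

    # Downloads the file (caching it locally) and deserializes it
    state_dict = model_zoo.load_url(
        'https://download.pytorch.org/models/resnet18-5c106cde.pth')
    # The result can then be fed to a matching model:
    #   model.load_state_dict(state_dict)
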
88 changes: 88 additions & 0 deletions docs/0.1.12/_sources/multiprocessing.rst.txt
@@ -0,0 +1,88 @@
Multiprocessing package - torch.multiprocessing
===============================================

.. automodule:: torch.multiprocessing
.. currentmodule:: torch.multiprocessing

.. warning::

If the main process exits abruptly (e.g. because of an incoming signal),
Python's ``multiprocessing`` sometimes fails to clean up its children.
It's a known caveat, so if you're seeing any resource leaks after
interrupting the interpreter, it probably means that this has just happened
to you.

Strategy management
-------------------

.. autofunction:: get_all_sharing_strategies
.. autofunction:: get_sharing_strategy
.. autofunction:: set_sharing_strategy

Sharing CUDA tensors
--------------------

Sharing CUDA tensors between processes is supported only in Python 3, using
the ``spawn`` or ``forkserver`` start methods. :mod:`python:multiprocessing`
in Python 2 can only create subprocesses using ``fork``, which is not
supported by the CUDA runtime.

.. warning::

The CUDA API requires that an allocation exported to other processes remains
valid for as long as they use it. You should be careful to ensure that the
CUDA tensors you share don't go out of scope for as long as they are needed.
This shouldn't be a problem for sharing model parameters, but passing other
kinds of data should be done with care. Note that this restriction doesn't
apply to shared CPU memory.


Sharing strategies
------------------

This section provides a brief overview of how the different sharing strategies
work. Note that it applies only to CPU tensors - CUDA tensors will always use
the CUDA API, as that's the only way they can be shared.

File descriptor - ``file_descriptor``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


.. note::

This is the default strategy (except for macOS and OS X where it's not
supported).

This strategy will use file descriptors as shared memory handles. Whenever a
storage is moved to shared memory, a file descriptor obtained from ``shm_open``
is cached with the object, and when it's going to be sent to other processes,
the file descriptor will be transferred (e.g. via UNIX sockets) to them. The
receiver will also cache the file descriptor and ``mmap`` it, to obtain a
shared view onto the storage data.

Note that if a lot of tensors are shared, this strategy will keep a large
number of file descriptors open most of the time. If your system has low
limits for the number of open file descriptors, and you can't raise them, you
should use the ``file_system`` strategy.

File system - ``file_system``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This strategy will use file names given to ``shm_open`` to identify the shared
memory regions. This has the benefit of not requiring the implementation to
cache the file descriptors obtained from it, but at the same time it is prone
to shared memory leaks. The file can't be deleted right after its creation,
because other processes need to access it to open their views. If the
processes fatally crash, or are killed, and don't call the storage
destructors, the files will remain in the system. This is very serious,
because they keep using up memory until the system is restarted, or they're
freed manually.

To counter the problem of shared memory file leaks, :mod:`torch.multiprocessing`
will spawn a daemon named ``torch_shm_manager`` that will isolate itself from
the current process group, and will keep track of all shared memory allocations.
Once all processes connected to it exit, it will wait a moment to ensure there
will be no new connections, and will iterate over all shared memory files
allocated by the group. If it finds that any of them still exist, they will be
deallocated. We've tested this method and it proved to be robust to various
failures. Still, if your system has high enough limits, and ``file_descriptor``
is a supported strategy, we do not recommend switching to this one.
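
A small sketch tying the pieces above together; the worker function and tensor
size are illustrative assumptions::

    import torch
    import torch.multiprocessing as mp

    def worker(shared):
        shared.add_(1)           # in-place update is visible to the parent

    if __name__ == '__main__':
        print(mp.get_all_sharing_strategies())
        mp.set_sharing_strategy('file_system')   # opt into the strategy above

        t = torch.zeros(4)
        t.share_memory_()        # move the underlying storage to shared memory
        p = mp.Process(target=worker, args=(t,))
        p.start()
        p.join()
        print(t)                 # now a tensor of ones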