Commit 8aefdf5

[v1.0.1] cpp tutorial fix (pytorch#422)
* fix Normalize
* fix data_loader
* [WIP]
* fix torch_script_custom_ops
1 parent d35c4f3 commit 8aefdf5

3 files changed (+9 -9 lines)

advanced_source/cpp_export.rst (+2 -2)

@@ -105,13 +105,13 @@ method::

       @torch.jit.script_method
       def forward(self, input):
-        if input.sum() > 0:
+        if bool(input.sum() > 0):
           output = self.weight.mv(input)
         else:
           output = self.weight + input
         return output

-  my_script_module = MyModule()
+  my_script_module = MyModule(2, 3)

 Creating a new ``MyModule`` object now directly produces an instance of
 ``ScriptModule`` that is ready for serialization.
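
Side note (not part of this diff): a minimal sketch of the C++ side this tutorial builds toward, assuming the fixed module has been serialized with ``my_script_module.save("model.pt")`` and using the libtorch 1.0-era API:

  #include <torch/script.h>

  #include <iostream>
  #include <memory>
  #include <vector>

  int main() {
    // Deserialize the ScriptModule produced by my_script_module.save("model.pt")
    // (the file name is an assumption for this sketch).
    std::shared_ptr<torch::jit::script::Module> module =
        torch::jit::load("model.pt");

    // MyModule(2, 3) holds a 2x3 weight, so the mv() branch expects a length-3 input.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({3}));

    // Run the scripted forward() and print the result.
    at::Tensor output = module->forward(inputs).toTensor();
    std::cout << output << std::endl;
    return 0;
  }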

advanced_source/cpp_frontend.rst (+3 -3)

@@ -899,7 +899,7 @@ stacks them into a single tensor along the first dimension:
 .. code-block:: cpp

   auto dataset = torch::data::datasets::MNIST("./mnist")
-    .map(torch::data::transforms::Normalize(0.5, 0.5))
+    .map(torch::data::transforms::Normalize<>(0.5, 0.5))
     .map(torch::data::transforms::Stack<>());

 Note that the MNIST dataset should be located in the ``./mnist`` directory

@@ -914,7 +914,7 @@ dataset, the type of the sampler and some other implementation details):

 .. code-block:: cpp

-  auto dataloader = torch::data::make_data_loader(std::move(dataset));
+  auto data_loader = torch::data::make_data_loader(std::move(dataset));

 The data loader does come with a lot of options. You can inspect the full set
 `here

@@ -928,7 +928,7 @@ let's create a ``DataLoaderOptions`` object and set the appropriate properties:

 .. code-block:: cpp

-  auto dataloader = torch::data::make_data_loader(
+  auto data_loader = torch::data::make_data_loader(
       std::move(dataset),
       torch::data::DataLoaderOptions().batch_size(kBatchSize).workers(2));
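
Putting the corrected snippets from this file together, a minimal sketch of the pipeline (``kBatchSize`` is defined elsewhere in the tutorial; the value 64 below is an assumption):

  #include <torch/torch.h>

  #include <iostream>

  int main() {
    const int64_t kBatchSize = 64;  // assumed value; the tutorial defines this elsewhere

    // Dataset with the corrected Normalize<> transform, stacked into batch tensors.
    auto dataset = torch::data::datasets::MNIST("./mnist")
        .map(torch::data::transforms::Normalize<>(0.5, 0.5))
        .map(torch::data::transforms::Stack<>());

    // Data loader with the corrected variable name and explicit options.
    auto data_loader = torch::data::make_data_loader(
        std::move(dataset),
        torch::data::DataLoaderOptions().batch_size(kBatchSize).workers(2));

    // Iterate over batches: batch.data is [batch, 1, 28, 28], batch.target holds labels.
    for (torch::data::Example<>& batch : *data_loader) {
      std::cout << batch.data.sizes() << std::endl;
      break;
    }
    return 0;
  }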

advanced_source/torch_script_custom_ops.rst (+4 -4)

@@ -90,8 +90,8 @@ in Python). The return type of our ``warp_perspective`` function will also be a
 The TorchScript compiler understands a fixed number of types. Only these types
 can be used as arguments to your custom operator. Currently these types are:
 ``torch::Tensor``, ``torch::Scalar``, ``double``, ``int64_t`` and
-``std::vector``s of these types. Note that __only__ ``double`` and __not__
-``float``, and __only__ ``int64_t`` and __not__ other integral types such as
+``std::vector`` s of these types. Note that *only* ``double`` and *not*
+``float``, and *only* ``int64_t`` and *not* other integral types such as
 ``int``, ``short`` or ``long`` are supported.

 Inside of our function, the first thing we need to do is convert our PyTorch

@@ -1018,7 +1018,7 @@ expects from a module), this route can be slightly quirky. That said, all you
 need is a ``setup.py`` file in place of the ``CMakeLists.txt`` which looks like
 this:

-.. code-block::
+.. code-block:: python

   from setuptools import setup
   from torch.utils.cpp_extension import BuildExtension, CppExtension

@@ -1081,7 +1081,7 @@ This will produce a shared library called ``warp_perspective.so``, which we can
 pass to ``torch.ops.load_library`` as we did earlier to make our operator
 visible to TorchScript:

-.. code-block::
+.. code-block:: python

   >>> import torch
   >>> torch.ops.load_library("warp_perspective.so")
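
To illustrate the argument-type rule from the first hunk above, a hypothetical operator (``scale_and_tile`` is illustrative, not part of the tutorial) sketched against the 1.0-era ``torch::jit::RegisterOperators`` API, using only accepted types (``double`` rather than ``float``, ``int64_t`` rather than ``int``):

  #include <torch/script.h>

  // Hypothetical operator: every argument uses a type the TorchScript compiler
  // accepts (torch::Tensor, double, int64_t).
  torch::Tensor scale_and_tile(torch::Tensor input, double scale, int64_t repeats) {
    // Scale the (assumed 1-D) input and repeat it `repeats` times along dim 0.
    return input.mul(scale).repeat({repeats});
  }

  // Register under a namespaced name so TorchScript can find it (1.0-era API).
  static auto registry =
      torch::jit::RegisterOperators("my_ops::scale_and_tile", &scale_and_tile);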
