diff --git a/advanced_source/extend_dispatcher.rst b/advanced_source/extend_dispatcher.rst
index 7cdf49e614..f3ae1e7e55 100644
--- a/advanced_source/extend_dispatcher.rst
+++ b/advanced_source/extend_dispatcher.rst
@@ -53,6 +53,7 @@ You can choose any of keys above to prototype your customized backend.
 To create a Tensor on ``PrivateUse1`` backend, you need to set dispatch key in ``TensorImpl`` constructor.
 
 .. code-block:: cpp
+
   /* Example TensorImpl constructor */
   TensorImpl(
       Storage&& storage,
diff --git a/advanced_source/torch-script-parallelism.rst b/advanced_source/torch-script-parallelism.rst
index f2777fad56..5a2fd86e1f 100644
--- a/advanced_source/torch-script-parallelism.rst
+++ b/advanced_source/torch-script-parallelism.rst
@@ -207,6 +207,7 @@ Let's use the profiler along with the Chrome trace export functionality to
 visualize the performance of our parallelized model:
 
 .. code-block:: python
+
     with torch.autograd.profiler.profile() as prof:
         ens(x)
     prof.export_chrome_trace('parallel.json')
diff --git a/advanced_source/torch_script_custom_ops.rst b/advanced_source/torch_script_custom_ops.rst
index d562038745..4444a18ae0 100644
--- a/advanced_source/torch_script_custom_ops.rst
+++ b/advanced_source/torch_script_custom_ops.rst
@@ -605,7 +605,7 @@ Along with a small ``CMakeLists.txt`` file:
 
 At this point, we should be able to build the application:
 
-.. code-block::
+.. code-block:: shell
 
   $ mkdir build
   $ cd build
@@ -645,7 +645,7 @@ At this point, we should be able to build the application:
 
 And run it without passing a model just yet:
 
-.. code-block::
+.. code-block:: shell
 
   $ ./example_app
   usage: example_app <path-to-exported-script-module>
@@ -672,7 +672,7 @@ The last line will serialize the script function into a file called
 "example.pt". If we then pass this serialized model to our C++ application, we
 can run it straight away:
 
-.. code-block::
+.. code-block:: shell
 
   $ ./example_app example.pt
   terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
diff --git a/beginner_source/hyperparameter_tuning_tutorial.py b/beginner_source/hyperparameter_tuning_tutorial.py
index 11524618cb..45c7663272 100644
--- a/beginner_source/hyperparameter_tuning_tutorial.py
+++ b/beginner_source/hyperparameter_tuning_tutorial.py
@@ -431,7 +431,7 @@ def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
 ######################################################################
 # If you run the code, an example output could look like this:
 #
-# .. code-block::
+# ::
 #
 #     Number of trials: 10 (10 TERMINATED)
 #     +-----+------+------+-------------+--------------+---------+------------+--------------------+
diff --git a/prototype_source/fx_graph_mode_ptq_static.rst b/prototype_source/fx_graph_mode_ptq_static.rst
index 410f5a116b..2fc872b7d9 100644
--- a/prototype_source/fx_graph_mode_ptq_static.rst
+++ b/prototype_source/fx_graph_mode_ptq_static.rst
@@ -10,7 +10,8 @@ we'll have a separate tutorial to show how to make the part of the model we want
 We also have a tutorial for `FX Graph Mode Post Training Dynamic Quantization <https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_dynamic.html>`_.
 tldr; The FX Graph Mode API looks like the following:
 
-.. code:: python
+.. code:: python
+
     import torch
     from torch.quantization import get_default_qconfig
     # Note that this is temporary, we'll expose these functions to torch.quantization after official releasee
diff --git a/recipes_source/android_native_app_with_custom_op.rst b/recipes_source/android_native_app_with_custom_op.rst
index ba488de967..c03940b21f 100644
--- a/recipes_source/android_native_app_with_custom_op.rst
+++ b/recipes_source/android_native_app_with_custom_op.rst
@@ -704,7 +704,7 @@ If you check the android logcat:
 You should see logs with tag 'PyTorchNativeApp' that prints x, y, and the result of the model
 forward, which we print with ``log`` function in ``NativeApp/app/src/main/cpp/pytorch_nativeapp.cpp``.
 
-.. code-block::
+::
 
   I/PyTorchNativeApp(26968): x: -0.9484 -1.1757 -0.5832  0.9144  0.8867  1.0933 -0.4004 -0.3389
   I/PyTorchNativeApp(26968):    -1.0343  1.5200 -0.7625 -1.5724 -1.2073  0.4613  0.2730 -0.6789