Remove TorchScript part from the Saving and Loading tutorial #3419

Merged (4 commits) on Jun 26, 2025
49 changes: 18 additions & 31 deletions beginner_source/saving_loading_models.py
@@ -227,43 +227,30 @@
# normalization layers to evaluation mode before running inference.
# Failing to do this will yield inconsistent inference results.
#
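# The effect of ``model.eval()`` can be seen with a small self-contained
# sketch (the two-layer model below is illustrative, not part of this
# tutorial): in evaluation mode, dropout becomes the identity, so repeated
# forward passes on the same input agree exactly.

```python
import torch

# Illustrative model containing a dropout layer
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(p=0.5))

x = torch.ones(1, 4)

model.eval()  # dropout becomes a no-op; batch norm would use running stats
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# In eval mode, the two passes produce identical outputs
deterministic = torch.equal(out1, out2)
```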
# Saving an Exported Program
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# If you are using ``torch.export``, you can save and load your ``ExportedProgram`` using the
# ``torch.export.save()`` and ``torch.export.load()`` APIs with the ``.pt2`` file extension:
#
# .. code-block:: python
#
#     class SimpleModel(torch.nn.Module):
#         def forward(self, x):
#             return x + 10
#
#     # Create a sample input
#     sample_input = torch.randn(5)
#
#     # Export the model; ``torch.export.export`` expects the example
#     # inputs as a tuple
#     exported_program = torch.export.export(SimpleModel(), (sample_input,))
#
#     # Save the exported program
#     torch.export.save(exported_program, 'exported_program.pt2')
#
#     # Load the exported program
#     saved_exported_program = torch.export.load('exported_program.pt2')



######################################################################
# Saving & Loading a General Checkpoint for Inference and/or Resuming Training