Commit c9f6eaa

Merge pull request pytorch#628 from lara-hdr/lahaidar/ort_tutorial_typos
Fix Typos in ORT Tutorial and Change its Location
2 parents 6db2a07 + 9896b04 commit c9f6eaa

File tree

2 files changed: +36 −34 lines changed

advanced_source/super_resolution_with_onnxruntime.py

Lines changed: 29 additions & 29 deletions
@@ -1,22 +1,22 @@
 """
-Exporting a Model from PyTorch to ONNX and Running it using ONNXRuntime
-=======================================================================
+Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime
+========================================================================

 In this tutorial, we describe how to convert a model defined
-in PyTorch into the ONNX format and then run it with ONNXRuntime.
+in PyTorch into the ONNX format and then run it with ONNX Runtime.

-ONNXRuntime is a performance-focused engine for ONNX models,
+ONNX Runtime is a performance-focused engine for ONNX models,
 which inferences efficiently across multiple platforms and hardware
 (Windows, Linux, and Mac and on both CPUs and GPUs).
-ONNXRuntime has proved to considerably increase performance over
+ONNX Runtime has proved to considerably increase performance over
 multiple models as explained `here
 <https://cloudblogs.microsoft.com/opensource/2019/05/22/onnx-runtime-machine-learning-inferencing-0-4-release>`__

-For this tutorial, you will need to install `onnx <https://github.com/onnx/onnx>`__
-and `onnxruntime <https://github.com/microsoft/onnxruntime>`__.
-You can get binary builds of onnx and onnxrunimte with
+For this tutorial, you will need to install `ONNX <https://github.com/onnx/onnx>`__
+and `ONNX Runtime <https://github.com/microsoft/onnxruntime>`__.
+You can get binary builds of ONNX and ONNX Runtime with
 ``pip install onnx onnxruntime``.
-Note that ONNXRuntime is compatible with Python versions 3.5 to 3.7.
+Note that ONNX Runtime is compatible with Python versions 3.5 to 3.7.

 ``NOTE``: This tutorial needs PyTorch master branch which can be installed by following
 the instructions `here <https://github.com/pytorch/pytorch#from-source>`__
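
As a quick sanity check for the install step touched above — a minimal sketch, not part of this commit, assuming both packages were installed with ``pip install onnx onnxruntime``:

    # Editorial sketch: confirm the pip install step worked and report versions.
    import onnx
    import onnxruntime

    print("onnx version:", onnx.__version__)
    print("onnxruntime version:", onnxruntime.__version__)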
@@ -141,7 +141,7 @@ def _initialize_weights(self):
                   x,                         # model input (or a tuple for multiple inputs)
                   "super_resolution.onnx",   # where to save the model (can be a file or file-like object)
                   export_params=True,        # store the trained parameter weights inside the model file
-                  opset_version=10,          # the onnx version to export the model to
+                  opset_version=10,          # the ONNX version to export the model to
                   do_constant_folding=True,  # wether to execute constant folding for optimization
                   input_names = ['input'],   # the model's input names
                   output_names = ['output'], # the model's output names
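
For context, the hunk above edits a comment inside a ``torch.onnx.export`` call. A self-contained sketch of such a call follows; ``TinyNet`` is a hypothetical stand-in for the tutorial's SuperResolutionNet, not code from this commit:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in model; the tutorial exports its SuperResolutionNet instead.
    class TinyNet(nn.Module):
        def __init__(self):
            super(TinyNet, self).__init__()
            self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

        def forward(self, x):
            return self.conv(x)

    model = TinyNet()
    model.eval()                                  # export in inference mode
    x = torch.randn(1, 1, 224, 224)               # dummy input; tracing fixes these shapes
    torch_out = model(x)                          # reference output for the later comparison

    torch.onnx.export(model, x, "super_resolution.onnx",
                      export_params=True,         # store trained weights in the file
                      opset_version=10,           # the ONNX opset to target
                      do_constant_folding=True,   # fold constant subgraphs at export time
                      input_names=['input'],
                      output_names=['output'])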
@@ -151,10 +151,10 @@ def _initialize_weights(self):
 ######################################################################
 # We also computed ``torch_out``, the output after of the model,
 # which we will use to verify that the model we exported computes
-# the same values when run in onnxruntime.
+# the same values when run in ONNX Runtime.
 #
-# But before verifying the model's output with onnxruntime, we will check
-# the onnx model with onnx's API.
+# But before verifying the model's output with ONNX Runtime, we will check
+# the ONNX model with ONNX's API.
 # First, ``onnx.load("super_resolution.onnx")`` will load the saved model and
 # will output a onnx.ModelProto structure (a top-level file/container format for bundling a ML model.
 # For more information `onnx.proto documentation <https://github.com/onnx/onnx/blob/master/onnx/onnx.proto>`__.).
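
The check this text refers to is short; a sketch, assuming the exported file from the previous step is on disk:

    import onnx

    # Load the saved model into an onnx.ModelProto structure.
    onnx_model = onnx.load("super_resolution.onnx")

    # Verify the model's structure and check that it conforms to the ONNX
    # schema; an invalid model raises an exception here.
    onnx.checker.check_model(onnx_model)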
@@ -172,18 +172,18 @@ def _initialize_weights(self):


 ######################################################################
-# Now let's compute the output using ONNXRuntime's Python APIs.
+# Now let's compute the output using ONNX Runtime's Python APIs.
 # This part can normally be done in a separate process or on another
 # machine, but we will continue in the same process so that we can
-# verify that onnxruntime and PyTorch are computing the same value
+# verify that ONNX Runtime and PyTorch are computing the same value
 # for the network.
 #
-# In order to run the model with ONNXRuntime, we need to create an
+# In order to run the model with ONNX Runtime, we need to create an
 # inference session for the model with the chosen configuration
 # parameters (here we use the default config).
 # Once the session is created, we evaluate the model using the run() api.
 # The output of this call is a list containing the outputs of the model
-# computed by ONNXRuntime.
+# computed by ONNX Runtime.
 #

 import onnxruntime
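
A sketch of the session creation described above, with the default configuration as in the tutorial; note that newer ONNX Runtime releases may additionally require an explicit ``providers`` argument:

    import onnxruntime

    # Create an inference session with the default configuration.
    # On recent ONNX Runtime versions you may need:
    #   onnxruntime.InferenceSession("super_resolution.onnx",
    #                                providers=["CPUExecutionProvider"])
    ort_session = onnxruntime.InferenceSession("super_resolution.onnx")

    # The input name and shape come from the export step's input_names
    # and dummy input.
    inp = ort_session.get_inputs()[0]
    print(inp.name, inp.shape)   # e.g. 'input' [1, 1, 224, 224]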
@@ -193,33 +193,33 @@ def _initialize_weights(self):
 def to_numpy(tensor):
     return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

-# compute onnxruntime output prediction
+# compute ONNX Runtime output prediction
 ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x)}
 ort_outs = ort_session.run(None, ort_inputs)

-# compare onnxruntime and PyTorch results
+# compare ONNX Runtime and PyTorch results
 np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05)

 print("Exported model has been tested with ONNXRuntime, and the result looks good!")


 ######################################################################
-# We should see that the output of PyTorch and onnxruntime runs match
+# We should see that the output of PyTorch and ONNX Runtime runs match
 # numerically with the given precision (rtol=1e-03 and atol=1e-05).
 # As a side-note, if they do not match then there is an issue in the
-# onnx exporter, so please contact us in that case.
+# ONNX exporter, so please contact us in that case.
 #


 ######################################################################
-# Running the model on an image using ONNXRuntime
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Running the model on an image using ONNX Runtime
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #


 ######################################################################
 # So far we have exported a model from PyTorch and shown how to load it
-# and run it in onnxruntime with a dummy tensor as an input.
+# and run it in ONNX Runtime with a dummy tensor as an input.

 ######################################################################
 # For this tutorial, we will use a famous cat image used widely which
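
The section introduced above prepares the cat image before inference. A sketch of that preparation, assuming a local copy of the image (the file name is illustrative) and standard torchvision transforms:

    from PIL import Image
    import torchvision.transforms as transforms

    img = Image.open("cat_224x224.jpg")            # hypothetical local copy of the cat image
    img = transforms.Resize([224, 224])(img)

    # The model upscales the luminance (Y) channel only; keep Cb/Cr for later.
    img_y, img_cb, img_cr = img.convert('YCbCr').split()

    img_y = transforms.ToTensor()(img_y)           # (1, 224, 224) float tensor in [0, 1]
    img_y.unsqueeze_(0)                            # add a batch dimension -> (1, 1, 224, 224)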
@@ -263,7 +263,7 @@ def to_numpy(tensor):
 ######################################################################
 # Now, as a next step, let's take the tensor representing the
 # greyscale resized cat image and run the super-resolution model in
-# ONNXRuntime as explained previously.
+# ONNX Runtime as explained previously.
 #

 ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(img_y)}
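
Following the hunk above, the tutorial turns the model output back into an image. A sketch of that post-processing, assuming the tutorial's 3x upscale factor (so a 224x224 input yields a 672x672 output) and the ``img_cb``/``img_cr`` channels from the earlier split:

    import numpy as np
    from PIL import Image

    ort_outs = ort_session.run(None, ort_inputs)
    img_out_y = ort_outs[0][0, 0]                  # (672, 672) floats in roughly [0, 1]

    # Rescale to 8-bit pixels and rebuild a PIL image from the Y channel.
    img_out_y = Image.fromarray(np.uint8((img_out_y * 255.0).clip(0, 255)), mode='L')

    # Upsample the colour channels to match and merge back to RGB.
    final_img = Image.merge("YCbCr", [
        img_out_y,
        img_cb.resize(img_out_y.size, Image.BICUBIC),
        img_cr.resize(img_out_y.size, Image.BICUBIC),
    ]).convert("RGB")

    final_img.save("cat_superres_with_ort.jpg")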
@@ -299,14 +299,14 @@ def to_numpy(tensor):
 # :alt: output\_cat
 #
 #
-# ONNXRuntime being a cross platform engine, you can run it across
+# ONNX Runtime being a cross platform engine, you can run it across
 # multiple platforms and on both CPUs and GPUs.
 #
-# ONNXRuntime can also be deployed to the cloud for model inferencing
+# ONNX Runtime can also be deployed to the cloud for model inferencing
 # using Azure Machine Learning Services. More information `here <https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-onnx>`__.
 #
-# More information about ONNXRuntime's performance `here <https://github.com/microsoft/onnxruntime#high-performance>`__.
+# More information about ONNX Runtime's performance `here <https://github.com/microsoft/onnxruntime#high-performance>`__.
 #
 #
-# For more information about ONNXRuntime `here <https://github.com/microsoft/onnxruntime>`__.
+# For more information about ONNX Runtime `here <https://github.com/microsoft/onnxruntime>`__.
 #
index.rst

Lines changed: 7 additions & 5 deletions
@@ -99,10 +99,6 @@ Image
    :tooltip: Raise your awareness to the security vulnerabilities of ML models, and get insight into the hot topic of adversarial machine learning
    :description: :doc:`beginner/fgsm_tutorial`

-.. customgalleryitem::
-   :figure: /_static/img/cat.jpg
-   :tooltip: Exporting a Model from PyTorch to ONNX and Running it using ONNXRuntime
-   :description: :doc:`advanced/super_resolution_with_onnxruntime`

 .. raw:: html

@@ -249,6 +245,12 @@ Production Usage
    :description: :doc:`/intermediate/flask_rest_api_tutorial`
    :figure: _static/img/flask.png

+.. customgalleryitem::
+   :figure: /_static/img/cat.jpg
+   :tooltip: Exporting a Model from PyTorch to ONNX and Running it using ONNXRuntime
+   :description: :doc:`advanced/super_resolution_with_onnxruntime`
+
+
 .. raw:: html

     <div style='clear:both'></div>
@@ -296,7 +298,6 @@ PyTorch in Other Languages
    intermediate/spatial_transformer_tutorial
    advanced/neural_style_tutorial
    beginner/fgsm_tutorial
-   advanced/super_resolution_with_onnxruntime

 .. toctree::
    :maxdepth: 2
@@ -357,6 +358,7 @@ PyTorch in Other Languages
    intermediate/flask_rest_api_tutorial
    beginner/aws_distributed_training_tutorial
    advanced/cpp_export
+   advanced/super_resolution_with_onnxruntime

 .. toctree::
    :maxdepth: 2
