
Commit 00da7b2

typo fix

1 parent d9be00d · commit 00da7b2

File tree

1 file changed: +3 −3 lines changed


tutorials/frontend/deploy_prequantized.py

Lines changed: 3 additions & 3 deletions
@@ -19,13 +19,13 @@
 ================================
 **Author**: `Masahiro Masuda <https://github.com/masahi>`_

-This is an a tutorial on loading models quantized by deep learning frameworks into TVM.
+This is a tutorial on loading models quantized by deep learning frameworks into TVM.
 Pre-quantized model import is one of the quantization support we have in TVM. More details on
 the quantization story in TVM can be found
 `here <https://discuss.tvm.ai/t/quantization-story/3920>`_.

 Here, we demonstrate how to load and run models quantized by PyTorch, MXNet, and TFLite.
-Once loaded, we can run quantized models on any hardware TVM supports.
+Once loaded, we can run compiled, quantized models on any hardware TVM supports.
 """

 #################################################################################
@@ -153,7 +153,7 @@ def quantize_model(model, inp):
 # You can print the output from the frontend to see how quantized models are
 # represented.
 #
-# You would see operators specfic to quantization such as
+# You would see operators specific to quantization such as
 # qnn.quantize, qnn.dequantize, qnn.requantize, and qnn.conv2d etc.
 input_name = "input"  # the input name can be be arbitrary for PyTorch frontend.
 input_shapes = [(input_name, (1, 3, 224, 224))]
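The lines touched by this diff come from the tutorial's workflow of quantizing a PyTorch model, importing it through relay.frontend.from_pytorch, and compiling it for a TVM target. Below is a minimal sketch of that flow, not part of the commit: it assumes a recent TVM and a torchvision quantizable ResNet-18, and module or method names (graph_executor vs. graph_runtime, .numpy() vs. .asnumpy()) differ across TVM versions.

import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor  # older TVM releases expose this as graph_runtime

def quantize_model(model, inp):
    # Eager-mode post-training static quantization in PyTorch (fbgemm backend for x86).
    model.fuse_model()
    model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
    torch.quantization.prepare(model, inplace=True)
    model(inp)  # calibrate with a representative input
    torch.quantization.convert(model, inplace=True)

inp = torch.rand(1, 3, 224, 224)
model = torchvision.models.quantization.resnet18(pretrained=True).eval()
quantize_model(model, inp)

# Trace the quantized model and import it through the TVM PyTorch frontend.
script_module = torch.jit.trace(model, inp).eval()
input_name = "input"  # the input name can be arbitrary for the PyTorch frontend
input_shapes = [(input_name, (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
print(mod)  # the printed Relay module contains qnn.quantize, qnn.dequantize, qnn.requantize, qnn.conv2d, etc.

# Compile once, then run the quantized model on any target TVM supports ("llvm" here).
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input(input_name, inp.numpy())
runtime.run()
result = runtime.get_output(0).numpy()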

0 commit comments
