From a9f57db6079e688346afdc0926072a28eb54ccf8 Mon Sep 17 00:00:00 2001
From: Vasiliy Kuznetsov
Date: Mon, 6 Feb 2023 13:55:49 -0800
Subject: [PATCH] AO migration: migrate .rst files to new locations (#94211)

Summary:

Migrates the PyTorch documentation to point to the new locations of AO code.

Context: https://github.com/pytorch/pytorch/issues/81667

Process:
1. run https://gist.github.com/vkuzo/c38d4ba201604579d7d316ec4a4692e7 for automated replacement
2. manually fix the doc build errors (by removing the module declarations which are now duplicate)

Test plan: CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94211
Approved by: https://github.com/jerryzh168
---
 docs/source/quantization-support.rst | 59 ++++++++++++----------
 docs/source/quantization.rst         | 74 ++++++++++++----------------
 2 files changed, 65 insertions(+), 68 deletions(-)

diff --git a/docs/source/quantization-support.rst b/docs/source/quantization-support.rst
index e974df655af70..0e99517f3abf1 100644
--- a/docs/source/quantization-support.rst
+++ b/docs/source/quantization-support.rst
@@ -1,12 +1,12 @@
 Quantization API Reference
 -------------------------------
 
-torch.quantization
+torch.ao.quantization
 ~~~~~~~~~~~~~~~~~~~~~
 
 This module contains Eager mode quantization APIs.
 
-.. currentmodule:: torch.quantization
+.. currentmodule:: torch.ao.quantization
 
 Top level APIs
 ^^^^^^^^^^^^^^
@@ -49,12 +49,12 @@ Utility functions
     propagate_qconfig_
     default_eval_fn
 
-torch.quantization.quantize_fx
+torch.ao.quantization.quantize_fx
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This module contains FX graph mode quantization APIs (prototype).
 
-.. currentmodule:: torch.quantization.quantize_fx
+.. currentmodule:: torch.ao.quantization.quantize_fx
 
 .. autosummary::
     :toctree: generated
@@ -178,13 +178,13 @@ regular full-precision tensor.
     topk
 
-torch.quantization.observer
+torch.ao.quantization.observer
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This module contains observers which are used to collect statistics about
 the values observed during calibration (PTQ) or training (QAT).
 
-.. currentmodule:: torch.quantization.observer
+.. currentmodule:: torch.ao.quantization.observer
 
 .. autosummary::
     :toctree: generated
@@ -211,13 +211,13 @@ the values observed during calibration (PTQ) or training (QAT).
     default_dynamic_quant_observer
     default_float_qparams_observer
 
-torch.quantization.fake_quantize
+torch.ao.quantization.fake_quantize
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This module implements modules which are used to perform fake quantization
 during QAT.
 
-.. currentmodule:: torch.quantization.fake_quantize
+.. currentmodule:: torch.ao.quantization.fake_quantize
 
 .. autosummary::
     :toctree: generated
@@ -240,13 +240,13 @@ during QAT.
     disable_observer
     enable_observer
 
-torch.quantization.qconfig
+torch.ao.quantization.qconfig
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This module defines `QConfig` objects which are used
 to configure quantization settings for individual ops.
 
-.. currentmodule:: torch.quantization.qconfig
+.. currentmodule:: torch.ao.quantization.qconfig
 
 .. autosummary::
     :toctree: generated
@@ -481,14 +481,14 @@ This module implements the quantized versions of the functional layers such as
     upsample_bilinear
     upsample_nearest
 
-torch.nn.quantizable
-~~~~~~~~~~~~~~~~~~~~
+torch.ao.nn.quantizable
+~~~~~~~~~~~~~~~~~~~~~~~
 
 This module implements the quantizable versions of some of the nn layers.
 These modules can be used in conjunction with the custom module mechanism,
 by providing the ``custom_module_config`` argument to both prepare and convert.
 
-.. currentmodule:: torch.nn.quantizable
+.. currentmodule:: torch.ao.nn.quantizable
 
 .. autosummary::
     :toctree: generated
@@ -585,21 +585,30 @@ the `custom operator mechanism
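For orientation, the sketch below shows what the renamed namespaces look like from user code once the docs point at ``torch.ao.quantization``. It is an illustrative eager-mode post-training quantization example, not part of the patch; the toy model, calibration data, and the ``fbgemm`` backend choice are assumptions made purely for demonstration::

    # Minimal eager-mode PTQ sketch using the new torch.ao.quantization paths.
    # Assumptions: a PyTorch build with the "fbgemm" backend (x86), and a toy
    # model plus random calibration tensors invented for illustration.
    import torch
    from torch.ao.quantization import (  # previously documented as torch.quantization
        DeQuantStub,
        QuantStub,
        convert,
        get_default_qconfig,
        prepare,
    )

    class ToyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = QuantStub()      # float -> quantized boundary
            self.fc = torch.nn.Linear(4, 4)
            self.dequant = DeQuantStub()  # quantized -> float boundary

        def forward(self, x):
            return self.dequant(self.fc(self.quant(x)))

    model = ToyModel().eval()
    model.qconfig = get_default_qconfig("fbgemm")
    prepared = prepare(model)            # insert observers
    prepared(torch.randn(8, 4))          # calibration pass (PTQ)
    quantized = convert(prepared)        # swap in quantized modules
    print(quantized(torch.randn(2, 4)))

The old ``torch.quantization`` import path is kept as an alias during the migration, so this patch only changes which namespace the documentation references, not user-facing behavior.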