Integrate xdoctest - Rebased (pytorch#82797)
This is a new version of pytorch#15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running, with failing dashboards. The few tests that I marked as skip at this stage are the ones causing segfaults. Running xdoctest results in 293 failed and 201 passed tests. The next commits will disable the remaining failing tests. (Unfortunately I don't have a tool that will insert the `# xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
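For reference, the directive is just a comment line inside the doctest itself. Here is a minimal sketch of what a skipped example ends up looking like (`frobnicate` is a made-up function used purely for illustration; the reason string is optional):

```
def frobnicate(x):
    """
    Example::
        >>> # xdoctest: +SKIP("segfaults on CI")
        >>> frobnicate(torch.ones(3))
    """
    return x
```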

Fixes pytorch#71105

@ezyang
Pull Request resolved: pytorch#82797
Approved by: https://github.com/ezyang
Erotemic authored and pytorchmergebot committed Aug 12, 2022
1 parent ba90c9f commit 4618371
Showing 182 changed files with 830 additions and 386 deletions.
5 changes: 5 additions & 0 deletions .circleci/docker/requirements-ci.txt
@@ -164,6 +164,11 @@ pytest-rerunfailures
#Pinned versions:
#test that import:

#xdoctest
#Description: runs doctests in pytest
#Pinned versions:
#test that import:

#PyYAML
#Description: data serialization format
#Pinned versions:
2 changes: 2 additions & 0 deletions .jenkins/pytorch/macos-test.sh
@@ -17,6 +17,8 @@ pip install "unittest-xml-reporting<=3.2.0,>=2.0.0" \
pytest \
pytest-xdist \
pytest-rerunfailures
# TODO: enable xdoctest later
# xdoctest

if [ -z "${CI}" ]; then
rm -rf "${WORKSPACE_DIR}"/miniconda3/lib/python3.6/site-packages/torch*
3 changes: 2 additions & 1 deletion .jenkins/pytorch/win-test-helpers/setup_pytorch_env.bat
@@ -36,7 +36,8 @@ popd
=======
:: Pin unittest-xml-reporting to freeze printing test summary logic, related: https://github.com/pytorch/pytorch/issues/69014

pip install "ninja==1.10.0.post1" future "hypothesis==5.35.1" "expecttest==0.1.3" "librosa>=0.6.2" "scipy==1.6.3" psutil pillow "unittest-xml-reporting<=3.2.0,>=2.0.0" pytest pytest-xdist pytest-rerunfailures
pip install "ninja==1.10.0.post1" future "hypothesis==5.35.1" "expecttest==0.1.3" "librosa>=0.6.2" "scipy==1.6.3" psutil pillow "unittest-xml-reporting<=3.2.0,>=2.0.0" pytest pytest-xdist pytest-rerunfailures
:: # TODO: enable xdoctest later
if errorlevel 1 exit /b
if not errorlevel 0 exit /b

46 changes: 46 additions & 0 deletions docs/source/conf.py
@@ -490,6 +490,51 @@ def is_not_internal(modname):
for o in output:
f.write(o)


def process_docstring(app, what_, name, obj, options, lines):
"""
Custom process to transform docstring lines: removes xdoctest directive comments so they do not appear in the rendered documentation.
Args:
app (sphinx.application.Sphinx): the Sphinx application object
what (str):
the type of the object which the docstring belongs to (one of
"module", "class", "exception", "function", "method", "attribute")
name (str): the fully qualified name of the object
obj: the object itself
options: the options given to the directive: an object with
attributes inherited_members, undoc_members, show_inheritance
and noindex that are true if the flag option of same name was
given to the auto directive
lines (List[str]): the lines of the docstring, see above
References:
https://www.sphinx-doc.org/en/1.5.1/_modules/sphinx/ext/autodoc.html
https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
"""
import re
remove_directives = [
# Remove all xdoctest directives
re.compile(r'\s*>>>\s*#\s*x?doctest:\s*.*'),
re.compile(r'\s*>>>\s*#\s*x?doc:\s*.*'),
]
filtered_lines = [
line for line in lines
if not any(pat.match(line) for pat in remove_directives)
]
# Modify the lines inplace
lines[:] = filtered_lines

# make sure there is a blank line at the end
if lines and lines[-1].strip():
lines.append('')
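
# A rough, illustrative sketch of the transform above (assumed inputs, not
# taken from any real docstring): directive-only lines are dropped in place
# and a trailing blank line is appended.
#
#   lines = ['Example::', '    >>> # xdoctest: +SKIP', '    >>> torch.ones(2)']
#   process_docstring(None, 'function', 'torch.ones', None, None, lines)
#   assert lines == ['Example::', '    >>> torch.ones(2)', '']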


# Called automatically by Sphinx, making this `conf.py` an "extension".
def setup(app):
# NOTE: in Sphinx 1.8+ `html_css_files` is an official configuration value
@@ -506,6 +551,7 @@ def setup(app):
add_css(css_file)

app.connect("build-finished", coverage_post_process)
app.connect('autodoc-process-docstring', process_docstring)

# From PyTorch 1.5, we now use autogenerated files to document classes and
# functions. This breaks older references since
@@ -5,6 +5,7 @@ dependencies:
- numpy
- pytest
- pytest-cov
- xdoctest
- codecov
- pip
- pyyaml
5 changes: 5 additions & 0 deletions pytest.ini
@@ -7,6 +7,11 @@ addopts =
# capture only Python print and C++ py::print, but not C output (low-level Python errors)
--capture=sys
--disable-warnings
# TODO: enable xdoctest later
#--xdoctest
#--xdoctest-style=google
#--xdoctest-global-exec="from torch import nn\nimport torch.nn.functional as F\nimport torch"
#--xdoctest-options=+IGNORE_WHITESPACE
testpaths =
test
junit_logging_reruns = all
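Once that TODO is resolved, the same flags can also be passed directly on the command line; a rough sketch (the flags are taken verbatim from the commented block above, while pointing the run at the `torch` package is only an assumption for illustration):

```
pytest --xdoctest --xdoctest-style=google \
    --xdoctest-options=+IGNORE_WHITESPACE torch/
```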
29 changes: 29 additions & 0 deletions test/run_doctests.sh
@@ -0,0 +1,29 @@
#!/bin/bash
__doc__="
This script simply runs the torch doctests via the xdoctest runner.
This must be run from the root of the torch repo, as it needs the path to the
torch source code.
"

#xdoctest -m torch --style=google list

# Reference: https://stackoverflow.com/questions/59895/bash-script-dir
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
TORCH_MODPATH=$SCRIPT_DIR/../torch
echo "TORCH_MODPATH = $TORCH_MODPATH"

if [[ ! -d "$TORCH_MODPATH" ]] ; then
echo "Could not find the path to the torch module"
else

# The next version of xdoctest will support environment variables that overload these defaults.


export XDOCTEST_GLOBAL_EXEC="from torch import nn\nimport torch.nn.functional as F\nimport torch"
export XDOCTEST_OPTIONS="+IGNORE_WHITESPACE"
# Note: the google style won't catch numpy-style docstrings (a few exist), but it also won't fail
# on things not intended to be doctests.
export XDOCTEST_STYLE="google"
xdoctest "$TORCH_MODPATH" --style="$XDOCTEST_STYLE" --global-exec "$XDOCTEST_GLOBAL_EXEC" --options="$XDOCTEST_OPTIONS"
fi
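To try the doctest run locally, the script is intended to be invoked from the repository root; a minimal sketch, assuming `xdoctest` is installed in the current environment:

```
pip install xdoctest
bash test/run_doctests.sh
```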
14 changes: 14 additions & 0 deletions test/run_test.py
@@ -348,6 +348,20 @@ def get_executable_command(options, allow_pytest, disable_coverage=False):
if options.pytest:
if allow_pytest:
executable += ["-m", "pytest"]
# Enable xdoctest
# TODO: enable xdoctest later
# Many doctests assume the existence of these variables
# xdoctest_global_exec_lines = r'\n'.join([
# 'from torch import nn',
# 'import torch.nn.functional as F',
# 'import torch',
# ])
# executable += [
# "--xdoctest",
# "--xdoctest-style=google",
# f"--xdoctest-global-exec='{xdoctest_global_exec_lines}'",
# "--xdoctest-options=+IGNORE_WHITESPACE"
# ]
else:
print_to_stderr(
"Pytest cannot be used for this test. Falling back to unittest."
3 changes: 3 additions & 0 deletions torch/__init__.py
@@ -318,6 +318,7 @@ def set_default_tensor_type(t):
Example::
>>> # xdoctest: +SKIP("Other tests may have changed the default type. Can we reset it?")
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_tensor_type(torch.DoubleTensor)
@@ -354,6 +355,7 @@ def set_default_dtype(d):
Either torch.float32 or torch.float64.
Example:
>>> # xdoctest: +SKIP("Other tests may have changed the default type. Can we reset it?")
>>> # initial default for floating point is torch.float32
>>> # Python floats are interpreted as float32
>>> torch.tensor([1.2, 3]).dtype
@@ -493,6 +495,7 @@ def use_deterministic_algorithms(mode, *, warn_only=False):
>>> torch.use_deterministic_algorithms(True)
# Forward mode nondeterministic error
>>> # xdoctest: +SKIP
>>> torch.randn(10, device='cuda').kthvalue(0)
...
RuntimeError: kthvalue CUDA does not have a deterministic implementation...
2 changes: 2 additions & 0 deletions torch/_namedtensor_internals.py
@@ -128,6 +128,7 @@ def update_names(tensor, names, rename_map, inplace):
>>> x.rename('batch', '...', 'width').names
('batch', 'C', 'H', 'width')
```
tensor.rename(**rename_map) returns a view on tensor that has rename dims
@@ -138,6 +139,7 @@ def update_names(tensor, names, rename_map, inplace):
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.rename(W='width', H='height').names
('N', 'C', 'height', 'width')
```
Finally, tensor.rename has an in-place version called tensor.rename_.
1 change: 1 addition & 0 deletions torch/_prims/context.py
@@ -103,6 +103,7 @@ class TorchRefsMode(torch.overrides.TorchFunctionMode):
Switches the interpretation of torch.* functions and Tensor methods to
use PrimTorch refs in torch._refs. (Direct calls to _refs are unaffected.)
>>> # xdoctest: +SKIP
>>> with TorchRefsMode():
... torch.add(x, y) # calls torch._refs.add(x, y)
2 changes: 1 addition & 1 deletion torch/_tensor.py
@@ -1197,7 +1197,7 @@ def rename(self, *names, **rename_map):
>>> renamed_imgs = imgs.rename(None)
>>> renamed_imgs.names
(None,)
(None, None, None, None)
>>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')
>>> renamed_imgs.names
1 change: 1 addition & 0 deletions torch/ao/quantization/fuse_modules.py
@@ -135,6 +135,7 @@ def fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=fuse_known_mo
Examples::
>>> # xdoctest: +SKIP
>>> m = M().eval()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
4 changes: 4 additions & 0 deletions torch/ao/quantization/fuser_method_mappings.py
@@ -21,6 +21,7 @@ def fuse_conv_bn(is_qat, conv, bn):
>>> m1 = nn.Conv2d(10, 20, 3)
>>> b1 = nn.BatchNorm2d(20)
>>> # xdoctest: +SKIP
>>> m2 = fuse_conv_bn(m1, b1)
"""
assert(conv.training == bn.training),\
@@ -58,6 +59,7 @@ def fuse_conv_bn_relu(is_qat, conv, bn, relu):
>>> m1 = nn.Conv2d(10, 20, 3)
>>> b1 = nn.BatchNorm2d(20)
>>> r1 = nn.ReLU(inplace=False)
>>> # xdoctest: +SKIP
>>> m2 = fuse_conv_bn_relu(m1, b1, r1)
"""
assert(conv.training == bn.training == relu.training),\
@@ -103,6 +105,7 @@ def fuse_linear_bn(is_qat, linear, bn):
>>> m1 = nn.Linear(20, 10)
>>> b1 = nn.BatchNorm1d(10)
>>> # xdoctest: +SKIP
>>> m2 = fuse_linear_bn(m1, b1)
"""
assert(linear.training == bn.training),\
@@ -130,6 +133,7 @@ def fuse_convtranspose_bn(is_qat, convt, bn):
>>> m1 = nn.ConvTranspose2d(10, 20, 3)
>>> b1 = nn.BatchNorm2d(20)
>>> # xdoctest: +SKIP
>>> m2 = fuse_convtranspose_bn(m1, b1)
"""
assert(convt.training == bn.training),\
1 change: 1 addition & 0 deletions torch/ao/quantization/fx/_model_report/model_report.py
@@ -74,6 +74,7 @@ class compiles the report generated by each Detector class into a single report
8.) Call model_report.generate_qconfigs to generate the qconfigs based on the report suggestions
Example (with QuantizationTracer):
>>> # xdoctest: +SKIP
>>> # get the necessary qconfig
>>> config = PrepareCustomConfig()
>>> skipped_module_names, skipped_module_classes = get_skipped_module_name_and_classes(config, False)
38 changes: 21 additions & 17 deletions torch/ao/quantization/fx/_model_report/model_report_visualizer.py
@@ -321,10 +321,11 @@ def generate_filtered_tables(self, feature_filter: str = "", module_fqn_filter:
The rest of the rows will contain data
Example Use:
>>> # xdoctest: +SKIP("undefined variables")
>>> mod_report_visualizer.generate_filtered_tables(
feature_filter = "per_channel_min",
module_fqn_filter = "block1"
) # generates table with per_channel_min info for all modules in block 1 of the model
... feature_filter = "per_channel_min",
... module_fqn_filter = "block1"
... ) # generates table with per_channel_min info for all modules in block 1 of the model
"""
# first get the filtered data
filtered_data: OrderedDict[str, Any] = self._get_filtered_data(feature_filter, module_fqn_filter)
@@ -403,12 +404,13 @@ def generate_table_visualization(self, feature_filter: str = "", module_fqn_filt
Default = "", results in all the modules in the reports to be visible in the table
Example Use:
>>> # xdoctest: +SKIP("undefined variables")
>>> mod_report_visualizer.generate_table_visualization(
feature_filter = "per_channel_min",
module_fqn_filter = "block1"
)
# prints out neatly formatted table with per_channel_min info for
all modules in block 1 of the model
... feature_filter = "per_channel_min",
... module_fqn_filter = "block1"
... )
>>> # prints out neatly formatted table with per_channel_min info
>>> # for all modules in block 1 of the model
"""
# see if we got tabulate
if not got_tabulate:
@@ -552,13 +554,14 @@ def generate_plot_visualization(self, feature_filter: str, module_fqn_filter: st
Default = "", results in all the modules in the reports to be visible in the table
Example Use:
>>> # xdoctest: +SKIP("undefined variables")
>>> mod_report_visualizer.generate_plot_visualization(
feature_filter = "per_channel_min",
module_fqn_filter = "block1"
)
# outputs line plot of per_channel_min information for all modules in block1 of model
each channel gets it's own line, and it's plotted across the in-order modules
on the x-axis
... feature_filter = "per_channel_min",
... module_fqn_filter = "block1"
... )
>>> # outputs line plot of per_channel_min information for all
>>> # modules in block1 of model each channel gets it's own line,
>>> # and it's plotted across the in-order modules on the x-axis
"""
# checks if we have matplotlib and let's user know to install it if don't
if not got_matplotlib:
@@ -613,10 +616,11 @@ def generate_histogram_visualization(self, feature_filter: str, module_fqn_filte
Default = 10, the values will be split into 10 equal sized bins
Example Use:
>>> # xdoctest: +SKIP
>>> mod_report_visualizer.generate_histogram_visualization(
feature_filter = "per_channel_min",
module_fqn_filter = "block1"
)
... feature_filter = "per_channel_min",
... module_fqn_filter = "block1"
... )
# outputs histogram of per_channel_min information for all modules in block1 of model
information is gathered across all channels for all modules in block 1 for the
per_channel_min and is displayed in a histogram of equally sized bins
4 changes: 3 additions & 1 deletion torch/ao/quantization/observer.py
@@ -83,6 +83,7 @@ def _with_args(cls_or_self, **kwargs):
Example::
>>> # xdoctest: +SKIP("Undefined vars")
>>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
@@ -103,11 +104,12 @@ def _with_callable_args(cls_or_self, **kwargs):
Example::
>>> # xdoctest: +SKIP("Undefined vars")
>>> Foo.with_callable_args = classmethod(_with_callable_args)
>>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_callable_args(cur_time=get_time_func).with_args(name="dan")
>>> foo_instance1 = foo_builder()
>>> wait 50
>>> # wait 50
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1.creation_time) == id(foo_instance2.creation_time)
False