
[UnitTests] Apply correct requires_gpu() pytest marks for parametrized target #8542

Merged
merged 6 commits into from
Aug 6, 2021

Conversation

Lunderberg
Contributor

Previously, the addition of tvm.testing._target_to_requirement pytest marks was handled by the parametrize_targets function. The _auto_parametrize_target function assumed that a unit test that was already parametrized had all markings needed. If a unit test was explicitly parametrized using @pytest.mark.parametrize, these marks would be missing.

In most cases, this explicit use of @pytest.mark.parametrize('target', ...) should be avoided, but has value in the case of marking with multiple parameters with @pytest.mark.parametrize('target,other', ...). This use case isn't yet supported by the tvm.testing.parameters function. Therefore, if this occurs, detect it and add the appropriate marks.
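The detection described above can be sketched as follows. This is a simplified stand-in for illustration only; the function names and the target-to-marks table are illustrative, not TVM's actual implementation (the real logic lives in tvm.testing._target_to_requirement and the pytest_generate_tests hook):

```python
# Simplified sketch of the behavior described above.  The mapping and
# function names are illustrative stand-ins, not TVM's actual code.

def target_to_requirement(target):
    # Map a target string to the requirement marks it needs.  A CUDA
    # target needs both the generic "gpu" mark and a "cuda" mark.
    kind = target.split()[0]
    requirements = {
        "cuda": ["gpu", "cuda"],
        "opencl": ["gpu", "opencl"],
        "llvm": [],
    }
    return requirements.get(kind, [])

def marks_for_explicit_parametrize(targets):
    # For @pytest.mark.parametrize('target', targets) (or the
    # multi-parameter 'target,other' form), look up the marks each
    # target value requires so they can be attached per-parameter.
    return {target: target_to_requirement(target) for target in targets}

print(marks_for_explicit_parametrize(["llvm", "cuda"]))
# {'llvm': [], 'cuda': ['gpu', 'cuda']}
```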

@Lunderberg
Contributor Author

Related to #8387. This PR removes the @requires_gpu annotation, as it is now applied automatically, which should avoid any similar issues in the future.

if "target" not in parametrized_args:
# Check if the function is marked with either excluded or
# known failing targets.
for mark in metafunc.definition.iter_markers("parametrize"):
Contributor

nit: Better not to use the loop-else construct (e.g., https://intoli.com/blog/for-else-in-python/).

Contributor Author

No problem, and I can change that. The loop-else reads as cleaner to me, even in the blog post's example, but having a consistent style with the rest of the TVM codebase is far more important so I'll avoid using them.
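To illustrate the style point under discussion, here is the same "find a matching marker" search written both ways; this is a generic sketch, not the actual TVM code:

```python
# The same search written with for-else and with an explicit flag,
# the latter being the style preferred in the review.  Generic sketch,
# not TVM's actual implementation.

def found_with_for_else(markers, name):
    for marker in markers:
        if marker == name:
            break
    else:
        return False  # loop finished without break: no match found
    return True

def found_with_flag(markers, name):
    found = False
    for marker in markers:
        if marker == name:
            found = True
            break
    return found

print(found_with_for_else(["parametrize", "gpu"], "gpu"))  # True
print(found_with_flag(["parametrize"], "gpu"))             # False
```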

@Lunderberg Lunderberg force-pushed the correlated_target_parametrize_marks branch from e7dfbdc to 8ee0702 Compare July 28, 2021 14:31
@Lunderberg
Contributor Author

Failures in CI were due to ONNX-specific tests. This PR is now rebased on top of #8574 , which should be merged prior to this PR.

Contributor

@mbrookhart left a comment

I don't understand what this is doing, and it's enabled tests that aren't marked to run on GPU, and this freaks me out. I really think we need a full review/doc update/RFC of what on earth the testing parameterization is doing, because I really have no idea how to tell CI where to run or disable certain tests anymore. The fact that we've been silently disabling and enabling tests is terrifying.

@Lunderberg
Contributor Author

Lunderberg commented Jul 28, 2021

Agreed, this ended up being a much larger impact than I had expected, and this PR shouldn't be merged until/unless we have consensus on it. (This comment ended up being a bit longer than I had expected, as I was looking up some history for my own knowledge, and figured I might as well write it down.)

Regarding changes to how unit tests are parametrized, when they are marked with @pytest.mark.gpu, here's the history as best as I know.

  1. (State prior to listed changes.) Unit tests explicitly apply to a particular target. The CI explicitly checks for the presence of `@pytest.mark.gpu`, via the [`-m gpu` argument](https://github.com/apache/tvm/blob/main/tests/scripts/task_python_unittest_gpuonly.sh#L20). A test can run over multiple targets by explicitly looping over `tvm.testing.enabled_targets()`.
  2. [TESTS] Refactor tests to run on either the GPU or CPU #6331 added the @tvm.testing.parametrize_targets decorator, to parametrize across all enabled targets. A unit-test decorated with parametrize_targets has an independent test run for each item in tvm.testing.enabled_targets(). These independent tests are marked with the appropriate marks (e.g. tvm.testing.requires_cuda, which includes pytest.mark.gpu) as determined by tvm.testing._target_to_requirement.
  3. [UnitTests] Automatic parametrization over targets, with explicit opt-out #8010 added parametrized fixtures for target and dev. The goal was to reduce the amount of boilerplate needed for any given unit test. A unit test that accepts arguments of target or dev is automatically parametrized over the enabled targets. This behavior is described in the tvm-rfc repo, PR#7. There's some conversation on finalizing exactly how and what should be cached, but the RFC is up-to-date with the latest conversation and implementation. Relevant to the GPU-marking, this followed the same convention as [TESTS] Refactor tests to run on either the GPU or CPU #6331, marking tests as requiring a GPU if the target parameter is a GPU target.
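The scheme from points 2 and 3 can be modeled as follows; names and the mark table are illustrative stand-ins, not TVM's actual implementation:

```python
# Toy model of the parametrization scheme in points 2 and 3: a test
# accepting a `target` argument is expanded into one test per enabled
# target, each carrying the marks that target requires.

ENABLED_TARGETS = ["llvm", "cuda"]  # stand-in for tvm.testing.enabled_targets()

def expand_over_targets(test_name):
    expanded = []
    for target in ENABLED_TARGETS:
        kind = target.split()[0]
        # GPU targets receive the generic "gpu" mark plus a kind-specific mark.
        marks = ["gpu", kind] if kind in ("cuda", "opencl") else []
        expanded.append((f"{test_name}[{target}]", marks))
    return expanded

print(expand_over_targets("test_add"))
# [('test_add[llvm]', []), ('test_add[cuda]', ['gpu', 'cuda'])]
```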

This worked, but allowed for accidentally forgetting an annotation for a target-specific test. This particular PR was motivated by #8387 , which added a missing annotation of @requires_gpu to a test that explicitly used pytest.mark.parametrize. In order to remove that as a potential failure mode, this PR identifies cases where the test has been explicitly parametrized over targets, and adds the same marks as if it had used @tvm.testing.parametrize_targets, including @tvm.testing.requires_gpu if it is a GPU target. That is why the ONNX-specific tests started running with this PR, because the parametrization of target then received the target-specific marks.

I think the source of my misunderstanding was that @tvm.testing.requires_gpu serves two purposes. The first is to indicate that the test should be skipped if no GPU is present. The second purpose (which I hadn't known) is to indicate that a test should be run in the GPU step of the CI. By adding the target-specific marks, I had been assuming that their lack would be accidental rather than intentional.

@Lunderberg
Contributor Author

With the parametrized targets, we can explicitly mark the test with @tvm.testing.excluded_targets('cuda') or @tvm.testing.known_failing_targets('cuda') to remove parametrization over those specific tests.
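As a rough model of those two decorators (behavior simplified; not TVM's actual implementation): excluded targets are dropped from the parametrization entirely, while known-failing targets remain but are treated as expected failures.

```python
# Rough model of the two decorators: @tvm.testing.excluded_targets drops
# the target from the parametrization entirely, so no test is generated;
# @tvm.testing.known_failing_targets keeps the test but marks it as an
# expected failure (xfail).  Simplified stand-in for illustration.

def filter_targets(targets, excluded=(), known_failing=()):
    params = []
    for target in targets:
        kind = target.split()[0]
        if kind in excluded:
            continue  # no test is generated for this target at all
        params.append((target, "xfail" if kind in known_failing else "run"))
    return params

print(filter_targets(["llvm", "cuda"], excluded=["cuda"]))
# [('llvm', 'run')]
print(filter_targets(["llvm", "cuda"], known_failing=["cuda"]))
# [('llvm', 'run'), ('cuda', 'xfail')]
```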

@Lunderberg
Contributor Author

Added the ONNX cleanup and use of tvm.testing.known_failing_targets for onnx frontend tests to this PR, after rebasing on top of #8621. They both needed to adjust the target parameter, and this PR already enables the functionality needed.

@mbrookhart

Contributor

@mbrookhart left a comment

LGTM. Agreed that we'll follow up with a user-facing doc on pytest usage. Thanks, @Lunderberg!

@Lunderberg
Contributor Author

Current failure is due to the automatic marking expecting a string rather than a tvm.target.Target object. This particular issue is resolved by some refactoring that went alongside #8383. Will cherry-pick that commit for the refactoring, leaving the cudnn-enabling-by-default on that PR.

Lunderberg added a commit to Lunderberg/tvm that referenced this pull request Aug 3, 2021
Follow-up from apache#8542, to document existing features.
@Lunderberg Lunderberg force-pushed the correlated_target_parametrize_marks branch from b4628a7 to 0019d60 Compare August 3, 2021 17:37
- The onnx tests `test_basic_convinteger`, `test_convinteger_with_padding`, `test_range_float_type_positive_delta_expanded`, and `test_range_int32_type_positive_delta_expanded` don't run correctly on CUDA targets, so they are added to the exclusion.

- Parametrized over the relative directory name, rather than the full directory name.  This improves readability of the pytest output, and keeps the same parametrized test name across different python versions.

- Changed the target-specific skips to check the target kind, rather than the full target string.
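The last bullet's kind-based check might look like the following sketch (hypothetical helper for illustration, not the committed code):

```python
# Checking the target kind rather than the full target string, so a
# skip registered for "cuda" also matches composite target strings
# such as "cuda -libs=cudnn".  Hypothetical helper, not TVM's code.

def should_skip(target, skip_kinds):
    kind = target.split()[0]
    return kind in skip_kinds

print(should_skip("cuda -libs=cudnn", ["cuda"]))  # True
print(should_skip("llvm", ["cuda"]))              # False
```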
…d target

Previously, the addition of tvm.testing._target_to_requirement pytest marks
was handled by the parametrize_targets function.  The
_auto_parametrize_target function assumed that a unit test that was already
parametrized had all markings needed.  If a unit test was explicitly
parametrized using @pytest.mark.parametrize, these marks would be missing.

In most cases, this explicit use of @pytest.mark.parametrize('target', ...)
should be avoided, but has value in the case of marking with multiple
parameters with @pytest.mark.parametrize('target,other', ...).  This use
case isn't yet supported by the tvm.testing.parameters function.  Therefore,
if this occurs, detect it and add the appropriate marks.
Initial implementation did not work correctly with
@tvm.testing.parametrize_targets.

Also, went through all cases where "target" is used to parametrize on
something other than a target string, and renamed.
…argets

After merging of the `tvm.testing.parametrize_targets` and
`tvm.testing._auto_parametrize_target` code paths,
`known_failing_targets` can be used in both cases.
Previously, tvm.testing._target_to_requirement required the argument
to be a string.  This commit allows it to be either a string or a
`tvm.target.Target`.
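The string-or-Target handling this commit describes can be sketched as follows; the Target class here is a minimal stand-in for `tvm.target.Target`, for illustration only:

```python
# Accepting either a target string or a Target object, as this commit
# describes.  The Target class is a minimal stand-in for
# tvm.target.Target, and the requirement table is illustrative.

class Target:
    def __init__(self, spec):
        self.kind = spec.split()[0]

def target_to_requirement(target):
    # Normalize to the target kind, whichever form was passed in.
    kind = target.kind if isinstance(target, Target) else target.split()[0]
    return ["gpu", kind] if kind in ("cuda", "opencl") else []

print(target_to_requirement("cuda"))          # ['gpu', 'cuda']
print(target_to_requirement(Target("cuda")))  # ['gpu', 'cuda']
```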
@Lunderberg Lunderberg force-pushed the correlated_target_parametrize_marks branch 2 times, most recently from 63344f6 to ff83bdb Compare August 5, 2021 13:20
@Lunderberg
Copy link
Contributor Author

Bumping to restart CI after #8656.

If the unit test has already been parametrized with pytest.params to
add parameter-specific marks, respect those existing marks.

This can happen in some cases in the CI, uncertain yet what is causing
them.  Maybe pytest-xdist related, but there's some difficulty in
reproducing it locally.
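The "respect those existing marks" behavior can be modeled as a simple merge; this is a sketch only, as pytest's actual ParameterSet objects carry marks as Mark instances rather than plain values:

```python
# Model of respecting marks already attached via pytest.param: the
# target-derived marks are appended only if not already present, so
# parameter-specific marks are never clobbered or duplicated.
# Sketch only, not pytest's actual ParameterSet machinery.

def merge_marks(existing, target_derived):
    merged = list(existing)
    for mark in target_derived:
        if mark not in merged:
            merged.append(mark)
    return merged

print(merge_marks(["xfail"], ["gpu", "cuda"]))
# ['xfail', 'gpu', 'cuda']
print(merge_marks(["gpu"], ["gpu", "cuda"]))
# ['gpu', 'cuda']
```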
@Lunderberg Lunderberg force-pushed the correlated_target_parametrize_marks branch from ff83bdb to c49a86b Compare August 5, 2021 19:12
@masahi masahi merged commit 783fe98 into apache:main Aug 6, 2021
@Lunderberg Lunderberg deleted the correlated_target_parametrize_marks branch August 6, 2021 10:12
mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request Aug 11, 2021
…d target (apache#8542)

* [Onnx][UnitTests] Excluded additional onnx tests

- The onnx tests `test_basic_convinteger`, `test_convinteger_with_padding`, `test_range_float_type_positive_delta_expanded`, and `test_range_int32_type_positive_delta_expanded` don't run correctly on CUDA targets, so they are added to the exclusion.

- Parametrized over the relative directory name, rather than the full directory name.  This improves readability of the pytest output, and keeps the same parametrized test name across different python versions.

- Changed the target-specific skips to check the target kind, rather than the full target string.

* [UnitTests] Apply correct requires_gpu() pytest marks for parametrized target

Previously, the addition of tvm.testing._target_to_requirement pytest marks
was handled by the parametrize_targets function.  The
_auto_parametrize_target function assumed that a unit test that was already
parametrized had all markings needed.  If a unit test was explicitly
parametrized using @pytest.mark.parametrize, these marks would be missing.

In most cases, this explicit use of @pytest.mark.parametrize('target', ...)
should be avoided, but has value in the case of marking with multiple
parameters with @pytest.mark.parametrize('target,other', ...).  This use
case isn't yet supported by the tvm.testing.parameters function.  Therefore,
if this occurs, detect it and add the appropriate marks.

* [UnitTest] Bugfix, applying requires_* markers to parametrized targets.

Initial implementation did not work correctly with
@tvm.testing.parametrize_targets.

Also, went through all cases where "target" is used to parametrize on
something other than a target string, and renamed.

* [Onnx] Switched from using pytest.skip to tvm.testing.known_failing_targets

After merging of the `tvm.testing.parametrize_targets` and
`tvm.testing._auto_parametrize_target` code paths,
`known_failing_targets` can be used in both cases.

* [Testing] Enable `Target` object as argument to _target_to_requirement

Previously, tvm.testing._target_to_requirement required the argument
to be a string.  This commit allows it to be either a string or a
`tvm.target.Target`.

* [Testing] Auto-target parametrization, handle pytest ParameterSet

If the unit test has already been parametrized with pytest.params to
add parameter-specific marks, respect those existing marks.

This can happen in some cases in the CI, uncertain yet what is causing
them.  Maybe pytest-xdist related, but there's some difficulty in
reproducing it locally.

Co-authored-by: Eric Lunderberg <elunderberg@octoml.ai>
leandron pushed a commit that referenced this pull request Aug 11, 2021
* [Docs] Added documentation on pytest target parametrization.

Follow-up from #8542, to document existing features.

* [Docs] Updated pytest parametrization documentation following review

Co-authored-by: Eric Lunderberg <elunderberg@octoml.ai>
ylc pushed a commit to ylc/tvm that referenced this pull request Sep 29, 2021
ylc pushed a commit to ylc/tvm that referenced this pull request Sep 29, 2021
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022