[Tracking][Contrib] Known failing unit tests #8901
Labels: frontend:coreml (python/tvm/relay/frontend/coreml.py)
Comments
Lunderberg added a commit to Lunderberg/tvm that referenced this issue on Sep 1, 2021:
…cuda

Previously, the tests had an early bailout if tensorrt was disabled, or if there was no cuda device present. However, the tests were not marked with `pytest.mark.gpu` and so they didn't run during `task_python_integration_gpuonly.sh`. This commit adds the `requires_cuda` mark, and maintains the same behavior of testing the tensorrt compilation steps if compilation is enabled, and running the results if tensorrt is enabled. In addition, some of the tests result in failures when run. These have been marked with `pytest.mark.xfail`, and are being tracked in issue apache#8901.
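For illustration, a minimal sketch of the marking pattern this commit describes; the test names and bodies below are placeholders rather than the actual TVM tests, while `tvm.testing.requires_cuda` and `pytest.mark.xfail` are the marks named in the message:

```python
import pytest
import tvm.testing


# Collected during GPU CI runs instead of silently returning early.
@tvm.testing.requires_cuda
def test_tensorrt_compile():
    ...  # placeholder for the tensorrt compilation checks


# Still runs in CI, but is reported as an expected failure until fixed.
@tvm.testing.requires_cuda
@pytest.mark.xfail(reason="Known regression, tracked in apache/tvm#8901")
def test_tensorrt_run():
    ...  # placeholder for a test with a known regression
```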
leandron pushed a commit that referenced this issue on Sep 2, 2021:
* [UnitTests][CoreML] Marked test_annotate as a known failure.

  The unit tests in `test_coreml_codegen.py` haven't run in the CI lately, so this test wasn't caught before. (See tracking issue #8901.)

  - Added `pytest.mark.xfail` mark to `test_annotate`.
  - Added `tvm.testing.requires_package` decorator, which can mark tests as requiring a specific python package to be available. Switched from `pytest.importorskip('coremltools')` to `requires_package('coremltools')` in `test_coreml_codegen.py` so that all tests would explicitly show up as skipped in the report.
  - Added `uses_gpu` tag to all tests in `test_coreml_codegen.py`, since only ci_gpu has coremltools installed. In the future, if the ci_cpu image has coremltools installed, this mark can be removed.

* [Pytest][TensorRT] Mark the TensorRT tests with tvm.testing.requires_cuda

  Previously, the tests had an early bailout if tensorrt was disabled, or if there was no cuda device present. However, the tests were not marked with `pytest.mark.gpu` and so they didn't run during `task_python_integration_gpuonly.sh`. This commit adds the `requires_cuda` mark, and maintains the same behavior of testing the tensorrt compilation steps if compilation is enabled, and running the results if tensorrt is enabled. In addition, some of the tests result in failures when run. These have been marked with `pytest.mark.xfail`, and are being tracked in issue #8901.
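As a rough sketch (not the literal contents of `test_coreml_codegen.py`), the gating described in the CoreML half of this commit could look like the following; the first test name and both bodies are placeholders, while `test_annotate` is the test the commit marks as a known failure:

```python
import pytest
import tvm.testing


# Skipped, and explicitly reported as skipped, when coremltools is missing;
# `uses_gpu` ensures the test is collected on the ci_gpu image.
@tvm.testing.requires_package("coremltools")
@tvm.testing.uses_gpu
def test_compile():
    ...  # placeholder body


@tvm.testing.requires_package("coremltools")
@tvm.testing.uses_gpu
@pytest.mark.xfail(reason="Known failure, tracked in apache/tvm#8901")
def test_annotate():
    ...  # placeholder body
```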
ylc pushed a commit to ylc/tvm that referenced this issue on Sep 29, 2021:
ylc pushed a commit to ylc/tvm that referenced this issue on Jan 13, 2022:
@Lunderberg I am tracking down the TRT BYOC and I figured out the issue with these tests: it is the mxnet importer plus a guard.
areusch added the needs-triage label (PRs or issues that need to be investigated by maintainers to find the right assignees) on Oct 19, 2022.
Lunderberg added the frontend:coreml label (python/tvm/relay/frontend/coreml.py) and removed the needs-triage label on Nov 16, 2022.
Summary
Some unit tests were unintentionally disabled in CI, and so regressions weren't being caught. These tests didn't run on the ci_cpu image, because that image lacked either the GPU hardware or the python packages required to run them. They didn't run on the ci_gpu image, because they weren't marked with tvm.testing.uses_gpu. PR #8902 allows the tests to run, and marks the tests with regressions as expected failures. These expected failures should be resolved to restore full functionality.

Status