I only consider a line to be "covered" if it is hit by a test that doesn't fail, because no guarantees are made about the behavior of lines hit during failing tests. Normally this isn't a problem: if any test fails, the whole suite fails, so the coverage metrics can be assumed to be inaccurate anyway.
However, this becomes a problem if you use the `xfail` marker, or its equivalent in other test runners. I still want to run these tests, because an unexpected pass tells me I can remove the `xfail` marker, but the tests may hit uncovered lines before they fail. Since the failure is expected, it doesn't cause the test suite to fail, and I end up with a passing test suite and inaccurate coverage metrics. See this Stack Overflow question for more details and an MCVE.
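To make the failure mode concrete, here is a minimal, hypothetical example (the module and test names are invented): the xfailing test executes the first line of `process()` before it raises, so that line is recorded as covered, yet the suite still passes because the failure was expected.

```python
# mylib.py (hypothetical module)
def process(value):
    total = value * 2        # executed by the xfailing test, so recorded as covered
    if value < 0:
        raise ValueError("negative input not supported yet")
    return total


# test_mylib.py (hypothetical test)
import pytest
from mylib import process

@pytest.mark.xfail(reason="negative inputs not supported yet")
def test_process_negative():
    # process() raises before the assert runs; the test is reported as xfail,
    # the suite passes, but the lines above the raise are counted as covered.
    assert process(-1) == -2
```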
I think my ideal situation would be for `coverage` to detect whether the test being run has failed and, if it has, not count it towards the coverage. I'm not sure whether this is possible in a framework-independent way.
If this isn't possible with `coverage`, I can take this issue to `pytest-cov`. I think it might be reasonable to build in the conflation of `xfail` with `no_cover` (assuming `no_cover` is applied only if the conditional parameter of `xfail` is met).
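As a rough sketch of that conflation (not a tested plugin), a `conftest.py` hook could copy an `xfail` marker into pytest-cov's `no_cover` marker. The handling of `xfail`'s condition argument is simplified here: string conditions are treated as truthy rather than evaluated the way pytest does.

```python
# conftest.py -- sketch only; assumes pytest-cov's no_cover marker is available.
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        xfail = item.get_closest_marker("xfail")
        if xfail is None:
            continue
        # xfail with no condition, or with a truthy condition, is expected to fail,
        # so disable coverage measurement for this test.
        conditions = xfail.args
        if not conditions or any(conditions):
            item.add_marker(pytest.mark.no_cover)
```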
This is a very interesting idea! In the coverage 5.0 alpha, we can track which tests covered which lines. If we get to the point of a pytest plugin to help with that, perhaps it could disable measurement around xfail tests.
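For reference, the coverage 5.x "dynamic contexts" feature mentioned above is enabled with a setting along these lines; a plugin built on top of it could, at least in principle, discard lines whose only recorded contexts are xfailed or failed tests (that filtering step is not shown and would have to be written separately).

```ini
# .coveragerc -- label each covered line with the test function that executed it
[run]
dynamic_context = test_function

[html]
show_contexts = True
```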