
Don't count lines as covered if they are only hit during a failing or xfail test #727

Open
pganssle opened this issue Nov 10, 2018 · 1 comment

@pganssle

I only consider a line to be "covered" if it is hit by a test that doesn't fail, because no guarantees are made about the behavior of lines hit during failing tests. Normally this isn't a problem: if any test fails, the whole suite fails, so a failing run already tells you the coverage metrics may be inaccurate.

However, this becomes a problem if you use the xfail marker, or its equivalent in other test runners. I still want to run these tests, since a pass is useful information (it means I can remove the xfail marker), but they may hit otherwise-uncovered lines before they fail. Because the failure is expected, it doesn't cause the test suite to fail, and I end up with a passing test suite and inaccurate coverage metrics.
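
For illustration, a minimal sketch of the scenario (the function and test names are made up): the lines executed before the expected failure are reported as covered even though no passing test exercises them.

```python
import pytest

def do_setup_work():
    return "partial result"           # executed, so reported as covered

def flaky_feature():
    value = do_setup_work()           # hit before the failure below
    raise NotImplementedError(value)  # the expected failure

@pytest.mark.xfail(reason="feature not implemented yet")
def test_flaky_feature():
    flaky_feature()                   # xfails; the suite still passes
```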

See this Stack Overflow question for more details and an MCVE.

I think my ideal situation would be for coverage to detect whether the test being run has failed and, if it has, not count the lines it hit towards coverage. I'm not sure whether this is possible to do in a framework-independent way.

If this isn't possible with coverage, I can take this issue to pytest-cov. I think it might be reasonable to build in the conflation of xfail with no_cover (assuming no_cover is applied only if the conditional parameter of xfail is met).
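
One possible shape for that conflation, sketched against pytest-cov's no_cover marker; this version marks every xfail-ed test unconditionally and glosses over evaluating xfail's condition and raises arguments:

```python
# conftest.py -- sketch only, assuming pytest-cov's no_cover marker is in use
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        # Naively treat any xfail-marked test as no_cover; a real version
        # would also evaluate xfail's condition/raises arguments.
        if item.get_closest_marker("xfail") is not None:
            item.add_marker(pytest.mark.no_cover)
```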

@nedbat (Owner) commented Nov 12, 2018

This is a very interesting idea! In the coverage 5.0 alpha, we can track which tests covered which lines. If we get to the point of a pytest plugin to help with that, perhaps it could disable measurement around xfail tests.
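
For reference, the dynamic-context tracking mentioned above is turned on with a coverage configuration entry along these lines (coverage 5.0+), which labels each covered line with the test function that executed it:

```ini
# .coveragerc
[run]
dynamic_context = test_function
```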

It looks like the Stack Overflow answer has a pytest-specific way to do this...

@nedbat added the enhancement label Nov 12, 2018