
Add support for xfail mark in subtests #193


Open · michelhe wants to merge 1 commit into main

Conversation

@michelhe commented Jun 9, 2025

Hello 😄
First of all, I wanted to say that we like the project very much and have adopted it into our testing infrastructure.
We have a use case where we add known issues as test cases marked as xfail before we write the fixes. This feature was missing from pytest-subtests, and I did not manage to find any open issue about it.

This commit adds an optional xfail argument to subtests.test. If provided, the subtest will be reported as SUBXFAIL/SUBXPASS depending on the outcome.

I added tests for this as well.

See the example below (previously, the xfailing subtests would have been reported as SUBFAIL):

import pytest
import pytest_subtests

def test_subtests_xfails(subtests: pytest_subtests.SubTests) -> None:
    with subtests.test("test 1 == 2", xfail=pytest.mark.xfail("1 is not 2")):
        assert 1 == 2
    with subtests.test("test 1 == 1", xfail=pytest.mark.xfail("1 is not 1")):
        assert 1 == 1

    with subtests.test("manual xfail"):
        pytest.xfail("test")

    with subtests.test("test pass"):
        pass


@pytest.mark.xfail(reason="this test should be xfailed")
def test_subtests_xfails_with_main_xfail(subtests: pytest_subtests.SubTests) -> None:
    with subtests.test("test 1 == 2", xfail=pytest.mark.xfail("1 is not 2")):
        assert 1 == 2
    with subtests.test("test 1 == 1", xfail=pytest.mark.xfail("1 is not 1")):
        assert 1 == 1

    with subtests.test("manual xfail"):
        pytest.xfail("test")

    with subtests.test("test pass"):
        pass

Now results in:

collected 2 items                                                                                                             

tests/test_foo.py::test_subtests_xfails [test 1 == 2] SUBXFAIL (1 is not 2)                                             [ 50%]
tests/test_foo.py::test_subtests_xfails [test 1 == 1] SUBXPASS (1 is not 1)                                             [ 50%]
tests/test_foo.py::test_subtests_xfails [manual xfail] SUBXFAIL (test)                                            [ 50%]
tests/test_foo.py::test_subtests_xfails [test pass] SUBPASS                                                             [ 50%]
tests/test_foo.py::test_subtests_xfails PASSED                                                                          [ 50%]
tests/test_foo.py::test_subtests_xfails_with_main_xfail [test 1 == 2] SUBXFAIL (1 is not 2)                             [100%]
tests/test_foo.py::test_subtests_xfails_with_main_xfail [test 1 == 1] SUBXPASS (1 is not 1)                             [100%]
tests/test_foo.py::test_subtests_xfails_with_main_xfail [manual xfail] SUBXFAIL (test)                            [100%]
tests/test_foo.py::test_subtests_xfails_with_main_xfail [test pass] SUBXPASS (this test should be xfailed)              [100%]
tests/test_foo.py::test_subtests_xfails_with_main_xfail XPASS (this test should be xfailed)                             [100%]

=========================================================== XPASSES ===========================================================
=================================================== short test summary info ===================================================
XPASS tests/test_foo.py::test_subtests_xfails_with_main_xfail - this test should be xfailed
=================== 1 passed, 1 xpassed, 1 subtests passed, 4 subtests xfailed, 3 subtests xpassed in 0.02s ===================

This commit adds the optional `xfail` argument to `subtests.test`.
If provided, the subtest will be marked as xfailed/xpassed depending on the outcome.
@nicoddemus (Member) commented Jun 9, 2025

Hi @michelhe,

First of all, thanks for the contribution.

One question: I'm away from my computer so I can't test this, but does this work (without changes to pytest-subtests)?

def test_subtests_xfails_with_main_xfail(subtests: pytest_subtests.SubTests) -> None:
    with subtests.test("test 1 == 2"):
        pytest.xfail("1 is not 2")
        assert 1 == 2

Another comment: I'm not a big fan of passing an xfail=pytest.mark.xfail argument, because it hard-codes xfail in the signature... which raises the question: what about skip? And other marks?

My first thought would be to allow passing any mark, but I don't think that makes much sense, given that marks are meant to be applied to test functions, which the subtest blocks are not. 🤔

@michelhe (Author) commented Jun 9, 2025

Hi @nicoddemus.

Regarding the snippet you sent: it works, of course, but not as intended, because pytest.xfail is an always-raising function, so the subtest exits there and never reaches the assert 1 == 2.
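
To illustrate, here is an annotated version of that snippet (the comments are just for this example):

import pytest

def test_subtests_xfails(subtests) -> None:
    with subtests.test("test 1 == 2"):
        pytest.xfail("1 is not 2")  # raises immediately (imperative xfail)
        assert 1 == 2  # never reached, so we would not notice if this started passing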

Regarding the plain xfail argument, I have to agree with you; it's not ideal. At first I tried to implement this with a more generic marks: list[PytestMark] | None API and to document that only xfail is supported, but I didn't like the end result (because skip doesn't make any sense in a subtest context), and reverted to the current revision.

@RonnyPfannschmidt (Member)

skipif might make sense, though.

@nicoddemus (Member)

I'm not sure about this approach. Limiting the functionality to only handle xfail marks seems overly restrictive both in terms of current capabilities and future enhancements:

  1. If we later develop support for other marks, this would necessarily break the xfail= API.

  2. Since we eventually want to merge this plugin into pytest-core, supporting only xfail marks in the API isn't ideal.

Let's take some more time to consider this before making a final decision.

@michelhe (Author)

I understand and agree.

We could change to a marks: list[PytestMark] = [...] API and, for now, raise an exception for any marker that is not xfail, to stay forward-compatible with other marks, but I don't like that either.
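
As a rough sketch, the validation part of that could look like this (a hypothetical helper, not part of this PR):

import pytest

def _validate_subtest_marks(marks):
    """Reject any mark other than xfail until broader support exists (hypothetical helper)."""
    for mark in marks:
        # pytest.mark.xfail(...) produces a MarkDecorator whose .name is "xfail"
        if mark.name != "xfail":
            raise ValueError(f"subtests currently only support the xfail mark, got {mark.name!r}")
    return list(marks)

_validate_subtest_marks([pytest.mark.xfail(reason="known bug")])    # accepted
# _validate_subtest_marks([pytest.mark.skip(reason="later")])       # would raise ValueError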

If we want to support applying markers to subtests, we probably need to make each subtest a real pytest Item rather than a fake report, which is a bigger change.

But then again, marks don't really make sense for subtests, since they carry meaning in the test collection phase, which is not relevant for subtests because they happen during the test call, post-collection.

I guess for now, instead of using this patch, I can wrap subtests in a with pytest.raises block as a form of "I expect this code to raise, but let it run through so I know if it gets fixed in the future". I also don't like this alternative, since pytest.raises usually signals that raising is the expected, happy-flow behavior.
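
For reference, that workaround would look roughly like this (buggy_add is just a hypothetical stand-in for the code under test):

import pytest

def buggy_add(a: int, b: int) -> int:
    return a - b  # known issue: should be a + b

def test_known_issues(subtests) -> None:
    # "I expect this block to raise for now"; once the bug is fixed, pytest.raises
    # itself fails with DID NOT RAISE, which tells us the issue is gone.
    with subtests.test("buggy_add known issue"), pytest.raises(AssertionError):
        assert buggy_add(1, 1) == 2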

@michelhe (Author)

BTW, regardless of the xfail behavior, I think we should consider keeping the improvement in pytest_report_teststatus, because without it we lose the subtest message in the test report. (See the diff in test_xfail_direct_call.)
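
For context, the general mechanism is roughly the following: a hypothetical sketch of a pytest_report_teststatus hookimpl, not the actual diff from this PR, and the context attribute check used to identify subtest reports is an assumption.

import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_report_teststatus(report, config):
    # Only handle call-phase reports that look like subtest reports (assumption:
    # subtest reports carry a `context` attribute, plain reports do not).
    if report.when != "call" or not hasattr(report, "context"):
        return None
    if hasattr(report, "wasxfail"):
        if report.skipped:
            return "xfailed", "x", f"SUBXFAIL ({report.wasxfail})"
        if report.passed:
            return "xpassed", "X", f"SUBXPASS ({report.wasxfail})"
    return None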

@nicoddemus (Member)

> I guess for now, instead of using this patch, I can wrap subtests in a with pytest.raises block as a form of "I expect this code to raise, but let it run through so I know if it gets fixed in the future". I also don't like this alternative, since pytest.raises usually signals that raising is the expected, happy-flow behavior.

You might use a custom context manager to at least capture the intent in the code, so it will be clear to readers this is not a "normal" pytest.raises.

> BTW, regardless of the xfail behavior, I think we should consider keeping the improvement in pytest_report_teststatus, because without it we lose the subtest message in the test report. (See the diff in test_xfail_direct_call.)

Sounds good, could you open a separate PR with that?

@RonnyPfannschmidt (Member)

As a conceptual follow-up, I believe I want to ensure we start creating subreports for the setup and teardown phases as well.

Like fixture setup/teardown.

I believe we need a new discussion on that topic.

@michelhe (Author)

> You might use a custom context manager to at least capture the intent in the code, so it will be clear to readers this is not a "normal" pytest.raises.

Nice idea; this is what I ended up with in the conftest.py of my test codebase:

from contextlib import contextmanager
from dataclasses import dataclass
from typing import Generator

import pytest
from pytest import CallInfo


@dataclass
class _SubtestXfail(Exception):
    reason: str


class _SubtestXpass(Exception):
    pass


@contextmanager
def subtest_xfail(reason: str):
    """
    Context manager that raises a _SubtestXfail exception if the wrapped block fails.
    If the block passes, a _SubtestXpass exception is raised instead.
    """
    try:
        yield
    except Exception:
        raise _SubtestXfail(reason=reason)
    else:
        raise _SubtestXpass()


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item: pytest.Item, call: CallInfo) -> Generator[None, None, None]:
    outcome = yield
    report = outcome.get_result()

    if report.when == "call" and call.excinfo is not None:
        exc = call.excinfo.value
        if isinstance(exc, (_SubtestXfail, _SubtestXpass)):
            # Rewrite the report so these marker exceptions show up as xfail/xpass.
            report.outcome = "skipped" if isinstance(exc, _SubtestXfail) else "passed"
            report.wasxfail = exc.reason if isinstance(exc, _SubtestXfail) else "subtest passed"
            call.excinfo = None  # drop the excinfo

And in the tests themselves it looks like:

def test_foo(subtests):
    with subtests.test("test", x=5), subtest_xfail("should fail"):
        raise Exception("test")

If you like this API I can work on mainlining it here ^^
