✨ Add support for multiple clients in unit tests #523

Conversation
Co-authored-by: Lewis Gaul <lewis.gaul@gmail.com>
Before I take a proper look, did you take a look at #522? I think this would be a nice precursor PR, with the focus on moving to using the
```python
@pytest.mark.parametrize("ctr_client", [docker_client], indirect=True)
def test_image_save_load(ctr_client: DockerClient, tmp_path: Path):
```
I don't like this (as I said here); to me it seems like a strange way to say "this test only supports docker". What's more relevant here is that this test doesn't yet work with podman, and it would be good if that were indicated more explicitly (e.g. by causing an XFAIL).
I believe it's possible to do this. We can provide an indicator on the DockerClient (similar to what you did in the previous PR) about what kind of engine the DockerClient is plugged into, then we can do:
```python
@pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True)
def test_image_save_load(ctr_client: DockerClient, tmp_path: Path):
    if ctr_client.client_type == "podman":
        pytest.xfail("load and save doesn't work yet with podman but it should")
```
Ok, so then every single testcase that should support both docker and podman will need the `@pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True)` line, and every single one of those that's not yet working with podman will need those extra two lines to do the xfail? It just feels to me like a lot of boilerplate/distraction from what the tests are actually doing, but I get that it is very explicit.
Aside from that, I'm still against using `@pytest.mark.parametrize("ctr_client", [docker_client], indirect=True)` ("this test is marked to be parameterised indirectly using the `ctr_client` fixture, and will take values `[docker_client]`") as a way to write "this test is only expected to work with docker".
I don't mind verbosity and duplication in the unit tests. If that spills into the code that we're shipping, then that's an issue, I do agree.
I do agree that it's a strange way to get the corresponding client. We could make a fixture that only gives you the docker client, but then the "docker" marker won't be applied to the function, and we'd rely on people thinking about it to have it applied manually. Or we could loop over all tests to check if they use the fixture and then apply the marker, which is some more custom code. I don't know if it's worth it, I'll take some time to think about it.
This is where I think it's far more natural for the default to be that tests are parameterised, and then to explicitly mark tests that don't support certain runtimes such that they get skipped/xfailed/marked/removed as required. It's more relevant/interesting when tests don't support a runtime than when they do, so better to add verbosity in the negative case (e.g. "doesn't support podman") rather than the positive case (parameterise with these runtimes).
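The "parameterise by default, mark the exceptions" idea could be sketched in `conftest.py` roughly like this (a hypothetical sketch, not code from this PR: `ALL_RUNTIMES`, the `no_podman` marker name, and the hook bodies are all assumptions):

```python
# conftest.py -- hypothetical sketch of default parameterisation.
# Every test requesting "ctr_client" is parameterised over both runtimes;
# tests carrying a (made-up) "no_podman" marker get their podman variant xfailed.
import pytest

ALL_RUNTIMES = ["docker", "podman"]

def pytest_generate_tests(metafunc):
    # Default parameterisation: no per-test boilerplate needed.
    if "ctr_client" in metafunc.fixturenames:
        metafunc.parametrize("ctr_client", ALL_RUNTIMES, indirect=True)

def pytest_collection_modifyitems(items):
    # Verbosity only in the negative case: mark what *doesn't* work.
    for item in items:
        if "no_podman" in item.keywords and "[podman]" in item.name:
            item.add_marker(pytest.mark.xfail(reason="doesn't support podman yet"))
```

A test would then just be written against `ctr_client`, and only the unsupported cases would carry `@pytest.mark.no_podman`.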
Indeed I did take a look at it, that's where I took most of this code from :)
```python
class TestSessionClient:
```
It feels like this class shouldn't be needed; I'm not sure there's any need for the `{docker,podman}_client_fixture` fixtures either, and the logic in this class can just go in the `ctr_client` fixture directly?
We need at least two fixtures: one at session scope (for the availability testing) and one at function scope (for skipping if necessary).
I just used a class because I needed to transfer 3 variables between fixtures (client, availability, message), and using a tuple was bloating the type hints.
I'm open to a refactoring, of course, if something can be simplified.
(although maybe my comment #523 (comment) adds a reason for `docker_client` and `podman_client` fixtures, depending on how we solve that)
> We need at least two fixtures, one at session scope (for the availability testing) and one with a function scope (for skipping if necessary).

I think you can use `pytest.skip()` in any fixture? https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest-skip
Sure, but it wouldn't make sense to call skip in a function called once per test session. The availability testing is a bit slow, so I don't want to do it for every function.
If we want to skip all tests using a fixture that makes sense to be session-scoped, then it very much does make sense to call `pytest.skip()` in it? Assuming that works (which I'm not certain of).
Overall this approach seems fine. I'm not particularly convinced it's better than what I worked on (and the case of tests only supported by one runtime needs thinking about), but at a certain point we should just settle on something - better to have something available that works!
I think we should try to have
I'm in favour of marking parameterisation at a per-file level; it feels like one step towards my preference for parameterising by default, and it removes some of the verbosity/explicitness. In fact this seems like a nice compromise, given it will often be whole subcommands that are only supported by one of docker/podman (buildx, swarm, compose, pod, ...). As I was stating before, one of my concerns is that someone might add a new test file and miss that they should be parameterising the new tests, however I think if they were to try using
Could you elaborate on this? I know a bunch of tests currently don't work with podman for reasons such as a difference in convention in image naming, as well as some subcommands not being supported by podman. At some point I could look into getting more tests working with podman, e.g. the image tests could probably be made more generic (mark as xfail for now).
Indeed, I don't see how it would work.
If one single test can't handle all clients, then it means that we can't use `pytestmark`, thus making it easier to spot missing features.
@LewisGaul I added the xfail as requested. I believe we can merge this PR unless you have other recommendations. When the whole
Thanks for working with me on this
My pleasure!
- To select all podman tests: `pytest -m podman` (same for docker).
- To change the podman executable: `pytest --podman-exe=podman4.2` (same for docker).
- If podman is not present on the system, all podman tests are SKIPPED. Same with docker.
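The CLI behaviour listed above would come from `conftest.py` hooks along these lines (a sketch; the actual option and marker registration in the PR may differ):

```python
# conftest.py -- hypothetical sketch of the options and markers listed above.
def pytest_addoption(parser):
    parser.addoption("--podman-exe", default="podman",
                     help="podman executable to run the tests against")
    parser.addoption("--docker-exe", default="docker",
                     help="docker executable to run the tests against")

def pytest_configure(config):
    # Registering the markers makes `pytest -m podman` / `-m docker` work
    # without "unknown marker" warnings.
    config.addinivalue_line("markers", "podman: test runs against podman")
    config.addinivalue_line("markers", "docker: test runs against docker")
```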
There are no global variables here except the params, but I think that's reasonable as they're only constant strings.
@LewisGaul I know we won't agree on the fact that there are no default parameters in the unit tests, but can you still take a look? Most of the code in this PR comes from your previous PRs on the subject.
The result looks like this: