
✨ Add support for multiple clients in unit tests #523

Merged
4 commits merged into master on Jan 9, 2024

Conversation

gabrieldemarmiesse (Owner) commented Jan 8, 2024

To select all podman tests: pytest -m podman; the same goes for docker.
To change the podman executable: pytest --podman-exe=podman4.2; same for docker.
If podman is not present on the system, all podman tests are SKIPPED; same with docker.

There are no global variables here except the params, but I think that's reasonable since they're only constant strings.
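To illustrate the shape of such a setup, here is a rough conftest.py sketch (not the actual code from this PR; the option handling, fixture names, and string params are assumptions):

```python
# Hypothetical conftest.py sketch, not the actual code of this PR.
# A session-scoped probe checks each runtime once; the function-scoped
# ctr_client fixture skips tests whose runtime is unavailable.
import shutil

import pytest

from python_on_whales import DockerClient


def pytest_addoption(parser):
    # Lets users point the tests at a specific executable,
    # e.g. `pytest --podman-exe=podman4.2`.
    parser.addoption("--docker-exe", default="docker")
    parser.addoption("--podman-exe", default="podman")


@pytest.fixture(scope="session")
def _session_clients(request):
    # Probe availability only once per session, since it is a bit slow.
    clients = {}
    for runtime in ("docker", "podman"):
        exe = request.config.getoption(f"--{runtime}-exe")
        if shutil.which(exe) is None:
            clients[runtime] = None
        else:
            clients[runtime] = DockerClient(client_call=[exe])
    return clients


@pytest.fixture
def ctr_client(request, _session_clients):
    # Used with indirect parametrization: request.param is "docker" or "podman".
    runtime = request.param
    client = _session_clients[runtime]
    if client is None:
        pytest.skip(f"{runtime} executable not found")
    return client
```

With the runtime params wrapped in pytest.param(..., marks=pytest.mark.docker / pytest.mark.podman), pytest -m podman would then select only the podman runs.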

@LewisGaul I know we won't agree about having no default parameters in the unit tests, but can you still take a look? Most of the code in this PR comes from your previous PRs on the subject.

The result looks like this:

$ python -m pytest -vv ./tests/python_on_whales/components/test_image.py
======================================================================= test session starts =======================================================================
platform linux -- Python 3.10.13, pytest-7.4.3, pluggy-1.0.0 -- /opt/conda/bin/python
cachedir: .pytest_cache
rootdir: /projects/open_source/python-on-whales
configfile: pyproject.toml
plugins: mock-3.12.0
collected 42 items

tests/python_on_whales/components/test_image.py::test_load_json[json_file0] PASSED                                                                          [  2%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file1] PASSED                                                                          [  4%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file2] PASSED                                                                          [  7%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file3] PASSED                                                                          [  9%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file4] PASSED                                                                          [ 11%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file5] PASSED                                                                          [ 14%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file6] PASSED                                                                          [ 16%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file7] PASSED                                                                          [ 19%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file8] PASSED                                                                          [ 21%]
tests/python_on_whales/components/test_image.py::test_load_json[json_file9] PASSED                                                                          [ 23%]
tests/python_on_whales/components/test_image.py::test_image_repr PASSED                                                                                     [ 26%]
tests/python_on_whales/components/test_image.py::test_image_remove[docker client] PASSED                                                                    [ 28%]
tests/python_on_whales/components/test_image.py::test_image_remove[podman client] SKIPPED (Unable to get version with command 'podman version'. Reason:
executable not found)                                                                                                                                       [ 30%]
tests/python_on_whales/components/test_image.py::test_image_save_load[docker client] PASSED                                                                 [ 33%]
tests/python_on_whales/components/test_image.py::test_save_iterator_bytes PASSED                                                                            [ 35%]
tests/python_on_whales/components/test_image.py::test_filter_when_listing PASSED                                                                            [ 38%]
tests/python_on_whales/components/test_image.py::test_filter_when_listing_old_signature PASSED                                                              [ 40%]
tests/python_on_whales/components/test_image.py::test_use_first_argument_to_filter PASSED                                                                   [ 42%]
tests/python_on_whales/components/test_image.py::test_save_iterator_bytes_and_load PASSED                                                                   [ 45%]
tests/python_on_whales/components/test_image.py::test_save_iterator_bytes_and_load_from_iterator PASSED                                                     [ 47%]
tests/python_on_whales/components/test_image.py::test_save_iterator_bytes_and_load_from_iterator_list_of_images PASSED                                      [ 50%]
tests/python_on_whales/components/test_image.py::test_image_list PASSED                                                                                     [ 52%]
tests/python_on_whales/components/test_image.py::test_image_bulk_reload PASSED                                                                              [ 54%]
tests/python_on_whales/components/test_image.py::test_image_list_tags PASSED                                                                                [ 57%]
tests/python_on_whales/components/test_image.py::test_pull_not_quiet PASSED                                                                                 [ 59%]
tests/python_on_whales/components/test_image.py::test_pull_not_quiet_multiple_images PASSED                                                                 [ 61%]
tests/python_on_whales/components/test_image.py::test_pull_not_quiet_multiple_images_break PASSED                                                           [ 64%]
tests/python_on_whales/components/test_image.py::test_copy_from_and_to PASSED                                                                               [ 66%]
tests/python_on_whales/components/test_image.py::test_copy_from_and_to_directory PASSED                                                                     [ 69%]
tests/python_on_whales/components/test_image.py::test_prune PASSED                                                                                          [ 71%]
tests/python_on_whales/components/test_image.py::test_remove_nothing PASSED                                                                                 [ 73%]
tests/python_on_whales/components/test_image.py::test_no_such_image_with_multiple_functions[inspect] PASSED                                                 [ 76%]
tests/python_on_whales/components/test_image.py::test_no_such_image_with_multiple_functions[remove] PASSED                                                  [ 78%]
tests/python_on_whales/components/test_image.py::test_no_such_image_with_multiple_functions[push] PASSED                                                    [ 80%]
tests/python_on_whales/components/test_image.py::test_no_such_image_save PASSED                                                                             [ 83%]
tests/python_on_whales/components/test_image.py::test_no_such_image_save_generator PASSED                                                                   [ 85%]
tests/python_on_whales/components/test_image.py::test_no_such_image_tag PASSED                                                                              [ 88%]
tests/python_on_whales/components/test_image.py::test_exists PASSED                                                                                         [ 90%]
tests/python_on_whales/components/test_image.py::test_copy_from_default_pull PASSED                                                                         [ 92%]
tests/python_on_whales/components/test_image.py::test_copy_from_pull PASSED                                                                                 [ 95%]
tests/python_on_whales/components/test_image.py::test_copy_to_default_pull PASSED                                                                           [ 97%]
tests/python_on_whales/components/test_image.py::test_copy_to_pull PASSED                                                                                   [100%]

================================================================= 41 passed, 1 skipped in 52.09s ==================================================================

Co-authored-by: Lewis Gaul <lewis.gaul@gmail.com>
LewisGaul (Collaborator)

Before I take a proper look, did you take a look at #522? I think this would be a nice precursor PR, with the focus on moving to using the ctr_client fixture without any distraction of parameterisation.

LewisGaul self-requested a review on January 8, 2024 16:47
Comment on lines 40 to 41
@pytest.mark.parametrize("ctr_client", [docker_client], indirect=True)
def test_image_save_load(ctr_client: DockerClient, tmp_path: Path):
LewisGaul (Collaborator)
I don't like this (as I said here); to me it seems like a strange way to say "this test only supports docker". What's more relevant here is that this test doesn't yet work with podman, and it would be good if that was indicated more explicitly (e.g. by causing an XFAIL).

gabrieldemarmiesse (Owner, Author) Jan 8, 2024

I believe it's possible to do this. We can provide an indicator on the DockerClient (similar to what you did in the previous PR) about what kind of engine the DockerClient is plugged into, and then we can do

@pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True)
def test_image_save_load(ctr_client: DockerClient, tmp_path: Path):
    if ctr_client.client_type == "podman":
        pytest.xfail("load and save doesn't work yet with podman but it should")

LewisGaul (Collaborator) Jan 8, 2024

Ok, so then every single testcase that should support both docker and podman will need the @pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True) line, and every single one of those that's not yet working with podman will need those extra two lines to do the xfail? It just feels to me like a lot of boilerplate/distraction from what the tests are actually doing, but I get that it is very explicit.

Aside from that, I'm still against using @pytest.mark.parametrize("ctr_client", [docker_client], indirect=True) ("this test is marked to be parameterised indirectly using the ctr_client fixture, and will take values [docker_client]") as a way to write "this test is only expected to work with docker".

gabrieldemarmiesse (Owner, Author)

I don't mind verbosity and duplication in the unit tests. If that spills into the code that we're shipping, then I agree it's an issue.

I do agree that it's a strange way to get the corresponding client. We could make a fixture that only gives you the docker client, but then the "docker" marker wouldn't be applied to the function, and we'd rely on people remembering to apply it manually. Or we could loop over all tests to check whether they use the fixture and then apply the marker, which is some more custom code. I don't know if it's worth it; I'll take some time to think about it.

LewisGaul (Collaborator)

This is where I think it's far more natural for the default to be that tests are parameterised, and then to explicitly mark tests that don't support certain runtimes so that they get skipped/xfailed/marked/removed as required. It's more relevant/interesting when tests don't support a runtime than when they do, so it's better to add verbosity in the negative case (e.g. "doesn't support podman") rather than the positive case (parameterise with these runtimes).
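As an illustration of marking only the negative case (a sketch, not code from this PR; the param values and reason string are assumed), only the podman parameter carries the xfail:

```python
# Sketch: the default is to parametrize with both runtimes; a test that doesn't
# yet work with podman marks just that parameter as xfail.
import pytest


@pytest.mark.parametrize(
    "ctr_client",
    [
        "docker",
        pytest.param(
            "podman",
            marks=pytest.mark.xfail(reason="save/load not supported with podman yet"),
        ),
    ],
    indirect=True,
)
def test_image_save_load(ctr_client, tmp_path):
    ...
```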

gabrieldemarmiesse (Owner, Author) commented Jan 8, 2024

Before I take a proper look, did you take a look at #522? I think this would be a nice precursor PR, with the focus on moving to using the ctr_client fixture without any distraction of parameterisation.

Indeed I did take a look at it, that's where I took most of this code from :)

)


class TestSessionClient:
LewisGaul (Collaborator)

It feels like this class shouldn't be needed, I'm not sure there's any need for the {docker,podman}_client_fixture fixtures either, and the logic in this class can just go in the ctr_client fixture directly?

gabrieldemarmiesse (Owner, Author)

We need at least two fixtures, one at session scope (for the availability testing) and one with a function scope (for skipping if necessary).
I just used a class because I needed to transfer 3 variables between fixtures (client, availability, message) and using a tuple was bloating the type hints.
I'm open to refactoring, of course, if something can be simplified.
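Roughly something along these lines (a sketch; the names are assumptions):

```python
# Sketch of the session-scoped holder described above (names are assumptions).
from dataclasses import dataclass
from typing import Optional

from python_on_whales import DockerClient


@dataclass
class SessionClient:
    client: Optional[DockerClient]
    available: bool
    skip_reason: str  # used in the skip message when the runtime is unavailable
```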

LewisGaul (Collaborator)

(although maybe my comment #523 (comment) adds a reason for docker_client and podman_client fixtures, depending on how we solve that)

LewisGaul (Collaborator)

We need at least two fixtures, one at session scope (for the availability testing) and one with a function scope (for skipping if necessary).

I think you can use pytest.skip() in any fixture? https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest-skip

gabrieldemarmiesse (Owner, Author) Jan 8, 2024

Sure, but it wouldn't make sense to call skip in a function that's called once per test session. The availability testing is a bit slow, so I don't want to do it for every test function.

LewisGaul (Collaborator)

If we want to skip all tests using a fixture that makes sense to be session-scoped, then it very much does make sense to call pytest.skip() in it? Assuming that works (which I'm not certain of).

LewisGaul (Collaborator) commented Jan 8, 2024

Overall this approach seems fine; I'm not particularly convinced it's better than what I worked on (and the case of tests only supported by one runtime needs thinking about), but at a certain point we should just settle on something - better to have something available that works!

gabrieldemarmiesse (Owner, Author) commented Jan 8, 2024

I think we should try to have pytestmark = pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True) in as many files as possible. Not having it means there are some weird tests present that don't work with all clients. I'm even open to splitting them into multiple files so that it's easy to track. What do you think?
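For concreteness, the module-level form would look roughly like this (a sketch; the param values are assumed):

```python
# Sketch: applying the parametrization to every test in a file via pytestmark.
import pytest

pytestmark = pytest.mark.parametrize(
    "ctr_client", ["docker", "podman"], indirect=True
)
```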

LewisGaul (Collaborator)

I think we should try to have pytestmark = pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True) in as many files as possible. Not having it means there are some weird tests present that don't work with all clients. I'm even open to splitting them into multiple files so that it's easy to track. What do you think?

I'm in favour of marking parameterisation at a per-file level; it feels like one step towards my preference for the default of parameterising and removing some of the verbosity/explicitness. In fact this seems like a nice compromise given it will often be whole subcommands that are only supported by one of docker/podman (buildx, swarm, compose, pod, ...).

As I was stating before, one of my concerns is that someone might add a new test file and miss that they should be parameterising the new tests; however, I think if they were to try using ctr_client it would complain about request.param not being available if there was no parameterisation?

Not having it means there are some weird tests present that don't work with all clients.

Could you elaborate on this?

I know a bunch of tests currently don't work with podman for reasons such as a difference in convention in image naming, as well as some subcommands not being supported by podman. At some point I could look into getting more tests working with podman, e.g. the image tests could probably be made more generic (mark as xfail for now).
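One sketch of how ctr_client could complain clearly in that situation (hypothetical check, not code from this PR):

```python
# Sketch: make the ctr_client fixture fail loudly when a test forgets to
# parametrize it (indirect parametrization is what sets request.param).
import pytest


@pytest.fixture
def ctr_client(request):
    if not hasattr(request, "param"):
        pytest.fail("ctr_client must be parametrized indirectly with 'docker'/'podman'")
    ...
```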

gabrieldemarmiesse (Owner, Author)

I think if they were to try using ctr_client it would complain about request.param not being available if there was no parameterisation?

Indeed I don't see how it would work.

Could you elaborate on this?

If one single test can't handle all clients, then it means that we can't use pytestmark, thus making it easier to spot missing features.

gabrieldemarmiesse (Owner, Author)

@LewisGaul I added the xfail as requested. I believe we can merge this PR unless you have other recommendations. Once the whole test_image.py has been parametrized, I expect it should be possible to use pytestmark. And later, it should be possible to add if self.client_config.client_type == "podman": ... in the right places in the codebase to fix the podman xfails. Better support for podman coming right up :)

LewisGaul (Collaborator) left a comment

Thanks for working with me on this

gabrieldemarmiesse (Owner, Author)

My pleasure!

gabrieldemarmiesse merged commit 3509d47 into master on Jan 9, 2024
35 checks passed
gabrieldemarmiesse deleted the 571dcf96-9d90-4a36-aa94-10f90ec1c004 branch on January 9, 2024 19:01