Added blackbox integration test. #40
Conversation
Hey, thanks! Couple things: Any reason you're not using the existing …? You probably didn't want to include all the …
Good point on the config. And I thought I did have everything I wanted in test_output, meaning all of the jsons to compare :)
Force-pushed from 174f9dd to 10218b2
Force-pushed from 10218b2 to 804c1f2
Ah, I see. You remove all … That's a very interesting approach I'd advise against, because it feels very fragile and many changes will require changing all of the files in … What I was hoping for was a test targeting specific pieces of the output, such as …
Using Troy's repo with Fedora 34 (a dead release which no longer gets updates) ensures we get consistent data over time. Does that make sense?
Also, Pytest is great for unit testing, not really for blackbox testing. I'd run it on the command line as described in the README and then have something look at the output. This way you won't have to basically rewrite the main function to make it work within Pytest.
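For example, the test could just shell out to the script instead of importing it (a rough sketch; the positional arguments here are an assumption on my part, the README has the real invocation):

```python
import subprocess
import sys

def run_pipeline(configs_dir, output_dir):
    # Invoke the script the same way a user would, rather than importing
    # its main() into pytest. The arguments below are assumed -- check the
    # README for the exact command line.
    subprocess.run(
        [sys.executable, "feedback_pipeline.py", configs_dir, output_dir],
        check=True,  # a non-zero exit code should fail the test
    )
```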
To be clear, I'm happy you're contributing, please don't feel discouraged, I'm here to support/help you. A few pointers... if you look at the github workflow yaml, you'll see it: …
So when writing tests, you already start with a directory (…). You could add this to your tests:
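Something like this, for instance (the directory name is a placeholder for whatever the workflow run actually produces):

```python
import os

# Placeholder: point this at the directory with the generated results.
OUTPUT_DIR = "output"

# All the JSON files produced by the run.
json_files = [f for f in os.listdir(OUTPUT_DIR) if f.endswith(".json")]
```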
And then load one workload:
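For example (the file name below is made up; pick one of the real workload result files from the output):

```python
import json
import os

# OUTPUT_DIR as defined in the previous snippet.
# Hypothetical file name -- use one of the actual workload JSON files.
workload_path = os.path.join(OUTPUT_DIR, "workload--bash--example.json")

with open(workload_path) as f:
    workload = json.load(f)
```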
... then, for example, get a list of added packages:
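Something along these lines (the key names are just a guess on my part; open one of the JSON files to see the real structure):

```python
# The keys below are assumptions -- adjust them to wherever the list of
# added packages actually lives in the workload JSON.
added_packages = sorted(workload["data"]["pkg_added_names"])
```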
... and compare that with a reference which you can just have in the script itself. It's not that long. This specific workload has: …
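The check itself could then look roughly like this (the package names below are placeholders only, not the actual content of the bash workload):

```python
# Placeholder reference list -- replace with the actual packages the
# bash workload is expected to add.
EXPECTED_ADDED_PACKAGES = sorted([
    "example-package-one",
    "example-package-two",
])

assert added_packages == EXPECTED_ADDED_PACKAGES
```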
This way you can target specific things, which: …
If you use the repo Troy added (Fedora 34), the results will always be the same. Let me know if you want me to walk you through the config structure; it's not complicated, but it can be complex when you look at it for the first time. And there are no stupid questions, so please don't hesitate to ask anything! Oh, and to run the same thing locally, you can just add …
The test previously compared the presence of files. Instead, it now checks that every file in test_data/*/output has a corresponding file actually produced by the script, with the caveat that lists are compared by content, not by order.
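A minimal sketch of that comparison, assuming plain JSON files and treating every list it encounters as order-insensitive:

```python
import json

def normalize(value):
    # Sort lists by their JSON representation so that two files differing
    # only in list order compare as equal; recurse into nested structures.
    if isinstance(value, list):
        return sorted((normalize(v) for v in value), key=json.dumps)
    if isinstance(value, dict):
        return {key: normalize(val) for key, val in value.items()}
    return value

def same_content(expected_path, actual_path):
    with open(expected_path) as e, open(actual_path) as a:
        return normalize(json.load(e)) == normalize(json.load(a))
```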
Force-pushed from 24bee96 to f5e2dd1
@asamalik I still really like the idea of thorough comparisons of JSON files. If the repo that Troy added is as stable as we want it to be, this shouldn't be a problem. Another thing: if we are running tests, they should be self-contained. I really don't want to rely on … But I agree that with a test like this, it shouldn't fail if we add unrelated new files.
In my opinion, tests should fail when we break things, not when we change things. For example, adding information to a file shouldn't fail a test. Please don't re-run Content Resolver for every single test. For example, it pulls hundreds of MB of repodata with every run. If we need to run it with different sets of configs to test different things, then sure. But not as a rule of thumb.
This test uses an old Fedora 34 repo supplied by @tdawson and a simple environment, a view based on eln, and a workload based on the bash package, as defined in test_input. It runs all of the parts of feedback pipeline as defined in its __main__ entrypoint. Then it compares the subset of results that are in JSON to ones already stored in test_output.
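In rough terms, the comparison step could look like this (check_outputs is an illustrative name, same_content is the order-insensitive helper sketched earlier, and the directory names follow the description above):

```python
import glob
import os

def check_outputs(generated_dir):
    # Assumes the pipeline has already been run with the test_input configs
    # and wrote its results into generated_dir.
    # Every reference JSON stored in test_output must have a matching,
    # content-equal counterpart among the generated files.
    for expected in glob.glob(os.path.join("test_output", "*.json")):
        actual = os.path.join(generated_dir, os.path.basename(expected))
        assert os.path.exists(actual), f"missing {os.path.basename(expected)}"
        assert same_content(expected, actual)
```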