More concise, readable tool execution testing. #18977
Conversation
lib/galaxy_test/api/conftest.py
Outdated
@pytest.fixture(autouse=True, scope="session")
def request_celery_app(celery_session_app, celery_config):
    try:
        global _celery_app
Out of curiosity, why do we need it to be a global?
Just recreating the semantics of the class version. More of a copy-paste thing than a choice, as I understand it.
I suspect we may not need it. This `celery_session_app` is a celery fixture (the celery app used for testing), but I'm not sure we use it. We are definitely not using `_celery_app`, and the `celery_session_app` fixture is requested regardless of that assignment statement. I know it's not part of this PR, but do you want to try dropping all that, leaving just the `yield` statement in the try clause, and see what happens with the tests? I bet they'll pass.
Done - it all seems fine still.
This is very neat syntax; much more concise and readable testing code! I'll be using it.
b2386a6 to 2509234
I will occasionally see tests that skip instead of failing because the tool required to run the test stopped loading - maybe a parsing error or a misconfiguration around sample data tables. I don't think this should be the default but we should make sure all our CI tests are running properly.
I keep adding these really verbose tool tests that are very tied to the current API and very low level. I need a version that can be remapped to the tool request API in bulk and I'd love to have a version that was a little more robust and readable. This PR introduces some infrastructure to do that and migrates a bunch of the older tests that test "tool execution behavior" to the new paradigm.
A simple example

Before, a class method:

After, a simple function heavily leveraging pytest fixtures:
We're describing the execution and the assertions against it in many fewer characters, and in properties that read a lot like English, so I think this should make these tests more readable. Parts of this may be "too clever" (I'm not a huge fan of plumbing the depths of pytest features), but injecting an object based on the decorated tool ID has already caught a bug where the two didn't match, and it leads to nice de-duplication of the tool ID string. We essentially do no testing of the `@skip_without_tool` decorator in the current version of all of this testing. And breaking the methods out of the test class does make reasoning about the fixtures a little cleaner, I think, and simplifies calling these tests from the CLI.

A few more complicated examples:
before
after
before
after