Set up to use pytest for our data-driven tests, and switch testcheck over to it #1944

Merged (15 commits) on Jul 27, 2016

Conversation

gnprice (Collaborator) commented on Jul 27, 2016

This is a step toward #1673 (switching entirely to pytest from myunit and runtests.py),
using some of the ideas developed in @kirbyfan64's PR #1723.

Both py.test with no arguments and py.test mypy/test/testcheck.py work just as you'd
hope, while ./runtests.py continues to run all the tests.
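
For anyone who hasn't written a pytest plugin before, here's a minimal sketch of the kind of
collection hooks this relies on -- illustrative only, with invented class names and a toy
parser rather than this PR's actual code, and written against the pytest 2.x API current as
of this writing:

```python
# conftest.py -- illustrative sketch only (not this PR's code): how pytest's
# collection hooks can turn data-driven .test files into test items.
import pytest


def pytest_collect_file(path, parent):
    # Called for every file pytest sees during collection; claim the .test files.
    if path.ext == '.test':
        return DataSuiteFile(path, parent)


class DataSuiteFile(pytest.File):
    def collect(self):
        # Assume each "[case NAME]" header starts one test case.
        for lineno, line in enumerate(self.fspath.read().splitlines()):
            if line.startswith('[case ') and line.endswith(']'):
                yield DataCase(line[len('[case '):-1], self, lineno)


class DataCase(pytest.Item):
    def __init__(self, name, parent, lineno):
        super().__init__(name, parent)
        self.lineno = lineno

    def runtest(self):
        # The real item would type-check the case and compare output with
        # assert_string_arrays_equal; this stub just keeps the sketch runnable.
        assert self.name

    def reportinfo(self):
        # This is what gives a failure its "file:line: case name" header.
        return self.fspath, self.lineno, 'data case: ' + self.name
```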

The output is very similar to the myunit output. It doesn't spew voluminous stack traces or
other verbose data, and it continues using assert_string_arrays_equal to produce
nicely-formatted comparisons when e.g. type-checker error messages differ. On error it
even includes the filename and line number for the test case itself, which I've long wanted,
and I think pytest's separator lines and coloration make the output slightly easier to read
when there are multiple failures.

The -i option from myunit is straightforwardly ported over as --update-data, giving it
a longer name because it feels like the kind of heavyweight and uncommon operation
that deserves such a name. It'd be equally straightforward to port over -u, but in the
presence of source control I think --update-data does the job on its own.
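
For illustration, registering an option like that is a single pytest hook in conftest.py
(sketch with assumed default and help text; only the option name comes from this PR):

```python
# conftest.py -- sketch only; the option name matches the PR, the rest is illustrative.
def pytest_addoption(parser):
    parser.addoption('--update-data', action='store_true', default=False,
                     help='Rewrite the expected output in the .test data files '
                          'to match actual output instead of failing')
```

A test item can then consult `config.getoption('--update-data')` to decide whether to
rewrite its data file rather than report a failure.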

One small annoyance is that if there's a failure early in a test run, pytest doesn't print the
detailed report on that failure until the whole run is over. This has often annoyed me in
using pytest on other projects; useful workarounds include passing -x to make it stop
at the first failure, or -k to filter the set of tests to be run. For interactive use I think it'd
nearly always be preferable to do what myunit does by immediately printing the detailed
information, so I may come back to this later to try to get pytest to do that.

We don't yet take advantage of xdist to parallelize within a py.test run (though xdist
works for me out of the box in initial cursory testing). For now we just stick with the
runtests.py parallelization, so we set up a separate py.test command for each
test module.

gnprice (Collaborator, Author) commented on Jul 27, 2016

@JukkaL, I'd appreciate your giving this a spin and/or reviewing the code, given the helpful thoughts you had on #1723.

@kirbyfan64, I'd also appreciate your review, particularly given the time you've spent already thinking about this problem! Hopefully this illustrates in code some of my suggestions in #1723 about how to use the pytest API. I think there's plenty more to do to realize the dream of pytest all the way (#1673), and I'd be glad to see followup PRs from you.

gnprice mentioned this pull request on Jul 27, 2016
JukkaL (Collaborator) commented on Jul 27, 2016

@gnprice I gave it a quick spin and it looks good! Thanks for working on this (also thank you @kirbyfan64 for getting this started!). I noticed one thing -- runtests.py doesn't report the number of test cases for pytest tests. It should be pretty easy to fix by updating parse_test_stats_from_output in mypy/waiter.py.
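
For illustration, a sketch of what that might look like (the regex and function shape are
guesses, not the actual mypy/waiter.py code): pytest ends each run with a summary line
like "== 1125 passed, 2 skipped in 3.21 seconds ==", which is easy to scrape.

```python
import re


def parse_pytest_passed(output: str) -> int:
    # Pull the passed-test count out of pytest's final summary line; hypothetical
    # helper, shown only to illustrate the kind of change suggested above.
    m = re.search(r'(\d+) passed', output)
    return int(m.group(1)) if m else 0
```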

It's possible to get test failures immediately (though without stack traces) by using the pytest -s option.

gnprice (Collaborator, Author) commented on Jul 27, 2016

Thanks for the review! I'll make that change and then land this -- can always make further changes later.

py.test -s is a good thought as a useful workaround for not getting failure details immediately -- I might use that sometime. Still, it isn't meant to solve that problem and it doesn't solve it well. It lets stdout and stderr pass through, and because assert_string_arrays_equal sends its output straight to stderr, you do get that data immediately. But you don't get any context, including which test is failing, and the formatting gets messed up by the output landing in the middle of the ./s/F/... progress line. So I'd still like to have a proper solution, and I may go back and attempt one later.
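
One possible shape for that proper solution, as a rough sketch only (my assumption, not
anything this PR implements): hook pytest's per-test reporting so each failure is echoed
as soon as its test finishes.

```python
# conftest.py -- rough sketch only; a real version would need to cooperate with
# pytest's terminal reporter so the ./s/F/... progress line isn't garbled.
def pytest_runtest_logreport(report):
    if report.failed and report.when == 'call':
        print()
        print('FAILED: %s' % report.nodeid)
        print(report.longrepr)
```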
