
Switch to pytest from our homegrown test runner #1673

Closed
@refi64

Ok, so when I saw the PR a while back that "parallelized" the test runner, I was super hyped. But the hype wore off when I realized that the unit tests were still run sequentially.

In particular, it constantly bugs me that, when running the full unit test suite, there's no progress indicator of any kind. To make things worse, runtests.py is largely undocumented, leaving me to wonder why something like ./runtests.py unit-test -a '*something*' works but ./runtests.py -a '*something*' doesn't.

In addition, -a is of limited use: the naming convention for tests is so inconsistent that it's rarely helpful.

Also, test fixtures are a cruel mess. It took me 30 minutes of debugging to figure out why the tests were throwing obscure KeyErrors that I couldn't reproduce in my own code. Turns out, the fixture I was using didn't define list. But defining list caused other tests to fail badly, so I just created a new fixture. IMO this whole thing should either be reworked or (better yet) completely gutted and thrown into a fire pit. A very hot fire pit. With sulfur.
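
Just to illustrate the contrast (this is a minimal sketch, not mypy's actual fixtures; sample_env and test_uses_env are made-up names): in pytest, each test declares the fixtures it depends on, and a missing fixture fails with an explicit error at setup time rather than a KeyError deep inside the test.

```python
import pytest

@pytest.fixture
def sample_env():
    # Hypothetical fixture: builds the environment a test needs,
    # including the 'list' entry whose absence caused the KeyErrors.
    return {'list': []}

def test_uses_env(sample_env):
    # pytest injects sample_env by name. If the fixture didn't exist,
    # the run would stop with "fixture 'sample_env' not found" instead
    # of an obscure KeyError mid-test.
    assert sample_env['list'] == []
```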

So I see two options:

  • Move to a different test runner. Some have suggested pytest, which I personally really like! Pytest has the xdist plugin, which supports running multiple tests in parallel (see the sketch after this list).
  • Revamp the current test runner. Ideally, this would include fixing fixtures (oh, the irony...), improving runtests.py, and making the actual unit tests run in parallel, preferably also with a progress bar of some sort.
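
For a concrete feel of the first option, here's a minimal, hypothetical pytest test module (not actual mypy tests) showing how selection and parallel runs would work:

```python
# test_example.py -- hypothetical module, just to show the workflow.

def test_parse_list():
    assert [1, 2] + [3] == [1, 2, 3]

def test_parse_empty():
    # Select by substring:      pytest -k parse
    # Run in parallel (xdist):  pytest -n auto
    # Either way, pytest reports per-test progress as it runs.
    assert [] == list()
```

pytest's -k matches substrings/expressions against test names, which would sidestep the -a naming problem as long as test names stay reasonably descriptive.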

Thoughts? I could try to help out here a bit if you guys come to a consensus of some sort.
