
Make command line tests faster #5966

Open
JukkaL opened this issue Nov 28, 2018 · 10 comments

Comments

JukkaL (Collaborator) commented Nov 28, 2018

Command line tests in test-data/unit/cmdline.test are very slow, since they use full typeshed stubs and run mypy in a subprocess. It looks like many of them would be easy to migrate to normal type checker tests, or some of them could even be regular Python unit tests. Alternatively, we could perhaps use test stubs with them.

Speeding up report generation tests may also be easy.
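As a toy illustration of the overhead described above (this measurement approach is my own sketch, not from the issue), compare the fixed cost of spawning a fresh Python subprocess with doing the same no-op in-process:

```python
import subprocess
import sys
import time

# Fixed cost of spawning a fresh Python interpreter: overhead that
# every subprocess-based cmdline.test case pays before mypy does any
# real type checking.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_cost = time.perf_counter() - start

# The same no-op done in-process is orders of magnitude cheaper.
start = time.perf_counter()
exec("pass")
in_process_cost = time.perf_counter() - start

print(f"subprocess: {spawn_cost:.3f}s, in-process: {in_process_cost:.6f}s")
```

Multiply that spawn cost (plus loading full typeshed stubs each time) by 130 test cases and the slowdown adds up quickly.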

mattany commented Oct 13, 2023

Can I take this?

hauntsaninja (Collaborator)

Yes, just open a PR!


manmartgarc

Is this still the case? Running `pytest .\mypy\test\testcmdline.py` takes 51s for me; what would be considered fast? Also, please let me know if this issue refers to something other than what runs with `pytest .\mypy\test\testcmdline.py`.

hauntsaninja (Collaborator)

Yes, that's right / they're still slow. Due to pytest-xdist, how long they take is a function of the number of cores on your machine. In CI we have 2 cores, so `pytest mypy/test/testcmdline.py -n 2` would give you a better sense of how much time these tests contribute when you open a PR.

hauntsaninja (Collaborator) commented Apr 28, 2024

Oh hmm, actually I think GitHub Actions updated its runners to have 4 cores. That should be an easy win: #17185

manmartgarc commented Apr 28, 2024

> Oh hmm, actually I think GitHub Actions updated its runners to have 4 cores. That should be an easy win.

Thanks, but what should be an easy win? Are they not being run with four workers on the GitHub instances? It's running with 8 workers on my machine even though it has 16 logical cores; do you mean tweaking that setting?

JelleZijlstra (Member)
Yes, we currently run with `-n 2`, so with a maximum concurrency of 2. I think we set it explicitly because the value reported by `os.cpu_count()` on GH Actions doesn't match the actual number of cores the action has available.
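A quick way to see this kind of discrepancy (a sketch of my own; `os.sched_getaffinity` is Linux-only, hence the guard):

```python
import os

# Logical CPUs the OS reports; on CI runners this can overstate what
# the job is actually allowed to use.
reported = os.cpu_count()

# On Linux, the affinity mask reflects the CPUs this process can be
# scheduled on, which is often the more honest number in CI.
if hasattr(os, "sched_getaffinity"):
    usable = len(os.sched_getaffinity(0))
else:
    usable = reported  # macOS/Windows fallback

print(f"reported={reported} usable={usable}")
```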

manmartgarc commented Apr 28, 2024

Gotcha. Edit: I see @hauntsaninja submitted a PR to set `-n 4`.

But beyond that, how can we get away from running the CLI as a subprocess and still test some of its stdout? The original comment mentions converting some of the tests into regular check tests or even unit tests. Could somebody point me to examples of both?
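One generic pattern for checking CLI-style stdout without a subprocess is to call the tool's entry point in-process and capture its output. Here `run_tool` is a hypothetical stand-in, not mypy's actual API (mypy can also be invoked programmatically, but the details below are illustrative):

```python
import contextlib
import io

def run_tool(argv):
    # Hypothetical stand-in for a CLI's main() entry point.
    if argv:
        print(f"error: unrecognized argument {argv[0]}")
        return 2
    print("Success: no issues found")
    return 0

def capture(argv):
    # Run the entry point in-process and capture its stdout, avoiding
    # the per-test cost of spawning a fresh interpreter.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        code = run_tool(argv)
    return code, buf.getvalue()

code, out = capture([])
print(code, out.strip())
```

A test can then assert on `code` and `out` directly, with no process spawn or full typeshed load per case.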

hauntsaninja (Collaborator)
Yeah, Jukka's main idea is to just make tests that don't actually need to be run as a subprocess into other tests. For example, I think a lot of the "config file" tests in cmdline.test could be made into normal tests (e.g. move them into check-flags.test)
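As a rough sketch of what such a migration might look like (the case name is invented and the exact error text may differ; the `# flags:` comment and escaped `\[mypy]` section header follow mypy's test-data conventions):

```
[case testDisallowUntypedDefsViaConfigSketch]
# flags: --config-file tmp/mypy.ini
def f(x): pass  # E: Function is missing a type annotation
[file mypy.ini]
\[mypy]
disallow_untyped_defs = True
```

A case like this runs through the fast in-process check-test harness instead of a mypy subprocess.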

sloboegen added a commit to sloboegen/mypy that referenced this issue Jul 22, 2024
hauntsaninja pushed a commit that referenced this issue Oct 17, 2024
Relates #5966.

Below is what happened to each concrete test from `cmdline.test`:

1. `testErrorContextConfig` => check-flags.test
(`testShowErrorContextFunction`) [duplicate]
2. `testNoConfigFile` => check-flags.test (`testNoConfigFile`) [move]
3. `testPerFileConfigSection` => check-flags.test
(`testPerFileUntypedDefs`) [move]
4. `testIgnoreErrorsConfig` => check-flags.test
(`testPerFileIgnoreErrors`) [move]
5. `testConfigFollowImportsNormal` => check-flags.test
(`testFollowImportsNormal`) [move + modified]
6. `testConfigFollowImportsSilent` => check-flags
(`testFollowImportsSilent`) [move + modified]
7. `testConfigFollowImportsSkip` => check-flags
(`testFollowImportsSkip`) [move + modified]
8. `testConfigFollowImportsError` => check-flags.test
(`testFollowImportsError`) [move + modified]
9. `testConfigFollowImportsSelective` => check-flags.test
(`testFollowImportsSelective`) [move]
10. `testConfigSilentMissingImportsOff` => check-flags.test
(`testSilentMissingImportsOff`) [move]
11. `testConfigSilentMissingImportsOn` => check-flags.test
(`testSilentMissingImportsOn`) [move]
12. `testDisallowAnyGenericsBuiltinCollectionsPre39` => check-flags.test
(`testDisallowAnyGenericsBuiltinTuplePre39`,
`testDisallowAnyGenericsBuiltinListPre39`,
`testDisallowAnyGenericsBuiltinSetPre39`,
`testDisallowAnyGenericsBuiltinDictPre39`) [split]
13. `testDisallowAnyGenericsTypingCollections` => check-flags.test
(`testDisallowAnyGenericsTupleWithNoTypeParamsGeneric`,
`testDisallowAnyGenericsPlainList`, `testDisallowAnyGenericsPlainDict`,
`testDisallowAnyGenericsPlainSet`) [split]
14. `testDisallowUntypedDefsAndGeneric` => check-flags.test
(`testDisallowUntypedDefsAndGeneric`) [move]
15. `testParseError` => parse-errors.test (`testMissingBracket`) [move]
16. `testParseErrorAnnots` => check-fastparse.test
(`testFasterParseTooManyArgumentsAnnotation`) [duplicate]
17. `testNotesOnlyResultInExitSuccess` => check-flags.test
(`testNotesOnlyResultInExitSuccess`) [move]

Let's compare test execution time. I've run `pytest -n 4
mypy/test/testcmdline.py` 3 times on my machine and calculated the
average time.
- Before: 130 tests, 1m 02s
- After: 115 tests, 0m 55s

Also, if it were possible to use the `FrozenSet` fixture in `check-flags.test`, we would be able to fully split items 12 and 13 from the list above.

And `testMissingBracket` is skipped by pytest in the `parse-errors.test` file, but that file is probably still the best place for it (not sure about this).