
Conversation

@vstinner (Member) commented Aug 23, 2023

The test_concurrent_futures and test_multiprocessing_spawn tests now require the 'cpu' resource. Skip these tests unless the 'cpu' resource is enabled (it is disabled by default).

test_concurrent_futures is no longer skipped if Python is built with the ASAN or MSAN sanitizer.
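The mechanism being discussed is regrtest's resource gating: a test calls a `requires()` helper, which skips it unless the named resource was enabled on the command line. A minimal self-contained sketch of that idea (hypothetical `enabled_resources` list; the real machinery lives in `Lib/test/support`):

```python
import unittest

# Hypothetical stand-in for regrtest's resource bookkeeping; in CPython
# this is populated from the -u command-line option.
enabled_resources = []

class ResourceDenied(unittest.SkipTest):
    """Raised when a required test resource is not enabled."""

def requires(resource):
    # Skip the calling test unless the resource was enabled with -u.
    if resource not in enabled_resources:
        raise ResourceDenied(f"resource {resource!r} is not enabled")

class TestHeavyWorkload(unittest.TestCase):
    def test_spawn_many_processes(self):
        requires('cpu')  # skipped under a plain run (no -u all / -u cpu)
        self.assertTrue(True)
```

With `enabled_resources` empty the test is reported as skipped, not failed; appending `'cpu'` makes it run normally.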

@vstinner (Member, Author)

I already made the change for test_peg_generator and test_freeze in PR #108386, since it's less important to test the PEG generator and the freeze tool at each commit. Buildbots run these tests as post-commit (with the cpu resource enabled).

make buildbottest, make testall, make testuniversal, and make hostrunnertest run the Python test suite with -u all.

On the other hand, make test and ./python -m test do not pass the -u all option, so the cpu resource is disabled.

Buildbots run the Python test suite with make buildbottest, which uses the -u all option.

See also the buildbot configuration: https://github.com/python/buildmaster-config/blob/main/master/custom/factories.py

The following buildbot factories disable the cpu resource:

  • UnixRefleakBuild: -R 3:3 -u-cpu
  • WindowsRefleakBuild: -u-cpu
  • SlowWindowsBuild: -u-cpu -u-largefile

These factories enable it explicitly, even if it's already enabled by make buildbottest:

  • UnixInstalledBuild: -uall
  • Windows64BigmemBuild: -uall
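The -u values above accumulate: 'all' enables every resource, and a leading '-' (as in -u-cpu) removes one from whatever is already enabled. A small sketch of that accumulation logic (illustrative names and resource list, not the actual regrtest parsing code):

```python
# Illustrative resource list; the real one lives in regrtest's cmdline handling.
ALL_RESOURCES = ('audio', 'curses', 'largefile', 'network',
                 'decimal', 'cpu', 'subprocess', 'urlfetch', 'gui', 'walltime')

def parse_resource_flags(values):
    """Fold a sequence of -u values ('all', 'cpu', '-cpu', ...) into
    the final list of enabled resources, in the order given."""
    use_resources = []
    for value in values:
        for r in value.lower().split(','):
            if r == 'all':
                use_resources = list(ALL_RESOURCES)
            elif r.startswith('-'):
                try:
                    use_resources.remove(r[1:])
                except ValueError:
                    pass  # removing a resource that was never enabled is a no-op
            elif r not in use_resources:
                use_resources.append(r)
    return use_resources
```

Under this model, -uall followed by -u-cpu (the UnixRefleakBuild case) yields every resource except cpu.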

There are other ways to run the Python test suite:

  • (Windows) PCbuild/rt.bat doesn't use -uall
  • (Windows) Tools/buildbot/test.bat uses -uall

The Fedora package build for Python runs the python -m test command without passing -uall: https://src.fedoraproject.org/rpms/python3.12/blob/rawhide/f/python3.12.spec#_1138

@vstinner (Member, Author)

Last year, issue #90682 discussed solutions to make the CI faster:

  • Shard tests, i.e. run tests in two different jobs concurrently
  • Start the slower tests first

The issue title was "test_peg_generator takes 8 minutes on Windows", and it was closed when Gregory disabled compiler optimizations on Windows in test_peg_generator: the test now takes 3-5 min instead of 5-16 min.
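The "start the slower tests first" idea amounts to longest-processing-time-first scheduling: given per-test durations from a previous run, hand tests out slowest-first to the least-loaded parallel worker. A toy sketch, with durations loosely based on the Ubuntu numbers reported below (this is not regrtest's actual scheduler):

```python
import heapq

def schedule_slowest_first(durations, num_workers):
    """Greedy LPT scheduling: assign each test (slowest first) to the
    currently least-loaded worker; returns (load, worker_id, tests) tuples."""
    workers = [(0.0, i, []) for i in range(num_workers)]
    heapq.heapify(workers)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, i, tests = heapq.heappop(workers)  # least-loaded worker
        tests.append(name)
        heapq.heappush(workers, (load + secs, i, tests))
    return workers

# Hypothetical durations in seconds, roughly matching the Ubuntu list below.
durations = {'test_gdb': 198, 'test_multiprocessing_forkserver': 94,
             'test_multiprocessing_fork': 75, 'test_venv': 61,
             'test_compileall': 60, 'test_subprocess': 48}
```

Running the slowest tests first keeps the longest job (here test_gdb at 3 min 18 sec) from landing at the end of a worker's queue, which is what stretches total wall-clock time when test order is random.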

@vstinner (Member, Author)

Total test duration, before (PR #108386) => after (this PR):

  • Windows x86: 19 min 26 sec => 20 min 12 sec (+40 sec)
  • Windows x64: 30 min 34 sec => 17 min 40 sec (ignore this one: test_concurrent_futures was re-run in the "before" measurement)
  • macOS: 17 min 34 sec => 12 min 26 sec (-5 min)
  • Ubuntu: 12 min 18 sec => 10 min 24 sec (-2 min)
  • Address Sanitizer: 15 min 24 sec => 14 min 6 sec (-1 min)

The exact timing depends on whether the slowest tests happen to run first, since test order is randomized.

Timing details for this PR:

Windows x86:

10 slowest tests:
- test_compileall: 2 min 54 sec
- test_importlib: 2 min 38 sec
- test_tarfile: 2 min 14 sec
- test_regrtest: 1 min 49 sec
- test_venv: 1 min 46 sec
- test_threading_local: 1 min 44 sec
- test_fstring: 1 min 43 sec
- test_zipfile: 1 min 40 sec
- test_socket: 1 min 40 sec
- test_threading: 1 min 35 sec

Windows x64:

10 slowest tests:
- test_importlib: 3 min 4 sec
- test_venv: 2 min 2 sec
- test_regrtest: 1 min 56 sec
- test_compileall: 1 min 49 sec
- test_fstring: 1 min 46 sec
- test_tarfile: 1 min 45 sec
- test_socket: 1 min 33 sec
- test_queue: 1 min 31 sec
- test_mmap: 1 min 28 sec
- test_io: 1 min 22 sec

macOS:

10 slowest tests:
- test_signal: 2 min 18 sec
- test_multiprocessing_forkserver: 2 min 11 sec
- test_tarfile: 2 min 1 sec
- test_largefile: 1 min 41 sec
- test_ssl: 1 min 37 sec
- test_pickle: 1 min 35 sec
- test_cppext: 1 min 22 sec
- test_compileall: 1 min 19 sec
- test_venv: 1 min 14 sec
- test_regrtest: 1 min 6 sec

Ubuntu:

10 slowest tests:
- test_gdb: 3 min 18 sec
- test_multiprocessing_forkserver: 1 min 34 sec
- test_multiprocessing_fork: 1 min 15 sec
- test_venv: 1 min 1 sec
- test_compileall: 1 min
- test_cppext: 59.9 sec
- test_tarfile: 59.2 sec
- test_signal: 52.7 sec
- test_zipfile: 50.6 sec
- test_subprocess: 47.7 sec

Address Sanitizer:

10 slowest tests:
- test_subprocess: 2 min 8 sec
- test_venv: 2 min
- test_compileall: 1 min 49 sec
- test_gdb: 1 min 44 sec
- test_tarfile: 1 min 40 sec
- test_weakref: 1 min 18 sec
- test_pickle: 1 min 17 sec
- test_regrtest: 1 min 12 sec
- test_zipfile: 1 min 3 sec
- test_multiprocessing_main_handling: 1 min

@vstinner (Member, Author)

Shard tests i.e. run tests on two different jobs concurrently

test_asyncio was actually split into sub-modules like test_asyncio.test_subprocess, so in practice test_asyncio already runs in multiple processes in parallel rather than in a single process.
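Splitting a test package this way means the runner schedules each submodule as its own job. A minimal sketch of the discovery half, using stdlib pkgutil (hypothetical helper; regrtest has its own splitting logic):

```python
import pkgutil

def split_test_package(package_name):
    """Return the dotted names of a package's direct submodules, so a
    runner can schedule each one as a separate parallel job."""
    pkg = __import__(package_name, fromlist=['__path__'])
    return [f"{package_name}.{info.name}"
            for info in pkgutil.iter_modules(pkg.__path__)]
```

For a package like test_asyncio, this would yield names such as test_asyncio.test_subprocess, each runnable as an independent unit.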

@vstinner (Member, Author)

I merged PR #108396 to split multiprocessing tests into sub-tests, so they can be run in parallel.

I also wrote PR #108401 to split test_concurrent_futures into sub-tests (to run them in parallel).

@serhiy-storchaka (Member)

See also #108421, which marks only the slowest tests as requiring the cpu resource, so the faster tests still run by default.

The running time of test_concurrent_futures was reduced from 2 min 26 sec to 1 min 20 sec.

@gpshead (Member) commented Aug 24, 2023

With them split up, I don't believe we should use "requires cpu" for concurrent_futures or multiprocessing.

I rely on CI for testing changes on most platforms. If GitHub CI doesn't use -uall or -ucpu, I won't have a way to meaningfully test changes.

@vstinner (Member, Author)

@gpshead:

With them split up, I don't believe we should use "requires cpu" for concurrent_futures or multiprocessing.

OK, I will take that approach in this case.

@serhiy-storchaka's PR #108421 is narrower and so skips fewer tests.

@vstinner vstinner closed this Aug 24, 2023
@vstinner vstinner deleted the requires_cpu2 branch August 24, 2023 16:54

Labels: awaiting core review, tests (Tests in the Lib/test dir)

4 participants