Improve twister performance when parallel execution is available #52701
Comments
@PerMac isn't this something twister V2 already supports using https://pypi.org/project/pytest-parallel/?
I think Yuval's idea here doesn't concern the handlers for actually running stuff in parallel, but rather the ability to take a giant testcase binary and shard it post-build but pre-test-runtime. So the executable would be duplicated N times, and each copy would be modified to only run roughly 1/N of the tests, where N can be derived from the number of cores, the number of matching HW boards plugged in, etc. At that point those N copies could be run in parallel through whatever handler/mechanism is in place, just as if they were independent testcases.
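As a minimal sketch of the 1/N split (the round-robin policy and the names here are illustrative only, not part of the proposal):

```python
# Round-robin assignment of tests to N shards; each shard binary would
# then be patched to run only its own group.
def assign_shards(tests, num_shards):
    shards = [[] for _ in range(num_shards)]
    for index, test in enumerate(tests):
        shards[index % num_shards].append(test)
    return shards

# Example: 7 tests across 3 shards -> groups of sizes 3, 2, and 2.
print(assign_shards([f"test_{i}" for i in range(7)], 3))
```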
I am not sure if I get the full idea. Would your idea require:
Another question: would there be a place in the ztest framework to allow communication and calling single tests? E.g. something like the sketch below,
where `dut.write` and `dut.read` would be handled in twister for serial communication with the dut, and the flashed ztest application would decide what to call and how, when a command arrives on the serial input?
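A hypothetical sketch of that twister-side protocol (the `dut` object, the command syntax, and the response format are all invented here for illustration; this is not an existing twister or ztest API):

```python
# Hypothetical driver: twister asks the flashed ztest application to run
# one named test over serial and parses a single-line result.
class SerialTestRunner:
    def __init__(self, dut):
        self.dut = dut  # serial handle assumed to be managed by twister

    def run_test(self, suite, test):
        # The command string is an assumption; ztest defines no such protocol today.
        self.dut.write(f"ztest run {suite}::{test}\n".encode())
        reply = self.dut.read().decode().strip()
        return reply.endswith("PASS")
```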
@PerMac not quite, Twister doesn't need to communicate with the dut. Prior to flashing, twister will identify whether parallelism is possible, which means:
If any of the above is a
Some consideration would need to be taken for QEMUs and DUTs, as the cost of flashing becomes greater (it might not make sense to split a binary with only 2 suites on a DUT test), but I believe we can tweak these heuristics as we get closer to feature complete.

NOTE: We have some very large integration tests, and currently developers have to choose between the convenience of adding their test to the same binary (which bloats it even more) or going through the boilerplate of creating another binary for their test (with no real configuration changes). This leads to a very large discrepancy in run times: locally I'm seeing some tests run in milliseconds while our two largest tests run in 65 seconds. The issue is even worse in our CI, where disk I/O is slower and the large
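As a rough illustration of that kind of heuristic (all thresholds and names below are assumptions, not something twister implements):

```python
# Hypothetical shard-count heuristic: only split when there are enough
# suites to amortize the extra flash/boot cost per shard.
def pick_shard_count(num_suites, num_workers, min_suites_per_shard=2):
    if num_suites < 2 * min_suites_per_shard:
        return 1  # e.g. a 2-suite DUT test is not worth duplicating
    max_useful = num_suites // min_suites_per_shard
    return max(1, min(num_workers, max_useful))

# Example: 12 suites on an 8-worker host -> 6 shards of ~2 suites each.
print(pick_shard_count(num_suites=12, num_workers=8))
```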
Hi @tristan-google, this issue, marked as an Enhancement, was opened a while ago and did not get any traction. Please confirm the issue is correctly assigned and re-assign it otherwise. Please take a moment to review whether the issue is still relevant to the project. If it is, please provide feedback and direction on how to move forward. If it is not, has already been addressed, is a duplicate, or is no longer relevant, please close it with a short comment explaining the reason. @yperess you are also encouraged to help move this issue forward by providing additional information and confirming this request/issue is still relevant to you. Thanks!
Is your enhancement proposal related to a problem? Please describe.
In our test writing we have an issue where creating a new variant of a test (a new binary) carries a lot of boilerplate overhead and build time, but the cost of piling yet another test into an existing binary is also getting too high. The test binaries end up executing hundreds of tests and taking a long time.
Describe the solution you'd like
I'd like twister to be able to take the final built .elf file and shard it: effectively, taking the elf file, modifying the ztest suite and test iterable sections, running the different shards in parallel, then combining the results. When running twister, the following steps should take place:
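A rough sketch of the first step, locating the ztest iterable section in the built ELF so a sharder could rewrite it (pyelftools and the section-name prefix are assumed here; real Zephyr linker scripts may name the section differently):

```python
from elftools.elf.elffile import ELFFile

def find_ztest_section(elf_path, prefix="._ztest_suite_node"):
    # Scan the ELF for the section holding the ztest suite iterable;
    # the prefix is a guess and would need to match the linker script.
    with open(elf_path, "rb") as f:
        elf = ELFFile(f)
        for section in elf.iter_sections():
            if section.name.startswith(prefix):
                return section.name, section["sh_offset"], section["sh_size"]
    return None
```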
Describe alternatives you've considered
I've considered having an easier way of specifying a similar binary in the testcase.yaml file, but the only way I seem to be able to do that is by introducing a Kconfig to select which test suites to include in the binary. This ends up being a little confusing and forces the test writers to manage the test binaries by hand.