Is there an existing issue for this?
- [x] I have searched the existing issues
Enhancement description
Right now the following benchmarks are executed on every PR:
- Multiplatform benchmarks to validate the performance changes on all platforms
  - Throughput
  - Average time
- Comparison benchmarks to compare performance with other libraries
  - Throughput
  - Average time
All of these benchmarks take a long time to execute, delaying PR feedback and merges. To tighten the development loop, we should reduce the set of benchmarks executed on each PR to the minimum required to verify the performance changes.
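A minimal sketch of one way to do this, assuming the project uses the kotlinx-benchmark Gradle plugin: keep the full suite in the default configuration and register a trimmed configuration for PR runs. The `pr` configuration name and the include pattern are hypothetical, not taken from this repository.

```kotlin
// build.gradle.kts — sketch only; names and numbers are illustrative.
benchmark {
    configurations {
        // Full suite: run on a schedule or before releases.
        named("main") {
            warmups = 5
            iterations = 10
        }
        // Trimmed suite for PRs: a hot-path subset with fewer iterations.
        register("pr") {
            include(".*CoreBenchmark.*") // hypothetical subset pattern
            warmups = 2
            iterations = 5
        }
    }
}
```

The PR workflow could then invoke `./gradlew prBenchmark` (kotlinx-benchmark generates a `<name>Benchmark` task per configuration), while the full suite keeps running on a nightly schedule.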
Also, the action we use to process the benchmark results does not work correctly when the results mix op/s and s/op units. We should keep only one of them: s/op (Average Time).
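With kotlinx-benchmark (again an assumption about this project's setup), the reporting unit can be pinned per benchmark class so that every result comes out as s/op. The class below is a hypothetical example:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)           // report s/op rather than op/s
@OutputTimeUnit(BenchmarkTimeUnit.SECONDS) // keep the unit uniform across all results
class ExampleBenchmark { // hypothetical; real benchmark classes would adjust their annotations
    @Benchmark
    fun parseInt(): Int = "12345".toInt()
}
```

Alternatively, setting `mode = "avgt"` in each Gradle benchmark configuration achieves the same without touching the benchmark sources.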