Encourage projects to add benchmarks. #227
ericsnowcurrently started this conversation in Ideas
With the recent pyperformance changes, projects can use it to run their own benchmarks. However, essentially no projects have written such benchmarks. Adding them would help projects avoid performance regressions (as long as they actually run them, of course).
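
For reference, such a benchmark is essentially a pyperf script, which also runs standalone. A minimal sketch (the benchmark name and workload here are hypothetical, standing in for a project's real hot path):

```python
# bm_parse_kv.py -- hypothetical benchmark for a project's hot path.
import pyperf


def parse_workload():
    # Stand-in workload; a real benchmark would exercise the project's own API.
    data = "key=value;" * 1_000
    return dict(item.split("=") for item in data.rstrip(";").split(";"))


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.metadata["description"] = "Parse a synthetic key=value payload"
    runner.bench_func("parse_kv", parse_workload)
```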
It would also allow others (like us) to compose benchmark suites from across different projects. A sample of such suites could be used to train PGO or as a target for performance-improvement work. This would be particularly useful if the benchmarks were categorized (e.g. by workload type, or real-world vs. micro), via tags or something.
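
As for categorization, each pyperformance benchmark already carries metadata in a pyproject.toml, so the categories could live there. A rough sketch, assuming a tags field in the [tool.pyperformance] table (the benchmark name and tag values here are made up):

```toml
# pyproject.toml for the hypothetical "parse_kv" benchmark above.
[project]
name = "pyperformance_bm_parse_kv"
dependencies = ["pyperf"]
dynamic = ["version"]

[tool.pyperformance]
name = "parse_kv"
# Hypothetical category tags, so suites could be composed by workload type.
tags = "micro serialize"
```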
Concrete things we could do: