Make it easier to compare different benchmarks #151
Comments
Also somewhat related to #62.
In relation to this issue, I wonder whether there is any way to "scale" the PDF values given in the criterion report. Context: I am running a lot of individual benchmarks and would like to compare them on a single KDE graph, which I am trying to render from the KDE information in an exported JSON report. However, since the y-value scales are inconsistent, it is hard to compare different benchmarks. If there were a way to make these scales even roughly comparable, that would be extremely helpful. EDIT: …
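A rough sketch of the kind of rescaling that can help here: if the exported JSON exposes the KDE x/y points per benchmark (the key names `name`/`kde_xs`/`kde_ys` and the `report.json` path below are placeholders for whatever the export actually contains), dividing each curve by its own peak puts every curve on a comparable 0–1 scale, so the shapes can be overlaid even though the absolute densities differ:

```python
import json
import matplotlib.pyplot as plt

def load_kde(path):
    """Load the KDE x/y arrays from an exported report.

    Assumes the JSON is a list of objects with "name", "kde_xs" and
    "kde_ys" fields -- adjust the keys to match the actual report layout.
    """
    with open(path) as f:
        report = json.load(f)
    return [(b["name"], b["kde_xs"], b["kde_ys"]) for b in report]

def peak_normalise(ys):
    """Rescale a density curve so its highest point is 1.

    Raw KDE y-values are probability densities, so a tightly clustered
    benchmark has a much taller peak than a noisy one; dividing by the
    peak makes the shapes comparable even though the absolute densities
    are not.
    """
    peak = max(ys)
    return [y / peak for y in ys]

def plot_comparison(benchmarks):
    # Overlay all curves on one shared axis.
    for name, xs, ys in benchmarks:
        plt.plot(xs, peak_normalise(ys), label=name)
    plt.xlabel("time")
    plt.ylabel("density (peak-normalised)")
    plt.legend()
    plt.show()

if __name__ == "__main__":
    plot_comparison(load_kde("report.json"))
```

Normalising by area instead (so each curve integrates to 1) is the other common choice; peak-normalisation is usually the easier one to eyeball.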
Yes please.
I was recently made aware of the …
When comparing multiple different benchmarks, it would be nice to have a way to mark them as "comparable" (perhaps `bcomparisongroup`), with the following effects: […] This would make it easier to eyeball differences.
As an extension, perhaps the library could also do some statistics to check whether there are true differences between the timings (ANOVA? it's been a while...), but that would be an added bonus. A sketch of that kind of test follows below.
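For the statistics part, a one-way ANOVA over the raw per-benchmark samples is easy to run outside the library today; this is only an illustration with made-up numbers, not something the report currently provides:

```python
from scipy.stats import f_oneway

# Hypothetical per-benchmark timing samples (seconds per iteration).
# In practice these would come from the raw measurements each benchmark
# records, not from summary statistics.
samples = {
    "impl_a": [1.02e-3, 0.99e-3, 1.01e-3, 1.03e-3],
    "impl_b": [1.10e-3, 1.08e-3, 1.12e-3, 1.09e-3],
    "impl_c": [1.01e-3, 1.00e-3, 1.02e-3, 1.04e-3],
}

# One-way ANOVA: tests the null hypothesis that all groups share the
# same mean. A small p-value suggests at least one benchmark differs.
f_stat, p_value = f_oneway(*samples.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

Since benchmark timings are often skewed and heavy-tailed, a rank-based test such as `scipy.stats.kruskal` may be a safer default than ANOVA.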