
core: align performance audit score with metric savings #15447

Merged · 24 commits · Oct 4, 2023

Conversation

@adamraine (Member) commented Sep 8, 2023

This PR adds a new scoring mode METRIC_SAVINGS which overrides the audit product score in the following way (see the sketch after this list):

  • Set the score to 1 if the provided audit product score is passing (>= 0.9)
  • Otherwise, set the score to 0 if the audit product had nonzero savings for any metric
  • Otherwise, set the score to 0.5 (a failing audit with no savings)
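
A minimal sketch of that mapping, purely for illustration; the function and parameter names below are assumptions, not the actual implementation in this PR:

```js
/**
 * Illustrative sketch of the METRIC_SAVINGS override described above.
 * `score` is the original audit product score and `metricSavings` maps
 * metric IDs to estimated savings.
 * @param {number|null} score
 * @param {Record<string, number>|undefined} metricSavings
 * @return {number|null}
 */
function applyMetricSavingsScoring(score, metricSavings) {
  if (score === null) return null;
  // Passing audits (score >= 0.9) are clamped to a perfect score.
  if (score >= 0.9) return 1;
  // Failing audits with any nonzero metric savings drop to 0.
  const hasSavings = Object.values(metricSavings || {}).some(value => value > 0);
  if (hasSavings) return 0;
  // Failing audits with no savings land in the middle bucket.
  return 0.5;
}
```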

In theory, this should put audits into the same buckets established in #15445. The overall impact calculations in #15445 should only affect the bucket of audits with a score of 0.

We could take this a step further by making the score more continuous using the overall impact calculation. For audits that fail and have nonzero savings, we would scale the score with the overall impact so that audits with higher impact are closer to 0 and audits with lower impact are closer to 0.5. The reason I haven't done that in this PR (yet?) is that it shouldn't affect the end result of our sorting (assuming we use the report-side change in #15445), and calculating the overall impact in runner.js could get messy.
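
If we did pursue that, the mapping could look roughly like the following. This is purely illustrative; `impactRatio` and its normalization are assumptions, not anything computed in this PR:

```js
// Hypothetical continuous scoring for a failing audit with savings.
// `impactRatio` is the audit's overall impact normalized to 0..1 against
// the largest overall impact across all failing audits.
function continuousFailingScore(impactRatio) {
  // Highest-impact audits approach 0, lowest-impact audits approach 0.5.
  return 0.5 * (1 - impactRatio);
}
```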


This PR also adds a new audit product option scoreDisplayMode which overrides the same option on the audit meta. This is useful for audits such as dom-size, which can have nonzero metric savings but which we should make informative when there is only ~10 ms of savings or so.
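
For example, an audit result could downgrade itself along these lines. The helper name and the 10 ms cutoff are illustrative only, not part of this PR:

```js
// Hypothetical audit product: mark the audit informative when the estimated
// TBT savings are negligible, otherwise let the meta scoreDisplayMode apply.
const MIN_NOTABLE_TBT_SAVINGS_MS = 10;

function makeAuditProduct(score, tbtSavingsMs) {
  return {
    score,
    metricSavings: {TBT: tbtSavingsMs},
    // A scoreDisplayMode on the audit product overrides the one on the audit meta.
    ...(tbtSavingsMs < MIN_NOTABLE_TBT_SAVINGS_MS && {scoreDisplayMode: 'informative'}),
  };
}
```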

However, this does create situations where audits that have no savings are ranked higher than audits which do have savings. Perhaps it would be better to improve our model for savings rather than proactively limit the visibility of some audits based on arbitrary thresholds?

@adamraine marked this pull request as ready for review September 10, 2023 05:38
@adamraine requested a review from a team as a code owner September 10, 2023 05:38
@adamraine requested review from connorjclark and removed the request for a team September 10, 2023 05:38
@adamraine marked this pull request as draft September 10, 2023 05:38
@adamraine changed the title from "WIP: align performance audit score with metric savings" to "core: align performance audit score with metric savings" Sep 26, 2023
Base automatically changed from metric-savings-score to main October 3, 2023 18:06

Review comment on core/audits/audit.js (outdated):
// 1 - audit passed
// 0.5 - audit failed and had no metric savings
// 0 - audit failed and had metric savings
metricSavings = 8;
adamraine (Member Author):
There was a name conflict with metricSavings, so I moved this up
