Add experimental code to measure reference compile time regression for slice::sort #108662

Conversation

Voultapher
Contributor

This is only meant as an experiment, not to be merged. The regression data this yields should help get a better grip on the impact of slice::sort on compile times. This is helpful information for further work.

DO NOT MERGE THIS

Add experimental code to measure reference compile time regression for slice::sort

@rustbot
Collaborator

rustbot commented Mar 2, 2023

r? @m-ou-se

(rustbot has picked a reviewer for you, use r? to override)

rustbot added the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties) and T-libs (Relevant to the library team, which will review and decide on the PR/issue) labels on Mar 2, 2023
@rustbot
Collaborator

rustbot commented Mar 2, 2023

Hey! It looks like you've submitted a new PR for the library teams!

If this PR contains changes to any rust-lang/rust public library APIs then please comment with @rustbot label +T-libs-api -T-libs to tag it appropriately. If this PR contains changes to any unstable APIs please edit the PR description to add a link to the relevant API Change Proposal or create one if you haven't already. If you're unsure where your change falls no worries, just leave it as is and the reviewer will take a look and make a decision to forward on if necessary.

Examples of T-libs-api changes:

  • Stabilizing library features
  • Introducing insta-stable changes such as new implementations of existing stable traits on existing stable types
  • Introducing new or changing existing unstable library APIs (excluding permanently unstable features / features without a tracking issue)
  • Changing public documentation in ways that create new stability guarantees
  • Changing observable runtime behavior of library APIs

@Voultapher
Contributor Author

@thomcc could you please start a timer run? @orlp and I want to use this data to get a better understanding of the impact this has on compiler perf. The important metric is how much extra work the compiler has to do to compile more code, not how much faster sort is: the compiler spends less than 0.1% of its time in each of the stable and unstable sorts, so those routines themselves are not responsible for the slowdown.
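
For context, a minimal sketch (not part of this PR, all names illustrative) of the kind of crate whose build time such a perf run measures: it monomorphizes the stable and unstable sorts for a few element types, so their compile-time cost shows up when the crate is built, e.g. under `cargo build --timings` or rustc's `-Z self-profile`.

```rust
// Hypothetical stress crate: each call below forces a separate
// monomorphization of slice::sort and slice::sort_unstable, so the
// time spent compiling those generic functions becomes visible in
// build timings.

fn sort_both<T: Ord + Clone>(v: &mut Vec<T>) {
    let mut stable = v.clone();
    stable.sort();      // stable sort: slice::sort
    v.sort_unstable();  // unstable sort: slice::sort_unstable
}

fn main() {
    sort_both(&mut vec![3u64, 1, 2]);
    sort_both(&mut vec![3i32, 1, 2]);
    sort_both(&mut vec![(1u64, 2u32), (0, 1)]);
    sort_both(&mut vec!["b".to_string(), "a".to_string()]);
}
```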

@nnethercote
Contributor

@bors try @rust-timer queue


rustbot added the S-waiting-on-perf (Status: Waiting on a perf run to be completed) label on Mar 3, 2023
@bors
Collaborator

bors commented Mar 3, 2023

⌛ Trying commit 6aeebad with merge ba11738028df315432b532791e29abc7c781ba02...

@bors
Collaborator

bors commented Mar 3, 2023

☀️ Try build successful - checks-actions
Build commit: ba11738028df315432b532791e29abc7c781ba02


@rust-timer
Collaborator

Finished benchmarking commit (ba11738028df315432b532791e29abc7c781ba02): comparison URL.

Overall result: ❌ regressions - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 2.4%  | [0.3%, 17.0%]  | 58    |
| Regressions ❌ (secondary)  | 1.7%  | [0.3%, 11.8%]  | 53    |
| Improvements ✅ (primary)   | -0.5% | [-0.6%, -0.3%] | 3     |
| Improvements ✅ (secondary) | -0.4% | [-0.5%, -0.3%] | 4     |
| All ❌✅ (primary)           | 2.2%  | [-0.6%, 17.0%] | 61    |

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 2.8%  | [1.2%, 5.3%]   | 7     |
| Regressions ❌ (secondary)  | 1.7%  | [1.5%, 1.8%]   | 2     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -3.3% | [-3.3%, -3.3%] | 1     |
| All ❌✅ (primary)           | 2.8%  | [1.2%, 5.3%]   | 7     |

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 3.9%  | [1.2%, 16.8%]  | 27    |
| Regressions ❌ (secondary)  | 2.2%  | [1.1%, 12.2%]  | 20    |
| Improvements ✅ (primary)   | -0.5% | [-0.5%, -0.5%] | 1     |
| Improvements ✅ (secondary) | -2.6% | [-3.1%, -2.2%] | 3     |
| All ❌✅ (primary)           | 3.7%  | [-0.5%, 16.8%] | 28    |

rustbot added the perf-regression (Performance regression) label and removed the S-waiting-on-perf (Status: Waiting on a perf run to be completed) label on Mar 3, 2023
@Voultapher
Contributor Author

The experiment was performed and the results gathered.
