
Improve VecCache under parallel frontend #124780

Open · wants to merge 1 commit into base: master
Conversation

Mark-Simulacrum
Member

@Mark-Simulacrum Mark-Simulacrum commented May 6, 2024

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788s to 5.4688s on libcore) over our current Lock-based approach, likely due to reduced bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829s -> 4.7319s).
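For readers unfamiliar with the layout, the bucketing idea can be sketched as follows. This is an illustrative reconstruction, not the PR's code: the names `SlotIndex` and `from_index` appear in the review excerpts below, but the concrete geometry (a 2^12-entry bucket 0, with each later bucket doubling total capacity) is an assumption made for this sketch.

```rust
/// Maps a flat `u32` index to (bucket, index within bucket), where bucket 0
/// holds 2^12 entries and every later bucket doubles the total capacity.
#[derive(Debug, PartialEq)]
struct SlotIndex {
    bucket_idx: usize,
    entries: usize,
    index_in_bucket: usize,
}

impl SlotIndex {
    fn from_index(idx: u32) -> Self {
        // log2 of the index selects the bucket; everything below 2^12
        // shares bucket 0 (checked_ilog2 returns None for idx == 0).
        let log = idx.checked_ilog2().unwrap_or(0) as usize;
        if log < 12 {
            SlotIndex { bucket_idx: 0, entries: 1 << 12, index_in_bucket: idx as usize }
        } else {
            SlotIndex {
                bucket_idx: log - 11,
                entries: 1 << log,
                index_in_bucket: idx as usize - (1 << log),
            }
        }
    }
}

fn main() {
    // Bucket 0 covers indices 0..4096.
    assert_eq!(SlotIndex::from_index(0).bucket_idx, 0);
    assert_eq!(SlotIndex::from_index(4095).bucket_idx, 0);
    // Index 4096 starts bucket 1, which itself holds 4096 entries.
    let s = SlotIndex::from_index(4096);
    assert_eq!((s.bucket_idx, s.entries, s.index_in_bucket), (1, 4096, 0));
    println!("ok");
}
```

Because each bucket's base address never moves once allocated, readers can follow a pointer without holding any lock; that is what removes the contended cache line.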

@rustbot rustbot added A-query-system Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html) S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels May 6, 2024
@Mark-Simulacrum
Member Author

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 6, 2024
@bors
Contributor

bors commented May 6, 2024

⌛ Trying commit 206251b with merge 15ee677...

bors added a commit to rust-lang-ci/rust that referenced this pull request May 6, 2024
[WIP] Improve VecCache under parallel frontend

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788s to 5.4688s on libcore) over our current Lock-based approach, likely due to reduced bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829s -> 4.7319s).

FIXME: Extract the internals to rustc_data_structures, safety comments, etc.

r? `@Mark-Simulacrum` -- opening for perf first.

@bors
Contributor

bors commented May 6, 2024

☀️ Try build successful - checks-actions
Build commit: 15ee677 (15ee67784efd5d46670c56f52fd618a4a00771e5)



impl SlotIndex {
    #[inline]
    fn from_index(idx: u32) -> Self {
Contributor

Have you tried to write a branchless version of this?

Member Author

The result of this on x86 at least is branchless (with conditional moves): https://rust.godbolt.org/z/x9s3aexYh

let index = match current {
    0 => return None,
    // Treat "initializing" as actually just not initialized at all.
    // The only reason this is a separate state is that `complete` calls could race and
Contributor

Because we ensure that each query only executes once, complete will only be called once per key, so this race can't happen.

Member Author

I would prefer to avoid a non-local safety condition like that. The performance gain from avoiding a true lock here isn't meaningful anyway.
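The state machine under discussion can be sketched like this. Only the three states (empty, initializing, complete) come from the quoted code; the names `Slot`, `try_begin_init`, `finish_init` and the memory orderings are illustrative assumptions, and the real cache stores the value alongside the state rather than just the state word.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical slot states mirroring the discussion: a separate
// "initializing" state lets two racing `complete` calls agree on a single
// winner without an external lock.
const EMPTY: u32 = 0;
const INITIALIZING: u32 = 1;
const COMPLETE: u32 = 2;

struct Slot {
    state: AtomicU32,
    // The value itself would live in an adjacent UnsafeCell; elided here to
    // keep the state machine in focus.
}

impl Slot {
    /// First caller wins the right to initialize; racing callers back off.
    fn try_begin_init(&self) -> bool {
        self.state
            .compare_exchange(EMPTY, INITIALIZING, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }
    /// Publish the value written by the winner.
    fn finish_init(&self) {
        self.state.store(COMPLETE, Ordering::Release);
    }
    /// Readers treat "initializing" the same as "not present yet".
    fn is_complete(&self) -> bool {
        self.state.load(Ordering::Acquire) == COMPLETE
    }
}

fn main() {
    let s = Slot { state: AtomicU32::new(EMPTY) };
    assert!(s.try_begin_init());  // first caller wins
    assert!(!s.try_begin_init()); // a racing caller loses
    assert!(!s.is_complete());    // in-progress reads as not-yet-present
    s.finish_init();
    assert!(s.is_complete());
    println!("ok");
}
```

The `compare_exchange` is exactly the locality Mark-Simulacrum argues for: the safety of the slot is decided by the slot itself, not by a query-executes-once guarantee elsewhere in the compiler.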

@rust-timer
Collaborator

Finished benchmarking commit (15ee677): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      0.8%    [0.2%, 2.9%]     169
Regressions ❌ (secondary)    0.9%    [0.2%, 2.3%]     70
Improvements ✅ (primary)     -       -                0
Improvements ✅ (secondary)   -1.8%   [-2.3%, -0.4%]   5
All ❌✅ (primary)             0.8%    [0.2%, 2.9%]     169

Max RSS (memory usage)

Results

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      1.8%    [1.8%, 1.8%]     1
Regressions ❌ (secondary)    3.0%    [2.6%, 3.8%]     4
Improvements ✅ (primary)     -3.0%   [-7.1%, -1.1%]   54
Improvements ✅ (secondary)   -8.2%   [-13.3%, -4.6%]  22
All ❌✅ (primary)             -2.9%   [-7.1%, 1.8%]    55

Cycles

Results

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      1.2%    [0.8%, 2.1%]     13
Regressions ❌ (secondary)    2.4%    [1.4%, 3.2%]     6
Improvements ✅ (primary)     -       -                0
Improvements ✅ (secondary)   -       -                0
All ❌✅ (primary)             1.2%    [0.8%, 2.1%]     13

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 676.55s -> 678.643s (0.31%)
Artifact size: 315.72 MiB -> 316.01 MiB (0.09%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels May 6, 2024
@Mark-Simulacrum Mark-Simulacrum changed the title [WIP] Improve VecCache under parallel frontend Improve VecCache under parallel frontend Jun 9, 2024
@Mark-Simulacrum
Member Author

@bors try @rust-timer queue

No major changes, but let's collect another, more recent run. Presuming results are about the same, I'll mark this ready for review; the improvement in scalability IMO outweighs the slight regressions to single-threaded compilation.


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jun 9, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Jun 9, 2024
Improve VecCache under parallel frontend

@bors
Contributor

bors commented Jun 9, 2024

⌛ Trying commit a28ef6f with merge d447427...

@bors
Contributor

bors commented Jun 9, 2024

☀️ Try build successful - checks-actions
Build commit: d447427 (d447427ec5ed8db4f1baad0135b5304baef82ce9)


@rust-timer
Collaborator

Finished benchmarking commit (d447427): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      0.7%    [0.2%, 2.6%]     171
Regressions ❌ (secondary)    1.1%    [0.2%, 10.0%]    99
Improvements ✅ (primary)     -       -                0
Improvements ✅ (secondary)   -2.0%   [-5.2%, -0.4%]   8
All ❌✅ (primary)             0.7%    [0.2%, 2.6%]     171

Max RSS (memory usage)

Results (primary -3.4%, secondary -8.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      -       -                0
Regressions ❌ (secondary)    3.2%    [2.5%, 3.9%]     2
Improvements ✅ (primary)     -3.4%   [-8.2%, -0.9%]   70
Improvements ✅ (secondary)   -9.2%   [-14.0%, -3.3%]  21
All ❌✅ (primary)             -3.4%   [-8.2%, -0.9%]   70

Cycles

Results (primary 2.2%, secondary 2.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                             mean    range            count
Regressions ❌ (primary)      2.2%    [1.5%, 3.1%]     17
Regressions ❌ (secondary)    2.3%    [1.4%, 2.9%]     22
Improvements ✅ (primary)     -       -                0
Improvements ✅ (secondary)   -       -                0
All ❌✅ (primary)             2.2%    [1.5%, 3.1%]     17

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: missing data
Artifact size: 319.77 MiB -> 319.42 MiB (-0.11%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jun 10, 2024
@Mark-Simulacrum
Member Author

r? compiler

As noted in the PR description this is a regression for cycles/time on non-parallel builds (more code to execute). That's largely offset by far better scalability to more threads in parallel compilers. I think the trade-off is worth it, particularly since this also appears to be a memory usage win.

Contributor

@lcnr lcnr left a comment

the impl is looking good to me, not sure if someone else should or wants to review here

r=me after nit

if ptr.is_null() {
    return None;
}
// SAFETY: Follows from preconditions on `buckets` and `self`.
Contributor

What are the preconditions on `self`? That it has been created via `from_index`, i.e. that it satisfies the type invariants, I guess? Please add an fn comment documenting the safety invariants of that unsafe function.
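The kind of `# Safety` fn comment being requested might look like the sketch below. The function `bucket_slot` and its signature are hypothetical, invented for illustration; the point is that the caller-facing contract is spelled out on the unsafe fn, so the `// SAFETY:` comments at call sites have something concrete to discharge.

```rust
struct SlotIndex {
    bucket_idx: usize,
    index_in_bucket: usize,
}

/// Returns a raw pointer to the slot that `idx` refers to.
///
/// # Safety
///
/// - `idx` must have been produced by `SlotIndex::from_index`, which
///   guarantees `idx.bucket_idx < buckets.len()`.
/// - The bucket at `idx.bucket_idx` must be allocated (non-null) and hold
///   at least `idx.index_in_bucket + 1` elements.
unsafe fn bucket_slot(buckets: &[*mut u32], idx: &SlotIndex) -> *mut u32 {
    // Both the unchecked slice access and the pointer offset rely on the
    // contract documented above.
    unsafe { buckets.get_unchecked(idx.bucket_idx).add(idx.index_in_bucket) }
}

fn main() {
    let mut bucket0 = vec![0u32; 4096];
    let buckets = [bucket0.as_mut_ptr()];
    let idx = SlotIndex { bucket_idx: 0, index_in_bucket: 7 };
    // SAFETY: `idx` is in bounds for `buckets` and bucket 0 is allocated.
    unsafe { *bucket_slot(&buckets, &idx) = 42 };
    assert_eq!(bucket0[7], 42);
    println!("ok");
}
```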

@saethlin
Member

not sure if someone else should or wants to review here

Just to make sure there's no confusion, I don't intend to hold this PR up.

@Mark-Simulacrum
Member Author

@bors r=lcnr rollup=never

@bors
Contributor

bors commented Nov 11, 2024

📌 Commit 0f109f5 has been approved by lcnr

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Nov 11, 2024
@bors
Contributor

bors commented Nov 11, 2024

⌛ Testing commit 0f109f5 with merge 0ff92be...

bors added a commit to rust-lang-ci/rust that referenced this pull request Nov 11, 2024
Improve VecCache under parallel frontend


@bors
Contributor

bors commented Nov 11, 2024

💔 Test failed - checks-actions

@bors bors added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. and removed S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. labels Nov 11, 2024
@Mark-Simulacrum
Member Author

@bors r=lcnr rollup=never

Fixed the drop impls.

@bors
Contributor

bors commented Nov 11, 2024

📌 Commit cfb9e4f has been approved by lcnr

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Nov 11, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Nov 12, 2024
Improve VecCache under parallel frontend

@bors
Contributor

bors commented Nov 12, 2024

⌛ Testing commit cfb9e4f with merge d8e452f...


@bors
Contributor

bors commented Nov 12, 2024

💔 Test failed - checks-actions

@bors bors added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. and removed S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. labels Nov 12, 2024
@Mark-Simulacrum
Member Author

@bors r=lcnr rollup=never

Okay, new attempt... the previous one technically worked but overflowed counts on the final iteration.
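As a back-of-the-envelope illustration of how such a final-iteration overflow can happen: assuming a 2^12-entry first bucket and doubling buckets that together cover the full u32 index space (an assumption of this sketch; the PR's actual arithmetic may differ), the cumulative capacity lands exactly on 2^32, one past `u32::MAX`.

```rust
fn main() {
    // Assumed capacities: bucket 0 holds 2^12 entries; bucket k (k >= 1)
    // holds 2^(11 + k), up to bucket 20 which holds 2^31.
    let mut total: u64 = 1 << 12; // bucket 0
    for k in 1..=20u32 {
        total += 1u64 << (11 + k);
    }
    // The running total after the last bucket is exactly 2^32: in-bounds
    // for u64/usize arithmetic, but an overflow if tracked in a u32.
    assert_eq!(total, 1u64 << 32);
    assert!(total > u32::MAX as u64);
    println!("ok");
}
```

This is why running totals over the index space need a wider type (or checked arithmetic) even though every individual index fits in a u32.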

@bors
Contributor

bors commented Nov 14, 2024

📌 Commit bc4af80 has been approved by lcnr

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Nov 14, 2024
Labels
A-query-system Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html) perf-regression Performance regression. perf-regression-triaged The performance regression has been triaged. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.