Shard AllocMap Lock #136115
Some changes occurred to the CTFE / Miri interpreter. cc @rust-lang/miri, @rust-lang/wg-const-eval
@bors try @rust-timer queue
Shard AllocMap Lock This improves performance on many-seed parallel (-Zthreads=32) miri executions from managing to use ~8 cores to using 27-28 cores, which is about the same as what I see with the data structure proposed in rust-lang#136105 - I haven't analyzed but I suspect the sharding might actually work out better if we commonly insert "densely" since sharding would split the cache lines and the OnceVec packs locks close together. Of course, we could do something similar with the bitset lock too. Either way, this seems like a very reasonable starting point that solves the problem ~equally well on what I can test locally. r? `@RalfJung`
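The sharding approach described above can be sketched in isolation. This is a minimal, hypothetical `ShardedMap` toy, not the actual patch to the interpreter's `AllocMap`: each key is hashed to pick one of N independently locked shards, so concurrent inserts of different keys usually contend on different locks.

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash, Hasher};
use std::sync::Mutex;

// Number of shards; a power of two so we can mask instead of mod.
const SHARDS: usize = 16;

struct ShardedMap<K, V> {
    shards: Vec<Mutex<HashMap<K, V>>>,
    hasher: RandomState,
}

impl<K: Hash + Eq, V> ShardedMap<K, V> {
    fn new() -> Self {
        ShardedMap {
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
            hasher: RandomState::new(),
        }
    }

    // Hash the key to choose which shard's lock to take.
    fn shard_for(&self, key: &K) -> &Mutex<HashMap<K, V>> {
        let mut h = self.hasher.build_hasher();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) & (SHARDS - 1)]
    }

    // Only one shard is locked per operation, so threads touching
    // different shards proceed in parallel.
    fn insert(&self, key: K, value: V) -> Option<V> {
        self.shard_for(&key).lock().unwrap().insert(key, value)
    }

    fn get_cloned(&self, key: &K) -> Option<V>
    where
        V: Clone,
    {
        self.shard_for(key).lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let m: ShardedMap<u64, String> = ShardedMap::new();
    m.insert(1, "a".to_string());
    m.insert(2, "b".to_string());
    println!("{:?}", m.get_cloned(&1));
}
```

Note the trade-off mentioned in the description: sharding spreads hot entries across cache lines, whereas a structure that packs its locks close together may behave differently under dense insertion patterns.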
☀️ Try build successful - checks-actions
Finished benchmarking commit (e402369): comparison URL.

Overall result: ❌ regressions - no action needed

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

@bors rollup=never

Instruction count: This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage): Results (primary -2.0%, secondary 2.1%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: Results (primary 2.6%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 772.928s -> 772.728s (-0.03%)
Perf results look neutral enough that I'm okay moving forward given the non-perf measured gains for parallel executions. @rustbot label: perf-regression-triaged
@@ -389,35 +391,37 @@ pub const CTFE_ALLOC_SALT: usize = 0;

pub(crate) struct AllocMap<'tcx> {
    /// Maps `AllocId`s to their corresponding allocations.
    alloc_map: FxHashMap<AllocId, GlobalAlloc<'tcx>>,
    // Note that this map on rustc workloads seems to be rather dense. In #136105 we considered
Suggested change:
- // Note that this map on rustc workloads seems to be rather dense. In #136105 we considered
+ // Note that this map on rustc workloads seems to be rather dense, but
+ // in Miri workloads it is expected to be quite sparse. In #136105 we considered
Force-pushed b2bff4f to 5780392 (compare)
I didn't check just the counter, but based on profiling, that change has relatively minimal impact on concurrency for at least this benchmark (it might speed up each execution). This change is still necessary to get ~30 CPUs active vs. ~9. Updated for the comments, I think. @rustbot review
When changing a PR, please either add new commits or do a force-push that leaves the base commit unchanged... right now I think it is impossible for me to view the diff of your PR since my previous review. (This is mostly GitHub's fault, of course, for being a pretty terrible code review tool, but the best we can do is work around its deficiencies.)
Sorry, I only now noticed the change around the `assert!`.

@rustbot author
alloc_map.alloc_map.insert(id, alloc_salt.0.clone());
alloc_map.dedup.insert(alloc_salt, id);
// We just reserved, so should always be unique.
assert!(
I find code that has side-effects inside assertions pretty hard to follow. Is there a good way to avoid that?
I pulled the assert out to a separate line - does that help?
Yes, that's better, thanks!
Looking at the API, we have `try_insert`. It's still unstable but might be worth it? Then it can be a single line.
I'm not a fan of adding unstable API usage given that we don't care about the semantics here (`try_insert` is useful primarily in that it leaves the entry untouched rather than overwriting it).
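The pattern the review converged on can be sketched standalone. This uses a hypothetical `dedup` map and placeholder values, not the actual interpreter code: the side-effecting `insert` moves onto its own line, and the assertion only inspects its result. (On nightly, the unstable `HashMap::try_insert`, which errors instead of overwriting, would collapse this to one line.)

```rust
use std::collections::HashMap;

fn main() {
    let mut dedup: HashMap<u64, u32> = HashMap::new();

    // Harder to follow: the insertion is a side effect buried in the assert.
    //     assert!(dedup.insert(42, 7).is_none());

    // Clearer: insert on its own line, then assert on the returned value.
    // `insert` returns the previous value for the key, if any.
    let previous = dedup.insert(42, 7);
    assert!(previous.is_none(), "id should have been freshly reserved");

    println!("{:?}", dedup.get(&42));
}
```

The separate-line form also keeps the program correct if assertions are ever compiled out (e.g. `debug_assert!`), since the insertion no longer lives inside the assertion expression.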
Force-pushed 5780392 to 9bdf536 (compare)
💔 Test failed - checks-actions
This improves performance on many-seed parallel (-Zthreads=32) miri executions from managing to use ~8 cores to using 27-28 cores. That's pretty reasonable scaling for the simplicity of this solution.
Force-pushed 9bdf536 to 7f1231c (compare)
@bors r=RalfJung

Replaced std `AtomicU64` with `rustc_data_structures` `AtomicU64`, since that works on platforms that don't have 64-bit atomics (e.g., the powerpc failure here).
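A minimal sketch of what such a portability shim does (hypothetical `PortableU64` type, not the actual `rustc_data_structures` implementation): on targets without native 64-bit atomics, the same `fetch_add`/`load` interface can be emulated with a `Mutex`, at some cost in speed but with identical semantics.

```rust
use std::sync::Mutex;

// Fallback 64-bit counter for targets lacking AtomicU64. On targets
// that do have native 64-bit atomics, a shim like this would instead
// wrap std::sync::atomic::AtomicU64 directly.
struct PortableU64(Mutex<u64>);

impl PortableU64 {
    fn new(v: u64) -> Self {
        PortableU64(Mutex::new(v))
    }

    // Atomically add `n` and return the previous value, mirroring
    // AtomicU64::fetch_add.
    fn fetch_add(&self, n: u64) -> u64 {
        let mut guard = self.0.lock().unwrap();
        let old = *guard;
        *guard += n;
        old
    }

    fn load(&self) -> u64 {
        *self.0.lock().unwrap()
    }
}

fn main() {
    let counter = PortableU64::new(0);
    let old = counter.fetch_add(1);
    println!("old = {}, now = {}", old, counter.load());
}
```

Because callers only see the `fetch_add`/`load` interface, code like an `AllocId` reservation counter compiles unchanged on both kinds of targets.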
💔 Test failed - checks-actions
@bors retry rollup=iffy I'm going to hope that's a spurious test failure.
☀️ Test successful - checks-actions
Finished benchmarking commit (e5f11af): comparison URL.

Overall result: ✅ improvements - no action needed

@rustbot label: -perf-regression

Instruction count: This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage): Results (primary 3.4%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: This benchmark run did not return any relevant results for this metric.

Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 777.434s -> 777.31s (-0.02%)