update isolate_highest_one for NonZero<T> #147686
Conversation
rustbot has assigned @Mark-Simulacrum.
Thanks!
🌲 The tree is currently closed for pull requests below priority 100. This pull request will be tested once the tree is reopened.
The job failed. Click to see the possible cause of the failure (guessed by this bot).
update isolate_highest_one for NonZero<T>
Rollup of 15 pull requests

Successful merges:

- #147404 (Fix issue with callsite inline attribute not being applied sometimes.)
- #147534 (Implement SIMD funnel shifts in const-eval/Miri)
- #147686 (update isolate_highest_one for NonZero<T>)
- #148020 (Show backtrace on allocation failures when possible)
- #148204 (Modify contributor email entries in .mailmap)
- #148230 (rustdoc: Properly highlight shebang, frontmatter & weak keywords in source code pages and code blocks)
- #148555 (Fix rust-by-example spanish translation)
- #148556 (Fix suggestion for returning async closures)
- #148585 ([rustdoc] Replace `print` methods with functions to improve code readability)
- #148600 (re-use `self.get_all_attrs` result for pass indirectly attribute)
- #148612 (Add note for identifier with attempted hygiene violation)
- #148613 (Switch hexagon targets to rust-lld)
- #148644 ([bootstrap] Make `--open` option work with `doc src/tools/error_index_generator`)
- #148649 (don't completely reset `HeadUsages`)
- #148675 (Remove eslint-js from npm dependencies)

r? `@ghost`
`@rustbot` modify labels: rollup
Rollup of 16 pull requests

Successful merges:

- #147534 (Implement SIMD funnel shifts in const-eval/Miri)
- #147686 (update isolate_highest_one for NonZero<T>)
- #148020 (Show backtrace on allocation failures when possible)
- #148204 (Modify contributor email entries in .mailmap)
- #148230 (rustdoc: Properly highlight shebang, frontmatter & weak keywords in source code pages and code blocks)
- #148279 (rustc_builtin_macros: rename bench parameter to avoid collisions with user-defined function names)
- #148555 (Fix rust-by-example spanish translation)
- #148556 (Fix suggestion for returning async closures)
- #148585 ([rustdoc] Replace `print` methods with functions to improve code readability)
- #148600 (re-use `self.get_all_attrs` result for pass indirectly attribute)
- #148612 (Add note for identifier with attempted hygiene violation)
- #148613 (Switch hexagon targets to rust-lld)
- #148619 (Enable std locking functions on AIX)
- #148644 ([bootstrap] Make `--open` option work with `doc src/tools/error_index_generator`)
- #148649 (don't completely reset `HeadUsages`)
- #148675 (Remove eslint-js from npm dependencies)

r? `@ghost`
`@rustbot` modify labels: rollup
Rollup of 10 pull requests

Successful merges:

- #145656 (Stabilize s390x `vector` target feature and `is_s390x_feature_detected!` macro)
- #147024 (std_detect: Support run-time detection on OpenBSD using elf_aux_info)
- #147534 (Implement SIMD funnel shifts in const-eval/Miri)
- #147540 (Stabilise `as_array` in `[_]` and `*const [_]`; stabilise `as_mut_array` in `[_]` and `*mut [_]`.)
- #147686 (update isolate_highest_one for NonZero<T>)
- #148230 (rustdoc: Properly highlight shebang, frontmatter & weak keywords in source code pages and code blocks)
- #148555 (Fix rust-by-example spanish translation)
- #148556 (Fix suggestion for returning async closures)
- #148585 ([rustdoc] Replace `print` methods with functions to improve code readability)
- #148600 (re-use `self.get_all_attrs` result for pass indirectly attribute)

r? `@ghost`
`@rustbot` modify labels: rollup
Rollup merge of #147686 - vrtgs:non-zero-isolate, r=joboet: update isolate_highest_one for NonZero<T>
## Rationale

Let `x = self` and `m = (((1 as $Int) << (<$Int>::BITS - 1)).wrapping_shr(self.leading_zeros()))`.

Then the previous code computed `NonZero::new_unchecked(x & m)`. Since `m` has exactly one bit set (the most significant 1-bit of `x`), `(x & m) == m`. Therefore, the masking step was redundant.
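As a concrete illustration (a standalone `u32` snippet, not the macro-generated library code), the single-bit mask `m` already equals `x & m` for a non-zero `x`:

```rust
fn main() {
    let x: u32 = 0b0101_1000;
    // m has a single 1 in the position of x's most significant set bit.
    let m = (1u32 << (u32::BITS - 1)).wrapping_shr(x.leading_zeros());
    assert_eq!(m, 0b0100_0000);
    assert_eq!(x & m, m); // the masking step is a no-op
}
```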
The shift is safe and does not need wrapping because:
* `self.leading_zeros() < $Int::BITS`, because `self` is non-zero.
* The result of the `unchecked_shr` is non-zero, satisfying the `NonZero` invariant; if wrapping did happen, the `NonZero` invariant would be violated.

Why this micro-optimization? The old code was suboptimal: it duplicated `$Int`'s `isolate_highest_one` logic instead of delegating to it. Since the type already wraps `$Int`, either delegation should be used for clarity or, if a custom implementation is kept, it should be optimized as above.
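For illustration, here is a minimal sketch of the before/after shapes of the computation, written as free functions over `u32` rather than the macro's `$Int`. The function names are hypothetical and this is not the standard-library source; the library version can use `unchecked_shr` rather than `wrapping_shr` because the value is known to be non-zero.

```rust
// Hypothetical standalone sketch over u32 (the real code is macro-generated
// over $Int inside NonZero<T>); names are illustrative only.

fn isolate_highest_one_old(x: u32) -> u32 {
    debug_assert!(x != 0);
    // Old shape: build a single-bit mask at x's highest set bit, then AND
    // it with x. The AND is redundant, since that bit is already set in x.
    x & (1u32 << (u32::BITS - 1)).wrapping_shr(x.leading_zeros())
}

fn isolate_highest_one_new(x: u32) -> u32 {
    debug_assert!(x != 0);
    // New shape: the mask alone is the answer. Because x != 0,
    // x.leading_zeros() < u32::BITS, so the shift never wraps; inside
    // NonZero<T> this permits an unchecked shift.
    (1u32 << (u32::BITS - 1)) >> x.leading_zeros()
}

fn main() {
    for x in [1u32, 0b0101_1000, 0x8000_0001, u32::MAX] {
        assert_eq!(isolate_highest_one_old(x), isolate_highest_one_new(x));
    }
    println!("both shapes agree");
}
```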