Doc `erase` and `prepare_rehash_in_place` #411
Merged
Conversation
@bors r+
bors added a commit that referenced this pull request on Mar 24, 2023
Doc `erase` and `prepare_rehash_in_place`

Also updated old comments and broken links. For example, the old comment inside the `erase` function was very confusing (or I didn't understand it).

The `self.buckets() < Group::WIDTH` check inside the `prepare_rehash_in_place` function is marked `unlikely`, since it is not possible to have tombstones in tables smaller than the group width. This follows from two facts:

1. Inside the `erase` function, `index_before = index.wrapping_sub(Group::WIDTH) & self.bucket_mask` equals `index` for all tables with at most `Group::WIDTH` buckets (proved by simple iteration, see the test below).
2. When `self.buckets() < Group::WIDTH`, there is always at least one empty slot among the trailing control bytes, due to the replication principle in the `set_ctrl` function.

Based on the above, when `self.buckets() < Group::WIDTH`, these two lines in the `erase` function

```rust
let empty_before = Group::load(self.ctrl(index_before)).match_empty();
let empty_after = Group::load(self.ctrl(index)).match_empty();
```

load the same group, which contains at least one empty slot in the trailing control bytes, even if the map is full and there are no empty buckets at all (`self.items == self.buckets()`). That is, `empty_before.leading_zeros() + empty_after.trailing_zeros() < Group::WIDTH` for any table where `self.buckets() < Group::WIDTH`.

P.S. After all that I wrote, I sit and think that maybe I should have used `debug_assert!` instead of `unlikely` 😄.
```rust
fn main() {
    // For every supported group width, check all power-of-two table sizes
    // up to that width: `index - group_width`, wrapped and masked, is
    // always `index` itself.
    for group_width in [4usize, 8, 16] {
        for buckets in [1usize, 2, 4, 8, 16] {
            if buckets > group_width {
                continue;
            }
            let bucket_mask = buckets - 1;
            for index in 0..buckets {
                let index_before = index.wrapping_sub(group_width) & bucket_mask;
                assert_eq!(index, index_before);
            }
        }
    }
}
```
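Point 2 can also be checked directly with a small sketch. Below is a hypothetical, simplified model of the control-byte array; `GROUP_WIDTH`, `EMPTY`, `FULL`, and this `set_ctrl` are stand-ins for hashbrown's internals (not the real code). The key observation is that the mirroring rule only ever writes positions `GROUP_WIDTH..GROUP_WIDTH + buckets`, so when `buckets < GROUP_WIDTH` the bytes at `buckets..GROUP_WIDTH` are never touched and stay `EMPTY`:

```rust
const GROUP_WIDTH: usize = 16;
const EMPTY: u8 = 0xFF;
const FULL: u8 = 0x00; // stand-in for any "full" control byte (h2 hash)

/// Simplified model of hashbrown's `set_ctrl`: every write is replicated
/// so that unaligned group loads always see valid control bytes.
fn set_ctrl(ctrl: &mut [u8], bucket_mask: usize, index: usize, value: u8) {
    let index2 = (index.wrapping_sub(GROUP_WIDTH) & bucket_mask) + GROUP_WIDTH;
    ctrl[index] = value;
    ctrl[index2] = value;
}

fn main() {
    // All power-of-two table sizes smaller than the group width.
    for buckets in [1usize, 2, 4, 8] {
        let bucket_mask = buckets - 1;
        let mut ctrl = vec![EMPTY; buckets + GROUP_WIDTH];

        // Fill the table completely: `self.items == self.buckets()`.
        for index in 0..buckets {
            // Point 1: the "group before" wraps around to `index` itself.
            assert_eq!(index.wrapping_sub(GROUP_WIDTH) & bucket_mask, index);
            set_ctrl(&mut ctrl, bucket_mask, index, FULL);
        }

        // Point 2: a full group load starting at any bucket still sees at
        // least one EMPTY byte among the trailing control bytes, because
        // positions `buckets..GROUP_WIDTH` are never written.
        for index in 0..buckets {
            let group = &ctrl[index..index + GROUP_WIDTH];
            assert!(group.contains(&EMPTY));
        }
    }
}
```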
☀️ Test successful - checks-actions
👀 Test was successful, but fast-forwarding failed: 422 Update is not a fast forward
@bors r+
💡 This pull request was already approved, no need to approve it again.
bors added a commit that referenced this pull request on Mar 29, 2023
💔 Test failed - checks-actions
@bors retry
☀️ Test successful - checks-actions