Rework messages serialization #352
Open
Shatur wants to merge 19 commits into master from ser-de-rework
Conversation
See comments in the code for details. I also switched to plain `Vec` for serialization since we no longer need `Cursor` for it and `Writer::write_all` is much slower than `Vec::extend_from_slice`.
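A minimal sketch of the change described here, not the crate's actual code: it contrasts building a message buffer through `Cursor` plus `Write::write_all` with appending to a plain `Vec` via `extend_from_slice`. The function names are made up for illustration.

```rust
use std::io::{Cursor, Write};

// Old-style approach (assumed for illustration): write through the `Write`
// trait on an in-memory `Cursor`.
fn serialize_with_cursor(payload: &[u8]) -> Vec<u8> {
    let mut cursor = Cursor::new(Vec::new());
    cursor
        .write_all(payload)
        .expect("writing to an in-memory buffer never fails");
    cursor.into_inner()
}

// New-style approach: append directly to the `Vec`, with no `io` indirection.
fn serialize_with_vec(payload: &[u8]) -> Vec<u8> {
    let mut buffer = Vec::with_capacity(payload.len());
    buffer.extend_from_slice(payload);
    buffer
}

fn main() {
    let payload = [1u8, 2, 3, 4];
    // Both paths produce the same bytes; only the write mechanism differs.
    assert_eq!(serialize_with_cursor(&payload), serialize_with_vec(&payload));
}
```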
Shatur changed the title from "Optimize replication message packing" to "Optimize replication message sizes" on Nov 13, 2024
Shatur changed the title from "Optimize replication message sizes" to "Optimize replication message size" on Nov 13, 2024
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #352      +/-   ##
==========================================
+ Coverage   89.98%   90.46%   +0.47%
==========================================
  Files          41       44       +3
  Lines        2367     2422      +55
==========================================
+ Hits         2130     2191      +61
+ Misses        237      231       -6

☔ View full report in Codecov by Sentry.
"Arrays" describes it better since the tick is technically also part of the header.
Check most unlikely conditions first.
Just read until the end of the cursor. Saves us a single byte.
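A minimal sketch of the "read until the end of the cursor" idea, assuming a made-up fixed-size record encoding: instead of a length prefix, the remaining bytes in the buffer delimit the array, which saves the byte that would have stored the count.

```rust
use std::io::{Cursor, Read};

/// Reads fixed-size records until the cursor reaches the end of its buffer.
fn read_values(cursor: &mut Cursor<&[u8]>) -> std::io::Result<Vec<u32>> {
    let mut values = Vec::new();
    // No stored count: keep reading while any bytes remain.
    while cursor.position() < cursor.get_ref().len() as u64 {
        let mut bytes = [0u8; 4];
        cursor.read_exact(&mut bytes)?;
        values.push(u32::from_le_bytes(bytes));
    }
    Ok(values)
}

fn main() -> std::io::Result<()> {
    // Two little-endian u32 values (1 and 2) with no leading count byte:
    // the message length alone delimits the array.
    let message: Vec<u8> = vec![1, 0, 0, 0, 2, 0, 0, 0];
    let mut cursor = Cursor::new(message.as_slice());
    assert_eq!(read_values(&mut cursor)?, vec![1, 2]);
    Ok(())
}
```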
Shatur
changed the title
Optimize replication message size
Rework messages serialization
Nov 17, 2024
The performance is a bit slower, but we gain a free byte per changed entity and several bytes per message. For example, in our statistics test a test message now takes only 16 bytes instead of 33.

While writing the documentation, I decided to rename `InitMessage` to `ChangeMessage` and `UpdateMessage` to `MutateMessage` (and related types). The rename makes it clear that `MutateMessage` stores only component mutations, while `ChangeMessage` stores any type of change and may even include mutations if there is an insertion or removal (see the sketch below). I considered splitting the rename into a separate PR, but decided to keep it here since the messages got completely reworked.

I would recommend starting with the `change_message` and `mutate_message` modules; they contain a lot of internal documentation describing how the new approach works.

I also switched to plain `Vec` for serialization since we no longer need `Cursor` for it and `Writer::write_all` is much slower than `Vec::extend_from_slice`.
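A rough sketch of the `ChangeMessage` / `MutateMessage` distinction, using hypothetical, simplified types rather than the crate's real definitions; only the naming idea described above is taken from the PR.

```rust
/// Serialized payload for a single component (hypothetical layout).
struct ComponentData {
    component_id: u16,
    bytes: Vec<u8>,
}

/// One entry of a hypothetical `ChangeMessage`: any kind of change,
/// including mutations that accompany an insertion or removal.
enum EntityChange {
    Insertion(ComponentData),
    Removal { component_id: u16 },
    Mutation(ComponentData),
}

/// Carries arbitrary changes per entity; the tick is part of the header.
struct ChangeMessage {
    tick: u32,
    changes: Vec<(u64, Vec<EntityChange>)>, // entity id -> its changes
}

/// Carries only component mutations per entity.
struct MutateMessage {
    tick: u32,
    mutations: Vec<(u64, Vec<ComponentData>)>, // entity id -> mutated components
}

fn main() {
    let change = ChangeMessage {
        tick: 1,
        changes: vec![(
            42,
            vec![EntityChange::Insertion(ComponentData {
                component_id: 0,
                bytes: vec![1, 2, 3],
            })],
        )],
    };
    let mutate = MutateMessage { tick: 2, mutations: Vec::new() };
    assert_eq!(change.changes.len(), 1);
    assert!(mutate.mutations.is_empty());
}
```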