fix IovDeque for non 4K pages #5222
Conversation
Codecov Report

Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #5222 +/- ##
==========================================
+ Coverage 82.88% 82.93% +0.05%
==========================================
Files 250 250
Lines 26936 26942 +6
==========================================
+ Hits 22325 22344 +19
+ Misses 4611 4598 -13
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
037f7dc to 62393e7 (Compare)
The `L` const generic was determining the maximum number of `iov` elements in the `IovDeque`. This causes an issue when the host kernel uses pages which can contain more entries than `L`. For example, usual 4K pages can contain 256 `iov`s while 16K pages can contain 1024 `iov`s. The current implementation on 16K (and any other page size bigger than 4K) will continue to wrap the `IovDeque` when it reaches the 256th element. This breaks the implementation, since elements written past the 256th index will not be 'duplicated' at the beginning of the queue.

The current implementation expects this behavior:

    page 1  page 2
    |ABCD|#|ABCD|
          ^ will wrap here

With big page sizes the current implementation will do:

    page 1              page 2
    |ABCD|EFGD________|#|ABCDEFGD________|
         ^ still wraps here
                       ^ but should wrap here

The solution is to calculate the maximum capacity the `IovDeque` can hold, and use it for wrapping purposes. This capacity is allowed to be bigger than `L`. The actual number of entries used in the queue will still be guarded by the `L` parameter used in the `is_full` method.

Signed-off-by: Egor Lazarchuk <yegorlz@amazon.co.uk>
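For illustration, a minimal sketch of how such a page-derived capacity could be computed (the helper name and the rounding are assumptions for this sketch, not code from the PR; it relies on the `libc` crate):

```rust
use std::mem::size_of;

// Hypothetical helper: how many iovec entries the page-aligned backing
// buffer really holds. With L = 256 this is 256 entries on 4K pages and
// 1024 entries on 16K pages.
fn iov_deque_capacity<const L: u16>() -> usize {
    // SAFETY: sysconf(_SC_PAGESIZE) has no safety preconditions.
    let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
    // The backing allocation is rounded up to whole pages, so the mirrored
    // region can contain more than L entries when pages are bigger than 4K.
    let backing_bytes =
        (usize::from(L) * size_of::<libc::iovec>()).next_multiple_of(page_size);
    backing_bytes / size_of::<libc::iovec>()
}
```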
Add note about `IovDeque` fix for non 4K pages. Signed-off-by: Egor Lazarchuk <yegorlz@amazon.co.uk>
Could you add a unit test to verify the behaviour when `L` < capacity? For example, using `L = 64` and testing a scenario that was failing before this change.
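For reference, a rough shape such a test could take (a sketch only: `new`, `push_back`, and `pop_front` are assumed names for the `IovDeque` API, not taken from this PR; `is_full` and `L = 64` come from the discussion above):

```rust
#[test]
fn test_wrap_when_l_is_smaller_than_capacity() {
    // L = 64 is far below the per-page capacity (256 entries on 4K pages,
    // 1024 on 16K pages), so the queue must wrap well before a page is full.
    const L: u16 = 64;
    let mut deque = IovDeque::<L>::new().unwrap();
    let mut buf = [0u8; 64];

    // Fill and drain repeatedly so the head crosses the wrap point several
    // times; before the fix, entries written past the old wrap point were
    // not mirrored into the second mapping on hosts with pages larger than 4K.
    for _ in 0..8 {
        for _ in 0..L {
            deque.push_back(libc::iovec {
                iov_base: buf.as_mut_ptr().cast(),
                iov_len: buf.len(),
            });
        }
        assert!(deque.is_full());
        deque.pop_front(L);
    }
}
```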
Confirmed that this fix seems to solve the problem I reported. The issue is no longer reproducible.
Changes
The `L` const generic was determining the maximum number of `iov` elements in the `IovDeque`. This causes an issue when the host kernel uses pages which can contain more entries than `L`. For example, usual 4K pages can contain 256 `iov`s, while 16K pages can contain 1024 `iov`s.

The current implementation on 16K (and any other page size bigger than 4K) will continue to wrap the `IovDeque` when it reaches the 256th element. This breaks the implementation, since elements written past the 256th index will not be 'duplicated' at the beginning of the queue.

The current implementation expects this behavior:

    page 1  page 2
    |ABCD|#|ABCD|
          ^ will wrap here

With big page sizes the current implementation will do:

    page 1              page 2
    |ABCD|EFGD________|#|ABCDEFGD________|
         ^ still wraps here
                       ^ but should wrap here
The solution is to calculate the maximum capacity the `IovDeque` can hold, and use it for wrapping purposes. This capacity is allowed to be bigger than `L`. The actual number of entries used in the queue will still be guarded by the `L` parameter used in the `is_full` method.
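A minimal toy model of that split between the two limits (a sketch with illustrative names, not the actual `IovDeque` internals):

```rust
// Illustrative model only: `capacity` is derived from the page-aligned
// backing memory and may be larger than `L` on hosts with pages > 4K.
struct RingSketch<const L: u16> {
    start: usize,    // index of the first occupied slot
    len: usize,      // number of occupied slots
    capacity: usize, // page-derived slot count, >= L
}

impl<const L: u16> RingSketch<L> {
    // Fullness is still judged against the `L` const generic...
    fn is_full(&self) -> bool {
        self.len == usize::from(L)
    }

    // ...but wrap-around happens at the real capacity of the mirrored
    // backing pages, not at `L`.
    fn next_write_slot(&self) -> usize {
        (self.start + self.len) % self.capacity
    }
}
```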
Reason
Fixes #5217
License Acceptance
By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist
- I have run `tools/devtool checkstyle` to verify that the PR passes the automated style checks.
- I have described the changes and how they are solving the problem in a clear and encompassing way.
- I have updated any relevant documentation in the PR.
- I have mentioned all user-facing changes in CHANGELOG.md.
- When making API changes, I have followed the Runbook for Firecracker API changes.
- I have tested all new and changed functionality in unit tests and/or integration tests.
- I have linked an issue to every new TODO.
- (Optional) I have considered whether the change should also be upstreamed to rust-vmm.