DLPX-65491 Invalid argument when mounting ZFS filesystem #153
Conversation
# As a result, any new mount will propagate the mount event to its peer
# groups. This can inflate the number of mounts for that mount namespace,
# causing it to hit the mount-max value prematurely. To avoid this, we
# increase the mount-max value to 3 times the default
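For context, a minimal sketch of the kind of change this comment describes, assuming it is applied as a sysctl setting; the file path and the concrete value (3 × the stock 100000 default) are illustrative, not taken from the PR diff.

```sh
# Illustrative sysctl.d drop-in (path and value are assumptions, not from the PR):
# fs.mount-max defaults to 100000 mounts per mount namespace on Linux,
# so tripling it gives 300000.
printf 'fs.mount-max = 300000\n' | sudo tee /etc/sysctl.d/90-mount-max.conf
sudo sysctl --system    # re-apply sysctl settings so the new limit takes effect
```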
Can you elaborate on how you picked the number 3? Doesn't this just push the limit a bit further, so that we'd still hit it if we spin up more VDBs?
For us to hit this, we would need about 150K filesystems mounted that were then part of the systemd-resolved mount namespace. Even 50K filesystems is unlikely in our customer base, but the scalability servers have seen numbers this high, and that's where we were hitting this bug.
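One way to observe the inflation described here is to compare the host's mount count with the count inside systemd-resolved's mount namespace; a rough sketch (commands are illustrative, not from the PR):

```sh
# Rough sketch: compare mount counts between the host mount namespace
# and the namespace systemd-resolved runs in, plus the current limit.
wc -l /proc/self/mounts                      # mounts visible to the host
pid=$(pidof systemd-resolved)
sudo wc -l "/proc/${pid}/mounts"             # mounts visible to systemd-resolved
sudo sysctl fs.mount-max                     # per-namespace mount limit
```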
We could also just set all mounts as private, but the impact of that is greater, and given the release timeframe, increasing the default max seemed less risky.
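For reference, the alternative being weighed here, making mounts private so mount events stop propagating to peer groups, could look roughly like the following; the recursive scope is an assumption, and the PR did not take this route:

```sh
# Sketch of the alternative approach only: switch propagation to private
# so new mounts no longer replicate into peer groups. Scope is assumed.
sudo mount --make-rprivate /                 # recursively mark the whole tree private
findmnt -o TARGET,PROPAGATION | head         # inspect propagation flags per mount
```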
I'm fine with that; we should probably just mention the 150K filesystem limit in this comment.
Setting the mounts as private would probably indeed cause issues during upgrade-verify, since we are bind mounting domain0 in the container, right?
Also, when running upgrade-verify, would that result in even more mounts in the systemd-resolved namespace?
I tested that the upgrade container will add another factor to our already inflated count. So with that in mind, to support 100,000 filesystems we need to set the max to 3 times that value. If we start adding additional containers, we will have to adjust this.
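Spelling out the arithmetic behind the factor of 3, using the numbers quoted in this thread (the exact breakdown of the copies is my reading of the discussion, not stated verbatim):

```sh
# Back-of-the-envelope math for the new limit, based on this thread:
#   100000 filesystems to support
# x      3 (the original mounts plus the propagated copies contributed by
#           the systemd-resolved namespace and the upgrade container)
# = 300000, i.e. fs.mount-max is raised to 3x its 100000 default.
echo $(( 100000 * 3 ))   # -> 300000
```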
bors delegate+
✌️ grwilson can now approve this pull request. To approve and merge a pull request, simply reply with `bors r+`.
bors r+
Build succeeded