nearly-full vdevs are still being written to even after adding new, empty vdev #17510
Closed
justinpryzby started this conversation in General
Replies: 2 comments
-
You might be interested in this work: #17020. Though it is only in the master branch for now, it will be in the upcoming 2.4 release.
-
Looks like it's a known issue. Thanks for working on it. Looking forward to 2.4.
-
(FYI: we run ZFS on top of LVM, on top of VM storage. I know it's against best practices, but that's what we do.)
We're running zfs-2.3.1 with compress=zstd and recordsize=1M. I've been loading a large 5 TB database into a new ZFS pool. I started with a single 512 GB vdev, and once it was filled to ~80%, I added another 512 GB vdev, repeating several times up to (now) eight 512 GB vdevs. I was surprised to see that the nearly-full vdevs were still being written to, even after reaching 98% capacity and 70% fragmentation.
None of these vdevs were above ~80% full at the time I added each additional, empty vdev.
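A minimal sketch of the grow-and-check cycle, assuming a pool named `tank` backed by LVM logical volumes (the pool name and device path are placeholders):

```sh
# Grow the pool by adding another 512 GB logical volume as a new
# top-level vdev (pool name and LV path are placeholders).
zpool add tank /dev/vg0/zfs-lv8

# Inspect per-vdev allocation: the CAP and FRAG columns report each
# vdev's fill percentage and fragmentation.
zpool list -v tank
```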
I tried setting the related allocator tunables, but that doesn't seem to have changed the behavior to match my expectations.
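For reference, a sketch of the metaslab-group allocation parameters that govern when a nearly-full or fragmented vdev stops receiving new writes, assuming Linux and the module parameters documented in zfs(4); the values shown are illustrative only, not necessarily what was tried here:

```sh
# Bias allocations toward vdevs that are under-utilized relative to
# the pool (enabled by default).
cat /sys/module/zfs/parameters/metaslab_bias_enabled

# Stop allocating from metaslab groups (vdevs) whose free space falls
# below this percentage, unless every group is below it (default 0).
echo 25 > /sys/module/zfs/parameters/zfs_mg_noalloc_threshold

# Skip metaslab groups whose fragmentation exceeds this percentage,
# unless every group exceeds it (default 95).
echo 70 > /sys/module/zfs/parameters/zfs_mg_fragmentation_threshold
```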
Any insight appreciated.