Replies: 1 comment 1 reply
This problem isn't related to sanoid, so you'll get better help if you take it to one of the OpenZFS support channels. When you do ask for ZFS support, include the system specs, drive types, pool space usage, and the pool's vdev configuration in every ticket you open. You'll get better and faster answers if you've already provided that information, so others don't have to dig, or ask those questions and wait for a response. No, there is no defrag for ZFS; Google that for more information.
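For example, the output of something like the following usually covers it (a rough sketch for a Linux system; `tank` is a placeholder pool name, substitute your own):

```sh
zpool status -v tank                                      # vdev layout, device names, errors, scrub state
zpool list -o name,size,alloc,free,frag,cap,health tank   # pool space, free-space fragmentation, capacity
zfs list -o space -r tank                                 # per-dataset space breakdown
zfs list -t snapshot -r tank | wc -l                      # rough snapshot count
lsblk -o NAME,MODEL,SIZE,ROTA                             # drive models; ROTA=1 means spinning rust
```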
1 reply
You may recall my ZFS LTSP server with over 700 datasets in its home-directory pool that has slowed to a crawl. There were over 600K snapshots on the pool, and it took over a month to destroy them all, but now that all of the snapshots have been removed, the server is still running horribly slowly.
I was hoping that destroying all of the snapshots would reverse the damage caused by having too many, but it doesn't seem to have made any difference. Does that match your understanding of how too many (now destroyed) snapshots affect performance?
Scrubs just check data integrity and don't do any kind of "defrag" or optimization, do they? Is there no equivalent to defrag for ZFS that could spare me having to destroy and recreate the pool to regain the performance? I think that's my only option here.
Thanks, Jim
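Free-space fragmentation can be checked from zpool list, and the closest thing to a defrag is rewriting the data onto a fresh pool with zfs send/receive. A rough sketch, assuming the existing pool is called `tank` and a new pool `newtank` has already been created on spare disks (both names are placeholders):

```sh
# Check how fragmented the pool's free space is and how full the pool is.
zpool list -o name,size,alloc,free,frag,cap,health tank

# If rebuilding is the chosen route: snapshot everything, replicate it to
# the new pool, verify the copy, and only then retire or re-create the old
# pool. The snapshot name "migrate" is also a placeholder.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -uF newtank
```

The send/receive pass lays the data back down contiguously on the new vdevs, which is essentially what a defrag would otherwise provide.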