Implement new ZFS pool io monitoring and the option to show ZFS pools only #385
Conversation
…ions, now it uses files "/proc/spl/kstat/zfs/*pool_name*/objset*". Needs additional work to be compatible with the option "zfs_pools_only".
…pool's stat filepath points to the objset-* file when the option is disabled, otherwise it points to the pool's stats directory. Made ZFS total pool stat collection into a separate function for cleaner code. Also removed an unnecessary variable in the default ZFS stat collection, and changed io_ticks to track the number of reads/writes, reducing unnecessary calculations.
… declarations to btop_collect.cpp so they aren't included when compiling for macos and freebsd
…se fs::path instead of strings.
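For context on the objset-* files the commits above switch to: each one is a plain-text kstat table with name/type/data columns, whose counters include writes, nwritten, reads, and nread. Below is a minimal parsing sketch assuming that layout; the struct and function names are mine, not the PR's.

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

struct ZfsIoStat {
	int64_t reads = 0;     // read op count
	int64_t writes = 0;    // write op count
	int64_t nread = 0;     // bytes read
	int64_t nwritten = 0;  // bytes written
};

// Parse one /proc/spl/kstat/zfs/<pool>/objset-* file; rows that don't fit
// the "name type number" shape (header, column labels, dataset_name) are
// simply skipped.
ZfsIoStat parse_objset(const fs::path& file) {
	ZfsIoStat s;
	std::ifstream in(file);
	std::string line, name;
	int64_t type = 0, value = 0;
	while (std::getline(in, line)) {
		std::istringstream row(line);
		if (!(row >> name >> type >> value)) continue;
		if (name == "reads") s.reads = value;
		else if (name == "writes") s.writes = value;
		else if (name == "nread") s.nread = value;
		else if (name == "nwritten") s.nwritten = value;
	}
	return s;
}
```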
FYI I tested …
With …
@trentbuck Does it show your pool when …
Also, does anything come up in your …
@simplepad I don't ever see the pool in btop; both …

For the avoidance of doubt, I do see the pool in …

My btop.log has some entries, but they look boring. I tested both as my regular user and as root; I did not see any difference in behaviour. My regular user has normal Debian/GNOME defaults: sudo and polkit privileges, but no zfs-allow(8) privileges.
PS: here is …

I wonder if the issue is that my pool's root dataset is not mounted?
@trentbuck Can you confirm that your pool appears when using btop 1.2.8? That way we can be sure that the issue is caused by my PR.
That might be it. Btop is getting disks for listing from /etc/fstab, /etc/mtab, and /proc/self/mounts, so if it's not mounted, it won't be shown.
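A rough illustration of that detection, not btop's actual code: walk /proc/self/mounts and collect mounted filesystems, so a ZFS dataset (or pool root) without a mountpoint never appears in the result.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Each /proc/self/mounts line is "device mountpoint fstype options dump pass".
// A pool whose root dataset isn't mounted contributes no line here, which
// matches the "not mounted == not shown" behaviour described above.
std::vector<std::string> mounted_zfs_datasets() {
	std::vector<std::string> found;
	std::ifstream mounts("/proc/self/mounts");
	std::string device, mountpoint, fstype, rest;
	while (mounts >> device >> mountpoint >> fstype) {
		std::getline(mounts, rest); // discard options, dump, pass
		if (fstype == "zfs") found.push_back(device); // e.g. "pool/dataset"
	}
	return found;
}
```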
Yeah, I think the option's name and description aren't good currently; I want to make them clearer before merging.
I think I misunderstood this also. I was expecting "zfs_pools_only" to show me the same data as "zpool iostat 2", i.e. aggregate I/O data for the whole pool, regardless of whether any of the datasets are mounted. For example, if I have four VMs (A B C D), and each one gets a zvol dataset that is handed directly to a VM (…)

Does that make sense?
PS: I have …
I get what you want, but currently btop acts like this: no mountpoint == disk won't be detected and won't be shown.
I looked closely at your output of …

I think your request is getting into the territory of #108, which is outside the scope of my PR.
…zfs_collect_pool_total_stats(), use try-catch to prevent possible crashes from int64_t conversions
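The commit mentions try-catch guards; for illustration, here is a minimal version of that pattern around a string-to-int64_t conversion. The helper name and the zero fallback are my choices, not necessarily the PR's.

```cpp
#include <cstdint>
#include <exception>
#include <string>

// std::stoll throws std::invalid_argument or std::out_of_range on
// malformed input, so a bad kstat line degrades to 0 instead of
// crashing the collector.
int64_t to_int64_or_zero(const std::string& text) {
	try {
		return static_cast<int64_t>(std::stoll(text));
	} catch (const std::exception&) {
		return 0;
	}
}
```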
@simplepad Do you feel this PR is ready, or do you wanna do more testing? The logging functions should be ok to keep if you change them from warning to debug; that way we can still get some useful output if someone is having issues.
We could replace them with debug calls, but users won't know something is wrong until they change their log level. On the other hand, it's not like they would randomly check the log file anyway: as far as I know, btop has no error indication in the UI. So I'm not against hiding it behind debug.
The only thing that's stopping me from marking this PR ready to review is that the …
+1 for …
Do you know what version of OpenZFS removed the "old way" files? From the below, I guess it's in 2.1.x.
Well, here's the commit that removes it: openzfs/zfs@371f88d, so yeah, probably 2.1.0.
Strange, I have a Debian 11 server with ZFS installed, and the stable version there is 2.0.3.
Well, an indication would be that the zfs io isn't showing up, hopefully leading them to create a bug report. And then, if they haven't provided a debug logfile, we can always ask for one. I've been thinking about adding some more functionality to the logger, for example optional parameters "log_identifier" and "max_reported", to easily be able to mute log entries with the same "log_identifier".
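Purely as a sketch of that idea, since none of this exists in btop's logger yet: entries sharing a log_identifier would stop being written once max_reported is reached.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical logger front-end with the suggested optional parameters.
// An empty identifier or a negative max_reported means "never mute".
void log_write(const std::string& message,
               const std::string& log_identifier = "",
               int max_reported = -1) {
	static std::unordered_map<std::string, int> reported;
	if (!log_identifier.empty() && max_reported >= 0
	    && ++reported[log_identifier] > max_reported)
		return; // muted: this identifier already hit its report limit
	std::cerr << message << '\n';
}
```

For example, log_write("zfs: failed to parse objset file", "zfs_objset_parse", 3) would print at most three times per run.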
I haven't really used zfs so can't really tell what the best terminology for a regular user would be, but in general I would guess having "hide" in the name rather than "only" is probably better. There are several options already in the program that probably should be changed to better clarify what they actually do...
Unless we are talking 100+ files being read compared to the "old" way, I don't think it will be noticeable performance-wise. But I would say it's up to you if you wanna keep both ways.
Although I don't really have that many datasets, I did some testing, and there wasn't any noticeable difference between the two ways. After all, the files we're reading are stored in …

Marking this PR as Ready for review.
Great work man!
And thank you for the continued contributions!
It's in bullseye-backports: https://packages.debian.org/search?keywords=zfs-dkms

You can put something like this in …
FWIW, I think 10 is typical for a single-user laptop, and 100 to 1000 is typical for a multi-user server. AFAICT you get one objset file for each mounted dataset. On a server with 20 users (thus at least 40 datasets mounted, one for each $HOME and one for each $MAIL), I see: …

On a single-user laptop, those numbers are all …

At the default refresh rate of 0.5Hz, I doubt 100 files is a big deal.
@trentbuck I see, we should be good then.
I'm wondering if ZFS users would like to see ZFS errors in btop, or if that is useless as they monitor those another way... I figure this would be a decent place to ask.
This PR implements new ZFS pool io monitoring, compatible with newer versions of ZFS, where /proc/spl/kstat/zfs/*pool_name*/io files are not present anymore. Instead, it uses /proc/spl/kstat/zfs/*pool_name*/objset-* files, which allows for per-dataset io stats.

Also, this PR adds the zfs_pools_only option, which if enabled will hide all datasets and only show pools in disk listings, and will count io stats differently.

In more detail:

When zfs_pools_only is set to true: … (all objset-* files in the pool directory, pool root included; see the sketch after this description)

When zfs_pools_only is set to false: …

Closes #383
Partially closes #380; pool health monitoring isn't implemented.
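A sketch of the true case above, reusing ZfsIoStat and parse_objset from the earlier parsing sketch; the directory layout is as described in this PR, while the helper name is mine.

```cpp
#include <cstdint>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Same types as in the earlier sketch; parse_objset is defined there.
struct ZfsIoStat { int64_t reads = 0, writes = 0, nread = 0, nwritten = 0; };
ZfsIoStat parse_objset(const fs::path& file);

// Sum every objset-* file under the pool's kstat directory (pool root
// included) into a single per-pool total.
ZfsIoStat collect_pool_total(const std::string& pool_name) {
	ZfsIoStat total;
	const fs::path pool_dir = fs::path("/proc/spl/kstat/zfs") / pool_name;
	for (const auto& entry : fs::directory_iterator(pool_dir)) {
		const std::string fname = entry.path().filename().string();
		if (fname.rfind("objset-", 0) != 0) continue; // skip non-objset files
		const ZfsIoStat s = parse_objset(entry.path());
		total.reads += s.reads;
		total.writes += s.writes;
		total.nread += s.nread;
		total.nwritten += s.nwritten;
	}
	return total;
}
```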