
lvm: run blkdiscard before remove #267

Merged: 1 commit, Jun 23, 2019

Conversation

@tasket (Contributor) commented Jun 21, 2019

issue #5077
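
In effect, volume removal in the lvm.py driver now issues a discard on the device before removing it. For a thin volume the sequence roughly amounts to the following (illustrative volume name, not the actual driver code):

# run as root in dom0; the volume name is an example
blkdiscard /dev/qubes_dom0/vm-example-private
lvremove -f qubes_dom0/vm-example-private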

@brendanhoar (Contributor) commented Jun 22, 2019

[See replies below - Chris's code works, I just failed to reboot first.]

Thanks Chris.

I pulled the replacement lvm.py and installed it. It doesn't seem to be working for me:

2019-06-21 20:21:30,899 Removing volume private: qubes_dom0/vm-disp6977-private
2019-06-21 20:21:37,214 unhandled exception while calling src=b'dom0' meth=b'admin.vm.Kill' dest=b'disp6977' arg=b'' len(untrusted_payload)=0
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/qubes/api/__init__.py", line 264, in respond
    self.send_event)
  File "/usr/lib/python3.5/site-packages/qubes/api/__init__.py", line 125, in __init__
    self.dest = self.app.domains[dest.decode('ascii')]
  File "/usr/lib/python3.5/site-packages/qubes/app.py", line 467, in __getitem__
    raise KeyError(key)
KeyError: 'disp6977'

I did try adding a sudo before the blkdiscard, but that didn't seem to make it work.

Reverting the code to baseline stopped the error from appearing.

Brendan

@marmarek (Member) commented:

The above exception looks unrelated to this change; it's more likely QubesOS/qubes-issues#5105.

@brendanhoar (Contributor) commented Jun 22, 2019

[See replies below - Chris's code works, I just failed to reboot first.]
OK, red herring then.

I created two large random files in the disposable VM. If I delete file #1 inside the VM, I see discards. If I then shut down the VM, I do not see discards.

This is the case both with the pull request code as-is and with a modification that replaces 'blkdiscard' with 'sudo', 'blkdiscard' in the argument list.

Brendan

EDIT: hmm, the tested VM has a private volume size of 16 GB, which is also the maximum discard I/O size for LVs in the thin pool. I tried again with a smaller private volume, but got the same results: large discard activity on deletion of the first file inside the VM, little to no discard activity on shutdown of the disposable VM.

@tasket (Contributor, Author) commented Jun 22, 2019

@brendanhoar If you leave data in a volume that will be removed, you should see discards then.

For example: add 50 MB of random data to the domU's /home/user, shut down the VM, then qvm-remove it.

Simpler example: add 50 MB of data to the domU root fs, e.g. '/testfile', then shut down the VM. When the *-root-snap volume is automatically removed, you should see discards.
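
Something along these lines, run inside the VM, should do it (the file name and size are just for illustration):

# inside the domU: leave ~50 MB of data on the root fs, then shut down
sudo dd if=/dev/urandom of=/testfile bs=1M count=50
sync
sudo shutdown -h now

# back in dom0, watch for discards while the *-root-snap LV is cleaned up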

@tasket (Contributor, Author) commented Jun 22, 2019

Also note that the code change won't take effect until some restart procedure has been done. I think restarting qubesd.service may be enough, but the most reliable way is to reboot Qubes.
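
If you want to try the service restart route, something like this in dom0 should do it (a full reboot is still the safest bet):

# reload qubesd so the updated lvm.py is picked up
sudo systemctl restart qubesd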

@brendanhoar (Contributor) commented:

> Also note that the code change won't take effect until some restart procedure has been done. I think restarting qubesd.service may be enough, but the most reliable way is to reboot Qubes.

Yeah, I was getting to that point. :) I'll let you know after I restart.

@brendanhoar (Contributor) commented:

I rebooted with Chris's version of lvm.py installed.
Note that I have no VMs set to autostart, so only dom0 is running.
I ran forkstat in dom0 to watch for invocations of blkdiscard:

[admin@dom0 ~]$ ./forkstat -h
forkstat, version 0.02.09

usage: ./forkstat [-c|-d|-D|-e|-E|-g|-h|-l|-s|-S|-q|-x|-X]
-c	use task comm field for process name.
-d	strip off directory path from process name.
-D	specify run duration in seconds.
-e	select which events to monitor.
-E	equivalent to -e all.
-g	show glyphs for event types.
-h	show this help.
-l	force stdout line buffering.
-r	run with real time FIFO scheduler.
-s	show short process name.
-S	show event statistics at end of the run.
-q	run quietly and enable -S option.
-x	show extra process information.
-X	equivalent to -EgrSx.
[admin@dom0 ~]$ sudo ./forkstat -E|grep blkdiscard

Then I started a disposable VM. Apparently, there's a lot of cleanup work going on during VM startups!

22:43:59 exec   4135                 blkdiscard /dev/qubes_dom0/vm-sys-net-root-snap
22:43:59 exit   4135    256   0.014s blkdiscard /dev/qubes_dom0/vm-sys-net-root-snap
22:43:59 exec   4136                 blkdiscard /dev/qubes_dom0/vm-sys-net-volatile
22:43:59 exit   4136    256   0.006s blkdiscard /dev/qubes_dom0/vm-sys-net-volatile
22:43:59 exec   4137                 blkdiscard /dev/qubes_dom0/vm-sys-net-private-snap
22:43:59 exit   4137    256   0.005s blkdiscard /dev/qubes_dom0/vm-sys-net-private-snap
22:44:15 exec   4512                 blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-root-snap
22:44:15 exit   4512    256   0.006s blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-root-snap
22:44:15 exec   4513                 blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-volatile
22:44:15 exit   4513    256   0.006s blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-volatile
22:44:15 exec   4514                 blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-private-snap
22:44:15 exit   4514    256   0.005s blkdiscard /dev/qubes_dom0/vm-sys-mirage-fw2-61-private-snap
22:44:17 exec   4810                 blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-root-snap
22:44:17 exit   4810    256   0.006s blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-root-snap
22:44:17 exec   4811                 blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-private-snap
22:44:17 exit   4811    256   0.005s blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-private-snap
22:44:17 exec   4812                 blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-volatile
22:44:17 exit   4812    256   0.005s blkdiscard /dev/qubes_dom0/vm-sys-whonix-4-volatile
22:44:24 exec   5130                 blkdiscard /dev/qubes_dom0/vm-disp5439-volatile
22:44:24 exit   5130    256   0.007s blkdiscard /dev/qubes_dom0/vm-disp5439-volatile
22:44:24 exec   5131                 blkdiscard /dev/qubes_dom0/vm-disp5439-root-snap
22:44:24 exit   5131    256   0.007s blkdiscard /dev/qubes_dom0/vm-disp5439-root-snap
22:44:24 exec   5132                 blkdiscard /dev/qubes_dom0/vm-disp5439-private-snap
22:44:24 exit   5132    256   0.007s blkdiscard /dev/qubes_dom0/vm-disp5439-private-snap

A VM terminal window was opened.

cat /dev/urandom > delbig1
cat /dev/urandom > delbig2
rm delbig1 && sync

I noted a large number of discards issued to the hardware via the monitoring script.

sudo shutdown -h now

Then I shut down the disposable VM and saw additional blkdiscard invocations via forkstat...

22:45:45 exec   5853                 blkdiscard /dev/qubes_dom0/vm-disp5439-volatile
22:45:45 exit   5853      0   0.011s blkdiscard /dev/qubes_dom0/vm-disp5439-volatile
22:45:45 exec   5854                 blkdiscard /dev/qubes_dom0/vm-disp5439-private-snap
22:45:45 exec   5855                 blkdiscard /dev/qubes_dom0/vm-disp5439-root-snap
22:45:47 exit   5854      0   1.894s blkdiscard /dev/qubes_dom0/vm-disp5439-private-snap
22:45:47 exit   5855      0   1.972s blkdiscard /dev/qubes_dom0/vm-disp5439-root-snap

...as well as my monitoring script.

Looks good to me!
