
Conversation

ArrayBolt3
Contributor

Whonix-Workstation should never need to open a file, URL, or application in anything other than a Whonix-Workstation disposable VM. Allowing it to even ask to do one of these actions in an unsafe way is dangerous, since a user who isn't paying enough attention might allow the request and leak their IP address in so doing (for instance, by visiting an attacker-provided URL in a VM that has clearnet access). Don't allow Whonix-Workstation to take (or ask to take) any of these dangerous actions.

Implements QubesOS/qubes-issues#10051.

@ArrayBolt3
Contributor Author

@adrelanos Does this look good to you?

@adrelanos
Member

Perhaps...

Currently, as in this PR:

qubes.OpenInVM        *        @tag:anon-vm      @dispvm           allow
qubes.OpenInVM        *        @tag:anon-vm      @anyvm            deny

my suggestion:

qubes.OpenInVM        *        @tag:anon-vm      @dispvm           ask
qubes.OpenInVM        *        @tag:anon-vm      @anyvm            deny

etc.?

Rationale: ask is a bit more secure than allow.

@marmarek
Member

If going for the @dispvm ask rule, I'd suggest @dispvm ask default_target=@dispvm, so it's just a confirmation, without the user needing to select the disposable template again.

@ArrayBolt3
Contributor Author

Using a config like this appears to work pretty well:

qubes.OpenInVM * @tag:anon-vm @dispvm ask default_target=@dispvm
qubes.OpenInVM * @tag:anon-vm @tag:anon-vm ask
qubes.OpenInVM * @tag:anon-vm @anyvm deny

Doing this, I can open a file in a new disposable qube, or pick an existing disposable qube if I prefer; either one works. However, now the "Open in other qube" button in Thunar doesn't work, because of course it doesn't: this policy explicitly and intentionally breaks it. I can still "Copy to other qube", "Edit in disposable qube", and "View in disposable qube".

I guess we need to ask ourselves what's actually worth doing here. Do we assume that any Whonix-Workstation qube is compromised, and therefore the user should have to take the extra step of copying first so that they don't open a compromised file without thinking long enough? Do we think adding that extra step of annoyance would even help? Given the amount of user interaction needed to cause issues with this kind of qrexec endpoint, should we even bother?

The particular attack scenario I have in mind is basically:

  • User gets their Whonix-Workstation qube compromised somehow
  • User is unaware of this, and attempts to open a file or URL they believe is safe in a qube with clearnet access
  • The malicious qube intercepts the qrexec request (via a replaced qrexec-client-vm binary perhaps) and instead attempts to open a malicious file or URL that will ping the attacker's server
  • The user doesn't realize that the file or URL they're about to open isn't the one they asked for, and they proceed to open the malicious data in a qube with clearnet access
  • The attacker's server is pinged, leaking the user's IP to the attacker

Disabling opening URLs and requiring files to be copied first would likely prevent or at least reduce the risk of this kind of attack succeeding.

If we do want to bother with this, we should hide the "Open in other qube" button in Thunar's context menu.

@ArrayBolt3
Contributor Author

ArrayBolt3 commented Jul 15, 2025

Or maybe we don't hide the button; maybe we leave it, but leave it broken, and try to provide a descriptive error message so that the user knows why it's broken? That error could then point to documentation of some sort.

Edit: I just looked at the qubes-core-agent-linux code for the Thunar context menu options - it looks like it should be (close to) trivial to add a descriptive error message for qvm-open-in-vm failure. I'm not sure if Qubes OS wants a Whonix-specific check here that points to some part of our documentation saying "do XYZ if you want this button to work", but in theory it would be doable. Simply hiding the button doesn't seem practical, unless Whonix dpkg-diverts and ships an alternative copy of /usr/lib/qubes/uca_qubes.xml, which sounds like playing with fire. Thunar does support conditionals for making certain context menu items appear or not appear, but they don't allow saying something like "don't offer this option if file XYZ exists on the filesystem".
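
As a rough illustration (purely hypothetical, not the actual qubes-core-agent-linux code; zenity and the exact wording are placeholders), the Thunar action could call a small wrapper like this instead of invoking qvm-open-in-vm directly:

#!/bin/sh
# Hypothetical wrapper sketch, not shipped code: surface a visible explanation
# when qrexec policy denies the request, instead of the Thunar action failing
# silently.
if ! qvm-open-in-vm @default "$1"; then
    zenity --error --title="Open in other qube" \
        --text="Opening this file in another qube was denied by qrexec policy. See the Whonix documentation for details."
fi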

For now, I'm just going to go ahead with changing the config in a way that will break the button. We should continue to think about how to make this user-friendly in an elegant way.

Edit 2: It should be noted that if one has multiple non-disposable Whonix-Workstation qubes running and attempts to open a file in a disposable qube, they can then proceed to select one of the non-disposable Whonix-Workstation qubes as the qube to open the file in. Not really unexpected given the semantics of the configuration, but ugh, this is ugly no matter how you come at it :P I'm still questioning whether this should even be done at all...

ArrayBolt3 force-pushed the arraybolt3/anon-vm-harden branch from 4e57403 to 249ffdf on July 15, 2025 21:29
@marmarek
Member

However, now the "Open in other qube" button in Thunar doesn't work

You can also add a rule @tag:anon-vm @default ask somewhere in there - it will un-break it without increasing the available options to choose from.
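
The combined policy would then look roughly like this (a sketch; exactly where to place the new rule relative to the others, and whether to give it a default_target as well, is up to you):

qubes.OpenInVM  *  @tag:anon-vm  @default       ask
qubes.OpenInVM  *  @tag:anon-vm  @dispvm        ask default_target=@dispvm
qubes.OpenInVM  *  @tag:anon-vm  @tag:anon-vm   ask
qubes.OpenInVM  *  @tag:anon-vm  @anyvm         deny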

@ArrayBolt3
Contributor Author

@marmarek That's a really good idea, I'll do that.

ArrayBolt3 force-pushed the arraybolt3/anon-vm-harden branch from 249ffdf to 43b89db on July 17, 2025 03:18
@ArrayBolt3
Contributor Author

@marmarek Suggestion implemented and tested, appears to work!

@qubesos-bot

qubesos-bot commented Jul 19, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025072115-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025061004-4.3&flavor=update

  • system_tests_pvgrub_salt_storage

    • TC_41_HVMGrub_debian-12-xfce: test_000_standalone_vm (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError
  • system_tests_audio

  • system_tests_qwt_win10@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/tKcyh-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/qDqV_-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_basic_vm_qrexec_gui_ext4

    • TC_20_NonAudio_whonix-gateway-17-pool: test_012_qubes_desktop_run (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError
  • system_tests_dispvm

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/4E4Ei-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Failed tests

14 failures
  • system_tests_pvgrub_salt_storage

    • TC_41_HVMGrub_debian-12-xfce: test_000_standalone_vm (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError
  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio

  • system_tests_qwt_win10@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/tKcyh-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/qDqV_-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_basic_vm_qrexec_gui_ext4

    • TC_20_NonAudio_whonix-gateway-17-pool: test_012_qubes_desktop_run (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError
  • system_tests_dispvm

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/4E4Ei-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/142375#dependencies

11 fixed

Unstable tests

Performance Tests

Performance degradation:

8 performance degradations
  • debian-12-xfce_exec-data-duplex-root: 86.31 🔺 ( previous job: 70.01, degradation: 123.28%)
  • whonix-workstation-17_exec-data-duplex-root: 99.56 🔺 ( previous job: 86.00, degradation: 115.76%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 209967.00 🔺 ( previous job: 289982.00, degradation: 72.41%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 276.00 🔺 ( previous job: 1840.00, degradation: 15.00%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 217096.00 🔺 ( previous job: 289182.00, degradation: 75.07%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 389082.00 🔺 ( previous job: 433654.00, degradation: 89.72%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 5303.00 🔺 ( previous job: 8874.00, degradation: 59.76%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 3281.00 🔺 ( previous job: 4420.00, degradation: 74.23%)

Remaining performance tests:

64 tests
  • debian-12-xfce_exec: 6.50 🟢 ( previous job: 8.63, improvement: 75.31%)
  • debian-12-xfce_exec-root: 29.02 🟢 ( previous job: 29.44, improvement: 98.58%)
  • debian-12-xfce_socket: 8.65 🔺 ( previous job: 8.50, degradation: 101.80%)
  • debian-12-xfce_socket-root: 8.48 🔺 ( previous job: 8.31, degradation: 101.95%)
  • debian-12-xfce_exec-data-simplex: 62.86 🟢 ( previous job: 65.51, improvement: 95.96%)
  • debian-12-xfce_exec-data-duplex: 73.23 🟢 ( previous job: 73.55, improvement: 99.57%)
  • debian-12-xfce_socket-data-duplex: 159.08 🟢 ( previous job: 161.35, improvement: 98.59%)
  • fedora-42-xfce_exec: 9.17
  • fedora-42-xfce_exec-root: 57.61
  • fedora-42-xfce_socket: 8.38
  • fedora-42-xfce_socket-root: 8.37
  • fedora-42-xfce_exec-data-simplex: 60.51
  • fedora-42-xfce_exec-data-duplex: 70.40
  • fedora-42-xfce_exec-data-duplex-root: 94.67
  • fedora-42-xfce_socket-data-duplex: 137.76
  • whonix-gateway-17_exec: 7.06 🟢 ( previous job: 7.34, improvement: 96.22%)
  • whonix-gateway-17_exec-root: 38.24 🟢 ( previous job: 39.57, improvement: 96.64%)
  • whonix-gateway-17_socket: 7.70 🟢 ( previous job: 7.85, improvement: 98.06%)
  • whonix-gateway-17_socket-root: 7.75 🟢 ( previous job: 7.89, improvement: 98.14%)
  • whonix-gateway-17_exec-data-simplex: 81.27 🔺 ( previous job: 77.76, degradation: 104.51%)
  • whonix-gateway-17_exec-data-duplex: 79.87 🔺 ( previous job: 78.39, degradation: 101.89%)
  • whonix-gateway-17_exec-data-duplex-root: 91.33 🔺 ( previous job: 90.74, degradation: 100.64%)
  • whonix-gateway-17_socket-data-duplex: 163.47 🔺 ( previous job: 161.95, degradation: 100.94%)
  • whonix-workstation-17_exec: 7.74 🟢 ( previous job: 8.27, improvement: 93.59%)
  • whonix-workstation-17_exec-root: 53.57 🟢 ( previous job: 57.61, improvement: 92.98%)
  • whonix-workstation-17_socket: 8.64 🟢 ( previous job: 8.97, improvement: 96.33%)
  • whonix-workstation-17_socket-root: 8.75 🟢 ( previous job: 9.46, improvement: 92.49%)
  • whonix-workstation-17_exec-data-simplex: 71.46 🟢 ( previous job: 74.54, improvement: 95.87%)
  • whonix-workstation-17_exec-data-duplex: 76.62 🔺 ( previous job: 74.84, degradation: 102.38%)
  • whonix-workstation-17_socket-data-duplex: 152.53 🟢 ( previous job: 160.20, improvement: 95.21%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 197140.00 🟢 ( previous job: 101988.00, improvement: 193.30%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 232861.00 🟢 ( previous job: 14284.00, improvement: 1630.22%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 153396.00 🟢 ( previous job: 32696.00, improvement: 469.16%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 19853.00 🟢 ( previous job: 17102.00, improvement: 116.09%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 2970.00 🟢 ( previous job: 1091.00, improvement: 272.23%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11866.00 🟢 ( previous job: 11086.00, improvement: 107.04%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 126422.00 🟢 ( previous job: 122848.00, improvement: 102.91%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 200223.00 🟢 ( previous job: 167872.00, improvement: 119.27%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 102963.00 🔺 ( previous job: 108760.00, degradation: 94.67%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7723.00 🟢 ( previous job: 6356.00, improvement: 121.51%)
  • fedora-42-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 377185.00
  • fedora-42-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 136506.00
  • fedora-42-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 329948.00
  • fedora-42-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 67605.00
  • fedora-42-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 80688.00
  • fedora-42-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 3046.00
  • fedora-42-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 7479.00
  • fedora-42-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 891.00
  • fedora-42-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 358855.00
  • fedora-42-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 110852.00
  • fedora-42-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 342336.00
  • fedora-42-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 101173.00
  • fedora-42-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 88392.00
  • fedora-42-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 3804.00
  • fedora-42-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 9031.00
  • fedora-42-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 842.00
  • fedora-42-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 357509.00
  • fedora-42-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 119463.00
  • fedora-42-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 259035.00
  • fedora-42-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 99987.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 81220.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 2688.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8131.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 2187.00

@marmarek
Member

marmarek commented Jul 20, 2025

system_tests_dispvm

* TC_20_DispVM_whonix-workstation-17: [test_030_edit_file](https://openqa.qubes-os.org/tests/147265#step/TC_20_DispVM_whonix-workstation-17/11) (failure + cleanup)
  `AssertionError: Timeout while waiting for disp[0-9]* window to show`

* TC_20_DispVM_whonix-workstation-17: [test_100_open_in_dispvm](https://openqa.qubes-os.org/tests/147265#step/TC_20_DispVM_whonix-workstation-17/12) (failure + cleanup)
  `AssertionError: Timeout while waiting for disp[0-9]* window to show`

Those two indeed look related to this PR (not #23). Normally the tests assume opening a file in a dispvm works without confirmation.
I see two options:

  1. Adjust the test to also add a policy that bypasses the prompt (for Whonix)
  2. Adjust the test to confirm opening, which would also check that the prompt is actually there

The second option seems better, but also more complex. For the first option, there is the self.qrexec_policy context manager. I don't see any other place in the tests where we confirm the prompt programmatically (to copy code from there). And there will need to be some delay, to not hit the anti-focus-stealing feature (the confirmation prompt doesn't accept Enter/Esc for a short time after getting focus, to avoid being confirmed by accident). If you want to go with the second option, xdotool will be useful.
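
The xdotool part of option 2 could look roughly like this (just a sketch; the exact window title and the length of the delay are assumptions to be verified against the real prompt):

# Wait for the confirmation prompt to show up, then focus it.
xdotool search --sync --name 'Operation execution' windowactivate --sync
# Wait out the anti-focus-stealing delay before the prompt accepts Enter,
# then confirm it.
sleep 3
xdotool key --clearmodifiers Return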

The test is in https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/tests/integ/dispvm.py

@ArrayBolt3
Contributor Author

@marmarek Here's a mostly untested attempt at implementing option 2 (I did verify that the xdotool commands do the right thing when there's only one "Operation execution" window open): QubesOS/qubes-core-admin#705

marmarek merged commit c0b7fea into QubesOS:main on Jul 24, 2025
2 of 3 checks passed