Conversation

Contributor

@ben-grande ben-grande commented Jul 16, 2025

@ben-grande ben-grande changed the title Set less memory to domain before pausing for preload Set less memory to preload before pausing Jul 16, 2025

codecov bot commented Jul 16, 2025

Codecov Report

Attention: Patch coverage is 6.97674% with 80 lines in your changes missing coverage. Please review.

Project coverage is 70.53%. Comparing base (8de76b2) to head (0d0d716).
Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
qubes/qmemman/systemstate.py 0.00% 41 Missing ⚠️
qubes/tools/qmemmand.py 0.00% 13 Missing ⚠️
qubes/vm/qubesvm.py 8.33% 11 Missing ⚠️
qubes/vm/dispvm.py 16.66% 10 Missing ⚠️
qubes/qmemman/client.py 37.50% 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #702      +/-   ##
==========================================
- Coverage   70.82%   70.53%   -0.30%     
==========================================
  Files          61       61              
  Lines       13473    13546      +73     
==========================================
+ Hits         9542     9554      +12     
- Misses       3931     3992      +61     
Flag Coverage Δ
unittests 70.53% <6.97%> (-0.30%) ⬇️

Flags with carried forward coverage won't be shown.


resp = "FAIL"
domid, memory = str(data_args[0]), int(data_args[1])
# TODO: ben: create the function to set memory to a domain.
if system_state.mem_set(domid, memory):
Member

What was the issue with using system_state.mem_set? Using it under if looks weird, as it always returns None, but otherwise it looks correct?
What you might need to add is waiting for the domain to actually balloon down to the requested size - possibly with a simple loop that calls refresh_mem_actual and checks mem_actual, with some timeout.
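Such a wait loop could be sketched as below — a minimal, self-contained illustration, not the PR's actual code. `StubDomain` and `wait_for_balloon_down` are hypothetical names standing in for the real qmemman domain state; only `refresh_mem_actual` and `mem_actual` come from the comment above.

```python
import time

class StubDomain:
    """Hypothetical stand-in for a qmemman domain entry. The real code
    re-reads the balloon driver's current size from xenstore; this stub
    just steps 100 MiB toward the target on each refresh so the loop
    can be exercised standalone."""
    def __init__(self, start, target):
        self.mem_actual = start
        self._target = target

    def refresh_mem_actual(self):
        # Simulate the guest ballooning down gradually.
        step = 100 * 1024 * 1024
        if self.mem_actual > self._target:
            self.mem_actual = max(self._target, self.mem_actual - step)

def wait_for_balloon_down(dom, target, timeout=5.0, interval=0.05):
    """Poll mem_actual until the domain has ballooned down to `target`
    bytes; return False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        dom.refresh_mem_actual()
        if dom.mem_actual <= target:
            return True
        time.sleep(interval)
    return False

dom = StubDomain(start=400 * 1024 * 1024, target=200 * 1024 * 1024)
print(wait_for_balloon_down(dom, 200 * 1024 * 1024))  # → True
```

The timeout matters: if the guest's balloon driver stalls, the loop must give up rather than block qmemman indefinitely.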

Contributor Author

I didn't test with mem_set() alone because I saw that do_balloon() was calling it in a loop, so mem_set() was there to hint at what to look into next; I didn't test with only it.

I reused do_balloon() to avoid code duplication, but many functions there use the class attribute self.dom_dict, and that might cause interference. I will try a new function with a simple loop; do_balloon() might indeed be overkill for a single domain changing values. I thought checking the xenfree value was necessary to be sure the memory was returned to Xen, as well as reusing the waiting time and transfer speed for the retry.

@ben-grande ben-grande force-pushed the preload-qmemman branch 3 times, most recently from 8d86200 to c514c51 on July 18, 2025 15:29
@ben-grande
Contributor Author

ben-grande commented Jul 18, 2025

The simplified loop helped. It is working! Preloaded 10 qubes at once and every qube has low memory assigned to it. Trying to start a "heavy" application such as Firefox doesn't crash.

The use of a dictionary is to allow multiple qubes to be set up at once: "0:0 1:0 2:0 3:0", but I haven't found a use case for that yet, as preloads may be at different stages of preloading. If you think it complicates things, I will remove it.
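Parsing such a space-separated request string into a dictionary could be sketched like this — `parse_mem_requests` is a hypothetical name for illustration, not the PR's actual function; only the "domid:memory" string format comes from the comment above.

```python
def parse_mem_requests(data):
    """Parse space-separated 'domid:memory' pairs, e.g. '0:0 1:0 2:0 3:0',
    into a {domid: memory} dict (domid kept as a string, memory as int)."""
    return {
        domid: int(mem)
        for domid, mem in (pair.split(":", 1) for pair in data.split())
    }

print(parse_mem_requests("0:0 1:0 2:0 3:0"))
# → {'0': 0, '1': 0, '2': 0, '3': 0}
```

A dict also deduplicates repeated domids for free, with the last occurrence winning.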

@ben-grande ben-grande marked this pull request as ready for review July 18, 2025 16:23
@marmarek
Member

PipelineRetry

@qubesos-bot

qubesos-bot commented Jul 19, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025071903-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025061004-4.3&flavor=update

  • system_tests_pvgrub_salt_storage

  • system_tests_dispvm

    • TC_20_DispVM_whonix-workstation-17: test_030_edit_file (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

    • TC_20_DispVM_whonix-workstation-17: test_100_open_in_dispvm (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

  • system_tests_devices

    • TC_00_List_whonix-gateway-17: test_011_list_dm_mounted (failure)
      AssertionError: 'test-dm' == 'test-dm' : Device test-inst-vm:dm-0::...
  • system_tests_audio@hw1

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win11@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Failed tests

10 failures
  • system_tests_pvgrub_salt_storage

  • system_tests_extra

    • TC_00_QVCTest_whonix-workstation-17: test_010_screenshare (failure)
      AssertionError: 1 != 0 : Timeout waiting for /dev/video0 in test-in...
  • system_tests_dispvm

    • TC_20_DispVM_whonix-workstation-17: test_030_edit_file (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

    • TC_20_DispVM_whonix-workstation-17: test_100_open_in_dispvm (failure + cleanup)
      AssertionError: Timeout while waiting for disp[0-9]* window to show

  • system_tests_devices

    • TC_00_List_whonix-gateway-17: test_011_list_dm_mounted (failure)
      AssertionError: 'test-dm' == 'test-dm' : Device test-inst-vm:dm-0::...
  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio@hw1

  • system_tests_qwt_win10_seamless@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win11@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/142375#dependencies

10 fixed

Unstable tests

Performance Tests

Performance degradation:

9 performance degradations
  • debian-12-xfce_exec-data-simplex: 72.42 🔺 ( previous job: 65.51, degradation: 110.54%)
  • debian-12-xfce_exec-data-duplex-root: 82.28 🔺 ( previous job: 70.01, degradation: 117.53%)
  • whonix-gateway-17_exec-root: 43.81 🔺 ( previous job: 39.57, degradation: 110.71%)
  • whonix-gateway-17_socket: 9.83 🔺 ( previous job: 7.85, degradation: 125.16%)
  • whonix-gateway-17_socket-root: 8.70 🔺 ( previous job: 7.89, degradation: 110.24%)
  • whonix-gateway-17_exec-data-duplex-root: 101.47 🔺 ( previous job: 90.74, degradation: 111.83%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 253646.00 🔺 ( previous job: 289982.00, degradation: 87.47%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 14119.00 🔺 ( previous job: 17102.00, degradation: 82.56%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 105280.00 🔺 ( previous job: 122848.00, degradation: 85.70%)

Remaining performance tests:

63 tests
  • debian-12-xfce_exec: 7.29 🟢 ( previous job: 8.63, improvement: 84.48%)
  • debian-12-xfce_exec-root: 29.21 🟢 ( previous job: 29.44, improvement: 99.24%)
  • debian-12-xfce_socket: 8.96 🔺 ( previous job: 8.50, degradation: 105.43%)
  • debian-12-xfce_socket-root: 8.56 🔺 ( previous job: 8.31, degradation: 102.94%)
  • debian-12-xfce_exec-data-duplex: 67.72 🟢 ( previous job: 73.55, improvement: 92.08%)
  • debian-12-xfce_socket-data-duplex: 160.98 🟢 ( previous job: 161.35, improvement: 99.77%)
  • fedora-42-xfce_exec: 9.10
  • fedora-42-xfce_exec-root: 58.04
  • fedora-42-xfce_socket: 8.05
  • fedora-42-xfce_socket-root: 8.53
  • fedora-42-xfce_exec-data-simplex: 69.44
  • fedora-42-xfce_exec-data-duplex: 71.77
  • fedora-42-xfce_exec-data-duplex-root: 98.53
  • fedora-42-xfce_socket-data-duplex: 156.15
  • whonix-gateway-17_exec: 6.91 🟢 ( previous job: 7.34, improvement: 94.14%)
  • whonix-gateway-17_exec-data-simplex: 78.89 🔺 ( previous job: 77.76, degradation: 101.45%)
  • whonix-gateway-17_exec-data-duplex: 82.13 🔺 ( previous job: 78.39, degradation: 104.78%)
  • whonix-gateway-17_socket-data-duplex: 169.74 🔺 ( previous job: 161.95, degradation: 104.81%)
  • whonix-workstation-17_exec: 7.73 🟢 ( previous job: 8.27, improvement: 93.38%)
  • whonix-workstation-17_exec-root: 58.83 🔺 ( previous job: 57.61, degradation: 102.11%)
  • whonix-workstation-17_socket: 8.78 🟢 ( previous job: 8.97, improvement: 97.90%)
  • whonix-workstation-17_socket-root: 10.34 🔺 ( previous job: 9.46, degradation: 109.33%)
  • whonix-workstation-17_exec-data-simplex: 62.25 🟢 ( previous job: 74.54, improvement: 83.51%)
  • whonix-workstation-17_exec-data-duplex: 81.62 🔺 ( previous job: 74.84, degradation: 109.07%)
  • whonix-workstation-17_exec-data-duplex-root: 86.93 🔺 ( previous job: 86.00, degradation: 101.08%)
  • whonix-workstation-17_socket-data-duplex: 169.35 🔺 ( previous job: 160.20, degradation: 105.71%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 135279.00 🟢 ( previous job: 101988.00, improvement: 132.64%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 56932.00 🟢 ( previous job: 14284.00, improvement: 398.57%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 38763.00 🟢 ( previous job: 32696.00, improvement: 118.56%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 1354.00 🟢 ( previous job: 1091.00, improvement: 124.11%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11835.00 🟢 ( previous job: 11086.00, improvement: 106.76%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 4559.00 🟢 ( previous job: 1840.00, improvement: 247.77%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 500991.00 🟢 ( previous job: 289182.00, improvement: 173.24%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 439286.00 🟢 ( previous job: 433654.00, improvement: 101.30%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 157711.00 🔺 ( previous job: 167872.00, degradation: 93.95%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 103257.00 🔺 ( previous job: 108760.00, degradation: 94.94%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 9261.00 🟢 ( previous job: 8874.00, improvement: 104.36%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7920.00 🟢 ( previous job: 6356.00, improvement: 124.61%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4954.00 🟢 ( previous job: 4420.00, improvement: 112.08%)
  • fedora-42-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 387786.00
  • fedora-42-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 274280.00
  • fedora-42-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 308223.00
  • fedora-42-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 139265.00
  • fedora-42-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 83540.00
  • fedora-42-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 5449.00
  • fedora-42-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8108.00
  • fedora-42-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 2626.00
  • fedora-42-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 334154.00
  • fedora-42-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 225791.00
  • fedora-42-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 281044.00
  • fedora-42-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 125146.00
  • fedora-42-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 36961.00
  • fedora-42-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 2612.00
  • fedora-42-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8504.00
  • fedora-42-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 1051.00
  • fedora-42-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 346293.00
  • fedora-42-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 208754.00
  • fedora-42-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 292489.00
  • fedora-42-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 106325.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 46029.00
  • fedora-42-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 2084.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8014.00
  • fedora-42-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 1975.00

@marmarek
Member

This needs a rebase now

@marmarek marmarek merged commit 0d0d716 into QubesOS:main Jul 20, 2025
3 checks passed