File-based disk-only VM snapshot with KVM as hypervisor #10632
base: main
Conversation
@blueorangutan package
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@             Coverage Diff              @@
##               main   #10632      +/-   ##
============================================
- Coverage     16.40%   16.39%    -0.02%
- Complexity    13590    13604       +14
============================================
  Files          5692     5705       +13
  Lines        501976   502898      +922
  Branches      60795    60884       +89
============================================
+ Hits          82369    82439       +70
- Misses       410449   411300      +851
- Partials       9158     9159        +1
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13204 |
@rohityadavcloud @sureshanaparti @weizhouapache could we run the CI? |
@blueorangutan test |
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests |
[SF] Trillian test result (tid-13177)
Description
This PR implements the spec available at #9524. For more information regarding it, please read the spec.
Furthermore, the following changes that are not contemplated in the spec were added:
- A `snapshot.merge.timeout` agent property was added. It is only considered if `libvirt.events.enabled` is true;
- When `libvirt.events.enabled` is true, ACS will register to gather events from Libvirt and will collect information on the process, providing a progress report in the logs. If the configuration is false, the old process is used.

Types of changes
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
Bug Severity
Screenshots (if appropriate):
How Has This Been Tested?
Basic Tests
I created a test VM to carry out the tests below. Additionally, after performing the relevant operations, the VM's XML and the storage were checked to verify that the snapshots existed.
Snapshot Creation
The tests below were also repeated with the VM stopped.
Snapshot Reversion
Snapshot Removal
Advanced Tests
Deletion Test
All tests were carried out with the VM stopped.
The snapshot was marked as hidden and was not removed from storage.
Snapshot s3 was removed normally. Snapshot s2 was merged with snapshot s4.
Snapshot s4 was marked as hidden and was not removed from storage.
Snapshot s5 was removed normally. Snapshot s4 was merged with the delta of the VM's volume.
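The deletion rules exercised above can be modeled as a toy backing chain. This is an illustrative sketch only (the `DiskOnlySnapshots` class and its methods are hypothetical names, not the PR's actual code): a delta that still backs a newer snapshot is only marked hidden, while a leaf delta is removed outright and any hidden predecessors are then swept up, standing in for the merges observed in the tests.

```python
class DiskOnlySnapshots:
    """Toy model of the deletion behavior (illustrative, not ACS code).

    The chain is ordered oldest first, like a qcow2 backing chain:
    base <- s1 <- s2 <- ... <- active delta.
    """

    def __init__(self, names):
        self.chain = list(names)   # snapshots still present on storage
        self.hidden = set()        # deleted logically, kept on storage

    def delete(self, name):
        i = self.chain.index(name)
        if i < len(self.chain) - 1:
            # A newer delta still backs onto this one: it cannot be
            # removed from storage yet, so it is only marked hidden.
            self.hidden.add(name)
        else:
            # Leaf delta: remove it, then sweep adjacent hidden deltas,
            # which stands in for the merges observed in the tests.
            self.chain.pop()
            while self.chain and self.chain[-1] in self.hidden:
                self.hidden.discard(self.chain.pop())


c = DiskOnlySnapshots(["s1", "s2", "s3"])
c.delete("s1")   # s1 is marked hidden and stays on storage
c.delete("s3")   # s3 is a leaf, so it is removed normally
```

Note that this toy model ignores reversion (which can turn a middle snapshot into a leaf); it only illustrates why a snapshot with dependents must be hidden rather than deleted.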
Reversion Test
Snapshot s1 was marked as hidden and was not removed from storage.
Concurrent Test
I created 4 VMs and took a VM snapshot of each. Then, I triggered the removal of all of them at the same time. All snapshots were removed simultaneously and successfully.
Test with Multiple Volumes
I created a VM with one datadisk and attached 8 more datadisks (10 volumes in total), took two VM snapshots, and then removed them one at a time. The snapshots were removed successfully.
Tests Changing the `snapshot.merge.timeout` Config

Tests Related to Volume Resize with Disk-Only VM Snapshots on KVM
`qemu-img info`
`qemu-img info`
The last two tests were repeated on a VM with several snapshots, so that a merge between snapshots was performed. The result was the same.
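The `qemu-img info` checks mentioned above can be automated by parsing the tool's JSON output (qemu-img supports `--output=json` and reports sizes in bytes). The sample below is a hand-written stand-in for real output; in an actual check the string would come from the command's stdout:

```python
import json

# Hand-written sample of what `qemu-img info --output=json` prints for a
# qcow2 delta; a real check would capture this from subprocess output.
sample = """
{
    "virtual-size": 10737418240,
    "format": "qcow2",
    "backing-filename": "/var/lib/libvirt/images/base.qcow2"
}
"""


def virtual_size_gib(info_json: str) -> float:
    """Return the virtual size reported by qemu-img info, in GiB."""
    info = json.loads(info_json)
    return info["virtual-size"] / 1024 ** 3


print(virtual_size_gib(sample))  # prints 10.0
```

After a resize, comparing this value against the requested size confirms the chain was updated; the `backing-filename` field can likewise be used to verify the snapshot chain itself.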
Tests Related to Events:

It was verified in the `cloud.usage_event` table that the resize event was correctly triggered, and it was also observed via the GUI that the account's resource limit was updated.
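The event-driven flow described in this PR (register for Libvirt events, report merge progress, give up after `snapshot.merge.timeout`) can be sketched with a minimal stand-in. The `wait_for_merge` function and the `threading.Event` standing in for Libvirt's block-job-completed signal are illustrative assumptions, not the actual ACS agent code:

```python
import threading


def wait_for_merge(job_done: threading.Event, timeout_seconds: float) -> bool:
    """Wait for a (simulated) block-commit completion signal.

    Mirrors the idea behind the `snapshot.merge.timeout` property: the
    event-driven wait only applies when libvirt.events.enabled is true;
    with events disabled, the old (polling-based) process would be used.
    """
    finished = job_done.wait(timeout=timeout_seconds)
    if not finished:
        raise TimeoutError(
            "snapshot merge did not finish within snapshot.merge.timeout")
    return True


# Usage: a worker signals completion; the caller waits with a timeout.
done = threading.Event()
threading.Timer(0.1, done.set).start()  # simulate Libvirt firing the event
print(wait_for_merge(done, timeout_seconds=5))  # prints True
```

The timeout guards against a merge that never completes; raising instead of blocking forever is what lets the agent log a failure and move on.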