
Commit 2708b55

sean-jc authored and gregkh committed
KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock
commit 44d1745 upstream.

Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock
on x86 due to a chain of locks and SRCU synchronizations.  Translating the
below lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on
CPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the
fairness of r/w semaphores).

    CPU0                     CPU1                     CPU2
1   lock(&kvm->slots_lock);
2                            lock(&vcpu->mutex);
3                                                    lock(&kvm->srcu);
4                            lock(cpu_hotplug_lock);
5   lock(kvm_lock);
6                            lock(&kvm->slots_lock);
7                                                    lock(cpu_hotplug_lock);
8   sync(&kvm->srcu);

Note, there are likely more potential deadlocks in KVM x86, e.g. the same
pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with
__kvmclock_cpufreq_notifier():

  cpuhp_cpufreq_online()
  |
  -> cpufreq_online()
     |
     -> cpufreq_gov_performance_limits()
        |
        -> __cpufreq_driver_target()
           |
           -> __target_index()
              |
              -> cpufreq_freq_transition_begin()
                 |
                 -> cpufreq_notify_transition()
                    |
                    -> ... __kvmclock_cpufreq_notifier()

But, actually triggering such deadlocks is beyond rare due to the
combination of dependencies and timings involved.  E.g. the cpufreq
notifier is only used on older CPUs without a constant TSC, mucking with
the NX hugepage mitigation while VMs are running is very uncommon, and
doing so while also onlining/offlining a CPU (necessary to generate
contention on cpu_hotplug_lock) would be even more unusual.

The most robust solution to the general cpu_hotplug_lock issue is likely
to switch vm_list to be an RCU-protected list, e.g. so that x86's cpufreq
notifier doesn't need to take kvm_lock.  For now, settle for fixing the
most blatant deadlock, as switching to an RCU-protected list is a much
more involved change, but add a comment in locking.rst to call out that
care needs to be taken when holding kvm_lock and walking vm_list.
======================================================
WARNING: possible circular locking dependency detected
6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S O
------------------------------------------------------
tee/35048 is trying to acquire lock:
ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]

but task is already holding lock:
ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (kvm_lock){+.+.}-{3:3}:
       __mutex_lock+0x6a/0xb40
       mutex_lock_nested+0x1f/0x30
       kvm_dev_ioctl+0x4fb/0xe50 [kvm]
       __se_sys_ioctl+0x7b/0xd0
       __x64_sys_ioctl+0x21/0x30
       x64_sys_call+0x15d0/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #2 (cpu_hotplug_lock){++++}-{0:0}:
       cpus_read_lock+0x2e/0xb0
       static_key_slow_inc+0x16/0x30
       kvm_lapic_set_base+0x6a/0x1c0 [kvm]
       kvm_set_apic_base+0x8f/0xe0 [kvm]
       kvm_set_msr_common+0x9ae/0xf80 [kvm]
       vmx_set_msr+0xa54/0xbe0 [kvm_intel]
       __kvm_set_msr+0xb6/0x1a0 [kvm]
       kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]
       kvm_vcpu_ioctl+0x485/0x5b0 [kvm]
       __se_sys_ioctl+0x7b/0xd0
       __x64_sys_ioctl+0x21/0x30
       x64_sys_call+0x15d0/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #1 (&kvm->srcu){.+.+}-{0:0}:
       __synchronize_srcu+0x44/0x1a0
       synchronize_srcu_expedited+0x21/0x30
       kvm_swap_active_memslots+0x110/0x1c0 [kvm]
       kvm_set_memslot+0x360/0x620 [kvm]
       __kvm_set_memory_region+0x27b/0x300 [kvm]
       kvm_vm_ioctl_set_memory_region+0x43/0x60 [kvm]
       kvm_vm_ioctl+0x295/0x650 [kvm]
       __se_sys_ioctl+0x7b/0xd0
       __x64_sys_ioctl+0x21/0x30
       x64_sys_call+0x15d0/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #0 (&kvm->slots_lock){+.+.}-{3:3}:
       __lock_acquire+0x15ef/0x2e30
       lock_acquire+0xe0/0x260
       __mutex_lock+0x6a/0xb40
       mutex_lock_nested+0x1f/0x30
       set_nx_huge_pages+0x179/0x1e0 [kvm]
       param_attr_store+0x93/0x100
       module_attr_store+0x22/0x40
       sysfs_kf_write+0x81/0xb0
       kernfs_fop_write_iter+0x133/0x1d0
       vfs_write+0x28d/0x380
       ksys_write+0x70/0xe0
       __x64_sys_write+0x1f/0x30
       x64_sys_call+0x281b/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

Cc: Chao Gao <chao.gao@intel.com>
Fixes: 0bf5049 ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock")
Cc: stable@vger.kernel.org
Reviewed-by: Kai Huang <kai.huang@intel.com>
Acked-by: Kai Huang <kai.huang@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20240830043600.127750-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 06f0f8c commit 2708b55

2 files changed: +39, -24 lines

Documentation/virt/kvm/locking.rst

Lines changed: 23 additions & 9 deletions
@@ -9,7 +9,7 @@ KVM Lock Overview
 
 The acquisition orders for mutexes are as follows:
 
-- cpus_read_lock() is taken outside kvm_lock
+- cpus_read_lock() is taken outside kvm_lock and kvm_usage_lock
 
 - kvm->lock is taken outside vcpu->mutex
 
@@ -24,6 +24,12 @@ The acquisition orders for mutexes are as follows:
   are taken on the waiting side when modifying memslots, so MMU notifiers
   must not take either kvm->slots_lock or kvm->slots_arch_lock.
 
+cpus_read_lock() vs kvm_lock:
+- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
+  being the official ordering, as it is quite easy to unknowingly trigger
+  cpus_read_lock() while holding kvm_lock.  Use caution when walking vm_list,
+  e.g. avoid complex operations when possible.
+
 For SRCU:
 
 - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
@@ -227,10 +233,17 @@ time it will be set using the Dirty tracking mechanism described above.
 :Type: mutex
 :Arch: any
 :Protects: - vm_list
-           - kvm_usage_count
+
+``kvm_usage_lock``
+^^^^^^^^^^^^^^^^^^
+
+:Type: mutex
+:Arch: any
+:Protects: - kvm_usage_count
            - hardware virtualization enable/disable
-:Comment: KVM also disables CPU hotplug via cpus_read_lock() during
-  enable/disable.
+:Comment: Exists because using kvm_lock leads to deadlock (see earlier comment
+  on cpus_read_lock() vs kvm_lock).  Note, KVM also disables CPU hotplug via
+  cpus_read_lock() when enabling/disabling virtualization.
 
 ``kvm->mn_invalidate_lock``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -290,11 +303,12 @@ time it will be set using the Dirty tracking mechanism described above.
   wakeup.
 
 ``vendor_module_lock``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^
 :Type: mutex
 :Arch: x86
 :Protects: loading a vendor module (kvm_amd or kvm_intel)
-:Comment: Exists because using kvm_lock leads to deadlock. cpu_hotplug_lock is
-  taken outside of kvm_lock, e.g. in KVM's CPU online/offline callbacks, and
-  many operations need to take cpu_hotplug_lock when loading a vendor module,
-  e.g. updating static calls.
+:Comment: Exists because using kvm_lock leads to deadlock.  kvm_lock is taken
+  in notifiers, e.g. __kvmclock_cpufreq_notifier(), that may be invoked while
+  cpu_hotplug_lock is held, e.g. from cpufreq_boost_trigger_state(), and many
+  operations need to take cpu_hotplug_lock when loading a vendor module, e.g.
+  updating static calls.

virt/kvm/kvm_main.c

Lines changed: 16 additions & 15 deletions
@@ -5500,6 +5500,7 @@ __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
 
 static DEFINE_PER_CPU(bool, hardware_enabled);
+static DEFINE_MUTEX(kvm_usage_lock);
 static int kvm_usage_count;
 
 static int __hardware_enable_nolock(void)
@@ -5532,10 +5533,10 @@ static int kvm_online_cpu(unsigned int cpu)
 	 * be enabled. Otherwise running VMs would encounter unrecoverable
 	 * errors when scheduled to this CPU.
 	 */
-	mutex_lock(&kvm_lock);
+	mutex_lock(&kvm_usage_lock);
 	if (kvm_usage_count)
 		ret = __hardware_enable_nolock();
-	mutex_unlock(&kvm_lock);
+	mutex_unlock(&kvm_usage_lock);
 	return ret;
 }
 
@@ -5555,10 +5556,10 @@ static void hardware_disable_nolock(void *junk)
 
 static int kvm_offline_cpu(unsigned int cpu)
 {
-	mutex_lock(&kvm_lock);
+	mutex_lock(&kvm_usage_lock);
 	if (kvm_usage_count)
 		hardware_disable_nolock(NULL);
-	mutex_unlock(&kvm_lock);
+	mutex_unlock(&kvm_usage_lock);
 	return 0;
 }
 
@@ -5574,9 +5575,9 @@ static void hardware_disable_all_nolock(void)
 static void hardware_disable_all(void)
 {
 	cpus_read_lock();
-	mutex_lock(&kvm_lock);
+	mutex_lock(&kvm_usage_lock);
 	hardware_disable_all_nolock();
-	mutex_unlock(&kvm_lock);
+	mutex_unlock(&kvm_usage_lock);
 	cpus_read_unlock();
 }
 
@@ -5607,7 +5608,7 @@ static int hardware_enable_all(void)
 	 * enable hardware multiple times.
 	 */
 	cpus_read_lock();
-	mutex_lock(&kvm_lock);
+	mutex_lock(&kvm_usage_lock);
 
 	r = 0;
 
@@ -5621,7 +5622,7 @@ static int hardware_enable_all(void)
 		}
 	}
 
-	mutex_unlock(&kvm_lock);
+	mutex_unlock(&kvm_usage_lock);
 	cpus_read_unlock();
 
 	return r;
@@ -5649,13 +5650,13 @@ static int kvm_suspend(void)
 {
 	/*
 	 * Secondary CPUs and CPU hotplug are disabled across the suspend/resume
-	 * callbacks, i.e. no need to acquire kvm_lock to ensure the usage count
-	 * is stable.  Assert that kvm_lock is not held to ensure the system
-	 * isn't suspended while KVM is enabling hardware.  Hardware enabling
-	 * can be preempted, but the task cannot be frozen until it has dropped
-	 * all locks (userspace tasks are frozen via a fake signal).
+	 * callbacks, i.e. no need to acquire kvm_usage_lock to ensure the usage
+	 * count is stable.  Assert that kvm_usage_lock is not held to ensure
+	 * the system isn't suspended while KVM is enabling hardware.  Hardware
+	 * enabling can be preempted, but the task cannot be frozen until it has
+	 * dropped all locks (userspace tasks are frozen via a fake signal).
 	 */
-	lockdep_assert_not_held(&kvm_lock);
+	lockdep_assert_not_held(&kvm_usage_lock);
 	lockdep_assert_irqs_disabled();
 
 	if (kvm_usage_count)
@@ -5665,7 +5666,7 @@ static int kvm_suspend(void)
 
 static void kvm_resume(void)
 {
-	lockdep_assert_not_held(&kvm_lock);
+	lockdep_assert_not_held(&kvm_usage_lock);
 	lockdep_assert_irqs_disabled();
 
 	if (kvm_usage_count)
