Merge tag 'x86-core-2023-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 core updates from Thomas Gleixner:
 "A set of fixes for kexec(), reboot and shutdown issues:

   - Ensure that the WBINVD in stop_this_cpu() has been completed before
     the control CPU proceeds.

     stop_this_cpu() is used for kexec(), reboot and shutdown to park
     the APs in a HLT loop.

     The control CPU sends an IPI to the APs and waits for their CPU
     online bits to be cleared. Once they all are marked "offline" it
     proceeds.

     But stop_this_cpu() clears the CPU online bit before issuing
     WBINVD, which means there is no guarantee that the AP has reached
     the HLT loop.

     This was reported to cause intermittent reboot/shutdown failures
     due to some dubious interaction with the firmware.

     This is not only a problem of WBINVD. The code to actually "stop"
     the CPU which runs between clearing the online bit and reaching the
     HLT loop can cause large enough delays on its own (think
     virtualization). That's especially dangerous for kexec() as kexec()
     expects that all APs are in a safe state and not executing code
     while the boot CPU jumps to the new kernel. There are more issues
     vs kexec() which are addressed separately.

     Cure this by implementing an explicit synchronization point right
     before the AP reaches HLT. This guarantees that the AP has
     completed the full stop procedure.

   - Fix the condition for WBINVD in stop_this_cpu().

     The WBINVD in stop_this_cpu() is required for ensuring that when
     switching to or from memory encryption no dirty data is left in the
     cache lines which might cause a write back in the wrong mode later.

     The code checks CPUID directly because the feature bit might have
     been cleared due to a command line option.

     But that CPUID check accesses leaf 0x8000001f::EAX unconditionally.
     Intel CPUs return the content of the highest supported leaf when a
     non-existing leaf is read, while AMD CPUs return all zeros for
     unsupported leaves.

     So the result of the test on Intel CPUs is a lottery, and on AMD
     it's just correct by chance.

     While harmless, it's incorrect and causes the conditional wbinvd()
     to be issued where it is not required, which is what unearthed the
     issue described above.

   - Make kexec() robust against AP code execution

     Ashok observed triple faults when doing kexec() on a system which
     had been booted with "nosmt".

     It turned out that the SMT siblings which had been brought up
     partially are parked in mwait_play_dead() to enable power savings.

     mwait_play_dead() monitors the thread flags of the AP's idle
     task, which were chosen as they are unlikely to be written to.

     But kexec() can overwrite the previous kernel text and data
     including page tables etc. When it overwrites the cache lines
     monitored by an AP, that AP resumes execution after the MWAIT on
     eventually overwritten text, stack and page tables, which obviously
     might end up in a triple fault easily.

     Make this more robust in several steps:

      1) Use an explicit per CPU cache line for monitoring.

      2) Write a command to these cache lines to kick APs out of MWAIT
         before proceeding with kexec(), shutdown or reboot.

         The APs confirm the wakeup by writing status back and then
         enter a HLT loop.

      3) If the system uses INIT/INIT/STARTUP for AP bringup, park the
         APs in INIT state.

          HLT is not a guarantee that an AP won't wake up and resume
          execution. A CPU in HLT is woken up by NMI and SMI. SMI puts
          the CPU back into HLT (+/- firmware bugs), but an NMI makes
          the CPU execute the NMI handler. Same issue as the MWAIT
          scenario described above.

          Sending an INIT/INIT sequence to the APs puts them into the
          wait-for-STARTUP state, which is safe against NMI.

     There is still an issue remaining which can't be fixed: #MCE

     If the AP sits in HLT and receives a broadcast #MCE it will try to
     handle it with the obvious consequences.

     INIT/INIT clears CR4.MCE in the AP which will cause a broadcast
     #MCE to shut down the machine.

     So there is a choice between fire (HLT) and frying pan (INIT).
     Frying pan has been chosen as it at least prevents the NMI
     issue.

     On systems which are not using INIT/INIT/STARTUP there is not much
     which can be done right now, but at least the obvious and easy to
     trigger MWAIT issue has been addressed"
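
A condensed sketch of the synchronization point described in the first item
above (illustrative only; the real code is in the arch/x86/kernel/process.c
and arch/x86/kernel/smp.c hunks further down): the AP clears its bit in a
shared cpumask as its very last action before parking in HLT, and the
control CPU waits for that mask to become empty instead of watching the CPU
online bits.

        /* AP side, at the tail of stop_this_cpu() */
        cpumask_clear_cpu(smp_processor_id(), &cpus_stop_mask);
        for (;;)
                native_halt();

        /* Control CPU side, in native_stop_other_cpus() */
        unsigned long timeout = USEC_PER_SEC;

        while (!cpumask_empty(&cpus_stop_mask) && timeout--)
                udelay(1);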

* tag 'x86-core-2023-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/smp: Put CPUs into INIT on shutdown if possible
  x86/smp: Split sending INIT IPI out into a helper function
  x86/smp: Cure kexec() vs. mwait_play_dead() breakage
  x86/smp: Use dedicated cache-line for mwait_play_dead()
  x86/smp: Remove pointless wmb()s from native_stop_other_cpus()
  x86/smp: Dont access non-existing CPUID leaf
  x86/smp: Make stop_other_cpus() more robust
torvalds committed Jun 26, 2023
2 parents cd336f6 + 45e34c8 commit 88afbb2
Showing 5 changed files with 217 additions and 72 deletions.
2 changes: 2 additions & 0 deletions arch/x86/include/asm/cpu.h
@@ -95,4 +95,6 @@ extern u64 x86_read_arch_cap_msr(void);
int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);

extern struct cpumask cpus_stop_mask;

#endif /* _ASM_X86_CPU_H */
4 changes: 4 additions & 0 deletions arch/x86/include/asm/smp.h
@@ -127,11 +127,15 @@ void play_dead_common(void);
void wbinvd_on_cpu(int cpu);
int wbinvd_on_all_cpus(void);

void smp_kick_mwait_play_dead(void);

void native_smp_send_reschedule(int cpu);
void native_send_call_func_ipi(const struct cpumask *mask);
void native_send_call_func_single_ipi(int cpu);
void x86_idle_thread_init(unsigned int cpu, struct task_struct *idle);

bool smp_park_other_cpus_in_init(void);

void smp_store_boot_cpu_info(void);
void smp_store_cpu_info(int id);

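
The two declarations added above (smp_kick_mwait_play_dead() and
smp_park_other_cpus_in_init()) are implemented in arch/x86/kernel/smpboot.c,
which is not shown in this view. A rough sketch of what the INIT parking
helper has to do, based only on the description in the pull message; the
send_init_sequence() call here is hypothetical, standing in for whatever the
"x86/smp: Split sending INIT IPI out into a helper function" commit actually
provides:

        /*
         * Park all other present CPUs in INIT. Only possible when the
         * platform uses the INIT/STARTUP sequence for AP bringup;
         * return false otherwise so the caller can fall back to NMI.
         */
        bool smp_park_other_cpus_in_init(void)
        {
                unsigned int this_cpu = smp_processor_id();
                unsigned int cpu, apicid;

                /* Paravirt/firmware wakeup methods cannot park in INIT */
                if (apic->wakeup_secondary_cpu || apic->wakeup_secondary_cpu_64)
                        return false;

                for_each_present_cpu(cpu) {
                        if (cpu == this_cpu)
                                continue;
                        apicid = apic->cpu_present_to_apicid(cpu);
                        if (apicid == BAD_APICID)
                                continue;
                        send_init_sequence(apicid);
                }
                return true;
        }
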
28 changes: 24 additions & 4 deletions arch/x86/kernel/process.c
@@ -759,15 +759,26 @@ bool xen_set_default_idle(void)
}
#endif

struct cpumask cpus_stop_mask;

void __noreturn stop_this_cpu(void *dummy)
{
struct cpuinfo_x86 *c = this_cpu_ptr(&cpu_info);
unsigned int cpu = smp_processor_id();

local_irq_disable();

/*
* Remove this CPU:
* Remove this CPU from the online mask and disable it
* unconditionally. This might be redundant in case that the reboot
* vector was handled late and stop_other_cpus() sent an NMI.
*
* According to SDM and APM NMIs can be accepted even after soft
* disabling the local APIC.
*/
set_cpu_online(smp_processor_id(), false);
set_cpu_online(cpu, false);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
mcheck_cpu_clear(c);

/*
* Use wbinvd on processors that support SME. This provides support
@@ -781,8 +792,17 @@ void __noreturn stop_this_cpu(void *dummy)
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
*/
if (cpuid_eax(0x8000001f) & BIT(0))
if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
native_wbinvd();

/*
* This brings a cache line back and dirties it, but
* native_stop_other_cpus() will overwrite cpus_stop_mask after it
* observed that all CPUs reported stop. This write will invalidate
* the related cache line on this CPU.
*/
cpumask_clear_cpu(cpu, &cpus_stop_mask);

for (;;) {
/*
* Use native_halt() so that memory contents don't change
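
The hunk above also shows why the WBINVD check now consults
c->extended_cpuid_level first: struct cpuinfo_x86 caches the highest
supported extended CPUID leaf, so leaf 0x8000001f is only read when it
actually exists. As a standalone illustration of the same pattern (a
hypothetical helper, not part of this commit):

        /*
         * Query SME support (leaf 0x8000001f, EAX bit 0) only on CPUs
         * which implement that leaf. On Intel, reading a non-existent
         * extended leaf returns the contents of the highest supported
         * leaf, so an unconditional cpuid_eax(0x8000001f) is meaningless.
         */
        static bool cpu_has_sme_leaf(struct cpuinfo_x86 *c)
        {
                if (c->extended_cpuid_level < 0x8000001f)
                        return false;

                return cpuid_eax(0x8000001f) & BIT(0);
        }
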
104 changes: 74 additions & 30 deletions arch/x86/kernel/smp.c
@@ -21,12 +21,14 @@
#include <linux/interrupt.h>
#include <linux/cpu.h>
#include <linux/gfp.h>
#include <linux/kexec.h>

#include <asm/mtrr.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/proto.h>
#include <asm/apic.h>
#include <asm/cpu.h>
#include <asm/idtentry.h>
#include <asm/nmi.h>
#include <asm/mce.h>
@@ -129,7 +131,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
}

/*
* this function calls the 'stop' function on all other CPUs in the system.
* Disable virtualization, APIC etc. and park the CPU in a HLT loop
*/
DEFINE_IDTENTRY_SYSVEC(sysvec_reboot)
{
Expand All @@ -146,76 +148,118 @@ static int register_stop_handler(void)

static void native_stop_other_cpus(int wait)
{
unsigned long flags;
unsigned long timeout;
unsigned int cpu = smp_processor_id();
unsigned long flags, timeout;

if (reboot_force)
return;

/*
* Use an own vector here because smp_call_function
* does lots of things not suitable in a panic situation.
*/
/* Only proceed if this is the first CPU to reach this code */
if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1)
return;

/* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */
if (kexec_in_progress)
smp_kick_mwait_play_dead();

/*
* We start by using the REBOOT_VECTOR irq.
* The irq is treated as a sync point to allow critical
* regions of code on other cpus to release their spin locks
* and re-enable irqs. Jumping straight to an NMI might
* accidentally cause deadlocks with further shutdown/panic
* code. By syncing, we give the cpus up to one second to
* finish their work before we force them off with the NMI.
* 1) Send an IPI on the reboot vector to all other CPUs.
*
* The other CPUs should react on it after leaving critical
* sections and re-enabling interrupts. They might still hold
* locks, but there is nothing which can be done about that.
*
* 2) Wait for all other CPUs to report that they reached the
* HLT loop in stop_this_cpu()
*
* 3) If the system uses INIT/STARTUP for CPU bringup, then
* send all present CPUs an INIT vector, which brings them
* completely out of the way.
*
* 4) If #3 is not possible and #2 timed out send an NMI to the
* CPUs which did not yet report
*
* 5) Wait for all other CPUs to report that they reached the
* HLT loop in stop_this_cpu()
*
* #4 can obviously race against a CPU reaching the HLT loop late.
* That CPU will have reported already and the "have all CPUs
* reached HLT" condition will be true despite the fact that the
* other CPU is still handling the NMI. Again, there is no
* protection against that as "disabled" APICs still respond to
* NMIs.
*/
if (num_online_cpus() > 1) {
/* did someone beat us here? */
if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
return;

/* sync above data before sending IRQ */
wmb();
cpumask_copy(&cpus_stop_mask, cpu_online_mask);
cpumask_clear_cpu(cpu, &cpus_stop_mask);

if (!cpumask_empty(&cpus_stop_mask)) {
apic_send_IPI_allbutself(REBOOT_VECTOR);

/*
* Don't wait longer than a second for IPI completion. The
* wait request is not checked here because that would
* prevent an NMI shutdown attempt in case that not all
* prevent an NMI/INIT shutdown in case that not all
* CPUs reach shutdown state.
*/
timeout = USEC_PER_SEC;
while (num_online_cpus() > 1 && timeout--)
while (!cpumask_empty(&cpus_stop_mask) && timeout--)
udelay(1);
}

/* if the REBOOT_VECTOR didn't work, try with the NMI */
if (num_online_cpus() > 1) {
/*
* Park all other CPUs in INIT including "offline" CPUs, if
* possible. That's a safe place where they can't resume execution
* of HLT and then execute the HLT loop from overwritten text or
* page tables.
*
* The only downside is a broadcast MCE, but up to the point where
* the kexec() kernel brought all APs online again an MCE will just
* make HLT resume and handle the MCE. The machine crashes and burns
* due to overwritten text, page tables and data. So there is a
* choice between fire and frying pan. The result is pretty much
* the same. Chose frying pan until x86 provides a sane mechanism
* to park a CPU.
*/
if (smp_park_other_cpus_in_init())
goto done;

/*
* If park with INIT was not possible and the REBOOT_VECTOR didn't
* take all secondary CPUs offline, try with the NMI.
*/
if (!cpumask_empty(&cpus_stop_mask)) {
/*
* If NMI IPI is enabled, try to register the stop handler
* and send the IPI. In any case try to wait for the other
* CPUs to stop.
*/
if (!smp_no_nmi_ipi && !register_stop_handler()) {
/* Sync above data before sending IRQ */
wmb();

pr_emerg("Shutting down cpus with NMI\n");

apic_send_IPI_allbutself(NMI_VECTOR);
for_each_cpu(cpu, &cpus_stop_mask)
apic->send_IPI(cpu, NMI_VECTOR);
}
/*
* Don't wait longer than 10 ms if the caller didn't
* request it. If wait is true, the machine hangs here if
* one or more CPUs do not reach shutdown state.
*/
timeout = USEC_PER_MSEC * 10;
while (num_online_cpus() > 1 && (wait || timeout--))
while (!cpumask_empty(&cpus_stop_mask) && (wait || timeout--))
udelay(1);
}

done:
local_irq_save(flags);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
local_irq_restore(flags);

/*
* Ensure that the cpus_stop_mask cache lines are invalidated on
* the other CPUs. See comment vs. SME in stop_this_cpu().
*/
cpumask_clear(&cpus_stop_mask);
}

/*
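
The fifth changed file did not render above. Judging by the shortlog it is
arch/x86/kernel/smpboot.c, which carries the mwait_play_dead() rework and
the implementation of smp_kick_mwait_play_dead() declared in asm/smp.h.
Based on the description in the pull message, the mechanism is roughly the
following; this is a sketch only, with names and details that are
illustrative rather than copied from the kernel. Each parked CPU monitors a
dedicated, cache-line-aligned per-CPU word instead of the idle task's thread
flags, and the control CPU writes a kick command into that word before
kexec(), reboot or shutdown.

        struct mwait_cpu_dead {
                unsigned int    control;
                unsigned int    status;
        };

        #define CPUDEAD_MWAIT_WAIT      0       /* initial state of ->control */
        #define CPUDEAD_MWAIT_KICK      1       /* wake the parked CPU */

        static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);

        /* Parked CPU: MWAIT on the private cache line, not on kernel data */
        static void mwait_park(struct mwait_cpu_dead *md)
        {
                while (READ_ONCE(md->control) != CPUDEAD_MWAIT_KICK) {
                        __monitor(md, 0, 0);
                        if (READ_ONCE(md->control) == CPUDEAD_MWAIT_KICK)
                                break;
                        __mwait(0, 0);
                }

                /* Confirm the wakeup, then park in HLT */
                WRITE_ONCE(md->status, CPUDEAD_MWAIT_KICK);
                for (;;)
                        native_halt();
        }

        /* Control CPU: kick parked CPUs out of MWAIT before proceeding */
        void smp_kick_mwait_play_dead(void)
        {
                unsigned int cpu;

                /* Offline SMT siblings are present but not online */
                for_each_present_cpu(cpu)
                        WRITE_ONCE(per_cpu(mwait_cpu_dead, cpu).control,
                                   CPUDEAD_MWAIT_KICK);
        }

The INIT parking path (smp_park_other_cpus_in_init(), sketched after the
asm/smp.h hunk above) lives in the same file.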
