Commit a33a5d2

genirq/generic_pending: Do not lose pending affinity update
The generic pending interrupt mechanism moves interrupts from the
interrupt handler on the original target CPU to the new destination CPU.
This is required for x86 and ia64, due to the way interrupt delivery and
acknowledgement work when interrupts are not remapped.

However, that update can fail for various reasons. Some of them are
valid reasons to discard the pending update, but the case where the
previous move has not been fully cleaned up is not a legitimate reason
to fail.

Check the return value of irq_do_set_affinity() for -EBUSY, which
indicates a pending cleanup, and rearm the pending move in the irq
descriptor so it is tried again when the next interrupt arrives.

Fixes: 996c591 ("x86/irq: Plug vector cleanup race")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Song Liu <songliubraving@fb.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <liu.song.a23@gmail.com>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: stable@vger.kernel.org
Cc: Mike Travis <mike.travis@hpe.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Tariq Toukan <tariqt@mellanox.com>
Link: https://lkml.kernel.org/r/20180604162224.386544292@linutronix.de
1 parent 80ae7b1 commit a33a5d2

File tree

1 file changed (+19, -7 lines)
kernel/irq/migration.c

Lines changed: 19 additions & 7 deletions

@@ -38,17 +38,18 @@ bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear)
 void irq_move_masked_irq(struct irq_data *idata)
 {
 	struct irq_desc *desc = irq_data_to_desc(idata);
-	struct irq_chip *chip = desc->irq_data.chip;
+	struct irq_data *data = &desc->irq_data;
+	struct irq_chip *chip = data->chip;
 
-	if (likely(!irqd_is_setaffinity_pending(&desc->irq_data)))
+	if (likely(!irqd_is_setaffinity_pending(data)))
 		return;
 
-	irqd_clr_move_pending(&desc->irq_data);
+	irqd_clr_move_pending(data);
 
 	/*
 	 * Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
 	 */
-	if (irqd_is_per_cpu(&desc->irq_data)) {
+	if (irqd_is_per_cpu(data)) {
 		WARN_ON(1);
 		return;
 	}
@@ -73,9 +74,20 @@ void irq_move_masked_irq(struct irq_data *idata)
 	 * For correct operation this depends on the caller
 	 * masking the irqs.
 	 */
-	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
-		irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
-
+	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids) {
+		int ret;
+
+		ret = irq_do_set_affinity(data, desc->pending_mask, false);
+		/*
+		 * If there is a cleanup pending in the underlying
+		 * vector management, reschedule the move for the next
+		 * interrupt. Leave desc->pending_mask intact.
+		 */
+		if (ret == -EBUSY) {
+			irqd_set_move_pending(data);
+			return;
+		}
+	}
 	cpumask_clear(desc->pending_mask);
 }
