net/mlx5: Fix driver load with single msix vector
When a PCI device has just one msix vector available, we want to share
this vector between async and completion events. The current code fails
to do so because it assumes there will always be at least one dedicated
vector for completion events. Fix this by detecting when the pool
contains just a single vector.

Fixes: 3354822 ("net/mlx5: Use dynamic msix vectors allocation")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
elic307i authored and Saeed Mahameed committed Jun 16, 2023
1 parent 62a522d commit 0ab999d
Showing 1 changed file with 7 additions and 1 deletion: drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -565,15 +565,21 @@ void mlx5_irqs_release_vectors(struct mlx5_irq **irqs, int nirqs)
 int mlx5_irqs_request_vectors(struct mlx5_core_dev *dev, u16 *cpus, int nirqs,
 			      struct mlx5_irq **irqs, struct cpu_rmap **rmap)
 {
+	struct mlx5_irq_table *table = mlx5_irq_table_get(dev);
+	struct mlx5_irq_pool *pool = table->pcif_pool;
 	struct irq_affinity_desc af_desc;
 	struct mlx5_irq *irq;
+	int offset = 1;
 	int i;
 
+	if (!pool->xa_num_irqs.max)
+		offset = 0;
+
 	af_desc.is_managed = false;
 	for (i = 0; i < nirqs; i++) {
 		cpumask_clear(&af_desc.mask);
 		cpumask_set_cpu(cpus[i], &af_desc.mask);
-		irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap);
+		irq = mlx5_irq_request(dev, i + offset, &af_desc, rmap);
 		if (IS_ERR(irq))
 			break;
 		irqs[i] = irq;
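For context on why the indexing change fixes the load failure: vector index 0 is the vector used for async events, and completion vectors are normally handed out starting at index 1, which is why the old code requested i + 1 unconditionally. When the function exposes a single msix vector, pool->xa_num_irqs.max is 0 and index 1 does not exist, so the request fails and driver load aborts. Below is a minimal standalone sketch of the new indexing rule; completion_vector_index() and pool_max_index are hypothetical names chosen for illustration, not mlx5 driver code.

#include <stdio.h>

/*
 * Minimal sketch of the indexing rule introduced by the fix.
 * completion_vector_index() and pool_max_index are hypothetical
 * names for illustration -- this is not the mlx5 driver code.
 *
 * pool_max_index stands in for pool->xa_num_irqs.max, the highest
 * vector index the pool can hand out. When it is 0, the PCI function
 * has a single msix vector, so completion request i must reuse
 * index 0 (shared with async events) instead of asking for i + 1.
 */
static int completion_vector_index(int pool_max_index, int i)
{
	int offset = pool_max_index ? 1 : 0;

	return i + offset;
}

int main(void)
{
	/* Several vectors available: completion 0 maps to vector 1. */
	printf("multi-vector:  cq 0 -> vector %d\n",
	       completion_vector_index(7, 0));

	/* Single vector: completion 0 shares vector 0 with async. */
	printf("single-vector: cq 0 -> vector %d\n",
	       completion_vector_index(0, 0));

	return 0;
}

The patch takes the same approach: rather than special-casing the single-vector path inside the loop, it computes the offset once up front, so every completion request degrades gracefully to the shared vector.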
