xsk: exit NAPI loop when AF_XDP Rx ring is full #6

From: Björn Töpel <bjorn.topel@intel.com>
Make the AF_XDP zero-copy path aware that the reason for a redirect
failure was a full Rx ring. If so, exit the NAPI loop as soon as
possible (exit the softirq processing), so that the userspace AF_XDP
process can hopefully empty the Rx ring. This mainly helps the "one
core scenario", where the userland process and the Rx softirq
processing run on the same core.

Note that the early exit can only be performed if the "need wakeup"
feature is enabled, because otherwise there is no notification
mechanism available from the kernel side.
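
For the single-core case this targets, the userspace side of that handshake
is the usual AF_XDP Rx loop. Below is a minimal, illustrative sketch in terms
of the libbpf xsk.h ring helpers; rx_drain_once() and the xsk/rx/fq arguments
are hypothetical names for a socket, Rx ring and fill ring set up elsewhere
with xsk_socket__create():

#include <bpf/xsk.h>
#include <sys/socket.h>

#define BATCH_SIZE 64

static void rx_drain_once(struct xsk_socket *xsk, struct xsk_ring_cons *rx,
			  struct xsk_ring_prod *fq)
{
	__u32 idx_rx = 0, idx_fq = 0;
	unsigned int rcvd, i;

	/* Consume what the softirq side produced before it bailed out. */
	rcvd = xsk_ring_cons__peek(rx, BATCH_SIZE, &idx_rx);
	if (!rcvd) {
		/* Rx ring empty: kick the kernel only when it asked for it. */
		if (xsk_ring_prod__needs_wakeup(fq))
			recvfrom(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT,
				 NULL, NULL);
		return;
	}

	/* Recycle the frames to the fill ring so the driver can keep
	 * receiving once we are done.
	 */
	while (xsk_ring_prod__reserve(fq, rcvd, &idx_fq) != rcvd)
		; /* fill ring full; a real loop would back off or poll() here */

	for (i = 0; i < rcvd; i++) {
		const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx, idx_rx++);

		/* ... process desc->addr / desc->len here ... */
		*xsk_ring_prod__fill_addr(fq, idx_fq++) = desc->addr;
	}

	xsk_ring_cons__release(rx, rcvd);
	xsk_ring_prod__submit(fq, rcvd);
}

With this patch, a driver that leaves its NAPI loop early because the Rx ring
filled up also sets the need_wakeup flag, so a loop like the one above drains
the ring and then kicks the kernel to resume receiving.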

This requires the driver to start using the newly introduced
xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.
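
Both helpers are added earlier in this series rather than in this commit. As
a rough sketch of the assumed semantics (inferred from the driver usage
below, not copied from the series): xdp_do_redirect_ext() behaves like
xdp_do_redirect() but also reports which map type the redirect resolved to,
and xsk_do_redirect_rx_full() turns that failure into a "Rx ring full"
verdict:

/* Illustrative sketch only; the actual helper in the series may differ. */
static inline bool xsk_do_redirect_rx_full(int err, enum bpf_map_type map_type)
{
	return err == -ENOBUFS && map_type == BPF_MAP_TYPE_XSKMAP;
}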

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 23 +++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)
bjoto authored and tsipa committed Sep 7, 2020
commit afabc95669136a021b25646319621109db07c27d
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -142,13 +142,15 @@ int i40e_xsk_pool_setup(struct i40e_vsi *vsi, struct xsk_buff_pool *pool,
  * i40e_run_xdp_zc - Executes an XDP program on an xdp_buff
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
+ * @early_exit: true means that the napi loop should exit early
  *
  * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
  **/
-static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp, bool *early_exit)
 {
 	int err, result = I40E_XDP_PASS;
 	struct i40e_ring *xdp_ring;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	u32 act;
 
@@ -167,8 +169,13 @@ static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
 		result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = I40E_XDP_CONSUMED;
+		} else {
+			result = I40E_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -268,8 +275,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
@@ -316,7 +323,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		(*bi)->data_end = (*bi)->data + size;
 		xsk_buff_dma_sync_for_cpu(*bi, rx_ring->xsk_pool);
 
-		xdp_res = i40e_run_xdp_zc(rx_ring, *bi);
+		xdp_res = i40e_run_xdp_zc(rx_ring, *bi, &early_exit);
 		if (xdp_res) {
 			if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR))
 				xdp_xmit |= xdp_res;
@@ -329,6 +336,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 
 			cleaned_count++;
 			i40e_inc_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
 
@@ -363,12 +372,12 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
 			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 	return failure ? budget : (int)total_rx_packets;
 }