Commit ece593f
net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
[ Upstream commit 66600fa ]
In case the non-paged data of a SKB carries both the protocol header and
the protocol payload to be transmitted on a platform whose DMA AXI
address width is configured to 40-bit/48-bit, or the size of the
non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA
AXI address width is configured to 32-bit, then this SKB requires at
least two DMA transmit descriptors to serve it.
For example, three descriptors are allocated to split one DMA buffer
mapped from one piece of non-paged data:
dma_desc[N + 0],
dma_desc[N + 1],
dma_desc[N + 2].
Then the three corresponding elements of tx_q->tx_skbuff_dma[] are filled
with extra information to be reused in stmmac_tx_clean():
tx_q->tx_skbuff_dma[N + 0],
tx_q->tx_skbuff_dma[N + 1],
tx_q->tx_skbuff_dma[N + 2].
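Each element is a struct stmmac_tx_info (declared in stmmac.h; fields
shown as in recent kernels):

    struct stmmac_tx_info {
        dma_addr_t buf;         /* DMA address to unmap, 0 if none */
        bool map_as_page;       /* dma_unmap_page() vs dma_unmap_single() */
        unsigned len;           /* length passed to the unmap call */
        bool last_segment;
        bool is_jumbo;
        enum stmmac_txbuf_type buf_type;
    };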
Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer
address returned by the DMA mapping call. stmmac_tx_clean() will try to
unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf
is a valid buffer address.
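Roughly, the guard in stmmac_tx_clean() looks like this (condensed;
identifiers as in recent kernels):

    /* Condensed from stmmac_tx_clean(): unmap only when .buf is set */
    if (likely(tx_q->tx_skbuff_dma[entry].buf &&
               tx_q->tx_skbuff_dma[entry].buf_type != STMMAC_TXBUF_T_XDP_TX)) {
        if (tx_q->tx_skbuff_dma[entry].map_as_page)
            dma_unmap_page(priv->device,
                           tx_q->tx_skbuff_dma[entry].buf,
                           tx_q->tx_skbuff_dma[entry].len,
                           DMA_TO_DEVICE);
        else
            dma_unmap_single(priv->device,
                             tx_q->tx_skbuff_dma[entry].buf,
                             tx_q->tx_skbuff_dma[entry].len,
                             DMA_TO_DEVICE);
        tx_q->tx_skbuff_dma[entry].buf = 0;
    }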
The expected behavior is to save the DMA buffer address of this
non-paged data in the element that belongs to the last descriptor:
tx_q->tx_skbuff_dma[N + 0].buf = NULL;
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = NULL;
On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the
DMA engine, tx_q->tx_skbuff_dma[N + 0].buf obviously holds a valid
buffer address, so the DMA buffer is unmapped immediately. In the rare
case that the DMA engine has not yet finished the pending
dma_desc[N + 1] and dma_desc[N + 2], things go horribly wrong: the DMA
engine accesses an unmapped/unreferenced memory region, and corrupted
data is transmitted or an IOMMU fault is triggered :(
In contrast, the for-loop that maps SKB fragments behaves exactly as
expected, and that is how the driver should handle both non-paged data
and paged frags.
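That loop lets stmmac_tso_allocator() claim all descriptors for a
fragment first, and only then records the mapping at tx_q->cur_tx,
i.e. at the last descriptor (condensed from stmmac_tso_xmit()):

    /* Prepare fragments: .buf lands at tx_q->cur_tx, the LAST descriptor */
    for (i = 0; i < nfrags; i++) {
        const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

        des = skb_frag_dma_map(priv->device, frag, 0,
                               skb_frag_size(frag), DMA_TO_DEVICE);
        if (dma_mapping_error(priv->device, des))
            goto dma_map_err;

        stmmac_tso_allocator(priv, des, skb_frag_size(frag),
                             (i == nfrags - 1), queue);

        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
        tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
        tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
        tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
    }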
This patch corrects the DMA map/unmap sequence by fixing the array index
used for tx_q->tx_skbuff_dma[entry].buf when saving the DMA buffer
address.
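In effect, the assignment in stmmac_tso_xmit() moves from first_entry,
which is taken before the buffer is split, to tx_q->cur_tx, which is
known only after stmmac_tso_allocator() has run. A sketch of the change,
not the verbatim hunk:

    /* Before: recorded at the FIRST descriptor of the buffer */
    tx_q->tx_skbuff_dma[first_entry].buf = des;

    /* After: recorded at the LAST descriptor, once stmmac_tso_allocator()
     * has advanced tx_q->cur_tx across every descriptor of this buffer
     */
    tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
    tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
    tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
    tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;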
Tested and verified on DWXGMAC CORE 3.20a.
Reported-by: Suraj Jaiswal <quic_jsuraj@quicinc.com>
Fixes: f748be5 ("stmmac: support new GMAC4")
Signed-off-by: Furong Xu <0x1207@gmail.com>
Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20241021061023.2162701-1-0x1207@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
1 file changed (drivers/net/ethernet/stmicro/stmmac/stmmac_main.c):
+17 -5 lines; 5 lines removed at original lines 4113-4117, 17 lines
added at new lines 4131-4147.