net: mvneta: recycle the page in case of out-of-order

Recycle the received page into the page_pool cache if the DMA descriptors
arrived out of order.

Fixes: ca0e014609 ("net: mvneta: move skb build after descriptors processing")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 1 file changed, 5 insertions(+), 1 deletion(-)

@@ -2383,8 +2383,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 					     &size, page, &ps);
 		} else {
-			if (unlikely(!xdp_buf.data_hard_start))
+			if (unlikely(!xdp_buf.data_hard_start)) {
+				rx_desc->buf_phys_addr = 0;
+				page_pool_put_full_page(rxq->page_pool, page,
+							true);
 				continue;
+			}
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
 						    &size, page);
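
For readers less familiar with the page_pool API, the sketch below illustrates the pattern the fix applies: when a descriptor is seen before the start-of-frame buffer has been set up (xdp_buf.data_hard_start is still NULL), the descriptor's buf_phys_addr is cleared so the buffer is not treated as still in use, and the page is handed straight back to the pool. This is a minimal sketch only; the helper name and the simplified descriptor type below are hypothetical stand-ins, not part of the driver.

	/*
	 * Illustrative sketch only, not part of the patch. It mirrors the
	 * recycle path added above, assuming kernel context with page_pool
	 * support.
	 */
	#include <linux/types.h>
	#include <net/page_pool.h>

	/* Hypothetical, simplified stand-in for the hardware RX descriptor. */
	struct demo_rx_desc {
		u32 buf_phys_addr;
	};

	static void demo_recycle_out_of_order_page(struct page_pool *pool,
						   struct demo_rx_desc *rx_desc,
						   struct page *page)
	{
		/* Clear the buffer address so later refill/unmap logic does
		 * not treat this descriptor as still owning a mapped buffer. */
		rx_desc->buf_phys_addr = 0;

		/* Return the full page to the pool; passing true allows a
		 * direct recycle into the pool's cache, as the patch does
		 * from NAPI (softirq) context. */
		page_pool_put_full_page(pool, page, true);
	}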