ibmvnic: Unmap DMA address of TX descriptor buffers after use [Linux 4.19.72]

The Linux kernel change "ibmvnic: Unmap DMA address of TX descriptor buffers after use" is included in the Linux 4.19.72 release. It was authored by Thomas Falcon <tlfalcon@linux.ibm.com> on Wed Aug 14 14:57:05 2019 -0500. The commit in the Linux stable tree is ea78dc8 (patch), which comes from upstream commit 80f0fe0. The same upstream change may also have been applied to other maintained Linux releases that carry upstream commit 80f0fe0.

ibmvnic: Unmap DMA address of TX descriptor buffers after use

[ Upstream commit 80f0fe0934cd3daa13a5e4d48a103f469115b160 ]

There's no need to wait until a completion is received to unmap
TX descriptor buffers that have been passed to the hypervisor.
Instead unmap it when the hypervisor call has completed. This patch
avoids the possibility that a buffer will not be unmapped because
a TX completion is lost or mishandled.

Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Tested-by: Devesh K. Singh <devesh_singh@in.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
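
For context, here is a minimal sketch, not the driver's actual code, of the pattern the patch adopts: map the indirect descriptor array, hand it to the hypervisor call, and unmap it as soon as that call returns instead of waiting for a TX completion. The struct tx_descriptor layout and the send_descriptors_to_hypervisor() helper are hypothetical stand-ins for the driver's union sub_crq entries and its send_subcrq_indirect() hypervisor-call wrapper.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Stand-in for the driver's union sub_crq descriptor entries. */
struct tx_descriptor {
	u64 words[4];
};

/*
 * Hypothetical wrapper for the hypervisor call that consumes the
 * descriptors; declared here only so the sketch is self-contained.
 */
extern long send_descriptors_to_hypervisor(dma_addr_t desc_dma, u64 num_entries);

static int xmit_indirect(struct device *dev, struct tx_descriptor *descs,
			 u64 num_entries)
{
	size_t len = num_entries * sizeof(*descs);
	dma_addr_t desc_dma;
	long rc;

	/* Map the descriptor array so the hypervisor can read it. */
	desc_dma = dma_map_single(dev, descs, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, desc_dma))
		return -ENOMEM;

	/* The hypervisor consumes the descriptors during this call ... */
	rc = send_descriptors_to_hypervisor(desc_dma, num_entries);

	/*
	 * ... so the mapping can be released as soon as it returns. There is
	 * no need to defer the unmap to the TX-completion handler, where a
	 * lost or mishandled completion would leak the mapping.
	 */
	dma_unmap_single(dev, desc_dma, len, DMA_TO_DEVICE);

	return rc ? -EIO : 0;
}

The design point is that the hypervisor call is synchronous with respect to the descriptor buffer: once it returns, the buffer is no longer referenced, so keeping the mapping alive until ibmvnic_complete_tx() only adds a failure mode.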

This change adds or deletes 11 lines of Linux source code. The code changes are as follows.

 drivers/net/ethernet/ibm/ibmvnic.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 0ae43d2..255de7d 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1586,6 +1586,8 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
        lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
                           (u64)tx_buff->indir_dma,
                           (u64)num_entries);
+       dma_unmap_single(dev, tx_buff->indir_dma,
+                sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
    } else {
        tx_buff->num_entries = num_entries;
        lpar_rc = send_subcrq(adapter, handle_array[queue_num],
@@ -2747,7 +2749,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
    union sub_crq *next;
    int index;
    int i, j;
-   u8 *first;

 restart_loop:
    while (pending_scrq(adapter, scrq)) {
@@ -2777,14 +2778,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,

                txbuff->data_dma[j] = 0;
            }
-           /* if sub_crq was sent indirectly */
-           first = &txbuff->indir_arr[0].generic.first;
-           if (*first == IBMVNIC_CRQ_CMD) {
-               dma_unmap_single(dev, txbuff->indir_dma,
-                        sizeof(txbuff->indir_arr),
-                        DMA_TO_DEVICE);
-               *first = 0;
-           }

            if (txbuff->last_frag) {
                dev_kfree_skb_any(txbuff->skb);
