vhost_net: use packet weight for rx handler, too [Linux 4.14.133]

This Linux kernel change "vhost_net: use packet weight for rx handler, too" is included in the Linux 4.14.133 release. It was authored by Paolo Abeni <pabeni@redhat.com> on Tue Apr 24 10:34:36 2018 +0200. The commit in the Linux stable tree is e9dac4c (patch), from upstream commit db688c2; the same upstream change may also have been applied to other maintained Linux releases.

vhost_net: use packet weight for rx handler, too

commit db688c24eada63b1efe6d0d7d835e5c3bdd71fd3 upstream.

Similar to commit a2ac99905f1e ("vhost-net: set packet weight of
tx polling to 2 * vq size"), we need a packet-based limit for
handle_rx, too - otherwise, under an rx flood of small packets,
tx can be delayed for a very long time, even without busy polling.
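
As a back-of-the-envelope illustration (not part of the patch, and assuming a hypothetical 64-byte packet flood), the pre-patch rx path enforced only the byte budget VHOST_NET_WEIGHT (0x80000 bytes), which small packets take a very long time to exhaust:

    /* Illustrative arithmetic only; the 64-byte packet size is an assumption. */
    #include <stdio.h>

    #define VHOST_NET_WEIGHT 0x80000   /* byte budget from drivers/vhost/net.c */

    int main(void)
    {
        unsigned pkt_size = 64;        /* hypothetical small-packet flood */

        /* 0x80000 / 64 = 8192 packets before the byte limit fires */
        printf("packets per rx slot: %u\n", VHOST_NET_WEIGHT / pkt_size);
        return 0;
    }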

The pkt limit applied to handle_rx must be the same as the one
applied by handle_tx, or we will get unfair scheduling between rx
and tx. Tying such a limit to the queue length makes it less
effective for large queue length values and can introduce large
process scheduler latencies, so a constant value is used - like
the existing byte limit.
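
To make the scaling concrete, here is a standalone sketch (not kernel code) comparing the old queue-tied macro with the new constant across the queue sizes tested below:

    #include <stdio.h>

    /* Old limit scaled with the virtqueue size; the new one is constant. */
    #define OLD_PKT_WEIGHT(num) ((num) * 2)
    #define NEW_PKT_WEIGHT      256

    int main(void)
    {
        unsigned sizes[] = { 256, 512, 1024 };

        for (unsigned i = 0; i < 3; i++)
            printf("queue %4u: old limit %4u pkts, new limit %u pkts\n",
                   sizes[i], OLD_PKT_WEIGHT(sizes[i]), NEW_PKT_WEIGHT);
        /* At queue size 1024 the old macro allowed 2048 packets per slot,
         * 8x the new constant, hence the larger scheduler latencies. */
        return 0;
    }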

The selected limit has been validated with PVP[1] performance
test with different queue sizes:

queue size     256   512   1024

baseline       366   354   362
weight 128     715   723   670
weight 256     740   745   733
weight 512     600   460   583
weight 1024    423   427   418

A packet weight of 256 gives peak performance in all the
tested scenarios.

No measurable regression in unidirectional performance tests has
been detected.

[1] https://developers.redhat.com/blog/2017/06/05/measuring-and-comparing-open-vswitch-performance/

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Balbir Singh <sblbir@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

There are 12 lines of Linux source code added or deleted in this change. The code changes to the Linux kernel are as follows.

 drivers/vhost/net.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a60fcf0..9fcf7bf 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -45,8 +45,10 @@
 #define VHOST_NET_WEIGHT 0x80000

 /* Max number of packets transferred before requeueing the job.
- * Using this limit prevents one virtqueue from starving rx. */
-#define VHOST_NET_PKT_WEIGHT(vq) ((vq)->num * 2)
+ * Using this limit prevents one virtqueue from starving others with small
+ * pkts.
+ */
+#define VHOST_NET_PKT_WEIGHT 256

 /* MAX number of TX used buffers for outstanding zerocopy */
 #define VHOST_MAX_PEND 128
@@ -578,7 +580,7 @@ static void handle_tx(struct vhost_net *net)
            vhost_zerocopy_signal_used(net, vq);
        vhost_net_tx_packet(net);
        if (unlikely(total_len >= VHOST_NET_WEIGHT) ||
-           unlikely(++sent_pkts >= VHOST_NET_PKT_WEIGHT(vq))) {
+           unlikely(++sent_pkts >= VHOST_NET_PKT_WEIGHT)) {
            vhost_poll_queue(&vq->poll);
            break;
        }
@@ -760,6 +762,7 @@ static void handle_rx(struct vhost_net *net)
    struct socket *sock;
    struct iov_iter fixup;
    __virtio16 num_buffers;
+   int recv_pkts = 0;

    mutex_lock_nested(&vq->mutex, 0);
    sock = vq->private_data;
@@ -860,7 +863,8 @@ static void handle_rx(struct vhost_net *net)
            vhost_log_write(vq, vq_log, log, vhost_len,
                    vq->iov, in);
        total_len += vhost_len;
-       if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
+       if (unlikely(total_len >= VHOST_NET_WEIGHT) ||
+           unlikely(++recv_pkts >= VHOST_NET_PKT_WEIGHT)) {
            vhost_poll_queue(&vq->poll);
            goto out;
        }
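
With this patch, both handle_tx and handle_rx bail out of their loops under the same dual budget. The following is a minimal, self-contained sketch of that pattern; process_one() and requeue_work() are hypothetical stand-ins, not the vhost API:

    #include <stddef.h>

    struct work_ctx;                            /* opaque, hypothetical context  */
    size_t process_one(struct work_ctx *ctx);   /* hypothetical: handle one buf  */
    void requeue_work(struct work_ctx *ctx);    /* hypothetical: reschedule work */

    #define NET_WEIGHT     0x80000  /* byte budget, as in vhost_net          */
    #define NET_PKT_WEIGHT 256      /* packet budget introduced by the patch */

    static void handle_work(struct work_ctx *ctx)
    {
        size_t total_len = 0;
        int pkts = 0;
        size_t len;

        while ((len = process_one(ctx)) > 0) {
            total_len += len;
            /* Yield once either budget is spent, so one virtqueue cannot
             * monopolize the vhost worker thread. */
            if (total_len >= NET_WEIGHT || ++pkts >= NET_PKT_WEIGHT) {
                requeue_work(ctx);
                break;
            }
        }
    }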
