net/tls: fix copy to fragments in reencrypt [Linux 5.1]

This Linux kernel change, "net/tls: fix copy to fragments in reencrypt", is included in the Linux 5.1 release. It was authored by Jakub Kicinski <jakub.kicinski@netronome.com> on Thu Apr 25 17:35:10 2019 -0700. The commit for this change in the Linux stable tree is eb3d38d (patch).

net/tls: fix copy to fragments in reencrypt

Fragments may contain data from other records so we have to account
for that when we calculate the destination and max length of copy we
can perform.  Note that 'offset' is the offset within the message,
so it can't be passed as offset within the frag.

Here skb_store_bits() would have realised the call is wrong and
simply not copied data.

Fixes: 4799ac81e52a ("tls: Add rx inline crypto offload")
Signed-off-by: Jakub Kicinski <[email protected]>
Reviewed-by: John Hurley <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

This change adds/deletes 29 lines of Linux source code. The code changes to the Linux kernel are as follows.

 net/tls/tls_device.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 9635706..14dedb2 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -597,7 +597,7 @@ void handle_device_resync(struct sock *sk, u32 seq, u64 rcd_sn)
 static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
 {
    struct strp_msg *rxm = strp_msg(skb);
-   int err = 0, offset = rxm->offset, copy, nsg;
+   int err = 0, offset = rxm->offset, copy, nsg, data_len, pos;
    struct sk_buff *skb_iter, *unused;
    struct scatterlist sg[1];
    char *orig_buf, *buf;
@@ -628,9 +628,10 @@ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
    else
        err = 0;

+   data_len = rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE;
+
    if (skb_pagelen(skb) > offset) {
-       copy = min_t(int, skb_pagelen(skb) - offset,
-                rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+       copy = min_t(int, skb_pagelen(skb) - offset, data_len);

        if (skb->decrypted)
            skb_store_bits(skb, offset, buf, copy);
@@ -639,16 +640,30 @@ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
        buf += copy;
    }

+   pos = skb_pagelen(skb);
    skb_walk_frags(skb, skb_iter) {
-       copy = min_t(int, skb_iter->len,
-                rxm->full_len - offset + rxm->offset -
-                TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+       int frag_pos;
+
+       /* Practically all frags must belong to msg if reencrypt
+        * is needed with current strparser and coalescing logic,
+        * but strparser may "get optimized", so let's be safe.
+        */
+       if (pos + skb_iter->len <= offset)
+           goto done_with_frag;
+       if (pos >= data_len + rxm->offset)
+           break;
+
+       frag_pos = offset - pos;
+       copy = min_t(int, skb_iter->len - frag_pos,
+                data_len + rxm->offset - offset);

        if (skb_iter->decrypted)
-           skb_store_bits(skb_iter, offset, buf, copy);
+           skb_store_bits(skb_iter, frag_pos, buf, copy);

        offset += copy;
        buf += copy;
+done_with_frag:
+       pos += skb_iter->len;
    }

 free_buf:
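
The key change above is computing the offset within the current fragment ('frag_pos = offset - pos') instead of passing the message-level offset straight to skb_store_bits(), and bounding each copy by the end of the record ('data_len + rxm->offset'). The following is a minimal user-space sketch of the same arithmetic, not kernel code: struct frag, copy_to_frags(), min_int() and the sample data are made up for illustration and are not kernel APIs.

/*
 * Sketch of the fragment-offset arithmetic from the patch above.
 * 'offset' is a position within the whole message (like rxm->offset),
 * while the per-fragment copy needs a position within the current
 * fragment.  We therefore track 'pos', the start of each fragment
 * within the message, and use 'frag_pos = offset - pos'.
 */
#include <stdio.h>
#include <string.h>

struct frag {
	char data[16];
	int  len;
};

static int min_int(int a, int b) { return a < b ? a : b; }

/* Copy 'data_len' bytes from 'buf' into the fragments, starting at
 * message offset 'offset'.  Fragments (or parts of fragments) before
 * 'offset' may hold data from other records and must not be touched.
 */
static void copy_to_frags(struct frag *frags, int nfrags,
			  int offset, const char *buf, int data_len)
{
	int end = offset + data_len;	/* first byte we must NOT touch */
	int pos = 0;			/* start of current frag in the message */
	int i;

	for (i = 0; i < nfrags; i++) {
		int frag_pos, copy;

		if (pos + frags[i].len <= offset)	/* frag entirely before our data */
			goto next;
		if (pos >= end)				/* past the end of our data */
			break;

		frag_pos = offset - pos;		/* offset within THIS frag */
		copy = min_int(frags[i].len - frag_pos, end - offset);

		memcpy(frags[i].data + frag_pos, buf, copy);
		offset += copy;
		buf += copy;
next:
		pos += frags[i].len;
	}
}

int main(void)
{
	/* Frag 0 holds the tail of a previous record ("OLD!"); our record
	 * starts at message offset 4 and spans frags 0..2.
	 */
	struct frag frags[3] = {
		{ "OLD!....", 8 },
		{ "........", 8 },
		{ "........", 8 },
	};
	const char *plaintext = "hello frag world";	/* 16 bytes */

	copy_to_frags(frags, 3, 4, plaintext, 16);

	for (int i = 0; i < 3; i++)
		printf("frag %d: %.*s\n", i, frags[i].len, frags[i].data);
	return 0;
}

Running the sketch leaves the first four bytes of frag 0 (the "previous record" data) and the tail of frag 2 untouched while filling everything in between, which is the property the patch restores for skbs whose fragments carry data from more than one TLS record.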
