powerpc: Avoid code patching freed init sections [Linux 4.4.180]

This Linux kernel change, "powerpc: Avoid code patching freed init sections", is included in the Linux 4.4.180 release. It was authored by Michael Neuling <mikey [at] neuling.org> on Mon Apr 22 00:20:29 2019 +1000. The commit in the Linux stable tree is 7fe905d, backported from upstream commit 51c3c62. The same upstream change may have been applied to other maintained Linux releases; all releases containing changes from upstream commit 51c3c62 can be found by searching for that commit.

powerpc: Avoid code patching freed init sections

commit 51c3c62b58b357e8d35e4cc32f7b4ec907426fe3 upstream.

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
        __get_user() ->
          __get_user_nocheck() ->
            barrier_nospec()

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We've seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

This change adds 16 lines of Linux source code, with no lines deleted. The code changes to the Linux kernel are as follows.

 arch/powerpc/include/asm/setup.h |  1 +
 arch/powerpc/lib/code-patching.c | 13 +++++++++++++
 arch/powerpc/mm/mem.c            |  2 ++
 3 files changed, 16 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index ca6f871..21daee8 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -8,6 +8,7 @@

 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 2ce6159..570c06a 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -14,12 +14,25 @@
 #include <asm/page.h>
 #include <asm/code-patching.h>
 #include <asm/uaccess.h>
+#include <asm/setup.h>
+#include <asm/sections.h>

+static inline bool is_init(unsigned int *addr)
+{
+	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
+}
+
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && is_init(addr)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+		return 0;
+	}
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3e..1efe5ca 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,6 +62,7 @@

 unsigned long long memory_limit;
+bool init_mem_is_free;

 pte_t *kmap_pte;
@@ -381,6 +382,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
