x86/kprobes: Avoid kretprobe recursion bug [Linux 3.16.72]

This Linux kernel change, "x86/kprobes: Avoid kretprobe recursion bug", is included in the Linux 3.16.72 release. It was authored by Masami Hiramatsu <mhiramat [at] kernel.org> on Sun Feb 24 01:50:49 2019 +0900. The commit for this change in the Linux stable tree is 58ef1dd (patch), backported from upstream commit b191fa9. The same upstream change may have been applied to other maintained Linux releases, and you can find all Linux releases containing changes from upstream b191fa9.

x86/kprobes: Avoid kretprobe recursion bug

commit b191fa96ea6dc00d331dcc28c1f7db5e075693a0 upstream.

Avoid a kretprobe recursion loop bug by setting a dummy
kprobe in the current_kprobe per-CPU variable.

This bug was introduced with the asm-coded trampoline
code: previously, another kprobe was used for hooking
the function-return placeholder (which contained only a nop), and
the trampoline handler was called from that kprobe.

This revives that old, lost kprobe.

With this fix, we don't see deadlock anymore.

And you can see that all inner-called kretprobes are skipped:

  event_1                                  235               0
  event_2                                19375           19612

The 1st column is the recorded count and the 2nd is the missed count.
The above shows (event_1 recorded) + (event_2 recorded) ~= (event_2 missed)
(there is some difference because the counters are racy).

Reported-by: Andrea Righi <righi.andrea@gmail.com>
Tested-by: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: c9becf58d935 ("[PATCH] kretprobe: kretprobe-booster")
Link: http://lkml.kernel.org/r/155094064889.6137.972160690963039.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

This change adds or deletes 22 lines of Linux source code, as follows.

 arch/x86/kernel/kprobes/core.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 5ddb1f8..cb6657f 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -686,11 +686,16 @@ static void __used kretprobe_trampoline_holder(void)
 
+static struct kprobe kretprobe_kprobe = {
+	.addr = (void *)kretprobe_trampoline,
+};
+
 /*
  * Called from kretprobe_trampoline
  */
 __visible __used void *trampoline_handler(struct pt_regs *regs)
 {
+	struct kprobe_ctlblk *kcb;
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head, empty_rp;
 	struct hlist_node *tmp;
@@ -700,6 +705,17 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
 	void *frame_pointer;
 	bool skipped = false;
 
+	preempt_disable();
+
+	/*
+	 * Set a dummy kprobe for avoiding kretprobe recursion.
+	 * Since kretprobe never run in kprobe handler, kprobe must not
+	 * be running at this point.
+	 */
+	kcb = get_kprobe_ctlblk();
+	__this_cpu_write(current_kprobe, &kretprobe_kprobe);
+	kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
 	INIT_HLIST_HEAD(&empty_rp);
 	kretprobe_hash_lock(current, &head, &flags);
 	/* fixup registers */
@@ -775,10 +791,9 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
 		orig_ret_address = (unsigned long)ri->ret_addr;
 		if (ri->rp && ri->rp->handler) {
 			__this_cpu_write(current_kprobe, &ri->rp->kp);
-			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
 			ri->ret_addr = correct_ret_addr;
 			ri->rp->handler(ri, regs);
-			__this_cpu_write(current_kprobe, NULL);
+			__this_cpu_write(current_kprobe, &kretprobe_kprobe);
 		}
 
 		recycle_rp_inst(ri, &empty_rp);
@@ -794,6 +809,9 @@ __visible __used void *trampoline_handler(struct pt_regs *regs)
 
 	kretprobe_hash_unlock(current, &flags);
 
+	__this_cpu_write(current_kprobe, NULL);
+	preempt_enable();
+
 	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
