KVM: VMX: Move RSB stuffing to before the first RET after VM-Exit [Linux 5.1]

This Linux kernel change "KVM: VMX: Move RSB stuffing to before the first RET after VM-Exit" is included in the Linux 5.1 release. It was authored by Rick Edgecombe <rick.p.edgecombe@intel.com> on Fri Apr 26 17:23:58 2019 -0700. The commit in the Linux stable tree is f2fde6a.

KVM: VMX: Move RSB stuffing to before the first RET after VM-Exit

The not-so-recent change to move VMX's VM-Exit handling to a dedicated
"function" unintentionally exposed KVM to a speculative attack from the
guest by executing a RET prior to stuffing the RSB.  Make RSB stuffing
happen immediately after VM-Exit, before any unpaired returns.

Alternatively, the VM-Exit path could postpone full RSB stuffing until
its current location by stuffing the RSB only as needed, or by avoiding
returns in the VM-Exit path entirely, but both alternatives are beyond
ugly since vmx_vmexit() has multiple indirect callers (by way of
vmx_vmenter()).  And putting the RSB stuffing immediately after VM-Exit
makes it much less likely to be re-broken in the future.

Note, the cost of PUSH/POP could be avoided in the normal flow by
pairing the PUSH RAX with the POP RAX in __vmx_vcpu_run() and adding
a POP to nested_vmx_check_vmentry_hw(), but such a weird/subtle
dependency is likely to cause problems in the long run, and PUSH/POP
will take all of a few cycles, which is peanuts compared to the number
of cycles required to fill the RSB.

Fixes: 453eafbe65f7 ("KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines")
Reported-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
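
To make the commit message's description of the attack concrete, here is a toy model (illustration only, not part of the patch) that treats the RSB as a small circular buffer of predicted return targets: a CALL pushes an entry, a RET pops the newest one and speculates to it. The 32-entry depth matches the kernel's RSB_CLEAR_LOOPS; the address values are made-up placeholders.

#include <stdio.h>

#define RSB_DEPTH 32                      /* matches RSB_CLEAR_LOOPS */

static unsigned long rsb[RSB_DEPTH];      /* circular buffer of predictions */
static unsigned int rsb_top;              /* next slot to fill */

/* A CALL pushes a predicted return target. */
static void rsb_call(unsigned long target)
{
    rsb[rsb_top % RSB_DEPTH] = target;
    rsb_top++;
}

/* A RET pops the newest prediction and speculates to it. */
static unsigned long rsb_ret(void)
{
    rsb_top--;
    return rsb[rsb_top % RSB_DEPTH];
}

int main(void)
{
    /* Guest CALLs leave attacker-chosen predictions behind at VM-Exit. */
    rsb_call(0xbadc0deUL);

    /* Before the fix: vmx_vmexit() executed RET immediately, so the host's
     * first return was predicted from the guest's entry. */
    printf("unstuffed RET predicted to %#lx\n", rsb_ret());

    /* With the fix: 32 benign entries are pushed right after VM-Exit,
     * overwriting every slot before any host RET runs (0x1000 stands in
     * for the speculation-trap address). */
    rsb_call(0xbadc0deUL);
    for (int i = 0; i < RSB_DEPTH; i++)
        rsb_call(0x1000UL);
    printf("stuffed RET predicted to   %#lx\n", rsb_ret());
    return 0;
}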

There are 15 lines of Linux source code added/deleted in this change. The code changes to the Linux kernel are as follows.

 arch/x86/kvm/vmx/vmenter.S | 12 ++++++++++++
 arch/x86/kvm/vmx/vmx.c     |  3 ---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 7b27273..d4cb194 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -3,6 +3,7 @@
 #include <asm/asm.h>
 #include <asm/bitsperlong.h>
 #include <asm/kvm_vcpu_regs.h>
+#include <asm/nospec-branch.h>

 #define WORD_SIZE (BITS_PER_LONG / 8)

@@ -77,6 +78,17 @@ ENDPROC(vmx_vmenter)
  * referred to by VMCS.HOST_RIP.
  */
 ENTRY(vmx_vmexit)
+#ifdef CONFIG_RETPOLINE
+   ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE
+   /* Preserve guest's RAX, it's used to stuff the RSB. */
+   push %_ASM_AX
+
+   /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+   FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+
+   pop %_ASM_AX
+.Lvmexit_skip_rsb:
+#endif
    ret
 ENDPROC(vmx_vmexit)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bcdd69d..0c955bb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6462,9 +6462,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)

    x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);

-   /* Eliminate branch target predictions from guest mode */
-   vmexit_fill_RSB();
-
    /* All fields are clean at this point */
    if (static_branch_unlikely(&enable_evmcs))
        current_evmcs->hv_clean_fields |=
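
For context, FILL_RETURN_BUFFER (pulled in via the new <asm/nospec-branch.h> include) refills the RSB by running a short loop of CALLs whose return addresses point at harmless "speculation traps", then rewinding the stack pointer so the pushed return addresses are discarded architecturally. Below is a rough user-space sketch of that pattern in GCC inline asm; it only loosely mirrors the kernel macro's expansion, the function name is invented, and the 32-entry count follows RSB_CLEAR_LOOPS.

/* Rough sketch of the RSB-stuffing CALL loop (x86-64, GCC inline asm). */
static void rsb_stuff_sketch(void)
{
    unsigned long loops = 16;        /* two CALLs per iteration -> 32 entries */

    asm volatile(
        "sub    $128, %%rsp\n\t"     /* stay clear of the user-space red zone */
        "1:\n\t"
        "call   2f\n"
        "3:     pause; lfence; jmp 3b\n"  /* speculation trap (never reached architecturally) */
        "2:\n\t"
        "call   4f\n"
        "5:     pause; lfence; jmp 5b\n"  /* speculation trap (never reached architecturally) */
        "4:\n\t"
        "dec    %0\n\t"
        "jnz    1b\n\t"
        "add    $384, %%rsp\n\t"     /* 128 + 32 pushed return addresses * 8 */
        : "+r" (loops)
        :
        : "memory");
}

int main(void)
{
    rsb_stuff_sketch();              /* every RSB slot now predicts a benign trap */
    return 0;
}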
