KVM: arm/arm64: Only skip MMIO insn once [Linux 4.19.72]

This Linux kernel change "KVM: arm/arm64: Only skip MMIO insn once" is included in the Linux 4.19.72 release. The change was authored by Andrew Jones <drjones@redhat.com> on Thu Aug 22 13:03:05 2019 +0200. In the Linux stable tree the commit is 111d36b (patch), backported from upstream commit 2113c5f. The same upstream change may have been applied to other maintained Linux releases; you can look up all Linux releases that contain changes from upstream commit 2113c5f.

KVM: arm/arm64: Only skip MMIO insn once

[ Upstream commit 2113c5f62b7423e4a72b890bd479704aa85c81ba ]

If after an MMIO exit to userspace a VCPU is immediately run with an
immediate_exit request, such as when a signal is delivered or an MMIO
emulation completion is needed, then the VCPU completes the MMIO
emulation and immediately returns to userspace. As the exit_reason
does not get changed from KVM_EXIT_MMIO in these cases we have to
be careful not to complete the MMIO emulation again, when the VCPU is
eventually run again, because the emulation does an instruction skip
(and doing too many skips would be a waste of guest code :-) We need
to use additional VCPU state to track if the emulation is complete.
As luck would have it, we already have 'mmio_needed', which even
appears to be used in this way by other architectures already.

Fixes: 0d640732dbeb ("arm64: KVM: Skip MMIO insn after emulation")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
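
The scenario described above can be made concrete with a small userspace sketch of the VMM run loop involved. KVM_RUN, struct kvm_run and its exit_reason, mmio and immediate_exit fields are the standard KVM ioctl API; the run_vcpu() and emulate_mmio() helpers and the vcpu_fd parameter are illustrative names only and do not appear in the patch.

/*
 * Minimal VMM run-loop sketch (illustrative only).
 * emulate_mmio() stands in for real device emulation.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Toy device model: reads return zeroes, writes are ignored. */
static void emulate_mmio(struct kvm_run *run)
{
    if (!run->mmio.is_write)
        memset(run->mmio.data, 0, run->mmio.len);
}

static int run_vcpu(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            return -1;  /* errno is EINTR when immediate_exit was set */

        if (run->exit_reason != KVM_EXIT_MMIO)
            return 0;   /* other exit reasons not handled in this sketch */

        emulate_mmio(run);
        /*
         * If the VMM now sets run->immediate_exit (for example because a
         * signal is pending), the next KVM_RUN completes the in-kernel MMIO
         * emulation, skips the trapping instruction and returns with EINTR,
         * while run->exit_reason stays KVM_EXIT_MMIO. Before this fix, the
         * KVM_RUN after that would complete the same MMIO again and skip a
         * second guest instruction.
         */
    }
}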

This change adds 7 lines of Linux source code (no lines are removed). The code changes to the Linux kernel are as follows.

 virt/kvm/arm/mmio.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index 08443a1..3caee91 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -98,6 +98,12 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
    unsigned int len;
    int mask;

+   /* Detect an already handled MMIO return */
+   if (unlikely(!vcpu->mmio_needed))
+       return 0;
+
+   vcpu->mmio_needed = 0;
+
    if (!run->mmio.is_write) {
        len = run->mmio.len;
        if (len > sizeof(unsigned long))
@@ -200,6 +206,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
    run->mmio.is_write  = is_write;
    run->mmio.phys_addr = fault_ipa;
    run->mmio.len       = len;
+   vcpu->mmio_needed   = 1;

    if (!ret) {
        /* We handled the access successfully in the kernel. */
