From 60e21a0ef54cd836b9eb22c7cb396989b5b11648 Mon Sep 17 00:00:00 2001
From: Will Deacon
Date: Thu, 29 Sep 2016 12:37:01 +0100
Subject: [PATCH 1/2] arm64: KVM: Take S1 walks into account when determining S2 write faults

The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether
the page table walker was reading or writing the entry.

For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in
livelock, where a page table walk generated by a load instruction
attempts to set the access flag in the stage 1 descriptor, but fails to
trigger CoW in the host since only a read fault is reported.

This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to take
into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.

We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.

Cc:
Cc: Julien Grall
Reviewed-by: Marc Zyngier
Reviewed-by: Christoffer Dall
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index fd9d5fd788f5..f5ea0ba70f07 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -178,11 +178,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
-static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
-{
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
-}
-
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
@@ -203,6 +198,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
+static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+}
+
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
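For illustration, the decision this patch changes can be modelled as a
standalone C program. The bit positions below match the ESR_EL2 ISS
encoding for data aborts (WnR is bit 6, S1PTW is bit 7); dabt_iswrite()
here is a hypothetical stand-in operating on a raw HSR/ESR value, not
the kernel helper itself:

#include <stdbool.h>
#include <stdio.h>

#define ESR_ELx_WNR	(1UL << 6)	/* WnR: Write-not-Read */
#define ESR_ELx_S1PTW	(1UL << 7)	/* abort taken on a stage 1 walk */

/* Stand-in for the patched kvm_vcpu_dabt_iswrite(): a fault is treated
 * as a write either when WnR says so, or when it was taken on a stage 1
 * walk (which may be updating AF/DBM bits). */
static bool dabt_iswrite(unsigned long hsr)
{
	return (hsr & ESR_ELx_WNR) || (hsr & ESR_ELx_S1PTW);
}

int main(void)
{
	/* A load whose stage 1 walk had to set the access flag: WnR is
	 * clear, but S1PTW is set. Without the S1PTW check this would
	 * be reported as a read, CoW would never be triggered, and the
	 * guest would livelock retrying the same walk. */
	unsigned long hsr = ESR_ELx_S1PTW;

	printf("fault handled as a %s\n",
	       dabt_iswrite(hsr) ? "write" : "read");
	return 0;
}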
From c8ea0395ff3bd5f0fd3c3aa69b383b2d1231e9fd Mon Sep 17 00:00:00 2001
From: Marc Zyngier
Date: Thu, 20 Oct 2016 10:17:21 +0100
Subject: [PATCH 2/2] arm/arm64: KVM: Map the BSS at HYP

When used with a compiler that doesn't implement "asm goto" (such as
the AArch64 port of GCC 4.8), jump labels generate a memory access to
find out about the value of the key (instead of just patching the
code). The key itself is likely to be stored in the BSS.

This is perfectly fine, except that we don't map the BSS at HYP,
leading to an exploding kernel at the first access. The obvious fix is
simply to map the BSS there (which should have been done a long while
ago, but hey...).

Reported-by: Eric Auger
Tested-by: Eric Auger
Signed-off-by: Marc Zyngier
---
 arch/arm/kvm/arm.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 03e9273f1876..08bb84f2ad58 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1312,6 +1312,13 @@ static int init_hyp_mode(void)
 		goto out_err;
 	}
 
+	err = create_hyp_mappings(kvm_ksym_ref(__bss_start),
+				  kvm_ksym_ref(__bss_stop), PAGE_HYP_RO);
+	if (err) {
+		kvm_err("Cannot map bss section\n");
+		goto out_err;
+	}
+
 	/*
 	 * Map the Hyp stack pages
 	 */
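To see why the unmapped BSS bites here, consider the jump-label
fallback path when "asm goto" is unavailable. The sketch below is a
standalone approximation, not the kernel's jump_label implementation:
struct static_key, demo_key and key_enabled() are illustrative
stand-ins, but the essential point matches the failure mode described
above: a zero-initialised key object lands in .bss, and the fallback
performs a real load from it.

#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-in for the kernel's struct static_key. */
struct static_key {
	int enabled;
};

/* Zero-initialised file-scope object: the linker places it in .bss. */
static struct static_key demo_key;

/* Fallback when the compiler lacks "asm goto": instead of a patched
 * branch, the key is tested with an ordinary load. At HYP, this load
 * targets a .bss address, which faults if .bss is not mapped there. */
static bool key_enabled(const struct static_key *key)
{
	return key->enabled > 0;
}

int main(void)
{
	printf("key is %s\n", key_enabled(&demo_key) ? "on" : "off");
	return 0;
}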