The SPR TBU40 is used to set the upper 40 bits of the timebase
register, present on POWER5+ and later processors.
This register can only be written by the hypervisor, and cannot be read.
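A minimal sketch of the write semantics (the helper name and its placement
are illustrative, not the actual QEMU code): a hypervisor write to TBU40
replaces the upper 40 bits of the 64-bit timebase while the lower 24 bits
keep counting:

    static uint64_t tbu40_apply(uint64_t tb, uint64_t val)
    {
        /* take the upper 40 bits from the value written,
         * keep the low 24 bits of the running timebase */
        return (val & 0xffffffffff000000ULL) | (tb & 0x0000000000ffffffULL);
    }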
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Access Segment Descriptor Register (ASDR) provides information about
the storage element when taking a hypervisor storage interrupt. When
performing nested radix address translation, this is normally the guest
real address. This register is present on POWER9 processors and later.
Implement the ASDR; note that read and write access is limited to the
hypervisor.
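A rough sketch of the intent (guest_real_address is a placeholder and the
surrounding interrupt-delivery code is elided): when the hypervisor storage
interrupt is taken during nested radix translation, the faulting guest real
address is recorded for the hypervisor to read:

    /* on delivery of the hypervisor storage interrupt (sketch only) */
    env->spr[SPR_ASDR] = guest_real_address;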
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Processor Utilisation of Resources Register (PURR) and Scaled
Processor Utilisation of Resources Register (SPURR) provide an estimate
of the resources used by the thread, present on POWER7 and later
processors.
Currently the [S]PURR registers simply count at the rate of the
timebase.
Preserve this behaviour but rework the implementation to store an offset
like the timebase rather than doing the calculation manually. Also allow
hypervisor write access to the register along with the currently
available read access.
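A minimal sketch of the offset scheme (helper and field names are
illustrative; the actual patch keeps the offset alongside the timebase
state): reads still advance at the timebase rate, and a hypervisor write
only records a delta:

    /* sketch: tb_ticks() stands in for the current timebase value */
    static uint64_t purr_read(CPUPPCState *env)
    {
        return tb_ticks(env) + env->purr_offset;
    }

    static void purr_write(CPUPPCState *env, uint64_t value)
    {
        /* store the compensation rather than recomputing the count */
        env->purr_offset = value - tb_ticks(env);
    }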
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[ clg: rebased on current ppc tree ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The virtual timebase register (VTB) is a 64-bit register which
increments at the same rate as the timebase register, present on POWER8
and later processors.
The register is able to be read/written by the hypervisor and read by
the supervisor. All other accesses are illegal.
Currently the VTB is just an alias for the timebase (TB) register.
Implement the VTB so that it can be read/written independently of the TB.
Make use of the existing method for accessing timebase facilities, whereby
a compensation is stored and used to compute the value on reads and is
updated on writes.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[ clg: rebased on current ppc tree ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds to QEMU a new CPU model for the POWER10 processor, with the
same capabilities as a POWER9 processor. The model will be extended as
support is completed.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191205184454.10722-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PPCVirtualHypervisor is an interface instance. It should never be
dereferenced. Drop the dummy type definition for extra safety, which
is the common practice with QOM interfaces.
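The resulting pattern looks roughly like the usual QOM one sketched below
(using the existing INTERFACE_CHECK() macro and the type name this
interface already has): the instance type stays an opaque forward
declaration, so any attempt to dereference it fails to compile:

    /* never defined anywhere: dereferencing it is a compile error */
    typedef struct PPCVirtualHypervisor PPCVirtualHypervisor;

    #define PPC_VIRTUAL_HYPERVISOR(obj)                         \
        INTERFACE_CHECK(PPCVirtualHypervisor, (obj),            \
                        TYPE_PPC_VIRTUAL_HYPERVISOR)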
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157589808041.21182.18121655959115011353.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This only makes sense with an emulated CPU. Don't set the bit in
CPUState::interrupt_request when using KVM, to avoid confusion.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548863423.3650476.16424649423510075159.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The power7_set_irq() and power9_set_irq() functions set this but it is
never actually used. Modern Book3s compatible CPUs are only supported
by the pnv and spapr machines. They have an interrupt controller, XICS
for POWER7/8 and XIVE for POWER9, whose models don't require tracking
IRQ input states at the CPU level.
Drop these lines to avoid confusion.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548862861.3650476.16622818876928044450.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When a CPU is reset, QEMU makes sure no interrupt is pending by clearing
CPUPPCState::pending_interrupts in ppc_cpu_reset(). In the case of a
complete machine emulation, eg. a sPAPR machine, an external interrupt
request could still be pending in KVM though, eg. an IPI. It will be
eventually presented to the guest, which is supposed to acknowledge it at
the interrupt controller. If the interrupt controller is emulated in QEMU,
either XICS or XIVE, ppc_set_irq() won't deassert the external interrupt
pin in KVM since it isn't pending anymore for QEMU. When the vCPU re-enters
the guest, the interrupt request is still pending and the vCPU will try
again to acknowledge it. This causes an infinite loop and eventually hangs
the guest.
The code has been broken since the beginning. The issue wasn't hit before
because accel=kvm,kernel-irqchip=off is an awkward setup that never got
used until recently with the LC92x IBM systems (aka, Boston).
Add a ppc_irq_reset() function to do the necessary cleanup, ie. deassert
the IRQ pins of the CPU in QEMU and most importantly the external interrupt
pin for this vCPU in KVM.
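A rough sketch of the shape of that function (assuming the existing
kvmppc_set_interrupt() helper; the exact contents may differ):

    void ppc_irq_reset(PowerPCCPU *cpu)
    {
        CPUPPCState *env = &cpu->env;

        /* clear the IRQ input state tracked by the QEMU CPU model... */
        env->irq_input_state = 0;

        /* ...and deassert the external interrupt pin for this vCPU in KVM */
        kvmppc_set_interrupt(cpu, PPC_INTERRUPT_EXT, 0);
    }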
Reported-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548861740.3650476.16879693165328764758.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Make the kvmppc_hint_smt_possible() hint-append helper well formed:
rename errp to errp_in, as it is an IN-parameter here (which is unusual
for errp), and rename the function to kvmppc_error_append_*_hint().
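The resulting shape, sketched with an illustrative name in place of the
"*" wildcard and an illustrative hint text (error_append_hint() is the
standard QAPI helper):

    /* errp_in is an IN-parameter: the error already exists,
     * we only attach a hint to it */
    static void kvmppc_error_append_smt_possible_hint(Error *const *errp_in)
    {
        error_append_hint(errp_in, "Try a different SMT mode?\n");
    }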
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20191127191434.20945-1-vsementsov@virtuozzo.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20191216-1' into staging
target-arm queue:
* Add support for Cortex-M7 CPU
* exynos4210_gic: Suppress gcc9 format-truncation warnings
* aspeed: Various minor bug fixes and improvements
* aspeed: Add support for the tacoma-bmc board
* Honour HCR_EL2.TID1 and .TID2 trapping requirements
* Handle trapping to EL2 of AArch32 VMRS instructions
* Handle AArch32 CP15 trapping via HSTR_EL2
* Add support for missing Jazelle system registers
* arm/arm-powerctl: set NSACR.{CP11, CP10} bits in arm_set_cpu_on
* Add support for DC CVAP & DC CVADP instructions
* Fix assertion when SCR.NS is changed in Secure-SVC &c
* enable SHPC native hot plug in arm ACPI
# gpg: Signature made Mon 16 Dec 2019 11:08:07 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20191216-1: (34 commits)
target/arm: ensure we use current exception state after SCR update
hw/arm/virt: Simplify by moving the gic in the machine state
hw/arm/acpi: enable SHPC native hot plug
hw/arm/acpi: simplify AML bit and/or statement
hw/arm/sbsa-ref: Simplify by moving the gic in the machine state
target/arm: Add support for DC CVAP & DC CVADP ins
migration: ram: Switch to ram block writeback
Memory: Enable writeback for given memory region
tcg: cputlb: Add probe_read
arm/arm-powerctl: set NSACR.{CP11, CP10} bits in arm_set_cpu_on()
target/arm: Add support for missing Jazelle system registers
target/arm: Handle AArch32 CP15 trapping via HSTR_EL2
target/arm: Handle trapping to EL2 of AArch32 VMRS instructions
target/arm: Honor HCR_EL2.TID1 trapping requirements
target/arm: Honor HCR_EL2.TID2 trapping requirements
aspeed: Change the "nic" property definition
aspeed: Change the "scu" property definition
gpio: fix memory leak in aspeed_gpio_init()
aspeed: Add support for the tacoma-bmc board
aspeed: Remove AspeedBoardConfig array and use AspeedMachineClass
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A write to the SCR can change the effective EL by dropping the system
from secure to non-secure mode. However if we use a cached current_el
from before the change we'll rebuild the flags incorrectly. To fix
this we introduce the ARM_CP_NEWEL CP flag to indicate the new EL
should be used when recomputing the flags.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191212114734.6962-1-alex.bennee@linaro.org
Cc: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191209143723.6368-1-alex.bennee@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ARMv8.2 introduced support for Data Cache Clean instructions to the
point of persistence (PoP) - DC CVAP - and to the point of deep
persistence (PoDP) - DC CVADP. Both specify conceptual points in a
memory system where all writes that reach them are considered persistent.
The support provided here treats the two points as the same, so there is
no distinction between them. If neither is available (there is no backing
store for the given memory), both instructions result in a Data Cache
Clean up to the point of coherency. Otherwise a sync for the specified
range is performed.
Signed-off-by: Beata Michalska <beata.michalska@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191121000843.24844-5-beata.michalska@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.
This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.
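The change itself is small; roughly (using the cp15.nsacr field QEMU
already models, where bits 10 and 11 are CP10 and CP11):

    /* sketch: allow Non-secure access to the FPU (CP10/CP11) */
    target_cpu->env.cp15.nsacr |= (1 << 10) | (1 << 11);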
Fixes: fc1120a7f5
Cc: qemu-stable@nongnu.org
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
[PMM: added clarifying para to commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191201122018.25808-6-maz@kernel.org
[PMM: moved ARMCPRegInfo array to file scope, marked it
'static global', moved new condition down in
register_cp_regs_for_features() to go with other feature
things rather than up with the v6/v7/v8 stuff]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.
To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag
that gets set whenever HSTR_EL2 is non-zero and QEMU is translating a
context where this trap has a chance to apply, and only generate
the extra access check if the hypervisor is actively using this feature.
Tested with a hand-crafted KVM guest accessing CBAR.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191201122018.25808-5-maz@kernel.org
[PMM: use is_a64(); fix comment syntax]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.
Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191201122018.25808-4-maz@kernel.org
[PMM: move helper declaration to helper.h; make it
TCG_CALL_NO_WG]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HCR_EL2.TID1 mandates that access from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR, TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).
Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191201122018.25808-3-maz@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HCR_EL2.TID2 mandates that access from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1, CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.
Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.
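The trap ends up expressed as an .accessfn along these lines (function
name illustrative; arm_hcr_el2_eff() and HCR_TID2 are existing QEMU
definitions):

    static CPAccessResult access_tid2(CPUARMState *env,
                                      const ARMCPRegInfo *ri, bool isread)
    {
        /* trap EL1 accesses to the cache ID registers to EL2 */
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID2)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }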
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191201122018.25808-2-maz@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is derived from cortex-m4 description, adding DP support and FPv5
instructions with the corresponding flags in isar and mvfr2.
Checked that it could successfully execute
vrinta.f32 s15, s15
while cortex-m4 emulation rejects it with "illegal instruction".
Signed-off-by: Christophe Lyon <christophe.lyon@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20191025090841.10299-1-christophe.lyon@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We neglected to clean up pending interrupts and emergency signals;
fix that.
Message-Id: <20191206135404.16051-1-cohuck@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
qmp_query_cpu_definitions() passes @errp to get_max_cpu_model(), then
frees any error it gets back. This effectively ignores errors.
Dereferencing @errp is wrong; see the big comment in error.h. Passing
@errp is also wrong, because it works only as long as @errp is neither
@error_fatal nor @error_abort. Introduced in commit 38cba1f4d8
"s390x: return unavailable features via query-cpu-definitions".
No caller actually passes such @errp values.
Fix anyway: simply pass NULL to get_max_cpu_model().
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191204093625.14836-16-armbru@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
cpu_model_from_info() is a helper for qmp_query_cpu_model_expansion(),
qmp_query_cpu_model_comparison(), qmp_query_cpu_model_baseline(). It
dereferences @errp when the visitor or the QOM setter fails. That's
wrong; see the big comment in error.h. Introduced in commit
137974cea3 's390x/cpumodel: implement QMP interface
"query-cpu-model-expansion"'.
Its three callers have the same issue. Introduced in commit
4e82ef0502 's390x/cpumodel: implement QMP interface
"query-cpu-model-comparison"' and commit f1a47d08ef 's390x/cpumodel:
implement QMP interface "query-cpu-model-baseline"'.
No caller actually passes null.
Fix anyway: splice in a local Error *err, and error_propagate().
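The pattern being spliced in, sketched on a hypothetical string property
setter (visit_type_str() and error_propagate() are the standard QAPI
calls):

    char *value = NULL;
    Error *err = NULL;

    visit_type_str(v, name, &value, &err);
    if (err) {
        /* hand the local error to the caller instead of peeking at *errp */
        error_propagate(errp, err);
        return;
    }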
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191204093625.14836-15-armbru@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
get_max_cpu_model() dereferences @errp when
kvm_s390_get_host_cpu_model() fails, apply_cpu_model() dereferences it
when kvm_s390_apply_cpu_model() fails, and s390_realize_cpu_model()
dereferences it when get_max_cpu_model() or check_compatibility()
fail. That's wrong; see the big comment in error.h. All three
introduced in commit 80560137cf "s390x/cpumodel: check and apply the
CPU model".
No caller actually passes null.
Fix anyway: splice in a local Error *err, and error_propagate().
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191204093625.14836-14-armbru@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
s390x-cpu property setters set_feature() and set_feature_group()
dereference @errp when the visitor fails. That's wrong; see the big
comment in error.h. Introduced in commit 0754f60429 "s390x/cpumodel:
expose features and feature groups as properties".
No caller actually passes null.
Fix anyway: splice in a local Error *err, and error_propagate().
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191204093625.14836-13-armbru@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As it turns out we need to clear the ri controls and PSW enablement
bit to be architecture compliant.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20191203132813.2734-4-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
It defaults to returning 0 anyway and that return value is not
necessary, as 0 is also the default rc that the caller would return.
While doing that we can simplify the logic a bit and return early if
we inject a PGM exception.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20191129091713.4582-1-frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's improve readability by:
* Using constants for the subcodes
* Moving parameter checking into a function
* Removing the subcode > 6 check, as the default case catches that
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191127175046.4911-6-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's also move the clear reset function into the reset handler.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20191127175046.4911-5-frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's move the initial reset into the reset handler and clean up
afterwards.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191128083723.11937-1-frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's start moving the cpu reset functions into a single function with
a switch/case, so we can later use fallthroughs and share more code
between resets.
This patch introduces the reset function by renaming cpu_reset().
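The direction this is heading can be sketched as a single entry point
with fallthrough cases (enum and function names illustrative):

    static void s390_cpu_reset(CPUState *s, cpu_reset_type type)
    {
        switch (type) {
        case S390_CPU_RESET_CLEAR:
            /* clear reset wipes the widest range of state */
            /* fall through */
        case S390_CPU_RESET_INITIAL:
            /* initial reset additionally clears PSW, prefix, registers, ... */
            /* fall through */
        case S390_CPU_RESET_NORMAL:
            /* normal reset touches only the state shared by all reset types */
            break;
        }
    }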
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191127175046.4911-3-frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If KVM does not support the VMX feature (nested=0), kvm_vmx_basic
cannot get the right value from the MSR_IA32_VMX_BASIC register, which
makes QEMU core dump when it does KVM_SET_MSRS.
The coredump info:
error: failed to set MSR 0x480 to 0x0
kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20191206071111.12128-1-yang.zhong@intel.com>
Reported-by: Catherine Ho <catherine.hecx@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Previous implementation in hvf_inject_interrupts() would always inject
VMCS_INTR_T_SWINTR even when VMCS_INTR_T_HWINTR was required. Now
correctly determine when VMCS_INTR_T_HWINTR is appropriate versus
VMCS_INTR_T_SWINTR.
Make sure to clear ins_len and has_error_code when ins_len isn't
valid and error_code isn't set.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <bf8d945ea1b423786d7802bbcf769517d1fd01f8.1575330463.git.dirty@apple.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.
This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.
QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.
Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.
Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Message-id: 20191123115618.29230-1-maz@kernel.org
[PMM: added missing accessfn line for ID_AA4PFR2_EL1_RESERVED;
changed names of access functions to include _tid3]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARMv8 ARM states that when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.
Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.
Instead, check for the running EL and return the physical bits
if not running in a virtualized context.
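A sketch of the shape of the fix in the ISR_EL1 read handler (function
name illustrative; the helpers and flag bits used are existing QEMU
definitions): the HCR_EL2 IMO routing is only honoured when a virtual
interrupt could actually be taken, i.e. at non-secure EL1:

    static uint64_t isr_read_sketch(CPUARMState *env)
    {
        CPUState *cs = env_cpu(env);
        uint64_t hcr_el2 = arm_hcr_el2_eff(env);
        uint64_t ret = 0;
        bool allow_virt = (arm_current_el(env) == 1 &&
                           !arm_is_secure_below_el3(env));

        if (allow_virt && (hcr_el2 & HCR_IMO)) {
            if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
                ret |= CPSR_I;   /* virtual IRQ pending */
            }
        } else {
            if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
                ret |= CPSR_I;   /* physical IRQ pending */
            }
        }
        /* FIQ is handled the same way with HCR_FMO / CPU_INTERRUPT_VFIQ /
         * CPU_INTERRUPT_FIQ / CPSR_F */
        return ret;
    }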
Fixes: 636540e9c4
Cc: qemu-stable@nongnu.org
Reported-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191122135833.28953-1-maz@kernel.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the PushStack() pseudocode in the armv7m RM,
bit 4 of the LR should be set to NOT(CONTROL.FPCA) when
an FPU is present. The current implementation does this for
armv8, but not for armv7. This patch makes the existing
logic applicable to both code paths.
Signed-off-by: Jean-Hugues Deschenes <jean-hugues.deschenes@ossiaco.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
More accurately match SDM when setting CR0 and PDPTE registers.
Clear PDPTE registers when resetting vcpus.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <464adb39c8699fb8331d8ad6016fc3e2eff53dbc.1574625592.git.dirty@apple.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In real x86 processors, the REX prefix must come after legacy prefixes.
REX before legacy is ignored. Update the HVF emulation code to properly
handle this. Fix some spelling errors in constants. Fix some decoder
table initialization issues found by Coverity.
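The prefix-ordering rule can be sketched generically as follows (the
struct and helper names - fetch8(), is_legacy_prefix(),
apply_legacy_prefix() - are placeholders, not the hvf decoder's API):

    /* scan prefixes; returns the opcode byte, leaves the effective REX in *rex */
    static uint8_t scan_prefixes(struct decoder *dec, bool long_mode,
                                 uint8_t *rex)
    {
        *rex = 0;
        for (;;) {
            uint8_t b = fetch8(dec);
            if (is_legacy_prefix(b)) {
                *rex = 0;          /* REX before a legacy prefix is ignored */
                apply_legacy_prefix(dec, b);
            } else if (long_mode && (b & 0xf0) == 0x40) {
                *rex = b;          /* only counts if it immediately precedes the opcode */
            } else {
                return b;          /* the opcode byte */
            }
        }
    }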
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <eff30ded8307471936bec5d84c3b6efbc95e3211.1574625592.git.dirty@apple.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The existing code in QEMU's HVF support that attempts to synchronize the
TSC across multiple cores is not sufficient: the TSC value on other cores
can go backwards. Until the implementation is fixed, remove the calls to
hv_vm_sync_tsc() and pass the TSC through to the guest OS.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <44c4afd2301b8bf99682b229b0796d84edd6d66f.1574625592.git.dirty@apple.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If an area is non-RAM and non-ROMD, then remove mappings so accesses
will trap and can be emulated. Change hvf_find_overlap_slot() to take
a size instead of an end address: it wouldn't return a slot because
callers would pass the same address for start and end. Don't always
map the area as read/write/execute; respect the area flags.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <1d8476c8f86959273fbdf23c86f8b4b611f5e2e1.1574625592.git.dirty@apple.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They are present in client (Core) Skylake but were pasted incorrectly into the server
SKUs.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We have been trying to avoid adding new aliases for CPU model
versions, but in the case of changes in defaults introduced by
the TAA mitigation patches, the aliases might help avoid user
confusion when applying host software updates.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The MSR_IA32_TSX_CTRL MSR can be used to hide TSX (also known as the
Trusty Side-channel Extension). By virtualizing the MSR, KVM guests
can disable TSX and avoid paying the price of mitigating TSX-based
attacks on microarchitectural side channels.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Simply moving the non-stub helper_v7m_mrs/msr outside of
!CONFIG_USER_ONLY is not an option, because of all of the
other system-mode helpers that are called.
But we can split out a few subroutines to handle the few
EL0 accessible registers without duplicating code.
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191118194916.3670-1-richard.henderson@linaro.org
[PMM: deleted now-redundant comment; added a default case
to switch in v7m_msr helper]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Armv8-A removes UNPREDICTABLE for R13 for these cases.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191117090621.32425-3-richard.henderson@linaro.org
[PMM: changed ENABLE_ARCH_8 checks to check a new bool 'v8a',
since these cases are still UNPREDICTABLE for v8M]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There was too much cut and paste between ldrexd and strexd,
as ldrexd does prohibit its two output registers being the same.
Fixes: af28822899
Reported-by: Michael Goffioul <michael.goffioul@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191117090621.32425-2-richard.henderson@linaro.org
Reviewed-by: Robert Foley <robert.foley@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>