These will be used to compile the list of VMX features for named
CPU models, and/or by the code that sets up the VMX MSRs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VMX requires 64-bit feature words for the IA32_VMX_EPT_VPID_CAP
and IA32_VMX_BASIC MSRs. (The VMX control MSRs are 64 bits wide, but
actually hold only 32 bits of feature information.)
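As a sketch of why only 32 bits carry information: a VMX control
capability MSR packs the "allowed-0" settings in its low half and the
"allowed-1" settings in its high half. The accessor below is made up
for illustration:

    /* Hypothetical sketch, not the actual QEMU code. */
    uint64_t msr = read_vmx_msr(MSR_IA32_VMX_PROCBASED_CTLS); /* made-up helper */
    uint32_t must_be_one = (uint32_t)msr;         /* allowed-0: controls that must be 1 */
    uint32_t may_be_one  = (uint32_t)(msr >> 32); /* allowed-1: controls that may be 1 */
    /* Only may_be_one describes optional features (32 bits), while
     * IA32_VMX_EPT_VPID_CAP and IA32_VMX_BASIC use all 64 bits. */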
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sometimes a CPU feature does not make sense unless another is
present. In the case of VMX features, KVM does not even allow
setting the VMX controls to some invalid combinations.
Therefore, this patch adds a generic mechanism that looks for bits
that the user explicitly cleared, and uses them to remove other bits
from the expanded CPU definition. If these dependent bits were also
explicitly *set* by the user, this will be a warning for "-cpu check"
and an error for "-cpu enforce". If not, then the dependent bits are
cleared silently, for convenience.
With VMX features, this will be used so that, for example,
"-cpu host,-rdrand" will also hide support for RDRAND exiting.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The next patch will add a different reason for filtering features, unrelated
to host feature support. Extract a new function that takes care of disabling
the features and optionally reporting them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is a problem: data cannot be accessed with the
cpu_memory_rw_debug() function while the CPU is in SMM, so it is
impossible, for example, to remotely debug a program running in SMM.
The attrs version of get_phys_page_debug() should be used to obtain the
correct address space index (asidx), so that the access is handled properly.
Here is the patch to fix it.
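The gist of the fix, as a sketch built on the existing attrs-aware hooks:

    /* Sketch: resolve the physical page together with its memory
     * transaction attributes, then select the matching address space
     * index (e.g. SMRAM while in SMM vs. normal memory). */
    MemTxAttrs attrs;
    hwaddr phys_addr = x86_cpu_get_phys_page_attrs_debug(cs, page, &attrs);
    int asidx = cpu_asidx_from_attrs(cs, attrs);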
Signed-off-by: Dmitry Poletaev <poletaev@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The CPUID bits CLZERO and XSAVEERPTR are available on AMD's Zen platform
and can be passed to the guest.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of splitting at an unaligned address, we can simply split at
4TB.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
s390 was trying to solve the limited KVM memslot size issue by abusing
memory_region_allocate_system_memory(), which breaks the API contract
that the function may be called only once.
Besides being an invalid use of the API, the approach also introduced a
migration issue, since the RAM chunks for each KVM_SLOT_MAX_BYTES are
transferred in the migration stream as separate RAMBlocks.
After discussion [1], it was agreed to break migration from older
QEMU for guests with RAM > 8TB (support for such guests is relatively
new (since 2.12) and considered not to be actually used downstream).
Migration should keep working for guests with less than 8TB, and for
guests with more than 8TB when both binaries are QEMU 4.2 or newer.
In case a user tries to migrate a guest with more than 8TB between
incompatible QEMU versions, migration should fail gracefully due to a
non-existing RAMBlock ID or a RAMBlock size mismatch.
Taking the above into account, and now that the KVM code is able to
split a too-big MemorySection into several memslots, partially revert commit
(bb223055b s390-ccw-virtio: allow for systems larger that 7.999TB)
and use kvm_set_max_memslot_size() to set the KVMSlot size to
KVM_SLOT_MAX_BYTES.
1) [PATCH RFC v2 4/4] s390: do not call memory_region_allocate_system_memory() multiple times
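In code terms, the change boils down to something like this sketch
(the constant value follows the 4TB split mentioned above):

    /* Sketch: cap each KVM memslot; generic KVM code then splits a big
     * MemorySection into several memslots of at most this size. */
    #define KVM_SLOT_MAX_BYTES (4UL * TiB)

    kvm_set_max_memslot_size(KVM_SLOT_MAX_BYTES);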
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20190924144751.24149-5-imammedo@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190927' into staging
target-arm queue:
* Fix the CBAR register implementation for Cortex-A53,
Cortex-A57, Cortex-A72
* Fix direct booting of Linux kernels on emulated CPUs
which have an AArch32 EL3 (incorrect NSACR settings
meant they could not access the FPU)
* semihosting cleanup: do more work at translate time
and less work at runtime
* remotes/pmaydell/tags/pull-target-arm-20190927:
hw/arm/boot: Use the IEC binary prefix definitions
hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
tests/tcg: add linux-user semihosting smoke test for ARM
target/arm: remove run-time semihosting checks for linux-user
target/arm: remove run time semihosting checks
target/arm: handle A-profile semihosting at translate time
target/arm: handle M-profile semihosting at translate time
tests/tcg: clean-up some comments after the de-tangling
target/arm: fix CBAR register for AArch64 CPUs
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# tests/tcg/arm/Makefile.target
Now that we do all our checking and use a common EXCP_SEMIHOST for
semihosting operations, we can make the helper code a lot simpler.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-5-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As with the other semihosting calls, we can resolve this at translate
time.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190913151845.12582-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We do this for the other semihosting calls, so we might as well do it
for M-profile as well.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For AArch64 CPUs with a CBAR register, we have two views of it:
- in AArch64 state, the CBAR_EL1 register (S3_1_C15_C3_0) returns the
full 64-bit CBAR value
- in AArch32 state, the CBAR register (cp15, opc1=1, CRn=15, CRm=3, opc2=0)
returns a 32-bit view such that:
CBAR = CBAR_EL1[31:18] 0..0 CBAR_EL1[43:32]
This commit fixes the current implementation, where:
- CBAR_EL1 was returning the 32-bit view instead of the full 64-bit
value,
- CBAR was returning a truncated 32-bit version of the full 64-bit
value, instead of the 32-bit view,
- CBAR was declared as cp15, opc1=4, CRn=15, CRm=0, opc2=0, which is
the CBAR register found in the ARMv7 Cortex-Ax CPUs, but not in
ARMv8 CPUs.
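The 32-bit view can be sketched as follows, assuming 'cbar' holds the
64-bit CBAR_EL1 value:

    /* Sketch: bits [31:18] keep their position, bits [43:32] move down
     * to [11:0], everything else reads as zero. */
    uint32_t cbar32 = (uint32_t)(cbar & 0xfffc0000u)
                    | (uint32_t)((cbar >> 32) & 0xfffu);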
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20190912110103.1417887-1-luc.michel@greensocs.com
[PMM: Added a comment about the two different kinds of CBAR]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The WHPX build has been broken since commit 12e9493df9, which removed
the "hw/boards.h" include where MachineState is declared:
$ ./configure \
--enable-hax --enable-whpx
$ make x86_64-softmmu/all
[...]
CC x86_64-softmmu/target/i386/whpx-all.o
target/i386/whpx-all.c: In function 'whpx_accel_init':
target/i386/whpx-all.c:1378:25: error: dereferencing pointer to
incomplete type 'MachineState' {aka 'struct MachineState'}
whpx->mem_quota = ms->ram_size;
^~
make[1]: *** [rules.mak:69: target/i386/whpx-all.o] Error 1
CC x86_64-softmmu/trace/generated-helpers.o
make[1]: Target 'all' not remade because of errors.
make: *** [Makefile:471: x86_64-softmmu/all] Error 2
Restore this header, partially reverting commit 12e9493df9.
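Concretely, the restored line in target/i386/whpx-all.c is just:

    #include "hw/boards.h"    /* for MachineState */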
Fixes: 12e9493df9
Reported-by: Ilias Maratos <i.maratos@gmail.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190920113329.16787-2-philmd@redhat.com>
Remove a redundant masking of 'ignore'. Once that is gone, it is
obvious that the system-mode inner test is redundant with the
outer test. Move the fpcr_exc_enable masking up and tidy.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-8-richard.henderson@linaro.org>
The kernel masks the integer overflow exception with the
software invalid exception mask. Include IOV in the set
of exception bits masked by fpcr_exc_enable.
Fixes the new float_convs test.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-7-richard.henderson@linaro.org>
Tidy the computation of the value; no functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-6-richard.henderson@linaro.org>
Since we're converting the swcr to fpcr format for exceptions,
it's trivial to add FPCR_DNZ to the set of fpcr bits overridden.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-5-richard.henderson@linaro.org>
The CONFIG_USER_ONLY adjustment blindly mashed the swcr
exception enable bits into the fpcr exception disable bits.
However, fpcr_exc_enable has already converted the exception
disable bits into the exception status bits in order to make
it easier to mask status bits at runtime.
Instead, merge the swcr enable bits with the fpcr before we
convert to status bits.
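Schematically (all helper and mask names here are made up for
illustration):

    /* Sketch: merge while both values are still in enable-bit format,
     * then convert the combined enables to status-bit format once. */
    uint64_t enables = fpcr & FPCR_ENABLE_MASK;              /* hypothetical mask */
    #ifdef CONFIG_USER_ONLY
    enables |= swcr_to_fpcr_enables(env->swcr);              /* hypothetical helper */
    #endif
    env->fpcr_exc_enable = enables_to_status_bits(enables);  /* hypothetical helper */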
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-4-richard.henderson@linaro.org>
We were setting the wrong bit. The fp_status.flush_to_zero
setting is overwritten by either the constant 1 or the value
of fpcr_flush_to_zero depending on bits within an fp insn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-3-richard.henderson@linaro.org>
This is a bit more straightforward than using a switch statement.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-2-richard.henderson@linaro.org>
Each operand can have a maximum length of 16. Make sure to prepare all
reads/writes before writing.
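The pattern, roughly (types and signatures approximate the access
infrastructure used by this series):

    /* Sketch: probe both operands up front so any access exception is
     * raised before the first byte of the destination is modified. */
    S390Access srca  = access_prepare(env, src,  src_len,  MMU_DATA_LOAD,  mmu_idx, ra);
    S390Access desta = access_prepare(env, dest, dest_len, MMU_DATA_STORE, mmu_idx, ra);
    /* ... all faults taken; now it is safe to perform the writes ... */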
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Access at most single pages and document why. Using the access helpers
might over-indicate watchpoints within the same page; I guess we can
live with that.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages. Calculate the
accessed range upfront - src is accessed right-to-left.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages. While at it,
increment the length once.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can process a maximum of 256 bytes, crossing two pages.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
The last remaining bit is padding with two bytes.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
The last remaining bit for MVC is handling destructive overlaps in a
fault-safe way.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
As we are moving between address spaces, we can use access_memmove()
without checking for destructive overlaps (especially of real storage
locations):
"Each storage operand is processed left to right. The
storage-operand-consistency rules are the same as
for MOVE (MVC), except that when the operands
overlap in real storage, the use of the common real-
storage locations is not necessarily recognized."
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Replace the fast_memmove() variants by access_memmove() variants, which
first try to probe access to all affected pages (the maximum is two pages).
Introduce access_get_byte()/access_set_byte(). We might be able to speed
up memmove in special cases even further (do single-byte accesses, use
memmove() for the remaining bytes in the page); however, we'll skip that
for now.
In MVCOS, simply always call access_memmove_as() and drop the TODO
about LAP. LAP is already handled in the MMU.
Get rid of adj_len_to_page(), which is now unused.
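Conceptually, the per-byte helpers let the fault-safe copy loop look
like this sketch (simplified):

    /* Sketch: both descriptors were prepared up front and span at most
     * two pages each, so no fault can occur mid-copy. */
    for (int i = 0; i < len; i++) {
        uint8_t byte = access_get_byte(env, &srca, i, ra);
        access_set_byte(env, &desta, i, byte, ra);
    }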
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Replace fast_memset() by access_memset(), which first tries to probe
access to all affected pages (the maximum is two). We'll use the same
mechanism for other types of accesses soon.
Only in very rare cases (especially TLB_NOTDIRTY) will we have to
fall back to the ld/st helpers. Try to speed up that case as suggested
by Richard.
We'll rework most involved handlers soon to do all accesses via new
fault-safe helpers, especially MVC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Although we basically ignore the index all the time for CONFIG_USER_ONLY,
let's simply skip all the checks and always return MMU_USER_IDX in
cpu_mmu_index() and get_mem_index().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
24-bit and 31-bit address space handling is wrong when it comes to
storing the addresses back to the registers.
While at it, read general register 0 implicitly.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Bit positions 32-55 of general register 0 must be zero.
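In LSB-0 terms (IBM bit 0 is the MSB of the 64-bit register), bits
32-55 are the upper 24 bits of the low word, so the check is a simple
mask (sketch):

    /* Sketch: raise a specification exception if any of the reserved
     * bit positions 32-55 of r0 is set. */
    if (env->regs[0] & 0xffffff00ull) {
        /* trigger PGM_SPECIFICATION */
    }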
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
... and don't perform any move in case the length is zero.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Triggered by a review comment from Richard: MVCOS also has a 32-bit
length in 24/31-bit addressing mode. Add a new helper.
Rename wrap_length() to wrap_length31().
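Sketched, the resulting pair of helpers (modeled on the description
above; may differ from the actual code):

    static inline uint64_t wrap_length32(CPUS390XState *env, uint64_t length)
    {
        if (!(env->psw.mask & PSW_MASK_64)) {
            /* 24/31-bit mode: a full 32-bit length (e.g. MVCOS) */
            length = (uint32_t)length;
        }
        return length;
    }

    static inline uint64_t wrap_length31(CPUS390XState *env, uint64_t length)
    {
        if (!(env->psw.mask & PSW_MASK_64)) {
            /* 24/31-bit mode: only 31 bits of the length are used */
            length = (uint32_t)length & 0x7fffffff;
        }
        return length;
    }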
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's stay within single pages and indicate cc=3 in case there is work
remaining. Keep the Unicode padding simple.
While reworking, properly wrap the addresses.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We have to mask off any unused bits. While at it, document what exactly
is missing.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Perform the checks documented in the PoP.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's use the new helper, which also detects destructive overlaps when
wrapping.
We'll make the remaining code (e.g., fast_memmove()) aware of wrapping
later.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's increment the length once.
While at it, clean up the comment. The memset() example is given as a
programming note in the PoP, so drop the description.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Process at most 4k bytes at a time, writing back the registers between
the accesses. The instruction is interruptible.
"For operands longer than 2K bytes, access exceptions are not
recognized for locations more than 2K bytes beyond the current location
being processed."
Note that on z/Architecture, a 2k vs. 4k access cannot be differentiated
as long as pages are not crossed; this seems to be a leftover from
ESA/390. Simply stay within single pages.
MVCL handling is quite different from MVCLE/MVCLU handling, so split up
the handlers.
Defer interrupt handling, as that will require more thought; add a TODO
for it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We'll have to zero out unused bit positions, so make sure to write the
addresses back.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We have to zero out unused bits in 24-bit and 31-bit addressing mode.
Provide a new helper.
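The masking rule such a helper has to implement, sketched:

    /* Sketch: in 24-bit mode only the low 24 address bits are valid,
     * in 31-bit mode only the low 31 bits; the rest must read as 0. */
    static inline uint64_t wrap_address(CPUS390XState *env, uint64_t a)
    {
        if (!(env->psw.mask & PSW_MASK_64)) {
            if (!(env->psw.mask & PSW_MASK_32)) {
                a &= 0x00ffffff;   /* 24-bit addressing mode */
            } else {
                a &= 0x7fffffff;   /* 31-bit addressing mode */
            }
        }
        return a;
    }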
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We use the marker "-1" for "no exception". s390_cpu_do_interrupt() might
get confused by that.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>