Commit Graph

2369 Commits

Author SHA1 Message Date
Frédéric Pétrot fc313c6434 exec/memop: Adding signedness to quad definitions
Renaming defines for quad in their various forms so that their signedness is
now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing to
keep assignments aligned.
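As an illustration only (not the patch itself), the shape of signedness-explicit
quad definitions built from the existing MO_64 size and MO_SIGN flag; the exact
macro names in QEMU may differ:

    /* Illustrative sketch: make the signedness of 8-byte ("quad") memory
     * ops explicit instead of implied.  Names shown are indicative only. */
    #define MO_UQ  (MO_64)             /* unsigned quad */
    #define MO_SQ  (MO_64 | MO_SIGN)   /* sign-extending quad */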

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-01-08 15:46:10 +10:00
Idan Horowitz b7469ef92a target/arm: Add missing FEAT_TLBIOS instructions
Some of the instructions added by the FEAT_TLBIOS extension were forgotten
when the extension was originally added to QEMU.

Fixes: 7113d61850 ("target/arm: Add support for FEAT_TLBIOS")
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211231103928.1455657-1-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-01-07 17:07:57 +00:00
Peter Maydell 52a9f60935 target/arm: Correct calculation of tlb range invalidate length
The calculation of the length of TLB range invalidate operations
in tlbi_aa64_range_get_length() is incorrect in two ways:
 * the NUM field is 5 bits, but we read only 4 bits
 * we miscalculate the page_shift value, because of an
   off-by-one error:
    TG 0b00 is invalid
    TG 0b01 is 4K granule size == 4096 == 2^12
    TG 0b10 is 16K granule size == 16384 == 2^14
    TG 0b11 is 64K granule size == 65536 == 2^16
   so page_shift should be (TG - 1) * 2 + 12

Thanks to the bug report submitter Cha HyunSoo for identifying
both these errors.
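As an illustration, a small self-contained sketch of the corrected arithmetic;
the register field positions used below are from my reading of the architecture
and are shown only for demonstration:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: read NUM as a 5-bit field and derive page_shift
     * as (TG - 1) * 2 + 12, per the description above. */
    int main(void)
    {
        uint64_t value = 0x828000000000ULL;     /* made-up TLBI range value */
        unsigned num = (value >> 39) & 0x1f;    /* 5 bits, not 4 */
        unsigned tg  = (value >> 46) & 0x3;     /* 0b01/0b10/0b11 are valid */
        unsigned page_shift = (tg - 1) * 2 + 12;
        printf("NUM=%u TG=%u page_shift=%u\n", num, tg, page_shift);
        return 0;
    }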

Fixes: 84940ed825 ("target/arm: Add support for FEAT_TLBIRANGE")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/734
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20211130173257.1274194-1-peter.maydell@linaro.org
2021-12-15 10:35:26 +00:00
Richard Henderson 8dc89f1faa target/arm: Suppress bp for exceptions with more priority
Both single-step and pc alignment faults have priority over
breakpoint exceptions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 7055fe4baf target/arm: Assert thumb pc is aligned
Misaligned thumb PC is architecturally impossible.
Assert is better than proceeding, in case we've missed
something somewhere.

Expand a comment about aligning the pc in gdbstub.
Fail an incoming migrate if a thumb pc is misaligned.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson ee03027a2c target/arm: Take an exception if PC is misaligned
For A64, any input to an indirect branch can cause this.

For A32, many indirect branch paths force the branch to be aligned,
but BXWritePC does not.  This includes the BX instruction but also
other interworking changes to PC.  Prior to v8, this case is UNDEFINED.
With v8, this is CONSTRAINED UNPREDICTABLE and may either raise an
exception or force align the PC.

We choose to raise an exception because we have the infrastructure,
it makes the generated code for gen_bx simpler, and it has the
possibility of catching more guest bugs.
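For illustration only, the alignment rule being enforced (a sketch, not the
generated-code change itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: A64 and A32 instructions are 4-byte aligned,
     * Thumb instructions 2-byte aligned; a PC failing this check now
     * raises a PC alignment exception instead of being executed. */
    bool pc_is_aligned(uint64_t pc, bool aarch64, bool thumb)
    {
        uint64_t align = (!aarch64 && thumb) ? 2 : 4;
        return (pc & (align - 1)) == 0;
    }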

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 936a6b8603 target/arm: Split compute_fsr_fsc out of arm_deliver_fault
We will reuse this section of arm_deliver_fault for
raising pc alignment faults.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 485088f742 target/arm: Advance pc for arch single-step exception
The size of the code covered by a TranslationBlock cannot be 0;
this is checked via assert in tb_gen_code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 258a00e5a4 target/arm: Split arm_pre_translate_insn
Create arm_check_ss_active and arm_check_kernelpage.

Reverse the order of the tests.  While it doesn't matter in practice,
because only user-only has a kernel page and user-only never sets
ss_active, ss_active has priority over execution exceptions, so it is
best to keep the checks in the proper order.
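One plausible shape for the split, purely as an illustration; the bodies are
placeholders and the real code may call the two checks directly:

    #include <stdbool.h>

    /* Illustrative only: single-step is checked before the kernel-page
     * check because it has higher priority. */
    typedef struct DisasContext DisasContext;

    bool arm_check_ss_active(DisasContext *dc);
    bool arm_check_kernelpage(DisasContext *dc);

    static bool arm_pre_translate_insn(DisasContext *dc)
    {
        if (arm_check_ss_active(dc)) {   /* higher priority: test first */
            return true;
        }
        return arm_check_kernelpage(dc);
    }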

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 0bb72bca7c target/arm: Hoist pc_next to a local variable in thumb_tr_translate_insn
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson bf9dd2aa5f target/arm: Hoist pc_next to a local variable in arm_tr_translate_insn
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Richard Henderson 3b39ba360d target/arm: Hoist pc_next to a local variable in aarch64_tr_translate_insn
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15 10:35:26 +00:00
Peter Maydell 4825eaae4f Revert "arm: tcg: Adhere to SMCCC 1.3 section 5.2"
This reverts commit 9fcd15b919.

This change turns out to cause regressions, for instance on the
imx6ul boards as described here:
https://lore.kernel.org/qemu-devel/c8b89685-7490-328b-51a3-48711c140a84@tribudubois.net/

The primary cause of that regression is that the guest code running
at EL3 expects SMCs (not related to PSCI) to do what they would if
our PSCI emulation was not present at all, but after this change
they instead set a value in R0/X0 and continue.

We could fix that by a refactoring that allowed us to only turn on
the PSCI emulation if we weren't booting the guest at EL3, but there
is a more tangled problem with the highbank board, which:
 (1) wants to enable PSCI emulation
 (2) has a bit of guest code that it wants to run at EL3 and
     to perform SMC calls that trap to the monitor vector table:
     this is the boot stub code that is written to memory by
     arm_write_secure_board_setup_dummy_smc() and which the
     highbank board enables by setting bootinfo->secure_board_setup

We can't satisfy both of those and also have the PSCI emulation
handle all SMC instruction executions regardless of function
identifier value.

This is too tricky to try to sort out before 6.2 is released;
revert this commit so we can take the time to get it right in
the 7.0 release.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20211119163419.557623-1-peter.maydell@linaro.org
2021-11-22 13:41:48 +00:00
Richard Henderson cc23377516 Add nuvoton sd module for NPCM7XX
Merge remote-tracking branch 'remotes/rth/tags/pull-arm-20211102-2' into staging

Add nuvoton sd module for NPCM7XX
Add gdb-xml for MVE
More uses of tcg_constant_* in target/arm
Fix parameter naming for default-bus-bypass-iommu
Ignore cache operations to mmio in HVF

* remotes/rth/tags/pull-arm-20211102-2:
  hvf: arm: Ignore cache operations on MMIO
  hw/arm/virt: Rename default_bus_bypass_iommu
  target/arm: Use tcg_constant_i32() in gen_rev16()
  target/arm: Use tcg_constant_i64() in do_sat_addsub_64()
  target/arm: Use the constant variant of store_cpu_field() when possible
  target/arm: Introduce store_cpu_field_constant() helper
  target/arm: Use tcg_constant_i32() in op_smlad()
  target/arm: Advertise MVE to gdb when present
  tests/qtest/libqos: add SDHCI commands
  hw/arm: Attach MMC to quanta-gbs-bmc
  hw/arm: Add Nuvoton SD module to board
  hw/sd: add nuvoton MMC

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-03 09:31:25 -04:00
Alexander Graf 5fd6a3e236 hvf: arm: Ignore cache operations on MMIO
Apple's Hypervisor.Framework forwards cache operations as MMIO traps
into user space.  For MMIO, however, these have no meaning: there is no
cache attached to them.

So let's just treat cache data exits as nops.

This fixes OpenBSD booting as guest.
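One plausible shape for such a fix, shown only as an illustration;
syndrome_is_cache_op() and hvf_skip_insn() are hypothetical stand-ins, not
the real QEMU/HVF API:

    /* Illustrative fragment only: a data-abort exit that is really a
     * cache maintenance operation on MMIO has nothing to emulate, so
     * skip the instruction and treat it as a nop. */
    if (syndrome_is_cache_op(syndrome)) {   /* hypothetical predicate */
        hvf_skip_insn(cpu);                 /* hypothetical: advance the PC */
        break;
    }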

Reported-by: AJ Barris <AwlsomeAlex@github.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mark Kettenis <kettenis@openbsd.org>
Reference: https://github.com/utmapp/UTM/issues/3197
Message-Id: <20211026071241.74889-1-agraf@csgraf.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:18:33 -04:00
Philippe Mathieu-Daudé a7ac8e83ae target/arm: Use tcg_constant_i32() in gen_rev16()
Since the mask is a constant value, use tcg_constant_i32()
instead of a TCG temporary.
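Schematically (not the exact QEMU hunk), the pattern being replaced looks like
this; the mask value is shown only for illustration:

    /* Previously a TCG temporary was allocated, loaded with the constant
     * mask via tcg_gen_movi_i32() and freed afterwards.  With the change
     * the mask comes from the constant pool and needs no bookkeeping: */
    TCGv_i32 mask = tcg_constant_i32(0x00ff00ff);
    /* ... use mask as before; constants are never freed ... */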

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-6-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Philippe Mathieu-Daudé 35a1ec8e47 target/arm: Use tcg_constant_i64() in do_sat_addsub_64()
The immediate value used for comparison is constant and
read-only. Move it to the constant pool. This frees a
TCG temporary for unsigned saturation opcodes.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Philippe Mathieu-Daudé cacb1aa486 target/arm: Use the constant variant of store_cpu_field() when possible
When using a constant value, we can replace the store_cpu_field()
call with store_cpu_field_constant(), which avoids using TCG temporaries.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-4-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Philippe Mathieu-Daudé daf7a1814f target/arm: Introduce store_cpu_field_constant() helper
Similarly to the store_cpu_field() helper, which takes a TCG
temporary and stores its value to the CPUState, introduce the
store_cpu_field_constant() helper, which stores a constant to the
CPUState (without using any TCG temporary).
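A hypothetical sketch of such a helper, assuming the usual cpu_env/offsetof
pattern used by store_cpu_field(); the real definition may differ:

    /* Hypothetical sketch: store an immediate into a CPUARMState field
     * without allocating a TCG temporary. */
    #define store_cpu_field_constant(val, name) \
        tcg_gen_st_i32(tcg_constant_i32(val), cpu_env, \
                       offsetof(CPUARMState, name))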

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Philippe Mathieu-Daudé 060c1f4252 target/arm: Use tcg_constant_i32() in op_smlad()
Avoid using a TCG temporary for a read-only constant.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Peter Maydell dbd9e08476 target/arm: Advertise MVE to gdb when present
Cortex-M CPUs with MVE should advertise this fact to gdb, using the
org.gnu.gdb.arm.m-profile-mve XML feature, which defines the VPR
register.  Presence of this feature also tells gdb to create
pseudo-registers Q0..Q7, so we do not need to tell gdb about them
separately.

Note that unless you have a very recent GDB that includes this fix
(http://patches-tcwg.linaro.org/patch/58133/), gdb will mis-print the
individual fields of the VPR register as zero (but showing the whole
thing as hex, e.g. with "print /x $vpr", will give the correct value).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211101160814.5103-1-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 14:14:55 -04:00
Richard Henderson 39a099ca25 target/arm: Implement arm_cpu_record_sigbus
Because of the complexity of setting ESR, re-use the existing
arm_cpu_do_unaligned_access function.  This means we have to
handle the exception ourselves in cpu_loop, transforming it
to the appropriate signal.

Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 07:00:52 -04:00
Richard Henderson 9b12b6b442 target/arm: Implement arm_cpu_record_sigsegv
Because of the complexity of setting ESR, continue to use
arm_deliver_fault.  This means we cannot remove the code
within cpu_loop that decodes EXCP_DATA_ABORT and
EXCP_PREFETCH_ABORT.

But using the new hook means that we don't have to do the
page_get_flags check manually, and we'll be able to restrict
the tlb_fill hook to sysemu later.

Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 07:00:52 -04:00
Richard Henderson 5e98763c0e target/arm: Use cpu_loop_exit_sigsegv for mte tag lookup
Use the new os interface for raising the exception,
rather than calling arm_cpu_tlb_fill directly.

Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 07:00:52 -04:00
Richard Henderson 7ce8e389ef target/arm: Fixup comment re handle_cpu_signal
The named function no longer exists.
Refer to host_signal_handler instead.

Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-02 07:00:52 -04:00
Richard Henderson 364caea70f target/arm: Drop checks for singlestep_enabled
GDB single-stepping is now handled generically.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-15 16:39:14 -07:00
Richard Henderson 1a2eaf9e38 target/arm: Use cpu_*_mmu instead of helper_*_mmu
The helper_*_mmu functions were the only thing available
when this code was written.  This could have been adjusted
when we added cpu_*_mmuidx_ra, but now we can most easily
use the newest set of interfaces.
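Roughly, the newer interface takes a MemOpIdx and return address directly; the
call shape below is written from memory as an assumption, and the exact helper
names and signatures should be checked against the headers:

    /* Illustrative call shape only (assumed signatures). */
    MemOpIdx oi = make_memop_idx(MO_TEUQ, mmu_idx);  /* MO_TEQ before the rename above */
    uint64_t v = cpu_ldq_mmu(env, addr, oi, ra);
    cpu_stq_mmu(env, addr + 8, v, oi, ra);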

Cc: qemu-arm@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 08:46:11 -07:00
Richard Henderson b4c8f3d4dd accel/tcg: Move cpu_atomic decls to exec/cpu_ldst.h
The previous placement in tcg/tcg.h was not logical.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 08:14:54 -07:00
Richard Henderson c21751f394 target/arm: Use MO_128 for 16 byte atomics
Cc: qemu-arm@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 07:26:33 -07:00
Richard Henderson 9002ffcb72 tcg: Rename TCGMemOpIdx to MemOpIdx
We're about to move this out of tcg.h, so rename it
as we did when moving MemOp.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson 4b473e0c60 tcg: Expand MO_SIZE to 3 bits
We have lacked expressive support for memory sizes larger
than 64 bits for a while.  Fixing that requires adjustment
to several points where we used this for array indexing,
and two places that develop -Wswitch warnings after the change.
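For illustration, a standalone sketch of what a 3-bit size field can encode;
the names beyond MO_64 are indicative only:

    #include <stdio.h>

    /* Illustrative only: a 3-bit size field encodes 8 access sizes. */
    enum { MO_8, MO_16, MO_32, MO_64, MO_128, MO_256, MO_512, MO_1024 };

    int main(void)
    {
        for (int sz = MO_8; sz <= MO_1024; sz++) {
            printf("size code %d -> %d-bit access\n", sz, 8 << sz);
        }
        return 0;
    }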

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Peter Maydell bb4aa8f59e target-arm queue:

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210930' into staging

target-arm queue:
 * allwinner-h3: Switch to SMC as PSCI conduit
 * arm: tcg: Adhere to SMCCC 1.3 section 5.2
 * xlnx-zcu102, xlnx-versal-virt: Support BBRAM and eFUSE devices
 * gdbstub related code cleanups
 * Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML
 * Use _init vs _new convention in bus creation function names
 * sabrelite: Connect SPI flash CS line to GPIO3_19

* remotes/pmaydell/tags/pull-target-arm-20210930: (22 commits)
  hw/arm: sabrelite: Connect SPI flash CS line to GPIO3_19
  ide: Rename ide_bus_new() to ide_bus_init()
  qbus: Rename qbus_create() to qbus_new()
  qbus: Rename qbus_create_inplace() to qbus_init()
  pci: Rename pci_root_bus_new_inplace() to pci_root_bus_init()
  ipack: Rename ipack_bus_new_inplace() to ipack_bus_init()
  scsi: Replace scsi_bus_new() with scsi_bus_init(), scsi_bus_init_named()
  target/arm: Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML
  target/arm: Move gdbstub related code out of helper.c
  target/arm: Fix coding style issues in gdbstub code in helper.c
  configs: Don't include 32-bit-only GDB XML in aarch64 linux configs
  docs/system/arm: xlnx-versal-virt: BBRAM and eFUSE Usage
  hw/arm: xlnx-zcu102: Add Xilinx eFUSE device
  hw/arm: xlnx-zcu102: Add Xilinx BBRAM device
  hw/arm: xlnx-versal-virt: Add Xilinx eFUSE device
  hw/arm: xlnx-versal-virt: Add Xilinx BBRAM device
  hw/nvram: Introduce Xilinx battery-backed ram
  hw/nvram: Introduce Xilinx ZynqMP eFuse device
  hw/nvram: Introduce Xilinx Versal eFuse device
  hw/nvram: Introduce Xilinx eFuse QOM
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-30 21:16:54 +01:00
Peter Xu 142518bda5 memory: Name all the memory listeners
Provide a name field for all the memory listeners.  It can be used to identify
which memory listener is which.
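Illustrative fragment only (the callbacks are hypothetical); the new field is
just a label attached at registration time:

    static MemoryListener example_listener = {
        .name       = "example",            /* new: names this listener */
        .region_add = example_region_add,   /* hypothetical callback */
        .region_del = example_region_del,   /* hypothetical callback */
    };
    /* registered as before, e.g.
     * memory_listener_register(&example_listener, &address_space_memory); */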

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210817013553.30584-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-30 15:30:24 +02:00
Peter Maydell b355f08a37 target/arm: Don't put FPEXC and FPSID in org.gnu.gdb.arm.vfp XML
Currently we send VFP XML which includes D0..D15 or D0..D31, plus
FPSID, FPSCR and FPEXC.  The upstream GDB tolerates this, but its
definition of this XML feature does not include FPSID or FPEXC.  In
particular, for M-profile cores there are no FPSID or FPEXC
registers, so advertising those is wrong.

Move FPSID and FPEXC into their own bit of XML which we only send for
A and R profile cores.  This brings our definition of the XML
org.gnu.gdb.arm.vfp feature into line with GDB's own (at least for
non-Neon cores...) and means we don't claim to have FPSID and FPEXC
on M-profile.

(It seems unlikely to me that any gdbstub users really care about
being able to look at FPEXC and FPSID; but we've supplied them to gdb
for a decade and it's not hard to keep doing so.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-5-peter.maydell@linaro.org
2021-09-30 13:42:10 +01:00
Peter Maydell 89f4f20e27 target/arm: Move gdbstub related code out of helper.c
Currently helper.c includes some code which is part of the arm
target's gdbstub support.  This code has a better home: in gdbstub.c
and gdbstub64.c.  Move it there.

Because aarch64_fpu_gdb_get_reg() and aarch64_fpu_gdb_set_reg() move
into gdbstub64.c, this means that they're now compiled only for
TARGET_AARCH64 rather than always.  That is the only case when they
would ever be used, but it does mean that the ifdef in
arm_cpu_register_gdb_regs_for_features() needs to be adjusted to
match.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-4-peter.maydell@linaro.org
2021-09-30 13:42:10 +01:00
Peter Maydell d59b7cdccc target/arm: Fix coding style issues in gdbstub code in helper.c
We're going to move this code to a different file; fix the coding
style first so checkpatch doesn't complain.  This includes deleting
the spurious 'break' statements after returns in the
vfp_gdb_get_reg() function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-3-peter.maydell@linaro.org
2021-09-30 13:42:10 +01:00
Alexander Graf 9fcd15b919 arm: tcg: Adhere to SMCCC 1.3 section 5.2
The SMCCC 1.3 spec section 5.2 says

  The Unknown SMC Function Identifier is a sign-extended value of (-1)
  that is returned in the R0, W0 or X0 registers. An implementation must
  return this error code when it receives:

    * An SMC or HVC call with an unknown Function Identifier
    * An SMC or HVC call for a removed Function Identifier
    * An SMC64/HVC64 call from AArch32 state

To comply with these statements, let's always return -1 when we encounter
an unknown HVC or SMC call.
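Schematic fragment only: returning the sign-extended -1 in the result register;
the register-file fields follow QEMU's CPUARMState, but the surrounding
dispatch is omitted and this is not the literal patch:

    /* Illustrative only: unknown SMC/HVC function identifier. */
    if (is_a64(env)) {
        env->xregs[0] = -1;   /* X0 = 0xffffffffffffffff */
    } else {
        env->regs[0] = -1;    /* R0/W0 = 0xffffffff */
    }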

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-30 13:42:09 +01:00
Richard Henderson fa947a667f hw/core: Make do_unaligned_access noreturn
While we may have had some thought of allowing system-mode
to return from this hook, we have no guests that require this.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson 8b1d5b3c35 include/exec: Move cpu_signal_handler declaration
There is nothing target specific about this.  The implementation
is host specific, but the declaration is 100% common.

Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Peter Maydell 4b445c926a target/arm: Optimize MVE 1op-immediate insns
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
use TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell ce75c43f6d target/arm: Optimize MVE VSLI and VSRI
Optimize the MVE shift-and-insert insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell a7789fabe1 target/arm: Optimize MVE VSHLL and VMOVL
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
with zero shift count".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 752970ef7c target/arm: Optimize MVE VSHL, VSHR immediate forms
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 5cf525a8a6 target/arm: Optimize MVE VMVN
Optimize the MVE VMVN insn by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell f8d94803f1 target/arm: Optimize MVE VDUP
Optimize the MVE VDUP insns by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 4b1561c472 target/arm: Optimize MVE VNEG, VABS
Optimize the MVE VNEG and VABS insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell bc3087f253 target/arm: Optimize MVE arithmetic ops
Optimize MVE arithmetic ops when we have a TCG
vector operation we can use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 451f9d66cf target/arm: Optimize MVE logic ops
When not predicating, implement the MVE bitwise logical insns
directly using TCG vector operations.
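Schematically (the offsets are hypothetical placeholders), an unpredicated MVE
VAND over a 128-bit Q register can be emitted as a single generic vector op:

    /* Illustrative shape only. */
    tcg_gen_gvec_and(MO_64, qd_ofs, qn_ofs, qm_ofs, 16, 16);  /* 16-byte Q regs */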

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 2670221397 target/arm: Add TB flag for "MVE insns not predicated"
Our current codegen for MVE always calls out to helper functions,
because some byte lanes might be predicated.  The common case is that
in fact there is no predication active and all lanes should be
updated together, so we can produce better code by detecting that and
using the TCG generic vector infrastructure.

Add a TB flag that is set when we can guarantee that there is no
active MVE predication, and a bool in the DisasContext.  Subsequent
patches will use this flag to generate improved code for some
instructions.

In most cases when the predication state changes we simply end the TB
after that instruction.  For the code called from vfp_access_check()
that handles lazy state preservation and creating a new FP context,
we can usually avoid having to try to end the TB because luckily the
new value of the flag following the register changes in those
sequences doesn't depend on any runtime decisions.  We do have to end
the TB if the guest has enabled lazy FP state preservation but not
automatic state preservation, but this is an odd corner case that is
not going to be common in real-world code.
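A sketch of how the flag might be consumed at translate time; the field and
flag names below are assumptions based on the description, not necessarily the
exact ones in QEMU:

    /* Illustrative only: cache the TB flag in the DisasContext... */
    dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);   /* assumed names */

    /* ... and use it per instruction: */
    if (dc->mve_no_pred) {
        /* no active predication: emit TCG generic vector ops directly */
    } else {
        /* call the out-of-line MVE helper, which honours predication */
    }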

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00
Peter Maydell 85e7d1e9ff target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
Architecturally, for an M-profile CPU with the LOB feature the
LTPSIZE field in FPDSCR is always constant 4.  QEMU's implementation
enforces this everywhere, except that we don't check that it is true
in incoming migration data.

We're going to add code in gen_update_fp_context() which relies on
the "always 4" property.  Since this is TCG-only, we don't actually
need to be robust to bogus incoming migration data, and the effect of
it being wrong would be wrong code generation rather than a QEMU
crash; but if it did ever happen somehow it would be very difficult
to track down the cause.  Add a check so that we fail the inbound
migration if the FPDSCR.LTPSIZE value is incorrect.
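For illustration, the check amounts to something like the following; the
LTPSIZE field position is an assumption here, and the real code uses QEMU's
register field macros:

    #include <stdint.h>

    /* Illustrative only: fail the inbound migration (negative return)
     * if FPDSCR.LTPSIZE is not the architecturally required 4. */
    int check_fpdscr_ltpsize(uint32_t fpdscr)
    {
        unsigned ltpsize = (fpdscr >> 16) & 0x7;   /* assumed bit position */
        return ltpsize == 4 ? 0 : -1;
    }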

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
2021-09-21 16:28:27 +01:00