Commit Graph

493 Commits

Philippe Mathieu-Daudé 6d2d454a88 target/cpu: Restrict cpu_get_phys_page_debug() handlers to sysemu
The 'hwaddr' type is only available and meaningful in system emulation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221216215519.5522-5-philmd@linaro.org>
2023-02-27 22:29:01 +01:00
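A minimal sketch of the kind of guard this change implies for a target's cpu.h, taking the Arm handler as the example (declaration names vary per target; treat this as illustrative):

    /* 'hwaddr' and the debug page-lookup hook only exist in system
     * emulation, so keep the declaration out of user-mode builds. */
    #ifndef CONFIG_USER_ONLY
    hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                             MemTxAttrs *attrs);
    #endif
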
Fabiano Rosas 9200d5cc74 target/arm: Move cpregs code out of cpu.h
Since commit cf7c6d1004 ("target/arm: Split out cpregs.h"), we have
a cpregs.h header which is a more suitable home for this code.

Code moved verbatim.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:11:04 +00:00
Philippe Mathieu-Daudé 165876f22c target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
While dozens of files include "cpu.h", only 3 files require
these NVIC helper declarations.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-12-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Philippe Mathieu-Daudé 8f4e07c9d1 target/arm: Store CPUARMState::nvic as NVICState*
There is no point in using a void pointer to access the NVIC.
Use the real type to avoid casting it while debugging.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
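In essence the field change looks like this (a sketch; only the member itself is shown, with a forward declaration so cpu.h need not include the NVIC header):

    /* Before: an untyped handle, cast at every use. */
    void *nvic;

    /* After: the real type, forward-declared for use in cpu.h. */
    typedef struct NVICState NVICState;
    NVICState *nvic;
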
Philippe Mathieu-Daudé 2bd6918f3c target/arm: Restrict CPUARMState::nvic to sysemu
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Philippe Mathieu-Daudé 2a94a50776 target/arm: Restrict CPUARMState::arm_boot_info to sysemu
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Philippe Mathieu-Daudé 1701d70e15 target/arm: Restrict CPUARMState::gicv3state to sysemu
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-8-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Philippe Mathieu-Daudé 26f0856130 target/arm: Avoid resetting CPUARMState::eabi field
Although the 'eabi' field is only used in user emulation, where
CPU reset doesn't occur, it doesn't belong in the area that is reset.
Move it after the 'end_reset_fields' marker for consistency.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-7-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
Philippe Mathieu-Daudé de4143fc77 target/arm: Convert CPUARMState::eabi to boolean
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-6-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16 16:00:47 +00:00
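Taken together, the two 'eabi' commits above amount to roughly the following layout in CPUARMState (a sketch; only the relevant members are shown and their exact placement is assumed):

    struct CPUARMState {
        /* ... fields up to this marker are zeroed on CPU reset ... */
        struct {} end_reset_fields;

        /* Fields below the marker survive reset.  The user-emulation-only
         * 'eabi' flag now lives here, and is a bool rather than uint32_t. */
        bool eabi;
        /* ... */
    };
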
Peter Maydell 34a8a07e57 target/arm: Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 traps
Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 fine-grained traps.
These trap execution of the SVC instruction from AArch32 and AArch64.
(As usual, AArch32 can only trap from EL0, as fine-grained traps are
disabled with an AArch32 EL1.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-22-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-22-peter.maydell@linaro.org
2023-02-03 12:59:24 +00:00
Peter Maydell 5572f7557f target/arm: Implement the HFGITR_EL2.ERET trap
Implement the HFGITR_EL2.ERET fine-grained trap.  This traps
execution from AArch64 EL1 of ERET, ERETAA and ERETAB.  The trap is
reported with a syndrome value of 0x1a.

The trap must take precedence over a possible pointer-authentication
trap for ERETAA and ERETAB.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-21-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-21-peter.maydell@linaro.org
2023-02-03 12:59:24 +00:00
Peter Maydell 361c33f6b8 target/arm: Implement FGT trapping infrastructure
Implement the machinery for fine-grained traps on normal sysregs.
Any sysreg with a fine-grained trap will set the new field to
indicate which FGT register bit it should trap on.

FGT traps only happen when an AArch64 EL2 enables them for
an AArch64 EL1. They therefore are only relevant for AArch32
cpregs when the cpreg can be accessed from EL0. The logic
in access_check_cp_reg() will check this, so it is safe to
add a .fgt marking to an ARM_CP_STATE_BOTH ARMCPRegInfo.

The DO_BIT and DO_REV_BIT macros define enum constants FGT_##bitname
which can be used to specify the FGT bit, e.g.
   .fgt = FGT_AFSR0_EL1
(We assume that there is no bit name duplication across the FGT
registers, for brevity's sake.)

Subsequent commits will add the .fgt fields to the relevant register
definitions and define the FGT_nnn values for them.

Note that some of the FGT traps are for instructions that we don't
handle via the cpregs mechanisms (mostly these are instruction traps).
Those we will have to handle separately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-10-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-10-peter.maydell@linaro.org
2023-02-03 12:59:23 +00:00
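A hedged sketch of the .fgt marking described above, using the FGT_AFSR0_EL1 example from the commit message (the surrounding ARMCPRegInfo description is the usual AFSR0_EL1 one and is shown for illustration only):

    /* A sysreg definition opting in to a fine-grained trap: .fgt names
     * one of the FGT_<bitname> enum constants generated by the
     * DO_BIT/DO_REV_BIT macros; access_check_cp_reg() consults it. */
    static const ARMCPRegInfo afsr0_el1_reginfo = {
        .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
        .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0,
        .access = PL1_RW, .fgt = FGT_AFSR0_EL1,
        .type = ARM_CP_CONST, .resetvalue = 0,
    };
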
Peter Maydell 15126d9ce2 target/arm: Define the FEAT_FGT registers
Define the system registers which are provided by the
FEAT_FGT fine-grained trap architectural feature:
 HFGRTR_EL2, HFGWTR_EL2, HDFGRTR_EL2, HDFGWTR_EL2, HFGITR_EL2

All these registers are sets of bit fields, where each bit is set to
trap, or clear to not trap, a particular system register access.  The
R and W register pairs are for system registers,
allowing trapping to be done separately for reads and writes; the I
register is for system instructions where trapping is on instruction
execution.

The data storage in the CPU state struct is arranged as a set of
arrays rather than separate fields so that when we're looking up the
bits for a system register access we can just index into the array
rather than having to use a switch to select a named struct member.
The later FEAT_FGT2 will add extra elements to these arrays.

The field definitions for the new registers are in cpregs.h because
in practice the code that needs them is code that also needs
the cpregs information; cpu.h is included in a lot more files.
We're also going to add some FGT-specific definitions to cpregs.h
in the next commit.

We do not implement HAFGRTR_EL2, because we don't implement
FEAT_AMUv1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-9-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-9-peter.maydell@linaro.org
2023-02-03 12:59:23 +00:00
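A sketch of the array-based storage idea described above (member names and array sizes are assumptions for illustration; the enum used for indexing is not shown):

    /* In the CPU state struct: one array per access kind, so a trap
     * lookup indexes an array instead of switching over named fields.
     * FEAT_FGT2 can later grow these arrays without new lookup code. */
    uint64_t fgt_read[2];   /* HFGRTR_EL2, HDFGRTR_EL2 */
    uint64_t fgt_write[2];  /* HFGWTR_EL2, HDFGWTR_EL2 */
    uint64_t fgt_exec[1];   /* HFGITR_EL2 */
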
Evgeny Iakovlev 5fc83f1128 target/arm: implement DBGCLAIM registers
The architecture does not define any functionality for the CLAIM tag
bits, so we just keep the raw bits, as the spec requires.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230120155929.32384-2-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23 13:32:38 +00:00
Richard Henderson 7f2a01e736 target/arm/sme: Reset SVE state in aarch64_set_svcr()
Move arm_reset_sve_state() calls to aarch64_set_svcr().

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-5-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23 13:32:38 +00:00
Richard Henderson 2a8af38259 target/arm/sme: Introduce aarch64_set_svcr()
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-4-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23 13:32:38 +00:00
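The new helper centralizes SVCR updates; a minimal sketch consistent with the two commits above (the exact body is assumed, with the arm_reset_sve_state() call being the part moved in by the "Reset SVE state" commit):

    void aarch64_set_svcr(CPUARMState *env, uint64_t new, uint64_t mask)
    {
        uint64_t change = (env->svcr ^ new) & mask;

        env->svcr = (env->svcr & ~mask) | (new & mask);

        /* Entering or leaving Streaming mode resets the SVE state here
         * rather than at each caller. */
        if (change & R_SVCR_SM_MASK) {
            arm_reset_sve_state(env);
        }
    }
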
Richard Henderson bb461330a1 target/arm: Widen cnthctl_el2 to uint64_t
This is a 64-bit register on AArch64, even though the high 44 bits
are RES0.  Because it is defined as ARM_CP_STATE_BOTH, we are
asserting that the cpreg field is 64 bits wide.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1400
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230115171633.3171890-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-23 13:00:11 +00:00
Tobias Röhmel 761c46425e target/arm: Add PMSAv8r registers
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Message-id: 20221206102504.165775-6-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-01-05 11:51:09 +00:00
Peter Maydell d2fd931362 target/arm: Allow relevant HCR bits to be written for FEAT_EVT
FEAT_EVT adds five new bits to the HCR_EL2 register: TTLBIS, TTLBOS,
TICAB, TOCU and TID4.  These allow the guest to enable trapping of
various EL1 instructions to EL2.  In this commit, add the necessary
code to allow the guest to set these bits if the feature is present;
because the bits are always zero when the feature isn't present, we
won't need explicit feature checks in the "trap on condition"
tests in the following commits.

Note that although full implementation of the feature (mandatory from
Armv8.5 onward) requires all five trap bits, the ID registers permit
a value indicating that only TICAB, TOCU and TID4 are implemented,
which might be the case for CPUs between Armv8.2 and Armv8.5.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:19 +00:00
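A minimal sketch of the valid_mask handling this describes, assuming the usual cpu_isar_feature() style of test (the feature-test name here is illustrative):

    /* In hcr_write(): only let the guest set the EVT trap bits when the
     * feature exists; otherwise they stay RES0, so later "trap on
     * condition" checks can test the bits without a feature check.
     * A CPU reporting the partial form would add only TICAB, TOCU
     * and TID4. */
    if (cpu_isar_feature(aa64_evt, cpu)) {
        valid_mask |= HCR_TTLBIS | HCR_TTLBOS |
                      HCR_TICAB | HCR_TOCU | HCR_TID4;
    }
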
Richard Henderson 980a68925c target/arm: Add isar predicates for FEAT_HAFDBS
The MMFR1 field may indicate support for hardware update of the
access flag alone, or of both the access flag and the dirty bit.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
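The two-level check could look roughly like this, assuming the usual FIELD_EX64 ID-register accessors (the predicate names are illustrative):

    /* HAFDBS == 1: hardware access-flag update only.
     * HAFDBS >= 2: hardware update of access flag and dirty bit. */
    static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
    }

    static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
    }
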
Peter Maydell e4c93e44ab target/arm: Implement FEAT_E0PD
FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the
OS to forbid EL0 access to half of the address space.  Since this is
an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can
implement it entirely in aa64_va_parameters().

This requires moving the existing regime_is_user() to internals.h
so that the code in helper.c can get at it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
2022-10-27 10:27:23 +01:00
Richard Henderson 50d4c8c1d4 accel/tcg: Make page_alloc_target_data allocation constant
Use a constant target data allocation size for all pages.
This will be necessary to reduce the overhead of page tracking.
Since TARGET_PAGE_DATA_SIZE is now required, we can use this
to omit data tracking for targets that don't require it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26 11:11:28 +10:00
Richard Henderson f3639a64f6 target/arm: Use softmmu tlbs for page table walking
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
arm_ldq_ptw.  Use probe_access_full to find the host address, and
if it succeeds, use a host load.  If the probe fails, we already
have our fault info.  On the off chance that page tables are not
in RAM, continue to use the address_space_ld* functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson 575a94af3c target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx
We had been marking this ARM_MMU_IDX_NOTLB; move it to a real tlb.
Flush the tlb when invalidating stage 1+2 translations.  Re-use
alle1_tlbmask() for other instances of EL1&0 + Stage2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson a1ce3084c5 target/arm: Add ARMMMUIdx_Phys_{S,NS}
Not yet used, but add mmu indexes for 1-1 mapping
to physical addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson 937f224559 target/arm: Use probe_access_full for BTI
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present.  This also handles the FIXME about
executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson b8967ddf39 target/arm: Use probe_access_full for MTE
The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address.  Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Stefan Hajnoczi cdda364e1d target-arm queue:
Merge tag 'pull-target-arm-20221010' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmNEK54ZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3kzHD/9StYmulAf0iwe1ZNp6NavK
# CioOgZi6XyZl4rS2DrCf6/IO5XRFJP68byZd4Po554r2jcPc149yTuQAn4wb7d5e
# kejMZRQeWsXdxschhoVzDp9fgfzyZBn9X+gbdEZFFPWzOHMyWuu4cTok0dAKQvQY
# tZDLGmKeTv4MRUFJCri0310Sq0T0v/nAX/AyFtpvIr2SBx7DVCWYY02s5R4Yy5+M
# ntDWb0j12r78/bPwI1ll+g19JXUV5Tfh9AsbcYjKv45kdftz/Xc8fBiSiEpxyMrF
# mnVrr3kesZHOYAnOr2K1MnwsF0vU41kRg7kMRqSnu7pZXlI/8tmRyXoPR3c2aDbW
# Q5HWtsA48j2h0CJ0ESzl5SQnl3TSPa94m/HmpRSBFrYkU727QgnWDhUmBb4n54xs
# 9iBJDhcKGZLq68CB2+j6ENdRNTndolr14OwwEns0lbkoiCKUOQY3AigtZJQGRBGM
# J5r3ED7jfTWpvP6vpp5X484fK6KVprSMxsRFDkmiwhbb3J+WtKLxbSlgsWIrkZ7s
# +JgTGfGB8sD9hJVuFZYyPQb/XWP8Bb8jfgsLsTu1vW9Xs1ASrLimFYdRO3hhwSg3
# c5yubz6Vu9GB/JYh7hGprlMD5Yv48AA3if70hOu2d4P8A4OitavT7o+4Thwqjhds
# cSV1RsBJ8ha6L3CziZaKrQ==
# =s+1f
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 10 Oct 2022 10:26:38 EDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* tag 'pull-target-arm-20221010' of https://git.linaro.org/people/pmaydell/qemu-arm: (28 commits)
  docs/system/arm/emulation.rst: Report FEAT_GTG support
  target/arm: Use ARMGranuleSize in ARMVAParameters
  target/arm: Don't allow guest to use unimplemented granule sizes
  hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
  target/arm: Use tlb_set_page_full
  target/arm: Fix cacheattr in get_phys_addr_disabled
  target/arm: Split out get_phys_addr_disabled
  target/arm: Fix ATS12NSO* from S PL1
  target/arm: Pass HCR to attribute subroutines.
  target/arm: Remove env argument from combined_attrs_fwb
  target/arm: Hoist read of *is_secure in S1_ptw_translate
  target/arm: Introduce arm_hcr_el2_eff_secstate
  target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
  target/arm: Reorg regime_translation_disabled
  target/arm: Fold secure and non-secure a-profile mmu indexes
  target/arm: Add is_secure parameter to do_ats_write
  target/arm: Merge regime_is_secure into get_phys_addr
  target/arm: Add TBFLAG_M32.SECURE
  target/arm: Add is_secure parameter to v7m_read_half_insn
  target/arm: Split out get_phys_addr_with_secure
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-12 15:59:10 -04:00
Peter Maydell 104f703d93 target/arm: Don't allow guest to use unimplemented granule sizes
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K.  The guest selects the one it wants using bits in the TCR_ELx
registers.  If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
2022-10-10 14:52:25 +01:00
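In outline, the check inside aa64_va_parameters() amounts to something like this (a sketch; the helper names are assumptions, and ARMGranuleSize is the new enum referred to in the parenthetical above):

    /* Map the guest-programmed TG0/TG1 field to a granule size, then
     * force a granule the CPU actually implements if the requested one
     * is reserved or unsupported. */
    ARMGranuleSize gran = tg0_to_gran_size(extract32(tcr, 14, 2));
    if (!cpu_supports_granule(cpu, gran)) {
        gran = fallback_granule(cpu);  /* any implemented size is legal */
    }
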
Richard Henderson b74c04431d target/arm: Introduce arm_hcr_el2_eff_secstate
For page walking, we may require HCR for a security state
that is not "current".

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson d902ae7558 target/arm: Fold secure and non-secure a-profile mmu indexes
For A-profile AArch64, which does not bank system registers, it takes
quite a lot of code to switch between security states.  In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs.  Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson a393dee019 target/arm: Add TBFLAG_M32.SECURE
Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use env->v7m.secure directly,
as per arm_mmu_idx_el.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Jerome Forissier 06f2adccfa target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb977666 ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:23 +01:00
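Roughly, the write_scr() change looks like this (a sketch; SCR_ENTP2 stands for the new constant the commit describes, and the SME feature test is the standard one):

    /* valid_mask is now uint64_t, so masking and ~ work on the
     * full 64-bit SCR_EL3 value. */
    uint64_t valid_mask = /* ... existing SCR_* bits ... */ 0;

    if (cpu_isar_feature(aa64_sme, cpu)) {
        valid_mask |= SCR_ENTP2;
    }
    value &= valid_mask;
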
Janosch Frank 1af0006ab9 dump: Replace opaque DumpState pointer with a typed one
It's always better to convey the type of a pointer if at all
possible.  So let's add the DumpState typedef to typedefs.h and switch
the dump note functions from opaque pointers to DumpState
pointers.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Cédric Le Goater <clg@kaod.org>
CC: Daniel Henrique Barboza <danielhb413@gmail.com>
CC: David Gibson <david@gibson.dropbear.id.au>
CC: Greg Kurz <groug@kaod.org>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Alistair Francis <alistair.francis@wdc.com>
CC: Bin Meng <bin.meng@windriver.com>
CC: Cornelia Huck <cohuck@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>
CC: David Hildenbrand <david@redhat.com>
Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
2022-10-06 19:30:43 +04:00
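The typedef itself is a one-liner; the effect on one of the per-arch note hooks looks like this (hook shown for Arm as an example):

    /* include/qemu/typedefs.h */
    typedef struct DumpState DumpState;

    /* Before: callers received an untyped handle. */
    int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
                                 int cpuid, void *opaque);
    /* After: the pointer's real type is conveyed. */
    int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
                                 int cpuid, DumpState *s);
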
Peter Maydell f190bd1da1 target/arm: Update SDCR_VALID_MASK to include SCCD
Our SDCR_VALID_MASK doesn't include all of the bits which are defined
by the current architecture.  In particular, in commit 0b42f4fab9 we
forgot to add SCCD, which meant that an AArch32 guest couldn't
actually use the SCCD bit to disable counting in Secure state.

Add all the currently defined bits; we don't implement all of them,
but this makes them be reads-as-written, which is architecturally
valid and matches how we currently handle most of the others in the
mask.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-4-peter.maydell@linaro.org
2022-09-29 17:31:52 +01:00
Peter Maydell 47b385dae8 target/arm: Support 64-bit event counters for FEAT_PMUv3p5
With FEAT_PMUv3p5, the event counters are now 64 bits rather than 32
bits.  (Previously, only the cycle counter could be 64 bits; the other
event counters were always 32 bits.)  For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.

Implement the 64-bit event counter handling.  We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.

(Note that the changes to pmenvcntr_op_start() and
pmenvcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
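A sketch of the width handling described above (the helper name is illustrative; the point is that storage is always 64 bits and pre-PMUv3p5 CPUs only see the low half through PMXEVCNTR):

    /* Counters are held as 64 bits internally; without FEAT_PMUv3p5
     * only the low 32 bits are exposed on read or write. */
    static uint64_t pmevcntr_mask(ARMCPU *cpu)
    {
        return cpu_isar_feature(any_pmuv3p5, cpu) ? UINT64_MAX
                                                  : (uint64_t)UINT32_MAX;
    }
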
Peter Maydell 0b42f4fab9 target/arm: Implement FEAT_PMUv3p5 cycle counter disable bits
FEAT_PMUv3p5 introduces new bits which stop the cycle
counter from counting:
 * MDCR_EL2.HCCD disables the counter when in EL2
 * MDCR_EL3.SCCD disables the counter when Secure

Add the code to support these bits.

(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell a793bcd027 target/arm: Rename pmu_8_n feature test functions
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}.  This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4.  Rename these functions to _pmuv3p1 and _pmuv3p4.

This commit was created with:
  sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell d22c564958 target/arm: Implement ID_DFR1
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it.  We don't yet have any CPUs with features that need to be
advertised here, but plumbing in the ID register gives it the right name
when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell 32957aad8c target/arm: Implement ID_MMFR5
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell fca75f60ab target/arm: Add MO_128 entry to pred_esz_masks[]
In commit 7390e0e9ab, we added support for SME loads and stores.
Unlike SVE loads and stores, these include handling of 128-bit
elements.  The SME load/store functions call down into the existing
sve_cont_ldst_elements() function, which uses the element size MO_*
value as an index into the pred_esz_masks[] array.  Because this code
path now has to handle MO_128, we need to add an extra element to the
array.

This bug was spotted by Coverity because it meant we were reading off
the end of the array.

Resolves: Coverity CID 1490539, 1490541, 1490543, 1490544, 1490545,
 1490546, 1490548, 1490549, 1490550, 1490551, 1490555, 1490557,
 1490558, 1490560, 1490561, 1490563
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220718100144.3248052-1-peter.maydell@linaro.org
2022-07-26 13:38:23 +01:00
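For reference, the array in question follows a simple rule: one predicate bit per element-sized group of bytes, so the new MO_128 entry sets every sixteenth bit (the values shown follow from that rule; check the source for the authoritative table):

    /* Indexed by MO_8 .. MO_128: the active predicate bits per element. */
    const uint64_t pred_esz_masks[5] = {
        0xffffffffffffffffull,  /* MO_8:   every byte       */
        0x5555555555555555ull,  /* MO_16:  every 2nd byte   */
        0x1111111111111111ull,  /* MO_32:  every 4th byte   */
        0x0101010101010101ull,  /* MO_64:  every 8th byte   */
        0x0001000100010001ull,  /* MO_128: every 16th byte  */
    };
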
Peter Maydell f04383e749 target/arm: Honour VTCR_EL2 bits in Secure EL2
In regime_tcr() we return the appropriate TCR register for the
translation regime.  For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2.  When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.

Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org
2022-07-18 13:20:14 +01:00
Peter Maydell cb4a0a3444 target/arm: Store TCR_EL* registers as uint64_t
Change the representation of the TCR_EL* registers in the CPU state
struct from struct TCR to uint64_t.  This allows us to drop the
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
checks to their more usual location in the writefn
vmsa_ttbcr_write().  We also don't need the resetfn any more.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00
Peter Maydell 988cc1909f target/arm: Store VTCR_EL2, VSTCR_EL2 registers as uint64_t
Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
the CPU state struct from struct TCR to uint64_t.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00
Richard Henderson 7f2cf760fe linux-user/aarch64: Do not clear PROT_MTE on mprotect
The documentation for PROT_MTE says that it cannot be cleared
by mprotect.  Further, the implementation of the VM_ARCH_CLEAR bit
contains PROT_BTI, confirming that the BTI bit should be cleared.

Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
which bits may be reset during page_set_flags.  This is sort of the
opposite of VM_ARCH_CLEAR, but works better with QEMU's PAGE_* bits
that are separate from PROT_* bits.

Reported-by: Vitaly Buka <vitalybuka@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220711031420.17820-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-18 13:20:13 +01:00
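Conceptually the mechanism looks like this (a sketch; the PAGE_MTE spelling of the Arm flag and the page_set_flags() line are assumptions for illustration):

    /* target/arm/cpu.h: the MTE page flag must survive mprotect(). */
    #define PAGE_TARGET_STICKY  PAGE_MTE

    /* accel/tcg: targets that define nothing keep the old behaviour. */
    #ifndef PAGE_TARGET_STICKY
    #define PAGE_TARGET_STICKY  0
    #endif

    /* page_set_flags(), sketch: carry sticky bits over when resetting. */
    flags |= (p->flags & PAGE_TARGET_STICKY);
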
Richard Henderson 75fe83564a target/arm: Trap non-streaming usage when Streaming SVE is active
This new behaviour is in the ARM pseudocode function
AArch64.CheckFPAdvSIMDEnabled, which applies to AArch32
via AArch32.CheckAdvSIMDOrFPEnabled when the EL to which
the trap would be delivered is in AArch64 mode.

Given that ARMv9 drops support for AArch32 outside EL0, the trap EL
detection ought to be trivially true, but the pseudocode still contains
a number of conditions, and QEMU has not yet committed to dropping A32
support for EL[12] when v9 features are present.

Since the computation of SME_TRAP_NONSTREAMING is necessarily different
for the two modes, we might as well preserve bits within TBFLAG_ANY and
allocate separate bits within TBFLAG_A32 and TBFLAG_A64 instead.

Note that DDI0616A.a has typos for bits [22:21] of LD1RO in the table
of instructions illegal in streaming mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-11 13:19:35 +01:00
Peter Maydell f94a6df5dd target/arm: Correctly implement Feat_DoubleLock
The architecture defines the OS DoubleLock as a register which
(similarly to the OS Lock) suppresses debug events for use in CPU
powerdown sequences.  This functionality is required in Arm v7 and
v8.0; from v8.2 it becomes optional and in v9 it must not be
implemented.

Currently in QEMU we implement the OSDLR_EL1 register as a NOP.  This
is wrong both for the "feature implemented" and the "feature not
implemented" cases: if the feature is implemented then the DLK bit
should read as written and cause suppression of debug exceptions, and
if it is not implemented then the bit must be RAZ/WI.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-07 11:38:36 +01:00
Peter Maydell 09754ca867 target/arm: Implement AArch32 DBGDEVID, DBGDEVID1, DBGDEVID2
Starting with v7 of the debug architecture, there are three extra
ID registers that add information on top of that provided in
DBGDIDR. These are DBGDEVID, DBGDEVID1 and DBGDEVID2. In the
v7 debug architecture, DBGDEVID is optional, present only if
DBGDIDR.DEVID_imp is set.  In v7.1 all three must be present.

Implement the missing registers.  Note that we only need to set the
values in the ARMISARegisters struct for the CPUs Cortex-A7, A15,
A53, A57 and A72 (plus the 32-bit 'max' which uses the Cortex-A53
values): earlier CPUs didn't implement v7 of the architecture, and
our other 64-bit CPUs (Cortex-A76, Neoverse-N1 and A64fx) don't have
AArch32 support at EL1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-5-peter.maydell@linaro.org
2022-07-07 11:37:33 +01:00
Richard Henderson 5d7953adcf target/arm: Add SVL to TB flags
We need SVL separate from VL for RDSVL et al, as well as
ZA storage loads and stores, which do not require PSTATE.SM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-27 11:18:17 +01:00
Richard Henderson 6ca54aa9a8 target/arm: Introduce sve_vqm1_for_el_sm
When Streaming SVE mode is enabled, the size is taken from
SMCR_ELx instead of ZCR_ELx.  The format is shared, but the
set of vector lengths is not.  Further, Streaming SVE does
not require any particular length to be supported.

Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
to the new function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-27 11:18:17 +01:00
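The resulting split could be pictured like this (a sketch; the wrapper keeps the old entry point, and the SM bit is read from SVCR as described, with the exact accessor assumed):

    /* New: vector length selection that knows about Streaming mode. */
    uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm);

    /* Old entry point becomes a thin wrapper passing the current SM. */
    static inline uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
    {
        return sve_vqm1_for_el_sm(env, el, FIELD_EX64(env->svcr, SVCR, SM));
    }
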