Commit Graph

221 Commits

Author SHA1 Message Date
Richard Henderson
b7770d72f5 target/arm: Split out arm_env_mmu_index
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 08:52:25 +10:00
Richard Henderson
1764ad70ce include/qemu: Add TCGCPUOps typedef to typedefs.h
QEMU coding style recommends using structure typedefs.
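
As an illustrative aside, this is the kind of entry the commit describes adding to include/qemu/typedefs.h:

  /* Forward-declaring the struct lets other headers use the typedef
   * without pulling in the full definition of struct TCGCPUOps. */
  typedef struct TCGCPUOps TCGCPUOps;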

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 21:04:10 +10:00
Anton Johansson
32f0c394bb target: Use vaddr in gen_intermediate_code
Makes the gen_intermediate_code() signature target-agnostic so the function
can be called from accel/tcg/translate-all.c without target specifics.
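
A hedged sketch of the shape of such a signature change (the exact parameter list is an assumption, not quoted from the patch):

  /* Before: the PC parameter used the target-specific target_ulong type. */
  void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb,
                             int *max_insns, target_ulong pc, void *host_pc);

  /* After: vaddr is target-agnostic, so accel/tcg code can call this
   * without knowing the target's address width. */
  void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb,
                             int *max_insns, vaddr pc, void *host_pc);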

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240119144024.14289-9-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-29 07:06:03 +10:00
Peter Maydell
6fffc83785 target/arm: Fix A64 scalar SQSHRN and SQRSHRN
In commit 1b7bc9b5c8 we changed handle_vec_simd_sqshrn() so
that instead of starting with a 0 value and depositing in each new
element from the narrowing operation, it instead started with the raw
result of the narrowing operation of the first element.

This is fine in the vector case, because the deposit operations for
the second and subsequent elements will always overwrite any higher
bits that might have been in the first element's result value in
tcg_rd.  However in the scalar case we only go through this loop
once.  The effect is that for a signed narrowing operation, if the
result is negative then we will now return a value where the bits
above the first element are incorrectly 1 (because the narrowfn
returns a sign-extended result, not one that is truncated to the
element size).

Fix this by using an extract operation to get exactly the correct
bits of the output of the narrowfn for element 1, instead of a
plain move.
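
A standalone illustration (not the QEMU code itself) of why truncation matters for a signed narrowing result:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int64_t narrowed = (int8_t)-1;                   /* sign-extended narrowfn result */
      uint64_t plain_move = (uint64_t)narrowed;        /* upper bits wrongly all 1s */
      uint64_t extracted = (uint64_t)narrowed & 0xff;  /* only the element-0 bits kept */
      printf("%016" PRIx64 " vs %016" PRIx64 "\n", plain_move, extracted);
      return 0;
  }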

Cc: qemu-stable@nongnu.org
Fixes: 1b7bc9b5c8 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
2024-01-26 12:19:11 +00:00
Philippe Mathieu-Daudé
e2d8cf9b53 target/arm: Expose arm_cpu_mp_affinity() in 'multiprocessing.h' header
Declare arm_cpu_mp_affinity() prototype in the new
 "target/arm/multiprocessing.h" header so units in
hw/arm/ can use it without having to include the huge
target-specific "cpu.h".

File list to include the new header generated using:

  $ git grep -lw arm_cpu_mp_affinity
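
A hedged sketch of what such a header might look like; the exact prototype and include are assumptions, not quoted from the patch:

  #ifndef TARGET_ARM_MULTIPROCESSING_H
  #define TARGET_ARM_MULTIPROCESSING_H

  #include "target/arm/cpu-qom.h"   /* assumed: provides the ARMCPU typedef */

  uint64_t arm_cpu_mp_affinity(ARMCPU *cpu);

  #endif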

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-26 11:30:48 +00:00
Richard Henderson
c4380f7bcd target/arm: Create arm_cpu_mp_affinity
Wrapper to return the mp affinity bits from the cpu.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-26 11:30:48 +00:00
Peter Maydell
6b504a01c1 target/arm: Fix VNCR fault detection logic
In arm_deliver_fault() we check for whether the fault is caused
by a data abort due to an access to a FEAT_NV2 sysreg in the
memory pointed to by the VNCR. Unfortunately part of the
condition checks the wrong argument to the function, meaning
that it would spuriously trigger, resulting in some instruction
aborts being taken to the wrong EL and reported incorrectly.

Use the right variable in the condition.

Fixes: 674e534527 ("target/arm: Report VNCR_EL2 based faults correctly")
Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-id: 20240116165605.2523055-1-peter.maydell@linaro.org
2024-01-26 11:30:47 +00:00
Peter Maydell
e2862554c2 target/arm: Add FEAT_NV2 to max, neoverse-n2, neoverse-v1 CPUs
Enable FEAT_NV2 on the 'max' CPU, and stop filtering it out for
the Neoverse N2 and Neoverse V1 CPUs.
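
A hedged sketch of how the 'max' CPU typically advertises such a feature (FEAT_NV2 corresponds to ID_AA64MMFR2_EL1.NV == 2); the surrounding code is assumed, not quoted from the patch:

  /* In the 'max' CPU init function (assumed location): */
  t = cpu->isar.id_aa64mmfr2;
  t = FIELD_DP64(t, ID_AA64MMFR2, NV, 2);   /* FEAT_NV2 (value 1 would be FEAT_NV only) */
  cpu->isar.id_aa64mmfr2 = t;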

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
674e534527 target/arm: Report VNCR_EL2 based faults correctly
If FEAT_NV2 redirects a system register access to a memory offset
from VNCR_EL2, that access might fault.  In this case we need to
report the correct syndrome information:
 * Data Abort, from same-EL
 * no ISS information
 * the VNCR bit (bit 13) is set

and the exception must be taken to EL2.

Save an appropriate syndrome template when generating code; we can
then use that to:
 * select the right target EL
 * reconstitute a correct final syndrome for the data abort
 * report the right syndrome if we take a FEAT_RME granule protection
   fault on the VNCR-based write

Note that because VNCR is bit 13, we must start keeping bit 13 in
template syndromes, by adjusting ARM_INSN_START_WORD2_SHIFT.
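
A standalone illustration of the syndrome described above: a same-EL Data Abort with the IL bit set, no ISS, and the VNCR bit (bit 13) set (the EC value 0x25 is stated here as an aside, not taken from the commit):

  #include <stdint.h>

  static uint32_t vncr_data_abort_syndrome(void)
  {
      const uint32_t ec_data_abort_same_el = 0x25;   /* ESR_ELx.EC encoding */
      return (ec_data_abort_same_el << 26)           /* EC field */
           | (1u << 25)                              /* IL: 32-bit instruction */
           | (1u << 13);                             /* VNCR bit */
  }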

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
daf9b4a00f target/arm: Implement FEAT_NV2 redirection of sysregs to RAM
FEAT_NV2 requires that when HCR_EL2.{NV,NV2} == 0b11 then accesses by
EL1 to certain system registers are redirected to RAM.  The full list
of affected registers is in the table in rule R_CSRPQ in the Arm ARM.
The registers may be normally accessible at EL1 (like ACTLR_EL1), or
normally UNDEF at EL1 (like HCR_EL2).  Some registers redirect to RAM
only when HCR_EL2.NV1 is 0, and some only when HCR_EL2.NV1 is 1;
others redirect in both cases.

Add the infrastructure for identifying which registers should be
redirected and turning them into memory accesses.

This code does not set the correct syndrome or arrange for the
exception to be taken to the correct target EL if the access via
VNCR_EL2 faults; we will do that in the next commit.

Subsequent commits will mark up the relevant regdefs to set their
nv2_redirect_offset, and if relevant one of the two flags which
indicates that the redirect happens only for a particular value of
HCR_EL2.NV1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-09 14:43:53 +00:00
Peter Maydell
c35da11df4 target/arm: Handle FEAT_NV2 redirection of SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2
Under FEAT_NV2, when HCR_EL2.{NV,NV2} == 0b11 at EL1, accesses to the
registers SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2 and TFSR_EL2 (which
would UNDEF without FEAT_NV or FEAT_NV2) should instead access the
equivalent EL1 registers SPSR_EL1, ELR_EL1, ESR_EL1, FAR_EL1 and
TFSR_EL1.

Because there are only five registers involved and the encoding for
the EL1 register is identical to that of the EL2 register except
that opc1 is 0, we handle this by finding the EL1 register in the
hash table and using it instead.

Note that traps that apply to direct accesses to the EL1 register,
such as active fine-grained traps or other trap bits, do not trigger
when it is accessed via the EL2 encoding in this way.  However, some
traps that are defined by the EL2 register may apply.  We therefore
call the EL2 register's accessfn first.  The only one of the five
which has such traps is TFSR_EL2: make sure its accessfn correctly
handles both FEAT_NV (where we trap to EL2 without checking ATA bits)
and FEAT_NV2 (where we check ATA bits and then redirect to TFSR_EL1).

(We don't need the NV1 tbflag bit until the next patch, but we
introduce it here to avoid putting the NV, NV1, NV2 bits in an
odd order.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:53 +00:00
Peter Maydell
1274a47fbd target/arm: Add FEAT_NV to max, neoverse-n2, neoverse-v1 CPUs
Enable FEAT_NV on the 'max' CPU, and stop filtering it out for the
Neoverse N2 and Neoverse V1 CPUs.  We continue to downgrade FEAT_NV2
support to FEAT_NV for the latter two CPU types.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:51 +00:00
Peter Maydell
2e9b1e50bd target/arm: Treat LDTR* and STTR* as LDR/STR when NV, NV1 is 1, 1
FEAT_NV requires (per I_JKLJK) that when HCR_EL2.{NV,NV1} is {1,1} the
unprivileged-access instructions LDTR, STTR etc behave as normal
loads and stores. Implement the check that handles this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:51 +00:00
Peter Maydell
b7ecc3da6c target/arm: Make NV reads of CurrentEL return EL2
FEAT_NV requires that when HCR_EL2.NV is set reads of the CurrentEL
register from EL1 always report EL2 rather than the real EL.
Implement this.
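
A standalone illustration of the rule: CurrentEL reports the exception level in bits [3:2], so an EL1 read under HCR_EL2.NV must report 2 rather than 1:

  #include <stdbool.h>
  #include <stdint.h>

  static uint64_t current_el_read(bool hcr_nv, unsigned real_el)
  {
      unsigned reported = (hcr_nv && real_el == 1) ? 2 : real_el;
      return (uint64_t)reported << 2;   /* CurrentEL.EL lives in bits [3:2] */
  }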

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
67d10fc473 target/arm: Trap sysreg accesses for FEAT_NV
For FEAT_NV, accesses to system registers and instructions from EL1
which would normally UNDEF there but which work in EL2 need to
instead be trapped to EL2. Detect this both for "we know this will
UNDEF at translate time" and "we find that this UNDEFs at runtime", and
make the affected registers trap to EL2 instead.

The Arm ARM defines the set of registers that should trap in terms
of their names; for our implementation this would be both awkward
and inefficient as a test, so we instead trap based on the opc1
field of the sysreg. The regularity of the architectural choice
of encodings for sysregs means that in practice this captures
exactly the correct set of registers.

Regardless of how we try to define the registers this trapping
applies to, there's going to be a certain possibility of breakage
if new architectural features introduce new registers that don't
follow the current rules (FEAT_MEC is one example already visible
in the released sysreg XML, though not yet in the Arm ARM). This
approach seems to me to be straightforward and likely to require
a minimum of manual overrides.
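
A hedged sketch of the opc1-based test described above (the exact predicate is an assumption): EL2 system registers use opc1 values 4 and 5, so an EL1 access with HCR_EL2.NV set can be routed to EL2 on that basis.

  #include <stdbool.h>

  /* Hypothetical helper name; sketch only. */
  static bool nv_redirects_to_el2(unsigned opc1)
  {
      return opc1 == 4 || opc1 == 5;
  }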

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
44572fc984 target/arm: Move FPU/SVE/SME access checks up above ARM_CP_SPECIAL_MASK check
In handle_sys() we don't do the check for whether the register is
marked as needing an FPU/SVE/SME access check until after we've
handled the special cases covered by ARM_CP_SPECIAL_MASK.  This is
conceptually the wrong way around, because if for example we happen
to implement an FPU-access-checked register as ARM_CP_NOP, we should
do the access check first.

Move the access checks up so they are with all the other access
checks, not sandwiched between the special-case read/write handling
and the normal-case read/write handling. This doesn't change
behaviour at the moment, because we happen not to define any
cpregs with both ARM_CP_{FPU,SVE,SME} and one of the cases
dealt with by ARM_CP_SPECIAL_MASK.

Moving this code also means we have the correct place to put the
FEAT_NV/FEAT_NV2 access handling, which should come after the access
checks and before we try to do any read/write action.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
b9377d1c5f target/arm: Always honour HCR_EL2.TSC when HCR_EL2.NV is set
The HCR_EL2.TSC trap for trapping EL1 execution of SMC instructions
has a behaviour change for FEAT_NV when EL3 is not implemented:

 * in older architecture versions TSC was required to have no
   effect (i.e. the SMC insn UNDEFs)
 * with FEAT_NV, when HCR_EL2.NV == 1 the trap must apply
   (i.e. SMC traps to EL2, as it already does in all cases when
   EL3 is implemented)
 * in newer architecture versions, the behaviour either without
   FEAT_NV or with FEAT_NV and HCR_EL2.NV == 0 is relaxed to
   an IMPDEF choice between UNDEF and trap-to-EL2 (i.e. it is
   permitted to always honour HCR_EL2.TSC) for AArch64 only

Add the condition to honour the trap bit when HCR_EL2.NV == 1.  We
leave the HCR_EL2.NV == 0 case with the existing (UNDEF) behaviour,
as our IMPDEF choice (both because it avoids a behaviour change
for older CPU models and because we'd have to distinguish AArch32
from AArch64 if we opted to trap to EL2).
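
A standalone illustration of the resulting behaviour for an EL1 SMC when EL3 is not implemented, following the IMPDEF choice described above:

  #include <stdbool.h>

  typedef enum { SMC_UNDEF, SMC_TRAP_TO_EL2 } SmcBehaviour;

  static SmcBehaviour el1_smc_without_el3(bool hcr_tsc, bool hcr_nv)
  {
      /* Honour HCR_EL2.TSC only when HCR_EL2.NV is also set; otherwise UNDEF. */
      return (hcr_tsc && hcr_nv) ? SMC_TRAP_TO_EL2 : SMC_UNDEF;
  }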

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:46 +00:00
Peter Maydell
e37e98b7f9 target/arm: Enable trapping of ERET for FEAT_NV
When FEAT_NV is turned on via the HCR_EL2.NV bit, ERET instructions
are trapped, with the same syndrome information as for the existing
FEAT_FGT fine-grained trap (in the pseudocode this is handled in
AArch64.CheckForEretTrap()).

Rename the DisasContext and tbflag bits to reflect that they are
no longer exclusively for FGT traps, and set the tbflag bit when
FEAT_NV is enabled as well as when the FGT is enabled.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:45 +00:00
Peter Maydell
3d65b958c5 target/arm: Set CTR_EL0.{IDC,DIC} for the 'max' CPU
The CTR_EL0 register has some bits which allow the implementation to
tell the guest that it does not need to do cache maintenance for
data-to-instruction coherence and instruction-to-data coherence.
QEMU doesn't emulate caches and so our cache maintenance insns are
all NOPs.

We already have some models of specific CPUs where we set these bits
(e.g.  the Neoverse V1), but the 'max' CPU still uses the settings it
inherits from Cortex-A57.  Set the bits for 'max' as well, so the
guest doesn't need to do unnecessary work.
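
For reference, a standalone sketch of the two CTR_EL0 hint bits being set (the bit positions are stated here as an aside, not quoted from the commit):

  #define CTR_EL0_IDC (1u << 28)   /* D-cache clean to PoU not needed for I-to-D coherence */
  #define CTR_EL0_DIC (1u << 29)   /* I-cache invalidate to PoU not needed for D-to-I coherence */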

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:43 +00:00
Stefan Hajnoczi
195801d700 system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.
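
A short usage sketch with the renamed API (the declarations are assumed to come from the usual BQL header; this is illustrative, not a quoted hunk):

  #include <assert.h>

  void example_touch_global_state(void)
  {
      bql_lock();
      assert(bql_locked());
      /* ... access state protected by the Big QEMU Lock ... */
      bql_unlock();
  }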

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Philippe Mathieu-Daudé
47eac5d423 target/arm/tcg: Including missing 'exec/exec-all.h' header
translate_insn() ends up calling probe_access_full(), itself
declared in "exec/exec-all.h":

  TranslatorOps::translate_insn
    -> aarch64_tr_translate_insn()
      -> is_guarded_page()
        -> probe_access_full()

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19 17:57:49 +00:00
Philippe Mathieu-Daudé
d1d119bbd7 target/arm: Restrict TCG specific helpers
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19 17:57:48 +00:00
Richard Henderson
3efd849573 target/arm: Fix SME FMOPA (16-bit), BFMOPA
Perform the loop increment unconditionally, not nested
within the predication.
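
A hedged, generic sketch of the control-flow shape described (names are illustrative, not the actual helper):

  for (col = 0; col < elements; col++, pred >>= 1) {
      if (pred & 1) {
          /* widening multiply-accumulate for this element */
      }
      /* col and pred advance on every iteration, not only when the
       * element is unpredicated */
  }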

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac7 ("target/arm: Implement FMOPA, FMOPS (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1985
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231117193135.1180657-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-20 15:17:00 +00:00
Marcin Juszkiewicz
e867a1242e target/arm: enable FEAT_RNG on Neoverse-N2
I noticed that Neoverse-V1 has FEAT_RNG enabled, so let's enable it on
Neoverse-N2 as well.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231114103443.1652308-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-20 15:09:42 +00:00
Michael Tokarev
51464c5612 target/arm/tcg: spelling fixes: alse, addreses
Fixes: 179e9a3bac "target/arm: Define new TB flag for ATA0"
Fixes: 5d7b37b5f6 "target/arm: Implement the CPY* instructions"
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-11-15 11:59:54 +03:00
Nikita Ostrenkov
f6e8d1ef05 target/arm/tcg: enable PMU feature for Cortex-A8 and A9
According to the technical reference manual, the Cortex-A9
has a Performance Monitoring Unit (PMU):
https://developer.arm.com/documentation/100511/0401/performance-monitoring-unit/about-the-performance-monitoring-unit
The Cortex-A8 does also.

We already define the PMU registers when emulating the
Cortex-A8 and Cortex-A9, because we put them in v7_cp_reginfo[]
rather than guarding them behind ARM_FEATURE_PMU.  So the only thing
that setting the feature bit changes is that the registers actually
do something.

Enable ARM_FEATURE_PMU for Cortex-A8 and Cortex-A9, to avoid
this anomaly.

(The A8 and A9 PMU predates the standardisation of ID_DFR0.PerfMon,
so the field there is 0, but the PMU is still present.)

Signed-off-by: Nikita Ostrenkov <n.ostrenkov@gmail.com>
Message-id: 20231112165658.2335-1-n.ostrenkov@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message; also enable PMU for A8]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-13 16:31:41 +00:00
Peter Maydell
4d044472ab target/arm: Correct MTE tag checking for reverse-copy MOPS
When we are doing a FEAT_MOPS copy that must be performed backwards,
we call mte_mops_probe_rev(), passing it the address of the last byte
in the region we are probing.  However, allocation_tag_mem_probe()
wants the address of the first byte to get the tag memory for.
Because we passed it (ptr, size) we could incorrectly trip the
allocation_tag_mem_probe() check for "does this access run across to
the following page", and if that following page happened not to be
valid then we would assert.

We know we will always be only dealing with a single page because the
code that calls mte_mops_probe_rev() ensures that.  We could make
mte_mops_probe_rev() pass 'ptr - (size - 1)' to
allocation_tag_mem_probe(), but then we would have to adjust the
returned 'mem' pointer to get back to the tag RAM for the last byte
of the region.  It's simpler to just pass in a size of 1 byte,
because we know that allocation_tag_mem_probe() in pure-probe
single-page mode doesn't care about the size.

Fixes: 69c51dc372 ("target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231110162546.2192512-1-peter.maydell@linaro.org
2023-11-13 13:15:50 +00:00
Peter Maydell
fc58891d04 target/arm: HVC at EL3 should go to EL3, not EL2
AArch64 permits code at EL3 to use the HVC instruction; however the
exception we take should go to EL3, not down to EL2 (see the pseudocode
AArch64.CallHypervisor()). Fix the target EL.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Message-id: 20231109151917.1925107-1-peter.maydell@linaro.org
2023-11-13 13:15:31 +00:00
Peter Maydell
5722fc4712 target/arm: Fix A64 LDRA immediate decode
In commit be23a049 in the conversion to decodetree we broke the
decoding of the immediate value in the LDRA instruction.  This should
be a 10 bit signed value that is scaled by 8, but in the conversion
we incorrectly ended up scaling it only by 2.  Fix the scaling
factor.
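
A standalone illustration of the decode described above: the immediate is a 10-bit signed field scaled by 8 bytes:

  #include <stdint.h>

  static int64_t ldra_offset(uint32_t imm10)
  {
      /* sign-extend the 10-bit immediate field */
      int64_t simm = (int64_t)(imm10 & 0x3ff) - ((imm10 & 0x200) ? 0x400 : 0);
      return simm * 8;   /* scaled by 8, not by 2 */
  }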

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1970
Fixes: be23a049 ("target/arm: Convert load (pointer auth) insns to decodetree")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231106113445.1163063-1-peter.maydell@linaro.org
2023-11-06 15:00:29 +00:00
Richard Henderson
b11293c212 target/arm: Fix SVE STR increment
The previous change missed updating one of the increments and
one of the MemOps.  Add a test case for all vector lengths.

Cc: qemu-stable@nongnu.org
Fixes: e6dd5e782b ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231031143215.29764-1-richard.henderson@linaro.org
[PMM: fixed checkpatch nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-11-02 13:36:45 +00:00
Peter Maydell
854c001f12 target/arm: Make FEAT_MOPS SET* insns handle Xs == XZR correctly
Most of the registers used by the FEAT_MOPS instructions cannot use
31 as a register field value; this is CONSTRAINED UNPREDICTABLE to
NOP or UNDEF (we UNDEF).  However, it is permitted for the "source
value" register for the memset insns SET* to be 31, which (as usual
for most data-processing insns) means it should be the zero register
XZR. We forgot to handle this case, with the effect that trying to
set memory to zero with a "SET* Xd, Xn, XZR" sets the memory to
the value that happens to be in the low byte of SP.

Handle XZR when getting the SET* data value from the register file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231030174000.3792225-4-peter.maydell@linaro.org
2023-11-02 13:36:45 +00:00
Peter Maydell
307521d6e2 target/arm: Fix syndrome for FGT traps on ERET
In commit 442c9d682c when we converted the ERET, ERETAA, ERETAB
instructions to decodetree, the conversion accidentally lost the
correct setting of the syndrome register when taking a trap because
of the FEAT_FGT HFGITR_EL1.ERET bit.  Instead of reporting a correct
full syndrome value with the EC and IL bits, we only reported the low
two bits of the syndrome, because the call to syn_erettrap() got
dropped.

Fix the syndrome values for these traps by reinstating the
syn_erettrap() calls.

Fixes: 442c9d682c ("target/arm: Convert ERET, ERETAA, ERETAB to decodetree")
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024172438.2990945-1-peter.maydell@linaro.org
2023-10-27 11:44:59 +01:00
Peter Maydell
5a534314a8 target/arm: Move feature test functions to their own header
The feature test functions isar_feature_*() now take up nearly
a thousand lines in target/arm/cpu.h. This header file is included
by a lot of source files, most of which don't need these functions.
Move the feature test functions to their own header file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org
2023-10-27 11:44:32 +01:00
Peter Maydell
dfff1000fe target/arm: Implement Neoverse N2 CPU model
Implement a model of the Neoverse N2 CPU. This is an Armv9.0-A
processor very similar to the Cortex-A710. The differences are:
 * no FEAT_EVT
 * FEAT_DGH (data gathering hint)
 * FEAT_NV (not yet implemented in QEMU)
 * Statistical Profiling Extension (not implemented in QEMU)
 * 48 bit physical address range, not 40
 * CTR_EL0.DIC = 1 (no explicit icache cleaning needed)
 * PMCR_EL0.N = 6 (always 6 PMU counters, not 20)

Because it has 48-bit physical address support, we can use
this CPU in the sbsa-ref board as well as the virt board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-3-peter.maydell@linaro.org
2023-10-27 11:41:13 +01:00
Peter Maydell
3bcc53980b target/arm: Correct minor errors in Cortex-A710 definition
Correct a couple of minor errors in the Cortex-A710 definition:
 * ID_AA64DFR0_EL1.DebugVer is 9 (indicating Armv8.4 debug architecture)
 * ID_AA64ISAR1_EL1.APA is 5 (indicating more PAuth support)
 * there is an IMPDEF CPUCFR_EL1, like that on the Neoverse-N1
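
A hedged sketch of the two ID-register corrections (the QEMU field-macro spellings are assumed, not quoted from the patch):

  cpu->isar.id_aa64dfr0 =
      FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, DEBUGVER, 9);
  cpu->isar.id_aa64isar1 =
      FIELD_DP64(cpu->isar.id_aa64isar1, ID_AA64ISAR1, APA, 5);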

Fixes: e3d45c0a89 ("target/arm: Implement cortex-a710")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-2-peter.maydell@linaro.org
2023-10-27 11:41:13 +01:00
Richard Henderson
2f02c14b21 target/arm: Use tcg_gen_ext_i64
The ext_and_shift_reg helper does this plus a shift.
The non-zero check for the shift count duplicates
the one done within tcg_gen_shli_i64.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-22 16:35:43 -07:00
Peter Maydell
3d80bbf1f6 target/arm: Implement FEAT_HPMN0
FEAT_HPMN0 is a small feature which defines that it is valid for
MDCR_EL2.HPMN to be set to 0, meaning "no PMU event counters provided
to an EL1 guest" (previously this setting was reserved). QEMU's
implementation almost gets HPMN == 0 right, but we need to fix
one check in pmevcntr_is_64_bit(). That is enough for us to
advertise the feature in the 'max' CPU.

(We don't need to make the behaviour conditional on feature
presence, because the FEAT_HPMN0 behaviour is within the range
of permitted UNPREDICTABLE behaviour for a non-FEAT_HPMN0
implementation.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230921185445.3339214-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
a530e470ea target/arm: Permit T32 LDM with single register
For the Thumb T32 encoding of LDM, if only a single register is
specified in the register list this instruction is UNPREDICTABLE,
with the following choices:
 * instruction UNDEFs
 * instruction is a NOP
 * instruction loads a single register
 * instruction loads an unspecified set of registers

Currently we choose to UNDEF (a behaviour chosen in commit
4b222545db in 2019; previously we treated it as "load the
specified single register").

Unfortunately there is real world code out there (which shipped in at
least Android 11, 12 and 13) which incorrectly uses this
UNPREDICTABLE insn on the assumption that it does a single register
load, which is (presumably) what it happens to do on real hardware,
and is also what it does on the equivalent A32 encoding.

Revert to the pre-4b222545dbf30 behaviour of not UNDEFing
for this T32 encoding.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1799
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230927101853.39288-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Richard Henderson
b77af26e97 accel/tcg: Replace CPUState.env_ptr with cpu_env()
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Richard Henderson
ad75a51e84 tcg: Rename cpu_env to tcg_env
Allow the name 'cpu_env' to be used for something else.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Anton Johansson
a81fef4b64 target/arm: Replace TARGET_PAGE_ENTRY_EXTRA
TARGET_PAGE_ENTRY_EXTRA is a macro that allows guests to specify additional
fields for caching with the full TLB entry.  This macro is replaced with
a union in CPUTLBEntryFull, thus making CPUTLB target-agnostic at the
cost of slightly inflated CPUTLBEntryFull for non-arm guests.

Note, this is needed to ensure that fields in CPUTLB don't vary in
offset between various targets.

(arm is the only guest actually making use of this feature.)

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-2-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Philippe Mathieu-Daudé
d54deb2a07 target/arm/tcg: Clean up local variable shadowing
Fix:

  target/arm/tcg/translate-m-nocp.c: In function ‘gen_M_fp_sysreg_read’:
  target/arm/tcg/translate-m-nocp.c:509:18: warning: declaration of ‘tmp’ shadows a previous local [-Wshadow=compatible-local]
    509 |         TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
        |                  ^~~
  target/arm/tcg/translate-m-nocp.c:433:14: note: shadowed declaration is here
    433 |     TCGv_i32 tmp;
        |              ^~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘helper_mve_vqshlsb’:
  target/arm/tcg/mve_helper.c:1259:19: warning: declaration of ‘r’ shadows a previous local [-Wshadow=compatible-local]
   1259 |         typeof(N) r = FN(N, (int8_t)(M), sizeof(N) * 8, ROUND, &su32);  \
        |                   ^
  target/arm/tcg/mve_helper.c:1267:5: note: in expansion of macro ‘WRAP_QRSHL_HELPER’
   1267 |     WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
        |     ^~~~~~~~~~~~~~~~~
  target/arm/tcg/mve_helper.c:927:22: note: in expansion of macro ‘DO_SQSHL_OP’
    927 |             TYPE r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)], &sat);          \
        |                      ^~
  target/arm/tcg/mve_helper.c:945:5: note: in expansion of macro ‘DO_2OP_SAT’
    945 |     DO_2OP_SAT(OP##b, 1, int8_t, FN)            \
        |     ^~~~~~~~~~
  target/arm/tcg/mve_helper.c:1277:1: note: in expansion of macro ‘DO_2OP_SAT_S’
   1277 | DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
        | ^~~~~~~~~~~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘do_sqrshl48_d’:
  target/arm/tcg/mve_helper.c:2463:17: warning: declaration of ‘extval’ shadows a previous local [-Wshadow=compatible-local]
   2463 |         int64_t extval = sextract64(src << shift, 0, 48);
        |                 ^~~~~~
  target/arm/tcg/mve_helper.c:2443:18: note: shadowed declaration is here
   2443 |     int64_t val, extval;
        |                  ^~~~~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘do_uqrshl48_d’:
  target/arm/tcg/mve_helper.c:2495:18: warning: declaration of ‘extval’ shadows a previous local [-Wshadow=compatible-local]
   2495 |         uint64_t extval = extract64(src << shift, 0, 48);
        |                  ^~~~~~
  target/arm/tcg/mve_helper.c:2479:19: note: shadowed declaration is here
   2479 |     uint64_t val, extval;
        |                   ^~~~~~

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230904161235.84651-3-philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-09-29 10:07:14 +02:00
Peter Maydell
706a92fbfa target/arm: Enable FEAT_MOPS for CPU 'max'
Enable FEAT_MOPS on the AArch64 'max' CPU, and add it to
the list of features we implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-13-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
5d7b37b5f6 target/arm: Implement the CPY* instructions
The FEAT_MOPS CPY* instructions implement memory copies. These
come in both "always forwards" (memcpy-style) and "overlap OK"
(memmove-style) flavours.
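
A standalone illustration of the distinction: the memmove-style flavour must copy backwards when the destination overlaps the source from above:

  #include <stdbool.h>
  #include <stdint.h>

  static bool must_copy_backwards(uint64_t dst, uint64_t src, uint64_t len)
  {
      /* With dst inside (src, src + len), a forward copy would clobber
       * source bytes before they are read. */
      return dst > src && dst < src + len;
  }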

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-12-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
69c51dc372 target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies
The FEAT_MOPS memory copy operations need an extra helper routine
for checking for MTE tag checking failures beyond the ones we
already added for memory set operations:
 * mte_mops_probe_rev() does the same job as mte_mops_probe(), but
   it checks tags starting at the provided address and working
   backwards, rather than forwards

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-11-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
6087df5744 target/arm: Implement the SETG* instructions
The FEAT_MOPS SETG* instructions are very similar to the SET*
instructions, but as well as setting memory contents they also
set the MTE tags. They are architecturally required to operate
on tag-granule aligned regions only.
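
A standalone illustration of the alignment requirement (the MTE tag granule size of 16 bytes is stated here as an aside):

  #include <stdbool.h>
  #include <stdint.h>

  #define MTE_GRANULE_SIZE 16u

  static bool setg_operands_ok(uint64_t addr, uint64_t size)
  {
      return (addr % MTE_GRANULE_SIZE == 0) && (size % MTE_GRANULE_SIZE == 0);
  }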

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-10-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
179e9a3bac target/arm: Define new TB flag for ATA0
Currently the only tag-setting instructions always do so in the
context of the current EL, and so we only need one ATA bit in the TB
flags.  The FEAT_MOPS SETG instructions include ones which set tags
for a non-privileged access, so we now also need the equivalent "are
tags enabled?" information for EL0.

Add the new TB flag, and convert the existing 'bool ata' field in
DisasContext to a 'bool ata[2]' that can be indexed by the is_unpriv
bit in an instruction, similarly to mte[2].
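
A minimal sketch of the DisasContext change described (surrounding fields omitted):

  /* Was: bool ata; -- now indexed by the instruction's is_unpriv bit,
   * mirroring the existing mte[2] pair. */
  bool ata[2];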

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-9-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
0e92818887 target/arm: Implement the SET* instructions
Implement the SET* instructions which collectively implement a
"memset" operation.  These come in a set of three, eg SETP
(prologue), SETM (main), SETE (epilogue), and each of those has
different flavours to indicate whether memory accesses should be
unpriv or non-temporal.

This commit does not include the "memset with tag setting"
SETG* instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-8-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
8163998920 target/arm: Implement MTE tag-checking functions for FEAT_MOPS
The FEAT_MOPS instructions need a couple of helper routines that
check for MTE tag failures:
 * mte_mops_probe() checks whether there is going to be a tag
   error in the next up-to-a-page worth of data
 * mte_check_fail() is an existing function to record the fact
   of a tag failure, which we need to make global so we can
   call it from helper-a64.c

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-7-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
aa03378bcc target/arm: New function allocation_tag_mem_probe()
For the FEAT_MOPS operations, the existing allocation_tag_mem()
function almost does what we want, but it will take a watchpoint
exception even for an ra == 0 probe request, and it requires that the
caller guarantee that the memory is accessible.  For FEAT_MOPS we
want a function that will not take any kind of exception, and will
return NULL for the not-accessible case.

Rename allocation_tag_mem() to allocation_tag_mem_probe() and add an
extra 'probe' argument that lets us distinguish these cases;
allocation_tag_mem() is now a wrapper that always passes 'false'.
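
A hedged sketch of the wrapper described (the parameter list is abbreviated and illustrative, not the exact QEMU prototype):

  static uint8_t *allocation_tag_mem(CPUARMState *env, uint64_t ptr, uintptr_t ra)
  {
      /* Non-probe callers keep the old behaviour: 'probe' is always false. */
      return allocation_tag_mem_probe(env, ptr, false, ra);
  }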

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-6-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00