Commit Graph

9763 Commits

Author SHA1 Message Date
Jason A. Donenfeld 3dbc5fdacb target/s390x: support PRNO_TRNG instruction
In order for hosts running inside of TCG to initialize the kernel's
random number generator, we should support the PRNO_TRNG instruction,
backed in the usual way with the qemu_guest_getrandom helper. This is
confirmed working on Linux 5.19.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220921100729.2942008-2-Jason@zx2c4.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[thuth: turn prno-trng off in avocado test to avoid breaking it]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-09-26 17:23:11 +02:00
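
For context, qemu_guest_getrandom_nofail() is the usual way TCG code pulls host entropy into the guest. Below is a hedged sketch of how a PRNO-TRNG-style helper might fill guest memory; the function, its arguments and the flat physical-memory write are illustrative, not the actual s390x implementation.

    /* Hypothetical helper: fill `len` guest bytes at physical address `addr`
     * with host-provided randomness, one bounded chunk at a time. */
    static void trng_fill_guest(hwaddr addr, uint64_t len)
    {
        uint8_t buf[256];

        while (len) {
            size_t chunk = MIN(len, sizeof(buf));
            qemu_guest_getrandom_nofail(buf, chunk);     /* aborts on error */
            cpu_physical_memory_write(addr, buf, chunk); /* real code uses MMU-aware stores */
            addr += chunk;
            len -= chunk;
        }
    }
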
Jason A. Donenfeld 9f17bfdab4 target/s390x: support SHA-512 extensions
In order to fully support MSA_EXT_5, we have to support the SHA-512
special instructions. So implement those.

The implementation began as something TweetNacl-like, and then was
adjusted to be useful here. It's not very beautiful, but it is quite
short and compact, which is what we're going for.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[ restructure, add missing exception, add comments, fixup CPU model ]
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220922153820.221811-1-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-09-23 15:18:52 +02:00
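
For reference, the SHA-512 compression schedule is built from four fixed rotate/shift combinations (FIPS 180-4). A compact standalone rendering of those primitives, with names of our own choosing rather than QEMU's:

    #include <stdint.h>

    static inline uint64_t ror64(uint64_t x, unsigned n)
    {
        return (x >> n) | (x << (64 - n));   /* n is always 1..63 here */
    }

    /* The four SHA-512 sigma functions. */
    static uint64_t Sigma0(uint64_t x) { return ror64(x, 28) ^ ror64(x, 34) ^ ror64(x, 39); }
    static uint64_t Sigma1(uint64_t x) { return ror64(x, 14) ^ ror64(x, 18) ^ ror64(x, 41); }
    static uint64_t sigma0(uint64_t x) { return ror64(x, 1)  ^ ror64(x, 8)  ^ (x >> 7); }
    static uint64_t sigma1(uint64_t x) { return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6); }
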
Christian Borntraeger 131aafa7ef s390x/tcg: Fix opcode for lzrf
Fix the opcode for Load and Zero Rightmost Byte (32).

Fixes: c2a5c1d718 ("target/s390x: Implement load-and-zero-rightmost-byte insns")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: qemu-stable@nongnu.org
Message-Id: <20220914105750.767697-1-borntraeger@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-09-23 08:36:07 +02:00
Stefan Hajnoczi 394876e008 Hexagon update

Merge tag 'pull-hex-20220919' of https://github.com/quic/qemu into staging

Hexagon update
    remove unused encodings
    add fmin/fmax tests for signed zero

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmMou7IACgkQewJE+xLe
# RCIYbQgAgjFujecgbbCJfBPVMmpTXNOgk+Jt3w+jfg7/WJRZuhxAU3xB2qpismUH
# 5MntMlFHAGOjlPXfg6U5AZFSw3RhlanH/RChHpVKuL6peOXFImIfEqdyVXHXfCuu
# FlpQFGwJ3Rs50UJhd7lVdlx0I7lup4E4X77hFvFcZQP6aNrt6Ic1Zq5eXhEq9k2A
# NnXol1R416JRT/senujYVvcTpgYVHlQCS+4dJEzKUqvFlTdo7lnAbPdjO8MPrz7B
# 0NgPUGjGZJ70Dcqvd1n8HePIU1YyKTlHJNaWyTlAmw4MECyHyAJnd64jEMNECDb5
# 0BrpHcY1HCt1Rh4QratemTfJglAJlA==
# =UUyr
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 19 Sep 2022 14:57:54 EDT
# gpg:                using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5  9AB4 7B02 44FB 12DE 4422

* tag 'pull-hex-20220919' of https://github.com/quic/qemu:
  Hexagon (tests/tcg/hexagon): add fmin/fmax tests for signed zero
  Hexagon (target/hexagon) remove unused encodings

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-22 13:21:50 -04:00
Stefan Hajnoczi 6338c30111 m68k pull request 20220921

Merge tag 'm68k-for-7.2-pull-request' of https://github.com/vivier/qemu-m68k into staging

m68k pull request 20220921

- several fixes for SR
- implement TAS
- feature cleanup

# -----BEGIN PGP SIGNATURE-----
#
# iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAmMrMx0SHGxhdXJlbnRA
# dml2aWVyLmV1AAoJEPMMOL0/L748UB0P/1w5w+ogpcWVp9uBPE9m6lTT8sTricWD
# oGMIEG0kgpS3xTp7pZ/WeCp38IShFBfBcz5aypvR5nS/1aclyJnzGsCqyWdBe9c4
# jJJY5r7JKP2aKcolGNilNyd20ldOiaxZGe6yFLJDWi2spFbRx3iiRJ4/2NXF5Xi8
# TlHKdlDGlNLFiFNtBXMMwqvL77qJ8LH/aE4cAr8JTOb1VszKXrFxqEqxoUucRirB
# u0LbM+DP3u2xXjTGLMLlMcKf9X2BXwuWBSAXslB8xWmRlX+B6fMudBFglTgbu0Cc
# bpoBBqY4s3QYPb21i89osYevJAJSdrtEzkKus3xAI08ACSffb9k9m/naVJJDSSNC
# HZeKVbAd7I0Xw2xzzO5yQB+7rdfgoL1miE1rs936WKHi0WWHZpdJqzl/3G3ZPhRz
# NmqczF9rRR9B9SabXx5lWlhK+Ys/W7PzY+R4gc6ose0wF4T70qmVF3EoioP1c5Y8
# 6OonMpRu6L5sW5KM3IUmkBo3KcnLezlxtebfaDyaKC9tB0qg4aM14ikL36nsLFbh
# 2nGExYSyMJ6U4tqpxyQxijMMSQG20vyVIup6cUsrSD+rGmbSuWZWJwsTmaAw2W6k
# 6HtDgtFk40ZB1WttYupQBa/LgjshGLl28jyLI9nNEdFYb4H1JAallEERF/tb6AUD
# WEiu8vcUMYEp
# =IAMt
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 21 Sep 2022 11:51:57 EDT
# gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg:                issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* tag 'm68k-for-7.2-pull-request' of https://github.com/vivier/qemu-m68k:
  target/m68k: always call gen_exit_tb() after writes to SR
  target/m68k: rename M68K_FEATURE_M68000 to M68K_FEATURE_M68K
  target/m68k: Perform writback before modifying SR
  target/m68k: Fix MACSR to CCR
  target/m68k: Implement atomic test-and-set

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-21 13:12:36 -04:00
Stefan Hajnoczi 6514f1a522 ppc patch queue for 2022-09-20:

Merge tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu into staging

ppc patch queue for 2022-09-20:

This queue contains an implementation of PowerISA 3.1B hash insns, ppc
TCG insns cleanups and fixes, and miscellaneous fixes in the spapr and
pnv_phb models.

# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYyoWlAAKCRA82cqW3gMx
# ZDYhAP0eQMeA4NS3hiw7WMcAVg0pei3ZJL9oEh1UE3+MfK7MhQEA0q8qExWnQJAA
# a0hfnFH9pLjI+v0f/FbFK6QJBpu/bg8=
# =qT+H
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 20 Sep 2022 15:37:56 EDT
# gpg:                using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28  3819 3CD9 CA96 DE03 3164

* tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu:
  hw/ppc/spapr: Fix code style problems reported by checkpatch
  hw/pci-host: pnv_phb{3, 4}: Fix heap out-of-bound access failure
  hw/ppc: spapr: Use qemu_vfree() to free spapr->htab
  target/ppc: Clear fpstatus flags on helpers missing it
  target/ppc: Zero second doubleword of VSR registers for FPR insns
  target/ppc: Set OV32 when OV is set
  target/ppc: Zero second doubleword for VSX madd instructions
  target/ppc: Set result to QNaN for DENBCD when VXCVI occurs
  target/ppc: Zero second doubleword in DFP instructions
  target/ppc: Remove unused xer_* macros
  target/ppc: Remove extra space from s128 field in ppc_vsr_t
  target/ppc: Merge fsqrt and fsqrts helpers
  target/ppc: Move fsqrts to decodetree
  target/ppc: Move fsqrt to decodetree
  target/ppc: Implement hashstp and hashchkp
  target/ppc: Implement hashst and hashchk
  target/ppc: Add HASHKEYR and HASHPKEYR SPRs

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-21 13:11:57 -04:00
Mark Cave-Ayland c7546abfaa target/m68k: always call gen_exit_tb() after writes to SR
Any write to SR can change the security state, so always call gen_exit_tb() when
this occurs. In particular, MacOS makes use of andiw/oriw in a few places to
handle the switch between user and supervisor mode.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-09-21 15:10:57 +02:00
Mark Cave-Ayland aece90d85d target/m68k: rename M68K_FEATURE_M68000 to M68K_FEATURE_M68K
The M68K_FEATURE_M68000 feature is misleading in that its name suggests the feature
is defined just for Motorola 68000 CPUs, whilst in fact it is defined for all
Motorola 680X0 CPUs.

In order to avoid confusion with the other M68K_FEATURE_M680X0 constants which
define the features available for specific Motorola CPU models, rename
M68K_FEATURE_M68000 to M68K_FEATURE_M68K and add comments to clarify its usage.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-09-21 15:10:49 +02:00
Richard Henderson 214c6002d0 target/m68k: Perform writback before modifying SR
Writes to SR may change security state, which may involve
a swap of %ssp with %usp as reflected in %a7.  Finish the
writeback of %sp@+ before swapping stack pointers.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1206
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220913142818.7802-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-09-21 15:02:28 +02:00
Richard Henderson 24ec52f91d target/m68k: Fix MACSR to CCR
First, we were writing to the entire SR register, instead
of only the flags portion.  Second, we were not clearing C
as per the documentation (X was cleared via the 0xf mask).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220913142818.7802-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-09-21 15:01:37 +02:00
Richard Henderson 5934dae7a7 target/m68k: Implement atomic test-and-set
This is slightly more complicated than cas,
because tas is allowed on data registers.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220829051746.227094-1-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2022-09-21 14:59:45 +02:00
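
Architecturally, tas reads a byte, sets N and Z from the value it read, and then sets bit 7. A plain-C model of those semantics (QEMU performs the memory case as an atomic read-modify-write):

    #include <stdint.h>
    #include <stdbool.h>

    /* Model of m68k TAS on one byte: the flags reflect the value *before*
     * the top bit is set; V and C are cleared by the instruction. */
    static uint8_t tas_model(uint8_t old, bool *flag_n, bool *flag_z)
    {
        *flag_n = (old & 0x80) != 0;
        *flag_z = (old == 0);
        return old | 0x80;
    }
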
Víctor Colombo c3f24257e3 target/ppc: Clear fpstatus flags on helpers missing it
In ppc emulation, exception flags are not cleared at the end of an
instruction. Instead, the next instruction is responsible for clearing
them before its own emulation. However, some helpers were not doing this,
so the previously set exception flags were reused, leading to incorrect
values being set in FPSCR.
Fix this by clearing fp_status before doing the instruction's 'real' work
in the following helpers that were missing this behavior:

- VSX_CVT_INT_TO_FP_VECTOR
- VSX_CVT_FP_TO_FP
- VSX_CVT_FP_TO_INT_VECTOR
- VSX_CVT_FP_TO_INT2
- VSX_CVT_FP_TO_INT
- VSX_CVT_FP_TO_FP_HP
- VSX_CVT_FP_TO_FP_VECTOR
- VSX_CMP
- VSX_ROUND
- xscvqpdp
- xscvdpsp[n]

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
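
As a rough illustration of the pattern described above, a hedged sketch of a helper that resets the softfloat flags before its real work; the helper name and arguments are invented, only set_float_exception_flags() and env->fp_status are real QEMU interfaces.

    /* Sketch only: drop stale softfloat flags at the top of an FP helper,
     * so only this instruction's exceptions are folded into FPSCR later. */
    void helper_example_fp_op(CPUPPCState *env, ppc_vsr_t *xt, ppc_vsr_t *xb)
    {
        set_float_exception_flags(0, &env->fp_status);

        /* ... conversion / comparison / rounding work goes here ... */
    }
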
Víctor Colombo 4b65b6e769 target/ppc: Zero second doubleword of VSR registers for FPR insns
FPR register are mapped to the first doubleword of the VSR registers.
Since PowerISA v3.1, the second doubleword of the target register
must be zeroed for FP instructions.

This patch does it by writing 0 to the second doubleword every time the
first doubleword is written using set_fpr.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-8-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
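
A hedged sketch of the set_fpr() change described above; vsr_dword_offset() is a made-up stand-in for however the translator computes VSR doubleword offsets.

    /* Sketch: write the FP result to doubleword 0 of the VSR and clear
     * doubleword 1, as PowerISA v3.1 requires for FP destinations. */
    static void set_fpr(int regno, TCGv_i64 src)
    {
        tcg_gen_st_i64(src, cpu_env, vsr_dword_offset(regno, 0));
        tcg_gen_st_i64(tcg_constant_i64(0), cpu_env, vsr_dword_offset(regno, 1));
    }
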
Víctor Colombo af721a3169 target/ppc: Set OV32 when OV is set
According to PowerISA: "OV32 is set whenever OV is implicitly set, and
is set to the same value that OV is defined to be set to in 32-bit
mode".

This patch changes helper_update_ov_legacy to set/clear ov32 when
applicable.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-7-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
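
In plain C, the intended behaviour amounts to mirroring OV into OV32 on every legacy update. A self-contained sketch, with illustrative bit positions rather than QEMU's own definitions:

    #include <stdint.h>

    #define XER_SO_BIT   (1u << 31)   /* illustrative positions only */
    #define XER_OV_BIT   (1u << 30)
    #define XER_OV32_BIT (1u << 19)

    /* Sketch: whenever OV is set or cleared the legacy way, OV32 follows it;
     * SO remains sticky and is only ever set. */
    static void update_ov_legacy(uint32_t *xer, int ov)
    {
        if (ov) {
            *xer |= XER_SO_BIT | XER_OV_BIT | XER_OV32_BIT;
        } else {
            *xer &= ~(XER_OV_BIT | XER_OV32_BIT);
        }
    }
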
Víctor Colombo 9f097daa54 target/ppc: Zero second doubleword for VSX madd instructions
In 205eb5a89e we updated most VSX instructions to zero the
second doubleword, as required by PowerISA since v3.1.
However, the VSX_MADD helper was left unchanged, even though it
is also affected and should be fixed as well.

This patch applies the fix for MADD instructions.

Fixes: 205eb5a89e ("target/ppc: Change VSX instructions behavior to fill with zeros")
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-6-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 3ecec4c042 target/ppc: Set result to QNaN for DENBCD when VXCVI occurs
According to the ISA, for instruction DENBCD:
"If an invalid BCD digit or sign code is detected in the source
operand, an invalid-operation exception (VXCVI) occurs."

In the Invalid Operation Exception section, the ISA describes this situation:
"When Invalid Operation Exception is disabled (VE=0) and Invalid
Operation occurs (...) If the operation is an (...) or format the
target FPR is set to a Quiet NaN". This was not being done in
QEMU.

This patch sets the result to QNaN when the instruction DENBCD causes
an Invalid Operation Exception.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-5-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 34f760bac2 target/ppc: Zero second doubleword in DFP instructions
Starting with PowerISA v3.1, the second doubleword of the registers
used to store results in DFP instructions is supposed to be zeroed.

From the ISA, chapter 7.2.1.1 Floating-Point Registers:
"""
Chapter 4. Floating-Point Facility provides 32 64-bit
FPRs. Chapter 5. Decimal Floating-Point also employs
FPRs in decimal floating-point (DFP) operations. When
VSX is implemented, the 32 FPRs are mapped to
doubleword 0 of VSRs 0-31. (...)
All instructions that operate on an FPR are redefined
to operate on doubleword element 0 of the
corresponding VSR. (...)
and the contents of doubleword element 1 of the
VSR corresponding to the target FPR or FPR pair for these
instructions are set to 0.
"""

Before, the result stored at doubleword 1 was said to be undefined.

With that, this patch changes the DFP facility to zero doubleword 1
when using set_dfp64 and set_dfp128. This fixes the behavior for ISA
3.1 while keeping the behavior correct for previous ones.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 228ab1451d target/ppc: Remove unused xer_* macros
The macros xer_ov, xer_ca, xer_ov32, and xer_ca32 are both unused and
hiding the usage of env. Remove them.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 676696f428 target/ppc: Remove extra space from s128 field in ppc_vsr_t
Very trivial rogue space removal. There are two spaces between Int128
and s128 in the ppc_vsr_t struct, where there should be only one.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 74177ec661 target/ppc: Merge fsqrt and fsqrts helpers
These two helpers are almost identical, differing only in the softfloat
operation they call. Merge them into one using a macro.
Also, take this opportunity to capitalize the helper name as we moved
the instruction to decodetree in a previous patch.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220905123746.54659-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
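
The merge can be pictured as a small macro stamping out both variants, differing only in the softfloat routine used. A hedged sketch that omits the exception/NaN handling of the real helper and assumes the float64r32_sqrt() single-precision-rounding routine:

    /* Sketch of the macro-merged square-root helpers; error handling omitted. */
    #define FPU_FSQRT(name, op)                                      \
        float64 helper_##name(CPUPPCState *env, float64 arg)         \
        {                                                            \
            return op(arg, &env->fp_status);                         \
        }

    FPU_FSQRT(FSQRT,  float64_sqrt)     /* double-precision result */
    FPU_FSQRT(FSQRTS, float64r32_sqrt)  /* result rounded to single precision */
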
Víctor Colombo 4896c15bc3 target/ppc: Move fsqrts to decodetree
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220905123746.54659-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 6a8654d6c2 target/ppc: Move fsqrt to decodetree
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220905123746.54659-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 53ae2aeb94 target/ppc: Implement hashstp and hashchkp
Implementation for instructions hashstp and hashchkp, the privileged
versions of hashst and hashchk, which were added in Power ISA 3.1B.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 670f1da374 target/ppc: Implement hashst and hashchk
Implementation for instructions hashst and hashchk, which were added
in Power ISA 3.1B.

It was decided to implement the hash algorithm from the ground up in this
patch, exactly as described in the Power ISA.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-3-victor.colombo@eldorado.org.br>
[danielhb: fix block comment in excp_helper.c]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Víctor Colombo 903f84eb88 target/ppc: Add HASHKEYR and HASHPKEYR SPRs
Add the Special Purpose Registers HASHKEYR and HASHPKEYR, which were
introduced by the Power ISA 3.1B. They are used by the new instructions
hashchk(p) and hashst(p).

The ISA states that the Operating System should generate the value for
these registers when creating a process, so it is its responsibility to
do so. We initialize them with 0 for qemu-softmmu, and set a random
64-bit value for linux-user.

Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-09-20 10:54:06 -03:00
Taylor Simpson caa0a12e01 Hexagon (target/hexagon) remove unused encodings
Remove encodings guarded by an ifdef that is not defined.

Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606222327.7682-4-tsimpson@quicinc.com>
2022-09-19 11:44:20 -07:00
Paolo Bonzini efcca7ef17 target/i386: introduce insn_get_addr
The "O" operand type in the Intel SDM needs to load an 8- to 64-bit
unsigned value, while insn_get is limited to 32 bits.  Extract the code
out of disas_insn and into a separate function.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
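
A hedged sketch of what an insn_get_addr()-style fetcher looks like, built on the translator's existing x86_ld*_code byte-stream helpers; the exact prototype in translate.c may differ.

    /* Sketch: fetch an 8/16/32/64-bit unsigned immediate ("O" operand form). */
    static uint64_t insn_get_addr(CPUX86State *env, DisasContext *s, MemOp ot)
    {
        switch (ot) {
        case MO_8:
            return x86_ldub_code(env, s);
        case MO_16:
            return x86_lduw_code(env, s);
        case MO_32:
            return x86_ldl_code(env, s);
        case MO_64:
            return x86_ldq_code(env, s);
        default:
            g_assert_not_reached();
        }
    }
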
Paolo Bonzini 5c2f60bd1b target/i386: REPZ and REPNZ are mutually exclusive
The later prefix wins if both are present; make that visible in s->prefix too.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
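
The behaviour can be modelled as "the newest REP prefix replaces the other one". A standalone sketch with illustrative prefix-bit values, not the ones from translate.c:

    #include <stdint.h>

    #define PREFIX_REPZ  0x01   /* illustrative values; translate.c defines its own */
    #define PREFIX_REPNZ 0x02

    /* Sketch: applying a 0xf2/0xf3 prefix byte clears the opposite REP bit. */
    static uint32_t apply_rep_prefix(uint32_t prefixes, uint8_t byte)
    {
        if (byte == 0xf3) {                       /* REPZ / REPE */
            return (prefixes & ~PREFIX_REPNZ) | PREFIX_REPZ;
        }
        if (byte == 0xf2) {                       /* REPNZ / REPNE */
            return (prefixes & ~PREFIX_REPZ) | PREFIX_REPNZ;
        }
        return prefixes;
    }
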
Paolo Bonzini ca4b1b43bc target/i386: fix INSERTQ implementation
INSERTQ is defined to not modify any bits in the lower 64 bits of the
destination, other than the ones being replaced with bits from the
source operand.  QEMU instead is using unshifted bits from the source
for those bits.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
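
A self-contained model of the intended INSERTQ behaviour on the low 64 bits; it also shows the 6-bit masking of the index/length fields that the adjacent SSE4a commit addresses.

    #include <stdint.h>

    /* Model of INSERTQ on the low 64 bits of the destination: insert the low
     * `length` bits of src at bit position `index`, leaving every other
     * destination bit unchanged.  A length field of 0 means all 64 bits. */
    static uint64_t insertq_low64(uint64_t dst, uint64_t src,
                                  unsigned index, unsigned length)
    {
        index  &= 63;            /* the operand fields are only 6 bits wide */
        length &= 63;

        uint64_t field = (length == 0) ? UINT64_MAX : (1ULL << length) - 1;
        uint64_t mask = field << index;

        return (dst & ~mask) | ((src & field) << index);
    }
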
Paolo Bonzini 034668c329 target/i386: correctly mask SSE4a bit indices in register operands
SSE4a instructions EXTRQ and INSERTQ have two bit index operands, that can be
immediates or taken from an XMM register.  In both cases, the fields are
6-bit wide and the top two bits in the byte are ignored.  translate.c is
doing that correctly for the immediate case, but not for the XMM case, so
fix it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19 15:16:00 +02:00
Paolo Bonzini 958e1dd130 target/i386: Raise #GP on unaligned m128 accesses when required.
Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:
 - Instructions explicitly requiring memory alignment (Exceptions Type 1
   in the "AVX and SSE Instruction Exception Specification" section of
   the SDM)
 - Legacy SSE instructions that load/store 128-bit values (Exceptions
   Types 2 and 4).

This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-18 09:17:40 +02:00
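
The heart of the change is asking TCG to enforce 16-byte alignment on the memory operation, so the new hooks can convert the resulting fault into #GP. A hedged sketch of a 128-bit load done as two 64-bit halves; the function name is invented.

    /* Sketch: load an SSE 128-bit operand as two 64-bit halves.  Only the
     * first access carries MO_ALIGN_16; a misaligned address triggers the
     * unaligned-access hook, which raises #GP. */
    static void gen_ld_m128_aligned(TCGv_i64 lo, TCGv_i64 hi,
                                    TCGv addr, int mem_index)
    {
        tcg_gen_qemu_ld_i64(lo, addr, mem_index, MO_LEUQ | MO_ALIGN_16);
        tcg_gen_addi_tl(addr, addr, 8);               /* advances addr by 8 */
        tcg_gen_qemu_ld_i64(hi, addr, mem_index, MO_LEUQ);
    }
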
Stefan Hajnoczi 123a32f9b3 Convert m68k to semihosting/syscalls.h.

Merge tag 'pull-semi-20220914' of https://gitlab.com/rth7680/qemu into staging

Convert m68k to semihosting/syscalls.h.
Convert nios2 to semihosting/syscalls.h.
Allow optional use of semihosting from userspace.

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmMh1W8dHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8ptggAimuNN6IiD19Huu5F
# PMjzDqFPvWFOf82O16WTBM1xN0lwVH8+02PYRL3AhOIw9ZTgxezOo9/KXZpr8a8Z
# gocr4Ge/J7zHzHahYuqcyOqqkur2dM4lFiK9rfDD6vdNBMbi0kQZVuaNlQK6rV6Z
# 2LHEwKKh64MXJVfwGzK7OLMv4pu0wpWcuCTH2/6U4E1325SOKmEos1VzIePxY1bw
# +AMNnairGEdBX1b3JlzZfrLSaOapJcgl0HZdrg6Mflm6ttTuuykGGtjkWBfcu3Nw
# utNI1zmUYfD/iJbnbsCNpZSLv6LVOQ2l5S6dOWV+JJ1HukVTZu3DoyfTr8t95kwK
# UuUoqA==
# =W7Yh
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Sep 2022 09:21:51 EDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-semi-20220914' of https://gitlab.com/rth7680/qemu:
  target/riscv: Honour -semihosting-config userspace=on and enable=on
  target/xtensa: Honour -semihosting-config userspace=on
  target/nios2: Honour -semihosting-config userspace=on
  target/mips: Honour -semihosting-config userspace=on
  target/m68k: Honour -semihosting-config userspace=on
  target/arm: Honour -semihosting-config userspace=on
  semihosting: Allow optional use of semihosting from userspace
  target/m68k: Convert semihosting errno to gdb remote errno
  target/m68k: Use semihosting/syscalls.h
  target/nios2: Convert semihosting errno to gdb remote errno
  target/nios2: Use semihosting/syscalls.h

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-17 10:30:34 -04:00
Peter Maydell e31e0f5661 target/arm: Report FEAT_PMUv3p5 for TCG '-cpu max'
Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5
compliant PMU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
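
Advertising the feature is essentially a one-field ID-register bump in the '-cpu max' setup. A hedged sketch of that update (0b0110 is the PMUVer encoding for PMUv3 for Armv8.5); the wrapper function is invented.

    /* Sketch: report a FEAT_PMUv3p5 PMU in ID_AA64DFR0 for '-cpu max'. */
    static void advertise_pmuv3p5(ARMCPU *cpu)
    {
        uint64_t t = cpu->isar.id_aa64dfr0;
        t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 6);   /* 0b0110: PMUv3 for v8.5 */
        cpu->isar.id_aa64dfr0 = t;
    }
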
Peter Maydell 47b385dae8 target/arm: Support 64-bit event counters for FEAT_PMUv3p5
With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32
bit.  (Previously, only the cycle counter could be 64 bit, and other
event counters were always 32 bits).  For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.

Implement the 64-bit event counter handling.  We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.

(Note that the changes to pmenvcntr_op_start() and
pmenvcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
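
Conceptually, the 64-bit counter support boils down to "which bit marks the overflow edge". A self-contained sketch of that decision, assuming the increment between samples is less than half the counter range:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: did an event counter overflow between two sampled values?
     * With FEAT_PMUv3p5 the watched bit is 63 when the relevant long-counter
     * control (PMCR.LP or MDCR_EL2.HLP, selected by MDCR_EL2.HPMN) asks for
     * 64-bit behaviour, and 31 otherwise. */
    static bool evcntr_overflowed(uint64_t before, uint64_t after, bool long_counter)
    {
        unsigned top = long_counter ? 63 : 31;
        return ((before >> top) & 1) && !((after >> top) & 1);
    }
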
Peter Maydell 0b42f4fab9 target/arm: Implement FEAT_PMUv3p5 cycle counter disable bits
FEAT_PMUv3p5 introduces new bits which disable the cycle
counter from counting:
 * MDCR_EL2.HCCD disables the counter when in EL2
 * MDCR_EL3.SCCD disables the counter when Secure

Add the code to support these bits.

(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell a793bcd027 target/arm: Rename pmu_8_n feature test functions
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}.  This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4.  Rename these functions to _pmuv3p1 and _pmuv3p4.

This commit was created with:
  sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell f1dd2506ee target/arm: Detect overflow when calculating next PMU interrupt
In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next
point at which we will get an overflow and need to fire the PMU
interrupt or set the overflow flag.  We do this by calculating the
number of nanoseconds to the overflow event and then adding it to
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL).  However, we don't check
whether that signed addition overflows, which can happen if the next
PMU interrupt would happen massively far in the future (250 years or
more).

Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is
"never", the sensible behaviour in this situation is simply to not
try to set the timer if it would be beyond that point.  Detect the
overflow, and skip setting the timer in that case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
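
A hedged sketch of the guard described above, using QEMU's sadd64_overflow() and timer helpers; the wrapper function and variable names are illustrative.

    /* Sketch: arm the PMU overflow timer only if "now + delta_ns" does not
     * itself overflow the signed 64-bit nanosecond clock. */
    static void arm_overflow_timer(QEMUTimer *timer, int64_t delta_ns)
    {
        int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
        int64_t when;

        if (!sadd64_overflow(now, delta_ns, &when)) {
            timer_mod_ns(timer, when);
        }
        /* else: the overflow is unreachably far away; leave the timer unset. */
    }
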
Peter Maydell 872d20343d target/arm: Honour MDCR_EL2.HPMD in Secure EL2
The logic in pmu_counter_enabled() for handling the 'prohibit event
counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way
that assumes that EL2 is never Secure.  This used to be true, but the
architecture now permits Secure EL2, and QEMU can emulate this.

Refactor the prohibit logic so that we effectively OR together
the various prohibit bits when they apply, rather than trying to
construct an if-else ladder where any particular state of the CPU
ends up in exactly one branch of the ladder.

This fixes the Secure EL2 case and also is a better structure for
adding the PMUv8.5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell b57aa7bdc3 target/arm: Ignore PMCR.D when PMCR.LC is set
The architecture requires that if PMCR.LC is set (for a 64-bit cycle
counter) then PMCR.D (which enables the clock divider so the counter
ticks every 64 cycles rather than every cycle) should be ignored.  We
were always honouring PMCR.D; fix the bug so we correctly ignore it
in this situation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell 01765386a8 target/arm: Don't mishandle count when enabling or disabling PMU counters
The PMU cycle and event counter infrastructure design requires that
operations on the PMU register fields are wrapped in pmu_op_start()
and pmu_op_finish() calls (or their more specific pmmcntr and
pmevcntr equivalents).  This includes any changes to registers which
affect whether the counter should be enabled or disabled, but we
forgot to do this.

The effect of this bug is that in sequences like:
 * disable the cycle counter (PMCCNTR) using the PMCNTEN register
 * write a value such as 0xfffff000 to the PMCCNTR
 * restart the counter by writing to PMCNTEN
the value written to the cycle counter is corrupted, and it starts
counting from the wrong place. (Essentially, we fail to record that
the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered
to have started counting is the point when PMCNTEN is written to enable
the counter.)

Add the necessary bracketing calls, so that updates to the various
registers which affect whether the PMU is counting are handled
correctly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
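
The fix follows the bracketing convention the commit message describes. A hedged sketch of what a counter-enable write ends up looking like; the write function and register field name are illustrative.

    /* Sketch: writes that can start or stop counters are bracketed so the
     * "counting since" timestamps are saved and re-established correctly. */
    static void pmcntenset_write_sketch(CPUARMState *env, uint64_t value)
    {
        pmu_op_start(env);               /* bank up the counts accumulated so far */
        env->cp15.c9_pmcnten |= value;   /* illustrative field name */
        pmu_op_finish(env);              /* restart counting from "now" */
    }
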
Peter Maydell c117c0649c target/arm: Correct value returned by pmu_counter_mask()
pmu_counter_mask() accidentally returns a value with bits [63:32]
set, because the expression it returns is evaluated as a signed value
that gets sign-extended to 64 bits.  Force the whole expression to be
evaluated with 64-bit arithmetic with ULL suffixes.

The main effect of this bug was that a guest could write to the bits
in the high half of registers like PMCNTENSET_EL0 that are supposed
to be RES0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
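
The underlying C pitfall deserves a standalone illustration: without the ULL suffix the shift is done as a 32-bit signed int, and the conversion to a 64-bit value sign-extends it.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Passing the 32-bit value through int (what happens implicitly
         * without the ULL suffix) makes the conversion to uint64_t
         * sign-extend, setting all of bits 63..31. */
        uint64_t bad  = (uint64_t)(int)(1u << 31);
        uint64_t good = 1ULL << 31;

        printf("bad  = %016" PRIx64 "\n", bad);   /* ffffffff80000000 */
        printf("good = %016" PRIx64 "\n", good);  /* 0000000080000000 */
        return 0;
    }
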
Peter Maydell 76e25d41d4 target/arm: Don't corrupt high half of PMOVSR when cycle counter overflows
When the cycle counter overflows, we are intended to set bit 31 in PMOVSR
to indicate this. However a missing ULL suffix means that we end up
setting all of bits 63-31. Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell bb7d902154 target/arm: Add missing space in comment
Fix a missing space before a comment terminator.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell 3fe72e213e target/arm: Advertise FEAT_ETS for '-cpu max'
The architectural feature FEAT_ETS (Enhanced Translation
Synchronization) is a set of tightened guarantees about memory
ordering involving translation table walks:

 * if memory access RW1 is ordered-before memory access RW2 then it
   is also ordered-before any translation table walk generated by RW2
   that generates a translation fault, address size fault or access
   fault

 * TLB maintenance on non-exec-permission translations is guaranteed
   complete after a DSB (ie it does not need the context
   synchronization event that you have to have if you don’t have
   FEAT_ETS)

For QEMU’s implementation we don’t reorder translation table walk
accesses, and we guarantee to finish the TLB maintenance as soon as
the TLB op is done (the tlb_flush functions will complete at the end
of the TLB, and TLB ops always end the TB because they’re sysreg
writes).

So we’re already compliant and all we need to do is say so in the ID
registers for the 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell d22c564958 target/arm: Implement ID_DFR1
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it. We don't have any CPUs with features that they need to advertise
here yet, but plumbing in the ID register gives it the right name
when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell 32957aad8c target/arm: Implement ID_MMFR5
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell 62b6f5e2f0 target/arm: Sort KVM reads of AArch32 ID registers into encoding order
The code that reads the AArch32 ID registers from KVM in
kvm_arm_get_host_cpu_features() does so almost but not quite in
encoding order.  Move the read of ID_PFR2 down so it's really in
encoding order.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell dde4d028dc target/arm: Make cpregs 0, c0, c{3-15}, {0-7} correctly RAZ in v8
In the AArch32 ID register scheme, coprocessor registers with
encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by
what in v6 and v7 was called the "CPUID scheme", and are supposed to
RAZ if they're not allocated to a specific ID register.  For our
pre-v8 CPUs we get this right, because the regdefs in
id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements.  However
for v8 we failed to put in the necessary patterns to cover this, so
we end up UNDEFing on everything we didn't have an ID register for.
This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7}
are now being used for new ID registers, and guests might thus start
trying to read them.  (We already have one of these: ID_PFR2.)

For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that
is, the space is completely allocated with no reserved spaces).  Add
entries to v8_idregs[] covering 0, c0, c3, {0-7}:
 * c3, {0-2} is the reserved AArch32 space corresponding to the
   AArch64 MVFR[012]_EL1
 * c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64
   (in fact some of these are given defined meanings in Armv8.6,
   but we don't implement them yet)
 * c3, 4 is ID_PFR2 (already defined)

We then programmatically add RAZ patterns for AArch32 for
0, c0, c{4..15}, {0-7}:
 * c4-c7 are unused, and not shared with AArch64 (these
   are the encodings corresponding to where the AArch64
   specific ID registers live in the system register space)
 * c8-c15 weren't required to RAZ in v6/v7, but v8 extends
   the AArch32 reserved-should-RAZ space to cover these;
   the equivalent area of the AArch64 sysreg space is not
   defined as must-RAZ

Note that the architecture allows some registers in this space
to return an UNKNOWN value; we always return 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:39 +01:00
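
For reference, a reserved slot in this space is covered by a constant-zero regdef. A hedged sketch of one AArch32-only entry (the series also adds these programmatically rather than one by one):

    /* Sketch: a single RAZ entry for cp15, 0, c0, c3, 7. */
    static const ARMCPRegInfo raz_idreg_sketch = {
        .name = "RES0_C0_C3_7", .state = ARM_CP_STATE_AA32,
        .cp = 15, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 7,
        .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0,
    };
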
Hao Wu 3b16766b5a target/arm: Add cortex-a35
Add cortex A35 core and enable it for virt board.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joe Komlodi <komlodi@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:39 +01:00
Peter Maydell 7d7fb11615 target/riscv: Honour -semihosting-config userspace=on and enable=on
The riscv target incorrectly enabled semihosting always, whether the
user asked for it or not.  Call semihosting_enabled() passing the
correct value to the is_userspace argument, which fixes this and also
handles the userspace=on argument.  Because we do this at translate
time, we no longer need to check the privilege level in
riscv_cpu_do_interrupt().

Note that this is a behaviour change: we used to default to
semihosting being enabled, and now the user must pass
"-semihosting-config enable=on" if they want it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220822141230.3658237-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-13 17:18:21 +01:00