Mark up the sysreg definitions for the registers trapped
by HDFGRTR/HDFGWTR bits 12..63.
Bits 12..22 and bit 58 are for PMU registers.
The remaining bits in HDFGRTR/HDFGWTR are for traps on
registers that are part of features we don't implement:
Bits 23..32 and 63 : FEAT_SPE
Bits 33..48 : FEAT_ETE
Bits 50..56 : FEAT_TRBE
Bits 59..61 : FEAT_BRBE
Bit 62 : FEAT_SPEv1p2.
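For illustration, the markup is just one extra field in each affected
ARMCPRegInfo; the constant name below follows the FGT_<bitname>
convention and is an assumed example, not necessarily one added by
this patch:

    { .name = "PMCR_EL0", .state = ARM_CP_STATE_BOTH,
      /* ... existing encoding, access and fieldoffset settings ... */
      .fgt = FGT_PMCR_EL0,
    },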
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-16-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-16-peter.maydell@linaro.org
Mark up the sysreg definitions for the registers trapped
by HDFGRTR/HDFGWTR bits 0..11. These cover various debug
related registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-15-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-15-peter.maydell@linaro.org
Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 36..63.
Of these, some correspond to RAS registers which we implement as
always-UNDEF: these don't need any extra handling for FGT because the
UNDEF-to-EL1 always takes priority over any theoretical
FGT-trap-to-EL2.
Bit 50 (NACCDATA_EL1) is for the ACCDATA_EL1 register which is part
of the FEAT_LS64_ACCDATA feature which we don't yet implement.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-14-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-14-peter.maydell@linaro.org
Implement the machinery for fine-grained traps on normal sysregs.
Any sysreg with a fine-grained trap will set the new field to
indicate which FGT register bit it should trap on.
FGT traps only happen when an AArch64 EL2 enables them for
an AArch64 EL1. They therefore are only relevant for AArch32
cpregs when the cpreg can be accessed from EL0. The logic
in access_check_cp_reg() will check this, so it is safe to
add a .fgt marking to an ARM_CP_STATE_BOTH ARMCPRegInfo.
The DO_BIT and DO_REV_BIT macros define enum constants FGT_##bitname
which can be used to specify the FGT bit, eg
.fgt = FGT_AFSR0_EL1
(We assume that there is no bit name duplication across the FGT
registers, for brevity's sake.)
Subsequent commits will add the .fgt fields to the relevant register
definitions and define the FGT_nnn values for them.
Note that some of the FGT traps are for instructions that we don't
handle via the cpregs mechanisms (mostly these are instruction traps).
Those we will have to handle separately.
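As a rough sketch of the idea (the FGT_HFGRTR selector and the
R_HFGRTR_EL2_AFSR0_EL1_SHIFT field name assumed here follow QEMU's
FIELD() naming for the trap registers; the real macros also cover
bits with an inverted sense via DO_REV_BIT):

    /* Sketch: pack "which FGT register" and "which bit" into one enum
     * value, so a cpreg definition can say e.g. .fgt = FGT_AFSR0_EL1.
     */
    #define DO_BIT(reg, bitname) \
        FGT_##bitname = FGT_##reg | R_##reg##_EL2_##bitname##_SHIFT

    typedef enum FGTBit {
        DO_BIT(HFGRTR, AFSR0_EL1),
        /* ... one entry per trap bit ... */
    } FGTBit;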
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-10-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-10-peter.maydell@linaro.org
Define the system registers which are provided by the
FEAT_FGT fine-grained trap architectural feature:
HFGRTR_EL2, HFGWTR_EL2, HDFGRTR_EL2, HDFGWTR_EL2, HFGITR_EL2
All these registers are a set of bit fields, where each bit is set
for a trap and clear to not trap on a particular system register
access. The R and W register pairs are for system registers,
allowing trapping to be done separately for reads and writes; the I
register is for system instructions where trapping is on instruction
execution.
The data storage in the CPU state struct is arranged as a set of
arrays rather than separate fields so that when we're looking up the
bits for a system register access we can just index into the array
rather than having to use a switch to select a named struct member.
The later FEAT_FGT2 will add extra elements to these arrays.
The field definitions for the new registers are in cpregs.h because
in practice the code that needs them is code that also needs
the cpregs information; cpu.h is included in a lot more files.
We're also going to add some FGT-specific definitions to cpregs.h
in the next commit.
We do not implement HAFGRTR_EL2, because we don't implement
FEAT_AMUv1.
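Concretely the storage is along these lines (a sketch; the names are
illustrative, the point is an array indexed by a per-register
constant so the trap check can do env->cp15.fgt_read[idx] & bit):

    enum FGTRegIdx {
        FGTREG_HFGRTR,     /* reads */
        FGTREG_HDFGRTR,
        FGTREG_NUM_READ,
    };

    struct {
        uint64_t fgt_read[FGTREG_NUM_READ];   /* HFGRTR_EL2, HDFGRTR_EL2 */
        uint64_t fgt_write[FGTREG_NUM_READ];  /* HFGWTR_EL2, HDFGWTR_EL2 */
        uint64_t fgt_exec[1];                 /* HFGITR_EL2 */
    } fgt_sketch;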
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-9-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-9-peter.maydell@linaro.org
The HSTR_EL2 register is not supposed to have an effect unless EL2 is
enabled in the current security state. We weren't checking for this,
which meant that if the guest set up the HSTR_EL2 register we would
incorrectly trap even for accesses from Secure EL0 and EL1.
Add the missing checks. (Other places where we look at HSTR_EL2
for the not-in-v8A bits TTEE and TJDBX already check that
we are in NS EL0 or EL1, so there we already know EL2 is enabled.)
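The shape of the added condition is simply (a sketch; hstr_mask is an
illustrative name for the bit the surrounding code already computes):

    /* Only honour HSTR_EL2 if EL2 is enabled in the current security
     * state; otherwise Secure EL0/EL1 accesses must not trap here.
     */
    if (arm_current_el(env) <= 1 && arm_is_el2_enabled(env) &&
        (env->cp15.hstr_el2 & hstr_mask)) {
        /* ... trap to EL2 ... */
    }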
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-8-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-8-peter.maydell@linaro.org
The semantics of HSTR_EL2 require that it traps cpreg accesses
to EL2 for:
* EL1 accesses
* EL0 accesses, if the access is not UNDEFINED when the
trap bit is 0
(You can see this in the I_ZFGJP priority ordering, where HSTR_EL2
traps from EL1 to EL2 are priority 12, UNDEFs are priority 13, and
HSTR_EL2 traps from EL0 are priority 15.)
However, we don't get this right for EL1 accesses which UNDEF because
the register doesn't exist at all or because its ri->access bits
non-configurably forbid the access. At EL1, check for the HSTR_EL2
trap early, before either of these UNDEF reasons.
We have to retain the HSTR_EL2 check in access_check_cp_reg(),
because at EL0 any kind of UNDEF-to-EL1 (including "no such
register", "bad ri->access" and "ri->accessfn returns 'trap to EL1'")
takes precedence over the trap to EL2. But we only need to do that
check for EL0 now.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230130182459.3309057-7-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-7-peter.maydell@linaro.org
The HSTR_EL2 register has a collection of trap bits which allow
trapping to EL2 for AArch32 EL0 or EL1 accesses to coprocessor
registers. The specification of these bits is that when the bit is
set we should trap
* EL1 accesses
* EL0 accesses, if the access is not UNDEFINED when the
trap bit is 0
In other words, all UNDEF traps from EL0 to EL1 take precedence over
the HSTR_EL2 trap to EL2. (Since this is all AArch32, the only kind
of trap-to-EL1 is the UNDEF.)
Our implementation doesn't quite get this right -- we check for traps
in the order:
* no such register
* ARMCPRegInfo::access bits
* HSTR_EL2 trap bits
* ARMCPRegInfo::accessfn
So UNDEFs that happen because of the access bits or because the
register doesn't exist at all correctly take priority over the
HSTR_EL2 trap, but where a register can UNDEF at EL0 because of the
accessfn we are incorrectly always taking the HSTR_EL2 trap. There
aren't many of these, but one example is the PMCR; if you look at the
access pseudocode for this register you can see that UNDEFs taken
because of the value of PMUSERENR.EN are checked before the HSTR_EL2
bit.
Rearrange helper_access_check_cp_reg() so that we always call the
accessfn, and use its return value if it indicates that the access
traps to EL1 rather than continuing to do the HSTR_EL2 check.
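Schematically, the helper now does something like this (a sketch;
the helper names other than the accessfn call are illustrative):

    /* Call the accessfn first; an UNDEF-to-EL1 result beats HSTR_EL2. */
    CPAccessResult res = ri->accessfn ? ri->accessfn(env, ri, isread)
                                      : CP_ACCESS_OK;
    if (res != CP_ACCESS_OK && access_result_is_undef_to_el1(res)) {
        raise_undef(env, syndrome);        /* e.g. the PMUSERENR.EN case */
    } else if (hstr_trap) {
        raise_trap_to_el2(env, syndrome);  /* HSTR_EL2 trap */
    } else if (res != CP_ACCESS_OK) {
        raise_from_access_result(env, res, syndrome);
    }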
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-6-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-6-peter.maydell@linaro.org
Rearrange the code in do_coproc_insn() so that we calculate the
syndrome value for a potential trap early; we're about to add a
second check that wants this value earlier than where it is currently
determined.
(Specifically, a trap to EL2 because of HSTR_EL2 should take
priority over an UNDEF to EL1, even when the UNDEF is because
the register does not exist at all or because its ri->access
bits non-configurably fail the access. So the check we put in
for HSTR_EL2 trapping at EL1 (which needs the syndrome) is
going to have to be done before the check "is the ARMCPRegInfo
pointer NULL".)
This commit is just code motion; the change to HSTR_EL2
handling that will use the 'syndrome' variable is in a
subsequent commit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-5-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-5-peter.maydell@linaro.org
We added the CPAccessResult values CP_ACCESS_TRAP_UNCATEGORIZED_EL2
and CP_ACCESS_TRAP_UNCATEGORIZED_EL3 purely in order to use them in
the ats_access() function, but doing so was incorrect (a bug fixed in
a previous commit). There aren't any cases where we want an access
function to be able to request a trap to EL2 or EL3 with a zero
syndrome value, so remove these enum values.
As well as cleaning up dead code, the motivation here is that
we'd like to implement fine-grained-trap handling in
helper_access_check_cp_reg(). Although the fine-grained traps
to EL2 are always lower priority than trap-to-same-EL and
higher priority than trap-to-EL3, they are in the middle of
various other kinds of trap-to-EL2. Knowing that for us a trap-to-EL2
always has the same syndrome (i.e. that an access function will
return CP_ACCESS_TRAP_EL2 and there is no other
kind of trap-to-EL2 enum value) means we don't have to try
to choose which of the two syndrome values to report if the
access would trap to EL2 both for the fine-grained-trap and
because the access function requires it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-4-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-4-peter.maydell@linaro.org
The AArch32 ATS12NSO* address translation operations are supposed to
trap to either EL2 or EL3 if they're executed at Secure EL1 (which
can only happen if EL3 is AArch64). We implement this, but we got
the syndrome value wrong: like other traps to EL2 or EL3 on an
AArch32 cpreg access, they should report the 0x3 syndrome, not the
0x0 'uncategorized' syndrome. This is clear in the access pseudocode
for these instructions.
Fix the syndrome value for these operations by correcting the
returned value from the ats_access() function.
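In outline the corrected access function returns the normal trap
values, so the usual 0x3 cp-access syndrome gets reported (a sketch
of the relevant branch only; the routing condition shown is the
existing Secure-EL2-enabled check):

    if (arm_current_el(env) == 1 && arm_is_secure_below_el3(env)) {
        /* ATS12NSO* at Secure EL1: trap to EL2 if it exists and is
         * enabled for Secure state, otherwise to EL3.
         */
        if (env->cp15.scr_el3 & SCR_EEL2) {
            return CP_ACCESS_TRAP_EL2;  /* was ..._UNCATEGORIZED_EL2 */
        }
        return CP_ACCESS_TRAP_EL3;      /* was ..._UNCATEGORIZED_EL3 */
    }
    return CP_ACCESS_OK;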
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-3-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-3-peter.maydell@linaro.org
The encodings 0,0,C7,C9,0 and 0,0,C7,C9,1 are AT S1E1RP and AT
S1E1WP, but our ARMCPRegInfo definitions for them incorrectly name
them AT S1E1R and AT S1E1W (which are entirely different
instructions). Fix the names.
(This has no guest-visible effect as the names are for debug purposes
only.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-2-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-2-peter.maydell@linaro.org
We currently only support GICv2 emulation. To also support GICv3, we will
need to pass a few system registers into their respective handler functions.
This patch adds support for HVF to call into the TCG callbacks for GICv3
system register handlers. This is safe because the GICv3 TCG code is generic
as long as we limit ourselves to EL0 and EL1 - which are the only modes
supported by HVF.
To make sure nobody trips over that, we also annotate callbacks that don't
work in HVF mode, such as EL state change hooks.
With GICv3 support in place, we can run with more than 8 vCPUs.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-id: 20230128224459.70676-1-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The HAXM project has been retired (see https://github.com/intel/haxm#status),
so we should mark the code in QEMU as deprecated (and finally remove it
unless somebody else picks the project up again - which is quite unlikely
since there are now whpx and hvf on these operating systems, too).
Message-Id: <20230126121034.1035138-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Print both the raw field and the resolved pc-relative
address, as we do for branches.
Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While jirl shares the same instruction format as bne etc,
it is not assembled the same. In particular, rd is printed
first not second and the immediate is not pc-relative.
Decode into the arg_rr_i structure, which prints correctly.
This changes the "offs" member to "imm", with a corresponding update
to translate.
Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reuse the decodetree based disassembler from
target/loongarch/ for tcg/loongarch64/.
The generation of decode-insns.c.inc into ./libcommon.fa.p/ could
eventually result in a conflict if any other host requires the same
trick, but this is good enough for now.
Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not encode the pointer as a constant in the opcode stream.
This pointer is specific to the cpu that first generated the
translation, which runs into problems with both hot-pluggable
cpus and user-only threads, as cpus are removed. It's also a
potential correctness issue in the theoretical case of a
slightly-heterogenous system, because if CPU 0 generates a
TB and then CPU 1 executes it, CPU 1 will end up using CPU 0's
hash table, which might have a wrong set of registers in it.
(All our current systems are either completely homogenous,
M-profile, or have CPUs sufficiently different that they
wouldn't be sharing TBs anyway because the differences would
show up in the TB flags, so the correctness issue is only
theoretical, not practical.)
Perform the lookup in either helper_access_check_cp_reg,
or a new helper_lookup_cp_reg.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230106194451.1213153-3-richard.henderson@linaro.org
[PMM: added note in commit message about correctness issue]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the ri == NULL case to the top of the function and return.
This allows the else to be removed and the code unindented.
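The shape of the change is the usual early-return pattern (a sketch;
raise_undef() stands in for the existing error path):

    if (ri == NULL) {
        /* Unknown register: report it and raise the UNDEF, then bail out. */
        raise_undef(env, syndrome);
        return;
    }
    /* The main body follows here, no longer nested inside an else. */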
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230106194451.1213153-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU doesn't implement the Debug Communication Channel, nor the rest
of the external debug interface. However, Microsoft Hyper-V tries to
access some of those registers during an EL2 context switch.
Since there is no architectural way to not advertise support for external
debug, provide RAZ/WI stubs for the OSDTRRX_EL1, OSDTRTX_EL1 and OSECCR_EL1
registers, in the same way the rest of the DCC is currently handled. Do
account for access traps, though, with access_tda.
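A minimal sketch of one such stub (the encoding fields are elided;
ARM_CP_CONST with a zero resetvalue gives the RAZ/WI behaviour):

    { .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH,
      /* ... architected opc0/opc1/crn/crm/opc2 encoding ... */
      .access = PL1_RW, .accessfn = access_tda,
      .type = ARM_CP_CONST, .resetvalue = 0 },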
Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230120155929.32384-3-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The architecture does not define any functionality for the CLAIM tag bits.
So we will just keep the raw bits, as per spec.
Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230120155929.32384-2-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In v7m_exception_taken(), for v8M we set the EXC_RETURN.ES bit if
either the exception targets Secure or if the CPU doesn't implement
the Security Extension. This is incorrect: the v8M Arm ARM specifies
that the ES bit should be RES0 if the Security Extension is not
implemented, and the pseudocode agrees.
Remove the incorrect condition, so that we leave the ES bit 0
if the Security Extension isn't implemented.
This doesn't have any guest-visible effects for our current set of
emulated CPUs, because all our v8M CPUs implement the Security
Extension; but it's worth fixing in case we add a v8M CPU without
the extension in future.
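The change amounts to dropping the second half of the condition
(sketched below with the field names QEMU uses for EXC_RETURN; the
before/after comparison is illustrative):

    /* Before: ES was also set when the Security Extension is absent. */
    if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
        lr |= R_V7M_EXCRET_ES_MASK;
    }

    /* After: ES is only set when the exception targets Secure state. */
    if (targets_secure) {
        lr |= R_V7M_EXCRET_ES_MASK;
    }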
Reported-by: Igor Kotrasinski <i.kotrasinsk@samsung.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Unify the two helper_set_pstate_{sm,za} in this function.
Do not call helper_* functions from svcr_write.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230112102436.1913-8-philmd@linaro.org
Message-Id: <20230112004322.161330-1-richard.henderson@linaro.org>
[PMD: Split patch in multiple tiny steps]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
BASEPRI, FAULTMASK, and their _NS equivalents only exist on devices with
the Main Extension. However, the MRS instruction did not check this,
and the MSR instruction handled it inconsistently (warning for BASEPRI, but
silently ignoring writes to BASEPRI_NS). Unify this behavior and always
warn when reading or writing any of these registers if the extension is
not present.
Signed-off-by: David Reiss <dreiss@meta.com>
Message-id: 167330628518.10497.13100425787268927786-0@git.sr.ht
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a 64-bit register on AArch64, even if the high 44 bits
are RES0. Because this is defined as ARM_CP_STATE_BOTH, we are
asserting that the cpreg field is 64 bits.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1400
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230115171633.3171890-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-riscv-to-apply-20230120' of https://github.com/alistair23/qemu into staging
Second RISC-V PR for QEMU 8.0
* riscv_htif: Support console output via proxy syscall
* Cleanup firmware and device tree loading
* Fix elen check when using vector extensions
* add RISC-V OpenSBI boot test
* Ensure we always follow MISA parsing
* Fix up masking of vsip/vsie accesses
* Trap on writes to stimecmp from VS when hvictl.VTI=1
* Introduce helper_set_rounding_mode_chkfrm
* tag 'pull-riscv-to-apply-20230120' of https://github.com/alistair23/qemu: (37 commits)
hw/riscv/virt.c: move create_fw_cfg() back to virt_machine_init()
target/riscv: Remove helper_set_rod_rounding_mode
target/riscv: Introduce helper_set_rounding_mode_chkfrm
tcg/riscv: Use tcg_pcrel_diff in tcg_out_ldst
target/riscv: Trap on writes to stimecmp from VS when hvictl.VTI=1
target/riscv: Fix up masking of vsip/vsie accesses
hw/riscv: use ms->fdt in riscv_socket_fdt_write_distance_matrix()
hw/riscv: use MachineState::fdt in riscv_socket_fdt_write_id()
hw/riscv/virt.c: remove 'is_32_bit' param from create_fdt_socket_cpus()
hw/riscv/sifive_u.c: simplify create_fdt()
hw/riscv/virt.c: simplify create_fdt()
hw/riscv/spike.c: simplify create_fdt()
target/riscv: Use TARGET_FMT_lx for env->mhartid
target/riscv/cpu.c: do not skip misa logic in riscv_cpu_realize()
target/riscv/cpu: set cpu->cfg in register_cpu_props()
hw/riscv/boot.c: use MachineState in riscv_load_kernel()
hw/riscv/boot.c: use MachineState in riscv_load_initrd()
hw/riscv: write bootargs 'chosen' FDT after riscv_load_kernel()
hw/riscv: write initrd 'chosen' FDT inside riscv_load_initrd()
hw/riscv/spike.c: load initrd right after riscv_load_kernel()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have two inclusion loops:
block/block.h
-> block/block-global-state.h
-> block/block-common.h
-> block/blockjob.h
-> block/block.h
block/block.h
-> block/block-io.h
-> block/block-common.h
-> block/blockjob.h
-> block/block.h
I believe these go back to Emanuele's reorganization of the block API,
merged a few months ago in commit d7e2fe4aac.
Fortunately, breaking them is merely a matter of deleting unnecessary
includes from headers, and adding them back in places where they are
now missing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221221133551.3967339-2-armbru@redhat.com>
The only setting of RISCV_FRM_ROD comes from the vector unit,
and is now handled by helper_set_rounding_mode_chkfrm.
This leaves helper_set_rod_rounding_mode unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230115160657.3169274-3-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The new helper always validates the contents of FRM, even
if the new rounding mode is not DYN. This is required by
the vector unit.
Track whether we've validated FRM separately from whether
we've updated fp_status with a given rounding mode, so that
we can elide calls correctly.
This partially reverts d6c4d3f2a6, which attempted to do
the same thing, but with two calls to gen_set_rm(), which is
both inefficient and tickles an assertion in decode_save_opc.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1441
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230115160657.3169274-2-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Per the AIA specification, writes to stimecmp from VS level should
trap when hvictl.VTI is set since the write may cause vsip.STIP to
become unset.
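The added check in the stimecmp write path looks roughly like this
(a sketch; it assumes the existing hvictl field and virtualisation
helpers):

    /* VS-mode writes to stimecmp must trap when hvictl.VTI=1, since the
     * write could clear vsip.STIP behind the hypervisor's back.
     */
    if (riscv_cpu_virt_enabled(env) && (env->hvictl & HVICTL_VTI)) {
        return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
    }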
Fixes: 3ec0fe18a3 ("target/riscv: Add vstimecmp support")
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221215224541.1423431-2-abrestic@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The current logic attempts to shift the VS-level bits into their correct
position in mip while leaving the remaining bits intact. This is both
pointless and likely incorrect, since one would expect that any new, future
VS-level interrupts will get their own position in mip rather than sharing
with their (H)S-level equivalent. Fix this, and make the logic more
readable, by just masking off the VS-level bits and shifting them into
position.
This also fixes reads of vsip, which would only ever report vsip.VSSIP
since the non-writable bits got masked off as well.
Fixes: d028ac7512 ("target/riscv: Implement AIA CSRs for 64 local interrupts on RV32")
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221215224541.1423431-1-abrestic@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
env->mhartid is currently cast to long before being printed, which drops
the high 32 bits for rv64 on a 32-bit host. Use TARGET_FMT_lx instead.
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230109152655.340114-1-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All RISCV CPUs are setting cpu->cfg during their cpu_init() functions,
meaning that there's no reason to skip all the misa validation and setup
if misa_ext was set beforehand - especially since we're setting an
updated value in set_misa() in the end.
Put this code chunk into a new riscv_cpu_validate_set_extensions()
helper and always execute it regardless of what the board set in
env->misa_ext.
This will put more responsibility in how each board is going to init
their attributes and extensions if they're not using the defaults.
It'll also allow realize() to do its job looking only at the extensions
enabled per se, not corner cases that some CPUs might have, and we won't
have to change multiple code paths to fix or change how extensions work.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Message-Id: <20230113175230.473975-3-dbarboza@ventanamicro.com>
[ Changes by AF:
- Rebase
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There is an informal contract between the cpu_init() functions and
riscv_cpu_realize(): if cpu->env.misa_ext is zero, assume that the
default settings were loaded via register_cpu_props() and do validations
to set env.misa_ext. If it's not zero, skip this whole process and
assume that the board somehow did everything.
At this moment, all SiFive CPUs are setting a non-zero misa_ext during
their cpu_init() and skipping a good chunk of riscv_cpu_realize(). This
causes problems when the code being skipped in riscv_cpu_realize()
contains fixes or assumptions that affects all CPUs, meaning that SiFive
CPUs are missing out.
To allow this code to not be skipped anymore, all the cpu->cfg.ext_*
attributes need to be set at cpu_init() time. At this moment this
is being done in register_cpu_props(). The SiFive boards are setting
their own extensions during cpu_init() though, meaning that they don't
want all the defaults from register_cpu_props().
Let's move the contract between *_cpu_init() and riscv_cpu_realize() to
register_cpu_props(). Inside this function we'll check if
cpu->env.misa_ext was set and, if that's the case, set all relevant
cpu->cfg.ext_* attributes, and only that. Leave the 'misa_ext' = 0 case
as is today, i.e. loading all the defaults from riscv_cpu_extensions[].
register_cpu_props() can then be called by all the cpu_init() functions,
including the SiFive ones. This will make all CPUs behave more in line
with what riscv_cpu_realize() expects.
This will also make the cpu_init() functions even more alike, but at this
moment we would need some design changes in how we're initializing
extensions/attributes (e.g. some CPUs are setting cfg options after
register_cpu_props(), so we can't simply add the function to a common
post_init() hook) to make a common cpu_init() code across all CPUs.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230113175230.473975-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The elen check should verify that cpu->cfg.elen is in the range [8, 64].
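Sketch of the corrected realize-time check (the error message wording
is illustrative):

    if (cpu->cfg.elen > 64 || cpu->cfg.elen < 8) {
        error_setg(errp, "Vector extension ELEN must be in the range [8, 64]");
        return;
    }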
Signed-off-by: Dongxue Zhang <elta.era@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwe_liu@linux.alibaba.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <167236721596.15277.2653405273227256289-0@git.sr.ht>
[ Changes by AF:
- Tidy up commit message
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
At present for some unknown reason the HTIF registers (fromhost &
tohost) are defined in the RISC-V CPUArchState. They should really
be put in the HTIFState struct, as they are only meaningful to HTIF.
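After the move, HTIFState simply carries the two values itself
(a sketch; other members elided):

    typedef struct HTIFState {
        /* ... existing MemoryRegion / CharBackend / address members ... */
        uint64_t tohost;    /* guest -> host HTIF requests */
        uint64_t fromhost;  /* host -> guest HTIF replies */
    } HTIFState;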
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221229091828.1945072-6-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>