While the first 'off' variable assignment is unused, it helps
to better understand the code logic. Move the assignment to where
it would have been used, so it is easier to compare the MSA
registers based on the FPU ones versus the MSA-specific registers.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211023214803.522078-34-f4bug@amsat.org>
The result of the 'Vector Multiply and Subtract' opcode is
incorrect with Byte vectors. Probably due to a copy/paste error,
commit 5f148a0232 mistakenly used the $wt (target register)
instead of $wd (destination register) as the first operand. Fix that.
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Fixes: 5f148a0232 ("target/mips: msa: Split helpers for MSUBV.<B|H|W|D>")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211028210843.2120802-3-f4bug@amsat.org>
The result of the 'Vector Multiply and Add' opcode is incorrect
with Byte vectors. Probably due to a copy/paste error, commit
7a7a162add mistakenly used the $wt (target register) instead
of $wd (destination register) as the first operand. Fix that.
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Fixes: 7a7a162add ("target/mips: msa: Split helpers for MADDV.<B|H|W|D>")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211028210843.2120802-2-f4bug@amsat.org>
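To illustrate the two fixes above, here is a minimal standalone sketch of a
byte-lane multiply-accumulate, assuming simplified types and made-up function
names (the real QEMU MSA helpers operate on wider vector register structures):

    /*
     * Illustrative sketch only, not the actual QEMU helpers: the
     * accumulator of MADDV/MSUBV must be the destination register wd,
     * not the target register wt.
     */
    #include <stdint.h>

    static void maddv_b_sketch(int8_t wd[16], const int8_t ws[16],
                               const int8_t wt[16])
    {
        for (int i = 0; i < 16; i++) {
            /* Correct: accumulate into wd; the bug read wt here instead. */
            wd[i] = (int8_t)(wd[i] + ws[i] * wt[i]);
        }
    }

    static void msubv_b_sketch(int8_t wd[16], const int8_t ws[16],
                               const int8_t wt[16])
    {
        for (int i = 0; i < 16; i++) {
            wd[i] = (int8_t)(wd[i] - ws[i] * wt[i]);
        }
    }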
Because core-capability related features are model-specific and KVM
won't support them, remove core-capability from the CPU model to avoid
the warning message.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210827064818.4698-3-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/alistair23/tags/pull-riscv-to-apply-20211029-1' into staging
Fifth RISC-V PR for QEMU 6.2
- Use a shared PLIC config helper function
- Fixup the OpenTitan PLIC configuration
- Add support for the experimental J extension
- Update the fmin/fmax handling
- Fixup VS interrupt forwarding
# gpg: Signature made Fri 29 Oct 2021 12:03:47 AM PDT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
* remotes/alistair23/tags/pull-riscv-to-apply-20211029-1:
target/riscv: change the api for RVF/RVD fmin/fmax
softfloat: add APIs to handle alternative sNaN propagation for fmax/fmin
target/riscv: remove force HS exception
target/riscv: fix VS interrupts forwarding to HS
target/riscv: Allow experimental J-ext to be turned on
target/riscv: Implement address masking functions required for RISC-V Pointer Masking extension
target/riscv: Support pointer masking for RISC-V for i/c/f/d/a types of instructions
target/riscv: Print new PM CSRs in QEMU logs
target/riscv: Add J extension state description
target/riscv: Support CSRs required for RISC-V PM extension except for the h-mode
target/riscv: Add CSR defines for RISC-V PM extension
target/riscv: Add J-extension into RISC-V
hw/riscv: opentitan: Fixup the PLIC context addresses
hw/riscv: virt: Use the PLIC config helper function
hw/riscv: microchip_pfsoc: Use the PLIC config helper function
hw/riscv: sifive_u: Use the PLIC config helper function
hw/riscv: boot: Add a PLIC config string function
hw/riscv: virt: Don't use a macro for the PLIC configuration
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20211028' into staging
Improvements to qemu/int128
Fixes for 128/64 division.
Cleanup tcg/optimize.c
Optimize redundant sign extensions
# gpg: Signature made Thu 28 Oct 2021 09:06:00 PM PDT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
* remotes/rth/tags/pull-tcg-20211028: (60 commits)
softmmu: fix for "after access" watchpoints
softmmu: remove useless condition in watchpoint check
softmmu: fix watchpoint processing in icount mode
tcg/optimize: Propagate sign info for shifting
tcg/optimize: Propagate sign info for bit counting
tcg/optimize: Propagate sign info for setcond
tcg/optimize: Propagate sign info for logical operations
tcg/optimize: Optimize sign extensions
tcg/optimize: Use fold_xx_to_i for rem
tcg/optimize: Use fold_xi_to_x for div
tcg/optimize: Use fold_xi_to_x for mul
tcg/optimize: Use fold_xx_to_i for orc
tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
tcg: Extend call args using the correct opcodes
tcg/optimize: Sink commutative operand swapping into fold functions
tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
tcg/optimize: Split out fold_masks
tcg/optimize: Split out fold_ix_to_i
tcg/optimize: Split out fold_xi_to_x
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The sNaN propagation behavior has been changed since cd20cee7 in
https://github.com/riscv/riscv-isa-manual.
In Priv spec v1.10, RVF is v2.0. fmin.s and fmax.s are implemented with
IEEE 754-2008 minNum and maxNum operations.
In Priv spec v1.11, RVF is v2.2. fmin.s and fmax.s are amended to
implement IEEE 754-2019 minimumNumber and maximumNumber operations.
Therefore, to avoid introducing too many version variables, we tie the
RVF version to the Priv version instead of adding an extra *fext_ver*
variable. This is not completely accurate, but it is close enough.
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20211021160847.2748577-3-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
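As a hedged illustration of the behavioural change (not QEMU's softfloat code;
signaling-NaN detection and the -0.0/+0.0 ordering rules are abstracted away):

    #include <math.h>
    #include <stdbool.h>

    /* IEEE 754-2008 minNum (old RVF 2.0 behaviour): only a quiet NaN is
     * ignored; a signaling NaN input makes the result the canonical NaN. */
    static float fmin_2008(float a, float b, bool a_is_snan, bool b_is_snan)
    {
        if (a_is_snan || b_is_snan || (isnan(a) && isnan(b))) {
            return NAN;
        }
        if (isnan(a)) {
            return b;
        }
        if (isnan(b)) {
            return a;
        }
        return a < b ? a : b;
    }

    /* IEEE 754-2019 minimumNumber (RVF 2.2 behaviour): any single NaN
     * input, quiet or signaling, is ignored; NaN results only when both
     * inputs are NaN (the invalid flag is still raised for sNaNs). */
    static float fmin_2019(float a, float b)
    {
        if (isnan(a) && isnan(b)) {
            return NAN;
        }
        if (isnan(a)) {
            return b;
        }
        if (isnan(b)) {
            return a;
        }
        return a < b ? a : b;
    }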
There is no need to "force an hs exception" as the current privilege
level, the state of the global ie and of the delegation registers should
be enough to route the interrupt to the appropriate privilege level in
riscv_cpu_do_interrupt. This is true for both asynchronous and
synchronous exceptions, specifically guest page faults, which must be
hardwired to zero in hedeleg. As such, the hs_force_except mechanism can
be removed.
Signed-off-by: Jose Martins <josemartins90@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211026145126.11025-3-josemartins90@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
VS interrupts (2, 6, 10) were not correctly forwarded to hs-mode when
not delegated in hideleg (which was not being taken into account). This
was mainly because hs-level sie was not always considered enabled when
it should be. The spec states that "Interrupts for higher-privilege modes,
y>x, are always globally enabled regardless of the setting of the global
yIE bit for the higher-privilege mode." and also "For purposes of
interrupt global enables, HS-mode is considered more privileged than
VS-mode, and VS-mode is considered more privileged than VU-mode". Also,
vs-level interrupts were not being taken into account unless V=1, but
should be unless delegated.
Finally, there is no need for a special case to handle vs interrupts
as the current privilege level, the state of the global ie and of the
delegation registers should be enough to route all interrupts to the
appropriate privilege level in riscv_cpu_do_interrupt.
Signed-off-by: Jose Martins <josemartins90@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211026145126.11025-2-josemartins90@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
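A rough sketch of the two rules described above, with simplified privilege and
CSR field encodings (this is not the actual QEMU interrupt-pending code):

    #include <stdbool.h>

    enum { PRV_U = 0, PRV_S = 1, PRV_M = 3 };

    /* Interrupts targeting HS-mode are always globally enabled while we
     * run at a lower privilege (U, VU or VS); in HS-mode itself they
     * depend on sstatus.SIE; in M-mode they are not taken. */
    static bool hs_irqs_enabled(int priv, bool virt, bool hs_sie)
    {
        if (priv < PRV_S || (priv == PRV_S && virt)) {
            return true;
        }
        return priv == PRV_S && !virt && hs_sie;
    }

    /* VS-level interrupts (2, 6, 10) not delegated in hideleg must be
     * forwarded to HS-mode even when V=0. */
    static unsigned long vs_irqs_for_hs(unsigned long pending,
                                        unsigned long hideleg)
    {
        const unsigned long vs_bits = (1ul << 2) | (1ul << 6) | (1ul << 10);
        return pending & vs_bits & ~hideleg;
    }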
Change SET_USR_FIELD to write to hex_new_value[HEX_REG_USR] instead
of hex_gpr[HEX_REG_USR].
Then, we need code to mark the instructions that can implicitly set USR:
- Macros added to hex_common.py
- A_FPOP added in translate.c
Test case added in tests/tcg/hexagon/overflow.c
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
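For context, a heavily simplified sketch of the shadow-write pattern this
relies on (names and structure are illustrative, not the Hexagon frontend
code): results produced inside a packet go to a new-value array and are only
committed to the architectural registers at the end of the packet, which is
why SET_USR_FIELD must target hex_new_value rather than hex_gpr.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_REGS 64

    typedef struct {
        uint32_t gpr[NUM_REGS];        /* architectural state           */
        uint32_t new_value[NUM_REGS];  /* results of the current packet */
        bool     written[NUM_REGS];
    } sketch_cpu_state;

    /* An instruction that implicitly updates a status register writes
     * the shadow copy, not the architectural register. */
    static void set_reg(sketch_cpu_state *s, int r, uint32_t val)
    {
        s->new_value[r] = val;
        s->written[r] = true;
    }

    /* At packet commit, all shadow writes become architectural. */
    static void commit_packet(sketch_cpu_state *s)
    {
        for (int r = 0; r < NUM_REGS; r++) {
            if (s->written[r]) {
                s->gpr[r] = s->new_value[r];
                s->written[r] = false;
            }
        }
    }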
Change additional tcg_const_tl to tcg_constant_tl
Note that gen_pred_cancel had slot_mask initialized with tcg_const_tl.
However, it is not constant throughout, so we initialize it with
tcg_temp_new and replace the first use with the constant value.
Inspired-by: Richard Henderson <richard.henderson@linaro.org>
Inspired-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
These will be used to implement new decimal floating point
instructions from Power ISA 3.1.
The remainder is now returned directly by divu128/divs128,
freeing up phigh to receive the high 64 bits of the quotient.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-4-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
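The new interface shape can be illustrated with a small reference model built
on the compiler's __uint128_t (an assumption for the sketch; this is not the
QEMU host-utils implementation):

    #include <stdint.h>

    /* 128-bit dividend in {*plow, *phigh}, 64-bit divisor.  On return
     * the 128-bit quotient is left in {*plow, *phigh} and the remainder
     * is the return value.  Divide-by-zero is the caller's problem, as
     * per the preparatory patch below. */
    static uint64_t divu128_sketch(uint64_t *plow, uint64_t *phigh,
                                   uint64_t divisor)
    {
        __uint128_t dividend = ((__uint128_t)*phigh << 64) | *plow;
        __uint128_t quotient = dividend / divisor;
        uint64_t remainder = (uint64_t)(dividend % divisor);

        *plow = (uint64_t)quotient;
        *phigh = (uint64_t)(quotient >> 64);
        return remainder;
    }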
In preparation for changing the divu128/divs128 implementations
to allow for quotients larger than 64 bits, move the div-by-zero
and overflow checks to the callers.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-2-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since commit 12b6e9b27d ("disas: Clean up CPUDebug initialization")
the disassemble_info->bfd_endian enum is set for all targets in
target_disas(). We can directly call print_insn_nios2() and simplify.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210807110939.95853-3-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The position of this read-only field is dependent on the current xlen.
Rather than having to compute that difference in many places, compute
it only on read.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-16-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
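A minimal sketch of the compute-on-read approach, assuming the field in
question is mstatus.SD (the dirty-state summary bit), which sits at bit XLEN-1:

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t mstatus_read_sketch(uint64_t stored, int xlen,
                                        bool fs_or_xs_dirty)
    {
        uint64_t sd_bit = (uint64_t)1 << (xlen - 1); /* bit 31 on RV32, 63 on RV64 */
        uint64_t val = stored & ~sd_bit;             /* ignore any stored copy     */

        if (fs_or_xs_dirty) {
            val |= sd_bit;                           /* derive SD at read time     */
        }
        return val;
    }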
Use the official debug read interface to the csrs,
rather than referencing the env slots directly.
Put the list of csrs to dump into a table.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-15-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
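A sketch of what a table-driven dump can look like; the CSR numbers are the
standard machine-mode ones, but the read helper here is a stub standing in for
the official debug read interface:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        int csrno;
        const char *name;
    } csr_entry;

    static const csr_entry dump_csrs[] = {
        { 0x300, "mstatus" },
        { 0x305, "mtvec"   },
        { 0x341, "mepc"    },
        { 0x342, "mcause"  },
    };

    /* Stub for the side-effect-free debug read path. */
    static int csr_read_debug_stub(void *env, int csrno, uint64_t *val)
    {
        (void)env; (void)csrno;
        *val = 0;    /* a real implementation consults the CSR op table */
        return 0;
    }

    static void dump_csr_table(void *env)
    {
        for (size_t i = 0; i < sizeof(dump_csrs) / sizeof(dump_csrs[0]); i++) {
            uint64_t val;
            if (csr_read_debug_stub(env, dump_csrs[i].csrno, &val) == 0) {
                printf("%-8s 0x%016" PRIx64 "\n", dump_csrs[i].name, val);
            }
        }
    }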
Most shift instructions require a separate implementation
for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-14-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
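The underlying requirement can be sketched in plain C (not the translator
code): with a 64-bit target_long, RV32 shifts must mask the shift amount to
5 bits and keep register values sign-extended from bit 31.

    #include <stdint.h>

    static uint64_t rv32_sll(uint64_t rs1, uint64_t rs2)
    {
        int32_t v = (int32_t)((uint32_t)rs1 << (rs2 & 31));
        return (uint64_t)(int64_t)v;                /* keep sign-extended form   */
    }

    static uint64_t rv32_srl(uint64_t rs1, uint64_t rs2)
    {
        uint32_t v = (uint32_t)rs1 >> (rs2 & 31);   /* logical, low 32 bits only */
        return (uint64_t)(int64_t)(int32_t)v;
    }

    static uint64_t rv32_sra(uint64_t rs1, uint64_t rs2)
    {
        int32_t v = (int32_t)rs1 >> (rs2 & 31);     /* arithmetic shift          */
        return (uint64_t)(int64_t)v;
    }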
The count zeros instructions require a separate implementation
for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-13-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When target_long is 64-bit, we still want a 32-bit bswap for rev8.
Since this opcode is specific to RV32, we need not conditionalize.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-12-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The multiply high-part instructions require a separate
implementation for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-11-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
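Again as a plain-C sketch of the semantics (not the translator code), the
three RV32 high-part multiplies on 64-bit host arithmetic:

    #include <stdint.h>

    static uint64_t rv32_mulh(uint64_t rs1, uint64_t rs2)
    {
        int64_t prod = (int64_t)(int32_t)rs1 * (int32_t)rs2;      /* s x s */
        return (uint64_t)(int64_t)(int32_t)(prod >> 32);
    }

    static uint64_t rv32_mulhu(uint64_t rs1, uint64_t rs2)
    {
        uint64_t prod = (uint64_t)(uint32_t)rs1 * (uint32_t)rs2;  /* u x u */
        return (uint64_t)(int64_t)(int32_t)(prod >> 32);
    }

    static uint64_t rv32_mulhsu(uint64_t rs1, uint64_t rs2)
    {
        int64_t prod = (int64_t)(int32_t)rs1 * (int64_t)(uint32_t)rs2; /* s x u */
        return (uint64_t)(int64_t)(int32_t)((uint64_t)prod >> 32);
    }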
In preparation for RV128, consider more than just "w" for
operand size modification. This will be used for the "d"
insns from RV128 as well.
Rename oper_len to get_olen to better match get_xlen.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-10-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In preparation for RV128, replace a simple predicate
with a more versatile test.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-9-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We're currently assuming SEW <= 3, and that the "else" of the
SEW == 3 check must be less. Use a switch and explicitly
bound both SEW and SEQ for all cases.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-8-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use the same REQUIRE_64BIT check that we use elsewhere,
rather than open-coding the use of is_32bit.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-7-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Begin adding support for switching XLEN at runtime. Extract the
effective XLEN from MISA and MSTATUS and store for use during translation.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-6-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
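A compact sketch of the derivation (field extraction and priv encodings are
simplified placeholders): M-mode always runs at misa.mxl, while S and U mode
take the effective XLEN from mstatus.SXL and mstatus.UXL respectively.

    enum { MXL_RV32 = 1, MXL_RV64 = 2, MXL_RV128 = 3 };
    enum { PRV_U = 0, PRV_S = 1, PRV_M = 3 };

    static int xl_to_bits(int xl)
    {
        return 16 << xl;        /* RV32 -> 32, RV64 -> 64, RV128 -> 128 */
    }

    static int effective_xlen(int priv, int misa_mxl,
                              int mstatus_sxl, int mstatus_uxl)
    {
        int xl = misa_mxl;      /* M-mode always runs at MXLEN */
        if (priv == PRV_S) {
            xl = mstatus_sxl;
        } else if (priv == PRV_U) {
            xl = mstatus_uxl;
        }
        return xl_to_bits(xl);
    }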
Shortly, the set of supported XL will not be just 32 and 64,
and representing that properly using the enumeration will be
imperative.
Two places, booting and gdb, intentionally use misa_mxl_max
to emphasize the use of the reset value of misa.mxl, and not
the current cpu state.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-5-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The hw representation of misa.mxl is at the high bits of the
misa csr. Representing this in the same way inside QEMU
results in overly complex code trying to check that field.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-4-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Move the MXL_RV* defines to enumerators.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-3-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Move the function to cpu_helper.c, as it is large and growing.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-2-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Organise the CPU properties so that standard extensions come first
then followed by experimental extensions.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: b6598570f60c5ee7f402be56d837bb44b289cc4d.1634531504.git.alistair.francis@wdc.com
Since commit 1a9540d1f1
("target/riscv: Drop support for ISA spec version 1.09.1")
these definitions are unused, remove them.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: f4d8a7a035f39c0a35d44c1e371c5c99cc2fa15a.1634531504.git.alistair.francis@wdc.com
The TB_FLAGS mem_idx bits were extended from 2 bits to 3 bits in
commit c445593, but the other TB_FLAGS bits for rvv and rvh were
not shifted as well, so these bits may overlap with each other when
rvv is enabled.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20211015074627.3957162-2-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
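The overlap can be visualised with made-up bit positions (not the real
TB_FLAGS layout): once MEM_IDX grows from 2 to 3 bits, any flag still defined
at the old neighbouring position aliases it, so the later fields have to be
shifted up.

    /* Broken layout: MEM_IDX now needs bits [2:0], but the first vector
     * flag stayed at bit 2 and overlaps it. */
    #define TB_FLAGS_MEM_IDX_OLD  (0x7u << 0)   /* was 0x3u << 0 */
    #define TB_FLAGS_VILL_OLD     (0x1u << 2)

    /* Fixed layout: everything after MEM_IDX moved past bit 2. */
    #define TB_FLAGS_MEM_IDX      (0x7u << 0)
    #define TB_FLAGS_VILL         (0x1u << 3)
    #define TB_FLAGS_LMUL         (0x3u << 4)
    #define TB_FLAGS_VIRT_ENABLED (0x1u << 6)

    _Static_assert((TB_FLAGS_MEM_IDX & TB_FLAGS_VILL) == 0,
                   "TB_FLAGS fields must not overlap");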
The earlier implementation fell into a corner case for bytes that were
0x01, giving a wrong result (but not affecting our application test
cases for strings, as an ASCII value 0x01 is rare in those...).
This changes the algorithm to:
1. Mask out the high bit of each byte (so that each byte is <= 127).
2. Add 127 to each byte (i.e. if the low 7 bits are not 0, this will overflow
into the highest bit of each byte).
3. Bitwise-or the original value back in (to cover those cases where the
source byte was exactly 128) to saturate the high-bit.
4. Shift-and-mask (implemented as a mask-and-shift) to extract the MSB of
each byte into its LSB.
5. Multiply with 0xff to fan out the LSB to all bits of each byte.
Fixes: d7a4fcb034 ("target/riscv: Add orc.b instruction for Zbb, removing gorc/gorci")
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reported-by: Vincent Palatin <vpalatin@rivosinc.com>
Tested-by: Vincent Palatin <vpalatin@rivosinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211013184125.2010897-1-philipp.tomsich@vrull.eu
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
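The five steps translate directly into a standalone C version of the byte-wise
"or-combine" (a sketch of the algorithm above, not the QEMU TCG implementation):

    #include <assert.h>
    #include <stdint.h>

    static uint64_t orcb_sketch(uint64_t x)
    {
        const uint64_t msb = 0x8080808080808080ull;  /* high bit of each byte   */
        const uint64_t low = 0x7f7f7f7f7f7f7f7full;  /* low 7 bits of each byte */

        uint64_t t = x & low;   /* 1. mask out the high bit of each byte        */
        t += low;               /* 2. +127 per byte: carries into bit 7 iff any
                                      of the low 7 bits was set                 */
        t |= x;                 /* 3. or the original back in (the 0x80 case)   */
        t = (t & msb) >> 7;     /* 4. MSB of each byte -> LSB                   */
        return t * 0xff;        /* 5. fan the LSB out across the whole byte     */
    }

    int main(void)
    {
        assert(orcb_sketch(0x0000000000000001ull) == 0x00000000000000ffull);
        assert(orcb_sketch(0x0001008000ff7f00ull) == 0x00ff00ff00ffff00ull);
        return 0;
    }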
Ensure the columns for all of the register names and values line up.
No functional change, just a minor tweak to the output.
Signed-off-by: Travis Geiselbrecht <travisg@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211009055019.545153-1-travisg@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
oprsz and maxsz are passed with the same value in commit eee2d61e20.
However, vmv.v.v was missed in that commit and should pass the same
value as well in its tcg_gen_gvec_2_ptr() call.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211007081803.1705656-1-frank.chang@sifive.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Problem state needs to be able to read and write the PMU counters,
otherwise it won't be aware of any sampling result that the PMU produces
after a Perf run.
This patch does that in a similar fashion as already done in the
previous patches. PMCs 5 and 6 have a special condition, aside from the
constraints that are common with PMCs 1-4, where they are not part of the
PMU if MMCR0_PMCC is 0b11.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-5-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Similar to the previous patch, let's add problem state read/write access to
the MMCR2 SPR, which is also a group A PMU SPR that needs to be filtered
to be read/written by userspace.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-4-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Userspace needs access to PMU SPRs to be able to operate the PMU. One of
such SPRs is MMCR0.
MMCR0, as defined by PowerISA v3.1, is classified as a 'group A' PMU
register. This class of registers has common read/write rules that are
governed by MMCR0 PMCC bits. MMCR0 is also not fully exposed to problem
state: only MMCR0_FC, MMCR0_PMAO and MMCR0_PMAE bits are
readable/writable in this case.
This patch exposes MMCR0 to userspace by doing the following:
- two new callbacks, spr_read_MMCR0_ureg() and spr_write_MMCR0_ureg(),
are added to be used as problem state read/write callbacks of UMMCR0.
Both callbacks filter the bits userspace is able to
read/write by using MMCR0_UREG_MASK;
- problem state access control is done by the spr_groupA_read_allowed()
and spr_groupA_write_allowed() helpers. These helpers will read the
current PMCC bits from DisasContext and check whether the read/write
MMCR0 operation is valid or not;
- to avoid putting PMU-exclusive logic into the already crowded
translate.c file, let's create a new 'power8-pmu-regs.c.inc' file that
will hold all the spr_read/spr_write functions of PMU registers.
The 'power8' name of this new file is meant to hint at the proven
support of the PMU logic to be added. The code has been tested with the
IBM POWER chip family, POWER8 being the oldest version tested. This
doesn't mean that the PMU logic will break with any other PPC64 chip
that implements Book3s, but rather that we can't assert that it works
properly with any Book3s compliant chip.
CC: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Gustavo Romero <gromero@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-3-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
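To make the filtering concrete, here is a hedged sketch with placeholder bit
positions and a placeholder PMCC rule (the real callbacks and masks live in
the new power8-pmu-regs.c.inc):

    #include <stdbool.h>
    #include <stdint.h>

    #define MMCR0_FC_SK        (1u << 31)   /* placeholder positions */
    #define MMCR0_PMAE_SK      (1u << 27)
    #define MMCR0_PMAO_SK      (1u << 7)
    #define MMCR0_UREG_MASK_SK (MMCR0_FC_SK | MMCR0_PMAE_SK | MMCR0_PMAO_SK)

    /* Group A access from problem state is governed by the PMCC bits;
     * the rule here is a stand-in for the PowerISA definition. */
    static bool groupA_ureg_access_ok(unsigned pmcc)
    {
        return pmcc != 0;
    }

    static uint32_t read_mmcr0_ureg(uint32_t mmcr0, unsigned pmcc)
    {
        if (!groupA_ureg_access_ok(pmcc)) {
            return 0;   /* real hardware raises an exception instead */
        }
        return mmcr0 & MMCR0_UREG_MASK_SK;  /* expose only FC, PMAO, PMAE */
    }

    static uint32_t write_mmcr0_ureg(uint32_t mmcr0, uint32_t val, unsigned pmcc)
    {
        if (!groupA_ureg_access_ok(pmcc)) {
            return mmcr0;
        }
        return (mmcr0 & ~MMCR0_UREG_MASK_SK) | (val & MMCR0_UREG_MASK_SK);
    }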
We're going to add PMU support for TCG PPC64 chips, based on IBM POWER8+
emulation and following PowerISA v3.1. This requires several PMU related
registers to be exposed to userspace (problem state). PowerISA v3.1
dictates that the PMCC bits of the MMCR0 register controls the level of
access of the PMU registers to problem state.
This patch starts things off by exposing both PMCC bits to hflags,
allowing us to access them via DisasContext in the read/write callbacks
that we're going to add next.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-2-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PowerISA says that mtmsr[d] "does not alter MSR[HV], MSR[S], MSR[ME], or
MSR[LE]", but the current code only filters the GPR-provided value if
L=1. This behavior caused some problems in FreeBSD, and a build option
was added to work around the issue [1], but it seems that the bug was
not reported in launchpad/gitlab. This patch addresses the issue in qemu,
so the option on FreeBSD should no longer be required.
[1] https://cgit.freebsd.org/src/commit/?id=4efb1ca7d2a44cfb33d7f9e18bd92f8d68dcfee0
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211015181940.197982-1-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
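The fix amounts to always preserving the protected bits from the old MSR,
independent of L; a minimal sketch with placeholder bit positions follows
(the L=1 path, which additionally restricts the writable bits to EE and RI,
is omitted):

    #include <stdint.h>

    #define MSR_HV_SK (1ull << 60)   /* placeholder positions, not the ISA bit numbers */
    #define MSR_S_SK  (1ull << 41)
    #define MSR_ME_SK (1ull << 12)
    #define MSR_LE_SK (1ull << 0)
    #define MSR_FIXED_SK (MSR_HV_SK | MSR_S_SK | MSR_ME_SK | MSR_LE_SK)

    static uint64_t mtmsrd_l0_sketch(uint64_t old_msr, uint64_t gpr_value)
    {
        /* HV, S, ME and LE always come from the old MSR, never the GPR. */
        return (old_msr & MSR_FIXED_SK) | (gpr_value & ~MSR_FIXED_SK);
    }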