Replace uses of tcg_const_*, where the allocation and the free are
close together, with tcg_constant_*, which requires neither.
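A minimal sketch of the transformation (helper and variable names
hypothetical):

    -    TCGv_i32 t = tcg_const_i32(imm);
    -    gen_helper_foo(cpu_env, t);
    -    tcg_temp_free_i32(t);
    +    gen_helper_foo(cpu_env, tcg_constant_i32(imm));

tcg_constant_* values are managed by the TCG core and must not be
freed or written to.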
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-2-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For U-mode CSRs, the read-only check is also needed.
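A sketch of the idea; per the RISC-V privileged spec, CSR address
bits [11:10] == 0b11 mark a read-only CSR (the exact integration
point in riscv_csrrw() may differ):

    int read_only = get_field(csrno, 0xC00) == 3;
    if (write_mask && read_only) {
        return RISCV_EXCP_ILLEGAL_INST;
    }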
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20210810014552.4884-1-zhiwei_liu@c-sky.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For some CPUs, the ISA version has already been set in the CPU init
function. Thus, only override the ISA version when it is not yet set,
or when the user explicitly requests a different ISA version via CPU
parameters.
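A sketch of the intended logic, assuming a set_priv_version() helper
and a priv_version value decoded from the CPU parameters (the default
version shown is illustrative):

    if (priv_version) {
        /* the user explicitly requested a version */
        set_priv_version(env, priv_version);
    } else if (!env->priv_ver) {
        /* not set by the cpu init function: apply the default */
        set_priv_version(env, PRIV_VERSION_1_11_0);
    }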
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20210811144612.68674-1-zhiwei_liu@c-sky.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When the privilege check fails, RISCV_EXCP_ILLEGAL_INST is returned,
not -1 (RISCV_EXCP_NONE).
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210807141025.31808-1-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.2-20210827' into staging
ppc patch queue 2021-08-27
First ppc pull request for qemu-6.2. As usual, there's a fair bit
here, since it's been queued during the 6.1 freeze. Highlights are:
* Some fixes for 128 bit arithmetic and some vector opcodes that use
them
* Significant improvements to the powernv to support POWER10 cpus
(more to come though)
* Several cleanups to the ppc softmmu code
* A few other assorted fixes
# gpg: Signature made Fri 27 Aug 2021 08:09:12 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dg-gitlab/tags/ppc-for-6.2-20210827:
target/ppc: fix vector registers access in gdbstub for little-endian
include/qemu/int128.h: introduce bswap128s
target/ppc: fix vextu[bhw][lr]x helpers
include/qemu/int128.h: define struct Int128 according to the host endianness
ppc/xive: Export xive_presenter_notify()
ppc/xive: Export PQ get/set routines
ppc/pnv: add a chip topology index for POWER10
ppc/pnv: Distribute RAM among the chips
ppc/pnv: Use a simple incrementing index for the chip-id
ppc/pnv: powerpc_excp: Do not discard HDECR exception when entering power-saving mode
ppc/pnv: Change the POWER10 machine to support DD2 only
ppc: Add a POWER10 DD2 CPU
ppc/pnv: update skiboot to commit 820d43c0a775.
target/ppc: moved store_40x_sler to helper_regs.c
target/ppc: moved ppc_store_sdr1 to mmu_common.c
target/ppc: divided mmu_helper.c in 2 files
spapr_pci: Fix leak in spapr_phb_vfio_get_loc_code() with g_autofree
xive: Remove extra '0x' prefix in trace events
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As vector registers are stored in host endianness, we shouldn't swap
their 64-bit elements in user mode. Add a 16-byte case to
ppc_maybe_bswap_register to handle the reordering of elements in
softmmu, and remove avr_need_swap, which is now unused.
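The new case is essentially (assuming the bswap128s() helper added in
this series):

    } else if (len == 16) {
        bswap128s((Int128 *)mem_buf);
    }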
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826145656.2507213-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These helpers shouldn't depend on the host endianness, as they only use
shifts, ands, and int128_* methods.
Fixes: 60caf2216b ("target-ppc: add vextu[bhw][lr]x instructions")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Hypervisor Decrementer exception should not be generated while the
CPU is in power-saving mode (see cpu_ppc_hdecr_excp()). However,
discarding the exception before entering the power-saving mode is
wrong, since we would lose a previously generated HDEC.
Fixes: 4b236b621b ("ppc: Initial HDEC support")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The POWER10 DD2 CPU adds an extra LPCR[HAIL] bit. DD1 doesn't have
HAIL, but since this does not break the modeling and we don't plan
to support DD1, modify the LPCR mask of the whole POWER10 family.
Setting the HAIL bit is a requirement to support the scv instruction
on PowerNV POWER10 platforms since glibc-2.33.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-2-clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Moved store_40x_sler from mmu_common.c to helper_regs.c, as it is
a function that stores a value in a special purpose register;
moving it to a file focused on special register manipulation
is more appropriate.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-4-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ppc_store_sdr1 was at first in mmu_helper.c and was moved as part of
the patches to enable the disable-tcg option; now it's being moved
back to a file that will be compiled with that option.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-3-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divided mmu_helper.c into two files: functions inside #ifdef
CONFIG_SOFTMMU stayed in mmu_helper.c, other functions moved to
mmu_common.c. Updated meson.build to always compile mmu_common.c and
to only compile mmu_helper.c when CONFIG_TCG is set.
Moved function declarations, #defines and structs used by both files
to internal.h, except for functions that use structures defined in
cpu.h; those were moved to cpu.h.
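The resulting meson.build arrangement looks roughly like this (source
set name as used in target/ppc):

    ppc_ss.add(files('mmu_common.c'))
    ppc_ss.add(when: 'CONFIG_TCG', if_true: files('mmu_helper.c'))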
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently we rely on all the callsites of cpsr_write() to rebuild the
cached hflags if they change one of the CPSR bits which we use as a
TB flag and cache in hflags. This is a bit awkward when we want to
change the set of CPSR bits that we cache, because it means we need
to re-audit all the cpsr_write() callsites to see which flags they
are writing and whether they now need to rebuild the hflags.
Switch instead to making cpsr_write() call arm_rebuild_hflags()
itself if one of the bits being changed is a cached bit.
We don't do the rebuild for the CPSRWriteRaw write type, because that
kind of write is generally doing something special anyway. For the
CPSRWriteRaw callsites in the KVM code and inbound migration we
definitely don't want to recalculate the hflags; the callsites in
boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves
anyway because of other CPU state changes they make.
This allows us to drop explicit arm_rebuild_hflags() calls in a
couple of places where the only reason we needed to call it was the
CPSR write.
This fixes a bug where we were incorrectly failing to rebuild hflags
in the code path for a gdbstub write to CPSR, which meant that you
could make QEMU assert by breaking into a running guest, altering the
CPSR to change the value of, for example, CPSR.E, and then
continuing.
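The core of the change is a check in cpsr_write() shaped like the
sketch below, where CACHED_CPSR_BITS stands for the set of CPSR bits
mirrored in hflags:

    if (write_type != CPSRWriteRaw && (mask & CACHED_CPSR_BITS)) {
        /* a CPSR bit cached in hflags may have changed */
        arm_rebuild_hflags(env);
    }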
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1
access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ.
Implement these traps. In v8A this HSTR bit doesn't exist, so don't
trap for v8A CPUs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses
to the Thumb2EE TEECR and TEEHBR registers to be trapped to the
hypervisor. Implement these traps.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
KVM cannot support multiple address spaces per CPU; if you try to
create more than one then cpu_address_space_init() will assert.
In the Arm CPU realize function, detect the configurations which
would cause us to need more than one AS, and cleanly fail the
realize rather than blundering on into the assertion. This
turns this:
$ qemu-system-aarch64 -enable-kvm -display none -cpu max -machine raspi3b
qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
Aborted
into:
$ qemu-system-aarch64 -enable-kvm -display none -machine raspi3b
qemu-system-aarch64: Cannot enable KVM when guest CPU has EL3 enabled
and this:
$ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
Aborted
into:
$ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
qemu-system-aarch64: Cannot enable KVM when using an M-profile guest CPU
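A sketch of the realize-time check (conditions simplified; the real
code has to consider every configuration that creates an extra
address space):

    if (kvm_enabled()) {
        if (cpu->has_el3) {
            error_setg(errp,
                       "Cannot enable KVM when guest CPU has EL3 enabled");
            return;
        }
        if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
            error_setg(errp,
                       "Cannot enable KVM when using an M-profile guest CPU");
            return;
        }
    }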
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/528
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210816135842.25302-3-peter.maydell@linaro.org
arch_init.h only defines the QEMU_ARCH_* enumeration and the
arch_type global. Don't include it in files that don't use those.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210730105947.28215-8-peter.maydell@linaro.org
Future CPU types may specify which vector lengths are supported.
We can apply nearly the same logic to validate those lengths
as we do for KVM's supported vector lengths. We merge the code
where we can, but unfortunately can't completely merge it because
KVM requires all vector lengths, power-of-two or not, smaller than
the maximum enabled length to also be enabled. The architecture
only requires all the power-of-two lengths, though, so TCG will
only enforce that.
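Illustratively, the architectural rule TCG enforces can be checked
like this (field names from this series):

    uint32_t vq;
    for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
        if (!test_bit(vq - 1, cpu->sve_vq_map)) {
            /* a power-of-two length below the max is disabled */
            error_setg(errp, "cannot disable sve%d", vq * 128);
            return;
        }
    }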
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we have an ARMCPU member sve_vq_supported we no longer
need the local kvm_supported bitmap for KVM's supported vector
lengths.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
bitmap_clear() only clears the given range. While the given
range should be sufficient in this case, we might as well be
100% sure all bits are zeroed by using bitmap_zero().
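The change amounts to (bitmap name taken from the related KVM code):

    -    bitmap_clear(kvm_supported, 0, ARM_MAX_VQ);
    +    bitmap_zero(kvm_supported, ARM_MAX_VQ);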
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Allow CPUs that support SVE to specify which SVE vector lengths they
support by setting them in this bitmap. Currently only the 'max' and
'host' CPU types support SVE, and 'host' requires KVM, which obtains
its supported bitmap from the host. So, we only need to initialize the
bitmap for 'max' with TCG. And, since 'max' should support all SVE
vector lengths, we simply fill the bitmap. Future CPU types may have
less trivial maps though.
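For 'max' with TCG the initialization is then a one-liner (field name
from this series):

    bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);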
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Most callers check the return value. Some check whether it set an
error. Functionally equivalent, but the former tends to be easier on
the eyes, so do that everywhere.
Prior art: commit c6ecec43b2 "qemu-option: Check return value instead
of @err where convenient".
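Illustrative shape of the conversion (callee hypothetical, assuming
it returns a negative value on failure, as migrate_add_blocker()
does):

    -    foo(arg, &err);
    -    if (err) {
    +    if (foo(arg, &err) < 0) {
             error_report_err(err);
         }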
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-10-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
There is nothing to delete after migrate_add_blocker() failed. Trying
anyway is safe, but useless. Don't.
Cc: Sunil Muthuswamy <sunilmut@microsoft.com>
Cc: Kamil Rytarowski <kamil@netbsd.org>
Cc: Reinoud Zandijk <reinoud@netbsd.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-9-armbru@redhat.com>
Reviewed-by: Reinoud Zandijk <reinoud@NetBSD.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
invtsc_mig_blocker has static storage duration. When a CPU with
certain features is initialized, and invtsc_mig_blocker is still null,
we add a migration blocker and store it in invtsc_mig_blocker.
The object is freed when migrate_add_blocker() fails, leaving
invtsc_mig_blocker dangling. It is not freed on later failures.
Same for hv_passthrough_mig_blocker and hv_no_nonarch_cs_mig_blocker.
All failures are actually fatal, so whether we free or not doesn't
really matter, except as bad examples to be copied / imitated.
Clean this up in a minimal way: never free these blocker objects.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-7-armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
We did this with scripts/coccinelle/use-error_fatal.cocci before, in
commits 50beeb6809 and 007b06578a. This commit cleans up rarer
variations that don't seem worth matching with Coccinelle.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-2-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The AVX_VNNI feature is not present on the Cooperlake platform; remove
it from the CPU model.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210820054611.84303-1-yang.zhong@intel.com>
Fixes: c1826ea6a0 ("i386/cpu: Expose AVX_VNNI instruction to guest")
Cc: qemu-stable@nongnu.org
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
At present, there's no mechanism intelligent enough to virtualize split
lock detection correctly. Remove it from the Snowridge CPU model to
avoid exposing the feature.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210630012053.10098-1-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add an inline cpu_is_bigendian() function to "translate.h".
Replace the TARGET_WORDS_BIGENDIAN #ifdef'ry with calls to
cpu_is_bigendian().
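A sketch consistent with the description (Config0 bit accessed via the
usual CP0C0_BE definition):

    static inline bool cpu_is_bigendian(DisasContext *ctx)
    {
        return extract32(ctx->CP0_Config0, CP0C0_BE, 1);
    }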
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818164321.2474534-6-f4bug@amsat.org>
Most TCG helpers only have access to a DisasContext pointer,
not CPUMIPSState. Store a copy of CPUMIPSState::CP0_Config0
in DisasContext so we can access it from TCG helpers.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818164321.2474534-5-f4bug@amsat.org>
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.
Replace the GET_LMASK() macro with an inline get_lmask() function,
passing CPUMIPSState and the word size as arguments.
We can remove another use of the TARGET_WORDS_BIGENDIAN definition.
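A sketch of the replacement, mirroring the byte offset on
little-endian targets exactly as the old macro did:

    static inline target_ulong get_lmask(CPUMIPSState *env,
                                         target_ulong value, unsigned bits)
    {
        unsigned mask = (bits / BITS_PER_BYTE) - 1;

        value &= mask;
        if (!extract32(env->CP0_Config0, CP0C0_BE, 1)) {
            value ^= mask; /* little-endian: mirror the byte offset */
        }
        return value;
    }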
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-4-f4bug@amsat.org>
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.
Replace the GET_LMASK() macro with an inline get_lmask() function,
passing CPUMIPSState and the word size as arguments.
We can remove one use of the TARGET_WORDS_BIGENDIAN definition.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-3-f4bug@amsat.org>
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.
As a first step, inline the GET_OFFSET() macro, calling
cpu_is_bigendian() to get the 'direction' of the offset.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-2-f4bug@amsat.org>
To be able to split some code calling the gen_helper() macros
out of the huge translate.c, we need to define them in the
'translate.h' local header.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-9-f4bug@amsat.org>
excp/err are read-only inputs, so we can replace the tcg_const_i32()
calls with their tcg_constant_i32() equivalent.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-8-f4bug@amsat.org>
gen_helper_0e0i() is one-line long and is only used twice:
simply inline it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-7-f4bug@amsat.org>
gen_helper_1e1i() is one-line long and is used in one place:
simply inline it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-6-f4bug@amsat.org>
In all call sites, the last argument is always used as a
read-only value, so we can replace the tcg_const_i32() temporary
with tcg_constant_i32().
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-5-f4bug@amsat.org>
The $rt register is used read-only, so we can replace the
tcg_const_i32() temporary with tcg_constant_i32().
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-4-f4bug@amsat.org>
gen_helper_1e2i() is unused since commit 33a07fa2db
("target/mips: reimplement SC instruction emulation
and use cmpxchg"), remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-3-f4bug@amsat.org>
gen_helper_0e3i() is unused since commit 895c2d0435
("target-mips: switch to AREG0 free mode"), remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-2-f4bug@amsat.org>
We already call check_cp1_enabled() earlier in the "pre-conditions"
checks for GSLWXC1 and GSLDXC1 in gen_loongson_lsdc2() prologue.
Remove the duplicated calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210816001031.1720432-1-f4bug@amsat.org>
Per the manual '龙芯 GS264 处理器核用户手册' (Loongson GS264 processor
core user manual) v1.0, chapter 1.1.5 SEGBITS: the 3A1000 (based on
the GS464 core) implements 48 virtual address bits in each 64-bit
segment, not 40.
Fixes: af868995e1 ("target/mips: Add Loongson-3 CPU definition")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210813110149.1432692-3-f4bug@amsat.org>
Document the cores on which each Loongson-3A CPU is based (see
commit af868995e1, "target/mips: Add Loongson-3 CPU definition").
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210813110149.1432692-2-f4bug@amsat.org>
Convert the following Integer Multiply-Accumulate opcodes:
* MSAC Multiply, negate, accumulate, and move LO
* MSACHI Multiply, negate, accumulate, and move HI
* MSACHIU Unsigned multiply, negate, accumulate, and move HI
* MSACU Unsigned multiply, negate, accumulate, and move LO
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-8-f4bug@amsat.org>
Convert the following Integer Multiply-Accumulate opcodes:
* MULHI Multiply and move HI
* MULHIU Unsigned multiply and move HI
* MULS Multiply, negate, and move LO
* MULSHI Multiply, negate, and move HI
* MULSHIU Unsigned multiply, negate, and move HI
* MULSU Unsigned multiply, negate, and move LO
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-7-f4bug@amsat.org>
Convert the following Integer Multiply-Accumulate opcodes:
* MACC Multiply, accumulate, and move LO
* MACCHI Multiply, accumulate, and move HI
* MACCHIU Unsigned multiply, accumulate, and move HI
* MACCU Unsigned multiply, accumulate, and move LO
Since all opcodes are generated using the same pattern, we
add the gen_helper_mult_acc_t typedef and MULT_ACC() macro
to remove boilerplate code.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-6-f4bug@amsat.org>
The decoder is called but doesn't decode anything. This will
ease reviewing the next commit.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801235926.3178085-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Extract NEC Vr54xx helpers from op_helper.c to a new file:
'vr54xx_helper.c'.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-14-f4bug@amsat.org>
Extract the NEC Vr54xx helper definitions to
'vendor-vr54xx_helper.h'.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-15-f4bug@amsat.org>
Plain copy/paste of the TRANS() macro introduced in the PPC
commit f2aabda8ac ("target/ppc: Move D/DS/X-form integer
loads to decodetree") to the MIPS target.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210808173018.90960-2-f4bug@amsat.org>
We'll soon have more opcodes and decoded arguments, and 'rtype'
is not very helpful. Naming it simply 'r' eases reviewing the
.decode files when we have many opcodes.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-5-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We don't need to maintain 2 sets of decodetree definitions.
Merge them into a single file.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-4-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In commit ffc672aa97 ("target/mips/tx79: Move MFHI1 / MFLO1
opcodes to decodetree") we misplaced the decoder call. Move
it to the correct place.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
check_insn() checks for any bit in the set, and INSN_R5900 is
just another bit added to the set. No need to special-case it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-2-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
JR opcode (Jump Register) only takes 1 argument, $rs.
JALR (Jump And Link Register) takes 3: $rs, $rd and $hint.
Commit 6af0bf9c7c added their processing into decode_opc() as:
case 0x08 ... 0x09: /* Jumps */
gen_compute_branch(ctx, op1 | EXT_SPECIAL, rs, rd, sa);
having both opcodes handled in the same function: gen_compute_branch.
Per JR encoding, both $rd and $hint ('sa') are decoded as zero.
Later this code got extracted to decode_opc_special(),
commit 7a387fffce used definitions instead of magic values:
case OPC_JR ... OPC_JALR:
gen_compute_branch(ctx, op1, rs, rd, sa);
Finally commit 0aefa33318 moved OPC_JR out of decode_opc_special,
to a new 'decode_opc_special_legacy' function:
@@ -15851,6 +15851,9 @@ static void decode_opc_special_legacy(CPUMIPSState *env, DisasContext *ctx)
+ case OPC_JR:
+ gen_compute_branch(ctx, op1, 4, rs, rd, sa);
+ break;
@@ -15933,7 +15936,7 @@ static void decode_opc_special(CPUMIPSState *env, DisasContext *ctx)
- case OPC_JR ... OPC_JALR:
+ case OPC_JALR:
gen_compute_branch(ctx, op1, 4, rs, rd, sa);
break;
Since JR is now handled individually, it is pointless to decode
and pass it unused arguments. Replace them with zero values
to avoid confusion with this opcode.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210730225507.2642827-1-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
As per commit 5626f8c6d4 ("rcu: Add automatically released rcu_read_lock
variants"), RCU_READ_LOCK_GUARD() should be used instead of
rcu_read_{un}lock().
Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210727235201.11491-1-someguy@effective-light.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unlike A-profile, for M-profile the UDIV and SDIV insns can be
configured to raise an exception on division by zero, using the CCR
DIV_0_TRP bit.
Implement support for setting this bit by making the helper functions
raise the appropriate exception.
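A sketch of the resulting divide helper (the name of the trap-check
function is an assumption):

    int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
    {
        if (den == 0) {
            handle_possible_div0_trap(env, GETPC()); /* may not return */
            return 0;
        }
        if (num == INT_MIN && den == -1) {
            return INT_MIN; /* avoid C undefined behaviour on overflow */
        }
        return num / den;
    }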
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
We're about to make a code change to the sdiv and udiv helper
functions, so first fix their indentation and coding style.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
Implement the MVE interleaving load/store functions VLD2, VLD4, VST2
and VST4. VLD2 loads 16 bytes of data from memory and writes to 2
consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes
to 4 consecutive Qregs. The 'pattern' field in the encoding
determines the offset into memory which is accessed and also which
elements in the Qregs are written to. (The intention is that a
sequence of four consecutive VLD4 with different pattern values
performs a complete de-interleaving load of 64 bytes into all
elements of the 4 Qregs.) VST2 and VST4 do the same, but for stores.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VLDR/VSTR insns which do scatter-gather using base
addresses from Qm plus or minus an immediate offset (possibly with
writeback). Note that writeback is not predicated but it does have
to honour ECI state, so we have to add an eci_mask check to the
VSTR_SG macros (the VLDR_SG macros already needed this to be able
to distinguish "skip beat" from "set predicated element to 0").
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE gather-loads and scatter-stores which
form the address by adding a base value from a scalar
register to an offset in each element of a vector.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
that any element at index Rn or greater is predicated. As with
VPNOT, this insn itself is predicable and subject to beatwise
execution.
The calculation of the mask is the same as is used to determine
ltpmask in mve_element_mask(), but we precalculate masklen in
generated code to avoid having to have 4 helpers specialized by size.
We put the decode line in with the low-overhead-loop insns in
t32.decode because it's logically part of that collection of insn
patterns, even though it is an MVE only insn.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject to both predication and to beatwise execution).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMOV forms that move data between 2 general-purpose
registers and 2 32-bit lanes in a vector register.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMAXA and VMINA insns, which take the absolute
value of the signed elements in the input vector and then accumulate
the unsigned max or min into the destination vector.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE 1-operand saturating operations VQABS and VQNEG.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE saturating doubling multiply accumulate insns
VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply,
double, add the accumulator shifted by the element size, possibly
round, saturate to twice the element size, then take the high half of
the result. The *MLAH insns do vector * scalar + vector, and the
*MLASH insns do vector * vector + scalar.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMLA insn, which multiplies a vector by a scalar
and accumulates into another vector.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMLADAV and VMLSLDAV insns. Like the VMLALDAV and
VMLSLDAV insns already implemented, these accumulate multiplied
vector elements; but they accumulate a 32-bit result rather than a
64-bit one.
Note that these encodings overlap with what would be RdaHi=0b111 for
VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The MVEGenDualAccOpFn is a bit misnamed, since it is used for
the "long dual accumulate" operations that use a 64-bit
accumulator. Rename it to MVEGenLongDualAccOpFn so we can
use the former name for the 32-bit accumulator insns.
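I.e. (argument list illustrative):

    -typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr,
    -                               TCGv_ptr, TCGv_i64);
    +typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr,
    +                                   TCGv_ptr, TCGv_i64);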
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
These take a double-width input, narrow it (possibly saturating) and
store the result to either the top or bottom half of the output
element.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VABAV insn, which computes absolute differences
between elements of two vectors and accumulates the result into
a general purpose register.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE integer min/max across vector insns
VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum or
minimum across the vector elements and a general purpose
register, and store the result back into the general purpose
register.
These insns overlap with VRMLALDAVH (they use what would
be RdaHi=0b110).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
All the users of the vmlaldav formats have an 'x' bit in bit 12 and an
'a' bit in bit 5; move these to the format rather than specifying them
in each insn pattern.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE instructions which perform shifts by a scalar.
These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the
shift amount in a general purpose register and shift every element in
the vector by that amount.
Mostly we can reuse the helper functions for shift-by-immediate; we
do need two new helpers for VQRSHL.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMLAS insn, which multiplies a vector by a vector
and adds a scalar.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VPSEL insn, which sets each byte of the destination
vector Qd to the byte from either Qn or Qm depending on the value of
the corresponding bit in VPR.P0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE integer vector comparison instructions that compare
each element against a scalar from a general purpose register. These
are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)"
encodings T4, T5 and T6.
We have to move the decodetree pattern for VPST, because it
overlaps with VCMP T4 with size = 0b11.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE integer vector comparison instructions. These are
"VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings
T1, T2 and T3.
These insns compare corresponding elements in each vector, and update
the VPR.P0 predicate bits with the results of the comparison. VPT
also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively
"VCMP then VPST".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Factor out the "generate code to update VPR.MASK01/MASK23" part of
trans_VPST(); we are going to want to reuse it for the VPT insns.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP,
VIWDUP and VDWDUP. These fill the elements of a vector with
successively incrementing values, starting at the offset specified in
a general purpose register. The final value of the offset is written
back to this register. The wrapping variants take a second general
purpose register which specifies the point where the count should
wrap back to 0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes
in two flavours: 8x8->16 and 16x16->32. Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.
The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names where it then
matches up with the existing pmull_h() which does an 8x8->16
operation and a new pmull_w() which does the 16x16->32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We were not paying attention to the ECI state when advancing the VPT
state. Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask. Factor this mask calculation out of mve_element_mask() into
its own function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR. However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length. Special case this to give the all-zeroes mask we
require.
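The fix is a simple special case (variable names illustrative):

    ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;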
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, although it always makes the result
smaller than the input value it might not be within the 48-bit range
the result is supposed to be if the input had some bits in [63..48]
set and the shift didn't bring all of those within the [47..0] range.
Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:
(1) In do_sqrshl48_d() we used the same code that do_sqrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case. This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.
(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value). This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) right by
20". Instead check for saturation by doing the shift and signextend
and then testing whether shifting back left again gives the original
value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.
Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.
Since the generic logic gives the right answer for a shift
by 0, just use that.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
~0UL is 64 bits wide on Linux but only 32 bits wide on Windows (LP64
vs LLP64), so it is not a portable way to build an all-ones 64-bit
mask.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Raised exceptions don't return, so mark the helper with noreturn.
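In helper.h terms this is (argument layout assumed):

    -DEF_HELPER_2(raise_exception, void, env, i32)
    +DEF_HELPER_2(raise_exception, noreturn, env, i32)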
Fixes: 032c76bc6f ("nios2: Add architecture emulation support")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210729101315.2318714-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The shift constant was incorrect, causing int_prio to always be zero.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>