VS-stage translation in get_physical_address() needs to translate the
PTE address through G-stage translation, but a G-stage translation
error cannot be distinguished from a VS-stage translation error in
riscv_cpu_tlb_fill(). On migration, the destination needs to rebuild
the PTE, and this G-stage translation error must be handled by HS-mode.
Introduce TRANSLATE_STAGE2_FAIL so that riscv_cpu_tlb_fill() can
distinguish the two cases and raise the fault to HS-mode.
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201014101728.848-1-jiangyifei@huawei.com
[ Change by AF:
- Clarify the fault_pte_addr shift
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
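A minimal standalone sketch of the idea described above, with illustrative names and cause values rather than the actual QEMU enums: a distinct return code lets the TLB-fill path tell a G-stage failure during the VS-stage PTE fetch apart from an ordinary fault, so it can raise a guest page fault to HS-mode.

    #include <stdio.h>

    /* Illustrative translation results; TRANSLATE_STAGE2_FAIL marks a
     * failure of the G-stage walk performed while fetching a VS-stage PTE. */
    typedef enum {
        TRANSLATE_SUCCESS,
        TRANSLATE_FAIL,         /* ordinary (single-stage or VS-stage) fault */
        TRANSLATE_STAGE2_FAIL,  /* G-stage fault during VS-stage PTE fetch */
    } TranslateResult;

    /* Hypothetical cause codes standing in for the RISC-V values. */
    enum { CAUSE_LOAD_PAGE_FAULT, CAUSE_LOAD_GUEST_PAGE_FAULT };

    /* In the real code this decision lives in riscv_cpu_tlb_fill(); here it
     * is a pure function so the example can run on its own. */
    static int fault_cause_for(TranslateResult res)
    {
        return res == TRANSLATE_STAGE2_FAIL ? CAUSE_LOAD_GUEST_PAGE_FAULT
                                            : CAUSE_LOAD_PAGE_FAULT;
    }

    int main(void)
    {
        printf("stage2 fail -> cause %d (guest page fault, handled by HS-mode)\n",
               fault_cause_for(TRANSLATE_STAGE2_FAIL));
        printf("vs-stage fail -> cause %d (forwarded as a normal page fault)\n",
               fault_cause_for(TRANSLATE_FAIL));
        return 0;
    }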
The HLVX.WU instruction is supposed to read a machine word,
but prior to this change it read a byte instead.
Fixes: 8c5362acb5 ("target/riscv: Allow generating hlv/hlvx/hsv instructions")
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013172223.443645-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The hstatus.GVA bit was not set if the faulting guest virtual address
was zero.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013173054.451135-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When trapping from virt into HS mode, hstatus.SPVP was set to
the value of sstatus.SPP, as according to the specification both
flags should be set to the same value.
However, the assignment of SPVP takes place before SPP itself is
updated, which results in SPVP having an outdated value.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013151054.396481-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
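The fix boils down to an ordering question; a simplified, self-contained model of the corrected order (field names and layout are illustrative, not the real cpu_helper.c code): SPVP must be derived from the privilege level in effect before the trap, not from SPP after it has been overwritten.

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified trap-entry model: prev_priv is the privilege level the
     * hart was running at when the trap was taken (0 = U, 1 = S). */
    struct state { bool spp; bool spvp; };

    static void trap_to_hs(struct state *s, bool prev_priv, bool from_virt)
    {
        if (from_virt) {
            /* Correct order: derive SPVP from the pre-trap privilege level
             * before SPP is updated for the new mode. */
            s->spvp = prev_priv;
        }
        s->spp = prev_priv;   /* SPP is updated afterwards */
    }

    int main(void)
    {
        struct state s = { .spp = false, .spvp = false };
        trap_to_hs(&s, /*prev_priv=*/true, /*from_virt=*/true);
        printf("SPVP=%d SPP=%d (both reflect the pre-trap privilege)\n",
               s.spvp, s.spp);
        return 0;
    }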
Currently we log interrupts and exceptions using the trace backend in
riscv_cpu_do_interrupt(). We also log exceptions using the interrupt log
mask (-d int) in riscv_raise_exception().
This patch converts riscv_cpu_do_interrupt() to log both interrupts and
exceptions with the interrupt log mask, so that both are printed when a
user runs QEMU with -d int.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 29a8c766c7c4748d0f2711c3a0abb81208138c5e.1601652179.git.alistair.francis@wdc.com
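A self-contained illustration of the logging pattern the patch converges on, using stand-ins for qemu_log_mask() and CPU_LOG_INT so the example runs on its own:

    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for QEMU's log-mask machinery (-d int sets CPU_LOG_INT). */
    #define CPU_LOG_INT (1 << 0)
    static int qemu_loglevel = CPU_LOG_INT;

    static void log_mask(int mask, const char *fmt, ...)
    {
        if (qemu_loglevel & mask) {
            va_list ap;
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
        }
    }

    int main(void)
    {
        bool async = false;   /* exception rather than interrupt */
        int cause = 13;       /* e.g. a load page fault */
        /* Both interrupts and exceptions now go through the same mask, so
         * a single '-d int' run shows both. */
        log_mask(CPU_LOG_INT, "%s: hart:0 %s %d\n", __func__,
                 async ? "interrupt" : "exception", cause);
        return 0;
    }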
If the M-profile low-overhead-branch extension is implemented, FPSCR
bits [18:16] are a new field LTPSIZE. If MVE is not implemented
(currently always true for us) then this field always reads as 4 and
ignores writes.
These bits used to be the vector-length field for the old
short-vector extension, so we need to take care that they are not
misinterpreted as setting vec_len. We do this with a rearrangement
of the vfp_set_fpscr() code that deals with vec_len, vec_stride
and also the QC bit; this obviates the need for the M-profile
only masking step that we used to have at the start of the function.
We provide a new field in CPUState for LTPSIZE, even though this
will always be 4, in preparation for MVE, so we don't have to
come back later and split it out of the vfp.xregs[FPSCR] value.
(This state struct field will be saved and restored as part of
the FPSCR value via the vmstate_fpscr in machine.c.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-11-peter.maydell@linaro.org
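A small self-contained sketch of the resulting read/write behaviour for the LTPSIZE bits (bit positions as described above; this is a model, not the actual vfp_get_fpscr()/vfp_set_fpscr() code):

    #include <stdint.h>
    #include <stdio.h>

    #define FPSCR_LTPSIZE_SHIFT 16
    #define FPSCR_LTPSIZE_MASK  (0x7u << FPSCR_LTPSIZE_SHIFT)

    /* Model of an M-profile core without MVE: LTPSIZE reads as 4 and
     * ignores writes, so it is never actually stored. */
    static uint32_t stored_fpscr;

    static void fpscr_write(uint32_t val)
    {
        stored_fpscr = val & ~FPSCR_LTPSIZE_MASK;  /* writes to [18:16] dropped */
    }

    static uint32_t fpscr_read(void)
    {
        return stored_fpscr | (4u << FPSCR_LTPSIZE_SHIFT);  /* reads back as 4 */
    }

    int main(void)
    {
        fpscr_write(0xffffffffu);
        printf("LTPSIZE = %u\n",
               (fpscr_read() & FPSCR_LTPSIZE_MASK) >> FPSCR_LTPSIZE_SHIFT);
        return 0;
    }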
M-profile CPUs with half-precision floating point support should
be able to write to FPSCR.FZ16, but an M-profile specific masking
of the value at the top of vfp_set_fpscr() currently prevents that.
This is not yet an active bug because we have no M-profile
FP16 CPUs, but needs to be fixed before we can add any.
The bits that the masking is effectively preventing from being
set are the A-profile only short-vector Len and Stride fields,
plus the Neon QC bit. Rearrange the order of the function so
that those fields are handled earlier and only under a suitable
guard; this allows us to drop the M-profile specific masking,
making FZ16 writeable.
This change also makes the QC bit correctly RAZ/WI for older
no-Neon A-profile cores.
This refactoring also paves the way for the low-overhead-branch
LTPSIZE field, which uses some of the bits that are used for
A-profile Stride and Len.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-10-peter.maydell@linaro.org
In arm_cpu_realizefn(), if the CPU has VFP or Neon disabled then we
squash the ID register fields so that we don't advertise it to the
guest. This code was written for A-profile and needs some tweaks to
work correctly on M-profile:
* A-profile only fields should not be zeroed on M-profile:
- MVFR0.FPSHVEC,FPTRAP
- MVFR1.SIMDLS,SIMDINT,SIMDSP,SIMDHP
- MVFR2.SIMDMISC
* M-profile only fields should be zeroed on M-profile:
- MVFR1.FP16
In particular, because MVFR1.SIMDHP on A-profile is the same field as
MVFR1.FP16 on M-profile this code was incorrectly disabling FP16
support on an M-profile CPU (where has_neon is always false). This
isn't a visible bug yet because we don't have any M-profile CPUs with
FP16 support, but the change is necessary before we introduce any.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-9-peter.maydell@linaro.org
v8.1M's "low-overhead-loop" extension has three instructions
for looping:
* DLS (start of a do-loop)
* WLS (start of a while-loop)
* LE (end of a loop)
The loop-start instructions are both simple operations to start a
loop whose iteration count (if any) is in LR. The loop-end
instruction handles "decrement iteration count and jump back to loop
start"; it also caches the information about the branch back to the
start of the loop to improve performance of the branch on subsequent
iterations.
As with the branch-future instructions, the architecture permits an
implementation to discard the LO_BRANCH_INFO cache at any time, and
QEMU takes the IMPDEF option to never set it in the first place
(equivalent to discarding it immediately), because for us a "real"
implementation would be unnecessary complexity.
(This implementation only provides the simple looping constructs; the
vector extension MVE (Helium) adds some extra variants to handle
looping across vectors. We'll add those later when we implement
MVE.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-8-peter.maydell@linaro.org
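A plain-C model of what the loop-end instruction does architecturally, ignoring the LO_BRANCH_INFO cache entirely (matching the IMPDEF choice above); names are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Architectural behaviour of LE with LR holding the remaining iteration
     * count: decrement and branch back to the loop start while iterations
     * remain.  The LO_BRANCH_INFO cache is only a performance hint, so an
     * implementation (like QEMU) may simply never populate it. */
    static int le_take_branch(uint32_t *lr)
    {
        if (*lr > 1) {
            *lr -= 1;
            return 1;       /* branch back to the loop start */
        }
        *lr = 0;
        return 0;           /* fall through past the loop */
    }

    int main(void)
    {
        uint32_t lr = 3;    /* iteration count set up by DLS/WLS */
        int body_runs = 1;  /* first pass through the loop body */
        while (le_take_branch(&lr)) {
            body_runs++;
        }
        printf("loop body executed %d times\n", body_runs);
        return 0;
    }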
v8.1M implements a new 'branch future' feature, which is a
set of instructions that request the CPU to perform a branch
"in the future", when it reaches a particular execution address.
In hardware, the expected implementation is that the information
about the branch location and destination is cached and then
acted upon when execution reaches the specified address.
However the architecture permits an implementation to discard
this cached information at any point, and so guest code must
always include a normal branch insn at the branch point as
a fallback. In particular, an implementation is specifically
permitted to treat all BF insns as NOPs (which is equivalent
to discarding the cached information immediately).
For QEMU, implementing this caching of branch information
would be complicated and would not improve the speed of
execution at all, so we make the IMPDEF choice to implement
all BF insns as NOPs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-7-peter.maydell@linaro.org
The BLX immediate insn in the Thumb encoding always performs
a switch from Thumb to Arm state. This would be totally useless
in M-profile which has no Arm decoder, and so the instruction
does not exist at all there. Make the encoding UNDEF for M-profile.
(This part of the encoding space is used for the branch-future
and low-overhead-loop insns in v8.1M.)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-6-peter.maydell@linaro.org
The t32 decode has a group which represents a set of insns
which overlap with B_cond_thumb because they have [25:23]=111
(which is an invalid condition code field for the branch insn).
This group is currently defined using the {} overlap-OK syntax,
but it is almost entirely non-overlapping patterns. Switch
it over to use a non-overlapping group.
For this to be valid syntactically, CPS must move into the same
overlapping-group as the hint insns (CPS vs hints was the
only actual use of the overlap facility for the group).
The non-overlapping subgroup for CLREX/DSB/DMB/ISB/SB is no longer
necessary and so we can remove it (promoting those insns to
be members of the parent group).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-5-peter.maydell@linaro.org
From v8.1M, disabled-coprocessor handling changes slightly:
* coprocessors 8, 9, 14 and 15 are also governed by the
cp10 enable bit, like cp11
* an extra range of instruction patterns is considered
to be inside the coprocessor space
We previously marked these up with TODO comments; implement the
correct behaviour.
Unfortunately there is no ID register field which indicates this
behaviour. We could in theory test an unrelated ID register which
indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch
>= 3 (low-overhead-loops), but it seems better to simply define a new
ARM_FEATURE_V8_1M feature flag and use it for this and other
new-in-v8.1M behaviour that isn't identifiable from the ID registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-3-peter.maydell@linaro.org
Unlike many other bits in HCR_EL2, the description for this
bit does not contain the phrase "if ... this field behaves
as 0 for all purposes other than", so do not squash the bit
in arm_hcr_el2_eff.
Instead, replicate the E2H+TGE test in the two places that
require it.
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The reporting in AArch64.TagCheckFail only depends on PSTATE.EL,
and not the AccType of the operation. There are two guest
visible problems that affect LDTR and STTR because of this:
(1) Selecting TCF0 vs TCF1 to decide on reporting,
(2) Report "data abort same el" not "data abort lower el".
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
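A simplified, self-contained model of the corrected selection (register layout reduced to the essentials, not the real MTE helper code): both the TCF field and the reported abort level follow PSTATE.EL, so an LDTR/STTR issued at EL1 uses TCF1 and reports a same-EL data abort.

    #include <stdio.h>

    struct cpu {
        unsigned el;        /* PSTATE.EL */
        unsigned tcf0;      /* SCTLR_EL1.TCF0: tag checks for EL0 accesses */
        unsigned tcf1;      /* SCTLR_EL1.TCF1: tag checks for EL1 accesses */
    };

    /* Tag-check-fail reporting depends only on PSTATE.EL, not on whether
     * the access was an "unprivileged" LDTR/STTR. */
    static void tag_check_fail(const struct cpu *c)
    {
        unsigned tcf = (c->el == 0) ? c->tcf0 : c->tcf1;
        const char *abort = (c->el == 0) ? "data abort, lower EL"
                                         : "data abort, same EL";
        printf("EL%u: tcf=%u, reported as '%s'\n", c->el, tcf, abort);
    }

    int main(void)
    {
        struct cpu c = { .el = 1, .tcf0 = 1, .tcf1 = 2 };
        tag_check_fail(&c);   /* LDTR at EL1: uses TCF1, same-EL abort */
        return 0;
    }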
We already have the full ARMMMUIdx as computed from the
function parameter.
For the purpose of regime_has_2_ranges, we can ignore any
difference between AccType_Normal and AccType_Unpriv, which
would be the only difference between the passed mmu_idx
and arm_mmu_idx_el.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When TBI is enabled in a given regime, 56 bits of the address
are significant and we need to clear out any other matching
virtual addresses with differing tags.
The other uses of tlb_flush_page (without mmuidx) in this file
are only used by aarch32 mode.
Fixes: 38d931687f
Reported-by: Jordan Frank <jordanfrank@fb.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201016210754.818257-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
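A small sketch of why a tag-oblivious page flush is insufficient under TBI (constants as described above; the helper is hypothetical, not the actual change): two virtual addresses that differ only in the top-byte tag must be treated as the same page.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS 12
    #define TBI_SIGNIFICANT_BITS 56   /* bits [55:0] are significant under TBI */

    /* True if two virtual addresses name the same page once the top-byte
     * tag (bits [63:56]) is ignored, as a TBI-aware flush has to do. */
    static bool same_page_under_tbi(uint64_t a, uint64_t b)
    {
        uint64_t mask = ((UINT64_C(1) << TBI_SIGNIFICANT_BITS) - 1)
                        & ~((UINT64_C(1) << TARGET_PAGE_BITS) - 1);
        return ((a ^ b) & mask) == 0;
    }

    int main(void)
    {
        uint64_t va        = 0x0000000012345000ULL;
        uint64_t tagged_va = 0x5a00000012345123ULL;   /* same page, tag 0x5a */
        printf("same page under TBI: %d\n", same_page_under_tbi(va, tagged_va));
        return 0;
    }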
For AArch32, unlike the VCVT of integer to float, which honours the
rounding mode specified by the FPSCR, VCVT of fixed-point to float is
always round-to-nearest. (AArch64 fixed-point-to-float conversions
always honour the FPCR rounding mode.)
Implement this by providing _round_to_nearest versions of the
relevant helpers which set the rounding mode temporarily when making
the call to the underlying softfloat function.
We only need to change the VFP VCVT instructions, because the
standard FPSCR value used by the Neon VCVT is always set to
round-to-nearest, so we don't need to do the extra work of saving
and restoring the rounding mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201013103532.13391-1-peter.maydell@linaro.org
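The _round_to_nearest helpers follow a save/set/restore pattern around the conversion. A self-contained illustration of the same pattern using the standard C fenv API instead of QEMU's softfloat, so it runs on its own:

    #include <fenv.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Convert a signed 32-bit fixed-point value (with 'fbits' fraction bits)
     * to float, forcing round-to-nearest regardless of the current rounding
     * mode, then restoring the caller's mode afterwards. */
    static float fixed_to_float_round_to_nearest(int32_t val, int fbits)
    {
        int saved = fegetround();
        fesetround(FE_TONEAREST);
        float result = (float)val / (float)(1u << fbits);
        fesetround(saved);
        return result;
    }

    int main(void)
    {
        fesetround(FE_UPWARD);   /* caller's mode, e.g. taken from FPSCR */
        printf("%.6f\n", fixed_to_float_round_to_nearest(0x40000000, 16));
        printf("caller's mode restored: %d\n", fegetround() == FE_UPWARD);
        return 0;
    }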
The SMLAD instruction is supposed to:
* signed multiply Rn[15:0] * Rm[15:0]
* signed multiply Rn[31:16] * Rm[31:16]
* perform a signed addition of the products and Ra
* set Rd to the low 32 bits of the theoretical
infinite-precision result
* set the Q flag if the sign-extension of Rd
would differ from the infinite-precision result
(ie on overflow)
Our current implementation doesn't quite do this, though: it performs
an addition of the products setting Q on overflow, and then it adds
Ra, again possibly setting Q. This sometimes incorrectly sets Q when
the architecturally mandated only-check-for-overflow-once algorithm
does not. For instance:
r1 = 0x80008000; r2 = 0x80008000; r3 = 0xffffffff
smlad r0, r1, r2, r3
This is (-32768 * -32768) + (-32768 * -32768) - 1
The products are both 0x4000_0000, so when added together as 32-bit
signed numbers they overflow (and QEMU sets Q), but because the
addition of Ra == -1 brings the total back down to 0x7fff_ffff
there is no overflow for the complete operation and setting Q is
incorrect.
Fix this edge case by resorting to 64-bit arithmetic for the
case where we need to add three values together.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201009144712.11187-1-peter.maydell@linaro.org
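A runnable stand-alone model of the corrected behaviour (not the actual translate.c code): perform both 16x16 products and the accumulation in 64-bit arithmetic, then check for overflow exactly once on the final value.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* SMLAD model: Rd = low 32 bits of (Rn.lo*Rm.lo + Rn.hi*Rm.hi + Ra),
     * with Q set only if the full-precision sum does not fit in 32 bits. */
    static uint32_t smlad(uint32_t rn, uint32_t rm, uint32_t ra, bool *q)
    {
        int64_t p1 = (int64_t)(int16_t)rn * (int16_t)rm;
        int64_t p2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(rm >> 16);
        int64_t sum = p1 + p2 + (int32_t)ra;   /* single 64-bit accumulation */

        if (sum != (int32_t)sum) {
            *q = true;                         /* overflow checked exactly once */
        }
        return (uint32_t)sum;
    }

    int main(void)
    {
        bool q = false;
        /* The edge case from the commit message: the product sum overflows,
         * but adding Ra == -1 brings the total back into range, so Q must
         * stay clear. */
        uint32_t rd = smlad(0x80008000u, 0x80008000u, 0xffffffffu, &q);
        printf("rd = 0x%08x, Q = %d\n", rd, q);
        return 0;
    }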
Merge remote-tracking branch 'remotes/philmd-gitlab/tags/mips-next-20201017' into staging
MIPS patches queue
. Fix some comment spelling errors
. Demacro some TCG helpers
. Add loongson-ext lswc2/lsdc2 group of instructions
. Log unimplemented cache opcode
. Increase number of TLB entries on the 34Kf core
. Allow the CPU to use dynamic frequencies
. Calculate the CP0 timer period using the CPU frequency
. Set CPU frequency for each machine
. Fix Malta FPGA I/O region size
. Allow running qtests when ROM is missing
. Add record/replay acceptance tests
. Update MIPS CPU documentation
. MAINTAINERS updates
CI jobs results:
https://gitlab.com/philmd/qemu/-/pipelines/203931842
https://travis-ci.org/github/philmd/qemu/builds/736491461
https://cirrus-ci.com/build/6272264062631936
https://app.shippable.com/github/philmd/qemu/runs/886/summary/console
# gpg: Signature made Sat 17 Oct 2020 14:59:53 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* remotes/philmd-gitlab/tags/mips-next-20201017: (44 commits)
target/mips: Increase number of TLB entries on the 34Kf core (16 -> 64)
MAINTAINERS: Remove duplicated Malta test entries
MAINTAINERS: Downgrade MIPS Boston to 'Odd Fixes', fix Paul Burton mail
MAINTAINERS: Put myself forward for MIPS target
MAINTAINERS: Remove myself
docs/system: Update MIPS CPU documentation
tests/acceptance: Add MIPS record/replay tests
hw/mips: Remove exit(1) in case of missing ROM
hw/mips: Rename TYPE_MIPS_BOSTON to TYPE_BOSTON
hw/mips: Simplify code using ROUND_UP(INITRD_PAGE_SIZE)
hw/mips: Simplify loading 64-bit ELF kernels
hw/mips/malta: Use clearer qdev style
hw/mips/malta: Move gt64120 related code together
hw/mips/malta: Fix FPGA I/O region size
target/mips/cpu: Display warning when CPU is used without input clock
hw/mips/cps: Do not allow use without input clock
hw/mips/malta: Set CPU frequency to 320 MHz
hw/mips/boston: Set CPU frequency to 1 GHz
hw/mips/cps: Expose input clock and connect it to CPU cores
hw/mips/jazz: Correct CPU frequencies
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
During my split of cpus.c, the code line
"current_cpu = cpu"
was removed by mistake, causing HAX to break.
This commit fixes the situation by restoring it.
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Fixes: e92558e4bf
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20201016080032.13914-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Per "MIPS32 34K Processor Core Family Software User's Manual,
Revision 01.13" page 8 in "Joint TLB (JTLB)" section:
"The JTLB is a fully associative TLB cache containing 16, 32,
or 64-dual-entries mapping up to 128 virtual pages to their
corresponding physical addresses."
There is no particular reason to restrict the 34Kf core model to
16 TLB entries, so raise its config to 64.
This is helpful for other projects, in particular the Yocto Project:
the Yocto Project uses the qemu-system-mips 34Kf CPU model to run its
32-bit MIPS CI loop. It was observed that CI test execution time was
almost twice as long as for the 64-bit MIPS variant running under the
MIPS64R2-generic model. Investigation concluded that the difference in
the number of TLB entries (16 for 34Kf vs 64 for MIPS64R2-generic) is
responsible for most of the difference in CI real execution time: with
16 TLBs, the Linux userland thrashes the TLB more and has to execute
more instructions in TLB refill handler calls, so it runs much longer.
(https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg03428.html)
Buglink: https://bugzilla.yoctoproject.org/show_bug.cgi?id=13992
Reported-by: Victor Kamensky <kamensky@cisco.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201016133317.553068-1-f4bug@amsat.org>
All our QOM users provide an input clock. In order to avoid
future machines being added without a clock, display a warning.
User-mode emulation uses the CP0 timer with the RDHWR instruction
(see commit cdfcad7883), so keep using the fixed 200 MHz clock
there without displaying any warning. Only display the warning in
system-mode emulation.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-22-f4bug@amsat.org>
Introduce a helper to create a MIPS CPU and connect it to
a reference clock. This helper is not MIPS-specific, but so
far only MIPS CPUs need it.
Suggested-by: Huacai Chen <zltjiangshi@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-13-f4bug@amsat.org>
Use the Clock API and let the CPU object have an input clock.
If no clock is connected, keep using the default frequency of
200 MHz used since the introduction of the 'r4k' machine in
commit 6af0bf9c7c.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-12-f4bug@amsat.org>
Since not all CPU cores use a CP0 timer running at half the
frequency of the CPU, make this variable a property.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-11-f4bug@amsat.org>
The CP0 timer period is a function of the CPU frequency.
Start using the default values, which will be replaced by
properties in the next commits.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201012095804.3335117-10-f4bug@amsat.org>
Currently the CP0 timer period is fixed at 10 ns, corresponding
to a fixed CPU frequency of 200 MHz (using half the speed of the
CPU).
In a few commits we will be able to use a different CPU frequency.
In preparation, move the cp0_count_ns variable to CPUMIPSState
so we can modify it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201012095804.3335117-9-f4bug@amsat.org>
The TIMER_PERIOD value of '10 ns' can be explained by looking at
commit 6af0bf9c7c, where the CPU frequency is 200 MHz
and the CP0 default count rate is half the frequency of the
CPU. Document that.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-8-f4bug@amsat.org>
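Written out as a sketch (constant names are illustrative), the arithmetic behind the 10 ns value is:

    #include <stdint.h>
    #include <stdio.h>

    #define NANOSECONDS_PER_SECOND 1000000000LL

    int main(void)
    {
        int64_t cpu_freq_hz = 200 * 1000 * 1000;  /* 200 MHz, per 6af0bf9c7c */
        int cp0_count_rate = 2;                   /* CP0_Count ticks at CPU/2 */

        int64_t cp0_count_ns = NANOSECONDS_PER_SECOND
                               / (cpu_freq_hz / cp0_count_rate);
        printf("CP0 timer period: %lld ns\n", (long long)cp0_count_ns);  /* 10 */
        return 0;
    }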
Name variables holding nanoseconds with the '_ns' suffix.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Message-Id: <20201012095804.3335117-7-f4bug@amsat.org>
The get_random() helper uses the CP0_Wired register, which is
unrelated to the CP0_Count register used as a timer.
Commit e16fe40c87 ("Move the MIPS CPU timer in a separate file")
incorrectly moved this get_random() helper with timer specific
code. Move it back to generic CP0 helpers.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Luc Michel <luc@lmichel.fr>
Message-Id: <20201012095804.3335117-6-f4bug@amsat.org>
In case the guest uses a cache opcode we are not expecting,
log it to give us a chance to notice it, in case we should
actually do something.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-4-f4bug@amsat.org>
QEMU does not model caches, so there is not much to do with the
Invalidate/Writeback opcodes. Make this explicit by adding a comment.
Suggested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-3-f4bug@amsat.org>
The cache operation is encoded in bits [20:18] of the instruction.
The 'op' argument of helper_cache() contains the bits [20:16].
Extract the 3 bits and parse them using a switch case. This allows
us to handle multiple cache types (the cache type is encoded in
bits [17:16]).
Previously the if() block was only checking the D-Cache (Primary
Data or Unified Primary). Now we also handle the I-Cache (Primary
Instruction), S-Cache (Secondary) and T-Cache (Tertiary).
Reported-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-2-f4bug@amsat.org>
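A self-contained sketch of the decode described above (field positions from the text; the switch arms are placeholders rather than the real op_helper.c behaviour):

    #include <stdint.h>
    #include <stdio.h>

    /* 'op' holds instruction bits [20:16]: bits [4:2] select the operation
     * (instruction bits [20:18]) and bits [1:0] select the cache
     * (instruction bits [17:16]: 0 = I, 1 = D, 2 = S, 3 = T). */
    static void do_cache(uint32_t op)
    {
        uint32_t cache_operation = (op >> 2) & 0x7;
        uint32_t cache_type = op & 0x3;

        switch (cache_operation) {
        case 0:   /* index invalidate/writeback variants */
        case 4:   /* hit invalidate variants */
            /* QEMU models no caches, so there is nothing to do here. */
            printf("cache op %u on cache %u: no-op\n",
                   cache_operation, cache_type);
            break;
        default:
            printf("unimplemented cache op %u on cache %u\n",
                   cache_operation, cache_type);
            break;
        }
    }

    int main(void)
    {
        do_cache(0x11);   /* operation 4, cache 1 (D-cache) */
        return 0;
    }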
LDC2/SDC2 opcodes have been rewritten as the "load & store with offset"
group of instructions by the Loongson-EXT ASE.
This patch adds an implementation of these instructions:
gslbx: load 1 byte to GPR
gslhx: load 2 bytes to GPR
gslwx: load 4 bytes to GPR
gsldx: load 8 bytes to GPR
gslwxc1: load 4 bytes to FPR
gsldxc1: load 8 bytes to FPR
gssbx: store 1 byte from GPR
gsshx: store 2 bytes from GPR
gsswx: store 4 bytes from GPR
gssdx: store 8 bytes from GPR
gsswxc1: store 4 bytes from FPR
gssdxc1: store 8 bytes from FPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602831120-3377-5-git-send-email-chenhc@lemote.com>
LWC2 & SWC2 have been rewritten by the Loongson-EXT vendor ASE
as the "load/store quad word" and "shifted load/store" groups of
instructions.
This patch adds an implementation of these instructions:
gslwlc1: similar to lwl but RT is FPR instead of GPR
gslwrc1: similar to lwr but RT is FPR instead of GPR
gsldlc1: similar to ldl but RT is FPR instead of GPR
gsldrc1: similar to ldr but RT is FPR instead of GPR
gsswlc1: similar to swl but RT is FPR instead of GPR
gsswrc1: similar to swr but RT is FPR instead of GPR
gssdlc1: similar to sdl but RT is FPR instead of GPR
gssdrc1: similar to sdr but RT is FPR instead of GPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Message-Id: <1602831120-3377-4-git-send-email-chenhc@lemote.com>
[PMD: Reuse t1 on MIPS32, reintroduce t2/fp0]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
LWC2 & SWC2 have been rewritten by the Loongson-EXT vendor ASE
as the "load/store quad word" and "shifted load/store" groups of
instructions.
This patch adds an implementation of these instructions:
gslq: load 16 bytes to GPR
gssq: store 16 bytes from GPR
gslqc1: load 16 bytes to FPR
gssqc1: store 16 bytes from FPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Message-Id: <1602831120-3377-3-git-send-email-chenhc@lemote.com>
[PMD: Restrict t1 variable to TARGET_MIPS64, remove unused t2/fp0]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-4-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-3-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-2-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are many spelling errors in the comments in target/mips/.
Use a spellchecker to find and fix them.
Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201009064449.2336-7-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Icelake-Client CPU models will be removed in the future.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1600758855-80046-2-git-send-email-robert.hu@linux.intel.com>
[ehabkost: reword deprecation note, fix version in doc]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Implement the ability to mark some CPU model versions as deprecated.
When such a CPU model is chosen, print a warning. The warning message
can be customized, e.g. to suggest an alternative CPU model to be
used instead.
The deprecation message will be printed by x86_cpu_list_entry(),
e.g. for '-cpu help'.
The QMP command 'query-cpu-definitions' will return a bool value
indicating the deprecation status.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1600758855-80046-1-git-send-email-robert.hu@linux.intel.com>
[ehabkost: reword commit message]
[ehabkost: Handle NULL cpu_type]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As IRQ routing is always available on x86,
kvm_allows_irq0_override() will always return true, so we don't
need the function anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-4-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM_CAP_IRQ_ROUTING is always available on x86, so replace checks
for kvm_has_gsi_routing() and KVM_CAP_IRQ_ROUTING with asserts.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM_CAP_IRQ_ROUTING is available since 2009 (Linux v2.6.30), so
it's safe to just make it a requirement on x86.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
With x2APIC enabled, configurations can have more than 255 cores.
The device-add test was noticed hitting an assert during CPU
hotplug with core_id > 255. This is due to the assert check in the
CPUID 0x8000001E handling.
Remove the assert check and fix the problem.
Fixes the bug:
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1834200
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <160072824160.9666.8890355282135970684.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
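A heavily hedged sketch of the general shape of such a fix (the field name and 8-bit width are illustrative; the exact CPUID 0x8000001E layout and value derivation are whatever the patch implements): instead of asserting that core_id fits, mask it to the width of the destination field.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative encoder for an 8-bit core-id field in a CPUID register.
     * With x2APIC-sized topologies core_id can exceed 255, so instead of
     * asserting that it fits, keep only the bits the field can hold. */
    static uint32_t encode_core_id_field(unsigned core_id)
    {
        /* Old approach (breaks CPU hotplug with core_id > 255):
         *     assert(core_id <= 255);
         */
        return core_id & 0xff;
    }

    int main(void)
    {
        printf("core_id 300 -> field 0x%02x\n", encode_core_id_field(300));
        return 0;
    }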