Commit Graph

480 Commits

Author SHA1 Message Date
Richard Henderson
4818c3743b target/arm: Remove redundant s->pc & ~1
The thumb bit has already been removed from s->pc, and is always even.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
16e0d8234e target/arm: Introduce add_reg_for_lit
Provide a common routine for the places that require ALIGN(PC, 4)
as the base address as opposed to plain PC.  The two are always
the same for A32, but the difference is meaningful for thumb mode.
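
For illustration, a minimal sketch of what such a helper could look
like in translate.c (read_pc() and load_reg() are the neighbouring
helpers this sketch assumes; it is not necessarily the exact
committed code):

    /* Return a temp holding reg + ofs, using ALIGN(PC, 4) for r15. */
    static TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
    {
        TCGv_i32 tmp;

        if (reg == 15) {
            tmp = tcg_temp_new_i32();
            tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
        } else {
            tmp = load_reg(s, reg);
            tcg_gen_addi_i32(tmp, tmp, ofs);
        }
        return tmp;
    }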

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
fdbcf6329d target/arm: Introduce read_pc
We currently have 3 different ways of computing the architectural
value of "PC" as seen in the ARM ARM.

The value of s->pc has been incremented past the current insn,
but that is all.  Thus for a32, PC = s->pc + 4; for t32, PC = s->pc;
for t16, PC = s->pc + 2.  These differing computations make it
impossible at present to unify the various code paths.

With the newly introduced s->pc_curr, we can compute the correct
value for all cases, using the formula given in the ARM ARM.

This changes the behaviour for load_reg() and load_reg_var()
when called with reg==15 from a 32-bit Thumb instruction:
previously they would have returned the incorrect value
of pc_curr + 6, and now they will return the architecturally
correct value of PC, which is pc_curr + 4. This will not
affect well-behaved guest software, because all of the places
we call these functions from T32 code are instructions where
using r15 is UNPREDICTABLE. Using the architectural PC value
here is more consistent with the T16 and A32 behaviour.
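
A minimal sketch of the resulting helper, assuming the s->pc_curr
field added by the previous patch (the +8/+4 offsets are the
architectural PC values given in the ARM ARM):

    /* The architectural value of PC for the insn at s->pc_curr. */
    static inline uint32_t read_pc(DisasContext *s)
    {
        return s->pc_curr + (s->thumb ? 4 : 8);
    }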

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-4-richard.henderson@linaro.org
[PMM: added commit message note about UNPREDICTABLE T32 cases]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
43722a6d4f target/arm: Introduce pc_curr
Add a new field to retain the address of the instruction currently
being translated.  The 32-bit uses are all within subroutines used
by a32 and t32.  This will become less obvious when t16 support is
merged with a32+t32, and having a clear definition will help.

Convert aarch64 as well for consistency.  Note that there is one
instance of a pre-assert fprintf that used the wrong value for the
address of the current instruction.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
331b1ca616 target/arm: Pass in pc to thumb_insn_is_16bit
This function is used in two different contexts, and it will be
clearer if the function is given the address to which it applies.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Peter Maydell
8bd587c106 target/arm: Fix routing of singlestep exceptions
When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
2019-08-16 14:02:49 +01:00
Peter Maydell
c1d5f50f09 target/arm: Factor out 'generate singlestep exception' function
Factor out code to 'generate a singlestep exception', which is
currently repeated in four places.

To do this we need to also pull the identical copies of the
gen_exception() function out of translate-a64.c and translate.c
into translate.h.

(There is a bug in the code: we're taking the exception to the wrong
target EL.  This will be simpler to fix if there's only one place to
do it.)
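
As a rough sketch, the shared helper in translate.h could look like
this (syn_swstep() and default_exception_el() are assumed from the
existing code; the name and signature may not match the patch exactly):

    /* Generate an architectural singlestep exception. */
    static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
    {
        /* Still routed to the "default" EL here; fixed by the next patch. */
        gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, isv, ex),
                      default_exception_el(s));
    }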

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-2-peter.maydell@linaro.org
2019-08-16 14:02:48 +01:00
Peter Maydell
5529de1e55 target/arm: Execute Thumb instructions when their condbits are 0xf
Thumb instructions in an IT block are set up to be conditionally
executed depending on a set of condition bits encoded into the IT
bits of the CPSR/XPSR.  The architecture specifies that if the
condition bits are 0b1111 this means "always execute" (like 0b1110),
not "never execute"; we were treating it as "never execute".  (See
the ConditionHolds() pseudocode in both the A-profile and M-profile
Arm ARM.)

This is a bit of an obscure corner case, because the only legal
way to get to an 0b1111 set of condbits is to do an exception
return which sets the XPSR/CPSR up that way. An IT instruction
which encodes a condition sequence that would include an 0b1111 is
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
Add a comment noting that we take the latter option.
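
A minimal sketch of the corrected check in the Thumb translator
(arm_skip_unless() is assumed to be the helper that emits the
conditional skip; the shape is illustrative only):

    if (dc->condexec_mask) {
        uint32_t cond = dc->condexec_cond;

        /* ConditionHolds(): 0xe and 0xf both mean "always execute". */
        if (cond != 0x0e && cond != 0x0f) {
            arm_skip_unless(dc, cond);
        }
    }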

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190617175317.27557-7-peter.maydell@linaro.org
2019-07-04 17:25:30 +01:00
Philippe Mathieu-Daudé
864806156a target/arm: Move CPU state dumping routines to cpu.c
Suggested-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-11-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01 17:29:00 +01:00
Philippe Mathieu-Daudé
9798ac7162 target/arm: Fix coding style issues
Since we'll move this code around, fix its style first.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701132516.26392-9-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01 17:29:00 +01:00
Peter Maydell
d9eea52c67 target/arm: Remove unused cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d
Remove the now unused TCG globals cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d.

cpu_M0 is still used by the iwmmxt code, and cpu_V0 and
cpu_V1 are used by both iwmmxt and Neon.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-13-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
b66f6b9981 target/arm: Stop using deprecated functions in NEON_2RM_VCVT_F32_F16
Remove some old constructs from NEON_2RM_VCVT_F32_F16 code:
 * don't use cpu_F0s
 * don't use tcg_gen_st_f32

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-12-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
58f2682eee target/arm: stop using deprecated functions in NEON_2RM_VCVT_F16_F32
Remove some old constructs from NEON_2RM_VCVT_F16_F32 code:
 * don't use cpu_F0s
 * don't use tcg_gen_ld_f32

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-11-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
c253dd7832 target/arm: Stop using cpu_F0s in Neon VCVT fixed-point ops
Stop using cpu_F0s in the Neon VCVT fixed-point operations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-10-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
60737ed578 target/arm: Stop using cpu_F0s for Neon f32/s32 VCVT
Stop using cpu_F0s for the Neon f32/s32 VCVT operations.
Since this is the last user of cpu_F0s in the Neon 2rm-op
loop, we can remove the handling code for it too.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-9-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
9a011fece7 target/arm: Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F
Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-8-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
30bf0a018f target/arm: Stop using cpu_F0s for NEON_2RM_VCVT[ANPM][US]
Stop using cpu_F0s for the NEON_2RM_VCVT[ANPM][US] ops.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-7-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
3b52ad1fae target/arm: Stop using cpu_F0s for NEON_2RM_VRINT*
Switch NEON_2RM_VRINT* away from using cpu_F0s.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-6-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
cedcc96fc7 target/arm: Stop using cpu_F0s for NEON_2RM_VNEG_F
Switch NEON_2RM_VNEG_F away from using cpu_F0s.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-5-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
fd8a68cdcf target/arm: Stop using cpu_F0s for NEON_2RM_VABS_F
Where Neon instructions are floating point operations, we
mostly use the old VFP utility functions like gen_vfp_abs()
which work on the TCG globals cpu_F0s and cpu_F1s. The
Neon for-each-element loop conditionally loads the inputs
into either a plain old TCG temporary for most operations
or into cpu_F0s for float operations, and similarly stores
back either cpu_F0s or the temporary.

Switch NEON_2RM_VABS_F away from using cpu_F0s, and
update neon_2rm_is_float_op() accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-4-peter.maydell@linaro.org
2019-06-17 15:14:19 +01:00
Peter Maydell
3111bfc2da target/arm: Convert float-to-integer VCVT insns to decodetree
Convert the float-to-integer VCVT instructions to decodetree.
Since these are the last unconverted instructions, we can
delete the old decoder structure entirely now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
e3d6f4290c target/arm: Convert VCVT fp/fixed-point conversion insns to decodetree
Convert the VCVT (between floating-point and fixed-point) instructions
to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
92073e9474 target/arm: Convert VJCVT to decodetree
Convert the VJCVT instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
8fc9d8918c target/arm: Convert integer-to-float insns to decodetree
Convert the VCVT integer-to-float instructions to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
6ed7e49c36 target/arm: Convert double-single precision conversion insns to decodetree
Convert the VCVT double/single precision conversion insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
e25155f55d target/arm: Convert VFP round insns to decodetree
Convert the VFP round-to-integer instructions VRINTR, VRINTZ and
VRINTX to decodetree.

These instructions were only introduced as part of the "VFP misc"
additions in v8A, so we check this. The old decoder's implementation
was incorrectly providing them even for v7A CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
cdfd14e86a target/arm: Convert the VCVT-to-f16 insns to decodetree
Convert the VCVTT and VCVTB instructions which convert from
f32 and f64 to f16 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d we can perform a direct 16 bit
store of the right half of the input single-precision register
rather than doing a load/modify/store sequence on the full
32 bits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
b623d803dd target/arm: Convert the VCVT-from-f16 insns to decodetree
Convert the VCVTT, VCVTB instructions that deal with conversion
from half-precision floats to f32 or 64 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d we can perform a direct 16 bit
load of the right half of the input single-precision register
rather than loading the full 32 bits and then doing a
separate shift or sign-extension.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
386bba2368 target/arm: Convert VFP comparison insns to decodetree
Convert the VFP comparison instructions to decodetree.

Note that comparison instructions should not honour the VFP
short-vector length and stride information: they are scalar-only
operations.  This applies to all the 2-operand instructions except
for VMOV, VABS, VNEG and VSQRT.  (In the old decoder this is
implemented via the "if (op == 15 && rn > 3) { veclen = 0; }" check.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:06 +01:00
Peter Maydell
17552b979e target/arm: Convert VMOV (register) to decodetree
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
b8474540cb target/arm: Convert VSQRT to decodetree
Convert the VSQRT instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
1882651afd target/arm: Convert VNEG to decodetree
Convert the VNEG instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
90287e22c9 target/arm: Convert VABS to decodetree
Convert the VFP VABS instruction to decodetree.

Unlike the 3-op versions, we don't pass fpst to the VFPGen2OpSPFn or
VFPGen2OpDPFn because none of the operations which use this format
and support short vectors will need it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
b518c753f0 target/arm: Convert VMOV (imm) to decodetree
Convert the VFP VMOV (immediate) instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
d4893b01d2 target/arm: Convert VFP fused multiply-add insns to decodetree
Convert the VFP fused multiply-add instructions (VFNMA, VFNMS,
VFMA, VFMS) to decodetree.

Note that in the old decode structure we were implementing
these to honour the VFP vector stride/length. These instructions
were introduced in VFPv4, and in the v7A architecture they
are UNPREDICTABLE if the vector stride or length are non-zero.
In v8A they must UNDEF if stride or length are non-zero, like
all VFP instructions; we choose to UNDEF always.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
519ee7ae31 target/arm: Convert VDIV to decodetree
Convert the VDIV instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
8fec9a1192 target/arm: Convert VSUB to decodetree
Convert the VSUB instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
ce28b30371 target/arm: Convert VADD to decodetree
Convert the VADD instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
43c4be1236 target/arm: Convert VNMUL to decodetree
Convert the VNMUL instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
88c5188ced target/arm: Convert VMUL to decodetree
Convert the VMUL instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
8a483533ad target/arm: Convert VFP VNMLA to decodetree
Convert the VFP VNMLA instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:05 +01:00
Peter Maydell
c54a416cc6 target/arm: Convert VFP VNMLS to decodetree
Convert the VFP VNMLS instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
e7258280d4 target/arm: Convert VFP VMLS to decodetree
Convert the VFP VMLS instruction to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
266bd25c48 target/arm: Convert VFP VMLA to decodetree
Convert the VFP VMLA instruction to decodetree.

This is the first of the VFP 3-operand data processing instructions,
so we include in this patch the code which loops over the elements
for an old-style VFP vector operation. The existing code to do this
looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since
we are going to be converting instructions one at a time anyway
we can take the opportunity to make the new loop use TCG temporaries,
which means we can do that conversion one operation at a time
rather than needing to do it all in one go.

We include an UNDEF check which was missing in the old code:
short-vector operations (with stride or length non-zero) were
deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec
field does not indicate that support for short vectors is present
we UNDEF the operations that would use them. (This is a change
of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which
previously were all incorrectly allowing short-vector operations.)

Note that the conversion fixes a bug in the old code for the
case of VFP short-vector "mixed scalar/vector operations". These
happen where the destination register is in a vector bank
but the second operand is in a scalar bank. For example
  vmla.f64 d10, d1, d16   with length 2 stride 2
is equivalent to the pair of scalar operations
  vmla.f64 d10, d1, d16
  vmla.f64 d8, d3, d16
where the destination and first input register cycle through
their vector but the second input is scalar (d16). In the
old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d}
as a temporary output for the multiply, which trashes the
second input operand. For the fully-scalar case (where we
never do a second iteration) and the fully-vector case
(where the loop loads the new second input operand) this
doesn't matter, but for the mixed scalar/vector case we
will end up using the wrong value for later loop iterations.
In the new code we use TCG temporaries and so avoid the bug.
This bug is present for all the multiply-accumulate insns
that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.

Note 2: the expression used to calculate the next register
number in the vector bank is not in fact correct; we leave
this behaviour unchanged from the old decoder and will
fix this bug later in the series.
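
A much-simplified sketch of the new per-iteration structure using TCG
temporaries (the register-advance helpers and some of the names below
are illustrative, not the exact committed code):

    TCGv_i64 vn = tcg_temp_new_i64();
    TCGv_i64 vm = tcg_temp_new_i64();
    TCGv_i64 vd = tcg_temp_new_i64();

    for (;;) {
        neon_load_reg64(vn, rn);
        neon_load_reg64(vm, rm);  /* scalar operand re-read each time, never clobbered */
        neon_load_reg64(vd, rd);  /* accumulate input for VMLA */
        gen_helper_vfp_muld(vn, vn, vm, fpst);
        gen_helper_vfp_addd(vd, vd, vn, fpst);
        neon_store_reg64(vd, rd);

        if (veclen == 0) {
            break;
        }
        veclen--;
        rd = vfp_advance_dreg(rd, delta_d);  /* hypothetical stride helpers */
        rn = vfp_advance_dreg(rn, delta_d);
        rm = vfp_advance_dreg(rm, delta_m);
    }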

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
3993d0407d target/arm: Remove VLDR/VSTR/VLDM/VSTM use of cpu_F0s and cpu_F0d
Expand out the sequences in the new decoder VLDR/VSTR/VLDM/VSTM trans
functions which perform the memory accesses by going via the TCG
globals cpu_F0s and cpu_F0d, to use local TCG temps instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
fa288de272 target/arm: Convert the VFP load/store multiple insns to decodetree
Convert the VFP load/store multiple insns to decodetree.
This includes tightening up the UNDEF checking for pre-VFPv3
CPUs which only have D0-D15 : they now UNDEF for any access
to D16-D31, not merely when the smallest register in the
transfer list is in D16-D31.

This conversion does not try to share code between the single
precision and the double precision versions; this looks a bit
duplicative of code, but it leaves the door open for a future
refactoring which gets rid of the use of the "F0" registers
by inlining the various functions like gen_vfp_ld() and
gen_mov_F0_reg() which are hiding "if (dp) { ... } else { ... }"
conditionalisation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
79b02a3b52 target/arm: Convert VFP VLDR and VSTR to decodetree
Convert the VFP single load/store insns VLDR and VSTR to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
81f681106e target/arm: Convert VFP two-register transfer insns to decodetree
Convert the VFP two-register transfer instructions to decodetree
(in the v8 Arm ARM these are the "Advanced SIMD and floating-point
64-bit move" encoding group).

Again, we expand out the sequences involving gen_vfp_msr() and
gen_msr_vfp().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
a9ab50011a target/arm: Convert "single-precision" register moves to decodetree
Convert the "single-precision" register moves to decodetree:
 * VMSR
 * VMRS
 * VMOV between general purpose register and single precision

Note that the VMSR/VMRS conversions make our handling of
the "should this UNDEF?" checks consistent between the two
instructions:
 * VMSR to MVFR0, MVFR1, MVFR2 now UNDEF from EL0
   (previously was a nop)
 * VMSR to FPSID now UNDEFs from EL0 or if VFPv3 or better
   (previously was a nop)
 * VMSR to FPINST and FPINST2 now UNDEF if VFPv3 or better
   (previously would write to the register, which had no
   guest-visible effect because we always UNDEF reads)

We also tighten up the decode: we were previously underdecoding
some SBZ or SBO bits.

The conversion of VMOV_single includes the expansion out of the
gen_mov_F0_vreg()/gen_vfp_mrs() and gen_mov_vreg_F0()/gen_vfp_msr()
sequences into the simpler direct load/store of the TCG temp via
neon_{load,store}_reg32(): we know in the new function that we're
always single-precision, we don't need to use the old-and-deprecated
cpu_F0* TCG globals, and we don't happen to have the declaration of
gen_vfp_msr() and gen_vfp_mrs() at the point in the file where the
new function is.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
9851ed9269 target/arm: Convert "double-precision" register moves to decodetree
Convert the "double-precision" register moves to decodetree:
this covers VMOV scalar-to-gpreg, VMOV gpreg-to-scalar and VDUP.

Note that the conversion process has tightened up a few of the
UNDEF encoding checks: we now correctly forbid:
 * VMOV-to-gpr with U:opc1:opc2 == 10x00 or x0x10
 * VMOV-from-gpr with opc1:opc2 == 0x10
 * VDUP with B:E == 11
 * VDUP with Q == 1 and Vn<0> == 1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
The accesses of elements < 32 bits could be improved by doing
direct ld/st of the right size rather than 32-bit read-and-shift
or read-modify-write, but we leave this for later cleanup,
since this series is generally trying to stick to fixing
the decode.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
160f3b64c5 target/arm: Add helpers for VFP register loads and stores
The current VFP code has two different idioms for
loading and storing from the VFP register file:
 1 using the gen_mov_F0_vreg() and similar functions,
   which load and store to a fixed set of TCG globals
   cpu_F0s, CPU_F0d, etc
 2 by direct calls to tcg_gen_ld_f64() and friends

We want to phase out idiom 1 (because the use of the
fixed globals is a relic of a much older version of TCG),
but idiom 2 is quite longwinded:
 tcg_gen_ld_f64(tmp, cpu_env, vfp_reg_offset(true, reg))
requires us to specify the 64-bitness twice, once in
the function name and once by passing 'true' to
vfp_reg_offset(). There's no guard against accidentally
passing the wrong flag.

Instead, let's move to a convention of accessing 64-bit
registers via the existing neon_load_reg64() and
neon_store_reg64(), and provide new neon_load_reg32()
and neon_store_reg32() for the 32-bit equivalents.

Implement the new functions and use them in the code in
translate-vfp.inc.c. We will convert the rest of the VFP
code as we do the decodetree conversion in subsequent
commits.
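
A minimal sketch of the 32-bit accessors, assuming the existing
vfp_reg_offset() helper (whose first argument selects double vs
single precision):

    static inline void neon_load_reg32(TCGv_i32 var, int reg)
    {
        tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }

    static inline void neon_store_reg32(TCGv_i32 var, int reg)
    {
        tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }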

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
f7bbb8f31f target/arm: Move the VFP trans_* functions to translate-vfp.inc.c
Move the trans_*() functions we've just created from translate.c
to translate-vfp.inc.c. This is pure code motion with no textual
changes (this can be checked with 'git show --color-moved').

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
c2a46a914c target/arm: Convert VCVTA/VCVTN/VCVTP/VCVTM to decodetree
Convert the VCVTA/VCVTN/VCVTP/VCVTM instructions to decodetree.
trans_VCVT() is temporarily left in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
e3bb599d16 target/arm: Convert VRINTA/VRINTN/VRINTP/VRINTM to decodetree
Convert the VRINTA/VRINTN/VRINTP/VRINTM instructions to decodetree.
Again, trans_VRINT() is temporarily left in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
Peter Maydell
f65988a1ef target/arm: Convert VMINNM, VMAXNM to decodetree
Convert the VMINNM and VMAXNM instructions to decodetree.
As with VSEL, we leave the trans_VMINMAXNM() function
in translate.c for the moment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:03 +01:00
Peter Maydell
b3ff4b87b4 target/arm: Convert the VSEL instructions to decodetree
Convert the VSEL instructions to decodetree.
We leave trans_VSEL() in translate.c for now as this allows
the patch to show just the changes from the old handle_vsel().

In the old code the check for "do D16-D31 exist" was hidden in
the VFP_DREG macro, and assumed that VFPv3 always implied that
D16-D31 exist. In the new code we do the correct ID register test.
This gives identical behaviour for most of our CPUs, and fixes
previously incorrect handling for Cortex-R5F, Cortex-M4 and
Cortex-M33, which all implement VFPv3 or better with only 16
double-precision registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:03 +01:00
Peter Maydell
06db8196bb target/arm: Factor out VFP access checking code
Factor out the VFP access checking code so that we can use it in the
leaf functions of the decodetree decoder.

We call the function full_vfp_access_check() so we can keep
the more natural vfp_access_check() for a version which doesn't
have the 'ignore_vfp_enabled' flag -- that way almost all VFP
insns will be able to use vfp_access_check(s) and only the
special-register access function will have to use
full_vfp_access_check(s, ignore_vfp_enabled).
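
As a sketch, the relationship between the two entry points is simply
(the real checking logic lives in full_vfp_access_check()):

    /* Normal case: ignore_vfp_enabled is false. */
    static bool vfp_access_check(DisasContext *s)
    {
        return full_vfp_access_check(s, false);
    }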

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:03 +01:00
Peter Maydell
78e138bc1f target/arm: Add stubs for AArch32 VFP decodetree
Add the infrastructure for building and invoking a decodetree decoder
for the AArch32 VFP encodings.  At the moment the new decoder covers
nothing, so we always fall back to the existing hand-written decode.

We need to have one decoder for the unconditional insns and one for
the conditional insns, as otherwise the patterns for conditional
insns would incorrectly match against the unconditional ones too.

Since translate.c is over 14,000 lines long and we're going to be
touching pretty much every line of the VFP code as part of the
decodetree conversion, we create a new translate-vfp.inc.c to hold
the code which deals with VFP in the new scheme.  It should be
possible to convert this into a standalone translation unit
eventually, but the conversion process will be much simpler if we
simply #include it midway through translate.c to start with.
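
For readers new to decodetree, a hypothetical pattern and its handler
fit together roughly like this (the pattern bits and names below are
made up for illustration, not taken from the real vfp.decode):

    /*
     * vfp-uncond.decode would contain a pattern such as (hypothetical):
     *   VEXAMPLE  1111 1110 0.00 .... .... 1010 .0.0 ....  vm=%vm_sp vd=%vd_sp
     * and the generated decoder calls the matching trans_ function.
     * Returning false means "pattern not handled", so we fall back to
     * the legacy hand-written decoder.
     */
    static bool trans_VEXAMPLE(DisasContext *s, arg_VEXAMPLE *a)
    {
        if (!vfp_access_check(s)) {
            return true;   /* handled: the access check raised an exception */
        }
        /* ... emit TCG ops using a->vd and a->vm ... */
        return true;
    }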

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:03 +01:00
Richard Henderson
3a7a2b4e5c target/arm: Use tcg_gen_gvec_bitsel
This replaces 3 target-specific implementations for BIT, BIF, and BSL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190518191934.21887-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-06-13 15:14:03 +01:00
Richard Henderson
2fc0cc0e1e target/arm: Use env_cpu, env_archcpu
Cleanup in the boilerplate that each target must define.
Replace arm_env_get_cpu with env_archcpu.  The combination
CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with;
use env_cpu now.
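
In before/after terms the cleanup is (sketch):

    /* before */
    ARMCPU *cpu = arm_env_get_cpu(env);
    CPUState *cs = CPU(arm_env_get_cpu(env));

    /* after */
    ARMCPU *cpu = env_archcpu(env);
    CPUState *cs = env_cpu(env);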

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10 07:03:34 -07:00
Alex Bennée
f1672e6f2b semihosting: move semihosting configuration into its own directory
In preparation for having some more common semihosting code let's
excise the current config magic from vl.c into its own file. We shall
later add more conditionals to the build configurations so we can
avoid building this if we don't need it.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-28 10:28:50 +01:00
Alistair Francis
2f143d3ad1 target/arm: Fix vector operation segfault
Commit 89e68b575 "target/arm: Use vector operations for saturation"
causes this abort() when booting QEMU ARM with a Cortex-A15:

0  0x00007ffff4c2382f in raise () at /usr/lib/libc.so.6
1  0x00007ffff4c0e672 in abort () at /usr/lib/libc.so.6
2  0x00005555559c1839 in disas_neon_data_insn (insn=<optimized out>, s=<optimized out>) at ./target/arm/translate.c:6673
3  0x00005555559c1839 in disas_neon_data_insn (s=<optimized out>, insn=<optimized out>) at ./target/arm/translate.c:6386
4  0x00005555559cd8a4 in disas_arm_insn (insn=4081107068, s=0x7fffe59a9510) at ./target/arm/translate.c:9289
5  0x00005555559cd8a4 in arm_tr_translate_insn (dcbase=0x7fffe59a9510, cpu=<optimized out>) at ./target/arm/translate.c:13612
6  0x00005555558d1d39 in translator_loop (ops=0x5555561cc580 <arm_translator_ops>, db=0x7fffe59a9510, cpu=0x55555686a2f0, tb=<optimized out>, max_insns=<optimized out>) at ./accel/tcg/translator.c:96
7  0x00005555559d10d4 in gen_intermediate_code (cpu=cpu@entry=0x55555686a2f0, tb=tb@entry=0x7fffd7840080 <code_gen_buffer+126091347>, max_insns=max_insns@entry=512) at ./target/arm/translate.c:13901
8  0x00005555558d06b9 in tb_gen_code (cpu=cpu@entry=0x55555686a2f0, pc=3067096216, cs_base=0, flags=192, cflags=-16252928, cflags@entry=524288) at ./accel/tcg/translate-all.c:1736
9  0x00005555558ce467 in tb_find (cf_mask=524288, tb_exit=1, last_tb=0x7fffd783e640 <code_gen_buffer+126084627>, cpu=0x1) at ./accel/tcg/cpu-exec.c:407
10 0x00005555558ce467 in cpu_exec (cpu=cpu@entry=0x55555686a2f0) at ./accel/tcg/cpu-exec.c:728
11 0x000055555588b0cf in tcg_cpu_exec (cpu=0x55555686a2f0) at ./cpus.c:1431
12 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=0x55555686a2f0) at ./cpus.c:1735
13 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x55555686a2f0) at ./cpus.c:1709
14 0x0000555555d2629a in qemu_thread_start (args=<optimized out>) at ./util/qemu-thread-posix.c:502
15 0x00007ffff4db8a92 in start_thread () at /usr/lib/libpthread.

This patch ensures that we don't hit the abort() in the second switch
case in disas_neon_data_insn() as we will return from the first case.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: ad91b397f360b2fc7f4087e476f7df5b04d42ddb.1558021877.git.alistair.francis@wdc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-05-23 14:47:43 +01:00
Richard Henderson
4e027a7106 target/arm: Use tcg_gen_abs_i64 and tcg_gen_gvec_abs
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 22:52:08 +00:00
Richard Henderson
ff1f11f7f8 tcg: Add support for integer absolute value
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.
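
The branchless form is the classic sign-mask trick; a sketch of the
i64 expansion (the surrounding TCG plumbing and the vector/backend
cases are omitted):

    void tcg_gen_abs_i64(TCGv_i64 ret, TCGv_i64 a)
    {
        TCGv_i64 t = tcg_temp_new_i64();

        tcg_gen_sari_i64(t, a, 63);    /* t = (a < 0) ? -1 : 0 */
        tcg_gen_xor_i64(ret, a, t);    /* complement if negative */
        tcg_gen_sub_i64(ret, ret, t);  /* ... and add 1: two's complement negate */
        tcg_temp_free_i64(t);
    }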

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 22:52:08 +00:00
Richard Henderson
53229a7703 tcg: Specify optional vector requirements with a list
Replace the single opcode in .opc with a null-terminated
array in .opt_opc.  We still require that all opcodes be
used with the same .vece.

Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion.  Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.

Convert all existing vector aware front ends.
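
A sketch of what a converted front-end descriptor looks like with the
new field (the field values here are illustrative):

    static const TCGOpcode vecop_list_neg[] = { INDEX_op_neg_vec, 0 };

    static const GVecGen2 neg_op = {
        .fni8 = tcg_gen_neg_i64,      /* 64-bit integer fallback */
        .fniv = tcg_gen_neg_vec,      /* vector expansion */
        .opt_opc = vecop_list_neg,    /* was: .opc = INDEX_op_neg_vec */
        .vece = MO_64,
    };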

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13 14:44:03 -07:00
Peter Maydell
956fe143b4 target/arm: Implement VLLDM for v7M CPUs with an FPU
Implement the VLLDM instruction for v7M for the FPU present case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
2019-04-29 17:36:03 +01:00
Peter Maydell
019076b036 target/arm: Implement VLSTM for v7M CPUs with an FPU
Implement the VLSTM instruction for v7M for the FPU present case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
2019-04-29 17:36:03 +01:00
Peter Maydell
e33cf0f8d8 target/arm: Implement M-profile lazy FP state preservation
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
2019-04-29 17:36:02 +01:00
Peter Maydell
6000531e19 target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
2019-04-29 17:36:01 +01:00
Peter Maydell
6d60c67a1a target/arm: Set FPCCR.S when executing M-profile floating point insns
The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.

Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.

Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
2019-04-29 17:36:01 +01:00
Peter Maydell
ea7ac69d12 target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
2019-04-29 17:36:01 +01:00
Peter Maydell
8859ba3c96 target/arm: Decode FP instructions for M profile
Correct the decode of the M-profile "coprocessor and
floating-point instructions" space:
 * op0 == 0b11 is always unallocated
 * if the CPU has an FPU then all insns with op1 == 0b101
   are floating point and go to disas_vfp_insn()

For the moment we leave VLLDM and VLSTM as NOPs; in
a later commit we will fill in the proper implementation
for the case where an FPU is present.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
2019-04-29 17:35:59 +01:00
Peter Maydell
d87513c0ab target/arm: Honour M-profile FP enable bits
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.

M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
 * the CPACR is banked between Secure and Non-Secure
 * if the NSACR forces a trap then this is taken to
   the Secure state, not the Non-Secure state

Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
2019-04-29 17:35:58 +01:00
Peter Maydell
ef9aae2522 target/arm: Disable most VFP sysregs for M-profile
The only "system register" that M-profile floating point exposes
via the VMRS/VMRS instructions is FPSCR, and it does not have
the odd special case for rd==15. Add a check to ensure we only
expose FPSCR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
2019-04-29 17:35:58 +01:00
Richard Henderson
8b86d6d258 tcg: Hoist max_insns computation to tb_gen_code
In order to handle TB's that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-24 13:04:33 -07:00
Markus Armbruster
90c84c5600 qom/cpu: Simplify how CPUClass:cpu_dump_state() prints
CPUClass method cpu_dump_state() takes an fprintf()-like callback and
a FILE * to pass to it.  Most callers pass fprintf() and stderr.
log_cpu_state() passes fprintf() and qemu_log_file.
hmp_info_registers() passes monitor_fprintf() and the current monitor
cast to FILE *.  monitor_fprintf() casts it right back, and is
otherwise identical to monitor_printf().

The callback gets passed around a lot, which is tiresome.  The
type-punning around monitor_fprintf() is ugly.

Drop the callback, and call qemu_fprintf() instead.  Also gets rid of
the type-punning, since qemu_fprintf() takes NULL instead of the
current monitor cast to FILE *.
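
In sketch form, the interface change for cpu_dump_state() looks like
this (illustrative, not the full patch):

    /* before: an fprintf-like callback threaded through every dump function */
    void cpu_dump_state(CPUState *cpu, FILE *f,
                        fprintf_function cpu_fprintf, int flags);
    cpu_fprintf(f, "PC=%08x\n", pc);

    /* after: plain FILE *, printing via qemu_fprintf(); monitor callers
     * pass f == NULL and output goes to the current monitor */
    void cpu_dump_state(CPUState *cpu, FILE *f, int flags);
    qemu_fprintf(f, "PC=%08x\n", pc);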

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-15-armbru@redhat.com>
2019-04-18 22:18:59 +02:00
Richard Henderson
22ac3c4964 target/arm: Add set/clear_pstate_bits, share gen_ss_advance
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
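
A sketch of the inlined replacement (set_pstate_bits() is the same
with tcg_gen_ori_i32(); not necessarily the exact committed code):

    static inline void clear_pstate_bits(uint32_t bits)
    {
        TCGv_i32 p = tcg_temp_new_i32();

        tcg_gen_ld_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_gen_andi_i32(p, p, ~bits);
        tcg_gen_st_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_temp_free_i32(p);
    }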

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:08 +00:00
Richard Henderson
9888bd1e20 target/arm: Implement ARMv8.0-SB
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
Richard Henderson
9d090d1723 target/arm: Fix PC test for LDM (exception return)
Found by inspection: Rn is the base register against which the
load began; I is the register within the mask being processed.
The exception return should of course be processed from the loaded PC.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190301202921.21209-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05 15:55:07 +00:00
Richard Henderson
87732318c5 target/arm: Implement VFMAL and VFMSL for aarch32
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-28 11:03:05 +00:00
Peter Maydell
c0c760afe8 target/arm: Gate "miscellaneous FP" insns by ID register field
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-3-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Peter Maydell
602f6e42cf target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.

This change doesn't alter behaviour for any of our CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-2-peter.maydell@linaro.org
2019-02-28 11:03:04 +00:00
Richard Henderson
6c1f6f2733 target/arm: Implement ARMv8.3-JSConv
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed a couple of comment typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:46 +00:00
Richard Henderson
e80941bd64 target/arm: Rearrange Floating-point data-processing (2 regs)
There are lots of special cases within these insns.  Split the
major argument decode/loading/saving into no_output (compares),
rd_is_dp, and rm_is_dp.

We still need to special case argument load for compare (rd as
input, rm as zero) and vcvt fixed (rd as input+output), but lots
of special cases do disappear.

Now that we have a full switch at the beginning, hoist the ISA
checks from the code generation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-21 18:17:45 +00:00
Richard Henderson
89e68b575e target/arm: Use vector operations for saturation
For same-sign saturation, we have tcg vector operations.  We can
compute the QC bit by comparing the saturated value against the
unsaturated value.
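
The per-lane idea for the vector (.fniv) expansion, in sketch form
(temporaries and plumbing omitted; names are illustrative):

    tcg_gen_ssadd_vec(vece, x, a, b);             /* saturated result */
    tcg_gen_add_vec(vece, t, a, b);               /* unsaturated result */
    tcg_gen_cmp_vec(TCG_COND_NE, vece, t, x, t);  /* all-ones in lanes that saturated */
    tcg_gen_or_vec(vece, qc, qc, t);              /* accumulate sticky QC bit */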

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15 09:56:41 +00:00
Richard Henderson
ec527e4eec target/arm: Fix arm_cpu_dump_state vs FPSCR
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15 09:56:40 +00:00
Richard Henderson
9ecd3c5c16 target/arm: Use tcg integer min/max primitives for neon
The 32-bit PMIN/PMAX has been decomposed to scalars,
and so can be trivially expanded inline.
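
For example (sketch), each element of a pairwise min/max now becomes a
single TCG op:

    tcg_gen_smin_i32(dest, a, b);   /* signed VPMIN element */
    tcg_gen_umax_i32(dest, a, b);   /* unsigned VPMAX element */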

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15 09:56:40 +00:00
Richard Henderson
6f27822182 target/arm: Use vector minmax expanders for aarch32
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15 09:56:40 +00:00
Richard Henderson
2900847ff4 target/arm: Rely on optimization within tcg_gen_gvec_or
Since we're now handling a == b generically, we no longer need
to do it by hand within target/arm/.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-15 09:56:39 +00:00
Peter Maydell
96c552958d target/arm: Emit barriers for A32/T32 load-acquire/store-release insns
Now that MTTCG is here, the comment in the 32-bit Arm decoder that
"Since the emulation does not have barriers, the acquire/release
semantics need no special handling" is no longer true. Emit the
correct barriers for the load-acquire/store-release insns, as
we already do in the A64 decoder.
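
A sketch of the emitted sequences, mirroring the A64 decoder (helper
names as used in translate.c):

    /* load-acquire (e.g. LDA): load, then barrier */
    gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);

    /* store-release (e.g. STL): barrier, then store */
    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
    gen_aa32_st32(s, tmp, addr, get_mem_index(s));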

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2019-01-07 15:23:48 +00:00
Richard Henderson
aad821ac4f target/arm: Convert ARM_TBFLAG_* to FIELDs
Use "register" TBFLAG_ANY to indicate shared state between
A32 and A64, and "registers" TBFLAG_A32 & TBFLAG_A64 for
fields that are specific to the given cpu state.

Move ARM_TBFLAG_BE_DATA to shared state, instead of its current
placement within "Bit usage when in AArch32 state".

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181218164348.7127-1-richard.henderson@linaro.org
[PMM: removed the renaming of BE_DATA flag to BE]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-07 15:23:45 +00:00
Richard Henderson
2d6ac92083 target/arm: Reorg NEON VLD/VST single element to one lane
Instead of shifts and masks, use direct loads and stores from
the neon register file.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
e23f12b3a2 target/arm: Promote consecutive memory ops for aa32
For a sequence of loads or stores from a single register,
little-endian operations can be promoted to an 8-byte op.
This can reduce the number of operations by a factor of 8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
ac55d00709 target/arm: Reorg NEON VLD/VST all elements
Instead of shifts and masks, use direct loads and stores from the neon
register file.  Mirror the iteration structure of the ARM pseudocode
more closely.  Correct the parameters of the VLD2 A2 insn.

Note that this includes a bugfix for handling of the insn
"VLD2 (multiple 2-element structures)" -- we were using an
incorrect stride value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
7377c2c97e target/arm: Use gvec for NEON VLD all lanes
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
[PMM: added parens in ?: expression]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
ea580fa312 target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
Move cmtst_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
4a7832b095 target/arm: Use gvec for NEON_3R_VML
Move mla_op and mls_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
f3cd8218d1 target/arm: Use gvec for VSRI, VSLI
Move shi_op and sli_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
41f6c113c9 target/arm: Use gvec for VSRA
Move ssra_op and usra_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
1dc8425e55 target/arm: Use gvec for VSHR, VSHL
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
82083184b6 target/arm: Use gvec for NEON_3R_VMUL
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
4bf940beba target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
e4717ae02d target/arm: Use gvec for NEON_3R_VADD_VSUB insns
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
eabcd6faa9 target/arm: Use gvec for NEON_3R_LOGIC insns
Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
246fa4aca9 target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
32f91fb71f target/arm: Use gvec for NEON VDUP
Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
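A rough sketch of such an offset helper, assuming the usual layout of the NEON registers inside CPUARMState (the big-endian adjustment is the detail worth noting; the exact code in the tree may differ):

    static inline long neon_element_offset(int reg, int element, TCGMemOp size)
    {
        int element_size = 1 << size;
        int ofs = element * element_size;
    #ifdef HOST_WORDS_BIGENDIAN
        /* Within each 64-bit unit the elements are stored in host order,
         * so flip the sub-word offset on big-endian hosts.
         */
        if (element_size < 8) {
            ofs ^= 8 - element_size;
        }
    #endif
        return neon_reg_offset(reg, 0) + ofs;
    }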
Richard Henderson
308e563615 target/arm: Mark some arrays const
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:37 +01:00
Richard Henderson
7108e255c2 target/arm: Don't call tcg_clear_temp_count
This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:36 +01:00
Peter Maydell
4be42f4013 target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
81e3728407 target/arm: Improve debug logging of AArch32 exception return
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
2018-10-24 07:51:32 +01:00
Richard Henderson
5763190fa8 target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:31 +01:00
Richard Henderson
09cbd50198 target/arm: Convert jazelle from feature bit to isar1 test
Having V6 alone imply Jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:17 +01:00
Richard Henderson
7e0cf8b47f target/arm: Convert division from feature bits to isar0 tests
Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case.  Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
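For illustration, the ISAR-based tests for division end up looking roughly like this (following the isar_feature style this series introduces; treat the field names as assumptions of the sketch):

    static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
    {
        /* Nonzero means SDIV/UDIV exist in the Thumb encodings. */
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
    }

    static inline bool isar_feature_arm_div(const ARMISARegisters *id)
    {
        /* A value above 1 additionally provides the A32 encodings, which
         * is how "arm implies thumb" falls out of a single field.
         */
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
    }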
Richard Henderson
962fcbf2ef target/arm: Convert v8 extensions from feature bits to isar tests
Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Peter Maydell
8a954faf54 target/arm: Add v8M stack checks for VLDM/VSTM
Add the v8M stack checks for the VLDM/VSTM
(aka VPUSH/VPOP) instructions. This code is currently
unreachable because we haven't yet implemented M profile
floating point support, but since the change is simple,
we add it now because otherwise we're likely to forget to
do it later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-13-peter.maydell@linaro.org
2018-10-08 14:55:05 +01:00
Peter Maydell
aa369e5c08 target/arm: Add v8M stack checks for Thumb push/pop
Add v8M stack checks for the 16-bit Thumb push/pop
encodings: STMDB, STMFD, LDM, LDMIA, LDMFD.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-12-peter.maydell@linaro.org
2018-10-08 14:55:05 +01:00
Peter Maydell
0bc003bad9 target/arm: Add v8M stack checks for T32 load/store single
Add v8M stack checks for the instructions in the T32
"load/store single" encoding class: these are the
"immediate pre-indexed" and "immediate, post-indexed"
LDR and STR instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-11-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
7c0ed88e7d target/arm: Add v8M stack checks for Thumb2 LDM/STM
Add the v8M stack checks for:
 * LDM (T2 encoding)
 * STM (T2 encoding)

This includes the 32-bit encodings of the instructions listed
in v8M ARM ARM rule R_YVWT as
 * LDM, LDMIA, LDMFD
 * LDMDB, LDMEA
 * POP (multiple registers)
 * PUSH (multiple registers)
 * STM, STMIA, STMEA
 * STMDB, STMFD

We perform the stack limit check before doing any other part
of the load or store.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-10-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
910d7692e5 target/arm: Add v8M stack checks for LDRD/STRD (imm)
Add the v8M stack checks for:
 * LDRD (immediate)
 * STRD (immediate)

Loads and stores are more complicated than ADD/SUB/MOV, because we
must ensure that memory accesses below the stack limit are not
performed, so we can't simply do the check when we actually update
SP.

For these instructions, if the stack limit check triggers
we must not:
 * perform any memory access below the SP limit
 * update PC, SP or the load/store base register
but it is IMPDEF whether we:
 * perform any accesses above or equal to the SP limit
 * update destination registers for loads

For QEMU we choose to always check the limit before doing any other
part of the load or store, so we won't update any registers or
perform any memory accesses.

It is UNKNOWN whether the limit check triggers for a load or store
where the initial SP value is below the limit and one of the stores
would be below the limit, but the writeback moves SP to above the
limit.  For QEMU we choose to trigger the check in this situation.

Note that limit checks happen only for loads and stores which update
SP via writeback; they do not happen for loads and stores which
simply use SP as a base register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-9-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
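A sketch of the ordering chosen above, at translate time (rn, offset and writeback stand in for the decoded fields; gen_helper_v8m_stackcheck is the limit-check helper this series adds):

    /* Check the limit against the final SP value before emitting any
     * of the loads or stores, so nothing below the limit is touched.
     */
    if (s->v8m_stackcheck && rn == 13 && writeback) {
        TCGv_i32 newsp = tcg_temp_new_i32();

        tcg_gen_addi_i32(newsp, cpu_R[13], offset);   /* SP after writeback */
        gen_helper_v8m_stackcheck(cpu_env, newsp);    /* faults if below the limit */
        tcg_temp_free_i32(newsp);
    }
    /* ...only now generate the memory accesses and the writeback... */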
Peter Maydell
a2d12f0f34 target/arm: Add some comments in Thumb decode
Add some comments to the Thumb decoder indicating what bits
of the instruction have been decoded at various points in
the code.

This is not an exhaustive set of comments; we're gradually
adding comments as we work with particular bits of the code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-6-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
5520318939 target/arm: Add v8M stack checks on ADD/SUB/MOV of SP
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
 * ADD (SP plus immediate)
 * ADD (SP plus register)
 * SUB (SP minus immediate)
 * SUB (SP minus register)
 * MOV (register)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-5-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
4730fb8503 target/arm: Define new TBFLAG for v8M stack checking
The Arm v8M architecture includes hardware stack limit checking.
When certain instructions update the stack pointer, if the new
value of SP is below the limit set in the associated limit register
then an exception is taken. Add a TB flag that tracks whether
the limit-checking code needs to be emitted.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181002163556.10279-2-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
d00584b7cf target/arm: Untabify translate.c
Untabify the arm translate.c. This affects only some lines,
mostly comments, in the iwMMXt code. We haven't touched
that code in years, so it's not going to get fixed up
by our "change when touched" process, and a bulk change
is not going to be too disruptive.

This commit was produced using Emacs "untabify"; it is
a whitespace-only change.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-2-peter.maydell@linaro.org
2018-08-24 13:17:47 +01:00
Peter Maydell
55c544ed27 target/arm: Implement AArch32 ERET instruction
ARMv7VE introduced the ERET instruction, which is necessary to
return from an exception taken to Hyp mode. Implement this.
In A32 it is a completely new encoding; in T32 it
is an adjustment of the behaviour of the existing
"SUBS PC, LR, #<imm8>" instruction.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-10-peter.maydell@linaro.org
2018-08-20 11:24:32 +01:00
Peter Maydell
aec4dd09f1 target/arm: Permit accesses to ELR_Hyp from Hyp mode via MSR/MRS (banked)
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp
from either Monitor or Hyp mode. Our translate time check
was overly strict and only permitted access from Monitor mode.

The runtime check we do in msr_mrs_banked_exc_checks() had the
correct code in it, but never got there because of the earlier
"currmode == tgtmode" check. Special case ELR_Hyp.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-9-peter.maydell@linaro.org
2018-08-20 11:24:32 +01:00
Roman Kapl
c2d9644e6d target/arm: Fix crash on conditional instruction in an IT block
If an instruction is conditional (like CBZ) and it is executed
conditionally (using the ITx instruction), a jump to an undefined
label is generated, and QEMU crashes.

CBZ in IT block is an UNPREDICTABLE behavior, but we should not
crash.  Honouring the condition code is allowed by the spec in this
case (constrained unpredictable, ARMv8, section K1.1.7), and matches
what we do for other "UNPREDICTABLE inside an IT block" instructions.

Fix the 'skip on condition' code to create a new label only if it
does not already exist.  Previously multiple labels were created, but
only the last one of them was set.

Signed-off-by: Roman Kapl <rka@sysgo.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180816120533.6587-1-rka@sysgo.com
[PMM: fixed ^ 1 being applied to wrong argument, fixed typo]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-20 11:24:31 +01:00
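The fix boils down to allocating the skip label at most once per instruction, roughly along these lines (a sketch; the helper name is illustrative):

    /* Reuse the existing condition-skip label if this insn already has one,
     * so the IT-block skip and the insn's own conditional branch agree.
     */
    static void arm_gen_condlabel(DisasContext *s)
    {
        if (!s->condjmp) {
            s->condlabel = gen_new_label();
            s->condjmp = 1;
        }
    }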
Richard Henderson
26c470a7bb target/arm: Implement ARMv8.2-DotProd
We've already added the helpers with an SVE patch; all that remains
is to wire up the aa64 and aa32 translators.  Enable the feature
within -cpu max for CONFIG_USER_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:15 +01:00
Richard Henderson
2cc99919a8 target/arm: Pass index to AdvSIMD FCMLA (indexed)
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.

For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:12 +01:00
Julia Suvorova
2aeba0d007 target/arm: Strict alignment for ARMv6-M and ARMv8-M Baseline
Unlike ARMv7-M, ARMv6-M and ARMv8-M Baseline only support naturally
aligned memory accesses for load/store instructions.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180622080138.17702-3-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-22 13:28:41 +01:00
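Conceptually the change amounts to requesting an aligned access on baseline M profiles, something like the following sketch (where exactly the MO_ALIGN bit is OR'd in is a detail of the patch; memop is a placeholder for the access's memory op):

    /* Baseline M-profile (v6-M, v8-M Baseline) has no unaligned support,
     * so make the generated access fault on misalignment.
     */
    if (arm_dc_feature(s, ARM_FEATURE_M) && !arm_dc_feature(s, ARM_FEATURE_V7)) {
        memop |= MO_ALIGN;
    }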
Julia Suvorova
8297cb13e4 target/arm: Minor cleanup for ARMv6-M 32-bit instructions
The arrays were made static, and the "if" was simplified because both
V7M and V8M imply the V6 feature.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180618214604.6777-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-22 13:28:34 +01:00
Julia Suvorova
14120108f8 target/arm: Allow ARMv6-M Thumb2 instructions
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.

This patch is required for future Cortex-M0 support.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
 point of use, and mark 'const'. Check for M-and-not-v7
 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-15 15:23:34 +01:00
Richard Henderson
07ea28b418 tcg: Pass tb and index to tcg_gen_exit_tb separately
Do the cast to uintptr_t within the helper, so that the compiler
can type check the pointer argument.  We can also do some more
sanity checking of the index argument.

Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-06-01 15:15:27 -07:00
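A sketch of how a front end uses the new two-argument form (gen_goto_tb_example is an invented name; the body mirrors the usual goto_tb pattern):

    static void gen_goto_tb_example(DisasContext *s, int n, target_ulong dest)
    {
        tcg_gen_goto_tb(n);
        gen_set_pc_im(s, dest);
        /* was: tcg_gen_exit_tb((uintptr_t)s->base.tb + n); the cast and the
         * sanity check on n now live inside tcg_gen_exit_tb itself.
         */
        tcg_gen_exit_tb(s->base.tb, n);
    }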
Alex Bennée
486624fcd3 target/arm: convert conversion helpers to fpst/ahp_flag
Instead of passing env and leaving it up to the helper to get the
right fpstatus, we pass it explicitly. There was already a get_fpstatus
helper for the 32-bit Neon code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-05-17 15:27:09 -07:00
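For reference, the AHP accessor is essentially a one-liner over FPSCR; a sketch (FPSCR.AHP is bit 26):

    static TCGv_i32 get_ahp_flag(void)
    {
        TCGv_i32 ret = tcg_temp_new_i32();

        /* Load the guest FPSCR and extract the AHP bit. */
        tcg_gen_ld_i32(ret, cpu_env,
                       offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPSCR]));
        tcg_gen_extract_i32(ret, ret, 26, 1);
        return ret;
    }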
Emilio G. Cota
b542683d77 translator: merge max_insns into DisasContextBase
While at it, use int for both num_insns and max_insns to make
sure we have same-type comparisons.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-05-09 10:12:21 -07:00
Emilio G. Cota
bfe7ad5be7 target/arm: avoid integer overflow in next_page PC check
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-05-09 10:12:21 -07:00
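The overflow-safe form compares offsets within the page rather than an absolute end-of-page address; a sketch of the idea (field names are assumptions of the example):

    /* Remember only the page base of the TB... */
    dc->page_start = dc->base.pc_first & TARGET_PAGE_MASK;

    /* ...and later compare distances, which cannot wrap even in the
     * last page of the address space.
     */
    if (dc->base.pc_next - dc->page_start >= TARGET_PAGE_SIZE) {
        /* stop translating rather than cross the page */
    }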
Peter Maydell
b1e5336a98 target/arm: Implement v8M VLLDM and VLSTM
For v8M the instructions VLLDM and VLSTM support lazy saving
and restoring of the secure floating-point registers. Even
if the floating point extension is not implemented, these
instructions must act as NOPs in Secure state, so they can
be used as part of the secure-to-nonsecure call sequence.

Fixes: https://bugs.launchpad.net/qemu/+bug/1768295
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180503105730.5958-1-peter.maydell@linaro.org
2018-05-04 18:05:51 +01:00
Aaron Lindsay
e69ad9df6c target/arm: Allow EL change hooks to do IO
During code generation, surround CPSR writes and exception returns which
call the EL change hooks with gen_io_start/end. The immediate need is
for the PMU to access the clock and icount during EL change to support
mode filtering.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-9-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-26 11:04:39 +01:00
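Schematically, the generated code brackets the EL-changing helper with the icount I/O markers; a sketch (cpsr stands in for the value being written):

    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }
    gen_helper_cpsr_write_eret(cpu_env, cpsr);   /* runs the EL change hooks */
    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_end();
    }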
Onur Sahin
c4869ca630 target-arm: Check undefined opcodes for SWP in A32 decoder
Make sure we are not treating architecturally Undefined instructions
as a SWP, by verifying the opcodes as per section A8.8.229 of ARMv7-A
specification. Bits [21:20] must be zero for this to be a SWP or SWPB.
We also choose to UNDEF for the architecturally UNPREDICTABLE case of
bits [11:8] not being zero.

Signed-off-by: Onur Sahin <onursahin08@gmail.com>
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-10 13:02:24 +01:00
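In decoder terms the extra checks are just two field tests before treating the insn as SWP/SWPB; a sketch (insn is the fetched A32 word):

    /* Bits [21:20] must be zero for SWP/SWPB, and we choose to UNDEF
     * when the should-be-zero field [11:8] is set.
     */
    if (extract32(insn, 20, 2) != 0 || extract32(insn, 8, 4) != 0) {
        goto illegal_op;
    }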
Peter Maydell
c900a2e62d target/arm: Honour MDCR_EL2.TDE when routing exceptions due to BKPT/BRK
The MDCR_EL2.TDE bit allows the exception level targeted by debug
exceptions to be set to EL2 for code executing at EL0.  We handle
this in the arm_debug_target_el() function, but this is only used for
hardware breakpoint and watchpoint exceptions, not for the exception
generated when the guest executes an AArch32 BKPT or AArch64 BRK
instruction.  We don't have enough information for a translate-time
equivalent of arm_debug_target_el(), so instead make BKPT and BRK
call a special purpose helper which can do the routing, rather than
the generic exception_with_syndrome helper.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180320134114.30418-2-peter.maydell@linaro.org
2018-03-23 18:26:46 +00:00
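The special-purpose helper amounts to choosing the target EL at run time rather than at translate time; roughly:

    void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
    {
        /* arm_debug_target_el() honours MDCR_EL2.TDE, so BKPT/BRK can be
         * routed to EL2 when required.
         */
        raise_exception(env, EXCP_BKPT, syndrome, arm_debug_target_el(env));
    }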
Richard Henderson
0052087efb target/arm: Decode t32 simd 3reg and 2reg_scalar extension
Happily, the bits are in the same places compared to a32.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02 11:03:45 +00:00
Richard Henderson
638808ff8a target/arm: Decode aa32 armv8.3 2-reg-index
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02 11:03:45 +00:00
Richard Henderson
8b7209fae7 target/arm: Decode aa32 armv8.3 3-same
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180228193125.20577-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02 11:03:45 +00:00
Richard Henderson
61adacc8f5 target/arm: Decode aa32 armv8.1 two reg and a scalar
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02 11:03:45 +00:00
Richard Henderson
36a719348a target/arm: Decode aa32 armv8.1 three same
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02 11:03:45 +00:00
Alex Bennée
9b04991686 target/arm/helper: pass explicit fpst to set_rmode
As the rounding mode is now split between FP16 and the rest of
floating point we need to be explicit when tweaking it. Instead of
passing the CPU env we now pass the appropriate fpst pointer directly.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180227143852.11175-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-01 11:13:59 +00:00
Peter Maydell
384c6c03fb target/arm/translate.c: Fix missing 'break' for TT insns
The code where we added the TT instruction was accidentally
missing a 'break', which meant that after generating the code
to execute the TT we would fall through to 'goto illegal_op'
and generate code to take an UNDEF insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180206103941.13985-1-peter.maydell@linaro.org
2018-02-09 10:55:39 +00:00
Richard Henderson
c39c2b9043 target/arm: Expand vector registers for SVE
Change vfp.regs, an array of uint64_t, to vfp.zregs, an array of ARMVectorReg.
The previous patches have made the change in representation
relatively painless.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180123035349.24538-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-02-09 10:40:31 +00:00
Richard Henderson
9a2b5256ea target/arm: Add aa{32, 64}_vfp_{dreg, qreg} helpers
Helpers that return a pointer into env->vfp.regs so that we isolate
the logic of how to index the regs array for different cpu modes.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-7-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25 11:45:29 +00:00
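A sketch of the two accessors, assuming the zregs layout from the SVE series (each ARMVectorReg holds the register's 64-bit lanes in .d[]):

    static inline uint64_t *aa32_vfp_dreg(CPUARMState *env, unsigned regno)
    {
        /* AArch32 D registers are packed two per 128-bit vector register. */
        return &env->vfp.zregs[regno >> 1].d[regno & 1];
    }

    static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
    {
        /* AArch64 Q register n is the low 128 bits of vector register n. */
        return &env->vfp.zregs[regno].d[0];
    }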
Richard Henderson
3f68b8a5a6 target/arm: Change the type of vfp.regs
All direct users of this field want an integral value.  Drop all
of the extra casting between uint64_t and float64.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-6-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25 11:45:28 +00:00
Richard Henderson
e7c06c4e4c target/arm: Use pointers in neon tbl helper
Rather than passing a regno to the helper, pass pointers to the
vector register directly.  This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-5-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25 11:45:28 +00:00
Richard Henderson
b13708bbbd target/arm: Use pointers in neon zip/uzp helpers
Rather than passing regnos to the helpers, pass pointers to the
vector registers directly.  This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180119045438.28582-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25 11:45:28 +00:00
Richard Henderson
1a66ac61af target/arm: Use pointers in crypto helpers
Rather than passing regnos to the helpers, pass pointers to the
vector registers directly.  This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180119045438.28582-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25 11:45:28 +00:00
Peter Maydell
2eea841c11 target/arm: Make disas_thumb2_insn() generate its own UNDEF exceptions
Refactor disas_thumb2_insn() so that it generates the code for raising
an UNDEF exception for invalid insns, rather than returning a flag
which the caller must check to see if it needs to generate the UNDEF
code. This brings the function in to line with the behaviour of
disas_thumb_insn() and disas_arm_insn().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1513080506-17703-1-git-send-email-peter.maydell@linaro.org
2018-01-11 13:25:40 +00:00
Richard Henderson
15fa08f845 tcg: Dynamically allocate TCGOps
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.

Use QTAILQ to link the ops together.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-29 12:43:39 -08:00
Richard Henderson
f764718d0c tcg: Remove TCGV_UNUSED* and TCGV_IS_UNUSED*
These are now trivial sets and tests against NULL.  Unwrap.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-29 12:43:39 -08:00
Peter Maydell
5158de241b target/arm: Implement TT instruction
Implement the TT instruction which queries the security
state and access permissions of a memory location.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-8-git-send-email-peter.maydell@linaro.org
2017-12-13 17:59:24 +00:00
Peter Maydell
62593718d7 target/arm: Split M profile MNegPri mmu index into user and priv
For M profile, we currently have an mmu index MNegPri for
"requested execution priority negative". This fails to
distinguish "requested execution priority negative, privileged"
from "requested execution priority negative, usermode", but
the two can return different results for MPU lookups. Fix this
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
similarly for the Secure equivalent MSNegPri.

This takes us from 6 M profile MMU modes to 8, which means
we need to bump NB_MMU_MODES; this is OK since the point
where we are forced to reduce TLB sizes is 9 MMU modes.

(It would in theory be possible to stick with 6 MMU indexes:
{mpu-disabled,user,privileged} x {secure,nonsecure} since
in the MPU-disabled case the result of an MPU lookup is
always the same for both user and privileged code. However
we would then need to rework the TB flags handling to put
user/priv into the TB flags separately from the mmuidx.
Adding an extra couple of mmu indexes is simpler.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-5-git-send-email-peter.maydell@linaro.org
2017-12-13 17:59:23 +00:00
Peter Maydell
7472e2efb0 target/arm: Generate UNDEF for 32-bit Thumb2 insns
The refactoring of commit 296e5a0a6c has a nasty bug:
it accidentally dropped the generation of code to raise
the UNDEF exception when disas_thumb2_insn() returns nonzero.
This means that 32-bit Thumb2 instruction patterns that
ought to UNDEF just act like nops instead. This is likely
to break any number of things, including the kernel's "disable
the FPU and use the UNDEF exception to identify when to turn
it back on again" trick.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1513006964-3371-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-11 17:11:27 +00:00
Peter Maydell
3448d47b31 translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single
64-bit store, so if we're big-endian then we need to put the two
32-bit halves together in the opposite order to little-endian,
so that they end up in the right places. We were trying to do
this with the gen_aa32_frob64() function, but that is not correct
for the usermode emulator, because there is a distinction
between "load a 64 bit value" (which does a BE 64-bit access
and doesn't need swapping) and "load two 32 bit values as one
64 bit access" (where we still need to do the swapping, like
system mode BE32).

Fixes: https://bugs.launchpad.net/qemu/+bug/1725267
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1509622400-13351-1-git-send-email-peter.maydell@linaro.org
2017-11-07 13:03:51 +00:00
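On the store-exclusive side the fix is just choosing which half goes where before the single 64-bit store; a sketch (t1 and t2 stand in for the Rt and Rt2 values):

    TCGv_i64 n64 = tcg_temp_new_i64();

    /* Rt must land at the lower address regardless of guest endianness,
     * so swap the halves when the data endianness is big-endian.
     */
    if (s->be_data == MO_BE) {
        tcg_gen_concat_i32_i64(n64, t2, t1);
    } else {
        tcg_gen_concat_i32_i64(n64, t1, t2);
    }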
Stefano Stabellini
58803318e5 fix WFI/WFE length in syndrome register
WFI/E are often, but not always, 4 bytes long. When they are, we need to
set ARM_EL_IL_SHIFT in the syndrome register.

Pass the instruction length to HELPER(wfi), use it to decrement pc
appropriately and to pass an is_16bit flag to syn_wfx, which sets
ARM_EL_IL_SHIFT if needed.

Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: alpine.DEB.2.10.1710241055160.574@sstabellini-ThinkPad-X260
[PMM: move setting of dc->insn for Thumb so it is correct for 32 bit insns]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-31 11:50:50 +00:00
Peter Maydell
6e6430a821 Capstone disassembler
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJZ8bGHAAoJEGTfOOivfiFfOXQH/jc3BbQ+ulxvQSgA3rI2JE1e
 Ww5FK5HEs4qZU3hz4EtE2Cd5p7qV5I4tWRtbxzc6BGBwLsfz3a60Abx7726sZiH0
 ZuULTsWXQ/71XfZHQysgOSoy36G8xj/1yvrMWHjDCfWp/pzz479YXWSSn2TWEHpI
 jI6nKP5ALdv5XTAaglGaNzqVeWgjKXJn4O8qZFS7axj7hndzLFguymfm8rV8DAdd
 LRuYWOizzzJ0dcaO/HHyLTzSl7rR0g+DmcOAuFCREy4f+r6tXijwiirB5f7ZJiqc
 hgEBq/6NfztW2+pAUSxqI2Kuq1zVETTpZORH1+UxvVk9GPu1ouYldMx0NrYhDtc=
 =fC5W
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/pull-dis-20171026' into staging

Capstone disassembler

# gpg: Signature made Thu 26 Oct 2017 10:57:27 BST
# gpg:                using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-dis-20171026:
  disas: Add capstone as submodule
  disas: Remove monitor_disas_is_physical
  ppc: Support Capstone in disas_set_info
  arm: Support Capstone in disas_set_info
  i386: Support Capstone in disas_set_info
  disas: Support the Capstone disassembler library
  disas: Remove unused flags arguments
  target/arm: Don't set INSN_ARM_BE32 for CONFIG_USER_ONLY
  target/arm: Move BE32 disassembler fixup
  target/ppc: Convert to disas_set_info hook
  target/i386: Convert to disas_set_info hook

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	target/i386/cpu.c
#	target/ppc/translate_init.c
2017-10-27 08:04:51 +01:00
Richard Henderson
1d48474d8e disas: Remove unused flags arguments
Now that every target is using the disas_set_info hook,
the flags argument is unused.  Remove it.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-25 11:55:09 +02:00
Richard Henderson
1c2adb958f tcg: Initialize cpu_env generically
This is identical for each target.  So, move the initialization to
common code.  Move the variable itself out of tcg_ctx and name it
cpu_env to minimize changes within targets.

This also means we can remove tcg_global_reg_new_{ptr,i32,i64},
since there are no longer global-register temps created by targets.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:42 -07:00
Emilio G. Cota
b1311c4acf tcg: define tcg_init_ctx and make tcg_ctx a pointer
Groundwork for supporting multiple TCG contexts.

The core of this patch is this change to tcg/tcg.h:

> -extern TCGContext tcg_ctx;
> +extern TCGContext tcg_init_ctx;
> +extern TCGContext *tcg_ctx;

Note that for now we set *tcg_ctx to whatever TCGContext is passed
to tcg_context_init -- in this case &tcg_init_ctx.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:42 -07:00
Emilio G. Cota
2399d4e7ce target/arm: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:41 -07:00
Emilio G. Cota
c5a49c63fa tcg: convert tb->cflags reads to tb_cflags(tb)
Convert all existing readers of tb->cflags to tb_cflags, so that we
use atomic_read and therefore avoid undefined behaviour in C11.

Note that the remaining setters/getters of the field are protected
by tb_lock, and therefore do not need conversion.

Luckily all readers access the field via 'tb->cflags' (so no foo.cflags,
bar->cflags in the code base), which makes the conversion easily
scriptable:

FILES=$(git grep 'tb->cflags' target include/exec/gen-icount.h \
	 accel/tcg/translator.c | cut -f1 -d':' | sort | uniq)

perl -pi -e 's/([^.>])tb->cflags/$1tb_cflags(tb)/g' $FILES
perl -pi -e 's/([a-z->.]*)(->|\.)tb->cflags/tb_cflags($1$2tb)/g' $FILES

Then manually fixed the few errors that checkpatch reported.

Compile-tested for all targets.

Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:41 -07:00
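The accessor itself is tiny; a sketch of what the converted readers call:

    static inline uint32_t tb_cflags(const TranslationBlock *tb)
    {
        /* atomic_read() keeps concurrent readers well-defined under C11. */
        return atomic_read(&tb->cflags);
    }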
Peter Maydell
76eff04d16 target/arm: Implement SG instruction corner cases
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
 * SG instruction in NS memory (behaves as a NOP)
 * SG in S memory but CPU already secure (clears IT bits and
   does nothing else)
 * SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
dcf14dfb70 target/arm: Support some Thumb insns being always unconditional
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
5b8d7289e9 target-arm: Simplify insn_crosses_page()
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
 * it's only called from thumb_tr_translate_insn() so we know
   for certain that we're looking at a Thumb insn
 * the caller's check for dc->pc >= dc->next_page_start - 3
   means that dc->pc can't possibly be 4 aligned, so there's
   no need to check that (the check was partly there to ensure
   that we didn't treat an ARM insn as Thumb, I think)
 * we now have thumb_insn_is_16bit() which lets us do a precise
   check of the length of the next insn, rather than opencoding
   an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
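The simplified function is then little more than the following sketch:

    static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
    {
        /* We know this is a Thumb insn and that we are within 4 bytes of
         * the page end, so it crosses the page exactly when its first
         * halfword says it is a 32-bit encoding.
         */
        uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

        return !thumb_insn_is_16bit(s, insn);
    }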
Peter Maydell
296e5a0a6c target/arm: Pull Thumb insn word loads up to top level
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space.  To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn.  The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code.  (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
6b8acf256d target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
d02a8698d7 target/arm: Implement secure function return
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
3e3fa230e3 target/arm: Implement BLXNS
Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
b9f587d62c target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Emilio G. Cota
7f11636dbe tcg: remove addr argument from lookup_tb_ptr
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-10 07:37:10 -07:00
Peter Maydell
ef475b5dd1 target-arm:
* cleanups converting to DEFINE_PROP_LINK
  * allwinner-a10: mark as not user-creatable
  * initial patches working towards ARMv8M support
  * implement generating aborts on memory transaction failures
  * make BXJ behave correctly (ie not UNDEF) on ARMv6-and-later
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJZsUjvAAoJEDwlJe0UNgzey10P+wf1TRxRMGnoDftimLyPt9Pt
 cXYSP1KKF4qn618ZSJHPHJasWEx2obAP8JrrA8qLz0quWpWlXZ40bhgxKX9iKb2l
 4jrt/DjfTH7RWMRs94lOb0ZOtMokLfjHMSBhP31xR4Lgia0HdlmwqUPLr2T10ffE
 B9BKvPbXcee9Ss7osDqQr3OMUtSMjuc3G3z3WaySwG80od9MB8mblnMU0h9gZEeT
 6csGRHU8rfOkv9ZzrSJRWBuhmxC0Mrg3lB3iZffupFnI//q+PZfW2+ojAyn+pATu
 3YgHjgfgw4P5N2iGlg8c4y6mrig0fQNHWIXWFk7zWp7kWCdXnq5doFpJmi+CfMlE
 yQqMYzuy2Bd9n2fAB036nvb1LBHEKFYfKxqPoeJzuB9wEcXjmnbwuJ+iAKo/DP94
 9wE/cPNKySFmZJFEz+byAZvnEp0ynpQtDoCnaIJPbx6ytkKfL9xXX78+mmlTn8hj
 55NyH2aaEXpuxJKkld1pP2O+r/amFJ603rujSEaK0Or2YGcE1fit+YZSSh1glt25
 b3vEKn1ydWV4udRjBIEd0l/PIhGenILXC3bDONiWqEIPaMVeOxjhl+lvEHmELOjd
 t+o4ntQfU94Z6eDXPhx/bXqIZi9qtDbMZosojWL6wMAIMEiuXlB/a9vhcs9uBnRJ
 M0PiR5jVpZgDfLipV/8A
 =URgX
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170907' into staging

target-arm:
 * cleanups converting to DEFINE_PROP_LINK
 * allwinner-a10: mark as not user-creatable
 * initial patches working towards ARMv8M support
 * implement generating aborts on memory transaction failures
 * make BXJ behave correctly (ie not UNDEF) on ARMv6-and-later

# gpg: Signature made Thu 07 Sep 2017 14:26:07 BST
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20170907: (31 commits)
  target/arm: Add Jazelle feature
  target/arm: Implement new do_transaction_failed hook
  hw/arm: Set ignore_memory_transaction_failures for most ARM boards
  boards.h: Define new flag ignore_memory_transaction_failures
  target/arm: Implement BXNS, and banked stack pointers
  target/arm: Move regime_is_secure() to target/arm/internals.h
  target/arm: Make CFSR register banked for v8M
  target/arm: Make MMFAR banked for v8M
  target/arm: Make CCR register banked for v8M
  target/arm: Make MPU_CTRL register banked for v8M
  target/arm: Make MPU_RNR register banked for v8M
  target/arm: Make MPU_RBAR, MPU_RLAR banked for v8M
  target/arm: Make MPU_MAIR0, MPU_MAIR1 registers banked for v8M
  target/arm: Make VTOR register banked for v8M
  nvic: Add NS alias SCS region
  target/arm: Make CONTROL register banked for v8M
  target/arm: Make FAULTMASK register banked for v8M
  target/arm: Make PRIMASK register banked for v8M
  target/arm: Make BASEPRI register banked for v8M
  target/arm: Add MMU indexes for secure v8M
  ...

# Conflicts:
#	target/arm/translate.c
2017-09-07 16:46:15 +01:00
Portia Stephens
c99a55d38d target/arm: Add Jazelle feature
This adds a feature bit indicating support of the (trivial) Jazelle
implementation if ARM_FEATURE_V6 is set or if the processor is arm926
or arm1026.  This fixes the issue that any BXJ instruction will
result in an illegal_op.  BXJ instructions will now check if the
architecture supports ARM_FEATURE_JAZELLE.

Signed-off-by: Portia Stephens <portia.stephens@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 20170905211232.11092-1-portia.stephens@xilinx.com
[PMM: edited commit message and comment text a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-07 13:54:55 +01:00
Peter Maydell
fb602cb726 target/arm: Implement BXNS, and banked stack pointers
Implement the BXNS v8M instruction, which is like BX but will do a
jump-and-switch-to-NonSecure if the branch target address has bit 0
clear.

This is the first piece of code which implements "switch to the
other security state", so the commit also includes the code to
switch the stack pointers around, which is the only complicated
part of switching security state.

BLXNS is more complicated than just "BXNS but set the link register",
so we leave it for a separate commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-21-git-send-email-peter.maydell@linaro.org
2017-09-07 13:54:54 +01:00
Peter Maydell
8bfc26ea30 target/arm: Make CONTROL register banked for v8M
Make the CONTROL register banked if v8M security extensions are enabled.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-10-git-send-email-peter.maydell@linaro.org
2017-09-07 13:54:53 +01:00
Peter Maydell
1e577cc7cf target/arm: Add state field, feature bit and migration for v8M secure state
As the first step in implementing ARM v8M's security extension:
 * add a new feature bit ARM_FEATURE_M_SECURITY
 * add the CPU state field that indicates whether the CPU is
   currently in the secure state
 * add a migration subsection for this new state
   (we will add the Secure copies of banked register state
   to this subsection in later patches)
 * add a #define for the one new-in-v8M exception type
 * make the CPU debug log print S/NS status

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-4-git-send-email-peter.maydell@linaro.org
2017-09-07 13:54:52 +01:00
Richard Henderson
d0264d86b0 target/arm: Perform per-insn cross-page check only for Thumb
ARM is a fixed-length ISA and we can compute the page crossing
condition exactly once during init_disas_context.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Richard Henderson
722ef0a562 target/arm: Split out thumb_tr_translate_insn
We need not check for ARM vs Thumb state in order to dispatch
disassembly of every instruction.

Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Richard Henderson
f7708456aa target/arm: Move ss check to init_disas_context
We can check for single-step just once.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Lluís Vilanova
2316922420 target/arm: [tcg] Port to generic translation framework
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002631325.22386.10348327185029496649.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Lluís Vilanova
4013f7fc81 target/arm: [tcg] Port to disas_log
Incrementally paves the way towards using the generic instruction translation
loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002582711.22386.191527630537864599.stgit@frigg.lan>
[rth: Move tb->size computation and use that result.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Lluís Vilanova
70d3c035ae target/arm: [tcg] Port to tb_stop
Incrementally paves the way towards using the generic instruction translation
loop.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002534291.22386.13499916738708680298.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:48 -07:00
Lluís Vilanova
13189a9080 target/arm: [tcg] Port to translate_insn
Incrementally paves the way towards using the generic instruction translation
loop.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002485863.22386.13949856269576226529.stgit@frigg.lan>
[rth: Adjust for translate_insn interface change.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
a68956ad7f target/arm: [tcg,a64] Port to insn_start
Incrementally paves the way towards using the generic instruction translation
loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002413187.22386.156315485813606121.stgit@frigg.lan>
[rth: Use DISAS_TOO_MANY for "execute only one more" after bp.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
f62bd897e6 target/arm: [tcg] Port to insn_start
Incrementally paves the way towards using the generic instruction translation
loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002388959.22386.12439646324427589940.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
b14768544f target/arm: [tcg] Port to tb_start
Incrementally paves the way towards using the generic instruction translation
loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002364681.22386.1701754996184325808.stgit@frigg.lan>
[rth: Adjust for tb_start interface change.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
1d8a553523 target/arm: [tcg] Port to init_disas_context
Incrementally paves the way towards using the generic instruction translation
loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002316201.22386.12115078843605656029.stgit@frigg.lan>
[rth: Adjust for max_insns interface change.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
dcba3a8d44 target/arm: [tcg] Port to DisasContextBase
Incrementally paves the way towards using the generic
instruction translation loop.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Benneé <alex.benee@linaro.org>
Message-Id: <150002291931.22386.11441154993010495674.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Richard Henderson
3805c2eba8 target/arm: Delay check for magic kernel page
There's nothing magic about the exception that we generate in order
to execute the magic kernel page.  We can and should allow gdb to
set a breakpoint at this location.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Lluís Vilanova
77fc6f5e28 target: [tcg] Use a generic enum for DISAS_ values
Used later. An enum makes expected values explicit and
bounds the value space of switches.

Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <150002049746.22386.2316077281615710615.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Richard Henderson
a0c231e651 target/arm: Use DISAS_NORETURN
Fold DISAS_EXC and DISAS_TB_JUMP into DISAS_NORETURN.

In both cases all following code is dead.  In the first
case because we have exited the TB via exception; in the
second case because we have exited the TB via goto_tb
and its associated machinery.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06 08:06:47 -07:00
Peter Maydell
5b906f3589 target/arm: Make arm_cpu_dump_state() handle the M-profile XPSR
Make the arm_cpu_dump_state() debug logging handle the M-profile XPSR
rather than assuming it's an A-profile CPSR.  On M profile the PSR
line of a register dump will now look like this:

XPSR=41000000 -Z-- T priv-thread

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-12-git-send-email-peter.maydell@linaro.org
2017-09-04 15:21:52 +01:00
Peter Maydell
ebfe27c593 target/arm: Tighten up Thumb decode where new v8M insns will be
Tighten up the T32 decoder in the places where new v8M instructions
will be:
 * TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...
   which is UNPREDICTABLE:
   make the UNPREDICTABLE behaviour be to UNDEF
 * BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
   which in previous architectural versions are SBZ:
   enforce the SBZ via UNDEF rather than ignoring it, and move
   the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
 * SG is in the encoding which would be LDRD/STRD with rn = r15;
   this is UNPREDICTABLE and we currently UNDEF:
   move this check further up the code so that we don't leak
   TCG temporaries in the UNDEF case and have a better place
   to put the SG decode.

This means that if a v8M binary is accidentally run on v7M
or if a test case hits something that we haven't implemented
yet the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).

In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-5-git-send-email-peter.maydell@linaro.org
2017-09-04 15:21:51 +01:00
Lluís Vilanova
9c489ea6be tcg: Pass generic CPUState to gen_intermediate_code()
Needed to implement a target-agnostic gen_intermediate_code()
in the future.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002025498.22386.18051908483085660588.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-07-19 14:45:16 -07:00
Aurelien Jarno
68cedf733a target/arm: optimize aarch32 rev16
Use the same mask to avoid having to load two different constants, as
suggested by Richard Henderson.
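
A sketch of the single-mask sequence being described (the TCG ops are
real; the exact temp management in translate.c may differ slightly):

    static void gen_rev16(TCGv_i32 var)
    {
        TCGv_i32 tmp = tcg_temp_new_i32();
        TCGv_i32 mask = tcg_const_i32(0x00ff00ff);  /* one constant for both halves */
        tcg_gen_shri_i32(tmp, var, 8);
        tcg_gen_and_i32(tmp, tmp, mask);            /* high bytes moved down */
        tcg_gen_and_i32(var, var, mask);            /* low bytes kept in place */
        tcg_gen_shli_i32(var, var, 8);              /* ... then moved up */
        tcg_gen_or_i32(var, var, tmp);              /* bytes swapped per halfword */
        tcg_temp_free_i32(mask);
        tcg_temp_free_i32(tmp);
    }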

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170516230159.4195-2-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-07-19 14:45:15 -07:00
Alex Bennée
b29fd33db5 target/arm: use DISAS_EXIT for eret handling
Previously DISAS_JUMP ensured we exited the execution loop, but with
the optimisation of 8a6b28c7 (optimize indirect branches) we might not
leave the loop at all. This means that if pending interrupts are
unmasked by the change to the IRQ flags we might never get around to
servicing them. You usually notice this by seeing the lookup_tb_ptr()
helper gainfully chaining TBs together while cpu->interrupt_request
remains high and exit_request has not been set.

This breaks amongst other things the OPTEE test suite which executes
an eret from the secure world after a non-secure world IRQ has gone
pending which then never gets serviced.

Instead of using the previously implied semantics of DISAS_JUMP we use
DISAS_EXIT which will always exit the run-loop.
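
A minimal sketch of the shape of the fix, using the rfe path as an
example (helper and field names as used in translate.c of this era;
treat the exact body as illustrative):

    static void gen_rfe(DisasContext *s, TCGv_i32 pc, TCGv_i32 cpsr)
    {
        gen_helper_cpsr_write_eret(cpu_env, cpsr);  /* may unmask pending IRQs */
        tcg_temp_free_i32(cpsr);
        store_reg(s, 15, pc);
        s->is_jmp = DISAS_EXIT;   /* was DISAS_JUMP: always leave the run-loop */
    }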

CC: Etienne Carriere <etienne.carriere@linaro.org>
CC: Joakim Bech <joakim.bech@linaro.org>
CC: Jaroslaw Pelczar <j.pelczar@samsung.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Emilio G. Cota <cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-7-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée
0b609cc128 target/arm: use gen_goto_tb for ISB handling
While an ISB ensures that any raised IRQs take effect from the next
instruction, it doesn't cause any to be raised by itself. We can
therefore use a simple TB exit for ISB instructions and rely on the
exit_request check at the top of each TB to deal with exiting if
needed.
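
In code terms this amounts to something like the fragment below in the
barrier decode (a sketch; the PC expression is as used by the decoder
of this era):

    /* ISB: end the TB and continue at the next insn; the exit_request
     * check at the head of the next TB picks up any pending IRQ, so no
     * explicit cpu_exit is needed.
     */
    gen_goto_tb(s, 0, s->pc & ~1);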

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée
4cae8f56fb target/arm/translate: ensure gen_goto_tb sets exit flags
As the gen_goto_tb function can do both static and dynamic jumps it
should also set the is_jmp field. This matches the behaviour of the
a64 code.
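
A simplified sketch of the resulting shape (before the later
lookup_and_goto_ptr optimisation; the point is the unconditional
is_jmp assignment covering both paths):

    static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
    {
        if (use_goto_tb(s, dest)) {
            /* static jump: chain directly to the target TB */
            tcg_gen_goto_tb(n);
            gen_set_pc_im(s, dest);
            tcg_gen_exit_tb((uintptr_t)s->tb + n);
        } else {
            /* dynamic jump: set the PC and leave the TB */
            gen_set_pc_im(s, dest);
            tcg_gen_exit_tb(0);
        }
        s->is_jmp = DISAS_TB_JUMP;   /* the fix: record the jump in both cases */
    }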

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-5-alex.bennee@linaro.org
[tweak to multiline comment formatting]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée
e8d5230221 target/arm/translate: make DISAS_UPDATE match declared semantics
DISAS_UPDATE should be used when CPU state other than just the PC has
been updated, and we should therefore exit the TCG runtime and return
to the main execution loop rather than assuming that DISAS_JUMP would
do that.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-3-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Emilio G. Cota
8a6b28c7b5 target/arm: optimize indirect branches
Speed up indirect branches by jumping to the target if it is valid.

Softmmu measurements (see later commit for user-mode results):

Note: baseline (i.e. speedup == 1x) is QEMU v2.9.0.

- Impact on Boot time

| setup  | ARM debian jessie boot+shutdown time | stddev |
|--------+--------------------------------------+--------|
| v2.9.0 |                                 8.84 |   0.07 |
| +cross |                                 8.85 |   0.03 |
| +jr    |                                 8.83 |   0.06 |

- NBench, arm-softmmu (debian jessie guest). Host: Intel i7-4790K @ 4.00GHz

  [ASCII bar chart elided: per-benchmark speedup over the v2.9.0
   baseline for 'cross' and 'cross+jr' (y-axis 0.95x to 1.3x), across
   ASSIGNMENT, BITFIELD, FOURIER, FP EMULATION, HUFFMAN, LU
   DECOMPOSITION, NEURAL NET, NUMERIC SORT, STRING SORT and their
   harmonic mean (hmean).]
  png: http://imgur.com/eOLmZNR

NB. 'cross' represents the previous commit.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-8-git-send-email-cota@braap.org>
[rth: Replace gen_jr global variable with DISAS_EXIT state.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-06-05 09:25:42 -07:00
Emilio G. Cota
7ad55b4ffd target/arm: optimize cross-page direct jumps in softmmu
Instead of unconditionally exiting to the exec loop, use the
lookup_and_goto_ptr helper to jump to the target if it is valid.

Perf impact: see next commit's log.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-7-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-06-05 09:25:42 -07:00
Peter Maydell
3bef701256 arm: Implement HFNMIENA support for M profile MPU
Implement HFNMIENA support for the M profile MPU. This bit controls
whether the MPU is treated as enabled when executing at execution
priorities of less than zero (in NMI, HardFault or with the FAULTMASK
bit set).

Doing this requires us to use a different MMU index for "running
at execution priority < 0", because we will have different
access permissions for that case versus the normal case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-14-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:49 +01:00
Peter Maydell
e7b921c2d9 arm: Use different ARMMMUIdx values for M profile
Make M profile use completely separate ARMMMUIdx values from
those that A profile CPUs use. This is a prelude to adding
support for the MPU and for v8M, which together will require
6 MMU indexes which don't map cleanly onto the A profile
uses:
 non secure User
 non secure Privileged
 non secure Privileged, execution priority < 0
 secure User
 secure Privileged
 secure Privileged, execution priority < 0

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:47 +01:00
Peter Maydell
8bd5c82030 arm: Add support for M profile CPUs having different MMU index semantics
The M profile CPU's MPU has an awkward corner case which we
would like to implement with a different MMU index.

We can avoid having to bump the number of MMU modes ARM
uses, because some of our existing MMU indexes are only
used by non-M-profile CPUs, so we can borrow one.
To avoid that getting too confusing, clean up the code
to try to keep the two meanings of the index separate.

Instead of ARMMMUIdx enum values being identical to core QEMU
MMU index values, they are now the core index values with some
high bits set. Any particular CPU always uses the same high
bits (so eventually A profile cores and M profile cores will
use different bits). New functions arm_to_core_mmu_idx()
and core_to_arm_mmu_idx() convert between the two.

In general core index values are stored in 'int' types, and
ARM values are stored in ARMMMUIdx types.
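
A minimal sketch of that scheme (the mask and flag values below are
placeholders; the real definitions live in target/arm/cpu.h):

    #define ARM_MMU_IDX_A            0x10  /* placeholder: A-profile high bits */
    #define ARM_MMU_IDX_M            0x40  /* placeholder: M-profile high bits */
    #define ARM_MMU_IDX_COREIDX_MASK 0x7   /* placeholder core-index width */

    static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
    {
        return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;   /* strip the ARM high bits */
    }

    static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
    {
        /* put the profile-specific high bits back for this CPU */
        return mmu_idx | (arm_feature(env, ARM_FEATURE_M) ? ARM_MMU_IDX_M
                                                          : ARM_MMU_IDX_A);
    }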

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:47 +01:00
Peter Maydell
f4e8e4edda arm: Remove workarounds for old M-profile exception return implementation
Now that we've rewritten M-profile exception return so that the magic
PC values are not visible to other parts of QEMU, we can delete the
special casing of them elsewhere.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-10-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
3bb8a96f53 arm: Implement M profile exception return properly
On M profile, return from exceptions happen when code in Handler mode
executes one of the following function call return instructions:
 * POP or LDM which loads the PC
 * LDR to PC
 * BX register
and the new PC value is 0xFFxxxxxx.

QEMU tries to implement this by not treating the instruction
specially but then catching the attempt to execute from the magic
address value.  This is not ideal, because:
 * there are guest visible differences from the architecturally
   specified behaviour (for instance jumping to 0xFFxxxxxx via a
   different instruction should not cause an exception return but it
   will in the QEMU implementation)
 * we have to account for it in various places (like refusing to take
   an interrupt if the PC is at a magic value, and making sure that
   the MPU doesn't deny execution at the magic value addresses)

Drop these hacks, and instead implement exception return the way the
architecture specifies -- by having the relevant instructions check
for the magic value and raise the 'do an exception return' QEMU
internal exception immediately.

The effect on the generated code is minor:

 bx lr, old code (and new code for Thread mode):
  TCG:
   mov_i32 tmp5,r14
   movi_i32 tmp6,$0xfffffffffffffffe
   and_i32 pc,tmp5,tmp6
   movi_i32 tmp6,$0x1
   and_i32 tmp5,tmp5,tmp6
   st_i32 tmp5,env,$0x218
   exit_tb $0x0
   set_label $L0
   exit_tb $0x7f2aabd61993
  x86_64 generated code:
   0x7f2aabe87019:  mov    %ebx,%ebp
   0x7f2aabe8701b:  and    $0xfffffffffffffffe,%ebp
   0x7f2aabe8701e:  mov    %ebp,0x3c(%r14)
   0x7f2aabe87022:  and    $0x1,%ebx
   0x7f2aabe87025:  mov    %ebx,0x218(%r14)
   0x7f2aabe8702c:  xor    %eax,%eax
   0x7f2aabe8702e:  jmpq   0x7f2aabe7c016

 bx lr, new code when in Handler mode:
  TCG:
   mov_i32 tmp5,r14
   movi_i32 tmp6,$0xfffffffffffffffe
   and_i32 pc,tmp5,tmp6
   movi_i32 tmp6,$0x1
   and_i32 tmp5,tmp5,tmp6
   st_i32 tmp5,env,$0x218
   movi_i32 tmp5,$0xffffffffff000000
   brcond_i32 pc,tmp5,geu,$L1
   exit_tb $0x0
   set_label $L1
   movi_i32 tmp5,$0x8
   call exception_internal,$0x0,$0,env,tmp5
  x86_64 generated code:
   0x7fe8fa1264e3:  mov    %ebp,%ebx
   0x7fe8fa1264e5:  and    $0xfffffffffffffffe,%ebx
   0x7fe8fa1264e8:  mov    %ebx,0x3c(%r14)
   0x7fe8fa1264ec:  and    $0x1,%ebp
   0x7fe8fa1264ef:  mov    %ebp,0x218(%r14)
   0x7fe8fa1264f6:  cmp    $0xff000000,%ebx
   0x7fe8fa1264fc:  jae    0x7fe8fa126509
   0x7fe8fa126502:  xor    %eax,%eax
   0x7fe8fa126504:  jmpq   0x7fe8fa122016
   0x7fe8fa126509:  mov    %r14,%rdi
   0x7fe8fa12650c:  mov    $0x8,%esi
   0x7fe8fa126511:  mov    $0x56095dbeccf5,%r10
   0x7fe8fa12651b:  callq  *%r10

which is a difference of one cmp/branch-not-taken. This will
be lost in the noise of having to exit generated code and
look up the next TB anyway.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-9-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
064c379c99 arm: Track M profile handler mode state in TB flags
For M profile exception-return handling we'd like to generate different
code for some instructions depending on whether we are in Handler
mode or Thread mode. This isn't the same as "are we privileged
or user", so we need an extra bit in the TB flags to distinguish.
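
A sketch of the extra flag bit (the shift value and macro names here
are placeholders for whatever target/arm actually assigns):

    #define ARM_TBFLAG_HANDLER_SHIFT 21                            /* placeholder */
    #define ARM_TBFLAG_HANDLER_MASK  (1u << ARM_TBFLAG_HANDLER_SHIFT)

    /* in cpu_get_tb_cpu_state(), for M-profile cores: */
    if (arm_feature(env, ARM_FEATURE_M) && env->v7m.exception != 0) {
        *flags |= ARM_TBFLAG_HANDLER_MASK;   /* currently in Handler mode */
    }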

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-8-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
b636649f5a arm: Abstract out "are we singlestepping" test to utility function
We now test for "are we singlestepping" in several places and
it's not a trivial check because we need to care about both
architectural singlestep and QEMU gdbstub singlestep. We're
also about to add another place that needs to make this check,
so pull the condition out into a function.
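
A minimal sketch of such a helper (the name and exact field names are
assumptions based on the description above):

    /* true if either architectural single-step (ss_active) or the
     * gdbstub's single-step is in effect for this translation */
    static inline bool is_singlestepping(DisasContext *s)
    {
        return s->singlestep_enabled || s->ss_active;
    }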

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-7-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
f021b2c462 arm: Move condition-failed codepath generation out of if()
Move the code to generate the "condition failed" instruction
codepath out of the if (singlestepping) {} else {}. This
will allow adding support for handling a new is_jmp type
which can't be neatly split into "singlestepping case"
versus "not singlestepping case".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-6-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
4d5e8c969a arm: Move gen_set_condexec() and gen_set_pc_im() up in the file
Move the utility routines gen_set_condexec() and gen_set_pc_im()
up in the file, as we will want to use them from a function
placed earlier in the file than their current location.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-5-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
5425415ebb arm: Factor out "generate right kind of step exception"
We currently have two places that do:
            if (dc->ss_active) {
                gen_step_complete_exception(dc);
            } else {
                gen_exception_internal(EXCP_DEBUG);
            }

Factor this out into its own function, as we're about to add
a third place that needs the same logic.
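
For instance, the factored-out helper could look like this (the
function name is an assumption; the body is the snippet quoted above):

    static void gen_singlestep_exception(DisasContext *dc)
    {
        /* Generate the right kind of exception for singlestep: the
         * architectural step-complete exception if ss_active, otherwise
         * the QEMU-internal gdbstub debug exception.
         */
        if (dc->ss_active) {
            gen_step_complete_exception(dc);
        } else {
            gen_exception_internal(EXCP_DEBUG);
        }
    }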

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-4-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
bedb8a6b09 arm: Thumb shift operations should not permit interworking branches
In Thumb mode, the only instructions which can cause an interworking
branch by writing the PC are BLX, BX, BXJ, LDR, POP and LDM. Unlike
ARM mode, data processing instructions which target the PC do not
cause interworking branches.

When we added support for doing interworking branches on writes to
PC from data processing instructions in commit 21aeb3430c, we
accidentally changed a Thumb instruction to have interworking
branch behaviour for writes to PC. (MOV, MOVS register-shifted
register, encoding T2; this is the standard encoding for
LSL/LSR/ASR/ROR (register).)

For this encoding, behaviour with Rd == R15 is specified as
UNPREDICTABLE, so allowing an interworking branch is within
spec, but it's confusing and differs from our handling of this
class of UNPREDICTABLE for other Thumb ALU operations. Make
it perform a simple (non-interworking) branch like the others.
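
In translate.c terms this is the difference between the two store
helpers (a sketch of the call sites, not the full decode):

    store_reg(s, rd, tmp);      /* plain write: a write to PC is a simple branch */
    store_reg_bx(s, rd, tmp);   /* write to PC may interwork, BX-style */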

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-3-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
9d7c59c84d arm: Don't implement BXJ on M-profile CPUs
For M-profile CPUs, the BXJ instruction does not exist at all, and
the encoding should always UNDEF. We were accidentally implementing
it to behave like A-profile BXJ; correct the error.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-2-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell
b28b3377d7 arm: Fix APSR writes via M profile MSR
Our implementation of writes to the APSR for M-profile via the MSR
instruction was badly broken.

First and worst, we had the sense wrong on the test of bit 2 of the
SYSm field -- this is supposed to request an APSR write if bit 2 is 0
but we were doing it if bit 2 was 1.  This bug was introduced in
commit 58117c9bb4, so hasn't been in a QEMU release.

Secondly, the choice of exactly which parts of APSR should be written
is defined by bits in the 'mask' field.  We were not passing these
through from instruction decode, making it impossible to check them
in the helper.

Pass the mask bits through from the instruction decode to the helper
function and process them appropriately; fix the wrong sense of the
SYSm bit 2 check.

Invalid mask values and invalid combinations of mask and register
number are UNPREDICTABLE; we choose to treat them as if the mask
values were valid.
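
A sketch of the corrected sense of the check (variable names are
illustrative; extract32() and xpsr_write() are existing helpers):

    if (extract32(sysm, 2, 1) == 0) {
        /* APSR write requested (bit 2 clear): the instruction's mask bits,
         * now passed through from decode, select which fields to update. */
        xpsr_write(env, val, apsrmask);   /* 'apsrmask' derived from the mask bits */
    }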

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1487616072-9226-5-git-send-email-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-03-20 12:41:44 +00:00
Peter Maydell
3d54026fb0 arm: Enforce should-be-1 bits in MRS decoding
The MRS instruction requires that bits [19..16] are all 1s, and for
A/R profile also that bits [7..0] are all 0s.  At this point in the
decode tree we have checked all of the rest of the instruction but
were allowing these to be any value.  If these bits are not set then
the result is architecturally UNPREDICTABLE, but choosing to UNDEF is
more helpful to the user and avoids unexpected odd behaviour if the
encodings are used for some purpose in future architecture versions.
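
A sketch of the tightened check (extract32() is QEMU's bit-field
helper; 'illegal_op' is the decoder's usual UNDEF path):

    /* MRS: bits [19:16] must be 1s; for A/R profile bits [7:0] must be 0s */
    if (extract32(insn, 16, 4) != 0xf ||
        (!arm_dc_feature(s, ARM_FEATURE_M) && extract32(insn, 0, 8) != 0)) {
        goto illegal_op;   /* UNPREDICTABLE: we choose to UNDEF */
    }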

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-4-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell
43ac657423 arm: Don't decode MRS(banked) or MSR(banked) for M profile
M profile doesn't have the MSR(banked) and MRS(banked) instructions
and uses the encodings for different kinds of M-profile MRS/MSR.
Guard the relevant bits of the decode logic to make sure we don't
fall into them by accident on M-profile.

(The bit being checked for this (bit 5) is part of the SYSm field on
M-profile, but since no currently allocated system registers have
encodings with bit 5 of SYSm set, this hasn't been a problem in
practice.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-3-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell
001b3cab51 arm: HVC and SMC encodings don't exist for M profile
M profile doesn't have the HVC or SMC encodings, so make them always
UNDEF rather than generating calls to helper functions that assume
A/R profile.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-2-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell
e13886e3a7 armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
M profile doesn't implement ARM, and the architecturally required
behaviour for attempts to execute with the Thumb bit clear is to
generate a UsageFault with the CFSR INVSTATE bit set.  We were
incorrectly implementing this as generating an UNDEFINSTR UsageFault;
fix this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:19 +00:00
Alex Bennée
c22edfebff target-arm: don't generate WFE/YIELD calls for MTTCG
The WFE and YIELD instructions are really only hints and in TCG's case
they were useful to move the scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention of the BQL.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
Peter Maydell
9bb6558a21 target/arm: A32, T32: Create Instruction Syndromes for Data Aborts
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.

This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-02-07 18:30:00 +00:00
Peter Maydell
63f26fcfda target/arm: Abstract out pbit/wbit tests in ARM ldr/str decode
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)
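
In code terms the change amounts to something like this (a sketch; the
surrounding decode is unchanged):

    bool pbit = extract32(insn, 24, 1);   /* P: pre-indexed addressing */
    bool wbit = extract32(insn, 21, 1);   /* W: base-register writeback */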

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-02-07 18:29:59 +00:00
Peter Maydell
7517748e3f armv7m: Report no-coprocessor faults correctly
For v7M attempts to access a nonexistent coprocessor are reported
differently from plain undefined instructions (as UsageFaults of type
NOCP rather than type UNDEFINSTR).  Split them out into a new
EXCP_NOCP so we can report the FSR value correctly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-8-git-send-email-peter.maydell@linaro.org
2017-01-27 15:29:08 +00:00
Michael Davidsaver
542b3478a0 armv7m: Replace armv7m.hack with unassigned_access handler
For v7m we need to catch attempts to execute from special
addresses at 0xfffffff0 and above. Previously we did this
with the aid of a hacky special purpose lump of memory
in the address space and a check in translate.c for whether
we were translating code at those addresses.

We can implement this more cleanly using a CPU
unassigned access handler which throws the exception
if the unassigned access is for one of the special addresses.

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-3-git-send-email-peter.maydell@linaro.org
[PMM:
 * drop the deletion of the "don't interrupt if PC is magic"
   code in arm_v7m_cpu_exec_interrupt() -- this is still
   required
 * don't generate an exception for unassigned accesses
   which aren't to the magic address -- although doing
    this is in theory correct, in practice it will break
   currently working guests which rely on the RAZ/WI
   behaviour when they touch devices which we haven't
   modelled.
 * trigger EXCP_EXCEPTION_EXIT on is_exec, not !is_write
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 15:20:21 +00:00
Richard Henderson
7539a012f6 target-arm: Use clz opcode
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-10 08:06:11 -08:00
Richard Henderson
59a71b4c5b target-arm: Use new deposit and extract ops
Use the new primitives for UBFX and SBFX.
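
That is, the two bitfield-extract instructions map onto the new ops
roughly as follows (a sketch; the ops take destination, source,
least-significant bit, and field width):

    tcg_gen_extract_i32(dest, src, lsb, width);    /* UBFX: zero-extended field */
    tcg_gen_sextract_i32(dest, src, lsb, width);   /* SBFX: sign-extended field */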

Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-10 08:06:10 -08:00
Thomas Huth
fcf5ef2ab5 Move target-* CPU file into a target/ folder
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.

Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2016-12-20 21:52:12 +01:00