Commit Graph

581 Commits

Alex Bennée b29fd33db5 target/arm: use DISAS_EXIT for eret handling
Previously DISAS_JUMP ensured we left the execution loop, but with the
optimisation of 8a6b28c7 (optimize indirect branches) we might not
leave it. This means that if pending interrupts are unmasked by a
change to the IRQ flags we might never get around to servicing them.
You usually notice this as the lookup_tb_ptr() helper gainfully
chaining TBs together while cpu->interrupt_request remains high and
exit_request has not been set.

Among other things this breaks the OP-TEE test suite, which executes
an eret from the secure world after a non-secure-world IRQ has gone
pending; that IRQ then never gets serviced.

Instead of relying on the previously implied semantics of DISAS_JUMP,
use DISAS_EXIT, which always exits the run loop.
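
In outline the fix is just a different is_jmp request from the eret
path; a hedged sketch (the helper name below is illustrative, not the
actual function in translate.c):

    /* Sketch: once the eret has restored CPSR and the PC, request a
     * full exit from the cpu_exec() loop rather than a chained jump,
     * so cpu->interrupt_request is re-examined before any more guest
     * code runs.
     */
    static void gen_eret_done(DisasContext *s)    /* illustrative name */
    {
        /* ... existing code popping SPSR into CPSR and loading the PC ... */
        s->is_jmp = DISAS_EXIT;   /* was DISAS_JUMP, which may now chain TBs */
    }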

CC: Etienne Carriere <etienne.carriere@linaro.org>
CC: Joakim Bech <joakim.bech@linaro.org>
CC: Jaroslaw Pelczar <j.pelczar@samsung.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Emilio G. Cota <cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-7-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée 0b609cc128 target/arm: use gen_goto_tb for ISB handling
While an ISB will ensure that any raised IRQs happen on the next
instruction, it doesn't cause any to be raised by itself. We can
therefore use a simple TB exit for ISB instructions and rely on the
exit_request check at the top of each TB to deal with exiting if
needed.
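
A paraphrased sketch of the resulting decode (the surrounding barrier
decode is elided; gen_goto_tb and s->pc are as in the contemporary
target/arm/translate.c):

    /* Sketch: end the TB with a plain direct jump to the next insn;
     * the exit_request check at the head of every TB then picks up
     * any IRQ that has become pending in the meantime.
     */
    gen_goto_tb(s, 0, s->pc);    /* s->pc: address of the next instruction */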

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée 4cae8f56fb target/arm/translate: ensure gen_goto_tb sets exit flags
As the gen_goto_tb function can do both static and dynamic jumps, it
should also set the is_jmp field. This matches the behaviour of the
a64 code.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-5-alex.bennee@linaro.org
[tweak to multiline comment formatting]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Alex Bennée e8d5230221 target/arm/translate: make DISAS_UPDATE match declared semantics
DISAS_UPDATE should be used when wider CPU state than just the PC has
been updated and we should therefore exit the TCG runtime and return
to the main execution loop, rather than assuming that DISAS_JUMP will
do that.
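
Together with the later patches in this series (listed above), the
end-of-TB dispatch ends up looking roughly like this (a sketch; helper
names approximate and the singlestep path is omitted):

    switch (dc->is_jmp) {
    case DISAS_NEXT:
        gen_goto_tb(dc, 1, dc->pc);   /* static chain to the next TB          */
        break;
    case DISAS_JUMP:
        gen_goto_ptr();               /* PC already written: try a direct hop */
        break;
    case DISAS_UPDATE:
        gen_set_pc_im(dc, dc->pc);    /* wider state changed: sync the PC...  */
        /* fall through */
    default:
        tcg_gen_exit_tb(0);           /* ...and return to the main loop       */
        break;
    }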

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-3-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-07-17 13:36:07 +01:00
Emilio G. Cota 8a6b28c7b5 target/arm: optimize indirect branches
Speed up indirect branches by jumping to the target if it is valid.

Softmmu measurements (see later commit for user-mode results):

Note: baseline (i.e. speedup == 1x) is QEMU v2.9.0.

- Impact on Boot time

| setup  | ARM debian jessie boot+shutdown time | stddev |
|--------+--------------------------------------+--------|
| v2.9.0 |                                 8.84 |   0.07 |
| +cross |                                 8.85 |   0.03 |
| +jr    |                                 8.83 |   0.06 |

- NBench, arm-softmmu (debian jessie guest). Host: Intel i7-4790K @ 4.00GHz

  [ASCII bar chart: speedup over v2.9.0 for 'cross' and 'cross+jr'
   across the NBench tests (ASSIGNMENT, BITFIELD, FOURIER, FP EMULATION,
   HUFFMAN, IDEA, LU DECOMPOSITION, NEURAL NET, NUMERIC SORT, STRING
   SORT) and their harmonic mean (hmean); speedups range from roughly
   1x up to about 1.25x, with 'cross+jr' generally ahead of 'cross']
  png: http://imgur.com/eOLmZNR

NB. 'cross' represents the previous commit.
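
The mechanism, roughly (a hedged sketch of the indirect-branch path;
the real patch resolves DISAS_JUMP at the end of the TB via
tcg_gen_lookup_and_goto_ptr rather than exiting to the main loop):

    /* Sketch: an indirect branch such as "bx reg" writes the target
     * into the PC and the new Thumb state, then flags DISAS_JUMP; at
     * TB end this becomes a lookup of the destination TB, which we
     * jump to directly if it is valid instead of leaving the loop.
     */
    static void gen_bx(DisasContext *s, TCGv_i32 var)
    {
        tcg_gen_andi_i32(cpu_R[15], var, ~1);   /* PC, Thumb bit stripped */
        tcg_gen_andi_i32(var, var, 1);          /* new Thumb state        */
        store_cpu_field(var, thumb);
        s->is_jmp = DISAS_JUMP;
    }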

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-8-git-send-email-cota@braap.org>
[rth: Replace gen_jr global variable with DISAS_EXIT state.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-06-05 09:25:42 -07:00
Emilio G. Cota 7ad55b4ffd target/arm: optimize cross-page direct jumps in softmmu
Instead of unconditionally exiting to the exec loop, use the
lookup_and_goto_ptr helper to jump to the target if it is valid.

Perf impact: see next commit's log.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-7-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-06-05 09:25:42 -07:00
Peter Maydell 3bef701256 arm: Implement HFNMIENA support for M profile MPU
Implement HFNMIENA support for the M profile MPU. This bit controls
whether the MPU is treated as enabled when executing at execution
priorities of less than zero (in NMI, HardFault or with the FAULTMASK
bit set).

Doing this requires us to use a different MMU index for "running
at execution priority < 0", because we will have different
access permissions for that case versus the normal case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-14-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:49 +01:00
Peter Maydell e7b921c2d9 arm: Use different ARMMMUIdx values for M profile
Make M profile use completely separate ARMMMUIdx values from
those that A profile CPUs use. This is a prelude to adding
support for the MPU and for v8M, which together will require
6 MMU indexes which don't map cleanly onto the A profile
uses:
 non secure User
 non secure Privileged
 non secure Privileged, execution priority < 0
 secure User
 secure Privileged
 secure Privileged, execution priority < 0
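
Purely as an illustration of that set (these enumerator names are
hypothetical, not the ones the patch introduces):

    /* Hypothetical spelling of the six M-profile translation regimes. */
    typedef enum MProfileMMUIdx {
        MPROF_MMUIDX_NS_USER,       /* non-secure, unprivileged              */
        MPROF_MMUIDX_NS_PRIV,       /* non-secure, privileged                */
        MPROF_MMUIDX_NS_PRIV_NEG,   /* non-secure, privileged, exec prio < 0 */
        MPROF_MMUIDX_S_USER,        /* secure, unprivileged                  */
        MPROF_MMUIDX_S_PRIV,        /* secure, privileged                    */
        MPROF_MMUIDX_S_PRIV_NEG,    /* secure, privileged, exec prio < 0     */
    } MProfileMMUIdx;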

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:47 +01:00
Peter Maydell 8bd5c82030 arm: Add support for M profile CPUs having different MMU index semantics
The M profile CPU's MPU has an awkward corner case which we
would like to implement with a different MMU index.

We can avoid having to bump the number of MMU modes ARM
uses, because some of our existing MMU indexes are only
used by non-M-profile CPUs, so we can borrow one.
To avoid that getting too confusing, clean up the code
to try to keep the two meanings of the index separate.

Instead of ARMMMUIdx enum values being identical to core QEMU
MMU index values, they are now the core index values with some
high bits set. Any particular CPU always uses the same high
bits (so eventually A profile cores and M profile cores will
use different bits). New functions arm_to_core_mmu_idx()
and core_to_arm_mmu_idx() convert between the two.

In general core index values are stored in 'int' types, and
ARM values are stored in ARMMMUIdx types.
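
Conceptually the conversion is just masking a profile tag out of, or
back into, the high bits; a sketch (the mask and tag values here are
illustrative, not those in the tree):

    #define ARM_MMU_IDX_COREIDX_MASK 0xf    /* low bits: core MMU index  */
    #define ARM_MMU_IDX_A            0x10   /* tag for A-profile regimes */

    static inline int arm_to_core_mmu_idx(int arm_mmu_idx)
    {
        return arm_mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
    }

    static inline int core_to_arm_mmu_idx(int core_mmu_idx)
    {
        /* Assumes an A-profile CPU; M profile would OR in its own tag. */
        return core_mmu_idx | ARM_MMU_IDX_A;
    }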

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org
2017-06-02 11:51:47 +01:00
Peter Maydell f4e8e4edda arm: Remove workarounds for old M-profile exception return implementation
Now that we've rewritten M-profile exception return so that the magic
PC values are not visible to other parts of QEMU, we can delete the
special casing of them elsewhere.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-10-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell 3bb8a96f53 arm: Implement M profile exception return properly
On M profile, return from an exception happens when code in Handler
mode executes one of the following function call return instructions:
 * POP or LDM which loads the PC
 * LDR to PC
 * BX register
and the new PC value is 0xFFxxxxxx.

QEMU tries to implement this by not treating the instruction
specially but then catching the attempt to execute from the magic
address value.  This is not ideal, because:
 * there are guest visible differences from the architecturally
   specified behaviour (for instance jumping to 0xFFxxxxxx via a
   different instruction should not cause an exception return but it
   will in the QEMU implementation)
 * we have to account for it in various places (like refusing to take
   an interrupt if the PC is at a magic value, and making sure that
   the MPU doesn't deny execution at the magic value addresses)

Drop these hacks, and instead implement exception return the way the
architecture specifies -- by having the relevant instructions check
for the magic value and raise the 'do an exception return' QEMU
internal exception immediately.

The effect on the generated code is minor:

 bx lr, old code (and new code for Thread mode):
  TCG:
   mov_i32 tmp5,r14
   movi_i32 tmp6,$0xfffffffffffffffe
   and_i32 pc,tmp5,tmp6
   movi_i32 tmp6,$0x1
   and_i32 tmp5,tmp5,tmp6
   st_i32 tmp5,env,$0x218
   exit_tb $0x0
   set_label $L0
   exit_tb $0x7f2aabd61993
  x86_64 generated code:
   0x7f2aabe87019:  mov    %ebx,%ebp
   0x7f2aabe8701b:  and    $0xfffffffffffffffe,%ebp
   0x7f2aabe8701e:  mov    %ebp,0x3c(%r14)
   0x7f2aabe87022:  and    $0x1,%ebx
   0x7f2aabe87025:  mov    %ebx,0x218(%r14)
   0x7f2aabe8702c:  xor    %eax,%eax
   0x7f2aabe8702e:  jmpq   0x7f2aabe7c016

 bx lr, new code when in Handler mode:
  TCG:
   mov_i32 tmp5,r14
   movi_i32 tmp6,$0xfffffffffffffffe
   and_i32 pc,tmp5,tmp6
   movi_i32 tmp6,$0x1
   and_i32 tmp5,tmp5,tmp6
   st_i32 tmp5,env,$0x218
   movi_i32 tmp5,$0xffffffffff000000
   brcond_i32 pc,tmp5,geu,$L1
   exit_tb $0x0
   set_label $L1
   movi_i32 tmp5,$0x8
   call exception_internal,$0x0,$0,env,tmp5
  x86_64 generated code:
   0x7fe8fa1264e3:  mov    %ebp,%ebx
   0x7fe8fa1264e5:  and    $0xfffffffffffffffe,%ebx
   0x7fe8fa1264e8:  mov    %ebx,0x3c(%r14)
   0x7fe8fa1264ec:  and    $0x1,%ebp
   0x7fe8fa1264ef:  mov    %ebp,0x218(%r14)
   0x7fe8fa1264f6:  cmp    $0xff000000,%ebx
   0x7fe8fa1264fc:  jae    0x7fe8fa126509
   0x7fe8fa126502:  xor    %eax,%eax
   0x7fe8fa126504:  jmpq   0x7fe8fa122016
   0x7fe8fa126509:  mov    %r14,%rdi
   0x7fe8fa12650c:  mov    $0x8,%esi
   0x7fe8fa126511:  mov    $0x56095dbeccf5,%r10
   0x7fe8fa12651b:  callq  *%r10

which is a difference of one cmp/branch-not-taken. This will
be lost in the noise of having to exit generated code and
look up the next TB anyway.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-9-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell 064c379c99 arm: Track M profile handler mode state in TB flags
For M profile exception-return handling we'd like to generate different
code for some instructions depending on whether we are in Handler
mode or Thread mode. This isn't the same as "are we privileged
or user", so we need an extra bit in the TB flags to distinguish.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-8-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell b636649f5a arm: Abstract out "are we singlestepping" test to utility function
We now test for "are we singlestepping" in several places and
it's not a trivial check because we need to care about both
architectural singlestep and QEMU gdbstub singlestep. We're
also about to add another place that needs to make this check,
so pull the condition out into a function.
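
The factored-out check is tiny; roughly (exact field spellings as in
the contemporary DisasContext):

    /* Sketch: one utility for "are we single-stepping this CPU?",
     * covering both architectural single step (ss_active) and the
     * gdbstub's single-step request.
     */
    static inline bool is_singlestepping(DisasContext *s)
    {
        return s->singlestep_enabled || s->ss_active;
    }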

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-7-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell f021b2c462 arm: Move condition-failed codepath generation out of if()
Move the code to generate the "condition failed" instruction
codepath out of the if (singlestepping) {} else {}. This
will allow adding support for handling a new is_jmp type
which can't be neatly split into "singlestepping case"
versus "not singlestepping case".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-6-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell 4d5e8c969a arm: Move gen_set_condexec() and gen_set_pc_im() up in the file
Move the utility routines gen_set_condexec() and gen_set_pc_im()
up in the file, as we will want to use them from a function
placed earlier in the file than their current location.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-5-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell 5425415ebb arm: Factor out "generate right kind of step exception"
We currently have two places that do:
            if (dc->ss_active) {
                gen_step_complete_exception(dc);
            } else {
                gen_exception_internal(EXCP_DEBUG);
            }

Factor this out into its own function, as we're about to add
a third place that needs the same logic.
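
The factored-out helper just wraps the pattern quoted above; roughly:

    /* Sketch: raise whichever debug exception fits the current
     * single-step mode.
     */
    static void gen_singlestep_exception(DisasContext *dc)
    {
        if (dc->ss_active) {
            gen_step_complete_exception(dc);     /* architectural step */
        } else {
            gen_exception_internal(EXCP_DEBUG);  /* gdbstub step       */
        }
    }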

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-4-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell bedb8a6b09 arm: Thumb shift operations should not permit interworking branches
In Thumb mode, the only instructions which can cause an interworking
branch by writing the PC are BLX, BX, BXJ, LDR, POP and LDM. Unlike
ARM mode, data processing instructions which target the PC do not
cause interworking branches.

When we added support for doing interworking branches on writes to
PC from data processing instructions in commit 21aeb3430c, we
accidentally changed a Thumb instruction to have interworking
branch behaviour for writes to PC. (MOV, MOVS register-shifted
register, encoding T2; this is the standard encoding for
LSL/LSR/ASR/ROR (register).)

For this encoding, behaviour with Rd == R15 is specified as
UNPREDICTABLE, so allowing an interworking branch is within
spec, but it's confusing and differs from our handling of this
class of UNPREDICTABLE for other Thumb ALU operations. Make
it perform a simple (non-interworking) branch like the others.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-3-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell 9d7c59c84d arm: Don't implement BXJ on M-profile CPUs
For M-profile CPUs, the BXJ instruction does not exist at all, and
the encoding should always UNDEF. We were accidentally implementing
it to behave like A-profile BXJ; correct the error.
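
The fix amounts to an early UNDEF in the BXJ decode when the CPU is
M-profile; a sketch following the translator's usual idiom:

    /* Sketch: BXJ has no encoding on M profile, so UNDEF it there
     * rather than decoding it as A-profile BXJ.
     */
    if (arm_dc_feature(s, ARM_FEATURE_M)) {
        goto illegal_op;
    }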

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-2-git-send-email-peter.maydell@linaro.org
2017-04-20 17:39:17 +01:00
Peter Maydell b28b3377d7 arm: Fix APSR writes via M profile MSR
Our implementation of writes to the APSR for M-profile via the MSR
instruction was badly broken.

First and worst, we had the sense wrong on the test of bit 2 of the
SYSm field -- this is supposed to request an APSR write if bit 2 is 0
but we were doing it if bit 2 was 1.  This bug was introduced in
commit 58117c9bb4, so hasn't been in a QEMU release.

Secondly, the choice of exactly which parts of APSR should be written
is defined by bits in the 'mask' field.  We were not passing these
through from instruction decode, making it impossible to check them
in the helper.

Pass the mask bits through from the instruction decode to the helper
function and process them appropriately; fix the wrong sense of the
SYSm bit 2 check.

Invalid mask values and invalid combinations of mask and register
number are UNPREDICTABLE; we choose to treat them as if the mask
values were valid.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1487616072-9226-5-git-send-email-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-03-20 12:41:44 +00:00
Peter Maydell 3d54026fb0 arm: Enforce should-be-1 bits in MRS decoding
The MRS instruction requires that bits [19..16] are all 1s, and for
A/R profile also that bits [7..0] are all 0s.  At this point in the
decode tree we have checked all of the rest of the instruction but
were allowing these to be any value.  If these bits are not set then
the result is architecturally UNPREDICTABLE, but choosing to UNDEF is
more helpful to the user and avoids unexpected odd behaviour if the
encodings are used for some purpose in future architecture versions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-4-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell 43ac657423 arm: Don't decode MRS(banked) or MSR(banked) for M profile
M profile doesn't have the MSR(banked) and MRS(banked) instructions
and uses the encodings for different kinds of M-profile MRS/MSR.
Guard the relevant bits of the decode logic to make sure we don't
accidentally fall into them on M-profile.

(The bit being checked for this (bit 5) is part of the SYSm field on
M-profile, but since no currently allocated system registers have
encodings with bit 5 of SYSm set, this hasn't been a problem in
practice.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-3-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell 001b3cab51 arm: HVC and SMC encodings don't exist for M profile
M profile doesn't have the HVC or SMC encodings, so make them always
UNDEF rather than generating calls to helper functions that assume
A/R profile.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-2-git-send-email-peter.maydell@linaro.org
2017-03-20 12:41:44 +00:00
Peter Maydell e13886e3a7 armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
M profile doesn't implement ARM, and the architecturally required
behaviour for attempts to execute with the Thumb bit clear is to
generate a UsageFault with the CFSR INVSTATE bit set.  We were
incorrectly implementing this as generating an UNDEFINSTR UsageFault;
fix this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:19 +00:00
Alex Bennée c22edfebff target-arm: don't generate WFE/YIELD calls for MTTCG
The WFE and YIELD instructions are really only hints, and in TCG's case
they were useful to move scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention on the BQL.
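
The shape of the change in the decoder, as a sketch (parallel_cpus was
the contemporary "are we running MTTCG?" flag; treat the exact spelling
as an assumption; WFE gets the same treatment):

    /* Sketch: only turn YIELD into a trip out to the scheduler when
     * vCPUs are round-robined on one host thread; under MTTCG each
     * vCPU has its own thread, so the hint can be a no-op.
     */
    case 1: /* yield */
        if (!parallel_cpus) {
            gen_set_pc_im(s, s->pc);
            s->is_jmp = DISAS_YIELD;
        }
        break;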

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
Peter Maydell 9bb6558a21 target/arm: A32, T32: Create Instruction Syndromes for Data Aborts
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.

This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-02-07 18:30:00 +00:00
Peter Maydell 63f26fcfda target/arm: Abstract out pbit/wbit tests in ARM ldr/str decode
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)
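
That is, something along these lines (extract32() is QEMU's standard
bitfield helper):

    /* Sketch: name the P (pre-index) and W (writeback) bits once,
     * instead of re-testing the raw masks at each use.
     */
    bool pbit = extract32(insn, 24, 1);   /* P: pre-indexed addressing  */
    bool wbit = extract32(insn, 21, 1);   /* W: base register writeback */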

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-02-07 18:29:59 +00:00
Peter Maydell 7517748e3f armv7m: Report no-coprocessor faults correctly
For v7M attempts to access a nonexistent coprocessor are reported
differently from plain undefined instructions (as UsageFaults of type
NOCP rather than type UNDEFINSTR).  Split them out into a new
EXCP_NOCP so we can report the FSR value correctly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-8-git-send-email-peter.maydell@linaro.org
2017-01-27 15:29:08 +00:00
Michael Davidsaver 542b3478a0 armv7m: Replace armv7m.hack with unassigned_access handler
For v7m we need to catch attempts to execute from special
addresses at 0xfffffff0 and above. Previously we did this
with the aid of a hacky special purpose lump of memory
in the address space and a check in translate.c for whether
we were translating code at those addresses.

We can implement this more cleanly using a CPU
unassigned access handler which throws the exception
if the unassigned access is for one of the special addresses.

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-3-git-send-email-peter.maydell@linaro.org
[PMM:
 * drop the deletion of the "don't interrupt if PC is magic"
   code in arm_v7m_cpu_exec_interrupt() -- this is still
   required
 * don't generate an exception for unassigned accesses
   which aren't to the magic address -- although doing
   this would in theory be correct, in practice it would
   break currently working guests which rely on the
   RAZ/WI behaviour when they touch devices we haven't
   modelled.
 * trigger EXCP_EXCEPTION_EXIT on is_exec, not !is_write
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 15:20:21 +00:00
Richard Henderson 7539a012f6 target-arm: Use clz opcode
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-10 08:06:11 -08:00
Richard Henderson 59a71b4c5b target-arm: Use new deposit and extract ops
Use the new primitives for UBFX and SBFX.
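
That is, in the UBFX/SBFX decode the old shift-and-mask sequences
become single TCG ops (a sketch; operand names are placeholders):

    /* Sketch: one extract op replaces the shift+mask pair. */
    if (is_signed) {
        tcg_gen_sextract_i32(dest, src, lsb, width);   /* SBFX */
    } else {
        tcg_gen_extract_i32(dest, src, lsb, width);    /* UBFX */
    }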

Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-10 08:06:10 -08:00
Thomas Huth fcf5ef2ab5 Move target-* CPU file into a target/ folder
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.

Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2016-12-20 21:52:12 +01:00