This prevents the coprocessor I/O structure from being reset on CPU reset. This was
a problem for PXA, which uses coprocessors 6 and 14.
Signed-off-by: Lars Munch <lars@segv.dk>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This patch fixes a few resource leaks in the iWMMXt disassembler.
Signed-off-by: Lars Munch <lars@segv.dk>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Move ARMv7-M PC/SP initialization to the CPU reset routine. Add a board
reset routine to call this. Also load the values directly from ROM, as
images have not been copied yet.
Avoid clearing the NVIC pointer on cpu reset.
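For illustration, a minimal standalone sketch of the reset behaviour assumed
here (not the actual QEMU code): the initial SP is read from word 0 of the
vector table and the initial PC from word 1, both fetched from ROM;
read_rom_word() is a hypothetical stand-in for the ROM access.

    #include <stdint.h>

    /* Hypothetical ROM accessor standing in for whatever the board provides. */
    extern uint32_t read_rom_word(uint32_t addr);

    void v7m_reset_regs(uint32_t *sp, uint32_t *pc)
    {
        *sp = read_rom_word(0x0);        /* word 0: initial main stack pointer */
        *pc = read_rom_word(0x4) & ~1u;  /* word 1: reset vector; bit 0 is the Thumb bit */
    }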
Signed-off-by: Paul Brook <paul@codesourcery.com>
Don't set PAGE_EXEC for XN pages, to avoid a bypass of XN protection
checking if the page is already in the TLB.
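As a rough sketch of the permission computation (stand-in names and values,
not the patch itself), PAGE_EXEC is simply left out whenever the descriptor's
XN bit is set:

    /* Local stand-ins for QEMU's softmmu protection bits. */
    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4

    /* ap_read/ap_write/xn are assumed to come from the translation-table
     * descriptor. */
    int page_prot(int ap_read, int ap_write, int xn)
    {
        int prot = 0;

        if (ap_read) {
            prot |= PAGE_READ;
        }
        if (ap_write) {
            prot |= PAGE_WRITE;
        }
        if (!xn) {
            prot |= PAGE_EXEC;  /* never executable when XN is set */
        }
        return prot;
    }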
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Do not try to insert a conditional jump over the next instruction when the
condition code is AL, as this triggers an internal error.
Signed-off-by: Johan Bengtsson <teofrastius@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page. However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable-size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
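For illustration, a standalone sketch of this bookkeeping (names are
illustrative, not QEMU's): one address/mask pair covers every large page
mapped so far, and an invalidation that may fall inside that region forces a
full flush.

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint32_t vaddr_t;

    /* "Empty" state (no large pages mapped yet): addr = -1, mask = 0,
     * which matches no invalidation address. */
    static vaddr_t lp_addr = (vaddr_t)-1;
    static vaddr_t lp_mask;

    /* Record that a large page was entered into the TLB. page_mask selects
     * the page-number bits, e.g. ~0xFFFFF for a 1MB section. */
    void note_large_page(vaddr_t addr, vaddr_t page_mask)
    {
        vaddr_t mask = page_mask;

        if (lp_addr == (vaddr_t)-1) {
            lp_addr = addr & mask;
            lp_mask = mask;
            return;
        }
        /* Widen the mask until a single region covers both pages. */
        mask &= lp_mask;
        while (((lp_addr ^ addr) & mask) != 0) {
            mask <<= 1;
        }
        lp_addr &= mask;
        lp_mask = mask;
    }

    /* A by-address invalidation that may hit the large-page region requires
     * flushing the whole TLB, since the individual small entries created on
     * demand are not tracked separately. */
    bool must_flush_whole_tlb(vaddr_t addr)
    {
        return (addr & lp_mask) == lp_addr;
    }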
Signed-off-by: Paul Brook <paul@codesourcery.com>
The rfe instruction can be used with any register, not just sp. Adjust the
condition check accordingly.
Signed-off-by: Adam Lackorzynski <adam@os.inf.tu-dresden.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Removes a set of ifdefs from exec.c.
Introduce TARGET_VIRT_ADDR_SPACE_BITS for all targets other
than Alpha. This will be used for page_find_alloc, which is
supposed to be using virtual addresses in the first place.
Signed-off-by: Richard Henderson <rth@twiddle.net>
There's a return missing in the srs handling, which leads to srs always being
treated as an invalid op.
Signed-off-by: Adam Lackorzynski <adam@os.inf.tu-dresden.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The implementation only widened the 32-bit source vector elements into a
64-bit destination vector but forgot to perform the actual shifting
operation.
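For illustration, the intended per-element operation in plain C (shown as a
left shift; a sketch, not the helper itself): the 32-bit source is widened to
64 bits and then actually shifted, rather than only widened.

    #include <stdint.h>

    /* Widening left shift of one 32-bit element into a 64-bit result.
     * The reported bug widened the element but dropped the shift. */
    uint64_t widen_shl_u32(uint32_t elem, unsigned shift)
    {
        return (uint64_t)elem << shift;
    }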
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The rounding/truncating options were inverted: truncating
was done when rounding was meant, and vice versa.
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Support the "subs pc, lr" Thumb-2 exception return instruction.
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Paul Brook <paul@codesourcery.com>
The Thumb CPS instruction currently does not work correctly: CPSID touches more bits
than the instruction should, and CPSIE does nothing. Fix it by
passing the correct mask (the "affect" bits) and value.
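A small sketch of the intended semantics in plain C (illustrative only): only
the A/I/F bits named in the instruction are touched, with CPSID setting them
and CPSIE clearing them.

    #include <stdint.h>

    /* Apply a Thumb CPS instruction to a CPSR value; affect_mask holds only
     * the A/I/F bits named in the instruction. */
    uint32_t cps_apply(uint32_t cpsr, uint32_t affect_mask, int disable)
    {
        if (disable) {
            return cpsr | affect_mask;   /* CPSID: mask (disable) the exceptions */
        }
        return cpsr & ~affect_mask;      /* CPSIE: unmask (enable) them */
    }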
Signed-off-by: Rabin Vincent <rabin@rab.in>
Access the cp15.c13 TLS registers directly with TCG ops instead of with
a slow helper. If the cp15 read/write is not a TLS register access,
fall back to the cp15 helper.
This makes it possible to access __thread variables in linux-user when apps are
compiled with -mtp=cp15. Legal cp15 registers to access from linux-user are
already checked in cp15_user_ok.
While at it, make the cp15.c13 Thread ID registers available only on
ARMv6K and newer.
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Current code is broken at least on recent compilers: comparisons
between signed and unsigned types yield incorrect code and render
the NEON shift helper functions defunct. This is the third revision
of this patch; it casts all comparisons with the sizeof operator to the
signed ssize_t type to force comparisons to be between signed integral
types.
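A standalone illustration of the pitfall: sizeof yields an unsigned size_t,
so a negative shift count is converted to a huge unsigned value and the
comparison goes the wrong way unless it is forced to be signed.

    #include <stdio.h>
    #include <sys/types.h>   /* ssize_t */

    int main(void)
    {
        int shift = -8;

        /* -8 is converted to size_t here, so this prints 0. */
        printf("%d\n", shift < sizeof(int) * 8);
        /* The cast keeps the comparison signed, so this prints 1. */
        printf("%d\n", shift < (ssize_t)(sizeof(int) * 8));
        return 0;
    }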
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The shift-by-immediate value is incorrectly overwritten by a temporary
variable in the processing of the NEON vsri, vshl and vsli instructions.
This patch has been revised to also include a fix for the special
case where the code would previously try to shift an integer value
by more than 31 bits left/right.
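A standalone note on the boundary case (plain C, not the patch itself):
shifting a 32-bit operand by 32 or more is undefined behaviour in C, so an
element-sized shift count has to be handled explicitly.

    #include <stdint.h>

    /* Left shift that tolerates shift counts of 32 and above by defining
     * the result as 0 instead of invoking undefined behaviour. */
    uint32_t shl_u32_safe(uint32_t val, unsigned shift)
    {
        return shift >= 32 ? 0 : val << shift;
    }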
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
All bits except EN in the VFP FPEXC register are defined
as subarchitecture specific, and no real functionality for any of the
other bits has been implemented in QEMU. However, the current code
allows modifying all bits in the VFP FPEXC register, leading to
problems when guest code writes 1s to the subarchitecture-specific
bits and checks whether they stay set to verify the
existence of functionality which in fact does not exist in QEMU.
This patch has been revised to include the same behavior change in
the gdb register write function.
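A minimal sketch of the intended write behaviour, assuming (as in the VFP
architecture) that EN is bit 30 of FPEXC: only EN is kept, so guest probes of
the unimplemented bits read back as zero.

    #include <stdint.h>

    #define FPEXC_EN (1u << 30)   /* the only FPEXC bit implemented here */

    /* Mask a guest write to FPEXC down to the implemented bits. */
    uint32_t fpexc_write_mask(uint32_t val)
    {
        return val & FPEXC_EN;
    }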
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Add support for the NEON vld1.64 and vst1.64 instructions. This patch is
revised to follow the specification more closely and raises an
undefined-instruction exception if a 64-bit element size is used for the
vld2/vst2 or vld4/vst4 instructions.
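A standalone sketch of the added validity check (names are illustrative):
the 64-bit element size, encoded as size == 3, is only accepted for
single-element structures, i.e. vld1/vst1.

    #include <stdbool.h>

    /* nregs is the number of elements per structure: 1 for vld1/vst1,
     * 2 for vld2/vst2, 4 for vld4/vst4. */
    bool neon_ls_element_size_ok(int size, int nregs)
    {
        if (size == 3) {          /* 64-bit elements */
            return nregs == 1;    /* otherwise UNDEFINED */
        }
        return true;
    }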
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
In the existing code the shift value is clobbered during the pass loop.
This patch changes the code so that it stores the intermediate
result directly in the target NEON register, eliminating the need
for a temporary to hold the intermediate value and thus leaving the
shift value in the temporary variable intact. This is a new patch
in this version of the patch series.
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The tmp4 and tmp5 temporary variables are allocated using tcg_const_i32
but incorrectly released using dead_tmp, which causes resource
leak tracking to report false leaks.
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The Thumb mul instruction is currently implemented as a
32x32->64 multiply which then uses only the 32 least significant bits of
the result. Replace that with a simple 32x32->32 multiply.
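A standalone check of the equivalence relied on here: the low 32 bits of a
32x32->64 multiply are identical to a plain 32x32->32 multiply.

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t a = 0xdeadbeefu, b = 0x12345678u;
        uint64_t wide = (uint64_t)a * b;

        /* Both compute the product modulo 2^32. */
        assert((uint32_t)wide == a * b);
        return 0;
    }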
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Revised patch for getting rid of TCG temporary variable leaks in
target-arm/translate.c. This version also includes the leak fix for the
gen_set_cpsr macro, now converted to a static inline function, which I
sent earlier as a separate patch on top of this one.
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(INT32_MIN / -1) triggers an overflow, and the result depends on the
host architecture (INT32_MIN on arm, -1 on ppc, SIGFPE on x86). Use a
test to output the correct value.
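A standalone sketch of the guarded division assumed here, pinning the one
overflowing case to INT32_MIN (the value a native ARM host produces, as noted
above); division by zero is out of scope for this sketch.

    #include <stdint.h>

    /* Signed 32-bit division whose result no longer depends on the host. */
    int32_t sdiv32(int32_t num, int32_t den)
    {
        if (num == INT32_MIN && den == -1) {
            return INT32_MIN;   /* num / den would overflow on the host */
        }
        return num / den;
    }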
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com>
The goal is eventually to get rid of all cpu_T register usage and to use
just short-lived tmp/tmp2 registers. This patch converts all the places where
cpu_T was used in the Thumb code, replacing it with explicit TCG register
allocation.
Signed-off-by: Filip Navara <filip.navara@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
An uninitialized register was used instead of a proper TCG variable.
Signed-off-by: Filip Navara <filip.navara@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The neon_trn_u8, neon_trn_u16, neon_unzip_u8, neon_zip_u8 and neon_zip_u16
helpers used fixed registers to return values. This patch replaces that with
TCG code, so T0/T1 are no longer directly used by the helper functions.
Bugs in the gen_neon_unzip register load code were also fixed.
Signed-off-by: Filip Navara <filip.navara@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>