Commit Graph

1620 Commits

Author SHA1 Message Date
Richard Henderson
1e1df962e3 tcg/ppc: Prefer mask over andi.
Prefer the instruction that isn't required to modify cr0.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:38 -10:00
Richard Henderson
5bfd75a35c tcg/ppc: Revise goto_tb implementation
Restrict the size of code_gen_buffer to 2GB on ppc64, which
lets us assert that everything is reachable with addis+addi
from tb_ret_addr.  This lets us use a max of 4 insns for goto_tb
instead of 7.

Emit the indirect branch portion of goto_tb up front, which
means we only have to update two insns to update any link.
With a 64-bit store, we can update the link atomically, which
may be required in future.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:37 -10:00
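A minimal C sketch of the reach arithmetic behind the 2GB cap in the commit above, assuming the standard PPC addis/addi immediate widths (the helper name is invented for illustration):

    #include <stdint.h>

    /* addis adds a signed 16-bit immediate shifted left by 16; addi then
       adds a signed 16-bit immediate.  Together they cover roughly any
       signed 32-bit displacement, which is why capping code_gen_buffer
       at 2GB keeps every TB reachable from tb_ret_addr with this pair. */
    static int64_t addis_addi_reach(int16_t hi, int16_t lo)
    {
        return ((int64_t)hi << 16) + lo;   /* about -2^31 .. 2^31-1 */
    }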
Richard Henderson
70f897bdc4 tcg/ppc: Adjust exit_tb for change in prologue placement
Moving the prologue to the beginning of the code_gen_buffer
changes the direction of the "return" branch.  Need to change
the logic to match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:37 -10:00
Richard Henderson
b125f9dc7b tcg: Check for overflow via highwater mark
We currently pre-compute a worst-case code size for any TB, which
works out to be 122kB.  Since the average TB size is near 1kB, this
wastes quite a lot of storage.

Instead, check for overflow in between generating code for each opcode.
The overhead of the check isn't measurable and wastage is minimized.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:53 +11:00
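A minimal sketch of the high-water-mark check described above (the names and types are hypothetical, not the actual QEMU structures):

    #include <stdbool.h>

    struct codebuf {
        unsigned char *ptr;        /* current emit position */
        unsigned char *highwater;  /* buffer end minus a small safety margin */
    };

    /* Instead of reserving a 122kB worst case per TB, keep a modest margin
       at the end of the buffer and test it once per generated opcode. */
    static bool code_buf_overflowed(const struct codebuf *b)
    {
        return b->ptr > b->highwater;
    }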
Richard Henderson
8163b74938 tcg: Emit prologue to the beginning of code_gen_buffer
By putting the prologue at the end, we risk overwriting the
prologue should our estimate of maximum TB size prove too small.  Given the
two different placements of the call to tcg_prologue_init,
move the high water mark computation into tcg_prologue_init.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:53 +11:00
Richard Henderson
04fe640001 tcg: Remove tcg_gen_code_search_pc
It's no longer used, so tidy up everything reached by it.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:53 +11:00
Richard Henderson
4e5e121515 tcg: Remove gen_intermediate_code_pc
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:52 +11:00
Richard Henderson
fca8a500d5 tcg: Save insn data and use it in cpu_restore_state_from_tb
We can now restore state without retranslation.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:51 +11:00
Richard Henderson
bad729e272 tcg: Pass data argument to restore_state_to_opc
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments.  Transition restore_state_to_opc to use
data from the latter.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:51 +11:00
Richard Henderson
190ce7fbc7 tcg: Add TCG_MAX_INSNS
Adjust all translators to respect it.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:50 +11:00
Richard Henderson
9aef40ed1f tcg: Allow extra data to be attached to insn_start
With an eye toward having this data replace the gen_opc_* arrays
that each target collects in order to enable restore_state_from_tb.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:46 +11:00
Richard Henderson
765b842ade tcg: Rename debug_insn_start to insn_start
With an eye toward making it mandatory.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07 20:36:26 +11:00
Aurelien Jarno
81dfaf1a8f tcg/mips: pass oi to tcg_out_tlb_load
Instead of computing mem_index and s_bits in both the tcg_out_qemu_ld and
tcg_out_qemu_st functions and passing them to tcg_out_tlb_load, directly
pass oi to the tcg_out_tlb_load function and compute mem_index and
s_bits there.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-09-19 11:53:15 +02:00
Aurelien Jarno
d9f26847f1 tcg/mips: move tcg_out_addsub2
Somehow the tcg_out_addsub2 function ended up in the middle of the
qemu_ld/st related functions. Move it with other arithmetics related
functions.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-09-19 11:53:14 +02:00
James Hogan
5eb4f645eb tcg/mips: Fix clobbering of qemu_ld inputs
The MIPS TCG backend implements qemu_ld with 64-bit targets using the v0
register (base) as a temporary to load the upper half of the QEMU TLB
comparator (see line 5 below), however this happens before the input
address is used (line 8 to mask off the low bits for the TLB
comparison, and line 12 to add the host-guest offset). If the input
address (addrl) also happens to have been placed in v0 (as in the second
column below), it gets clobbered before it is used.

     addrl in t2              addrl in v0

 1 srl     a0,t2,0x7        srl     a0,v0,0x7
 2 andi    a0,a0,0x1fe0     andi    a0,a0,0x1fe0
 3 addu    a0,a0,s0         addu    a0,a0,s0
 4 lw      at,9136(a0)      lw      at,9136(a0)      set TCG_TMP0 (at)
 5 lw      v0,9140(a0)      lw      v0,9140(a0)      set base (v0)
 6 li      t9,-4093         li      t9,-4093
 7 lw      a0,9160(a0)      lw      a0,9160(a0)      set addend (a0)
 8 and     t9,t9,t2         and     t9,t9,v0         use addrl
 9 bne     at,t9,0x836d8c8  bne     at,t9,0x836d838  use TCG_TMP0
10  nop                      nop
11 bne     v0,t8,0x836d8c8  bne     v0,a1,0x836d838  use base
12  addu   v0,a0,t2          addu   v0,a0,v0         use addrl, addend
13 lw      t0,0(v0)         lw      t0,0(v0)

Fix by using TCG_TMP0 (at) as the temporary instead of v0 (base),
pushing the load on line 5 forward into the delay slot of the low
comparison (line 10). The early load of the addend on line 7 also needs
pushing even further for 64-bit targets, or it will clobber a0 before
we're done with it. The output for 32-bit targets is unaffected.

 srl     a0,v0,0x7
 andi    a0,a0,0x1fe0
 addu    a0,a0,s0
 lw      at,9136(a0)
-lw      v0,9140(a0)      load high comparator
 li      t9,-4093
-lw      a0,9160(a0)      load addend
 and     t9,t9,v0
 bne     at,t9,0x836d838
- nop
+ lw     at,9140(a0)      load high comparator
+lw      a0,9160(a0)      load addend
-bne     v0,a1,0x836d838
+bne     at,a1,0x836d838
  addu   v0,a0,v0
 lw      t0,0(v0)

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-09-19 11:53:14 +02:00
Peter Crosthwaite
162e992270 tcg: Move tci_tb_ptr to -common
This requires global visibility to common code. Move to tcg-common.

Cc: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <cb0340eba225ab4945aa6cf7c9013f33aa05bcf8.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-16 17:33:33 +02:00
Peter Crosthwaite
7d8f787d9d tcg: split tcg_op_defs to -common
tcg_op_defs (and the _max) are both needed by the TCI disassembler. For
multi-arch, tcg.c will be multiple-compiled (arch-obj) with its symbols
hidden from common code. So split the definition off to new file,
tcg-common.c which will remain a regular obj-y for use by both the TCI
disas as well as the multiple tcg.c's.

Cc: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <4b607425886d85aee65878e4935dfad46b3e6085.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-16 17:33:33 +02:00
Peter Maydell
a2aa09e181 * Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
 * cutils: qemu_strto* wrappers
 * iohandler.c simplification
 * Many other fixes and misc patches.
 
 And some MTTCG work (with Emilio's fixes squashed):
 * Signal-free TCG kick
 * Removing spinlock in favor of QemuMutex
 * User-mode emulation multi-threading fixes/docs
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQEcBAABCAAGBQJV8Tk7AAoJEL/70l94x66Ds3QH/3bi0RRR2NtKIXAQrGo5tfuD
 NPMu1K5Hy+/26AC6mEVNRh4kh7dPH5E4NnDGbxet1+osvmpjxAjc2JrxEybhHD0j
 fkpzqynuBN6cA2Gu5GUNoKzxxTmi2RrEYigWDZqCftRXBeO2Hsr1etxJh9UoZw5H
 dgpU3j/n0Q8s08jUJ1o789knZI/ckwL4oXK4u2KhSC7ZTCWhJT7Qr7c0JmiKReaF
 JEYAsKkQhICVKRVmC8NxML8U58O8maBjQ62UN6nQpVaQd0Yo/6cstFTZsRrHMHL3
 7A2Tyg862cMvp+1DOX3Bk02yXA+nxnzLF8kUe0rYo6llqDBDStzqyn1j9R0qeqA=
 =nB06
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.

And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs

# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream: (44 commits)
  cutils: work around platform differences in strto{l,ul,ll,ull}
  cpu-exec: fix lock hierarchy for user-mode emulation
  exec: make mmap_lock/mmap_unlock globally available
  tcg: comment on which functions have to be called with mmap_lock held
  tcg: add memory barriers in page_find_alloc accesses
  remove unused spinlock.
  replace spinlock by QemuMutex.
  cpus: remove tcg_halt_cond and tcg_cpu_thread globals
  cpus: protect work list with work_mutex
  scripts/dump-guest-memory.py: fix after RAMBlock change
  configure: Add support for jemalloc
  add macro file for coccinelle
  configure: factor out adding disas configure
  vhost-scsi: fix wrong vhost-scsi firmware path
  checkpatch: remove tests that are not relevant outside the kernel
  checkpatch: adapt some tests to QEMU
  CODING_STYLE: update mixed declaration rules
  qmp: Add example usage of strto*l() qemu wrapper
  cutils: Add qemu_strtoull() wrapper
  cutils: Add qemu_strtoll() wrapper
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-09-14 16:13:16 +01:00
Pavel Dovgalyuk
282dffc8a4 softmmu: add helper function to pass through retaddr
This patch introduces several helpers to pass return address
which points to the TB. Correct return address allows correct
restoring of the guest PC and icount. These functions should be used when
helpers embedded into a TB invoke memory operations.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095650.13280.32255.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-11 08:15:32 -07:00
Veres Lajos
67cc32ebfd typofixes - v4
Signed-off-by: Veres Lajos <vlajos@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-09-11 10:45:43 +03:00
KONRAD Frederic
677ef6230b replace spinlock by QemuMutex.
spinlock is only used in two cases:
  * cpu-exec.c: to protect TranslationBlock
  * mem_helper.c: for the lock helper in target-i386 (which seems broken).

It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly,
with an #ifdef.  The #ifdef will be removed once multithreaded TCG
needs the mutex as well.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[Merge Emilio G. Cota's patch to remove volatile. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
Aurelien Jarno
08b0b23be6 tcg/i386: omit a few REXW prefixes in softmmu code
When computing the TLB address we are likely to mask out the high
32-bits by using shr + and. We can use 32-bit instructions in that
case. This saves 2 bytes per TLB access.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1437306632-20655-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-02 14:24:10 -07:00
Richard Henderson
352bcb0a2b tcg/aarch64: Fix tcg_out_qemu_{ld, st} for guest_base == 0
In ffc6372851, we swapped the guest
base to the address base register from the address index register.
Except that 31 in the base slot is SP not XZR, so we need to be
more intelligent about which reg gets placed in which slot.

Cc: qemu-stable@nongnu.org (v2.4.0)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-02 14:23:14 -07:00
Laurent Vivier
090d0bfd94 s390: fix softmmu compilation
guest_base must be used only in linux-user mode.

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 1440757421-9674-1-git-send-email-laurent@vivier.eu
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-08-28 16:05:24 +01:00
Laurent Vivier
b76f21a707 linux-user: remove useless macros GUEST_BASE and RESERVED_VA
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base
and the macros GUEST_BASE and RESERVED_VA become useless: replace
them by their values.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:14:30 -07:00
Laurent Vivier
4cbea59869 linux-user: remove --enable-guest-base/--disable-guest-base
All tcg host architectures now support the guest base and, as there
is no real performance loss, it can always be enabled.

In any case, guest base use can still be disabled dynamically by setting
the guest base to 0.

CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it
would normally be replaced by CONFIG_USER_ONLY, but as some other
parts already use !CONFIG_SOFTMMU I have chosen to use !CONFIG_SOFTMMU
instead.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:14:17 -07:00
Richard Henderson
9ee14902bf tcg/aarch64: Use softmmu fast path for unaligned accesses
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Richard Henderson
a5e39810b9 tcg/s390: Use softmmu fast path for unaligned accesses
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Benjamin Herrenschmidt
68d45bb61c tcg/ppc: Improve unaligned load/store handling on 64-bit backend
Currently, we get to the slow path for any unaligned access in the
backend, because we effectively preserve the bottom address bits
below the alignment requirement when comparing with the TLB entry,
so any non-0 bit there will cause the compare to fail.

For the same number of instructions, we can instead add the access
size - 1 to the address and stick to clearing all the bottom bits.

That means that normal unaligned accesses will not fall back (the HW
will handle them fine). Only when crossing a page boundary will we
end up having a mismatch, because we'll end up pointing to the next
page, which cannot possibly be in that same TLB entry.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <1437455978.5809.2.camel@kernel.crashing.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
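A small C model of the fast-path check described above, assuming 4kB pages (illustrative only, not the backend code):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_MASK (~(uint64_t)0xfff)   /* assume 4kB pages */

    /* Old scheme: keep the low alignment bits in the TLB comparison, so any
       unaligned address mismatches and takes the slow path.  New scheme: add
       (size - 1) first and clear all sub-page bits; only an access that
       really spills into the next page changes the compared page number. */
    static bool takes_slow_path(uint64_t addr, unsigned size)
    {
        return ((addr + size - 1) & PAGE_MASK) != (addr & PAGE_MASK);
    }

With this check, an unaligned 4-byte load at page offset 0xffa still matches its TLB entry (0xffa + 3 stays on the page), while one at offset 0xffd crosses into the next page and falls back to the slow path.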
Aurelien Jarno
8cc580f6a0 tcg/i386: use softmmu fast path for unaligned accesses
Softmmu unaligned load/stores currently go through the slow
path for two reasons:
  - to support unaligned accesses on hosts with strict alignment
  - to correctly handle accesses crossing pages

x86 is only concerned by the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them going through the fast path, if they don't cross pages.

For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to page address of the last byte
ensures that we don't cross pages. We can do this check without adding
more instructions in the TLB code (but increasing its length by one
byte) by using the LEA instruction to combine the existing move with the
size addition.

On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.

[rth: Tidied calculation of the offset mask]

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1436467197-2183-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Richard Henderson
ecc7b3aa71 tcg: Remove tcg_gen_trunc_i64_i32
Replacing it with tcg_gen_extrl_i64_i32.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Richard Henderson
609ad70562 tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts.  This is all that was being used anyway.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
870ad1547a tcg: update README about size changing ops
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
8bcb5c8f34 tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
4f2331e5b6 tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
6acd2558fd tcg: don't abuse TCG type in tcg_gen_trunc_shr_i64_i32
The tcg_gen_trunc_shr_i64_i32 function takes a 64-bit argument and
returns a 32-bit value. Directly call tcg_gen_op3 with the correct
types instead of calling tcg_gen_op3i_i32 and abusing the TCG types.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
0632e555fc tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
299f801304 tcg/optimize: allow constant to have copies
Now that copies and constants are tracked separately, we can allow
constants to have copies, deferring the choice between using a register or a
constant to the register allocation pass. This prevents this kind of
regular constant reloading:

-OUT: [size=338]
+OUT: [size=298]
   mov    -0x4(%r14),%ebp
   test   %ebp,%ebp
   jne    0x7ffbe9cb0ed6
   mov    $0x40002219f8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002219f8,%rbp
   mov    $0x4000221a20,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000000000,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000000000,%rbp
   mov    $0x4000221d38,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40002221a8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002221a8,%rbp
   mov    $0x4000221d40,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000019170,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000019170,%rbp
   mov    $0x4000221d48,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40000049ee,%rbp
   mov    %rbp,0x80(%r14)
   mov    %r14,%rdi
   callq  0x7ffbe99924d0
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   shl    $0x20,%rbp
   mov    (%r14),%rbx
   mov    %ebx,%ebx
   mov    %rbx,(%r14)
   or     %rbx,%rbp
   mov    %rbp,0x10(%r14)
   mov    %rbp,0x90(%r14)
   mov    0x60(%r14),%rbx
   mov    %rbx,0x38(%r14)
   mov    0x28(%r14),%rbx
   mov    $0x4000220e60,%r12
   mov    %rbx,(%r12)
   mov    $0x40002219c8,%rbx
   mov    %rbp,(%rbx)
   mov    0x20(%r14),%rbp
   sub    $0x8,%rbp
   mov    $0x4000004a16,%rbx
   mov    %rbx,0x0(%rbp)
   mov    %rbp,0x20(%r14)
   mov    $0x19,%ebp
   mov    %ebp,0xa8(%r14)
   mov    $0x4000015110,%rbp
   mov    %rbp,0x80(%r14)
   xor    %eax,%eax
   jmpq   0x7ffbebcae426
   lea    -0x5f6d72a(%rip),%rax        # 0x7ffbe3d437b3
   jmpq   0x7ffbebcae426

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Aurelien Jarno
b41059dd9d tcg/optimize: track const/copy status separately
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.

Constants are tracked through a bool. Copies are tracked by initializing
a temp's next_copy and prev_copy to itself, which simplifies the code
a bit.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:53 -07:00
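A sketch of the separated tracking (field and function names are illustrative, not necessarily those used in tcg/optimize.c):

    #include <stdbool.h>
    #include <stdint.h>

    struct temp_info {
        bool is_const;                  /* constant status is a plain flag */
        uint64_t val;
        struct temp_info *next_copy;    /* circular doubly linked list of  */
        struct temp_info *prev_copy;    /* temps known to hold copies      */
    };

    /* A temp starts out pointing at itself, so "has no copies" and "is its
       own only copy" are the same case and need no special handling. */
    static void reset_temp(struct temp_info *t)
    {
        t->is_const = false;
        t->next_copy = t;
        t->prev_copy = t;
    }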
Aurelien Jarno
d9c769c609 tcg/optimize: add temp_is_const and temp_is_copy functions
Add two accessor functions temp_is_const and temp_is_copy, to make the
code more readable and make code change easier.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:53 -07:00
Aurelien Jarno
1208d7dd5f tcg/optimize: optimize temps tracking
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least twice
per TB, often more when there are a few goto_tb.

Instead use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.

This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize off the top of the profiler list.

[rth: Handle TCG_CALL_DUMMY_ARG]

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:30 -07:00
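A minimal sketch of the bit-array idea (sizes and names are assumptions for illustration, not the TCG definitions):

    #include <stdbool.h>
    #include <string.h>

    #define MAX_TEMPS 512
    #define BITS_PER_WORD (8 * sizeof(unsigned long))

    typedef unsigned long TempSet[MAX_TEMPS / BITS_PER_WORD];

    /* Clearing one small bit array per basic block replaces zeroing >2kB of
       per-temp state; a temp's full info is initialized lazily on first use. */
    static void temps_reset(TempSet set)
    {
        memset(set, 0, sizeof(TempSet));
    }

    static void temp_mark_used(TempSet set, unsigned i)
    {
        set[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD);
    }

    static bool temp_is_used(const TempSet set, unsigned i)
    {
        return (set[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1;
    }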
Aurelien Jarno
29f3ff8d6c tcg/optimize: fix constant signedness
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.

This doesn't seem to have more consequences than suboptimal code
generation. For instance the x86 backend assumes sign-extended constants,
and in some rare cases uses a 32-bit unsigned immediate 0xffffffff
instead of an 8-bit signed immediate 0xff for the constant -1. This is
with a ppc guest:

before
------

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7fd8c7dfe90c:  xor    %ebp,%ebp
0x7fd8c7dfe90e:  mov    %ebp,%r11d
0x7fd8c7dfe911:  mov    0x18(%r14),%r9d
0x7fd8c7dfe915:  add    %r9d,%r10d
0x7fd8c7dfe918:  adc    %ebp,%r11d
0x7fd8c7dfe91b:  add    $0xffffffff,%r10d
0x7fd8c7dfe922:  adc    %ebp,%r11d
0x7fd8c7dfe925:  mov    %r11d,0x134(%r14)
0x7fd8c7dfe92c:  mov    %r10d,0x28(%r14)

after
-----

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffffffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7f37010d490c:  xor    %ebp,%ebp
0x7f37010d490e:  mov    %ebp,%r11d
0x7f37010d4911:  mov    0x18(%r14),%r9d
0x7f37010d4915:  add    %r9d,%r10d
0x7f37010d4918:  adc    %ebp,%r11d
0x7f37010d491b:  add    $0xffffffffffffffff,%r10d
0x7f37010d491f:  adc    %ebp,%r11d
0x7f37010d4922:  mov    %r11d,0x134(%r14)
0x7f37010d4929:  mov    %r10d,0x28(%r14)

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1436544211-2769-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:08 -07:00
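The convention, shown as a one-line C sketch (a hypothetical helper, not the optimizer code):

    #include <stdint.h>

    /* On a 64-bit host, a folded 32-bit constant must be stored sign-extended,
       so -1 stays 0xffffffffffffffff rather than 0x00000000ffffffff and the
       backend can keep using short signed immediates. */
    static int64_t fold_add_i32(int64_t x, int64_t y)
    {
        return (int32_t)(x + y);   /* truncate to 32 bits, then sign-extend */
    }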
Aurelien Jarno
c99d69694a tcg/mips: fix add2
The add2 code in the tcg_out_addsub2 function doesn't take into account
the case where rl == al == bl. In that case we can't compute the carry
after the addition. As it corresponds to a multiplication by 2, the
carry bit is bit 31.

While this is a corner case, it prevents x86-64 guests from booting on a
MIPS host.

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-08-01 09:39:50 +02:00
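A C illustration of the corner case (the helper is hypothetical; register aliasing is what actually triggers it in the backend):

    #include <stdint.h>

    /* Normally the carry of al + bl can be computed after the addition by an
       unsigned compare against one addend.  When rl, al and bl are all the
       same register, the addends are overwritten before that compare can be
       done, but al + al == al << 1, so the carry out is just bit 31 of the
       original value. */
    static uint32_t add_lo_with_carry(uint32_t al, uint32_t bl, uint32_t *rl)
    {
        if (al == bl) {                /* stands in for rl == al == bl */
            *rl = al + bl;
            return al >> 31;           /* carry is the old bit 31 */
        }
        *rl = al + bl;
        return *rl < bl;               /* usual unsigned-overflow test */
    }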
Aurelien Jarno
3c8691f568 tcg/s390x: Mask TCGMemOp appropriately for indexing
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG S390 backend.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-08-01 09:39:37 +02:00
Aurelien Jarno
4214a8cb7c tcg/mips: Mask TCGMemOp appropriately for indexing
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG MIPS backend.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-08-01 09:39:33 +02:00
Aurelien Jarno
e72c4fb81d tcg/mips: fix TLB loading for BE host with 32-bit guests
For a 32-bit guest, we load a 32-bit address from the TLB, so there is no
need to compensate for the low or high part. This fixes 32-bit guests on
big-endian hosts.

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2015-08-01 09:38:36 +02:00
Aurelien Jarno
bbeb82395e tcg: mark temps as mem_coherent = 0 for mov with a constant
When a constant has to be loaded in a mov op, we fail to set
mem_coherent = 0. This patch fixes that.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1437994568-7825-3-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-27 07:25:40 -07:00
Aurelien Jarno
7df69deadf tcg: correctly mark dead inputs for mov with a constant
When tcg_reg_alloc_mov propagates a constant, we fail to correctly mark
a temp as dead when the liveness analysis hints so. This fixes the
following assert when configured with --enable-debug-tcg:

  qemu-x86_64: tcg/tcg.c:1827: tcg_reg_alloc_bb_end: Assertion `ts->val_type == TEMP_VAL_DEAD' failed.

Cc: Richard Henderson <rth@twiddle.net>
Reported-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1437994568-7825-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-27 07:25:40 -07:00
Aurelien Jarno
961521261a tcg/optimize: fix tcg_opt_gen_movi
Due to a copy&paste error, the new op value is tested against mov_i32
instead of movi_i32. The test is therefore always false. Fix that.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1436544211-2769-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 20:37:12 -07:00
Richard Henderson
80adb8fcad tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation
Similar to the same fix for user-mode, except this instance
occurs on the softmmu path.  Again, the tlb addend must be
the base register, while the guest address is the index.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 20:19:44 -07:00
Paolo Bonzini
ffc6372851 tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset.  However, the
guest base register x28 must be the base and addr_reg must be the
index.

Reported-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1436974021-28978-3-git-send-email-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 15:09:12 -07:00
Paolo Bonzini
6c0f0c0f12 tcg/aarch64: add ext argument to tcg_out_insn_3310
The new argument lets you pick uxtw or uxtx mode for the offset
register.  For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated.  The bits for uxtx are removed from I3312_TO_I3310.

Reported-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1436974021-28978-2-git-send-email-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 15:09:04 -07:00
Richard Henderson
ee8ba9e4d8 tcg/i386: Extend addresses for 32-bit guests
Removing the ??? comment explaining why it (mostly) worked.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1437081950-7206-2-git-send-email-rth@twiddle.net>
2015-07-23 15:09:04 -07:00
Stefan Weil
6e3c0c6edb tci: Fix regression with INDEX_op_qemu_st_i32, INDEX_op_qemu_st_i64
Commit 59227d5d45 did not update the
code in tcg/tci/tcg-target.c for those two cases.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1436556159-3002-1-git-send-email-sw@weilnetz.de
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-07-13 10:07:38 +01:00
James Hogan
a8f13961fd tcg/mips: Fix build error from merged memop+mmu_idx parameter
Commit 3972ef6f83 ("tcg: Push merged memop+mmu_idx parameter to
softmmu routines") caused the following build errors when building TCG
for MIPS:

In file included from tcg/tcg.c:258:0:
tcg/mips/tcg-target.c In function ‘tcg_out_qemu_ld_slow_path’:
tcg/mips/tcg-target.c:1015:22: error: ‘lb’ undeclared (first use in this function)
tcg/mips/tcg-target.c In function ‘tcg_out_qemu_st_slow_path’:
tcg/mips/tcg-target.c:1058:22: error: ‘lb’ undeclared (first use in this function)

It looks like lb was meant to refer to the TCGLabelQemuLdst *l
parameter, so fix both references to lb to refer to just l.

Fixes: 3972ef6f83 ("tcg: Push merged memop+mmu_idx parameter to softmmu routines")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Message-id: 1436433435-24898-2-git-send-email-james.hogan@imgtec.com
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-07-09 13:51:27 +01:00
Aurelien Jarno
cd3b29b745 tcg/s390: fix branch target change during code retranslation
Make sure not to modify the branch target. This ensures that the
branch target is not corrupted during partial retranslation.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-07-07 17:51:47 +02:00
Peter Crosthwaite
6e0b07306d cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg
The usages of this define are pure TCG and there is no architecture
specific variation of the value. Localise it to the TCG engine to
remove another architecture agnostic piece from cpu-defs.h.

This follows on from a28177820a, where
temp_buf was moved out of CPU_COMMON, obsoleting the need for
the super-early definition.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <498e8e5325c1a1aff79e5bcfc28cb760ef6b214e.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-26 16:00:50 +02:00
Aurelien Jarno
36e60ef6ac tcg/optimize: rename tcg_constant_folding
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, to avoid looping over all ops multiple times), so
make that clear and just rename it tcg_optimize.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-6-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 07:00:56 -07:00
Aurelien Jarno
97a79eb70d tcg/optimize: fold constant test in tcg_opt_gen_mov
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433495958-9508-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 07:00:56 -07:00
Aurelien Jarno
5365718a9a tcg/optimize: fold temp copies test in tcg_opt_gen_mov
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-4-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 07:00:56 -07:00
Aurelien Jarno
8d6a91602e tcg/optimize: remove opc argument from tcg_opt_gen_mov
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-3-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 07:00:56 -07:00
Aurelien Jarno
ebd27391b0 tcg/optimize: remove opc argument from tcg_opt_gen_movi
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447607-31184-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 07:00:56 -07:00
Aurelien Jarno
c19f47bf5e tcg: fix dead computation for repeated input arguments
When the same temp is used twice or more as an input argument to a TCG
instruction, the dead computation code doesn't recognize the second use
as a dead temp. This is because the temp is marked as live in the same
loop where dead inputs are checked.

The fix is to split the loop into two parts. This avoids emitting a move
and using a register for the movcond instruction when it is used as "move if
true" on x86-64. This might bring more improvements on RISC TCG targets
which don't have outputs aliased to inputs.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447228-29425-3-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:42:27 -07:00
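A sketch of the two-pass idea (names and state values are hypothetical, not the liveness code verbatim):

    enum { TS_DEAD, TS_LIVE };

    /* With a single loop, marking an input live immediately hides a later use
       of the same temp from the dead-input test, so a repeated argument is
       never flagged as dying at this op.  Scanning all inputs first and only
       then marking them live lets every use at the last op be recognized. */
    static unsigned mark_dead_inputs(int temp_state[], const int args[], int nb_iargs)
    {
        unsigned dead_args = 0;
        int i;

        for (i = 0; i < nb_iargs; i++) {
            if (temp_state[args[i]] == TS_DEAD) {
                dead_args |= 1u << i;          /* pass 1: record dead uses */
            }
        }
        for (i = 0; i < nb_iargs; i++) {
            temp_state[args[i]] = TS_LIVE;     /* pass 2: then mark them live */
        }
        return dead_args;
    }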
Aurelien Jarno
7e1df267a7 tcg: fix register allocation with two aliased dead inputs
For TCG ops with two output registers (add2, sub2, div2, div2u), when
the same input temp is used for the two inputs aliased to the two
outputs, and when these inputs are both dead, the register allocation
code wrongly assigned the same register to the same output.

This happens for example with sub2 t1, t2, t3, t3, t4, t5, when t3 is
not used anymore after the TCG op.  In that case the same register is
used for t1, t2 and t3.

The fix is to look for an already-allocated aliased input when allocating
a dead aliased input and to check that its register is not already
used.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1433447228-29425-2-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:42:27 -07:00
Richard Henderson
59c4b7e8df tcg: Handle MO_AMASK in tcg_dump_ops
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:35:53 -07:00
Richard Henderson
2b7ec66f02 tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.

Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:35:29 -07:00
Paolo Bonzini
006f8638c6 tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target.  Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of Aarch64,
extend to instructions that shift the immediate).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-06-03 23:56:56 +02:00
Paolo Bonzini
5a58e884d1 tci: do not use CPUArchState in tcg-target.h
tcg-target.h does not use any QEMU-specific symbols, save for tci's usage
of CPUArchState.  Pull that up to tcg/tcg.h.

This will make it possible to include tcg-target.h in cpu-defs.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-06-03 23:56:55 +02:00
Richard Henderson
dfb3630562 tcg: Add MO_ALIGN, MO_UNALN
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed.  The default setting
reflects the target's definition of ALIGNED_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:18 -07:00
Richard Henderson
3972ef6f83 tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:14 -07:00
Richard Henderson
59227d5d45 tcg: Merge memop and mmu_idx parameters to qemu_ld/st
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:14:55 -07:00
Emilio G. Cota
00c8fa9ffe tcg: optimise memory layout of TCGTemp
This brings down the size of the struct from 56 to 32 bytes on 64-bit,
and to 20 bytes on 32-bit. This leads to memory savings:

Before:
$ find . -name 'tcg.o' | xargs size
   text    data     bss     dec     hex filename
  41131   29800      88   71019   1156b ./aarch64-softmmu/tcg/tcg.o
  37969   29416      96   67481   10799 ./x86_64-linux-user/tcg/tcg.o
  39354   28816      96   68266   10aaa ./arm-linux-user/tcg/tcg.o
  40802   29096      88   69986   11162 ./arm-softmmu/tcg/tcg.o
  39417   29672      88   69177   10e39 ./x86_64-softmmu/tcg/tcg.o

After:
$ find . -name 'tcg.o' | xargs size
   text    data     bss     dec     hex filename
  40883   29800      88   70771   11473 ./aarch64-softmmu/tcg/tcg.o
  37473   29416      96   66985   105a9 ./x86_64-linux-user/tcg/tcg.o
  38858   28816      96   67770   108ba ./arm-linux-user/tcg/tcg.o
  40554   29096      88   69738   1106a ./arm-softmmu/tcg/tcg.o
  39169   29672      88   68929   10d41 ./x86_64-softmmu/tcg/tcg.o

Note that using an entire byte for some enums that need less than
that wastes a few bits (noticeable in 32 bits, where we use
20 bytes instead of 16) but avoids extraction code, which overall
is a win--I've tested several variations of the patch, and the appended
is the best performer for OpenSSL's bntest by a very small margin:

Before:
$ taskset -c 0 perf stat -r 15 -- x86_64-linux-user/qemu-x86_64 img/bntest-x86_64 >/dev/null
[...]
 Performance counter stats for 'x86_64-linux-user/qemu-x86_64 img/bntest-x86_64' (15 runs):

      10538.479833 task-clock (msec)  # 0.999 CPUs utilized  ( +-  0.38% )
               772 context-switches   # 0.073 K/sec          ( +-  2.03% )
                 0 cpu-migrations     # 0.000 K/sec          ( +-100.00% )
             2,207 page-faults        # 0.209 K/sec          ( +-  0.08% )
      10.552871687 seconds time elapsed                      ( +-  0.39% )

After:
$ taskset -c 0 perf stat -r 15 -- x86_64-linux-user/qemu-x86_64 img/bntest-x86_64 >/dev/null
 Performance counter stats for 'x86_64-linux-user/qemu-x86_64 img/bntest-x86_64' (15 runs):

      10459.968847 task-clock (msec)  # 0.999 CPUs utilized  ( +-  0.30% )
               739 context-switches   # 0.071 K/sec          ( +-  1.71% )
                 0 cpu-migrations     # 0.000 K/sec          ( +- 68.14% )
             2,204 page-faults        # 0.211 K/sec          ( +-  0.10% )
      10.473900411 seconds time elapsed                      ( +-  0.30% )

Suggested-by: Stefan Weil <sw@weilnetz.de>
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-05 08:44:46 -07:00
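A generic illustration of this kind of repacking (the field names are invented for the example; the real TCGTemp layout differs):

    #include <stdint.h>
    #include <stdio.h>

    struct loose {                    /* one int per flag or enum */
        int reg, val_type, base_type, type;
        int fixed_reg, mem_coherent, mem_allocated, temp_local;
    };

    struct packed {                   /* whole bytes for enums (no extraction
                                         code needed), single bits for bools */
        uint8_t reg, val_type, base_type, type;
        uint8_t fixed_reg : 1, mem_coherent : 1, mem_allocated : 1,
                temp_local : 1;
    };

    int main(void)
    {
        printf("%zu -> %zu bytes\n", sizeof(struct loose), sizeof(struct packed));
        return 0;
    }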
Peter Crosthwaite
fee068e4f1 tcg: Delete unused cpu_pc_from_tb()
No code uses the cpu_pc_from_tb() function. Delete it from tricore and
arm, which each provide an unused implementation. Update the comment
in tcg.h to reflect that this is obsoleted by synchronize_from_tb.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-04-30 16:06:18 +03:00
Peter Maydell
cf811fff2a tcg/tcg-op.c: Fix ld/st of 64 bit values on 32-bit bigendian hosts
Commit 951c6300f7 out-of-lined the 32-bit-host versions of
tcg_gen_{ld,st}_i64, but in the process it inadvertently changed
an #ifdef HOST_WORDS_BIGENDIAN to #ifdef TCG_TARGET_WORDS_BIGENDIAN.
Since the latter doesn't get defined anywhere this meant we always
took the "LE host" codepath, and stored the two halves of the value
in the wrong order on BE hosts. This typically breaks any 64-bit
guest on a 32-bit BE host completely, and will have possibly more
subtle effects even for 32-bit guests.

Switch the ifdef back to HOST_WORDS_BIGENDIAN.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Andreas Färber <afaerber@suse.de>
Message-id: 1428523029-13620-1-git-send-email-peter.maydell@linaro.org
2015-04-09 10:51:10 +01:00
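A simplified C sketch of what the ifdef selects (the real code emits two 32-bit TCG loads/stores rather than C memory accesses):

    #include <stdint.h>

    /* On a 32-bit host a 64-bit guest value is handled as two 32-bit halves.
       Which half lives at the lower address depends on the host byte order,
       so the test must be HOST_WORDS_BIGENDIAN; TCG_TARGET_WORDS_BIGENDIAN is
       never defined, which silently selected the LE layout everywhere. */
    static void store_i64_halves(uint32_t *mem, uint32_t lo, uint32_t hi)
    {
    #ifdef HOST_WORDS_BIGENDIAN
        mem[0] = hi;   /* high half first on a big-endian host */
        mem[1] = lo;
    #else
        mem[0] = lo;   /* low half first on a little-endian host */
        mem[1] = hi;
    #endif
    }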
Richard Henderson
2374c4b837 tcg/optimize: Handle or r,a,a with constant a
As seen with ubuntu-5.10-live-powerpc.iso.

Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-16 08:46:13 -07:00
Richard Henderson
37ed3bf1ee tcg: Complete handling of ALWAYS and NEVER
Missing from movcond, and brcondi_i32 (but not brcondi_i64).

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 13:08:05 -07:00
Richard Henderson
51e3972c41 tcg: Use tcg_malloc to allocate TCGLabel
Pre-allocating 512 of them per TB is a waste.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
bec1631100 tcg: Change generator-side labels to a pointer
This is less about improved type checking than enabling a
subsequent change to the representation of labels.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
42a268c241 tcg: Change translator-side labels to a pointer
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.

Note that the code generating backends still manipulate labels as int.

With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
3f626793a2 tcg-ia64: Use tcg_malloc to allocate TCGLabelQemuLdst
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
686461c962 tcg: Use tcg_malloc to allocate TCGLabelQemuLdst
Pre-allocating 640 of them per TB is a waste.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
15fc7daa77 tcg: Remove unused opcodes
We no longer need INDEX_op_end to terminate the list, nor do we
need 5 forms of nop, since we just remove the TCGOp instead.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Richard Henderson
a4ce099a7a tcg: Implement insert_op_before
Rather than reserving space in the op stream for optimization,
let the optimizer add ops as necessary.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Richard Henderson
0c627cdca2 tcg: Remove opcodes instead of noping them out
With the linked list scheme we need not leave nops in the stream
that we need to process later.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Richard Henderson
c45cb8bb89 tcg: Put opcodes in a linked list
The previous setup required ops and args to be completely sequential,
and was error prone when it came to both iteration and optimization.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
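A rough sketch of the list-based representation (the field names are illustrative, not the actual TCGOp layout):

    /* Each op records its own argument range and links to its neighbours, so
       the optimizer can insert or remove ops without keeping the op and arg
       buffers strictly sequential or leaving nops behind. */
    struct op {
        int opc;            /* opcode */
        int args;           /* index of this op's arguments in the arg buffer */
        int prev, next;     /* doubly linked list of ops, as buffer indices */
    };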
Richard Henderson
fe700adb3d tcg: Introduce tcg_op_buf_count and tcg_op_buf_full
The method by which we count the number of ops emitted
is going to change.  Abstract that away into some inlines.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Richard Henderson
3a13c3f34c tcg: Reduce ifdefs in tcg-op.c
Almost completely eliminates the ifdefs in this file, improving
confidence in the lesser used 32-bit builds.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Richard Henderson
951c6300f7 tcg: Move some opcode generation functions out of line
Some of these functions are really quite large.  We have a number of
things that ought to be circularly dependent, but we duplicated code
to break that chain for the inlines.

This saved 25% of the code size of one of the translators I examined.

Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-02-12 21:21:38 -08:00
Max Filippov
246ae24d7d tcg: add separate monitor command to dump opcode counters
Currently 'info jit' outputs half of the information to the monitor and
the rest to the qemu log. Dumping opcode counts to the monitor as part of
the 'info jit' command doesn't sound useful. Add a new monitor command
'info opcount' that only dumps the opcode counters.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2014-12-17 05:49:32 +03:00
Aurelien Jarno
0a2923f848 tcg/mips: fix store softmmu slow path
Commit 9d8bf2d1 moved the softmmu slow path out of line and introduced a
regression at the same time by always calling tcg_out_tlb_load with
is_load=1. This makes it impossible to run any significant code under
qemu-system-mips*.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2014-11-02 13:30:00 +01:00
Richard Henderson
b6c73a6d45 tcg: Always enable TCGv type checking
Instead of using structures, which imply some amount of overhead
on certain ABIs, use pointer types.

This actually reduces the size of the binaries vs a NON-debug
build on ppc64 and x86_64, due to a reduction in the number of
sign-extension insns.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:28 -04:00
Richard Henderson
9c53889ba3 tcg-aarch64: Use 32-bit loads for qemu_ld_i32
The "old" qemu_ld opcode did not specify the size of the result,
and so we had to assume full register width.  With the new opcodes,
we can narrow the result.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:28 -04:00
Richard Henderson
de8301e542 tcg-sparc: Use UMULXHI instruction
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:27 -04:00
Richard Henderson
c470b663f7 tcg-sparc: Rename ADDX/SUBX insns
The pre-v9 ADDX/SUBX insns were renamed ADDC/SUBC for v9.
Standardizing on the v9 name makes things less confusing.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:27 -04:00
Richard Henderson
9d6a7a8542 tcg-sparc: Use ADDXC in setcond_i64
Similar to the ADDC tricks we use in setcond_i32.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:27 -04:00
Richard Henderson
321b6c0585 tcg-sparc: Fix setcond_i32 uninitialized value
We failed to swap c1 and c2 correctly for NE c2 == 0.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:27 -04:00
Richard Henderson
90379ca84e tcg-sparc: Use ADDXC in addsub2_i64
On T4 and newer Sparc chips we have an add-with-carry insn
that takes its input from %xcc instead of %icc.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:27 -04:00
Richard Henderson
609ac1e164 tcg-sparc: Support addsub2_i64
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:26 -04:00
zhanghailiang
d70724cec8 tcg: dump op count into qemu log
fopen() may fail and its return value is not checked here; it is
better to dump the op count to the normal log file.

Signed-off-by: Li Liu <john.liuli@huawei.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-08-24 13:16:32 +04:00
Peter Maydell
1045fc0439 tcg/ppc: Fix support for 64-bit PPC MacOSX hosts
Add back in the support for 64-bit PPC MacOSX hosts that was
broken in the recent merge of the 32-bit and 64-bit TCG backends.

Reported-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Andreas Färber <andreas.faerber@web.de>
2014-06-29 11:38:50 +01:00
Richard Henderson
d4cba13bdf tcg/ppc: Fix failure in tcg_out_mem_long
With rt != r0 on loads, we use rt for scratch.  If we need an index
register different from base, we can't use rt, but r0 is usable.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1403843160-30332-1-git-send-email-rth@twiddle.net
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-27 13:23:41 +01:00
Peter Maydell
4196dca63b tcg: mark tcg_out* and tcg_patch* with attribute 'unused'
The tcg_out* and tcg_patch* functions are utility routines that may or
may not be used by a particular backend; mark them with the 'unused'
attribute to suppress spurious warnings if they aren't used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-06-24 20:01:24 +04:00
Stefan Weil
5d831be272 Fix new typos (found by codespell)
* accomodate -> accommodate
* aquiring -> acquiring
* beacuse -> because
* loosing -> losing
* prefering -> preferring
* threshhold -> threshold

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-06-24 20:01:24 +04:00
Richard Henderson
a84ac4cbbb tcg-ppc: Use the return address as a base pointer
This can significantly reduce code size for the generation of (some)
64-bit constants, with the side effect that we know for a fact
that exit_tb can use the register to good effect.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:33 -07:00
Richard Henderson
224f9fd419 tcg-ppc: Merge cache-utils into the backend
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h.  Removing this
disparity is easier now that the two ppc backends are merged.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:30 -07:00
Richard Henderson
40d964b563 tcg-ppc: Rename the tcg/ppc64 backend
The other tcg backends that support 32- and 64-bit modes
use the 32-bit name for the port.  Follow suit.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:23 -07:00
Richard Henderson
b38daef9d4 tcg-ppc: Remove the backend
Vectoring the 32-bit build to the ppc64 directory.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:18 -07:00
Richard Henderson
a757e1eef0 tcg-ppc64: Merge ppc32 shifts
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:15 -07:00
Richard Henderson
8fa391a011 tcg-ppc64: Support mulsh_i32
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:12 -07:00
Richard Henderson
dfca177874 tcg-ppc64: Merge ppc32 register usage
Good enough to run some instructions before things go awry.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:09 -07:00
Richard Henderson
7f25c469c7 tcg-ppc64: Merge ppc32 qemu_ld/st
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:06 -07:00
Richard Henderson
abcf61c48e tcg-ppc64: Merge ppc32 brcond2, setcond2, muluh
Now passes tcg_add_target_add_op_defs assertions, but
not complete enough to function.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:03 -07:00
Richard Henderson
796f1a689d tcg-ppc64: Begin merging ppc32 with ppc64
Just enough to compile, assuming you edit config-host.mak manually.
It will still abort at runtime, due to missing brcond2, setcond2, mulu2.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:58 -07:00
Richard Henderson
b31284cecf tcg-ppc64: Fix sub2 implementation
All sorts of confusion on argument ordering.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:56 -07:00
Richard Henderson
ffcfbecec3 tcg-ppc64: Merge 32-bit ABIs into the prologue / frame code
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:52 -07:00
Ulrich Weigand
77e58d0d60 tcg-ppc64: Adjust tcg_out_call for ELFv2
The new ELFv2 ABI, used by default on powerpc64le-linux hosts,
introduced some changes that are incompatible with code currently
generated by the ppc64 TCG target.  In particular, we no longer
use function descriptors.

This patch adds support for the ELFv2 ABI in the ppc64 TCG
function call and function prologue sequences.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:46 -07:00
Richard Henderson
a2a98f807b tcg-ppc64: Support the ppc64 elfv2 ABI
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:43 -07:00
Richard Henderson
eaf7d1cfe0 tcg-ppc64: Use the correct test in tcg_out_call
The correct test uses the _CALL_AIX macro, not a host-specific macro.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:38 -07:00
Richard Henderson
802ca56e1d tcg-ppc64: Better parameterize the stack frame
In preparation for supporting other ABIs.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:34 -07:00
Richard Henderson
5456788db7 tcg-ppc64: Fix TCG_TARGET_CALL_STACK_OFFSET
The calling convention reserves space for the 8 register parameters on
the stack, so using only 6*8=48 as the offset was wrong.  We never saw
this bug because we don't have any helpers with more than 5 parameters.
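
For illustration only (ELFv1 figures; the macro names below are invented,
not the backend's):

    /* ELFv1 frame header: back chain, CR save, LR save, two reserved
       doublewords, TOC save = 6 * 8 = 48 bytes. */
    #define FRAME_HEADER_SIZE   (6 * 8)
    /* Home space reserved for the 8 register parameters. */
    #define PARAM_SAVE_SIZE     (8 * 8)
    /* First true stack argument: 48 + 64 = 112, not 48. */
    #define CALL_STACK_OFFSET   (FRAME_HEADER_SIZE + PARAM_SAVE_SIZE)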

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:29 -07:00
Richard Henderson
a921fddcc1 tcg-ppc64: Move call macros out of tcg-target.h
These values are private to tcg.c; we don't need to expose
this nonsense to the translators.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:26 -07:00
Richard Henderson
3bf4a1ed61 tcg-ppc64: Make TCG_AREG0 and TCG_REG_CALL_STACK enum constants
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:22 -07:00
Richard Henderson
4c3831a088 tcg-ppc64: Use tcg_out_{ld,st,cmp} internally
Rather than using tcg_out32 and opcodes directly.  This allows us
to remove LD_ADDR and CMP_L macros.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:17 -07:00
Richard Henderson
de7761a39d tcg-ppc64: Relax register restrictions in tcg_out_mem_long
In order to be able to use tcg_out_ld/st sensibly with scratch
registers, assert only when we'd incorrectly clobber a scratch.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:14 -07:00
Richard Henderson
d604f1a90d tcg-ppc64: Move functions around
Code movement only.  This will allow us to make use of the
other tcg_out_* functions in tidying their implementations.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:10 -07:00
Richard Henderson
de3d636d83 tcg-ppc64: Avoid some hard-codings of TCG_TYPE_I64
Using more appropriate _PTR or _REG where possible.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:31:07 -07:00
Richard Henderson
9171478c95 tcg-ppc: Use uintptr_t in ppc_tb_set_jmp_target
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:29:30 -07:00
Richard Henderson
bc8d688ff3 tcg/optimize: Don't special case TCG_OPF_CALL_CLOBBER
With the "old" ldst ops we didn't know the real width of the
result of the load, but with the "new" ldst ops we do.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-18 11:39:02 -07:00
Peter Maydell
31e25e3e57 Merge remote-tracking branch 'remotes/bonzini/softmmu-smap' into staging
* remotes/bonzini/softmmu-smap: (33 commits)
  target-i386: cleanup x86_cpu_get_phys_page_debug
  target-i386: fix protection bits in the TLB for SMEP
  target-i386: support long addresses for 4MB pages (PSE-36)
  target-i386: raise page fault for reserved bits in large pages
  target-i386: unify reserved bits and NX bit check
  target-i386: simplify pte/vaddr calculation
  target-i386: raise page fault for reserved physical address bits
  target-i386: test reserved PS bit on PML4Es
  target-i386: set correct error code for reserved bit access
  target-i386: introduce support for 1 GB pages
  target-i386: introduce do_check_protect label
  target-i386: tweak handling of PG_NX_MASK
  target-i386: commonize checks for PAE and non-PAE
  target-i386: commonize checks for 4MB and 4KB pages
  target-i386: commonize checks for 2MB and 4KB pages
  target-i386: fix coding standards in x86_cpu_handle_mmu_fault
  target-i386: simplify SMAP handling in MMU_KSMAP_IDX
  target-i386: fix kernel accesses with SMAP and CPL = 3
  target-i386: move check_io helpers to seg_helper.c
  target-i386: rename KSMAP to KNOSMAP
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-05 21:06:14 +01:00
Paolo Bonzini
c773828aa9 softmmu: move all load/store functions to cpu_ldst.h
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h
into a single new header file with all helpers.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05 16:10:33 +02:00
Alexander Graf
e3eb9806c7 TCG: Fix tcg_gen_extr_i64_tl for 32bit
We expose a generic helper "tcg_gen_extr_i64_tl" for 64-bit targets, but the
same function for 32-bit targets is a misnomer and refers to an invalid function
name.

Fix up the definition to point to the correct internal helper names instead.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 14:11:45 -07:00
Richard Henderson
3d1b2ff62c tcg: Remove TCG_TARGET_HAS_new_ldst
Since all backends have been converted, remove the compatibility code.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 14:10:26 -07:00
Richard Henderson
76782fab1c tci: Convert to new ldst opcodes
Tested-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 14:10:16 -07:00
Richard Henderson
0b91966730 tcg-i386: Fix win64 qemu store
The first non-register argument isn't placed at offset 0.

Cc: qemu-stable@nongnu.org
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 13:58:39 -07:00
Richard Henderson
24666baf1f tcg/optimize: Remember garbage high bits for 32-bit ops
For a 64-bit host, the high bits of a register after a 32-bit operation
are undefined.  Adjust the temps mask for all 32-bit ops to reflect that.
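
Roughly the idea, as a standalone sketch (the names here are invented; the
real change is in tcg/optimize.c):

    #include <stdint.h>

    /* "mask" = bits that may still be non-zero.  After a 32-bit op on a
       64-bit host the high half is garbage, so it must be marked unknown. */
    static uint64_t mask_after_32bit_op(uint64_t low_mask)
    {
        return (low_mask & 0xffffffffull) | 0xffffffff00000000ull;
    }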

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:56 -07:00
Richard Henderson
a62f6f5600 tcg/optimize: Move updating of gen_opc_buf into tcg_opt_gen_mov*
No functional change, just reduce a bit of redundancy.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:56 -07:00
Richard Henderson
ae18b28dd1 tcg-sparc: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:56 -07:00
Richard Henderson
d2e16f2ce1 tcg-s390: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
1695974187 tcg-arm: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
3d9bddb30b tcg-aarch64: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
e9a9a5b605 tcg-i386: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
2c90784abf tcg: Allow the debug_frame data structure to be constant
Adjust the FDE to point to the code_buffer after we've copied it
to the image, rather than requiring that the backend set it prior.
This allows the backend to use read-only storage for its data.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
bbb8a1b455 tcg: Remove sizemask and flags arguments to tcg_gen_callN
Take them from the TCGHelperInfo struct instead.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
afb49896fa tcg: Save flags and computed sizemask in TCGHelperInfo
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
72866e823e tcg: Register the helper info struct rather than the name
This will let us find all the info from the hash table.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
836d6ed96e tcg: Inline tcg_gen_helperN
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
c017230d9b tcg: Use helper-gen.h in tcg-op.h
No need to open-code the setup of the builtin helpers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
944eea962b tcg: Push tcg-runtime routines into exec/helper-*
Rather than special casing them, use the standard mechanisms
for tcg helper generation.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
2ef6175aa7 tcg: Invert the inclusion of helper.h
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h.  This
minimizes the files that must be rebuilt when changing the macros
for file N.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
a763551ad5 tcg: Optimize brcond2 and setcond2 ne/eq
If either the high or low pair can be resolved, we can
simplify to either a constant or to a 32-bit comparison.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:53 -07:00
Peter Maydell
27aa948502 Merge remote-tracking branch 'remotes/rth/tcg-mips' into staging
* remotes/rth/tcg-mips: (24 commits)
  tcg-mips: Enable direct chaining of TBs
  tcg-mips: Simplify movcond
  tcg-mips: Simplify brcond2
  tcg-mips: Improve setcond eq/ne vs zeros
  tcg-mips: Simplify setcond2
  tcg-mips: Simplify brcond
  tcg-mips: Simplify setcond
  tcg-mips: Commonize opcode implementations
  tcg-mips: Improve add2/sub2
  tcg-mips: Hoist args loads
  tcg-mips: Fix subtract immediate range
  tcg-mips: Name the opcode enumeration
  tcg-mips: Use EXT for AND on mips32r2
  tcg-mips: Use T9 for TCG_TMP1
  tcg-mips: Introduce TCG_TMP0, TCG_TMP1
  tcg-mips: Rearrange register allocation
  tcg-mips: Convert to new_ldst
  tcg-mips: Convert to new qemu_l/st helpers
  tcg-mips: Move softmmu slow path out of line
  tcg-mips: Split large ldst offsets
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-27 18:31:02 +01:00
Richard Henderson
b6bfeea92a tcg-mips: Enable direct chaining of TBs
Now that the code_gen_buffer is constrained to not cross 256mb
regions, we are assured that we can use J to reach another TB.
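
A sketch of the constraint being relied on (helper name invented):

    #include <stdint.h>

    /* The MIPS J opcode keeps bits 31:28 of the PC, so a direct jump only
       reaches targets within the same 256MB-aligned region. */
    static int same_j_region(uintptr_t from, uintptr_t to)
    {
        return ((from ^ to) & ~(uintptr_t)0x0fffffff) == 0;
    }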

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:48:37 -07:00
Richard Henderson
33fac20bb2 tcg-mips: Simplify movcond
Use the same table to fold comparisons as with setcond.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:47:14 -07:00
Richard Henderson
3401fd259e tcg-mips: Simplify brcond2
Emitting a single branch instead of (up to) 3, using setcond2
to generate the composite compare.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:47:08 -07:00
Richard Henderson
1db1c4d7d9 tcg-mips: Improve setcond eq/ne vs zeros
The original code results in one too many insns per zero
present in the input.  And since comparing 64-bit numbers
vs zero is common...

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:58 -07:00
Richard Henderson
9a2f0bfe32 tcg-mips: Simplify setcond2
Using tcg_unsigned_cond and tcg_high_cond.
Also, move the function up in the file for future cleanups.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:53 -07:00
Richard Henderson
c068896f7f tcg-mips: Simplify brcond
Use the same table to fold comparisons as with setcond.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:49 -07:00
Richard Henderson
fd1cf66630 tcg-mips: Simplify setcond
Use a table to fold comparisons to less-than.
Also, move the function up in the file for further simplifications.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:39 -07:00
Richard Henderson
4f048535cd tcg-mips: Commonize opcode implementations
Most opcodes fall in to one of a couple of patterns.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:32 -07:00
Richard Henderson
741f117d9a tcg-mips: Improve add2/sub2
Reduce insn count from 5 to either 3 or 4.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:25 -07:00
Richard Henderson
22ee3a987d tcg-mips: Hoist args loads
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:20 -07:00
Richard Henderson
070603f62b tcg-mips: Fix subtract immediate range
Since we must use ADDIU, we would generate incorrect code for -32768.
Leaving off subtract of +32768 makes things easier for a follow-on patch.
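
A sketch of the resulting range check (illustrative only, not the patch
itself):

    #include <stdint.h>

    /* sub rd, rs, imm is emitted as ADDIU rd, rs, -imm, so -imm must fit
       the signed 16-bit field; +32768 is deliberately left out as well. */
    static int subi_in_range(int32_t imm)
    {
        return imm > -32768 && imm <= 32767;
    }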

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:08 -07:00
Richard Henderson
ac0f3b1263 tcg-mips: Name the opcode enumeration
And use it in the opcode emission functions.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:03 -07:00
Richard Henderson
1c4182687e tcg-mips: Use EXT for AND on mips32r2
At the same time, tidy deposit by introducing tcg_out_opc_bf.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:56 -07:00
Richard Henderson
f216a35f36 tcg-mips: Use T9 for TCG_TMP1
T0 is an argument register for the n32 and n64 ABIs.  T9 is the call
address register for those ABIs, and is more directly under the control
of the backend.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:48 -07:00
Richard Henderson
6c530e32f4 tcg-mips: Introduce TCG_TMP0, TCG_TMP1
Use these instead of hard-coding the registers to use for temporaries.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:44 -07:00
Richard Henderson
418839044e tcg-mips: Rearrange register allocation
Use FP (also known as S8) as a normal call-saved register.

Include T0 in the allocation order and call-clobbered list
even though it's currently used as a TCG temporary.

Put the argument registers at the end of the allocation order.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:28 -07:00
Richard Henderson
fbef2cc80f tcg-mips: Convert to new_ldst
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:24 -07:00
Richard Henderson
ce0236cfbd tcg-mips: Convert to new qemu_l/st helpers
In addition, fill delay slots calling the helpers and tail
call to the store helpers.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:20 -07:00
Richard Henderson
9d8bf2d125 tcg-mips: Move softmmu slow path out of line
At the same time, tidy up the call helpers, avoiding a memory reference.
Split out several subroutines.  Use TCGMemOp constants.  Make endianness
selectable at runtime.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:16 -07:00
Richard Henderson
f9a716325f tcg-mips: Split large ldst offsets
Use this to reduce goto_tb by one insn.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:13 -07:00
Richard Henderson
7dae901d2d tcg-mips: Fill the exit_tb delay slot
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:09 -07:00
Richard Henderson
f8c9eddb2b tcg-mips: Use J and JAL opcodes
For userland builds calls will normally be in range,
and for the exit_tb opcode the branch to the epilogue.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:05 -07:00
Richard Henderson
a3abb29292 tci: Fix tcg_out_call
Broken since dddbb2e1e3.
Do all the rest of the things that tcg_out_op did before
and after the big switch statement.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-22 13:25:34 -07:00
Richard Henderson
a10c64e0df tcg-s390: Implement direct chaining of TBs
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 09:22:32 -07:00
Richard Henderson
7b7066b1db tcg-s390: Improve setcond
There are a variety of common cases for which we can use carry tricks to
avoid a conditional branch.  On very new hardware, use LOAD ON CONDITION
instead of a conditional branch.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 01:33:35 -04:00
Richard Henderson
ad19b35808 tcg-s390: Allow immediate operands to add2 and sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 01:33:29 -04:00
Richard Henderson
f167dc37da tcg-s390: Implement tcg_register_jit
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:12:25 -04:00
Richard Henderson
547ec12141 tcg-s390: Use more risbg in the tlb sequence
Elides two insns from the sequence.  The resulting tlb compare
sequence is satisfyingly minimal:

	risbg  %r2,%r8,51,186,56
	risbg  %r3,%r8,61,178,0
	cg     %r3,904(%r10,%r2)
	lg     %r2,920(%r10,%r2)
	jlh    tlb_miss

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:10:42 -04:00
Richard Henderson
fb5964152d tcg-s390: Move ldst helpers out of line
That is, the old LDST_OPTIMIZATION.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:10:00 -04:00
Richard Henderson
f24efee41e tcg-s390: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
b8dd88b85c tcg-s390: Integrate endianness into TCGMemOp
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
a5a04f2830 tcg-s390: Convert to TCGMemOp
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
a175689654 tcg-s390: Fix off-by-one in wraparound andi
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:47 -04:00
Richard Henderson
450445d543 tcg: Fix tcg_reg_alloc_mov vs no-op truncation
Commit af3cbfbe80 hoisted some "common"
loads of the temporary type, forgetting that the types could differ
during truncating moves.  This affects the correctness of the memory
offset on big-endian hosts.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-14 09:56:13 -07:00
Richard Henderson
96d0ee7f09 tcg: Remove unreachable code in tcg_out_op and op_defs
The INDEX_op_call case has just been obsoleted; the mov and movi
cases have not been reachable for years.  Attempt to document this
both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT.

Because of the TCG_OPF_NOT_PRESENT change, this must be done for
all targets in a single commit.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:13 -07:00
Richard Henderson
af3cbfbe80 tcg: Use tcg_target_available_regs in tcg_reg_alloc_mov
The move opcodes are special in that their constraints must cover
all available registers.  So instead of checking the constraints,
just use the available registers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
cf06667428 tcg: Make call address a constant parameter
Avoid allocating a tcg temporary to hold the constant address,
and instead place it directly into the op_call arguments.

At the same time, convert to the newly introduced tcg_out_call
backend function, rather than invoking tcg_out_op for the call.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
dddbb2e1e3 tci: Create tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
eb68a4fa4e tcg-mips: Split out tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
4e9cf8409a tcg-sparc: Create tcg_out_call
Rename the existing tcg_out_calli to tcg_out_call_nodelay.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
fdd8ec7184 tcg-ppc64: Rename tcg_out_calli to tcg_out_call
Merge the existing tcg_out_call into tcg_out_op.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
00d7a1acab tcg-ppc: Split out tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
a8111212b3 tcg-s390: Rename tgen_calli to tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
6bf3e99747 tcg-i386: Rename tcg_out_calli to tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:11 -07:00
Richard Henderson
5053361b3e tcg: Require TCG_TARGET_INSN_UNIT_SIZE
Now that all backends do define TCG_TARGET_INSN_UNIT_SIZE,
remove the fallback definition.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:07:06 -07:00
Richard Henderson
a7f96f7666 tci: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:07:02 -07:00
Richard Henderson
ae0218e350 tcg-mips: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
5588ff2921 tcg-ia64: Define TCG_TARGET_INSN_UNIT_SIZE
Using a 16-byte aligned structure achieves best results, both for code
cleanliness and compiled code size.  However, this means that we can't
use the trick of encoding the slot number into the low 2 bits.

Thankfully, we only ever use slot2, so make that explicit in the names
of the relocation functions, and drop the code for other slots.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
8c081b1802 tcg-s390: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
8587c30c3e tcg-aarch64: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:52 -07:00
Richard Henderson
267c931985 tcg-arm: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:29 -07:00
Richard Henderson
abce5964be tcg-sparc: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
38cf39f739 tcg-ppc: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
e083c4a233 tcg-ppc64: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
f6bff89d06 tcg-i386: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
1813e1758d tcg: Define tcg_insn_unit for code pointers
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
52a1f64ec5 tcg: Introduce byte pointer arithmetic helpers
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Peter Maydell
5c53bb8121 tcg: Avoid undefined behaviour patching code at unaligned addresses
To avoid C undefined behaviour when patching generated code,
provide wrappers tcg_patch8/16/32/64 which use the usual memcpy
trick, and use them in the i386 backend.
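
A minimal sketch of the memcpy trick (the real wrappers are the tcg_patch*
helpers named above):

    #include <stdint.h>
    #include <string.h>

    static void patch32(void *code_ptr, uint32_t insn)
    {
        /* memcpy is well-defined for any alignment; a direct
           *(uint32_t *)code_ptr = insn store would not be. */
        memcpy(code_ptr, &insn, sizeof(insn));
    }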

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Peter Maydell
4387345a96 tcg: Avoid stores to unaligned addresses
Avoid stores to unaligned addresses in TCG code generation, by using the
usual memcpy() approach. (Using bswap.h would drag a lot of QEMU baggage
into TCG, so it's simpler just to do direct memcpy() here.)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
ebd0c614d7 tcg-sparc: Accept stores of zero
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
035b239826 tcg-sparc: Fix small 32-bit movi
We tested imm13 before discarding garbage high bits.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
35e2da1556 tcg-sparc: Fixup function argument types
Use TCGReg everywhere appropriate.  Use int32_t for all arguments
that may be registers or immediate constants.  Merge tcg_out_addi
into its only caller.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
b357f902bf tcg-sparc: Hoist common argument loads in tcg_out_op
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
98b90bab3f tcg-sparc: Don't handle mov/movi in tcg_out_op
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
425532d71d tcg-sparc: Tidy check_fit_* tests
Use sextract instead of raw bit shifting for the tests.  Introduce
a new check_fit_ptr macro to make it clear we're looking at pointers.
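
Roughly the shape of such a test (a sketch; assumes the usual arithmetic
right shift of signed values, as QEMU does elsewhere):

    #include <stdint.h>

    /* True if val fits in a signed immediate field of 'bits' bits. */
    static int check_fit_signed(int64_t val, unsigned bits)
    {
        uint64_t shifted = (uint64_t)val << (64 - bits);
        return val == ((int64_t)shifted >> (64 - bits));
    }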

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
f4c166619e tcg-sparc: Implement muls2_i32
Using the 32-bit SMUL is a tad more efficient than
resorting to extending and using the 64-bit MULX.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
8b66eefe0d tcg-sparc: Use the RETURN instruction
Saves one insn per TB exit over JMPL+RESTORE.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
34b1a49cb1 tcg-sparc: Use 64-bit registers with sparcv8plus
Quite a lot of effort was spent composing and decomposing 64-bit
quantities in registers, when we should just create them and leave
them as one 64-bit register.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
a24fba935a tcg-sparc: Support trunc_shr_i32
Unlike a 64-bit shift op, this allows the output to be in %l or %i registers
for sparcv8plus.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
9f44adc573 tcg-sparc: Remove most uses of TCG_TARGET_REG_BITS
Replace with SPARC64 define.  Soon even sparcv8plus will use
64-bit registers as far as TCG is concerned.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
4bb7a41ed6 tcg: Add INDEX_op_trunc_shr_i32
Let the backend do something special for truncation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:34 -07:00
Richard Henderson
71b926992e tcg: Fix missed pointer size != TCG_TARGET_REG_BITS changes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:34 -07:00
Peter Maydell
ad600a4d49 Pull tcg 2014-04-22
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJTVthUAAoJEK0ScMxN0Ceb5OUH/0Ri4engOQim3mV1jAOr2Vca
 3zg+Hl8b+UioXD0se24Iipr6s+02G1DApbbLPX7DZAnoh9jEBvDtHOdde3pNbMkQ
 jcTpShTyT7OKSsklRN19ckvk0ffBch5W3Ekkw6/Hg6ys2HIvirRpEL6R58oJNlP6
 xcCkQZISZVkakbv5xft8YQo1v8wnU5q2l85OaC1aaDB6g+Y6ZgoA1qkWjqlHkmQk
 1asflfbC0r5ke+yx7vz6310f5xBDLSVv17dqsDUr70o1m/6bem6wQXMczwmYUfk5
 99OCPiqdiCZLJyVFvIvfwSakL9Bq/nvnmywXTkrB7rovk5VZz3gc4sJkuUkL+KM=
 =URV1
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/tcg-next-20140422' into staging

Pull tcg 2014-04-22

# gpg: Signature made Tue 22 Apr 2014 22:00:04 BST using RSA key ID 4DD0279B
# gpg: Can't check signature: public key not found

* remotes/rth/tags/tcg-next-20140422:
  tcg: Use HOST_WORDS_BIGENDIAN
  tcg: Fix fallback from muls2_i64 to mulu2_i64
  tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32
  tcg: Relax requirement for mulu2_i32 on 32-bit hosts
  tcg-s390: Remove W constraint
  tcg-sparc: Use the type parameter to tcg_target_const_match
  tcg-ppc64: Use the type parameter to tcg_target_const_match
  tcg-aarch64: Remove w constraint
  tcg: Add TCGType parameter to tcg_target_const_match
  tcg: Fix out of range shift in deposit optimizations
  tci: Mask shift counts to avoid undefined behavior
  tcg: Mask shift quantities while folding
  tcg: Use "unspecified behavior" for shifts
  tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-24 15:24:52 +01:00
Richard Henderson
02eb19d0ec tcg: Use HOST_WORDS_BIGENDIAN
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
662deb908f tcg: Fix fallback from muls2_i64 to mulu2_i64
Brown Bag sez, don't put the fallback code into the wrong function.
Also, check for muluh_i64 and use tcg_gen_mulu2_i64 instead of raw ops.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
f46fc4e6a9 tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32
Rather than hard-coding use of mulu2_i32, allow muluh_i32.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
df9ebea53e tcg: Relax requirement for mulu2_i32 on 32-bit hosts
Instead require either mulu2_i32 or muluh_i32.  The code in tcg-op.h
already supports looking for both.  Previous incomplete conversion?

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
671c835b7d tcg-s390: Remove W constraint
Now redundant with the type parameter to tcg_target_const_match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
4b304cfae1 tcg-sparc: Use the type parameter to tcg_target_const_match
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
1194dcba22 tcg-ppc64: Use the type parameter to tcg_target_const_match
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
170bf9315b tcg-aarch64: Remove w constraint
Now redundant with the type parameter to tcg_target_const_match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
f6c6afc1d4 tcg: Add TCGType parameter to tcg_target_const_match
Most 64-bit targets need to be able to ignore the high bits
of a TCG_TYPE_I32 value.
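
A sketch of what the extra parameter enables (function name invented):

    #include <stdint.h>

    /* For TCG_TYPE_I32 constants only the low 32 bits are meaningful, so
       canonicalize before matching against the constraint letters. */
    static int64_t canonical_const(int64_t val, int is_i32)
    {
        return is_i32 ? (int32_t)val : val;
    }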

Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
d998e555d2 tcg: Fix out of range shift in deposit optimizations
By inspection, for a deposit(x, y, 0, 64), we'd have a shift of (1<<64)
and everything else falls apart.  But we can reuse the existing deposit
logic to get this right.
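
One way to see the problem (a sketch; the actual fix reuses the existing
deposit helper rather than this):

    #include <stdint.h>

    /* Building the deposit mask: a naive ((1ull << len) - 1) is undefined
       for len == 64, so treat the full-width case separately. */
    static uint64_t deposit_mask(unsigned pos, unsigned len)
    {
        uint64_t field = (len == 64) ? ~0ull : ((1ull << len) - 1);
        return field << pos;
    }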

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
50c5c4d125 tcg: Mask shift quantities while folding
The TCG result would be undefined, but we can at least produce one
plausible result and avoid triggering the wrath of analysis tools.
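
The idea in miniature (illustrative only):

    #include <stdint.h>

    /* Clamp the count so constant folding never performs a shift that is
       undefined on the host, even if the guest value was out of range. */
    static uint64_t fold_shl64(uint64_t x, uint64_t count)
    {
        return x << (count & 63);
    }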

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
20022fa15f tcg: Use "unspecified behavior" for shifts
Change the definition such that shifts are not allowed to crash
for any input.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Stefan Weil
ad5171dbd4 tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool
Static code analyzers complain about signed bitfields with only a single
bit. is_ld is used as a boolean value, so make it bool.

ppc64 already used bool for the 2nd argument is_ld of the local function
add_qemu_ldst_label. Modify all other TCG targets to follow this
example.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
0374f5089a tcg-ia64: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:20 -04:00
Richard Henderson
3bf16cb31a tcg-ia64: Move part of softmmu slow path out of line
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:20 -04:00
Richard Henderson
4bdd547aaa tcg-ia64: Convert to new ldst helpers
Still inline, but updated to the new routines.  Always use the LE
helpers, reusing the bswap between the fast and slow paths.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:19 -04:00
Richard Henderson
af9fe31070 tcg-ia64: Reduce code duplication in tcg_out_qemu_ld
The only differences were in the bswap insns emitted.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:19 -04:00
Richard Henderson
1f91f39219 tcg-ia64: Move tlb addend load into tlb read
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:18 -04:00
Richard Henderson
b672cf66c3 tcg-ia64: Move bswap for store into tlb load
Saving at least two cycles per store, and cleaning up the code.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:17 -04:00
Richard Henderson
4c186ee2cf tcg-ia64: Re-bundle the tlb load
This sequencing requires 5 stop bits instead of 6, and has room left
over to pre-load the tlb addend and bswap data prior to being stored.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:17 -04:00
Richard Henderson
dcf91778ca tcg-ia64: Optimize small arguments to exit_tb
Saves one bundle for the common case of exit_tb 0.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:16 -04:00
Richard Henderson
b825025f08 tcg-aarch64: Use tcg_out_mov in preference to tcg_out_movr
It's the more canonical interface.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:02 -04:00
Richard Henderson
a056c9faa4 tcg-aarch64: Prefer unsigned offsets before signed offsets for ldst
The assembler seems to prefer them, perhaps we should too.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
3d4299f425 tcg-aarch64: Introduce tcg_out_insn_3312, _3310, _3313
Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't encoded
for the proper shift for the field and was confusing.

Merge aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits
into a single I3312_* argument, eliminating some magic numbers from the
helper functions.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
dc73dfd4bc tcg-aarch64: Merge aarch64_ldst_get_data/type into tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
edd8824cd4 tcg-aarch64: Introduce tcg_out_insn_3507
Cleaning up the implementation of REV and REV16 at the same time.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
e81864a109 tcg-aarch64: Support stores of zero
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
de61d14fa7 tcg-aarch64: Implement TCG_TARGET_HAS_new_ldst
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
667b1cdd4e tcg-aarch64: Pass qemu_ld/st arguments directly
Instead of passing them the "args" array.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
9e4177ad6d tcg-aarch64: Use TCGMemOp in qemu_ld/st
Making the bswap conditional on the memop instead of a compile-time test.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
dc0c8aaf2c tcg-aarch64: Use ADR to pass the return address to the ld/st helpers
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
ae7ab46aa8 tcg-aarch64: Use tcg_out_call for qemu_ld/st
In some cases, a direct branch will be in range.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
6f4724672c tcg-aarch64: Avoid add with zero in tlb load
Some guest envs are small enough to reach the TLB with only a 12-bit addition.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
38d195aa05 tcg-aarch64: Implement tcg_register_jit
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
95f72aa90a tcg-aarch64: Introduce tcg_out_insn_3314
Combines 4 other inline functions and tidies the prologue.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
d82b78e48b tcg-aarch64: Reuse LR in translated code
It's obviously call-clobbered, but is otherwise unused.
Repurpose it as the TCG temporary.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
3d9e69a238 tcg-aarch64: Use CBZ and CBNZ
A compare and branch against zero happens at the start of
every single TB.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
cae1f6f3e6 tcg-aarch64: Create tcg_out_brcond
Rearrange code to put the compare and branch in the same place.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
81d8a5ee19 tcg-aarch64: Use symbolic names for branches
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
c6e310d938 tcg-aarch64: Use adrp in tcg_out_movi
Loading a QEMU pointer as an immediate happens often.  E.g.

- exit_tb $0x7fa8140013
+ exit_tb $0x7f81ee0013
...
- :  d2800260        mov     x0, #0x13
- :  f2b50280        movk    x0, #0xa814, lsl #16
- :  f2c00fe0        movk    x0, #0x7f, lsl #32
+ :  90ff1000        adrp    x0, 0x7f81ee0000
+ :  91004c00        add     x0, x0, #0x13

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
d8918df577 tcg-aarch64: Special case small constants in tcg_out_movi
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
4ec4f0bd56 tcg-aarch64: Use ORRI in tcg_out_movi
The subset of logical immediates that we support is quite quick to test,
and such constants are quite common to want to load.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
dfeb5fe770 tcg-aarch64: Use MOVN in tcg_out_movi
When profitable, initialize the register with MOVN instead of MOVZ,
before setting the remaining lanes with MOVK.
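
A sketch of one possible profitability test (not the code in this patch):

    #include <stdint.h>

    /* Start from MOVN when more 16-bit lanes are all-ones than all-zeros,
       so fewer MOVK instructions are needed afterwards. */
    static int prefer_movn(uint64_t value)
    {
        int ones = 0, zeros = 0;
        for (int shift = 0; shift < 64; shift += 16) {
            uint16_t lane = value >> shift;
            ones += (lane == 0xffff);
            zeros += (lane == 0x0000);
        }
        return ones > zeros;
    }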

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
929f8b5550 tcg-aarch64: Use TCGType and TCGMemOp constants
Rather than raw constants that could mean anything.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
8bf56493f1 tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
1a8e80d7e8 tcg-arm: Avoid ldrd/strd for user-only emulation
The arm ldrd/strd insns must cause alignment traps, whereas
at least for armv7 ldr/str must handle unaligned operations.

While this is hardly the only problem facing user-only emu,
this solves one problem for i386 on armv7 emulation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-27 16:33:01 -04:00
Richard Henderson
cab0a7ea00 tcg-sparc: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
7ea5d7256d tcg-sparc: Convert to new ldst helpers
All of the helpers with the explicit big/little endian option
require the return address as a parameter.  Acquire this via
a trampoline.

Move the load of areg0 into the trampoline.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
a8b12c108c tcg-sparc: Tidy tcg_out_tlb_load interface
Pass address registers explicitly, rather than as indicies of args[].
It's two argument registers either way.  Use more TCGReg as appropriate.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
eef0d9e740 tcg-sparc: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
a9c7d27bd1 tcg-sparc: Improve tcg_out_movi
If bits 31:13 are zero, reduce the insn count by one.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
1d0a60681a tcg-sparc: Don't handle constant arguments to ext32 ops
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
5f9eb02555 tcg-sparc: Don't handle remainder
The generic fallback is exactly what we implemented.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
c8fc56cedd tcg-sparc: Use intptr_t as appropriate
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
aad2f06a7f tcg-sparc: Tidy call+jump patterns
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
d801a8f2ce tcg-sparc: Fix tlb read
We were computing the full address into %o0 and then not using it.
Adjust some of the computation to rely less on having to pull immediate
values into registers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
e7bc9004e7 tcg-sparc: Fix ld64 for 32-bit mode
Since we're not using an annulled branch, we need to put a nop
in the delay slot.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
582ab779c5 tcg-aarch64: Introduce tcg_out_insn_3405
Cleaning up the implementation of tcg_out_movi at the same time.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:15 -07:00
Richard Henderson
8678b71ce6 tcg-aarch64: Support div, rem
Clean up multiply at the same time.

For remainder, generic code will produce mul+sub,
whereas we can implement with msub.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:10 -07:00
Richard Henderson
1fcc9ddfb3 tcg-aarch64: Support muluh, mulsh
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:07 -07:00
Richard Henderson
c6e929e784 tcg-aarch64: Support add2, sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:04 -07:00
Richard Henderson
b3c56df769 tcg-aarch64: Support deposit
Also tidy the implementation of ubfm, sbfm, extr in order to share code.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:01 -07:00
Richard Henderson
ed7a0aa8bc tcg-aarch64: Use tcg_out_insn for setcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:58 -07:00
Richard Henderson
04ce397b33 tcg-aarch64: Support movcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:55 -07:00
Richard Henderson
14b155ddc4 tcg-aarch64: Support andc, orc, eqv, not, neg
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:52 -07:00
Richard Henderson
e029f29385 tcg-aarch64: Handle constant operands to and, or, xor
Handle a simplified set of logical immediates for the moment.

The way gcc and binutils do it, with 52k worth of tables, and
a binary search depth of log2(5334) = 13, seems slow for the
most common cases.
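
One cheap test along these lines, covering only a simplified subset (a
sketch, not the routine added here):

    #include <stdint.h>

    /* Accept a single contiguous run of ones, possibly rotated; this covers
       many common masks without any table lookup. */
    static int is_simple_limm(uint64_t v)
    {
        if (v == 0 || ~v == 0) {
            return 0;                   /* all-zeros/all-ones not encodable */
        }
        while (!(v & 1) || (v >> 63)) { /* rotate until the run starts at bit 0 */
            v = (v >> 1) | (v << 63);
        }
        return (v & (v + 1)) == 0;      /* contiguous ones from bit 0 */
    }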

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:47 -07:00
Richard Henderson
90f1cd9138 tcg-aarch64: Handle constant operands to add, sub, and compare
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:44 -07:00
Richard Henderson
7d11fc7c2b tcg-aarch64: Implement mov with tcg_out_insn
Avoid the magic numbers in the current implementation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:41 -07:00
Richard Henderson
096c46c0ff tcg-aarch64: Introduce tcg_out_insn_3401
This merges the implementation of tcg_out_addi and tcg_out_subi.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:38 -07:00
Richard Henderson
df9351e372 tcg-aarch64: Convert shift insns to tcg_out_insn
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:35 -07:00
Richard Henderson
50573c66eb tcg-aarch64: Introduce tcg_out_insn
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction
groups to the new scheme.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:13 -07:00
Richard Henderson
f8e2484389 tcg-aarch64: Remove nop from qemu_st slow path
Commit 023261ef85 failed to remove a
nop that's no longer required.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
523fdc08cc tcg-aarch64: Simplify tcg_out_ldst_9 encoding
At first glance the code appears to be using 1's complement encoding,
a la AArch32.  Except that the constant is "off", creating a complicated
split-field 2's complement encoding.

Much clearer to just use a normal mask and shift.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
017a86f7ad tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
2e796c7621 tcg-aarch64: Remove the shift_imm parameter from tcg_out_cmp
It was unused.  Let's not overcomplicate things before we need them.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
8d8db193f2 tcg-aarch64: Hoist common argument loads in tcg_out_op
This reduces the code size of the function significantly.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:16 -08:00
Richard Henderson
a51a6b6ad5 tcg-aarch64: Don't handle mov/movi in tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:15 -08:00
Richard Henderson
f029341494 tcg-aarch64: Set ext based on TCG_OPF_64BIT
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
7763ffa017 tcg-aarch64: Change all ext variables to TCGType
We assert that the values for _I32 and _I64 are 0 and 1 respectively.
This will make a couple of functions declared by tcg.c cleaner.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
3353d0dcc3 tcg-aarch64: Remove redundant CPU_TLB_ENTRY_BITS check
Removed from other targets in 56bbc2f967.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:02 -08:00
Stefan Weil
c5d3c49896 tcg: Fix typo in comment (dependancies -> dependencies)
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-03-02 17:12:51 +04:00
Peter Maydell
774d566cdb tcg/i386: Fix build for systems without working cpuid.h (MacOSX, Win32)
Win32 doesn't have a cpuid.h, and MacOSX may have one but without
the __cpuid() function we use, which means that commit 9d2eec20
broke the build for those platforms. Fix this by tightening up
our configure cpuid.h check to test that the functions we need
are present, and adding some missing #ifdef guards in
tcg/i386/tcg-target.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-21 10:39:10 +00:00
Richard Henderson
6399ab3325 tcg/i386: Use SHLX/SHRX/SARX instructions
These three-operand shift instructions do not require the shift count
to be placed into ECX.  This reduces the number of mov insns required,
with the mere addition of a new register constraint.

Don't attempt to get rid of the matching constraint, as that's impossible
to manipulate with just a new constraint.  In addition, constant shifts
still need the matching constraint.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
9d2eec202f tcg/i386: Use ANDN instruction
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C
so we must handle constants in the implementation of andc.
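
For reference, the identity involved (sketch):

    #include <stdint.h>

    /* andc(x, y) == x & ~y.  With y constant, the backend can fold the
       inversion itself and emit a plain AND with ~y instead of ANDN. */
    static uint64_t andc64(uint64_t x, uint64_t y)
    {
        return x & ~y;
    }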

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
ecc7e84327 tcg/i386: Add tcg_out_vex_modrm
Prepare for emitting BMI insns which require VEX encoding.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
a1b29c9ae0 tcg/i386: Move TCG_CT_CONST_* to tcg-target.c
These are not needed by users of tcg-target.h.  No need to recompile
when we adjust them.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
464a1441c1 tcg/optimize: Add more identity simplifications
Recognize 0 operand to andc, and -1 operands to and, orc, eqv.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
e64e958e20 tcg/optimize: Optimize ANDC X,Y,Y to MOV X,0
Like we already do for SUB and XOR.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
e201b56418 tcg/optimize: Simplify some logical ops to NOT
Given, of course, an appropriate constant.  These could be generated
from the "canonical" operation for inversion on the guest, or via
other optimizations.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
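
For example (a standalone check of the arithmetic, not the optimizer code
itself), with the constant in the right position each of these collapses
to a NOT of the other operand:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t x = 0x0123456789abcdefULL;
        assert((x ^ UINT64_MAX) == ~x);       /* xor  x, -1 -> not x */
        assert(~(x ^ UINT64_C(0)) == ~x);     /* eqv  x, 0  -> not x */
        assert((UINT64_MAX & ~x) == ~x);      /* andc -1, x -> not x */
        assert((UINT64_C(0) | ~x) == ~x);     /* orc  0, x  -> not x */
        return 0;
    }
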
Richard Henderson
23ec69ed37 tcg/optimize: Handle known-zeros masks for ANDC
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
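
The propagation rule itself is one line of arithmetic: a bit of
andc(x, C) can be nonzero only where x may be nonzero and the constant C
has a zero. A sketch, using the optimizer's "bits that may be set"
convention for masks:

    #include <stdint.h>

    /* may-be-nonzero mask after andc(x, C) with a known constant C */
    static uint64_t andc_known_mask(uint64_t x_mask, uint64_t c)
    {
        return x_mask & ~c;
    }
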
Aurelien Jarno
c8d7027253 tcg/optimize: add known-zero bits compute for load ops
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Aurelien Jarno
f096dc9618 tcg/optimize: improve known-zero bits for 32-bit ops
The shl_i32 op might set some of the unused high 32 bits of the
mask. Fix that by clearing the unused high 32 bits for all 32-bit ops
except load/store, which operate on tl values.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Aurelien Jarno
3031244b01 tcg/optimize: fix known-zero bits optimization
The known-zero bits optimization is a great idea that helps to generate
better code. However, the current implementation only works in very few
cases, as the computed mask is not saved.

Fix this so that it actually works.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Aurelien Jarno
e46b225a31 tcg/optimize: fix known-zero bits for right shift ops
The 32-bit versions of the sar and shr ops should not propagate
known-zero bits from the unused high 32 bits. For sar this could even
lead to wrong code being generated.

Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Huw Davies
7a3a00979d tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1].
It's this that should be subtracted from 0x20 when converting to a right rotate.

Cc: qemu-stable@nongnu.org
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:08 -06:00
Richard Henderson
f6aa2f7dee TCG: Fix 32-bit host allocation typo
The second half register of a 64-bit temp on a 32-bit host
was allocated with the wrong base_type.

The base_type of the second half register is never checked,
but for consistency it should be the same as the first half.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-15 15:20:17 -08:00
Peter Maydell
c1de788ab9 tcg: Add TCGV_UNUSED_PTR, TCGV_IS_UNUSED_PTR, TCGV_EQUAL_PTR
We have macros for marking TCGv values as unused, checking if they
are unused and comparing them to each other. However these only exist
for TCGv_i32 and TCGv_i64; add them for TCGv_ptr as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-08 14:46:55 +00:00
Richard Henderson
c6830cdb2c tcg/s390: Remove sigill_handler
Commit c9baa30f42 failed to delete all of the relevant code,
leading to -Werror failures about unused symbols.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-02-01 13:45:20 +04:00
Peter Maydell
dc08f85188 Merge remote-tracking branch 'rth/tcg-movbe' into staging
* rth/tcg-movbe:
  tcg/i386: cleanup useless #ifdef
  tcg/i386: use movbe instruction in qemu_ldst routines
  tcg/i386: add support for three-byte opcodes
  tcg/i386: remove hardcoded P_REXW value
  disas/i386.c: disassemble movbe instruction

Message-id: 1390692772-15282-1-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-01-30 19:02:16 +00:00
Alexander Graf
18d13fa293 TCG: Fix I64-on-32bit-host temporaries
We have cache pools of temporaries that we can reuse once they have
already been allocated.

These cache pools differentiate between the target TCG variable types
they contain, so we have one pool for I32 and one pool for I64 variables.

On a 32-bit system we can't work with 64-bit registers, so instead we
spawn two I32 temporaries for every I64 temporary we create. All caching
works the same way as on a real 64-bit system: we create a cache entry
in the 64-bit array for the first i32 index.

However, when we free such a temporary we free it to the pool of its type
(which is always i32 on 32-bit systems) rather than its base_type (which
is i64 or i32 depending on the variable). This means we put a temporary
whose base_type == i64 into the i32 preallocated temporary pool.

Eventually, this results in failures like this on 32-bit hosts:

  qemu-system-ppc64: tcg/tcg.c:515: tcg_temp_new_internal: Assertion `ts->base_type == type' failed.

This patch makes the free routine use the base_type instead for the free case,
so it's consistent with the temporary allocation. It fixes the above failure
for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1390146811-59936-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-01-30 13:25:28 +00:00
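
A miniature model of the fix (assumed names, not tcg.c itself): the pool
a temporary is returned to must be keyed on base_type, the same key used
when it was handed out.

    #include <assert.h>

    enum base_type { TYPE_I32, TYPE_I64, TYPE_COUNT };

    struct temp { enum base_type type; enum base_type base_type; };

    static int freed[TYPE_COUNT];

    /* The fix: free by base_type, not by type, so an i64 temp that is
       backed by i32 registers on a 32-bit host goes back to the i64 pool. */
    static void temp_free(const struct temp *ts)
    {
        freed[ts->base_type]++;
    }

    int main(void)
    {
        struct temp t = { .type = TYPE_I32, .base_type = TYPE_I64 };
        temp_free(&t);
        assert(freed[TYPE_I64] == 1 && freed[TYPE_I32] == 0);
        return 0;
    }
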
Aurelien Jarno
2d23d5edb5 tcg/i386: cleanup useless #ifdef
TCG_TARGET_HAS_movcond_i32 is always defined to 1 in tcg-target.h, so
remove the corresponding #ifdef #endif sequence, left from a previous
refactoring.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 15:21:33 -08:00
Aurelien Jarno
085bb5bb64 tcg/i386: use movbe instruction in qemu_ldst routines
The movbe instruction has been added on some Intel Atom CPUs and on
recent Intel Haswell CPUs. It allows a value to be loaded/stored and
byte-swapped at the same time.

This patch detects the availability of this instruction and, when
available, uses it in the qemu load/store routines in place of
load/store + bswap. Note that for 16-bit unsigned loads, movbe + movzw
is basically the same as movzw + bswap, so the patch doesn't touch this
case.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Reduced the number of conditionals using "movop".]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 15:19:19 -08:00
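
The detection half can be sketched as a standalone probe (assuming GCC's
<cpuid.h>; MOVBE support is reported in CPUID leaf 1, ECX bit 22):

    #include <cpuid.h>
    #include <stdbool.h>

    /* Does this CPU provide MOVBE (load/store with built-in bswap)? */
    static bool cpu_has_movbe(void)
    {
        unsigned a, b, c, d;
        if (__get_cpuid(1, &a, &b, &c, &d)) {
            return (c & (1u << 22)) != 0;   /* CPUID.01H:ECX.MOVBE */
        }
        return false;
    }
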
Aurelien Jarno
2a1137753f tcg/i386: add support for three-byte opcodes
Add support for three-byte opcodes, starting with the 0x0f 0x38 prefix.
Use P_EXT38 as the new constant, and shift all other constants so that
P_EXT and P_EXT38 have neighbouring values.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Changed the name from P_EXT2 to P_EXT38.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 14:12:45 -08:00
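
A self-contained sketch of the emission pattern (the flag values and the
helper are illustrative, not the tcg_out_opc source): P_EXT gets the 0x0f
escape, P_EXT38 the full 0x0f 0x38 escape, before the opcode byte.

    #include <stddef.h>
    #include <stdint.h>

    enum { P_EXT = 0x100, P_EXT38 = 0x200 };   /* illustrative values */

    /* Append the escape byte(s) and the opcode byte; return the length. */
    static size_t emit_opc(uint8_t *buf, unsigned opc)
    {
        size_t n = 0;
        if (opc & (P_EXT | P_EXT38)) {
            buf[n++] = 0x0f;                   /* two-byte escape */
            if (opc & P_EXT38) {
                buf[n++] = 0x38;               /* 0x0f 0x38 opcode space */
            }
        }
        buf[n++] = opc & 0xff;
        return n;
    }
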
Aurelien Jarno
c9d78213b8 tcg/i386: remove hardcoded P_REXW value
P_REXW is defined as a constant at the beginning of i386/tcg-target.c,
but the corresponding bit is later used in a hardcoded way, which defeats
the purpose of a constant.

Fix that by using a conditional expression operator instead of a shift.
On x86 this actually makes the code slightly smaller, as GCC in practice
generates (opc >> 8) & 8 instead of (opc & 0x800) >> 8, so the constants
are smaller to load.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 14:12:38 -08:00
Aurelien Jarno
8589467f94 tcg/i386: fix a comment
The comments apply to 8-bit stores, not 8-byte stores.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2013-12-21 16:41:56 +01:00
Richard Henderson
0ec9eabc7f tcg: Use bitmaps for free temporaries
We previously allocated 32 bits per temp for the next_free_temp entry.
We now allocate 4 bits per temp across the 4 bitmaps.

Using a linked list meant that if a translator is tweaked, resulting in
temps being freed in a different order, that would have follow-on effects
throughout the TB.  Always allocating the lowest free temp means that
follow-on effects are minimized, which can make it easier to diff output
when debugging the translators.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-12-10 09:23:45 -08:00
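
A minimal sketch of the allocation policy (a plain 64-bit word stands in
for QEMU's bitmap type): always hand out the lowest free index.

    #include <stdint.h>

    /* Return the lowest free index and mark it in use, or -1 if the pool
       is empty.  Always choosing the lowest index keeps generated code
       stable when a translator change reorders temp frees. */
    static int alloc_lowest_free(uint64_t *free_map)
    {
        if (*free_map == 0) {
            return -1;
        }
        int idx = __builtin_ctzll(*free_map);   /* lowest set bit */
        *free_map &= *free_map - 1;             /* clear that bit */
        return idx;
    }
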
Richard Henderson
c9baa30f42 tcg-s390: Use qemu_getauxval in query_facilities
No need to set up a SIGILL signal handler for detection anymore.

Remove a ton of sanity checks that must be true, given that we're
requiring a 64-bit build (the note about 31-bit KVM is satisfied
by configuring with TCI).

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:30 +13:00
Richard Henderson
41d9ea80ac tcg-arm: Use qemu_getauxval
Allow host detection on linux systems without glibc 2.16 or later.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:14 +13:00
Richard Henderson
cd629de1cf tcg-ppc64: Use qemu_getauxval
Allow host detection on linux systems without glibc 2.16 or later.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:13 +13:00
Richard Henderson
463230d85e tcg-ia64: Introduce tcg_opc_bswap64_i
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:59 +10:00
Richard Henderson
db008a8de2 tcg-ia64: Introduce tcg_opc_ext_i
Being able to "extend" from 64 bits (with a mov) simplifies
a few places where the conditional breaks the train of thought.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:54 +10:00
Richard Henderson
fa0cdb6c2a tcg-ia64: Introduce tcg_opc_movi_a
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:50 +10:00
Richard Henderson
3b9ccdcc74 tcg-ia64: Introduce tcg_opc_mov_a
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:46 +10:00
Richard Henderson
25c9c73bdc tcg-ia64: Use A3 form of logical operations
We can and/or/xor/andcm small constants, saving one cycle.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:40 +10:00
Richard Henderson
f940fb086c tcg-ia64: Use SUB_A3 and ADDS_A4 for subtraction
We can subtract from more small constants than just 0 with one insn,
and we can add the negative for most small constants.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:33 +10:00
Richard Henderson
8642088a3d tcg-ia64: Use ADDS for small addition
Avoids a wasted cycle loading up small constants.

Simplify the code assuming the tcg optimizer is going to work
and don't expect the first operand of the add to be constant.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:23 +10:00
Richard Henderson
3c289cba9b tcg-ia64: Avoid unnecessary stop bit in tcg_out_alu
When performing an operation with two input registers, we'd leave
the stop bit (and thus an extra cycle) that's only needed when one
or the other input is a constant.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:16 +10:00
Richard Henderson
d15de15ca0 tcg-ia64: Move AREG0 to R32
Since the move away from the global areg0, we're no longer globally
reserving areg0, which means our use of R7 clobbers a call-saved
register.  Shift areg0 into the windowed registers; indeed, choose
the incoming parameter register in which it arrives.

This requires moving the register holding the return address elsewhere.
Choose R33 for tidiness.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:08 +10:00
Richard Henderson
6d264b38fc tcg-ia64: Simplify brcond
There was a misconception that a stop bit is required between a compare
and the branch that uses the predicate set by the compare.  This led to
the use of an extra bundle in which to perform the compare.  The extra
bundle left room for constants to be loaded for use with the compare insn.

If we pack the compare and the branch together in the same bundle, then
there's no longer any room for non-zero constants.  At which point we
can eliminate half the function by not handling them.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:42 +10:00
Richard Henderson
6f65c780b9 tcg-ia64: Handle constant calls
Using only indirect calls results in 3 bundles (one to load the
descriptor address), and 4 stop bits.  By looking through the
descriptor to the constants, we can perform the call with 2
bundles and only 1 stop bit.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:30 +10:00
Richard Henderson
5f7b16877a tcg-ia64: Use shortcuts for nop insns
There's no need to go through the full opcode-to-insn function call
to generate nops.  This makes the source a bit more readable.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:25 +10:00
Richard Henderson
e3afa1c4ad tcg-ia64: Use TCGMemOp within qemu_ldst routines
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:12 +10:00
Richard Henderson
1768ec0623 tcg-ppc64: Support new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
5dd391604f tcg-ppc: Support new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
e349a8d4ff tcg-ppc64: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
92d0acda27 tcg-ppc: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00