Commit Graph

1378 Commits

Author SHA1 Message Date
Richard Henderson
bbb8a1b455 tcg: Remove sizemask and flags arguments to tcg_gen_callN
Take them from the TCGHelperInfo struct instead.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
afb49896fa tcg: Save flags and computed sizemask in TCGHelperInfo
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
72866e823e tcg: Register the helper info struct rather than the name
This will let us find all the info from the hash table.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
836d6ed96e tcg: Inline tcg_gen_helperN
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
c017230d9b tcg: Use helper-gen.h in tcg-op.h
No need to open-code the setup of the builtin helpers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
944eea962b tcg: Push tcg-runtime routines into exec/helper-*
Rather than special casing them, use the standard mechanisms
for tcg helper generation.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
2ef6175aa7 tcg: Invert the inclusion of helper.h
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h.  This
minimizes the files that must be rebuilt when changing the macros
for file N.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Richard Henderson
a763551ad5 tcg: Optimize brcond2 and setcond2 ne/eq
If either the high or low pair can be resolved, we can
simplify to either a constant or to a 32-bit comparison.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:53 -07:00
Peter Maydell
27aa948502 Merge remote-tracking branch 'remotes/rth/tcg-mips' into staging
* remotes/rth/tcg-mips: (24 commits)
  tcg-mips: Enable direct chaining of TBs
  tcg-mips: Simplify movcond
  tcg-mips: Simplify brcond2
  tcg-mips: Improve setcond eq/ne vs zeros
  tcg-mips: Simplify setcond2
  tcg-mips: Simplify brcond
  tcg-mips: Simplify setcond
  tcg-mips: Commonize opcode implementations
  tcg-mips: Improve add2/sub2
  tcg-mips: Hoist args loads
  tcg-mips: Fix subtract immediate range
  tcg-mips: Name the opcode enumeration
  tcg-mips: Use EXT for AND on mips32r2
  tcg-mips: Use T9 for TCG_TMP1
  tcg-mips: Introduce TCG_TMP0, TCG_TMP1
  tcg-mips: Rearrange register allocation
  tcg-mips: Convert to new_ldst
  tcg-mips: Convert to new qemu_l/st helpers
  tcg-mips: Move softmmu slow path out of line
  tcg-mips: Split large ldst offsets
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-27 18:31:02 +01:00
Richard Henderson
b6bfeea92a tcg-mips: Enable direct chaining of TBs
Now that the code_gen_buffer is constrained to not cross 256mb
regions, we are assured that we can use J to reach another TB.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:48:37 -07:00
Richard Henderson
33fac20bb2 tcg-mips: Simplify movcond
Use the same table to fold comparisons as with setcond.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:47:14 -07:00
Richard Henderson
3401fd259e tcg-mips: Simplify brcond2
Emitting a single branch instead of (up to) 3, using setcond2
to generate the composite compare.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:47:08 -07:00
Richard Henderson
1db1c4d7d9 tcg-mips: Improve setcond eq/ne vs zeros
The original code results in one too many insns per zero
present in the input.  And since comparing 64-bit numbers
vs zero is common...

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:58 -07:00
Richard Henderson
9a2f0bfe32 tcg-mips: Simplify setcond2
Using tcg_unsigned_cond and tcg_high_cond.
Also, move the function up in the file for future cleanups.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:53 -07:00
Richard Henderson
c068896f7f tcg-mips: Simplify brcond
Use the same table to fold comparisons as with setcond.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:49 -07:00
Richard Henderson
fd1cf66630 tcg-mips: Simplify setcond
Use a table to fold comparisons to less-than.
Also, move the function up in the file for further simplifications.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:39 -07:00
Richard Henderson
4f048535cd tcg-mips: Commonize opcode implementations
Most opcodes fall in to one of a couple of patterns.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:32 -07:00
Richard Henderson
741f117d9a tcg-mips: Improve add2/sub2
Reduce insn count from 5 to either 3 or 4.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:25 -07:00
Richard Henderson
22ee3a987d tcg-mips: Hoist args loads
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:20 -07:00
Richard Henderson
070603f62b tcg-mips: Fix subtract immediate range
Since we must use ADDIU, we would generate incorrect code for -32768.
Leaving off subtract of +32768 makes things easier for a follow-on patch.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:08 -07:00
Richard Henderson
ac0f3b1263 tcg-mips: Name the opcode enumeration
And use it in the opcode emission functions.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:46:03 -07:00
Richard Henderson
1c4182687e tcg-mips: Use EXT for AND on mips32r2
At the same time, tidy deposit by introducing tcg_out_opc_bf.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:56 -07:00
Richard Henderson
f216a35f36 tcg-mips: Use T9 for TCG_TMP1
T0 is an argument register for the n32 and n64 abis.  T9 is the call
address register for the abis, and is more directly under the control
of the backend.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:48 -07:00
Richard Henderson
6c530e32f4 tcg-mips: Introduce TCG_TMP0, TCG_TMP1
Use these instead of hard-coding the registers to use for temporaries.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:44 -07:00
Richard Henderson
418839044e tcg-mips: Rearrange register allocation
Use FP (also known as S8) as a normal call-saved register.

Include T0 in the allocation order and call-clobbered list
even though it's currently used as a TCG temporary.

Put the argument registers at the end of the allocation order.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:28 -07:00
Richard Henderson
fbef2cc80f tcg-mips: Convert to new_ldst
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:24 -07:00
Richard Henderson
ce0236cfbd tcg-mips: Convert to new qemu_l/st helpers
In addition, fill delay slots calling the helpers and tail
call to the store helpers.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:20 -07:00
Richard Henderson
9d8bf2d125 tcg-mips: Move softmmu slow path out of line
At the same time, tidy up the call helpers, avoiding a memory reference.
Split out several subroutines.  Use TCGMemOp constants.  Make endianness
selectable at runtime.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:16 -07:00
Richard Henderson
f9a716325f tcg-mips: Split large ldst offsets
Use this to reduce goto_tb by one insn.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:13 -07:00
Richard Henderson
7dae901d2d tcg-mips: Fill the exit_tb delay slot
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:09 -07:00
Richard Henderson
f8c9eddb2b tcg-mips: Use J and JAL opcodes
For userland builds, calls will normally be in range, and for the
exit_tb opcode so will the branch to the epilogue.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:05 -07:00
Richard Henderson
a3abb29292 tci: Fix tcg_out_call
Broken since dddbb2e1e3.
Do all the rest of the things that tcg_out_op did before
and after the big switch statement.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-22 13:25:34 -07:00
Richard Henderson
a10c64e0df tcg-s390: Implement direct chaining of TBs
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 09:22:32 -07:00
Richard Henderson
7b7066b1db tcg-s390: Improve setcond
There are a variety of common cases for which we can use carry tricks to
avoid a conditional branch.  On very new hardware, use LOAD ON CONDITION
instead of a conditional branch.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 01:33:35 -04:00
Richard Henderson
ad19b35808 tcg-s390: Allow immediate operands to add2 and sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 01:33:29 -04:00
Richard Henderson
f167dc37da tcg-s390: Implement tcg_register_jit
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:12:25 -04:00
Richard Henderson
547ec12141 tcg-s390: Use more risbg in the tlb sequence
Elides two insns from the sequence.  The resulting tlb compare
sequence is satisfyingly minimal:

	risbg  %r2,%r8,51,186,56
	risbg  %r3,%r8,61,178,0
	cg     %r3,904(%r10,%r2)
	lg     %r2,920(%r10,%r2)
	jlh    tlb_miss

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:10:42 -04:00
Richard Henderson
fb5964152d tcg-s390: Move ldst helpers out of line
That is, the old LDST_OPTIMIZATION.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:10:00 -04:00
Richard Henderson
f24efee41e tcg-s390: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
b8dd88b85c tcg-s390: Integrate endianness into TCGMemOp
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
a5a04f2830 tcg-s390: Convert to TCGMemOp
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:59 -04:00
Richard Henderson
a175689654 tcg-s390: Fix off-by-one in wraparound andi
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-15 00:09:47 -04:00
Richard Henderson
450445d543 tcg: Fix tcg_reg_alloc_mov vs no-op truncation
Commit af3cbfbe80 hoisted some "common"
loads of the temporary type, forgetting that the types could differ
during truncating moves.  This affects the correctness of the memory
offset on big-endian hosts.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-14 09:56:13 -07:00
Richard Henderson
96d0ee7f09 tcg: Remove unreachable code in tcg_out_op and op_defs
The INDEX_op_call case has just been obsoleted; the mov and movi
cases have not been reachable for years.  Attempt to document this
both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT.

Because of the TCG_OPF_NOT_PRESENT change, this must be done for
all targets in a single commit.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:13 -07:00
Richard Henderson
af3cbfbe80 tcg: Use tcg_target_available_regs in tcg_reg_alloc_mov
The move opcodes are special in that their constraints must cover
all available registers.  So instead of checking the constraints,
just use the available registers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
cf06667428 tcg: Make call address a constant parameter
Avoid allocating a tcg temporary to hold the constant address,
and instead place it directly into the op_call arguments.

At the same time, convert to the newly introduced tcg_out_call
backend function, rather than invoking tcg_out_op for the call.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
dddbb2e1e3 tci: Create tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
eb68a4fa4e tcg-mips: Split out tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
4e9cf8409a tcg-sparc: Create tcg_out_call
Rename the existing tcg_out_calli to tcg_out_call_nodelay.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
fdd8ec7184 tcg-ppc64: Rename tcg_out_calli to tcg_out_call
Merge the existing tcg_out_call into tcg_out_op.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
00d7a1acab tcg-ppc: Split out tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
a8111212b3 tcg-s390: Rename tgen_calli to tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
6bf3e99747 tcg-i386: Rename tcg_out_calli to tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:11 -07:00
Richard Henderson
5053361b3e tcg: Require TCG_TARGET_INSN_UNIT_SIZE
Now that all backends do define TCG_TARGET_INSN_UNIT_SIZE,
remove the fallback definition.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:07:06 -07:00
Richard Henderson
a7f96f7666 tci: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:07:02 -07:00
Richard Henderson
ae0218e350 tcg-mips: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
5588ff2921 tcg-ia64: Define TCG_TARGET_INSN_UNIT_SIZE
Using a 16-byte aligned structure achieves best results, both for code
cleanliness and compiled code size.  However, this means that we can't
use the trick of encoding the slot number into the low 2 bits.

Thankfully, we only ever use slot2, so make that explicit in the names
of the relocation functions, and drop the code for other slots.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
8c081b1802 tcg-s390: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:58 -07:00
Richard Henderson
8587c30c3e tcg-aarch64: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:52 -07:00
Richard Henderson
267c931985 tcg-arm: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:29 -07:00
Richard Henderson
abce5964be tcg-sparc: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
38cf39f739 tcg-ppc: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
e083c4a233 tcg-ppc64: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
f6bff89d06 tcg-i386: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
1813e1758d tcg: Define tcg_insn_unit for code pointers
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
52a1f64ec5 tcg: Introduce byte pointer arithmetic helpers
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Peter Maydell
5c53bb8121 tcg: Avoid undefined behaviour patching code at unaligned addresses
To avoid C undefined behaviour when patching generated code,
provide wrappers tcg_patch8/16/32/64 which use the usual memcpy
trick, and use them in the i386 backend.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
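
A minimal sketch of the memcpy-based patch helper described above; the name
tcg_patch32 follows the commit subject, but the exact QEMU signature may differ:

    #include <stdint.h>
    #include <string.h>

    /* Write a 32-bit value into generated code without dereferencing a
     * possibly misaligned uint32_t pointer, which would be undefined
     * behaviour in C.  memcpy compiles down to a plain store on x86. */
    static inline void tcg_patch32(void *code_ptr, uint32_t val)
    {
        memcpy(code_ptr, &val, sizeof(val));
    }
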
Peter Maydell
4387345a96 tcg: Avoid stores to unaligned addresses
Avoid stores to unaligned addresses in TCG code generation, by using the
usual memcpy() approach. (Using bswap.h would drag a lot of QEMU baggage
into TCG, so it's simpler just to do direct memcpy() here.)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
ebd0c614d7 tcg-sparc: Accept stores of zero
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
035b239826 tcg-sparc: Fix small 32-bit movi
We tested imm13 before discarding garbage high bits.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
35e2da1556 tcg-sparc: Fixup function argument types
Use TCGReg everywhere appropriate.  Use int32_t for all arguments
that may be registers or immediate constants.  Merge tcg_out_addi
into its only caller.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
b357f902bf tcg-sparc: Hoist common argument loads in tcg_out_op
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
98b90bab3f tcg-sparc: Don't handle mov/movi in tcg_out_op
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
425532d71d tcg-sparc: Tidy check_fit_* tests
Use sextract instead of raw bit shifting for the tests.  Introduce
a new check_fit_ptr macro to make it clear we're looking at pointers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
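
A sketch of the test described above, assuming a sextract32 helper with the
semantics of QEMU's bitops.h (sign-extract a bit field); the real backend
code may differ in detail:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sign-extract 'len' bits of 'value' starting at bit 'start'. */
    static inline int32_t sextract32(uint32_t value, int start, int len)
    {
        return (int32_t)(value << (32 - len - start)) >> (32 - len);
    }

    /* An immediate fits a signed 'bits'-wide field iff sign-extending its
     * low bits reproduces the original value. */
    static inline bool check_fit_i32(int32_t val, unsigned int bits)
    {
        return val == sextract32(val, 0, bits);
    }
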
Richard Henderson
f4c166619e tcg-sparc: Implement muls2_i32
Using the 32-bit SMUL is a tad more efficient than
resorting to extending and using the 64-bit MULX.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
8b66eefe0d tcg-sparc: Use the RETURN instruction
Saves one insn per TB exit over JMPL+RESTORE.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
34b1a49cb1 tcg-sparc: Use 64-bit registers with sparcv8plus
Quite a lot of effort was spent composing and decomposing 64-bit
quantities in registers, when we should just create them and leave
them as one 64-bit register.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
a24fba935a tcg-sparc: Support trunc_shr_i32
Unlike a 64-bit shift op, this allows the output to be in %l or %i
registers for sparcv8plus.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
9f44adc573 tcg-sparc: Remove most uses of TCG_TARGET_REG_BITS
Replace with SPARC64 define.  Soon even sparcv8plus will use
64-bit registers as far as TCG is concerned.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:35 -07:00
Richard Henderson
4bb7a41ed6 tcg: Add INDEX_op_trunc_shr_i32
Let the backend do something special for truncation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:34 -07:00
Richard Henderson
71b926992e tcg: Fix missed pointer size != TCG_TARGET_REG_BITS changes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:34 -07:00
Peter Maydell
ad600a4d49 Pull tcg 2014-04-22

Merge remote-tracking branch 'remotes/rth/tags/tcg-next-20140422' into staging

Pull tcg 2014-04-22

# gpg: Signature made Tue 22 Apr 2014 22:00:04 BST using RSA key ID 4DD0279B
# gpg: Can't check signature: public key not found

* remotes/rth/tags/tcg-next-20140422:
  tcg: Use HOST_WORDS_BIGENDIAN
  tcg: Fix fallback from muls2_i64 to mulu2_i64
  tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32
  tcg: Relax requirement for mulu2_i32 on 32-bit hosts
  tcg-s390: Remove W constraint
  tcg-sparc: Use the type parameter to tcg_target_const_match
  tcg-ppc64: Use the type parameter to tcg_target_const_match
  tcg-aarch64: Remove w constraint
  tcg: Add TCGType parameter to tcg_target_const_match
  tcg: Fix out of range shift in deposit optimizations
  tci: Mask shift counts to avoid undefined behavior
  tcg: Mask shift quantities while folding
  tcg: Use "unspecified behavior" for shifts
  tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-24 15:24:52 +01:00
Richard Henderson
02eb19d0ec tcg: Use HOST_WORDS_BIGENDIAN
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
662deb908f tcg: Fix fallback from muls2_i64 to mulu2_i64
Brown Bag sez, don't put the fallback code into the wrong function.
Also, check for muluh_i64 and use tcg_gen_mulu2_i64 instead of raw ops.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
f46fc4e6a9 tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32
Rather than hard-coding use of mulu2_i32, allow muluh_i32.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
df9ebea53e tcg: Relax requirement for mulu2_i32 on 32-bit hosts
Instead require either mulu2_i32 or muluh_i32.  The code in tcg-op.h
already supports looking for both.  Previous incomplete conversion?

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
671c835b7d tcg-s390: Remove W constraint
Now redundant with the type parameter to tcg_target_const_match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
4b304cfae1 tcg-sparc: Use the type parameter to tcg_target_const_match
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
1194dcba22 tcg-ppc64: Use the type parameter to tcg_target_const_match
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
170bf9315b tcg-aarch64: Remove w constraint
Now redundant with the type parameter to tcg_target_const_match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
f6c6afc1d4 tcg: Add TCGType parameter to tcg_target_const_match
Most 64-bit targets need to be able to ignore the high bits
of a TCG_TYPE_I32 value.

Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
d998e555d2 tcg: Fix out of range shift in deposit optimizations
By inspection, for a deposit(x, y, 0, 64), we'd have a shift of (1<<64)
and everything else falls apart.  But we can reuse the existing deposit
logic to get this right.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
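
A standalone sketch in the spirit of the generic deposit helper the commit
reuses: for len == 64 the problematic shift by 64 never occurs.

    #include <stdint.h>

    /* Deposit 'len' bits of 'src' into 'dst' at bit position 'pos',
     * 1 <= len <= 64, without ever shifting a 64-bit value by 64. */
    static inline uint64_t deposit64(uint64_t dst, unsigned pos, unsigned len,
                                     uint64_t src)
    {
        uint64_t mask = (~0ull >> (64 - len)) << pos;
        return (dst & ~mask) | ((src << pos) & mask);
    }
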
Richard Henderson
50c5c4d125 tcg: Mask shift quantities while folding
The TCG result would be undefined, but we can at least produce one
plausible result and avoid triggering the wrath of analysis tools.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
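
Illustrative only: constant-folding a shift while masking the count, so any
input yields one plausible, defined result on the host.

    #include <stdint.h>

    /* Fold shl_i64 with the count masked to 0..63; a guest count of 64 or
     * more is "unspecified" but no longer undefined behaviour here. */
    static inline uint64_t fold_shl_i64(uint64_t x, uint64_t count)
    {
        return x << (count & 63);
    }
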
Richard Henderson
20022fa15f tcg: Use "unspecified behavior" for shifts
Change the definition such that shifts are not allowed to crash
for any input.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Stefan Weil
ad5171dbd4 tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool
Static code analyzers complain about signed bitfields with only a single
bit. is_ld is used as a boolean value, so make it bool.

ppc64 already used bool for the 2nd argument is_ld of the local function
add_qemu_ldst_label. Modify all other TCG targets to follow this
example.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
0374f5089a tcg-ia64: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:20 -04:00
Richard Henderson
3bf16cb31a tcg-ia64: Move part of softmmu slow path out of line
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:20 -04:00
Richard Henderson
4bdd547aaa tcg-ia64: Convert to new ldst helpers
Still inline, but updated to the new routines.  Always use the LE
helpers, reusing the bswap between the fast and slow paths.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:19 -04:00
Richard Henderson
af9fe31070 tcg-ia64: Reduce code duplication in tcg_out_qemu_ld
The only differences were in the bswap insns emitted.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:19 -04:00
Richard Henderson
1f91f39219 tcg-ia64: Move tlb addend load into tlb read
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:18 -04:00
Richard Henderson
b672cf66c3 tcg-ia64: Move bswap for store into tlb load
Saving at least two cycles per store, and cleaning up the code.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:17 -04:00
Richard Henderson
4c186ee2cf tcg-ia64: Re-bundle the tlb load
This sequencing requires 5 stop bits instead of 6, and has room left
over to pre-load the tlb addend, and bswap data prior to being stored.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:17 -04:00
Richard Henderson
dcf91778ca tcg-ia64: Optimize small arguments to exit_tb
Saves one bundle for the common case of exit_tb 0.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-17 16:56:16 -04:00
Richard Henderson
b825025f08 tcg-aarch64: Use tcg_out_mov in preference to tcg_out_movr
It's the more canonical interface.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:02 -04:00
Richard Henderson
a056c9faa4 tcg-aarch64: Prefer unsigned offsets before signed offsets for ldst
The assembler seems to prefer them, perhaps we should too.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
3d4299f425 tcg-aarch64: Introduce tcg_out_insn_3312, _3310, _3313
Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't encoded
for the proper shift for the field and was confusing.

Merge aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits
into a single I3312_* argument, eliminating some magic numbers from the
helper functions.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
dc73dfd4bc tcg-aarch64: Merge aarch64_ldst_get_data/type into tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
edd8824cd4 tcg-aarch64: Introduce tcg_out_insn_3507
Cleaning up the implementation of REV and REV16 at the same time.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
e81864a109 tcg-aarch64: Support stores of zero
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
de61d14fa7 tcg-aarch64: Implement TCG_TARGET_HAS_new_ldst
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
667b1cdd4e tcg-aarch64: Pass qemu_ld/st arguments directly
Instead of passing them the "args" array.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
9e4177ad6d tcg-aarch64: Use TCGMemOp in qemu_ld/st
Making the bswap conditional on the memop instead of a compile-time test.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
dc0c8aaf2c tcg-aarch64: Use ADR to pass the return address to the ld/st helpers
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
ae7ab46aa8 tcg-aarch64: Use tcg_out_call for qemu_ld/st
In some cases, a direct branch will be in range.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
6f4724672c tcg-aarch64: Avoid add with zero in tlb load
Some guest env are small enough to reach the tlb with only a 12-bit addition.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
38d195aa05 tcg-aarch64: Implement tcg_register_jit
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
95f72aa90a tcg-aarch64: Introduce tcg_out_insn_3314
Combines 4 other inline functions and tidies the prologue.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
d82b78e48b tcg-aarch64: Reuse LR in translated code
It's obviously call-clobbered, but is otherwise unused.
Repurpose it as the TCG temporary.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
3d9e69a238 tcg-aarch64: Use CBZ and CBNZ
A compare and branch against zero happens at the start of
every single TB.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
cae1f6f3e6 tcg-aarch64: Create tcg_out_brcond
Rearrange code to put the compare and branch in the same place.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
81d8a5ee19 tcg-aarch64: Use symbolic names for branches
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
c6e310d938 tcg-aarch64: Use adrp in tcg_out_movi
Loading a qemu pointer as an immediate happens often.  E.g.

- exit_tb $0x7fa8140013
+ exit_tb $0x7f81ee0013
...
- :  d2800260        mov     x0, #0x13
- :  f2b50280        movk    x0, #0xa814, lsl #16
- :  f2c00fe0        movk    x0, #0x7f, lsl #32
+ :  90ff1000        adrp    x0, 0x7f81ee0000
+ :  91004c00        add     x0, x0, #0x13

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
d8918df577 tcg-aarch64: Special case small constants in tcg_out_movi
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
4ec4f0bd56 tcg-aarch64: Use ORRI in tcg_out_movi
The subset of logical immediates that we support is quite quick to test,
and such constants are quite common to want to load.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
dfeb5fe770 tcg-aarch64: Use MOVN in tcg_out_movi
When profitable, initialize the register with MOVN instead of MOVZ,
before setting the remaining lanes with MOVK.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
929f8b5550 tcg-aarch64: Use TCGType and TCGMemOp constants
Rather than raw constants that could mean anything.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
8bf56493f1 tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
1a8e80d7e8 tcg-arm: Avoid ldrd/strd for user-only emulation
The arm ldrd/strd insns must cause alignment traps, whereas
at least for armv7 ldr/str must handle unaligned operations.

While this is hardly the only problem facing user-only emu,
this solves one problem for i386 on armv7 emulation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-27 16:33:01 -04:00
Richard Henderson
cab0a7ea00 tcg-sparc: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
7ea5d7256d tcg-sparc: Convert to new ldst helpers
All of the helpers with the explicit big/little endian option
require the return address as a parameter.  Acquire this via
a trampoline.

Move the load of areg0 into the trampoline.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
a8b12c108c tcg-sparc: Tidy tcg_out_tlb_load interface
Pass address registers explicitly, rather than as indicies of args[].
It's two argument registers either way.  Use more TCGReg as appropriate.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
eef0d9e740 tcg-sparc: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
a9c7d27bd1 tcg-sparc: Improve tcg_out_movi
If bits 31:13 are zero, reduce the insn count by one.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
1d0a60681a tcg-sparc: Don't handle constant arguments to ext32 ops
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
5f9eb02555 tcg-sparc: Don't handle remainder
The generic fallback is exactly what we implemented.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
c8fc56cedd tcg-sparc: Use intptr_t as appropriate
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:26 -07:00
Richard Henderson
aad2f06a7f tcg-sparc: Tidy call+jump patterns
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
d801a8f2ce tcg-sparc: Fix tlb read
We were computing the full address into %o0 and then not using it.
Adjust some of the computation to rely less on having to pull immediate
values into registers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
e7bc9004e7 tcg-sparc: Fix ld64 for 32-bit mode
Since we're not using an annulled branch, we need to put a nop
in the delay slot.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17 11:13:25 -07:00
Richard Henderson
582ab779c5 tcg-aarch64: Introduce tcg_out_insn_3405
Cleaning up the implementation of tcg_out_movi at the same time.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:15 -07:00
Richard Henderson
8678b71ce6 tcg-aarch64: Support div, rem
Clean up multiply at the same time.

For remainder, generic code will produce mul+sub,
whereas we can implement with msub.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:10 -07:00
Richard Henderson
1fcc9ddfb3 tcg-aarch64: Support muluh, mulsh
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:07 -07:00
Richard Henderson
c6e929e784 tcg-aarch64: Support add2, sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:04 -07:00
Richard Henderson
b3c56df769 tcg-aarch64: Support deposit
Also tidy the implementation of ubfm, sbfm, extr in order to share code.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:01 -07:00
Richard Henderson
ed7a0aa8bc tcg-aarch64: Use tcg_out_insn for setcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:58 -07:00
Richard Henderson
04ce397b33 tcg-aarch64: Support movcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:55 -07:00
Richard Henderson
14b155ddc4 tcg-aarch64: Support andc, orc, eqv, not, neg
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:52 -07:00
Richard Henderson
e029f29385 tcg-aarch64: Handle constant operands to and, or, xor
Handle a simplified set of logical immediates for the moment.

The way gcc and binutils do it, with 52k worth of tables, and
a binary search depth of log2(5334) = 13, seems slow for the
most common cases.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:47 -07:00
Richard Henderson
90f1cd9138 tcg-aarch64: Handle constant operands to add, sub, and compare
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:44 -07:00
Richard Henderson
7d11fc7c2b tcg-aarch64: Implement mov with tcg_out_insn
Avoid the magic numbers in the current implementation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:41 -07:00
Richard Henderson
096c46c0ff tcg-aarch64: Introduce tcg_out_insn_3401
This merges the implementation of tcg_out_addi and tcg_out_subi.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:38 -07:00
Richard Henderson
df9351e372 tcg-aarch64: Convert shift insns to tcg_out_insn
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:35 -07:00
Richard Henderson
50573c66eb tcg-aarch64: Introduce tcg_out_insn
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction
groups to the new scheme.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:13 -07:00
Richard Henderson
f8e2484389 tcg-aarch64: Remove nop from qemu_st slow path
Commit 023261ef85 failed to remove a
nop that's no longer required.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
523fdc08cc tcg-aarch64: Simplify tcg_out_ldst_9 encoding
At first glance the code appears to be using 1's complement encoding,
a-la AArch32.  Except that the constant is "off", creating a complicated
split field 2's complement encoding.

Much clearer to just use a normal mask and shift.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
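
A sketch of the mask-and-shift encoding, assuming the 9-bit signed offset of
the AArch64 unscaled load/store forms occupies bits [20:12] of the insn word:

    #include <stdint.h>

    /* Place a (possibly negative) 9-bit offset into its instruction field;
     * the mask handles the two's-complement representation directly. */
    static inline uint32_t set_imm9(uint32_t insn, int32_t offset)
    {
        return insn | (((uint32_t)offset & 0x1ffu) << 12);
    }
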
Richard Henderson
017a86f7ad tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
2e796c7621 tcg-aarch64: Remove the shift_imm parameter from tcg_out_cmp
It was unused.  Let's not overcomplicate things before we need them.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
8d8db193f2 tcg-aarch64: Hoist common argument loads in tcg_out_op
This reduces the code size of the function significantly.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:16 -08:00
Richard Henderson
a51a6b6ad5 tcg-aarch64: Don't handle mov/movi in tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:15 -08:00
Richard Henderson
f029341494 tcg-aarch64: Set ext based on TCG_OPF_64BIT
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
7763ffa017 tcg-aarch64: Change all ext variables to TCGType
We assert that the values for _I32 and _I64 are 0 and 1 respectively.
This will make a couple of functions declared by tcg.c cleaner.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
3353d0dcc3 tcg-aarch64: Remove redundant CPU_TLB_ENTRY_BITS check
Removed from other targets in 56bbc2f967.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:02 -08:00
Stefan Weil
c5d3c49896 tcg: Fix typo in comment (dependancies -> dependencies)
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-03-02 17:12:51 +04:00
Peter Maydell
774d566cdb tcg/i386: Fix build for systems without working cpuid.h (MacOSX, Win32)
Win32 doesn't have a cpuid.h, and MacOSX may have one but without
the __cpuid() function we use, which means that commit 9d2eec20
broke the build for those platforms. Fix this by tightening up
our configure cpuid.h check to test that the functions we need
are present, and adding some missing #ifdef guards in
tcg/i386/tcg-target.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-21 10:39:10 +00:00
Richard Henderson
6399ab3325 tcg/i386: Use SHLX/SHRX/SARX instructions
These three-operand shift instructions do not require the shift count
to be placed into ECX.  This reduces the number of mov insns required,
with the mere addition of a new register constraint.

Don't attempt to get rid of the matching constraint, as that's impossible
to manipulate with just a new constraint.  In addition, constant shifts
still need the matching constraint.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
9d2eec202f tcg/i386: Use ANDN instruction
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C
so we must handle constants in the implementation of andc.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
ecc7e84327 tcg/i386: Add tcg_out_vex_modrm
Prepare for emitting BMI insns which require VEX encoding.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
a1b29c9ae0 tcg/i386: Move TCG_CT_CONST_* to tcg-target.c
These are not needed by users of tcg-target.h.  No need to recompile
when we adjust them.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
464a1441c1 tcg/optimize: Add more identity simplifications
Recognize 0 operand to andc, and -1 operands to and, orc, eqv.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
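
The identities in question, written out as a quick self-check with andc, orc
and eqv expressed through plain C operators:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 0xdeadbeefu, m1 = ~0u;      /* m1 plays the role of -1 */
        assert((x & ~0u)  == x);                 /* andc x, 0  ->  mov x */
        assert((x & m1)   == x);                 /* and  x, -1 ->  mov x */
        assert((x | ~m1)  == x);                 /* orc  x, -1 ->  mov x */
        assert((uint32_t)~(x ^ m1) == x);        /* eqv  x, -1 ->  mov x */
        return 0;
    }
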
Richard Henderson
e64e958e20 tcg/optimize: Optimize ANDC X,Y,Y to MOV X,0
Like we already do for SUB and XOR.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Richard Henderson
e201b56418 tcg/optimize: Simplify some logical ops to NOT
Given, of course, an appropriate constant.  These could be generated
from the "canonical" operation for inversion on the guest, or via
other optimizations.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
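
One such case, shown as a plain C identity; this is an illustration only, the
commit covers several ops and constants:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 0x12345678u;
        assert((x ^ ~0u) == ~x);    /* xor x, -1 can be folded to not x */
        return 0;
    }
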
Richard Henderson
23ec69ed37 tcg/optimize: Handle known-zeros masks for ANDC
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:29 -06:00
Aurelien Jarno
c8d7027253 tcg/optimize: add known-zero bits compute for load ops
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Aurelien Jarno
f096dc9618 tcg/optimize: improve known-zero bits for 32-bit ops
The shl_i32 op might set some bits of the unused 32 high bits of the
mask. Fix that by clearing the unused 32 high bits for all 32-bit ops
except load/store which operate on tl values.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
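
A sketch of the fix, assuming 'mask' tracks the bits of a value that may
still be non-zero:

    #include <stdint.h>

    /* After any 32-bit op (other than load/store), the high half of the
     * value is dead, so drop it from the known-bits mask instead of
     * letting e.g. shl_i32 appear to set bits above bit 31. */
    static inline uint64_t mask_after_op_i32(uint64_t mask)
    {
        return mask & 0xffffffffull;
    }
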
Aurelien Jarno
3031244b01 tcg/optimize: fix known-zero bits optimization
Known-zero bits optimization is a great idea that helps to generate more
optimized code. However the current implementation only works in very few
cases as the computed mask is not saved.

Fix this to make it really working.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
Aurelien Jarno
e46b225a31 tcg/optimize: fix known-zero bits for right shift ops
32-bit versions of sar and shr ops should not propagate known-zero bits
from the unused 32 high bits. For sar it could even lead to wrong code
being generated.

Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:28 -06:00
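
Hypothetical helpers showing the corrected propagation for 32-bit right
shifts; the mask again tracks possibly non-zero bits, and the high 32 bits
are dropped before shifting:

    #include <stdint.h>

    static inline uint64_t mask_after_shr_i32(uint64_t mask, unsigned count)
    {
        return (uint32_t)mask >> (count & 31);       /* logical shift */
    }

    static inline uint64_t mask_after_sar_i32(uint64_t mask, unsigned count)
    {
        /* If bit 31 may be set, the sign copies shifted in may be set too. */
        return (uint32_t)((int32_t)mask >> (count & 31));
    }
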
Huw Davies
7a3a00979d tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1].
It's this that should be subtracted from 0x20 when converting to a right rotate.

Cc: qemu-stable@nongnu.org
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17 10:12:08 -06:00
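
The conversion involved, as a standalone sketch: a left rotate by n becomes a
right rotate by 0x20 - n, and it is the count operand (args[2]) that must
feed this subtraction.

    #include <stdint.h>

    /* Rotate left by n (0..31), expressed as the right rotate ARM provides. */
    static inline uint32_t rotl32_via_rotr(uint32_t x, unsigned n)
    {
        unsigned r = (0x20 - n) & 31;               /* right-rotate count */
        return (x >> r) | (x << ((32 - r) & 31));
    }
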
Richard Henderson
f6aa2f7dee TCG: Fix 32-bit host allocation typo
The second half register of a 64-bit temp on a 32-bit host
was allocated with the wrong base_type.

The base_type of the second half register is never checked,
but for consistency it should be the same as the first half.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-15 15:20:17 -08:00
Peter Maydell
c1de788ab9 tcg: Add TCGV_UNUSED_PTR, TCGV_IS_UNUSED_PTR, TCGV_EQUAL_PTR
We have macros for marking TCGv values as unused, checking if they
are unused and comparing them to each other. However these only exist
for TCGv_i32 and TCGv_i64; add them for TCGv_ptr as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-02-08 14:46:55 +00:00
Richard Henderson
c6830cdb2c tcg/s390: Remove sigill_handler
Commit c9baa30f42 failed to
delete all of the relevant code, leading to Werrors about
unused symbols.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-02-01 13:45:20 +04:00
Peter Maydell
dc08f85188 Merge remote-tracking branch 'rth/tcg-movbe' into staging
* rth/tcg-movbe:
  tcg/i386: cleanup useless #ifdef
  tcg/i386: use movbe instruction in qemu_ldst routines
  tcg/i386: add support for three-byte opcodes
  tcg/i386: remove hardcoded P_REXW value
  disas/i386.c: disassemble movbe instruction

Message-id: 1390692772-15282-1-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-01-30 19:02:16 +00:00
Alexander Graf
18d13fa293 TCG: Fix I64-on-32bit-host temporaries
We have cache pools of temporaries that we can reuse later when they've
already been allocated before.

These cache pools differentiate between the target TCG variable type they
contain. So we have one pool for I32 and one pool for I64 variables.

On a 32bit system, we can't work with 64bit registers though. So instead we
spawn two I32 temporaries for every I64 temporary we create. All caching
works the same way as on a real 64-bit system though: We create a cache entry
in the 64bit array for the first i32 index.

However, when we free such a temporary we free it to the pool of its type
(which is always i32 on 32bit systems) rather than its base_type (which is
i64 or i32 depending on the variable). This means we put a temporary that
is of base_type == i64 into the i32 preallocated temporary pool.

Eventually, this results in failures like this on 32bit hosts:

  qemu-system-ppc64: tcg/tcg.c:515: tcg_temp_new_internal: Assertion `ts->base_type == type' failed.

This patch makes the free routine use the base_type instead for the free case,
so it's consistent with the temporary allocation. It fixes the above failure
for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1390146811-59936-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-01-30 13:25:28 +00:00
Aurelien Jarno
2d23d5edb5 tcg/i386: cleanup useless #ifdef
TCG_TARGET_HAS_movcond_i32 is always defined to 1 in tcg-target.h, so
remove the corresponding #ifdef #endif sequence, left from a previous
refactoring.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 15:21:33 -08:00
Aurelien Jarno
085bb5bb64 tcg/i386: use movbe instruction in qemu_ldst routines
The movbe instruction has been added on some Intel Atom CPUs and on
recent Intel Haswell CPUs. It allows to load/store a value and at the
same time bswap it.

This patch detects the availability of this instruction and, when available,
uses it in the qemu load/store routines in place of load/store +
bswap. Note that for 16-bit unsigned loads, movbe + movzw is basically the
same as movzw + bswap, so the patch doesn't touch this case.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Reduced the number of conditionals using "movop".]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 15:19:19 -08:00
Aurelien Jarno
2a1137753f tcg/i386: add support for three-byte opcodes
Add support for three-byte opcodes, starting with the 0x0f 0x38 prefix.
Use P_EXT38 as the new constant, and shift all other constants so that
P_EXT and P_EXT38 have neighbouring values.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Changed the name from P_EXT2 to P_EXT38.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 14:12:45 -08:00
Aurelien Jarno
c9d78213b8 tcg/i386: remove hardcoded P_REXW value
P_REXW is defined as a constant at the beginning of i386/tcg-target.c,
but the corresponding bit is later used in a hardcoded way, which defeats
the purpose of a constant.

Fix that by using a conditional expression operator instead of a shift.
On x86 this actually makes the code slightly smaller as GCC does in
practice (opc >> 8) & 8 instead of (opc & 0x800) >> 8 so the constants
are smaller to load.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-01-25 14:12:38 -08:00
Aurelien Jarno
8589467f94 tcg/i386: fix a comment
The comments apply to 8-bit stores, not 8-byte stores.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2013-12-21 16:41:56 +01:00
Richard Henderson
0ec9eabc7f tcg: Use bitmaps for free temporaries
We previously allocated 32-bits per temp for the next_free_temp entry.
We now allocate 4 bits per temp across the 4 bitmaps.

Using a linked list meant that if a translator is tweaked, resulting in
temps being freed in a different order, that would have follow-on effects
throughout the TB.  Always allocating the lowest free temp means that
follow-on effects are minimized, which can make it easier to diff output
when debugging the translators.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-12-10 09:23:45 -08:00
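
A sketch of "always take the lowest free temp" on a one-word bitmap, using
ffs() from <strings.h>; QEMU's actual multi-word bitmap helpers differ:

    #include <stdint.h>
    #include <strings.h>                /* ffs() */

    /* Return the lowest set bit of *free_map and clear it, or -1 if empty.
     * Allocating the lowest index keeps temp numbering stable when the
     * translator frees temps in a slightly different order. */
    static int alloc_lowest_temp(uint32_t *free_map)
    {
        int idx = ffs((int)*free_map);  /* 1-based index, 0 when no bits set */
        if (idx == 0) {
            return -1;
        }
        *free_map &= ~(1u << (idx - 1));
        return idx - 1;
    }
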
Richard Henderson
c9baa30f42 tcg-s390: Use qemu_getauxval in query_facilities
No need to set up a SIGILL signal handler for detection anymore.

Remove a ton of sanity checks that must be true, given that we're
requiring a 64-bit build (the note about 31-bit KVM is satisfied
by configuring with TCI).

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:30 +13:00
Richard Henderson
41d9ea80ac tcg-arm: Use qemu_getauxval
Allow host detection on linux systems without glibc 2.16 or later.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:14 +13:00
Richard Henderson
cd629de1cf tcg-ppc64: Use qemu_getauxval
Allow host feature detection on Linux systems without glibc 2.16 or later.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-30 07:45:13 +13:00
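The detection idiom these qemu_getauxval patches rely on, sketched with
the raw getauxval interface that glibc 2.16 introduced; the hwcap bit
below is a made-up placeholder, since the real bit names are
architecture-specific:

    #include <sys/auxv.h>   /* getauxval, AT_HWCAP (glibc >= 2.16) */

    #define HWCAP_BIT_EXAMPLE  (1ul << 17)   /* placeholder feature bit */

    static int host_has_feature(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);
        return (hwcap & HWCAP_BIT_EXAMPLE) != 0;
    }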
Richard Henderson
463230d85e tcg-ia64: Introduce tcg_opc_bswap64_i
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:59 +10:00
Richard Henderson
db008a8de2 tcg-ia64: Introduce tcg_opc_ext_i
Being able to "extend" from 64 bits (with a mov) simplifies
a few places where the conditional breaks the train of thought.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:54 +10:00
Richard Henderson
fa0cdb6c2a tcg-ia64: Introduce tcg_opc_movi_a
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:50 +10:00
Richard Henderson
3b9ccdcc74 tcg-ia64: Introduce tcg_opc_mov_a
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:46 +10:00
Richard Henderson
25c9c73bdc tcg-ia64: Use A3 form of logical operations
We can and/or/xor/andcm small constants, saving one cycle.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:40 +10:00
Richard Henderson
f940fb086c tcg-ia64: Use SUB_A3 and ADDS_A4 for subtraction
We can subtract from more small constants than just 0 with one insn,
and we can add the negative for most small constants.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:33 +10:00
Richard Henderson
8642088a3d tcg-ia64: Use ADDS for small addition
Avoids a wasted cycle loading up small constants.

Simplify the code by assuming the tcg optimizer is going to work,
and don't expect the first operand of the add to be a constant.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:23 +10:00
Richard Henderson
3c289cba9b tcg-ia64: Avoid unnecessary stop bit in tcg_out_alu
When performing an operation with two input registers, we'd leave
the stop bit (and thus an extra cycle) that's only needed when one
or the other input is a constant.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:16 +10:00
Richard Henderson
d15de15ca0 tcg-ia64: Move AREG0 to R32
Since the move away from the global areg0, we're no longer globally
reserving areg0, which means our use of R7 clobbers a call-saved
register.  Shift areg0 into the windowed registers; indeed, choose
the incoming parameter register in which it arrives.

This requires moving the register holding the return address elsewhere.
Choose R33 for tidiness.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:57:08 +10:00
Richard Henderson
6d264b38fc tcg-ia64: Simplify brcond
There was a misconception that a stop bit is required between a compare
and the branch that uses the predicate set by the compare.  This led to
the use of an extra bundle in which to perform the compare.  The extra
bundle left room for constants to be loaded for use with the compare insn.

If we pack the compare and the branch together in the same bundle, then
there's no longer any room for non-zero constants, at which point we
can eliminate half the function by not handling them.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:42 +10:00
Richard Henderson
6f65c780b9 tcg-ia64: Handle constant calls
Using only indirect calls results in 3 bundles (one to load the
descriptor address), and 4 stop bits.  By looking through the
descriptor at translation time, since the target is constant, we can
perform the call with 2 bundles and only 1 stop bit.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:30 +10:00
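For background, an IA-64 function pointer addresses a descriptor rather
than the code itself.  A schematic of reading through it when the call
target is constant; the field names are illustrative, following the
conventional layout of entry point followed by gp:

    #include <stdint.h>

    struct ia64_fdesc {
        uintptr_t entry;   /* real code address */
        uintptr_t gp;      /* global pointer the callee expects */
    };

    /* With a constant target, resolve the descriptor at translation time
     * so the backend can emit a direct br.call plus a movl of gp, instead
     * of loading both fields at run time through an indirect call. */
    static uintptr_t const_call_target(const void *func, uintptr_t *gp)
    {
        const struct ia64_fdesc *desc = func;

        *gp = desc->gp;
        return desc->entry;
    }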
Richard Henderson
5f7b16877a tcg-ia64: Use shortcuts for nop insns
There's no need to go through the full opcode-to-insn function call
to generate nops.  This makes the source a bit more readable.

Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:25 +10:00
Richard Henderson
e3afa1c4ad tcg-ia64: Use TCGMemOp within qemu_ldst routines
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-11-18 15:56:12 +10:00
Richard Henderson
1768ec0623 tcg-ppc64: Support new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
5dd391604f tcg-ppc: Support new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
e349a8d4ff tcg-ppc64: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
92d0acda27 tcg-ppc: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
a058557381 tcg-ppc64: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
f1a16dcdd5 tcg-ppc: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
091d567771 tcg-arm: Improve GUEST_BASE qemu_ld/st
If we pull the code to emit the actual load/store into a subroutine,
we can share the reg+reg addressing mode code between softmmu and
usermode.  This lets us load GUEST_BASE into a temporary register
rather than attempting to add it piece-wise to the address.

That in turn lets us use movw+movt for armv7, rather than (up to) 4 adds.
Code size for pre-armv7 stays the same.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
15ecf6e394 tcg-arm: Convert to new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
a485cff09c tcg-arm: Tidy variable naming convention in qemu_ld/st
s/addr_reg2/addrhi/
s/addr_reg/addrlo/
s/data_reg2/datahi/
s/data_reg/datalo/

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
0315c51ea9 tcg-arm: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
099fcf2e36 tcg-arm: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
8221a267fd tcg-i386: Support new ldst opcodes
No support for helpers with non-default endianness yet,
but good enough to test the opcodes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
b3e2bc500f tcg-i386: Remove "cb" output restriction from qemu_st8 for i386
Once we form a combined qemu_st_i32 opcode, we won't be able to
have separate constraints based on size.  This one is fairly easy
to work around, since eax is available as a scratch register.

When storing variable data, this tends to merely exchange one mov
for another.  E.g.

-:  mov    %esi,%ecx
...
-:  mov    %cl,(%edx)
+:  mov    %esi,%eax
+:  mov    %al,(%edx)

Where we do have a regression is when storing constant data, in which
case we may load the constant into edi, when only ecx/ebx ought to be used.

The proper way to recover this regression is to allow constants as
arguments to qemu_st_i32, so that we never load the constant data into
a register at all, much less the wrong register.  TBD.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
7352ee546c tcg-i386: Tidy softmmu routines
Pass two TCGReg to tcg_out_tlb_load, rather than idx+args.

Move ldst_optimization routines just below tcg_out_tlb_load to avoid
the need for forward declarations.

Use TCGReg enum in preference to int where appropriate.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
37c5d0d5d1 tcg-i386: Use TCGMemOp within qemu_ldst routines
Step one in the transition, with constants passed down from tcg_out_op.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
d257e0d7ae tcg: Use TCGMemOp for TCGLabelQemuLdst.opc
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:19 -07:00
Richard Henderson
867b3201a3 exec: Add both big- and little-endian memory helpers
Step three in the transition: helpers not tied to the target
"default" endianness.  To be used when the guest uses a memory
operation with non-default endianness.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 13:19:21 -07:00
Richard Henderson
f713d6ad7b tcg: Add qemu_ld_st_i32/64
Step two in the transition, adding the new ldst opcodes.  Keep the old
opcodes around until all backends support the new opcodes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 13:19:21 -07:00
Richard Henderson
6c5f4ead64 tcg: Add TCGMemOp
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 12:20:59 -07:00
Richard Henderson
9ecefc84dd tcg: Add tcg-be-ldst.h
Move TCGLabelQemuLdst and related stuff out of tcg.h.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:26 -07:00
Richard Henderson
3cf246f0d4 tcg: Add tcg-be-null.h
This is a no-op backend data implementation, for those targets that
are not currently using the load/store optimization path.

This is preparatory to always requiring these functions in all backends.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:26 -07:00
Richard Henderson
023261ef85 tcg-aarch64: Update to helper_ret_*_mmu routines
A minimal update to use the new helpers with the return address argument.

Tested-by: Claudio Fontana <claudio.fontana@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:25 -07:00
Richard Henderson
84fd9dd3f7 tcg: Merge tcg_register_helper into tcg_context_init
Eliminates the repeated checks for having created
the s->helpers hash table.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:25 -07:00
Richard Henderson
4953ee6271 tcg: Add tcg-runtime.c helpers to all_helpers
For the few targets that actually use these, we'd not report
them symbolically in the tcg opcode logs.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:25 -07:00
Richard Henderson
100b5e0170 tcg: Put target helper data into an array.
One call inside of a loop to tcg_register_helper instead of hundreds
of sequential calls.

Presumably more icache and branch prediction friendly; resulting binary
size mostly unchanged on x86_64, as we're trading 32-bit rip-relative
references in .text for full 64-bit pointers in .rodata.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:25 -07:00
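The shape of the change, sketched; the helpers, the table, and the
tcg_register_helper prototype below are illustrative stand-ins for what
the DEF_HELPER machinery actually generates:

    #include <stddef.h>

    void tcg_register_helper(void *func, const char *name);  /* sketch of the tcg.h interface of this era */

    typedef struct {
        void *func;
        const char *name;
    } HelperEntry;

    /* Hypothetical helpers standing in for the generated table. */
    static void helper_foo(void) { }
    static void helper_bar(void) { }

    static const HelperEntry all_helpers[] = {
        { (void *)helper_foo, "foo" },
        { (void *)helper_bar, "bar" },
    };

    /* One loop at init time replaces hundreds of sequential calls. */
    static void register_all_helpers(void)
    {
        size_t i;

        for (i = 0; i < sizeof(all_helpers) / sizeof(all_helpers[0]); i++) {
            tcg_register_helper(all_helpers[i].func, all_helpers[i].name);
        }
    }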
Richard Henderson
5cd8f6210f tcg: Move helper registration into tcg_context_init
No longer needs to be done on a per-target basis.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:43:37 -07:00
Richard Henderson
6e085f72c6 tcg: Use a GHashTable for tcg_find_helper
Slightly changes the interface, in that we now return the name
instead of a TCGHelperInfo structure, which goes away.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:41:36 -07:00
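A sketch of the mapping with GLib's hash table, keying directly on the
helper's address with the default direct hash; the function names here
are illustrative:

    #include <glib.h>

    static GHashTable *helpers;

    static void helpers_init(void)
    {
        /* NULL hash/equal functions mean direct hashing on the pointer. */
        helpers = g_hash_table_new(NULL, NULL);
    }

    static void helper_register(void *func, const char *name)
    {
        g_hash_table_insert(helpers, func, (gpointer)name);
    }

    static const char *helper_find(void *func)
    {
        return g_hash_table_lookup(helpers, func);
    }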
Richard Henderson
7c57df0d85 tcg: Delete tcg_helper_get_name declaration
The function was deleted in 4dc81f2822.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:41:15 -07:00
Richard Henderson
802b508123 tcg-hppa: Remove tcg backend
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:31:06 -07:00
Anthony Liguori
576e81be39 Merge remote-tracking branch 'rth/tcg-arm-pull' into staging
# By Richard Henderson
# Via Richard Henderson
* rth/tcg-arm-pull:
  tcg-arm: Move the tlb addend load earlier
  tcg-arm: Remove restriction on qemu_ld output register
  tcg-arm: Return register containing tlb addend
  tcg-arm: Move load of tlb addend into tcg_out_tlb_read
  tcg-arm: Use QEMU_BUILD_BUG_ON to verify constraints on tlb
  tcg-arm: Use strd for tcg_out_arg_reg64
  tcg-arm: Rearrange slow-path qemu_ld/st
  tcg-arm: Use ldrd/strd for appropriate qemu_ld/st64

Message-id: 1380663109-14434-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
2013-10-09 07:52:57 -07:00
Anthony Liguori
ce079abb41 Merge remote-tracking branch 'sweil/tci' into staging
# By Stefan Weil
# Via Stefan Weil
* sweil/tci:
  misc: Use new rotate functions
  bitops: Add rotate functions (rol8, ror8, ...)
  tci: Add implementation of rotl_i64, rotr_i64

Message-id: 1380137693-3729-1-git-send-email-sw@weilnetz.de
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
2013-10-09 07:51:23 -07:00
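The rol8/ror8 family from the bitops patch listed above follows the
usual rotate pattern; a hedged sketch, since QEMU's actual definitions
may differ in detail:

    #include <stdint.h>

    static inline uint8_t rol8(uint8_t word, unsigned int shift)
    {
        return (word << shift) | (word >> ((8 - shift) & 7));
    }

    static inline uint8_t ror8(uint8_t word, unsigned int shift)
    {
        return (word >> shift) | (word << ((8 - shift) & 7));
    }

The wider variants (rol16/32/64, ror16/32/64) follow the same shape,
with the shift amount masked to the type width.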
Richard Henderson
ee06e23051 tcg-arm: Move the tlb addend load earlier
There are free scheduling slots between the sequence of
comparison instructions.  This requires changing the
register in use to avoid conflict with those compares.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
66c2056fb8 tcg-arm: Remove restriction on qemu_ld output register
The main intent of the patch is to allow the tlb addend register
to be changed, without tying that change to the constraint.  But
the most common side-effect seems to be to enable usage of ldrd
with the r0,r1 pair.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
d3e440bef2 tcg-arm: Return register containing tlb addend
Preparatory to rescheduling the tlb load, and changing said register.
Continues to use R1 for now.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
d0ebde2284 tcg-arm: Move load of tlb addend into tcg_out_tlb_read
This allows us to make more intelligent decisions about the relative
offsets of the tlb comparator and the addend, avoiding any need of
writeback addressing.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
f248873637 tcg-arm: Use QEMU_BUILD_BUG_ON to verify constraints on tlb
One of the two constraints was already checked via #if, but
the tlb offset distance was only checked at runtime.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
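QEMU_BUILD_BUG_ON is QEMU's compile-time assertion; the kind of check
being converted looks roughly like the following, where the exact
expression and bound are illustrative:

    /* Verify at build time that the tlb addend can be reached from the
     * comparator with the small immediate offset used by the generated
     * load; the bound shown is illustrative only. */
    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addend)
                      - offsetof(CPUTLBEntry, addr_read) > 0xfff);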
Richard Henderson
e5e2e4a74b tcg-arm: Use strd for tcg_out_arg_reg64
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
d9f4dde4a6 tcg-arm: Rearrange slow-path qemu_ld/st
Use the new helper_ret_*_mmu routines.  Use a conditional call
to arrange for a tail-call from the store path, and to load the
return address for the helper for the load path.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Richard Henderson
23bbc25085 tcg-arm: Use ldrd/strd for appropriate qemu_ld/st64
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-01 10:20:33 -07:00
Stefan Weil
3df2b8fde9 misc: Use new rotate functions
Signed-off-by: Stefan Weil <sw@weilnetz.de>
2013-09-25 21:23:05 +02:00
Stefan Weil
d285bf784b tci: Add implementation of rotl_i64, rotr_i64
It is used by qemu-ppc64 when running Debian's busybox-static.

Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2013-09-25 21:22:00 +02:00
Richard Henderson
7f12d6497f tcg-ppc64: Implement CONFIG_QEMU_LDST_OPTIMIZATION
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:33 -07:00
Richard Henderson
c7ca6a2b75 tcg-ppc64: Add _noaddr functions for emitting forward branches
... rather than open-coding this stuff through the file.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:32 -07:00
Richard Henderson
fedee3e7fd tcg-ppc64: Streamline tcg_out_tlb_read
Less conditional compilation.  Merge an add insn with the indexed
memory load insn.  Load the tlb addend earlier.  Avoid the address
update memory form.

Fix a bug that did not allow large enough tlb offsets for some guests.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:32 -07:00
Richard Henderson
fa94c3be7a tcg-ppc64: Implement tcg_register_jit
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:32 -07:00
Richard Henderson
b18d5d2b80 tcg-ppc64: Handle long offsets better
Previously we'd only handle 16-bit offsets from a memory operand without
falling back to indexed addressing, but it's easy to use ADDIS to handle
full 32-bit offsets.

This also lets us unify code that existed inline in tcg_out_op for handling
addition of large constants.

The new R2 temporary was marked reserved for the AIX calling convention,
but the register really is call-clobbered, and since tcg-generated code
has no use for a TOC, it's available for use.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:32 -07:00
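The standard way to split a displacement for an addis + d-form pair, as
a sketch; the emit step in the comment and the function name are
illustrative:

    #include <assert.h>
    #include <stdint.h>

    /* Split "offset" so that lo fits the signed 16-bit d-form field and
     * hi is the signed 16-bit immediate for addis. */
    static void split_offset(int64_t offset, int16_t *hi, int16_t *lo)
    {
        *lo = (int16_t)offset;                  /* sign-extended low 16 bits */
        *hi = (int16_t)((offset - *lo) >> 16);  /* high part for addis */
        assert(((int64_t)*hi << 16) + *lo == offset);
        /* Emit: addis rTMP, rBASE, hi ; then ld/std rT, lo(rTMP). */
    }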