Commit Graph

1129 Commits

Author SHA1 Message Date
Andreas Färber
97a8ea5a3a cpu: Replace do_interrupt() by CPUClass::do_interrupt method
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.

It will also make it possible to override the interrupt handling for
certain CPU families.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
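A minimal, standalone sketch of the pattern this commit introduces (heavily simplified types, not QEMU's actual QOM headers): the per-target global do_interrupt() becomes a hook on the CPU class, so generic code can call it without knowing the target.

  #include <stdio.h>

  typedef struct CPUState CPUState;

  /* Hypothetical, reduced view of the class struct: the interrupt hook
   * lives in the class, so each CPU family can install its own. */
  typedef struct CPUClass {
      void (*do_interrupt)(CPUState *cpu);
  } CPUClass;

  struct CPUState {
      const CPUClass *cc;
      int interrupt_request;
  };

  static void x86_cpu_do_interrupt(CPUState *cpu)
  {
      printf("x86 interrupt, request=%d\n", cpu->interrupt_request);
  }

  int main(void)
  {
      CPUClass x86_class = { .do_interrupt = x86_cpu_do_interrupt };
      CPUState cpu = { .cc = &x86_class, .interrupt_request = 1 };

      /* Generic code calls through the class instead of a per-target
       * global, so several targets can coexist in one binary. */
      cpu.cc->do_interrupt(&cpu);
      return 0;
  }
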
Andreas Färber
c3affe5670 cpu: Pass CPUState to cpu_interrupt()
Move it to qom/cpu.h to avoid issues with include order.

Change pc_acpi_smi_interrupt() opaque to X86CPU.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
Andreas Färber
259186a7d2 cpu: Move halted and interrupt_request fields to CPUState
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.

Pass PowerPCCPU to kvmppc_handle_halt().

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
Andreas Färber
f56e3a1476 target-i386: Update VMStateDescription to X86CPU
Expose vmstate_cpu as vmstate_x86_cpu and hook it up to CPUClass::vmsd.
Adapt opaques and VMState fields to X86CPU. Drop cpu_{save,load}().

Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:54 +01:00
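A hedged fragment of what hooking the VMState up through the class looks like (function and field names as assumed here, not quoted from the patch): the per-target VMStateDescription becomes reachable generically via CPUClass::vmsd instead of cpu_save()/cpu_load().

  static void x86_cpu_common_class_init(ObjectClass *oc, void *data)
  {
      CPUClass *cc = CPU_CLASS(oc);

      /* Generic migration code now finds the description here. */
      cc->vmsd = &vmstate_x86_cpu;
  }
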
Anthony Liguori
a6900601ca virtio,vhost,pci,e1000
Mostly bugfixes, but also some ICH work by Laszlo.
 

Merge remote-tracking branch 'mst/tags/for_anthony' into staging

virtio,vhost,pci,e1000

Mostly bugfixes, but also some ICH work by Laszlo.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# By Michael S. Tsirkin (2) and others
# Via Michael S. Tsirkin
* mst/tags/for_anthony:
  Set virtio-serial device to have a default of 2 MSI vectors.
  ICH9 LPC: Reset Control Register, basic implementation
  Fix guest OS hang when 64bit PCI bar present
  e1000: unbreak the guest network migration to 1.3
  vhost: memory sync fixes
2013-03-04 08:22:41 -06:00
Peter Maydell
806f352d3d gen-icount.h: Rename gen_icount_start/end to gen_tb_start/end
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-03-03 14:29:08 +00:00
Andreas Färber
fadf982584 cpu: Introduce ENV_OFFSET macros
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions which reference CPUState
fields given the cpu_env register that TCG targets set up with a
pointer to the CPUArchState struct.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-03-03 14:28:28 +00:00
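The idea can be modelled standalone (struct names and layout simplified, not QEMU's real definitions): cpu_env points at the CPUArchState embedded inside the CPU object, so common code can reach CPUState fields by subtracting ENV_OFFSET from that pointer.

  #include <stddef.h>
  #include <stdio.h>

  typedef struct CPUState {        /* common, target-independent state */
      int halted;
  } CPUState;

  typedef struct CPUX86State {     /* stand-in for CPUArchState */
      long eip;
  } CPUX86State;

  typedef struct X86CPU {
      CPUState parent;             /* common part comes first */
      CPUX86State env;             /* per-target part */
  } X86CPU;

  #define ENV_OFFSET offsetof(X86CPU, env)

  int main(void)
  {
      X86CPU cpu = { .parent.halted = 1 };
      CPUX86State *env = &cpu.env; /* what cpu_env points to at runtime */

      /* Generic code can locate the CPUState from env alone. */
      CPUState *cs = (CPUState *)((char *)env - ENV_OFFSET);
      printf("halted=%d\n", cs->halted);
      return 0;
  }
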
Richard Henderson
a4bcea3d67 target-i386: Use mulu2 and muls2
These correspond very closely to the insns that we're emulating.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-27 19:06:28 +00:00
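What a mulu2 op computes can be modelled in plain C (names here are illustrative): both halves of the double-width product in a single operation, which is exactly what x86 MUL writes to EDX:EAX.

  #include <stdint.h>
  #include <stdio.h>

  /* Model of a 32-bit mulu2: low and high halves of the full product. */
  static void mulu2_32(uint32_t *lo, uint32_t *hi, uint32_t a, uint32_t b)
  {
      uint64_t prod = (uint64_t)a * b;  /* one widening multiply */
      *lo = (uint32_t)prod;             /* -> EAX */
      *hi = (uint32_t)(prod >> 32);     /* -> EDX */
  }

  int main(void)
  {
      uint32_t eax, edx;
      mulu2_32(&eax, &edx, 0xffffffffu, 2);
      printf("EDX:EAX = %08x:%08x\n", edx, eax);  /* 00000001:fffffffe */
      return 0;
  }
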
Alexey Korolev
7feb640cf3 Fix guest OS hang when 64bit PCI bar present
This patch addresses the issue fully described here:
http://lists.nongnu.org/archive/html/qemu-devel/2013-02/msg01804.html

Linux kernels prior to 2.6.36 do not disable the PCI device during the
enumeration process. Since the lower and upper halves of a 64-bit BAR
are programmed separately, QEMU briefly receives a request to occupy a
completely wrong address region. We have found that the boot process
breaks completely if the kvm-apic range is overlapped even for a short
period of time (it is fine for other regions).

This patch raises the priority of the kvm-apic memory region so that it
is never pushed out by PCI devices. The patch is quite safe as it does
not touch the memory manager.

Signed-off-by: Alexey Korolev <akorolex@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-02-27 17:23:22 +02:00
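The mechanism relied on is QEMU's overlapping-subregion priority; a hedged fragment, not the patch verbatim (the region variable and the priority value are illustrative): a subregion registered with a higher priority keeps precedence over anything that later overlaps it, so a briefly mis-programmed PCI BAR can no longer shadow the APIC range.

  memory_region_add_subregion_overlap(get_system_memory(),
                                      APIC_DEFAULT_ADDRESS,  /* 0xfee00000 */
                                      apic_region,
                                      0x1000 /* above PCI BAR priority */);
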
Richard Henderson
76f1313323 target-i386: Use add2 to implement the ADX extension
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23 17:25:29 +00:00
Richard Henderson
f437d0a3c2 target-i386: Use movcond to implement shiftd.
With this being all straight-line code, it can get deleted
when the cc variables die.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:19 -08:00
Richard Henderson
e2f515cf2f target-i386: Discard CC_OP computation in set_cc_op also
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting.  By discarding CC_OP in
a later flags setting insn, we can discard that movcond.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:19 -08:00
Richard Henderson
34d80a55ff target-i386: Use movcond to implement rotate flags.
With this being all straight-line code, it can get deleted
when the cc variables die.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:19 -08:00
Richard Henderson
a41f62f592 target-i386: Use movcond to implement shift flags.
With this being all straight-line code, it can get deleted
when the cc variables die.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:19 -08:00
Richard Henderson
436ff2d227 target-i386: Add CC_OP_CLR
Special case xor with self.  We need not even store the known
zero into cc_src.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:18 -08:00
Richard Henderson
321c535105 target-i386: Implement tzcnt and fix lzcnt
We weren't computing flags for lzcnt at all.  At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:18 -08:00
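A standalone model of the branchless pattern described above (illustrative only; QEMU emits this as TCG ops via movcond rather than C): compute the count unconditionally, then select between it and the old destination instead of branching on a zero source.

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t bsf32(uint32_t src, uint32_t old_dst)
  {
      /* ctz is undefined for 0 in C, so feed it a safe dummy operand ... */
      uint32_t count = __builtin_ctz(src | (src == 0));
      /* ... then a conditional move keeps DST unchanged for SRC == 0,
       * mirroring what the generated movcond does. */
      return src == 0 ? old_dst : count;
  }

  int main(void)
  {
      printf("%u %u\n", bsf32(0x28, 7), bsf32(0, 7));  /* prints: 3 7 */
      return 0;
  }
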
Richard Henderson
f1300734cb target-i386: Use clz/ctz for bsf/bsr helpers
And mark the helpers as NO_RWG_SE.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:18 -08:00
Richard Henderson
cd7f97cafd target-i386: Implement ADX extension
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19 23:05:18 -08:00
Richard Henderson
e2c3c2c551 target-i386: Implement RORX
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:32 -08:00
Richard Henderson
4a554890e4 target-i386: Implement SHLX, SARX, SHRX
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:32 -08:00
Richard Henderson
0592f74a75 target-i386: Implement PDEP, PEXT
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:32 -08:00
Richard Henderson
5f1f4b1771 target-i386: Implement MULX
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:32 -08:00
Richard Henderson
02ea1e6b4f target-i386: Implement BZHI
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:32 -08:00
Richard Henderson
bc4b43dc2f target-i386: Implement BLSR, BLSMSK, BLSI
Do all of group 17 at once, for simplicity.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:52:05 -08:00
Richard Henderson
c7ab7565bc target-i386: Implement BEXTR
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:39 -08:00
Richard Henderson
7073fbada7 target-i386: Implement ANDN
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:39 -08:00
Richard Henderson
111994ee05 target-i386: Implement MOVBE
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:39 -08:00
Richard Henderson
701ed211d6 target-i386: Decode the VEX prefixes
No actual required uses of these encodings yet.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:39 -08:00
Richard Henderson
4a6fd938f5 target-i386: Tidy prefix parsing
Avoid duplicating the switch statement between 32-bit and 64-bit modes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:38 -08:00
Richard Henderson
988c3eb0d6 target-i386: Use CC_SRC2 for ADC and SBB
Add another slot in ENV and store two of the three inputs.  This lets us
do less work when carry-out is not needed, and avoid the unpredictable
CC_OP after translating these insns.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:39:09 -08:00
Richard Henderson
db9f259772 target-i386: Make helper_cc_compute_{all,c} const
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:25:55 -08:00
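The shape of the change, sketched from the description (exact signatures are assumed here, not quoted): instead of reading the flag data indirectly through env, the helper takes it as plain arguments, so TCG can treat the call as a pure function of its inputs.

  /* Before: data fetched through env, so the call has implicit effects. */
  uint32_t helper_cc_compute_all(CPUX86State *env, int op);

  /* After: a const helper, pure in its explicit operands. */
  target_ulong helper_cc_compute_all(target_ulong dst, target_ulong src1,
                                     target_ulong src2, int op);
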
Richard Henderson
8601c0b6c5 target-i386: Don't reference ENV through most of cc helpers
In preparation for making this a const helper.

By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:21:31 -08:00
Richard Henderson
a3251186fc target-i386: optimize flags checking after sub using CC_SRCT
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition.  However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).

The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it.  All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.

Example (cmp $0x21,%rax + jbe):

 Before                                     After
----------------------------------------------------------------------------
 movi_i64 tmp1,$0x21                        movi_i64 tmp1,$0x21
 movi_i64 cc_src,$0x21                      movi_i64 cc_src,$0x21
 sub_i64 cc_dst,rax,tmp1                    sub_i64 cc_dst,rax,tmp1
 add_i64 tmp7,cc_dst,cc_src
 movi_i32 cc_op,$0x11                       movi_i32 cc_op,$0x11
 brcond_i64 tmp7,cc_src,leu,$0x0            discard loc11
                                            brcond_i64 rax,cc_src,leu,$0x0

 Before                                     After
----------------------------------------------------------------------------
  mov    (%r14),%rbp                        mov    (%r14),%rbp
  mov    %rbp,%rbx                          mov    %rbp,%rbx
  sub    $0x21,%rbx                         sub    $0x21,%rbx
  lea    0x21(%rbx),%r12
  movl   $0x11,0xa0(%r14)                   movl   $0x11,0xa0(%r14)
  movq   $0x21,0x90(%r14)                   movq   $0x21,0x90(%r14)
  mov    %rbx,0x98(%r14)                    mov    %rbx,0x98(%r14)
  cmp    $0x21,%r12                     |   cmp    $0x21,%rbp
  jbe    ...                                jbe    ...

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:58 -08:00
Richard Henderson
891a5133f1 target-i386: Update cc_op before TCG branches
Placing the CC_OP_DYNAMIC update at the join point is less effective than
placing it before the branch, as the branch will have forced global
registers to their home locations.  This way we have a chance to discard
CC_SRC2 before it gets stored.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:58 -08:00
Richard Henderson
dc259201f8 target-i386: introduce gen_jcc1_noeob
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op.  However, not all jumps end
a basic block, so introduce a variant that does not do this.

This was partially undone earlier (i386: drop cc_op argument of gen_jcc1);
redo it now, also to prepare for the introduction of src2.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:58 -08:00
Richard Henderson
63633fe6eb target-i386: use gen_op for cmps/scas
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:58 -08:00
Paolo Bonzini
3b9d3cf160 target-i386: kill cpu_T3
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3.  This value is then written to t2 without
going through a temporary register.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
57eb0cc854 target-i386: expand cmov via movcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
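A hedged sketch of the expansion (assumes QEMU's tcg-op.h and the translator's temporaries; 'cc_result' and 'zero' stand for temporaries holding the evaluated condition and the constant 0, and cpu_T[0] holds the source operand): CMOVcc maps onto a single movcond that either takes the source or keeps the old register value, with no branch and no flags helper call.

  /* dest = (cc_result != 0) ? src : dest */
  tcg_gen_movcond_tl(TCG_COND_NE, cpu_regs[reg],
                     cc_result, zero,          /* condition operands   */
                     cpu_T[0],                 /* value if cond holds  */
                     cpu_regs[reg]);           /* otherwise keep value */
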
Paolo Bonzini
f32d3781de target-i386: introduce gen_cmovcc1
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Paolo Bonzini
cc8b6f5b39 target-i386: cleanup temporary macros for CCPrepare
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
69d1aa31f7 target-i386: inline gen_prepare_cc_slow
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Paolo Bonzini
943131ca98 target-i386: use CCPrepare to generate conditional jumps
This simplifies all the jump generation code.  CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
276e6b5f06 target-i386: introduce gen_prepare_cc
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag.  In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).

However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
bec93d7283 target-i386: introduce CCPrepare
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code.  For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.

[rth: Use ctz with the proper width rather than ffs.]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
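A sketch of what such a descriptor can look like (field names chosen for illustration; the real struct lives in target-i386/translate.c): it records the TCG condition plus the operands needed, so consumers can later emit a setcond, brcond or movcond from it.

  /* Describes how to test one x86 condition code without emitting the
   * test yet. */
  typedef struct CCPrepare {
      TCGCond cond;      /* TCG comparison to apply                   */
      TCGv reg;          /* first operand                             */
      TCGv reg2;         /* second operand, if use_reg2 is set        */
      target_ulong imm;  /* immediate operand otherwise               */
      target_ulong mask; /* mask applied to reg before comparing      */
      bool use_reg2;     /* compare against reg2 rather than imm      */
      bool no_setcond;   /* value is already 0/1; setcond can be skipped */
  } CCPrepare;
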
Paolo Bonzini
c365395e9b target-i386: optimize setcc instructions
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE).  For the others, do it via setcond and gen_setcc_slow (which is
not that slow in many cases).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
be10b289d6 target-i386: optimize setle
And allow gen_setcc_slow to operate on cpu_cc_src.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
2cb4764577 target-i386: optimize setbe
This is looking at EFLAGS, but it can do so more efficiently with
setcond.

Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Paolo Bonzini
1a5c635947 target-i386: change gen_setcc_slow_T0 to gen_setcc_slow
Do not hard code the destination register.

Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
Richard Henderson
06847f1f1a target-i386: convert gen_compute_eflags_c to TCG
Do the switch at translation time, converting the helper templates to
TCG opcodes.  In some cases CF can be computed with a single setcond,
though in others it may require a little more work.

In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.

Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00
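For instance, the single-setcond case mentioned above: after an addition, the carry flag is just an unsigned comparison of the stored result against one addend. A hedged fragment (assumes the translator's cpu_cc_dst/cpu_cc_src globals; 'reg' stands for the destination temporary):

  /* CC_OP_ADDL: cc_dst = a + b (mod 2^32)  =>  CF iff cc_dst < b */
  tcg_gen_setcond_tl(TCG_COND_LTU, reg, cpu_cc_dst, cpu_cc_src);
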
Richard Henderson
8115f11735 target-i386: use inverted setcond when computing NS or NZ
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0.  We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway.  It is
not worthwhile for PF, as usual.

shr+and+xor could be replaced by and+setcond.  I'm not doing it yet.

Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18 15:03:57 -08:00