Merge remote-tracking branch 'mst/tags/for_anthony' into staging
virtio,vhost,pci,e1000
Mostly bugfixes, but also some ICH work by Laszlo.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 28 Feb 2013 07:13:56 AM CST using RSA key ID D28D5469
# gpg: Can't check signature: public key not found
# By Michael S. Tsirkin (2) and others
# Via Michael S. Tsirkin
* mst/tags/for_anthony:
Set virtio-serial device to have a default of 2 MSI vectors.
ICH9 LPC: Reset Control Register, basic implementation
Fix guest OS hang when 64bit PCI bar present
e1000: unbreak the guest network migration to 1.3
vhost: memory sync fixes
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions referencing CPUState
fields, given that the cpu_env register TCG targets set up points to
the CPUArchState struct.
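A minimal sketch of the intended use (illustrative only; the per-target
definition is abridged and 'some_field' is a hypothetical field name):
ENV_OFFSET is the offset of the env member inside the per-target CPU
object, so generic code can reach CPUState fields at a negative offset
from cpu_env:

    /* Per-target definition (sketch): env's offset in the CPU object. */
    #define ENV_OFFSET offsetof(X86CPU, env)

    /* Generic code (sketch): cpu_env points at CPUArchState, so
     * CPUState fields sit below it at negative offsets. */
    tcg_gen_ld_i32(tmp, cpu_env,
                   -ENV_OFFSET + offsetof(CPUState, some_field));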
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
These correspond very closely to the insns that we're emulating.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This patch addresses the issue fully described here:
http://lists.nongnu.org/archive/html/qemu-devel/2013-02/msg01804.html
Linux kernels prior to 2.6.36 do not disable the PCI device during
the enumeration process. Since the lower and upper halves of a 64-bit
BAR are programmed separately, this leads to QEMU receiving a request
to occupy a completely wrong address region for a short period of time.
We have found that the boot process fails completely if the kvm-apic
range is overlapped even for a short period of time (it is fine for
other regions, though).
This patch raises the priority of the kvm-apic memory region so that
it is never pushed out by PCI devices. The patch is quite safe, as it
does not touch the memory manager.
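The shape of the fix, as a hedged sketch (the actual call site is the
APIC mapping code; the region variable and the priority value here are
illustrative):

    /* Sketch: map the kvm-apic MMIO region with a priority higher
     * than any PCI BAR mapping, so a transiently mis-programmed
     * 64-bit BAR can never shadow it. */
    memory_region_add_subregion_overlap(get_system_memory(),
                                        APIC_DEFAULT_ADDRESS,
                                        apic_mmio_region,
                                        0x1000 /* illustrative priority */);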
Signed-off-by: Alexey Korolev <akorolex@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting. By discarding CC_OP in
a later flags-setting insn, we can discard that movcond.
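The conditional update looks roughly like this (a sketch; variable
names are illustrative): x86 leaves the flags untouched when the shift
count is zero, so the new CC_OP is selected branch-free:

    /* Sketch: pick between the old and new CC_OP without a branch. */
    TCGv_i32 zero   = tcg_const_i32(0);
    TCGv_i32 old_op = tcg_const_i32(s->cc_op);
    TCGv_i32 new_op = tcg_const_i32(CC_OP_SHLB + ot);
    tcg_gen_movcond_i32(TCG_COND_NE, cpu_cc_op,
                        count, zero,      /* if count != 0 ...       */
                        new_op, old_op);  /* ... new CC_OP, else old */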
Signed-off-by: Richard Henderson <rth@twiddle.net>
We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
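For bsf/bsr, the "destination unchanged when the source is zero"
behaviour becomes a movcond instead of a local branch (a sketch; names
are illustrative and the bit-index computation is omitted):

    /* Sketch: branch-free bsf/bsr write-back.  'result' holds the
     * bit index already computed; if src was zero, keep old value. */
    TCGv zero = tcg_const_tl(0);
    tcg_gen_movcond_tl(TCG_COND_EQ, dest,
                       src, zero,   /* if src == 0 ...            */
                       old_dest,    /* ... keep previous contents */
                       result);     /* else take the bit index    */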
Signed-off-by: Richard Henderson <rth@twiddle.net>
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add another slot in ENV and store two of the three inputs. This lets us
do less work when carry-out is not needed, and avoids the unpredictable
CC_OP after translating these insns.
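The arithmetic that makes two of the three inputs sufficient: given
the result and two of the addends, the carry-out is recoverable
without the third. A self-contained illustration in plain C (not QEMU
code):

    #include <stdint.h>
    #include <stdio.h>

    /* Carry-out of a + b + cin, recovered from a, b and the result r:
     * a bit generates a carry iff both inputs are set there, or at
     * least one is set and the corresponding result bit is clear. */
    static int carry_out(uint64_t a, uint64_t b, uint64_t r)
    {
        return ((a & b) | ((a | b) & ~r)) >> 63;
    }

    int main(void)
    {
        uint64_t a = UINT64_MAX, b = 0, cin = 1;
        uint64_t r = a + b + cin;                 /* wraps to 0 */
        printf("CF = %d\n", carry_out(a, b, r));  /* prints CF = 1 */
        return 0;
    }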
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
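Schematically (a sketch with illustrative names, not the exact
helpers), the change turns implicit env slots into ordinary arguments,
which TCG can satisfy from host registers:

    /* Before (sketch): inputs travel through env, forcing spills. */
    void helper_op(CPUX86State *env);   /* reads env->... internally */

    /* After (sketch): inputs are explicit; no env round-trip. */
    target_ulong helper_op(target_ulong src1, target_ulong src2);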
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
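With properly typed parameters, the helper can later be flagged as
having no side effects, letting TCG move or drop calls whose results
are unused. A sketch of the eventual declaration shape
(DEF_HELPER_FLAGS_4 and TCG_CALL_NO_RWG_SE are QEMU's macros; treat
the exact signature as illustrative):

    /* helper.h sketch: a pure flags-computation helper; no global
     * reads/writes, no side effects. */
    DEF_HELPER_FLAGS_4(cc_compute_all, TCG_CALL_NO_RWG_SE,
                       tl, tl, tl, tl, int)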
Signed-off-by: Richard Henderson <rth@twiddle.net>
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes
again, or before the translation block ends, because gen_prepare_cc
will always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp-local
will (almost) never be spilled.
Example (cmp $0x21,%rax + jbe):
Before (TCG ops):
    movi_i64 tmp1,$0x21
    movi_i64 cc_src,$0x21
    sub_i64 cc_dst,rax,tmp1
    add_i64 tmp7,cc_dst,cc_src
    movi_i32 cc_op,$0x11
    brcond_i64 tmp7,cc_src,leu,$0x0

After (TCG ops):
    movi_i64 tmp1,$0x21
    movi_i64 cc_src,$0x21
    sub_i64 cc_dst,rax,tmp1
    movi_i32 cc_op,$0x11
    discard loc11
    brcond_i64 rax,cc_src,leu,$0x0

Before (host code):
    mov (%r14),%rbp
    mov %rbp,%rbx
    sub $0x21,%rbx
    lea 0x21(%rbx),%r12
    movl $0x11,0xa0(%r14)
    movq $0x21,0x90(%r14)
    mov %rbx,0x98(%r14)
    cmp $0x21,%r12
    jbe ...

After (host code):
    mov (%r14),%rbp
    mov %rbp,%rbx
    sub $0x21,%rbx
    movl $0x11,0xa0(%r14)
    movq $0x21,0x90(%r14)
    mov %rbx,0x98(%r14)
    cmp $0x21,%rbp
    jbe ...
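The mechanics, roughly (a sketch; the tree names the temp cpu_cc_srcT,
but treat the surrounding details as illustrative):

    /* Sketch: preserve the LHS of sub/cmp in a temp-local. */
    TCGv lhs = tcg_temp_local_new();        /* survives across brcond */
    tcg_gen_mov_tl(lhs, src1);              /* save LHS before the sub */
    tcg_gen_sub_tl(cpu_cc_dst, src1, src2); /* cc_dst = result         */
    tcg_gen_mov_tl(cpu_cc_src, src2);       /* cc_src = RHS            */
    /* Later consumers use 'lhs' directly instead of re-deriving it
     * with add_tl(tmp, cpu_cc_dst, cpu_cc_src). */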
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Placing CC_OP_DYNAMIC at the join is less effective than placing it
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of
gen_jcc1); redo it now, also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
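The struct looks roughly like this (a sketch of the fields introduced
by this series):

    /* Describes how to test one x86 condition code with a single
     * setcond/brcond/movcond, without emitting anything yet. */
    typedef struct CCPrepare {
        TCGCond cond;       /* TCG condition to apply            */
        TCGv reg;           /* first operand                     */
        TCGv reg2;          /* second operand, if use_reg2       */
        target_ulong imm;   /* immediate operand otherwise       */
        target_ulong mask;  /* mask to apply to reg first, or -1 */
        bool use_reg2;      /* compare against reg2, not imm     */
        bool no_setcond;    /* value already 0/1; skip setcond   */
    } CCPrepare;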
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reconstruct the arguments for complex conditions involving CC_OP_SUBx
(BE, L, LE). For the others, do it via setcond and gen_setcc_slow
(which is not that slow in many cases).
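For example, after a sub/cmp the LHS equals cc_dst + cc_src, so JCC_BE
(unsigned <=) can be tested on the reconstructed operands directly
(a sketch; names are illustrative):

    /* Sketch: prepare JCC_BE when cc_op is CC_OP_SUBx.
     * After sub: cc_dst = lhs - rhs, cc_src = rhs. */
    TCGv lhs = tcg_temp_new();
    tcg_gen_add_tl(lhs, cpu_cc_dst, cpu_cc_src);  /* rebuild the LHS */
    /* BE holds iff lhs <= rhs, unsigned: */
    tcg_gen_setcond_tl(TCG_COND_LEU, dest, lhs, cpu_cc_src);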
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is looking at EFLAGS, but it can do so more efficiently with
setcond.
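The pattern, as a sketch (illustrative, not the exact call site):
testing one EFLAGS bit with setcond rather than shift/mask arithmetic:

    /* Sketch: reg = (CF != 0), with EFLAGS live in cc_src
     * (the CC_OP_EFLAGS convention). */
    tcg_gen_andi_tl(reg, cpu_cc_src, CC_C);         /* isolate CF       */
    tcg_gen_setcondi_tl(TCG_COND_NE, reg, reg, 0);  /* normalize to 0/1 */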
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in others it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
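For instance, after an add the carry flag is just an unsigned
comparison: the result wrapped below an addend iff a carry occurred.
Sketch (the identity is standard; treat the exact variables as
illustrative):

    /* Sketch: CF for CC_OP_ADDx.  cc_dst = result, cc_src = addend. */
    tcg_gen_setcond_tl(TCG_COND_LTU, reg, cpu_cc_dst, cpu_cc_src);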
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
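Sketch of the two setcond forms (illustrative; the sub-word sign
handling is omitted):

    /* Sketch: ZF and SF straight from cc_dst. */
    tcg_gen_setcondi_tl(TCG_COND_EQ, zf, cpu_cc_dst, 0);  /* ZF: result == 0 */
    tcg_gen_setcondi_tl(TCG_COND_LT, sf, cpu_cc_dst, 0);  /* SF: sign bit    */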
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This gets us universal coverage, rather than scattering discards
around various places. As a bonus, we do not emit redundant
discards, e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the code more similar to the other callers of gen_eob,
especially loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change, it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>