Add appropriate spaces around operators, and break lines where needed so
that the feature-words array can be introduced without creating overly
long lines.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This allows the APIC to be hotplugged.
* map the APIC's MMIO region at board level if an APIC is present
* do not register an MMIO region for each APIC, since only one is
  used/mapped
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
X86CPU should have a parent bus so that it can provide a bus for the child APIC.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Put APIC_SPACE_SIZE in a public header so that it can be
reused elsewhere later.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The property is used at board level to set the APIC ID for the CPUs it
creates. Do so in a new pc_new_cpu() helper, to be reused for hot-plug.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This helper replaces '_' with '-' in a uniform way.
As a side effect, even custom mappings must use '-' now.
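For illustration, a minimal sketch of such a helper (the feat2prop() name
used here is only an assumption):

    #include <string.h>

    /* Replace every '_' with '-' in place, as described above (sketch). */
    static void feat2prop(char *s)
    {
        while ((s = strchr(s, '_')) != NULL) {
            *s = '-';
        }
    }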
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[AF: Split off; operate on NUL-terminated string rather than '=' delimiter]
Signed-off-by: Andreas Färber <afaerber@suse.de>
get_arch_id() makes it possible for generic code to get a guest-visible
CPU ID without accessing CPUArchState.
If derived classes don't override it, it returns cpu_index.
Override it on target-i386 in X86CPU to return the APIC ID.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Fix EFLAGS corruption by the ROR r8/r16 instruction when it is located at the end of a TB.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Read and write steal time MSR, so that reporting is functional across
migration.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Move CPU creation and feature parsing into a separate cpu_x86_create()
function, so that boards can set board-specific CPU properties before the
CPU is realized.
Keep cpu_x86_init() for compatibility with the code that uses cpu_init()
and doesn't need to modify CPU properties.
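In outline, the resulting split looks roughly like this (a sketch with
simplified signatures and error handling; the exact parameters are an
assumption):

    /* Create the CPU and parse the feature string, but leave it unrealized
     * so the board can still set board-specific properties (sketch). */
    X86CPU *cpu_x86_create(const char *cpu_model, Error **errp);

    /* Compatibility wrapper for cpu_init() users that have no properties
     * to tweak (sketch). */
    X86CPU *cpu_x86_init(const char *cpu_model)
    {
        Error *err = NULL;
        X86CPU *cpu = cpu_x86_create(cpu_model, &err);

        if (err != NULL) {
            error_free(err);
            return NULL;
        }
        /* ... realize the CPU here before returning it ... */
        return cpu;
    }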
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
* Add braces to 'if' statements;
* Remove last TAB character from the source.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Changed whitespace]
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
When the APIC is hotplugged during CPU hotplug, device_set_realized()
calls device_reset() on it. If QEMU runs in KVM mode, the following
call chain will fail:
apic_reset_common()
 -> kvm_apic_vapic_base_update()
 -> kvm_vcpu_ioctl(cpu->kvm_fd,...)
because cpu->kvm_fd is not initialized yet.
cpu->kvm_fd is initialized during qemu_init_vcpu(), but x86_cpu_apic_init()
can't be moved after it because kvm_init_vcpu() -> kvm_arch_reset_vcpu()
relies on the APIC to determine whether the CPU is the BSP when setting the
initial env->mp_state.
So split APIC device creation from its initialization and realize the APIC
after the CPU is created, when it is safe to call the APIC's reset method.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
We were missing a bunch of feature lists. Fix this by simply dumping
the meta list feature_word_info.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
kvm_enabled() cannot be true at this point because accelerators are
initialized much later during init. Also, hiding this makes it very hard
for users to discover. Simply dump unconditionally if CONFIG_KVM is set.
Add explanation for "host" CPU type.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The PCLMULQDQ instruction has been introduced on the Westmere CPU.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A common dependency of the constant's current users:
- hw/apic_common.c
- hw/i386/kvmvapic.c
- target-i386/cpu.c
is "target-i386/cpu.h".
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1363821803-3380-9-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
commit 5ec01c2e96 broke "-cpu ..,enforce",
as it has moved kvm_check_features_against_host() after the
filter_features_for_kvm() call. filter_features_for_kvm() removes all
features not supported by the host, so this effectively made
kvm_check_features_against_host() impossible to fail.
This patch changes the call so we check for host feature support before
filtering the feature bits.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 1364935692-24004-1-git-send-email-ehabkost@redhat.com
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
valids can equal -1 if the reg/mem string is empty. Change the
expression to produce an empty xor mask in that case.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The inner loop should only change the current bit of the result, instead
of the whole result.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
pcmpXstrX instructions in "Equal each" mode force pairs in which both
elements are invalid to true. This means (upper - MAX(valids, validd))
bits should be set to 1, not (upper - MAX(valids, validd) + 1).
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Fix the order of the comparisons to match the "Intel 64 and
IA-32 Architectures Software Developer's Manual".
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
pcmpXstrm instructions return their result in the XMM0 register,
not in the first operand.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
ffs1 returns the first bit set to one, counting from the most
significant bit.
pcmpXstri returns the most significant bit set to one, counting
from the least significant bit.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The "Intel 64 and IA-32 Architectures Software Developer's Manual" (at
least recent versions) clearly says that the comparison is signed.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
gen_op_mov_TN_reg() loads the value in cpu_T[0], so this temporary should
be used instead of cpu_tmp0.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We can compute the value in cpu_dump_state anyway, and gratuitous
modifications to eflags create heisenbugs.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
When starting from CC_OP_DYNAMIC, and issuing adox before adcx,
a typo used the wrong value for the resulting CC_OP.
Cc: Blue Swirl <blauwirbel@gmail.com>
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Fix various typos and misspellings. The bulk of these were found with
codespell.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move it to qom/cpu.h to avoid issues with include order.
Change pc_acpi_smi_interrupt() opaque to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Expose vmstate_cpu as vmstate_x86_cpu and hook it up to CPUClass::vmsd.
Adapt opaques and VMState fields to X86CPU. Drop cpu_{save,load}().
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Merge remote-tracking branch 'mst/tags/for_anthony' into staging
virtio,vhost,pci,e1000
Mostly bugfixes, but also some ICH work by Laszlo.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* mst/tags/for_anthony:
Set virtio-serial device to have a default of 2 MSI vectors.
ICH9 LPC: Reset Control Register, basic implementation
Fix guest OS hang when 64bit PCI bar present
e1000: unbreak the guest network migration to 1.3
vhost: memory sync fixes
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions which reference CPUState
fields given the cpu_env register that TCG targets set up with a
pointer to the CPUArchState struct.
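For illustration, a sketch of how this is used (the macro body and the
CPUState field chosen here are assumptions, purely to show the pattern):

    /* Per-target definition, e.g. for target-i386 (sketch): */
    #define ENV_OFFSET offsetof(X86CPU, env)

    /* Generic TCG code can then reach a CPUState field through cpu_env,
     * which points at the embedded CPUArchState, via a negative offset: */
    TCGv_i32 tmp = tcg_temp_new_i32();
    tcg_gen_ld_i32(tmp, cpu_env, -ENV_OFFSET + offsetof(CPUState, cpu_index));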
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
These correspond very closely to the insns that we're emulating.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This patch addresses the issue fully described here:
http://lists.nongnu.org/archive/html/qemu-devel/2013-02/msg01804.html
Linux kernels prior to 2.6.36 do not disable the PCI device during the
enumeration process. Since the lower and higher parts of a 64-bit BAR
are programmed separately, this leads to QEMU receiving a request to occupy
a completely wrong address region for a short period of time.
We have found that the boot process screws up completely if the kvm-apic
range is overlapped even for a short period of time (it is fine for other
regions though).
This patch raises the priority of the kvm-apic memory region, so it is
never pushed out by PCI devices. The patch is quite safe as it does not
touch the memory manager.
Signed-off-by: Alexey Korolev <akorolex@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting. By discarding CC_OP in
a later flags setting insn, we can discard that movcond.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
Signed-off-by: Richard Henderson <rth@twiddle.net>
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add another slot in ENV and store two of the three inputs. This lets us
do less work when carry-out is not needed, and avoids the unpredictable
CC_OP after translating these insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
Signed-off-by: Richard Henderson <rth@twiddle.net>
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe):

Before                                After
----------------------------------------------------------------------------
movi_i64 tmp1,$0x21                   movi_i64 tmp1,$0x21
movi_i64 cc_src,$0x21                 movi_i64 cc_src,$0x21
sub_i64 cc_dst,rax,tmp1               sub_i64 cc_dst,rax,tmp1
add_i64 tmp7,cc_dst,cc_src
movi_i32 cc_op,$0x11                  movi_i32 cc_op,$0x11
brcond_i64 tmp7,cc_src,leu,$0x0       discard loc11
                                      brcond_i64 rax,cc_src,leu,$0x0

Before                                After
----------------------------------------------------------------------------
mov (%r14),%rbp                       mov (%r14),%rbp
mov %rbp,%rbx                         mov %rbp,%rbx
sub $0x21,%rbx                        sub $0x21,%rbx
lea 0x21(%rbx),%r12
movl $0x11,0xa0(%r14)                 movl $0x11,0xa0(%r14)
movq $0x21,0x90(%r14)                 movq $0x21,0x90(%r14)
mov %rbx,0x98(%r14)                   mov %rbx,0x98(%r14)
cmp $0x21,%r12                      | cmp $0x21,%rbp
jbe ...                               jbe ...
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Placing the CC_OP_DYNAMIC at the join is less effective than
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1),
redo it now also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
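The struct looks roughly like this (a sketch based on the description;
the exact field set is an assumption):

    typedef struct CCPrepare {
        TCGCond cond;      /* TCG condition to evaluate                */
        TCGv reg;          /* first operand of the comparison          */
        TCGv reg2;         /* second operand, used if use_reg2 is true */
        target_ulong imm;  /* immediate operand used otherwise         */
        target_ulong mask; /* mask applied to reg before comparing     */
        bool use_reg2;     /* compare against reg2 instead of imm      */
        bool no_setcond;   /* value is already 0/1, no setcond needed  */
    } CCPrepare;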
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE). For the others, do it via setcond and gen_setcc_slow (which is
not that slow in many cases).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is looking at EFLAGS, but it can do so more efficiently with
setcond.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in other cases it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
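For example, ZF and SF can each be produced with one setcond (a sketch; it
assumes CC_DST was already zero- or sign-extended to the operation width,
and reuses the translator's cpu_tmp0 temporary):

    /* ZF: set iff the last result was zero */
    tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_tmp0, cpu_cc_dst, 0);

    /* SF: set iff the last result was negative */
    tcg_gen_setcondi_tl(TCG_COND_LT, cpu_tmp0, cpu_cc_dst, 0);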
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This gets us universal coverage, rather than scattering discards
around at various places. As a bonus, we do not emit redundant
discards e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes code more similar to the other callers of gen_eob, especially
loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce new functions to extract PF, SF, OF, ZF in addition to CF.
These provide single entry points for optimizing accesses to a single
flag.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the conditional calls to gen_op_set_cc_op go away, and
gen_op_set_cc_op itself gets inlined into its only remaining caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a dirty flag to know whether env->cc_op is up to date,
rather than forcing s->cc_op to DYNAMIC and losing info.
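A sketch of the dirty-flag approach (helper names follow the description
and should be treated as assumptions):

    /* Record the statically known CC_OP without storing it yet. */
    static void set_cc_op(DisasContext *s, CCOp op)
    {
        if (s->cc_op != op) {
            s->cc_op = op;
            s->cc_op_dirty = true;   /* env->cc_op is now stale */
        }
    }

    /* Write cc_op back to env only when someone actually needs it. */
    static void gen_update_cc_op(DisasContext *s)
    {
        if (s->cc_op_dirty) {
            tcg_gen_movi_i32(cpu_cc_op, s->cc_op);
            s->cc_op_dirty = false;
        }
    }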
Signed-off-by: Richard Henderson <rth@twiddle.net>
Before computing flags we need to store the cc_op to memory. Move this
to gen_compute_eflags_c and gen_compute_eflags rather than doing it all
over the place.
Also, after computing the flags in cpu_cc_src we are in EFLAGS mode.
Set s->cc_op and discard cpu_cc_dst in gen_compute_eflags, rather than
doing it all over the place.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Discard CC_DST and set s->cc_op immediately after computing EFLAGS.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always compute EFLAGS first since it is needed whenever
the shift is non-zero, i.e. most of the time. This makes it possible
to remove some writes of CC_OP_EFLAGS to cpu_cc_op and more importantly
removes cases where s->cc_op becomes CC_OP_DYNAMIC. Also, we can
remove cc_tmp and just modify cc_src from within the helper.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Always compute EFLAGS first since it is needed whenever the shift is
non-zero, i.e. most of the time. This makes it possible to remove some
writes of CC_OP_EFLAGS to cpu_cc_op and more importantly removes cases
where s->cc_op becomes CC_OP_DYNAMIC. These are slow and we want to
avoid them: CC_OP_EFLAGS is quite efficient once we paid the initial
cost of computing the flags.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As in the gen_repz_scas/gen_repz_cmps case, delay setting
CC_OP_DYNAMIC in gen_jcc until after code generation. All of
gen_jcc1/is_fast_jcc/gen_setcc_slow_T0 now work on s->cc_op, which makes
things a bit easier to follow and to patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Set it to the appropriate CC_OP_SUBx constant in gen_scas/gen_cmps.
In the repz case it can be overridden to CC_OP_DYNAMIC after generating
the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a function that abstracts extracting an 8, 16, 32 or 64-bit value
with or without sign, generalizing gen_extu and gen_exts.
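A simplified sketch of the helper (using the translator's OT_* size
constants; the 64-bit case is guarded as usual):

    static TCGv gen_ext_tl(TCGv dst, TCGv src, int size, bool sign)
    {
        switch (size) {
        case OT_BYTE:
            if (sign) {
                tcg_gen_ext8s_tl(dst, src);
            } else {
                tcg_gen_ext8u_tl(dst, src);
            }
            return dst;
        case OT_WORD:
            if (sign) {
                tcg_gen_ext16s_tl(dst, src);
            } else {
                tcg_gen_ext16u_tl(dst, src);
            }
            return dst;
    #ifdef TARGET_X86_64
        case OT_LONG:
            if (sign) {
                tcg_gen_ext32s_tl(dst, src);
            } else {
                tcg_gen_ext32u_tl(dst, src);
            }
            return dst;
    #endif
        default:
            /* already full width, nothing to do */
            return src;
        }
    }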
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In order to instantiate a CPU subtype we will need to know which type,
so move the cpu_model splitting into cpu_x86_init().
Parameters need to be set on the X86CPU instance, so move
cpu_x86_parse_featurestr() into cpu_x86_init() as well.
This leaves cpu_x86_register() operating on the model name only.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Consolidate CPU functions in cpu.c.
This allows making cpu_x86_register() static.
No functional changes.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The target-specific ENV_GET_CPU() macros have allowed us to navigate
from CPUArchState to CPUState. The reverse direction was not supported.
Avoid introducing CPU_GET_ENV() macros by adding an untyped pointer
that is initialized in derived instance_init functions.
The field may not be called "env" due to it being poisoned.
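The idea, in a sketch (the env_ptr field name is the one that avoids the
poisoned "env" identifier):

    struct CPUState {
        /* ... */
        void *env_ptr;  /* points to the CPUArchState; untyped because
                           "env" and CPUArchState are poisoned here */
        /* ... */
    };

    /* In a target's instance_init, e.g. x86_cpu_initfn() (sketch): */
    X86CPU *cpu = X86_CPU(obj);
    CPUState *cs = CPU(obj);

    cs->env_ptr = &cpu->env;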
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Adapt the signature of x86_cpu_realize(), hook up to
DeviceClass::realize and set realized = true in cpu_x86_init().
The QOM realizefn cannot depend on errp being non-NULL as in
cpu_x86_init(), so use a local Error to preserve error handling behavior
on APIC initialization errors.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Invoke parent's realizefn]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Use clz32 directly. This makes slightly more sense given
that the input is of type "int" and not type "long".
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Rename the public-facing function cpu_set_log to qemu_set_log. This
requires us to rename the internal-only qemu_set_log() to
do_qemu_set_log().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
CPUs are never added to the composition tree, so delete is achieved
simply by removing the last references to them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Prepares for cpu_interrupt() changing argument to CPUState.
While touching it, rename to x86_cpu_...() now that it takes an X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
* qemu-kvm/uq/master:
target-i386: kvm: prevent buffer overflow if -cpu foo,[x]level is too big
vmxcap: bit 9 of VMX_PROCBASED_CTLS2 is 'virtual interrupt delivery'
Conflicts:
target-i386/kvm.c
Trivial merge resolution due to lack of context.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Stack corruption may occur if too-big 'level' or 'xlevel' values are passed
on the command line with KVM enabled, due to the limited size of cpuid_data
in kvm_arch_init_vcpu().
reproduces with:
qemu -enable-kvm -cpu qemu64,level=4294967295
or
qemu -enable-kvm -cpu qemu64,xlevel=4294967295
Check whether there is space in cpuid_data before passing it to
cpu_x86_cpuid(), and abort() if there is not.
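A sketch of the kind of check added inside kvm_arch_init_vcpu()'s
CPUID-building loop (variable names and the message text come from that
context and are assumptions):

    /* make sure another entry still fits into cpuid_data.entries[] */
    if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
        fprintf(stderr, "unsupported level value: 0x%x\n", limit);
        abort();
    }
    c = &cpuid_data.entries[cpuid_i++];
    cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);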
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Setting tsc-frequency from x86_def_t is a NOP because the default tsc_khz
in x86_def_t is 0 and CPUX86State.tsc_khz is also initialized to 0 by
default. So there is no need to overwrite tsc_khz with the default 0,
because the field is already initialized to 0.
Custom tsc-frequency settings are not affected, since they are applied
without using x86_def_t.
The tsc_khz field in x86_def_t becomes unused with this patch, so drop it
as well.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move custom feature parsing after the built-in cpu_model defaults are set,
and set custom features directly on the CPU instance. That makes a clear
distinction between built-in CPU model defaults, which should eventually go
into class_init(), and extra property setting, which is done after defaults
are set on the CPU instance.
Impl. details:
* use the object_property_parse() property setter so that switching to
  global properties later will be a mechanical change.
* once all current features/properties are converted into static
  properties, it will take a trivial patch to switch to global properties.
  That will allow us to:
  * get the CPU instance initialized with all parameters passed on the
    -cpu ... command line from the object_new() call.
  * call cpu_model/featurestr parsing only once, before CPUs are created.
  * open the road for removing the CPUxxxState.cpu_model_str field, once
    other CPUs are similarly converted to subclasses and static properties.
* re-factor error handling to use Error instead of fprintf()s, since an
  Error is passed in for the property setter anyway.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Commit 8935499831 makes cpuid return the host's vendor value to the guest
instead of the built-in one by default if kvm_enabled() == true, and allows
this behavior to be overridden if 'vendor' is specified on the -cpu command
line.
But in the default case, every time the guest calls cpuid to get the
'vendor' value, the host's value is read again and again.
This complicates the semantics of the vendor property and makes it harder
to use.
Instead of reading the 'vendor' value from the host every time cpuid[vendor]
is called, override the 'vendor' value only once in cpu_x86_find_by_name(),
when the built-in CPU model is found and if (kvm_enabled() == true).
It provides the same default semantics:
if (kvm_enabled() == true) vendor = host's vendor
else vendor = built-in vendor
and then later:
if (custom vendor) vendor = custom vendor
The 'vendor' value is overridden when the user provides it on the -cpu
command line, so there is no need for the vendor_override field anymore;
remove it.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The vendor property setter takes a string as the vendor value, but cpudefs
use uint32_t vendor1/2/3 fields to define the vendor value. This makes it
difficult to unify things and use the property setter for values coming
from cpudefs.
Simplify the code by using the vendor property setter: the vendor1/2/3
fields are converted into a vendor[13] character array to hold the value,
and the vendor property setter is used to access/set the value on the CPU.
- Make the for() loop reusable for the next patch by adding
x86_cpu_vendor_words2str()
Intel's CPUID spec[1] says:
"
5.1.1 ...
These registers contain the ASCII string: GenuineIntel
...
"
List[2] of known vendor values shows that they are all 12 ASCII
characters long, padded where necessary with spaces.
The currently supported values are all ASCII characters packed in
ebx, edx, ecx. So let's state that QEMU supports 12 printable ASCII
characters packed in the ebx, edx, ecx registers for the cpuid(0)
instruction.
*1 - http://www.intel.com/Assets/PDF/appnote/241618.pdf
*2 - http://en.wikipedia.org/wiki/CPUID#EAX.3D0:_Get_vendor_ID
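The packing helper is essentially the following (a sketch; dst must have
room for 13 bytes):

    static void x86_cpu_vendor_words2str(char *dst, uint32_t vendor1,
                                         uint32_t vendor2, uint32_t vendor3)
    {
        int i;

        for (i = 0; i < 4; i++) {
            dst[i]     = vendor1 >> (8 * i);  /* ebx: bytes 0..3  */
            dst[i + 4] = vendor2 >> (8 * i);  /* edx: bytes 4..7  */
            dst[i + 8] = vendor3 >> (8 * i);  /* ecx: bytes 8..11 */
        }
        dst[12] = '\0';
    }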
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It is no longer needed since dropping cpudef config file support.
Cleaning this up removes knowledge about other models from x86_def_t,
in preparation for reusing x86_def_t as intermediate step towards pure
QOM X86CPU subclasses.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Catch NULL name argument early to avoid repeated checks.
Similarly, check for -cpu host early and untangle from iterating through
model definitions. This prepares for introducing X86CPU subclasses.
Signed-off-by: Andreas Färber <afaerber@suse.de>
This keeps compatibility on machine-types pc-1.2 and older, and prints a
warning in case the requested configuration won't get the correct
topology.
I couldn't think of a better way to warn about broken topology when in
compat mode other than using error_report(). The warning message will
probably be buried in a log file somewhere, but it's better than
nothing.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This introduces utility functions for the APIC ID calculation, based on:
Intel® 64 Architecture Processor Topology Enumeration
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration/
The code should be compatible with AMD's "Extended Method" described at:
AMD CPUID Specification (Publication #25481)
Section 3: Multiple Core Calculation
as long as:
- nr_threads is set to 1;
- OFFSET_IDX is assumed to be 0;
- CPUID Fn8000_0008_ECX[ApicIdCoreIdSize[3:0]] is set to
apicid_core_width().
Unit tests included.
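A sketch of the core helpers (function names and structure are assumptions
based on the description above):

    #include <stdint.h>
    #include <glib.h>
    #include "qemu/host-utils.h"   /* clz32(); header path is an assumption */

    /* Number of bits needed to represent 'count' distinct IDs. */
    static unsigned apicid_bitwidth_for_count(unsigned count)
    {
        g_assert(count >= 1);
        return count == 1 ? 0 : 32 - clz32(count - 1);
    }

    /* Bits reserved for the SMT (thread) ID inside the APIC ID. */
    static unsigned apicid_smt_width(unsigned nr_cores, unsigned nr_threads)
    {
        return apicid_bitwidth_for_count(nr_threads);
    }

    /* Bits reserved for the core ID inside the APIC ID. */
    static unsigned apicid_core_width(unsigned nr_cores, unsigned nr_threads)
    {
        return apicid_bitwidth_for_count(nr_cores);
    }

    /* Build an APIC ID from package/core/thread IDs. */
    static uint32_t apicid_from_topo_ids(unsigned nr_cores, unsigned nr_threads,
                                         unsigned pkg_id, unsigned core_id,
                                         unsigned smt_id)
    {
        unsigned core_off = apicid_smt_width(nr_cores, nr_threads);
        unsigned pkg_off = core_off + apicid_core_width(nr_cores, nr_threads);

        return (pkg_id << pkg_off) | (core_id << core_off) | smt_id;
    }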
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This function will be used by both the CPU initialization code and the
fw_cfg table initialization code.
Later this function will be updated to generate APIC IDs according to
the CPU topology.
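At this stage the function is a trivial identity mapping; a sketch:

    #include <stdint.h>

    uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index)
    {
        /* for now the APIC ID simply equals the CPU index; a later change
         * encodes the socket/core/thread topology here */
        return cpu_index;
    }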
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The CPU ID in KVM is supposed to be the APIC ID, so change the
KVM_CREATE_VCPU call to match it. The current behavior didn't break
anything yet because today the APIC ID is assumed to be equal to the CPU
index, but this won't be true in the future.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will allow each architecture to define how the VCPU ID is set on
the KVM_CREATE_VCPU ioctl call.
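For example, on target-i386 the hook boils down to returning the
guest-visible APIC ID (a sketch; the cpuid_apic_id field name is an
assumption):

    unsigned long kvm_arch_vcpu_id(CPUState *cs)
    {
        X86CPU *cpu = X86_CPU(cs);

        /* the VCPU ID passed to KVM_CREATE_VCPU is the APIC ID,
         * not the CPU index */
        return cpu->env.cpuid_apic_id;
    }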
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Currently, the pc-1.4 machine init function enables PV EOI and then
calls the pc-1.2 machine init function. The problem with this approach
is that now we can't enable any additional compatibility code inside the
pc-1.2 init function because it would end up enabling the compatibility
behavior on pc-1.3 and pc-1.4 as well.
This reverses the logic so that the pc-1.2 machine init function will
disable PV EOI, and then call the pc-1.4 machine init function.
This way we can change older machine-types to enable compatibility
behavior, and the newer machine-types (pc-1.3, pc-q35-1.4 and
pc-i440fx-1.4) would just use the default behavior.
(This means that one nice side-effect of this change is that pc-q35-1.4
will get PV EOI enabled by default, too)
It would be interesting to eventually change pc_init_pci_no_kvmclock()
and pc_init_isa() to reuse pc_init_pci_1_2() as well (so we don't need
to duplicate compatibility code on those two functions). But this will
be probably much easier to do after we create a PCInitArgs struct for
the PC initialization arguments, and/or after we use global-properties
to implement the compatibility modes present in pc_init_pci_1_2().
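In outline, the reversed structure looks like this (a sketch; the
disable_kvm_pv_eoi() helper name and the argument type are assumptions):

    /* pc-1.2 and older: turn the newer default back off, then reuse the
     * current machine init function */
    static void pc_init_pci_1_2(QEMUMachineInitArgs *args)
    {
        disable_kvm_pv_eoi();
        pc_init_pci(args);   /* the pc-1.4 machine init */
    }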
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This is a cleanup that tries to solve two small issues:
- We don't need a separate kvm_pv_eoi_features variable just to keep a
constant calculated at compile-time, and this style would require
adding a separate variable (that's declared twice because of the
CONFIG_KVM ifdef) for each feature that's going to be
enabled/disabled by machine-type compat code.
- The pc-1.3 code is setting the kvm_pv_eoi flag on cpuid_kvm_features
even when KVM is disabled at runtime. This small inconsistency in
the cpuid_kvm_features field isn't a problem today because
cpuid_kvm_features is ignored by the TCG code, but it may cause
unexpected problems later when refactoring the CPUID handling code.
This patch eliminates the kvm_pv_eoi_features variable and simply uses
kvm_enabled() inside the enable_kvm_pv_eoi() compat function, so it
enables kvm_pv_eoi only if KVM is enabled. I believe this makes the
behavior of enable_kvm_pv_eoi() clearer and easier to understand.
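The resulting compat function is roughly the following (a sketch; the
kvm_default_features variable name is an assumption):

    void enable_kvm_pv_eoi(void)
    {
        /* only touch the KVM feature defaults when KVM is actually in
         * use; TCG ignores cpuid_kvm_features, so leave it alone there */
        if (kvm_enabled()) {
            kvm_default_features |= (1UL << KVM_FEATURE_PV_EOI);
        }
    }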
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Replace by SYS_BUS_DEVICE() QOM cast macro using a scripted conversion.
Avoids the old macro creeping into new code.
Resolve a Coding Style warning in openpic code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Replace an if statement using magic numbers for breakpoint type with a
more explicit switch statement. This is to aid readability.
Change the return type and force_dr6_update argument type to bool.
While at it, fix Coding Style issues (missing braces).
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
"Go To Statement Considered Harmful" -- E. Dijkstra
To avoid an unnecessary goto within the switch statement, move
watchpoint insertion out of the switch statement. Improves readability.
While at it, fix Coding Style issues (missing braces, indentation).
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
hw_breakpoint_enabled() returned a bit field indicating whether a local
breakpoint and/or global breakpoint was enabled. Avoid this number magic
by using explicit boolean helper functions hw_local_breakpoint_enabled()
and hw_global_breakpoint_enabled(), to aid readability.
Reuse them for the hw_breakpoint_enabled() implementation and change
its return type to bool.
While at it, fix Coding Style issues (missing braces).
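A sketch of the resulting helpers (DR7 keeps two enable bits per
breakpoint: the local bit in the even position, the global bit in the odd
position):

    static inline bool hw_local_breakpoint_enabled(unsigned long dr7, int index)
    {
        return (dr7 >> (index * 2)) & 1;
    }

    static inline bool hw_global_breakpoint_enabled(unsigned long dr7, int index)
    {
        return (dr7 >> (index * 2)) & 2;
    }

    static inline bool hw_breakpoint_enabled(unsigned long dr7, int index)
    {
        return hw_global_breakpoint_enabled(dr7, index) ||
               hw_local_breakpoint_enabled(dr7, index);
    }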
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Implicit use of the dr7 bit fields is a little hard to understand,
so define constants for them and use them consistently.
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
kvm_check_features_against_host() should be called when features can no
longer be changed. Once features are converted to properties, it will be
possible to change them until realize time, so the correct place to call
kvm_check_features_against_host() is x86_cpu_realize().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Freeing resources in one place requires setting 'error' to a non-NULL
value, so add some more error reporting before jumping to the exit
branch.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
No functional change, needed for simplifying conversion to properties.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This adds the following feature words to the list of flags to be checked
by kvm_check_features_against_host():
- cpuid_7_0_ebx_features
- ext4_features
- kvm_features
- svm_features
This will ensure the "enforce" flag works as it should: it won't allow
QEMU to be started unless every flag that was requested by the user or
defined in the CPU model is supported by the host.
This patch may cause QEMU to abort on existing configurations where
"enforce" previously wasn't preventing QEMU from starting. But that's
exactly the point of this patch: if a flag was not supported by the host
and QEMU wasn't aborting, it was a bug in the "enforce" code.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Feature names were taken from the X86_FEATURE_* constants in the Linux
kernel code.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of carrying the CPUID leaf/register and feature name array on
the model_features_t struct, move that information into
feature_word_info so it can be reused by other functions.
The goal is to eventually kill model_features_t entirely, but to do that
we have to either convert x86_def_t.features to an array or use
offsetof() inside FeatureWordInfo (to replace the pointers inside
model_features_t). So for now just move most of the model_features_t
fields to FeatureWordInfo, except for the two pointers to local
arguments.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This introduces a FeatureWord enum, FeatureWordInfo struct (with
generation information about a feature word), and a FeatureWordArray
typedef, and changes add_flagname_to_bitmaps() code and
cpu_x86_parse_featurestr() to use the new typedefs instead of separate
variables for each feature word.
This will help us keep the code at kvm_check_features_against_host(),
cpu_x86_parse_featurestr() and add_flagname_to_bitmaps() sane while
adding new feature name arrays.
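In outline, the new typedefs look like this (a sketch; the enum is
abbreviated and the struct gains CPUID leaf/register information in a
follow-up patch):

    typedef enum FeatureWord {
        FEAT_1_EDX,          /* CPUID[1].EDX         */
        FEAT_1_ECX,          /* CPUID[1].ECX         */
        FEAT_8000_0001_EDX,  /* CPUID[8000_0001].EDX */
        FEAT_8000_0001_ECX,  /* CPUID[8000_0001].ECX */
        /* ... SVM, KVM, CPUID[7].EBX, ... */
        FEATURE_WORDS
    } FeatureWord;

    typedef uint32_t FeatureWordArray[FEATURE_WORDS];

    typedef struct FeatureWordInfo {
        const char **feat_names;   /* one name per bit in the word */
    } FeatureWordInfo;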
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
KVM_CAP_PV_MMU capability reporting was removed from the kernel in
v2.6.33 (see commit a68a6a7282373), and the feature was completely removed
from the kernel in v3.3 (see commit fb92045843). It doesn't make sense to keep
it enabled by default, as it would cause unnecessary hassle when using
the "enforce" flag.
This disables kvm_mmu on all machine-types. With this fix, the possible
scenarios when migrating from QEMU <= 1.3 to QEMU 1.4 are:
------------+----------+----------------------------------------------------
src kernel | dst kern.| Result
------------+----------+----------------------------------------------------
>= 2.6.33 | any | kvm_mmu was already disabled and will stay disabled
<= 2.6.32 | >= 3.3 | correct live migration is impossible
<= 2.6.32 | <= 3.2 | kvm_mmu will be disabled on next guest reboot *
------------+----------+----------------------------------------------------
* If they are running kernel <= 2.6.32 and want kvm_mmu to be kept
enabled on guest reboot, they can explicitly add +kvm_mmu to the QEMU
command-line. Using 2.6.33 and higher, it is not possible to enable
kvm_mmu explicitly anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.
Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.
Move common parts of mips cpu_state_reset() to mips_cpu_reset().
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
To facilitate the field movements, pass MIPSCPU to malta_mips_config();
avoid that for mips_cpu_map_tc() since callers only access MIPS Thread
Contexts, inside TCG helpers.
Signed-off-by: Andreas Färber <afaerber@suse.de>
((pde & 0x1fe000) << 19) holds bits 39:32 of the final physical address, and
we shouldn't use uint32_t to calculate it. Convert the type to hwaddr to fix
this problem.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Since cpudef config is not supported anymore and all remaining sources
now always set x86_def_t.vendor[123] fields, remove setting default
vendor to simplify future re-factoring.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>