We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
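For reference, a minimal standalone C model (not the actual TCG helpers) of
the semantics involved: lzcnt's CF/ZF outputs, and the branchless select that
a movcond-based bsf performs instead of a local branch. Names are illustrative.

    #include <stdint.h>
    #include <stdbool.h>

    /* lzcnt: CF is set when the source is zero, ZF when the result is zero. */
    static uint32_t model_lzcnt32(uint32_t src, bool *cf, bool *zf)
    {
        uint32_t res = src ? __builtin_clz(src) : 32;
        *cf = (src == 0);
        *zf = (res == 0);
        return res;
    }

    /* bsf: keep the old destination when the source is zero, selected
     * without a branch -- this is what the movcond expresses in TCG. */
    static uint32_t model_bsf32(uint32_t src, uint32_t old_dst, bool *zf)
    {
        uint32_t cnt = src ? __builtin_ctz(src) : 0;
        *zf = (src == 0);
        return src ? cnt : old_dst;
    }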
Signed-off-by: Richard Henderson <rth@twiddle.net>
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add another slot in ENV and store two of the three inputs. This lets us
do less work when the carry-out is not needed, and avoid the unpredictable
CC_OP after translating these insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
Signed-off-by: Richard Henderson <rth@twiddle.net>
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe):
        Before                                 After
 ----------------------------------------------------------------------------
 movi_i64 tmp1,$0x21                    movi_i64 tmp1,$0x21
 movi_i64 cc_src,$0x21                  movi_i64 cc_src,$0x21
 sub_i64 cc_dst,rax,tmp1                sub_i64 cc_dst,rax,tmp1
 add_i64 tmp7,cc_dst,cc_src
 movi_i32 cc_op,$0x11                   movi_i32 cc_op,$0x11
 brcond_i64 tmp7,cc_src,leu,$0x0        discard loc11
                                        brcond_i64 rax,cc_src,leu,$0x0

        Before                                 After
 ----------------------------------------------------------------------------
 mov    (%r14),%rbp                     mov    (%r14),%rbp
 mov    %rbp,%rbx                       mov    %rbp,%rbx
 sub    $0x21,%rbx                      sub    $0x21,%rbx
 lea    0x21(%rbx),%r12
 movl   $0x11,0xa0(%r14)                movl   $0x11,0xa0(%r14)
 movq   $0x21,0x90(%r14)                movq   $0x21,0x90(%r14)
 mov    %rbx,0x98(%r14)                 mov    %rbx,0x98(%r14)
 cmp    $0x21,%r12                    | cmp    $0x21,%rbp
 jbe    ...                             jbe    ...
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Placing the CC_OP_DYNAMIC at the join is less effective than
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1);
redo it now, also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
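As a rough sketch of the shape such a struct can take (field names are
illustrative, not necessarily the final ones; the TCGCond, TCGv and
target_ulong types are the existing QEMU/TCG ones): a condition code, one or
two source operands, plus an immediate/mask and flags describing how to emit
the comparison.

    typedef struct CCPrepare {
        TCGCond cond;        /* TCG condition to test */
        TCGv reg;            /* first comparison operand */
        TCGv reg2;           /* second operand, if use_reg2 is set */
        target_ulong imm;    /* immediate operand otherwise */
        target_ulong mask;   /* mask to apply to reg, or -1 */
        bool use_reg2;       /* compare against reg2 rather than imm */
        bool no_setcond;     /* value is already 0/1, no setcond needed */
    } CCPrepare;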
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE). For the others, do it via setcond and gen_setcc_slow (which is
not that slow in many cases).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is looking at EFLAGS, but it can do so more efficiently with
setcond.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in others it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
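As an illustration of why a single setcond is often enough, here is a
standalone C model under the usual convention that cc_dst holds the result
and cc_src the second operand (this is a sketch, not the translator code):

    #include <stdint.h>
    #include <stdbool.h>

    /* add: the result wrapped around iff it is below either operand,
     * so CF is one unsigned comparison (setcond LTU). */
    static bool cf_after_add32(uint32_t cc_dst, uint32_t cc_src)
    {
        return cc_dst < cc_src;
    }

    /* sub: reconstruct src1 = dst + src2; borrow iff src1 < src2. */
    static bool cf_after_sub32(uint32_t cc_dst, uint32_t cc_src)
    {
        return (uint32_t)(cc_dst + cc_src) < cc_src;
    }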
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
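The setcond-based computation amounts to the following (standalone model,
assuming cc_dst already holds the properly sized result):

    #include <stdint.h>
    #include <stdbool.h>

    static bool zf_from_dst32(uint32_t cc_dst)
    {
        return cc_dst == 0;            /* setcond EQ against zero */
    }

    static bool sf_from_dst32(uint32_t cc_dst)
    {
        return (int32_t)cc_dst < 0;    /* setcond LT against zero (signed) */
    }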
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This gets us universal coverage, rather than scattering discards
around at various places. As a bonus, we do not emit redundant
discards e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This makes code more similar to the other callers of gen_eob, especially
loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce new functions to extract PF, SF, OF, ZF in addition to CF.
These provide single entry points for optimizing accesses to a single
flag.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the conditional calls to gen_op_set_cc_op go away, and
gen_op_set_cc_op itself gets inlined into its only remaining caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a dirty flag to know whether env->cc_op is up to date,
rather than forcing s->cc_op to DYNAMIC and losing info.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Before computing flags we need to store the cc_op to memory. Move this
to gen_compute_eflags_c and gen_compute_eflags rather than doing it all
over the place.
Also, after computing the flags in cpu_cc_src we are in EFLAGS mode.
Set s->cc_op and discard cpu_cc_dst in gen_compute_eflags, rather than
doing it all over the place.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Discard CC_DST and set s->cc_op immediately after computing EFLAGS.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always compute EFLAGS first since it is needed whenever
the shift is non-zero, i.e. most of the time. This makes it possible
to remove some writes of CC_OP_EFLAGS to cpu_cc_op and more importantly
removes cases where s->cc_op becomes CC_OP_DYNAMIC. Also, we can
remove cc_tmp and just modify cc_src from within the helper.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Always compute EFLAGS first since it is needed whenever the shift is
non-zero, i.e. most of the time. This makes it possible to remove some
writes of CC_OP_EFLAGS to cpu_cc_op and more importantly removes cases
where s->cc_op becomes CC_OP_DYNAMIC. These are slow and we want to
avoid them: CC_OP_EFLAGS is quite efficient once we paid the initial
cost of computing the flags.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As in the gen_repz_scas/gen_repz_cmps case, delay setting
CC_OP_DYNAMIC in gen_jcc until after code generation. All of
gen_jcc1/is_fast_jcc/gen_setcc_slow_T0 now work on s->cc_op, which makes
things a bit easier to follow and to patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Set it to the appropriate CC_OP_SUBx constant in gen_scas/gen_cmps.
In the repz case it can be overridden to CC_OP_DYNAMIC after generating
the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce a function that abstracts extracting an 8, 16, 32 or 64-bit value
with or without sign, generalizing gen_extu and gen_exts.
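A plain-C model of what such a generalized extraction helper computes (the
real function operates on TCG values; the size encoding and names here are
assumptions for illustration):

    #include <stdint.h>
    #include <stdbool.h>

    /* size_log2: 0 = 8-bit, 1 = 16-bit, 2 = 32-bit, 3 = 64-bit */
    static uint64_t ext_val(uint64_t src, int size_log2, bool sign)
    {
        switch (size_log2) {
        case 0:
            return sign ? (uint64_t)(int64_t)(int8_t)src : (uint8_t)src;
        case 1:
            return sign ? (uint64_t)(int64_t)(int16_t)src : (uint16_t)src;
        case 2:
            return sign ? (uint64_t)(int64_t)(int32_t)src : (uint32_t)src;
        default:
            return src;
        }
    }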
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In order to instantiate a CPU subtype we will need to know which type,
so move the cpu_model splitting into cpu_x86_init().
Parameters need to be set on the X86CPU instance, so move
cpu_x86_parse_featurestr() into cpu_x86_init() as well.
This leaves cpu_x86_register() operating on the model name only.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Consolidate CPU functions in cpu.c.
This allows making cpu_x86_register() static.
No functional changes.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The target-specific ENV_GET_CPU() macros have allowed us to navigate
from CPUArchState to CPUState. The reverse direction was not supported.
Avoid introducing CPU_GET_ENV() macros by adding an untyped pointer
that is initialized in derived instance_init functions.
The field may not be called "env" due to it being poisoned.
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Adapt the signature of x86_cpu_realize(), hook up to
DeviceClass::realize and set realized = true in cpu_x86_init().
The QOM realizefn cannot depend on errp being non-NULL as in
cpu_x86_init(), so use a local Error to preserve error handling behavior
on APIC initialization errors.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Invoke parent's realizefn]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Use clz32 directly, which makes slightly more sense given
that the input is of type "int" and not type "long".
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Rename the public-facing function cpu_set_log to qemu_set_log. This
requires us to rename the internal-only qemu_set_log() to
do_qemu_set_log().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
CPUs are never added to the composition tree, so delete is achieved
simply by removing the last references to them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Prepares for cpu_interrupt() changing argument to CPUState.
While touching it, rename to x86_cpu_...() now that it takes an X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
* qemu-kvm/uq/master:
target-i386: kvm: prevent buffer overflow if -cpu foo, [x]level is too big
vmxcap: bit 9 of VMX_PROCBASED_CTLS2 is 'virtual interrupt delivery'
Conflicts:
target-i386/kvm.c
Trivial merge resolution due to lack of context.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Stack corruption may occur if too-big 'level' or 'xlevel' values are passed
on the command line with KVM enabled, due to the limited size of cpuid_data
in kvm_arch_init_vcpu().
This reproduces with:
qemu -enable-kvm -cpu qemu64,level=4294967295
or
qemu -enable-kvm -cpu qemu64,xlevel=4294967295
Check whether there is space left in cpuid_data before passing it to
cpu_x86_cpuid(), and abort() if there is not.
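A standalone illustration of the failure mode and of the kind of guard added
(the array size, names and error message are placeholders; the real code
lives in kvm_arch_init_vcpu()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define MAX_CPUID_ENTRIES 100

    static struct { uint32_t eax; } entries[MAX_CPUID_ENTRIES];

    static void fill_cpuid_entries(uint32_t limit)
    {
        uint32_t i, n = 0;

        for (i = 0; i <= limit; i++) {      /* limit may be 0xffffffff */
            if (n == MAX_CPUID_ENTRIES) {   /* guard: stop before overflow */
                fprintf(stderr, "unsupported level value: 0x%x\n", limit);
                abort();
            }
            entries[n++].eax = i;
        }
    }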
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Setting tsc-frequency from x86_def_t is a NOP because the default tsc_khz
in x86_def_t is 0 and CPUX86State.tsc_khz is also initialized to 0 by
default, so there is no need to overwrite tsc_khz with the default 0.
A custom tsc-frequency setting is not affected, because it is set without
using x86_def_t.
The tsc_khz field in x86_def_t becomes unused with this patch, so drop it
as well.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move custom features parsing to after the built-in cpu_model defaults are
set, and set custom features directly on the CPU instance. That allows
making a clear distinction between built-in cpu model defaults that
eventually should go into class_init() and the extra property setting that
is done after the defaults are set on the CPU instance.
Impl. details:
 * use the object_property_parse() property setter, so it will be a
   mechanical change to switch to global properties later.
 * after all current features/properties are converted into static
   properties, it will take a trivial patch to switch to global properties,
   which will allow us to:
     * get the CPU instance initialized with all parameters passed on the
       -cpu ... command line from the object_new() call.
     * call cpu_model/featurestr parsing only once, before CPUs are created.
     * open a road for removing the CPUxxxState.cpu_model_str field, when
       other CPUs are similarly converted to subclasses and static
       properties.
 - re-factor error handling to use Error instead of fprintf()s, since an
   Error is passed in for the property setter anyway.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Commit 8935499831 makes cpuid return the host's vendor value to the guest
instead of the built-in one by default when kvm_enabled() == true, and
allows overriding this behavior if 'vendor' is specified on the -cpu
command line.
But every time the guest calls cpuid to get the 'vendor' value, the host's
value is read again and again in the default case.
This complicates the semantics of the vendor property and makes it harder
to use.
Instead of reading the 'vendor' value from the host every time
cpuid[vendor] is called, override the 'vendor' value only once in
cpu_x86_find_by_name(), when the built-in CPU model is found and
if (kvm_enabled() == true).
It provides the same default semantics
if (kvm_enabled() == true) vendor = host's vendor
else vendor = built-in vendor
and then later:
if (custom vendor) vendor = custom vendor
The 'vendor' value is overridden when the user provides it on the -cpu
command line, so there is no need for the vendor_override field anymore;
remove it.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The vendor property setter takes a string as the vendor value, but cpudefs
use uint32_t vendor[123] fields to define the vendor value. That makes it
difficult to unify and use the property setter for values from cpudefs.
Simplify the code by using the vendor property setter: the vendor[123]
fields are converted into a vendor[13] array to keep their value, and the
vendor property setter is used to access/set the value on the CPU.
- Make the for() cycle reusable for the next patch by adding
  x86_cpu_vendor_words2str()
Intel's CPUID spec[1] says:
"
5.1.1 ...
These registers contain the ASCII string: GenuineIntel
...
"
The list[2] of known vendor values shows that they are all 12 ASCII
characters long, padded where necessary with spaces.
The currently supported values are all ASCII characters packed in
ebx, edx, ecx. So let's state that QEMU supports 12 printable ASCII
characters packed in the ebx, edx, ecx registers for the cpuid(0)
instruction.
*1 - http://www.intel.com/Assets/PDF/appnote/241618.pdf
*2 - http://en.wikipedia.org/wiki/CPUID#EAX.3D0:_Get_vendor_ID
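As a plain-C sketch of the packing rule stated above (cpuid(0) returns the
12 characters in ebx, edx, ecx, four bytes per register, least significant
byte first; e.g. "GenuineIntel" is ebx=0x756e6547, edx=0x49656e69,
ecx=0x6c65746e); the helper shape is an assumption for illustration:

    #include <stdint.h>

    static void vendor_words2str(char dst[13], uint32_t ebx, uint32_t edx,
                                 uint32_t ecx)
    {
        uint32_t regs[3] = { ebx, edx, ecx };
        int i;

        for (i = 0; i < 12; i++) {
            dst[i] = (regs[i / 4] >> (8 * (i % 4))) & 0xff;
        }
        dst[12] = '\0';
    }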
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It is no longer needed since cpudef config file support was dropped.
Cleaning this up removes knowledge about other models from x86_def_t,
in preparation for reusing x86_def_t as an intermediate step towards pure
QOM X86CPU subclasses.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Catch NULL name argument early to avoid repeated checks.
Similarly, check for -cpu host early and untangle from iterating through
model definitions. This prepares for introducing X86CPU subclasses.
Signed-off-by: Andreas Färber <afaerber@suse.de>
This keeps compatibility on machine-types pc-1.2 and older, and prints a
warning in case the requested configuration won't get the correct
topology.
I couldn't think of a better way to warn about broken topology when in
compat mode other than using error_report(). The warning message will
probably be buried in a log file somewhere, but it's better than
nothing.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This introduces utility functions for the APIC ID calculation, based on:
Intel® 64 Architecture Processor Topology Enumeration
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration/
The code should be compatible with AMD's "Extended Method" described at:
AMD CPUID Specification (Publication #25481)
Section 3: Multiple Core Calculation
as long as:
- nr_threads is set to 1;
- OFFSET_IDX is assumed to be 0;
- CPUID Fn8000_0008_ECX[ApicIdCoreIdSize[3:0]] is set to
apicid_core_width().
Unit tests included.
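The gist of the enumeration method, as a standalone sketch (function names
are illustrative; the real utility functions and unit tests are in the
patch): the APIC ID packs the SMT ID in the lowest bits, then the core ID,
then the package ID, each field as wide as ceil(log2(count)).

    #include <stdint.h>

    static unsigned apicid_width(unsigned count)   /* ceil(log2(count)) */
    {
        unsigned w = 0;

        while ((1u << w) < count) {
            w++;
        }
        return w;
    }

    static uint32_t apicid_from_topo(unsigned nr_cores, unsigned nr_threads,
                                     unsigned pkg_id, unsigned core_id,
                                     unsigned smt_id)
    {
        unsigned smt_width = apicid_width(nr_threads);
        unsigned core_width = apicid_width(nr_cores);

        return (pkg_id << (core_width + smt_width)) |
               (core_id << smt_width) |
               smt_id;
    }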
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This function will be used by both the CPU initialization code and the
fw_cfg table initialization code.
Later this function will be updated to generate APIC IDs according to
the CPU topology.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The CPU ID in KVM is supposed to be the APIC ID, so change the
KVM_CREATE_VCPU call to match it. The current behavior didn't break
anything yet because today the APIC ID is assumed to be equal to the CPU
index, but this won't be true in the future.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will allow each architecture to define how the VCPU ID is set on
the KVM_CREATE_VCPU ioctl call.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Currently, the pc-1.4 machine init function enables PV EOI and then
calls the pc-1.2 machine init function. The problem with this approach
is that now we can't enable any additional compatibility code inside the
pc-1.2 init function because it would end up enabling the compatibility
behavior on pc-1.3 and pc-1.4 as well.
This reverses the logic so that the pc-1.2 machine init function will
disable PV EOI, and then call the pc-1.4 machine init function.
This way we can change older machine-types to enable compatibility
behavior, and the newer machine-types (pc-1.3, pc-q35-1.4 and
pc-i440fx-1.4) would just use the default behavior.
(This means that one nice side-effect of this change is that pc-q35-1.4
will get PV EOI enabled by default, too)
It would be interesting to eventually change pc_init_pci_no_kvmclock()
and pc_init_isa() to reuse pc_init_pci_1_2() as well (so we don't need
to duplicate compatibility code on those two functions). But this will
be probably much easier to do after we create a PCInitArgs struct for
the PC initialization arguments, and/or after we use global-properties
to implement the compatibility modes present in pc_init_pci_1_2().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This is a cleanup that tries to solve two small issues:
- We don't need a separate kvm_pv_eoi_features variable just to keep a
constant calculated at compile-time, and this style would require
adding a separate variable (that's declared twice because of the
CONFIG_KVM ifdef) for each feature that's going to be
enabled/disabled by machine-type compat code.
- The pc-1.3 code is setting the kvm_pv_eoi flag on cpuid_kvm_features
even when KVM is disabled at runtime. This small inconsistency in
the cpuid_kvm_features field isn't a problem today because
cpuid_kvm_features is ignored by the TCG code, but it may cause
unexpected problems later when refactoring the CPUID handling code.
This patch eliminates the kvm_pv_eoi_features variable and simply uses
kvm_enabled() inside the enable_kvm_pv_eoi() compat function, so it
enables kvm_pv_eoi only if KVM is enabled. I believe this makes the
behavior of enable_kvm_pv_eoi() clearer and easier to understand.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Replace it with the SYS_BUS_DEVICE() QOM cast macro, using a scripted
conversion. This avoids the old macro creeping into new code.
Resolve a Coding Style warning in openpic code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Replace an if statement using magic numbers for breakpoint type with a
more explicit switch statement. This is to aid readability.
Change the return type and force_dr6_update argument type to bool.
While at it, fix Coding Style issues (missing braces).
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
"Go To Statement Considered Harmful" -- E. Dijkstra
To avoid an unnecessary goto within the switch statement, move
watchpoint insertion out of the switch statement. Improves readability.
While at it, fix Coding Style issues (missing braces, indentation).
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
hw_breakpoint_enabled() returned a bit field indicating whether a local
breakpoint and/or global breakpoint was enabled. Avoid this number magic
by using explicit boolean helper functions hw_local_breakpoint_enabled()
and hw_global_breakpoint_enabled(), to aid readability.
Reuse them for the hw_breakpoint_enabled() implementation and change
its return type to bool.
While at it, fix Coding Style issues (missing braces).
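A sketch of what the new helpers check, based on the architectural DR7
layout (bit 2n is the local-enable bit and bit 2n+1 the global-enable bit
for breakpoint n); the actual signatures in cpu.h may differ:

    #include <stdbool.h>
    #include <stdint.h>

    static bool hw_local_breakpoint_enabled(uint64_t dr7, int index)
    {
        return (dr7 >> (index * 2)) & 1;
    }

    static bool hw_global_breakpoint_enabled(uint64_t dr7, int index)
    {
        return (dr7 >> (index * 2 + 1)) & 1;
    }

    static bool hw_breakpoint_enabled(uint64_t dr7, int index)
    {
        return hw_global_breakpoint_enabled(dr7, index) ||
               hw_local_breakpoint_enabled(dr7, index);
    }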
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Implicit use of the dr7 bit fields is a little hard to understand,
so define constants for them and use them consistently.
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
kvm_check_features_against_host() should be called when features can't
be changed anymore. Once features are converted to properties, it will be
possible to change them until realize time, so the correct place to call
kvm_check_features_against_host() is x86_cpu_realize().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Freeing resources in one place would require setting 'error' to a
non-NULL value, so add some more error reporting before jumping to the
exit branch.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
No functional change, needed for simplifying conversion to properties.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This adds the following feature words to the list of flags to be checked
by kvm_check_features_against_host():
- cpuid_7_0_ebx_features
- ext4_features
- kvm_features
- svm_features
This will ensure the "enforce" flag works as it should: it won't allow
QEMU to be started unless every flag that was requested by the user or
defined in the CPU model is supported by the host.
This patch may cause QEMU to abort on existing configurations where
"enforce" previously wasn't preventing QEMU from being started. But that's
exactly the point of this patch: if a flag was not supported by the host
and QEMU wasn't aborting, it was a bug in the "enforce" code.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Feature names were taken from the X86_FEATURE_* constants in the Linux
kernel code.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of carrying the CPUID leaf/register and feature name array on
the model_features_t struct, move that information into
feature_word_info so it can be reused by other functions.
The goal is to eventually kill model_features_t entirely, but to do that
we have to either convert x86_def_t.features to an array or use
offsetof() inside FeatureWordInfo (to replace the pointers inside
model_features_t). So for now just move most of the model_features_t
fields to FeatureWordInfo except for the two pointers to local
arguments.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This introduces a FeatureWord enum, FeatureWordInfo struct (with
generation information about a feature word), and a FeatureWordArray
typedef, and changes add_flagname_to_bitmaps() code and
cpu_x86_parse_featurestr() to use the new typedefs instead of separate
variables for each feature word.
This will help us keep the code at kvm_check_features_against_host(),
cpu_x86_parse_featurestr() and add_flagname_to_bitmaps() sane while
adding new feature name arrays.
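Roughly, the new typedefs look like this (abridged sketch; the actual
enumerators and FeatureWordInfo fields are defined in the patch itself):

    #include <stdint.h>

    typedef enum FeatureWord {
        FEAT_1_EDX,              /* CPUID[1].EDX */
        FEAT_1_ECX,              /* CPUID[1].ECX */
        FEAT_8000_0001_EDX,      /* CPUID[8000_0001].EDX */
        FEAT_8000_0001_ECX,      /* CPUID[8000_0001].ECX */
        /* ... */
        FEATURE_WORDS,
    } FeatureWord;

    typedef uint32_t FeatureWordArray[FEATURE_WORDS];

    typedef struct FeatureWordInfo {
        const char *feat_names[32];  /* one name per bit in the word */
    } FeatureWordInfo;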
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
KVM_CAP_PV_MMU capability reporting was removed from the kernel as of
v2.6.33 (see commit a68a6a7282373), and the feature was completely removed
from the kernel as of v3.3 (see commit fb92045843). It doesn't make sense to keep
it enabled by default, as it would cause unnecessary hassle when using
the "enforce" flag.
This disables kvm_mmu on all machine-types. With this fix, the possible
scenarios when migrating from QEMU <= 1.3 to QEMU 1.4 are:
------------+----------+----------------------------------------------------
src kernel | dst kern.| Result
------------+----------+----------------------------------------------------
>= 2.6.33 | any | kvm_mmu was already disabled and will stay disabled
<= 2.6.32 | >= 3.3 | correct live migration is impossible
<= 2.6.32 | <= 3.2 | kvm_mmu will be disabled on next guest reboot *
------------+----------+----------------------------------------------------
* Users running a kernel <= 2.6.32 who want kvm_mmu to be kept
enabled on guest reboot can explicitly add +kvm_mmu to the QEMU
command line. On 2.6.33 and higher, it is not possible to enable
kvm_mmu explicitly anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.
Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.
Move common parts of mips cpu_state_reset() to mips_cpu_reset().
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
To facilitate the field movements, pass MIPSCPU to malta_mips_config();
avoid that for mips_cpu_map_tc() since callers only access MIPS Thread
Contexts, inside TCG helpers.
Signed-off-by: Andreas Färber <afaerber@suse.de>
((pde & 0x1fe000) << 19) gives bits 39:32 of the final physical address,
and we shouldn't use uint32_t to calculate it. Convert the type to hwaddr
to fix this problem.
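A standalone illustration of the truncation being fixed (a page-directory
entry with bits 20:13 all set; the real code uses hwaddr for the
intermediate value):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t pde = 0x001fe000;                         /* bits 20:13 set */
        uint32_t wrong = (pde & 0x1fe000) << 19;           /* truncated: 0 */
        uint64_t right = (uint64_t)(pde & 0x1fe000) << 19; /* 0xff00000000 */

        printf("wrong=0x%08x right=0x%010llx\n",
               wrong, (unsigned long long)right);
        return 0;
    }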
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Since cpudef config is not supported anymore and all remaining sources
now always set the x86_def_t.vendor[123] fields, remove setting the
default vendor to simplify future re-factoring.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
When CPU properties are implemented, ext2_features may change
between object_new(CPU) and cpu_realize_fn(). Sanitizing
ext2_features for AMD-based CPUs at realize() time will keep the
current behavior after CPU features are converted to properties.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>