The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
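A sketch of the kind of guard this removes (the exact call sites vary; the call shown is illustrative):

    /* before: breakpoint handling compiled out for targets
     * lacking the define */
    #if defined(TARGET_HAS_ICE)
        cpu_breakpoint_insert(cs, pc, BP_GDB, NULL);
    #endif
    /* after: the call is simply unconditional */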
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
This patch forbids crossing a page boundary in replay mode, because
it can cause an exception. Crossing is permitted only when the
boundary is crossed by the first instruction in a block: if the
current instruction has already crossed it, that is fine, because
an exception has not stopped this code.
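Roughly, in the x86 translator's main loop (a sketch; names follow target-i386/translate.c of the time, and 15 is the maximum x86 instruction length):

    if ((tb->cflags & CF_USE_ICOUNT)
        && ((pc_ptr & TARGET_PAGE_MASK)
            != ((pc_ptr + 15) & TARGET_PAGE_MASK)
            || (pc_ptr & ~TARGET_PAGE_MASK) == 0)) {
        /* The next instruction may cross the page boundary: end the
         * block here so it becomes the first instruction of the next
         * block instead. */
        gen_jmp_im(pc_ptr - dc->cs_base);
        gen_eob(dc);
        break;
    }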
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the xsaves-related definitions; also add the corresponding parts
to kvm_get/put and to the vmstate.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These represent xsave-related capabilities of the processor, and KVM may
or may not support them.
Add feature bits so that they are considered by "-cpu ...,enforce", and use
the new feature word instead of calling kvm_arch_get_supported_cpuid.
Bit 3 (XSAVES) is not migratable because it requires saving MSR_IA32_XSS.
Neither KVM nor any commonly available hardware supports it anyway.
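A sketch of the new feature word entry, assuming the shape of the feature_word_info table in target-i386/cpu.c:

    [FEAT_XSAVE] = {
        .feat_names = { "xsaveopt", "xsavec", "xgetbv1", "xsaves",
                        /* remaining bits unnamed */ },
        .cpuid_eax = 0xd,
        .cpuid_needs_ecx = true, .cpuid_ecx = 1,
        .cpuid_reg = R_EAX,
        .tcg_features = 0,   /* none of these are emulated by TCG */
    },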
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
sipi_vector is an int; it is shifted by 12 and passed as a 64-bit value,
which makes Coverity think that we wanted (uint64_t)sipi_vector << 12.
But actually it must be between 0 and 255. Make this explicit.
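A sketch of making the range explicit; do_cpu_sipi_example() is a hypothetical stand-in for the real call site:

    /* The vector is architecturally an 8-bit value, so give it that
     * type; (0..255) << 12 then trivially fits in an int. */
    static void do_cpu_sipi_example(CPUX86State *env, uint8_t sipi_vector)
    {
        cpu_x86_load_seg_cache(env, R_CS, sipi_vector << 8,
                               sipi_vector << 12,
                               env->segs[R_CS].limit,
                               env->segs[R_CS].flags);
        env->eip = 0;
    }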
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Disable SVM by default on all CPU models when in KVM mode.
Nested SVM is enabled by default in the KVM kernel module, but it is
probably less stable than nested VMX (which is already disabled by
default).
Add a new compat function, x86_cpu_compat_kvm_no_autodisable(), to keep
compatibility with previous machine-types.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The x86_cpu_compat_disable_kvm_features() name was a bit confusing, as
it won't forcibly disable the feature for all CPU models (i.e. add it to
kvm_default_unset_features), but it will instead turn off the KVM
auto-enabling of the feature (i.e. remove it from kvm_default_features),
meaning the feature may still be enabled by default in some CPU models.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This patch introduces the cpu_set_fpuc() function, which changes the fpuc
field of the CPU state and calls update_fp_status().
These calls update the status of the softfloat library and prevent bugs
caused by incoherent rounding settings between the FPU and softfloat.
v2 changes:
* Added missing calls and introduced the setter function (as suggested by TeLeMan)
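A minimal sketch of the setter:

    void cpu_set_fpuc(CPUX86State *env, uint16_t val)
    {
        env->fpuc = val;
        /* keep softfloat's rounding mode and precision in sync */
        update_fp_status(env);
    }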
Reviewed-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
The MTRR state in KVM currently runs completely independent of the
QEMU state in CPUX86State.mtrr_*. This means that on migration, the
target loses MTRR state from the source. Generally that's ok though
because KVM ignores it and maps everything as write-back anyway. The
exception to this rule is when we have an assigned device and an IOMMU
that doesn't promote NoSnoop transactions from that device to be cache
coherent. In that case KVM trusts the guest mapping of memory as
configured in the MTRR.
This patch updates kvm_get|put_msrs() so that we retrieve the actual
vCPU MTRR settings and therefore keep CPUX86State synchronized for
migration. kvm_put_msrs() is also used on vCPU reset and therefore
allows future modifications of MTRR state at reset to be realized.
Note that the entries array used by both functions was already
slightly undersized for holding every possible MSR, so this patch
increases it beyond the 28 new entries necessary for MTRR state.
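A sketch of the kvm_put_msrs() side (helper and field names as used by target-i386/kvm.c at the time; n and i come from the surrounding function):

    if (has_msr_mtrr) {
        kvm_msr_entry_set(&msrs[n++], MSR_MTRRdefType, env->mtrr_deftype);
        kvm_msr_entry_set(&msrs[n++], MSR_MTRRfix64K_00000,
                          env->mtrr_fixed[0]);
        /* ... the remaining fixed-range MTRR MSRs ... */
        for (i = 0; i < MSR_MTRRcap_VCNT; i++) {
            kvm_msr_entry_set(&msrs[n++], MSR_MTRRphysBase(i),
                              env->mtrr_var[i].base);
            kvm_msr_entry_set(&msrs[n++], MSR_MTRRphysMask(i),
                              env->mtrr_var[i].mask);
        }
    }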
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We currently define the number of variable range MTRR registers as 8
in the CPUX86State structure and vmstate, but use MSR_MTRRcap_VCNT
(also 8) to report to guests the number available. Change this to
use MSR_MTRRcap_VCNT consistently.
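In CPUX86State this amounts to (a sketch):

    /* one definition drives both the array size and the value
     * reported to guests in MSR_MTRRcap */
    MTRRVarRange mtrr_var[MSR_MTRRcap_VCNT];   /* MSR_MTRRcap_VCNT == 8 */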
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Expose "Invariant TSC" flag, if KVM is enabled. From Intel documentation:
17.13.1 Invariant TSC The time stamp counter in newer processors may
support an enhancement, referred to as invariant TSC. Processor’s
support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-.
and T-states. This is the architectural behavior moving forward. On
processors with invariant TSC support, the OS may use the TSC for wall
clock timer services (instead of ACPI or HPET timers). TSC reads are
much more efficient and do not incur the overhead associated with a ring
transition or access to a platform resource.
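The bit can be probed with QEMU's host_cpuid() helper (a sketch; CPUID_APM_INVTSC is the new macro mentioned below):

    uint32_t eax, ebx, ecx, edx;

    host_cpuid(0x80000007, 0, &eax, &ebx, &ecx, &edx);
    if (edx & CPUID_APM_INVTSC) {   /* CPUID.80000007H:EDX[8] */
        /* the TSC runs at a constant rate in all P-, C- and T-states */
    }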
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
[ehabkost: redo feature filtering to use .tcg_features]
[ehabkost: add CPUID_APM_INVTSC macro, add it to .unmigratable_flags]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
* remotes/bonzini/softmmu-smap: (33 commits)
target-i386: cleanup x86_cpu_get_phys_page_debug
target-i386: fix protection bits in the TLB for SMEP
target-i386: support long addresses for 4MB pages (PSE-36)
target-i386: raise page fault for reserved bits in large pages
target-i386: unify reserved bits and NX bit check
target-i386: simplify pte/vaddr calculation
target-i386: raise page fault for reserved physical address bits
target-i386: test reserved PS bit on PML4Es
target-i386: set correct error code for reserved bit access
target-i386: introduce support for 1 GB pages
target-i386: introduce do_check_protect label
target-i386: tweak handling of PG_NX_MASK
target-i386: commonize checks for PAE and non-PAE
target-i386: commonize checks for 4MB and 4KB pages
target-i386: commonize checks for 2MB and 4KB pages
target-i386: fix coding standards in x86_cpu_handle_mmu_fault
target-i386: simplify SMAP handling in MMU_KSMAP_IDX
target-i386: fix kernel accesses with SMAP and CPL = 3
target-i386: move check_io helpers to seg_helper.c
target-i386: rename KSMAP to KNOSMAP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not use this MMU index at all if CR4.SMAP is false, and drop
the SMAP check from x86_cpu_handle_mmu_fault.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With SMAP, implicit kernel accesses from user mode always behave as
if AC=0. To do this, kernel mode is no longer a separate MMU mode.
Instead, KERNEL_IDX is renamed to KSMAP_IDX and the kernel mode accessors
wrap KSMAP_IDX and KNOSMAP_IDX.
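The resulting MMU-index selection looks roughly like this:

    static inline int cpu_mmu_index(CPUX86State *env)
    {
        /* user mode keeps its own index; kernel mode picks the SMAP or
         * no-SMAP variant depending on EFLAGS.AC */
        return (env->hflags & HF_CPL_MASK) == 3 ? MMU_USER_IDX :
            ((env->hflags & HF_SMAP_MASK) && (env->eflags & AC_MASK))
            ? MMU_KNOSMAP_IDX : MMU_KSMAP_IDX;
    }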
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no reason to keep that out of the function. The comment refers
to the disassembler's cc_op state rather than the CPUState field.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CS.RPL is not equal to the CPL in the few instructions between
setting CR0.PE and reloading CS. We get this right in the common
case, because writes to CR0 do not modify the CPL, but it would
not be enough if an SMI comes exactly during that brief period.
Were this to happen, the RSM instruction would erroneously set
CPL to the low two bits of the real-mode selector; and if they are
not 00, the next instruction fetch cannot access the code segment
and causes a triple fault.
However, SS.DPL *is* always equal to the CPL. In real processors
(AMD only) there is a weird case of SYSRET setting SS.DPL=SS.RPL
from the STAR register while forcing CPL=3, but we do not emulate
that.
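So the CPL can safely be read back from the cached SS descriptor (a sketch):

    /* SS.DPL always mirrors the CPL, even between setting CR0.PE and
     * reloading CS */
    int cpl = (env->segs[R_SS].flags >> DESC_DPL_SHIFT) & 3;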
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
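The generic handling then reduces to something like this in cpu_exec() (a sketch):

    if (interrupt_request & CPU_INTERRUPT_RESET) {
        cpu_reset(cpu);   /* pulls the INIT#/reset pin of this vCPU */
    }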
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most MSRs, plus the FPU, MMX, MXCSR, XMM and YMM registers should not
be zeroed on INIT (Table 9-1 in the Intel SDM). Copy them out of
CPUX86State and back in, instead of special casing env->pat.
The relevant fields are already consecutive except PAT and SMBASE.
However:
- KVM and Hyper-V MSRs should be reset because they include memory
locations written by the hypervisor. These MSRs are moved together
at the end of the preserved area.
- SVM state can be moved out of the way since it is written by VMRUN.
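A sketch of the resulting INIT path, assuming start_init_save/end_init_save markers delimiting the preserved fields in CPUX86State (cs and env as in the surrounding function):

    CPUX86State *save = g_new(CPUX86State, 1);

    *save = *env;                      /* stash the whole state */
    cpu_reset(cs);                     /* architectural reset */
    memcpy(&env->start_init_save, &save->start_init_save,
           offsetof(CPUX86State, end_init_save) -
           offsetof(CPUX86State, start_init_save));
    g_free(save);                      /* preserved fields restored */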
Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
BND0-3, BNDCFGU, BNDCFGS, BNDSTATUS were not zeroed on reset, but they
should be (Intel Instruction Set Extensions Programming Reference
319433-015, pages 9-4 and 9-6). Same for YMM.
XCR0 should be reset to 1.
TSC and TSC_RESET were zeroed already by the memset, remove the explicit
assignments.
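In x86_cpu_reset() this comes down to (a sketch):

    /* MPX state is architecturally zeroed on reset ... */
    memset(env->bnd_regs, 0, sizeof(env->bnd_regs));
    memset(&env->bndcs_regs, 0, sizeof(env->bndcs_regs));
    env->msr_bndcfgs = 0;

    /* ... and XCR0 resets to 1 (only x87 state enabled) */
    env->xcr0 = 1;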
Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
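A simplified sketch of the check inside cpu_x86_load_seg_cache() (the real code also handles real mode and VM86):

    if (seg_reg == R_CS) {
        /* in protected mode the low two bits of the CS selector are
         * the new CPL */
        env->hflags = (env->hflags & ~HF_CPL_MASK)
                      | ((selector & 3) << HF_CPL_SHIFT);
    }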
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add 'U' suffixes where necessary to avoid (1 << 31) which
shifts left into the sign bit, which is undefined behaviour.
Add the suffix also for other constants in the same groupings
even if they don't shift into bit 31, for consistency.
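For example (the macro names are illustrative):

    #define EXAMPLE_FLAG_HI (1U << 31)  /* 1 << 31 would shift into the
                                           sign bit of int: undefined */
    #define EXAMPLE_FLAG_LO (1U << 0)   /* 'U' kept for consistency */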
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
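The typical before/after in a target's reset handler (CPUFooState and first_field_after_common are placeholders):

    /* before: implicitly tied to the position of the breakpoints field */
    memset(env, 0, offsetof(CPUFooState, breakpoints));
    /* after: clear up to the first per-target field following
     * CPU_COMMON (or use sizeof(CPUFooState) when there is none) */
    memset(env, 0, offsetof(CPUFooState, first_field_after_common));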
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of the feature-specific disable_kvm_pv_eoi() function, create a
more general function that can be used to disable other feature bits in
machine-type compat code.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Default to false.
Tidy variable naming and inline cast usage while at it.
Tested-by: Jia Liu <proljc@gmail.com> (or32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
* remotes/qmp-unstable/queue/qmp: (32 commits)
qapi: Add missing null check to opts_start_struct()
qapi: Clean up superfluous null check in qapi_dealloc_type_str()
qapi: Clean up null checking in generated visitors
qapi: Drop unused code in qapi-commands.py
qapi: Drop nonsensical header guard in generated qapi-visit.c
qapi: Fix licensing of scripts
tests/qapi-schema: Cover flat union types
tests/qapi-schema: Cover union types with base
tests/qapi-schema: Cover complex types with base
tests/qapi-schema: Cover anonymous union types
tests/qapi-schema: Cover simple argument types
tests/qapi-schema: Cover optional command arguments
tests/qapi-schema: Actually check successful QMP command response
monitor: Remove left-over code in do_info_profile.
qerror: Improve QERR_DEVICE_NOT_ACTIVE message
qmp: Check for returned data from __json_read in get_events
dump: add 'query-dump-guest-memory-capability' command
Define the architecture for compressed dump format
dump: make kdump-compressed format available for 'dump-guest-memory'
dump: add API to write dump pages
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we restore the mxcsr register with FXRSTOR, or set it with gdb,
we need to update the various SSE status flags in CPUX86State.
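A sketch of the helper, following the MXCSR layout (SSE_* masks as defined in target-i386/cpu.h; both flags are applied to sse_status here):

    void update_mxcsr_status(CPUX86State *env)
    {
        uint32_t mxcsr = env->mxcsr;
        int rnd_type;

        /* rounding mode from MXCSR.RC */
        switch (mxcsr & SSE_RC_MASK) {
        default:
        case SSE_RC_NEAR:
            rnd_type = float_round_nearest_even;
            break;
        case SSE_RC_DOWN:
            rnd_type = float_round_down;
            break;
        case SSE_RC_UP:
            rnd_type = float_round_up;
            break;
        case SSE_RC_CHOP:
            rnd_type = float_round_to_zero;
            break;
        }
        set_float_rounding_mode(rnd_type, &env->sse_status);

        /* denormals-are-zero and flush-to-zero */
        set_flush_inputs_to_zero((mxcsr & SSE_DAZ) ? 1 : 0,
                                 &env->sse_status);
        set_flush_to_zero((mxcsr & SSE_FZ) ? 1 : 0, &env->sse_status);
    }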
Reported-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
http://msdn.microsoft.com/en-us/library/windows/hardware/ff541625%28v=vs.85%29.aspx
This code is generic; it activates either the reference time counter or
the virtual reference time stamp counter.
Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* qemu-kvm/uq/master:
kvm: always update the MPX model specific register
KVM: fix addr type for KVM_IOEVENTFD
KVM: Retry KVM_CREATE_VM on EINTR
mempath prefault: fix off-by-one error
kvm: x86: Separately write feature control MSR on reset
roms: Flush icache when writing roms to guest memory
target-i386: clear guest TSC on reset
target-i386: do not special case TSC writeback
target-i386: Intel MPX
Conflicts:
exec.c
aliguori: fix trivial merge conflict in exec.c
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
This code motion prepares for the subsequent refactoring of the vCPU APIC.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add some MPX-related definitions, and hardcode the sizes and offsets
of xsave features 3 and 4. Also add the corresponding parts to
kvm_get/put_xsave and to the vmstate.
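The hardcoded layout, in uint32_t words into the XSAVE area (a sketch; byte offsets are four times these values):

    #define XSAVE_XSTATE_BV   128   /* byte 512: start of XSAVE header */
    #define XSAVE_YMMH_SPACE  144   /* byte 576: AVX (feature 2) */
    #define XSAVE_BNDREGS     240   /* byte 960: MPX BND0-3 (feature 3) */
    #define XSAVE_BNDCSR      256   /* byte 1024: MPX BNDCFGU/BNDSTATUS
                                       (feature 4) */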
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On KVM, KVM_SET_XSAVE would be executed with a zero xstate_bv
and would not restore anything.
Since FP and SSE data are always valid, set them in xstate_bv at reset
time. In fact, that value is the same that KVM_GET_XSAVE returns on
pre-XSAVE hosts.
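In x86_cpu_reset() (a sketch, using the XSTATE_FP/XSTATE_SSE macros of the time):

    /* FP and SSE state is always valid, so advertise it in XSTATE_BV */
    env->xstate_bv = XSTATE_FP | XSTATE_SSE;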
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
* qemu-kvm/uq/master:
kvm-stub: fix compilation
kvm: shorten the parameter list for get_real_device()
kvm: i386: fix LAPIC TSC deadline timer save/restore
kvm-all.c: max_cpus should not exceed KVM vcpu limit
kvm: Simplify kvm_handle_io
kvm: x86: fix setting IA32_FEATURE_CONTROL with nested VMX disabled
kvm: add KVM_IRQFD_FLAG_RESAMPLE support
kvm: migrate vPMU state
target-i386: remove tabs from target-i386/cpu.h
Initialize IA32_FEATURE_CONTROL MSR in reset and migration
Conflicts:
target-i386/cpu.h
target-i386/kvm.c
aliguori: fixup trivial conflicts due to whitespace and added cpu
argument
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
- Since the hyperv_* helper functions are used only in target-i386/kvm.c,
move them there as static helpers.
Requested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The recent KVM patch adds IA32_FEATURE_CONTROL support. QEMU needs
to clear this MSR on vCPU reset and to preserve its value across
migration. This patch adds this feature.
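A sketch of the two pieces (field and subsection names are assumptions):

    /* x86_cpu_reset(): the MSR starts out clear */
    env->msr_ia32_feature_control = 0;

    /* machine.c: an optional subsection carries it across migration */
    static const VMStateDescription vmstate_msr_ia32_feature_control = {
        .name = "cpu/msr_ia32_feature_control",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(env.msr_ia32_feature_control, X86CPU),
            VMSTATE_END_OF_LIST()
        }
    };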
Signed-off-by: Arthur Chunqi Li <yzt356@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Where no extra implementation is needed, fall back to CPUClass::set_pc().
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
The functions cpu_clone_regs() and cpu_set_tls() are not purely CPU
related -- they are specific to the TLS ABI for a particular OS.
Move them into the linux-user/ tree where they belong.
target-lm32 had entirely unused implementations, since it has no
linux-user target; just drop them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>