The function tcg_gen_lshift() is unused; remove it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* remotes/bonzini/softmmu-smap: (33 commits)
target-i386: cleanup x86_cpu_get_phys_page_debug
target-i386: fix protection bits in the TLB for SMEP
target-i386: support long addresses for 4MB pages (PSE-36)
target-i386: raise page fault for reserved bits in large pages
target-i386: unify reserved bits and NX bit check
target-i386: simplify pte/vaddr calculation
target-i386: raise page fault for reserved physical address bits
target-i386: test reserved PS bit on PML4Es
target-i386: set correct error code for reserved bit access
target-i386: introduce support for 1 GB pages
target-i386: introduce do_check_protect label
target-i386: tweak handling of PG_NX_MASK
target-i386: commonize checks for PAE and non-PAE
target-i386: commonize checks for 4MB and 4KB pages
target-i386: commonize checks for 2MB and 4KB pages
target-i386: fix coding standards in x86_cpu_handle_mmu_fault
target-i386: simplify SMAP handling in MMU_KSMAP_IDX
target-i386: fix kernel accesses with SMAP and CPL = 3
target-i386: move check_io helpers to seg_helper.c
target-i386: rename KSMAP to KNOSMAP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* remotes/kvm/uq/master:
kvm: Fix eax for cpuid leaf 0x40000000
kvmclock: Ensure proper env->tsc value for kvmclock_current_nsec calculation
kvm: Enable -cpu option to hide KVM
kvm: Ensure negative return value on kvm_init() error handling path
target-i386: set CC_OP to CC_OP_EFLAGS in cpu_load_eflags
target-i386: get CPL from SS.DPL
target-i386: rework CPL checks during task switch, preparing for next patch
target-i386: fix segment flags for SMM and VM86 mode
target-i386: Fix vm86 mode regression introduced in fd460606fd.
kvm_stat: allow choosing between tracepoints and old stats
kvmclock: Ensure time in migration never goes backward
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
User pages must be marked as non-executable when running under SMEP;
otherwise, fetching the page first and then calling it will fail.
With this patch, all SMEP testcases in kvm-unit-tests now pass.
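A minimal sketch of the idea in standalone C (the constants below are
local stand-ins, not QEMU's PAGE_*/PG_* macros): when CR4.SMEP is set and
the translation is done for a supervisor access, a user page must lose
execute permission in the TLB entry so that a later instruction fetch
from it faults.

    /* Illustrative sketch only, not the QEMU TLB-fill code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4
    #define PTE_USER   (1u << 2)    /* U/S bit in the page-table entry */

    static int tlb_prot(uint64_t pte, bool supervisor_access, bool cr4_smep)
    {
        int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;   /* from the PTE */

        /* SMEP: supervisor code must never execute from a user page. */
        if (supervisor_access && cr4_smep && (pte & PTE_USER)) {
            prot &= ~PAGE_EXEC;
        }
        return prot;
    }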
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
4MB pages can use 40-bit addresses by putting the upper 8 address bits
(39-32) in bits 20-13 of the PDE. Bit 21 is reserved.
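A rough sketch of the resulting address computation (standalone C, not
the QEMU code; the exact upper width depends on the CPU's MAXPHYADDR):

    /* Illustrative only: PSE-36 style 4MB-page physical address. */
    #include <stdint.h>

    static uint64_t pse36_phys_addr(uint32_t pde, uint32_t vaddr)
    {
        uint64_t paddr = pde & 0xffc00000u;              /* PDE bits 31-22 */
        paddr |= (uint64_t)((pde >> 13) & 0xff) << 32;   /* PDE bits 20-13 */
        return paddr | (vaddr & 0x3fffffu);              /* page offset    */
    }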
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the tail of the PAE case, so that we can use "goto" in the
next patch to jump to the protection checks.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Do not use this MMU index at all if CR4.SMAP is false, and drop
the SMAP check from x86_cpu_handle_mmu_fault.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With SMAP, implicit kernel accesses from user mode always behave as
if AC=0. To do this, kernel mode is no longer a separate MMU mode.
Instead, KERNEL_IDX is renamed to KSMAP_IDX and the kernel-mode accessors
wrap KSMAP_IDX and KNOSMAP_IDX.
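A hedged sketch of the selection logic (self-contained C; the names
mirror but are not the actual QEMU macros): an explicit kernel access
skips the SMAP check when CR4.SMAP is clear or when a supervisor-mode
caller has AC=1, while an implicit kernel access from user mode (CPL=3)
always gets the checked index, i.e. behaves as if AC=0.

    /* Illustrative only, not the QEMU implementation. */
    #include <stdbool.h>

    enum { MMU_KSMAP_IDX, MMU_KNOSMAP_IDX, MMU_USER_IDX };

    static int mmu_index_kernel(bool cr4_smap, int cpl, bool eflags_ac)
    {
        if (!cr4_smap) {
            return MMU_KNOSMAP_IDX;     /* SMAP disabled: nothing to check */
        }
        if (cpl < 3 && eflags_ac) {
            return MMU_KNOSMAP_IDX;     /* supervisor caller with AC=1 */
        }
        return MMU_KSMAP_IDX;           /* user-mode caller acts as AC=0 */
    }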
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will soon collect all load and store helpers. For now it is just a
replacement for softmmu_exec.h, which this patch stops including
directly; we also include the new header wherever the next patch will
need it, in order to simplify that patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They do not need to be in op_helper.c. Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.
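The underlying idiom, as a generic sketch (file and macro names are
illustrative, not QEMU's): a template header without include guards is
pulled in repeatedly, and a per-pass macro elides the definitions that
must appear only once.

    /* size_template.h -- generic sketch, not softmmu_template.h */
    #ifndef SECOND_PASS
    static inline unsigned io_read_stub(unsigned addr) { return addr; }
    #endif
    /* ...size-parameterized load/store helpers follow on every pass... */

    /* user.c */
    #include "size_template.h"     /* first pass: io_read_stub is defined */
    #define SECOND_PASS
    #include "size_template.h"     /* second pass: io_read_stub is elided */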
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since Linux kernel 3.5, KVM has documented eax for leaf 0x40000000
to be KVM_CPUID_FEATURES:
57c22e5f35
But qemu still sets it to 0; it would be better to make qemu and kvm
consistent. This patch fixes that.
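A hedged sketch of the fix (simplified, not the actual QEMU
kvm_arch_init_vcpu(); the two constants come from the Linux UAPI header
asm/kvm_para.h, and kvm_base is the relocatable signature base mentioned
in the bracketed note below):

    #include <stdint.h>

    #define KVM_CPUID_SIGNATURE 0x40000000u   /* values from Linux UAPI */
    #define KVM_CPUID_FEATURES  0x40000001u

    struct cpuid_entry { uint32_t function, eax, ebx, ecx, edx; };

    /* Sketch only: the signature leaf should advertise the features leaf
     * in eax instead of the hard-coded 0 that qemu used to write. */
    static void fill_signature_leaf(struct cpuid_entry *c, uint32_t kvm_base)
    {
        c->function = kvm_base;                  /* usually 0x40000000 */
        c->eax = KVM_CPUID_FEATURES | kvm_base;  /* was: c->eax = 0;   */
        /* ebx/ecx/edx carry the "KVMKVMKVM\0\0\0" signature (not shown). */
    }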
Signed-off-by: Jidong Xiao <jidong.xiao@gmail.com>
[Include kvm_base in the value. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The latest Nvidia driver (337.88) specifically checks for KVM as the
hypervisor and reports Code 43 for the driver in a Windows guest when
found. Removing or changing the KVM signature is sufficient for the
driver to load and work. This patch adds an option to easily allow
the KVM hypervisor signature to be hidden using '-cpu kvm=off'. We
continue to expose KVM via the cpuid value by default. The state of
this option does not supersede or replace -enable-kvm or the accel=kvm
machine option. This only changes the visibility of KVM to the guest
and paravirtual features specifically tied to the KVM cpuid.
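For context, here is roughly what such a hypervisor check looks like
from inside a guest (a standalone sketch using the GCC/Clang <cpuid.h>
intrinsic, not the Nvidia or QEMU code); '-cpu kvm=off' makes this
signature disappear:

    #include <cpuid.h>
    #include <stdbool.h>
    #include <string.h>

    /* Sketch only: read the hypervisor signature from CPUID leaf 0x40000000. */
    static bool running_on_kvm(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char sig[13] = { 0 };

        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        return strcmp(sig, "KVMKVMKVM") == 0;
    }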
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
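A generic sketch of the pattern (the file names here are illustrative,
not QEMU's actual include/exec/ wrappers): the helper list stays in a
single X-macro style header, and each small wrapper defines the macro
one way before including it, so a source file rebuilds only when its own
wrapper's expansion changes.

    /* helper.h -- the single list of helpers (illustrative) */
    DEF_HELPER(foo)
    DEF_HELPER(bar)

    /* helper-proto.h -- one wrapper: expand the list into prototypes */
    #define DEF_HELPER(name) void helper_##name(void);
    #include "helper.h"
    #undef DEF_HELPER

A second wrapper (say, helper-gen.h) can expand the same list into
code-generation stubs; a translator source file then includes the
wrappers it needs instead of defining the macros itself.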
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There is no reason to keep that out of the function. The comment refers
to the disassembler's cc_op state rather than the CPUState field.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CS.RPL is not equal to the CPL in the few instructions between
setting CR0.PE and reloading CS. We get this right in the common
case, because writes to CR0 do not modify the CPL, but it would
not be enough if an SMI comes exactly during that brief period.
Were this to happen, the RSM instruction would erroneously set
CPL to the low two bits of the real-mode selector; and if they are
not 00, the next instruction fetch cannot access the code segment
and causes a triple fault.
However, SS.DPL *is* always equal to the CPL. In real processors
(AMD only) there is a weird case of SYSRET setting SS.DPL=SS.RPL
from the STAR register while forcing CPL=3, but we do not emulate
that.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
to all the other requirements) and will be the new CPL. So far this worked
by carefully setting the CS selector and flags before doing the task
switch; but this will not work once we get the CPL from SS.DPL.
Temporarily assume that the CPL comes from CS.RPL during task switch
to a protected-mode task, until the descriptor of SS is loaded.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL. The flags are basically what the Intel VMX
documentation says is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit fd460606fd moved setting of eflags above calls to
cpu_x86_load_seg_cache() in seg_helper.c. Unfortunately, in
do_interrupt_protected() this moved the clearing of VM_MASK above a
test for it.
Fix this regression by storing the value of VM_MASK at the start of
do_interrupt_protected().
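A hedged sketch of the shape of the fix (not the literal patch): capture
the bit before eflags is rewritten and test the saved copy afterwards.

    /* Illustrative only. VM_MASK is the x86 EFLAGS.VM bit (bit 17). */
    #define VM_MASK (1u << 17)

    static void do_interrupt_protected_sketch(unsigned int *eflags)
    {
        unsigned int was_vm86 = *eflags & VM_MASK;   /* saved up front */

        *eflags &= ~VM_MASK;        /* eflags is now updated early on */
        /* ...segment caches are loaded here...                       */

        if (was_vm86) {
            /* VM86-specific handling still sees the original state. */
        }
    }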
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* remotes/kvm/uq/master:
pc: port 92 reset requires a low->high transition
cpu: make CPU_INTERRUPT_RESET available on all targets
apic: do not accept SIPI on the bootstrap processor
target-i386: preserve FPU and MSR state on INIT
target-i386: fix set of registers zeroed on reset
kvm: forward INIT signals coming from the chipset
kvm: reset state from the CPU's reset method
target-i386: the x86 CPL is stored in CS.selector - auto update hflags accordingly.
target-i386: set eflags prior to calling cpu_x86_load_seg_cache() in seg_helper.c
target-i386: set eflags and cr0 prior to calling cpu_x86_load_seg_cache() in smm_helper.c
target-i386: set eflags prior to calling svm_load_seg_cache() in svm_helper.c
pci-assign: limit # of msix vectors
pci-assign: Fix a bug when map MSI-X table memory failed
kvm: make one_reg helpers available for everyone
target-i386: Remove unused data from local array
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most MSRs, plus the FPU, MMX, MXCSR, XMM and YMM registers should not
be zeroed on INIT (Table 9-1 in the Intel SDM). Copy them out of
CPUX86State and back in, instead of special casing env->pat.
The relevant fields are already consecutive except PAT and SMBASE.
However:
- KVM and Hyper-V MSRs should be reset because they include memory
locations written by the hypervisor. These MSRs are moved together
at the end of the preserved area.
- SVM state can be moved out of the way since it is written by VMRUN.
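A hedged sketch of the copy-out/copy-back approach (the structure,
marker fields, and names below are hypothetical, not the real
CPUX86State layout):

    /* Illustrative only: preserve a contiguous slice of the CPU state
     * across the INIT-time memset by copying it aside and back. */
    #include <stddef.h>
    #include <string.h>

    struct cpu_state_sketch {
        int zeroed_on_init;        /* everything before the preserved slice */
        int start_init_save;       /* hypothetical marker field             */
        double fpu_mmx_sse_regs;   /* FPU/MMX/SSE/AVX state, most MSRs, ... */
        long preserved_msrs;
        int end_init_save;         /* hypothetical marker field             */
        int hypervisor_msrs;       /* KVM/Hyper-V MSRs: deliberately reset  */
    };

    static void cpu_init_reset_sketch(struct cpu_state_sketch *s)
    {
        size_t off = offsetof(struct cpu_state_sketch, start_init_save);
        size_t len = offsetof(struct cpu_state_sketch, end_init_save) - off;
        char saved[sizeof(struct cpu_state_sketch)];

        memcpy(saved, (char *)s + off, len);   /* copy the slice out...    */
        memset(s, 0, sizeof(*s));              /* ...zero everything...    */
        memcpy((char *)s + off, saved, len);   /* ...and copy it back in.  */
    }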
Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
BND0-3, BNDCFGU, BNDCFGS, BNDSTATUS were not zeroed on reset, but they
should be (Intel Instruction Set Extensions Programming Reference
319433-015, pages 9-4 and 9-6). Same for YMM.
XCR0 should be reset to 1.
TSC and TSC_RESET were zeroed already by the memset, remove the explicit
assignments.
Cc: Andreas Faerber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now that we have a CPU object with a reset method, it is better to
keep the KVM reset close to the CPU reset. Using qemu_register_reset
as we do now keeps them far apart.
With this patch, PPC no longer calls the kvm_arch_ function, so
it can get removed there. Other arches call it from their CPU
reset handler, and the function gets an ARMCPU/X86CPU/S390CPU.
Note that ARM- and s390-specific functions are called kvm_arm_*
and kvm_s390_*, while x86-specific functions are called kvm_arch_*.
That follows the convention used by the different architectures.
Changing that is the topic of a separate patch.
Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
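A hedged sketch of the hook (the hflags constants are defined locally
here and only approximate QEMU's): when the CS cache is reloaded, derive
the CPL from the new selector's RPL and fold it into hflags, so no
separate cpu_x86_set_cpl() call is needed.

    /* Illustrative only, not the actual cpu_x86_load_seg_cache(). */
    #include <stdint.h>

    #define HF_CPL_SHIFT 0
    #define HF_CPL_MASK  (3u << HF_CPL_SHIFT)

    static void load_cs_sketch(uint32_t *hflags, uint16_t selector)
    {
        uint32_t cpl = selector & 3;    /* CPL tracks CS.selector's RPL */

        *hflags = (*hflags & ~HF_CPL_MASK) | (cpl << HF_CPL_SHIFT);
    }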
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The cpu_x86_load_seg_cache() function inspects eflags, so make sure
all changes to eflags are done prior to loading the segment caches.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The cpu_x86_load_seg_cache() function inspects cr0 and eflags, so make
sure all changes to eflags and cr0 are done prior to loading the
segment caches.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The svm_load_seg_cache() function calls cpu_x86_load_seg_cache() which
inspects env->eflags. So, make sure all changes to eflags are done
prior to loading the segment cache.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Older Intel manuals (pre-2010) and current AMD manuals describe Z as
undefined, but newer Intel manuals describe Z as unchanged.
Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Using error_is_set(ERRP) to find out whether a function failed is
either wrong, fragile, or unnecessarily opaque. It's wrong when ERRP
may be null, because errors go undetected when it is. It's fragile
when proving ERRP non-null involves a non-local argument. Else, it's
unnecessarily opaque (see commit 84d18f0).
I guess the error_is_set(errp) calls in the ObjectProperty set() methods
are merely fragile right now, because I can't find a call chain that
passes a null errp argument.
Make the code more robust and more obviously correct: receive the
error in a local variable, then propagate it through the parameter.
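The resulting shape looks roughly like this (a sketch against QEMU's
Error API; the property callback and visited type are made up, and
in-tree code would get the declarations from qom/object.h,
qapi/visitor.h and qapi/error.h):

    /* Sketch of an ObjectProperty set() method using a local Error. */
    static void example_set(Object *obj, Visitor *v, void *opaque,
                            const char *name, Error **errp)
    {
        Error *local_err = NULL;
        int64_t value;

        visit_type_int(v, &value, name, &local_err);
        if (local_err) {                 /* robust even when errp is NULL */
            error_propagate(errp, local_err);
            return;
        }
        /* ...apply the value to obj... */
    }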
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The subsection already exists in one well-known enterprise Linux
distribution, but for some strange reason the fields were swapped
when forward-porting the patch to upstream.
Limit headaches for said enterprise Linux distributor when the time
comes to rebase their version of QEMU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1396452782-21473-1-git-send-email-pbonzini@redhat.com
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Linux guests, when using more than 4GB of RAM, may end up using 1GB pages
to store (kernel) data. When this happens, we're unable to debug a running
Linux kernel with GDB:
(gdb) p node_data[0]->node_id
Cannot access memory at address 0xffff88013fffd3a0
(gdb)
GDB reports this error because x86_cpu_get_phys_page_debug() doesn't support
translating 1GB pages in IA-32e paging mode and therefore returns an error.
This commit adds support for 1GB page translation in IA-32e paging.
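The added translation step amounts to roughly this (a standalone sketch,
not the patched x86_cpu_get_phys_page_debug()): when the PDPTE has its
PS bit set, the entry maps a 1GB page, so the high bits of the entry
plus the low 30 bits of the virtual address form the physical address.

    /* Illustrative only: the 1GB-page case of an IA-32e page walk. */
    #include <stdint.h>

    #define PG_PSE_BIT   (1ull << 7)             /* PS bit in the PDPTE */
    #define PG_ADDR_MASK 0x000ffffffffff000ull   /* entry bits 51:12    */

    static uint64_t translate_1gb(uint64_t pdpte, uint64_t vaddr)
    {
        if (!(pdpte & PG_PSE_BIT)) {
            return 0;   /* not a 1GB mapping: walk the next level instead */
        }
        return (pdpte & PG_ADDR_MASK & ~0x3fffffffull)
               | (vaddr & 0x3fffffffull);
    }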
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add 'U' suffixes where necessary to avoid (1 << 31) which
shifts left into the sign bit, which is undefined behaviour.
Add the suffix also for other constants in the same groupings
even if they don't shift into bit 31, for consistency.
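A two-line illustration of the difference:

    #define BAD_FLAG  (1  << 31)   /* shifts into the sign bit: undefined behaviour */
    #define GOOD_FLAG (1U << 31)   /* unsigned shift: well defined, 0x80000000      */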
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This fixes warnings from static code analysis (smatch).
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>