While at it, drop the current_cpu assignment since this is a
per-thread variable on modern QEMU.
Cc: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.10-20170609' into staging
ppc patch queue 2017-06-09
This batch contains more patches to rework the pseries machine hotplug
infrastructure, plus an assorted batch of bugfixes.
It also contains a start on fixes to restore migration of older machine
types from older QEMU versions, which was broken by some xics changes.
There are still a few missing pieces here, though.
# gpg: Signature made Fri 09 Jun 2017 06:26:03 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170609:
Revert "spapr: fix memory hot-unplugging"
xics: drop ICPStateClass::cpu_setup() handler
xics: setup cpu at realize time
xics: pass appropriate types to realize() handlers.
xics: introduce macros for ICP/ICS link properties
hw/cpu: core.c can be compiled as common object
hw/ppc/spapr: Adjust firmware name for PCI bridges
xics: add reset() handler to ICPStateClass
pnv_core: drop reference on ICPState object during CPU realization
spapr: Rework DRC name handling
spapr: Fold spapr_phb_{add,remove}_pci_device() into their only callers
spapr: Change DRC attach & detach methods to functions
spapr: Clean up handling of DR-indicator
spapr: Clean up RTAS set-indicator
spapr: Don't misuse DR-indicator in spapr_recover_pending_dimm_state()
spapr: Clean up DR entity sense handling
pseries: Correct panic behaviour for pseries machine type
spapr: fix memory leak in spapr_memory_pre_plug()
target/ppc: fix memory leak in kvmppc_is_mem_backend_page_size_ok()
target/ppc: pass const string to kvmppc_is_mem_backend_page_size_ok()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The string returned by object_property_get_str() is dynamically allocated.
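A minimal sketch of the required pattern (the property name here is
illustrative, not the one from the actual fix):

    char *path = object_property_get_str(obj, "mem-path", NULL);
    if (path) {
        /* ... use path ... */
    }
    g_free(path);   /* the returned string is owned by the caller */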
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function has three implementations. Two are stubs that do nothing
and the third one only passes the obj_path argument to:
Object *object_resolve_path(const char *path, bool *ambiguous);
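So a hedged sketch of the simplification is just (variable names
illustrative):

    Object *obj = object_resolve_path(obj_path, NULL);  /* ambiguity ignored */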
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If the user disables SMM with '-machine smm=off', we should not
register the smram_listener, so that we avoid wasting memory in KVM
for the added second address space.
Meanwhile, we should assign the global kvm_state before invoking
kvm_arch_init(), because pc_machine_is_smm_enabled() may use it via
kvm_has_smm().
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1496316915-121196-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an XML description for SSE registers (XMM+MXCSR) for both X86
and X86-64 architectures in the GDB stub:
- configure: Define gdb_xml_files for the X86 targets (32 and 64bit).
- gdb-xml/i386-32bit-sse.xml & gdb-xml/i386-64bit-sse.xml: The XML files
that contain a description of the XMM + MXCSR registers.
- gdb-xml/i386-32bit.xml & gdb-xml/i386-64bit.xml: wrappers that include
the core-registers XML file together with the new SSE XML file.
- target/i386/cpu.c: Point gdb_core_xml_file at the new XML wrapper and
adjust gdb_num_core_regs to match the number of registers defined in each
XML file (see the sketch below).
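A hedged sketch of that last hookup (gdb_core_xml_file and
gdb_num_core_regs are the real CPUClass fields; the counts are the
standard GDB register numbering once XMM0..XMM15 / XMM0..XMM7 plus
MXCSR are appended, which is my reading of what the XML describes):

    /* target/i386/cpu.c (sketch) -- in the CPU class init */
    #ifdef TARGET_X86_64
        cc->gdb_core_xml_file = "i386-64bit.xml";  /* core + SSE wrapper */
        cc->gdb_num_core_regs = 57;                /* ... xmm0-15, mxcsr */
    #else
        cc->gdb_core_xml_file = "i386-32bit.xml";
        cc->gdb_num_core_regs = 41;                /* ... xmm0-7, mxcsr */
    #endif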
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a fix for the problem [1], where VMCB.CPL was set to 0 and an
interrupt was taken on the userspace stack. The root cause lies in
specific AMD CPU behaviour which manifests itself as unusable segment
attributes on SYSRET [2].
With this patch the flags are no longer touched even if the segment is
unusable or not present, therefore the CPL (which is stored in the DPL
field) is not lost and can be successfully restored on the kvm/svm
kernel side.
This patch also should not break the behaviour desired by commit
4cae9c9796 ("target-i386: kvm: clear unusable segments' flags in migration")
since the present bit is still dropped if the segment is unusable or
not present.
This is the second part of the whole fix for problem [1]; the first
part is on the kvm/svm kernel side and does exactly the same: segment
attributes are not zeroed out.
[1] Message id: CAJrWOzD6Xq==b-zYCDdFLgSRMPM-NkNuTSDFEtX=7MreT45i7Q@mail.gmail.com
[2] Message id: 5d120f358612d73fc909f5bfa47e7bd082db0af0.1429841474.git.luto@kernel.org
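A hedged sketch of what this means for get_seg() in target/i386/kvm.c
(the DESC_* macros are the real ones; the exact hunk may differ):

    /* Build the attribute flags unconditionally; fold "unusable" into
     * the present bit instead of zeroing everything, so the DPL field
     * (which holds the CPL) survives. */
    lhs->flags = (rhs->type << DESC_TYPE_SHIFT) |
                 ((rhs->present && !rhs->unusable) * DESC_P_MASK) |
                 (rhs->dpl << DESC_DPL_SHIFT) |
                 (rhs->db << DESC_B_SHIFT) |
                 (rhs->s * DESC_S_MASK) |
                 (rhs->l << DESC_L_SHIFT) |
                 (rhs->g * DESC_G_MASK) |
                 (rhs->avl * DESC_AVL_MASK);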
Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Signed-off-by: Mikhail Sennikovskii <mikhail.sennikovskii@profitbricks.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michael Chapman <mike@very.puzzling.org>
Cc: qemu-devel@nongnu.org
Message-Id: <20170601085604.12980-1-roman.penyaev@profitbricks.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Running Windows with icount causes a crash in the instruction that
writes CR. This patch fixes it.
Reading and writing CR cause an icount read because the
cpu_get_apic_tpr and cpu_set_apic_tpr functions are called there, so
the accesses need to be bracketed with gen_io_start()/gen_io_end()
calls.
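A hedged sketch of the pattern in target/i386/translate.c (the CR-write
helper is real; the surrounding details are paraphrased):

    if (s->tb->cflags & CF_USE_ICOUNT) {
        gen_io_start();   /* the TPR access reads the icount clock */
    }
    gen_helper_write_crN(cpu_env, tcg_const_i32(reg), cpu_T0);
    if (s->tb->cflags & CF_USE_ICOUNT) {
        gen_io_end();
    }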
Signed-off-by: Mihail Abakumov <mikhail.abakumov@ispras.ru>
Message-Id: <ffb376034ff184f2fcbe93d5317d9e76@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This speeds up SMM switches. Later on it may remove the need to take
the BQL, and it may also allow reusing code between TCG and KVM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ignore env->a20_mask when running in system management mode.
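A hedged sketch of the idea (HF_SMM_MASK is the real hflags bit; where
exactly the check sits in the address-translation path is paraphrased):

    /* The A20 gate does not mask addresses while in SMM, so SMRAM
     * stays reachable regardless of the A20 setting. */
    if (!(env->hflags & HF_SMM_MASK)) {
        addr &= env->a20_mask;
    }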
Reported-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1494502528-12670-1-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add "Return and Deallocate" (rtd) instruction.
RTD #d
(SP) -> PC
SP + 4 + d -> SP
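A hedged, self-contained C model of those semantics (illustrative only,
not the translate.c implementation):

    #include <stdint.h>

    /* RTD #d: pop the return address, then release d extra bytes of
     * caller-pushed arguments from the stack. */
    static uint32_t rtd(uint32_t *sp, int16_t d,
                        uint32_t (*load32)(uint32_t addr))
    {
        uint32_t pc = load32(*sp);   /* (SP) -> PC */
        *sp += 4 + d;                /* SP + 4 + d -> SP */
        return pc;
    }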
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Message-Id: <20170605100014.22981-1-laurent@vivier.eu>
We have to make the address in the old PSW point at the next
instruction, as addressing exceptions are suppressing and not
nullifying.
I assume there are a lot of other broken cases (as most instructions we
care about are suppressing): all calls to trigger_pgm_exception() that
specify an explicit number or ILEN_LATER look suspicious. However, that
is another story that might require bigger changes (and I have to
understand when the address might already have been incremented first).
This is needed to make an upcoming kvm-unit-test work.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170529121228.2789-1-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The CDSG instruction requires 16-byte alignment, as expressed by the
MO_ALIGN_16 passed to helper_atomic_cmpxchgo_be_mmu. In the
non-parallel case, use check_alignment to enforce this.
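A hedged sketch of the serial path (check_alignment comes from the same
series; the surrounding context is paraphrased):

    /* The serial CDSG helper must enforce the same 16-byte alignment
     * that MO_ALIGN_16 gives the parallel cmpxchg path. */
    check_alignment(env, addr, 16, ra);  /* specification exc. if not */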
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170604202034.16615-4-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a common helper with PACK ASCII as the differences are limited to
the stride of the source operand.
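A hedged sketch of the sharing (helper names and signatures are my
reading of the series; treat them as illustrative):

    /* one worker, parameterized by the source element size */
    static void do_pkau(CPUS390XState *env, uint64_t dest, uint64_t src,
                        uint32_t srclen, int ssize, uintptr_t ra);

    void HELPER(pka)(CPUS390XState *env, uint64_t dest, uint64_t src,
                     uint32_t srclen)
    {
        do_pkau(env, dest, src, srclen, 1, GETPC()); /* 1-byte ASCII */
    }

    void HELPER(pku)(CPUS390XState *env, uint64_t dest, uint64_t src,
                     uint32_t srclen)
    {
        do_pkau(env, dest, src, srclen, 2, GETPC()); /* 2-byte Unicode */
    }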
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-25-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For that we need to make program_interrupt available to qemu-user.
Fortunately there is almost nothing to change as both kvm_enabled and
CONFIG_KVM evaluate to false in that case.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-22-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As MVCL and MVCLE only differ by their operands, use a common do_mvcl
helper. Optimize it by calling fast_memmove and fast_memset. Correctly
write back the addresses. Check that the r1 and r2/r3 registers are
even.
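A hedged sketch of that evenness check (MVCL is a 2-byte RR-format
insn; the exception plumbing is paraphrased):

    /* MVCL names even-odd register pairs, so r1 and r2 must be even. */
    if ((r1 & 1) || (r2 & 1)) {
        program_interrupt(env, PGM_SPECIFICATION, 2 /* ilen of MVCL */);
    }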
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-21-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
adj_len_to_page doesn't return the correct result when the address
is already page aligned and the length is bigger than a page. Fix that.
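A hedged reconstruction of the fixed helper, as I read the change:

    /* Clamp len so [addr, addr + len) never crosses a page boundary.
     * For a page-aligned addr, "-addr & ~TARGET_PAGE_MASK" gave 0;
     * "-(addr | TARGET_PAGE_MASK)" gives a full TARGET_PAGE_SIZE. */
    static inline uint32_t adj_len_to_page(uint32_t len, uint64_t addr)
    {
    #ifndef CONFIG_USER_ONLY
        if ((addr & ~TARGET_PAGE_MASK) + len - 1 >= TARGET_PAGE_SIZE) {
            return -(addr | TARGET_PAGE_MASK);
        }
    #endif
        return len;
    }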
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-20-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As CLCL and CLCLE mostly differ by their operands, use a common do_clcl
helper. Another difference is that CLCL is not interruptible.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-19-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There are multiple issues with the COMPARE LOGICAL LONG EXTENDED
instruction:
- The test between the two operands is inverted, leading to an inversion
of the cc values 1 and 2.
- The address and length of an operand continue to be decreased after
reaching the end of that operand. These wrong values are then written
back to the registers.
- We should limit the number of bytes to process, so that interrupts
can be served correctly.
At the same time rename dest into src1 and src into src3 to match the
operand names and make the code less confusing.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-18-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Improve fix_address to also handle the 24-bit mode, and rename it to
wrap_address to better explain what it does.
Replace the calls to get_address with x2 = 0 and b2 = 0 by calls to
wrap_address, leading to the removal of that function. Rename
get_address_31fix into get_address.
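A hedged reconstruction of the resulting helper (the PSW mask bits are
the real ones; the exact placement is paraphrased):

    static inline uint64_t wrap_address(CPUS390XState *env, uint64_t a)
    {
        if (!(env->psw.mask & PSW_MASK_64)) {
            if (!(env->psw.mask & PSW_MASK_32)) {
                /* 24-bit mode */
                a &= 0x00ffffff;
            } else {
                /* 31-bit mode */
                a &= 0x7fffffff;
            }
        }
        return a;
    }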
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-15-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
These functions differ from COMPARE by generating an exception for a
QNaN input. Use the non-quiet (signaling) version of floatXX_compare.
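A hedged sketch for the 64-bit case (the softfloat entry points are
real; the surrounding helper code is paraphrased):

    /* COMPARE (CDB): quiet -- a QNaN operand just compares unordered */
    cc = float64_compare_quiet(f1, f2, &env->fpu_status);

    /* COMPARE AND SIGNAL (KDB): signaling -- a QNaN operand also
     * raises the IEEE invalid-operation exception */
    cc = float64_compare(f1, f2, &env->fpu_status);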
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-10-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And at the same time make IPTE SMP aware.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-4-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Currently we only present the plain z900 feature bits to the guest, but
QEMU already emulates some additional features (though not all of the
next CPU generation, so we cannot use the next CPU level as the default
yet). Since newer Linux kernels check the feature bits and refuse to
work if a required feature is missing, it would be nice to have a way
to present more of the supported features when we are running with the
"qemu" CPU.
This patch now adds the supported features to the "full_feat" bitmap,
so that additional features can be enabled on the command line now,
for example with:
qemu-system-s390x -cpu qemu,stfle=true,ldisp=true,eimm=true,stckf=true
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1495704132-5675-1-git-send-email-thuth@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While the previous patch is required for proper conformance,
the vast majority of target insns are MVC and XC for implementing
memmove and memset respectively. The next most common are CLC,
TR, and SVC.
Implementing these (and a few others for which we already have
an implementation) directly is faster than going through full
translation to a TB.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Previously, helper_ex would construct the insn and then implement it
via direct calls to other helpers. This was sufficient to boot Linux,
but that is all.
It is easy enough to go the whole nine yards by stashing state for
EXECUTE within the cpu, and then relying on a new TB being created that
properly and completely interprets the insn.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This split will be required for implementing EXECUTE properly.
Do this now as a separate step to aid comparison of before and
after TB listings.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use this saved value instead of recomputing it from the next_pc
difference.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>