Commit Graph

4463 Commits

Author SHA1 Message Date
Richard Henderson 164690b29f target/arm: Split out arm_mmu_idx_el
Avoid calling arm_current_el() twice.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:28 +01:00
Richard Henderson 3d74e2e9ff target/arm: Add arm_rebuild_hflags
This function assumes nothing about the current state of the cpu,
and writes the computed value to env->hflags.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:28 +01:00
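As a rough illustration of the shape described above (a sketch only, assuming a rebuild_hflags_internal() helper composed from the rebuild_hflags_* functions introduced in this series):

    void arm_rebuild_hflags(CPUARMState *env)
    {
        /* Recompute all cached TB flags purely from architectural state
         * and publish the result; safe to call at any time. */
        env->hflags = rebuild_hflags_internal(env);
    }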
Richard Henderson 0a54d68e21 target/arm: Hoist computation of TBFLAG_A32.VFPEN
There are 3 conditions that each enable this flag.  M-profile always
enables; A-profile with EL1 as AA64 always enables.  Both of these
conditions can easily be cached.  The final condition relies on the
FPEXC register which we are not prepared to cache.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:28 +01:00
Richard Henderson 60e12c3776 target/arm: Simplify set of PSTATE_SS in cpu_get_tb_cpu_state
Hoist the variable load for PSTATE into the existing test vs is_a64.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson bbad7c62d4 target/arm: Hoist XSCALE_CPAR, VECLEN, VECSTRIDE in cpu_get_tb_cpu_state
We do not need to compute any of these values for M-profile.
Further, XSCALE_CPAR overlaps VECSTRIDE so obviously the two
sets must be mutually exclusive.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson 83f4baef3e target/arm: Split out rebuild_hflags_aprofile
Create a function to compute the values of the TBFLAG_ANY bits
that will be cached, and are used by A-profile.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson c747224cc3 target/arm: Split out rebuild_hflags_a32
Currently a trivial wrapper for rebuild_hflags_common_32.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson 9550d1bd88 target/arm: Reduce tests vs M-profile in cpu_get_tb_cpu_state
Hoist the computation of some TBFLAG_A32 bits that only apply to
M-profile under a single test for ARM_FEATURE_M.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson 6e33ced563 target/arm: Split out rebuild_hflags_m32
Create a function to compute the values of the TBFLAG_A32 bits
that will be cached, and are used by M-profile.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson 8061a64910 target/arm: Split arm_cpu_data_is_big_endian
Set TBFLAG_ANY.BE_DATA in rebuild_hflags_common_32 and
rebuild_hflags_a64 instead of rebuild_hflags_common, where we do
not need to re-test is_a64() nor re-compute the various inputs.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson 43eccfb6ed target/arm: Split out rebuild_hflags_common_32
Create a function to compute the values of the TBFLAG_A32 bits
that will be cached, and are used by all profiles.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson d4d7503ac6 target/arm: Split out rebuild_hflags_a64
Create a function to compute the values of the TBFLAG_A64 bits
that will be cached.  For now, the env->hflags variable is not
used, and the results are fed back to cpu_get_tb_cpu_state.

Note that not all BTI related flags are cached, so we have to
test the BTI feature twice -- once for those bits moved out to
rebuild_hflags_a64 and once for those bits that remain in
cpu_get_tb_cpu_state.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Richard Henderson fdd1b228c2 target/arm: Split out rebuild_hflags_common
Create a function to compute the values of the TBFLAG_ANY bits
that will be cached.  For now, the env->hflags variable is not
used, and the results are fed back to cpu_get_tb_cpu_state.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191023150057.25731-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 17:16:27 +01:00
Peter Maydell 58560ad254 ppc patch queue 2019-10-24
Last pull request before soft freeze.
   * Lots of fixes and cleanups for spapr interrupt controllers
   * More SLOF updates to fix problems with full FDT rendering at CAS
     time (alas, more yet are to come)
   * A few other assorted changes
 
 This isn't quite as well tested as I usually try to do before a pull
 request.  But I've been sick and running into some other difficulties,
 and wanted to get this sent out before heading towards KVM forum.

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.2-20191024' into staging

ppc patch queue 2019-10-24

Last pull request before soft freeze.
  * Lots of fixes and cleanups for spapr interrupt controllers
  * More SLOF updates to fix problems with full FDT rendering at CAS
    time (alas, more yet are to come)
  * A few other assorted changes

This isn't quite as well tested as I usually try to do before a pull
request.  But I've been sick and running into some other difficulties,
and wanted to get this sent out before heading towards KVM forum.

# gpg: Signature made Thu 24 Oct 2019 09:14:31 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-4.2-20191024: (28 commits)
  spapr/xive: Set the OS CAM line at reset
  ppc/pnv: Fix naming of routines realizing the CPUs
  ppc: Reset the interrupt presenter from the CPU reset handler
  ppc/pnv: Add a PnvChip pointer to PnvCore
  ppc/pnv: Introduce a PnvCore reset handler
  spapr_cpu_core: Implement DeviceClass::reset
  spapr: move CPU reset after presenter creation
  spapr: Don't request to unplug the same core twice
  pseries: Update SLOF firmware image
  spapr: Move SpaprIrq::nr_xirqs to SpaprMachineClass
  spapr: Remove SpaprIrq::nr_msis
  spapr, xics, xive: Move SpaprIrq::post_load hook to backends
  spapr, xics, xive: Move SpaprIrq::reset hook logic into activate/deactivate
  spapr: Remove SpaprIrq::init_kvm hook
  spapr, xics, xive: Match signatures for XICS and XIVE KVM connect routines
  spapr, xics, xive: Move dt_populate from SpaprIrq to SpaprInterruptController
  spapr, xics, xive: Move print_info from SpaprIrq to SpaprInterruptController
  spapr, xics, xive: Move set_irq from SpaprIrq to SpaprInterruptController
  spapr: Formalize notion of active interrupt controller
  spapr, xics, xive: Move irq claim and free from SpaprIrq to SpaprInterruptController
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 16:22:58 +01:00
Peter Maydell 81c1f71eeb x86 and machine queue, 2019-10-23
Features:
 * Denverton CPU model (Tao Xu)
 
 Cleanups:
 * Eliminate remaining places that abuse
   memory_region_allocate_system_memory() (Igor Mammedov)

Merge remote-tracking branch 'remotes/ehabkost/tags/machine-next-pull-request' into staging

x86 and machine queue, 2019-10-23

Features:
* Denverton CPU model (Tao Xu)

Cleanups:
* Eliminate remaining places that abuse
  memory_region_allocate_system_memory() (Igor Mammedov)

# gpg: Signature made Thu 24 Oct 2019 03:45:34 BST
# gpg:                using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg:                issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/machine-next-pull-request:
  hppa: drop usage of memory_region_allocate_system_memory() for ROM
  ppc: rs6000_mc: drop usage of memory_region_allocate_system_memory()
  sparc64: use memory_region_allocate_system_memory() only for '-m' specified RAM
  target/i386: Introduce Denverton CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 10:43:20 +01:00
Peter Maydell ea0ec714d3 target/xtensa improvements for v4.2:
- regenerate and reimport test_mmuhifi_c3 core;
 - add virt machine.

Merge remote-tracking branch 'remotes/xtensa/tags/20191023-xtensa' into staging

target/xtensa improvements for v4.2:

- regenerate and reimport test_mmuhifi_c3 core;
- add virt machine.

# gpg: Signature made Wed 23 Oct 2019 23:56:42 BST
# gpg:                using RSA key 2B67854B98E5327DCDEB17D851F9CC91F83FA044
# gpg:                issuer "jcmvbkbc@gmail.com"
# gpg: Good signature from "Max Filippov <filippov@cadence.com>" [unknown]
# gpg:                 aka "Max Filippov <max.filippov@cogentembedded.com>" [full]
# gpg:                 aka "Max Filippov <jcmvbkbc@gmail.com>" [full]
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB  17D8 51F9 CC91 F83F A044

* remotes/xtensa/tags/20191023-xtensa:
  hw/xtensa: add virt machine
  target/xtensa: regenerate and re-import test_mmuhifi_c3 core

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-24 09:55:01 +01:00
Tao Xu 8b44d8609f target/i386: Introduce Denverton CPU model
Denverton is the Atom processor of the Intel Harrisonville platform.

For more information:
https://ark.intel.com/content/www/us/en/ark/products/\
codename/63508/denverton.html

Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20190718073405.28301-1-tao3.xu@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-23 23:37:42 -03:00
Stefan Brankovic 8d745875c2 target/ppc: Fix for optimized vsl/vsr instructions
In the previous implementation, an invocation of a TCG shift function could
request a shift of a TCG variable by 64 bits when variable 'sh' is 0, which is
not supported in TCG (values can be shifted by 0 to 63 bits). This patch fixes
this by using two separate invocations of TCG shift functions, with a maximum
shift amount of 32.

The variable 'shifted' is renamed to 'carry' so the variable naming
is similar to the old helper implementation.

Variables 'avrA' and 'avrB' are replaced with variable 'avr'.

Fixes: 4e6d0920e7
Reported-by: "Paul A. Clark" <pc@us.ibm.com>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Suggested-by: Aleksandar Markovic <aleksandar.markovic@rt-rk.com>
Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Message-Id: <1570196639-7025-2-git-send-email-stefan.brankovic@rt-rk.com>
Tested-by: Paul A. Clarke  <pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-24 09:36:55 +11:00
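The limitation mirrors the one C places on its shift operators: shifting a 64-bit value by 64 is not defined. A standalone C sketch of the splitting idea (illustrative only, not the TCG code from the patch):

    #include <stdint.h>

    /* Shift left by 'sh' bits, where sh may be anywhere in [0, 64].
     * A single "x << sh" would be undefined for sh == 64, so split the
     * shift into two steps of at most 32 bits each. */
    static uint64_t shl_up_to_64(uint64_t x, unsigned sh)
    {
        unsigned first = sh / 2;          /* 0..32 */
        unsigned second = sh - first;     /* 0..32 */
        return (x << first) << second;
    }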
Tao Xu 6508799707 target/i386: Add support for save/load IA32_UMWAIT_CONTROL MSR
The UMWAIT and TPAUSE instructions use the 32-bit IA32_UMWAIT_CONTROL MSR at
index E1H to determine the maximum time, in TSC quanta, that the processor
can reside in either C0.1 or C0.2.

This patch adds support for saving/loading the IA32_UMWAIT_CONTROL MSR in the
guest.

Co-developed-by: Jingqi Liu <jingqi.liu@intel.com>
Signed-off-by: Jingqi Liu <jingqi.liu@intel.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20191011074103.30393-3-tao3.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-23 17:50:27 +02:00
Tao Xu 67192a298f x86/cpu: Add support for UMONITOR/UMWAIT/TPAUSE
UMONITOR, UMWAIT and TPAUSE are a set of user wait instructions.
This patch adds support for user wait instructions in KVM. Availability
of the user wait instructions is indicated by the presence of the CPUID
feature flag WAITPKG CPUID.0x07.0x0:ECX[5]. User wait instructions may
be executed at any privilege level, and use IA32_UMWAIT_CONTROL MSR to
set the maximum time.

This patch enables the umonitor, umwait and tpause features in KVM.
Because umwait and tpause can put a (physical) CPU into a power-saving
state, by default we don't expose them to KVM and enable them only when
the guest CPUID has them. With the QEMU command line "-overcommit cpu-pm=on"
(enable_cpu_pm is enabled), a VM can use the UMONITOR, UMWAIT and TPAUSE
instructions. If the instruction causes a delay, the amount of time
delayed is called here the physical delay. The physical delay is first
computed by determining the virtual delay (the time to delay relative to
the VM's timestamp counter). Otherwise, UMONITOR, UMWAIT and TPAUSE cause
an invalid-opcode exception (#UD).

The release document ref below link:
https://software.intel.com/sites/default/files/\
managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf

Co-developed-by: Jingqi Liu <jingqi.liu@intel.com>
Signed-off-by: Jingqi Liu <jingqi.liu@intel.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20191011074103.30393-2-tao3.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-23 17:50:27 +02:00
Richard Henderson 1ab1708652 target/arm: Fix sign-extension for SMLAL*
The 32-bit product should be sign-extended, not zero-extended.

Fixes: ea96b37464
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190912183058.17947-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-22 16:50:35 +01:00
Vitaly Kuznetsov 30d6ff662d i386/kvm: add NoNonArchitecturalCoreSharing Hyper-V enlightenment
Hyper-V TLFS specifies this enlightenment as:
"NoNonArchitecturalCoreSharing - Indicates that a virtual processor will never
share a physical core with another virtual processor, except for virtual
processors that are reported as sibling SMT threads. This can be used as an
optimization to avoid the performance overhead of STIBP".

However, STIBP is not the only implication. It was found that Hyper-V on
KVM doesn't pass MD_CLEAR bit to its guests if it doesn't see
NoNonArchitecturalCoreSharing bit.

KVM reports NoNonArchitecturalCoreSharing in KVM_GET_SUPPORTED_HV_CPUID to
indicate that SMT on the host is impossible (not supported or forcefully
disabled).

Implement NoNonArchitecturalCoreSharing support in QEMU as tristate:
'off' - the feature is disabled (default)
'on' - the feature is enabled. This is only safe if vCPUs are properly
 pinned and the correct topology is exposed. As CPU pinning is done outside
 of QEMU, the enablement decision will be made on a higher level.
'auto' - copy KVM setting. As during live migration SMT settings on the
source and destination host may differ this requires us to add a migration
blocker.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20191018163908.10246-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-22 09:38:42 +02:00
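Assuming the tristate is exposed as a CPU property named hv-no-nonarch-coresharing (the spelling here is illustrative; check the QEMU documentation for the exact name), enabling the 'auto' behaviour would look like:

  $ qemu-system-x86_64 -enable-kvm \
      -cpu host,hv-no-nonarch-coresharing=auto ...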
Mario Smarduch 73284563dc target/i386: log MCE guest and host addresses
This patch logs the MCE AO and AR messages injected into the guest or taken by
QEMU itself. We print the QEMU address for guest MCEs, which helps on
hypervisors that have another source of MCE logging, like the mce log, and
when those entries go missing.

For example we found these QEMU logs:

September 26th 2019, 17:36:02.309        Droplet-153258224: Guest MCE Memory Error at qemu addr 0x7f8ce14f5000 and guest 3d6f5000 addr of type BUS_MCEERR_AR injected   qemu-system-x86_64      amsN    ams3nodeNNNN

September 27th 2019, 06:25:03.234        Droplet-153258224: Guest MCE Memory Error at qemu addr 0x7f8ce14f5000 and guest 3d6f5000 addr of type BUS_MCEERR_AR injected   qemu-system-x86_64      amsN    ams3nodeNNNN

The first log had a corresponding mce log entry, the second didn't (logging
thresholds); from the second entry we can infer the same PA and mce type.

Signed-off-by: Mario Smarduch <msmarduch@digitalocean.com>
Message-Id: <20191009164459.8209-3-msmarduch@digitalocean.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-22 09:38:38 +02:00
David Hildenbrand de60a92ea7 s390x/kvm: Set default cpu model for all machine classes
We have to set the default model of all machine classes, not just for
the active one. Otherwise, "query-machines" will indicate the wrong
CPU model ("qemu-s390x-cpu" instead of "host-s390x-cpu") as
"default-cpu-type".

Doing a
    {"execute":"query-machines"}
under KVM now results in
    {"return": [
        {
            "hotpluggable-cpus": true,
            "name": "s390-ccw-virtio-4.0",
            "numa-mem-supported": false,
            "default-cpu-type": "host-s390x-cpu",
            "cpu-max": 248,
            "deprecated": false},
        {
            "hotpluggable-cpus": true,
            "name": "s390-ccw-virtio-2.7",
            "numa-mem-supported": false,
            "default-cpu-type": "host-s390x-cpu",
            "cpu-max": 248,
            "deprecated": false
        } ...

Libvirt probes all machines via "-machine none,accel=kvm:tcg" and will
currently see the wrong CPU model under KVM.

Reported-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Fixes: b6805e127c ("s390x: use generic cpu_model parsing")
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021100515.6978-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 18:03:08 +02:00
David Hildenbrand 38ad4fa3de s390x/tcg: Fix VECTOR SUBTRACT WITH BORROW COMPUTE BORROW INDICATION
The numbers are unsigned; the computation is wrong. "Each operand is
treated as an unsigned binary integer".
Let's implement as given in the PoP:

"A subtraction is performed by adding the contents of the second operand
 with the bitwise complement of the third operand along with a borrow
 indication from the rightmost bit of the fourth operand."

Reuse gen_accc2_i64().

Fixes: bc725e6515 ("s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW COMPUTE BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-7-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:34:43 +02:00
David Hildenbrand 2cb8a68d37 s390x/tcg: Fix VECTOR SUBTRACT WITH BORROW INDICATION
Testing this, there seems to be something messed up. We are dealing with
unsigned numbers. "Each operand is treated as an unsigned binary integer."
Let's just implement as written in the PoP:

"A subtraction is performed by adding the contents of
 the second operand with the bitwise complement of
 the third operand along with a borrow indication from
 the rightmost bit position of the fourth operand and
 the result is placed in the first operand."

We can reuse gen_ac2_i64().

Fixes: 48390a7c27 ("s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:34:12 +02:00
David Hildenbrand 23e797749f s390x/tcg: Fix VECTOR SUBTRACT COMPUTE BORROW INDICATION
Looks like my idea of what a "borrow" is was wrong. The PoP says:

 "If the resulting subtraction results in a carry out of bit zero, a value
 of one is placed in the corresponding element of the first operand;
 otherwise, a value of zero is placed in the corresponding element"

As clarified by Richard, all we have to do is invert the result.

Fixes: 1ee2d7ba72 ("s390x/tcg: Implement VECTOR SUBTRACT COMPUTE BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:29 +02:00
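In plain C terms the corrected per-element result is the carry out of a + ~b + 1, i.e. one when no borrow occurs. A minimal standalone sketch (illustrative, not the TCG code from the patch):

    #include <stdint.h>

    /* Per-element borrow indication for an unsigned subtract a - b:
     * 1 if the subtraction produces a carry out of bit 0 (no borrow,
     * i.e. a >= b), 0 otherwise. */
    static uint64_t vscbi_element(uint64_t a, uint64_t b)
    {
        return a >= b ? 1 : 0;
    }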
David Hildenbrand b57b336876 s390x/tcg: Fix VECTOR SHIFT RIGHT ARITHMETIC BY BYTE
We forgot to propagate the highest bit across the high doubleword in
two cases (shift >= 64).

Fixes: 5f724887e3 ("s390x/tcg: Implement VECTOR SHIFT RIGHT ARITHMETIC")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:13 +02:00
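A standalone C sketch of the missed case: for an arithmetic right shift of a 128-bit value by 64 or more, the sign bit must be replicated across the entire high doubleword (illustrative only; assumes arithmetic >> on signed types, as on the relevant hosts):

    #include <stdint.h>

    /* Arithmetic right shift of a 128-bit value held as high/low
     * 64-bit doublewords, for shift counts 0..127. */
    static void asr128(uint64_t *high, uint64_t *low, unsigned shift)
    {
        if (shift >= 64) {
            *low = (uint64_t)((int64_t)*high >> (shift - 64));
            *high = (uint64_t)((int64_t)*high >> 63);  /* all sign bits */
        } else if (shift != 0) {
            *low = (*low >> shift) | (*high << (64 - shift));
            *high = (uint64_t)((int64_t)*high >> shift);
        }
    }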
David Hildenbrand 8b95251947 s390x/tcg: Fix VECTOR MULTIPLY AND ADD *
We missed that we always read a "double-wide even-odd element
pair of the fourth operand". Fix it in all four variants.

Fixes: 1b430aec41 ("s390x/tcg: Implement VECTOR MULTIPLY AND ADD *")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:02 +02:00
David Hildenbrand 49a7ce4e03 s390x/tcg: Fix VECTOR MULTIPLY LOGICAL ODD
We have to read from odd offsets.

Fixes: 2bf3ee38f1 ("s390x/tcg: Implement VECTOR MULTIPLY *")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:32:49 +02:00
David Hildenbrand 8064af6b1d s390x/mmu: Remove duplicate check for MMU_DATA_STORE
No need to double-check if we have a write.

Found by Coverity (CID: 1406404).

Fixes: 31b5941906 ("target/s390x: Return exception from mmu_translate_real")
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191017121922.18840-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:30:06 +02:00
Andrew Jones be39110d4c s390x/cpumodel: Add missing visit_free
Beata Michalska noticed this missing visit_free() while reviewing
arm's implementation of qmp_query_cpu_model_expansion(), which is
modeled off this s390x implementation.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20191016145434.7007-1-drjones@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:30:06 +02:00
Max Filippov d5eaec84e5 target/xtensa: regenerate and re-import test_mmuhifi_c3 core
The overlay part of the test_mmuhifi_c3 core has GPL3 copyright headers in
it. Fix that by regenerating the test_mmuhifi_c3 core overlay and
re-importing it.

Fixes: d848ea7767 ("target/xtensa: add test_mmuhifi_c3 core")
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-10-18 19:54:27 -07:00
Xiaoyao Li 69edb0f37a target/i386: Add Snowridge-v2 (no MPX) CPU model
Add new version of Snowridge CPU model that removes MPX feature.

MPX support is being phased out by Intel. GCC has dropped it, Linux kernel
and KVM are also going to do that in the future.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20191012024748.127135-1-xiaoyao.li@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Eduardo Habkost af95cafb87 i386: Omit all-zeroes entries from KVM CPUID table
KVM has an 80-entry limit at KVM_SET_CPUID2.  With the
introduction of CPUID[0x1F], it is now possible to hit this limit
with unusual CPU configurations, e.g.:

  $ ./x86_64-softmmu/qemu-system-x86_64 \
    -smp 1,dies=2,maxcpus=2 \
    -cpu EPYC,check=off,enforce=off \
    -machine accel=kvm
  qemu-system-x86_64: kvm_init_vcpu failed: Argument list too long

This happens because QEMU adds a lot of all-zeroes CPUID entries
for unused CPUID leaves.  In the example above, we end up
creating 48 all-zeroes CPUID entries.

KVM already returns all-zeroes when emulating the CPUID
instruction if an entry is missing, so the all-zeroes entries are
redundant.  Skip those entries.  This reduces the CPUID table
size by half while keeping CPUID output unchanged.

Reported-by: Yumei Huang <yuhuang@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1741508
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190822225210.32541-1-ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
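The idea in standalone C terms (field names are illustrative and not the exact kvm_cpuid_entry2 layout):

    #include <stddef.h>
    #include <stdint.h>

    struct cpuid_entry {
        uint32_t function, index, flags;
        uint32_t eax, ebx, ecx, edx;
    };

    /* Copy only the entries that carry information.  KVM already returns
     * all-zeroes for leaves missing from the table, so dropping the
     * all-zeroes entries keeps the emulated CPUID output unchanged. */
    static size_t compact_cpuid(struct cpuid_entry *dst,
                                const struct cpuid_entry *src, size_t n)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            if (src[i].eax | src[i].ebx | src[i].ecx | src[i].edx) {
                dst[out++] = src[i];
            }
        }
        return out;
    }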
Bingsong Si 76ecd7a514 i386: Fix legacy guest with xsave panic on host kvm without update cpuid.
Without kvm commit 412a3c41, CPUID(EAX=0xd,ECX=0).EBX is always equal to 0 even
though the guest updates xcr0; this will crash a legacy guest (e.g., CentOS 6).
Below is the call trace on the guest.

[    0.000000] kernel BUG at mm/bootmem.c:469!
[    0.000000] invalid opcode: 0000 [#1] SMP
[    0.000000] last sysfs file:
[    0.000000] CPU 0
[    0.000000] Modules linked in:
[    0.000000]
[    0.000000] Pid: 0, comm: swapper Tainted: G           --------------- H  2.6.32-279#2 Red Hat KVM
[    0.000000] RIP: 0010:[<ffffffff81c4edc4>]  [<ffffffff81c4edc4>] alloc_bootmem_core+0x7b/0x29e
[    0.000000] RSP: 0018:ffffffff81a01cd8  EFLAGS: 00010046
[    0.000000] RAX: ffffffff81cb1748 RBX: ffffffff81cb1720 RCX: 0000000001000000
[    0.000000] RDX: 0000000000000040 RSI: 0000000000000000 RDI: ffffffff81cb1720
[    0.000000] RBP: ffffffff81a01d38 R08: 0000000000000000 R09: 0000000000001000
[    0.000000] R10: 02008921da802087 R11: 00000000ffff8800 R12: 0000000000000000
[    0.000000] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000001000000
[    0.000000] FS:  0000000000000000(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[    0.000000] CS:  0010 DS: 0018 ES: 0018 CR0: 0000000080050033
[    0.000000] CR2: 0000000000000000 CR3: 0000000001a85000 CR4: 00000000001406b0
[    0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    0.000000] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    0.000000] Process swapper (pid: 0, threadinfo ffffffff81a00000, task ffffffff81a8d020)
[    0.000000] Stack:
[    0.000000]  0000000000000002 81a01dd881eaf060 000000007e5fe227 0000000000001001
[    0.000000] <d> 0000000000000040 0000000000000001 0000006cffffffff 0000000001000000
[    0.000000] <d> ffffffff81cb1720 0000000000000000 0000000000000000 0000000000000000
[    0.000000] Call Trace:
[    0.000000]  [<ffffffff81c4f074>] ___alloc_bootmem_nopanic+0x8d/0xca
[    0.000000]  [<ffffffff81c4f0cf>] ___alloc_bootmem+0x11/0x39
[    0.000000]  [<ffffffff81c4f172>] __alloc_bootmem+0xb/0xd
[    0.000000]  [<ffffffff814d42d9>] xsave_cntxt_init+0x249/0x2c0
[    0.000000]  [<ffffffff814e0689>] init_thread_xstate+0x17/0x25
[    0.000000]  [<ffffffff814e0710>] fpu_init+0x79/0xaa
[    0.000000]  [<ffffffff814e27e3>] cpu_init+0x301/0x344
[    0.000000]  [<ffffffff81276395>] ? sort+0x155/0x230
[    0.000000]  [<ffffffff81c30cf2>] trap_init+0x24e/0x25f
[    0.000000]  [<ffffffff81c2bd73>] start_kernel+0x21c/0x430
[    0.000000]  [<ffffffff81c2b33a>] x86_64_start_reservations+0x125/0x129
[    0.000000]  [<ffffffff81c2b438>] x86_64_start_kernel+0xfa/0x109
[    0.000000] Code: 03 48 89 f1 49 c1 e8 0c 48 0f af d0 48 c7 c6 00 a6 61 81 48 c7 c7 00 e5 79 81 31 c0 4c 89 74 24 08 e8 f2 d7 89 ff 4d 85 e4 75 04 <0f> 0b eb fe 48 8b 45 c0 48 83 e8 01 48 85 45
c0 74 04 0f 0b eb

Signed-off-by: Bingsong Si <owen.si@ucloud.cn>
Message-Id: <20190822042901.16858-1-owen.si@ucloud.cn>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Tao Xu e7694a5eae target/i386: drop the duplicated definition of cpuid AVX512_VBMI macro
Drop the duplicated definition of cpuid AVX512_VBMI macro and rename
it as CPUID_7_0_ECX_AVX512_VBMI. Rename CPUID_7_0_ECX_VBMI2 as
CPUID_7_0_ECX_AVX512_VBMI2.

Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20190926021055.6970-3-tao3.xu@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Tao Xu f2be0bebb6 target/i386: clean up comments over 80 chars per line
Add some comments and clean up comments over 80 chars per line. There is
also an extra line in the comment for CPUID_8000_0008_EBX_WBNOINVD; remove
the extra newline and spaces.

Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20190926021055.6970-2-tao3.xu@intel.com>
[ehabkost: rebase to latest git master]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:33 -03:00
Peter Maydell 6ee1864377 target/arm/arm-semi: Implement SH_EXT_STDOUT_STDERR extension
SH_EXT_STDOUT_STDERR is a v2.0 semihosting extension: the guest
can open ":tt" with a file mode requesting append access in
order to open stderr, in addition to the existing "open for
read for stdin or write for stdout". Implement this and
report it via the :semihosting-features data.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-16-peter.maydell@linaro.org
2019-10-15 18:09:04 +01:00
Peter Maydell 22a43bb9ab target/arm/arm-semi: Implement SH_EXT_EXIT_EXTENDED extension
SH_EXT_EXIT_EXTENDED is a v2.0 semihosting extension: it
indicates that the implementation supports the SYS_EXIT_EXTENDED
function. This function allows both A64 and A32/T32 guests to
exit with a specified exit status, unlike the older SYS_EXIT
function which only allowed this for A64 guests. Implement
this extension.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-15-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell c46a653c3a target/arm/arm-semi: Implement support for semihosting feature detection
Version 2.0 of the semihosting specification added support for
allowing a guest to detect whether the implementation supported
particular features. This works by the guest opening a magic
file ":semihosting-features", which contains a fixed set of
data with some magic numbers followed by a sequence of bytes
with feature flags. The file is expected to behave sensibly
for the various semihosting calls which operate on files
(SYS_FLEN, SYS_SEEK, etc).

Implement this as another kind of guest FD using our function
table dispatch mechanism. Initially we report no extended
features, so we have just one feature flag byte which is zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-14-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
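A sketch of what the fixed file contents could look like (the 'SHFB' magic comes from the semihosting v2.0 specification; treat the exact byte layout here as illustrative):

    #include <stdint.h>

    /* Contents of the magic ":semihosting-features" file: a magic header
     * followed by feature-flag bytes.  With no extended features
     * implemented yet there is a single flag byte of zero. */
    static const uint8_t semihosting_features[] = {
        0x53, 0x48, 0x46, 0x42,   /* magic: 'S', 'H', 'F', 'B' */
        0x00,                     /* feature byte 0: no extensions */
    };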
Peter Maydell 1631a7be3a target/arm/arm-semi: Factor out implementation of SYS_FLEN
Factor out the implementation of SYS_FLEN via the new
function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-13-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 45e88ffc76 target/arm/arm-semi: Factor out implementation of SYS_SEEK
Factor out the implementation of SYS_SEEK via the new function
tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-12-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 0213fa452f target/arm/arm-semi: Factor out implementation of SYS_ISTTY
Factor out the implementation of SYS_ISTTY via the new function
tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-11-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 2c3a09a620 target/arm/arm-semi: Factor out implementation of SYS_READ
Factor out the implementation of SYS_READ via the
new function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-10-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 52c8a163c1 target/arm/arm-semi: Factor out implementation of SYS_WRITE
Factor out the implementation of SYS_WRITE via the
new function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-9-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 263eb621de target/arm/arm-semi: Factor out implementation of SYS_CLOSE
Currently for the semihosting calls which take a file descriptor
(SYS_CLOSE, SYS_WRITE, SYS_READ, SYS_ISTTY, SYS_SEEK, SYS_FLEN)
we have effectively two implementations, one for real host files
and one for when we indirect via the gdbstub. We want to add a
third one to deal with the magic :semihosting-features file.

Instead of having a three-way if statement in each of these
cases, factor out the implementation of the calls to separate
functions which we dispatch to via function pointers selected
via the GuestFDType for the guest fd.

In this commit, we set up the framework for the dispatch,
and convert the SYS_CLOSE call to use it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-8-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
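The dispatch has roughly the following shape (a simplified sketch; the real signatures also carry CPU state and guest arguments):

    #include <stdint.h>

    typedef enum GuestFDType {
        GuestFDUnused = 0,
        GuestFDHost,          /* backed by a host file descriptor     */
        GuestFDGDB,           /* forwarded to the gdbstub             */
        GuestFDFeatureFile,   /* the magic :semihosting-features data */
    } GuestFDType;

    typedef struct GuestFD GuestFD;

    /* One implementation of each semihosting file operation per FD type. */
    typedef struct GuestFDFunctions {
        uint32_t (*closefn)(GuestFD *gf);
        uint32_t (*writefn)(GuestFD *gf, uint64_t buf, uint32_t len);
        uint32_t (*readfn)(GuestFD *gf, uint64_t buf, uint32_t len);
        uint32_t (*isattyfn)(GuestFD *gf);
        uint32_t (*seekfn)(GuestFD *gf, uint32_t offset);
        uint32_t (*flenfn)(GuestFD *gf);
    } GuestFDFunctions;

    struct GuestFD {
        GuestFDType type;
        int hostfd;           /* meaningful for host/gdb backed fds */
    };

    /* SYS_CLOSE then reduces to: guestfd_fns[gf->type].closefn(gf); */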
Peter Maydell 939f5b4331 target/arm/arm-semi: Use set_swi_errno() in gdbstub callback functions
When we are routing semihosting operations through the gdbstub, the
work of sorting out the return value and setting errno if necessary
is done by callback functions which are invoked by the gdbstub code.
Clean up some ifdeffery in those functions by having them call
set_swi_errno() to set the semihosting errno.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-7-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 6ed6845532 target/arm/arm-semi: Restrict use of TaskState*
The semihosting code needs access to the linux-user-only
TaskState pointer so it can set the semihosting errno per-thread
for linux-user mode. At the moment we do this by having some
ifdefs so that we define a 'ts' local in do_arm_semihosting()
which is either a real TaskState * or just a CPUARMState *,
depending on which mode we're compiling for.

This is awkward if we want to refactor do_arm_semihosting()
into other functions which might need to be passed the TaskState.
Restrict usage of the TaskState local by:
 * making set_swi_errno() always take the CPUARMState pointer
   and (for the linux-user version) get TaskState from that
 * creating a new get_swi_errno() which reads the errno
 * having the two semihosting calls which need the TaskState
   for other purposes (SYS_GET_CMDLINE and SYS_HEAPINFO)
   define a variable with scope restricted to just that code

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-6-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 35e9a0a8ce target/arm/arm-semi: Make semihosting code hand out its own file descriptors
Currently the Arm semihosting code returns the guest file descriptors
(handles) which are simply the fd values from the host OS or the
remote gdbstub. Part of the semihosting 2.0 specification requires
that we implement special handling of opening a ":semihosting-features"
filename. Guest fds which result from opening the special file
won't correspond to host fds, so to ensure that we don't end up
with duplicate fds we need to have QEMU code control the allocation
of the fd values we give the guest.

Add in an abstraction layer which lets us allocate new guest FD
values, and translate from a guest FD value back to the host one.
This also fixes an odd hole where a semihosting guest could
use the semihosting API to read, write or close file descriptors
that it had never allocated but which were being used by QEMU itself.
(This isn't a security hole, because enabling semihosting permits
the guest to do arbitrary file access to the whole host filesystem,
and so should only be done if the guest is completely trusted.)

Currently the only kind of guest fd is one which maps to a
host fd, but in a following commit we will add one which maps
to the :semihosting-features magic data.

If the guest is migrated with an open semihosting file descriptor
then subsequent attempts to use the fd will all fail; this is
not a change from the previous situation (where the host fd
being used on the source end would not be re-opened on the
destination end).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-5-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
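A minimal standalone sketch of such an indirection layer (array-based for brevity; names and sizes are illustrative):

    #define MAX_GUEST_FDS 64

    /* Slots store hostfd + 1 so that 0 means "unused". */
    static int guest_fd_table[MAX_GUEST_FDS];

    /* Hand out a new guest fd bound to the given host fd, or -1 if the
     * table is full.  Slot 0 is skipped so a valid guest fd is never 0. */
    static int alloc_guestfd(int hostfd)
    {
        for (int i = 1; i < MAX_GUEST_FDS; i++) {
            if (guest_fd_table[i] == 0) {
                guest_fd_table[i] = hostfd + 1;
                return i;
            }
        }
        return -1;
    }

    /* Translate a guest fd back to the host fd, or -1 if the guest never
     * obtained it from us (so it cannot reach QEMU's own descriptors). */
    static int get_hostfd(int guestfd)
    {
        if (guestfd <= 0 || guestfd >= MAX_GUEST_FDS) {
            return -1;
        }
        return guest_fd_table[guestfd] - 1;
    }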
Peter Maydell f8ad2306d1 target/arm/arm-semi: Correct comment about gdb syscall races
In arm_gdb_syscall() we have a comment suggesting a race
because the syscall completion callback might not happen
before the gdb_do_syscallv() call returns. The comment is
correct that the callback may not happen but incorrect about
the effects. Correct it and note the important caveat that
callers must never do any work of any kind after return from
arm_gdb_syscall() that depends on its return value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-4-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell f7d38cf2d0 target/arm/arm-semi: Always set some kind of errno for failed calls
If we fail a semihosting call we should always set the
semihosting errno to something; we were failing to do
this for some of the "check inputs for sanity" cases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-3-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell 1b003821d4 target/arm/arm-semi: Capture errno in softmmu version of set_swi_errno()
The set_swi_errno() function is called to capture the errno
from a host system call, so that we can return -1 from the
semihosting function and later allow the guest to get a more
specific error code with the SYS_ERRNO function. It comes in
two versions, one for user-only and one for softmmu. We forgot
to capture the errno in the softmmu version; fix the error.

(Semihosting calls directed to gdb are unaffected because
they go through a different code path that captures the
error return from the gdbstub call in arm_semi_cb() or
arm_semi_flen_cb().)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-2-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Eric Auger fff9f5558d ARM: KVM: Check KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 for smp_cpus > 256
Host kernels within [4.18, 5.3] report an erroneous KVM_MAX_VCPUS=512
for ARM. The actual capability to instantiate more than 256 vcpus
was fixed in 5.4 with the upgrade of the KVM_IRQ_LINE ABI to support
vcpu id encoded on 12 bits instead of 8 and a redistributor consuming
a single KVM IO device instead of 2.

So let's check this capability when attempting to use more than 256
vcpus within any ARM kvm accelerated machine.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-4-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-15 18:09:02 +01:00
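The check amounts to something like the following at accelerator init time (a sketch; kvm_check_extension() is QEMU's existing wrapper around KVM_CHECK_EXTENSION, the surrounding names are illustrative):

    /* Refuse configurations the host kernel cannot actually support:
     * more than 256 vcpus need the fixed 12-bit vcpu id encoding. */
    if (smp_cpus > 256 &&
        !kvm_check_extension(kvm_state, KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)) {
        error_report("Using more than 256 vcpus requires a host kernel "
                     "with KVM_CAP_ARM_IRQ_LINE_LAYOUT_2");
        return -EINVAL;
    }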
Eric Auger f6530926e2 intc/arm_gic: Support IRQ injection for more than 256 vcpus
Host kernels that expose the KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 capability
allow injection of interrupts along with vcpu ids larger than 255.
Let's encode the vcpu id on 12 bits according to the upgraded KVM_IRQ_LINE
ABI when needed.

Given that we have two callsites that need to assemble
the value for kvm_set_irq(), a new helper routine, kvm_arm_set_irq,
is introduced.

Without that patch qemu exits with "kvm_set_irq: Invalid argument"
message.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-3-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-15 18:09:02 +01:00
David Hildenbrand 1f6493be08 s390x/tcg: MVCL: Exit to main loop if requested
MVCL is interruptible and we should check for interrupts and process
them after writing back the variables to the registers. Let's check
for any exit requests and exit to the main loop. Introduce a new helper
function for that: cpu_loop_exit_requested().

When booting Fedora 30, I can see a handful of these exits and it seems
to work reliably. Also, Richard explained why this works correctly even
when MVCL is called via EXECUTE:

    (1) TB with EXECUTE runs, at address Ae
        - env->psw_addr stored with Ae.
        - helper_ex() runs, memory address Am computed
          from D2a(X2a,B2a) or from psw.addr+RI2.
        - env->ex_value stored with memory value modified by R1a

    (2) TB of executee runs,
        - env->ex_value stored with 0.
        - helper_mvcl() runs, using and updating R1b, R1b+1, R2b, R2b+1.

    (3a) helper_mvcl() completes,
         - TB of executee continues, psw.addr += ilen.
         - Next instruction is the one following EXECUTE.

    (3b) helper_mvcl() exits to main loop,
         - cpu_loop_exit_restore() unwinds psw.addr = Ae.
         - Next instruction is the EXECUTE itself...
         - goto 1.

As the PoP mentions that an interruptible instruction called via EXECUTE
should avoid modifying storage/registers that are used by EXECUTE itself,
it is fine to retrigger EXECUTE.

Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-10 12:27:15 +02:00
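Inside the helper the pattern looks roughly like this (a sketch using the cpu_loop_exit_requested() helper this patch introduces; the surrounding names are illustrative):

    /* In the MVCL copy loop, after the partial result has been written
     * back to the register file: */
    if (cpu_loop_exit_requested(cs)) {
        /* Unwind to the main loop; the PSW still points at MVCL (or at
         * the EXECUTE that issued it), so execution resumes correctly. */
        cpu_loop_exit_restore(cs, ra);
    }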
Richard Henderson 1cccdef3e3 target/s390x: Remove ILEN_UNWIND
This setting is no longer used.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-19-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson 5c58704b07 target/s390x: Remove ilen argument from trigger_pgm_exception
All but one caller passes ILEN_UNWIND, which is not stored.
For the one use case in s390_cpu_tlb_fill, set int_pgm_ilen
directly, simply to avoid the assert within do_program_interrupt.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-18-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson f5cbdc4397 target/s390x: Remove ilen argument from trigger_access_exception
The single caller passes ILEN_UNWIND; pass that along to
trigger_pgm_exception directly.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-17-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson 20e1372b7c target/s390x: Remove ILEN_AUTO
This setting is no longer used.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-16-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson 2550953b20 target/s390x: Rely on unwinding in s390_cpu_virt_mem_rw
For TCG, we will always call s390_cpu_virt_mem_handle_exc,
which will go through the unwinder to set ILEN.  For KVM,
we do not go through do_program_interrupt, so this argument
is unused.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-15-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson 9e1dae315f target/s390x: Rely on unwinding in s390_cpu_tlb_fill
We currently set ilen to AUTO, then overwrite that during
unwinding, then overwrite that for the code access case.

This can be simplified to setting ilen to our arbitrary
value for the (undefined) code access case, then rely on
unwinding to overwrite that with the correct value for
the data access case.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-14-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 9accc852d8 target/s390x: Simplify helper_lra
We currently call trigger_pgm_exception to set cs->exception_index
and env->int_pgm_code and then read the values back and then
reset cs->exception_index so that the exception is not delivered.

Instead, use the exception type that we already have directly
without ever triggering an exception that must be suppressed.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-13-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 42007b1982 target/s390x: Remove fail variable from s390_cpu_tlb_fill
Now that excp always contains a real exception number, we can
use that instead of a separate fail variable.  This allows a
redundant test to be removed.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-12-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson a79d225335 target/s390x: Return exception from translate_pages
Do not raise the exception directly within translate_pages,
but pass it back so that caller may do so.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-11-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson ce7ac79d28 target/s390x: Return exception from mmu_translate
Do not raise the exception directly within mmu_translate,
but pass it back so that caller may do so.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-10-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson c7363b28ff target/s390x: Remove exc argument to mmu_translate_asce
Now that mmu_translate_asce returns the exception instead of
raising it, the argument is unused.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-9-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 31b5941906 target/s390x: Return exception from mmu_translate_real
Do not raise the exception directly within mmu_translate_real,
but pass it back so that caller may do so.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-8-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 1ab3302886 target/s390x: Handle tec in s390_cpu_tlb_fill
As a step toward moving all exception handling out of mmu_translate,
copy handling of the LowCore tec value from trigger_access_exception
into s390_cpu_tlb_fill.  So far this new plumbing isn't used.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-7-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 18ab936d00 target/s390x: Push trigger_pgm_exception lower in s390_cpu_tlb_fill
Delay triggering an exception until the end, after we have
determined ultimate success or failure, and also taken into
account whether this is a non-faulting probe.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-6-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 1e36aee636 target/s390x: Use tcg_s390_program_interrupt in TCG helpers
Replace all uses of s390_program_interrupt within files
that are marked CONFIG_TCG.  These are necessarily tcg-only.

This lets each of these users benefit from the QEMU_NORETURN
attribute on tcg_s390_program_interrupt.

Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-5-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 77b703f84f target/s390x: Remove ilen parameter from s390_program_interrupt
This is no longer used, and many of the existing uses -- particularly
within hw/s390x -- seem questionable.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-4-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson 3e20185892 target/s390x: Remove ilen parameter from tcg_s390_program_interrupt
Since we begin the operation with an unwind, we have the proper
value of ilen immediately available.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-3-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson c87ff4d108 target/s390x: Add ilen to unwind data
Use ILEN_UNWIND to signal that cpu_restore_state will in fact have been
called by the time we arrive in do_program_interrupt.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20191001171614.8405-2-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand b580b6ee05 s390x/cpumodel: Add new TCG features to QEMU cpu model
We now implement a bunch of new facilities we can properly indicate.

ESOP-1/ESOP-2 handling is discussed in the PoP Chapter 3-15
("Suppression on Protection"). The "Basic suppression-on-protection (SOP)
facility" is a core part of z/Architecture without a facility
indication. ESOP-2 is indicated by ESOP-1 + Side-effect facility
("ESOP-2"). Besides ESOP-2, the side-effect facility is only relevant for
the guarded-storage facility (which we don't implement).

S390_ESOP:
- We indicate DAT exceptions by setting bit 61 of the TEID (TEC) to 1 and
  bit 60 to zero. We don't trigger ALCP exceptions yet. Also, we set
  bit 0-51 and bit 62/63 to the right values.
S390_ACCESS_EXCEPTION_FS_INDICATION:
- The TEID (TEC) properly indicates in bits 52/53, for any access, whether
  it was a fetch or a store
S390_SIDE_EFFECT_ACCESS_ESOP2:
- We have no side-effect accesses (esp., we don't implement the
  guarded-storage facility), so we correctly set bit 64 of the TEID (TEC) to
  0 (no side-effect).
- ESOP2: We properly set bit 56, 60, 61 in the TEID (TEC) to indicate the
  type of protection. We don't trigger KCP/ALCP exceptions yet.
S390_INSTRUCTION_EXEC_PROT:
- The MMU properly detects and indicates the exception on instruction fetches
- Protected TLB entries will never get PAGE_EXEC set.

There is no need to fake the absence of any of the facilities - without
the facilities, some bits of the TEID (TEC) are simply unpredictable.

As IEP was added with z14 and we currently implement a z13, add it to
the MAX model instead.

Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand faa40177bb s390x/cpumodel: Prepare for changes of QEMU model
Setup the 4.1 compatibility model so we can add new features to the
LATEST model.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand 3a06f98192 s390x/mmu: Implement Instruction-Execution-Protection Facility
IEP support in the mmu is fairly easy. Set the right permissions for TLB
entries and properly report an exception.

Make sure to handle EDAT-2 by setting bit 56/60/61 of the TEID (TEC) to
the right values.

Let's keep s390_cpu_get_phys_page_debug() working even if IEP is
active. Switch to MMU_DATA_LOAD - this has no other effects any more as the
ASC to be used is now fully selected outside of mmu_translate().
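
As a hedged sketch of what IEP amounts to in a page-table walk (flag values,
bit positions and names here are made up, not the s390x definitions): an
IEP-protected page never gets the execute permission in its TLB entry, and an
instruction fetch from such a page reports a protection exception.

    #include <stdbool.h>

    enum { PROT_READ = 1, PROT_WRITE = 2, PROT_EXEC = 4 };
    enum { ACCESS_LOAD, ACCESS_STORE, ACCESS_IFETCH };

    static int pte_to_prot(unsigned long pte)
    {
        bool iep = pte & (1ul << 10);   /* made-up IEP bit in the entry */
        int prot = PROT_READ | PROT_WRITE | PROT_EXEC;

        if (iep) {
            prot &= ~PROT_EXEC;         /* never hand out exec for IEP pages */
        }
        return prot;
    }

    static int check_access(unsigned long pte, int access_type)
    {
        if (access_type == ACCESS_IFETCH && !(pte_to_prot(pte) & PROT_EXEC)) {
            return 4;                   /* made-up protection exception code */
        }
        return 0;
    }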

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand 3dc29061f3 s390x/mmu: Implement ESOP-2 and access-exception-fetch/store-indication facility
We already implement ESOP-1. For ESOP-2, we only have to indicate all
protection exceptions properly. Due to EDAT-1, we already indicate DAT
exceptions properly. We don't trigger KCP/ALCP/IEP exceptions yet.

So all we have to do is set the TEID (TEC) to the right values
(bit 56, 60, 61) in case of LAP.

We don't have any side-effects (e.g., no guarded-storage facility),
therefore, bit 64 of the TEID (TEC) is always 0.

We always have to indicate whether it is a fetch or a store for all access
exceptions. This is only missing for LAP exceptions.
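
As a worked example of the bit bookkeeping, using the PoP convention that
bit 0 is the most significant bit of the 64-bit TEID (the helper macro and
the LAP encoding chosen below are illustrative; only the bit positions come
from the description above):

    #include <stdint.h>
    #include <stdbool.h>

    /* PoP-style numbering: bit 0 is the MSB of the 64-bit word. */
    #define TEID_BIT(n)  (1ULL << (63 - (n)))

    static uint64_t build_teid(uint64_t failing_addr, bool is_store, bool lap)
    {
        uint64_t teid = failing_addr & ~0xfffULL;  /* bits 0-51: failing page */

        /* Fetch/store indication in bits 52/53. */
        teid |= is_store ? TEID_BIT(53) : TEID_BIT(52);

        if (lap) {
            /* Bits 56/60/61 encode the protection type; this particular
             * combination is a placeholder, not the architected encoding. */
            teid |= TEID_BIT(56);
        }
        return teid;
    }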

Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand 90790898a1 s390x/mmu: Add EDAT2 translation support
This only adds basic support to the DAT translation, but no EDAT2 support
for TCG. E.g., the gdbstub under kvm uses this function, too, to
translate virtual addresses.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand a4e95b41a1 s390x/mmu: Convert to non-recursive page table walk
A non-recursive implementation allows us to make better use of the
branch predictor, avoids function calls, and makes the implementation of
new features only for a subset of region table levels easier.

We can now directly compare our implementation to the KVM gaccess
implementation in arch/s390/kvm/gaccess.c:guest_translate().
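
A generic sketch of the shape of such a loop (the entry layout, level count
and index split below are invented, not the s390x DAT format): each iteration
resolves one table level, so the whole walk becomes a single loop instead of
a chain of recursive calls.

    #include <stdint.h>

    /* Hypothetical: fetch one 8-byte table entry from guest memory. */
    static uint64_t read_entry(uint64_t table_origin, unsigned index)
    {
        return table_origin + index * 8;    /* placeholder for a real read */
    }

    static int walk(uint64_t asce, uint64_t vaddr, uint64_t *out_frame)
    {
        uint64_t origin = asce & ~0xfffULL;

        /* From the highest region level down to the page table. */
        for (int level = 3; level >= 0; level--) {
            unsigned index = (vaddr >> (12 + 9 * level)) & 0x1ff; /* made up */
            uint64_t entry = read_entry(origin, index);

            if (entry & 0x20) {             /* made-up invalid bit */
                return -1;                  /* caller injects the exception */
            }
            origin = entry & ~0xfffULL;     /* next table, or the final frame */
        }
        *out_frame = origin;
        return 0;
    }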

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand 3fd0e85f3f s390x/mmu: DAT table definition overhaul
Let's use consistent names for the region/section/page table entries and
for the macros to extract relevant parts from virtual address. Make them
match the definitions in the PoP - e.g., how the relevant bits are actually
called.

Introduce defines for all bits declared in the PoP. This will come in
handy in follow-up patches.

Add a note where additional information about s390x and the used
definitions can be found.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:48:46 +02:00
David Hildenbrand ae6d48d43f s390x/mmu: Use TARGET_PAGE_MASK in mmu_translate_pte()
While ASCE_ORIGIN is not wrong, it is certainly confusing. We want a
page frame address.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand 2ed0cd7cd7 s390x/mmu: Inject PGM_ADDRESSING on bogus table addresses
Let's document how it works and inject PGM_ADDRESSING if reading of
table entries fails.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand 81d7e3bc45 s390x/mmu: Inject DAT exceptions from a single place
Let's return the PGM from the translation functions on error and inject
based on that.
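
Sketch of the resulting pattern, with hypothetical names: the translation
helpers only return a program-interruption code, and a single caller turns a
non-zero code into an injected exception.

    /* Hypothetical codes and injection helper. */
    enum { PGM_NONE = 0, PGM_SEGMENT_TRANS = 0x10, PGM_PAGE_TRANS = 0x11 };

    static void inject_pgm(int code) { (void)code; /* one injection point */ }

    static int translate_region(void)  { return PGM_NONE; }
    static int translate_segment(void) { return PGM_PAGE_TRANS; }

    static int mmu_translate_sketch(void)
    {
        int pgm = translate_region();

        if (!pgm) {
            pgm = translate_segment();
        }
        return pgm;                 /* no injection down here any more */
    }

    static void access_guest(void)
    {
        int pgm = mmu_translate_sketch();

        if (pgm) {
            inject_pgm(pgm);        /* the single place that injects */
        }
    }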

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand 124ada6810 s390x/mmu: Move DAT protection handling out of mmu_translate_asce()
We'll reuse the ilen and tec definitions in mmu_translate
soon also for all other DAT exceptions we inject. Move it to the caller,
where we can later pair it up with other protection checks, like IEP.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand a780d096e6 s390x/mmu: Drop debug logging from MMU code
Let's get it out of the way to make some further refactorings easier.
Personally, I've never used these debug statements at all. And if I had
to debug issues, I used plain GDB instead (debug prints are just way too
much noise in the MMU). We might want to introduce tracing at some point
instead, so we can enable selected events on demand.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
Peter Maydell 0f0b43868a ppc patch queue 2019-10-04
Here's the next batch of ppc and spapr patches.  Includes:
   * First part of a large cleanup to irq infrastructure
   * Recreate the full FDT at CAS time, instead of making a difficult
     to follow set of updates.  This will help us move towards
     eliminating CAS reboots altogether
   * No longer provide RTAS blob to SLOF - SLOF can include it just as
     well itself, since guests will generally need to relocate it with
     a call to instantiate-rtas
   * A number of DFP fixes and cleanups from Mark Cave-Ayland
   * Assorted bugfixes
   * Several new small devices for powernv
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl2XEn0ACgkQbDjKyiDZ
 s5I6bA/7B5sjY/QxuE8axm5KupoAnE8zf205hN8mbYASwtDfFwgaeNreVaOSJUpr
 fgcx/g9G3rAryGZv3O6i02+wcRgNw1DnJ3ynCthIrExZEcfbTYJiS4s9apwPEQy8
 HFmBNdPDqrhFI0aFvXEUauiOp1aapPUUklm34eFscs94lJXxphRUEfa3XT5uEhUh
 xrIZwYq20A+ih4UHwk3Onyx/cvFpl6BRB2nVEllQFqzwF5eTTfz9t8+JGTebxD/7
 8qqt8ti0KM3wxSDTQnmyMUmpgy+C1iCvNYvv6nWFg+07QuGs48EHlQUUVVni4r9j
 kUrDwKS2eC+8e8gP/xdIXEq3R2DsAMq+wFIswXZ3X6x4DoUV0OAJSHc9iMD4l+pr
 LyWnVpDprc6XhJHWKpuHZ5w9EuBnZFbIXdlZGFno+8UvXtusnbbuwAZzHTrRJRqe
 /AWVpFwGAoOF4KxIOFlPVBI8m4vFad/soVojC0vzIbRqaogOFZAjiL/yD5GwLmMa
 tywOEMBUJ/j2lgudTCyKn5uCa/Ew3DS1TSdenJjyqRi/gZM0IaORIhJhyFYW/eO1
 U7Uh8BnbC+4J11wwvFR5+W789dgM2+EEtAX9uI08VcE/R2ASabZlN4Zwrl0w4cb/
 VRybMT4bgmjzHRpfrqYPxpn8wqPcIw0BCeipSOjY3QU1Q25TEYQ=
 =PXXe
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.2-20191004' into staging

ppc patch queue 2019-10-04

Here's the next batch of ppc and spapr patches.  Includes:
  * First part of a large cleanup to irq infrastructure
  * Recreate the full FDT at CAS time, instead of making a difficult
    to follow set of updates.  This will help us move towards
    eliminating CAS reboots altogether
  * No longer provide RTAS blob to SLOF - SLOF can include it just as
    well itself, since guests will generally need to relocate it with
    a call to instantiate-rtas
  * A number of DFP fixes and cleanups from Mark Cave-Ayland
  * Assorted bugfixes
  * Several new small devices for powernv

# gpg: Signature made Fri 04 Oct 2019 10:35:57 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-4.2-20191004: (53 commits)
  ppc/pnv: Remove the XICSFabric Interface from the POWER9 machine
  spapr: Eliminate SpaprIrq::init hook
  spapr: Add return value to spapr_irq_check()
  spapr: Use less cryptic representation of which irq backends are supported
  xive: Improve irq claim/free path
  spapr, xics, xive: Better use of assert()s on irq claim/free paths
  spapr: Handle freeing of multiple irqs in frontend only
  spapr: Remove unhelpful tracepoints from spapr_irq_free_xics()
  spapr: Eliminate SpaprIrq:get_nodename method
  spapr: Simplify spapr_qirq() handling
  spapr: Fix indexing of XICS irqs
  spapr: Eliminate nr_irqs parameter to SpaprIrq::init
  spapr: Clarify and fix handling of nr_irqs
  spapr: Replace spapr_vio_qirq() helper with spapr_vio_irq_pulse() helper
  spapr: Fold spapr_phb_lsi_qirq() into its single caller
  xics: Create sPAPR specific ICS subtype
  xics: Merge TYPE_ICS_BASE and TYPE_ICS_SIMPLE classes
  xics: Eliminate reset hook
  xics: Rename misleading ics_simple_*() functions
  xics: Eliminate 'reject', 'resend' and 'eoi' class hooks
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-07 13:49:02 +01:00
Peter Maydell 9e5319ca52 * Compilation fix for KVM (Alex)
* SMM fix (Dmitry)
 * VFIO error reporting (Eric)
 * win32 fixes and workarounds (Marc-André)
 * qemu-pr-helper crash bugfix (Maxim)
 * Memory leak fixes (myself)
 * VMX features (myself)
 * Record-replay deadlock (Pavel)
 * i386 CPUID bits (Sebastian)
 * kconfig tweak (Thomas)
 * Valgrind fix (Thomas)
 * Autoconverge test (Yury)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJdl3oMAAoJEL/70l94x66DPPAH/0ibsi1Mg8px8lyMHsZT6uIH
 dwlgC7zElbiKBZc/yJWGUXs/pYdjHv/vdGRXZqunTpKWvDS6Dyfkxyz+9S7A7bzo
 wFcdvLPTukrt0ZGZ7XCZzCK/dNpKivqgPtEgURy7bfOGf6uqlDC9qx5yF1BIutIX
 UXguw/+BndKlCRqSsuJA926HPtg1vuIODT7fkynCY5YCLFR26UrBvPhYFA9X8oKd
 VIRKwcH44rBjmlogcDG94ZasAjPFkujWEcsCIqmf9aH9GqNA+2YdIFyOGQ+WltV5
 eNh3ppQAwqHN5wBfRW68E58mdZznQjzOiiJuui24JuopsazpJXs+KuPUv4e63rU=
 =9iVL
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Compilation fix for KVM (Alex)
* SMM fix (Dmitry)
* VFIO error reporting (Eric)
* win32 fixes and workarounds (Marc-André)
* qemu-pr-helper crash bugfix (Maxim)
* Memory leak fixes (myself)
* VMX features (myself)
* Record-replay deadlock (Pavel)
* i386 CPUID bits (Sebastian)
* kconfig tweak (Thomas)
* Valgrind fix (Thomas)
* Autoconverge test (Yury)

# gpg: Signature made Fri 04 Oct 2019 17:57:48 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (29 commits)
  target/i386/kvm: Silence warning from Valgrind about uninitialized bytes
  target/i386: work around KVM_GET_MSRS bug for secondary execution controls
  target/i386: add VMX features
  vmxcap: correct the name of the variables
  target/i386: add VMX definitions
  target/i386: expand feature words to 64 bits
  target/i386: introduce generic feature dependency mechanism
  target/i386: handle filtered_features in a new function mark_unavailable_features
  tests/docker: only enable ubsan for test-clang
  win32: work around main-loop busy loop on socket/fd event
  tests: skip serial test on windows
  util: WSAEWOULDBLOCK on connect should map to EINPROGRESS
  Fix wrong behavior of cpu_memory_rw_debug() function in SMM
  memory: allow memory_region_register_iommu_notifier() to fail
  vfio: Turn the container error into an Error handle
  i386: Add CPUID bit for CLZERO and XSAVEERPTR
  docker: test-debug: disable LeakSanitizer
  lm32: do not leak memory on object_new/object_unref
  cris: do not leak struct cris_disasm_data
  mips: fix memory leaks in board initialization
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-04 18:32:34 +01:00
Thomas Huth a1834d975f target/i386/kvm: Silence warning from Valgrind about uninitialized bytes
When I run QEMU with KVM under Valgrind, I currently get this warning:

 Syscall param ioctl(generic) points to uninitialised byte(s)
    at 0x95BA45B: ioctl (in /usr/lib64/libc-2.28.so)
    by 0x429DC3: kvm_ioctl (kvm-all.c:2365)
    by 0x51B249: kvm_arch_get_supported_msr_feature (kvm.c:469)
    by 0x4C2A49: x86_cpu_get_supported_feature_word (cpu.c:3765)
    by 0x4C4116: x86_cpu_expand_features (cpu.c:5065)
    by 0x4C7F8D: x86_cpu_realizefn (cpu.c:5242)
    by 0x5961F3: device_set_realized (qdev.c:835)
    by 0x7038F6: property_set_bool (object.c:2080)
    by 0x707EFE: object_property_set_qobject (qom-qobject.c:26)
    by 0x705814: object_property_set_bool (object.c:1338)
    by 0x498435: pc_new_cpu (pc.c:1549)
    by 0x49C67D: pc_cpus_init (pc.c:1681)
  Address 0x1ffeffee74 is on thread 1's stack
  in frame #2, created by kvm_arch_get_supported_msr_feature (kvm.c:445)

It's harmless, but a little bit annoying, so silence it by properly
initializing the whole structure with zeroes.
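
The generic pattern behind such a fix is simply to zero the argument
structure before handing it to the ioctl; an illustrative example (not the
actual patch):

    #include <string.h>
    #include <sys/ioctl.h>

    struct some_ioctl_arg {
        unsigned int nmsrs;
        unsigned int pad;          /* padding like this is what Valgrind flags */
        unsigned long long data[1];
    };

    static int query_feature(int fd, unsigned long request)
    {
        struct some_ioctl_arg arg;

        memset(&arg, 0, sizeof(arg));   /* or: ... arg = { 0 }; */
        arg.nmsrs = 1;
        return ioctl(fd, request, &arg);
    }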

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini 048c95163b target/i386: work around KVM_GET_MSRS bug for secondary execution controls
Some secondary controls are automatically enabled/disabled based on the CPUID
values that are set for the guest.  However, they are still available at a
global level and therefore should be present when KVM_GET_MSRS is sent to
/dev/kvm.

Unfortunately KVM forgot to include those, so fix that.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini 20a78b02d3 target/i386: add VMX features
Add code to convert the VMX feature words back into MSR values,
allowing the user to enable/disable VMX features as they wish.  The same
infrastructure enables support for limiting VMX features in named
CPU models.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini 704798add8 target/i386: add VMX definitions
These will be used to compile the list of VMX features for named
CPU models, and/or by the code that sets up the VMX MSRs.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini ede146c2e7 target/i386: expand feature words to 64 bits
VMX requires 64-bit feature words for the IA32_VMX_EPT_VPID_CAP
and IA32_VMX_BASIC MSRs.  (The VMX control MSRs are 64-bit wide but
actually have only 32 bits of information).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini 99e24dbdaa target/i386: introduce generic feature dependency mechanism
Sometimes a CPU feature does not make sense unless another is
present.  In the case of VMX features, KVM does not even allow
setting the VMX controls to some invalid combinations.

Therefore, this patch adds a generic mechanism that looks for bits
that the user explicitly cleared, and uses them to remove other bits
from the expanded CPU definition.  If these dependent bits were also
explicitly *set* by the user, this will be a warning for "-cpu check"
and an error for "-cpu enforce".  If not, then the dependent bits are
cleared silently, for convenience.
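
One compact way such a dependency table can be expressed (hypothetical
structure and bit numbers, not the QEMU one): each entry names a feature and
the bits that depend on it; when the user clears the former, the dependents
are cleared too, with a warning only if they were explicitly requested.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t from;  /* if the user cleared these bits ...            */
        uint64_t to;    /* ... these dependent bits must be cleared too  */
    } FeatureDep;

    static const FeatureDep deps[] = {
        /* e.g. clearing RDRAND also hides "RDRAND exiting" (made-up bits) */
        { .from = 1ULL << 3, .to = 1ULL << 40 },
    };

    static uint64_t apply_deps(uint64_t feats, uint64_t minus, uint64_t plus)
    {
        for (size_t i = 0; i < sizeof(deps) / sizeof(deps[0]); i++) {
            if (minus & deps[i].from) {
                if (plus & deps[i].to) {
                    fprintf(stderr, "warning: dependent feature forced off\n");
                }
                feats &= ~deps[i].to;   /* otherwise clear it silently */
            }
        }
        return feats;
    }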

With VMX features, this will be used so that for example
"-cpu host,-rdrand" will also hide support for RDRAND exiting.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini 245edd0cfb target/i386: handle filtered_features in a new function mark_unavailable_features
The next patch will add a different reason for filtering features, unrelated
to host feature support.  Extract a new function that takes care of disabling
the features and optionally reporting them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Dmitry Poletaev 56f997500a Fix wrong behavior of cpu_memory_rw_debug() function in SMM
There is a problem: you don't have access to the data through the
cpu_memory_rw_debug() function when the CPU is in SMM. Because of that you
can't, for example, remotely debug a program running in SMM.
The attrs variant of get_phys_page_debug should be used so that the correct
asidx is obtained at the end and the access is handled properly.
Here is the patch to fix it.
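
In rough outline (simplified, with stand-in names loosely modelled on the
debug-access path, not the actual patch): the attrs-returning page lookup
lets the caller pick the right address space index before touching guest
memory.

    #include <stdint.h>

    typedef struct { int secure; } MemAttrs;    /* stand-in for MemTxAttrs */

    /* Hypothetical stubs for the debug translation helpers. */
    static uint64_t get_phys_page_attrs_debug(uint64_t vaddr, MemAttrs *attrs)
    {
        attrs->secure = 1;                       /* e.g. the SMM address space */
        return vaddr & ~0xfffULL;
    }
    static int asidx_from_attrs(MemAttrs attrs) { return attrs.secure ? 1 : 0; }
    static void rw_phys(int asidx, uint64_t pa)  { (void)asidx; (void)pa; }

    static void debug_read(uint64_t vaddr)
    {
        MemAttrs attrs;
        uint64_t paddr = get_phys_page_attrs_debug(vaddr, &attrs);
        int asidx = asidx_from_attrs(attrs);     /* SMM vs. normal space */

        rw_phys(asidx, paddr);                   /* access the right space */
    }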

Signed-off-by: Dmitry Poletaev <poletaev@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:18 +02:00
Sebastian Andrzej Siewior e900135dcf i386: Add CPUID bit for CLZERO and XSAVEERPTR
The CPUID bits CLZERO and XSAVEERPTR are available on AMD's ZEN platform
and could be passed to the guest.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:17 +02:00
Mark Cave-Ayland 428115c3a9 target/ppc: use Vsr macros in BCD helpers
This allows us to remove more endian-specific defines from int_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190926204453.31837-1-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland f6d4c423a2 target/ppc: remove unnecessary if() around calls to set_dfp{64,128}() in DFP macros
Now that the parameters to both set_dfp64() and set_dfp128() are exactly the
same, there is no need for an explicit if() statement to determine which
function should be called based upon size. Instead we can simply use the
preprocessor to generate the call to set_dfp##size() directly.
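
The preprocessor mechanism referred to is plain token pasting; a generic
illustration (not the dfp_helper.c macro itself):

    #include <stdint.h>

    static void set_val64(uint64_t *dst, uint64_t v)  { dst[0] = v; }
    static void set_val128(uint64_t *dst, uint64_t v) { dst[0] = 0; dst[1] = v; }

    /* With identical parameter lists, the size can be pasted directly into
     * the callee name instead of being selected with an if (). */
    #define SET_VAL(size, dst, v)  set_val##size(dst, v)

    static void demo(void)
    {
        uint64_t one;
        uint64_t two[2];

        SET_VAL(64, &one, 1);
        SET_VAL(128, two, 2);
    }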

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 1ea80bf7f4 target/ppc: use existing VsrD() macro to eliminate HI_IDX and LO_IDX from dfp_helper.c
Switch over all accesses to the decimal numbers held in struct PPC_DFP from
using HI_IDX and LO_IDX to using the VsrD() macro instead. Not only does this
allow the compiler to ensure that the various dfp_* functions are being passed
a ppc_vsr_t rather than an arbitrary uint64_t pointer, but also allows the
host endian-specific HI_IDX and LO_IDX to be completely removed from
dfp_helper.c.
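
A hedged sketch of what such an accessor buys (the union layout and macro
name are illustrative, not QEMU's ppc_vsr_t/VsrD): the host-endian dependence
is folded into one macro, and callers pass a typed vector rather than a bare
uint64_t pointer.

    #include <stdint.h>

    typedef union {
        uint8_t  u8[16];
        uint64_t u64[2];
    } VecReg;                       /* stand-in for a 128-bit vector register */

    /* Index 0 is always the architecturally high doubleword, regardless of
     * where the host's byte order actually stores it. */
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    #define VEC_D(x, i)  ((x)->u64[(i)])
    #else
    #define VEC_D(x, i)  ((x)->u64[1 - (i)])
    #endif

    static uint64_t high_dword(const VecReg *v)
    {
        return VEC_D(v, 0);         /* no HI_IDX/LO_IDX at the call site */
    }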

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00