Commit Graph

658 Commits

Author SHA1 Message Date
Suraj Jitindar Singh f0ec31b1e2 target/ppc: Add SPR TBU40
The SPR TBU40 is used to set the upper 40 bits of the timebase
register; it is present on POWER5+ and later processors.

This register can only be written by the hypervisor, and cannot be read.
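
As a rough illustration (the function below is invented for this sketch,
not a QEMU helper), a TBU40 write replaces the upper 40 bits of the
64-bit timebase while preserving its low 24 bits:

    #include <stdint.h>

    /* Illustrative only: combine a TBU40 write with the current timebase.
     * The upper 40 bits come from the value written; the low 24 bits of
     * the running timebase are preserved. */
    static uint64_t tb_after_tbu40_write(uint64_t cur_tb, uint64_t tbu40_val)
    {
        return (tbu40_val & 0xFFFFFFFFFF000000ULL) |
               (cur_tb    & 0x0000000000FFFFFFULL);
    }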

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Suraj Jitindar Singh 32d0f0d8de target/ppc: Add SPR ASDR
The Access Segment Descriptor Register (ASDR) provides information about
the storage element when taking a hypervisor storage interrupt. When
performing nested radix address translation, this is normally the guest
real address. This register is present on POWER9 processors and later.

Implement the ASDR, noting that read and write access is limited to
the hypervisor.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Suraj Jitindar Singh 5cc7e69f6d target/ppc: Work [S]PURR implementation and add HV support
The Processor Utilisation of Resources Register (PURR) and Scaled
Processor Utilisation of Resources Register (SPURR) provide an estimate
of the resources used by the thread, present on POWER7 and later
processors.

Currently the [S]PURR registers simply count at the rate of the
timebase.

Preserve this behaviour but rework the implementation to store an offset
like the timebase rather than doing the calculation manually. Also allow
hypervisor write access to the register along with the currently
available read access.
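
A minimal sketch of the offset scheme described above (names are
illustrative, not the actual QEMU helpers): reads derive the value from
the timebase plus a stored offset, and a hypervisor write simply
updates that offset.

    #include <stdint.h>

    /* Illustrative offset-based [S]PURR that counts at the timebase rate. */
    struct purr_state {
        int64_t offset;             /* last written value minus TB at that time */
    };

    static uint64_t purr_read(const struct purr_state *s, uint64_t tb)
    {
        return tb + s->offset;      /* advances with the timebase */
    }

    static void purr_write(struct purr_state *s, uint64_t tb, uint64_t val)
    {
        s->offset = val - tb;       /* later reads return val plus elapsed ticks */
    }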

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[ clg: rebased on current ppc tree ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Suraj Jitindar Singh 5d62725b2f target/ppc: Implement the VTB for HV access
The virtual timebase register (VTB) is a 64-bit register which
increments at the same rate as the timebase register, present on POWER8
and later processors.

The register is able to be read/written by the hypervisor and read by
the supervisor. All other accesses are illegal.

Currently the VTB is just an alias for the timebase (TB) register.

Implement the VTB so that it can be read/written independently of the
TB. Make use of the existing method for accessing timebase facilities,
whereby the compensation is stored and used to compute the value on
reads and is updated on writes.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[ clg: rebased on current ppc tree ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191128134700.16091-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Cédric Le Goater 7d37b274ff target/ppc: Add POWER10 DD1.0 model information
This adds to QEMU a new CPU model for the POWER10 processor, with the
same capabilities as a POWER9 processor. The model will be extended
when support is completed.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191205184454.10722-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Greg Kurz 2b6dda81c3 ppc: Make PPCVirtualHypervisor an incomplete type
PPCVirtualHypervisor is an interface instance. It should never be
dereferenced. Drop the dummy type definition for extra safety, which
is the common practice with QOM interfaces.
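
Sketched outside QEMU for clarity: with only a forward declaration the
type stays incomplete, so any attempt to dereference a pointer to it
fails to compile.

    /* Forward declaration only -- no struct body is ever provided, so a
     * PPCVirtualHypervisor can only be handled through pointers, and any
     * dereference is a compile-time error. */
    typedef struct PPCVirtualHypervisor PPCVirtualHypervisor;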

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157589808041.21182.18121655959115011353.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Greg Kurz 6d38666a89 ppc: Ignore the CPU_INTERRUPT_EXITTB interrupt with KVM
This only makes sense with an emulated CPU. Don't set the bit in
CPUState::interrupt_request when using KVM, to avoid confusion.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548863423.3650476.16424649423510075159.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Greg Kurz c1ad0b892c ppc: Don't use CPUPPCState::irq_input_state with modern Book3s CPU models
The power7_set_irq() and power9_set_irq() functions set this field, but
it is never actually used. Modern Book3s-compatible CPUs are only
supported by the pnv and spapr machines. These have an interrupt
controller, XICS for POWER7/8 and XIVE for POWER9, whose models don't
need to track IRQ input states at the CPU level.

Drop these lines to avoid confusion.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548862861.3650476.16622818876928044450.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Greg Kurz 401774387a ppc: Deassert the external interrupt pin in KVM on reset
When a CPU is reset, QEMU makes sure no interrupt is pending by clearing
CPUPPCState::pending_interrupts in ppc_cpu_reset(). In the case of a
complete machine emulation, eg. a sPAPR machine, an external interrupt
request could still be pending in KVM though, eg. an IPI. It will be
eventually presented to the guest, which is supposed to acknowledge it at
the interrupt controller. If the interrupt controller is emulated in QEMU,
either XICS or XIVE, ppc_set_irq() won't deassert the external interrupt
pin in KVM since it isn't pending anymore for QEMU. When the vCPU re-enters
the guest, the interrupt request is still pending and the vCPU will try
again to acknowledge it. This causes an infinite loop and eventually hangs
the guest.

The code has been broken since the beginning. The issue wasn't hit before
because accel=kvm,kernel-irqchip=off is an awkward setup that never got
used until recently with the LC92x IBM systems (aka, Boston).

Add a ppc_irq_reset() function to do the necessary cleanup, ie. deassert
the IRQ pins of the CPU in QEMU and most importantly the external interrupt
pin for this vCPU in KVM.

Reported-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <157548861740.3650476.16879693165328764758.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
Vladimir Sementsov-Ogievskiy cdcca22aab ppc: well form kvmppc_hint_smt_possible error hint helper
Make the kvmppc_hint_smt_possible hint-append helper well formed:
rename errp to errp_in, as it is an IN-parameter here (which is unusual
for errp), and rename the function to kvmppc_error_append_*_hint.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20191127191434.20945-1-vsementsov@virtuozzo.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-12-17 10:39:48 +11:00
David Gibson 165dc3edd7 spapr/kvm: Set default cpu model for all machine classes
We have to set the default model of all machine classes, not just for
the active one. Otherwise, "query-machines" will indicate the wrong
CPU model (e.g. "power9_v2.0-powerpc64-cpu" instead of
"host-powerpc64-cpu") as "default-cpu-type".

s390x already fixed this in de60a92e "s390x/kvm: Set default cpu model for
all machine classes".  This patch applies a similar fix for the pseries-*
machine types on ppc64.

Doing a
    {"execute":"query-machines"}
under KVM now results in
    {
      "hotpluggable-cpus": true,
      "name": "pseries-4.2",
      "numa-mem-supported": true,
      "default-cpu-type": "host-powerpc64-cpu",
      "is-default": true,
      "cpu-max": 1024,
      "deprecated": false,
      "alias": "pseries"
    },
    {
      "hotpluggable-cpus": true,
      "name": "pseries-4.1",
      "numa-mem-supported": true,
      "default-cpu-type": "host-powerpc64-cpu",
      "cpu-max": 1024,
      "deprecated": false
    },
    ...

Libvirt probes all machines via "-machine none,accel=kvm:tcg" and will
currently see the wrong CPU model under KVM.

Reported-by: Jiři Denemark <jdenemar@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
2019-11-18 11:50:39 +01:00
Emilio G. Cota 23f42b6053 target/ppc: fetch code with translator_ld
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2019-10-28 15:12:38 +00:00
Wei Yang 038adc2f58 core: replace getpagesize() with qemu_real_host_page_size
There are three page sizes in QEMU:

  real host page size
  host page size
  target page size

All of them have dedicated variables to represent them. For the last
two, we use the same form throughout the QEMU project, while for the
first one we use two forms: qemu_real_host_page_size and getpagesize().

qemu_real_host_page_size is defined to be a replacement for
getpagesize(), so let it serve that role.

[Note] Not fully tested for some arch or device.
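
A minimal sketch of the substitution, assuming qemu_real_host_page_size
is the cached host page size QEMU already maintains and QEMU_ALIGN_UP
its usual rounding macro (both from QEMU's headers; exact declarations
not shown here):

    #include "qemu/osdep.h"

    /* Illustrative only: round a length up to whole host pages.  The old
     * form called getpagesize() at every site; the new form uses the
     * value QEMU already caches. */
    static size_t round_to_host_pages(size_t len)
    {
        /* was: return QEMU_ALIGN_UP(len, getpagesize()); */
        return QEMU_ALIGN_UP(len, qemu_real_host_page_size);
    }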

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191013021145.16011-3-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-26 15:38:06 +02:00
Stefan Brankovic 8d745875c2 target/ppc: Fix for optimized vsl/vsr instructions
In the previous implementation, an invocation of a TCG shift function
could request a shift of a TCG variable by 64 bits when the variable
'sh' is 0, which is not supported in TCG (values can be shifted by 0 to
63 bits). This patch fixes this by using two separate invocations of
TCG shift functions, each with a maximum shift amount of 32.

The variable 'shifted' is renamed to 'carry' so that the variable
naming is similar to the old helper implementation.

Variables 'avrA' and 'avrB' are replaced with the variable 'avr'.
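
The underlying trick, shown as plain C rather than TCG calls: a shift
count of 64 - sh is invalid when sh is 0, but splitting it into two
shifts of at most 32 bits yields the same carry bits for every valid sh.

    #include <stdint.h>

    /* Carry bits that move between doublewords when shifting left by sh
     * (0 <= sh <= 7).  The single shift is undefined for sh == 0; the
     * split version never shifts by more than 32. */
    static uint64_t carry_unsafe(uint64_t lo, unsigned sh)
    {
        return lo >> (64 - sh);            /* undefined behaviour if sh == 0 */
    }

    static uint64_t carry_safe(uint64_t lo, unsigned sh)
    {
        return (lo >> (32 - sh)) >> 32;    /* lo >> (64 - sh), or 0 if sh == 0 */
    }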

Fixes: 4e6d0920e7
Reported-by: "Paul A. Clark" <pc@us.ibm.com>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Suggested-by: Aleksandar Markovic <aleksandar.markovic@rt-rk.com>
Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Message-Id: <1570196639-7025-2-git-send-email-stefan.brankovic@rt-rk.com>
Tested-by: Paul A. Clarke  <pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-24 09:36:55 +11:00
Mark Cave-Ayland 428115c3a9 target/ppc: use Vsr macros in BCD helpers
This allows us to remove more endian-specific defines from int_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190926204453.31837-1-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland f6d4c423a2 target/ppc: remove unnecessary if() around calls to set_dfp{64,128}() in DFP macros
Now that the parameters to both set_dfp64() and set_dfp128() are exactly the
same, there is no need for an explicit if() statement to determine which
function should be called based upon size. Instead we can simply use the
preprocessor to generate the call to set_dfp##size() directly.
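
A toy illustration of the token-pasting pattern (the real helpers in
dfp_helper.c take different argument types):

    #include <stdint.h>

    /* Toy stand-ins for the real set_dfp64()/set_dfp128() helpers. */
    static void set_dfp64(uint64_t *t, const uint64_t *f)  { t[0] = f[0]; }
    static void set_dfp128(uint64_t *t, const uint64_t *f) { t[0] = f[0]; t[1] = f[1]; }

    /* Token pasting selects the helper while the macro is expanded, so no
     * run-time "if (size == 64)" is needed. */
    #define DFP_SET_RESULT(size, t, f) set_dfp##size((t), (f))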

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 1ea80bf7f4 target/ppc: use existing VsrD() macro to eliminate HI_IDX and LO_IDX from dfp_helper.c
Switch over all accesses to the decimal numbers held in struct PPC_DFP from
using HI_IDX and LO_IDX to using the VsrD() macro instead. Not only does this
allow the compiler to ensure that the various dfp_* functions are being passed
a ppc_vsr_t rather than an arbitrary uint64_t pointer, but also allows the
host endian-specific HI_IDX and LO_IDX to be completely removed from
dfp_helper.c.
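
For reference, a sketch of the VsrD() pattern (not a verbatim copy of
QEMU's definition): index the 64-bit halves of a vector register so
that element 0 always names the most significant doubleword, whatever
the host byte order.

    #include <stdint.h>

    typedef union {
        uint8_t  u8[16];
        uint64_t u64[2];
    } vsr_t;                          /* stand-in for ppc_vsr_t */

    #ifdef HOST_WORDS_BIGENDIAN
    #define VsrD(i) u64[(i)]          /* element 0 is already the high half */
    #else
    #define VsrD(i) u64[1 - (i)]      /* swap so VsrD(0) is still the high half */
    #endif

    /* Usage: reg.VsrD(0) is the most significant doubleword, reg.VsrD(1)
     * the least significant one, on any host. */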

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 64b8574e14 target/ppc: change struct PPC_DFP decimal storage from uint64[2] to ppc_vsr_t
There are several places in dfp_helper.c that access the decimal number
representations in struct PPC_DFP via HI_IDX and LO_IDX defines which are set
at the top of dfp_helper.c according to the host endian.

However we can instead switch to using ppc_vsr_t for decimal numbers and then
make subsequent use of the existing VsrD() macros to access the correct
element regardless of host endian. Note that 64-bit decimals are stored in the
LSB of ppc_vsr_t (equivalent to VsrD(1)).

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 474c2e931d target/ppc: introduce dfp_finalize_decimal{64,128}() helper functions
Most of the DFP helper functions call decimal{64,128}FromNumber() just before
returning in order to convert the decNumber stored in dfp.t64 back to a
Decimal{64,128} to write back to the FP registers.

Introduce new dfp_finalize_decimal{64,128}() helper functions which both enable
the parameter list to be reduced considerably, and also help minimise the
changes required in the next patch.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland d9acba3130 target/ppc: update {get,set}_dfp{64,128}() helper functions to read/write DFP numbers correctly
Since commit ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr
register array" FP registers are no longer stored consecutively in memory and so
the current method of combining FP register pairs into DFP numbers is incorrect.

Firstly update the definition of the dh_*_fprp defines in helper.h to reflect
that FP registers are now stored as part of an array of ppc_vsr_t elements
rather than plain uint64_t elements, and then introduce a new ppc_fprp_t type
which conceptually represents a DFP even-odd register pair to be consumed by the
DFP helper functions.

Finally update the new DFP {get,set}_dfp{64,128}() helper functions to convert
between DFP numbers and DFP even-odd register pairs correctly, making use of the
existing VsrD() macro to access the correct elements regardless of host endian.

Fixes: ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array"
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 33432d7737 target/ppc: introduce set_dfp{64,128}() helper functions
The existing functions (now incorrectly) assume that the MSB and LSB of DFP
numbers are stored as consecutive 64-bit words in memory. Instead of accessing
the DFP numbers directly, introduce set_dfp{64,128}() helper functions to ease
the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland 6a8fbb9bdb target/ppc: introduce get_dfp{64,128}() helper functions
The existing functions (now incorrectly) assume that the MSB and LSB of DFP
numbers are stored as consecutive 64-bit words in memory. Instead of accessing
the DFP numbers directly, introduce get_dfp{64,128}() helper functions to ease
the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:20 +10:00
Alexey Kardashevskiy 972bd57689 ppc/kvm: Skip writing DPDES back when in run time state
On POWER8 systems the Directed Privileged Door-bell Exception State
register (DPDES) stores doorbell pending status, one bit per thread of
a core, set by the "msgsndp" instruction. The register is shared among
threads of the same core, and KVM on POWER9 emulates it in a similar
way (POWER9 does not have DPDES).

DPDES is shared, but QEMU assumes all SPRs are per thread, so the only
safe way to write DPDES back to the vCPU before running a guest is to
do so while all threads are pulled out of the guest, so that DPDES
cannot change.
There is only one situation when this condition is met: incoming migration
when all threads are stopped. Otherwise any QEMU HMP/QMP command causing
kvm_arch_put_registers() (for example printing registers or dumping memory)
can clobber DPDES in a race with other vcpu threads.

This changes DPDES handling so it is not written to KVM at runtime.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190923084110.34643-1-aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke 5c94dd3806 ppc: Use FPSCR defines instead of constants
There are FPSCR-related defines in target/ppc/cpu.h which can be used
in place of constants and explicit shifts, arguably improving the code
a bit in places.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817169-1721-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke bc7a45ab88 ppc: Add support for 'mffsce' instruction
ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffsce' instruction.

'mffsce' is identical to 'mffs', except that it also clears the exception
enable bits in the FPSCR.

On CPUs without support for 'mffsce' (below ISA 3.0), the
instruction will execute identically to 'mffs'.
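
Conceptually, and assuming the architected FPSCR layout (the five
exception enable bits VE, OE, UE, ZE and XE together form mask 0xF8 in
the low word), the extra step relative to plain 'mffs' looks like this:

    #include <stdint.h>

    #define FPSCR_ENABLE_BITS 0x00000000000000F8ULL   /* VE|OE|UE|ZE|XE */

    /* Sketch of mffsce: return the old FPSCR like mffs does, then clear
     * the exception enable bits in the live register. */
    static uint64_t do_mffsce(uint64_t *fpscr)
    {
        uint64_t old = *fpscr;
        *fpscr &= ~FPSCR_ENABLE_BITS;
        return old;
    }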

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1568817082-1384-1-git-send-email-pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke a2735cf483 ppc: Add support for 'mffscrn','mffscrni' instructions
ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffscrn' and 'mffscrni' instructions.

'mffscrn' and 'mffscrni' are similar to 'mffsl', except they do not return
the status bits (FI, FR, FPRF) and they also set the rounding mode in the
FPSCR.

On CPUs without support for 'mffscrn'/'mffscrni' (below ISA 3.0), the
instructions will execute identically to 'mffs'.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817081-1345-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Peter Maydell 9de65783e1 Allow page table bit to swap endianness.
Reorganize watchpoints out of i/o path.
 Return host address from probe_write / probe_access.

Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190903' into staging

Allow page table bit to swap endianness.
Reorganize watchpoints out of i/o path.
Return host address from probe_write / probe_access.

# gpg: Signature made Tue 03 Sep 2019 16:47:50 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190903: (36 commits)
  tcg: Factor out probe_write() logic into probe_access()
  tcg: Make probe_write() return a pointer to the host page
  s390x/tcg: Pass a size to probe_write() in do_csst()
  hppa/tcg: Call probe_write() also for CONFIG_USER_ONLY
  mips/tcg: Call probe_write() for CONFIG_USER_ONLY as well
  tcg: Enforce single page access in probe_write()
  tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
  s390x/tcg: Fix length calculation in probe_write_access()
  s390x/tcg: Use guest_addr_valid() instead of h2g_valid() in probe_write_access()
  tcg: Check for watchpoints in probe_write()
  cputlb: Handle watchpoints via TLB_WATCHPOINT
  cputlb: Remove double-alignment in store_helper
  cputlb: Fix size operand for tlb_fill on unaligned store
  exec: Factor out cpu_watchpoint_address_matches
  cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
  exec: Factor out core logic of check_watchpoint()
  exec: Move user-only watchpoint stubs inline
  target/sparc: sun4u Invert Endian TTE bit
  target/sparc: Add TLB entry with attributes
  cputlb: Byte swap memory transaction attribute
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-04 16:29:18 +01:00
Tony Nguyen 14776ab5a1 tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:38 -07:00
Suraj Jitindar Singh 289af4ac99 powerpc/spapr: Add host threads parameter to ibm,get_system_parameter
The ibm,get_system_parameter rtas call is used by the guest to retrieve
data relating to certain parameters of the system. The SPLPAR
characteristics option (token 20) is used to determine characteristics of
the environment in which the lpar will run.

It may be useful for a guest to know the number of physical host threads
present on the underlying system where it is being run. Add the
characteristic "HostThrs" to the SPLPAR Characteristics
ibm,get_system_parameter rtas call to expose this information to a
guest. Add an n_host_threads property to the processor class, which is
then used to retrieve this information, and define it for POWER8 and
POWER9. Other processors will default to 0 and the characteristic won't
be added.
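
A hedged sketch of the string handling this implies (buffer management
simplified; apart from the "HostThrs" keyword and the n_host_threads
property named above, everything here is illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Append ",HostThrs=<n>" to an SPLPAR characteristics string when the
     * CPU class reports a non-zero number of host threads. */
    static void append_host_threads(char *buf, size_t bufsize, int n_host_threads)
    {
        if (n_host_threads > 0) {
            size_t len = strlen(buf);
            snprintf(buf + len, bufsize - len, ",HostThrs=%d", n_host_threads);
        }
    }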

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Message-Id: <20190827045751.22123-1-sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Stefan Brankovic 897b639789 target/ppc: Refactor emulation of vmrgew and vmrgow instructions
Since I found these two instructions implemented with TCG, I
refactored them so they are consistent with the other similar
implementations that I introduced in this patch.

Also, a new dual macro GEN_VXFORM_TRANS_DUAL is added. This macro is
used if one instruction is realized with direct translation, and the
second one with a helper.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Message-Id: <1566898663-25858-4-git-send-email-stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Richard Henderson 16ce2fffa6 target/ppc: Fix do_float_check_status vs inexact
The underflow and inexact exceptions are not mutually exclusive.
Check for both of them.  Tidy the reset of FPSCR[FI].
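
The gist, as a self-contained sketch with illustrative flag names
(QEMU's softfloat has its own float_flag_* values): the two conditions
must be tested independently rather than as an if/else chain.

    #include <stdbool.h>

    enum { FLAG_UNDERFLOW = 1 << 0, FLAG_INEXACT = 1 << 1 };  /* illustrative */

    /* A tiny result that also had to be rounded raises both conditions,
     * so no "else" may join the two tests. */
    static void update_status(int flags, bool *ux, bool *xx)
    {
        if (flags & FLAG_UNDERFLOW) {
            *ux = true;
        }
        if (flags & FLAG_INEXACT) {
            *xx = true;
        }
    }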

Fixes: https://bugs.launchpad.net/bugs/1841442
Reported-by: Paul Clarke <pc@us.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Paul Clarke <pc@us.ibm.com>
Message-Id: <20190826165434.18403-2-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Richard Henderson cbc65a8f22 target/ppc: Set float_tininess_before_rounding at cpu reset
As defined in Power 3.0 section 4.4.4 "Underflow Exception",
a tiny result is detected before rounding.

Fixes: https://bugs.launchpad.net/qemu/+bug/1841491
Reported-by: Paul Clarke <pc@us.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190827020013.27154-1-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Paul A. Clarke fa7d9cb960 ppc: Fix xscvdpspn for SNAN
The xscvdpspn instruction implements a non-arithmetic conversion.
In particular, NaNs are not silenced and rounding is not performed.

Rewrite to match the pseudocode for ConvertDPtoSP_NS() in the
Power 3.0B manual.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1566321964-1447-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[dwg: Replaced description with clearer version from rth]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Paul A. Clarke 256be7d07a ppc: Fix xsmaddmdp and friends
A class of instructions of the form:
  op Target,A,B
which operate like:
  Target = Target * A + B
have a bit set which distinguishes them from instructions that operate as:
  Target = Target * B + A

This bit is not being checked properly (using PPC_BIT macro), so all
instructions in this class are operating incorrectly as the second form
above.  The bit was being checked as if it were part of a 64-bit
instruction opcode, rather than a proper 32-bit opcode.  Fix by using the
macro (PPC_BIT32) which treats the opcode as a 32-bit quantity.
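
For reference, a sketch of the two macros as conventionally written for
IBM bit numbering (see target/ppc/cpu.h for the authoritative
definitions); testing a bit of a 32-bit opcode with the 64-bit variant
selects the wrong bit:

    #include <stdint.h>

    /* IBM bit numbering: bit 0 is the most significant bit of the field. */
    #define PPC_BIT(bit)   (0x8000000000000000ULL >> (bit))   /* 64-bit field */
    #define PPC_BIT32(bit) (0x80000000U >> (bit))              /* 32-bit field */

    /* For a 32-bit instruction word, the check must use PPC_BIT32;
     * PPC_BIT(n) would test bit n of a 64-bit quantity instead. */
    static int opcode_bit_set(uint32_t insn, int bit)
    {
        return (insn & PPC_BIT32(bit)) != 0;
    }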

Fixes: c9f4e4d8b6 ("target/ppc: improve VSX_FMADD with new GEN_VSX_HELPER_VSX_MADD macro")

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1566401321-22419-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-29 09:46:07 +10:00
Peter Maydell f3b8f18ebf Monitor patches for 2019-08-21

Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2019-08-21' into staging

Monitor patches for 2019-08-21

# gpg: Signature made Wed 21 Aug 2019 16:35:07 BST
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-monitor-2019-08-21:
  monitor/qmp: Update comment for commit 4eaca8de26
  qdev: Collect HMP handlers command handlers in qdev-monitor.c
  qapi: Move query-target from misc.json to machine.json
  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-22 10:31:21 +01:00
Peter Maydell fe066b4848 Various trivial fixes

Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging

Various trivial fixes

# gpg: Signature made Wed 21 Aug 2019 12:19:11 BST
# gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg:                issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* remotes/vivier2/tags/trivial-branch-pull-request:
  hw/display: Compile various display devices as common object
  hw/display/sm501: Remove unused include
  spapr_events: Rewrite a fall through comment
  vl: Rewrite a fall through comment
  target/ppc: Rewrite a fall through comment
  hw/ipmi: Rewrite a fall through comment
  hw/dma/omap_dma: Move switch 'fall through' comment to correct place
  json: Move switch 'fall through' comment to correct place
  hw/net/e1000: Fix erroneous comment
  .gitignore: ignore some vhost-user* related files
  configure: fix sdl detection using sdl2-config
  configure: remove obsoleted $sparc_cpu variable
  misc: fix naming scheme of compatiblity arrays
  test: Use g_strndup instead of plain strndup

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-21 16:59:22 +01:00
Peter Maydell e65472c7bc ppc patch queue for 2019-08-21
First ppc and spapr pull request for qemu-4.2.  Includes:
    * Some TCG emulation fixes and performance improvements
    * Support for the mffsl instruction in TCG
    * Added missing DPDES SPR
    * Some enhancements to the emulation of the XIVE interrupt
      controller
    * Cleanups to spapr MSI management
    * Some new suspend/resume infrastructure and a draft suspend
      implementation for spapr
    * New spapr hypercall for TPM communication (will be needed for
      secure guests under an Ultravisor)
    * Fix several memory leaks
 
 And a few other assorted fixes.

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.2-20190821' into staging

ppc patch queue for 2019-08-21

First ppc and spapr pull request for qemu-4.2.  Includes:
   * Some TCG emulation fixes and performance improvements
   * Support for the mffsl instruction in TCG
   * Added missing DPDES SPR
   * Some enhancements to the emulation of the XIVE interrupt
     controller
   * Cleanups to spapr MSI management
   * Some new suspend/resume infrastructure and a draft suspend
     implementation for spapr
   * New spapr hypercall for TPM communication (will be needed for
     secure guests under an Ultravisor)
   * Fix several memory leaks

And a few other assorted fixes.

# gpg: Signature made Wed 21 Aug 2019 08:24:44 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-4.2-20190821: (42 commits)
  ppc: Fix emulated single to double denormalized conversions
  ppc: Fix emulated INFINITY and NAN conversions
  ppc: conform to processor User's Manual for xscvdpspn
  ppc: Add support for 'mffsl' instruction
  target/ppc: Add Directed Privileged Door-bell Exception State (DPDES) SPR
  spapr/xive: Mask the EAS when allocating an IRQ
  spapr: Implement better workaround in spapr-vty device
  spapr/irq: Drop spapr_irq_msi_reset()
  spapr/pci: Free MSIs during reset
  spapr/pci: Consolidate de-allocation of MSIs
  ppc: remove idle_timer logic
  spapr: Implement ibm,suspend-me
  i386: use machine class ->wakeup method
  machine: Add wakeup method to MachineClass
  ppc/xive: Improve 'info pic' support
  ppc/xive: Provide silent escalation support
  ppc/xive: Provide unconditional escalation support
  ppc/xive: Provide escalation support
  ppc/xive: Provide backlog support
  ppc/xive: Implement TM_PULL_OS_CTX special command
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-21 14:04:17 +01:00
Markus Armbruster 2e5b09fd0e hw/core: Move cpu.c, cpu.h from qom/ to hw/core/
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190709152053.16670-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h
in comments replaced]
2019-08-21 13:24:01 +02:00
Philippe Mathieu-Daudé b1d5b6e507 target/ppc: Rewrite a fall through comment
GCC 9 is confused by this comment when building with
-Wimplicit-fallthrough=2 in CFLAGS:

  target/ppc/mmu_helper.c: In function ‘dump_mmu’:
  target/ppc/mmu_helper.c:1349:12: error: this statement may fall through [-Werror=implicit-fallthrough=]
   1349 |         if (ppc64_v3_radix(env_archcpu(env))) {
        |            ^
  target/ppc/mmu_helper.c:1356:5: note: here
   1356 |     default:
        |     ^~~~~~~
  cc1: all warnings being treated as errors

Rewrite the comment using 'fall through' which is recognized by
GCC and static analyzers.

Reported-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190719131425.10835-6-philmd@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-08-21 10:57:28 +02:00
Paul A. Clarke c0e6616b66 ppc: Fix emulated single to double denormalized conversions
helper_todouble() was not properly converting any denormalized 32-bit
float to a 64-bit double.
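
For context, a denormalized single has a biased exponent of 0 and a
non-zero 23-bit fraction, so its value is fraction * 2^-149 and must be
renormalized when widened. A standalone sketch of that conversion (not
the helper_todouble() code itself, which works purely on bit patterns):

    #include <stdint.h>
    #include <math.h>

    /* Convert the bit pattern of a denormalized single to a double. */
    static double denorm_single_to_double(uint32_t bits)
    {
        uint32_t frac = bits & 0x7fffff;          /* exponent field is 0 */
        double mag = ldexp((double)frac, -149);   /* frac * 2^-149 */
        return (bits >> 31) ? -mag : mag;
    }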

Fix-suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paul A. Clarke <pc@us.ibm.com>

v2:
- Splitting patch "ppc: Three floating point fixes"; this is just one part.
- Original suggested "fix" was likely flawed.  v2 is rewritten by
  Richard Henderson (Thanks, Richard!); I reformatted the comments in a
  couple of places, compiled, and tested.
Message-Id: <1566250936-14538-1-git-send-email-pc@us.ibm.com>

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Paul A. Clarke a7b7b98318 ppc: Fix emulated INFINITY and NAN conversions
helper_todouble() was not properly converting INFINITY from a 32-bit
float to a 64-bit double.

(Normalized operand conversion is unchanged, other than indentation.)

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1566242388-9244-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Paul A. Clarke e6f1bfb211 ppc: conform to processor User's Manual for xscvdpspn
The POWER8 and POWER9 User's Manuals specify the implementation
behavior for what the ISA leaves "undefined" behavior for the
xscvdpspn and xscvdpsp instructions.  This patch corrects the QEMU
implementation to match the hardware implementation for that case.

ISA 3.0B has xscvdpspn leaving its result in word 0 of the target register,
with the other words of the target register left "undefined".

The User's Manuals specify:
  VSX scalar convert from double-precision to single-precision (xscvdpsp,
  xscvdpspn).
  VSR[32:63] is set to VSR[0:31].
So, words 0 and 1 both contain the result.

Note: this is important because GCC, as of version 8 or so, assumes
and takes advantage of this behavior to optimize the following sequence:
  xscvdpspn vs0,vs1
  mffprwz   r8,f0
ISA 3.0B has xscvdpspn leaving its result in word 0 of the target register,
and mffprwz expecting its input to come from word 1 of the source register.
This sequence fails with QEMU, as a shift is required between those two
instructions.  However, since the hardware splats the result to both words 0
and 1 of its output register, the shift is not necessary.

Expect a future revision of the ISA to specify this behavior.
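
In other words, with an illustrative register layout (QEMU indexes VSR
words through its own accessors, not shown here), the converted
single-precision image lands in word 0 and is copied into word 1:

    #include <stdint.h>

    typedef struct {
        uint32_t w[4];        /* words 0..3 of a 128-bit VSR */
    } vsr128;

    /* Store the xscvdpspn result the way the hardware does: word 0 per
     * the ISA, with the same value splatted into word 1. */
    static void store_xscvdpspn_result(vsr128 *xt, uint32_t sp_image)
    {
        xt->w[0] = sp_image;
        xt->w[1] = sp_image;  /* words 2 and 3 remain undefined */
    }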

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>

v2
- Splitting patch "ppc: Three floating point fixes"; this is just one part.
- Updated commit message to clarify behavior is documented in User's Manuals.
- Updated commit message to correct which words are in output and source of
  xscvdpspn and mffprz.
- No source changes to this part of the original patch.

Message-Id: <1566236601-22954-1-git-send-email-pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Paul A. Clarke 31eb7dddac ppc: Add support for 'mffsl' instruction
ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffsl'.

'mffsl' is identical to 'mffs', except it only returns mode, status, and enable
bits from the FPSCR.

On CPUs without support for 'mffsl' (below ISA 3.0), the 'mffsl' instruction
will execute identically to 'mffs'.

Note: I renamed FPSCR_RN to FPSCR_RN0 so I could create an FPSCR_RN mask which
is both bits of the FPSCR rounding mode, as defined in the ISA.

I also fixed a typo in the definition of FPSCR_FR.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>

v4:
- nit: added some braces to resolve a checkpatch complaint.

v3:
- Changed tcg_gen_and_i64 to tcg_gen_andi_i64, eliminating the need for a
  temporary, per review from Richard Henderson.

v2:
- I found that I copied too much of the 'mffs' implementation.
  The 'Rc' condition code bits are not needed for 'mffsl'.  Removed.
- I now free the (renamed) 'tmask' temporary.
- I now bail early for older ISA to the original 'mffs' implementation.

Message-Id: <1565982203-11048-1-git-send-email-pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Alexey Kardashevskiy cfc61ba62f target/ppc: Add Directed Privileged Door-bell Exception State (DPDES) SPR
DPDES stores the status of a doorbell message and, if it is lost in
migration, the destination CPU won't receive it. This does not hit us
much, as IPIs complete too quickly to catch a pending one, and even if
we missed one, broadcasts happen often enough to wake that CPU.

This defines DPDES and registers it with KVM for migration.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190816061733.53572-1-aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Shivaprasad G Bhat 1e8f51e856 ppc: remove idle_timer logic
The logic is broken for multi-vCPU guests and also causes a memory
leak. It is in place to handle KVM not having KVM_CAP_PPC_IRQ_LEVEL,
which has been part of the kernel since 2.6.37. Instead of fixing the
leak, drop the redundant logic, which is not exercised on new kernels
anymore, and exit with an error on older kernels.

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Message-Id: <156406409479.19996.7606556689856621111.stgit@lep8c.aus.stglabs.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:39 +10:00
Nicholas Piggin 03ef074c04 spapr: Implement dispatch tracking for tcg
Implement cpu_exec_enter/exit on ppc which calls into new methods of
the same name in PPCVirtualHypervisorClass. These are used by spapr
to implement the splpar VPA dispatch counter initially.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20190718034214.14948-2-npiggin@gmail.com>
[dwg: Removed unnecessary CONFIG_USER_ONLY checks as suggested by gkurz]
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic 1872588ede target/ppc: Optimize emulation of vclzw instruction
Optimize the Altivec instruction vclzw (Vector Count Leading Zeros
Word). This instruction counts the number of leading zeros of each word
element in the source register and places the result in the
corresponding word element of the destination register.

Counting is performed in four iterations of a for loop (one for each
word element of source register vB). Every iteration consists of
loading the appropriate word element from the source register, counting
leading zeros with tcg_gen_clzi_i32, and saving the result in the
corresponding word element of the destination register, as sketched
below.
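
A plain C sketch of the per-element operation the generated TCG code
performs (clz of a zero word is defined to be 32):

    #include <stdint.h>

    /* Count leading zeros of one 32-bit word; __builtin_clz(0) is
     * undefined, so handle zero explicitly. */
    static uint32_t clz32(uint32_t x)
    {
        return x ? (uint32_t)__builtin_clz(x) : 32;
    }

    /* vclzw, element by element: vD.word[i] = clz(vB.word[i]), i = 0..3. */
    static void vclzw(uint32_t vd[4], const uint32_t vb[4])
    {
        for (int i = 0; i < 4; i++) {
            vd[i] = clz32(vb[i]);
        }
    }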

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-7-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic b8313f0d91 target/ppc: Optimize emulation of vclzd instruction
Optimize the Altivec instruction vclzd (Vector Count Leading Zeros
Doubleword). This instruction counts the number of leading zeros of
each doubleword element in the source register and places the result
in the corresponding doubleword element of the destination register.

tcg's count-leading-zeros operation is used twice (once for each
doubleword element of source register vB), placing the result in the
corresponding doubleword element of destination register vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-6-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic 083b3f012f target/ppc: Optimize emulation of vgbbd instruction
Optimize the Altivec instruction vgbbd (Vector Gather Bits by Bytes by
Doubleword). All ith bits (i in range 1 to 8) of each byte of a
doubleword element in the source register are concatenated and placed
into the ith byte of the corresponding doubleword element in the
destination register.

The following solution is applied to both doubleword elements of the
source register in parallel, in order to reduce the number of
instructions needed (that is why arrays are used): First, both
doubleword elements of source register vB are placed in the
corresponding elements of array avr. Bits are gathered in 2x8
iterations (two for loops). In the first iteration, bit 1 of byte 1,
bit 2 of byte 2, ... bit 8 of byte 8 are already in their final spots,
so avr[i], i={0,1}, can be AND-ed with tcg_mask. For every following
iteration, avr[i] and tcg_mask have to be shifted right by 7 and 8
places, respectively, in order to get bit 1 of byte 2, bit 2 of byte
3, ... bit 7 of byte 8 into their final spots, so the shifted avr
values (saved in tmp) can be AND-ed with the new value of tcg_mask.
After the first 8 iterations (the first loop), all the first bits are
in their final places, all the second bits except the second bit of
the eighth byte are in their places, and so on; only one of the eighth
bits, the one from the eighth byte, is in its place. In the second
loop the same operations are performed symmetrically, in order to get
the other half of the bits into their final spots. The results for the
first and second doubleword elements are saved in result[0] and
result[1] respectively. In the end, these results are saved in the
corresponding doubleword elements of destination register vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-5-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Alex Bennée 28876bf27d target/ppc: move opcode decode tables to PowerPCCPU
The opcode decode tables aren't really part of the CPUPPCState but an
internal implementation detail of the translator. This can cause
problems with the memcpy in cpu_copy, as any table created during
ppc_cpu_realize gets written over, causing a memory leak. To avoid
this, move the tables into PowerPCCPU, which is better suited to hold
internal implementation details.

Attempts to fix: https://bugs.launchpad.net/qemu/+bug/1836558
Cc: 1836558@bugs.launchpad.net
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190716121352.302-1-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00