There's no point inlining this: if you hit the exception case you exit
anyway, and not inlining saves about 100K of code size (and cache
footprint).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: removed '__attribute__((noinline))' from original patch ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Those instructions are only available in hypervisor real mode and
allow cache-inhibited guarded access to devices in that mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Recent server processors use the Hypervisor Emulation Assistance
interrupt for illegal instructions and *some* types of SPR accesses.
Also, the code was always generating 'inval' exceptions even for
privilege violations, due to setting the wrong flags.
Finally, the checking for PR/HV was open-coded everywhere.
This reworks it all, using little helper macros for checking, and
adding the HV interrupt (which gets converted back to program check
in the slow path of excp_helper.c on CPUs that don't want it).
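For illustration, the open-coded checks collapse into small macros along
these lines (reconstructed from memory; the exact names, error codes and
the CONFIG_USER_ONLY variants should be checked against translate.c):

    #define CHK_SV                                                   \
        do {                                                         \
            if (unlikely(ctx->pr)) {                                 \
                gen_priv_exception(ctx, POWERPC_EXCP_PRIV_OPC);      \
                return;                                              \
            }                                                        \
        } while (0)

    #define CHK_HV                                                   \
        do {                                                         \
            if (unlikely(ctx->pr || !ctx->hv)) {                     \
                gen_priv_exception(ctx, POWERPC_EXCP_PRIV_OPC);      \
                return;                                              \
            }                                                        \
        } while (0)

A handler for a hypervisor-privileged instruction then simply starts with
CHK_HV instead of repeating the PR/HV test by hand.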
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Under some circumstances, we need to direct ISI and DSI interrupts
at the hypervisor, turning them into HISI/HDSI, and using different
SPRs (HDSISR and HDAR) depending on the combination of MSR_DR and
the corresponding VPM bits in LPCR.
This moves part of the code into helpers that are fixed to select
the right exception type and registers. On pre-P7 processors, LPCR
is 0 which provides the old behaviour of directing the interrupts
at the supervisor.
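The data-side helper ends up with roughly this shape (condensed sketch
from memory; treat names like ppc_hash64_set_dsi() as illustrative rather
than the literal hunk):

    static void ppc_hash64_set_dsi(CPUState *cs, CPUPPCState *env,
                                   uint64_t dar, uint64_t dsisr)
    {
        bool vpm;

        if (msr_dr) {
            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
        } else {
            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
        }
        if (vpm && !msr_hv) {
            /* Direct the fault at the hypervisor: HDSI with HDAR/HDSISR */
            cs->exception_index = POWERPC_EXCP_HDSI;
            env->spr[SPR_HDAR] = dar;
            env->spr[SPR_HDSISR] = dsisr;
        } else {
            /* Classic supervisor DSI using DAR/DSISR */
            cs->exception_index = POWERPC_EXCP_DSI;
            env->spr[SPR_DAR] = dar;
            env->spr[SPR_DSISR] = dsisr;
        }
        env->error_code = 0;
    }

With LPCR = 0 on pre-POWER7 processors, vpm is always false and the
interrupt stays a plain DSI, as before. The ISI side is analogous, keyed
off MSR_IR.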
Thanks to Andrei Warkentin for finding a bug when HV=1.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: Merged a fix on POWERPC_EXCP_HDSI fixing the condition on
msr_hv, from Andrei Warkentin <andrey.warkentin@gmail.com> ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were initializing unused ones and missing some.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This properly implements LPES0 handling for HV vs. !HV mode and
removes the unsupported LPES1, which has been gone from the specs
since ISA v2.07.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: AIL implementation was fixed in commit 5c94b2a5e5. This patch
only contains the bits of the original patch related to LPES0
handling, adapted commit log.
fixed checkpatch.pl errors. ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows us to set the appropriate LPCR bits which will be used
when fixing the exception model for the HV mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: previous commit 26a7f1291b did not include the LPCR setting as
it was not needed at the time, adapted commit log ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This reworks emulation of the various "rfi" variants. I removed
some masking bits that I couldn't make sense of; the only bit that
I am aware we should mask here is POW, and the CPU's MSR mask should
take care of the rest.
This also fixes some problems when running 32-bit userspace under
a 64-bit kernel.
This patch broke 32-bit OpenBIOS when run on a 970 CPU. A fix was
proposed here:
https://www.coreboot.org/pipermail/openbios/2016-June/009452.html
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: updated the commit log with the reference of the openbios fix ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: Remove hunk which disabled rfi on 64-bit CPUS. The change was
correct, but we need to fix OpenBIOS before applying it]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The 75x and 74xx processors have some thermal monitoring SPRs that
some OSes such as MacOS do use. Our current "dumb" implementation
isn't good enough and will cause some versions of MacOS to hang during
boot.
This lifts an improved emulation from MacOnLinux and adapts it to
QEMU, thus fixing the problem.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Fixed typo in comment, a number of minor checkpatch warnings,
and a compile failure with CONFIG_USER_ONLY]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In 63ae0915f8, I arranged to use a 32-bit rotate, without
considering the effect of a mask value that wraps around to
the high bits of the word.
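A small reference model makes the edge case visible (illustrative only;
IBM bit numbering for MB/ME):

    #include <stdint.h>

    static uint64_t mask64(int mb, int me)   /* MASK(mb, me), wraps when mb > me */
    {
        uint64_t hi = 0xFFFFFFFFFFFFFFFFull >> mb;
        uint64_t lo = 0xFFFFFFFFFFFFFFFFull << (63 - me);
        return (mb <= me) ? (hi & lo) : (hi | lo);
    }

    static uint64_t rlwinm_ref(uint64_t rs, int sh, int mb, int me)
    {
        uint32_t lo32 = (uint32_t)rs;
        uint32_t rot = (lo32 << sh) | (sh ? lo32 >> (32 - sh) : 0);
        uint64_t r = ((uint64_t)rot << 32) | rot;   /* ROTL32 replicates both halves */
        return r & mask64(mb + 32, me + 32);        /* mask may wrap into the high word */
    }

For MB <= ME the mask only covers the low 32 bits and a 32-bit
rotate-and-mask is fine; for MB > ME (e.g. rlwinm rA,rS,0,30,1) the
wrapped mask also selects bits from the replicated high half, which a
32-bit-only implementation zeroes out.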
[dwg: In 2e11b15 this was partially fixed, but an edge case was still
incorrect, which this fixes]
Signed-off-by: Richard Henderson <rth@twiddle.net>
[dwg: Folded with a revert of 2e11b15, an earlier buggy version of
this patch which already went upstream]
Tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While trying to install a Fedora container with
"lxc-create -t fedora -- -I qemu-ppc64", the installation aborts with
the following error:
qemu: fatal: Unknown exception 0x65537. Aborting
NIP 0000004000927924 LR 00000040009e325c CTR 0000004000927480 XER 0000000000000000 CPU#0
MSR 9000000102806000 HID0 0000000000000000 HF 9000000002806000 iidx 3 didx 3
TB 00248932 1069155773327487
GPR00 00000040009e325c 00000040007ff800 0000004000aba098 0000000000000000
GPR04 00000040007ff878 0000004000dcb588 0000004000dcb830 0000004000a7a098
GPR08 0000000000000000 0000000000000000 00000040007ff878 0000004000927960
GPR12 0000000022022448 0000004000e2aef0 0000000000000000 0000000000000000
GPR16 0000000000000000 0000000000000000 0000000000000002 0000000000000001
GPR20 0000000000000000 0000000000000000 0000000000000000 0000004000800699
GPR24 0000004000e13320 0000000000000000 0000004000ac9ad8 0000004000ac9ae0
GPR28 0000000000000001 00000000100210a0 0000000000000000 0000000000000038
CR 22022442 [ E E - E E G G E ] RES ffffffffffffffff
FPR00 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR04 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR08 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR12 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR16 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR20 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR24 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPR28 0000000000000000 0000000000000000 0000000000000000 0000000000000000
FPSCR 0000000000000000
/usr/share/lxc/templates/lxc-fedora: line 487: 26661 Aborted (core dumped) chroot . yum -y --nogpgcheck --installroot /run/install install python rpm yum
I've bisected it down to this commit:
commit b68e60e6f0
Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Date: Tue May 3 18:03:33 2016 +0200
ppc: Get out of emulation on SMT "OR" ops
Otherwise tight loops at smt_low for example, which OPAL does,
eat so much CPU that we can't boot a kernel anymore. With that,
I can boot 8 CPUs just fine with powernv.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can fix that by not sending EXCP_HLT in linux-user mode, as the
main loop doesn't know how to handle it.
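A sketch of the shape of the fix, assuming the pause/SMT-priority path
that the bisected commit introduced in translate.c (helper names such as
gen_pause() and gen_exception() are from memory; treat as illustrative):

    static void gen_pause(DisasContext *ctx)
    {
    #if !defined(CONFIG_USER_ONLY)
        /* Full-system emulation: leave the translated-code loop so the
         * other vCPUs and devices get a chance to run. */
        gen_exception(ctx, EXCP_HLT);
    #else
        /* linux-user: the main loop has no handler for EXCP_HLT (it
         * aborts with the "Unknown exception" shown above), so the SMT
         * priority hint simply degenerates to a nop. */
    #endif
    }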
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move all trace-events for files in the target-ppc/ directory to
their own file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 1466066426-16657-39-git-send-email-berrange@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Information is tracked inside the TCGContext structure, and later used
by tracing events with the 'tcg' and 'vcpu' properties.
The 'cpu' field is used to check tracing of translation-time
events ("*_trans"). The 'tcg_env' field is used to pass it to
execution-time events ("*_exec").
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 146549350162.18437.3033661139638458143.stgit@fimbulvetr.bsc.es
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add an sPAPR-specific abstract CPU core device that is based on the
generic CPU core device. Use this as the base type to create sPAPR
CPU specific core devices.
TODO:
- Add core types for other remaining CPU types
- Handle CPU model alias correctly
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In 63ae0915f8, I arranged to use a 32-bit rotate, without
considering the effect of a mask value that wraps around to
the high bits of the word.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Fix a bug in code generation for the PowerPC "wait" instruction: it
doesn't make sense to store an uninitialized register.
Signed-off-by: Jakub Horak <thement@ibawizard.net>
[dwg: revised commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
qemu/osdep.h checks whether MAP_ANONYMOUS is defined, but this check
is bogus without a previous inclusion of sys/mman.h. Include it in
sysemu/os-posix.h and remove it from everywhere else.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure that guests can use the PowerISA 2.07 CPU sPAPR
compatibility mode when they request it and the target CPU
supports it.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When using an older PowerISA level, all the upper compatibility
bits have to be enabled, too. For example, when we want to run
something in PowerISA 2.05 compatibility mode on POWER8, the bit
for 2.06 has to be set beside the bit for 2.05.
Additionally, to make sure that we do not set bits that are not
supported by the host, we apply a mask with the known-to-be-good
bits here, too.
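The intended logic looks roughly like this (constant and field names as
in target-ppc/cpu.h at the time, from memory; the TARGET_PPC64 guards
mentioned above are omitted for brevity):

    switch (cpu_version) {
    case CPU_POWERPC_LOGICAL_2_05:
        /* 2.05 compat on a POWER8 also needs the 2.06 bit set */
        pcr = pcc->pcr_mask & (PCR_COMPAT_2_06 | PCR_COMPAT_2_05);
        break;
    case CPU_POWERPC_LOGICAL_2_06:
    case CPU_POWERPC_LOGICAL_2_06_PLUS:
        pcr = pcc->pcr_mask & PCR_COMPAT_2_06;
        break;
    default:
        pcr = 0;
        break;
    }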
Signed-off-by: Thomas Huth <thuth@redhat.com>
[dwg: Added some #ifs to fix compile on 32-bit targets]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running with KVM, we might be interested in some details
of the host CPU class, too, so provide a function to get the
corresponding CPU class.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current pcr_mask values are ambiguous: Should these be the mask
that defines valid bits in the PCR register? Or should these rather
indicate which compatibility levels are possible? Anyway, POWER6 and
POWER7 should certainly not use the same values here. So let's
introduce an additional variable "pcr_supported" here which is
used to indicate the valid compatibility levels, and use pcr_mask
to signal the valid bits in the PCR register.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, virtio: new features, cleanups, fixes
This includes some infrastructure for ipmi smbios tables.
Beginning of acpi hotplug rework by Igor for supporting >255 CPUs.
Misc cleanups and fixes.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 07 Jun 2016 13:55:22 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (25 commits)
virtio: move bi-endian target support to a single location
pc-dimm: introduce realize callback
pc-dimm: get memory region from ->get_memory_region()
acpi: make bios_linker_loader_add_checksum() API offset based
acpi: make bios_linker_loader_add_pointer() API offset based
tpm: apci: cleanup TCPA table initialization
acpi: cleanup bios_linker_loader_cleanup()
acpi: simplify bios_linker API by removing redundant 'table' argument
acpi: convert linker from GArray to BIOSLinker structure
pc: use AcpiDeviceIfClass.send_event to issue GPE events
acpi: extend ACPI interface to provide send_event hook
pc: Postpone SMBIOS table installation to post machine init
ipmi: rework the fwinfo to be fetched from the interface
tests: acpi: update tables with consolidated legacy cpu-hotplug AML
pc: acpi: cpuhp-legacy: switch ProcessorID to possible_cpus idx
pc: acpi: simplify build_legacy_cpu_hotplug_aml() signature
pc: acpi: consolidate legacy CPU hotplug in one file
pc: acpi: mark current CPU hotplug functions as legacy
pc: acpi: cpu-hotplug: make AML CPU_foo defines local to cpu_hotplug_acpi_table.c
pc: acpi: consolidate \GPE._E02 with the rest of CPU hotplug AML
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Paolo's recent cpu.h cleanups broke legacy virtio for ppc64 LE guests (and
arm BE guests as well, even if I have not verified that). In particular, commit
33c11879fd42 "qemu-common: push cpu.h inclusion out of qemu-common.h" has
the side-effect of silently hiding the TARGET_IS_BIENDIAN macro from the
virtio memory accessors, and thus fully disabling support for endian-changing
targets.
To be sure this cannot happen again, let's gather all the bi-endian bits
where they belong in include/hw/virtio/virtio-access.h.
The changes in hw/virtio/vhost.c are safe because vhost_needs_vring_endian()
is not called on a hot path and non bi-endian targets will return false
anyway.
While here, also rename TARGET_IS_BIENDIAN to be more precise: it is only for
legacy virtio and bi-endian guests.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
The architecture specifies that mtspr/mfspr on an unknown SPR number
should act as a nop in privileged mode.
I haven't removed the warning, however, as it can be useful for
diagnosis.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Used to look up SLB entries by address; for some reason it was missing.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since at least the 2.05 architecture, the slbia instruction takes an
IH field in the opcode to provide some control over the effect of the
slbia on the ERATs (level-1 TLB).
We can safely ignore it, as we always flush the whole QEMU TLB, but
we should allow the bits in the decode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We had code to handle the L bit in the opcode but we didn't
allow it in the decode mask.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PPC_64BX instruction flag is used for a couple of newer
instructions currently on POWER8 but our implementation for
them works for POWER7 too (and already does the proper checking
of what is permitted) with one exception: stq needs to check
the ISA version.
This fixes the latter and adds the instructions to POWER7.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We only had them on POWER8; add them to POWER7 as well.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This ports the existing 64-bit mechanism to 32-bit, so a series
of 64 tlbie's followed by a sync, as some versions of Darwin
(ab)use, will result in a single flush.
We apply a pending flush on any sync instruction though, as Darwin
doesn't use tlbsync on non-SMP systems.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The processor only uses some bits of the address and invalidates an
entire congruence class. Some OSes such as Darwin and HelenOS take
advantage of this and occasionally invalidate the entire TLB by just
doing a series of 64 consecutive tlbie for example.
Our code tries to be too smart here, only invalidating a segment
congruence class (i.e., allowing more address bits to be relevant
in the invalidation); this fails miserably on those OSes.
Instead, don't bother: do as ppc64 does and flush the whole TLB when
tlbie is executed.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We used to always flush the TLB when changing relocation mode in
MSR:IR and MSR:DR (i.e. MMU on/off for instructions and data).
We don't anymore since we have split mmu_idx for instruction and data.
However, since we hard-code the mmu_idx in the translated code, we
now need to also make sure MSR:IR and MSR:DR are part of the hflags
used to tag translated code, so that we use different translated
code for different MMU settings.
Darwin gets hurt by this problem.
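Schematically (a sketch, not the literal hunk), hreg_compute_hflags()
now keeps the relocation bits in the flags used to look up translation
blocks:

    /* sketch: include MSR:IR and MSR:DR in the hflags mask */
    hflags_mask |= (1 << MSR_IR) | (1 << MSR_DR);
    env->hflags = env->msr & hflags_mask;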
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This fixes compilation of mmu_helper.c when all of the debug #defines at
the start of the file are enabled.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 74693da988 ('ppc: tlbie, tlbia and tlbisync are HV only')
introduced some extra checks on instruction privilege. slbia was
changed incorrectly, and hrfid and tlbia were forgotten.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This helper is only used by the various instructions that can alter
the MSR, not by interrupts. Add a comment to that effect to the
interrupt code as well, in case somebody wants to change this.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use an env. flag which is set to the initial value of MSR_HVB in
the msr_mask. We also adjust the POWER8 mask to set SHV.
Also use this to adjust ctx.hv so that it is *set* when the processor
doesn't have an HV mode (970 with Apple mode for example), thus enabling
hypervisor instructions/SPRs.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: ctx.hv used to be defined only for the hypervisor kernel
(HV=1|PR=0). It is now defined also when PR=1 and conditions are
fixed accordingly.
stripped unwanted tabs.]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With the specification at hand from the Freescale reference manual
http://cache.nxp.com/files/32bit/doc/ref_manual/SPEPEM.pdf , I have found a fix
to the handling of the efscmp* instructions in QEMU.
efscmp* instructions in QEMU set crD (Condition Register nibble) values as
(0b0100 << 2) = 0b10000 (consider the HELPER_SINGLE_SPE_CMP macro which left
shifts the value returned by efscmp* handler by 2 bits). A value of 0b10000 is
not correct according the to the reference manual.
The reference manual expects efscmp* instructions to return a value of 0bx1xx.
This patch disables the left shift in the HELPER_SINGLE_SPE_CMP macro. This
macro is used by the efscmp* and efstst* instructions only. The efstst*
instruction handlers, in turn, call the efscmp* handlers too.
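To make the bit arithmetic concrete, a tiny stand-alone check
(illustrative only, not part of the patch):

    #include <assert.h>

    int main(void)
    {
        unsigned crD = 0x4;                /* 0b0100: what the efscmp helper returns */
        assert((crD << 2) == 0x10);        /* 0b10000: no longer fits in 4 bits... */
        assert(((crD << 2) & 0xf) == 0);   /* ...so the crD field reads as 0b0000 */
        assert((crD & 0xf) == 0x4);        /* without the shift: 0bx1xx, as required */
        return 0;
    }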
*Explanation:*
Traditionally, each crD (condition register nibble) consists of 4 bits, which
are set by comparisons as follows:
crD = W X Y Z
where
W = Less than
X = Greater than
Y = Equal to
However, efscmp* instructions, being a special case, return a binary result
(efscmpeq will set crD = 0bx1xx iff op1 == op2 and 0bx0xx otherwise;
i.e. there is no notion of different crD values based on Less than, Greater
than and Equal to).
This effectively means that crD will store a "Greater than" comparison result
iff the efscmp* instruction comparison is TRUE. The compiler exploits this
feature by checking for "Branch if Less than or Equal to" (ble instruction) OR
"Branch if Greater than" (bgt instruction) for Branch-if-FALSE or
Branch-if-TRUE respectively after an efscmp* instruction. This can be seen in
the assembly code snippet below:
27 if (__real__ x != 3.0f || __imag__ x != 4.0f)
10000498: lwz r10,8(r31)
1000049c: lis r9,16448
100004a0: efscmpeq cr7,r10,r9
100004a4: ble- cr7,0x100004b8 <bar+60> //jump to abort() call
100004a8: lwz r10,12(r31)
100004ac: lis r9,16512
100004b0: efscmpeq cr7,r10,r9
100004b4: bgt- cr7,0x100004bc <bar+64> //skip abort() call
28 abort ();
100004b8: bl 0x10000808 <abort>
Signed-off-by: Talha Imran <talha_imran@mentor.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The arm target was handled by 06486077, but other targets
were ignored. This handles all the rest which actually support
disassembly (that is, skipping moxie and tilegx).
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This will enable decoding of hrfid.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Otherwise tight loops at smt_low (which OPAL does, for example) eat
so much CPU that we can't boot a kernel anymore. With that, I can
boot 8 CPUs just fine with powernv.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Otherwise it will trip on the forms used in recent versions of the
architecture.
Ideally, we should have different handlers for different architecture
levels but our current implementation of TLB flushing is dumb enough
that this will do for now.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Not that anything remotely recent supports tlbia but ...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On ppc64 especially, we flush the tlb on any slbie or tlbie instruction.
However, those instructions often come in bursts of 3 or more (a context
switch will favor a series of slbie's over an slbia if the SLB has fewer
than a certain number of entries in it, and tlbie's can happen in a series;
with PAPR, H_BULK_REMOVE can remove up to 4 entries at a time).
Doing a tlb_flush() each time is a waste of time. We end up doing a memset
of the whole TLB, reloading it for the next instruction, memset'ing again,
etc...
Those instructions don't have to take effect immediately. For slbie, they
can wait for the next context synchronizing event. For tlbie, the next
tlbsync.
This implements batching by keeping a flag that indicates that we have a
TLB in need of flushing. We check it on interrupts, rfi's, isync's and
tlbsync and flush the TLB if needed.
This reduces the number of tlb_flush() calls on a boot to the Ubuntu
installer's first dialog screen from roughly 360K down to 36K.
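A condensed sketch of the scheme (field and helper names from memory;
treat as illustrative):

    /* slbie/tlbie and friends only mark the TLB as dirty ...         */
    env->tlb_need_flush = 1;

    /* ... and the batched flush happens at the next synchronization
     * point: interrupt delivery, rfi, isync or tlbsync.              */
    static inline void check_tlb_flush(CPUPPCState *env)
    {
        CPUState *cs = CPU(ppc_env_get_cpu(env));

        if (env->tlb_need_flush) {
            env->tlb_need_flush = 0;
            tlb_flush(cs, 1);
        }
    }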
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: added a 'CPUPPCState *' variable in h_remove() and
h_bulk_remove() ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: removed spurious whitespace change, use 0/1 not true/false
consistently, since tlb_need_flush has int type]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We rework the way the MMU indices are calculated, providing separate
indices for I and D side based on MSR:IR and MSR:DR respectively,
and thus no longer need to flush the TLB on context changes. This also
adds correct support for HV as a separate address space.
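A simplified sketch of the idea (the real index assignment upstream is
more involved; immu_idx/dmmu_idx and the msr_* accessors are the names
used at the time):

    static inline void hreg_compute_mem_idx(CPUPPCState *env)
    {
        /* bit 0: translation off, bit 1: hypervisor address space */
        env->immu_idx = (msr_ir ? 0 : 1) | (msr_hv ? 2 : 0);
        env->dmmu_idx = (msr_dr ? 0 : 1) | (msr_hv ? 2 : 0);
    }

Instruction fetches then use immu_idx and data accesses dmmu_idx, so
toggling MSR:IR or MSR:DR just selects a different index instead of
forcing a TLB flush.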
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't use the resulting accessors and this gets in the way of
the split I/D TLB work.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
6a81dd17 "spapr_iommu: Rename vfio_accel parameter" renamed the vfio_accel
flag everywhere, but one spot was missed.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The KVM API restricts vcpu ids to be < KVM_CAP_MAX_VCPUS. On PowerPC
targets, depending on the number of threads per core in the host and
in the guest, some topologies do actually generate higher vcpu ids.
When this happens, QEMU bails out with the following error:
kvm_init_vcpu failed: Invalid argument
The KVM_CREATE_VCPU ioctl has several EINVAL return paths, so it is
not possible to fully disambiguate.
This patch adds a check in the code that computes vcpu ids, so that
we can detect the error earlier, and print a friendlier message instead
of calling KVM_CREATE_VCPU with an obviously bogus vcpu id.
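The check is along these lines (a sketch; the exact location, field names
and wording of the message differ in the actual patch):

    static void ppc_cpu_check_vcpu_id(PowerPCCPU *cpu, Error **errp)
    {
        int max_vcpus;

        if (!kvm_enabled()) {
            return;
        }
        max_vcpus = kvm_check_extension(kvm_state, KVM_CAP_MAX_VCPUS);
        if (cpu->cpu_dt_id >= max_vcpus) {
            error_setg(errp,
                       "Can't create CPU with id %d in KVM (max vcpus %d); "
                       "try another threads-per-core topology",
                       cpu->cpu_dt_id, max_vcpus);
        }
    }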
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the cleanups just done to rlwinm, rlwnm and rlwimi.
This adds use of deposit to rldimi.
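For rldimi the deposit form looks roughly like this (sketch only;
t_ra/t_rs and the MASK() helper follow translate.c conventions):

    if (mb <= 63 - sh) {
        /* Non-wrapping mask: deposit the low (64 - sh - mb) bits of rS
         * into rA at offset sh. */
        tcg_gen_deposit_tl(t_ra, t_ra, t_rs, sh, 64 - sh - mb);
    } else {
        /* Wrapping mask: fall back to rotate + mask + merge. */
        target_ulong mask = MASK(mb, 63 - sh);
        TCGv t1 = tcg_temp_new();

        tcg_gen_rotli_tl(t1, t_rs, sh);
        tcg_gen_andi_tl(t1, t1, mask);
        tcg_gen_andi_tl(t_ra, t_ra, ~mask);
        tcg_gen_or_tl(t_ra, t_ra, t1);
        tcg_temp_free(t1);
    }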
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>