There is a regression with the "-cpu" parameter introduced by
the spapr CPU hotplug code: we used to allow specifying a
"CPU family" name with the "-cpu" parameter when running on KVM so
that the user does not need to know the gory details of the exact
CPU version of the host CPU. For example, it was possible to
use "-cpu POWER8" on a POWER8E host CPU. This behavior does not
work anymore with the new hot-pluggable spapr-cpu-core types.
Since libvirt already heavily depends on the old behavior, this
is quite a severe regression in the QEMU parameter interface.
Let's fix it by supporting a CPU family type for the spapr-cpu-core
on KVM, too.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1363812
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The code for registering the sPAPR CPU host core type has been
added in between the generic CPU host core type and the generic
CPU family type. As a result, the instance_init and class_init
information got lost when registering the generic CPU family
type. Fix it by moving the generic family registration before
the spapr-cpu-core registration code.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We will need this function to look up the aliases in the
spapr-cpu-core code, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If we don't provide the page size in target-ppc's cpu_get_dump_info(),
the default one (TARGET_PAGE_SIZE, 4KB) is used to create
the compressed dump. It works fine with Macintosh, but not with
pseries, as the kernel default page size there is 64KB.
Without this patch, if we generate a compressed dump in the QEMU monitor:
(qemu) dump-guest-memory -z qemu.dump
This dump cannot be read by crash:
# crash vmlinux qemu.dump
...
WARNING: cannot translate vmemmap kernel virtual addresses:
commands requiring page structure contents will fail
...
The page size is used to determine the dump file's block size. The
block size needs to be at least the page size, but a multiple of the
page size works fine too. For PPC64, Linux supports either a 4KB or a
64KB software page size, so we set page_size to 64KB.
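As a minimal standalone illustration of why 64KB is a safe choice (plain C,
not the QEMU dump code), the block size only has to be a multiple of every
page size a PPC64 Linux kernel may use:

    #include <assert.h>

    int main(void)
    {
        const unsigned long block_size = 64 * 1024;
        const unsigned long linux_ppc64_page_sizes[] = { 4 * 1024, 64 * 1024 };

        for (unsigned int i = 0; i < 2; i++) {
            /* the dump block size must be a multiple of the guest page size */
            assert(block_size % linux_ppc64_page_sizes[i] == 0);
        }
        return 0;
    }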
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We forgot to do gen_update_nip() for these like we do with other
helpers. Fix this, but in a more efficient way, by passing the return
address (RA) to the accessors instead, so the overhead is only incurred
on faults.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
According to the e500mc and e5500 core reference manuals, these cores
support the mftb instruction.
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
After already fixing two issues with the huge page detection mechanism
(see commits 159d2e39a8 and 86b50f2e1b), Greg Kurz noticed another
case that causes the guest to crash, where QEMU announces huge pages
even though they should not be available for the guest:
qemu-system-ppc64 -enable-kvm ... -mem-path /dev/hugepages \
-m 1G,slots=4,maxmem=32G \
-object memory-backend-ram,policy=default,size=1G,id=mem-mem1 \
-device pc-dimm,id=dimm-mem1,memdev=mem-mem1 -smp 2 \
-numa node,nodeid=0 -numa node,nodeid=1
That means that even if there is a global mem-path option, we still
have to look at the memory-backend objects that have been specified
additionally and return their minimum page size if that value
is smaller than the page size of the main memory.
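A minimal sketch of the intended check (hypothetical helper, not the actual
getrampagesize() code):

    #include <stddef.h>

    /* Return the smallest page size of any additionally specified memory
     * backend if it is below the main memory's page size; otherwise keep
     * the main memory's page size. */
    static long min_backend_page_size(long mainram_page_size,
                                      const long *backend_page_sizes, size_t n)
    {
        long min = mainram_page_size;
        size_t i;

        for (i = 0; i < n; i++) {
            if (backend_page_sizes[i] < min) {
                min = backend_page_sizes[i];
            }
        }
        return min;
    }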
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add two hooks to get notified when MSI routes are added or removed. There
are two kinds of MSI routes:
- in kvm_irqchip_add_irq_route(): before assigning IRQFD. Used by
vhost, vfio, etc.
- in kvm_irqchip_send_msi(): when sending direct MSI message, if
direct MSI not allowed, we will first create one MSI route entry
in the kernel, then trigger it.
This patch only hooks the first one (the irqfd case). We do not need to
take care of the 2nd one, since it's only used by QEMU userspace
(kvm-apic) and those messages are always translated at trigger time.
For the 1st one, however, we need to note the routes down so that we
can notify the kernel when a cache invalidation happens.
Also, we do not hook IOAPIC msi routes (we have explicit notifier for
IOAPIC to keep its cache updated). We only need to care about irqfd
users.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Commit 86b50f2e1b ("Disable huge page support if it is not available
for main RAM") already made sure that huge page support is not announced
to the guest if the normal RAM of non-NUMA configurations is not backed
by a huge page filesystem. However, there is one more case that can go
wrong: NUMA is enabled, but the RAM of the NUMA nodes is not configured
with huge page support (and only the memory of a DIMM is configured with
it). When QEMU is started with the following command line for example,
the Linux guest currently crashes because it is trying to use huge pages
on a memory region that does not support huge pages:
qemu-system-ppc64 -enable-kvm ... -m 1G,slots=4,maxmem=32G -object \
memory-backend-file,policy=default,mem-path=/hugepages,size=1G,id=mem-mem1 \
-device pc-dimm,id=dimm-mem1,memdev=mem-mem1 -smp 2 \
-numa node,nodeid=0 -numa node,nodeid=1
To fix this issue, we've got to make sure to disable huge page support,
too, when there is a NUMA node that is not using a memory backend with
huge page support.
Fixes: 86b50f2e1b
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ps->pte_enc is a 32-bit value, which is shifted left and then compared
to a 64-bit value. It needs a cast before the shift.
Reported by Coverity.
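A minimal illustration of the pattern being fixed (generic C, not the actual
QEMU code):

    #include <stdint.h>

    uint64_t shifted_pte_enc(uint32_t pte_enc, unsigned int shift)
    {
        /* Without the cast, the shift is performed in 32 bits, so high bits
         * are lost (and shift counts >= 32 are undefined) before the result
         * is widened to 64 bits for the comparison. */
        return (uint64_t)pte_enc << shift;
    }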
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is not possible to set the compat property to an unknown value with
powerpc_set_compat(). Something must have gone terribly wrong in QEMU
if we detect an "Internal error" in powerpc_get_compat(), so let's abort then.
This patch also drops the "max_compat ? *max_compat : -1" construct. It is
useless since max_compat is dereferenced a few lines above.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MacOS uses an architecturally illegal MSR combination that
nonetheless seems to be supported by 32-bit processors, namely
having MSR[PR]=1 and one or more of MSR[DR/IR/EE]=0.
This adds support for it. To work properly, we also need to
include support for PR=1,{I,D}R=0 in the MMU index
used by the QEMU TLB.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Most of them use guard symbols like CPU_$target_H, but we also have
__MIPS_CPU_H__ and __TRICORE_CPU_H__. They all upset
scripts/clean-header-guards.pl.
The script dislikes CPU_$target_H because they don't match their file
name (they should, to make guard collisions less likely). The others
are reserved identifiers.
Clean them all up: use guard symbol $target_CPU_H for
target-$target/cpu.h.
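For example, following this rule, target-ppc/cpu.h would be guarded like
this (a sketch, not the verbatim patch content):

    #ifndef PPC_CPU_H
    #define PPC_CPU_H

    /* declarations ... */

    #endif /* PPC_CPU_H */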
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tracked down with an ugly, brittle and probably buggy Perl script.
Also move includes converted to <...> up so they get included before
ours where that's obviously okay.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in their
declarations. The patches fix the argument names to avoid confusion.
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1465907177-1399402-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need to ignore the segment page size and essentially treat
all pages as coming from a 4K segment.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds proper support for translating real mode addresses based
on the combination of HV and LPCR bits. This handles HRMOR offset
for hypervisor real mode, and both RMA and VRMA modes for guest
real mode. PAPR mode adjusts the offsets appropriately to match the
RMA used in TCG, but we need to limit it to the maximum supported by the
implementation (16G).
This includes some fixes by Cédric Le Goater <clg@kaod.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ppc_hash64_pteg_search() now decodes a PTE's page size encoding, which it
didn't previously do. This means we're now double decoding the page size,
because we check it in the fault path after ppc_hash64_htab_lookup()
returns.
To avoid this duplication have ppc_hash64_pteg_search() and
ppc_hash64_htab_lookup() return the page size from the PTE and use that in
the callers instead of decoding again.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
ppc_hash64_pteg_search() explicitly checks each HPTE's VALID and
SECONDARY bits, then uses the HPTE64_V_COMPARE() macro to check the B field
and AVPN. However, a small tweak to HPTE64_V_COMPARE() means we can check
all of these bits at once with a suitable ptem value. So, consolidate all
the comparisons for simplicity.
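The shape of the consolidated check, illustrative only (mask value and bit
layout omitted; this is not the exact HPTE64_V_COMPARE() definition):

    #include <stdbool.h>
    #include <stdint.h>

    /* One XOR-and-mask covers VALID, SECONDARY, the B field and the AVPN,
     * provided the caller builds a ptem value with all of those bits set
     * the way the current search expects them. */
    static bool hpte_matches(uint64_t hpte_v, uint64_t ptem,
                             uint64_t compare_mask)
    {
        return ((hpte_v ^ ptem) & compare_mask) == 0;
    }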
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The architecture specifies that when searching a PTEG for PTEs, entries
with a page size encoding that's not valid for the current segment should
be ignored, continuing the search.
The current implementation does this with ppc_hash64_pte_size_decode()
which is a very incomplete implementation of this check. We already have
code to do a full and correct page size decode in hpte_page_shift().
This patch moves hpte_page_shift() so it can be used in
ppc_hash64_pteg_search() and adjusts the latter's parameters to include
a full SLBE instead of just a segment page shift.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The segment page shift parameter is never used. Let's remove it.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
kvmppc_smt_threads() returns 1 if KVM is not enabled.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xsrdpi, xvrdpi and xvrspi use the round-ties-away method, not
round-to-nearest-even.
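The difference between the two methods, illustrated with plain C library
rounding rather than the SoftFloat API:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* round() resolves ties away from zero; nearbyint() in the default
         * rounding mode resolves ties to even. */
        printf("%.0f %.0f\n", round(2.5), nearbyint(2.5));   /* 3 2 */
        printf("%.0f %.0f\n", round(-2.5), nearbyint(-2.5)); /* -3 -2 */
        return 0;
    }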
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Call gen_pause for all "or rx,rx,rx" encodings other than nop. This
provides a reasonable implementation for yield, and a better
approximation for mdoio, mdoom, and miso. The choice to pause for all
encodings !=0 leverages the PowerISA admonition that the reserved
encodings might change program priority, providing a slight "future
proofing".
Signed-off-by: Aaron Larson <alarson@ddci.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were always advertising only 4K & 16M. Additionally the code wasn't
properly matching the page size with the PTE content, which meant we
could potentially hit an incorrect PTE if the guest used multiple sizes.
Finally, honor the CPU capabilities when decoding the size from the SLB
so we don't try to use 64K pages on 970.
This still doesn't add support for MPSS (Multiple Page Sizes per Segment).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors.
Commits 61a36c9b5a and 1114e712c9 reworked the hpte code
doing insertion/removal in hw/ppc/spapr_hcall.c; the hunks
modifying these areas were removed. ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
They are generally useful when debugging HV mode stuff
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Don't allow access in guest mode
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current behaviour isn't completely right: for the DEC, we
don't properly re-arm when wrapping around, but I will fix this
in a separate patch.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The architecture specifies that any instruction that sets MSR:PR will also
set MSR:EE, IR and DR.
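A minimal sketch of the rule (bit positions defined locally for illustration
and assumed to follow the usual PowerPC MSR layout):

    #include <stdint.h>

    #define MSR_DR 4    /* data relocation */
    #define MSR_IR 5    /* instruction relocation */
    #define MSR_PR 14   /* problem state */
    #define MSR_EE 15   /* external interrupt enable */

    /* Any MSR value being written that has PR set also gets EE, IR and DR
     * forced on. */
    static uint64_t msr_force_pr_bits(uint64_t val)
    {
        if (val & (1ULL << MSR_PR)) {
            val |= (1ULL << MSR_EE) | (1ULL << MSR_IR) | (1ULL << MSR_DR);
        }
        return val;
    }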
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
External interrupts can bypass the MSR_EE test if they occur in guest
mode and LPES0 is clear. In that case, they are directed to the hypervisor.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This handles filtering bits based on what is implemented by a
given architecture version. We also use it to copy to LPCR
some of the relevant 970 HID4 bits.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Includes all the bits up to ISA 2.07
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't give them a KVM reg number yet as no current KVM version
supports HV mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: SPRs AMOR,DAWR,DARWX were already included in commit f401dd32cb]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function needs to be converted to a QOM hook and virtualised for
multi-arch. This rename interferes, as cpu-qom will not have access
to the renaming, causing name divergence. The rename doesn't really do
anything anyway, so just delete it.
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <69bd25a8678b8b31b91cd9760c777bed1aafb44e.1437212383.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaitepeter@gmail.com>
This patch modifies the SoftFloat library so that it can be configured at
run time with respect to the meaning of the signaling NaN bit, while, at the
same time, strictly preserving its behavior on all existing platforms.
Background:
In floating-point calculations, there is a need for denoting undefined or
unrepresentable values. This is achieved by defining certain floating-point
numerical values to be NaNs (which stands for "not a number"). For additional
reasons, virtually all modern floating-point unit implementations use two
kinds of NaNs: quiet and signaling. The binary representations of these two
kinds of NaNs, as a rule, differ only in one bit (that bit is, traditionally,
the first bit of the mantissa).
Up to 2008, floating-point standards did not specify all details about the
binary representation of NaNs. More specifically, the meaning of the bit
that is used for distinguishing between signaling and quiet NaNs was not
strictly prescribed. (IEEE 754-2008 was the first floating-point standard
that defined that meaning clearly; see [1], p. 35.) As a result, different
platforms took different approaches, and that presented a considerable
challenge for multi-platform emulators like QEMU.
The Mips platform represents the most complex case among QEMU-supported
platforms regarding the signaling NaN bit. Up to Release 6 of the Mips
architecture, "1" in the signaling NaN bit denoted a signaling NaN, which is
the opposite of the IEEE 754-2008 standard. From Release 6 on, the Mips
architecture adopted the IEEE standard's prescription, and "0" denotes a
signaling NaN. On top of that, the Mips architecture for SIMD (also known as
MSA, or vector instructions) also specifies the signaling bit in accordance
with the IEEE standard. The MSA unit can be implemented with both
pre-Release 6 and Release 6 main processor units.
QEMU uses the SoftFloat library to implement various floating-point-related
instructions on all platforms. The current QEMU implementation allows the
meaning of the signaling NaN bit to be defined at build time, via a
preprocessor macro called SNAN_BIT_IS_ONE.
The change in this patch, on the other hand, enables the SoftFloat library
to be configured at run time. This configuration is meant to occur during
CPU initialization, at the moment when the desired behavior for the
particular CPU (and any additional FPUs) is definitely known.
The change is implemented so that it is consistent with the existing
implementation of similar cases. This means that the structure float_status
is used to pass the information about the desired signaling NaN bit on each
invocation of SoftFloat functions. The additional field in float_status is
called snan_bit_is_one, which supersedes the macro SNAN_BIT_IS_ONE.
IMPORTANT:
This change is not meant to alter emulator behavior or functionality on
any platform. It just provides the means for the SoftFloat library to be
used in a more flexible way - in other words, it merely prepares the
SoftFloat library for usage related to the Mips platform and its specifics
regarding the signaling bit meaning, which is done in some of the
subsequent patches in this series.
Further breakdown of the changes:
1) Added field snan_bit_is_one to the structure float_status, and
the corresponding setter function set_snan_bit_is_one().
2) Constants <float16|float32|float64|floatx80|float128>_default_nan
(used both internally and externally) converted to functions
<float16|float32|float64|floatx80|float128>_default_nan(float_status*).
This is necessary since they depend on the signaling bit meaning.
At the same time, for the sake of code cleanup and simplicity, the constants
<floatx80|float128>_default_nan_<low|high> (used only internally within
the SoftFloat library) are removed, as they are not needed.
3) Added a float_status* argument to SoftFloat library functions
XXX_is_quiet_nan(XXX a_), XXX_is_signaling_nan(XXX a_),
XXX_maybe_silence_nan(XXX a_). This argument must be present in
order to enable correct invocation of the new version of the functions
XXX_default_nan(). (XXX is <float16|float32|float64|floatx80|float128>
here)
4) Updated code for all platforms to reflect the changes in the SoftFloat
library. This change is twofold: it includes modifications of the SoftFloat
library function invocations, and the addition of calls to
set_snan_bit_is_one() during CPU initialization, with arguments that
are appropriate for each particular platform. It was established that
all platforms zero their main CPU data structures, so explicit
set_snan_bit_is_one(0) calls are not added, as they are not needed.
[1] "IEEE Standard for Floating-Point Arithmetic",
IEEE Computer Society, August 29, 2008.
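A simplified sketch of the mechanism for float32 (standalone C, not the
actual SoftFloat code; the types and helpers are reduced to the bare
minimum):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool snan_bit_is_one;   /* the new per-status field from change 1) */
    } float_status;

    static void set_snan_bit_is_one(bool val, float_status *status)
    {
        status->snan_bit_is_one = val;
    }

    /* float32: sign(1) | exponent(8) | fraction(23). A NaN has an all-ones
     * exponent and a non-zero fraction; whether the MSB of the fraction
     * marks it as signaling or quiet now depends on the status flag instead
     * of the build-time SNAN_BIT_IS_ONE macro. */
    static bool float32_is_signaling_nan(uint32_t a, const float_status *status)
    {
        bool is_nan = ((a >> 23) & 0xff) == 0xff && (a & 0x7fffff) != 0;
        bool frac_msb = (a >> 22) & 1;

        return is_nan && frac_msb == status->snan_bit_is_one;
    }

During CPU initialization a target would then call the setter once per FPU
context, e.g. set_snan_bit_is_one(1, ...) for a pre-Release 6 Mips FPU and
set_snan_bit_is_one(0, ...) for a Release 6 or MSA context.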
Signed-off-by: Thomas Schwinge <thomas@codesourcery.com>
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Tested-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Tested-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[leon.alrae@imgtec.com:
* cherry-picked 2 chunks from patch #2 to fix compilation warnings]
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
On powerpc, we must only signal huge page support to the guest if
all memory areas are capable of supporting huge pages. The commit
2d103aae87 ("fix hugepage support when using memory-backend-file")
already fixed the case when the user specified the mem-path property
for NUMA memory nodes instead of using the global "-mem-path" option.
However, there is one more case where it currently can go wrong.
When specifying additional memory DIMMs without using NUMA, e.g.
qemu-system-ppc64 -enable-kvm ... -m 1G,slots=2,maxmem=2G \
-device pc-dimm,id=dimm-mem1,memdev=mem1 -object \
memory-backend-file,policy=default,mem-path=/...,size=1G,id=mem1
the code in getrampagesize() currently assumes that huge pages
are possible since they are enabled for the mem1 object. But
since the main RAM is not backed by a huge page filesystem,
the guest Linux kernel then crashes very quickly after being
started. So in case we've got "normal" memory without NUMA
and without the global "-mem-path" option, we must not announce
huge pages to the guest. Since this is likely a misconfiguration
by the user, also print a warning message in this case.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds the ISA 2.06 and later power management instructions
(doze, nap, sleep and rvwinkle) and associated wakeup cause testing
in LPCR.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There's no point in inlining this: if you hit the exception case you exit
anyway, and not inlining saves about 100K of code size (and cache
footprint).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: removed '__attribute__((noinline))' from original patch ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Those instructions are only available in hypervisor real mode and
allow cache-inhibited guarded access to devices in that mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Recent server processors use the Hypervisor Emulation Assistance
interrupt for illegal instructions and *some* types of SPR accesses.
Also, the code was always generating "invalid instruction" exceptions
even for privilege violations, due to setting the wrong flags.
Finally, the checking for PR/HV was open-coded everywhere.
This reworks it all, using little helper macros for the checks, and
adding the HV interrupt (which gets converted back to a program check
in the slow path of excp_helper.c on CPUs that don't want it).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Under some circumstances, we need to direct ISI and DSI interrupts
at the hypervisor, turning them into HISI/HDSI, and using different
SPRs (HDSISR and HDAR) depending on the combination of MSR_DR and
the corresponding VPM bits in LPCR.
This moves part of the code into helpers that are fixed to select
the right exception type and registers. On pre-P7 processors, LPCR
is 0, which provides the old behaviour of directing the interrupts
at the supervisor.
Thanks to Andrei Warkentin for finding a bug when HV=1.
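A hedged sketch of that selection (standalone helper, not the QEMU code;
LPCR bit names follow the ISA):

    #include <stdbool.h>

    /* With relocation off, the VPM0 bit applies; with relocation on, VPM1
     * does. If the applicable bit is set and we are not already in
     * hypervisor mode, the fault is delivered as HDSI/HISI (using HDAR and
     * HDSISR) instead of a supervisor DSI/ISI. */
    static bool storage_fault_goes_to_hypervisor(bool relocation_on,
                                                 bool msr_hv,
                                                 bool lpcr_vpm0,
                                                 bool lpcr_vpm1)
    {
        bool vpm = relocation_on ? lpcr_vpm1 : lpcr_vpm0;

        return vpm && !msr_hv;
    }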
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: Merged a fix on POWERPC_EXCP_HDSI fixing the condition on
msr_hv, from Andrei Warkentin <andrey.warkentin@gmail.com> ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were initializing unused ones and missing some
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This properly implements LPES0 handling for HV vs. !HV mode and
removes the unsupported LPES1, which has been removed from the specs
since ISA v2.07.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: AIL implementation was fixed in commit 5c94b2a5e5. This patch
only contains the bits of the original patch related to LPES0
handling, adapted commit log.
fixed checkpatch.pl errors. ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows us to set the appropriate LPCR bits which will be used
when fixing the exception model for the HV mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: previous commit 26a7f1291b did not include the LPCR setting as
it was not needed at the time, adapted commit log ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This reworks emulation of the various "rfi" variants. I removed
some masking bits that I couldn't make sense of; the only bit that
I am aware we should mask here is POW, and the CPU's MSR mask should
take care of the rest.
This also fixes some problems when running 32-bit userspace under
a 64-bit kernel.
This patch broke 32-bit OpenBIOS when run under a 970 CPU. A fix was
proposed here:
https://www.coreboot.org/pipermail/openbios/2016-June/009452.html
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: updated the commit log with the reference of the openbios fix ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: Remove hunk which disabled rfi on 64-bit CPUs. The change was
correct, but we need to fix OpenBIOS before applying it]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>