The set of computations used in b5a73f8d8a
is only valid if the current word size == target_long size. This failed
to take ppc64 in 32-bit (narrow) mode into account.
Add a NARROW_MODE macro to avoid conditional compilation.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
After previous cleanups, the many scattered checks of env->mmu_model in
the ppc MMU implementation have, at least for "classic" hash MMUs been
reduced (almost) to a single switch at the top of
cpu_ppc_handle_mmu_fault().
An explicit switch is still a pretty ugly way of handling this though. Now
that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite
straightforward to instead make the handle_mmu_fault function a QOM method
on the CPU object.
This patch implements such a scheme, initializing the method pointer at
the same time as the mmu_model variable. We need to keep the latter around
for now, because of the MMU types (BookE, 4xx, et al) which haven't been
converted to the new scheme yet, and also for a few other uses. It would
be good to clean those up eventually.
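For illustration, a minimal sketch of such a hook, with a hypothetical
signature and field placement:

    /* Sketch only: a per-MMU fault handler hung off the CPU class,
     * initialized alongside mmu_model. */
    typedef int (*handle_mmu_fault_fn)(CPUPPCState *env, target_ulong eaddr,
                                       int rwx, int mmu_idx);

    struct PowerPCCPUClass {
        /* ... existing fields ... */
        handle_mmu_fault_fn handle_mmu_fault;
    };

    /* e.g. at init time for a 64-bit hash MMU:
     *     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
     */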
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For softmmu builds the interface from the generic code to the target
specific MMU implementation is through the tlb_fill() function. For ppc
this is currently in mem_helper.c, whereas it would make more sense in
mmu_helper.c. This patch moves it, which also allows
cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
mmu_helper.c is, for obvious reasons, almost entirely concerned with
softmmu builds of qemu. However, it does contain one stub function which
is used when CONFIG_USER_ONLY=y - the user-only version of
cpu_ppc_handle_mmu_fault, which always triggers an exception. The entire
rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).
We clean this up by moving the user only stub into its own new file,
removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
is set. This also lets us remove the #define of cpu_handle_mmu_fault to
cpu_ppc_handle_mmu_fault - that name is only used from generic code for
user only - so we just name our split user version by the generic name.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Version 2.06 of the Power architecture describes an additional page
protection mechanism. Each virtual page has a "class" (0-31) recorded in
the PTE. The AMR register contains bits which can prohibit reads and/or
writes on a class by class basis. Interestingly, the AMR is userspace
readable and writable; however, user mode writes are masked by the contents
of the UAMOR which is privileged.
This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs. The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR. We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.
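As an illustration of the mechanism (bit-pair layout per our reading of
ISA 2.06; double-check against the spec before relying on it):

    /* Each PTE carries a 5-bit storage key; the AMR holds a two-bit
     * field per key, assumed here to be write-prohibit then
     * read-prohibit, numbered from the most significant bit. */
    static int amr_prot(uint64_t amr, unsigned key)
    {
        int prot = PAGE_READ | PAGE_WRITE;
        unsigned pair = (amr >> (62 - 2 * key)) & 0x3;

        if (pair & 0x2) {
            prot &= ~PAGE_WRITE;    /* stores prohibited */
        }
        if (pair & 0x1) {
            prot &= ~PAGE_READ;     /* loads prohibited */
        }
        return prot;
    }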
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of
ppc_hash{32,64}_translate(), so this patch combines them. This
means that instead of one returning a variety of non-obvious error codes
which then get translated into the various mmu exception conditions, we can
just generate the exceptions as we discover problems in the translation
path. This also removes the last usage of mmu_ctx_hash{32,64}.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the hash mmu versions of get_phys_page_debug() use the same
ppc_hash{32,64}_translate() function to do the translation logic as the
normal mmu fault handler code.
That sounds like a good idea, but has some complications. The debug path
doesn't need, or even want some parts of the full translation path, like
permissions checking. Furthermore, the pte flags update included in the
normal path means that the debug call is not quite side effect free.
This patch, therefore, reimplements get_phys_page_debug as the minimal
required subset of the full translation path.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
At present we take the whole of word 1 of the hash PTE as the real page
number used to calculate the translated address. This is incorrect,
because it leaves the flags from the low bits of PTE word 1 in place in the
RPN. We mostly get away with that because the value is later masked by
TARGET_PAGE_MASK.
More recent 64-bit CPUs also have a small number of flag bits (PP0 and
KEY) in the top bits of PTE word 1. Any guest which used those bits would
fail with the current code.
This patch fixes the problem by correctly masking out the RPN field of
PTE word 1. This is safe, even for older CPUs which didn't have PP0 and
KEY, because although the RPN notionally extended to the very top of PTE
word 1, none of those CPUs actually implemented that many real address
bits.
We add analogous masking to the 32-bit code, even though it also doesn't
have the high flag bits, for consistency and clarity.
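A sketch of the fix, with HPTE_R_RPN standing in for whatever mask the
real headers define:

    /* Keep only the RPN field of PTE word 1, then add the in-page
     * offset of the access. */
    raddr = (pte1 & HPTE_R_RPN) | (eaddr & ~TARGET_PAGE_MASK);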
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for
large pages only include the offset of the whole large page. But the qemu
tlb only handles pages of the base size (4k) so we need to break up the
large pages into 4k pieces for the qemu tlb. To do that we have a somewhat
awkward piece of code that folds the address bits between 4k and the page
size from the virtual address into the real address from the pte.
This patch simplifies this by redefining the raddr output of
ppc_hash64_translate() to be the full real address of the faulting address,
rather than just the (4k) page offset. Computing that turns out to be
simpler, and is fine for the caller, since it already masks with
TARGET_PAGE_MASK before inserting into the qemu tlb.
The multiple page size complication doesn't exist for 32-bit hash mmus, but
we make an analogous cleanup there for consistency.
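Sketch of the simplified computation, with illustrative names; the RPN
supplies the (possibly large) page frame and the EA the offset within it:

    /* Full real address of the faulting access; the caller masks
     * with TARGET_PAGE_MASK itself before inserting into the tlb. */
    raddr = (rpn & ~(page_size - 1)) | (eaddr & (page_size - 1));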
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a
PTE's referenced and changed bits as necessary to reflect the access. It
is somewhat long-winded, though. This patch open codes them in their
(single) callers, in a simpler way.
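The open-coded form amounts to roughly this (flag and helper names
illustrative):

    /* Always note the reference; note the change only on a store
     * that the PTE actually permits. */
    new_pte1 = pte.pte1 | HPTE_R_R;
    if (rwx == 1 && (prot & PAGE_WRITE)) {
        new_pte1 |= HPTE_R_C;
    }
    if (new_pte1 != pte.pte1) {
        ppc_hash64_store_hpte1(env, pte_offset, new_pte1);
    }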
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently, for 64-bit hash mmu, the execute protection bit placed into the
qemu tlb is based only on the N (No execute) bit from the PTE. However,
No Execute can also be set at the segment level. We do check this on
execute faults, but this still means we could incorrectly allow execution
of code from a No Execute segment, if a prior read or write fault caused
the page to be loaded into the qemu tlb with PROT_EXEC set.
To correct this, we (re-)check the segment level no execute permission when
generating the protection bits for the qemu tlb.
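A sketch of the extra check when building the protection bits, assuming
the usual SLB_VSID_N definition for the segment-level No-execute bit:

    prot = ppc_hash64_pte_prot(env, slb, pte);
    /* Segment-level No-execute overrides whatever the PTE allows. */
    if (slb->vsid & SLB_VSID_N) {
        prot &= ~PAGE_EXEC;
    }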
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently checking of PTE permission bits is split messily amongst
ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers.
This patch cleans this up to have the new function
ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for
64-bit) or segment register (32-bit) and the pte. A greatly simplified
version of the actual permissions check is then open coded in the callers.
The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of
ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old
ppc_hash32_pp_check(), which is also used for checking BAT permissions on
the 601.
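For reference, the PP decoding behind ppc_hash32_pp_prot() looks roughly
like this (a sketch assuming the usual 0-3 encodings):

    static int ppc_hash32_pp_prot(int key, int pp, int nx)
    {
        int prot;

        if (key == 0) {
            /* Key deselected: pp 3 is read-only, everything else r/w */
            prot = (pp == 0x3) ? PAGE_READ : PAGE_READ | PAGE_WRITE;
        } else {
            switch (pp) {
            case 0x0:
                prot = 0;                       /* no access */
                break;
            case 0x1:
            case 0x3:
                prot = PAGE_READ;               /* read-only */
                break;
            case 0x2:
            default:
                prot = PAGE_READ | PAGE_WRITE;  /* read/write */
                break;
            }
        }
        if (!nx) {
            prot |= PAGE_EXEC;
        }
        return prot;
    }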
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Previous cleanups have meant the nx field of the mmu_ctx_hash32 structure
is now only used within ppc_hash32_translate(), and so it can be replaced
by a local variable.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently if ppc_hash{32,64}_translate() finds a PTE matching the given
virtual address, it will always update the PTE's R & C (Referenced and
Changed) bits. This happens even if the PTE's permissions mean we are
about to deny the translation.
This is clearly a bug, although we get away with it because:
a) It will only incorrectly set, never reset the bits, which should not
cause guest correctness problems.
b) Linux guests never use the R & C bits anyway.
This patch fixes the behaviour, only updating R & C when access is granted
by the PTE.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently, on any failure translating an address with BATs, we proceed to
normal segment and page table translation. That's incorrect if the
BAT error was due to permissions, rather than not finding a matching BAT.
We've gotten away with it because a guest would not usually put
translations for the same address in both BATs and page table. Nonetheless
this patch corrects the logic, only doing page table lookup if no BAT
is found. A matching BAT with bad permissions will now correctly trigger
an exception.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch makes a general cleanup of the ppc_hash32_get_bat() function,
renaming it to ppc_hash32_bat_lookup(). In particular, the new function
only looks for a matching BAT, with the permissions check from the old
function moved to the caller.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The code to search for a matching BAT for a virtual address is somewhat
long-winded and awkward. In particular, it relies on separate size and
validity information being returned from the hash32_bat_size() function
(and 601 specific variant).
We simplify this by having hash32_bat_size() return instead a mask of the
virtual address bits to match, and 0 for invalid (since a BAT can never
match the entire address space).
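The matching test then reduces to something like this sketch (field and
helper names illustrative):

    /* hash32_bat_size() returns the EA bits this BAT must match,
     * or 0 if the BAT is invalid. */
    mask = hash32_bat_size(env, batu, batl);
    if (mask && (ea & mask) == (batu & BATU32_BEPI)) {
        /* hit: translate through this BAT */
    }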
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
hash32_bat_size_prot() and its 601 variant, as the name suggests, return
both a BAT's size - needed to search for a matching BAT - and its
permissions, only relevant once a matching BAT has been located.
There's no particular advantage to combining these, so we split these roles
into separate functions for clarity.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
In the code for handling BATs, the hash32_bat_size_prot() and
hash32_bat_601_size_prot() functions are passed the BAT contents by
reference (pointer) for no clear reason, since they only need the values
within.
This patch removes this odd usage, and uses the resulting change to clean
up the caller slightly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions
are pretty trivial and only have one call site. This patch therefore
clarifies the overall code flow by folding those functions into their
call site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch makes a general cleanup of the address mangling logic in
ppc_hash64_htab_lookup(). In particular it now avoids repeatedly switching
on the segment size. The lack of SLB and multiple segment sizes on 32-bit
means an analogous cleanup is not needed there.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() are poorly named, since they both find a PTE and do
permissions checking of it. This patch makes them only locate a matching
PTE, moving the permission checking and other logic to the caller. We
rename the resulting search functions ppc_hash{32,64}_htab_lookup().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() are not particularly well named. They only "find" a PTE
within a given PTE group, and they also do permissions checking and other
things.
This patch makes it somewhat closer to matching the name, by folding the
search of both primary and secondary hash bucket into it, along with the
various address bit shuffling to determine the right hash buckets.
In the 32-bit case we also remove the code for splitting large pages into
4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page
sizes.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() do several things. First they search through a PTEG
looking for a PTE matching our virtual address. Then they do permissions
checking and other processing on that PTE.
This patch separates the search by VA out from the rest. The search is
combined with the pte{32,64}_match() functions into new
ppc_hash{32,64}_pteg_search() functions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
The ppc hash mmu hashes each virtual address to a primary and secondary
possible hash bucket (aka PTE group or PTEG) each with 8 PTEs. Then we
need a linear search through the PTEs to find the correct one for the
virtual address we're translating.
It is a programming error for the guest to insert multiple PTEs mapping the
same virtual address into a PTEG - in this case the ppc architecture says
the MMU can either act as if just one was present, or give a machine check.
Currently our code takes the first matching PTE in a PTEG if it finds a
successful translation. But if a matching PTE is found, but permission
bits don't allow the access, we keep looking through the PTEG, checking
that any other matching PTEs contain an identical translation.
That behaviour is perhaps not exactly wrong, but it's certainly not useful.
This patch changes it to always just find the first matching PTE in a PTEG.
In addition, if we get a permissions problem on the primary PTEG, we then
search the secondary PTEG. This is incorrect - a permission denying PTE
in the primary PTEG should not be overwritten by an access granting PTE in
the secondary (although again, it would be a programming error for the
guest to set up such a situation anyway). So additionally we update the
code to only search the secondary PTEG if no matching PTE is found in the
primary at all.
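After the change the lookup order is simply this (helper names
illustrative):

    /* First match in the primary PTEG wins; fall back to the
     * secondary PTEG (complemented hash) only if nothing matched. */
    pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, &pte);
    if (pte_offset == -1) {
        pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, &pte);
    }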
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
On the ppc hash mmus, no-execute can be set at the segment level (on more
recent 64-bit hash mmus it can also be set at the page level). This patch
separates out this check to make it clearer what is going on, and to avoid
excessive indentation of the remaining translation code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This further separates the unusual case handling of direct store segments
from the main translation path by moving its logic into a helper function,
with some tiny cleanups along the way.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
At present a large chunk of ppc_hash32_translate() is taken up with an
ugly if selecting between direct store segments (hardly ever used) and
normal paged segments. This patch clarifies the flow of code by
handling direct store segments immediately then returning, leaving the
straight line code to describe the normal MMU path.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
After previous work, ppc_hash{32,64}_get_physical_address() are almost
trivial wrappers around get_segment{32,64}() which does nearly all the work of
translating an address according to the hash mmu model. Therefore combine the
two functions into one, under the better name of
ppc_hash{32,64}_translate().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The eaddr field of mmu_ctx_hash{32,64} is effectively just used to pass the
effective address from get_segment{32,64}() to find_pte{32,64}(). Just
pass it as a normal parameter instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The nx field in mmu_ctx_hash64 is used in two different functions. But it's
used for slightly different things in each place, and the value is never
propagated between them. In other words, it might as well be two local
variables. This patch makes it so.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
In ppc, env->access_type is updated by e.g. integer load/stores with
ACCESS_INT, floating point load/stores with ACCESS_FLOAT, and so forth. In
hash mmu fault paths it can also be set to ACCESS_CODE for instruction
fetch accesses.
But the only place which uses anything more of the access_type than
whether it is instruction fetch or data access is the direct store segment
handling. Instruction versus data access can be more simply determined
from the rw value passed down from the top.
This changes the code to use rw in preference to checking access_type.
For the 32-bit case there is a small amount of code (for direct store
segments) that still needs the full access type. Instead of passing it
all the way down the stack, we retrieve it from the env structure, which
is where it came from anyway, before this patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
On real hardware the ppc hash page table is stored in memory; accordingly
our mmu emulation code can read a hash page table in guest memory. But,
when paravirtualized under PAPR, the real hash page table is in host
memory, accessible to the guest only via hypercalls. We model this by
also allowing the MMU emulation code to access a specially allocated hash
page table outside the guest's memory image. At present these two options
are implemented with some ugly conditionals at each access point in the mmu
emulation code. In the implementation of the PAPR hypercalls, we assume
the external hash table.
This patch cleans things up by adding helpers to load and store from the
hash table for both 32-bit and 64-bit hash mmus. The 64-bit versions
handle both the in-guest-memory and outside guest memory cases. The 32-bit
versions only handle the in-guest-memory case since no 32-bit systems can
have an external hash table at present.
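A sketch of the 64-bit load helper for PTE word 0, covering both cases
(field names are assumptions):

    static uint64_t ppc_hash64_load_hpte0(CPUPPCState *env,
                                          hwaddr pte_offset)
    {
        if (env->external_htab) {
            /* PAPR: the table lives outside the guest image */
            return ldq_p(env->external_htab + pte_offset);
        }
        /* otherwise read through guest physical memory */
        return ldq_phys(env->htab_base + pte_offset);
    }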
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently cpu.h contains a number of definitions relating to the 64-bit
hash MMU. Some are used in the MMU emulation code, but some are only used
in the spapr MMU management hcall implementations.
This patch moves these definitions (except for a few that are needed
more widely) into mmu-hash64.h header, shared between the MMU emulation
code and the spapr hcall code. The MMU emulation code is also updated to
actually use a number of those definitions in place of hard coded
constants.
Similarly, we add new analogous definitions to mmu-hash32.h and use those
in place of many hard-coded constants in mmu-hash32.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
mmu_ctx_t is currently defined in cpu.h. However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c. Furthermore it contains information which
should be specific to particular MMU types. Therefore, move its definition
to mmu_helper.c. mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The functions for looking up BATs (Block Address Translation - essentially
a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
6xx style software loaded TLB implementations.
This patch splits out a copy for the 32-bit hash MMUs, to facilitate
cleaning it up. The remaining version is left, but cleaned up slightly
to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size. In the
64-bit paths, it's only called in one place, and it's a trivial
calculation. This patch, therefore, open codes it for 64-bit. The
remaining version, which is used in two places, is made 32-bit only and
moved to mmu-hash32.c.
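Open-coded, the 64-bit case is a one-liner (constant name illustrative;
a PTEG is 8 PTEs of 16 bytes each):

    pte_offset = (hash & env->htab_mask) * HASH_PTEG_SIZE_64;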
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The newly separated paths for hash mmus rely on several helper functions
which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
pte_update_flags(). While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths. For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded tlb implementations. The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
cpu_get_phys_page_debug() is a trivial wrapper around
get_physical_address(). But even the signature of
get_physical_address() has some things we'd like to clean up on a
per-mmu basis, so this patch moves the test on mmu model out to
cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour
depends on MMU type) then, if that fails, issues an appropriate exception
- which again has a number of dependencies on MMU type.
This patch starts converting cpu_ppc_handle_mmu_fault() to have a
single switch on MMU type, calling MMU specific fault handler
functions which deal with both translation and exception delivery
appropriately for the MMU type. We convert 32-bit and 64-bit hash
MMUs to this new model, but the existing code is left in place for
other MMU types for now.
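The shape of the new dispatch, as a sketch:

    switch (env->mmu_model) {
    case POWERPC_MMU_64B:       /* 64-bit hash */
        return ppc_hash64_handle_mmu_fault(env, address, rw, mmu_idx);
    case POWERPC_MMU_32B:       /* classic 32-bit hash */
        return ppc_hash32_handle_mmu_fault(env, address, rw, mmu_idx);
    default:
        break;                  /* other MMU types keep the old path */
    }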
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address
can either call check_physical (which has further tests for mmu type)
or get_segment64. Similarly for 32-bit hash MMUs we can either call
check_physical or get_bat() and get_segment32().
This patch splits off the whole get_physical_address() path for hash
MMUs into 32-bit and 64-bit versions, handling real mode correctly for
such MMUs without going to check_physical and rechecking the mmu type.
Correspondingly, the hash MMU specific paths in check_physical() are
removed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently get_physical_address() first checks to see if translation is
enabled in the MSR, then in the translation on case switches on the mmu
type. Except that for BookE MMUs, translation is always on, and so it
has to switch in the "translation off" case as well and do the same thing
as the translation on path for those MMUs. Plus, even translation off
doesn't behave exactly the same on the various MMU types so there are
further mmu type checks in the "translation off" path.
As a first step to cleaning this up, this patch moves the switch on mmu
type to the top level, then makes the translation on/off check just for
those mmu types where it is meaningful.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The poorly named get_segment() function handles most of the address
translation logic for hash-based MMUs. It has many ugly conditionals on
whether the MMU is 32-bit or 64-bit.
This patch splits the function into 32 and 64-bit versions, using the
switch on mmu_type that's already in the caller
(get_physical_address()) to select the right one. Most of the
original function remains in mmu_helper.c to support the 6xx software
loaded TLB implementations (cleaning those up is a project for another
day).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
32-bit and 64-bit hash MMU implementations currently share a find_pte
function. This results in a whole bunch of ugly conditionals in the shared
function, and not all that much actually shared code.
This patch separates out the 32-bit and 64-bit versions, putting them
in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
both versions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently support for both 32-bit and 64-bit hash MMUs share an
implementation of pte_check. But there are enough differences that this
means the shared function has several very ugly conditionals on "is_64b".
This patch cleans things up by separating out the 64-bit version
(putting it into mmu-hash64.c) and the 32-bit hash version (putting it
in mmu-hash32.c). Another copy remains in mmu_helper.c, which is used
for the 6xx software loaded TLB paths.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
As a first step to disentangling the handling for 64-bit hash MMUs from
the rest, we move the code handling the Segment Lookaside Buffer (SLB)
(which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
One LOG_MMU statement in mmu_helper.c has an odd check on the effective
address being translated. I can see no reason for this; I suspect it was
a debugging hack from long ago. This patch removes it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes the never-used pte64_invalidate() function, and makes
ppcmas_tlb_check() static, since it's only used within that file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 620 was the very first 64-bit PowerPC implementation, but
hardly anyone ever actually used the chips. qemu notionally supports the
620, but since we don't actually have code to implement the segment table,
the support is broken (quite likely in other ways too).
This patch, therefore, removes all remaining pieces of 620 support, to
stop it cluttering up the platforms we actually care about. This includes
removing support for the ASR register, used only on segment table based
machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Although the support of this register may be incomplete, there is no
reason to prevent the debugger from reading or writing it.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
target-ppc/kvm.c has an #ifdef on CONFIG_PSERIES, for the handling of
KVM exits due to a PAPR hypercall from the guest. However, since commit
e4c8b28cde "ppc: express FDT dependency of
pSeries and e500 boards via default-configs/", this hasn't worked properly.
That patch altered the configuration setup so that although CONFIG_PSERIES
is visible from the Makefiles, it is not visible from C files. This broke
the pseries machine when KVM is in use.
This patch makes a quick and dirty fix, by removing the CONFIG_PSERIES
dependency, replacing it with TARGET_PPC64 (since removing it entirely
leads to type mismatch errors). Technically this breaks the build when
configured with --disable-fdt, since that disables CONFIG_PSERIES on
TARGET_PPC64. However, it turns out the build was already broken in that
case, so this fixes pseries kvm without breaking anything extra. I'm
looking into how to fix that build breakage, but I don't think that need
delay applying this patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move it to qom/cpu.h to avoid issues with include order.
Change pc_acpi_smi_interrupt() opaque to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move array of CPU aliases to cpu-models.c, alongside model definitions.
This requires zero-terminating the aliases array, since ARRAY_SIZE() can
no longer be used in translate_init.c then.
Suggested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The QMP query-cpu-definitions implementation iterated over CPU classes
only, which were getting fewer and fewer as aliases were extracted.
Keep them in QMP as valid -cpu arguments even if not guaranteed stable.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Revert adding a separate -cpu ? output section for aliases and list them
per CPU subclass.
Requested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This avoids assigning individual class fields and contributors
forgetting to add field assignments in KVM-only code.
ppc_cpu_class_find_by_pvr() requires the CPU model classes to be
registered, so defer host CPU type registration to kvm_arch_init().
Only register the host CPU type if there is a class with matching PVR.
This lets us drop error handling from instance_init.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
A victim of the d523dd00a7 AREG0
conversion, insert the missing cpu_env arguments.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently qemu does not get and put the state of the floating point and
vector registers to KVM. This is obviously a problem for savevm, as well
as possibly being problematic for debugging of FP-using guests.
This patch fixes this by using new extensions to the ONE_REG interface to
synchronize the qemu floating point state with KVM.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently when running under KVM on ppc, we synchronize a certain number of
vital SPRs to KVM through the SET_SREGS call. This leaves out quite a lot
of important SPRs which are maintained in KVM. It would be helpful to
have their contents in qemu for debugging purposes, and when we implement
migration it will be vital, since they include important guest state that
will need to be restored on the target.
This patch sets up for synchronization of any registers supported by the
KVM ONE_REG calls. A new variant on spr_register() allows a ONE_REG id to
be stored with the SPR information. When we set/get information to KVM
we also synchronize any SPRs so registered.
For now we set this mechanism up to synchronize a handful of important
registers that already have ONE_REG IDs, notably the DAR and DSISR.
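One such transfer looks roughly like this (the exact ioctl plumbing and
register ids should be treated as a sketch):

    struct kvm_one_reg reg = {
        .id   = KVM_REG_PPC_DAR,
        .addr = (uintptr_t)&env->spr[SPR_DAR],
    };

    ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
    if (ret != 0) {
        /* the running kernel may not know this register: warn,
         * don't abort */
    }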
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Let it resolve to v2.3 rather than v2.0.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Now that model definitions only reference their parent type, model
definitions are independent of the family definitions and can be
compiled independently of TCG translation.
Keep all #if defined(TODO) code local to cpu-models.c.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This gets rid of some more overly long comments that have lost most of
their purpose now that in most cases there are only two functions left per
CPU family.
The class field is inherited by the actual CPU models, so override it.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Now POWERPC_DEF_SVR() no longer sets family-specific fields itself.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Don't attempt to suppress registration of CPU types, since the criterion
is actually a property of the class and should thus become a field.
Since we can't check a field set in a class_init function before
registering the type that leads to execution of that function, guard the
-cpu class lookup instead and suppress exposing these classes in -cpu ?
and in QMP.
In case someone tries to hot-add an incompatible CPU via device_add,
error out in realize.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Instead of assigning *_<family> constants, set .parent to a family type.
Introduce a POWERPC_FAMILY() macro to keep type registration close to
its implementation. This macro will need tweaking later.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Turn the array of model definitions into a set of self-registering QOM
types with their own class_init. Unique identifiers are obtained from
the combination of PVR, SVR and family identifiers; this requires all
alias #defines to be removed from the list. Possibly there are some more
left after this commit that are not currently being compiled.
Prepares for introducing abstract intermediate CPU types for families.
Keep the right-aligned macro line breaks within 78 chars to aid
three-way merges.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
We are about to drop the redundant name field along with ppc_def_t.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Drop the #if 0'ed alternative to make it "ppc64" for TARGET_PPC64.
If we ever want to change it, we can more easily do so now.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move definitions that were 100% identical except for the name into a
list of aliases so that we don't register duplicate CPU types.
Drop the accompanying comments since they don't really add value.
We need to support recursive lookup due to code names referencing a
generic name referencing a specific model revision.
List aliases separately for -cpu ?.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
To repurpose the POWERPC_DEF_SVR() macro outside of an array,
move the comma into the macro. No functional change.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
It is within a large TARGET_PPC64 section from 970 to 620,
so an #endif /* TARGET_PPC64 */ is confusing. Clean this up.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Commit fe828a4d4b added a new fatal error
message while QOM realize'ification was in flight.
Convert it to return an Error instead of exit()ing.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Unlike derived PVR constants mapped to CPU_POWERPC_G2LEgp3, the
"G2leGP3" model definition itself used the CPU_POWERPC_G2LEgp1 PVR.
Fixing this will allow aliasing CPU_POWERPC_G2LEgp3-using types to
"G2leGP3".
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
It was defined to ..._MPC8545E_v21 rather than ..._MPC8547E_v21.
Due to both resolving to CPU_POWERPC_e500v2_v21 this did not show.
Fixing this nonetheless helps with QOM'ifying CPU aliases.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions which reference CPUState
fields given the cpu_env register that TCG targets set up with a
pointer to the CPUArchState struct.
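Per target the macro reduces to a single offsetof(); a sketch for ppc,
with the usage comment assuming a CPUState field such as halted:

    /* in the target's cpu-qom.h */
    #define ENV_OFFSET offsetof(PowerPCCPU, env)

    /* Generic TCG code holding cpu_env (a pointer to CPUArchState)
     * can then reach CPUState fields:
     *     tcg_gen_ld_i32(t0, cpu_env,
     *                    -ENV_OFFSET + offsetof(CPUState, halted));
     */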
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
While ~T0+T1+CF = T1-T0+CF-1 is true for the low 32 bits,
it does not produce the correct carry-out to bit 33. Do
exactly what the manual says.
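As a concrete case: with T0 = 0, T1 = 0 and CF = 1, the architected
~T0 + T1 + CF is 0xFFFFFFFF + 0 + 1 = 0x100000000, so the carry-out is 1;
evaluating T1 - T0 + CF - 1 in a wider register instead gives plain 0,
with no bit 32 set, losing the carry.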
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Which means that callers need not copy data into local tmps.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The target-specific ENV_GET_CPU() macros have allowed us to navigate
from CPUArchState to CPUState. The reverse direction was not supported.
Avoid introducing CPU_GET_ENV() macros by initializing an untyped
pointer that is initialized in derived instance_init functions.
The field may not be called "env" due to it being poisoned.
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Adapt ppc_cpu_realize() signature, hook it up to DeviceClass and set
realized = true in cpu_ppc_init().
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
CPUs are never added to the composition tree, so delete is achieved
simply by removing the last references to them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Since HWADDR_PRIx is always the same now, use %016 for TARGET_PPC64 and
%08 for common code. This may slightly change the ppc64 debug output.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
In r5949 / 76db3ba44e (target-ppc: memory
load/store rework) the variable little_endian was replaced with ctx.le_mode.
Update the debug code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The bit that makes a dcbz instruction a dcbzl instruction was declared as
reserved in ppc32 ISAs. However, hardware simply ignores the bit, so code
that invokes dcbzl instead of dcbz is valid even on 750 and G4.
Thus, mark the bit as unreserved so that we properly emulate a simple dcbz
in case we're running on non-G5s.
While at it, also refactor the code to check the 970 special case during
runtime. This way we don't need to differentiate between a 970 dcbz and
any other dcbz anymore. We also allow for future improvements to add e500mc
dcbz handling.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Introduce CPUClass::class_by_name and add a default implementation.
Hook up the alpha and ppc implementations.
Introduce a wrapper function cpu_class_by_name().
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will allow each architecture to define how the VCPU ID is set on
the KVM_CREATE_VCPU ioctl call.
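The hook can be trivial; a sketch of a possible default:

    /* Sketch: report the QEMU cpu index as the KVM vcpu id
     * (assuming cpu_index is reachable from CPUState here). */
    unsigned long kvm_arch_vcpu_id(CPUState *cpu)
    {
        return cpu->cpu_index;
    }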
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Currently the target-ppc tcg code only supports a single thread. You can
specify more, but they're treated identically to multiple cores. On KVM
we obviously can't support more threads than the hardware; if more are
specified it will cause strange and cryptic errors.
This patch clarifies the situation by giving a simple meaningful error if
more threads are specified than we can support.
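The check itself is straightforward (helper name illustrative):

    /* KVM can expose as many threads per core as the host; TCG
     * only supports one. */
    int max_smt = kvm_enabled() ? kvmppc_smt_threads() : 1;

    if (smp_threads > max_smt) {
        fprintf(stderr, "Cannot support more than %d threads on PPC with %s\n",
                max_smt, kvm_enabled() ? "KVM" : "TCG");
        exit(1);
    }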
Signed-off-by: Mike Qiu <qiudayu@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Even though our -cpu types for e500mc and e5500 are not real CPUs that
actually have version registers, a guest might still want to access
said version register and that has to succeed for a guest to be happy.
So let's expose a zero SVR value on E500_SVR SPR reads.
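Registering a read-as-zero SVR is a single spr_register() call; argument
order per the usual helper, so treat this as a sketch:

    spr_register(env, SPR_E500_SVR, "SVR",
                 SPR_NOACCESS, SPR_NOACCESS,      /* no user access */
                 &spr_read_generic, SPR_NOACCESS, /* supervisor read-only */
                 0x00000000);                     /* reads as zero */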
Signed-off-by: Alexander Graf <agraf@suse.de>
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.
Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.
Move common parts of mips cpu_state_reset() to mips_cpu_reset().
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Previously we silently exited; with subclasses we got an opcode warning.
Instead, explicitly tell the user what's wrong.
An indication for this is -cpu ? showing "host" with an all-zero PVR.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Since the model list is highly macrofied, keep ppc_def_t for now and
save a pointer to it in PowerPCCPUClass. This results in a flat list of
subclasses including aliases, to be refined later.
Move cpu_ppc_init() to translate_init.c and drop helper.c.
Long-term the idea is to turn translate_init.c into a standalone cpu.c.
Inline cpu_ppc_usable() into type registration.
Split cpu_ppc_register() in two by code movement into the initfn and
by turning the remaining part into a realizefn.
Move qemu_init_vcpu() call into the new realizefn and adapt
create_ppc_opcodes() to return an Error.
Change ppc_find_by_pvr() -> ppc_cpu_class_by_pvr().
Change ppc_find_by_name() -> ppc_cpu_class_by_name().
Turn -cpu host into its own subclass. This requires moving the
kvm_enabled() check in ppc_cpu_class_by_name() to avoid the class being
found via the normal name lookup in the !kvm_enabled() case.
Turn kvmppc_host_cpu_def() into the class_init and add an initfn that
asserts KVM is in fact enabled.
Implement -cpu ? and the QMP equivalent in terms of subclasses.
This newly exposes -cpu host to the user, ordered last for -cpu ?.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
We already used to support the external proxy facility of FSL MPICs,
but only implemented it halfway correctly.
This patch adds support for
* dynamic enablement of the EPR facility
* interrupt acknowledgement only when the interrupt is delivered
This way the implementation now is closer to real hardware.
Signed-off-by: Alexander Graf <agraf@suse.de>
On e500mc, the platform doesn't provide a way for the CPU to go idle. To
avoid uselessly burning CPU time, we expose an idle hypercall to the guest
if kvm supports it.
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
[agraf: adjust for current code base, add patch description, fix non-kvm case]
Signed-off-by: Alexander Graf <agraf@suse.de>
Book E does not play games with certain bits of xSRR1 being MSR save
bits and others being error status. xSRR1 is the old MSR, period.
This was causing things like MSR[CE] to be lost, even in the saved
version, as soon as you take an exception.
rfci/rfdi/rfmci are fixed to pass the actual xSRR1 register contents,
rather than the register number.
Put FIXME comments on the hack that is "asrr0/1". The whole point of
separate exception levels is so that you can, for example, take a machine
check or debug interrupt without corrupting critical-level operations.
The right xSRR0/1 set needs to be chosen based on CPU type flags.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>