The 7448 CPU is an evolution of the PowerPC 7447A and the last of the
G4 family. Change its family so that its features are reflected correctly. This
fixes Linux boot.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220117092555.1616512-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Commit c8f49e6b93 ("target/ppc: remove 401/403 CPUs") left a few
things behind.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220117091541.1615807-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220118104150.1899661-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This breaks migration compatibility from (very) old versions of
QEMU. This should not be a problem for the pseries machine for which
migration is only supported on recent QEMUs (> 2.x). There is no
clear status on what is supported or not for the other machines. Let's
move forward and remove the .load_state_old handler.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220118104150.1899661-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We use the endianness of interrupts to determine which endianness to
use for the guest kernel memory dump. For machines that support HILE
(powernv8 and up) we have always been generating big-endian dump
files.
This patch uses the HILE support recently added to
ppc_interrupts_little_endian to fix the endianness of the dumps for
powernv machines.
Here are two dumps created at different moments:
$ file skiboot.dump
skiboot.dump: ELF 64-bit MSB core file, 64-bit PowerPC ...
$ file kernel.dump
kernel.dump: ELF 64-bit LSB core file, 64-bit PowerPC ...
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The next patches will split powerpc_excp into multiple family-specific
handlers. This patch adds a wrapper to make the transition clearer.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function is now suitable for
determining the endianness of interrupts for all CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Some CPUs set ILE via an MSR bit. We can make
ppc_interrupts_little_endian handle that case as well. Now we have a
centralized way of determining the endianness of interrupts.
This change has no functional impact.
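For reference, a rough sketch of what such a centralized helper could look
like (the exact condition and bit names are illustrative, not necessarily
the code in the patch):

bool ppc_interrupts_little_endian(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    /* CPUs with MSR[ILE] take interrupts in the endianness it selects */
    if (env->msr_mask & (1ULL << MSR_ILE)) {
        return !!(env->msr & (1ULL << MSR_ILE));
    }
    /* Book3S server CPUs use LPCR[ILE] instead */
    return !!(env->spr[SPR_LPCR] & LPCR_ILE);
}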
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The ppc_interrupts_little_endian function could be used for interrupts
delivered in Hypervisor mode, so add support for powernv8 and powernv9
to it.
Also drop the comment because it is inaccurate: all CPUs that can run
little endian can have interrupts in little endian. The point is
whether they can take interrupts in an endianness different from
MSR_LE.
This change has no functional impact.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the compile-time definition and make the logging controlled by
the `-d mmu` option on the command line.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220107222601.4101511-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v2.03 introduced the Floating Round to Integer instructions: frin,
friz, frip, and frim. Add them to POWER5+.
The PPC_FLOAT_EXT flag also includes the fre (Floating Reciprocal
Estimate) instruction, which was introduced in ISA v2.0x. The
architecture document says it is optional, and that might be the reason
why it has been kept under the PPC_FLOAT_EXT flag. This means 970 CPUs
cannot use it under QEMU, which doesn't seem to be a problem.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The popcntb instruction was added in ISA v2.02. Add support for it on
POWER5+ processors since they implement ISA v2.03.
PPC970 CPUs implement v2.01 and do not support popcntb.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220105095142.3990430-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rename the defines for quad in their various forms so that their
signedness is now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing
to keep assignments aligned.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
MMCR0 writes will change only the MMCR0 bits that are used to calculate
the HFLAGS_PMCC0, HFLAGS_PMCC1 and HFLAGS_INSN_CNT hflags. No other machine
register will be changed during this operation. This means that
hreg_compute_hflags() is overkill for what we need to do.
pmu_update_summaries() is already updating HFLAGS_INSN_CNT without
calling hreg_compute_hflags(). Let's do the same for the other two MMCR0
hflags.
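Roughly, the MMCR0 store path can then update just those bits, along
these lines (a sketch using the existing hflag bit names, not necessarily
the exact code):

uint32_t hflags = env->hflags;

hflags &= ~((1 << HFLAGS_PMCC0) | (1 << HFLAGS_PMCC1));
if (env->spr[SPR_POWER_MMCR0] & MMCR0_PMCC0) {
    hflags |= 1 << HFLAGS_PMCC0;    /* mirror MMCR0[PMCC0] into the hflags */
}
if (env->spr[SPR_POWER_MMCR0] & MMCR0_PMCC1) {
    hflags |= 1 << HFLAGS_PMCC1;    /* mirror MMCR0[PMCC1] into the hflags */
}
env->hflags = hflags;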
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_cyc_cnt value in pmu_update_cycles
and pmc_update_overflow_timer. This leaves pmc_get_event
and pmc_is_inactive unused, so remove them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Use the cached pmc_ins_cnt value. Unroll the loop over the
different PMC counters. Treat the PMC4 run-latch specially.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103224746.167831-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is the combination of the frozen bit and the counter type, on a
per-counter basis. So far this is only used by HFLAGS_INSN_CNT, but
will be used more later.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[danielhb: fixed PMC4 cyc_cnt shift, insn run latch code,
MMCR0_FC handling, "PMC[1-6]" comment]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220103224746.167831-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We can just access it directly in powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[ clg: Took into account removal of inline ]
Message-Id: <20211229165751.3774248-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that 'vector' is known before calling the interrupt-specific setup
code, we can move all of the scv setup into one place.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211229165751.3774248-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
None of the interrupt setup code touches 'vector', so we can move it
earlier in the function. This will allow us to later move the System
Call Vectored setup that is on the top level into the
POWERPC_EXCP_SYSCALL_VECTORED code block.
This patch also moves the verification for when 'excp' does not have
an address associated with it. We now bail a little earlier when that
is the case. This should not cause any visible effects.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The next patch will start accessing the excp_vectors array earlier in
the function, so add a bounds check as the first thing here.
This converts the empty return on POWERPC_EXCP_NONE to an error. This
exception number never reaches this function, and if it does, it
probably means something else went wrong up the line.
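The added check is essentially a sanity bound on the exception number,
along these lines (sketch):

if (excp <= POWERPC_EXCP_NONE || excp >= POWERPC_EXCP_NB) {
    /* No vector can exist for this number; treat it as a fatal error */
    cpu_abort(cs, "Invalid PowerPC exception %d. Aborting\n", excp);
}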
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There are currently only two interrupts that use alternate SRRs, so
let them write to them directly during the setup code.
No functional change intended.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20211229165751.3774248-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The non-signalling versions of the VSX scalar convert to shorter/longer
precision insns do not silence SNaNs in hardware. To better match
this behavior, use the non-arithmetic conversion of helper_todouble
instead of float32_to_float64. A test is added to prevent future
regressions.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211228120310.1957990-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Slightly rework ppc_cpu_dump_state() to replace the various 'if'
statements with a 'switch'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-10-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PID SPR of the 405 CPU contains the translation ID of the TLB,
which is an 8-bit field. Enforce the mask with a store helper.
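The store helper boils down to masking the value on write, roughly like
this (the helper name follows the usual naming pattern and is
illustrative):

void helper_store_40x_pid(CPUPPCState *env, target_ulong val)
{
    /* Only the low 8 bits of PID hold the TLB translation ID */
    env->spr[SPR_40x_PID] = val & 0xFF;
}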
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-8-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-9-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 405 timers were broken when BookE support was added. The assumption
was made that the register numbers were the same, but they are not:
SPR_BOOKE_TSR (0x150)
SPR_BOOKE_TCR (0x154)
SPR_40x_TSR (0x3D8)
SPR_40x_TCR (0x3DA)
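The fix is essentially to make the 40x timer code read and write the 40x
SPR numbers instead of the BookE ones, for example (illustrative):

/* 40x: use the 40x timer SPRs, not the BookE ones */
target_ulong tcr = env->spr[SPR_40x_TCR];
target_ulong tsr = env->spr[SPR_40x_TSR];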
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Fixes: ddd1055b07 ("PPC: booke timers")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-6-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no need to deactivate MMU logging at compile time. Remove all
use of the defines. Only keep DUMP_PAGE_TABLES for another series, since
page tables could be dumped from the monitor.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211222064025.1541490-4-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-5-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This facilitates reading the logs when the CPU_LOG_INT mask is enabled. We
should do the same for error codes.
Cc: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211222064025.1541490-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220103063441.3424853-3-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The compiler should know better than we do how to inline code if necessary.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220103063441.3424853-2-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
For Radix translation, the EA range is 64 bits. When EA(2:11) are
nonzero, a segment interrupt should occur.
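In the MMU code the check amounts to something like this (EA(2:11) in the
ISA's big-endian bit numbering corresponds to bits 52-61 below; the exact
placement of the check is not shown):

/* EA(2:11) must be zero; otherwise raise a segment interrupt */
if (eaddr & MAKE_64BIT_MASK(52, 10)) {
    /* raise the segment interrupt for this access */
}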
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Message-Id: <20211231073122.3183583-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
An Event-Based Branch (EBB) allows applications to change the NIA when an
event-based exception occurs. Event-based exceptions are enabled by
setting the Branch Event Status and Control Register (BESCR). If the
event-based exception is enabled when the exception occurs, an EBB
happens.
The following operations happen during an EBB:
- Global Enable (GE) bit of BESCR is set to 0;
- bits 0-61 of the Event-Based Branch Return Register (EBBRR) are set
to the effective address of the NIA that would have executed if the EBB
hadn't happened;
- Instruction fetch and execution will continue in the effective address
contained in the Event-Based Branch Handler Register (EBBHR).
The EBB Handler will process the event and then execute the Return From
Event-Based Branch (rfebb) instruction. rfebb sets BESCR_GE and then
redirects execution to the address contained in EBBRR. This process is
described in the PowerISA v3.1, Book II, Chapter 6 [1].
This patch implements the rfebb instruction. Descriptions of all
relevant BESCR bits are also added - this patch is only using BESCR_GE,
but the next patches will use the remaining bits.
[1] https://wiki.raptorcs.com/w/images/f/f5/PowerISA_public.v3.1.pdf
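A minimal sketch of the rfebb helper described above (assuming the BESCR
and EBBRR SPR defines; not necessarily the exact code):

void helper_rfebb(CPUPPCState *env, target_ulong s)
{
    /* Set (or clear, for rfebb 0) BESCR[GE] ... */
    if (s) {
        env->spr[SPR_BESCR] |= BESCR_GE;
    } else {
        env->spr[SPR_BESCR] &= ~BESCR_GE;
    }
    /* ... and return to the address saved in EBBRR */
    env->nip = env->spr[SPR_EBBRR];
}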
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-9-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PM_RUN_INST_CMPL, instructions completed with the run latch set, is
the architected PowerISA v3.1 event defined with PMC4SEL = 0xFA.
Implement it by checking for the CTRL RUN bit before incrementing the
counter. To make this work properly we also need to force a new
translation block each time SPR_CTRL is written. A small tweak in
pmu_increment_insns() is then needed to only increment this event
if the thread has the run latch set.
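The run-latch check in pmu_increment_insns() amounts to something like
this (a sketch; CTRL_RUN stands for the RUN bit of the CTRL register and
'num_insns' for the number of instructions being added, both illustrative
names):

bool run_latch = !!(env->spr[SPR_CTRL] & CTRL_RUN);

if (run_latch) {
    /* PM_RUN_INST_CMPL (PMC4SEL = 0xFA) only counts with the run latch set */
    env->spr[SPR_POWER_PMC4] += num_insns;
}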
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-8-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PMU is already counting cycles by calculating time elapsed in
nanoseconds. Counting instructions is a different matter and requires
another approach.
This patch adds the capability of counting completed instructions (Perf
event PM_INST_CMPL) by counting the number of instructions translated in
each translation block right before exiting it.
A new pmu_count_insns() helper in translation.c was added to do that.
After verifying that the PMU is counting instructions, call
helper_insns_inc(). This new helper from power8-pmu.c will add the
instructions to the relevant counters. It'll also be responsible for
triggering counter negative overflows as it is already being done with
cycles.
To verify whether the PMU is counting instructions or not, a new hflag
named 'HFLAGS_INSN_CNT' is introduced. This flag will match the internal
state of the PMU. We'll be using this flag to avoid calling
helper_insns_inc() when we do not have a valid instruction event being
sampled.
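On the translation side, the hook could look roughly like this (a sketch;
the flag test and the generated helper name just follow the description
above and may differ from the actual code):

static void pmu_count_insns(DisasContext *ctx)
{
    /* Skip the call when no instruction event is being sampled */
    if (!(ctx->base.tb->flags & (1 << HFLAGS_INSN_CNT))) {
        return;
    }
    gen_helper_insns_inc(cpu_env, tcg_constant_i32(ctx->base.num_insns));
}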
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-7-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerISA v3.1 defines that if the proper bits are set (MMCR0_PMC1CE
for PMC1 and MMCR0_PMCjCE for the remaining PMCs), counter negative
conditions are enabled. This means that if the counter value overflows
(i.e. exceeds 0x80000000) a performance monitor alert will occur. This alert
can trigger an event-based exception (to be implemented in the next patches)
if the MMCR0_EBE bit is set.
For now, overflowing the counter when the PMC is counting cycles will
just trigger a performance monitor alert. This is done by starting the
overflow timer to expire at the moment the overflow would occur. The
timer will call fire_PMC_interrupt() (via cpu_ppc_pmu_timer_cb) which will
trigger the PMU alert and, if the conditions are met, an EBB exception.
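Arming the overflow timer then boils down to something like this (a
sketch; 'sprn' is the PMC's SPR number and 'pmc_overflow_timer' an
illustrative name for the per-PMC QEMUTimer):

/* 1 cycle == 1 ns, so fire when the PMC would reach 0x80000000 */
uint64_t remaining = 0x80000000UL - env->spr[sprn];
timer_mod(pmc_overflow_timer,
          qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + remaining);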
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-6-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
MMCR1 determines the events to be sampled by the PMU. Updating the
counters at every MMCR1 write ensures that we're not sampling more
or less events by looking only at MMCR0 and the PMCs.
It is worth noting that both the Book3S PowerPC PMU and this IBM
Power8+ PMU that we're modeling also use MMCRA, MMCR2 and MMCR3 to
control the PMU. These three registers aren't being handled in this
initial implementation, so for now we're controlling all the PMU
aspects using MMCR0, MMCR1 and the PMCs.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Calling pmu_update_cycles() on every PMC read/write operation ensures
that the values being fetched are up to date with the current PMU state.
In theory we could get away with just trapping PMC reads, but we're going
to trap PMC writes to deal with counter overflow logic later on. Let's
put in the required wiring for that and make our lives a bit easier in the
next patches.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds the barebones of the PMU logic by enabling cycle
counting. The overall logic goes as follows:
- MMCR0 reg initial value is set to 0x80000000 (MMCR0_FC set) to avoid
having to spin the PMU right at system init;
- to retrieve the events that are being profiled, pmc_get_event() will
check the current MMCR0 and MMCR1 value and return the appropriate
PMUEventType. For PMCs 1-4, event 0x2 is the implementation dependent
value of PMU_EVENT_INSTRUCTIONS and event 0x1E is the implementation
dependent value of PMU_EVENT_CYCLES. These events are supported by IBM
Power chips since Power8, at least, and the Linux Perf driver makes use
of these events until kernel v5.15. For PMC1, event 0xF0 is the
architected PowerISA event for cycles. Event 0xFE is the architected
PowerISA event for instructions;
- if the counter is frozen, either via the global MMCR0_FC bit or its
individual frozen counter bits, PMU_EVENT_INACTIVE is returned;
- pmu_update_cycles() will go through each counter and update the
values of all PMCs that are counting cycles. This function will be
called every time a MMCR0 update is done to keep counters values
up to date. Upcoming patches will use this function to allow the
counters to be properly updated during read/write of the PMCs
and MMCR1 writes.
Given that the base CPU frequency is fixed at 1 GHz for both the powernv
and pseries clocks, cycle calculation assumes that 1 nanosecond equals 1
CPU cycle. The cycle value is then calculated by adding the elapsed time,
in nanoseconds, since the last cycle update done via pmu_update_cycles().
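The update itself is then just a time delta (a sketch; 'sprn' is the PMC
SPR number and 'base_time' an illustrative name for the timestamp of the
last update):

/* 1 ns == 1 cycle at the fixed 1 GHz frequency */
uint64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
env->spr[sprn] += now - base_time;
base_time = now;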
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch starts an IBM Power8+ compatible PMU implementation by adding
the representation of PMU events that we are going to sample,
PMUEventType. This enum represents a Perf event that is being sampled by
a specific counter 'sprn'. Events that aren't available (i.e. no event
was set in MMCR1) will be of type 'PMU_EVENT_INVALID'. Events that are
inactive due to the state of the frozen counter bits are of type
'PMU_EVENT_INACTIVE'. Other types added in this patch are
PMU_EVENT_CYCLES and PMU_EVENT_INSTRUCTIONS. More types will be added
later on.
Let's also add the required PMU cycle overflow timers. They will be used
to trigger cycle overflows when cycle events are being sampled. This
timer will call cpu_ppc_pmu_timer_cb(), which in turn calls
fire_PMC_interrupt(). Both functions are stubs that will be implemented
later on when EBB support is added.
Two new helper files are created to host this new logic.
cpu_ppc_pmu_init() will init all overflow timers during CPU init time.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211201151734.654994-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This reverts commit 336e91f853.
It breaks the --disable-tcg build:
../target/ppc/excp_helper.c:463:29: error: implicit declaration of
function ‘cpu_ldl_code’ [-Werror=implicit-function-declaration]
We should not have TCG code in powerpc_excp because some kvm-only
routines use it indirectly to dispatch interrupts. See
kvm_handle_debug, spapr_mce_req_event and
spapr_do_system_reset_on_cpu.
We can re-introduce the change once we have split the interrupt
injection code between KVM and TCG.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When Altivec support was added to the e6500 kernel in 2012[1], the
QEMU code was not changed, so we don't register the VPU/VPUA
exceptions for the e6500:
qemu: fatal: Raised an exception without defined vector 73
Note that the error message says 73, instead of 32, which is the IVOR
for VPU. This is because QEMU only knows about the VPU interrupt
for the 7400s. In theory, we should not be raising _that_ VPU
interrupt, but instead another one specific to the e6500.
We unfortunately cannot register e6500-specific VPU/VPUA interrupts
because the SPEU/EFPDI interrupts also use IVOR32/33. These are
present only in the e500v1/2 versions. From the user manual:
e500v1, e500v2: only SPEU/EFPDI/EFPRI
e500mc, e5500: no SPEU/EFPDI/EFPRI/VPU/VPUA
e6500: only VPU/VPUA
So I'm leaving IVOR32/33 as SPEU/EFPDI, but altering the dispatch code
to convert the VPU #73 to a #32 when we're in the e6500. Since the
handling for SPEU and VPU is the same, this is the only change that's
needed. The EFPDI is not implemented and will cause an abort. I don't
think it is worth changing the error message to take VPUA into
consideration, so I'm not changing anything there.
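The dispatch tweak is essentially (sketch):

/* The e6500 has no IVOR for VPU; deliver it through the SPEU/IVOR32 path */
if (excp == POWERPC_EXCP_VPU) {
    excp = POWERPC_EXCP_SPEU;
}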
This bug was discussed in the thread:
https://lists.gnu.org/archive/html/qemu-ppc/2021-06/msg00222.html
1- https://git.kernel.org/torvalds/c/cd66cc2ee52
Reported-by: <mario@locati.it>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211213133542.2608540-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This instruction has VRT and VRB fields instead of T/TX and B/BX.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20211213120958.24443-4-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-3-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PPC instructions xsmaxcdp, xsmincdp, xsmaxjdp, and xsminjdp are using
vector registers when they should be using VSX ones. This happens
because the instructions are using GEN_VSX_HELPER_R3, which adds 32
to the register numbers, effectively making them vector registers.
This patch fixes it by changing these instructions to use
GEN_VSX_HELPER_X3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Victor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20211213120958.24443-2-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
They have been there since 2007 without any board using them; most
were protected by a TODO define. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211202191108.1291515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The exception model id for the 601v has been removed without any mention
of why. I assume it was inadvertent and restore it here.
Fixes: b632a148b6 ("target-ppc: Use QOM method dispatch for MMU fault handling")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 603e uses the same exception code as the 603, so we don't need a
dedicated entry for it.
This is only a removal of redundant code, no functional change.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The Floating-point Unavailable and Decrementer interrupts are being
registered at the same 0x900 address. Floating-point Unavailable should
be at 0x800 instead.
Verified on MPC555, MPC860 and MPC885 user manuals.
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20211208123029.2052625-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>