/*
 * QEMU generic PowerPC hardware System Emulator
 *
 * Copyright (c) 2003-2007 Jocelyn Mayer
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include "qemu/osdep.h"
#include "hw/irq.h"
#include "hw/ppc/ppc.h"
#include "hw/ppc/ppc_e500.h"
#include "qemu/timer.h"
#include "sysemu/cpus.h"
#include "qemu/log.h"
#include "qemu/main-loop.h"
#include "qemu/error-report.h"
#include "sysemu/kvm.h"
#include "sysemu/runstate.h"
#include "kvm_ppc.h"
#include "migration/vmstate.h"
#include "trace.h"

static void cpu_ppc_tb_stop (CPUPPCState *env);
static void cpu_ppc_tb_start (CPUPPCState *env);

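/*
 * Raise or lower one bit in the per-CPU pending interrupt mask and mirror
 * the result on the generic CPU_INTERRUPT_HARD line; the change is also
 * forwarded to KVM whenever the pending mask actually changes, e.g.
 *
 *     ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
 */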
void ppc_set_irq(PowerPCCPU *cpu, int n_IRQ, int level)
{
    CPUState *cs = CPU(cpu);
    CPUPPCState *env = &cpu->env;
    unsigned int old_pending;
    bool locked = false;

    /* We may already have the BQL if coming from the reset path */
    if (!qemu_mutex_iothread_locked()) {
        locked = true;
        qemu_mutex_lock_iothread();
    }

    old_pending = env->pending_interrupts;

    if (level) {
        env->pending_interrupts |= 1 << n_IRQ;
        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
    } else {
        env->pending_interrupts &= ~(1 << n_IRQ);
        if (env->pending_interrupts == 0) {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
        }
    }

    if (old_pending != env->pending_interrupts) {
        kvmppc_set_interrupt(cpu, n_IRQ, level);
    }

    trace_ppc_irq_set_exit(env, n_IRQ, level, env->pending_interrupts,
                           CPU(cpu)->interrupt_request);

    if (locked) {
        qemu_mutex_unlock_iothread();
    }
}

/* PowerPC 6xx / 7xx internal IRQ controller */
static void ppc6xx_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;
    CPUPPCState *env = &cpu->env;
    int cur_level;

    trace_ppc_irq_set(env, pin, level);

    cur_level = (env->irq_input_state >> pin) & 1;
    /* Don't generate spurious events */
    if ((cur_level == 1 && level == 0) || (cur_level == 0 && level != 0)) {
        CPUState *cs = CPU(cpu);

        switch (pin) {
        case PPC6xx_INPUT_TBEN:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("time base", level);
            if (level) {
                cpu_ppc_tb_start(env);
            } else {
                cpu_ppc_tb_stop(env);
            }
            break;
        case PPC6xx_INPUT_INT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("external IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
            break;
        case PPC6xx_INPUT_SMI:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("SMI IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_SMI, level);
            break;
        case PPC6xx_INPUT_MCP:
            /* Negative edge sensitive */
            /* XXX: TODO: actual reaction may depend on HID0 status
             *            603/604/740/750: check HID0[EMCP]
             */
            if (cur_level == 1 && level == 0) {
                trace_ppc_irq_set_state("machine check", 1);
                ppc_set_irq(cpu, PPC_INTERRUPT_MCK, 1);
            }
            break;
        case PPC6xx_INPUT_CKSTP_IN:
            /* Level sensitive - active low */
            /* XXX: TODO: relay the signal to CKSTP_OUT pin */
            /* XXX: Note that the only way to restart the CPU is to reset it */
            if (level) {
                trace_ppc_irq_cpu("stop");
                cs->halted = 1;
            }
            break;
        case PPC6xx_INPUT_HRESET:
            /* Level sensitive - active low */
            if (level) {
                trace_ppc_irq_reset("CPU");
                cpu_interrupt(cs, CPU_INTERRUPT_RESET);
            }
            break;
        case PPC6xx_INPUT_SRESET:
            trace_ppc_irq_set_state("RESET IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_RESET, level);
            break;
        default:
            g_assert_not_reached();
        }
        if (level)
            env->irq_input_state |= 1 << pin;
        else
            env->irq_input_state &= ~(1 << pin);
    }
}

void ppc6xx_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&ppc6xx_set_irq, cpu,
                                                  PPC6xx_INPUT_NB);
}

#if defined(TARGET_PPC64)
/* PowerPC 970 internal IRQ controller */
static void ppc970_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;
    CPUPPCState *env = &cpu->env;
    int cur_level;

    trace_ppc_irq_set(env, pin, level);

    cur_level = (env->irq_input_state >> pin) & 1;
    /* Don't generate spurious events */
    if ((cur_level == 1 && level == 0) || (cur_level == 0 && level != 0)) {
        CPUState *cs = CPU(cpu);

        switch (pin) {
        case PPC970_INPUT_INT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("external IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
            break;
        case PPC970_INPUT_THINT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("SMI IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_THERM, level);
            break;
        case PPC970_INPUT_MCP:
            /* Negative edge sensitive */
            /* XXX: TODO: actual reaction may depend on HID0 status
             *            603/604/740/750: check HID0[EMCP]
             */
            if (cur_level == 1 && level == 0) {
                trace_ppc_irq_set_state("machine check", 1);
                ppc_set_irq(cpu, PPC_INTERRUPT_MCK, 1);
            }
            break;
        case PPC970_INPUT_CKSTP:
            /* Level sensitive - active low */
            /* XXX: TODO: relay the signal to CKSTP_OUT pin */
            if (level) {
                trace_ppc_irq_cpu("stop");
                cs->halted = 1;
            } else {
                trace_ppc_irq_cpu("restart");
                cs->halted = 0;
                qemu_cpu_kick(cs);
            }
            break;
        case PPC970_INPUT_HRESET:
            /* Level sensitive - active low */
            if (level) {
                cpu_interrupt(cs, CPU_INTERRUPT_RESET);
            }
            break;
        case PPC970_INPUT_SRESET:
            trace_ppc_irq_set_state("RESET IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_RESET, level);
            break;
        case PPC970_INPUT_TBEN:
            trace_ppc_irq_set_state("TBEN IRQ", level);
            /* XXX: TODO */
            break;
        default:
            g_assert_not_reached();
        }
        if (level)
            env->irq_input_state |= 1 << pin;
        else
            env->irq_input_state &= ~(1 << pin);
    }
}

void ppc970_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&ppc970_set_irq, cpu,
                                                  PPC970_INPUT_NB);
}

/* POWER7 internal IRQ controller */
static void power7_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;

    trace_ppc_irq_set(&cpu->env, pin, level);

    switch (pin) {
    case POWER7_INPUT_INT:
        /* Level sensitive - active high */
        trace_ppc_irq_set_state("external IRQ", level);
        ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
        break;
    default:
        g_assert_not_reached();
    }
}

void ppcPOWER7_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&power7_set_irq, cpu,
                                                  POWER7_INPUT_NB);
}

/* POWER9 internal IRQ controller */
static void power9_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;

    trace_ppc_irq_set(&cpu->env, pin, level);

    switch (pin) {
    case POWER9_INPUT_INT:
        /* Level sensitive - active high */
        trace_ppc_irq_set_state("external IRQ", level);
        ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
        break;
    case POWER9_INPUT_HINT:
        /* Level sensitive - active high */
        trace_ppc_irq_set_state("HV external IRQ", level);
        ppc_set_irq(cpu, PPC_INTERRUPT_HVIRT, level);
        break;
    default:
        g_assert_not_reached();
        return;
    }
}

void ppcPOWER9_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&power9_set_irq, cpu,
                                                  POWER9_INPUT_NB);
}
#endif /* defined(TARGET_PPC64) */

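/*
 * PowerPC 40x reset helpers: core and chip resets raise CPU_INTERRUPT_RESET
 * and record the reset cause in DBSR (0x100 for a core reset, 0x200 for a
 * chip reset), while a system reset goes through the generic QEMU reset
 * machinery.
 */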
void ppc40x_core_reset(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;
    target_ulong dbsr;

    qemu_log_mask(CPU_LOG_RESET, "Reset PowerPC core\n");
    cpu_interrupt(CPU(cpu), CPU_INTERRUPT_RESET);
    dbsr = env->spr[SPR_40x_DBSR];
    dbsr &= ~0x00000300;
    dbsr |= 0x00000100;
    env->spr[SPR_40x_DBSR] = dbsr;
}

void ppc40x_chip_reset(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;
    target_ulong dbsr;

    qemu_log_mask(CPU_LOG_RESET, "Reset PowerPC chip\n");
    cpu_interrupt(CPU(cpu), CPU_INTERRUPT_RESET);
    /* XXX: TODO reset all internal peripherals */
    dbsr = env->spr[SPR_40x_DBSR];
    dbsr &= ~0x00000300;
    dbsr |= 0x00000200;
    env->spr[SPR_40x_DBSR] = dbsr;
}

void ppc40x_system_reset(PowerPCCPU *cpu)
{
    qemu_log_mask(CPU_LOG_RESET, "Reset PowerPC system\n");
    qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
}

void store_40x_dbcr0(CPUPPCState *env, uint32_t val)
{
    PowerPCCPU *cpu = env_archcpu(env);

    qemu_mutex_lock_iothread();

    switch ((val >> 28) & 0x3) {
    case 0x0:
        /* No action */
        break;
    case 0x1:
        /* Core reset */
        ppc40x_core_reset(cpu);
        break;
    case 0x2:
        /* Chip reset */
        ppc40x_chip_reset(cpu);
        break;
    case 0x3:
        /* System reset */
        ppc40x_system_reset(cpu);
        break;
    }

    qemu_mutex_unlock_iothread();
}

/* PowerPC 40x internal IRQ controller */
static void ppc40x_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;
    CPUPPCState *env = &cpu->env;
    int cur_level;

    trace_ppc_irq_set(env, pin, level);

    cur_level = (env->irq_input_state >> pin) & 1;
    /* Don't generate spurious events */
    if ((cur_level == 1 && level == 0) || (cur_level == 0 && level != 0)) {
        CPUState *cs = CPU(cpu);

        switch (pin) {
        case PPC40x_INPUT_RESET_SYS:
            if (level) {
                trace_ppc_irq_reset("system");
                ppc40x_system_reset(cpu);
            }
            break;
        case PPC40x_INPUT_RESET_CHIP:
            if (level) {
                trace_ppc_irq_reset("chip");
                ppc40x_chip_reset(cpu);
            }
            break;
        case PPC40x_INPUT_RESET_CORE:
            /* XXX: TODO: update DBSR[MRR] */
            if (level) {
                trace_ppc_irq_reset("core");
                ppc40x_core_reset(cpu);
            }
            break;
        case PPC40x_INPUT_CINT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("critical IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_CEXT, level);
            break;
        case PPC40x_INPUT_INT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("external IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
            break;
        case PPC40x_INPUT_HALT:
            /* Level sensitive - active low */
            if (level) {
                trace_ppc_irq_cpu("stop");
                cs->halted = 1;
            } else {
                trace_ppc_irq_cpu("restart");
                cs->halted = 0;
                qemu_cpu_kick(cs);
            }
            break;
        case PPC40x_INPUT_DEBUG:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("debug pin", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_DEBUG, level);
            break;
        default:
            g_assert_not_reached();
        }
        if (level)
            env->irq_input_state |= 1 << pin;
        else
            env->irq_input_state &= ~(1 << pin);
    }
}

void ppc40x_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&ppc40x_set_irq,
                                                  cpu, PPC40x_INPUT_NB);
}

/* PowerPC E500 internal IRQ controller */
static void ppce500_set_irq(void *opaque, int pin, int level)
{
    PowerPCCPU *cpu = opaque;
    CPUPPCState *env = &cpu->env;
    int cur_level;

    trace_ppc_irq_set(env, pin, level);

    cur_level = (env->irq_input_state >> pin) & 1;
    /* Don't generate spurious events */
    if ((cur_level == 1 && level == 0) || (cur_level == 0 && level != 0)) {
        switch (pin) {
        case PPCE500_INPUT_MCK:
            if (level) {
                trace_ppc_irq_reset("system");
                qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
            }
            break;
        case PPCE500_INPUT_RESET_CORE:
            if (level) {
                trace_ppc_irq_reset("core");
                ppc_set_irq(cpu, PPC_INTERRUPT_MCK, level);
            }
            break;
        case PPCE500_INPUT_CINT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("critical IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_CEXT, level);
            break;
        case PPCE500_INPUT_INT:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("core IRQ", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_EXT, level);
            break;
        case PPCE500_INPUT_DEBUG:
            /* Level sensitive - active high */
            trace_ppc_irq_set_state("debug pin", level);
            ppc_set_irq(cpu, PPC_INTERRUPT_DEBUG, level);
            break;
        default:
            g_assert_not_reached();
        }
        if (level)
            env->irq_input_state |= 1 << pin;
        else
            env->irq_input_state &= ~(1 << pin);
    }
}

void ppce500_irq_init(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_inputs = (void **)qemu_allocate_irqs(&ppce500_set_irq,
                                                  cpu, PPCE500_INPUT_NB);
}

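/*
 * Propagate the machine-level MPIC proxy (EPR) setting to every vCPU and,
 * when KVM is in use, to the in-kernel interrupt controller as well.
 */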
/* Enable or Disable the E500 EPR capability */
void ppce500_set_mpic_proxy(bool enabled)
{
    CPUState *cs;

    CPU_FOREACH(cs) {
        PowerPCCPU *cpu = POWERPC_CPU(cs);

        cpu->env.mpic_proxy = enabled;
        if (kvm_enabled()) {
            kvmppc_set_mpic_proxy(cpu, enabled);
        }
    }
}

/*****************************************************************************/
/* PowerPC time base and decrementer emulation */

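/*
 * The time base is emulated as an offset from QEMU_CLOCK_VIRTUAL:
 * TB = vmclk * tb_freq / NANOSECONDS_PER_SECOND + tb_offset, so storing a
 * TB value only needs to recompute the offset.
 */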
uint64_t cpu_ppc_get_tb(ppc_tb_t *tb_env, uint64_t vmclk, int64_t tb_offset)
{
    /* TB time in tb periods */
    return muldiv64(vmclk, tb_env->tb_freq, NANOSECONDS_PER_SECOND) + tb_offset;
}

uint64_t cpu_ppc_load_tbl (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    if (kvm_enabled()) {
        return env->spr[SPR_TBL];
    }

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
    trace_ppc_tb_load(tb);

    return tb;
}

static inline uint32_t _cpu_ppc_load_tbu(CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
    trace_ppc_tb_load(tb);

    return tb >> 32;
}

uint32_t cpu_ppc_load_tbu (CPUPPCState *env)
{
    if (kvm_enabled()) {
        return env->spr[SPR_TBU];
    }

    return _cpu_ppc_load_tbu(env);
}

static inline void cpu_ppc_store_tb(ppc_tb_t *tb_env, uint64_t vmclk,
                                    int64_t *tb_offsetp, uint64_t value)
{
    *tb_offsetp = value -
        muldiv64(vmclk, tb_env->tb_freq, NANOSECONDS_PER_SECOND);

    trace_ppc_tb_store(value, *tb_offsetp);
}

void cpu_ppc_store_tbl (CPUPPCState *env, uint32_t value)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
    tb &= 0xFFFFFFFF00000000ULL;
    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->tb_offset, tb | (uint64_t)value);
}

static inline void _cpu_ppc_store_tbu(CPUPPCState *env, uint32_t value)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
    tb &= 0x00000000FFFFFFFFULL;
    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->tb_offset, ((uint64_t)value << 32) | tb);
}

void cpu_ppc_store_tbu (CPUPPCState *env, uint32_t value)
{
    _cpu_ppc_store_tbu(env, value);
}

uint64_t cpu_ppc_load_atbl (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
    trace_ppc_tb_load(tb);

    return tb;
}

uint32_t cpu_ppc_load_atbu (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
    trace_ppc_tb_load(tb);

    return tb >> 32;
}

void cpu_ppc_store_atbl (CPUPPCState *env, uint32_t value)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
    tb &= 0xFFFFFFFF00000000ULL;
    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->atb_offset, tb | (uint64_t)value);
}

void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
    tb &= 0x00000000FFFFFFFFULL;
    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->atb_offset, ((uint64_t)value << 32) | tb);
}

uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;

    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                          tb_env->vtb_offset);
}

void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
{
    ppc_tb_t *tb_env = env->tb_env;

    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->vtb_offset, value);
}

void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb;

    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                        tb_env->tb_offset);
    tb &= 0xFFFFFFUL;
    tb |= (value & ~0xFFFFFFUL);
    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->tb_offset, tb);
}

static void cpu_ppc_tb_stop (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb, atb, vmclk;

    /* If the time base is already frozen, do nothing */
    if (tb_env->tb_freq != 0) {
        vmclk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
        /* Get the time base */
        tb = cpu_ppc_get_tb(tb_env, vmclk, tb_env->tb_offset);
        /* Get the alternate time base */
        atb = cpu_ppc_get_tb(tb_env, vmclk, tb_env->atb_offset);
        /* Store the time base value (ie compute the current offset) */
        cpu_ppc_store_tb(tb_env, vmclk, &tb_env->tb_offset, tb);
        /* Store the alternate time base value (compute the current offset) */
        cpu_ppc_store_tb(tb_env, vmclk, &tb_env->atb_offset, atb);
        /* Set the time base frequency to zero */
        tb_env->tb_freq = 0;
        /* Now, the time bases are frozen to tb_offset / atb_offset value */
    }
}

static void cpu_ppc_tb_start (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t tb, atb, vmclk;

    /* If the time base is not frozen, do nothing */
    if (tb_env->tb_freq == 0) {
        vmclk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
        /* Get the time base from tb_offset */
        tb = tb_env->tb_offset;
        /* Get the alternate time base from atb_offset */
        atb = tb_env->atb_offset;
        /* Restore the tb frequency from the decrementer frequency */
        tb_env->tb_freq = tb_env->decr_freq;
        /* Store the time base value */
        cpu_ppc_store_tb(tb_env, vmclk, &tb_env->tb_offset, tb);
        /* Store the alternate time base value */
        cpu_ppc_store_tb(tb_env, vmclk, &tb_env->atb_offset, atb);
    }
}

bool ppc_decr_clear_on_delivery(CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    int flags = PPC_DECR_UNDERFLOW_TRIGGERED | PPC_DECR_UNDERFLOW_LEVEL;
    return ((tb_env->flags & flags) == PPC_DECR_UNDERFLOW_TRIGGERED);
}

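/*
 * Derive the current decrementer value from the time left until the
 * programmed expiry: positive while the timer is still pending, clamped to
 * zero on BookE, and negative otherwise once it has already underflowed.
 */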
static inline int64_t _cpu_ppc_load_decr(CPUPPCState *env, uint64_t next)
{
    ppc_tb_t *tb_env = env->tb_env;
    int64_t decr, diff;

    diff = next - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    if (diff >= 0) {
        decr = muldiv64(diff, tb_env->decr_freq, NANOSECONDS_PER_SECOND);
    } else if (tb_env->flags & PPC_TIMER_BOOKE) {
        decr = 0;
    } else {
        decr = -muldiv64(-diff, tb_env->decr_freq, NANOSECONDS_PER_SECOND);
    }
    trace_ppc_decr_load(decr);

    return decr;
}

target_ulong cpu_ppc_load_decr(CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t decr;

    if (kvm_enabled()) {
        return env->spr[SPR_DECR];
    }

    decr = _cpu_ppc_load_decr(env, tb_env->decr_next);

    /*
     * If the large decrementer is enabled then the decrementer is sign
     * extended to 64 bits, otherwise it is a 32 bit value.
     */
    if (env->spr[SPR_LPCR] & LPCR_LD) {
        return decr;
    }
    return (uint32_t) decr;
}

target_ulong cpu_ppc_load_hdecr(CPUPPCState *env)
{
    PowerPCCPU *cpu = env_archcpu(env);
    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t hdecr;

    hdecr = _cpu_ppc_load_decr(env, tb_env->hdecr_next);

    /*
     * If we have a large decrementer (POWER9 or later) then hdecr is sign
     * extended to 64 bits, otherwise it is 32 bits.
     */
    if (pcc->lrg_decr_bits > 32) {
        return hdecr;
    }
    return (uint32_t) hdecr;
}

uint64_t cpu_ppc_load_purr (CPUPPCState *env)
{
    ppc_tb_t *tb_env = env->tb_env;

    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                          tb_env->purr_offset);
}

/* When decrementer expires,
 * all we need to do is generate or queue a CPU exception
 */
static inline void cpu_ppc_decr_excp(PowerPCCPU *cpu)
{
    /* Raise it */
    trace_ppc_decr_excp("raise");
    ppc_set_irq(cpu, PPC_INTERRUPT_DECR, 1);
}

static inline void cpu_ppc_decr_lower(PowerPCCPU *cpu)
{
    ppc_set_irq(cpu, PPC_INTERRUPT_DECR, 0);
}

static inline void cpu_ppc_hdecr_excp(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    /* Raise it */
    trace_ppc_decr_excp("raise HV");

    /* The architecture specifies that we don't deliver HDEC
     * interrupts in a PM state. Not only do they not cause a
     * wakeup, they also get effectively discarded.
     */
    if (!env->resume_as_sreset) {
        ppc_set_irq(cpu, PPC_INTERRUPT_HDECR, 1);
    }
}

static inline void cpu_ppc_hdecr_lower(PowerPCCPU *cpu)
{
    ppc_set_irq(cpu, PPC_INTERRUPT_HDECR, 0);
}

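/*
 * Common helper for DECR and HDECR stores: decide whether the write must
 * raise or lower the interrupt immediately, then (re)arm the QEMU timer for
 * the next underflow.
 */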
static void __cpu_ppc_store_decr(PowerPCCPU *cpu, uint64_t *nextp,
                                 QEMUTimer *timer,
                                 void (*raise_excp)(void *),
                                 void (*lower_excp)(PowerPCCPU *),
                                 target_ulong decr, target_ulong value,
                                 int nr_bits)
{
    CPUPPCState *env = &cpu->env;
    ppc_tb_t *tb_env = env->tb_env;
    uint64_t now, next;
    int64_t signed_value;
    int64_t signed_decr;

    /* Truncate value to decr_width and sign extend for simplicity */
    signed_value = sextract64(value, 0, nr_bits);
    signed_decr = sextract64(decr, 0, nr_bits);

    trace_ppc_decr_store(nr_bits, decr, value);

    if (kvm_enabled()) {
        /* KVM handles decrementer exceptions, we don't need our own timer */
        return;
    }

    /*
     * Going from 2 -> 1, 1 -> 0 or 0 -> -1 is the event to generate a DEC
     * interrupt.
     *
     * If we get a really small DEC value, we can assume that by the time we
     * handled it we should inject an interrupt already.
     *
     * On MSB level based DEC implementations the MSB always means the interrupt
     * is pending, so raise it on those.
     *
     * On MSB edge based DEC implementations the MSB going from 0 -> 1 triggers
     * an edge interrupt, so raise it here too.
     */
    if ((value < 3) ||
        ((tb_env->flags & PPC_DECR_UNDERFLOW_LEVEL) && signed_value < 0) ||
        ((tb_env->flags & PPC_DECR_UNDERFLOW_TRIGGERED) && signed_value < 0
          && signed_decr >= 0)) {
        (*raise_excp)(cpu);
        return;
    }

    /* On MSB level based systems a 0 for the MSB stops interrupt delivery */
    if (signed_value >= 0 && (tb_env->flags & PPC_DECR_UNDERFLOW_LEVEL)) {
        (*lower_excp)(cpu);
    }

    /* Calculate the next timer event */
    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    next = now + muldiv64(value, NANOSECONDS_PER_SECOND, tb_env->decr_freq);
    *nextp = next;

    /* Adjust timer */
    timer_mod(timer, next);
}

static inline void _cpu_ppc_store_decr(PowerPCCPU *cpu, target_ulong decr,
                                       target_ulong value, int nr_bits)
{
    ppc_tb_t *tb_env = cpu->env.tb_env;

    __cpu_ppc_store_decr(cpu, &tb_env->decr_next, tb_env->decr_timer,
                         tb_env->decr_timer->cb, &cpu_ppc_decr_lower, decr,
                         value, nr_bits);
}

void cpu_ppc_store_decr(CPUPPCState *env, target_ulong value)
{
    PowerPCCPU *cpu = env_archcpu(env);
    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
    int nr_bits = 32;

    if (env->spr[SPR_LPCR] & LPCR_LD) {
        nr_bits = pcc->lrg_decr_bits;
    }

    _cpu_ppc_store_decr(cpu, cpu_ppc_load_decr(env), value, nr_bits);
}

static void cpu_ppc_decr_cb(void *opaque)
{
    PowerPCCPU *cpu = opaque;

    cpu_ppc_decr_excp(cpu);
}

static inline void _cpu_ppc_store_hdecr(PowerPCCPU *cpu, target_ulong hdecr,
                                        target_ulong value, int nr_bits)
{
    ppc_tb_t *tb_env = cpu->env.tb_env;

    if (tb_env->hdecr_timer != NULL) {
        __cpu_ppc_store_decr(cpu, &tb_env->hdecr_next, tb_env->hdecr_timer,
                             tb_env->hdecr_timer->cb, &cpu_ppc_hdecr_lower,
                             hdecr, value, nr_bits);
    }
}

void cpu_ppc_store_hdecr(CPUPPCState *env, target_ulong value)
{
    PowerPCCPU *cpu = env_archcpu(env);
    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);

    _cpu_ppc_store_hdecr(cpu, cpu_ppc_load_hdecr(env), value,
                         pcc->lrg_decr_bits);
}

static void cpu_ppc_hdecr_cb(void *opaque)
{
    PowerPCCPU *cpu = opaque;

    cpu_ppc_hdecr_excp(cpu);
}

void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value)
{
    ppc_tb_t *tb_env = env->tb_env;

    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                     &tb_env->purr_offset, value);
}

static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
{
    CPUPPCState *env = opaque;
    PowerPCCPU *cpu = env_archcpu(env);
    ppc_tb_t *tb_env = env->tb_env;

    tb_env->tb_freq = freq;
    tb_env->decr_freq = freq;
    /* There is a bug in Linux 2.4 kernels:
     * if a decrementer exception is pending when it enables msr_ee at startup,
     * it's not ready to handle it...
     */
    _cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
    _cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
    cpu_ppc_store_purr(env, 0x0000000000000000ULL);
}

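/*
 * Migration support: the guest time base is saved as host ticks plus the
 * current tb_offset, and restored on the destination by computing a new
 * offset against the destination host's tick counter.
 */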
static void timebase_save(PPCTimebase *tb)
{
    uint64_t ticks = cpu_get_host_ticks();
    PowerPCCPU *first_ppc_cpu = POWERPC_CPU(first_cpu);

    if (!first_ppc_cpu->env.tb_env) {
        error_report("No timebase object");
        return;
    }

    /* not used anymore, we keep it for compatibility */
    tb->time_of_the_day_ns = qemu_clock_get_ns(QEMU_CLOCK_HOST);
    /*
     * tb_offset is only expected to be changed by QEMU so
     * there is no need to update it from KVM here
     */
    tb->guest_timebase = ticks + first_ppc_cpu->env.tb_env->tb_offset;

    tb->runstate_paused =
        runstate_check(RUN_STATE_PAUSED) || runstate_check(RUN_STATE_SAVE_VM);
}

static void timebase_load(PPCTimebase *tb)
{
    CPUState *cpu;
    PowerPCCPU *first_ppc_cpu = POWERPC_CPU(first_cpu);
    int64_t tb_off_adj, tb_off;
    unsigned long freq;

    if (!first_ppc_cpu->env.tb_env) {
        error_report("No timebase object");
        return;
    }

    freq = first_ppc_cpu->env.tb_env->tb_freq;

    tb_off_adj = tb->guest_timebase - cpu_get_host_ticks();

    tb_off = first_ppc_cpu->env.tb_env->tb_offset;
    trace_ppc_tb_adjust(tb_off, tb_off_adj, tb_off_adj - tb_off,
                        (tb_off_adj - tb_off) / freq);

    /* Set new offset to all CPUs */
    CPU_FOREACH(cpu) {
        PowerPCCPU *pcpu = POWERPC_CPU(cpu);
        pcpu->env.tb_env->tb_offset = tb_off_adj;
        kvmppc_set_reg_tb_offset(pcpu, pcpu->env.tb_env->tb_offset);
    }
}

void cpu_ppc_clock_vm_state_change(void *opaque, bool running,
                                   RunState state)
{
    PPCTimebase *tb = opaque;

    if (running) {
        timebase_load(tb);
    } else {
        timebase_save(tb);
    }
}

/*
 * When migrating a running guest, read the clock just
 * before migration, so that the guest clock counts
 * during the events between:
 *
 *  * vm_stop()
 *  *
 *  * pre_save()
 *
 *  This reduces clock difference on migration from 5s
 *  to 0.1s (when max_downtime == 5s), because sending the
 *  final pages of memory (which happens between vm_stop()
 *  and pre_save()) takes max_downtime.
 */
static int timebase_pre_save(void *opaque)
{
    PPCTimebase *tb = opaque;

    /* guest_timebase won't be overridden in case of paused guest or savevm */
    if (!tb->runstate_paused) {
        timebase_save(tb);
    }

    return 0;
}

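/*
 * Only guest_timebase and time_of_the_day_ns are transferred; the actual
 * tb_offset is recomputed on the destination in timebase_load().
 */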
const VMStateDescription vmstate_ppc_timebase = {
    .name = "timebase",
    .version_id = 1,
    .minimum_version_id = 1,
    .minimum_version_id_old = 1,
    .pre_save = timebase_pre_save,
    .fields = (VMStateField []) {
        VMSTATE_UINT64(guest_timebase, PPCTimebase),
        VMSTATE_INT64(time_of_the_day_ns, PPCTimebase),
        VMSTATE_END_OF_LIST()
    },
};

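/*
 * Allocate the per-CPU timebase state, create the DECR/HDECR timers and
 * program the initial frequency; the returned callback lets the board change
 * the frequency later.  A minimal usage sketch (the 400 MHz value is only an
 * example, boards pass their own clock):
 *
 *     cpu_ppc_tb_init(&cpu->env, 400 * 1000 * 1000);
 */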
2004-05-21 14:59:32 +02:00
|
|
|
/* Set up (once) timebase frequency (in Hz) */
|
2012-03-14 01:38:23 +01:00
|
|
|
clk_setup_cb cpu_ppc_tb_init (CPUPPCState *env, uint32_t freq)
|
2004-05-21 14:59:32 +02:00
|
|
|
{
|
2019-03-23 03:07:57 +01:00
|
|
|
PowerPCCPU *cpu = env_archcpu(env);
|
2009-10-01 23:12:16 +02:00
|
|
|
ppc_tb_t *tb_env;
|
2004-05-21 14:59:32 +02:00
|
|
|
|
2011-08-21 05:09:37 +02:00
|
|
|
tb_env = g_malloc0(sizeof(ppc_tb_t));
|
2004-05-21 14:59:32 +02:00
|
|
|
env->tb_env = tb_env;
|
2011-09-13 06:00:32 +02:00
|
|
|
tb_env->flags = PPC_DECR_UNDERFLOW_TRIGGERED;
|
2019-03-22 19:03:51 +01:00
|
|
|
if (is_book3s_arch2x(env)) {
|
2014-04-06 01:32:06 +02:00
|
|
|
/* All Book3S 64bit CPUs implement level based DEC logic */
|
|
|
|
tb_env->flags |= PPC_DECR_UNDERFLOW_LEVEL;
|
|
|
|
}
|
2007-04-16 22:09:45 +02:00
|
|
|
/* Create new timer */
|
2013-08-21 17:03:08 +02:00
|
|
|
tb_env->decr_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_ppc_decr_cb, cpu);
|
2016-06-27 08:55:19 +02:00
|
|
|
if (env->has_hv_mode) {
|
2013-08-21 17:03:08 +02:00
|
|
|
tb_env->hdecr_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_ppc_hdecr_cb,
|
2012-12-01 04:26:55 +01:00
|
|
|
cpu);
|
2007-11-17 02:37:44 +01:00
|
|
|
} else {
|
|
|
|
tb_env->hdecr_timer = NULL;
|
|
|
|
}
|
2007-04-16 22:09:45 +02:00
|
|
|
cpu_ppc_set_tb_clk(env, freq);
|
2004-05-21 14:59:32 +02:00
|
|
|
|
2007-04-16 22:09:45 +02:00
|
|
|
return &cpu_ppc_set_tb_clk;
|
2004-05-21 14:59:32 +02:00
|
|
|
}
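
/*
 * Board code calls cpu_ppc_tb_init() once per CPU and may keep the returned
 * callback to retune the timebase frequency later; calling it has the same
 * effect as cpu_ppc_set_tb_clk().  Rough sketch (frequency value purely
 * illustrative):
 *
 *     clk_setup_cb cb = cpu_ppc_tb_init(env, 16666666);
 *     (*cb)(env, new_freq);
 */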

/* Specific helpers for POWER & PowerPC 601 RTC */
void cpu_ppc601_store_rtcu (CPUPPCState *env, uint32_t value)
{
    _cpu_ppc_store_tbu(env, value);
}

uint32_t cpu_ppc601_load_rtcu (CPUPPCState *env)
{
    return _cpu_ppc_load_tbu(env);
}

void cpu_ppc601_store_rtcl (CPUPPCState *env, uint32_t value)
{
    cpu_ppc_store_tbl(env, value & 0x3FFFFF80);
}

uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env)
{
    return cpu_ppc_load_tbl(env) & 0x3FFFFF80;
}
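
/*
 * The 0x3FFFFF80 mask above matches the 601 RTCL layout: the register holds
 * nanoseconds within the current second (always below 2^30) and its seven
 * low-order bits are not implemented.
 */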

/*****************************************************************************/
/* PowerPC 40x timers */

/* PIT (programmable interval timer), FIT (fixed interval timer) & WDT (watchdog timer) */
typedef struct ppc40x_timer_t ppc40x_timer_t;
struct ppc40x_timer_t {
    uint64_t pit_reload;  /* PIT auto-reload value        */
    uint64_t fit_next;    /* Tick for next FIT interrupt  */
    QEMUTimer *fit_timer;
    uint64_t wdt_next;    /* Tick for next WDT interrupt  */
    QEMUTimer *wdt_timer;

    /* The 405 has a PIT, the 440 has a DECR. */
    unsigned int decr_excp;
};
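
/*
 * Note on the TCR/TSR arithmetic below: the shifts use little-endian bit
 * numbers (bit 0 = LSB), while the 40x documentation numbers bits from the
 * MSB; e.g. "1 << 26" in the TSR is the FIT interrupt status flag here.
 */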

/* Fixed interval timer */
static void cpu_4xx_fit_cb (void *opaque)
{
    PowerPCCPU *cpu;
    CPUPPCState *env;
    ppc_tb_t *tb_env;
    ppc40x_timer_t *ppc40x_timer;
    uint64_t now, next;

    env = opaque;
    cpu = env_archcpu(env);
    tb_env = env->tb_env;
    ppc40x_timer = tb_env->opaque;
    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    switch ((env->spr[SPR_40x_TCR] >> 24) & 0x3) {
    case 0:
        next = 1 << 9;
        break;
    case 1:
        next = 1 << 13;
        break;
    case 2:
        next = 1 << 17;
        break;
    case 3:
        next = 1 << 21;
        break;
    default:
        /* Cannot occur, but makes gcc happy */
        return;
    }
    next = now + muldiv64(next, NANOSECONDS_PER_SECOND, tb_env->tb_freq);
    if (next == now) {
        next++;
    }
    timer_mod(ppc40x_timer->fit_timer, next);
    env->spr[SPR_40x_TSR] |= 1 << 26;
    if ((env->spr[SPR_40x_TCR] >> 23) & 0x1) {
        ppc_set_irq(cpu, PPC_INTERRUPT_FIT, 1);
    }
    trace_ppc4xx_fit((int)((env->spr[SPR_40x_TCR] >> 23) & 0x1),
                     env->spr[SPR_40x_TCR], env->spr[SPR_40x_TSR]);
}
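
/*
 * The 40x PIT reuses the generic decrementer state in ppc_tb_t
 * (decr_timer, decr_next, decr_freq) instead of carrying its own QEMUTimer;
 * see ppc_40x_timers_init() below.
 */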

/* Programmable interval timer */
static void start_stop_pit (CPUPPCState *env, ppc_tb_t *tb_env, int is_excp)
{
    ppc40x_timer_t *ppc40x_timer;
    uint64_t now, next;

    ppc40x_timer = tb_env->opaque;
    if (ppc40x_timer->pit_reload <= 1 ||
        !((env->spr[SPR_40x_TCR] >> 26) & 0x1) ||
        (is_excp && !((env->spr[SPR_40x_TCR] >> 22) & 0x1))) {
        /* Stop PIT */
        trace_ppc4xx_pit_stop();
        timer_del(tb_env->decr_timer);
    } else {
        trace_ppc4xx_pit_start(ppc40x_timer->pit_reload);
        now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
        next = now + muldiv64(ppc40x_timer->pit_reload,
                              NANOSECONDS_PER_SECOND, tb_env->decr_freq);
        if (is_excp) {
            next += tb_env->decr_next - now;
        }
        if (next == now) {
            next++;
        }
        timer_mod(tb_env->decr_timer, next);
        tb_env->decr_next = next;
    }
}

static void cpu_4xx_pit_cb (void *opaque)
{
    PowerPCCPU *cpu;
    CPUPPCState *env;
    ppc_tb_t *tb_env;
    ppc40x_timer_t *ppc40x_timer;

    env = opaque;
    cpu = env_archcpu(env);
    tb_env = env->tb_env;
    ppc40x_timer = tb_env->opaque;
    env->spr[SPR_40x_TSR] |= 1 << 27;
    if ((env->spr[SPR_40x_TCR] >> 26) & 0x1) {
        ppc_set_irq(cpu, ppc40x_timer->decr_excp, 1);
    }
    start_stop_pit(env, tb_env, 1);
    trace_ppc4xx_pit((int)((env->spr[SPR_40x_TCR] >> 22) & 0x1),
                     (int)((env->spr[SPR_40x_TCR] >> 26) & 0x1),
                     env->spr[SPR_40x_TCR], env->spr[SPR_40x_TSR],
                     ppc40x_timer->pit_reload);
}

/* Watchdog timer */
static void cpu_4xx_wdt_cb (void *opaque)
{
    PowerPCCPU *cpu;
    CPUPPCState *env;
    ppc_tb_t *tb_env;
    ppc40x_timer_t *ppc40x_timer;
    uint64_t now, next;

    env = opaque;
    cpu = env_archcpu(env);
    tb_env = env->tb_env;
    ppc40x_timer = tb_env->opaque;
    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    switch ((env->spr[SPR_40x_TCR] >> 30) & 0x3) {
    case 0:
        next = 1 << 17;
        break;
    case 1:
        next = 1 << 21;
        break;
    case 2:
        next = 1 << 25;
        break;
    case 3:
        next = 1 << 29;
        break;
    default:
        /* Cannot occur, but makes gcc happy */
        return;
    }
    next = now + muldiv64(next, NANOSECONDS_PER_SECOND, tb_env->decr_freq);
    if (next == now) {
        next++;
    }
    trace_ppc4xx_wdt(env->spr[SPR_40x_TCR], env->spr[SPR_40x_TSR]);
    switch ((env->spr[SPR_40x_TSR] >> 30) & 0x3) {
    case 0x0:
    case 0x1:
        timer_mod(ppc40x_timer->wdt_timer, next);
        ppc40x_timer->wdt_next = next;
        env->spr[SPR_40x_TSR] |= 1U << 31;
        break;
    case 0x2:
        timer_mod(ppc40x_timer->wdt_timer, next);
        ppc40x_timer->wdt_next = next;
        env->spr[SPR_40x_TSR] |= 1 << 30;
        if ((env->spr[SPR_40x_TCR] >> 27) & 0x1) {
            ppc_set_irq(cpu, PPC_INTERRUPT_WDT, 1);
        }
        break;
    case 0x3:
        env->spr[SPR_40x_TSR] &= ~0x30000000;
        env->spr[SPR_40x_TSR] |= env->spr[SPR_40x_TCR] & 0x30000000;
        switch ((env->spr[SPR_40x_TCR] >> 28) & 0x3) {
        case 0x0:
            /* No reset */
            break;
        case 0x1: /* Core reset */
            ppc40x_core_reset(cpu);
            break;
        case 0x2: /* Chip reset */
            ppc40x_chip_reset(cpu);
            break;
        case 0x3: /* System reset */
            ppc40x_system_reset(cpu);
            break;
        }
    }
}
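
/*
 * Successive watchdog expiries with no software intervention walk the TSR
 * state machine above: first set ENW, then WIS (optionally raising
 * PPC_INTERRUPT_WDT), and once both are set the TCR watchdog reset control
 * field selects no action, core, chip or system reset.
 */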

void store_40x_pit (CPUPPCState *env, target_ulong val)
{
    ppc_tb_t *tb_env;
    ppc40x_timer_t *ppc40x_timer;

    tb_env = env->tb_env;
    ppc40x_timer = tb_env->opaque;
    trace_ppc40x_store_pit(val);
    ppc40x_timer->pit_reload = val;
    start_stop_pit(env, tb_env, 0);
}

target_ulong load_40x_pit (CPUPPCState *env)
{
    return cpu_ppc_load_decr(env);
}

static void ppc_40x_set_tb_clk (void *opaque, uint32_t freq)
{
    CPUPPCState *env = opaque;
    ppc_tb_t *tb_env = env->tb_env;

    trace_ppc40x_set_tb_clk(freq);
    tb_env->tb_freq = freq;
    tb_env->decr_freq = freq;
    /* XXX: we should also update all timers */
}

clk_setup_cb ppc_40x_timers_init (CPUPPCState *env, uint32_t freq,
                                  unsigned int decr_excp)
{
    ppc_tb_t *tb_env;
    ppc40x_timer_t *ppc40x_timer;

    tb_env = g_malloc0(sizeof(ppc_tb_t));
    env->tb_env = tb_env;
    tb_env->flags = PPC_DECR_UNDERFLOW_TRIGGERED;
    ppc40x_timer = g_malloc0(sizeof(ppc40x_timer_t));
    tb_env->tb_freq = freq;
    tb_env->decr_freq = freq;
    tb_env->opaque = ppc40x_timer;
    trace_ppc40x_timers_init(freq);
    if (ppc40x_timer != NULL) {
        /* We use decr timer for PIT */
        tb_env->decr_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_4xx_pit_cb, env);
        ppc40x_timer->fit_timer =
            timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_4xx_fit_cb, env);
        ppc40x_timer->wdt_timer =
            timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_4xx_wdt_cb, env);
        ppc40x_timer->decr_excp = decr_excp;
    }

    return &ppc_40x_set_tb_clk;
}
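
/*
 * 40x SoC code wires this up at board init, roughly along these lines
 * (sketch only; the frequency and the exception routing are board specific):
 *
 *     clk_setup_cb cb = ppc_40x_timers_init(env, sysclk, PPC_INTERRUPT_PIT);
 *
 * The returned ppc_40x_set_tb_clk() callback lets the clock model change
 * the timebase/decrementer frequency later.
 */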

/*****************************************************************************/
/* Embedded PowerPC Device Control Registers */
typedef struct ppc_dcrn_t ppc_dcrn_t;
struct ppc_dcrn_t {
    dcr_read_cb dcr_read;
    dcr_write_cb dcr_write;
    void *opaque;
};

/* XXX: on 460, DCR addresses are 32 bits wide,
 *      using DCRIPR to get the 22 upper bits of the DCR address
 */
#define DCRN_NB 1024
struct ppc_dcr_t {
    ppc_dcrn_t dcrn[DCRN_NB];
    int (*read_error)(int dcrn);
    int (*write_error)(int dcrn);
};

int ppc_dcr_read (ppc_dcr_t *dcr_env, int dcrn, uint32_t *valp)
{
    ppc_dcrn_t *dcr;

    if (dcrn < 0 || dcrn >= DCRN_NB) {
        goto error;
    }
    dcr = &dcr_env->dcrn[dcrn];
    if (dcr->dcr_read == NULL) {
        goto error;
    }
    *valp = (*dcr->dcr_read)(dcr->opaque, dcrn);
    trace_ppc_dcr_read(dcrn, *valp);

    return 0;

error:
    if (dcr_env->read_error != NULL) {
        return (*dcr_env->read_error)(dcrn);
    }

    return -1;
}

int ppc_dcr_write (ppc_dcr_t *dcr_env, int dcrn, uint32_t val)
{
    ppc_dcrn_t *dcr;

    if (dcrn < 0 || dcrn >= DCRN_NB) {
        goto error;
    }
    dcr = &dcr_env->dcrn[dcrn];
    if (dcr->dcr_write == NULL) {
        goto error;
    }
    trace_ppc_dcr_write(dcrn, val);
    (*dcr->dcr_write)(dcr->opaque, dcrn, val);

    return 0;

error:
    if (dcr_env->write_error != NULL) {
        return (*dcr_env->write_error)(dcrn);
    }

    return -1;
}

int ppc_dcr_register (CPUPPCState *env, int dcrn, void *opaque,
                      dcr_read_cb dcr_read, dcr_write_cb dcr_write)
{
    ppc_dcr_t *dcr_env;
    ppc_dcrn_t *dcr;

    dcr_env = env->dcr_env;
    if (dcr_env == NULL) {
        return -1;
    }
    if (dcrn < 0 || dcrn >= DCRN_NB) {
        return -1;
    }
    dcr = &dcr_env->dcrn[dcrn];
    if (dcr->opaque != NULL ||
        dcr->dcr_read != NULL ||
        dcr->dcr_write != NULL) {
        return -1;
    }
    dcr->opaque = opaque;
    dcr->dcr_read = dcr_read;
    dcr->dcr_write = dcr_write;

    return 0;
}
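
/*
 * Device models register their DCRs during board construction, e.g.
 * (illustrative names, not an actual call site):
 *
 *     ppc_dcr_register(env, dcr_base + 0x0, opb, &dcr_read_opb, &dcr_write_opb);
 *
 * ppc_dcr_init() below must have been called first so env->dcr_env exists.
 */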

int ppc_dcr_init (CPUPPCState *env, int (*read_error)(int dcrn),
                  int (*write_error)(int dcrn))
{
    ppc_dcr_t *dcr_env;

    dcr_env = g_malloc0(sizeof(ppc_dcr_t));
    dcr_env->read_error = read_error;
    dcr_env->write_error = write_error;
    env->dcr_env = dcr_env;

    return 0;
}

/*****************************************************************************/

int ppc_cpu_pir(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    return env->spr_cb[SPR_PIR].default_value;
}
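
/*
 * Note that the PIR is read from the SPR's default (reset) value rather
 * than the current register content, so the lookup below stays stable even
 * if a guest rewrites SPR_PIR.
 */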

PowerPCCPU *ppc_get_vcpu_by_pir(int pir)
{
    CPUState *cs;

    CPU_FOREACH(cs) {
        PowerPCCPU *cpu = POWERPC_CPU(cs);

        if (ppc_cpu_pir(cpu) == pir) {
            return cpu;
        }
    }

    return NULL;
}

/*
 * On CPU reset QEMU clears env->pending_interrupts, but with an in-QEMU
 * interrupt controller and kernel-irqchip=off an external interrupt (an
 * IPI, for example) may still be latched in KVM; left asserted, it would be
 * re-presented to the guest forever.  Deassert the IRQ inputs on the QEMU
 * side and the external interrupt pin in KVM as well.
 */
void ppc_irq_reset(PowerPCCPU *cpu)
{
    CPUPPCState *env = &cpu->env;

    env->irq_input_state = 0;
    kvmppc_set_interrupt(cpu, PPC_INTERRUPT_EXT, 0);
}