This means that callers need not copy data into local temporaries.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In r5949 / 76db3ba44e (target-ppc: memory
load/store rework), the variable little_endian was replaced with
ctx.le_mode. Update the debug code accordingly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The bit that makes a dcbz instruction a dcbzl instruction was declared as
reserved in the 32-bit PowerPC ISAs. However, hardware simply ignores the
bit, so code that invokes dcbzl instead of dcbz remains valid even on a 750
or G4. Thus, mark the bit as unreserved so that we properly emulate a simple
dcbz in case we're running on non-G5 CPUs.
While at it, also refactor the code to check the 970 special case at
runtime. This way we no longer need to differentiate between a 970 dcbz and
any other dcbz, and we leave room for future improvements such as e500mc
dcbz handling.
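For illustration, the runtime check boils down to something like the
following sketch (the helper name is made up and the field names are
assumptions based on the description above):

/* Sketch only: pick the clearing size when the helper runs instead of
 * emitting a 970-specific dcbzl handler at translate time.  On a 970,
 * a plain dcbz without the "l" bit only clears 32 bytes; everything
 * else clears the CPU's cache line size. */
static int dcbz_clear_size(CPUPPCState *env, uint32_t opcode)
{
    int size = env->dcache_line_size;

    if (env->excp_model == POWERPC_EXCP_970 && !(opcode & 0x00200000)) {
        size = 32;   /* 970 special case */
    }
    return size;
}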
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch fixes bug 1031698:
https://bugs.launchpad.net/qemu/+bug/1031698
If we look at the (truncated) translation of the conditional branch
instruction in the test case attached to the bug report, the call to the
exception helper is missing from the "bne-false" chunk of translated
code:
IN:
bne- 0x1800278
OUT:
0xb544236d: jne 0xb5442396
0xb5442373: mov %ebp,(%esp)
0xb5442376: mov $0x44,%ebx
0xb544237b: mov %ebx,0x4(%esp)
0xb544237f: mov $0x1800278,%ebx
0xb5442384: mov %ebx,0x25c(%ebp)
0xb544238a: call 0x827475a
^^^^^^^^^^^^^^^^^^
0xb5442396: mov %ebp,(%esp)
0xb5442399: mov $0x44,%ebx
0xb544239e: mov %ebx,0x4(%esp)
0xb54423a2: mov $0x1800270,%ebx
0xb54423a7: mov %ebx,0x25c(%ebp)
Indeed, gen_exception(ctx, excp), called by gen_goto_tb (itself called by
gen_bcond), changes the value of ctx->exception to excp:
gen_bcond()
{
    gen_goto_tb(ctx, 0, ctx->nip + li - 4);
    /* ctx->exception value is POWERPC_EXCP_BRANCH */
    gen_goto_tb(ctx, 1, ctx->nip);
    /* ctx->exception's value is now POWERPC_EXCP_TRACE */
}
This makes the following test in gen_goto_tb() false during the second call:
if ((ctx->singlestep_enabled &
        (CPU_BRANCH_STEP | CPU_SINGLE_STEP)) &&
    ctx->exception == POWERPC_EXCP_BRANCH /* false... */) {
    target_ulong tmp = ctx->nip;
    ctx->nip = dest;
    /* ... and this is the missing call */
    gen_exception(ctx, POWERPC_EXCP_TRACE);
    ctx->nip = tmp;
}
So the patch simply adds the missing matching case, fixing our problem.
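For reference, the resulting condition looks roughly like this (sketch:
it assumes the fix simply accepts the TRACE value that the first
gen_goto_tb call left behind, which may differ from the exact hunk):

if ((ctx->singlestep_enabled &
        (CPU_BRANCH_STEP | CPU_SINGLE_STEP)) &&
    (ctx->exception == POWERPC_EXCP_BRANCH ||
     ctx->exception == POWERPC_EXCP_TRACE)) {
    target_ulong tmp = ctx->nip;
    ctx->nip = dest;
    gen_exception(ctx, POWERPC_EXCP_TRACE);
    ctx->nip = tmp;
}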
Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
This patch adds some extra FPU state to CPUPPCState. Specifically,
fpscr is extended to target_ulong width, since some recent (64-bit)
CPUs now have more status bits than fit inside 32 bits. Also, we add
the 32 VSR registers present on CPUs with VSX (these extend the
standard FP regs, which together with the Altivec/VMX registers form a
64 x 128-bit register file for VSX).
We don't actually support the instructions using these extra registers
in TCG yet, but we still need a place to store the state so we can
sync it with KVM and savevm/loadvm it. This patch updates the savevm
code to not fail on the extended state, but also does not actually
save it - that's a project for another patch.
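As a rough sketch, the extra state amounts to something like this
(field names and layout are illustrative, not the exact declarations):

/* Illustrative only: fpscr widened to target_ulong, plus the lower
 * doublewords of VSRs 0..31; VSRs 32..63 overlay the existing
 * Altivec registers to form the 64 x 128-bit VSX register file. */
struct CPUPPCState {
    /* ... existing fields ... */
    target_ulong fpscr;
    uint64_t vsr[32];
    /* ... */
};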
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For all targets that currently call tcg_gen_debug_insn_start,
add CPU_LOG_TB_OP_OPT to the condition that gates it.
This is useful for comparing optimization dumps, when the
pre-optimization dump is merely noise.
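In practice the gating condition becomes something along these lines
(sketch; the exact call site differs per target):

/* Only emit the per-insn debug op when one of the TB op logging
 * flags is active. */
if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT))) {
    tcg_gen_debug_insn_start(ctx.nip);
}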
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Altivec instructions are not working anymore in PowerPC emulation,
following commit d15f74fb, which swapped two registers in the call
to the helper. Fix that.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Acked-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The BookE variant of MSR_SF is MSR_CM. Implement everything it takes in TCG
to support running 64-bit code with MSR_CM set.
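The core of it is a check along these lines (sketch; the helper name and
the way the CPU family is detected are assumptions):

/* Illustrative sketch: 64-bit mode is selected by MSR_SF on server
 * CPUs and by MSR_CM on BookE CPUs. */
static inline bool msr_is_64bit(CPUPPCState *env, target_ulong msr)
{
    bool booke = env->mmu_model == POWERPC_MMU_BOOKE206; /* assumption */
    int bit = booke ? MSR_CM : MSR_SF;

    return (msr >> bit) & 1;
}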
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0,
and rename op_helper.c (which only contains load and store helpers)
to mem_helper.c. Remove the AREG0 swapping in tlb_fill().
Switch to AREG0-free mode: use cpu_ld{l,uw}_code in translation and
interrupt handling, and cpu_{ld,st}{l,uw}_data in loads and stores.
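The shape of the conversion, for an imaginary helper (names invented for
illustration only):

/* Before: the helper picked up the CPU state implicitly via AREG0. */
uint64_t helper_example_load(target_ulong addr);

/* After: the CPU state is an explicit first parameter ... */
uint64_t helper_example_load(CPUPPCState *env, target_ulong addr);

/* ... and the translator passes cpu_env when calling it: */
gen_helper_example_load(t0, cpu_env, ea);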
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[fix unwanted whitespace line in Makefile.target]
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move code from cpu_state_reset() into ppc_cpu_reset().
Reorder #include of helper_regs.h to use it in translate_init.c.
Adjust whitespace and add braces.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
When we dump the CPU registers, there's a certain chance they haven't been
synchronized with KVM yet, so we have to manually trigger that.
This aligns the code with x86 and fixes a bug where the register state was
bogus on invalid/unknown KVM exit reasons.
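A minimal sketch of the idea (the wrapper function is invented for
illustration; the real change touches the existing dump path in kvm.c):

static void dump_guest_state(CPUPPCState *env)
{
    /* Pull the latest register values out of KVM before printing them,
     * mirroring the x86 code; otherwise the dump may show stale state. */
    cpu_synchronize_state(env);
    cpu_dump_state(env, stderr, fprintf, 0);
}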
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
These instructions for loading and storing byte-swapped 64-bit values were
introduced in PowerISA 2.06.
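A sketch of the translate-time handling for the load side (TARGET_PPC64
only; simplified, with endianness handling and the exact generator macros
omitted):

/* ldbrx: load a doubleword from (rA|0)+rB and reverse its byte order. */
static void gen_ldbrx_sketch(DisasContext *ctx)
{
    TCGv EA = tcg_temp_new();

    gen_addr_reg_index(ctx, EA);
    gen_qemu_ld64(ctx, cpu_gpr[rD(ctx->opcode)], EA);
    tcg_gen_bswap64_tl(cpu_gpr[rD(ctx->opcode)], cpu_gpr[rD(ctx->opcode)]);
    tcg_temp_free(EA);
}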
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Scripted conversion:
sed -i "s/CPUState/CPUPPCState/g" target-ppc/*.[hc]
sed -i "s/#define CPUPPCState/#define CPUState/" target-ppc/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
This patch implements the msgsnd instruction. It is part of the
Embedded.Processor Control specification and allows one CPU to send an
IPI to another CPU without going through an interrupt controller.
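Roughly, the helper walks all CPUs and latches a doorbell interrupt on the
addressed ones; the sketch below simplifies the filtering (the
doorbell_matches() check is invented for illustration):

void helper_msgsnd(target_ulong rb)
{
    CPUPPCState *cenv;

    for (cenv = first_cpu; cenv != NULL; cenv = cenv->next_cpu) {
        if (doorbell_matches(cenv, rb)) {   /* broadcast or PIR match */
            cenv->pending_interrupts |= 1 << PPC_INTERRUPT_DOORBELL;
            cpu_interrupt(cenv, CPU_INTERRUPT_HARD);
        }
    }
}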
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements the msgclr instruction. It is part of the
Embedded.Processor Control specification and clears pending doorbell
interrupts on the current CPU.
Signed-off-by: Alexander Graf <agraf@suse.de>
Our internal helpers to fetch TLB entries were not able to tell us
that an entry doesn't even exist. Pass an error out if we hit such
a case, so we don't accidentally index beyond the TLB array.
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 2.06 BookE ISA defines an opcode called "tlbilx" which is used
to flush TLB entries. It's the recommended way of flushing in virtualized
environments.
So far we got away without implementing it, but Linux for e500mc uses this
instruction, so we better add it :).
Signed-off-by: Alexander Graf <agraf@suse.de>
The msync instruction as defined today is only valid on 4xx cores, not
on e500, which also supports msync but treats it the same way as sync.
Rename it to reflect that it's 4xx-only.
Signed-off-by: Alexander Graf <agraf@suse.de>
The e500 CPUs don't use the 440's msync, which falls on the same opcode
IDs, but instead use the real PowerPC sync instruction. This is important,
since the invalid mask differs between the two.
Signed-off-by: Alexander Graf <agraf@suse.de>
When using gdb to single-step a PPC interrupt routine, the execution
flow passes the rfi instruction without actually returning from the
interrupt.
The patch fixes this by not updating the nip when the debug exception
is raised and a previous POWERPC_EXCP_SYNC was set. The latter is only
the case if code for rfi or a related instruction was generated.
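The resulting single-step path looks roughly like this (sketch; variable
names follow translate.c, but the exact hunk may differ):

if (unlikely(ctx.singlestep_enabled)) {
    /* rfi and friends already set up the new nip (POWERPC_EXCP_SYNC);
     * don't overwrite it before raising the debug exception. */
    if (ctx.exception != POWERPC_EXCP_SYNC) {
        gen_update_nip(ctxp, ctx.nip);
    }
    gen_debug_exception(ctxp);
}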
Signed-off-by: Sebastian Bauer <mail@sebastianbauer.info>
Signed-off-by: Alexander Graf <agraf@suse.de>
SPE instructions are defined in pairs. Currently, the invalid-bits mask is
set for the first instruction, but the second one can have a different mask.
Example:
GEN_SPE(efdcmpeq, efdcfs, 0x17, 0x0B, 0x00600000, 0x00180000, PPC_SPE_DOUBLE),
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements support for the CFAR SPR on POWER7 (Come From
Address Register), which snapshots the PC value at the time of a branch or
an rfid. The latest powerpc-next kernel also catches it and can show it in
xmon or in the signal frames.
This works well enough to let recent kernels boot (which otherwise oops
on the CFAR access). It hasn't been tested enough to be confident that the
CFAR values are actually accurate, but one thing at a time.
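Conceptually, the translator snapshots the current nip into the CFAR just
before emitting a branch, roughly like this (sketch; the flag that tells
whether the CPU has a CFAR is an assumption):

static void gen_update_cfar(DisasContext *ctx, target_ulong nip)
{
#if defined(TARGET_PPC64)
    if (ctx->has_cfar) {          /* only CPUs that implement CFAR */
        tcg_gen_movi_tl(cpu_cfar, nip);
    }
#endif
}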
Signed-off-by: Ben Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
When an SPE instruction is used while SPE is not available, throw an
SPE exception instead of an APU exception. That way the guest knows
what's going on and can actually use SPE.
Reported-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
* 'ppc-next' of git://repo.or.cz/qemu/agraf:
PPC: move TLBs to their own arrays
PPC: 440: Use 440 style MMU as default, so Qemu knows the MMU type
PPC: E500: Use MAS registers instead of internal TLB representation
PPC: Only set lower 32bits with mtmsr
PPC: update openbios firmware
PPC: mpc8544ds: Add hypervisor node
PPC: calculate kernel,initrd,cmdline locations dynamically
target-ppc: Handle memory-forced I/O controller access
PPC: E500: Implement reboot controller
As Nathan correctly pointed out, the mtmsr instruction does not modify
the high 32 bits of the MSR. It also doesn't matter whether SF is set
or not; the instruction always behaves the same.
This patch moves it a bit closer to the spec.
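As a standalone illustration of the intended semantics (not the emitted
TCG code):

#include <stdint.h>

/* mtmsr only replaces the low 32 bits of the MSR, regardless of SF. */
static uint64_t mtmsr_value(uint64_t old_msr, uint64_t gpr_value)
{
    return (old_msr & 0xffffffff00000000ULL) | (uint32_t)gpr_value;
}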
Reported-by: Nathan Whitehorn <nwhitehorn@freebsd.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
target-ppc was switched to softfloat-only long ago, but a few
#ifdef CONFIG_SOFTFLOAT blocks have been forgotten. Remove them.
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Most of the code to support e500-style MMUs is already in place, but
we're still missing some of the special TLB0/TLB1 handling code and the
slightly different TLB modification.
This patch adds support for the FSL-style MMU.
Signed-off-by: Alexander Graf <agraf@suse.de>
To allow quick matching of instruction groups against the currently
selected CPU emulation, we have a feature mask describing which of them
the respective CPU supports.
This feature mask is 64 bits long, and we just successfully exceeded those
64 bits. To add more features, we need to think of something.
The easiest solution that came to my mind was to simply add another 64 bits
that we can also match on. Since the comparison is only done at startup of
the qemu process, to generate an internal opcode calling table, we don't
need to worry about any performance penalty here.
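Sketched out, the idea is simply a second 64-bit feature word next to the
existing one (names are illustrative; the real fields follow the existing
insns_flags convention):

/* An opcode is registered only if the CPU provides the bits the opcode
 * requires in both feature words.  This runs once at startup while the
 * opcode calling table is built. */
static bool insn_allowed(uint64_t cpu_flags, uint64_t cpu_flags2,
                         uint64_t need, uint64_t need2)
{
    return (cpu_flags & need) == need && (cpu_flags2 & need2) == need2;
}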
Signed-off-by: Alexander Graf <agraf@suse.de>
Read them via KVM_GET_SREGS in kvm_arch_get_registers(),
and display them in "info registers".
Also get CR and PID from the existing KVM_GET_REGS.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>