The architecture requires that for an exception return to AArch32 the
low bits of ELR_ELx are ignored when the PC is set from them:
* if returning to Thumb mode, ignore ELR_ELx[0]
* if returning to ARM mode, ignore ELR_ELx[1:0]
We were only squashing bit 0; also squash bit 1 if the SPSR T bit
indicates this is a return to ARM code.
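As an illustrative standalone sketch of the intended masking (not the code in this patch; the function name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    /* Mask the low bits of ELR_ELx according to SPSR.T (bit 5) on an
     * exception return to AArch32. */
    static uint64_t aarch32_return_pc(uint64_t elr, uint32_t spsr)
    {
        bool thumb = spsr & (1u << 5);      /* SPSR.T: Thumb vs ARM */
        return thumb ? (elr & ~1ull)        /* Thumb: ignore bit 0   */
                     : (elr & ~3ull);       /* ARM: ignore bits 1:0  */
    }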
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We already implement almost all the checks for the illegal
return events from AArch64 state described in the ARM ARM section
D1.11.2. Add the two missing ones:
* return to EL2 when EL3 is implemented and SCR_EL3.NS is 0
* return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1
(We don't implement external debug, so the case of "debug state exit
from EL0 using AArch64 state to EL0 using AArch32 state" doesn't apply
for QEMU.)
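A minimal standalone sketch of the two added conditions; all names here are illustrative parameters, not the actual QEMU state variables:

    #include <stdbool.h>

    /* Returns true if the exception return is illegal per the two new checks. */
    static bool return_is_illegal(int new_el, bool return_to_secure,
                                  bool have_el2, bool have_el3,
                                  bool scr_ns, bool hcr_tge)
    {
        if (new_el == 2 && have_el3 && !scr_ns) {
            return true;    /* return to EL2 while SCR_EL3.NS == 0 */
        }
        if (new_el == 1 && !return_to_secure && have_el2 && hcr_tge) {
            return true;    /* return to Non-secure EL1 while HCR_EL2.TGE == 1 */
        }
        return false;
    }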
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Remove the assumptions that the AArch64 exception return code was
making about a return to AArch32 always being a return to EL0.
This includes pulling out the illegal-SPSR checks so we can apply
them for return to 32-bit as well as return to 64-bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
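As a standalone sketch of the offset selection, with helper arrays standing in for the real per-CPU queries (an assumption for illustration only):

    #include <stdbool.h>
    #include <stdint.h>

    /* Pick 0x400 or 0x600 based on the register width of the implemented EL
     * immediately below the target EL (not the EL the exception came from). */
    static uint32_t lower_el_vector_offset(int target_el,
                                           const bool el_implemented[4],
                                           const bool el_is_aa64[4])
    {
        int lower = target_el - 1;
        while (lower > 1 && !el_implemented[lower]) {
            lower--;                    /* skip unimplemented levels; EL1 always exists */
        }
        return el_is_aa64[lower] ? 0x400 : 0x600;
    }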
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32 and 64 bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Support EL2 and EL3 in arm_el_is_aa64() by implementing the
logic for checking the SCR_EL3 and HCR_EL2 register-width bits
as appropriate to determine the register width of lower exception
levels.
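A standalone sketch of the rule, ignoring the Secure-world interaction with EL2 and the EL0 case, and using illustrative parameters in place of the real CPU state:

    #include <stdbool.h>

    /* A lower EL is AArch64 only if the highest EL is AArch64 and every
     * controlling higher EL has its register-width (RW) bit set. */
    static bool el_is_aa64_sketch(int el, bool have_el2, bool have_el3,
                                  bool highest_el_is_aa64,
                                  bool scr_el3_rw, bool hcr_el2_rw)
    {
        bool aa64 = highest_el_is_aa64;

        if (el < 3 && have_el3) {
            aa64 = aa64 && scr_el3_rw;   /* SCR_EL3.RW: lower ELs are AArch64 */
        }
        if (el < 2 && have_el2) {
            aa64 = aa64 && hcr_el2_rw;   /* HCR_EL2.RW: EL1/EL0 are AArch64 */
        }
        return aa64;
    }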
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If we have a secure address space, use it in page table walks:
when doing the physical accesses to read descriptors, make them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement cpu_get_phys_page_attrs_debug instead of cpu_get_phys_page_debug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement the asidx_from_attrs CPU method to return the
Secure or NonSecure address space as appropriate.
(The function is inline so we can use it directly in target-arm
code to be added in later patches.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add a QOM property to the ARM CPU which boards can use to tell us what
memory region to use for secure accesses. Nonsecure accesses
go via the memory region specified with the base CPU class 'memory'
property.
By default, if no secure region is specified it is the same as the
nonsecure region, and if no nonsecure region is specified we will use
address_space_memory.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1449505425-32022-3-git-send-email-peter.maydell@linaro.org
gdb won't actually dump these with 'info all-registers' since
it first tries to confirm that it should by checking the VFP
hwcap in the .auxv note. Well, we don't generate an .auxv note.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-9-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-7-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the support needed for creating prstatus elf notes. This
allows us to use QMP dump-guest-memory.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 1452542185-10914-6-git-send-email-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: moved setting of cpu::write_elf64_note inside !CONFIG_USER_ONLY
ifdef to avoid compile failure for linux-user build]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
arm_regime_using_lpae_format() checks whether the LPAE format is used
for stage 1 translation regimes. MMU indexes which are not exclusively
of a stage 1 regime won't work with this method.
In the case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format() and update
the comments to reflect the change.
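A standalone sketch of the index conversion; the enum values here are illustrative, not QEMU's actual numbering:

    typedef enum {
        MMU_IDX_S12NSE0, MMU_IDX_S12NSE1,   /* combined stage 1+2 indexes */
        MMU_IDX_S1NSE0,  MMU_IDX_S1NSE1,    /* pure stage 1 equivalents   */
    } MMUIdxSketch;

    /* Map a combined S12 index onto its stage 1 counterpart before asking
     * whether the (stage 1) regime uses the LPAE format. */
    static MMUIdxSketch to_stage1_index(MMUIdxSketch idx)
    {
        if (idx == MMU_IDX_S12NSE0 || idx == MMU_IDX_S12NSE1) {
            idx = (MMUIdxSketch)(idx + (MMU_IDX_S1NSE0 - MMU_IDX_S12NSE0));
        }
        return idx;
    }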
Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
Message-id: 1452854262-19550-1-git-send-email-a.rigo@virtualopensystems.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commits 6daf194d, be62a2eb and 312fd5f got rid of a bunch, but they
keep coming back. Tracked down with the Coccinelle semantic patch
from commit 312fd5f.
Cc: Fam Zheng <famz@redhat.com>
Cc: Peter Crosthwaite <crosthwaitepeter@gmail.com>
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Changchun Ouyang <changchun.ouyang@intel.com>
Cc: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-17-git-send-email-armbru@redhat.com>
This patch adds support for split IRQ chip mode. When
KVM_CAP_SPLIT_IRQCHIP is enabled:
1.) The PIC, PIT, and IOAPIC are implemented in userspace while
the LAPIC is implemented by KVM.
2.) The software IOAPIC delivers interrupts to the KVM LAPIC via
kvm_set_irq. Interrupt delivery is configured via the MSI routing
table, for which routes are reserved in target-i386/kvm.c and then
configured in hw/intc/ioapic.c.
3.) KVM delivers IOAPIC EOIs via a new exit KVM_EXIT_IOAPIC_EOI,
which is handled in target-i386/kvm.c and relayed to the software
IOAPIC via ioapic_eoi_broadcast.
Signed-off-by: Matt Gingell <gingell@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If we can't find details for the debug exception in our debug state
then we can assume the exception is due to debugging inside the guest.
To inject the exception into the guest state we re-use the TCG exception
code (do_interrupt).
However, while guest debugging is in effect we currently can't handle the
guest using single-step itself, as we will keep trapping back to userspace.
GDB makes heavy use of single-step behind the scenes which effectively
means the guest's ability to debug itself is disabled while it is being
debugged.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-6-git-send-email-alex.bennee@linaro.org
[PMM: Fixed a few typos in comments and commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds basic support for HW assisted debug. The ioctl interface to
KVM allows us to pass an implementation-defined number of breakpoint and
watchpoint registers. When KVM_GUESTDBG_USE_HW is specified these
debug registers will be installed in place on the world switch into the
guest.
The hardware is actually capable of more advanced matching but it is
unclear if this expressiveness is available via the gdbstub protocol.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-5-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds support for single-step. There isn't much to do on the QEMU
side: after we set up the request for single-step via the debug ioctl,
it is all handled within the kernel.
The actual setting of the KVM_GUESTDBG_SINGLESTEP flag is already in the
common code. If the kernel doesn't support guest debug, the ioctl will
simply return an error.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-4-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These don't involve messing around with debug registers, just setting
the breakpoint instruction in memory. GDB will not use this mechanism if
it can't access the memory to write the breakpoint.
All the kernel has to do is ensure the hypervisor traps the breakpoint
exceptions and returns to userspace.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-3-git-send-email-alex.bennee@linaro.org
[PMM: Fixed typo in comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As we haven't always had guest debug support, we need to probe for it.
Additionally, we don't do this in the start-up capability code so we
don't fall over on old kernels.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-2-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch32 translation completion code for the singlestep enabled/active
case was far more confusing and repetitive than it needed to be.
That was probably how a bug got introduced into it at some point.
The bug was that a SWI/HVC/SMC exception would be generated in the
condition-failed instruction code path, where it shouldn't be.
This patch rewrites the code in a way similar to the non-singlestep
case.
In the condition-passed/unconditional instruction code path we need to:
- Write the condexec bits back to the CPU state
- Advance the singlestep state machine and generate a corresponding
exception in case of SWI/HVC/SMC
- Write the PC back to the CPU state if it hasn't already been written
and generate an appropriate singlestep exception otherwise
In the condition-failed instruction code path we need to:
- Set a TCG label to jump to it if the condition is failed
- Write the condexec bits back to the CPU state
- Write the PC back to the CPU state since it hasn't been written in
this case
- Generate an appropriate singlestep exception
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1448474560-22475-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases,
including LDREX, and Windows-on-ARM relies on this.
This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
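A standalone sketch of the alignment rule that the MO_ALIGN annotation requests for these loads (illustrative only, not the TCG plumbing itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* An N-byte exclusive load must be N-byte aligned, otherwise it should
     * raise an alignment fault (data abort); single-byte loads are exempt. */
    static bool ldrex_alignment_ok(uint64_t addr, unsigned size_bytes)
    {
        return size_bytes <= 1 || (addr & (size_bytes - 1)) == 0;
    }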
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-id: 1449167808-5656-1-git-send-email-Andrew.Baumann@microsoft.com
[PMM: set WnR bits in syndrome and FSR as appropriate]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The checks for the unallocated encodings in the ldst_excl group
(exclusives and load-acquire/store-release) were not correct. This
error meant that in turn we ended up with code attempting to handle
the non-existent case of "non-exclusive load-acquire/store-release
pair". Delete that broken and now unreachable code.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.
(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
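As a standalone sketch of the corrected masking for a 4K-granule block/page descriptor (the mask values illustrate the idea and are not copied from the patch):

    #include <stdint.h>

    /* ARMv8 LPAE descriptors carry output address bits up to bit 47. */
    static uint64_t descriptor_out_addr(uint64_t descriptor)
    {
        /* old, too-narrow mask: 0x000000fffffff000ULL (bits [39:12]) */
        return descriptor & 0x0000fffffffff000ULL;   /* bits [47:12] */
    }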
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1448029971-9875-1-git-send-email-peter.maydell@linaro.org
The architectural breakpoint check could raise an exception, so the condexec
bits should be updated before calling gen_helper_check_breakpoints().
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447767527-21268-3-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coprocessor access instructions are allowed inside an IT block.
gen_helper_access_check_cp_reg() can raise an exception, so the condexec
bits should be updated beforehand.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447767527-21268-2-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PC should be updated in the CPU state before calling the
check_breakpoints() helper. Otherwise, the helper would not see the
correct PC in the CPU state if the instruction is not at the start of a TB.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447176222-16401-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch32 translation code does not distinguish between DISAS_UPDATE and
DISAS_JUMP. Thus, we cannot use either of them without first updating the PC
in the CPU state. Furthermore, it is too complicated to update the PC in the
CPU state before the PC gets updated in the disas context. So it is hardly
possible to correctly end a TB early, if it is not likely to be executed,
before calling disas_*_insn(), e.g. just after calling the breakpoint check helper.
Modify DISAS_UPDATE and DISAS_JUMP usage in AArch32 translation and
apply to them the same semantic as AArch64 translation does:
- DISAS_UPDATE: update PC in CPU state when finishing translation
- DISAS_JUMP: preserve current PC value in CPU state when finishing
translation
This patch fixes a bug in AArch32 breakpoint handling: when the
check_breakpoints helper does not generate an exception, ending the TB
early with DISAS_UPDATE does not update the PC in the CPU state and
execution hangs.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447097859-586-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not raise a CPU exception if no CPU breakpoint has fired, since
singlestep is also done by generating a debug internal exception. This
fixes a bug with singlestepping in gdbstub.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1446726361-18328-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If this CPU supports EL3, enhance the printing of the current
CPU mode in debug logging to distinguish S from NS modes as
appropriate.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1445883178-576-3-git-send-email-peter.maydell@linaro.org
The AArch64 debug CPU display of PSTATE as "PSTATE=200003c5 (flags --C-)"
on the end of the same line as the last of the general purpose registers
is unnecessarily different from the AArch32 display of PSR as
"PSR=200001d3 --C- A svc32" on its own line. Update the AArch64
code to put PSTATE in its own line and in the same format, including
printing the exception level (mode).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1445883178-576-2-git-send-email-peter.maydell@linaro.org
Add BANK_<cpumode> #defines to index banked registers.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some targets already had this within their logic, but make sure
it's present for all targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-14-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 32-bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-13-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 64-bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-12-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce ARMMMUFaultInfo to propagate MMU Fault information
across the MMU translation code path. This is in preparation for
adding Stage-2 translation.
No functional changes.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-11-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Avoid inline for get_phys_addr() to prepare for future recursive use.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-10-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-9-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The starting level for S2 pagetable walks is computed
differently from the S1 starting level. Implement the S2
variant.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-8-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename granule_sz to stride to better match the reference manuals.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the tsz variable and introduce inputsize.
This simplifies the code a little and makes it easier to
compare with the reference manuals.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for AArch32 S2 negative t0sz, in preparation for
using 40-bit IPAs on AArch32.
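A one-line standalone sketch of what a negative t0sz buys us (illustrative, not the patch's code):

    /* AArch32 stage 2 t0sz is signed, in [-8, 7]; the input address size is
     * 32 - t0sz, so t0sz == -8 yields a 40-bit IPA. */
    static int s2_inputsize_sketch(int t0sz)
    {
        return 32 - t0sz;
    }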
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-5-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move declaration of t0sz and t1sz to the top of the function
avoiding a mix of code and variable declarations.
No functional change.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-4-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make t0sz and t1sz signed integers to match tsz and to make
it easier to implement support for AArch32 negative t0sz.
t1sz is changed for consistency.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the memory we're trying to translate code from is not executable we have
to turn this into a guest fault. In order to report the correct PC for this
fault, and to make sure it is not reported until after any other possible
faults for instructions earlier in execution, we must terminate TBs at
the end of a page, in case the next instruction is in a non-executable page.
This is simple for T16, A32 and A64 instructions, which are always aligned
to their size. However, T32 instructions may be 32 bits long but only
16-bit aligned, so they can straddle a page boundary.
Correct the condition that checks whether the next instruction will touch
the following page, to ensure that if we're 2 bytes before the boundary
and this insn is T32 then we end the TB.
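A conservative standalone sketch of the corrected end-of-page condition (the real code can also peek at the instruction to see whether it actually crosses; this version just stays conservative):

    #include <stdbool.h>
    #include <stdint.h>

    static bool end_tb_for_page_boundary(uint64_t pc, uint64_t next_page_start,
                                         bool thumb)
    {
        if (pc >= next_page_start) {
            return true;                 /* next insn starts on the next page */
        }
        /* A T32 insn may be 4 bytes but only 2-byte aligned, so starting in
         * the last 2 bytes of the page can straddle the boundary. */
        return thumb && pc + 2 >= next_page_start;
    }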
Reported-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The code in arm_excp_unmasked() suppresses the ability of PSTATE.AIF
to mask exceptions from a lower EL targeting EL2 or EL3 if the
CPU is 64-bit. This is correct for a target of EL3, but not correct
for targeting EL2. Further, we go to some effort to calculate
scr and hcr values which are not used at all for the 64-bit CPU
case.
Rearrange the code to correctly implement the 64-bit CPU logic
and keep the hcr/scr calculations in the 32-bit CPU codepath.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1444327729-4120-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* KVM page size fix for PPC
* Support for Linux 4.4's new Hyper-V features
* Eliminate g_slice from areas I maintain
* checkpatch fix
* Peter's cpu_reload_memory_map() cleanups
* More changes to MAINTAINERS
* Require Python 2.6
* chardev creation fixes
* PCI requester id for ARM KVM
* cleanups and doc fixes
* Allow customization of the Hyper-V vendor id
# gpg: Signature made Mon 19 Oct 2015 09:13:10 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (49 commits)
kvm: Allow the Hyper-V vendor ID to be specified
kvm: Move x86-specific functions into target-i386/kvm.c
kvm: Pass PCI device pointer to MSI routing functions
hw/pci: Introduce pci_requester_id()
kvm: Make KVM_CAP_SIGNAL_MSI globally available
doc/rcu: fix g_free_rcu() usage example
qemu-char: cleanup after completed conversion to cd->create
qemu-char: convert ringbuf backend to data-driven creation
qemu-char: convert vc backend to data-driven creation
qemu-char: convert spice backend to data-driven creation
qemu-char: convert console backend to data-driven creation
qemu-char: convert stdio backend to data-driven creation
qemu-char: convert testdev backend to data-driven creation
qemu-char: convert braille backend to data-driven creation
qemu-char: convert msmouse backend to data-driven creation
qemu-char: convert mux backend to data-driven creation
qemu-char: convert null backend to data-driven creation
qemu-char: convert pty backend to data-driven creation
qemu-char: convert UDP backend to data-driven creation
qemu-char: convert socket backend to data-driven creation
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In-kernel ITS emulation on ARM64 will require supplying requester IDs.
These IDs can now be retrieved from the device pointer using the new
pci_requester_id() function.
This patch adds a pci_dev pointer to the KVM GSI routing functions and
makes callers pass it.
The x86 architecture does not use requester IDs, but hw/i386/kvm/pci-assign.c
is also made to pass the PCI device pointer instead of NULL, for consistency
with the rest of the code.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-Id: <ce081423ba2394a4efc30f30708fca07656bc500.1444916432.git.p.fedin@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.
Instead, generate a call to a helper which checks CPU breakpoints and raises
an exception only if a breakpoint matches architecturally.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
GDB breakpoints have higher priority, so they have to be checked first.
Should a GDB breakpoint match, just return from the debug exception
handler.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement debug exception routing according to ARM ARM D2.3.1 Pseudocode
description of routing debug exceptions.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the MDCR_EL2 register. We don't implement any of
the debug-related traps this register controls yet, so
currently it simply reads back as written.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1444383794-16767-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message; moved non-dummy definition from
debug_cp_reginfo to el2_cp_reginfo.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an oslar_write function for the OSLAR_EL1 sysreg, using a status variable
in the ARMCPUState.cp15 struct (oslsr_el1). This variable is also linked
to the newly added read-only OSLSR_EL1 register.
Linux reads from this register during its suspend/resume procedure.
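A standalone sketch of the lock behaviour (bit positions per the architecture; the function name is made up):

    #include <stdint.h>

    /* Writing OSLAR_EL1 sets the OS lock from bit 0 of the written value;
     * the lock state is then readable as OSLSR_EL1.OSLK (bit 1). */
    static uint32_t oslsr_after_oslar_write(uint32_t oslsr, uint64_t value)
    {
        uint32_t locked = value & 1;
        return (oslsr & ~(1u << 1)) | (locked << 1);
    }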
Signed-off-by: Davorin Mista <davorin.mista@aggios.com>
[PMM: folded a long line and tweaked a comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It is incorrect to call the arm_el_is_aa64() function for an unimplemented EL.
This patch fixes several attempts to do so.
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
[PMM: Reworked several of the comments to be more verbose.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If a store instruction writes to code later in the same TB,
execution of the TB must be stopped in order to execute the new code
correctly.
As described in the ARMv8 ARM section D3.4.6, self-modifying code must do an
IC invalidation to be valid, and an ISB after it. So it is enough to end
the TB after an ISB instruction during code translation.
This TB break is also necessary to take any pending interrupts immediately
after an ISB (as required by ARMv8 ARM D1.14.4).
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
[PMM: tweaked commit message and comments slightly]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1443213733-9807-1-git-send-email-sw@weilnetz.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Several devices don't survive object_unref(object_new(T)): they crash
or hang during cleanup, or they leave dangling pointers behind.
This breaks at least device-list-properties, because
qmp_device_list_properties() needs to create a device to find its
properties. Broken in commit f4eb32b "qmp: show QOM properties in
device-list-properties", v2.1. Example reproducer:
$ qemu-system-aarch64 -nodefaults -display none -machine none -S -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 2}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "device-list-properties", "arguments": { "typename": "pxa2xx-pcmcia" } }
qemu-system-aarch64: /home/armbru/work/qemu/memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
Aborted (core dumped)
[Exit 134 (SIGABRT)]
Unfortunately, I can't fix the problems in these devices right now.
Instead, add DeviceClass member cannot_destroy_with_object_finalize_yet
to mark them:
* Hang during cleanup (didn't debug, so I can't say why):
"realview_pci", "versatile_pci".
* Dangling pointer in cpus: most CPUs, plus "allwinner-a10", "digic",
"fsl,imx25", "fsl,imx31", "xlnx,zynqmp", because they create such
CPUs
* Assert kvm_enabled(): "host-x86_64-cpu", host-i386-cpu",
"host-powerpc64-cpu", "host-embedded-powerpc-cpu",
"host-powerpc-cpu" (the powerpc ones can't currently reach the
assertion, because the CPUs are only registered when KVM is enabled,
but the assertion is arguably in the wrong place all the same)
Make qmp_device_list_properties() fail cleanly when the device is so
marked. This improves device-list-properties from "crashes, hangs or
leaves dangling pointers behind" to "fails". Not a complete fix, just
a better-than-nothing work-around. In the above reproducer,
device-list-properties now fails with "Can't list properties of device
'pxa2xx-pcmcia'".
This also protects -device FOO,help, which uses the same machinery
since commit ef52358 "qdev-monitor: include QOM properties in -device
FOO, help output", v2.2. Example reproducer:
$ qemu-system-aarch64 -machine none -device pxa2xx-pcmcia,help
Before:
qemu-system-aarch64: .../memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
After:
Can't list properties of device 'pxa2xx-pcmcia'
Cc: "Andreas Färber" <afaerber@suse.de>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Anthony Green <green@moxielogic.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Jia Liu <proljc@gmail.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: qemu-ppc@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1443689999-12182-10-git-send-email-armbru@redhat.com>
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Adjust all translators to respect it.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This symbol no longer exists.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This does tidy the icount test common to all targets.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
With an eye toward making it mandatory.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* First batch of MAINTAINERS updates
* IOAPIC fixes (to pass kvm-unit-tests with -machine kernel_irqchip=off)
* NBD API upgrades from Daniel
* strtosz fixes from Marc-André
* improved support for readonly=on on scsi-generic devices
* new "info ioapic" and "info lapic" monitor commands
* Peter Crosthwaite's ELF_MACHINE cleanups
* docs patches from Thomas and Daniel
# gpg: Signature made Fri 25 Sep 2015 11:20:52 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (52 commits)
doc: Refresh URLs in the qemu-tech documentation
docs: describe the QEMU build system structure / design
typedef: add typedef for QemuOpts
i386: interrupt poll processing
i386: partial revert of interrupt poll fix
ppc: Rename ELF_MACHINE to be PPC specific
i386: Rename ELF_MACHINE to be x86 specific
alpha: Remove ELF_MACHINE from cpu.h
mips: Remove ELF_MACHINE from cpu.h
sparc: Remove ELF_MACHINE from cpu.h
s390: Remove ELF_MACHINE from cpu.h
sh4: Remove ELF_MACHINE from cpu.h
xtensa: Remove ELF_MACHINE from cpu.h
tricore: Remove ELF_MACHINE from cpu.h
or32: Remove ELF_MACHINE from cpu.h
lm32: Remove ELF_MACHINE from cpu.h
unicore: Remove ELF_MACHINE from cpu.h
moxie: Remove ELF_MACHINE from cpu.h
cris: Remove ELF_MACHINE from cpu.h
m68k: Remove ELF_MACHINE from cpu.h
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
muldiv64() is used to convert microseconds into CPU ticks, but this
is not obvious and is not commented. This patch uses a macro to clearly
identify what is involved: time, CPU frequency and ticks.
For an elapsed time and a given frequency, we compute how many ticks
we have.
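A standalone sketch of the conversion such a macro names (the macro name here is made up; QEMU's muldiv64() exists to do the multiply/divide without intermediate overflow):

    #include <stdint.h>

    /* ticks = elapsed_time_us * frequency_hz / 1e6 */
    #define MUS2TICKS_SKETCH(mus, freq_hz) \
        ((uint64_t)(mus) * (uint64_t)(freq_hz) / 1000000u)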
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
The only generic code relying on this is linux-user. linux-user
already has a lot of #ifdef TARGET_ customisation, so instead define
ELF_ARCH as either EM_ARM or EM_AARCH64 as appropriate.
The armv7m bootloader can just pass EM_ARM directly, as that
is architecture-specific code. Note that arm_boot already has its own
logic selecting an ARM-specific ELF machine, so this makes V7M more
consistent with arm_boot.
This removes another architecture specific definition from the global
namespace.
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-By: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is the initial version of KVM-accelerated GICv3 support.
State load and save are not yet supported, so live migration is
not possible.
In order to get the correct class name in a simpler way, a gicv3_class_name()
function is implemented, similar to gic_class_name().
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Ashok kumar <ashoks@broadcom.com>
Message-id: 69d8f01d14994d7a1a140e96aef59fd332d02293.1441784344.git.p.fedin@samsung.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This allows us to use GIC types other than v2. There are no kernels
which advertise KVM_CAP_DEVICE_CTRL without the actual ability to
create a GIC with it.
The GIC version probe code is moved to kvm_arm_vgic_probe(), which will be
used later.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Ashok kumar <ashoks@broadcom.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 015f4d9e4a8a50dfbdd734c4730558e24a69c6dc.1441784344.git.p.fedin@samsung.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-9-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Break out mpidr_read_val() to allow future sharing of the
code that conditionally sets the M and U bits of MPIDR.
No functional changes.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-8-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-7-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Stage-2 translations and the EL2 and EL3 regimes don't have the
EPD control.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-6-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Stage-2 MMU translations do not have configurable TBI as
the top byte is always 0 (48-bit IPAs).
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-5-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-4-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1442135278-25281-3-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed typo in comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Usually, eliminate an operation from the translator by combining
a shift with an extract.
In the case of gen_set_NZ64, we don't need a boolean value for cpu_ZF,
merely a non-zero value. Given that we can extract both halves of a
64-bit input in one call, this simplifies the code.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-12-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-11-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For !SF, this initial ext32u can't be optimized away by the
current TCG code generator. (It would require backward bit
liveness propagation.)
But since the range of bits for !SF is already constrained by
unallocated_encoding, we'll never reference the high bits anyway.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-10-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are all special case aliases of UBFM.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-9-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are all special case aliases of SBFM.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-8-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-7-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This can allow much of a ccmp to be elided when particular
flags are subsequently dead.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-6-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-5-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handling this with TCG_COND_ALWAYS will allow these unlikely
cases to be handled without special cases in the rest of the
translator. The TCG optimizer ought to be able to reduce
these ALWAYS conditions completely.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-4-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split arm_gen_test_cc into 3 functions, so that it can be reused
for non-branch TCG comparisons.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-3-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a bug fix for aarch64. At present, we have branches using
the 32-bit (translate.c) versions of cpu_[NZCV]F, but we set the flags
using the 64-bit (translate-a64.c) versions of cpu_[NZCV]F. From
the view of the TCG code generator, these are unrelated variables.
The bug is hard to see because we currently only read these variables
from branches, and upon reaching a branch TCG will first spill live
variables and then reload the arguments of the branch. Since the
32-bit versions were never live until reaching the branch, we'd re-read
the data that had just been spilled from the 64-bit versions.
There is currently no such problem with the cpu_exclusive_* variables,
but there's no point in tempting fate.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1441909103-24666-2-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is set to true when the index is for an instruction fetch
translation.
The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS
accessors.
All targets ignore it for now, and all other callers pass "false".
This will allow targets who wish to split the mmu index between
instruction and data accesses to do so. A subsequent patch will
do just that for PowerPC.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <1439796853-4410-2-git-send-email-benh@kernel.crashing.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Many source files have doubled words (eg "the the", "to to",
and so on). Most of these can simply be removed, but a couple
were actual mis-spellings (eg "to to" instead of "to do").
There was even one triple word score "to to to" :-)
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1441311266-8644-4-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1441311266-8644-3-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Log the target EL when taking exceptions. This is useful when
debugging guest SW or QEMU itself while transitioning through
the various ELs.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1441311266-8644-2-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If EL3 is not supported in the current configuration,
we should not try to get the EL3 bitness.
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1441208342-10601-2-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce reusable definitions for CPU affinity masks/shifts and get rid
of hardcoded magic numbers.
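A standalone sketch of the kind of definitions meant here, following the MPIDR affinity field layout (Aff0..Aff2 are byte fields, Aff3 sits at bits [39:32]); the names are illustrative, not the ones added by the patch:

    #define AFF0_SHIFT  0
    #define AFF1_SHIFT  8
    #define AFF2_SHIFT  16
    #define AFF3_SHIFT  32
    #define AFF_MASK    0xffULL     /* each affinity level is 8 bits wide */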
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-id: 7e6def4d0d91ae64615cdd2035b94d408d0a23c6.1441366248.git.p.fedin@samsung.com
[PMM: folded overlong line]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an error in the arm_excp_unmasked() function:
the bitwise operator & is used with integer and bool operands,
causing an incorrectly zeroed result.
This patch fixes it.
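A standalone illustration of the bug class (all names made up): a bool ANDed bitwise with a multi-bit mask result is zero unless the mask happens to cover bit 0.

    #include <stdbool.h>
    #include <stdint.h>

    static bool unmasked_wrong(uint32_t cpsr, bool pstate_unmasked)
    {
        return (cpsr & 0x80) & pstate_unmasked;   /* 0x80 & 1 == 0: always false */
    }

    static bool unmasked_right(uint32_t cpsr, bool pstate_unmasked)
    {
        return (cpsr & 0x80) && pstate_unmasked;  /* logical AND: intended result */
    }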
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1441209238-16881-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an error in the functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64. This commit fixes the mapping to match the v8 ARM ARM
section D1.20.1 (table D1-77).
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1440796451-15276-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tidied commit message a bit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All of these hw_errors are fatal and indicate something wrong with
the QEMU implementation.
Convert them to g_assert_not_reached().
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-id: 169194d09017e5725535d31a1507d454c0043706.1440842587.git.crosthwaite.peter@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-10-git-send-email-peter.maydell@linaro.org
The A64 semihosting API changes the interface for SYS_EXIT so
that instead of taking a single exception type in a register,
it takes a parameter block containing the exception type and
a sub-code. Implement this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-9-git-send-email-peter.maydell@linaro.org
The A64 semihosting ABI defines a new call SyncCacheRange
for doing a 'clean D-cache and invalidate I-cache' sequence.
Since QEMU doesn't implement caches, we can implement this as a nop.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-8-git-send-email-peter.maydell@linaro.org
The 64-bit A64 semihosting API has some pervasive changes from
the 32-bit version:
* all parameter blocks are arrays of 64-bit values, not 32-bit
* the semihosting call number is passed in W0
* the return value is a 64-bit value in X0
Implement the necessary handling for this widening.
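A standalone sketch of the widening (the register-file structs below stand in for the real CPU state and are assumptions for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    struct regs32 { uint32_t r[16]; };      /* AArch32 general registers */
    struct regs64 { uint64_t x[31]; };      /* AArch64 general registers */

    /* A64 passes the call number in W0 (low 32 bits of X0). */
    static uint32_t semihosting_call_number(bool is_a64, const struct regs32 *r32,
                                            const struct regs64 *r64)
    {
        return is_a64 ? (uint32_t)r64->x[0] : r32->r[0];
    }

    /* The A64 call returns a 64-bit value in X0; A32/T32 keeps a 32-bit
     * return value in R0. */
    static void set_semihosting_return(bool is_a64, struct regs32 *r32,
                                       struct regs64 *r64, uint64_t ret)
    {
        if (is_a64) {
            r64->x[0] = ret;
        } else {
            r32->r[0] = (uint32_t)ret;
        }
    }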
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-7-git-send-email-peter.maydell@linaro.org
Factor out a repeated pattern in the semihosting code:
gdb_do_syscall(arm_semi_cb, "system,%s", arg0, (int)arg1+1);
/* arm_semi_cb sets env->regs[0] to the syscall return value */
return env->regs[0];
For A64 the return value will go in a different register; pull
the sequence out into its own function that passes the return
value in a static variable rather than overloading regs[0]
for the purpose, so the code will work on both A32/T32 and A64.
Note that the lack-of-synchronization bug noted in the FIXME
comment is not introduced by this commit, but was already present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-5-git-send-email-peter.maydell@linaro.org
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Signed-off-by: Christopher Covington <christopher.covington@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-3-git-send-email-peter.maydell@linaro.org
A spurious trailing "\n" in the gdb syscall format string used
for SYS_WRITE0 meant that gdb would reject the remote syscall,
with the effect that the output from the guest was silently dropped.
Remove the newline so that gdb accepts the packet.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-7-git-send-email-peter.maydell@linaro.org
Implement the remaining stage 1 TLB invalidate operations
visible from EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-6-git-send-email-peter.maydell@linaro.org
Implement the missing TLBI operations that exist only
if EL2 is implemented.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-5-git-send-email-peter.maydell@linaro.org
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-ASID
TLB ops work like the non-per-ASID ones, because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
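For example, with the new interface a stage 1 EL1 "invalidate all" only needs
to touch the EL1&0 indexes; a sketch (assuming the varargs
tlb_flush_by_mmuidx() interface with a -1 terminator and QEMU's
ARMMMUIdx_S12NSE* index names):

    static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                        uint64_t value)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        /* Flush only the NS EL1&0 stage 1 indexes; EL2/EL3 entries survive */
        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
    }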
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-4-git-send-email-peter.maydell@linaro.org
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-3-git-send-email-peter.maydell@linaro.org
Implement the AArch32 ATS1H* operations which perform
Hyp mode stage 1 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-6-git-send-email-peter.maydell@linaro.org
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
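The resulting access function is roughly along these lines
(CP_ACCESS_TRAP_UNCATEGORIZED_EL3 is the new result described in the next
entry below; treat the details as an illustrative sketch):

    static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        if (ri->opc2 & 4) {
            /* ATS12NSO* operations are only usable from EL2 and EL3 */
            if (arm_current_el(env) == 1) {
                if (arm_is_secure_below_el3(env)) {
                    /* S EL1: trap to EL3 (only possible if EL3 is AArch64) */
                    return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
                }
                /* NS EL1: ordinary UNDEF */
                return CP_ACCESS_TRAP_UNCATEGORIZED;
            }
        }
        return CP_ACCESS_OK;
    }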
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-5-git-send-email-peter.maydell@linaro.org
Some coprocessor register access functions need to be able
to report "trap to EL3 with an 'uncategorized' syndrome";
add the necessary CPAccessResult enum and handling for it.
I don't currently know of any registers that need to trap
to EL2 with the 'uncategorized' syndrome, but adding the
_EL2 enum as well is trivial and fills in what would
otherwise be an odd gap in the handling.
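A sketch of the enum after the addition (numeric values and comments are
illustrative):

    typedef enum CPAccessResult {
        CP_ACCESS_OK = 0,
        CP_ACCESS_TRAP = 1,
        CP_ACCESS_TRAP_UNCATEGORIZED = 2,
        /* New: as TRAP_UNCATEGORIZED, but force the trap to EL2 or EL3
         * rather than the usual target exception level.
         */
        CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 3,
        CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 4,
    } CPAccessResult;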
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-4-git-send-email-peter.maydell@linaro.org
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-3-git-send-email-peter.maydell@linaro.org
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
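One way to express the rule is a small predicate on the translation regime;
the helper name here is hypothetical and shown only to illustrate the check:

    /* Hypothetical helper: does this stage 1 regime have a TTBR1? */
    static bool regime_has_ttbr1(ARMMMUIdx mmu_idx)
    {
        /* The EL2 (and stage 2) regimes translate a single VA range
         * via TTBR0 only.
         */
        return mmu_idx != ARMMMUIdx_S1E2 && mmu_idx != ARMMMUIdx_S2NS;
    }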
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-2-git-send-email-peter.maydell@linaro.org
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement as RAZ/WI.
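A RAZ/WI register amounts to an ARM_CP_CONST definition with a zero
resetvalue; the ACTLR_EL2 entry could look roughly like this (encoding per
the architecture, other details illustrative):

    { .name = "ACTLR_EL2", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },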
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-5-git-send-email-peter.maydell@linaro.org
The AFSR registers are implementation-dependent auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-4-git-send-email-peter.maydell@linaro.org
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-3-git-send-email-peter.maydell@linaro.org
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, which are the only
two registers for which we had implemented the 32-bit Secure equivalents
but not the 64-bit Secure versions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-2-git-send-email-peter.maydell@linaro.org
If EL3 is AArch32, then the secure physical timer is accessed via
banking of the registers used for the non-secure physical timer.
Implement this banking.
Note that the access controls for the AArch32 banked registers
remain the same as the physical-timer checks; they are not the
same as the controls on the AArch64 secure timer registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1437047249-2357-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437047249-2357-2-git-send-email-peter.maydell@linaro.org
It's easy to accidentally define two cpregs which both try
to reset the same underlying state field (for instance a
clash between an AArch64 EL3 definition and an AArch32
banked register definition). If the two definitions disagree
about the reset value then the result depends on which
one happens to be reached last in the hashtable enumeration.
Add a consistency check to detect and assert in these cases:
after reset, we run a second pass where we check that the
reset operation doesn't change the value of the register.
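The check pass is essentially "reset again and compare"; a sketch (the helper
names are illustrative stand-ins for the statics in target-arm/helper.c):

    static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
    {
        ARMCPRegInfo *ri = value;
        ARMCPU *cpu = opaque;
        uint64_t oldvalue, newvalue;

        if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
            return;
        }
        oldvalue = read_raw_cp_reg(&cpu->env, ri);
        cp_reg_reset(key, value, opaque);
        newvalue = read_raw_cp_reg(&cpu->env, ri);
        assert(oldvalue == newvalue);
    }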
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436797559-20835-1-git-send-email-peter.maydell@linaro.org
This small inline returns the correct GIC class name depending on whether we
are using KVM acceleration or not. It avoids duplicating the condition everywhere.
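A sketch of the helper (the device type names are the ones QEMU uses for the
in-kernel and emulated GIC; treat the exact form as illustrative):

    static inline const char *gic_class_name(void)
    {
        return kvm_irqchip_in_kernel() ? "kvm-arm-gic" : "arm_gic";
    }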
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 4f26901be9b844b563673ce3ad08eeedbb7a7132.1438758065.git.p.fedin@samsung.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Prepare for adding the Hypervisor timer, no functional change.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-5-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename gt_cnt_reset to gt_timer_reset as the function really
resets the timers and not the counters. Move the registration
from counter regs to timer regs.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds control for trapping selected timer and counter accesses to EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds support for the virtual timer offset controlled by EL2.
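The offset simply subtracts from the physical count when the virtual counter
is read; a sketch (the cp15 field name is an assumption):

    static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        /* CNTVCT = CNTPCT - CNTVOFF */
        return gt_get_countervalue(env) - env->cp15.cntvoff_el2;
    }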
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some registers, such as CNTVCT, should only be written to the
kernel as part of machine initialization or on vmload operations, but
never at runtime, as writing them can make time go backwards or
create inconsistent time observations between VCPUs.
Introduce a list of registers that should not be written back at runtime
and check this list on syncing the register state to the KVM state.
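A sketch of the classification (the KVM_PUT_* level names and the regidx
constant are assumptions used for illustration):

    /* Which sync levels is this register allowed to be written back at? */
    static int kvm_arm_cpreg_level(uint64_t regidx)
    {
        switch (regidx) {
        case KVM_REG_ARM_TIMER_CNT:
            /* CNTVCT: only written on reset/vmload, never at runtime */
            return KVM_PUT_FULL_STATE;
        default:
            return KVM_PUT_RUNTIME_STATE;
        }
    }

    /* write_list_to_kvmstate(cpu, level) then skips any register whose
     * level is above the one the caller asked for.
     */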
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1437046488-10773-1-git-send-email-christoffer.dall@linaro.org
[PMM: tweaked a few comments, added the new argument to the stub
write_list_to_kvmstate() in target-arm/kvm-stub.c]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SCTLR_EL3 cpreg definition was implicitly resetting the
register state to 0, which is both wrong and clashes with
the reset done via the SCTLR definition (since sctlr[3]
is unioned with sctlr_s). This went unnoticed until recently,
when an unrelated change (commit a903c449b4) happened to
perturb the order of enumeration through the cpregs hashtable for
reset such that the erroneous reset happened after the correct one
rather than before it. Fix this by marking SCTLR_EL3 as an alias,
so its reset is left up to the AArch32 view.
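With the fix, the AArch64 regdef carries ARM_CP_ALIAS so the cpreg reset
machinery skips it; roughly (field names follow the existing SCTLR
definitions and are shown for illustration only):

    { .name = "SCTLR_EL3", .state = ARM_CP_STATE_AA64, .type = ARM_CP_ALIAS,
      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 0,
      .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]),
      .writefn = sctlr_write, .raw_writefn = raw_write },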
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the target_disas() ARM specifics to the QOM disas_set_info hook
and delete the ARM-specific code in disas.c.
This has the extra advantage that the more fully featured target_disas()
implementation now also applies to monitor_disas().
Currently, target_disas() has multi-endian, Thumb and AArch64
support, whereas the existing monitor_disas() code only has vanilla
AA32 support.
E.g. running an AA64 Linux kernel, the following -d in_asm disassembly
is produced (target_disas()):
IN:
0x0000000040000000: 580000c0 ldr x0, pc+24 (addr 0x40000018)
0x0000000040000004: aa1f03e1 mov x1, xzr
However, before this patch, disassembling the same address from the monitor gives:
(qemu) xp/i 0x40000000
0x0000000040000000: 580000c0 stmdapl r0, {r6, r7}
After this patch:
(qemu) xp/i 0x40000000
0x0000000040000000: 580000c0 ldr x0, pc+24 (addr 0x40000018)
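The hook itself is small; a sketch (the print_insn_* entry points and the
Thumb handling are from memory, and endianness handling is omitted):

    static void arm_disas_set_info(CPUState *cpu, disassemble_info *info)
    {
        ARMCPU *ac = ARM_CPU(cpu);
        CPUARMState *env = &ac->env;

        if (is_a64(env)) {
            info->print_insn = print_insn_arm_a64;
        } else if (env->thumb) {
            info->print_insn = print_insn_thumb1;
        } else {
            info->print_insn = print_insn_arm;
        }
    }

    /* ... and in the ARM CPU class init: */
    cc->disas_set_info = arm_disas_set_info;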
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Remove unneeded usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers, retrieving the env_ptr only where it is actually needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
sed -i \
's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
$I;
done
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The callers of this function (most of them in target-foo/cpu.c) all
have the CPU pointer handy. Just pass it in, to avoid an ENV_GET_CPU() from
core code (in exec.c).
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add an Error argument to cpu_exec_init() to let users collect the
error. This is in preparation to change the CPU enumeration logic
in cpu_exec_init(). With the new enumeration logic, cpu_exec_init()
can fail if cpu_index values corresponding to max_cpus have already
been handed out.
Since all current callers of cpu_exec_init() are from instance_init,
use error_abort Error argument to abort in case of an error.
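Together with the cpu_exec_init() signature changes described above, a call
site in a target's instance_init ends up with roughly this shape (illustrative
sketch, not the exact target code):

    static void arm_cpu_initfn(Object *obj)
    {
        CPUState *cs = CPU(obj);

        cpu_exec_init(cs, &error_abort);   /* abort on enumeration failure */
        /* ... rest of the instance init ... */
    }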
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
To prepare for a generic internal cipher API, move the
built-in AES implementation into the crypto/ directory
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1435770638-25715-3-git-send-email-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Implement the YIELD instruction in the ARM and Thumb translators to
actually yield control back to the top level loop rather than being
a simple no-op. (We already do this for A64.)
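In the translators this means ending the TB with the DISAS_YIELD state
instead of treating the hint as a nop; roughly (see the DISAS_YIELD /
HELPER(yield) split described in the next entry; details illustrative):

    /* In the hint-instruction decode path of translate.c */
    case 1: /* yield */
        gen_set_pc_im(s, s->pc);
        s->is_jmp = DISAS_YIELD;
        break;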
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1435672316-3311-3-git-send-email-peter.maydell@linaro.org
Currently we use DISAS_WFE for both WFE and YIELD instructions.
This is functionally correct because at the moment both of them
are implemented as "yield this CPU back to the top level loop so
another CPU has a chance to run". However it's rather confusing
that YIELD ends up calling HELPER(wfe), and if we ever want to
implement real behaviour for WFE and SEV it's likely to trip us up.
Split out the yield codepath to use DISAS_YIELD and a new
HELPER(yield) function, and have HELPER(wfe) call HELPER(yield).
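The helper side of the split is small; a sketch (EXCP_YIELD and the
env-to-CPU accessors are existing QEMU names; the exact code is illustrative):

    void HELPER(yield)(CPUARMState *env)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        CPUState *cs = CPU(cpu);

        /* Drop back to the main loop so another vCPU gets a chance to run. */
        cs->exception_index = EXCP_YIELD;
        cpu_loop_exit(cs);
    }

    void HELPER(wfe)(CPUARMState *env)
    {
        /* For now WFE behaves as a yield; real wait-for-event semantics
         * could be added here later.
         */
        HELPER(yield)(env);
    }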
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1435672316-3311-2-git-send-email-peter.maydell@linaro.org
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>