Implement trapping of the "debug ROM" registers, which are controlled
by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
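As a rough illustration only (not the actual patch; the helper name access_tdra and the MDCR_TDRA/MDCR_TDA masks are assumed, following QEMU's target-arm conventions), such a check might look like:

    /* Sketch: trap "debug ROM" register accesses to EL2 or EL3 as
     * configured by MDCR_EL2.TDRA and MDCR_EL3.TDA. */
    static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
    {
        int el = arm_current_el(env);

        if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
            && !arm_is_secure_below_el3(env)) {
            return CP_ACCESS_TRAP_EL2;
        }
        if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }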
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Implement the traps to EL2 and EL3 controlled by the bits
MDCR_EL2.TDOSA and MDCR_EL3.TDOSA. These can configurably trap
accesses to the "powerdown debug" registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
We weren't quite implementing the handling of SCR.SMD correctly.
The condition governing whether the SMD bit should apply only
for NS state is "is EL3 AArch32", not "is the current EL AArch32".
Fix the condition, and clarify the comment both to reflect this and
to expand slightly on what's going on for the v7-no-Virtualization case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Correct some corner cases we were getting wrong for
CNTFRQ access rights:
* should UNDEF from 32-bit Secure EL1
* only writable from the highest implemented exception level,
which might not be EL1 now
To clarify the code, provide a new utility function
arm_highest_el() which returns the highest implemented
exception level.
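A minimal sketch of such a helper (assuming QEMU's usual ARM_FEATURE_EL3/ARM_FEATURE_EL2 feature bits; not necessarily the exact code added):

    /* Return the highest exception level implemented by this CPU. */
    static inline int arm_highest_el(CPUARMState *env)
    {
        if (arm_feature(env, ARM_FEATURE_EL3)) {
            return 3;
        }
        if (arm_feature(env, ARM_FEATURE_EL2)) {
            return 2;
        }
        return 1;
    }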
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
ARM stops before access to a location covered by a watchpoint. Also, a QEMU
watchpoint firing is not necessarily an architectural watchpoint match.
Unfortunately, it is hardly possible to ignore a fired watchpoint in the
debug exception handler. So move the watchpoint check from the debug
exception handler to the dedicated watchpoint-checking callback.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454256948-10485-3-git-send-email-serge.fdrv@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All Thumb Neon and VFP instructions are 32 bits, so the IL
bit in the syndrome register should be set. Pass false to the
syn_* function's is_16bit argument rather than s->thumb
so we report the correct IL bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454683067-16001-4-git-send-email-peter.maydell@linaro.org
All Thumb coprocessor instructions are 32 bits, so the IL
bit in the syndrome register should be set. Pass false to the
syn_* function's is_16bit argument rather than s->thumb
so we report the correct IL bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454683067-16001-3-git-send-email-peter.maydell@linaro.org
In syndrome register values, the IL bit indicates the instruction
length, and is 1 for 4-byte instructions and 0 for 2-byte
instructions. All A64 and A32 instructions are 4-byte, but
Thumb instructions may be either 2 or 4 bytes long. Unfortunately
we named the parameter to the syn_* functions for constructing
syndromes "is_thumb", which falsely implies that it should be
set for all Thumb instructions, rather than only the 16-bit ones.
Fix the functions to name the parameter 'is_16bit' instead.
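For illustration, a hedged sketch of how a syndrome constructor uses the renamed parameter (syn_example is a hypothetical name; ARM_EL_IL and ARM_EL_EC_SHIFT follow QEMU's existing definitions):

    /* Sketch: IL is set for 32-bit instructions, clear for 16-bit Thumb. */
    static inline uint32_t syn_example(uint32_t ec, bool is_16bit)
    {
        uint32_t syn = ec << ARM_EL_EC_SHIFT;

        if (!is_16bit) {
            syn |= ARM_EL_IL;
        }
        return syn;
    }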
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454683067-16001-2-git-send-email-peter.maydell@linaro.org
Enable EL3 support for our Cortex-A53 and Cortex-A57 CPU models.
We have enough implemented now to be able to run real world code
at least to some extent (I can boot ARM Trusted Firmware to the
point where it pulls in OP-TEE and then falls over because it
doesn't have a UEFI image it can chain to).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454506721-11843-8-git-send-email-peter.maydell@linaro.org
Implement some corner cases of the behaviour of the NSACR
register on ARMv8:
* if EL3 is AArch64 then accessing the NSACR from Secure EL1
with AArch32 should trap to EL3
* if EL3 is not present or is AArch64 then reads from NS EL1 and
NS EL2 return constant 0xc00
It would in theory be possible to implement all these with
a single reginfo definition, but for clarity we use three
separate definitions for the three cases and install the
right one based on the CPU feature flags.
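A hedged sketch of the two interesting variants (function names here are assumptions, not the exact reginfo entries):

    /* Sketch: NSACR reads as constant 0xc00 when EL3 is absent or AArch64. */
    static uint64_t nsacr_const_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        return 0xc00;
    }

    /* Sketch: AArch32 Secure EL1 accesses trap to EL3 when EL3 is AArch64. */
    static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                       bool isread)
    {
        if (arm_current_el(env) == 1 && arm_is_secure_below_el3(env)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }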
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-7-git-send-email-peter.maydell@linaro.org
System registers might have access requirements which need to
be described via a CPAccessFn and which differ for reads and
writes. For this to be possible we need to pass the access
function a parameter to tell it whether the access being checked
is a read or a write.
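The updated hook type then has roughly this shape (a sketch following QEMU's existing naming):

    /* Access-check hook: 'isread' distinguishes reads from writes. */
    typedef CPAccessResult CPAccessFn(CPUARMState *env,
                                      const ARMCPRegInfo *opaque,
                                      bool isread);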
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454506721-11843-6-git-send-email-peter.maydell@linaro.org
The arm_generate_debug_exceptions() function as originally implemented
assumes no EL2 or EL3. Since we now have much more of an implementation
of those, fix this assumption.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454506721-11843-5-git-send-email-peter.maydell@linaro.org
The registers MVBAR and SCR should have the behaviour of trapping to
EL3 if accessed from Secure EL1, but we were incorrectly implementing
them to UNDEF (which would trap to EL1). Fix this by using the new
access_trap_aa32s_el1() access function.
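A hedged sketch of what such an access function does (close to, but not guaranteed to be, the exact helper):

    /* Sketch: Secure EL1 accesses trap to EL3; other lower-EL accesses
     * remain UNDEFined. */
    static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
                                                const ARMCPRegInfo *ri,
                                                bool isread)
    {
        if (arm_current_el(env) == 3) {
            return CP_ACCESS_OK;
        }
        if (arm_is_secure_below_el3(env)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_TRAP_UNCATEGORIZED;
    }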
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-4-git-send-email-peter.maydell@linaro.org
Implement the MDCR_EL3 register (which is SDCR for AArch32).
For the moment we implement it as reads-as-written.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-3-git-send-email-peter.maydell@linaro.org
Fix a typo where "EL2" was written but "EL3" intended.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454506721-11843-2-git-send-email-peter.maydell@linaro.org
Thus, use cpu_env as the parameter, not TCG_AREG0 directly.
Update all uses in the translators.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We already modify the processor feature bits to not report EL3
support to the guest if EL3 isn't enabled for the CPU we're emulating.
Add similar support for not reporting EL2 unless it is enabled.
This is necessary because real world guest code running at EL3
(trusted firmware or bootloaders) will query the ID registers to
determine whether it should start a guest Linux kernel in EL2 or EL1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454437242-10262-1-git-send-email-peter.maydell@linaro.org
Implement the inputsize > pamax check for Stage 2 translations.
This is CONSTRAINED UNPREDICTABLE and we choose to fault.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1453932970-14576-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename check_s2_startlevel to check_s2_mmu_setup in preparation
for additional checks.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1453932970-14576-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The S2 starting level table size check applies to both AArch32
and AArch64. Move it to common code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1453932970-14576-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch64 system registers DACR32_EL2, IFSR32_EL2, SPSR_IRQ,
SPSR_ABT, SPSR_UND and SPSR_FIQ are visible and fully functional from
EL3 even if the CPU has no EL2 (unlike some others which are RES0
from EL3 in that configuration). Move them from el2_cp_reginfo[] to
v8_cp_reginfo[] so they are always present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1453227802-9991-1-git-send-email-peter.maydell@linaro.org
Split the bits that require it to exec/log.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-13-git-send-email-peter.maydell@linaro.org
This patch provides the name of the architecture in the target.xml
if available.
This allows the remote gdb to detect the target architecture on its
own - so there is no need to specify it manually (e.g. if gdb is
started without a binary) using "set arch *arch_name*".
The name of the architecture is provided by a callback that can
be implemented by all architectures. The arm implementation has
special handling for iwmmxt and returns arm otherwise. This can
be extended if necessary.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[rework to use a callback]
Message-Id: <1449144881-130935-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The AArch64 FPEXC32_EL2 system register is visible at EL2 and EL3,
and allows those exception levels to read and write the FPEXC
register for a lower exception level that is using AArch32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1453132414-8127-1-git-send-email-peter.maydell@linaro.org
The architecture requires that for an exception return to AArch32 the
low bits of ELR_ELx are ignored when the PC is set from them:
* if returning to Thumb mode, ignore ELR_ELx[0]
* if returning to ARM mode, ignore ELR_ELx[1:0]
We were only squashing bit 0; also squash bit 1 if the SPSR T bit
indicates this is a return to ARM code.
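A minimal sketch of the masking (hypothetical helper; SPSR bit 5 is the T bit):

    /* Sketch: compute the AArch32 return PC from ELR_ELx. */
    static uint32_t aarch32_return_pc(uint64_t elr, uint32_t spsr)
    {
        bool thumb = spsr & (1u << 5);            /* SPSR.T */

        return thumb ? (uint32_t)(elr & ~1ull)    /* ignore ELR[0] */
                     : (uint32_t)(elr & ~3ull);   /* ignore ELR[1:0] */
    }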
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We already implement almost all the checks for the illegal
return events from AArch64 state described in the ARM ARM section
D1.11.2. Add the two missing ones:
* return to EL2 when EL3 is implemented and SCR_EL3.NS is 0
* return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1
(We don't implement external debug, so the case of "debug state exit
from EL0 using AArch64 state to EL0 using AArch32 state" doesn't apply
for QEMU.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Remove the assumptions that the AArch64 exception return code was
making about a return to AArch32 always being a return to EL0.
This includes pulling out the illegal-SPSR checks so we can apply
them for return to 32-bit as well as return to 64-bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
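As a hedged sketch, the offset selection depends on the RW bits that describe the register width of the next lower implemented EL (names follow QEMU conventions, but this is illustrative only):

    /* Sketch: 0x400 if the EL below the target uses AArch64, else 0x600. */
    static unsigned int lower_el_vector_offset(CPUARMState *env, int target_el)
    {
        bool lower_is_aa64;

        switch (target_el) {
        case 3:
            lower_is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
            break;
        case 2:
            lower_is_aa64 = (env->cp15.hcr_el2 & HCR_RW) != 0;
            break;
        default:
            lower_is_aa64 = is_a64(env);      /* exception from EL0 to EL1 */
            break;
        }
        return lower_is_aa64 ? 0x400 : 0x600;
    }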
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32 and 64 bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Support EL2 and EL3 in arm_el_is_aa64() by implementing the
logic for checking the SCR_EL3 and HCR_EL2 register-width bits
as appropriate to determine the register width of lower exception
levels.
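A hedged sketch of the register-width logic (illustrative only; the RW mask names follow QEMU conventions):

    /* Sketch: a lower EL can only be AArch64 if every RW bit above it
     * allows it. */
    static bool el_is_aa64_sketch(CPUARMState *env, int el)
    {
        bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);

        if (el == 3) {
            return aa64;
        }
        if (arm_feature(env, ARM_FEATURE_EL3)) {
            aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
        }
        if (el == 2) {
            return aa64;
        }
        if (arm_feature(env, ARM_FEATURE_EL2) && !arm_is_secure_below_el3(env)) {
            aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
        }
        return aa64;
    }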
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If we have a secure address space, use it in page table walks:
when doing the physical accesses to read descriptors, make them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement cpu_get_phys_page_attrs_debug instead of cpu_get_phys_page_debug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement the asidx_from_attrs CPU method to return the
Secure or NonSecure address space as appropriate.
(The function is inline so we can use it directly in target-arm
code to be added in later patches.)
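The mapping itself is tiny; a sketch essentially mirroring the description above:

    /* Sketch: pick the address-space index from the transaction attributes. */
    static inline int arm_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
    {
        return attrs.secure ? ARMASIdx_S : ARMASIdx_NS;
    }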
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add a QOM property to the ARM CPU which boards can use to tell us what
memory region to use for secure accesses. Nonsecure accesses
go via the memory region specified with the base CPU class 'memory'
property.
By default, if no secure region is specified it is the same as the
nonsecure region, and if no nonsecure region is specified we will use
address_space_memory.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1449505425-32022-3-git-send-email-peter.maydell@linaro.org
gdb won't actually dump these with 'info all-registers' since
it first tries to confirm that it should by checking the VFP
hwcap in the .auxv note. Well, we don't generate an .auxv note.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-9-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-7-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the support needed for creating prstatus elf notes. This
allows us to use QMP dump-guest-memory.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 1452542185-10914-6-git-send-email-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: moved setting of cpu::write_elf64_note inside !CONFIG_USER_ONLY
ifdef to avoid compile failure for linux-user build]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes which are not exclusively
of a stage 1 regime won't work with this method.
In the case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index for a stage 1 translation
regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
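A sketch of the resulting wrapper (illustrative; the offset trick relies on the ARMMMUIdx enum layout described above):

    /* Sketch: fold combined S1+S2 indexes onto their stage 1 equivalents. */
    bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
            mmu_idx += ARMMMUIdx_S1NSE0;
        }
        return regime_using_lpae_format(env, mmu_idx);
    }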
Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
Message-id: 1452854262-19550-1-git-send-email-a.rigo@virtualopensystems.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commits 6daf194d, be62a2eb and 312fd5f got rid of a bunch, but they
keep coming back. Tracked down with the Coccinelle semantic patch
from commit 312fd5f.
Cc: Fam Zheng <famz@redhat.com>
Cc: Peter Crosthwaite <crosthwaitepeter@gmail.com>
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Changchun Ouyang <changchun.ouyang@intel.com>
Cc: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-17-git-send-email-armbru@redhat.com>
This patch adds support for split IRQ chip mode. When
KVM_CAP_SPLIT_IRQCHIP is enabled:
1.) The PIC, PIT, and IOAPIC are implemented in userspace while
the LAPIC is implemented by KVM.
2.) The software IOAPIC delivers interrupts to the KVM LAPIC via
kvm_set_irq. Interrupt delivery is configured via the MSI routing
table, for which routes are reserved in target-i386/kvm.c and then
configured in hw/intc/ioapic.c.
3.) KVM delivers IOAPIC EOIs via a new exit KVM_EXIT_IOAPIC_EOI,
which is handled in target-i386/kvm.c and relayed to the software
IOAPIC via ioapic_eoi_broadcast.
Signed-off-by: Matt Gingell <gingell@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If we can't find details for the debug exception in our debug state
then we can assume the exception is due to debugging inside the guest.
To inject the exception into the guest state we re-use the TCG exception
code (do_interrupt).
However, while guest debugging is in effect we currently can't handle the
guest using single step, as we will keep trapping back to userspace.
GDB makes heavy use of single-step behind the scenes which effectively
means the guest's ability to debug itself is disabled while it is being
debugged.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-6-git-send-email-alex.bennee@linaro.org
[PMM: Fixed a few typos in comments and commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds basic support for HW assisted debug. The ioctl interface to
KVM allows us to pass an implementation defined number of break and
watch point registers. When KVM_GUESTDBG_USE_HW is specified these
debug registers will be installed in place on the world switch into the
guest.
The hardware is actually capable of more advanced matching but it is
unclear if this expressiveness is available via the gdbstub protocol.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-5-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds support for single-step. There isn't much to do on the QEMU
side as after we set up the request for single step via the debug ioctl
it is all handled within the kernel.
The actual setting of the KVM_GUESTDBG_SINGLESTEP flag is already in the
common code. If the kernel doesn't support guest debug the ioctl will
simply error.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-4-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These don't involve messing around with debug registers, just setting
the breakpoint instruction in memory. GDB will not use this mechanism if
it can't access the memory to write the breakpoint.
All the kernel has to do is ensure the hypervisor traps the breakpoint
exceptions and returns to userspace.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-3-git-send-email-alex.bennee@linaro.org
[PMM: Fixed typo in comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As we haven't always had guest debug support we need to probe for it.
Additionally we don't do this in the start-up capability code so we
don't fall over on old kernels.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1449599553-24713-2-git-send-email-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch32 translation completion code for the singlestep enabled/active
case was way more confusing and repetitive than it needed to be. That was
probably how a bug got introduced into it at some point. The bug was that
a SWI/HVC/SMC exception would be generated in the condition-failed
instruction code path when it shouldn't be.
This patch rewrites the code in a way similar to the non-singlestep
case.
In the condition-passed/unconditional instruction code path we need to:
- Write the condexec bits back to the CPU state
- Advance the singlestep state machine and generate a corresponding
exception in case of SWI/HVC/SMC
- Write the PC back to the CPU state if it hasn't already been written
and generate an appropriate singlestep exception otherwise
In the condition-failed instruction code path we need to:
- Set a TCG label to jump to it if the condition is failed
- Write the condexec bits back to the CPU state
- Write the PC back to the CPU state since it hasn't been written in
this case
- Generate an appropriate singlestep exception
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1448474560-22475-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.
This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-id: 1449167808-5656-1-git-send-email-Andrew.Baumann@microsoft.com
[PMM: set WnR bits in syndrome and FSR as appropriate]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The checks for the unallocated encodings in the ldst_excl group
(exclusives and load-acquire/store-release) were not correct. This
error meant that in turn we ended up with code attempting to handle
the non-existent case of "non-exclusive load-acquire/store-release
pair". Delete that broken and now unreachable code.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.
(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
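For a 4K-granule final-level descriptor the corrected mask covers bits [47:12]; a hedged sketch (hypothetical helper name):

    /* Sketch: extract the output address, extending the field to bit 47. */
    static inline uint64_t lpae_descaddr(uint64_t descriptor)
    {
        return descriptor & 0x0000fffffffff000ULL;   /* bits [47:12] */
    }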
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1448029971-9875-1-git-send-email-peter.maydell@linaro.org
The architectural breakpoint check could raise an exception, so the condexec
bits should be updated before calling gen_helper_check_breakpoints().
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447767527-21268-3-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coprocessor access instructions are allowed inside an IT block.
gen_helper_access_check_cp_reg() can raise an exception, so the condexec
bits should be updated before calling it.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447767527-21268-2-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PC should be updated in the CPU state before calling the
check_breakpoints() helper. Otherwise, the helper would not see the correct
PC in the CPU state if the instruction is not at the start of a TB.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447176222-16401-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
AArch32 translation code does not distinguish between DISAS_UPDATE and
DISAS_JUMP. Thus, we cannot use any of them without first updating PC in
CPU state. Furthermore, it is too complicated to update PC in CPU state
before PC gets updated in disas context. So it is hardly possible to
correctly end the TB early if it is not likely to be executed before calling
disas_*_insn(), e.g. just after calling breakpoint check helper.
Modify DISAS_UPDATE and DISAS_JUMP usage in AArch32 translation and
apply to them the same semantic as AArch64 translation does:
- DISAS_UPDATE: update PC in CPU state when finishing translation
- DISAS_JUMP: preserve current PC value in CPU state when finishing
translation
This patch fixes a bug in AArch32 breakpoint handling: when
the check_breakpoints helper does not generate an exception, ending the TB
early with DISAS_UPDATE did not update the PC in the CPU state and execution
hung.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1447097859-586-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not raise a CPU exception if no CPU breakpoint has fired, since
singlestep is also done by generating a debug internal exception. This
fixes a bug with singlestepping in gdbstub.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1446726361-18328-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If this CPU supports EL3, enhance the printing of the current
CPU mode in debug logging to distinguish S from NS modes as
appropriate.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1445883178-576-3-git-send-email-peter.maydell@linaro.org
The AArch64 debug CPU display of PSTATE as "PSTATE=200003c5 (flags --C-)"
on the end of the same line as the last of the general purpose registers
is unnecessarily different from the AArch32 display of PSR as
"PSR=200001d3 --C- A svc32" on its own line. Update the AArch64
code to put PSTATE in its own line and in the same format, including
printing the exception level (mode).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1445883178-576-2-git-send-email-peter.maydell@linaro.org
Add BANK_<cpumode> #defines to index banked registers.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some targets already had this within their logic, but make sure
it's present for all targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-14-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 32bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-13-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 64bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-12-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce ARMMMUFaultInfo to propagate MMU Fault information
across the MMU translation code path. This is in preparation for
adding Stage-2 translation.
No functional changes.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-11-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Avoid inlining get_phys_addr() to prepare for future recursive use.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-10-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-9-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The starting level for S2 pagetable walks is computed
differently from the S1 starting level. Implement the S2
variant.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-8-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename granule_sz to stride to better match the reference manuals.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the tsz variable and introduce inputsize.
This simplifies the code a little and makes it easier to
compare with the reference manuals.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for AArch32 S2 negative t0sz, in preparation for
using 40-bit IPAs on AArch32.
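A hedged sketch of the input-size computation this enables (hypothetical helper; t0sz is treated as signed):

    /* Sketch: a negative t0sz widens the AArch32 S2 input range,
     * e.g. t0sz = -8 gives a 40-bit IPA. */
    static int inputsize_from_t0sz(int t0sz, bool aarch64)
    {
        return (aarch64 ? 64 : 32) - t0sz;
    }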
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-5-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move declaration of t0sz and t1sz to the top of the function
avoiding a mix of code and variable declarations.
No functional change.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-4-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make t0sz and t1sz signed integers to match tsz and to make
it easier to implement support for AArch32 negative t0sz.
t1sz is changed for consistency.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the memory we're trying to translate code from is not executable we have
to turn this into a guest fault. In order to report the correct PC for this
fault, and to make sure it is not reported until after any other possible
faults for instructions earlier in execution, we must terminate TBs at
the end of a page, in case the next instruction is in a non-executable page.
This is simple for T16, A32 and A64 instructions, which are always aligned
to their size. However, T32 instructions may be 32 bits long but only
16-bit aligned, so they can straddle a page boundary.
Correct the condition that checks whether the next instruction will touch
the following page, to ensure that if we're 2 bytes before the boundary
and this insn is T32 then we end the TB.
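A hedged sketch of the corrected end-of-TB test (hypothetical helper and parameter names):

    /* Sketch: stop translation if the next insn might touch the next page.
     * A T32 insn can be 32 bits wide but only 16-bit aligned. */
    static bool insn_may_cross_page(uint32_t pc, uint32_t next_page_start,
                                    bool next_insn_is_32bit_thumb)
    {
        if (pc >= next_page_start) {
            return true;
        }
        return (pc >= next_page_start - 3) && next_insn_is_32bit_thumb;
    }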
Reported-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The code in arm_excp_unmasked() suppresses the ability of PSTATE.AIF
to mask exceptions from a lower EL targeting EL2 or EL3 if the
CPU is 64-bit. This is correct for a target of EL3, but not correct
for targeting EL2. Further, we go to some effort to calculate
scr and hcr values which are not used at all for the 64-bit CPU
case.
Rearrange the code to correctly implement the 64-bit CPU logic
and keep the hcr/scr calculations in the 32-bit CPU codepath.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1444327729-4120-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* KVM page size fix for PPC
* Support for Linux 4.4's new Hyper-V features
* Eliminate g_slice from areas I maintain
* checkpatch fix
* Peter's cpu_reload_memory_map() cleanups
* More changes to MAINTAINERS
* Require Python 2.6
* chardev creation fixes
* PCI requester id for ARM KVM
* cleanups and doc fixes
* Allow customization of the Hyper-V vendor id
# gpg: Signature made Mon 19 Oct 2015 09:13:10 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (49 commits)
kvm: Allow the Hyper-V vendor ID to be specified
kvm: Move x86-specific functions into target-i386/kvm.c
kvm: Pass PCI device pointer to MSI routing functions
hw/pci: Introduce pci_requester_id()
kvm: Make KVM_CAP_SIGNAL_MSI globally available
doc/rcu: fix g_free_rcu() usage example
qemu-char: cleanup after completed conversion to cd->create
qemu-char: convert ringbuf backend to data-driven creation
qemu-char: convert vc backend to data-driven creation
qemu-char: convert spice backend to data-driven creation
qemu-char: convert console backend to data-driven creation
qemu-char: convert stdio backend to data-driven creation
qemu-char: convert testdev backend to data-driven creation
qemu-char: convert braille backend to data-driven creation
qemu-char: convert msmouse backend to data-driven creation
qemu-char: convert mux backend to data-driven creation
qemu-char: convert null backend to data-driven creation
qemu-char: convert pty backend to data-driven creation
qemu-char: convert UDP backend to data-driven creation
qemu-char: convert socket backend to data-driven creation
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In-kernel ITS emulation on ARM64 will require supplying requester IDs.
These IDs can now be retrieved from the device pointer using the new
pci_requester_id() function.
This patch adds a pci_dev pointer to the KVM GSI routing functions and makes
the callers pass it.
The x86 architecture does not use requester IDs, but hw/i386/kvm/pci-assign.c
is also changed to pass the PCI device pointer instead of NULL for consistency
with the rest of the code.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-Id: <ce081423ba2394a4efc30f30708fca07656bc500.1444916432.git.p.fedin@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.
Generate a call to a helper to check CPU breakpoints and raise an
exception only if any breakpoint matches architecturally.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
GDB breakpoints have higher priority so they have to be checked first.
Should a GDB breakpoint match, just return from the debug exception
handler.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement debug exception routing according to ARM ARM D2.3.1 Pseudocode
description of routing debug exceptions.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the MDCR_EL2 register. We don't implement any of
the debug-related traps this register controls yet, so
currently it simply reads back as written.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1444383794-16767-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message; moved non-dummy definition from
debug_cp_reginfo to el2_cp_reginfo.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an oslar_write function for the OSLAR_EL1 sysreg, using a status
variable in the ARMCPUState.cp15 struct (oslsr_el1). This variable is also
linked to the newly added read-only OSLSR_EL1 register.
Linux reads from this register during its suspend/resume procedure.
Signed-off-by: Davorin Mista <davorin.mista@aggios.com>
[PMM: folded a long line and tweaked a comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It is incorrect to call the arm_el_is_aa64() function for an unimplemented
EL. This patch fixes several attempts to do so.
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
[PMM: Reworked several of the comments to be more verbose.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If any store instruction writes to code within the same TB at a point after
the store insn itself, execution of the TB must be stopped so the new code
is executed correctly.
As described in the ARMv8 manual D3.4.6, self-modifying code must perform an
IC invalidation to be valid, followed by an ISB. So it is enough to end the
TB after an ISB instruction during code translation.
This TB break is also necessary to take any pending interrupts immediately
after an ISB (as required by ARMv8 ARM D1.14.4).
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
[PMM: tweaked commit message and comments slightly]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1443213733-9807-1-git-send-email-sw@weilnetz.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Several devices don't survive object_unref(object_new(T)): they crash
or hang during cleanup, or they leave dangling pointers behind.
This breaks at least device-list-properties, because
qmp_device_list_properties() needs to create a device to find its
properties. Broken in commit f4eb32b "qmp: show QOM properties in
device-list-properties", v2.1. Example reproducer:
$ qemu-system-aarch64 -nodefaults -display none -machine none -S -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 2}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "device-list-properties", "arguments": { "typename": "pxa2xx-pcmcia" } }
qemu-system-aarch64: /home/armbru/work/qemu/memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
Aborted (core dumped)
[Exit 134 (SIGABRT)]
Unfortunately, I can't fix the problems in these devices right now.
Instead, add DeviceClass member cannot_destroy_with_object_finalize_yet
to mark them:
* Hang during cleanup (didn't debug, so I can't say why):
"realview_pci", "versatile_pci".
* Dangling pointer in cpus: most CPUs, plus "allwinner-a10", "digic",
"fsl,imx25", "fsl,imx31", "xlnx,zynqmp", because they create such
CPUs
* Assert kvm_enabled(): "host-x86_64-cpu", host-i386-cpu",
"host-powerpc64-cpu", "host-embedded-powerpc-cpu",
"host-powerpc-cpu" (the powerpc ones can't currently reach the
assertion, because the CPUs are only registered when KVM is enabled,
but the assertion is arguably in the wrong place all the same)
Make qmp_device_list_properties() fail cleanly when the device is so
marked. This improves device-list-properties from "crashes, hangs or
leaves dangling pointers behind" to "fails". Not a complete fix, just
a better-than-nothing work-around. In the above reproducer,
device-list-properties now fails with "Can't list properties of device
'pxa2xx-pcmcia'".
This also protects -device FOO,help, which uses the same machinery
since commit ef52358 "qdev-monitor: include QOM properties in -device
FOO, help output", v2.2. Example reproducer:
$ qemu-system-aarch64 -machine none -device pxa2xx-pcmcia,help
Before:
qemu-system-aarch64: .../memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
After:
Can't list properties of device 'pxa2xx-pcmcia'
Cc: "Andreas Färber" <afaerber@suse.de>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Anthony Green <green@moxielogic.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Jia Liu <proljc@gmail.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: qemu-ppc@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1443689999-12182-10-git-send-email-armbru@redhat.com>
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Adjust all translators to respect it.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This symbol no longer exists.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This does tidy the icount test common to all targets.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>