There is an error in the functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64 state. This commit fixes the mapping to match the v8 ARM ARM,
section D1.20.1 (table D1-77).
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1440796451-15276-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tidied commit message a bit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All of these hw_errors are fatal and indicate something wrong with the
QEMU implementation.
Convert them to g_assert_not_reached().
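A minimal sketch of the pattern (illustrative only, not one of the actual
call sites): a "can't happen" default case that used to report a fatal
hardware error becomes an assertion that reaching it is a QEMU bug.

    #include <glib.h>

    static int decode_width(int size_bits)
    {
        switch (size_bits) {
        case 8:
            return 0;
        case 16:
            return 1;
        case 32:
            return 2;
        default:
            /* previously: hw_error("bad size %d\n", size_bits); */
            g_assert_not_reached();
        }
    }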
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-id: 169194d09017e5725535d31a1507d454c0043706.1440842587.git.crosthwaite.peter@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Signed-off-by: Christopher Covington <christopher.covington@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-3-git-send-email-peter.maydell@linaro.org
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-7-git-send-email-peter.maydell@linaro.org
Implement the remaining stage 1 TLB invalidate operations
visible from EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-6-git-send-email-peter.maydell@linaro.org
Implement the missing TLBI operations that exist only
if EL2 is implemented.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-5-git-send-email-peter.maydell@linaro.org
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-ASID
TLB ops work like the non-per-ASID ones because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
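A rough sketch of the shape of the change (the exact tlb_flush_by_mmuidx()
signature and the MMU index names used here are assumptions, not the
literal patch):

    static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                        uint64_t value)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        /* Flush only the stage 1 EL1&0 MMU indexes rather than the whole TLB */
        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
    }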
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-4-git-send-email-peter.maydell@linaro.org
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-3-git-send-email-peter.maydell@linaro.org
Implement the AArch32 ATS1H* operations which perform
Hyp mode stage 1 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-6-git-send-email-peter.maydell@linaro.org
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
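In cpregs terms the check looks roughly like the following (a sketch of the
intended behaviour rather than the exact patch):

    static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        if (arm_current_el(env) == 1) {
            if (arm_is_secure_below_el3(env)) {
                /* Secure EL1: trap to EL3 (only reachable if EL3 is AArch64) */
                return CP_ACCESS_TRAP_EL3;
            }
            /* Non-secure EL1: normal UNDEF */
            return CP_ACCESS_TRAP_UNCATEGORIZED;
        }
        /* Accesses from EL2 and EL3 succeed */
        return CP_ACCESS_OK;
    }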
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-5-git-send-email-peter.maydell@linaro.org
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-3-git-send-email-peter.maydell@linaro.org
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-2-git-send-email-peter.maydell@linaro.org
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement them as RAZ/WI.
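A sketch of the regdefs (encodings written from the usual ACTLR_ELx
pattern; treat the details as illustrative rather than the exact patch):

    static const ARMCPRegInfo actlr_el23_reginfo[] = {
        { .name = "ACTLR_EL2", .state = ARM_CP_STATE_AA64,
          .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1,
          .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
        { .name = "ACTLR_EL3", .state = ARM_CP_STATE_AA64,
          .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1,
          .access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
        REGINFO_SENTINEL
    };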
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-5-git-send-email-peter.maydell@linaro.org
The AFSR registers are implementation-defined auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-4-git-send-email-peter.maydell@linaro.org
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-3-git-send-email-peter.maydell@linaro.org
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, which are the only
two registers for which we had implemented the 32-bit Secure
equivalents but not the 64-bit Secure versions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-2-git-send-email-peter.maydell@linaro.org
If EL3 is AArch32, then the secure physical timer is accessed via
banking of the registers used for the non-secure physical timer.
Implement this banking.
Note that the access controls for the AArch32 banked registers
remain the same as the physical-timer checks; they are not the
same as the controls on the AArch64 secure timer registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1437047249-2357-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437047249-2357-2-git-send-email-peter.maydell@linaro.org
It's easy to accidentally define two cpregs which both try
to reset the same underlying state field (for instance a
clash between an AArch64 EL3 definition and an AArch32
banked register definition). If the two definitions disagree
about the reset value then the result depends on which
one happened to be reached last in the hashtable enumeration.
Add a consistency check to detect and assert in these cases:
after reset, we run a second pass where we check that the
reset operation doesn't change the value of the register.
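Conceptually the check is a second foreach pass over the hashtable along
these lines (the function names here are illustrative):

    static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
    {
        ARMCPRegInfo *ri = value;
        ARMCPU *cpu = opaque;
        uint64_t oldvalue, newvalue;

        if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
            return;
        }
        oldvalue = read_raw_cp_reg(&cpu->env, ri);
        cp_reg_reset(key, value, opaque);      /* run the reset a second time */
        newvalue = read_raw_cp_reg(&cpu->env, ri);
        /* If two defs fight over the same state field, the value changes here */
        assert(oldvalue == newvalue);
    }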
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436797559-20835-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Prepare for adding the Hypervisor timer; no functional change.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-5-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename gt_cnt_reset to gt_timer_reset as the function really
resets the timers and not the counters. Move the registration
from counter regs to timer regs.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds control for trapping selected timer and counter accesses to EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds support for the virtual timer offset controlled by EL2.
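The core of the change is that the virtual counter read subtracts the
EL2-controlled offset (a minimal sketch; the cntvoff_el2 field name is an
assumption):

    static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        /* CNTVCT = CNTPCT - CNTVOFF */
        return gt_get_countervalue(env) - env->cp15.cntvoff_el2;
    }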
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SCTLR_EL3 cpreg definition was implicitly resetting the
register state to 0, which is both wrong and clashes with
the reset done via the SCTLR definition (since sctlr[3]
is unioned with sctlr_s). This went unnoticed until recently,
when an unrelated change (commit a903c449b4) happened to
perturb the order of enumeration through the cpregs hashtable for
reset such that the erroneous reset happened after the correct one
rather than before it. Fix this by marking SCTLR_EL3 as an alias,
so its reset is left up to the AArch32 view.
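The fix amounts to adding ARM_CP_ALIAS to the SCTLR_EL3 regdef, roughly as
below (the surrounding fields are shown only for context and may not match
the tree exactly):

    { .name = "SCTLR_EL3", .state = ARM_CP_STATE_AA64,
      .type = ARM_CP_ALIAS,   /* reset is owned by the AArch32 SCTLR def */
      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 0,
      .access = PL3_RW, .writefn = sctlr_write, .raw_writefn = raw_write,
      .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]) },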
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
TLBI ALLE1IS is an operation that invalidates TLB entries on all PEs
in the same Inner Shareable domain, not just on the current CPU. So we
must use tlbiall_is_write() here.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1435676538-31345-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove semihosting_enabled and semihosting_target and replace them with
a SemihostingConfig structure containing equivalent fields. The structure
is defined in vl.c, where it is actually set.
Also introduce a separate header file, include/exec/semihost.h, which
allows target-specific semihosting code to access the semihosting
configuration.
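The new configuration has roughly this shape (an illustrative sketch; the
real structure lives in vl.c and is reached through accessors declared in
include/exec/semihost.h):

    typedef struct SemihostingConfig {
        bool enabled;
        SemihostingTarget target;   /* native, gdb or auto */
    } SemihostingConfig;

    static SemihostingConfig semihosting;

    bool semihosting_enabled(void)
    {
        return semihosting.enabled;
    }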
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1434643256-16858-2-git-send-email-leon.alrae@imgtec.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unified MPU only. Uses the ARM architecture major revision to switch
between PMSAv5 and PMSAv7 when ARM_FEATURE_MPU is set. PMSAv6 remains
unsupported and is asserted against.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: dcb03cda6dd754c5cc6a962fa11f25089811e954.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the arm CP registers for PMSAv7 and their accessor functions.
RGNR serves as a shared index that indexes into arrays storing the
DRBAR, DRSR and DRACR registers. DRBAR and friends have to be migrated
(via VMState) separately from the CP interface, using a new PMSA-specific
VMSD subsection.
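The accessors use RGNR as a shared index, roughly as follows (field names
are assumptions for illustration, not the literal patch):

    static uint64_t pmsav7_drbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        return env->pmsav7.drbar[env->pmsav7.rnr];
    }

    static void pmsav7_drbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
    {
        env->pmsav7.drbar[env->pmsav7.rnr] = value;
    }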
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 172cf135fbd8f5cea413c00e71cc1c3cac704744.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the MPUIR register for MPU-supporting CPUs from ARMv6 onwards.
Currently we only support a unified MPU.
The size of the unified MPU is defined via the number of "dregions".
So just a single config is added to specify this size. (When split MPU
is implemented we will add an extra iregions config).
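An illustrative sketch of the two pieces (property, field and array names
are assumptions): a single property sizes the unified MPU, and MPUIR is a
constant register reporting that count in its DREGION field (bits [15:8]).

    static Property pmsav7_dregion_property =
        DEFINE_PROP_UINT32("pmsav7-dregion", ARMCPU, pmsav7_dregion, 16);

    /* ...later, when registering cpregs for an MPU-capable v6+ core: */
    ARMCPRegInfo mpuir = {
        .name = "MPUIR", .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
        .access = PL1_R, .type = ARM_CP_CONST,
        .resetvalue = cpu->pmsav7_dregion << 8,   /* unified MPU: IREGION == 0 */
    };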
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 9f248950b803a08c8b3c978931663182f7e882e7.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cp_reg_reset() is called from g_hash_table_foreach() which does not
define a specific ordering of the hash table iteration. Thus doing reset
for registers marked as ALIAS would give an ambiguous result when
resetvalue is different for original and alias registers. Exit
cp_reg_reset() early when passed an alias register. Then clean up alias
register definitions from needless resetvalue and resetfn.
In particular, this fixes a bug in the handling of the PMCR register,
which had different resetvalues for its 32 and 64-bit views.
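The fix is an early exit in cp_reg_reset(), roughly (the tail of the
function is elided in this sketch):

    static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
    {
        ARMCPRegInfo *ri = value;
        ARMCPU *cpu = opaque;

        if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS)) {
            /* Alias defs don't own any state; the original def resets it */
            return;
        }
        if (ri->resetfn) {
            ri->resetfn(&cpu->env, ri);
            return;
        }
        /* ...otherwise write ri->resetvalue into the raw state field... */
    }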
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1434554713-10220-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a boolean for indicating uniprocessors with MP extensions. This
drives the U bit in MPIDR. Prepares support for Cortex-R5.
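A sketch of how the new flag feeds the MPIDR value (field names are
assumptions for illustration):

    static uint64_t mpidr_read_val(CPUARMState *env)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        uint64_t mpidr = cpu->mp_affinity;

        if (arm_feature(env, ARM_FEATURE_V7MP)) {
            mpidr |= (1U << 31);     /* bit 31: MP-extensions MPIDR format */
            if (cpu->mp_is_up) {
                mpidr |= (1U << 30); /* U: uniprocessor system */
            }
        }
        return mpidr;
    }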
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: a70a80583df265e0174f01fa1fc92b33ea6d1db5.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently, the return code for get_phys_addr is overloaded for both
success/fail and FSR value return. This doesn't handle the case where
there is an error with a 0 FSR. This case exists in PMSAv7.
So rework get_phys_addr and friends to return a success/failure boolean
return code and populate the FSR via a caller provided uint32_t
pointer.
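The reworked contract looks roughly like this (signature simplified; in
this sketch a true return means the translation faulted):

    bool get_phys_addr(CPUARMState *env, target_ulong address, int access_type,
                       ARMMMUIdx mmu_idx, hwaddr *phys_ptr, int *prot,
                       target_ulong *page_size, uint32_t *fsr);

    /* caller side: */
    uint32_t fsr;
    if (get_phys_addr(env, address, access_type, mmu_idx,
                      &phys_addr, &prot, &page_size, &fsr)) {
        /* fault: fsr holds the fault status, which may legitimately be 0 */
    }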
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: a209e3d8ae00cda55260c970891f520210e26bad.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
V6+ PMSA and VMSA share some common registers that are currently
in the VMSA definition block. Split them out into a new def that can
be shared with PMSA.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 284db78a43c63c9bfbb60de539672c361bcb6af8.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These registers are VMSA specific so they should be conditional on
VMSA (i.e. !MPU).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 7bb8843e45f2635c6b7a583c5bb5da51ed4442a0.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If doing a PMSA (MPU) system do not define the VMSA specific TLBTR CP.
The def is done separately from the VMSA registers group as it is affected
by both the OMAP/STRONGARM RW errata and the MIDR backgrounding.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: b03fea3840207edf633f5c9189400c3dd6a28d14.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we're using KVM, the kernel's internal idea of the MPIDR
affinity fields must match the values we tell it for the guest
vcpu cluster configuration in the device tree. Since at the moment
the kernel doesn't support letting userspace tell it the correct
affinity fields to use, we must read the kernel's view and
reflect that back in the device tree.
Signed-off-by: Shlomo Pongratz <shlomo.pongratz@huawei.com>
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-id: 02f601d0a1e6$90c7d630$b2578290$@samsung.com
[PMM: Use a local #define rather than a global variable for
the TCG ARM_CPUS_PER_CLUSTER setting. Tweak a comment. Update the
commit message.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the ARMv8 ARM, there are additional aliases of the MIDR system
register in AArch32 state. So add them to the list.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433321048-23793-3-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the ARM Cortex-A53/A57 TRMs, the REVIDR reset value should be
zero. So let the REVIDR reset value be specified by the CPU model and
correct it for Cortex-A53/A57.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433321048-23793-2-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since ARMv7 with LPAE support, a supersection short translation table
descriptor has had extended base address fields which hold bits 39:32 of
the translated address. These fields are IMPDEF in ARMv6 and ARMv7 without
LPAE support.
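For an LPAE-capable CPU the physical address composition is roughly as
follows (bit positions taken from the short-descriptor supersection
format; illustrative only):

    /* desc is the 32-bit first-level supersection descriptor */
    phys_addr  = (desc & 0xff000000) | (address & 0x00ffffff); /* PA[31:24] */
    phys_addr |= (hwaddr)extract32(desc, 20, 4) << 32;         /* PA[35:32] */
    phys_addr |= (hwaddr)extract32(desc, 5, 4) << 36;          /* PA[39:36] */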
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433235718-30485-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The old ARMv5-style page table format includes a kind of second level
descriptor named the "extended small page" format, whose primary purpose
is to allow specification of the TEX memory attribute bits on a 4K page.
This exists on ARMv6 and also (as an implementation extension) on XScale
CPUs; it's UNPREDICTABLE on v5.
We were mishandling this in two ways:
(1) we weren't implementing it for v6 (probably never noticed because
Linux will use the new-style v6 page table format there)
(2) we were not correctly setting the page_size, which is 4K, not 1K
The latter bug went unnoticed for years because the only thing which
the page_size affects is which TLB entries get flushed when the guest
does a TLB invalidate on an address in the page, and prior to commit
2f0d8631b7 we were doing a full TLB flush very frequently due to Linux's
habit of writing the SCTLR pointlessly a lot.
(We can assume that after commit 2f0d8631b7 the bug went unnoticed
for a year because nobody's actually using the Zaurus/XScale emulation...)
Report the correct page size for these descriptors, and permit them
on ARMv6 CPUs. This fixes a problem where a kernel image for Zaurus
can boot the kernel OK but gets random segfaults when it tries to
run userspace programs.
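A sketch of the relevant part of the second-level decode after the fix
(the 1KB tiny-page path used by fine page tables is elided):

    case 3: /* ARMv6/XScale "extended small page" descriptor */
        if (arm_feature(env, ARM_FEATURE_XSCALE)
            || arm_feature(env, ARM_FEATURE_V6)) {
            phys_addr = (desc & 0xfffff000) | (address & 0xfff);
            *page_size = 0x1000;       /* 4K, not 1K */
        } else {
            /* UNPREDICTABLE on plain v5: take a translation fault */
            goto do_fault;
        }
        break;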
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1432844085-16441-1-git-send-email-peter.maydell@linaro.org
The ARMCPRegInfo arrays v8_el3_no_el2_cp_reginfo and v8_el2_cp_reginfo
are actually used on non-v8 CPUs as well. Remove the incorrect v8_
prefix from their names.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1433182716-6400-1-git-send-email-peter.maydell@linaro.org
Since we now require GLib 2.22+ (commit f40685c), we don't have to
work around the lack of g_hash_table_get_keys() anymore.
This reverts commit 82a3a11897.
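With GLib 2.22+ guaranteed, the key list can come straight from GLib
(sketch; the comparator name is an assumption):

    GList *keys = g_hash_table_get_keys(cpu->cp_regs);

    keys = g_list_sort(keys, cpreg_key_compare);
    /* ...iterate over keys... */
    g_list_free(keys);   /* frees the list only; keys are not copied */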
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1432749090-4698-1-git-send-email-armbru@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-11-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-10-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-9-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-8-git-send-email-edgar.iglesias@gmail.com
[PMM: Switch to preferred opc1/crm order for 64-bit AArch32 cpregs;
drop unneeded use of vmsa_ttbr_writefn]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-7-git-send-email-edgar.iglesias@gmail.com
[PMM: reordered fields into preferred opc0/opc1/crn/crm/opc2 order]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-6-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>