Merge tag 'pull-target-arm-20220509' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * MAINTAINERS/.mailmap: update email for Leif Lindholm
 * hw/arm: add version information to sbsa-ref machine DT
 * Enable new features for -cpu max:
   FEAT_Debugv8p2, FEAT_Debugv8p4, FEAT_RAS (minimal version only),
   FEAT_IESB, FEAT_CSV2, FEAT_CSV2_2, FEAT_CSV3, FEAT_DGH
 * Emulate Cortex-A76
 * Emulate Neoverse-N1
 * Fix the virt board default NUMA topology

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmJ5AbsZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3vyFEACZZ6tRVJYB6YpIzI7rho9x
# hVQIMTc4D5lmVetJnbLdLazifIy60oIOtSKV3Y3oj5DLMcsf6NITrPaFPWNRX3Nm
# mcbTCT5FGj8i7b1CkpEylLwvRQbIaoz2GnJPckdYelxxAq1uJNog3fmoG8nVtJ1F
# HfXVCVkZGQyiyr6Y2/zn3vpdp9n6/4RymN8ugizkcgIRII87DKV+DNDalw613JG4
# 5xxBOGkYzo5DZM8TgL8Ylmb5Jy9XY0EN1xpkyHFOg6gi0B3UZTxHq5SvK6NFoZLJ
# ogyhmMh6IjEfhUIDCtWG9VCoPyWpOXAFoh7D7akFVB4g2SIvBvcuGzFxCAsh5q3K
# s+9CgNX1SZpJQkT1jLjQlNzoUhh8lNc7QvhPWVrbAj3scc+1xVnS5MJsokEV21Cx
# /bp3mFwCL+Q4gjsMKx1nKSvxLv8xlxRtIilmlfj+wvpkenIfIwHYjbvItJTlAy1L
# +arx8fqImNQorxO6oMjOuAlSbNnDKup5qvwGghyu/qz/YEnGQVzN6gI324Km081L
# 1u31H/B3C2rj3qMsYMp5yOqgprXi1D5c6wfYIpLD/C4UfHgIlRiprawZPDM7fAhX
# vxhUhhj3e9OgkbC9yqd6SUR2Uk3YaQlp319LyoZa3VKSvjBTciFsMXXnIV1UitYp
# BGtz8+FypPVkYH7zQB9c7Q==
# =ey1m
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 09 May 2022 04:57:47 AM PDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]

* tag 'pull-target-arm-20220509' of https://git.linaro.org/people/pmaydell/qemu-arm: (32 commits)
  hw/acpi/aml-build: Use existing CPU topology to build PPTT table
  hw/arm/virt: Fix CPU's default NUMA node ID
  qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
  hw/arm/virt: Consider SMP configuration in CPU topology
  qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
  qapi/machine.json: Add cluster-id
  hw/arm: add versioning to sbsa-ref machine DT
  target/arm: Define neoverse-n1
  target/arm: Define cortex-a76
  target/arm: Enable FEAT_DGH for -cpu max
  target/arm: Enable FEAT_CSV3 for -cpu max
  target/arm: Enable FEAT_CSV2_2 for -cpu max
  target/arm: Enable FEAT_CSV2 for -cpu max
  target/arm: Enable FEAT_IESB for -cpu max
  target/arm: Enable FEAT_RAS for -cpu max
  target/arm: Implement ESB instruction
  target/arm: Implement virtual SError exceptions
  target/arm: Enable SCR and HCR bits for RAS
  target/arm: Add minimal RAS registers
  target/arm: Enable FEAT_Debugv8p4 for -cpu max
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Committed by Richard Henderson, 2022-05-09 09:33:53 -07:00
commit b0c3c60366
25 changed files with 1068 additions and 562 deletions


@@ -62,7 +62,8 @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
-Leif Lindholm <leif@nuviainc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
 Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
 Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
 Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>


@@ -886,7 +886,7 @@ F: include/hw/ssi/imx_spi.h
 SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <leif@nuviainc.com>
+R: Leif Lindholm <quic_llindhol@quicinc.com>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/sbsa-ref.c


@@ -12,8 +12,16 @@ the following architecture extensions:
 - FEAT_BBM at level 2 (Translation table break-before-make levels)
 - FEAT_BF16 (AArch64 BFloat16 instructions)
 - FEAT_BTI (Branch Target Identification)
+- FEAT_CSV2 (Cache speculation variant 2)
+- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
+- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
+- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
+- FEAT_CSV3 (Cache speculation variant 3)
+- FEAT_DGH (Data gathering hint)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
+- FEAT_Debugv8p2 (Debug changes for v8.2)
+- FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
@@ -23,6 +31,7 @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_IESB (Implicit error synchronization event)
 - FEAT_JSCVT (JavaScript conversion instructions)
 - FEAT_LOR (Limited ordering regions)
 - FEAT_LPA (Large Physical Address space)
@@ -40,6 +49,7 @@ the following architecture extensions:
 - FEAT_PMULL (PMULL, PMULL2 instructions)
 - FEAT_PMUv3p1 (PMU Extensions v3.1)
 - FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
 - FEAT_SB (Speculation Barrier)


@@ -55,8 +55,10 @@ Supported guest CPU types:
 - ``cortex-a53`` (64-bit)
 - ``cortex-a57`` (64-bit)
 - ``cortex-a72`` (64-bit)
+- ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
+- ``neoverse-n1`` (64-bit)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
 
 Note that the default is ``cortex-a15``, so for an AArch64 guest you must


@@ -2002,86 +2002,71 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                 const char *oem_id, const char *oem_table_id)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
-    GQueue *list = g_queue_new();
-    guint pptt_start = table_data->len;
-    guint parent_offset;
-    guint length, i;
-    int uid = 0;
-    int socket;
+    CPUArchIdList *cpus = ms->possible_cpus;
+    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+    uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
+    uint32_t pptt_start = table_data->len;
+    int n;
     AcpiTable table = { .sig = "PPTT", .rev = 2,
                         .oem_id = oem_id, .oem_table_id = oem_table_id };
 
     acpi_table_begin(&table, table_data);
 
-    for (socket = 0; socket < ms->smp.sockets; socket++) {
-        g_queue_push_tail(list,
-            GUINT_TO_POINTER(table_data->len - pptt_start));
-        build_processor_hierarchy_node(
-            table_data,
-            /*
-             * Physical package - represents the boundary
-             * of a physical package
-             */
-            (1 << 0),
-            0, socket, NULL, 0);
-    }
-
-    if (mc->smp_props.clusters_supported) {
-        length = g_queue_get_length(list);
-        for (i = 0; i < length; i++) {
-            int cluster;
-
-            parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, cluster, NULL, 0);
-            }
-        }
-    }
-
-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int core;
-
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (core = 0; core < ms->smp.cores; core++) {
-            if (ms->smp.threads > 1) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, core, NULL, 0);
-            } else {
-                build_processor_hierarchy_node(
-                    table_data,
-                    (1 << 1) | /* ACPI Processor ID valid */
-                    (1 << 3),  /* Node is a Leaf */
-                    parent_offset, uid++, NULL, 0);
-            }
-        }
-    }
-
-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int thread;
-
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (thread = 0; thread < ms->smp.threads; thread++) {
-            build_processor_hierarchy_node(
-                table_data,
-                (1 << 1) | /* ACPI Processor ID valid */
-                (1 << 2) | /* Processor is a Thread */
-                (1 << 3),  /* Node is a Leaf */
-                parent_offset, uid++, NULL, 0);
-        }
-    }
-
-    g_queue_free(list);
+    /*
+     * This works with the assumption that cpus[n].props.*_id has been
+     * sorted from top to down levels in mc->possible_cpu_arch_ids().
+     * Otherwise, the unexpected and duplicated containers will be
+     * created.
+     */
+    for (n = 0; n < cpus->len; n++) {
+        if (cpus->cpus[n].props.socket_id != socket_id) {
+            assert(cpus->cpus[n].props.socket_id > socket_id);
+            socket_id = cpus->cpus[n].props.socket_id;
+            cluster_id = -1;
+            core_id = -1;
+            socket_offset = table_data->len - pptt_start;
+            build_processor_hierarchy_node(table_data,
+                (1 << 0), /* Physical package */
+                0, socket_id, NULL, 0);
+        }
+
+        if (mc->smp_props.clusters_supported) {
+            if (cpus->cpus[n].props.cluster_id != cluster_id) {
+                assert(cpus->cpus[n].props.cluster_id > cluster_id);
+                cluster_id = cpus->cpus[n].props.cluster_id;
+                core_id = -1;
+                cluster_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    socket_offset, cluster_id, NULL, 0);
+            }
+        } else {
+            cluster_offset = socket_offset;
+        }
+
+        if (ms->smp.threads == 1) {
+            build_processor_hierarchy_node(table_data,
+                (1 << 1) | /* ACPI Processor ID valid */
+                (1 << 3),  /* Node is a Leaf */
+                cluster_offset, n, NULL, 0);
+        } else {
+            if (cpus->cpus[n].props.core_id != core_id) {
+                assert(cpus->cpus[n].props.core_id > core_id);
+                core_id = cpus->cpus[n].props.core_id;
+                core_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    cluster_offset, core_id, NULL, 0);
+            }
+
+            build_processor_hierarchy_node(table_data,
+                (1 << 1) | /* ACPI Processor ID valid */
+                (1 << 2) | /* Processor is a Thread */
+                (1 << 3),  /* Node is a Leaf */
+                core_offset, n, NULL, 0);
+        }
+    }
 
     acpi_table_end(linker, &table);
 }
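The rewritten build_pptt() above depends on mc->possible_cpu_arch_ids() returning CPUs sorted socket -> cluster -> core -> thread, and it emits a new container node only when an ID changes from one CPU to the next. Below is a minimal standalone sketch of that single-pass pattern; the struct, the emit() helper and the printf reporting are assumptions made for this example only, not QEMU APIs (the real code records table offsets instead of printing).

    /*
     * Sketch of the "emit a parent node only when its ID changes" pass,
     * assuming the input array is sorted socket -> cluster -> core.
     * Illustrative only; not the QEMU implementation.
     */
    #include <assert.h>
    #include <stdio.h>

    struct cpu_ids { int socket, cluster, core; };

    static void emit(const char *kind, int id, int parent)
    {
        printf("%s %d (parent %d)\n", kind, id, parent);
    }

    static void build_topology(const struct cpu_ids *cpus, int len)
    {
        int socket = -1, cluster = -1, core = -1;

        for (int n = 0; n < len; n++) {
            if (cpus[n].socket != socket) {
                assert(cpus[n].socket > socket); /* relies on sorted input */
                socket = cpus[n].socket;
                cluster = core = -1;             /* new socket resets children */
                emit("socket", socket, -1);
            }
            if (cpus[n].cluster != cluster) {
                cluster = cpus[n].cluster;
                core = -1;
                emit("cluster", cluster, socket);
            }
            if (cpus[n].core != core) {
                core = cpus[n].core;
                emit("core", core, cluster);
            }
            emit("thread", n, core);             /* one leaf per CPU index */
        }
    }

    int main(void)
    {
        /* 1 socket x 2 clusters x 2 cores x 1 thread */
        const struct cpu_ids cpus[] = { {0, 0, 0}, {0, 0, 1}, {0, 1, 0}, {0, 1, 1} };
        build_topology(cpus, 4);
        return 0;
    }

Fed an unsorted array, the assert fires (for the socket level in this sketch), which corresponds to the "unexpected and duplicated containers" failure mode the comment in the hunk warns about.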


@@ -145,6 +145,8 @@ static const int sbsa_ref_irqmap[] = {
 static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("max"),
 };
 
@@ -190,6 +192,20 @@ static void create_fdt(SBSAMachineState *sms)
     qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
     qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
 
+    /*
+     * This versioning scheme is for informing platform fw only. It is neither:
+     * - A QEMU versioned machine type; a given version of QEMU will emulate
+     *   a given version of the platform.
+     * - A reflection of level of SBSA (now SystemReady SR) support provided.
+     *
+     * machine-version-major: updated when changes breaking fw compatibility
+     *                        are introduced.
+     * machine-version-minor: updated when features are added that don't break
+     *                        fw compatibility.
+     */
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
+
     if (ms->numa_state->have_numa_distance) {
         int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
         uint32_t *matrix = g_malloc0(size);
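The machine-version-major/minor cells are meant to be read back out of the DT by platform firmware. A hedged sketch of what that consumer side could look like with libfdt follows; the property names come from the hunk above, while the function itself, its error handling and the printf reporting are assumptions made for this example.

    #include <libfdt.h>
    #include <stdio.h>

    /* Illustrative only: report the sbsa-ref DT machine version, if present. */
    static void print_machine_version(const void *fdt)
    {
        int root = fdt_path_offset(fdt, "/");
        const fdt32_t *major, *minor;
        int len;

        if (root < 0) {
            return; /* not a valid flattened device tree */
        }
        major = fdt_getprop(fdt, root, "machine-version-major", &len);
        minor = fdt_getprop(fdt, root, "machine-version-minor", &len);
        if (major && minor) {
            printf("sbsa-ref machine version %u.%u\n",
                   fdt32_to_cpu(*major), fdt32_to_cpu(*minor));
        }
    }

Firmware that finds no properties at all can simply assume version 0.0, which is what this first revision of the scheme advertises anyway.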


@@ -202,7 +202,9 @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a53"),
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("a64fx"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("host"),
     ARM_CPU_TYPE_NAME("max"),
 };
@@ -2552,7 +2554,9 @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 
 static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx % ms->numa_state->num_nodes;
+    int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+    return socket_id % ms->numa_state->num_nodes;
 }
 
 static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
@@ -2560,6 +2564,7 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
     int n;
     unsigned int max_cpus = ms->smp.max_cpus;
     VirtMachineState *vms = VIRT_MACHINE(ms);
+    MachineClass *mc = MACHINE_GET_CLASS(vms);
 
     if (ms->possible_cpus) {
         assert(ms->possible_cpus->len == max_cpus);
@@ -2573,8 +2578,20 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[n].type = ms->cpu_type;
         ms->possible_cpus->cpus[n].arch_id =
             virt_cpu_mp_affinity(vms, n);
+
+        assert(!mc->smp_props.dies_supported);
+        ms->possible_cpus->cpus[n].props.has_socket_id = true;
+        ms->possible_cpus->cpus[n].props.socket_id =
+            n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+        ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+        ms->possible_cpus->cpus[n].props.cluster_id =
+            (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
+        ms->possible_cpus->cpus[n].props.has_core_id = true;
+        ms->possible_cpus->cpus[n].props.core_id =
+            (n / ms->smp.threads) % ms->smp.cores;
         ms->possible_cpus->cpus[n].props.has_thread_id = true;
-        ms->possible_cpus->cpus[n].props.thread_id = n;
+        ms->possible_cpus->cpus[n].props.thread_id =
+            n % ms->smp.threads;
     }
     return ms->possible_cpus;
 }
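The new virt_possible_cpu_arch_ids() code derives each topology ID purely from the linear CPU index and the -smp configuration. The arithmetic is easy to sanity-check in isolation; the short standalone program below (an illustration, not QEMU code) prints the mapping for -smp 8,sockets=2,clusters=2,cores=2,threads=1, matching the expressions in the hunk above.

    #include <stdio.h>

    int main(void)
    {
        const int sockets = 2, clusters = 2, cores = 2, threads = 1;
        const int max_cpus = sockets * clusters * cores * threads;

        for (int n = 0; n < max_cpus; n++) {
            int socket_id  = n / (clusters * cores * threads);
            int cluster_id = (n / (cores * threads)) % clusters;
            int core_id    = (n / threads) % cores;
            int thread_id  = n % threads;

            printf("cpu %d: socket %d cluster %d core %d thread %d\n",
                   n, socket_id, cluster_id, core_id, thread_id);
        }
        return 0;
    }

With this layout, the revised virt_get_default_cpu_node_id() spreads whole sockets across NUMA nodes instead of assigning individual CPUs round-robin, which is the fix for the virt board's default NUMA topology mentioned in the cover letter.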


@@ -77,6 +77,10 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
         if (c->has_die_id) {
             monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
         }
+        if (c->has_cluster_id) {
+            monitor_printf(mon, "    cluster-id: \"%" PRIu64 "\"\n",
+                           c->cluster_id);
+        }
         if (c->has_core_id) {
             monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
         }


@@ -682,6 +682,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
         return;
     }
 
+    if (props->has_cluster_id && !slot->props.has_cluster_id) {
+        error_setg(errp, "cluster-id is not supported");
+        return;
+    }
+
     if (props->has_socket_id && !slot->props.has_socket_id) {
         error_setg(errp, "socket-id is not supported");
         return;
@@ -701,6 +706,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
             continue;
         }
 
+        if (props->has_cluster_id &&
+            props->cluster_id != slot->props.cluster_id) {
+                continue;
+        }
+
         if (props->has_die_id && props->die_id != slot->props.die_id) {
             continue;
         }
@@ -995,6 +1005,12 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
         }
         g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
     }
+    if (cpu->props.has_cluster_id) {
+        if (s->len) {
+            g_string_append_printf(s, ", ");
+        }
+        g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+    }
     if (cpu->props.has_core_id) {
         if (s->len) {
             g_string_append_printf(s, ", ");


@@ -868,10 +868,11 @@
 # @node-id: NUMA node ID the CPU belongs to
 # @socket-id: socket number within node/board the CPU belongs to
 # @die-id: die number within socket the CPU belongs to (since 4.1)
-# @core-id: core number within die the CPU belongs to
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
+# @core-id: core number within cluster the CPU belongs to
 # @thread-id: thread number within core the CPU belongs to
 #
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
 #       but management should be prepared to pass through other
 #       properties with device_add command to allow for future
 #       interface extension. This also requires the filed names to be kept in
@@ -883,6 +884,7 @@
   'data': { '*node-id': 'int',
             '*socket-id': 'int',
             '*die-id': 'int',
+            '*cluster-id': 'int',
             '*core-id': 'int',
             '*thread-id': 'int'
   }


@@ -187,13 +187,17 @@ SMULTT           .... 0001 0110 .... 0000 .... 1110 .... @rd0mn
 {
   {
-    YIELD        ---- 0011 0010 0000 1111 ---- 0000 0001
-    WFE          ---- 0011 0010 0000 1111 ---- 0000 0010
-    WFI          ---- 0011 0010 0000 1111 ---- 0000 0011
+    [
+      YIELD      ---- 0011 0010 0000 1111 ---- 0000 0001
+      WFE        ---- 0011 0010 0000 1111 ---- 0000 0010
+      WFI        ---- 0011 0010 0000 1111 ---- 0000 0011
 
-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV        ---- 0011 0010 0000 1111 ---- 0000 0100
-    # SEVL       ---- 0011 0010 0000 1111 ---- 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV      ---- 0011 0010 0000 1111 ---- 0000 0100
+      # SEVL     ---- 0011 0010 0000 1111 ---- 0000 0101
+
+      ESB        ---- 0011 0010 0000 1111 ---- 0001 0000
+    ]
 
     # The canonical nop ends in 00000000, but the whole of the
     # rest of the space executes as nop if otherwise unsupported.


@@ -102,6 +102,17 @@ enum {
     ARM_CP_SVE                   = 1 << 14,
     /* Flag: Do not expose in gdb sysreg xml. */
     ARM_CP_NO_GDB                = 1 << 15,
+    /*
+     * Flags: If EL3 but not EL2...
+     *  - UNDEF: discard the cpreg,
+     *  -  KEEP: retain the cpreg as is,
+     *  -  C_NZ: set const on the cpreg, but retain resetvalue,
+     *  -  else: set const on the cpreg, zero resetvalue, aka RES0.
+     * See rule RJFFP in section D1.1.3 of DDI0487H.a.
+     */
+    ARM_CP_EL3_NO_EL2_UNDEF      = 1 << 16,
+    ARM_CP_EL3_NO_EL2_KEEP       = 1 << 17,
+    ARM_CP_EL3_NO_EL2_C_NZ       = 1 << 18,
 };
 
 /*
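The three new flags encode the four behaviours listed in the comment for the "EL3 without EL2" case. Purely as an illustration of how a registration path might act on them (this is not the actual logic in helper.c, and the _example names are invented for this sketch):

    /* Illustrative only; simplified stand-ins for the ARM_CP_* flags above. */
    enum {
        CP_EL3_NO_EL2_UNDEF = 1 << 0,
        CP_EL3_NO_EL2_KEEP  = 1 << 1,
        CP_EL3_NO_EL2_C_NZ  = 1 << 2,
    };

    struct cpreg_example {
        int flags;
        int is_const;
        unsigned long long resetvalue;
    };

    /* Returns 0 if the register should be discarded entirely. */
    static int apply_el3_no_el2_example(struct cpreg_example *r)
    {
        if (r->flags & CP_EL3_NO_EL2_UNDEF) {
            return 0;                /* UNDEF: drop the cpreg */
        }
        if (r->flags & CP_EL3_NO_EL2_KEEP) {
            return 1;                /* KEEP: leave it as is */
        }
        r->is_const = 1;             /* C_NZ and the default both become const */
        if (!(r->flags & CP_EL3_NO_EL2_C_NZ)) {
            r->resetvalue = 0;       /* default: RES0 */
        }
        return 1;
    }

The helper.c hunk further down shows the real flags being attached to FPEXC32_EL2, DACR32_EL2 and IFSR32_EL2, all of which keep their behaviour (KEEP) when EL3 is present without EL2.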


@@ -85,7 +85,7 @@ static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
-         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
+         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
          | CPU_INTERRUPT_EXITTB);
 }
 
@@ -230,6 +230,11 @@ static void arm_cpu_reset(DeviceState *dev)
          */
         env->cp15.gcr_el1 = 0x1ffff;
     }
+    /*
+     * Disable access to SCXTNUM_EL0 from CSV2_1p2.
+     * This is not yet exposed from the Linux kernel in any way.
+     */
+    env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
 #else
     /* Reset into the highest available EL */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
@@ -511,6 +516,12 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             return false;
         }
         return !(env->daif & PSTATE_I);
+    case EXCP_VSERR:
+        if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+            /* VIRQs are only taken when hypervized. */
+            return false;
+        }
+        return !(env->daif & PSTATE_A);
     default:
         g_assert_not_reached();
     }
@@ -632,6 +643,17 @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+    if (interrupt_request & CPU_INTERRUPT_VSERR) {
+        excp_idx = EXCP_VSERR;
+        target_el = 1;
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            /* Taking a virtual abort clears HCR_EL2.VSE */
+            env->cp15.hcr_el2 &= ~HCR_VSE;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+            goto found;
+        }
+    }
     return false;
 
  found:
@@ -684,6 +706,25 @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+        }
+    }
+}
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
@@ -1793,11 +1834,14 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          */
         unset_feature(env, ARM_FEATURE_EL3);
 
-        /* Disable the security extension feature bits in the processor feature
-         * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
+        /*
+         * Disable the security extension feature bits in the processor
+         * feature registers as well.
          */
-        cpu->isar.id_pfr1 &= ~0xf0;
-        cpu->isar.id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL3, 0);
     }
 
     if (!cpu->has_el2) {
@@ -1828,12 +1872,14 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
-        /* Disable the hypervisor feature bits in the processor feature
-         * registers if we don't have EL2. These are id_pfr1[15:12] and
-         * id_aa64pfr0_el1[11:8].
+        /*
+         * Disable the hypervisor feature bits in the processor feature
+         * registers if we don't have EL2.
          */
-        cpu->isar.id_aa64pfr0 &= ~0xf00;
-        cpu->isar.id_pfr1 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL2, 0);
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
+                                       ID_PFR1, VIRTUALIZATION, 0);
     }
 
 #ifndef CONFIG_USER_ONLY


@@ -56,6 +56,7 @@
 #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
 #define EXCP_DIVBYZERO      23   /* v7M DIVBYZERO UsageFault */
+#define EXCP_VSERR          24
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET   1
@@ -89,6 +90,7 @@ enum {
 #define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
 #define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
@@ -525,6 +527,11 @@ typedef struct CPUArchState {
         uint64_t tfsr_el[4]; /* tfsre0_el1 is index 0. */
         uint64_t gcr_el1;
         uint64_t rgsr_el1;
+
+        /* Minimal RAS registers */
+        uint64_t disr_el1;
+        uint64_t vdisr_el2;
+        uint64_t vsesr_el2;
     } cp15;
 
     struct {
@@ -681,6 +688,8 @@ typedef struct CPUArchState {
         ARMPACKey apdb;
         ARMPACKey apga;
     } keys;
+
+    uint64_t scxtnum_el[4];
 #endif
 
 #if defined(CONFIG_USER_ONLY)
@@ -1204,6 +1213,7 @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_WXN     (1U << 19)
 #define SCTLR_ST      (1U << 20) /* up to ??, RAZ in v6 */
 #define SCTLR_UWXN    (1U << 20) /* v7 onward, AArch32 only */
+#define SCTLR_TSCXT   (1U << 20) /* FEAT_CSV2_1p2, AArch64 only */
 #define SCTLR_FI      (1U << 21) /* up to v7, v8 RES0 */
 #define SCTLR_IESB    (1U << 21) /* v8.2-IESB, AArch64 only */
 #define SCTLR_U       (1U << 22) /* up to v6, RAO in v7 */
@@ -4015,6 +4025,19 @@ static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
 }
 
+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
+{
+    int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
+    if (key >= 2) {
+        return true;      /* FEAT_CSV2_2 */
+    }
+    if (key == 1) {
+        key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
+        return key >= 2;  /* FEAT_CSV2_1p2 */
+    }
+    return false;
+}
+
 static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
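The new isar_feature_aa64_scxtnum() test is where the CSV2 tiering matters: ID_AA64PFR0.CSV2 >= 2 means FEAT_CSV2_2, while CSV2 == 1 defers to ID_AA64PFR1.CSV2_FRAC to distinguish CSV2_1p1 from CSV2_1p2. A simplified restatement with the two fields passed in directly (no FIELD_EX64, so this is an illustration rather than the cpu.h code):

    #include <stdbool.h>
    #include <stdio.h>

    static bool have_scxtnum(unsigned csv2, unsigned csv2_frac)
    {
        if (csv2 >= 2) {
            return true;           /* FEAT_CSV2_2 implies SCXTNUM_ELx */
        }
        if (csv2 == 1) {
            return csv2_frac >= 2; /* FEAT_CSV2_1p2 also provides SCXTNUM_ELx */
        }
        return false;
    }

    int main(void)
    {
        printf("CSV2=2, FRAC=0 -> %d\n", have_scxtnum(2, 0)); /* 1: -cpu max */
        printf("CSV2=1, FRAC=1 -> %d\n", have_scxtnum(1, 1)); /* 0: CSV2_1p1 */
        printf("CSV2=1, FRAC=2 -> %d\n", have_scxtnum(1, 2)); /* 1: CSV2_1p2 */
        return 0;
    }

This ties in with the arm_cpu_reset() change earlier, which sets SCTLR_TSCXT in the user-only reset path so that SCXTNUM_EL0 accesses trap, matching the comment that Linux does not yet expose the register.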


@@ -34,65 +34,9 @@
 #include "hvf_arm.h"
 #include "qapi/visitor.h"
 #include "hw/qdev-properties.h"
-#include "cpregs.h"
-
-#ifndef CONFIG_USER_ONLY
-static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    /* Number of cores is in [25:24]; otherwise we RAZ */
-    return (cpu->core_count - 1) << 24;
-}
-#endif
-
-static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
-#ifndef CONFIG_USER_ONLY
-    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-    { .name = "L2CTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-#endif
-    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ECTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR",
-      .cp = 15, .opc1 = 0, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUECTLR",
-      .cp = 15, .opc1 = 1, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUMERRSR",
-      .cp = 15, .opc1 = 2, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2MERRSR",
-      .cp = 15, .opc1 = 3, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-};
+#include "internals.h"
 
 static void aarch64_a57_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
@@ -143,7 +87,7 @@ static void aarch64_a57_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a53_initfn(Object *obj)
@@ -196,7 +140,7 @@ static void aarch64_a53_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a72_initfn(Object *obj)
@@ -247,7 +191,137 @@ static void aarch64_a72_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
+}
+
+static void aarch64_a76_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,cortex-a76";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444C004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0 = 0x0000000010305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1 = 0x0000000000000010ull;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_dfr0 = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0 = 0x10010131;
+    cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2 = 0x00000011;
+    cpu->midr = 0x414fd0b1;          /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.18 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x707fe03a; /* 512KB L2 cache */
+
+    /* From B2.93 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+}
+
+static void aarch64_neoverse_n1_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,neoverse-n1";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444c004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0 = 0x0000000110305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1 = 0x0000000000000020ull;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_dfr0 = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0 = 0x10010131;
+    cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2 = 0x00000011;
+    cpu->midr = 0x414fd0c1;          /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.23 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
+
+    /* From B2.98 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
 }
 
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
@@ -738,7 +812,6 @@ static void aarch64_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
     uint64_t t;
-    uint32_t u;
 
     if (kvm_enabled() || hvf_enabled()) {
         /* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
@@ -770,51 +843,56 @@ static void aarch64_max_initfn(Object *obj)
     cpu->midr = t;
 
     t = cpu->isar.id_aa64isar0;
-    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
     t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
+    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); /* FEAT_SM3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); /* FEAT_SM4 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); /* FEAT_DotProd */
+    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* FEAT_FlagM2 */
     t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
-    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); /* FEAT_RNG */
     cpu->isar.id_aa64isar0 = t;
 
     t = cpu->isar.id_aa64isar1;
-    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
-    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); /* FEAT_DPB2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); /* FEAT_FCMA */
+    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* FEAT_LRCPC2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); /* FEAT_FRINTTS */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1); /* FEAT_DGH */
+    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
     cpu->isar.id_aa64isar1 = t;
 
     t = cpu->isar.id_aa64pfr0;
+    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1); /* FEAT_RAS */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
+    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1); /* FEAT_CSV3 */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
-    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
-    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
+    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); /* FEAT_BTI */
+    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); /* FEAT_SSBS2 */
     /*
      * Begin with full support for MTE. This will be downgraded to MTE=0
      * during realize if the board provides no tag memory, much like
      * we do for EL2 with the virtualization=on property.
      */
-    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
+    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
+    t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
     cpu->isar.id_aa64pfr1 = t;
 
     t = cpu->isar.id_aa64mmfr0;
@@ -826,86 +904,43 @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64mmfr0 = t;
 
     t = cpu->isar.id_aa64mmfr1;
-    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
-    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
+    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
+    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
+    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
    cpu->isar.id_aa64mmfr1 = t;
 
     t = cpu->isar.id_aa64mmfr2;
-    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
-    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
-    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
-    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
+    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
+    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
     cpu->isar.id_aa64mmfr2 = t;
 
     t = cpu->isar.id_aa64zfr0;
     t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
-    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* FEAT_SVE_PMULL128 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); /* FEAT_SVE_BitPerm */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); /* FEAT_SVE_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); /* FEAT_SVE_SM4 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); /* FEAT_I8MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); /* FEAT_F32MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); /* FEAT_F64MM */
     cpu->isar.id_aa64zfr0 = t;
 
-    /* Replicate the same data to the 32-bit id registers.  */
-    u = cpu->isar.id_isar5;
-    u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
-    u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
-    u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
-    u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
-    u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
-    u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
-    cpu->isar.id_isar5 = u;
-
-    u = cpu->isar.id_isar6;
-    u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
-    u = FIELD_DP32(u, ID_ISAR6, DP, 1);
-    u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SB, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
-    u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
-    u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
-    cpu->isar.id_isar6 = u;
-
-    u = cpu->isar.id_pfr0;
-    u = FIELD_DP32(u, ID_PFR0, DIT, 1);
-    cpu->isar.id_pfr0 = u;
-
-    u = cpu->isar.id_pfr2;
-    u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
-    cpu->isar.id_pfr2 = u;
-
-    u = cpu->isar.id_mmfr3;
-    u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
-    cpu->isar.id_mmfr3 = u;
-
-    u = cpu->isar.id_mmfr4;
-    u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
-    u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
-    cpu->isar.id_mmfr4 = u;
-
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 9); /* FEAT_Debugv8p4 */
+    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
-    u = cpu->isar.id_dfr0;
-    u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
-    cpu->isar.id_dfr0 = u;
-
-    u = cpu->isar.mvfr1;
-    u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-    cpu->isar.mvfr1 = u;
+    /* Replicate the same data to the 32-bit id registers.  */
+    aa32_max_features(cpu);
 
 #ifdef CONFIG_USER_ONLY
     /*
@@ -976,7 +1011,9 @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "cortex-a57",         .initfn = aarch64_a57_initfn },
     { .name = "cortex-a53",         .initfn = aarch64_a53_initfn },
     { .name = "cortex-a72",         .initfn = aarch64_a72_initfn },
+    { .name = "cortex-a76",         .initfn = aarch64_a76_initfn },
     { .name = "a64fx",              .initfn = aarch64_a64fx_initfn },
+    { .name = "neoverse-n1",        .initfn = aarch64_neoverse_n1_initfn },
     { .name = "max",                .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     { .name = "host",               .initfn = aarch64_host_initfn },


@@ -20,6 +20,130 @@
 #endif
 #include "cpregs.h"
 
+/* Share AArch32 -cpu max features with AArch64.  */
+void aa32_max_features(ARMCPU *cpu)
+{
+    uint32_t t;
+
+    /* Add additional features supported by QEMU */
+    t = cpu->isar.id_isar5;
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); /* FEAT_SHA256 */
+    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); /* FEAT_FCMA */
+    cpu->isar.id_isar5 = t;
+
+    t = cpu->isar.id_isar6;
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1); /* Feat_DotProd */
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1); /* FEAT_SB */
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1); /* FEAT_AA32BF16 */
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); /* FEAT_AA32I8MM */
+    cpu->isar.id_isar6 = t;
+
+    t = cpu->isar.mvfr1;
+    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* FEAT_FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* FEAT_FP16 */
+    cpu->isar.mvfr1 = t;
+
+    t = cpu->isar.mvfr2;
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+    cpu->isar.mvfr2 = t;
+
+    t = cpu->isar.id_mmfr3;
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* FEAT_PAN2 */
+    cpu->isar.id_mmfr3 = t;
+
+    t = cpu->isar.id_mmfr4;
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX*/
+    cpu->isar.id_mmfr4 = t;
+
+    t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, CSV2, 2); /* FEAT_CVS2 */
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
+    t = FIELD_DP32(t, ID_PFR0, RAS, 1); /* FEAT_RAS */
+    cpu->isar.id_pfr0 = t;
+
+    t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, CSV3, 1); /* FEAT_CSV3 */
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
+    cpu->isar.id_pfr2 = t;
+
+    t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 9); /* FEAT_Debugv8p4 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 9); /* FEAT_Debugv8p4 */
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
+    cpu->isar.id_dfr0 = t;
+}
+
+#ifndef CONFIG_USER_ONLY
+static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    /* Number of cores is in [25:24]; otherwise we RAZ */
+    return (cpu->core_count - 1) << 24;
+}
+
+static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
+    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2CTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ECTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR",
+      .cp = 15, .opc1 = 0, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUECTLR",
+      .cp = 15, .opc1 = 1, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUMERRSR",
+      .cp = 15, .opc1 = 2, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2MERRSR",
+      .cp = 15, .opc1 = 3, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+};
+
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu)
+{
+    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+}
+#endif /* !CONFIG_USER_ONLY */
+
 /* CPU models. These are not needed for the AArch64 linux-user build. */
 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
@@ -936,70 +1060,55 @@ static void arm_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
 
-    cortex_a15_initfn(obj);
+    /* aarch64_a57_initfn, advertising none of the aarch64 features */
+    cpu->dtb_compatible = "arm,cortex-a57";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+    cpu->midr = 0x411fd070;
+    cpu->revidr = 0x00000000;
+    cpu->reset_fpsid = 0x41034070;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
+    cpu->ctr = 0x8444c004;
+    cpu->reset_sctlr = 0x00c50838;
+    cpu->isar.id_pfr0 = 0x00000131;
+    cpu->isar.id_pfr1 = 0x00011011;
+    cpu->isar.id_dfr0 = 0x03010066;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x10101105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02102211;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.dbgdidr = 0x3516d000;
+    cpu->clidr = 0x0a200023;
+    cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 
-    /* old-style VFP short-vector support */
-    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    aa32_max_features(cpu);
 
 #ifdef CONFIG_USER_ONLY
     /*
-     * We don't set these in system emulation mode for the moment,
-     * since we don't correctly set (all of) the ID registers to
-     * advertise them.
+     * Break with true ARMv8 and add back old-style VFP short-vector support.
+     * Only do this for user-mode, where -cpu max is the default, so that
+     * older v6 and v7 programs are more likely to work without adjustment.
      */
-    set_feature(&cpu->env, ARM_FEATURE_V8);
-    {
-        uint32_t t;
-
-        t = cpu->isar.id_isar5;
-        t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-        t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-        t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
-        t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-        t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-        t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
-        cpu->isar.id_isar5 = t;
-
-        t = cpu->isar.id_isar6;
-        t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-        t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-        t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-        t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-        t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
-        cpu->isar.id_isar6 = t;
-
-        t = cpu->isar.mvfr1;
-        t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
-        t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-        cpu->isar.mvfr1 = t;
-
-        t = cpu->isar.mvfr2;
-        t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-        t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
-        cpu->isar.mvfr2 = t;
-
-        t = cpu->isar.id_mmfr3;
-        t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
-        cpu->isar.id_mmfr3 = t;
-
-        t = cpu->isar.id_mmfr4;
-        t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-        t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-        t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
-        t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
-        cpu->isar.id_mmfr4 = t;
-
-        t = cpu->isar.id_pfr0;
-        t = FIELD_DP32(t, ID_PFR0, DIT, 1);
-        cpu->isar.id_pfr0 = t;
-
-        t = cpu->isar.id_pfr2;
-        t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
-        cpu->isar.id_pfr2 = t;
-    }
-#endif /* CONFIG_USER_ONLY */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+#endif
 }
 
 #endif /* !TARGET_AARCH64 */


@ -1755,6 +1755,9 @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
} }
valid_mask &= ~SCR_NET; valid_mask &= ~SCR_NET;
if (cpu_isar_feature(aa64_ras, cpu)) {
valid_mask |= SCR_TERR;
}
if (cpu_isar_feature(aa64_lor, cpu)) {
valid_mask |= SCR_TLOR;
}
@@ -1767,8 +1770,14 @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
if (cpu_isar_feature(aa64_mte, cpu)) {
valid_mask |= SCR_ATA;
}
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
valid_mask |= SCR_ENSCXT;
}
} else {
valid_mask &= ~(SCR_RW | SCR_ST);
if (cpu_isar_feature(aa32_ras, cpu)) {
valid_mask |= SCR_TERR;
}
}
if (!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -1857,7 +1866,12 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
}
}
- /* External aborts are not possible in QEMU so A bit is always clear */
+ if (hcr_el2 & HCR_AMO) {
if (cs->interrupt_request & CPU_INTERRUPT_VSERR) {
ret |= CPSR_A;
}
}
return ret;
}
@@ -5056,16 +5070,17 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
.access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write },
{ .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_ALIAS | ARM_CP_FPU,
+ .access = PL2_RW,
+ .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
.fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
{ .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
- .access = PL2_RW, .resetvalue = 0,
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
.writefn = dacr_write, .raw_writefn = raw_write,
.fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
{ .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
- .access = PL2_RW, .resetvalue = 0,
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
.fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
{ .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64,
.type = ARM_CP_ALIAS,
@@ -5098,124 +5113,6 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
};
/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
{ .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0,
.access = PL2_RW,
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
{ .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
.access = PL2_RW,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "HAMAIR1", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
.access = PL2_RW, .type = ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1,
.access = PL2_RW, .type = ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "VTTBR", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 6, .crm = 2,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
.type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
{ .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14,
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14,
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
.resetvalue = 0 },
{ .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
.access = PL2_RW, .accessfn = access_tda,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0,
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "HIFAR", .state = ARM_CP_STATE_AA32,
.type = ARM_CP_CONST,
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
.access = PL2_RW, .resetvalue = 0 },
};
/* Ditto, but for registers which exist in ARMv8 but not v7 */
static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
.access = PL2_RW,
.type = ARM_CP_CONST, .resetvalue = 0 },
};
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
{
ARMCPU *cpu = env_archcpu(env);
@@ -5243,6 +5140,9 @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
if (cpu_isar_feature(aa64_vh, cpu)) {
valid_mask |= HCR_E2H;
}
if (cpu_isar_feature(aa64_ras, cpu)) {
valid_mask |= HCR_TERR | HCR_TEA;
}
if (cpu_isar_feature(aa64_lor, cpu)) {
valid_mask |= HCR_TLOR;
}
@@ -5252,6 +5152,9 @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
if (cpu_isar_feature(aa64_mte, cpu)) {
valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5;
}
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
valid_mask |= HCR_ENSCXT;
}
}
/* Clear RES0 bits. */
@@ -5283,6 +5186,7 @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
g_assert(qemu_mutex_iothread_locked());
arm_cpu_update_virq(cpu);
arm_cpu_update_vfiq(cpu);
arm_cpu_update_vserr(cpu);
}
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -5542,27 +5446,27 @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
.writefn = tlbimva_hyp_is_write },
{ .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
- .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2_write },
{ .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
- .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2_write },
{ .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2_write },
{ .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2is_write },
{ .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
- .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
#ifndef CONFIG_USER_ONLY
/* Unlike the other EL2-related AT operations, these must
@@ -5572,11 +5476,13 @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
{ .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
.access = PL2_W, .accessfn = at_s1e2_access,
- .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+ .writefn = ats_write64 },
{ .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
.access = PL2_W, .accessfn = at_s1e2_access,
- .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+ .writefn = ats_write64 },
/* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
@@ -5900,6 +5806,10 @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
{ K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0),
"TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
{ K(3, 0, 13, 0, 7), K(3, 4, 13, 0, 7), K(3, 5, 13, 0, 7),
"SCXTNUM_EL1", "SCXTNUM_EL2", "SCXTNUM_EL12",
isar_feature_aa64_scxtnum },
/* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
/* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
};
@@ -6076,7 +5986,7 @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
{ .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
.access = PL2_RW, .accessfn = access_tda,
- .type = ARM_CP_NOP },
+ .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
/* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
* Channel but Linux may try to access this register. The 32-bit
* alias is DBGDCCINT.
@@ -6095,6 +6005,87 @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
};
/*
* Check for traps to RAS registers, which are controlled
* by HCR_EL2.TERR and SCR_EL3.TERR.
*/
static CPAccessResult access_terr(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
{
int el = arm_current_el(env);
if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TERR)) {
return CP_ACCESS_TRAP_EL2;
}
if (el < 3 && (env->cp15.scr_el3 & SCR_TERR)) {
return CP_ACCESS_TRAP_EL3;
}
return CP_ACCESS_OK;
}
static uint64_t disr_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
int el = arm_current_el(env);
if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
return env->cp15.vdisr_el2;
}
if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
return 0; /* RAZ/WI */
}
return env->cp15.disr_el1;
}
static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
{
int el = arm_current_el(env);
if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
env->cp15.vdisr_el2 = val;
return;
}
if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
return; /* RAZ/WI */
}
env->cp15.disr_el1 = val;
}
/*
* Minimal RAS implementation with no Error Records.
* Which means that all of the Error Record registers:
* ERXADDR_EL1
* ERXCTLR_EL1
* ERXFR_EL1
* ERXMISC0_EL1
* ERXMISC1_EL1
* ERXMISC2_EL1
* ERXMISC3_EL1
* ERXPFGCDN_EL1 (RASv1p1)
* ERXPFGCTL_EL1 (RASv1p1)
* ERXPFGF_EL1 (RASv1p1)
* ERXSTATUS_EL1
* and
* ERRSELR_EL1
* may generate UNDEFINED, which is the effect we get by not
* listing them at all.
*/
static const ARMCPRegInfo minimal_ras_reginfo[] = {
{ .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 1,
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.disr_el1),
.readfn = disr_read, .writefn = disr_write, .raw_writefn = raw_write },
{ .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
.access = PL1_R, .accessfn = access_terr,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vdisr_el2) },
{ .name = "VSESR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 3,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
};
/* Return the exception level to which exceptions should be taken
* via SVEAccessTrap. If an exception should be routed through
* AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
@@ -6237,35 +6228,22 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
}
}
- static const ARMCPRegInfo zcr_el1_reginfo = {
- .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
- .writefn = zcr_write, .raw_writefn = raw_write
- };
- static const ARMCPRegInfo zcr_el2_reginfo = {
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
- .writefn = zcr_write, .raw_writefn = raw_write
- };
- static const ARMCPRegInfo zcr_no_el2_reginfo = {
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE,
- .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
- };
- static const ARMCPRegInfo zcr_el3_reginfo = {
- .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL3_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
- .writefn = zcr_write, .raw_writefn = raw_write
- };
+ static const ARMCPRegInfo zcr_reginfo[] = {
+ { .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
+ { .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL2_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
+ { .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL3_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
+ };
void hw_watchpoint_update(ARMCPU *cpu, int n) void hw_watchpoint_update(ARMCPU *cpu, int n)
@@ -6892,11 +6870,11 @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
.access = PL2_W, .type = ARM_CP_NOP },
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
@@ -6906,19 +6884,19 @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
.access = PL2_W, .type = ARM_CP_NOP },
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2_write },
{ .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2_write },
{ .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
@@ -6973,11 +6951,11 @@ static const ARMCPRegInfo tlbios_reginfo[] = {
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2is_write },
{ .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
@@ -6985,7 +6963,7 @@ static const ARMCPRegInfo tlbios_reginfo[] = {
.writefn = tlbi_aa64_alle1is_write },
{ .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
@@ -7255,7 +7233,52 @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
},
};
- #endif
+ static CPAccessResult access_scxtnum(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
{
uint64_t hcr = arm_hcr_el2_eff(env);
int el = arm_current_el(env);
if (el == 0 && !((hcr & HCR_E2H) && (hcr & HCR_TGE))) {
if (env->cp15.sctlr_el[1] & SCTLR_TSCXT) {
if (hcr & HCR_TGE) {
return CP_ACCESS_TRAP_EL2;
}
return CP_ACCESS_TRAP;
}
} else if (el < 2 && (env->cp15.sctlr_el[2] & SCTLR_TSCXT)) {
return CP_ACCESS_TRAP_EL2;
}
if (el < 2 && arm_is_el2_enabled(env) && !(hcr & HCR_ENSCXT)) {
return CP_ACCESS_TRAP_EL2;
}
if (el < 3
&& arm_feature(env, ARM_FEATURE_EL3)
&& !(env->cp15.scr_el3 & SCR_ENSCXT)) {
return CP_ACCESS_TRAP_EL3;
}
return CP_ACCESS_OK;
}
static const ARMCPRegInfo scxtnum_reginfo[] = {
{ .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
.access = PL0_RW, .accessfn = access_scxtnum,
.fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
{ .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
.access = PL1_RW, .accessfn = access_scxtnum,
.fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
{ .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
.access = PL2_RW, .accessfn = access_scxtnum,
.fieldoffset = offsetof(CPUARMState, scxtnum_el[2]) },
{ .name = "SCXTNUM_EL3", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 7,
.access = PL3_RW,
.fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
};
#endif /* TARGET_AARCH64 */
static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
@@ -7374,11 +7397,14 @@ static const ARMCPRegInfo jazelle_regs[] = {
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
};
static const ARMCPRegInfo contextidr_el2 = {
.name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
.access = PL2_RW,
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2])
};
static const ARMCPRegInfo vhe_reginfo[] = {
{ .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
.access = PL2_RW,
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
{ .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64, { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1, .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
.access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write, .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
@@ -7899,27 +7925,40 @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_arm_cp_regs(cpu, v8_idregs);
define_arm_cp_regs(cpu, v8_cp_reginfo);
}
- if (arm_feature(env, ARM_FEATURE_EL2)) {
/*
* Register the base EL2 cpregs.
* Pre v8, these registers are implemented only as part of the
* Virtualization Extensions (EL2 present). Beginning with v8,
* if EL2 is missing but EL3 is enabled, mostly these become
* RES0 from EL3, with some specific exceptions.
*/
if (arm_feature(env, ARM_FEATURE_EL2)
|| (arm_feature(env, ARM_FEATURE_EL3)
&& arm_feature(env, ARM_FEATURE_V8))) {
uint64_t vmpidr_def = mpidr_read_val(env);
ARMCPRegInfo vpidr_regs[] = {
{ .name = "VPIDR", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
- .resetvalue = cpu->midr, .type = ARM_CP_ALIAS,
+ .resetvalue = cpu->midr,
+ .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
.fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) },
{ .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
.access = PL2_RW, .resetvalue = cpu->midr,
+ .type = ARM_CP_EL3_NO_EL2_C_NZ,
.fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
{ .name = "VMPIDR", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
- .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS,
+ .resetvalue = vmpidr_def,
+ .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
.fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) },
{ .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
- .access = PL2_RW,
- .resetvalue = vmpidr_def,
+ .access = PL2_RW, .resetvalue = vmpidr_def,
+ .type = ARM_CP_EL3_NO_EL2_C_NZ,
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
};
define_arm_cp_regs(cpu, vpidr_regs);
@@ -7940,33 +7979,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
};
define_one_arm_cp_reg(cpu, &rvbar);
}
} else {
/* If EL2 is missing but higher ELs are enabled, we need to
* register the no_el2 reginfos.
*/
if (arm_feature(env, ARM_FEATURE_EL3)) {
/* When EL3 exists but not EL2, VPIDR and VMPIDR take the value
* of MIDR_EL1 and MPIDR_EL1.
*/
ARMCPRegInfo vpidr_regs[] = {
{ .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
.type = ARM_CP_CONST, .resetvalue = cpu->midr,
.fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
{ .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
.access = PL2_RW, .accessfn = access_el3_aa32ns,
.type = ARM_CP_NO_RAW,
.writefn = arm_cp_write_ignore, .readfn = mpidr_read },
};
define_arm_cp_regs(cpu, vpidr_regs);
define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
if (arm_feature(env, ARM_FEATURE_V8)) {
define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo);
}
}
}
/* Register the base EL3 cpregs. */
if (arm_feature(env, ARM_FEATURE_EL3)) {
define_arm_cp_regs(cpu, el3_cp_reginfo);
ARMCPRegInfo el3_regs[] = {
@@ -8353,21 +8368,20 @@ void register_cp_regs_for_features(ARMCPU *cpu)
if (cpu_isar_feature(aa64_ssbs, cpu)) {
define_one_arm_cp_reg(cpu, &ssbs_reginfo);
}
if (cpu_isar_feature(any_ras, cpu)) {
define_arm_cp_regs(cpu, minimal_ras_reginfo);
}
if (cpu_isar_feature(aa64_vh, cpu) ||
cpu_isar_feature(aa64_debugv8p2, cpu)) {
define_one_arm_cp_reg(cpu, &contextidr_el2);
}
if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
define_arm_cp_regs(cpu, vhe_reginfo);
}
if (cpu_isar_feature(aa64_sve, cpu)) {
- define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
- if (arm_feature(env, ARM_FEATURE_EL2)) {
- define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
- } else {
- define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
- }
- if (arm_feature(env, ARM_FEATURE_EL3)) {
- define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
- }
+ define_arm_cp_regs(cpu, zcr_reginfo);
}
#ifdef TARGET_AARCH64
@@ -8406,6 +8420,10 @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_arm_cp_regs(cpu, mte_tco_ro_reginfo);
define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
}
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
define_arm_cp_regs(cpu, scxtnum_reginfo);
}
#endif
if (cpu_isar_feature(any_predinv, cpu)) {
@@ -8506,13 +8524,14 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
int crm, int opc1, int opc2,
const char *name)
{
CPUARMState *env = &cpu->env;
uint32_t key;
ARMCPRegInfo *r2;
bool is64 = r->type & ARM_CP_64BIT;
bool ns = secstate & ARM_CP_SECSTATE_NS;
int cp = r->cp;
- bool isbanked;
size_t name_len;
+ bool make_const;
switch (state) {
case ARM_CP_STATE_AA32:
@@ -8547,6 +8566,32 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
}
}
/*
* Eliminate registers that are not present because the EL is missing.
* Doing this here makes it easier to put all registers for a given
* feature into the same ARMCPRegInfo array and define them all at once.
*/
make_const = false;
if (arm_feature(env, ARM_FEATURE_EL3)) {
/*
* An EL2 register without EL2 but with EL3 is (usually) RES0.
* See rule RJFFP in section D1.1.3 of DDI0487H.a.
*/
int min_el = ctz32(r->access) / 2;
if (min_el == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
if (r->type & ARM_CP_EL3_NO_EL2_UNDEF) {
return;
}
make_const = !(r->type & ARM_CP_EL3_NO_EL2_KEEP);
}
} else {
CPAccessRights max_el = (arm_feature(env, ARM_FEATURE_EL2)
? PL2_RW : PL1_RW);
if ((r->access & max_el) == 0) {
return;
}
}
/* Combine cpreg and name into one allocation. */
name_len = strlen(name) + 1;
r2 = g_malloc(sizeof(*r2) + name_len);
@@ -8567,44 +8612,77 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
r2->opaque = opaque;
}
- isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
- if (isbanked) {
+ if (make_const) {
+ /* This should not have been a very special register to begin. */
int old_special = r2->type & ARM_CP_SPECIAL_MASK;
assert(old_special == 0 || old_special == ARM_CP_NOP);
- /*
- * Register is banked (using both entries in array).
- * Overwriting fieldoffset as the array is only used to define
- * banked registers but later only fieldoffset is used.
- */
- r2->fieldoffset = r->bank_fieldoffsets[ns];
- }
+ /*
+ * Set the special function to CONST, retaining the other flags.
+ * This is important for e.g. ARM_CP_SVE so that we still
+ * take the SVE trap if CPTR_EL3.EZ == 0.
+ */
+ r2->type = (r2->type & ~ARM_CP_SPECIAL_MASK) | ARM_CP_CONST;
+ /*
* Usually, these registers become RES0, but there are a few
* special cases like VPIDR_EL2 which have a constant non-zero
* value with writes ignored.
*/
if (!(r->type & ARM_CP_EL3_NO_EL2_C_NZ)) {
r2->resetvalue = 0;
}
/*
* ARM_CP_CONST has precedence, so removing the callbacks and
* offsets are not strictly necessary, but it is potentially
* less confusing to debug later.
*/
r2->readfn = NULL;
r2->writefn = NULL;
r2->raw_readfn = NULL;
r2->raw_writefn = NULL;
r2->resetfn = NULL;
r2->fieldoffset = 0;
r2->bank_fieldoffsets[0] = 0;
r2->bank_fieldoffsets[1] = 0;
} else {
bool isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
- if (state == ARM_CP_STATE_AA32) {
- if (isbanked) {
- /*
- * If the register is banked then we don't need to migrate or
- * reset the 32-bit instance in certain cases:
- *
- * 1) If the register has both 32-bit and 64-bit instances then we
- * can count on the 64-bit instance taking care of the
- * non-secure bank.
- * 2) If ARMv8 is enabled then we can count on a 64-bit version
- * taking care of the secure bank. This requires that separate
- * 32 and 64-bit definitions are provided.
- */
- if ((r->state == ARM_CP_STATE_BOTH && ns) ||
- (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) {
+ if (isbanked) {
+ /*
+ * Register is banked (using both entries in array).
+ * Overwriting fieldoffset as the array is only used to define
+ * banked registers but later only fieldoffset is used.
+ */
+ r2->fieldoffset = r->bank_fieldoffsets[ns];
+ }
if (state == ARM_CP_STATE_AA32) {
if (isbanked) {
/*
* If the register is banked then we don't need to migrate or
* reset the 32-bit instance in certain cases:
*
* 1) If the register has both 32-bit and 64-bit instances
* then we can count on the 64-bit instance taking care
* of the non-secure bank.
* 2) If ARMv8 is enabled then we can count on a 64-bit
* version taking care of the secure bank. This requires
* that separate 32 and 64-bit definitions are provided.
*/
if ((r->state == ARM_CP_STATE_BOTH && ns) ||
(arm_feature(env, ARM_FEATURE_V8) && !ns)) {
r2->type |= ARM_CP_ALIAS;
}
} else if ((secstate != r->secure) && !ns) {
/*
* The register is not banked so we only want to allow
* migration of the non-secure instance.
*/
r2->type |= ARM_CP_ALIAS;
}
- } else if ((secstate != r->secure) && !ns) {
- /*
- * The register is not banked so we only want to allow migration
- * of the non-secure instance.
- */
- r2->type |= ARM_CP_ALIAS;
- }
if (HOST_BIG_ENDIAN &&
r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
r2->fieldoffset += sizeof(uint32_t);
}
}
} }
@@ -8615,7 +8693,7 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
* multiple times. Special registers (ie NOP/WFI) are
* never migratable and not even raw-accessible.
*/
- if (r->type & ARM_CP_SPECIAL_MASK) {
+ if (r2->type & ARM_CP_SPECIAL_MASK) {
r2->type |= ARM_CP_NO_RAW;
}
if (((r->crm == CP_ANY) && crm != 0) ||
@@ -9318,6 +9396,7 @@ void arm_log_exception(CPUState *cs)
[EXCP_LSERR] = "v8M LSERR UsageFault",
[EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
[EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
[EXCP_VSERR] = "Virtual SERR",
};
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -9830,6 +9909,31 @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
mask = CPSR_A | CPSR_I | CPSR_F;
offset = 4;
break;
case EXCP_VSERR:
{
/*
* Note that this is reported as a data abort, but the DFAR
* has an UNKNOWN value. Construct the SError syndrome from
* AET and ExT fields.
*/
ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal, };
if (extended_addresses_enabled(env)) {
env->exception.fsr = arm_fi_to_lfsc(&fi);
} else {
env->exception.fsr = arm_fi_to_sfsc(&fi);
}
env->exception.fsr |= env->cp15.vsesr_el2 & 0xd000;
A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x\n",
env->exception.fsr);
new_mode = ARM_CPU_MODE_ABT;
addr = 0x10;
mask = CPSR_A | CPSR_I;
offset = 8;
}
break;
case EXCP_SMC:
new_mode = ARM_CPU_MODE_MON;
addr = 0x08;
@@ -10050,6 +10154,12 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
case EXCP_VFIQ:
addr += 0x100;
break;
case EXCP_VSERR:
addr += 0x180;
/* Construct the SError syndrome from IDS and ISS fields. */
env->exception.syndrome = syn_serror(env->cp15.vsesr_el2 & 0x1ffffff);
env->cp15.esr_el[new_el] = env->exception.syndrome;
break;
default:
cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
}


@@ -54,6 +54,7 @@ DEF_HELPER_1(wfe, void, env)
DEF_HELPER_1(yield, void, env)
DEF_HELPER_1(pre_hvc, void, env)
DEF_HELPER_2(pre_smc, void, env, i32)
DEF_HELPER_1(vesb, void, env)
DEF_HELPER_3(cpsr_write, void, env, i32, i32)
DEF_HELPER_2(cpsr_write_eret, void, env, i32)


@@ -947,6 +947,14 @@ void arm_cpu_update_virq(ARMCPU *cpu);
*/
void arm_cpu_update_vfiq(ARMCPU *cpu);
/**
* arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
*
* Update the CPU_INTERRUPT_VSERR bit in cs->interrupt_request,
* following a change to the HCR_EL2.VSE bit.
*/
void arm_cpu_update_vserr(ARMCPU *cpu);
/**
* arm_mmu_idx_el:
* @env: The cpu environment
@@ -1307,4 +1315,12 @@ int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
#endif
#ifdef CONFIG_USER_ONLY
static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
#else
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
#endif
void aa32_max_features(ARMCPU *cpu);
#endif


@@ -960,3 +960,46 @@ void HELPER(probe_access)(CPUARMState *env, target_ulong ptr,
access_type, mmu_idx, ra);
}
}
/*
* This function corresponds to AArch64.vESBOperation().
* Note that the AArch32 version is not functionally different.
*/
void HELPER(vesb)(CPUARMState *env)
{
/*
* The EL2Enabled() check is done inside arm_hcr_el2_eff,
* and will return HCR_EL2.VSE == 0, so nothing happens.
*/
uint64_t hcr = arm_hcr_el2_eff(env);
bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
bool pending = enabled && (hcr & HCR_VSE);
bool masked = (env->daif & PSTATE_A);
/* If VSE pending and masked, defer the exception. */
if (pending && masked) {
uint32_t syndrome;
if (arm_el_is_aa64(env, 1)) {
/* Copy across IDS and ISS from VSESR. */
syndrome = env->cp15.vsesr_el2 & 0x1ffffff;
} else {
ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal };
if (extended_addresses_enabled(env)) {
syndrome = arm_fi_to_lfsc(&fi);
} else {
syndrome = arm_fi_to_sfsc(&fi);
}
/* Copy across AET and ExT from VSESR. */
syndrome |= env->cp15.vsesr_el2 & 0xd000;
}
/* Set VDISR_EL2.A along with the syndrome. */
env->cp15.vdisr_el2 = syndrome | (1u << 31);
/* Clear pending virtual SError */
env->cp15.hcr_el2 &= ~HCR_VSE;
cpu_reset_interrupt(env_cpu(env), CPU_INTERRUPT_VSERR);
}
}


@@ -287,4 +287,9 @@ static inline uint32_t syn_pcalignment(void)
return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
}
static inline uint32_t syn_serror(uint32_t extra)
{
return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
}
#endif /* TARGET_ARM_SYNDROME_H */


@@ -364,17 +364,17 @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
[
# Hints, and CPS
{
- YIELD 1111 0011 1010 1111 1000 0000 0000 0001
- WFE 1111 0011 1010 1111 1000 0000 0000 0010
- WFI 1111 0011 1010 1111 1000 0000 0000 0011
+ [
+ YIELD 1111 0011 1010 1111 1000 0000 0000 0001
+ WFE 1111 0011 1010 1111 1000 0000 0000 0010
+ WFI 1111 0011 1010 1111 1000 0000 0000 0011
# TODO: Implement SEV, SEVL; may help SMP performance.
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
- # For M-profile minimal-RAS ESB can be a NOP, which is the
- # default behaviour since it is in the hint space.
- # ESB 1111 0011 1010 1111 1000 0000 0001 0000
+ ESB 1111 0011 1010 1111 1000 0000 0001 0000
+ ]
# The canonical nop ends in 0000 0000, but the whole rest
# of the space is "reserved hint, behaves as nop".


@@ -1427,6 +1427,7 @@ static void handle_hint(DisasContext *s, uint32_t insn,
break;
case 0b00100: /* SEV */
case 0b00101: /* SEVL */
case 0b00110: /* DGH */
/* we treat all as NOP at least for now */
break;
case 0b00111: /* XPACLRI */
@@ -1454,6 +1455,23 @@ static void handle_hint(DisasContext *s, uint32_t insn,
gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
}
break;
case 0b10000: /* ESB */
/* Without RAS, we must implement this as NOP. */
if (dc_isar_feature(aa64_ras, s)) {
/*
* QEMU does not have a source of physical SErrors,
* so we are only concerned with virtual SErrors.
* The pseudocode in the ARM for this case is
* if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
* AArch64.vESBOperation();
* Most of the condition can be evaluated at translation time.
* Test for EL2 present, and defer test for SEL2 to runtime.
*/
if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
gen_helper_vesb(cpu_env);
}
}
break;
case 0b11000: /* PACIAZ */
if (s->pauth_active) {
gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],


@@ -6239,6 +6239,29 @@ static bool trans_WFI(DisasContext *s, arg_WFI *a)
return true;
}
static bool trans_ESB(DisasContext *s, arg_ESB *a)
{
/*
* For M-profile, minimal-RAS ESB can be a NOP.
* Without RAS, we must implement this as NOP.
*/
if (!arm_dc_feature(s, ARM_FEATURE_M) && dc_isar_feature(aa32_ras, s)) {
/*
* QEMU does not have a source of physical SErrors,
* so we are only concerned with virtual SErrors.
* The pseudocode in the ARM for this case is
* if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
* AArch32.vESBOperation();
* Most of the condition can be evaluated at translation time.
* Test for EL2 present, and defer test for SEL2 to runtime.
*/
if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
gen_helper_vesb(cpu_env);
}
}
return true;
}
static bool trans_NOP(DisasContext *s, arg_NOP *a)
{
return true;


@@ -223,17 +223,18 @@ static void aarch64_numa_cpu(const void *data)
QTestState *qts;
g_autofree char *cli = NULL;
- cli = make_cli(data, "-machine smp.cpus=2 "
+ cli = make_cli(data, "-machine "
+ "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
- "-numa cpu,node-id=1,thread-id=0 "
- "-numa cpu,node-id=0,thread-id=1");
+ "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
+ "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
qts = qtest_init(cli);
cpus = get_cpus(qts, &resp);
g_assert(cpus);
while ((e = qlist_pop(cpus))) {
QDict *cpu, *props;
- int64_t thread, node;
+ int64_t socket, cluster, core, thread, node;
cpu = qobject_to(QDict, e);
g_assert(qdict_haskey(cpu, "props"));
@@ -241,12 +242,18 @@ static void aarch64_numa_cpu(const void *data)
g_assert(qdict_haskey(props, "node-id"));
node = qdict_get_int(props, "node-id");
g_assert(qdict_haskey(props, "socket-id"));
socket = qdict_get_int(props, "socket-id");
g_assert(qdict_haskey(props, "cluster-id"));
cluster = qdict_get_int(props, "cluster-id");
g_assert(qdict_haskey(props, "core-id"));
core = qdict_get_int(props, "core-id");
g_assert(qdict_haskey(props, "thread-id")); g_assert(qdict_haskey(props, "thread-id"));
thread = qdict_get_int(props, "thread-id"); thread = qdict_get_int(props, "thread-id");
if (thread == 0) { if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
g_assert_cmpint(node, ==, 1); g_assert_cmpint(node, ==, 1);
} else if (thread == 1) { } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
g_assert_cmpint(node, ==, 0); g_assert_cmpint(node, ==, 0);
} else { } else {
g_assert(false); g_assert(false);