Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200824' into staging

target-arm queue:
 * hw/cpu/a9mpcore: Verify the machine use Cortex-A9 cores
 * hw/arm/smmuv3: Implement SMMUv3.2 range-invalidation
 * docs/system/arm: Document the Xilinx Versal Virt board
 * target/arm: Make M-profile NOCP take precedence over UNDEF
 * target/arm: Use correct FPST for VCMLA, VCADD on fp16
 * target/arm: Various cleanups preparing for fp16 support

# gpg: Signature made Mon 24 Aug 2020 10:47:14 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20200824: (27 commits)
  target/arm: Use correct FPST for VCMLA, VCADD on fp16
  target/arm: Implement FPST_STD_F16 fpstatus
  target/arm: Make A32/T32 use new fpstatus_ptr() API
  target/arm: Replace A64 get_fpstatus_ptr() with generic fpstatus_ptr()
  target/arm: Delete unused ARM_FEATURE_CRC
  target/arm/translate.c: Delete/amend incorrect comments
  target/arm: Delete unused VFP_DREG macros
  target/arm: Remove ARCH macro
  target/arm: Convert T32 coprocessor insns to decodetree
  target/arm: Do M-profile NOCP checks early and via decodetree
  target/arm: Tidy up disas_arm_insn()
  target/arm: Convert A32 coprocessor insns to decodetree
  target/arm: Separate decode from handling of coproc insns
  target/arm: Pull handling of XScale insns out of disas_coproc_insn()
  docs/system/arm: Document the Xilinx Versal Virt board
  hw/arm/smmuv3: Advertise SMMUv3.2 range invalidation
  hw/arm/smmuv3: Support HAD and advertise SMMUv3.1 support
  hw/arm/smmuv3: Let AIDR advertise SMMUv3.0 support
  hw/arm/smmuv3: Fix IIDR offset
  hw/arm/smmuv3: Get prepared for range invalidation
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell 2020-08-24 12:13:09 +01:00
commit 07d914cb94
27 changed files with 885 additions and 415 deletions


@ -894,7 +894,7 @@ F: hw/misc/zynq*
F: include/hw/misc/zynq*
X: hw/ssi/xilinx_*
Xilinx ZynqMP
Xilinx ZynqMP and Versal
M: Alistair Francis <alistair@alistair23.me>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
M: Peter Maydell <peter.maydell@linaro.org>
@ -905,6 +905,7 @@ F: include/hw/*/xlnx*.h
F: include/hw/ssi/xilinx_spips.h
F: hw/display/dpcd.c
F: include/hw/display/dpcd.h
F: docs/system/arm/xlnx-versal-virt.rst
ARM ACPI Subsystem
M: Shannon Zhao <shannon.zhaosl@gmail.com>


@ -0,0 +1,176 @@
Xilinx Versal Virt (``xlnx-versal-virt``)
=========================================
Xilinx Versal is a family of heterogeneous multi-core SoCs
(System on Chip) that combine traditional hardened CPUs and I/O
peripherals in a Processing System (PS) with runtime programmable
FPGA logic (PL) and an Artificial Intelligence Engine (AIE).
More details here:
https://www.xilinx.com/products/silicon-devices/acap/versal.html
The family of Versal SoCs share a single architecture but come in
different parts with different speed grades, amounts of PL and
other differences.
The Xilinx Versal Virt board in QEMU is a model of a virtual board
(does not exist in reality) with a virtual Versal SoC without I/O
limitations. Currently, we support the following cores and devices:
Implemented CPU cores:
- 2 ACPUs (ARM Cortex-A72)
Implemented devices:
- Interrupt controller (ARM GICv3)
- 2 UARTs (ARM PL011)
- An RTC (Versal built-in)
- 2 GEMs (Cadence MACB Ethernet MACs)
- 8 ADMA (Xilinx zDMA) channels
- 2 SD Controllers
- OCM (256KB of On Chip Memory)
- DDR memory
QEMU does not yet model any other devices, including the PL and the AI Engine.
Other differences between the hardware and the QEMU model:
- QEMU allows the amount of DDR memory provided to be specified with the
``-m`` argument. If a DTB is provided on the command line then QEMU will
edit it to include suitable entries describing the Versal DDR memory ranges.
- QEMU provides 8 virtio-mmio virtio transports; these start at
address ``0xa0000000`` and have IRQs from 111 and upwards.
Running
"""""""
If the user provides an Operating System to be loaded, we expect users
to use the ``-kernel`` command line option.
Users can load firmware or boot-loaders with the ``-device loader`` options.
When loading an OS, QEMU generates a DTB and selects an appropriate address
where it gets loaded. This DTB will be passed to the kernel in register x0.
If there's no ``-kernel`` option, we generate a DTB and place it at 0x1000
for boot-loaders or firmware to pick it up.
If users want to provide their own DTB, they can use the ``-dtb`` option.
These DTBs will have their memory nodes modified to match QEMU's
selected ram_size option before they get passed to the kernel or FW.
When loading an OS, we turn on QEMU's PSCI implementation with SMC
as the PSCI conduit. When there's no ``-kernel`` option, we assume the user
provides EL3 firmware to handle PSCI.
A few examples:
Direct Linux boot of a generic ARM64 upstream Linux kernel:
.. code-block:: bash
$ qemu-system-aarch64 -M xlnx-versal-virt -m 2G \
-serial mon:stdio -display none \
-kernel arch/arm64/boot/Image \
-nic user -nic user \
-device virtio-rng-device,bus=virtio-mmio-bus.0 \
-drive if=none,index=0,file=hd0.qcow2,id=hd0,snapshot \
-drive file=qemu_sd.qcow2,if=sd,index=0,snapshot \
-device virtio-blk-device,drive=hd0 -append root=/dev/vda
Direct Linux boot of PetaLinux 2019.2:
.. code-block:: bash
$ qemu-system-aarch64 -M xlnx-versal-virt -m 2G \
-serial mon:stdio -display none \
-kernel petalinux-v2019.2/Image \
-append "rdinit=/sbin/init console=ttyAMA0,115200n8 earlycon=pl011,mmio,0xFF000000,115200n8" \
-net nic,model=cadence_gem,netdev=net0 -netdev user,id=net0 \
-device virtio-rng-device,bus=virtio-mmio-bus.0,rng=rng0 \
-object rng-random,filename=/dev/urandom,id=rng0
Boot PetaLinux 2019.2 via ARM Trusted Firmware (2018.3 because the 2019.2
version of ATF tries to configure the CCI which we don't model) and U-boot:
.. code-block:: bash
$ qemu-system-aarch64 -M xlnx-versal-virt -m 2G \
-serial stdio -display none \
-device loader,file=petalinux-v2018.3/bl31.elf,cpu-num=0 \
-device loader,file=petalinux-v2019.2/u-boot.elf \
-device loader,addr=0x20000000,file=petalinux-v2019.2/Image \
-nic user -nic user \
-device virtio-rng-device,bus=virtio-mmio-bus.0,rng=rng0 \
-object rng-random,filename=/dev/urandom,id=rng0
Run the following at the U-Boot prompt:
.. code-block:: bash
Versal>
fdt addr $fdtcontroladdr
fdt move $fdtcontroladdr 0x40000000
fdt set /timer clock-frequency <0x3dfd240>
setenv bootargs "rdinit=/sbin/init maxcpus=1 console=ttyAMA0,115200n8 earlycon=pl011,mmio,0xFF000000,115200n8"
booti 20000000 - 40000000
Boot Linux as DOM0 on Xen via U-Boot:
.. code-block:: bash
$ qemu-system-aarch64 -M xlnx-versal-virt -m 4G \
-serial stdio -display none \
-device loader,file=petalinux-v2019.2/u-boot.elf,cpu-num=0 \
-device loader,addr=0x30000000,file=linux/2018-04-24/xen \
-device loader,addr=0x40000000,file=petalinux-v2019.2/Image \
-nic user -nic user \
-device virtio-rng-device,bus=virtio-mmio-bus.0,rng=rng0 \
-object rng-random,filename=/dev/urandom,id=rng0
Run the following at the U-Boot prompt:
.. code-block:: bash
Versal>
fdt addr $fdtcontroladdr
fdt move $fdtcontroladdr 0x20000000
fdt set /timer clock-frequency <0x3dfd240>
fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=/uart@ff000000 dom0_mem=640M bootscrub=0 maxcpus=1 timer_slop=0"
fdt set /chosen xen,dom0-bootargs "rdinit=/sbin/init clk_ignore_unused console=hvc0 maxcpus=1"
fdt mknode /chosen dom0
fdt set /chosen/dom0 compatible "xen,multiboot-module"
fdt set /chosen/dom0 reg <0x00000000 0x40000000 0x0 0x03100000>
booti 30000000 - 20000000
Boot Linux as Dom0 on Xen via ARM Trusted Firmware and U-Boot:
.. code-block:: bash
$ qemu-system-aarch64 -M xlnx-versal-virt -m 4G \
-serial stdio -display none \
-device loader,file=petalinux-v2018.3/bl31.elf,cpu-num=0 \
-device loader,file=petalinux-v2019.2/u-boot.elf \
-device loader,addr=0x30000000,file=linux/2018-04-24/xen \
-device loader,addr=0x40000000,file=petalinux-v2019.2/Image \
-nic user -nic user \
-device virtio-rng-device,bus=virtio-mmio-bus.0,rng=rng0 \
-object rng-random,filename=/dev/urandom,id=rng0
Run the following at the U-Boot prompt:
.. code-block:: bash
Versal>
fdt addr $fdtcontroladdr
fdt move $fdtcontroladdr 0x20000000
fdt set /timer clock-frequency <0x3dfd240>
fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=/uart@ff000000 dom0_mem=640M bootscrub=0 maxcpus=1 timer_slop=0"
fdt set /chosen xen,dom0-bootargs "rdinit=/sbin/init clk_ignore_unused console=hvc0 maxcpus=1"
fdt mknode /chosen dom0
fdt set /chosen/dom0 compatible "xen,multiboot-module"
fdt set /chosen/dom0 reg <0x00000000 0x40000000 0x0 0x03100000>
booti 30000000 - 20000000


@ -93,6 +93,7 @@ undocumented; you can get a complete list by running
arm/sx1
arm/stellaris
arm/virt
arm/xlnx-versal-virt
Arm CPU features
================


@ -32,6 +32,91 @@
/* IOTLB Management */
static guint smmu_iotlb_key_hash(gconstpointer v)
{
SMMUIOTLBKey *key = (SMMUIOTLBKey *)v;
uint32_t a, b, c;
/* Jenkins hash */
a = b = c = JHASH_INITVAL + sizeof(*key);
a += key->asid + key->level + key->tg;
b += extract64(key->iova, 0, 32);
c += extract64(key->iova, 32, 32);
__jhash_mix(a, b, c);
__jhash_final(a, b, c);
return c;
}
static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
{
SMMUIOTLBKey *k1 = (SMMUIOTLBKey *)v1, *k2 = (SMMUIOTLBKey *)v2;
return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
(k1->level == k2->level) && (k1->tg == k2->tg);
}
SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
uint8_t tg, uint8_t level)
{
SMMUIOTLBKey key = {.asid = asid, .iova = iova, .tg = tg, .level = level};
return key;
}
SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
SMMUTransTableInfo *tt, hwaddr iova)
{
uint8_t tg = (tt->granule_sz - 10) / 2;
uint8_t inputsize = 64 - tt->tsz;
uint8_t stride = tt->granule_sz - 3;
uint8_t level = 4 - (inputsize - 4) / stride;
SMMUTLBEntry *entry = NULL;
while (level <= 3) {
uint64_t subpage_size = 1ULL << level_shift(level, tt->granule_sz);
uint64_t mask = subpage_size - 1;
SMMUIOTLBKey key;
key = smmu_get_iotlb_key(cfg->asid, iova & ~mask, tg, level);
entry = g_hash_table_lookup(bs->iotlb, &key);
if (entry) {
break;
}
level++;
}
if (entry) {
cfg->iotlb_hits++;
trace_smmu_iotlb_lookup_hit(cfg->asid, iova,
cfg->iotlb_hits, cfg->iotlb_misses,
100 * cfg->iotlb_hits /
(cfg->iotlb_hits + cfg->iotlb_misses));
} else {
cfg->iotlb_misses++;
trace_smmu_iotlb_lookup_miss(cfg->asid, iova,
cfg->iotlb_hits, cfg->iotlb_misses,
100 * cfg->iotlb_hits /
(cfg->iotlb_hits + cfg->iotlb_misses));
}
return entry;
}
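
The lookup starts at the coarsest level the configured input size allows and walks towards level 3, so a cached block entry of any size can satisfy a query for an address inside it. A standalone recomputation of that arithmetic for a 4KB granule and tsz=16 (a sketch only: level_shift() lives in QEMU's smmu-internal.h, and its definition below is an assumption based on the standard LPAE layout):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed definition of QEMU's level_shift() helper (standard LPAE). */
    static unsigned level_shift(unsigned level, unsigned granule_sz)
    {
        return granule_sz + (3 - level) * (granule_sz - 3);
    }

    int main(void)
    {
        unsigned granule_sz = 12;                      /* 4KB granule */
        unsigned tsz = 16;                             /* 64 - 16 = 48-bit input range */
        unsigned inputsize = 64 - tsz;
        unsigned stride = granule_sz - 3;              /* 9 address bits per level */
        unsigned level = 4 - (inputsize - 4) / stride; /* starting level: 0 */

        for (; level <= 3; level++) {
            /* tries 512GB, 1GB, 2MB and finally 4KB keys in turn */
            uint64_t subpage = 1ULL << level_shift(level, granule_sz);
            printf("level %u: subpage size 0x%" PRIx64 "\n", level, subpage);
        }
        return 0;
    }
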
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
{
SMMUIOTLBKey *key = g_new0(SMMUIOTLBKey, 1);
uint8_t tg = (new->granule - 10) / 2;
if (g_hash_table_size(bs->iotlb) >= SMMU_IOTLB_MAX_SIZE) {
smmu_iotlb_inv_all(bs);
}
*key = smmu_get_iotlb_key(cfg->asid, new->entry.iova, tg, new->level);
trace_smmu_iotlb_insert(cfg->asid, new->entry.iova, tg, new->level);
g_hash_table_insert(bs->iotlb, key, new);
}
inline void smmu_iotlb_inv_all(SMMUState *s)
{
trace_smmu_iotlb_inv_all();
@ -44,15 +129,44 @@ static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
uint16_t asid = *(uint16_t *)user_data;
SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
return iotlb_key->asid == asid;
return SMMU_IOTLB_ASID(*iotlb_key) == asid;
}
inline void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova)
static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
gpointer user_data)
{
SMMUIOTLBKey key = {.asid = asid, .iova = iova};
SMMUTLBEntry *iter = (SMMUTLBEntry *)value;
IOMMUTLBEntry *entry = &iter->entry;
SMMUIOTLBPageInvInfo *info = (SMMUIOTLBPageInvInfo *)user_data;
SMMUIOTLBKey iotlb_key = *(SMMUIOTLBKey *)key;
trace_smmu_iotlb_inv_iova(asid, iova);
g_hash_table_remove(s->iotlb, &key);
if (info->asid >= 0 && info->asid != SMMU_IOTLB_ASID(iotlb_key)) {
return false;
}
return ((info->iova & ~entry->addr_mask) == entry->iova) ||
((entry->iova & ~info->mask) == info->iova);
}
inline void
smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl)
{
if (ttl && (num_pages == 1)) {
SMMUIOTLBKey key = smmu_get_iotlb_key(asid, iova, tg, ttl);
g_hash_table_remove(s->iotlb, &key);
} else {
/* if tg is not set we use 4KB range invalidation */
uint8_t granule = tg ? tg * 2 + 10 : 12;
SMMUIOTLBPageInvInfo info = {
.asid = asid, .iova = iova,
.mask = (num_pages * 1 << granule) - 1};
g_hash_table_foreach_remove(s->iotlb,
smmu_hash_remove_by_asid_iova,
&info);
}
}
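
When the command carries a usable level hint for a single page, the exact key is removed; otherwise the table is swept with smmu_hash_remove_by_asid_iova() using a byte mask derived from TG and the page count. A standalone recomputation of that mask (same expressions as in the function above, not QEMU code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t tg = 1;          /* TG encoding 1 -> 4KB granule; 0 means "not supplied" */
        uint64_t num_pages = 16; /* invalidate 16 granule-sized pages */

        unsigned granule = tg ? tg * 2 + 10 : 12;    /* shift of 12 for 4KB */
        uint64_t mask = (num_pages << granule) - 1;  /* 0xffff: a 64KB-aligned range */

        printf("granule shift %u, range mask 0x%" PRIx64 "\n", granule, mask);
        return 0;
    }
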
inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
@ -149,7 +263,7 @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
* @cfg: translation config
* @iova: iova to translate
* @perm: access type
* @tlbe: IOMMUTLBEntry (out)
* @tlbe: SMMUTLBEntry (out)
* @info: handle to an error info
*
* Return 0 on success, < 0 on error. In case of error, @info is filled
@ -159,7 +273,7 @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
*/
static int smmu_ptw_64(SMMUTransCfg *cfg,
dma_addr_t iova, IOMMUAccessFlags perm,
IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
{
dma_addr_t baseaddr, indexmask;
int stage = cfg->stage;
@ -179,14 +293,11 @@ static int smmu_ptw_64(SMMUTransCfg *cfg,
baseaddr = extract64(tt->ttb, 0, 48);
baseaddr &= ~indexmask;
tlbe->iova = iova;
tlbe->addr_mask = (1 << granule_sz) - 1;
while (level <= 3) {
uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
uint64_t mask = subpage_size - 1;
uint32_t offset = iova_level_offset(iova, inputsize, level, granule_sz);
uint64_t pte;
uint64_t pte, gpa;
dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
uint8_t ap;
@ -199,60 +310,50 @@ static int smmu_ptw_64(SMMUTransCfg *cfg,
if (is_invalid_pte(pte) || is_reserved_pte(pte, level)) {
trace_smmu_ptw_invalid_pte(stage, level, baseaddr,
pte_addr, offset, pte);
info->type = SMMU_PTW_ERR_TRANSLATION;
goto error;
break;
}
if (is_page_pte(pte, level)) {
uint64_t gpa = get_page_pte_address(pte, granule_sz);
if (is_table_pte(pte, level)) {
ap = PTE_APTABLE(pte);
ap = PTE_AP(pte);
if (is_permission_fault(ap, perm)) {
if (is_permission_fault(ap, perm) && !tt->had) {
info->type = SMMU_PTW_ERR_PERMISSION;
goto error;
}
tlbe->translated_addr = gpa + (iova & mask);
tlbe->perm = PTE_AP_TO_PERM(ap);
baseaddr = get_table_pte_address(pte, granule_sz);
level++;
continue;
} else if (is_page_pte(pte, level)) {
gpa = get_page_pte_address(pte, granule_sz);
trace_smmu_ptw_page_pte(stage, level, iova,
baseaddr, pte_addr, pte, gpa);
return 0;
}
if (is_block_pte(pte, level)) {
} else {
uint64_t block_size;
hwaddr gpa = get_block_pte_address(pte, level, granule_sz,
&block_size);
ap = PTE_AP(pte);
if (is_permission_fault(ap, perm)) {
info->type = SMMU_PTW_ERR_PERMISSION;
goto error;
}
gpa = get_block_pte_address(pte, level, granule_sz,
&block_size);
trace_smmu_ptw_block_pte(stage, level, baseaddr,
pte_addr, pte, iova, gpa,
block_size >> 20);
tlbe->translated_addr = gpa + (iova & mask);
tlbe->perm = PTE_AP_TO_PERM(ap);
return 0;
}
/* table pte */
ap = PTE_APTABLE(pte);
ap = PTE_AP(pte);
if (is_permission_fault(ap, perm)) {
info->type = SMMU_PTW_ERR_PERMISSION;
goto error;
}
baseaddr = get_table_pte_address(pte, granule_sz);
level++;
}
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = iova & ~mask;
tlbe->entry.addr_mask = mask;
tlbe->entry.perm = PTE_AP_TO_PERM(ap);
tlbe->level = level;
tlbe->granule = granule_sz;
return 0;
}
info->type = SMMU_PTW_ERR_TRANSLATION;
error:
tlbe->perm = IOMMU_NONE;
tlbe->entry.perm = IOMMU_NONE;
return -EINVAL;
}
@ -268,7 +369,7 @@ error:
* return 0 on success
*/
inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
{
if (!cfg->aa64) {
/*
@ -361,31 +462,6 @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid)
return NULL;
}
static guint smmu_iotlb_key_hash(gconstpointer v)
{
SMMUIOTLBKey *key = (SMMUIOTLBKey *)v;
uint32_t a, b, c;
/* Jenkins hash */
a = b = c = JHASH_INITVAL + sizeof(*key);
a += key->asid;
b += extract64(key->iova, 0, 32);
c += extract64(key->iova, 32, 32);
__jhash_mix(a, b, c);
__jhash_final(a, b, c);
return c;
}
static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
{
const SMMUIOTLBKey *k1 = v1;
const SMMUIOTLBKey *k2 = v2;
return (k1->asid == k2->asid) && (k1->iova == k2->iova);
}
/* Unmap the whole notifier's range */
static void smmu_unmap_notifier_range(IOMMUNotifier *n)
{


@ -96,4 +96,12 @@ uint64_t iova_level_offset(uint64_t iova, int inputsize,
MAKE_64BIT_MASK(0, gsz - 3);
}
#define SMMU_IOTLB_ASID(key) ((key).asid)
typedef struct SMMUIOTLBPageInvInfo {
int asid;
uint64_t iova;
uint64_t mask;
} SMMUIOTLBPageInvInfo;
#endif


@ -54,6 +54,8 @@ REG32(IDR1, 0x4)
REG32(IDR2, 0x8)
REG32(IDR3, 0xc)
FIELD(IDR3, HAD, 2, 1);
FIELD(IDR3, RIL, 10, 1);
REG32(IDR4, 0x10)
REG32(IDR5, 0x14)
FIELD(IDR5, OAS, 0, 3);
@ -63,7 +65,8 @@ REG32(IDR5, 0x14)
#define SMMU_IDR5_OAS 4
REG32(IIDR, 0x1c)
REG32(IIDR, 0x18)
REG32(AIDR, 0x1c)
REG32(CR0, 0x20)
FIELD(CR0, SMMU_ENABLE, 0, 1)
FIELD(CR0, EVENTQEN, 2, 1)
@ -298,6 +301,8 @@ enum { /* Command completion notification */
};
#define CMD_TYPE(x) extract32((x)->word[0], 0 , 8)
#define CMD_NUM(x) extract32((x)->word[0], 12 , 5)
#define CMD_SCALE(x) extract32((x)->word[0], 20 , 5)
#define CMD_SSEC(x) extract32((x)->word[0], 10, 1)
#define CMD_SSV(x) extract32((x)->word[0], 11, 1)
#define CMD_RESUME_AC(x) extract32((x)->word[0], 12, 1)
@ -310,6 +315,8 @@ enum { /* Command completion notification */
#define CMD_RESUME_STAG(x) extract32((x)->word[2], 0 , 16)
#define CMD_RESP(x) extract32((x)->word[2], 11, 2)
#define CMD_LEAF(x) extract32((x)->word[2], 0 , 1)
#define CMD_TTL(x) extract32((x)->word[2], 8 , 2)
#define CMD_TG(x) extract32((x)->word[2], 10, 2)
#define CMD_STE_RANGE(x) extract32((x)->word[2], 0 , 5)
#define CMD_ADDR(x) ({ \
uint64_t high = (uint64_t)(x)->word[3]; \
@ -573,6 +580,7 @@ static inline int pa_range(STE *ste)
lo = (x)->word[(sel) * 2 + 2] & ~0xfULL; \
hi | lo; \
})
#define CD_HAD(x, sel) extract32((x)->word[(sel) * 2 + 2], 1, 1)
#define CD_TSZ(x, sel) extract32((x)->word[0], (16 * (sel)) + 0, 6)
#define CD_TG(x, sel) extract32((x)->word[0], (16 * (sel)) + 6, 2)


@ -254,6 +254,9 @@ static void smmuv3_init_regs(SMMUv3State *s)
s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
/* 4K and 64K granule support */
s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
@ -270,6 +273,7 @@ static void smmuv3_init_regs(SMMUv3State *s)
s->features = 0;
s->sid_split = 0;
s->aidr = 0x1;
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@ -506,7 +510,8 @@ static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
if (tt->ttb & ~(MAKE_64BIT_MASK(0, cfg->oas))) {
goto bad_cd;
}
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz);
tt->had = CD_HAD(cd, i);
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
}
event->record_trans_faults = CD_R(cd);
@ -626,7 +631,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
SMMUTranslationStatus status;
SMMUState *bs = ARM_SMMU(s);
uint64_t page_mask, aligned_addr;
IOMMUTLBEntry *cached_entry = NULL;
SMMUTLBEntry *cached_entry = NULL;
SMMUTransTableInfo *tt;
SMMUTransCfg *cfg = NULL;
IOMMUTLBEntry entry = {
@ -636,7 +641,6 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
.addr_mask = ~(hwaddr)0,
.perm = IOMMU_NONE,
};
SMMUIOTLBKey key, *new_key;
qemu_mutex_lock(&s->mutex);
@ -675,17 +679,9 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
page_mask = (1ULL << (tt->granule_sz)) - 1;
aligned_addr = addr & ~page_mask;
key.asid = cfg->asid;
key.iova = aligned_addr;
cached_entry = g_hash_table_lookup(bs->iotlb, &key);
cached_entry = smmu_iotlb_lookup(bs, cfg, tt, aligned_addr);
if (cached_entry) {
cfg->iotlb_hits++;
trace_smmu_iotlb_cache_hit(cfg->asid, aligned_addr,
cfg->iotlb_hits, cfg->iotlb_misses,
100 * cfg->iotlb_hits /
(cfg->iotlb_hits + cfg->iotlb_misses));
if ((flag & IOMMU_WO) && !(cached_entry->perm & IOMMU_WO)) {
if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
status = SMMU_TRANS_ERROR;
if (event.record_trans_faults) {
event.type = SMMU_EVT_F_PERMISSION;
@ -698,17 +694,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
goto epilogue;
}
cfg->iotlb_misses++;
trace_smmu_iotlb_cache_miss(cfg->asid, addr & ~page_mask,
cfg->iotlb_hits, cfg->iotlb_misses,
100 * cfg->iotlb_hits /
(cfg->iotlb_hits + cfg->iotlb_misses));
if (g_hash_table_size(bs->iotlb) >= SMMU_IOTLB_MAX_SIZE) {
smmu_iotlb_inv_all(bs);
}
cached_entry = g_new0(IOMMUTLBEntry, 1);
cached_entry = g_new0(SMMUTLBEntry, 1);
if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
g_free(cached_entry);
@ -753,10 +739,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
}
status = SMMU_TRANS_ERROR;
} else {
new_key = g_new0(SMMUIOTLBKey, 1);
new_key->asid = cfg->asid;
new_key->iova = aligned_addr;
g_hash_table_insert(bs->iotlb, new_key, cached_entry);
smmu_iotlb_insert(bs, cfg, cached_entry);
status = SMMU_TRANS_SUCCESS;
}
@ -765,9 +748,9 @@ epilogue:
switch (status) {
case SMMU_TRANS_SUCCESS:
entry.perm = flag;
entry.translated_addr = cached_entry->translated_addr +
(addr & page_mask);
entry.addr_mask = cached_entry->addr_mask;
entry.translated_addr = cached_entry->entry.translated_addr +
(addr & cached_entry->entry.addr_mask);
entry.addr_mask = cached_entry->entry.addr_mask;
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
entry.translated_addr, entry.perm);
break;
@ -807,42 +790,49 @@ epilogue:
* @n: notifier to be called
* @asid: address space ID or negative value if we don't care
* @iova: iova
* @tg: translation granule (if communicated through range invalidation)
* @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
*/
static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
IOMMUNotifier *n,
int asid,
dma_addr_t iova)
int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages)
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
SMMUEventInfo event = {.inval_ste_allowed = true};
SMMUTransTableInfo *tt;
SMMUTransCfg *cfg;
IOMMUTLBEntry entry;
uint8_t granule = tg;
cfg = smmuv3_get_config(sdev, &event);
if (!cfg) {
return;
}
if (!tg) {
SMMUEventInfo event = {.inval_ste_allowed = true};
SMMUTransCfg *cfg = smmuv3_get_config(sdev, &event);
SMMUTransTableInfo *tt;
if (asid >= 0 && cfg->asid != asid) {
return;
}
if (!cfg) {
return;
}
tt = select_tt(cfg, iova);
if (!tt) {
return;
if (asid >= 0 && cfg->asid != asid) {
return;
}
tt = select_tt(cfg, iova);
if (!tt) {
return;
}
granule = tt->granule_sz;
}
entry.target_as = &address_space_memory;
entry.iova = iova;
entry.addr_mask = (1 << tt->granule_sz) - 1;
entry.addr_mask = num_pages * (1 << granule) - 1;
entry.perm = IOMMU_NONE;
memory_region_notify_one(n, &entry);
}
/* invalidate an asid/iova tuple in all mr's */
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
/* invalidate an asid/iova range tuple in all mr's */
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages)
{
SMMUDevice *sdev;
@ -850,14 +840,41 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
IOMMUMemoryRegion *mr = &sdev->iommu;
IOMMUNotifier *n;
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova,
tg, num_pages);
IOMMU_NOTIFIER_FOREACH(n, mr) {
smmuv3_notify_iova(mr, n, asid, iova);
smmuv3_notify_iova(mr, n, asid, iova, tg, num_pages);
}
}
}
static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
{
uint8_t scale = 0, num = 0, ttl = 0;
dma_addr_t addr = CMD_ADDR(cmd);
uint8_t type = CMD_TYPE(cmd);
uint16_t vmid = CMD_VMID(cmd);
bool leaf = CMD_LEAF(cmd);
uint8_t tg = CMD_TG(cmd);
hwaddr num_pages = 1;
int asid = -1;
if (tg) {
scale = CMD_SCALE(cmd);
num = CMD_NUM(cmd);
ttl = CMD_TTL(cmd);
num_pages = (num + 1) * (1 << (scale));
}
if (type == SMMU_CMD_TLBI_NH_VA) {
asid = CMD_ASID(cmd);
}
trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
smmu_iotlb_inv_iova(s, asid, addr, tg, num_pages, ttl);
}
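
With range invalidation, NUM and SCALE encode how many granule-sized pages a single command covers: num_pages = (NUM + 1) * 2^SCALE, exactly as computed above. A standalone recomputation for one example command (not QEMU code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t scale = 4, num = 3;                       /* example CMD_SCALE/CMD_NUM values */
        uint64_t num_pages = (num + 1) * (1ULL << scale); /* (3 + 1) * 2^4 = 64 pages */

        printf("one TLBI command covers %" PRIu64 " granule-sized pages\n", num_pages);
        return 0;
    }
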
static int smmuv3_cmdq_consume(SMMUv3State *s)
{
SMMUState *bs = ARM_SMMU(s);
@ -988,27 +1005,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
smmu_iotlb_inv_all(bs);
break;
case SMMU_CMD_TLBI_NH_VAA:
{
dma_addr_t addr = CMD_ADDR(&cmd);
uint16_t vmid = CMD_VMID(&cmd);
trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
smmuv3_inv_notifiers_iova(bs, -1, addr);
smmu_iotlb_inv_all(bs);
break;
}
case SMMU_CMD_TLBI_NH_VA:
{
uint16_t asid = CMD_ASID(&cmd);
uint16_t vmid = CMD_VMID(&cmd);
dma_addr_t addr = CMD_ADDR(&cmd);
bool leaf = CMD_LEAF(&cmd);
trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
smmuv3_inv_notifiers_iova(bs, asid, addr);
smmu_iotlb_inv_iova(bs, asid, addr);
smmuv3_s1_range_inval(bs, &cmd);
break;
}
case SMMU_CMD_TLBI_EL3_ALL:
case SMMU_CMD_TLBI_EL3_VA:
case SMMU_CMD_TLBI_EL2_ALL:
@ -1257,6 +1256,9 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
case A_IIDR:
*data = s->iidr;
return MEMTX_OK;
case A_AIDR:
*data = s->aidr;
return MEMTX_OK;
case A_CR0:
*data = s->cr[0];
return MEMTX_OK;


@ -14,6 +14,9 @@ smmu_iotlb_inv_all(void) "IOTLB invalidate all"
smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
smmu_iotlb_lookup_hit(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
smmu_iotlb_lookup_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
smmu_iotlb_insert(uint16_t asid, uint64_t addr, uint8_t tg, uint8_t level) "IOTLB ++ asid=%d addr=0x%"PRIx64" tg=%d level=%d"
# smmuv3.c
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
@ -36,20 +39,17 @@ smmuv3_translate_abort(const char *n, uint16_t sid, uint64_t addr, bool is_write
smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=%d iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
smmuv3_decode_cd(uint32_t oas) "oas=%d"
smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d"
smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz, bool had) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d had:%d"
smmuv3_cmdq_cfgi_ste(int streamid) "streamid =%d"
smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%d - end=0x%d"
smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
smmuv3_cmdq_tlbi_nh_va(int vmid, int asid, uint64_t addr, bool leaf) "vmid =%d asid =%d addr=0x%"PRIx64" leaf=%d"
smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
smmuv3_s1_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid =%d asid =%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
smmuv3_cmdq_tlbi_nh(void) ""
smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
smmu_iotlb_cache_hit(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
smmu_iotlb_cache_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova) "iommu mr=%s asid=%d iova=0x%"PRIx64
smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64


@ -15,6 +15,7 @@
#include "hw/irq.h"
#include "hw/qdev-properties.h"
#include "hw/core/cpu.h"
#include "cpu.h"
#define A9_GIC_NUM_PRIORITY_BITS 5
@ -52,8 +53,18 @@ static void a9mp_priv_realize(DeviceState *dev, Error **errp)
*wdtbusdev;
int i;
bool has_el3;
CPUState *cpu0;
Object *cpuobj;
cpu0 = qemu_get_cpu(0);
cpuobj = OBJECT(cpu0);
if (strcmp(object_get_typename(cpuobj), ARM_CPU_TYPE_NAME("cortex-a9"))) {
/* We might allow Cortex-A5 once we model it */
error_setg(errp,
"Cortex-A9MPCore peripheral can only use Cortex-A9 CPU");
return;
}
scudev = DEVICE(&s->scu);
qdev_prop_set_uint32(scudev, "num-cpu", s->num_cpu);
if (!sysbus_realize(SYS_BUS_DEVICE(&s->scu), errp)) {
@ -70,7 +81,6 @@ static void a9mp_priv_realize(DeviceState *dev, Error **errp)
/* Make the GIC's TZ support match the CPUs. We assume that
* either all the CPUs have TZ, or none do.
*/
cpuobj = OBJECT(qemu_get_cpu(0));
has_el3 = object_property_find(cpuobj, "has_el3", NULL) &&
object_property_get_bool(cpuobj, "has_el3", &error_abort);
qdev_prop_set_bit(gicdev, "has-security-extensions", has_el3);


@ -50,8 +50,15 @@ typedef struct SMMUTransTableInfo {
uint64_t ttb; /* TT base address */
uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
uint8_t granule_sz; /* granule page shift */
bool had; /* hierarchical attribute disable */
} SMMUTransTableInfo;
typedef struct SMMUTLBEntry {
IOMMUTLBEntry entry;
uint8_t level;
uint8_t granule;
} SMMUTLBEntry;
/*
* Generic structure populated by derived SMMU devices
* after decoding the configuration information and used as
@ -91,6 +98,8 @@ typedef struct SMMUPciBus {
typedef struct SMMUIOTLBKey {
uint64_t iova;
uint16_t asid;
uint8_t tg;
uint8_t level;
} SMMUIOTLBKey;
typedef struct SMMUState {
@ -140,7 +149,7 @@ static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
* pair, according to @cfg translation config
*/
int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
/**
* select_tt - compute which translation table shall be used according to
@ -153,9 +162,15 @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
#define SMMU_IOTLB_MAX_SIZE 256
SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
SMMUTransTableInfo *tt, hwaddr iova);
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
uint8_t tg, uint8_t level);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova);
void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl);
/* Unmap the range of all the notifiers registered to any IOMMU mr */
void smmu_inv_notifiers_all(SMMUState *s);


@ -41,6 +41,7 @@ typedef struct SMMUv3State {
uint32_t idr[6];
uint32_t iidr;
uint32_t aidr;
uint32_t cr[3];
uint32_t cr0ack;
uint32_t statusr;


@ -47,6 +47,8 @@
&bfi rd rn lsb msb
&sat rd rn satimm imm sh
&pkh rd rn rm imm tb
&mcr cp opc1 crn crm opc2 rt
&mcrr cp opc1 crm rt rt2
# Data-processing (register)
@ -529,6 +531,23 @@ LDM_a32 ---- 100 b:1 i:1 u:1 w:1 1 rn:4 list:16 &ldst_block
B .... 1010 ........................ @branch
BL .... 1011 ........................ @branch
# Coprocessor instructions
# We decode MCR, MRC, MRRC and MCRR only, because for QEMU the
# other coprocessor instructions always UNDEF.
# The trans_ functions for these will ignore cp values 8..13 for v7 or
# earlier, and 0..13 for v8 and later, because those areas of the
# encoding space may be used for other things, such as VFP or Neon.
@mcr ---- .... opc1:3 . crn:4 rt:4 cp:4 opc2:3 . crm:4 &mcr
@mcrr ---- .... .... rt2:4 rt:4 cp:4 opc1:4 crm:4 &mcrr
MCRR .... 1100 0100 .... .... .... .... .... @mcrr
MRRC .... 1100 0101 .... .... .... .... .... @mcrr
MCR .... 1110 ... 0 .... .... .... ... 1 .... @mcr
MRC .... 1110 ... 1 .... .... .... ... 1 .... @mcr
# Supervisor call
SVC ---- 1111 imm:24 &i


@ -391,12 +391,15 @@ static void arm_cpu_reset(DeviceState *dev)
set_flush_to_zero(1, &env->vfp.standard_fp_status);
set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status);
set_default_nan_mode(1, &env->vfp.standard_fp_status);
set_default_nan_mode(1, &env->vfp.standard_fp_status_f16);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.fp_status);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.standard_fp_status);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.fp_status_f16);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.standard_fp_status_f16);
#ifndef CONFIG_USER_ONLY
if (kvm_enabled()) {
kvm_arm_reset_vcpu(cpu);


@ -609,6 +609,8 @@ typedef struct CPUARMState {
* fp_status: is the "normal" fp status.
* fp_status_fp16: used for half-precision calculations
* standard_fp_status : the ARM "Standard FPSCR Value"
* standard_fp_status_fp16 : used for half-precision
* calculations with the ARM "Standard FPSCR Value"
*
* Half-precision operations are governed by a separate
* flush-to-zero control bit in FPSCR:FZ16. We pass a separate
@ -619,15 +621,20 @@ typedef struct CPUARMState {
* Neon) which the architecture defines as controlled by the
* standard FPSCR value rather than the FPSCR.
*
* The "standard FPSCR but for fp16 ops" is needed because
* the "standard FPSCR" tracks the FPSCR.FZ16 bit rather than
* using a fixed value for it.
*
* To avoid having to transfer exception bits around, we simply
* say that the FPSCR cumulative exception flags are the logical
* OR of the flags in the three fp statuses. This relies on the
* OR of the flags in the four fp statuses. This relies on the
* only thing which needs to read the exception flags being
* an explicit FPSCR read.
*/
float_status fp_status;
float_status fp_status_f16;
float_status standard_fp_status;
float_status standard_fp_status_f16;
/* ZCR_EL[1-3] */
uint64_t zcr_el[4];
@ -1950,7 +1957,6 @@ enum arm_features {
ARM_FEATURE_V8,
ARM_FEATURE_AARCH64, /* supports 64 bit mode */
ARM_FEATURE_CBAR, /* has cp15 CBAR */
ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
ARM_FEATURE_EL2, /* has EL2 Virtualization support */
ARM_FEATURE_EL3, /* has EL3 Secure monitor support */


@ -8462,6 +8462,35 @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
assert((r->state != ARM_CP_STATE_AA32) || (r->opc0 == 0));
/* AArch64 regs are all 64 bit so ARM_CP_64BIT is meaningless */
assert((r->state != ARM_CP_STATE_AA64) || !(r->type & ARM_CP_64BIT));
/*
* This API is only for Arm's system coprocessors (14 and 15) or
* (M-profile or v7A-and-earlier only) for implementation defined
* coprocessors in the range 0..7. Our decode assumes this, since
* 8..13 can be used for other insns including VFP and Neon. See
* valid_cp() in translate.c. Assert here that we haven't tried
* to use an invalid coprocessor number.
*/
switch (r->state) {
case ARM_CP_STATE_BOTH:
/* 0 has a special meaning, but otherwise the same rules as AA32. */
if (r->cp == 0) {
break;
}
/* fall through */
case ARM_CP_STATE_AA32:
if (arm_feature(&cpu->env, ARM_FEATURE_V8) &&
!arm_feature(&cpu->env, ARM_FEATURE_M)) {
assert(r->cp >= 14 && r->cp <= 15);
} else {
assert(r->cp < 8 || (r->cp >= 14 && r->cp <= 15));
}
break;
case ARM_CP_STATE_AA64:
assert(r->cp == 0 || r->cp == CP_REG_ARM64_SYSREG_CP);
break;
default:
g_assert_not_reached();
}
/* The AArch64 pseudocode CheckSystemAccess() specifies that op1
* encodes a minimum access level for the register. We roll this
* runtime check into our general permission check code, so check
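
The new assertion encodes the same rule the decodetree comments rely on: on v8 A/R-profile only cp14 and cp15 are genuine coprocessor encodings, while pre-v8 and M-profile additionally allow the implementation-defined cp0..cp7 range. The valid_cp() filter the comment points at is not part of this excerpt; a sketch of its shape (the real function in translate.c may differ in detail, and DisasContext/arm_dc_feature() are assumed from QEMU's translator headers):

    /* Sketch only; not the function added by this series. */
    static bool valid_cp(DisasContext *s, int cp)
    {
        /*
         * v8 A/R-profile: only cp14/cp15 are coprocessor insns; the rest of
         * the space is VFP/Neon or other encodings. Pre-v8 and M-profile
         * also permit implementation-defined cp0..cp7.
         */
        if (arm_dc_feature(s, ARM_FEATURE_V8) &&
            !arm_dc_feature(s, ARM_FEATURE_M)) {
            return cp >= 14;
        }
        return cp < 8 || cp >= 14;
    }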

target/arm/m-nocp.decode (new file, 42 lines)

@ -0,0 +1,42 @@
# M-profile UserFault.NOCP exception handling
#
# Copyright (c) 2020 Linaro, Ltd
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, see <http://www.gnu.org/licenses/>.
#
# This file is processed by scripts/decodetree.py
#
# For M-profile, the architecture specifies that NOCP UsageFaults
# should take precedence over UNDEF faults over the whole wide
# range of coprocessor-space encodings, with the exception of
# VLLDM and VLSTM. (Compare v8.1M IsCPInstruction() pseudocode and
# v8M Arm ARM rule R_QLGM.) This isn't mandatory for v8.0M but we choose
# to behave the same as v8.1M.
# This decode is handled before any others (and in particular before
# decoding FP instructions which are in the coprocessor space).
# If the coprocessor is not present or disabled then we will generate
# the NOCP exception; otherwise we let the insn through to the main decode.
{
# Special cases which do not take an early NOCP: VLLDM and VLSTM
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
# TODO: VSCCLRM (new in v8.1M) is similar:
#VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ----
NOCP 111- 110- ---- ---- ---- cp:4 ---- ----
# TODO: From v8.1M onwards we will also want this range to NOCP
#NOCP_8_1 111- 1111 ---- ---- ---- ---- ---- ---- cp=10
}
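
As the comments above say, the handler behind these NOCP patterns only raises the UsageFault when the selected coprocessor really is absent or disabled; otherwise it declines so the insn reaches the normal decoders. A rough sketch of that shape (not the handler added by this series; coprocessor_enabled() is a hypothetical stand-in for the real CPACR/NSACR access checks):

    /* Sketch; assumes QEMU's DisasContext, gen_exception_insn() and the
     * arg_NOCP struct that decodetree generates for the patterns above. */
    static bool trans_NOCP(DisasContext *s, arg_NOCP *a)
    {
        if (!coprocessor_enabled(s, a->cp)) {  /* hypothetical check */
            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
                               syn_uncategorized(), default_exception_el(s));
            return true;   /* handled: NOCP UsageFault taken */
        }
        return false;      /* enabled: fall through to the main decode */
    }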


@ -5,6 +5,7 @@ gen = [
decodetree.process('neon-ls.decode', extra_args: '--static-decode=disas_neon_ls'),
decodetree.process('vfp.decode', extra_args: '--static-decode=disas_vfp'),
decodetree.process('vfp-uncond.decode', extra_args: '--static-decode=disas_vfp_uncond'),
decodetree.process('m-nocp.decode', extra_args: '--static-decode=disas_m_nocp'),
decodetree.process('a32.decode', extra_args: '--static-decode=disas_a32'),
decodetree.process('a32-uncond.decode', extra_args: '--static-decode=disas_a32_uncond'),
decodetree.process('t32.decode', extra_args: '--static-decode=disas_t32'),


@ -45,6 +45,8 @@
&sat !extern rd rn satimm imm sh
&pkh !extern rd rn rm imm tb
&cps !extern mode imod M A I F
&mcr !extern cp opc1 crn crm opc2 rt
&mcrr !extern cp opc1 crm rt rt2
# Data-processing (register)
@ -621,6 +623,23 @@ RFE 1110 1001 10.1 .... 1100000000000000 @rfe pu=1
SRS 1110 1000 00.0 1101 1100 0000 000. .... @srs pu=2
SRS 1110 1001 10.0 1101 1100 0000 000. .... @srs pu=1
# Coprocessor instructions
# We decode MCR, MRC, MRRC and MCRR only, because for QEMU the
# other coprocessor instructions always UNDEF.
# The trans_ functions for these will ignore cp values 8..13 for v7 or
# earlier, and 0..13 for v8 and later, because those areas of the
# encoding space may be used for other things, such as VFP or Neon.
@mcr .... .... opc1:3 . crn:4 rt:4 cp:4 opc2:3 . crm:4
@mcrr .... .... .... rt2:4 rt:4 cp:4 opc1:4 crm:4
MCRR 1110 1100 0100 .... .... .... .... .... @mcrr
MRRC 1110 1100 0101 .... .... .... .... .... @mcrr
MCR 1110 1110 ... 0 .... .... .... ... 1 .... @mcr
MRC 1110 1110 ... 1 .... .... .... ... 1 .... @mcr
# Branches
%imm24 26:s1 13:1 11:1 16:10 0:11 !function=t32_branch24


@ -609,25 +609,6 @@ static void write_fp_sreg(DisasContext *s, int reg, TCGv_i32 v)
tcg_temp_free_i64(tmp);
}
TCGv_ptr get_fpstatus_ptr(bool is_f16)
{
TCGv_ptr statusptr = tcg_temp_new_ptr();
int offset;
/* In A64 all instructions (both FP and Neon) use the FPCR; there
* is no equivalent of the A32 Neon "standard FPSCR value".
* However half-precision operations operate under a different
* FZ16 flag and use vfp.fp_status_f16 instead of vfp.fp_status.
*/
if (is_f16) {
offset = offsetof(CPUARMState, vfp.fp_status_f16);
} else {
offset = offsetof(CPUARMState, vfp.fp_status);
}
tcg_gen_addi_ptr(statusptr, cpu_env, offset);
return statusptr;
}
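
The deleted A64-only helper is superseded by the generic fpstatus_ptr() used throughout the hunks below, which takes an explicit flavour rather than a bool. Reconstructed from the call sites and the removed body above (a sketch; the real helper, presumably in translate.h, may differ in detail):

    /* Sketch only; assumes QEMU's TCG types, cpu_env and CPUARMState. */
    typedef enum ARMFPStatusFlavour {
        FPST_FPCR,      /* FP operations governed by FPCR/FPSCR */
        FPST_FPCR_F16,  /* as above, but using the FZ16-controlled fp16 status */
        FPST_STD,       /* Neon "standard FPSCR value" operations */
        FPST_STD_F16,   /* standard-value fp16 operations */
    } ARMFPStatusFlavour;

    static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
    {
        TCGv_ptr statusptr = tcg_temp_new_ptr();
        int offset;

        switch (flavour) {
        case FPST_FPCR:
            offset = offsetof(CPUARMState, vfp.fp_status);
            break;
        case FPST_FPCR_F16:
            offset = offsetof(CPUARMState, vfp.fp_status_f16);
            break;
        case FPST_STD:
            offset = offsetof(CPUARMState, vfp.standard_fp_status);
            break;
        case FPST_STD_F16:
            offset = offsetof(CPUARMState, vfp.standard_fp_status_f16);
            break;
        default:
            g_assert_not_reached();
        }
        tcg_gen_addi_ptr(statusptr, cpu_env, offset);
        return statusptr;
    }
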
/* Expand a 2-operand AdvSIMD vector operation using an expander function. */
static void gen_gvec_fn2(DisasContext *s, bool is_q, int rd, int rn,
GVecGen2Fn *gvec_fn, int vece)
@ -689,7 +670,7 @@ static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
int rm, bool is_fp16, int data,
gen_helper_gvec_3_ptr *fn)
{
TCGv_ptr fpst = get_fpstatus_ptr(is_fp16);
TCGv_ptr fpst = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
vec_full_reg_offset(s, rn),
vec_full_reg_offset(s, rm), fpst,
@ -5898,7 +5879,7 @@ static void handle_fp_compare(DisasContext *s, int size,
bool cmp_with_zero, bool signal_all_nans)
{
TCGv_i64 tcg_flags = tcg_temp_new_i64();
TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
if (size == MO_64) {
TCGv_i64 tcg_vn, tcg_vm;
@ -6157,7 +6138,7 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
break;
case 0x3: /* FSQRT */
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
break;
case 0x8: /* FRINTN */
@ -6167,7 +6148,7 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
case 0xc: /* FRINTA */
{
TCGv_i32 tcg_rmode = tcg_const_i32(arm_rmode_to_sf(opcode & 7));
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
@ -6177,11 +6158,11 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
break;
}
case 0xe: /* FRINTX */
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, fpst);
break;
case 0xf: /* FRINTI */
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
break;
default:
@ -6253,7 +6234,7 @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
g_assert_not_reached();
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
if (rmode >= 0) {
TCGv_i32 tcg_rmode = tcg_const_i32(rmode);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
@ -6330,7 +6311,7 @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
g_assert_not_reached();
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
if (rmode >= 0) {
TCGv_i32 tcg_rmode = tcg_const_i32(rmode);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
@ -6365,7 +6346,7 @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
/* Single to half */
TCGv_i32 tcg_rd = tcg_temp_new_i32();
TCGv_i32 ahp = get_ahp_flag();
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_fcvt_f32_to_f16(tcg_rd, tcg_rn, fpst, ahp);
/* write_fp_sreg is OK here because top half of tcg_rd is zero */
@ -6385,7 +6366,7 @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
/* Double to single */
gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, cpu_env);
} else {
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
TCGv_i32 ahp = get_ahp_flag();
/* Double to half */
gen_helper_vfp_fcvt_f64_to_f16(tcg_rd, tcg_rn, fpst, ahp);
@ -6401,7 +6382,7 @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
case 0x3:
{
TCGv_i32 tcg_rn = read_fp_sreg(s, rn);
TCGv_ptr tcg_fpst = get_fpstatus_ptr(false);
TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
TCGv_i32 tcg_ahp = get_ahp_flag();
tcg_gen_ext16u_i32(tcg_rn, tcg_rn);
if (dtype == 0) {
@ -6518,7 +6499,7 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
TCGv_ptr fpst;
tcg_res = tcg_temp_new_i32();
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_op1 = read_fp_sreg(s, rn);
tcg_op2 = read_fp_sreg(s, rm);
@ -6571,7 +6552,7 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
TCGv_ptr fpst;
tcg_res = tcg_temp_new_i64();
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_op1 = read_fp_dreg(s, rn);
tcg_op2 = read_fp_dreg(s, rm);
@ -6624,7 +6605,7 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
TCGv_ptr fpst;
tcg_res = tcg_temp_new_i32();
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
@ -6723,7 +6704,7 @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
{
TCGv_i32 tcg_op1, tcg_op2, tcg_op3;
TCGv_i32 tcg_res = tcg_temp_new_i32();
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
tcg_op1 = read_fp_sreg(s, rn);
tcg_op2 = read_fp_sreg(s, rm);
@ -6761,7 +6742,7 @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
{
TCGv_i64 tcg_op1, tcg_op2, tcg_op3;
TCGv_i64 tcg_res = tcg_temp_new_i64();
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
tcg_op1 = read_fp_dreg(s, rn);
tcg_op2 = read_fp_dreg(s, rm);
@ -6799,7 +6780,7 @@ static void handle_fp_3src_half(DisasContext *s, bool o0, bool o1,
{
TCGv_i32 tcg_op1, tcg_op2, tcg_op3;
TCGv_i32 tcg_res = tcg_temp_new_i32();
TCGv_ptr fpst = get_fpstatus_ptr(true);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR_F16);
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
@ -6945,7 +6926,7 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
TCGv_i32 tcg_shift, tcg_single;
TCGv_i64 tcg_double;
tcg_fpstatus = get_fpstatus_ptr(type == 3);
tcg_fpstatus = fpstatus_ptr(type == 3 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_shift = tcg_const_i32(64 - scale);
@ -7233,7 +7214,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
static void handle_fjcvtzs(DisasContext *s, int rd, int rn)
{
TCGv_i64 t = read_fp_dreg(s, rn);
TCGv_ptr fpstatus = get_fpstatus_ptr(false);
TCGv_ptr fpstatus = fpstatus_ptr(FPST_FPCR);
gen_helper_fjcvtzs(t, t, fpstatus);
@ -7847,7 +7828,7 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
* Note that correct NaN propagation requires that we do these
* operations in exactly the order specified by the pseudocode.
*/
TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
int fpopcode = opcode | is_min << 4 | is_u << 5;
int vmap = (1 << elements) - 1;
TCGv_i32 tcg_res32 = do_reduction_op(s, fpopcode, rn, esize,
@ -8359,7 +8340,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
return;
}
fpst = get_fpstatus_ptr(size == MO_16);
fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
break;
default:
unallocated_encoding(s);
@ -8872,7 +8853,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
int elements, int is_signed,
int fracbits, int size)
{
TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
TCGv_ptr tcg_fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
TCGv_i32 tcg_shift = NULL;
MemOp mop = size | (is_signed ? MO_SIGN : 0);
@ -9053,7 +9034,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
assert(!(is_scalar && is_q));
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
tcg_fpstatus = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
fracbits = (16 << size) - immhb;
tcg_shift = tcg_const_i32(fracbits);
@ -9392,7 +9373,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
int fpopcode, int rd, int rn, int rm)
{
int pass;
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
for (pass = 0; pass < elements; pass++) {
if (size) {
@ -9785,7 +9766,7 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
return;
}
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
@ -10038,7 +10019,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
return;
}
fpst = get_fpstatus_ptr(size == MO_16);
fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
if (is_double) {
TCGv_i64 tcg_op = tcg_temp_new_i64();
@ -10168,7 +10149,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
int size, int rn, int rd)
{
bool is_double = (size == 3);
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
if (is_double) {
TCGv_i64 tcg_op = tcg_temp_new_i64();
@ -10309,7 +10290,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
} else {
TCGv_i32 tcg_lo = tcg_temp_new_i32();
TCGv_i32 tcg_hi = tcg_temp_new_i32();
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
TCGv_i32 ahp = get_ahp_flag();
tcg_gen_extr_i64_i32(tcg_lo, tcg_hi, tcg_op);
@ -10571,7 +10552,7 @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
if (is_fcvt) {
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rmode));
tcg_fpstatus = get_fpstatus_ptr(false);
tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
} else {
tcg_rmode = NULL;
@ -11396,7 +11377,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
/* Floating point operations need fpst */
if (opcode >= 0x58) {
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
} else {
fpst = NULL;
}
@ -11994,7 +11975,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
break;
}
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_FPCR_F16);
if (pairwise) {
int maxpass = is_q ? 8 : 4;
@ -12287,7 +12268,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
/* 16 -> 32 bit fp conversion */
int srcelt = is_q ? 4 : 0;
TCGv_i32 tcg_res[4];
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
TCGv_i32 ahp = get_ahp_flag();
for (pass = 0; pass < 4; pass++) {
@ -12759,7 +12740,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
}
if (need_fpstatus || need_rmode) {
tcg_fpstatus = get_fpstatus_ptr(false);
tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
} else {
tcg_fpstatus = NULL;
}
@ -13149,7 +13130,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
}
if (need_rmode || need_fpst) {
tcg_fpstatus = get_fpstatus_ptr(true);
tcg_fpstatus = fpstatus_ptr(FPST_FPCR_F16);
}
if (need_rmode) {
@ -13458,7 +13439,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
if (is_fp) {
fpst = get_fpstatus_ptr(is_fp16);
fpst = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
} else {
fpst = NULL;
}


@ -37,7 +37,6 @@ TCGv_i64 cpu_reg_sp(DisasContext *s, int reg);
TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf);
TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf);
void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
TCGv_ptr get_fpstatus_ptr(bool);
bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
unsigned int imms, unsigned int immr);
bool sve_access_check(DisasContext *s);


@ -181,7 +181,7 @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
}
opr_sz = (1 + a->q) * 8;
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(a->size == 0 ? FPST_STD_F16 : FPST_STD);
fn_gvec_ptr = a->size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
vfp_reg_offset(1, a->vn),
@ -218,7 +218,7 @@ static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
}
opr_sz = (1 + a->q) * 8;
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(a->size == 0 ? FPST_STD_F16 : FPST_STD);
fn_gvec_ptr = a->size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
vfp_reg_offset(1, a->vn),
@ -322,7 +322,7 @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
fn_gvec_ptr = (a->size ? gen_helper_gvec_fcmlas_idx
: gen_helper_gvec_fcmlah_idx);
opr_sz = (1 + a->q) * 8;
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(a->size == 0 ? FPST_STD_F16 : FPST_STD);
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
vfp_reg_offset(1, a->vn),
vfp_reg_offset(1, a->vm),
@ -358,7 +358,7 @@ static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
fn_gvec = a->u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
opr_sz = (1 + a->q) * 8;
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(FPST_STD);
tcg_gen_gvec_3_ool(vfp_reg_offset(1, a->vd),
vfp_reg_offset(1, a->vn),
vfp_reg_offset(1, a->rm),
@ -1063,7 +1063,7 @@ static bool do_3same_fp(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn,
return true;
}
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
TCGv_ptr fpstatus = fpstatus_ptr(FPST_STD);
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
tmp = neon_load_reg(a->vn, pass);
tmp2 = neon_load_reg(a->vm, pass);
@ -1091,7 +1091,7 @@ static bool do_3same_fp(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn,
uint32_t rn_ofs, uint32_t rm_ofs, \
uint32_t oprsz, uint32_t maxsz) \
{ \
TCGv_ptr fpst = get_fpstatus_ptr(1); \
TCGv_ptr fpst = fpstatus_ptr(FPST_STD); \
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpst, \
oprsz, maxsz, 0, FUNC); \
tcg_temp_free_ptr(fpst); \
@ -1287,7 +1287,7 @@ static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
* early. Since Q is 0 there are always just two passes, so instead
* of a complicated loop over each pass we just unroll.
*/
fpstatus = get_fpstatus_ptr(1);
fpstatus = fpstatus_ptr(FPST_STD);
tmp = neon_load_reg(a->vn, 0);
tmp2 = neon_load_reg(a->vn, 1);
fn(tmp, tmp, tmp2, fpstatus);
@ -1790,7 +1790,7 @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
return true;
}
fpstatus = get_fpstatus_ptr(1);
fpstatus = fpstatus_ptr(FPST_STD);
shiftv = tcg_const_i32(a->shift);
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
tmp = neon_load_reg(a->vm, pass);
@ -2591,7 +2591,7 @@ static bool trans_VMLS_2sc(DisasContext *s, arg_2scalar *a)
#define WRAP_FP_FN(WRAPNAME, FUNC) \
static void WRAPNAME(TCGv_i32 rd, TCGv_i32 rn, TCGv_i32 rm) \
{ \
TCGv_ptr fpstatus = get_fpstatus_ptr(1); \
TCGv_ptr fpstatus = fpstatus_ptr(FPST_STD); \
FUNC(rd, rn, rm, fpstatus); \
tcg_temp_free_ptr(fpstatus); \
}
@ -3480,7 +3480,7 @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
return true;
}
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_STD);
ahp = get_ahp_flag();
tmp = neon_load_reg(a->vm, 0);
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
@ -3528,7 +3528,7 @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
return true;
}
fpst = get_fpstatus_ptr(true);
fpst = fpstatus_ptr(FPST_STD);
ahp = get_ahp_flag();
tmp3 = tcg_temp_new_i32();
tmp = neon_load_reg(a->vm, 0);
@ -3838,7 +3838,7 @@ static bool do_2misc_fp(DisasContext *s, arg_2misc *a,
return true;
}
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(FPST_STD);
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
TCGv_i32 tmp = neon_load_reg(a->vm, pass);
fn(tmp, tmp, fpst);
@ -3932,7 +3932,7 @@ static bool do_vrint(DisasContext *s, arg_2misc *a, int rmode)
return true;
}
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(FPST_STD);
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rmode));
gen_helper_set_neon_rmode(tcg_rmode, tcg_rmode, cpu_env);
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
@ -3993,7 +3993,7 @@ static bool do_vcvt(DisasContext *s, arg_2misc *a, int rmode, bool is_signed)
return true;
}
fpst = get_fpstatus_ptr(1);
fpst = fpstatus_ptr(FPST_STD);
tcg_shift = tcg_const_i32(0);
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rmode));
gen_helper_set_neon_rmode(tcg_rmode, tcg_rmode, cpu_env);
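
The repeated edits in this file, and in the SVE and VFP translators below, all follow one mechanical mapping from the old boolean argument to the new enum. As a reading aid, a summary drawn only from the hunks shown in this commit:

/*
 * Old call                           New call
 * get_fpstatus_ptr(1) / (true)   ->  fpstatus_ptr(FPST_STD)
 *     (FPST_STD_F16 instead where the insn's size field selects fp16)
 * get_fpstatus_ptr(0) / (false)  ->  fpstatus_ptr(FPST_FPCR)
 *     (FPST_FPCR_F16 instead where the element size is MO_16)
 */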

target/arm/translate-sve.c

@ -3470,7 +3470,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3496,7 +3496,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a)
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3528,7 +3528,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,
tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
status = get_fpstatus_ptr(a->esz == MO_16);
status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
fn(temp, t_zn, t_pg, status, t_desc);
tcg_temp_free_ptr(t_zn);
@ -3570,7 +3570,7 @@ DO_VPZ(FMAXV, fmaxv)
static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
{
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
@ -3618,7 +3618,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
gen_helper_gvec_3_ptr *fn)
{
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
@ -3670,7 +3670,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a)
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3710,7 +3710,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
t_pg = tcg_temp_new_ptr();
tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
t_fpst = get_fpstatus_ptr(a->esz == MO_16);
t_fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
@ -3737,7 +3737,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3779,7 +3779,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3831,7 +3831,7 @@ static void do_fp_scalar(DisasContext *s, int zd, int zn, int pg, bool is_fp16,
tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, zn));
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
status = get_fpstatus_ptr(is_fp16);
status = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
fn(t_zd, t_zn, t_pg, scalar, status, desc);
@ -3895,7 +3895,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3939,7 +3939,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -3958,7 +3958,7 @@ static bool do_fmla(DisasContext *s, arg_rprrr_esz *a,
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_5_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -4001,7 +4001,7 @@ static bool trans_FCMLA_zpzzz(DisasContext *s, arg_FCMLA_zpzzz *a)
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_5_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -4024,7 +4024,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
tcg_debug_assert(a->rd == a->ra);
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
@ -4045,7 +4045,7 @@ static bool do_zpz_ptr(DisasContext *s, int rd, int rn, int pg,
{
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_ptr status = get_fpstatus_ptr(is_fp16);
TCGv_ptr status = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
vec_full_reg_offset(s, rn),
pred_full_reg_offset(s, pg),
@ -4191,7 +4191,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
TCGv_i32 tmode = tcg_const_i32(mode);
TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
gen_helper_set_rmode(tmode, tmode, status);

target/arm/translate-vfp.c.inc

@ -95,14 +95,11 @@ static inline long vfp_f16_offset(unsigned reg, bool top)
static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
{
if (s->fp_excp_el) {
if (arm_dc_feature(s, ARM_FEATURE_M)) {
gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized(),
s->fp_excp_el);
} else {
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
syn_fp_access_trap(1, 0xe, false),
s->fp_excp_el);
}
/* M-profile handled this earlier, in disas_m_nocp() */
assert(!arm_dc_feature(s, ARM_FEATURE_M));
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
syn_fp_access_trap(1, 0xe, false),
s->fp_excp_el);
return false;
}
@ -362,7 +359,7 @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
return true;
}
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rounding));
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
@ -425,7 +422,7 @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
return true;
}
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_shift = tcg_const_i32(0);
@ -1234,7 +1231,7 @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
f0 = tcg_temp_new_i32();
f1 = tcg_temp_new_i32();
fd = tcg_temp_new_i32();
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
neon_load_reg32(f0, vn);
neon_load_reg32(f1, vm);
@ -1317,7 +1314,7 @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
f0 = tcg_temp_new_i64();
f1 = tcg_temp_new_i64();
fd = tcg_temp_new_i64();
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
neon_load_reg64(f0, vn);
neon_load_reg64(f1, vm);
@ -1799,7 +1796,7 @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
/* VFNMA, VFNMS */
gen_helper_vfp_negs(vd, vd);
}
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
neon_store_reg32(vd, a->vd);
@ -1890,7 +1887,7 @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
/* VFNMA, VFNMS */
gen_helper_vfp_negd(vd, vd);
}
fpst = get_fpstatus_ptr(0);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
neon_store_reg64(vd, a->vd);
@ -2174,7 +2171,7 @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
ahp_mode = get_ahp_flag();
tmp = tcg_temp_new_i32();
/* The T bit tells us if we want the low or high 16 bits of Vm */
@ -2211,7 +2208,7 @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
ahp_mode = get_ahp_flag();
tmp = tcg_temp_new_i32();
/* The T bit tells us if we want the low or high 16 bits of Vm */
@ -2240,7 +2237,7 @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
ahp_mode = get_ahp_flag();
tmp = tcg_temp_new_i32();
@ -2277,7 +2274,7 @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
ahp_mode = get_ahp_flag();
tmp = tcg_temp_new_i32();
vm = tcg_temp_new_i64();
@ -2307,7 +2304,7 @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
tmp = tcg_temp_new_i32();
neon_load_reg32(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rints(tmp, tmp, fpst);
neon_store_reg32(tmp, a->vd);
tcg_temp_free_ptr(fpst);
@ -2339,7 +2336,7 @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
tmp = tcg_temp_new_i64();
neon_load_reg64(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rintd(tmp, tmp, fpst);
neon_store_reg64(tmp, a->vd);
tcg_temp_free_ptr(fpst);
@ -2363,7 +2360,7 @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
tmp = tcg_temp_new_i32();
neon_load_reg32(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_rmode = tcg_const_i32(float_round_to_zero);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
gen_helper_rints(tmp, tmp, fpst);
@ -2400,7 +2397,7 @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
tmp = tcg_temp_new_i64();
neon_load_reg64(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_rmode = tcg_const_i32(float_round_to_zero);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
gen_helper_rintd(tmp, tmp, fpst);
@ -2427,7 +2424,7 @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
tmp = tcg_temp_new_i32();
neon_load_reg32(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rints_exact(tmp, tmp, fpst);
neon_store_reg32(tmp, a->vd);
tcg_temp_free_ptr(fpst);
@ -2459,7 +2456,7 @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
tmp = tcg_temp_new_i64();
neon_load_reg64(tmp, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rintd_exact(tmp, tmp, fpst);
neon_store_reg64(tmp, a->vd);
tcg_temp_free_ptr(fpst);
@ -2538,7 +2535,7 @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
vm = tcg_temp_new_i32();
neon_load_reg32(vm, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
if (a->s) {
/* i32 -> f32 */
gen_helper_vfp_sitos(vm, vm, fpst);
@ -2574,7 +2571,7 @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
vm = tcg_temp_new_i32();
vd = tcg_temp_new_i64();
neon_load_reg32(vm, a->vm);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
if (a->s) {
/* i32 -> f64 */
gen_helper_vfp_sitod(vd, vm, fpst);
@ -2640,7 +2637,7 @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
vd = tcg_temp_new_i32();
neon_load_reg32(vd, a->vd);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
shift = tcg_const_i32(frac_bits);
/* Switch on op:U:sx bits */
@ -2705,7 +2702,7 @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
vd = tcg_temp_new_i64();
neon_load_reg64(vd, a->vd);
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
shift = tcg_const_i32(frac_bits);
/* Switch on op:U:sx bits */
@ -2758,7 +2755,7 @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
vm = tcg_temp_new_i32();
neon_load_reg32(vm, a->vm);
@ -2800,7 +2797,7 @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
return true;
}
fpst = get_fpstatus_ptr(false);
fpst = fpstatus_ptr(FPST_FPCR);
vm = tcg_temp_new_i64();
vd = tcg_temp_new_i32();
neon_load_reg64(vm, a->vm);
@ -2842,9 +2839,14 @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
!arm_dc_feature(s, ARM_FEATURE_V8)) {
return false;
}
/* If not secure, UNDEF. */
/*
* If not secure, UNDEF. We must emit code for this
* rather than returning false so that this takes
* precedence over the m-nocp.decode NOCP fallback.
*/
if (!s->v8m_secure) {
return false;
unallocated_encoding(s);
return true;
}
/* If no fpu, NOP. */
if (!dc_isar_feature(aa32_vfp, s)) {
@ -2863,3 +2865,33 @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
s->base.is_jmp = DISAS_UPDATE_EXIT;
return true;
}
static bool trans_NOCP(DisasContext *s, arg_NOCP *a)
{
/*
* Handle M-profile early check for disabled coprocessor:
* all we need to do here is emit the NOCP exception if
* the coprocessor is disabled. Otherwise we return false
* and the real VFP/etc decode will handle the insn.
*/
assert(arm_dc_feature(s, ARM_FEATURE_M));
if (a->cp == 11) {
a->cp = 10;
}
/* TODO: in v8.1M cp 8, 9, 14, 15 also are governed by the cp10 enable */
if (a->cp != 10) {
gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
syn_uncategorized(), default_exception_el(s));
return true;
}
if (s->fp_excp_el != 0) {
gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
syn_uncategorized(), s->fp_excp_el);
return true;
}
return false;
}

target/arm/translate.c

@ -49,8 +49,6 @@
#define ENABLE_ARCH_7 arm_dc_feature(s, ARM_FEATURE_V7)
#define ENABLE_ARCH_8 arm_dc_feature(s, ARM_FEATURE_V8)
#define ARCH(x) do { if (!ENABLE_ARCH_##x) goto illegal_op; } while(0)
#include "translate.h"
#if defined(CONFIG_USER_ONLY)
@ -59,8 +57,9 @@
#define IS_USER(s) (s->user)
#endif
/* We reuse the same 64-bit temporaries for efficiency. */
/* These are TCG temporaries used only by the legacy iwMMXt decoder */
static TCGv_i64 cpu_V0, cpu_V1, cpu_M0;
/* These are TCG globals which alias CPUARMState fields */
static TCGv_i32 cpu_R[16];
TCGv_i32 cpu_CF, cpu_NF, cpu_VF, cpu_ZF;
TCGv_i64 cpu_exclusive_addr;
@ -1095,19 +1094,6 @@ static inline void gen_hlt(DisasContext *s, int imm)
unallocated_encoding(s);
}
static TCGv_ptr get_fpstatus_ptr(int neon)
{
TCGv_ptr statusptr = tcg_temp_new_ptr();
int offset;
if (neon) {
offset = offsetof(CPUARMState, vfp.standard_fp_status);
} else {
offset = offsetof(CPUARMState, vfp.fp_status);
}
tcg_gen_addi_ptr(statusptr, cpu_env, offset);
return statusptr;
}
static inline long vfp_reg_offset(bool dp, unsigned reg)
{
if (dp) {
@ -1176,6 +1162,7 @@ static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
#define ARM_CP_RW_BIT (1 << 20)
/* Include the VFP and Neon decoders */
#include "decode-m-nocp.c.inc"
#include "translate-vfp.c.inc"
#include "translate-neon.c.inc"
@ -2471,21 +2458,6 @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
return 1;
}
#define VFP_REG_SHR(x, n) (((n) > 0) ? (x) >> (n) : (x) << -(n))
#define VFP_DREG(reg, insn, bigbit, smallbit) do { \
if (dc_isar_feature(aa32_simd_r32, s)) { \
reg = (((insn) >> (bigbit)) & 0x0f) \
| (((insn) >> ((smallbit) - 4)) & 0x10); \
} else { \
if (insn & (1 << (smallbit))) \
return 1; \
reg = ((insn) >> (bigbit)) & 0x0f; \
}} while (0)
#define VFP_DREG_D(reg, insn) VFP_DREG(reg, insn, 12, 22)
#define VFP_DREG_N(reg, insn) VFP_DREG(reg, insn, 16, 7)
#define VFP_DREG_M(reg, insn) VFP_DREG(reg, insn, 0, 5)
static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
{
#ifndef CONFIG_USER_ONLY
@ -4544,48 +4516,12 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
int opc1, int crn, int crm, int opc2,
bool isread, int rt, int rt2)
{
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
const ARMCPRegInfo *ri;
cpnum = (insn >> 8) & 0xf;
/* First check for coprocessor space used for XScale/iwMMXt insns */
if (arm_dc_feature(s, ARM_FEATURE_XSCALE) && (cpnum < 2)) {
if (extract32(s->c15_cpar, cpnum, 1) == 0) {
return 1;
}
if (arm_dc_feature(s, ARM_FEATURE_IWMMXT)) {
return disas_iwmmxt_insn(s, insn);
} else if (arm_dc_feature(s, ARM_FEATURE_XSCALE)) {
return disas_dsp_insn(s, insn);
}
return 1;
}
/* Otherwise treat as a generic register access */
is64 = (insn & (1 << 25)) == 0;
if (!is64 && ((insn & (1 << 4)) == 0)) {
/* cdp */
return 1;
}
crm = insn & 0xf;
if (is64) {
crn = 0;
opc1 = (insn >> 4) & 0xf;
opc2 = 0;
rt2 = (insn >> 16) & 0xf;
} else {
crn = (insn >> 16) & 0xf;
opc1 = (insn >> 21) & 7;
opc2 = (insn >> 5) & 7;
rt2 = 0;
}
isread = (insn >> 20) & 1;
rt = (insn >> 12) & 0xf;
ri = get_arm_cp_reginfo(s->cp_regs,
ENCODE_CP_REG(cpnum, is64, s->ns, crn, crm, opc1, opc2));
if (ri) {
@ -4593,7 +4529,8 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
/* Check access permissions */
if (!cp_access_ok(s->current_el, ri, isread)) {
return 1;
unallocated_encoding(s);
return;
}
if (s->hstr_active || ri->accessfn ||
@ -4667,14 +4604,15 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
/* Handle special cases first */
switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
case ARM_CP_NOP:
return 0;
return;
case ARM_CP_WFI:
if (isread) {
return 1;
unallocated_encoding(s);
return;
}
gen_set_pc_im(s, s->base.pc_next);
s->base.is_jmp = DISAS_WFI;
return 0;
return;
default:
break;
}
@ -4734,7 +4672,7 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
/* Write */
if (ri->type & ARM_CP_CONST) {
/* If not forbidden by access permissions, treat as WI */
return 0;
return;
}
if (is64) {
@ -4800,7 +4738,7 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
gen_lookup_tb(s);
}
return 0;
return;
}
/* Unknown register; this might be a guest error or a QEMU
@ -4820,9 +4758,27 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
s->ns ? "non-secure" : "secure");
}
return 1;
unallocated_encoding(s);
return;
}
/* Decode XScale DSP or iWMMXt insn (in the copro space, cp=0 or 1) */
static void disas_xscale_insn(DisasContext *s, uint32_t insn)
{
int cpnum = (insn >> 8) & 0xf;
if (extract32(s->c15_cpar, cpnum, 1) == 0) {
unallocated_encoding(s);
} else if (arm_dc_feature(s, ARM_FEATURE_IWMMXT)) {
if (disas_iwmmxt_insn(s, insn)) {
unallocated_encoding(s);
}
} else if (arm_dc_feature(s, ARM_FEATURE_XSCALE)) {
if (disas_dsp_insn(s, insn)) {
unallocated_encoding(s);
}
}
}
/* Store a 64-bit value to a register pair. Clobbers val. */
static void gen_storeq_reg(DisasContext *s, int rlow, int rhigh, TCGv_i64 val)
@ -5222,6 +5178,68 @@ static int t16_pop_list(DisasContext *s, int x)
#include "decode-t32.c.inc"
#include "decode-t16.c.inc"
static bool valid_cp(DisasContext *s, int cp)
{
/*
* Return true if this coprocessor field indicates something
* that's really a possible coprocessor.
* For v7 and earlier, coprocessors 8..15 were reserved for Arm use,
* and of those only cp14 and cp15 were used for registers.
* cp10 and cp11 were used for VFP and Neon, whose decode is
* dealt with elsewhere. With the advent of fp16, cp9 is also
* now part of VFP.
* For v8A and later, the encoding has been tightened so that
* only cp14 and cp15 are valid, and other values aren't considered
* to be in the coprocessor-instruction space at all. v8M still
* permits coprocessors 0..7.
*/
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
!arm_dc_feature(s, ARM_FEATURE_M)) {
return cp >= 14;
}
return cp < 8 || cp >= 14;
}
static bool trans_MCR(DisasContext *s, arg_MCR *a)
{
if (!valid_cp(s, a->cp)) {
return false;
}
do_coproc_insn(s, a->cp, false, a->opc1, a->crn, a->crm, a->opc2,
false, a->rt, 0);
return true;
}
static bool trans_MRC(DisasContext *s, arg_MRC *a)
{
if (!valid_cp(s, a->cp)) {
return false;
}
do_coproc_insn(s, a->cp, false, a->opc1, a->crn, a->crm, a->opc2,
true, a->rt, 0);
return true;
}
static bool trans_MCRR(DisasContext *s, arg_MCRR *a)
{
if (!valid_cp(s, a->cp)) {
return false;
}
do_coproc_insn(s, a->cp, true, a->opc1, 0, a->crm, 0,
false, a->rt, a->rt2);
return true;
}
static bool trans_MRRC(DisasContext *s, arg_MRRC *a)
{
if (!valid_cp(s, a->cp)) {
return false;
}
do_coproc_insn(s, a->cp, true, a->opc1, 0, a->crm, 0,
true, a->rt, a->rt2);
return true;
}
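
A few worked examples of the rule that valid_cp() above implements, derived directly from its two return statements:

/*
 * v8-A (not M-profile): only cp14 and cp15 pass; other values are not
 *                       treated as coprocessor space at all.
 * v7 and earlier:       cp0..cp7 and cp14/cp15 pass; cp8..cp13 fail.
 * v8-M:                 cp0..cp7 and cp14/cp15 pass, per the v8M note
 *                       in the comment above.
 */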
/* Helpers to swap operands for reverse-subtract. */
static void gen_rsb(TCGv_i32 dst, TCGv_i32 a, TCGv_i32 b)
{
@ -7862,7 +7880,7 @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
{
TCGv_i32 tmp;
/* For A32, ARCH(5) is checked near the start of the uncond block. */
/* For A32, ARM_FEATURE_V5 is checked near the start of the uncond block. */
if (s->thumb && (a->imm & 2)) {
return false;
}
@ -8228,7 +8246,10 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
* choose to UNDEF. In ARMv5 and above the space is used
* for miscellaneous unconditional instructions.
*/
ARCH(5);
if (!arm_dc_feature(s, ARM_FEATURE_V5)) {
unallocated_encoding(s);
return;
}
/* Unconditional instructions. */
/* TODO: Perhaps merge these into one decodetree output file. */
@ -8265,25 +8286,18 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
return;
}
/* fall back to legacy decoder */
switch ((insn >> 24) & 0xf) {
case 0xc:
case 0xd:
case 0xe:
if (((insn >> 8) & 0xe) == 10) {
/* VFP, but failed disas_vfp. */
goto illegal_op;
/* TODO: convert xscale/iwmmxt decoder to decodetree ?? */
if (arm_dc_feature(s, ARM_FEATURE_XSCALE)) {
if (((insn & 0x0c000e00) == 0x0c000000)
&& ((insn & 0x03000000) != 0x03000000)) {
/* Coprocessor insn, coprocessor 0 or 1 */
disas_xscale_insn(s, insn);
return;
}
if (disas_coproc_insn(s, insn)) {
/* Coprocessor. */
goto illegal_op;
}
break;
default:
illegal_op:
unallocated_encoding(s);
break;
}
illegal_op:
unallocated_encoding(s);
}
static bool thumb_insn_is_16bit(DisasContext *s, uint32_t pc, uint32_t insn)
@ -8360,7 +8374,23 @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
goto illegal_op;
}
} else if ((insn & 0xf800e800) != 0xf000e800) {
ARCH(6T2);
if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
unallocated_encoding(s);
return;
}
}
if (arm_dc_feature(s, ARM_FEATURE_M)) {
/*
* NOCP takes precedence over any UNDEF for (almost) the
* entire wide range of coprocessor-space encodings, so check
* for it first before proceeding to actually decode eg VFP
* insns. This decode also handles the few insns which are
* in copro space but do not have NOCP checks (eg VLLDM, VLSTM).
*/
if (disas_m_nocp(s, insn)) {
return;
}
}
if ((insn & 0xef000000) == 0xef000000) {
@ -8401,52 +8431,9 @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
((insn >> 28) == 0xe && disas_vfp(s, insn))) {
return;
}
/* fall back to legacy decoder */
switch ((insn >> 25) & 0xf) {
case 0: case 1: case 2: case 3:
/* 16-bit instructions. Should never happen. */
abort();
case 6: case 7: case 14: case 15:
/* Coprocessor. */
if (arm_dc_feature(s, ARM_FEATURE_M)) {
/* 0b111x_11xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx */
if (extract32(insn, 24, 2) == 3) {
goto illegal_op; /* op0 = 0b11 : unallocated */
}
if (((insn >> 8) & 0xe) == 10 &&
dc_isar_feature(aa32_fpsp_v2, s)) {
/* FP, and the CPU supports it */
goto illegal_op;
} else {
/* All other insns: NOCP */
gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
syn_uncategorized(),
default_exception_el(s));
}
break;
}
if (((insn >> 24) & 3) == 3) {
/* Neon DP, but failed disas_neon_dp() */
goto illegal_op;
} else if (((insn >> 8) & 0xe) == 10) {
/* VFP, but failed disas_vfp. */
goto illegal_op;
} else {
if (insn & (1 << 28))
goto illegal_op;
if (disas_coproc_insn(s, insn)) {
goto illegal_op;
}
}
break;
case 12:
goto illegal_op;
default:
illegal_op:
unallocated_encoding(s);
}
illegal_op:
unallocated_encoding(s);
}
static void disas_thumb_insn(DisasContext *s, uint32_t insn)
@ -8567,7 +8554,6 @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
cpu_V0 = tcg_temp_new_i64();
cpu_V1 = tcg_temp_new_i64();
/* FIXME: cpu_M0 can probably be the same as cpu_V0. */
cpu_M0 = tcg_temp_new_i64();
}

target/arm/translate.h

@ -393,4 +393,56 @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
/*
* Enum for argument to fpstatus_ptr().
*/
typedef enum ARMFPStatusFlavour {
FPST_FPCR,
FPST_FPCR_F16,
FPST_STD,
FPST_STD_F16,
} ARMFPStatusFlavour;
/**
* fpstatus_ptr: return TCGv_ptr to the specified fp_status field
*
* We have multiple softfloat float_status fields in the Arm CPU state struct
* (see the comment in cpu.h for details). Return a TCGv_ptr which has
* been set up to point to the requested field in the CPU state struct.
* The options are:
*
* FPST_FPCR
* for non-FP16 operations controlled by the FPCR
* FPST_FPCR_F16
* for operations controlled by the FPCR where FPCR.FZ16 is to be used
* FPST_STD
* for A32/T32 Neon operations using the "standard FPSCR value"
* FPST_STD_F16
* as FPST_STD, but where FPCR.FZ16 is to be used
*/
static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
{
TCGv_ptr statusptr = tcg_temp_new_ptr();
int offset;
switch (flavour) {
case FPST_FPCR:
offset = offsetof(CPUARMState, vfp.fp_status);
break;
case FPST_FPCR_F16:
offset = offsetof(CPUARMState, vfp.fp_status_f16);
break;
case FPST_STD:
offset = offsetof(CPUARMState, vfp.standard_fp_status);
break;
case FPST_STD_F16:
offset = offsetof(CPUARMState, vfp.standard_fp_status_f16);
break;
default:
g_assert_not_reached();
}
tcg_gen_addi_ptr(statusptr, cpu_env, offset);
return statusptr;
}
#endif /* TARGET_ARM_TRANSLATE_H */
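
A minimal usage sketch of this API, mirroring the call pattern of the converted translators above; vd and vm are illustrative TCGv_i32 temporaries assumed to have been allocated and loaded by the caller:

TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);  /* FPCR-controlled float_status */
gen_helper_vfp_sitos(vd, vm, fpst);       /* i32 -> f32, as in trans_VCVT_int_sp() */
tcg_temp_free_ptr(fpst);                  /* release the TCG temporary */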

target/arm/vfp.decode

@ -213,5 +213,3 @@ VCVT_sp_int ---- 1110 1.11 110 s:1 .... 1010 rz:1 1.0 .... \
vd=%vd_sp vm=%vm_sp
VCVT_dp_int ---- 1110 1.11 110 s:1 .... 1011 rz:1 1.0 .... \
vd=%vd_sp vm=%vm_dp
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000

target/arm/vfp_helper.c

@ -93,6 +93,8 @@ static uint32_t vfp_get_fpscr_from_host(CPUARMState *env)
/* FZ16 does not generate an input denormal exception. */
i |= (get_float_exception_flags(&env->vfp.fp_status_f16)
& ~float_flag_input_denormal);
i |= (get_float_exception_flags(&env->vfp.standard_fp_status_f16)
& ~float_flag_input_denormal);
return vfp_exceptbits_from_host(i);
}
@ -124,7 +126,9 @@ static void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val)
if (changed & FPCR_FZ16) {
bool ftz_enabled = val & FPCR_FZ16;
set_flush_to_zero(ftz_enabled, &env->vfp.fp_status_f16);
set_flush_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16);
set_flush_inputs_to_zero(ftz_enabled, &env->vfp.fp_status_f16);
set_flush_inputs_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16);
}
if (changed & FPCR_FZ) {
bool ftz_enabled = val & FPCR_FZ;
@ -146,6 +150,7 @@ static void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val)
set_float_exception_flags(i, &env->vfp.fp_status);
set_float_exception_flags(0, &env->vfp.fp_status_f16);
set_float_exception_flags(0, &env->vfp.standard_fp_status);
set_float_exception_flags(0, &env->vfp.standard_fp_status_f16);
}
#else