Commit Graph

31633 Commits

Corey Minyard 3fde641e72 ipmi:smbus: Add a check around a memcpy
In one case:

  memcpy(sid->inmsg + sid->inlen, buf, len);

if len == 0 then sid->inmsg + sid->inlen can point one past the end of
the inmsg array if the array is full.  We have to allow len == 0 due to
some vagueness in the spec, but we don't have to call memcpy.

Found by Coverity.  This is not a problem in practice, but the results
are technically (maybe) undefined.  So make Coverity happy.
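
A minimal sketch of the guard described in the subject, assuming a reduced
model of the receive buffer (the struct layout and helper name here are
illustrative, not the actual QEMU code):

  #include <stddef.h>
  #include <string.h>

  /* Hypothetical reduced model of the SMBus receive buffer. */
  struct smbus_msg {
      unsigned char inmsg[64];
      size_t inlen;
  };

  /* Append len bytes; skip the memcpy entirely for len == 0, so a pointer
   * just past the end of inmsg is never handed to memcpy() when the
   * buffer is already full. */
  static void smbus_append(struct smbus_msg *sid, const unsigned char *buf,
                           size_t len)
  {
      if (len > 0) {
          memcpy(sid->inmsg + sid->inlen, buf, len);
          sid->inlen += len;
      }
  }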

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2022-08-01 06:40:50 -05:00
Richard Henderson cc42559ab1 ppc patch queue for 2022-07-28:
Short queue with 2 Coverity fixes and one fix for the
 'wait' instruction that causes hangs if the guest kernel uses
 the most up-to-date wait opcode.
 
 - target/ppc:
   - implement new wait variants to fix guest hang when using the new opcode
 - ppc440_uc: initialize length passed to cpu_physical_memory_map()
 - spapr_nvdimm: check if spapr_drc_index() returns NULL
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYuK8VgAKCRA82cqW3gMx
 ZOc7AQDPMsFY9NHNqJ3O0MiX4Qoy8IGUreZ9dzZSS3zT1nxtEAD+Lwl0/aGO+dk+
 +NiIO80A5Agy/0g8PHie4qR3EqHEnwA=
 =Q4eR
 -----END PGP SIGNATURE-----

Merge tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu into staging

ppc patch queue for 2022-07-28:

Short queue with 2 Coverity fixes and one fix for the
'wait' instruction that causes hangs if the guest kernel uses
the most up-to-date wait opcode.

- target/ppc:
  - implement new wait variants to fix guest hang when using the new opcode
- ppc440_uc: initialize length passed to cpu_physical_memory_map()
- spapr_nvdimm: check if spapr_drc_index() returns NULL

# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYuK8VgAKCRA82cqW3gMx
# ZOc7AQDPMsFY9NHNqJ3O0MiX4Qoy8IGUreZ9dzZSS3zT1nxtEAD+Lwl0/aGO+dk+
# +NiIO80A5Agy/0g8PHie4qR3EqHEnwA=
# =Q4eR
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 28 Jul 2022 09:41:58 AM PDT
# gpg:                using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28  3819 3CD9 CA96 DE03 3164

* tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu:
  target/ppc: Implement new wait variants
  hw/ppc/ppc440_uc: Initialize length passed to cpu_physical_memory_map()
  hw/ppc: check if spapr_drc_index() returns NULL in spapr_nvdimm.c

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-07-28 15:06:42 -07:00
Peter Maydell eda3f17bcd hw/ppc/ppc440_uc: Initialize length passed to cpu_physical_memory_map()
In dcr_write_dma(), there is code that uses cpu_physical_memory_map()
to implement a DMA transfer.  That function takes a 'plen' argument,
which points to a hwaddr which is used for both input and output: the
caller must set it to the size of the range it wants to map, and on
return it is updated to the actual length mapped. The dcr_write_dma()
code fails to initialize rlen and wlen, so will end up mapping an
unpredictable amount of memory.

Initialize the length values correctly, and check that we managed to
map the entire range before using the fast-path memmove().

This was spotted by Coverity, which points out that we never
initialized the variables before using them.
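
The in/out contract described above can be sketched as follows. This is an
illustrative fragment rather than the actual dcr_write_dma() code; it assumes
QEMU's cpu_physical_memory_map()/cpu_physical_memory_unmap() declarations,
and src, dst and count are hypothetical values:

  /* plen is in/out: set it to the requested size before the call; on
   * return it holds the length actually mapped, which may be shorter. */
  hwaddr rlen = count;
  hwaddr wlen = count;
  uint8_t *rptr = cpu_physical_memory_map(src, &rlen, false); /* read side */
  uint8_t *wptr = cpu_physical_memory_map(dst, &wlen, true);  /* write side */

  if (rptr && wptr && rlen == count && wlen == count) {
      /* Fast path only when the full range mapped on both sides. */
      memmove(wptr, rptr, count);
  }
  if (wptr) {
      cpu_physical_memory_unmap(wptr, wlen, true, count);
  }
  if (rptr) {
      cpu_physical_memory_unmap(rptr, rlen, false, 0);
  }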

Fixes: Coverity CID 1487137, 1487150
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220726182341.1888115-2-peter.maydell@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-07-28 10:31:54 -03:00
Daniel Henrique Barboza edccf661e6 hw/ppc: check if spapr_drc_index() returns NULL in spapr_nvdimm.c
spapr_nvdimm_flush_completion_cb() and flush_worker_cb() are using the
DRC object returned by spapr_drc_index() without checking it for NULL.
In this case we would be dereferencing a NULL pointer when doing
SPAPR_NVDIMM(drc->dev) and PC_DIMM(drc->dev).

This can happen if, during a scm_flush(), the DRC object is wrongly
freed/released (e.g. a bug in another part of the code).
spapr_drc_index() would then return NULL in the callbacks.
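
A minimal sketch of the kind of guard described, assuming QEMU's
spapr_drc_by_index() lookup and error_report(); the exact error path taken
in the callbacks is an assumption:

  SpaprDrc *drc = spapr_drc_by_index(drc_index);
  if (!drc) {
      /* The DRC was released underneath us: bail out rather than
       * dereferencing drc->dev via SPAPR_NVDIMM()/PC_DIMM(). */
      error_report("nvdimm flush: no DRC for index %u", drc_index);
      return;
  }
  PCDIMMDevice *dimm = PC_DIMM(drc->dev);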

Fixes: Coverity CID 1487108, 1487178
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220409200856.283076-2-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
2022-07-28 10:31:54 -03:00
Atish Patra 54f2183630 hw/intc: sifive_plic: Fix multi-socket plic configuration
Since commit 40244040a7, multi-socket configuration with plic is
broken because the hartid for the second socket is calculated incorrectly.
The hartid stored in addr_config already includes the offset
for the base hartid of that socket. Adding it again leads
to a segfault while creating the plic device for the virt machine.
qdev_connect_gpio_out was also invoked with an incorrect number of gpio
lines.
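
A sketch of the corrected lookup (the loop variable and field names are
assumptions based on the commit text): addr_config[] already stores absolute
hartids, so the per-socket base must not be added a second time:

  /* addr_config[i].hartid already includes the socket's base hartid. */
  int hartid = plic->addr_config[i].hartid;
  CPUState *cpu = qemu_get_cpu(hartid); /* not hartid_base + hartid */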

Fixes: 40244040a7 (hw/intc: sifive_plic: Avoid overflowing the addr_config buffer)

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220723090335.671105-1-atishp@rivosinc.com>
[ Changes by AF:
 - Change the qdev_connect_gpio_out() numbering
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-07-28 09:08:44 +10:00
Richard Henderson e5b6555fb8 pc,virtio: fixes
Several fixes. From now on, regression fixes only.
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmLgQr8PHG1zdEByZWRo
 YXQuY29tAAoJECgfDbjSjVRpGUUIAKtNhrnKopGm4LlRpx8zN3Jc1Jo0nb648gaM
 Oyi+Pl8+hpESUhaWN10XDk38/QuPQfIFeR2ZhfYjFTRlZE+n3X9LVlwL8ejjP8KH
 AcWm78Ff/SLA45aMKMmw74pvEDNsoPYTp7TrfeIej5ub8BIXr8+8pqDdIR9WwtWO
 PbhLNXkTT2yLEs6jCVT4/dyh7zivSkrY7G/RVmtUaFe3PgY8fdW2z3+Txz7UIMgw
 CQoGuAucCO5ToBbs2CbT0V5yxY6G5VO6Qd8g0PzDW4M6GsY/Xr5QCnyJe0jTW0d6
 Dcc7UZFAzGNzyQCxHCic9xwTO+ZcJPJlH5TwknunxOb9xwCx4Qs=
 =zN41
 -----END PGP SIGNATURE-----

Merge tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging

pc,virtio: fixes

Several fixes. From now on, regression fixes only.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmLgQr8PHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRpGUUIAKtNhrnKopGm4LlRpx8zN3Jc1Jo0nb648gaM
# Oyi+Pl8+hpESUhaWN10XDk38/QuPQfIFeR2ZhfYjFTRlZE+n3X9LVlwL8ejjP8KH
# AcWm78Ff/SLA45aMKMmw74pvEDNsoPYTp7TrfeIej5ub8BIXr8+8pqDdIR9WwtWO
# PbhLNXkTT2yLEs6jCVT4/dyh7zivSkrY7G/RVmtUaFe3PgY8fdW2z3+Txz7UIMgw
# CQoGuAucCO5ToBbs2CbT0V5yxY6G5VO6Qd8g0PzDW4M6GsY/Xr5QCnyJe0jTW0d6
# Dcc7UZFAzGNzyQCxHCic9xwTO+ZcJPJlH5TwknunxOb9xwCx4Qs=
# =zN41
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 26 Jul 2022 12:38:39 PM PDT
# gpg:                using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg:                issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [undefined]
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17  0970 C350 3912 AFBE 8E67
#      Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA  8A0D 281F 0DB8 D28D 5469

* tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu:
  hw/virtio/virtio-iommu: Enforce power-of-two notify for both MAP and UNMAP
  i386/pc: restrict AMD only enforcing of 1Tb hole to new machine type
  i386/pc: relocate 4g start to 1T where applicable
  i386/pc: bounds check phys-bits against max used GPA
  i386/pc: factor out device_memory base/size to helper
  i386/pc: handle uninitialized mr in pc_get_cxl_range_end()
  i386/pc: factor out cxl range start to helper
  i386/pc: factor out cxl range end to helper
  i386/pc: factor out above-4g end to a helper
  i386/pc: pass pci_hole64_size to pc_memory_init()
  i386/pc: create pci-host qdev prior to pc_memory_init()
  hw/i386: add 4g boundary start to X86MachineState
  hw/cxl: Fix size of constant in interleave granularity function.
  hw/i386/pc: Always place CXL Memory Regions after device_memory
  hw/machine: Clear out left over CXL related pointer from move of state handling to machines.
  acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug()

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-07-26 12:57:20 -07:00
Jean-Philippe Brucker 0522be9a0c hw/virtio/virtio-iommu: Enforce power-of-two notify for both MAP and UNMAP
Currently we only enforce power-of-two mappings (required by the QEMU
notifier) for UNMAP requests. A MAP request not aligned on a
power-of-two may be successfully handled by VFIO, and then the
corresponding UNMAP notify will fail because it will attempt to split
that mapping. Ensure MAP and UNMAP notifications are consistent.
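
For reference, a power-of-two check of the kind described above could look
like this self-contained sketch (names are illustrative, not the actual
virtio-iommu code):

  #include <stdbool.h>
  #include <stdint.h>

  /* True if the inclusive [start, end] range spans a power-of-two number
   * of bytes, which is what the QEMU IOMMU notifier expects. */
  static bool notify_range_is_pow2(uint64_t start, uint64_t end)
  {
      uint64_t size = end - start + 1;
      return size != 0 && (size & (size - 1)) == 0;
  }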

Fixes: dde3f08b5c ("virtio-iommu: Handle non power of 2 range invalidations")
Reported-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-Id: <20220718135636.338264-1-jean-philippe@linaro.org>
Tested-by: Tina Zhang <tina.zhang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 15:33:29 -04:00
Joao Martins b3e6982b41 i386/pc: restrict AMD only enforcing of 1Tb hole to new machine type
The added enforcement is only relevant on AMD, where the
range right below 1TB is restricted and cannot be DMA mapped
by the kernel, consequently leading to IOMMU INVALID_DEVICE_REQUEST
or possibly other kinds of IOMMU events in the AMD IOMMU.

There is, however, a case where it may make sense to disable the
IOVA relocation/validation: when migrating from a
non-amd-1tb-aware qemu to one that supports it.

Relocating RAM regions to after the 1Tb hole has consequences for
guest ABI because we are changing the memory mapping, so make
sure that only new machine types enforce it, not older ones.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-12-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 8504f12945 i386/pc: relocate 4g start to 1T where applicable
It is assumed that the whole GPA space is available to be DMA
addressable, within a given address space limit, except for a
tiny region before 4G. Since Linux v5.4, VFIO validates
whether the selected GPA is indeed valid, i.e. not reserved by the
IOMMU on behalf of some specific devices or platform-defined
restrictions, failing the ioctl(VFIO_DMA_MAP) with -EINVAL
otherwise.

AMD systems with an IOMMU are examples of such platforms and
particularly may only have these ranges as allowed:

        0000000000000000 - 00000000fedfffff (0      .. 3.982G)
        00000000fef00000 - 000000fcffffffff (3.983G .. 1011.9G)
        0000010000000000 - ffffffffffffffff (1Tb    .. 16Pb[*])

We already account for the 4G hole, but if the guest is big
enough we will fail to allocate a guest with >1010G due to the
~12G hole at the 1Tb boundary, reserved for HyperTransport (HT).

[*] there is another reserved region unrelated to HT that exists
at the 256T boundary in Fam 17h according to Errata #1286,
documented also in "Open-Source Register Reference for AMD Family
17h Processors (PUB)"

When creating the region above 4G, take into account that on AMD
platforms the HyperTransport range is reserved and hence cannot
be used as GPAs either. In those cases, rather than establishing
the start of ram-above-4g at 4G, relocate it to 1Tb instead. See
the AMD IOMMU spec, section 2.1.2 "IOMMU Logical Topology", for
more information on the underlying restriction of IOVAs.

After accounting for the 1Tb hole on AMD hosts, mtree should
look like:

0000000000000000-000000007fffffff (prio 0, i/o):
         alias ram-below-4g @pc.ram 0000000000000000-000000007fffffff
0000010000000000-000001ff7fffffff (prio 0, i/o):
        alias ram-above-4g @pc.ram 0000000080000000-000000ffffffffff

If the relocation is done or the address space covers it, we
also add the reserved HT e820 range as reserved.

Default phys-bits on QEMU is TCG_PHYS_ADDR_BITS (40), which is enough
to address 1Tb (0xff ffff ffff). On AMD platforms, if a
ram-above-4g relocation is attempted and the CPU wasn't configured
with large enough phys-bits, an error message will be printed
due to the maxphysaddr vs maxusedaddr check previously added.
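
A self-contained sketch of the relocation rule, using the range boundaries
listed above (the helper name and the exact decision logic are illustrative
assumptions, not the pc_memory_init() code):

  #include <stdbool.h>
  #include <stdint.h>

  #define GiB (1ULL << 30)
  #define TiB (1ULL << 40)
  /* Start of the reserved HyperTransport window per the table above:
   * 0xfd00000000 (~1011.9G) up to the 1Tb boundary. */
  #define HT_WINDOW_START 0xfd00000000ULL

  /* Pick the start of ram-above-4g: if an AMD host's above-4G RAM would
   * reach into the HT window, start the region at 1Tb instead of 4G. */
  static uint64_t above_4g_mem_start(uint64_t above_4g_mem_size, bool amd_host)
  {
      if (amd_host && 4 * GiB + above_4g_mem_size > HT_WINDOW_START) {
          return 1 * TiB;
      }
      return 4 * GiB;
  }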

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-11-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 1caab5cf86 i386/pc: bounds check phys-bits against max used GPA
Calculate the max *used* GPA against the CPU's maximum possible address
and error out if the former surpasses the latter. This ensures the
max used GPA is reachable by the configured phys-bits. Default phys-bits
on QEMU is TCG_PHYS_ADDR_BITS (40), which is enough for the CPU to
address 1Tb (0xff ffff ffff) or 1010G (0xfc ffff ffff) on AMD hosts
with an IOMMU.

This is preparation for AMD guests with >1010G, where we will want to
relocate ram-above-4g to start after 1Tb instead of 4G.
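
A minimal, self-contained sketch of such a bounds check (names are
illustrative; the real check is made against the CPU's configured phys-bits):

  #include <stdbool.h>
  #include <stdint.h>

  /* True if the highest guest physical address actually used is reachable
   * with phys_bits address bits (e.g. 40 bits reaches 0xff ffff ffff). */
  static bool max_used_gpa_fits(uint64_t max_used_gpa, unsigned int phys_bits)
  {
      uint64_t max_phys_addr = phys_bits >= 64 ? UINT64_MAX
                                               : (1ULL << phys_bits) - 1;
      return max_used_gpa <= max_phys_addr;
  }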

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-10-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 8288a8286d i386/pc: factor out device_memory base/size to helper
Move obtaining hole64_start from the device_memory memory region base/size
into a helper, alongside corresponding getters in pc_memory_init() for when
the hotplug range is uninitialized. While doing that, remove the memory
region based logic from this newly added helper.

This is the final step that allows pc_pci_hole64_start() to be callable
at the beginning of pc_memory_init() before any memory regions are
initialized.

Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-9-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 1065b21993 i386/pc: handle uninitialized mr in pc_get_cxl_range_end()
Remove pc_get_cxl_range_end()'s dependency on the CXL memory region,
and replace it with one that does not require the CXL host_mr to determine
the start of the CXL range.

This is in preparation for allowing pc_pci_hole64_start() to be called early
in pc_memory_init(), handling the CXL memory region end when its underlying
memory region isn't yet initialized.

Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Message-Id: <20220719170014.27028-8-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 42bed07127 i386/pc: factor out cxl range start to helper
Factor out the calculation of the base address of the memory region.
It will be used later on for the CXL range end counterpart calculation,
as well as in the pc_memory_init() CXL memory region initialization, thus
avoiding duplication.

Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-7-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 55668e409b i386/pc: factor out cxl range end to helper
Move calculation of CXL memory region end to separate helper.

This is in preparation for a future change that removes the CXL range's
dependency on the CXL memory region, with the goal of allowing
pc_pci_hole64_start() to be called before any memory regions are
initialized.

Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-6-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 5ff62e2afe i386/pc: factor out above-4g end to a helper
There are a couple of places that seem to duplicate this calculation
of RAM size above the 4G boundary. Move all of those to a helper function.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-5-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins c48eb7a4e8 i386/pc: pass pci_hole64_size to pc_memory_init()
Use the pre-initialized pci-host qdev and fetch the
pci-hole64-size into pc_memory_init()'s newly added argument.
Use the PCI_HOST_PROP_PCI_HOLE64_SIZE pci-host property for
fetching pci-hole64-size.

This is in preparation for determining that host-phys-bits are
enough and for pci-hole64-size to be considered when relocating
ram-above-4g to 1T (on AMD platforms).

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-4-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 4876778749 i386/pc: create pci-host qdev prior to pc_memory_init()
At the start of pc_memory_init() we usually pass a range of
0..UINT64_MAX as pci_memory, when really it's 2G (i440fx) or
32G (q35). To get the real user value, we need the pci-host
property that carries the default pci_hole64_size. To get that,
create the qdev prior to memory init, to make better estimations
of the max used/physical address.

This is in preparation for determining that host-phys-bits are
enough and also for pci-hole64-size to be considered when
relocating ram-above-4g to 1T (on AMD platforms).

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-3-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Joao Martins 4ab4c33014 hw/i386: add 4g boundary start to X86MachineState
Rather than hardcoding the 4G boundary everywhere, introduce an
X86MachineState field @above_4g_mem_start and use it
accordingly.

This is in preparation for relocating ram-above-4g to
dynamically start at 1T on AMD platforms.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220719170014.27028-2-joao.m.martins@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Jonathan Cameron 4a447a710c hw/i386/pc: Always place CXL Memory Regions after device_memory
Previously broken_reserved_end was taken into account, but Igor Mammedov
identified that this could lead to a clash between potential RAM being
mapped in the region and CXL usage. Hence always add the size of the
device_memory memory region.  This only affects the case where the
broken_reserved_end flag was set.

Fixes: 6e4e3ae936 ("hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)")
Reported-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20220701132300.2264-3-Jonathan.Cameron@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:40:58 -04:00
Robert Hoo e4bcec0c3c acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug()
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
Message-Id: <20220704085852.330005-1-robert.hu@linux.intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26 10:37:46 -04:00
Alan Jian 5865d99fe8 hw/display/bcm2835_fb: Fix framebuffer allocation address
This patch fixes the dedicated framebuffer mailbox interface by
removing an unneeded offset.  This means that we pick the framebuffer
address in the same way that we do if the guest code uses the buffer
allocate mechanism of the bcm2835_property interface (case
0x00040001: /* Allocate buffer */ in bcm2835_property.c).

The documentation of this mailbox interface doesn't say anything
about using parts of the request buffer address to affect the
chosen framebuffer address:
https://github.com/raspberrypi/firmware/wiki/Mailbox-framebuffer-interface

Some baremetal applications, like the Screen01/Screen02 examples from
the Baking Pi tutorial[1], didn't work before this patch.

[1] https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html

Signed-off-by: Alan Jian <alanjian85@outlook.com>
Message-id: 20220725145838.8412-1-alanjian85@outlook.com
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-26 14:09:44 +01:00
Peter Maydell 0d0275c31f -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
 iQEcBAABAgAGBQJi36ocAAoJEO8Ells5jWIRuOYH/jtaDNGTBs/h8A041gQaCMmw
 jufUXHCdKGgmZMpJ/AoCUWx4USdx8hEGSt/j4kvSmIPX+VLuCfLefHDlTxndiWAv
 fnUr4NB7LAz2b5D3d5QX1Np+zHG5mHx95KfDIaWdcz9N1HUHlEOakxTDc2EvR1hF
 yh8g2n5xdvzK5kWvPcNgJpU/ezDumOFo04JndBb4fIqDmZfW3hvJQ3IKiS3P1J9C
 Kbb/usoXGrdoZ9T1R2cqtn1CxrgfMlF2pKJFWzs3nU+ewD9C6oKS4rDQCZxx+JEx
 ZvfnSTUPgBBlT4zqZTTjyFQMQdtis5qK5iAKDEENkqVC1iULPhnM9DN0qxcIoQs=
 =SpWG
 -----END PGP SIGNATURE-----

Merge tag 'net-pull-request' of https://github.com/jasowang/qemu into staging

# gpg: Signature made Tue 26 Jul 2022 09:47:24 BST
# gpg:                using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F  3562 EF04 965B 398D 6211

* tag 'net-pull-request' of https://github.com/jasowang/qemu:
  vdpa: Fix memory listener deletions of iova tree
  vhost: Get vring base from vq, not svq
  e1000e: Fix possible interrupt loss when using MSI

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-26 10:31:02 +01:00
Eugenio Pérez 75a8ce64f6 vdpa: Fix memory listener deletions of iova tree
vhost_vdpa_listener_region_del is always deleting the first iova entry
of the tree, since it's using the needle's iova instead of the result's.
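
A sketch of the intended pattern, with illustrative names (the exact tree
API and struct members used in hw/virtio/vhost-vdpa.c are assumptions): the
entry handed to the removal must be the one returned by the search, not the
needle, whose iova field is still 0:

  const DMAMap *result = vhost_iova_tree_find_iova(iova_tree, &needle);
  if (result) {
      iova_tree_remove(iova_map, result); /* was: &needle, with .iova == 0 */
  }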

This was detected using a vga virtual device in the VM using vdpa SVQ.
It performs some extra memory additions and deletions, so the wrong one was
mapped / unmapped. This went undetected before since all the memory was
mapped and unmapped in full without that device, but other conditions
could trigger it too:

* mem_region was left with .iova = 0, .translated_addr = (correct GPA).
* iova_tree_find_iova returned the right result, but did not update
  mem_region.
* iova_tree_remove always removed the region with .iova = 0. The right
  iova was sent to the device.
* The next map will fill the first region with .iova = 0, causing a mapping
  with the same iova, and the device complains, if the next action is a map.
* The next unmap will try to unmap iova = 0 again, causing the
  device to complain that no region was mapped at iova = 0.

Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-26 16:24:19 +08:00
Eugenio Pérez 2fdac348fd vhost: Get vring base from vq, not svq
The SVQ vring's used idx usually matches the guest-visible one, as long
as all the guest buffers (GPA) map to exactly one buffer within qemu's
VA. However, as we can see in virtqueue_map_desc, a single guest buffer
could map to many buffers in the SVQ vring.

It is also a mistake to rewind them at the source of migration.
Since VirtQueue is able to migrate the inflight descriptors, it is the
responsibility of the destination to perform the rewind just in case it
cannot report the inflight descriptors to the device.

This makes it easier to migrate between backends or to recover them in
vhost devices that support setting in-flight descriptors.

Fixes: 6d0b222666 ("vdpa: Adapt vhost_vdpa_get_vring_base to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-26 16:23:54 +08:00
Ake Koomsin dd0ef12866 e1000e: Fix possible interrupt loss when using MSI
Commit "e1000e: Prevent MSI/MSI-X storms" introduced msi_causes_pending
to prevent the interrupt storm problem. It was tested with MSI-X.

In the case of MSI, the guest can rely solely on interrupts to clear ICR.
Upon clearing all pending interrupts, msi_causes_pending gets cleared.
However, when e1000e_itr_should_postpone() in e1000e_send_msi() returns
true, the MSI never gets fired by e1000e_intrmgr_on_throttling_timer()
because msi_causes_pending is still set. This results in interrupt loss.

To prevent this, we need to clear msi_causes_pending when MSI is going
to get fired by the throttling timer. The guest can then receive
interrupts eventually.
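
A sketch of the throttling-timer path after the fix; the field and function
names follow the commit text, and the surrounding timer plumbing is an
assumption:

  /* When the throttling timer expires, clear the pending-cause flag first
   * so the postponed MSI is actually delivered instead of suppressed. */
  static void intrmgr_on_throttling_timer(E1000ECore *core)
  {
      core->msi_causes_pending = 0;
      e1000e_send_msi(core, false); /* false: plain MSI, not MSI-X */
  }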

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-26 16:23:54 +08:00
Jason A. Donenfeld 67f7e426e5 hw/i386: pass RNG seed via setup_data entry
Tiny machines optimized for fast boot time generally don't use EFI,
which means a random seed has to be supplied some other way. For this
purpose, Linux (≥5.20) supports passing a seed in the setup_data table
with SETUP_RNG_SEED, specially intended for hypervisors, kexec, and
specialized bootloaders. The linked commit shows the upstream kernel
implementation.
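
For reference, the setup_data entry that carries such a seed has the
following layout in the x86 boot protocol (a sketch; SETUP_RNG_SEED's
numeric value is defined by the kernel's bootparam header and not repeated
here):

  #include <stdint.h>

  /* One node in the kernel's boot-time setup_data linked list. */
  struct setup_data {
      uint64_t next;   /* physical address of the next node, or 0 */
      uint32_t type;   /* e.g. SETUP_RNG_SEED for a random-seed payload */
      uint32_t len;    /* number of payload bytes in data[] */
      uint8_t  data[]; /* the seed bytes themselves */
  };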

At Paolo's request, we don't pass these to versioned machine types ≤7.0.

Link: https://git.kernel.org/tip/tip/c/68b8e9713c8
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220721125636.446842-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-22 19:26:34 +02:00
Jason A. Donenfeld c287941a4d hw/rx: pass random seed to fdt
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.
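
The pattern these fdt commits describe is roughly the following sketch,
assuming QEMU's qemu_guest_getrandom_nofail() and qemu_fdt_setprop()
helpers; the 32-byte seed length is an illustrative choice:

  uint8_t rng_seed[32];

  /* Fill the buffer from QEMU's guest RNG source and publish it in the
   * FDT so the guest kernel can seed itself early from /chosen/rng-seed. */
  qemu_guest_getrandom_nofail(rng_seed, sizeof(rng_seed));
  qemu_fdt_setprop(fdt, "/chosen", "rng-seed", rng_seed, sizeof(rng_seed));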

Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719122033.135902-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-22 19:01:44 +02:00
Jason A. Donenfeld 5e19cc68fb hw/mips: boston: pass random seed to fdt
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.

I'd do the same for other MIPS platforms but boston is the only one that
seems to use FDT.

Cc: Paul Burton <paulburton@kernel.org>
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719120843.134392-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-22 19:01:44 +02:00
Jason A. Donenfeld 6b23a67916 hw/nios2: virt: pass random seed to fdt
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.

Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719120113.118034-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-22 19:01:44 +02:00
Peter Maydell 8ec4bc3c8c -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
 iQEcBAABAgAGBQJi18PHAAoJEO8Ells5jWIRCEQH+wepXDoT6Q56xmUgxVs+hlAD
 CXGy71/cNV08Yu3PTTXo8SYaw+KXxsA9ECgIr2hsfPXarAdoOpJFpZR0HoqIzaXd
 kpD6bvwN8bEEOlAHxKcb6/VM+VYntZBfkH9m1WLGx3fHILazLblyL8w2Hkp7NK9J
 IBpQQ63uU8Xt0+js96Z/sPOKRjrtbKXFT1bhY2CI8MKZpuqNyED0jZYwbNdnRwZN
 fuKbpsaaT4Wxx+mQMg7H7a0e/xx3DNi2F6cAtGLH98WYzbLFgExSSK8G8jnwEVfM
 EKWfU7N4zmokq7jN99yvGzjIzLrnLX6yn/ifSs+lQOzdtCA9zEbotI+CDCVdPs4=
 =9zus
 -----END PGP SIGNATURE-----

Merge tag 'net-pull-request' of https://github.com/jasowang/qemu into staging

# gpg: Signature made Wed 20 Jul 2022 09:58:47 BST
# gpg:                using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F  3562 EF04 965B 398D 6211

* tag 'net-pull-request' of https://github.com/jasowang/qemu: (25 commits)
  net/colo.c: fix segmentation fault when packet is not parsed correctly
  net/colo.c: No need to track conn_list for filter-rewriter
  net/colo: Fix a "double free" crash to clear the conn_list
  softmmu/runstate.c: add RunStateTransition support from COLO to PRELAUNCH
  vdpa: Add x-svq to NetdevVhostVDPAOptions
  vdpa: Add device migration blocker
  vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs
  vdpa: Buffer CVQ support on shadow virtqueue
  vdpa: manual forward CVQ buffers
  vhost-net-vdpa: add stubs for when no virtio-net device is present
  vdpa: Export vhost_vdpa_dma_map and unmap calls
  vhost: Add svq avail_handler callback
  vhost: add vhost_svq_poll
  vhost: Expose vhost_svq_add
  vhost: add vhost_svq_push_elem
  vhost: Track number of descs in SVQDescState
  vhost: Add SVQDescState
  vhost: Decouple vhost_svq_add from VirtQueueElement
  vhost: Check for queue full at vhost_svq_add
  vhost: Move vhost_svq_kick call to vhost_svq_add
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-20 16:27:57 +01:00
Eugenio Pérez c156d5bf2b vdpa: Add device migration blocker
Since the vhost-vdpa device is exposing _F_LOG, add a migration blocker if
it uses CVQ.

However, qemu is able to migrate simple devices with no CVQ as long as
they use SVQ. To allow that, add a placeholder error to vhost_vdpa, and
only add it to vhost_dev when used. The vhost_dev machinery places the
migration blocker if needed.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez bd907ae4b0 vdpa: manual forward CVQ buffers
Do a simple forwarding of CVQ buffers, the same work SVQ could do but
through callbacks. No functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 463ba1e3b8 vdpa: Export vhost_vdpa_dma_map and unmap calls
Shadow CVQ will copy buffers into qemu's VA, so we avoid TOCTOU attacks from
the guest that could set a different state in the qemu device model and the
vdpa device.

To do so, it needs to be able to map these new buffers to the device.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez e966c0b781 vhost: Add svq avail_handler callback
This allows external handlers to be aware of new buffers that the guest
places in the virtqueue.

When this callback is defined the ownership of the guest's virtqueue
element is transferred to the callback. This means that if the user
wants to forward the descriptor it needs to manually inject it. The
callback is also free to process the command by itself and use the
element with svq_push.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 3f44d13dda vhost: add vhost_svq_poll
It allows the Shadow Control VirtQueue to wait for the device to use the
available buffers.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez d0291f3f28 vhost: Expose vhost_svq_add
This allows external parts of SVQ to forward custom buffers to the
device.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 432efd144e vhost: add vhost_svq_push_elem
This function allows external SVQ users to return the guest's available
buffers.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez ac4cfdc6f3 vhost: Track number of descs in SVQDescState
A guest buffer that is contiguous in GPA may need multiple descriptors in
qemu's VA, so SVQ should track its length separately.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 9e87868fca vhost: Add SVQDescState
This will allow SVQ to add context to the different queue elements.

This patch only stores the actual element; no functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 1f46ae65d8 vhost: Decouple vhost_svq_add from VirtQueueElement
VirtQueueElement comes from the guest, but we're moving SVQ toward being
able to modify the element presented to the device without the guest's
knowledge.

To do so, make SVQ accept sg buffers directly, instead of using
VirtQueueElement.

Add vhost_svq_add_element to maintain the element-based convenience.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez f20b70eb5a vhost: Check for queue full at vhost_svq_add
The series needs to expose vhost_svq_add with full functionality,
including checking for a full queue.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 98b5adef84 vhost: Move vhost_svq_kick call to vhost_svq_add
The series needs to expose vhost_svq_add with full functionality,
including the kick.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez d93a2405ca vhost: Reorder vhost_svq_kick
Future code needs to call it from vhost_svq_add.

No functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez c381abc37f vdpa: Avoid compiler to squash reads to used idx
In the next patch we will allow busy-polling of this value. From the
compiler's point of view there is a code path where shadow_used_idx,
last_used_idx, and the vring's used idx are not modified within the
busy-polling thread.

This was not an issue before since we always cleared the device event
notifier before checking it, and that could act as a memory barrier.
However, the busy-poll needs something similar to the kernel's READ_ONCE.

Let's add it here, separated from the polling.
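
A generic READ_ONCE-style helper of the kind alluded to above might look
like this (illustrative only; the commit does not name the helper QEMU
actually uses):

  #include <stdint.h>

  /* Force a fresh load from memory on every call, so a busy-polling loop
   * does not cache the vring used index in a register across iterations. */
  static inline uint16_t load_once_u16(const uint16_t *p)
  {
      return *(const volatile uint16_t *)p;
  }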

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 640b8a1c58 virtio-net: Expose ctrl virtqueue logic
This allows external vhost-net devices to modify the state of the
VirtIO device model once the vhost-vdpa device has acknowledged the
control commands.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 6758c01f05 virtio-net: Expose MAC_TABLE_ENTRIES
The vhost-vdpa control virtqueue needs to know the maximum number of entries
supported by the virtio-net device, so we know whether it is possible to
apply the filter.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Eugenio Pérez 009c2549bb vhost: move descriptor translation to vhost_svq_vring_write_descs
It's done for both in and out descriptors so it's better placed here.

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2022-07-20 16:58:08 +08:00
Peter Maydell 68e26e1e81 LoongArch64 patch queue:
Add dockerfile for loongarch cross compile
 Add reference files for float tests.
 Add simple tests for div, mod, clo, fclass, fcmp, pcadd
 Add bios and kernel boot support.
 Add smbios, acpi, and fdt support.
 Fix pch-pic update-irq.
 Fix some errors identified by coverity.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmLW6SwdHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV88GAf8CzH7+dQD80UUI+IZ
 ydt43SgteEftoJJINQW9QwGOk22gSptFGqkKTARFg19yBlw13P/Qj2qnwJVpEE/1
 SoWKCkAMlnxgHKhqvmPqjH/opLSJ1eDuQq3Ok0taaCjJS0uAZiUoz+3k0H3Lf0Yj
 wEusXNqkiHPXmqgTFlJDhOfrOw0ZNU6fbhoSZJ0Wj6f5X11FjxuNn7+CzO0bkfuv
 u+4vJRNTmhcflJUwYFgbjjjvcZhBJhc15WEp+6u8As0v89oci1LjgRNFUgJuI0gh
 1DZh61b0FiDpTq/KsZ/aPdl4nuMoVRJTOOvyHlaVhjWvK0EGI144eKlqvRaA9cX5
 SoHHqA==
 =3mAr
 -----END PGP SIGNATURE-----

Merge tag 'pull-la-20220719' of https://gitlab.com/rth7680/qemu into staging

LoongArch64 patch queue:

Add dockerfile for loongarch cross compile
Add reference files for float tests.
Add simple tests for div, mod, clo, fclass, fcmp, pcadd
Add bios and kernel boot support.
Add smbios, acpi, and fdt support.
Fix pch-pic update-irq.
Fix some errors identified by coverity.

# gpg: Signature made Tue 19 Jul 2022 18:26:04 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-la-20220719' of https://gitlab.com/rth7680/qemu: (21 commits)
  hw/loongarch: Add fdt support
  hw/loongarch: Add acpi ged support
  hw/loongarch: Add smbios support
  hw/loongarch: Add linux kernel booting support
  hw/loongarch: Add uefi bios loading support
  hw/loongarch: Add fw_cfg table support
  tests/tcg/loongarch64: Add pcadd related instructions test
  tests/tcg/loongarch64: Add fp comparison instructions test
  tests/tcg/loongarch64: Add fclass test
  tests/tcg/loongarch64: Add div and mod related instructions test
  tests/tcg/loongarch64: Add clo related instructions test
  tests/tcg/loongarch64: Add float reference files
  target/loongarch: Fix float_convd/float_convs test failing
  fpu/softfloat: Add LoongArch specializations for pickNaN*
  target/loongarch/cpu: Fix cpucfg default value
  target/loongarch/op_helper: Fix coverity cond_at_most error
  target/loongarch/tlb_helper: Fix coverity integer overflow error
  target/loongarch/cpu: Fix coverity errors about excp_names
  hw/intc/loongarch_pch_pic: Fix bugs for update_irq function
  target/loongarch: Fix loongarch_cpu_class_by_name
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-19 22:54:43 +01:00
Xiaojuan Yang fda3f15b00 hw/loongarch: Add fdt support
Add a LoongArch flattened device tree, adding a cpu device node, firmware
cfg node and pcie node to it, and create an fdt rom memory region. For now
the fdt info is not complete, since only the UEFI BIOS uses the fdt; the
Linux kernel does not. The LoongArch Linux kernel uses the ACPI tables,
which are complete in the qemu virt machine.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-7-yangxiaojuan@loongson.cn>
[rth: Set TARGET_NEED_FDT, add fdt to meson.build]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-07-19 22:55:10 +05:30
Xiaojuan Yang 735143f10d hw/loongarch: Add acpi ged support
The LoongArch virt machine uses the generic hardware-reduced ACPI method,
rather than an LS7A ACPI device. For now only the power management function
is used in the ACPI GED device; memory hotplug will be added later. ACPI
tables such as RSDP/RSDT/FADT etc. are also added.

The ACPI table additions have been submitted to the ACPI spec, and will be
released soon.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-6-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-07-19 22:55:10 +05:30