We can pass C/CPP/LD flags via CFLAGS/CXXFLAGS/LDFLAGS environment
variables, or via configure --extra-cflags / --extra-cxxflags /
--extra-ldflags options. Provide similar behavior for Objective C:
use existing flags from $OBJCFLAGS, or those passed via --extra-objcflags.
Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Tested-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Merge tag 'pull-ppc-20220314' of https://github.com/legoater/qemu into staging
ppc-7.0 queue :
* Removal of user-created PHB devices
* Avocado fixes for --disable-tcg
* Instruction and Radix MMU fixes
* tag 'pull-ppc-20220314' of https://github.com/legoater/qemu:
ppc/pnv: Remove user-created PHB{3,4,5} devices
ppc/pnv: Always create the PHB5 PEC devices
ppc/pnv: Introduce a pnv-phb5 device to match root port
ppc/xive2: Make type Xive2EndSource not user creatable
target/ppc: fix xxspltw for big endian hosts
target/ppc: fix ISI fault cause for Radix MMU
avocado/ppc_virtex_ml507.py: check TCG accel in test_ppc_virtex_ml507()
avocado/ppc_prep_40p.py: check TCG accel in all tests
avocado/ppc_mpc8544ds.py: check TCG accel in test_ppc_mpc8544ds()
avocado/ppc_bamboo.py: check TCG accel in test_ppc_bamboo()
avocado/ppc_74xx.py: check TCG accel for all tests
avocado/ppc_405.py: check TCG accel in test_ppc_ref405ep()
avocado/ppc_405.py: remove test_ppc_taihu()
avocado/boot_linux_console.py: check TCG accel in test_ppc_mac99()
avocado/boot_linux_console.py: check TCG accel in test_ppc_g3beige()
avocado/replay_kernel.py: make tcg-icount check in run_vm()
avocado/boot_linux_console.py: check tcg accel in test_ppc64_e500
avocado/boot_linux_console.py: check for tcg in test_ppc_powernv8/9
qtest/meson.build: check CONFIG_TCG for boot-serial-test in qtests_ppc
qtest/meson.build: check CONFIG_TCG for prom-env-test in qtests_ppc
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Even when the feature is not supported in the guest CPUID, still set
the MSR to the default value, which will be the only value KVM will
accept in this case.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220223115824.319821-1-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows 11 with WSL2 enabled (Hyper-V) fails to boot with the
Icelake-Server{-v5} CPU model but boots well with '-cpu host'.
Apparently, it expects 5-level paging and 5-level EPT support to come
as a pair, but QEMU's Icelake-Server CPU model lacks the latter.
Introduce an 'Icelake-Server-v6' CPU model with 'vmx-page-walk-5'
enabled by default.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220221145316.576138-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
XFD (eXtended Feature Disable) allows enabling a feature on XSAVE
state while preventing specific user threads from using the feature.
Support saving and restoring the XFD MSRs if CPUID.D.1.EAX[4]
enumerates them as valid. Likewise, migrate the MSRs and the related
XSAVE state as necessary.
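A rough sketch of the save side, assuming the CPUID.D.1.EAX[4] gate
and the MSR and field names used in this series (treat the names as
assumptions, not the final code):

    /* Sketch only: put the XFD MSRs when CPUID.D.1.EAX[4] enumerates XFD. */
    if (env->features[FEAT_XSAVE] & CPUID_D_1_EAX_XFD) {
        kvm_msr_entry_add(cpu, MSR_IA32_XFD, env->msr_xfd);
        kvm_msr_entry_add(cpu, MSR_IA32_XFD_ERR, env->msr_xfd_err);
    }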
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-8-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When dynamic xfeatures (e.g. AMX) are used by the guest, the xsave
area can be larger than 4KB. The KVM_GET_XSAVE2 and KVM_SET_XSAVE
ioctls under KVM_CAP_XSAVE2 work with an xsave buffer larger than 4KB.
Always use the new ioctls under KVM_CAP_XSAVE2 when KVM supports it.
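A minimal sketch of the idea (not the actual QEMU code; it assumes a
kvm_xsave buffer already allocated with the size reported by the
KVM_CAP_XSAVE2 extension check):

    /* Prefer KVM_GET_XSAVE2 when available, since the state may exceed 4KB. */
    static int vcpu_get_xsave(KVMState *s, int vcpu_fd, struct kvm_xsave *buf)
    {
        bool has_xsave2 = kvm_check_extension(s, KVM_CAP_XSAVE2) > 0;

        return ioctl(vcpu_fd, has_xsave2 ? KVM_GET_XSAVE2 : KVM_GET_XSAVE, buf);
    }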
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-7-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the AMX primary feature bits XFD and AMX_TILE to enumerate the
CPU's AMX capability. Meanwhile, add the AMX TILE and TMUL CPUID leaf
and subleaves, which exist when AMX TILE is present, to report the
maximum TILE and TMUL capabilities.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-6-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel introduces the XFD faulting mechanism for extended XSAVE
features to dynamically enable such features at runtime. If CPUID
(EAX=0Dh, ECX=n, n>1).ECX[2] is set to 1, it indicates support for XFD
faulting of this state component.
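An illustrative check (not taken from the patch), using QEMU's
host_cpuid() helper:

    /* CPUID leaf 0xD, subleaf n (n > 1), ECX bit 2 = XFD faulting support. */
    static bool xsave_component_supports_xfd(uint32_t n)
    {
        uint32_t eax, ebx, ecx, edx;

        host_cpuid(0xD, n, &eax, &ebx, &ecx, &edx);
        return ecx & (1u << 2);
    }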
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-5-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The kernel allocates a 4K xstate buffer by default. For XSAVE
features which require a large state component (e.g. AMX), the Linux
kernel dynamically expands the xstate buffer only after the process
has acquired the necessary permissions. Those are called dynamically-
enabled XSAVE features (or dynamic xfeatures).
There are separate permissions for native tasks and guests.
QEMU should request the guest permissions for dynamic xfeatures which
will be exposed to the guest. This only needs to be done once, before
the first vCPU is created.
KVM implemented a new ARCH_GET_XCOMP_SUPP system attribute API to get
the host-side supported_xcr0, so QEMU can decide whether it can
request permission for dynamically enabled XSAVE features.
https://lore.kernel.org/all/20220126152210.3044876-1-pbonzini@redhat.com/
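A hypothetical sketch of the permission request, using the
ARCH_*_XCOMP_GUEST_PERM arch_prctl constants from Linux's asm/prctl.h
(the helper name and the flow are illustrative only):

    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define ARCH_GET_XCOMP_GUEST_PERM 0x1024
    #define ARCH_REQ_XCOMP_GUEST_PERM 0x1025

    /* Request guest permission for one dynamic xfeature (e.g. XTILEDATA),
     * before the first vCPU is created. */
    static void request_guest_xfeature(int bit)
    {
        uint64_t mask = 0;

        if (syscall(SYS_arch_prctl, ARCH_GET_XCOMP_GUEST_PERM, &mask) == 0 &&
            !(mask & (1ULL << bit))) {
            syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_GUEST_PERM, bit);
        }
    }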
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Message-Id: <20220217060434.52460-4-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMX TILECFG register and the TMMx tile data registers are
saved/restored via XSAVE, respectively in state component 17
(64 bytes) and state component 18 (8192 bytes).
Add the AMX feature bits to the x86_ext_save_areas array to set up
the AMX components. Add structs that define the layout of the AMX
XSAVE areas and use QEMU_BUILD_BUG_ON to validate the struct sizes.
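A layout sketch of those areas (the struct names are assumptions here;
the sizes follow the text above):

    typedef struct XSaveXTILECFG {
        uint8_t xtilecfg[64];
    } XSaveXTILECFG;                  /* state component 17, 64 bytes   */

    typedef struct XSaveXTILEDATA {
        uint8_t xtiledata[8][1024];
    } XSaveXTILEDATA;                 /* state component 18, 8192 bytes */

    QEMU_BUILD_BUG_ON(sizeof(XSaveXTILECFG) != 0x40);
    QEMU_BUILD_BUG_ON(sizeof(XSaveXTILEDATA) != 0x2000);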
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The extended state subleaves (EAX=0Dh, ECX=n, n>1).ECX[1] indicate
whether the extended state component is located on the next 64-byte
boundary following the preceding state component when the compacted
format of an XSAVE area is used.
Right now, they are all zero because no supported component needed the
bit to be set, but the upcoming AMX feature will use it. Fix the
subleaf values according to KVM's supported CPUID.
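For illustration, this is how the bit drives offsets in the compacted
format; every array and parameter below is hypothetical:

    /* Components 0/1 live in the 512-byte legacy region; the XSAVE header
     * is 64 bytes, so compacted components start at offset 576. */
    static void compacted_offsets(uint64_t xcomp_bv, int nr, const bool *align64,
                                  const uint32_t *size, uint32_t *offset_out)
    {
        uint32_t offset = 512 + 64;

        for (int i = 2; i < nr; i++) {
            if (!(xcomp_bv & (1ULL << i))) {
                continue;                      /* component not present */
            }
            if (align64[i]) {                  /* the (EAX=0Dh,ECX=i).ECX[1] bit */
                offset = ROUND_UP(offset, 64);
            }
            offset_out[i] = offset;
            offset += size[i];                 /* component size, CPUID.D.i.EAX */
        }
    }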
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-2-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Loading a non-canonical address into rsp when handling an interrupt
or performing a far call should raise #SS, not #GP.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/870
Signed-off-by: Gareth Webb <gareth.webb@umbralsoftware.co.uk>
Message-Id: <164529651121.25406.15337137068584246397-0@git.sr.ht>
[Move get_pg_mode to seg_helper.c for user-mode emulators. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
LA57/PKE/PKS are only relevant in 64-bit mode, and NXE is only relevant if
PAE is in use. Since there is code that checks PG_MODE_LA57 to determine
the canonicality of addresses, make sure that the bit is not set by
mistake in 32-bit mode. While it would not be a problem because 32-bit
addresses by definition fit in both 48-bit and 57-bit address spaces,
it is nicer if get_pg_mode() actually returns whether a feature is enabled,
and it allows a few simplifications in the page table walker.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We invoke kvm_irqchip_commit_routes() for each addition to the MSI
route table, which is not efficient if we are adding lots of routes in
some cases.
This patch lets the callers invoke kvm_irqchip_commit_routes()
themselves, so the callers can decide how to optimize.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00967.html
Signed-off-by: Longpeng <longpeng2@huawei.com>
Message-Id: <20220222141116.2091-3-longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo suggested adding a new API to support route changes [1]. We
should invoke kvm_irqchip_begin_route_changes() before changing the
routes, increment KVMRouteChange.changes if the routes are changed,
and commit the changes at the end.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg02898.html
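A sketch of the intended caller-side pattern; dev, nr_vectors and the
exact names of the add/commit helpers are assumptions, only
kvm_irqchip_begin_route_changes() and KVMRouteChange.changes come from
this series:

    KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);

    for (int i = 0; i < nr_vectors; i++) {
        kvm_irqchip_add_msi_route(&c, i, dev);  /* bumps c.changes */
    }
    kvm_irqchip_commit_route_changes(&c);       /* one commit for all routes */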
Signed-off-by: Longpeng <longpeng2@huawei.com>
Message-Id: <20220222141116.2091-2-longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The headers are now all available in the MinGW master branch
(commit 13390dbbf885f and earlier), aiming for 10.0.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220222194008.610377-4-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The VssCoordinator and VssAdmin interfaces have been moved to
vsadmin.h in the Windows SDK.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220222194008.610377-3-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a left-over, despite requesting the change before the merge.
Fixes: commit 8821a389 ("configure, meson: replace VSS SDK checks and options with --enable-vss-sdk")
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220222194008.610377-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5-level EPT is present in Icelake Server CPUs and is supported by QEMU
('vmx-page-walk-5').
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220221145316.576138-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes the following error, triggered when stopping and resuming
a 64-bit Linux kernel via gdb:
qemu-system-x86_64.exe: WHPX: Failed to set virtual processor context, hr=c0350005
The previous logic for synchronizing the values did not take into
account that the lower 4 bits of the CR8 register, containing the
priority level, map to bits 7:4 of the APIC.TPR register (see section
10.8.6.1 of Volume 3 of the Intel 64 and IA-32 Architectures Software
Developer's Manual).
This caused WHvSetVirtualProcessorRegisters() to fail with an error,
effectively preventing GDB from changing the guest context.
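For illustration, the two registers differ only by a 4-bit shift
(the helper names are made up):

    /* CR8[3:0] holds the priority class, which occupies TPR[7:4]
     * (SDM Vol. 3, section 10.8.6.1). */
    static inline uint8_t cr8_to_tpr(uint8_t cr8) { return (cr8 & 0x0f) << 4; }
    static inline uint8_t tpr_to_cr8(uint8_t tpr) { return tpr >> 4; }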
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <010b01d82874$bb4ef160$31ecd420$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure that pausing the VM while in 64-bit mode will set the
HF_CS64_MASK flag in env->hflags (see x86_update_hflags() in
target/i386/cpu.c).
Without it, the code in gdbstub.c would only use the 32-bit register values
when debugging 64-bit targets, making debugging effectively impossible.
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <00f701d82874$68b02000$3a106000$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'net-pull-request' of https://github.com/jasowang/qemu into staging
* tag 'net-pull-request' of https://github.com/jasowang/qemu:
vdpa: Expose VHOST_F_LOG_ALL on SVQ
vdpa: Never set log_base addr if SVQ is enabled
vdpa: Adapt vhost_vdpa_get_vring_base to SVQ
vdpa: Add custom IOTLB translations to SVQ
vhost: Add VhostIOVATree
util: add iova_tree_find_iova
util: Add iova_tree_alloc_map
vhost: Shadow virtqueue buffers forwarding
vdpa: adapt vhost_ops callbacks to svq
virtio: Add vhost_svq_get_vring_addr
vhost: Add vhost_svq_valid_features to shadow vq
vhost: Add Shadow VirtQueue call forwarding capabilities
vhost: Add Shadow VirtQueue kick forwarding capabilities
vhost: Add VhostShadowVirtqueue
virtio-net: fix map leaking on error during receive
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
egl-headless depends on the backing surface being set before textures
are set and updated. Display it (update=true) iff the current scanout
kind is SURFACE.
Reported-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
gfx_switch() is called to set the new_surface, not necessarily to
display it. It should be displayed after gfx_update(). Send the whole
scanout only in this case.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
The DBus listener naively creates, updates and destroys textures
without taking other listeners into account. The textures were shared,
but texture updates were unnecessarily duplicated.
Teach DisplayGLCtx to optionally do shared texture handling. This is
only implemented for DBus display at this point, however the same
infrastructure could potentially be used for other future combinations.
Reported-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Upstream CI uses ubuntu 18.04 too, so pick
that version (instead of something newer).
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
python2 is not supported any more,
so go install python3 instead.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
SVQ is able to log the dirty bits by itself, so let's use it to avoid
blocking migration.
Also, ignore the set and clear of VHOST_F_LOG_ALL on set_features if
SVQ is enabled. Even if the device supports it, the reports would be
nonsense because the SVQ memory is in the qemu region.
The log region is still allocated. Future changes might skip that, but
this series is already long enough.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Setting the log address would make the device start reporting invalid
dirty memory because the SVQ vrings are located in qemu's memory.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This is needed to achieve migration, so the destination can restore
its index.
Set the base to the last used idx, so the destination will see as
available all the entries that the device did not use, including the
ones still being processed in flight.
This is OK for networking, but other kinds of devices might have
problems with these retransmissions.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Use the translations added in VhostIOVATree in SVQ.
Only introduce the usage here, not allocation and deallocation. As
with previous patches, we use the dead code paths of
shadow_vqs_enabled to avoid committing too many changes at once. These
paths are impossible to take at the moment.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This tree is able to look up a translated address from an IOVA
address.
At first glance it is similar to util/iova-tree. However, SVQ working
on devices with limited IOVA space needs more capabilities, like
allocating IOVA chunks or performing reverse translations (qemu
addresses to iova).
The allocation capability, as in "assign a free IOVA address to this
chunk of memory in qemu's address space", allows the shadow virtqueue
to create a new address space that is not restricted by the guest's
addressable one, so we can allocate the shadow vq vrings outside of
it.
It duplicates the tree so it can search efficiently in both
directions, and it will signal an overlap if the iova or the
translated address is already present in either tree.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This function does the reverse operation of iova_tree_find: it looks
for a mapping that matches a translated address so we can do the
reverse lookup.
This has linear complexity instead of logarithmic, but it supports
overlapping HVAs. Future developments could reduce it.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This iova tree function looks for a hole in the allocated regions and
returns a totally new translation for a given translated address.
Its main usage is to allow devices to access qemu's address space,
remapping the guest's address space into a new iova space where qemu
can add chunks of addresses.
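A hypothetical caller sketch; tree, hva, len and the IOVA window are
made up, and the DMAMap size convention is assumed to be inclusive:

    static int map_qemu_buffer(IOVATree *tree, void *hva, size_t len,
                               hwaddr iova_first, hwaddr iova_last, hwaddr *iova)
    {
        DMAMap map = {
            .translated_addr = (hwaddr)(uintptr_t)hva,
            .size = len - 1,
            .perm = IOMMU_RW,
        };
        int r = iova_tree_alloc_map(tree, &map, iova_first, iova_last);

        if (r == IOVA_OK) {
            *iova = map.iova;   /* freshly allocated IOVA for this chunk */
        }
        return r;
    }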
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Initial version of the shadow virtqueue that actually forwards
buffers. There is no iommu support at the moment, and that will be
addressed in future patches of this series. Since all vhost-vdpa
devices use forced IOMMU, this means that SVQ is not usable at this
point of the series on any device.
For simplicity it only supports modern devices, which expect the vring
in little endian, with a split ring and no event idx or indirect
descriptors. Support for them will not be added in this series.
It reuses the VirtQueue code for the device part. The driver part is
based on Linux's virtio_ring driver, but with stripped functionality
and optimizations so it's easier to review.
However, forwarding buffers has some particular pieces: one of the
most unexpected ones is that a guest's buffer can expand across more
than one descriptor in SVQ. While this is handled gracefully by qemu's
emulated virtio devices, it may cause an unexpected SVQ queue-full
condition. This patch also solves it by checking for this condition at
both the guest's kicks and the device's calls. The code may be more
elegant in the future if the SVQ code runs in its own iocontext.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
First half of the buffer forwarding part, preparing the vhost-vdpa
callbacks to SVQ to offer it. QEMU cannot enable it at this moment, so
this is effectively dead code for now, but it helps to reduce the
patch size.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
It reports the shadow virtqueue address from qemu's virtual address
space. Since this will be different from the guest's vaddr, but the
device can access it, SVQ takes special care with its alignment and
lack of garbage data. It assumes that the IOMMU will work in
host_page_size ranges for that.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This allows SVQ to negotiate features with the guest and the device.
For the device, SVQ is a driver. While this function bypasses all
non-transport features, it needs to disable the features that SVQ does
not support when forwarding buffers. These include the packed vq
layout, indirect descriptors and event idx.
Future changes can add support to offer more features to the guest,
since the use of VirtQueue gives this for free. This is left out at the
moment for simplicity.
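A sketch of that filtering (the virtio bit names come from the
standard headers; the helper name here is illustrative):

    static bool svq_features_ok(uint64_t features)
    {
        const uint64_t unsupported =
            BIT_ULL(VIRTIO_F_RING_PACKED) |
            BIT_ULL(VIRTIO_RING_F_INDIRECT_DESC) |
            BIT_ULL(VIRTIO_RING_F_EVENT_IDX);

        return !(features & unsupported);
    }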
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>