The PCI MMIO might be disabled, or the device might be in a reset state.
Make sure we do not dump these memory regions.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This way, we can ensure the new bootindex takes effect
when the VM reboots.
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Move bootindex from a qdev property to a QOM property; things will
continue to work just fine, and we can use QOM features
which are not supported by qdev properties.
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
All memory regions used by VFIO are LITTLE_ENDIAN and they
already take care of endianness when accessing real device BARs,
except ROM - it was broken on BE hosts.
This fixes endianness for ROM BARs the same way as it is done
for other BARs.
This has been tested on PPC64 BE/LE host/guest in all possible
combinations including TCG.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[aik: added commit log]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This reverts commit c40708176a.
The resulting code wrongly assumed that target and host endianness are
the same, which is not always the case for PPC64.
[aw: or potentially any host supporting VFIO and TCG]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
If we make use of OVMF for the BIOS then we can use GPUs without VGA
space access, but we still need this quirk. Disassociate it from the
x-vga option and enable it on all NVIDIA VGA display class devices.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
SCSI changes that enable sending vendor-specific commands via virtio-scsi.
Memory changes for QOMification and automatic tracking of MR lifetime.
# gpg: Signature made Mon 18 Aug 2014 13:03:09 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/bonzini/tags/for-upstream:
mtree: remove write-only field
memory: Use canonical path component as the name
memory: Use memory_region_name for name access
memory: constify memory_region_name
exec: Abstract away ref to memory region names
loader: Abstract away ref to memory region names
tpm_tis: remove instance_finalize callback
memory: remove memory_region_destroy
memory: convert memory_region_destroy to object_unparent
ioport: split deletion and destruction
nic: do not destroy memory regions in cleanup functions
vga: do not dynamically allocate chain4_alias
sysbus: remove unused function sysbus_del_io
qom: object: move unparenting to the child property's release callback
qom: object: delete properties before calling instance_finalize
virtio-scsi: implement parse_cdb
scsi-block, scsi-generic: implement parse_cdb
scsi-block: extract scsi_block_is_passthrough
scsi-bus: introduce parse_cdb in SCSIDeviceClass and SCSIBusInfo
scsi-bus: prepare scsi_req_new for introduction of parse_cdb
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The function is empty after the previous patch, so remove it.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Explicitly call object_unparent in the few places where we
will re-create the memory region. If the memory region is
simply being destroyed as part of device teardown, let QOM
handle it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 40509f7f added a test to avoid updating KVM MSI routes when the
MSIMessage is unchanged and f4d45d47 switched to relying on this
rather than doing our own comparison. Our cached msg is effectively
unused now. Remove it.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
When new MSI-X vectors are enabled we need to disable MSI-X and
re-enable it with the correct number of vectors. That means we need
to reprogram the eventfd triggers for each vector. Prior to f4d45d47
vector->use tracked whether a vector was masked or unmasked and we
could always pick the KVM path when available for unmasked vectors.
Now vfio doesn't track mask state itself, and vector->use and virq
remain configured even for masked vectors. Therefore we need to ask
the MSI-X code whether a vector is masked in order to select the
correct signaling path. As noted in the comment, MSI relies on
hardware to handle masking.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # QEMU 2.1
The permission bits of a TCE entry should be extracted without including
the physical base address. Otherwise, unmapping a TCE entry can be
wrongly interpreted as mapping it for VFIO devices.
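A minimal sketch of the distinction, with hypothetical mask names (the
real masks live in the sPAPR IOMMU code):

    #include <stdbool.h>
    #include <stdint.h>

    #define TCE_PERM_MASK  0x3ULL      /* hypothetical: low R/W permission bits */
    #define TCE_ADDR_MASK  ~0xfffULL   /* hypothetical: real page address bits  */

    /* A TCE counts as mapped only if its permission bits are set;
     * a non-zero physical base address alone must not be enough. */
    static bool tce_is_mapped(uint64_t tce)
    {
        return (tce & TCE_PERM_MASK) != 0;
    }

    static uint64_t tce_real_address(uint64_t tce)
    {
        return tce & TCE_ADDR_MASK;
    }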
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
io-error is for block device errors; it should always be preceded
by a BLOCK_IO_ERROR event. I think vfio wants to use
RUN_STATE_INTERNAL_ERROR instead.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The slow BAR access path is used when VFIO fails to mmap() a BAR.
Since this is just a transport between the guest and a device, there is
no need to do endianness swapping.
This changes BARs to use native endianness. Since non-ROM BARs were
doing byte swapping, the patch removes it. As a result, the cancelling
byte swaps are eliminated and there is no change in behavior for
non-ROM BARs.
ROM BARs were declared little endian too, but byte swapping was not
implemented for them, so they never actually worked on big endian
systems as there was no cancelling byte swap. This fixes endianness for
ROM BARs by declaring them native endian and only fixing access sizes,
as is done for non-ROM BARs.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
There are still old guests out there that over-exercise MSI-X masking.
The current code completely sets up and tears down an MSI-X vector on
the "use" and "release" callbacks. While this is functional, it can
slow an old guest to a crawl. We can easily skip the KVM parts of
this so that we keep the MSI route and irqfd setup. We do however
need to switch VFIO to trigger a different eventfd while masked.
Actually, we have the option of continuing to use -1 to disable the
trigger, but by using another EventNotifier we can allow the MSI-X
core to emulate pending bits and re-fire the vector once unmasked.
MSI code gets updated as well to use the same setup and teardown
structures and functions.
Prior to this change, an igbvf assigned to a RHEL5 guest gets about
20Mbps and 50 transactions/s with netperf (remote or VF->PF). With
this change, we get line rate and 3k transactions/s remote or 2Gbps
and 6k+ transactions/s to the PF. No significant change is expected
for newer guests with better-behaved MSI-X support.
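A rough sketch of the trigger switching described above, with
illustrative names (the real wiring goes through KVM irqfds and the
MSI-X core):

    #include <sys/eventfd.h>

    struct vector_state {
        int kvm_irqfd;      /* stays wired to the KVM route across mask/unmask */
        int masked_fd;      /* signalled instead while the vector is masked    */
    };

    static int vector_init(struct vector_state *v)
    {
        v->kvm_irqfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        v->masked_fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        return (v->kvm_irqfd < 0 || v->masked_fd < 0) ? -1 : 0;
    }

    /* The eventfd VFIO should currently be told to signal for this vector;
     * on unmask, pending events on masked_fd can be replayed by the MSI-X
     * emulation and the trigger switched back to the irqfd. */
    static int vector_trigger_fd(const struct vector_state *v, int masked)
    {
        return masked ? v->masked_fd : v->kvm_irqfd;
    }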
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This turns the sPAPR support on and enables VFIO container use
in the kernel.
This extends vfio_connect_container to support VFIO_SPAPR_TCE_IOMMU type
in the host kernel.
This registers a memory listener which the sPAPR IOMMU will notify when
executing H_PUT_TCE and related DMA calls. The listener will then notify
the host kernel about DMA map/unmap operations via the VFIO_IOMMU_MAP_DMA/
VFIO_IOMMU_UNMAP_DMA ioctls.
This executes VFIO_IOMMU_ENABLE ioctl to make sure that the IOMMU is free
of mappings and can be exclusively given to the user. At the moment SPAPR
is the only platform requiring this call to be implemented.
Note that the host kernel function implementing VFIO_IOMMU_DISABLE
is called automatically when container's fd is closed so there is
no need to call it explicitly from QEMU. We may need to call
VFIO_IOMMU_DISABLE explicitly in the future for some sort of dynamic
reconfiguration (PCI hotplug or dynamic IOMMU group management).
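A condensed sketch of that container setup, using the VFIO ioctls named
above (error handling trimmed, function name illustrative; the real code
is in vfio_connect_container()):

    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int spapr_container_setup(int container_fd, int group_fd)
    {
        if (!ioctl(container_fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
            return -1;                  /* host kernel lacks the sPAPR backend */
        }
        if (ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd) ||
            ioctl(container_fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU)) {
            return -1;
        }
        /* Make sure the IOMMU is free of mappings and hand it to the user;
         * VFIO_IOMMU_DISABLE is implied when the container fd is closed. */
        if (ioctl(container_fd, VFIO_IOMMU_ENABLE)) {
            return -1;
        }
        return 0;
    }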
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
While most operations with the VFIO IOMMU driver are generic and used inside
vfio.c, there are still some operations which only specific VFIO IOMMU
drivers implement. The first example will be reading the DMA window
start from the host.
This adds a helper which passes an ioctl request to the container's fd.
The helper checks whether @req is known; for this, a stub is added which
returns -1 on any request for now.
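A minimal sketch of such a pass-through helper (names illustrative; the
stub mentioned above simply returns -1):

    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    struct vfio_container {
        int fd;
    };

    /* Forward only requests we know about to the container fd;
     * anything else is rejected rather than passed through blindly. */
    static int vfio_container_do_ioctl(struct vfio_container *container,
                                       unsigned int req, unsigned long param)
    {
        switch (req) {
        case VFIO_CHECK_EXTENSION:
            return ioctl(container->fd, req, param);
        default:
            return -1;          /* unknown request: the stub path */
        }
    }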
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch uses the new IOMMU notifiers to allow VFIO pass through devices
to work with guest side IOMMUs, as long as the host-side VFIO IOMMU has
sufficient capability and granularity to match the guest side. This works
by tracking all map and unmap operations on the guest IOMMU using the
notifiers, and mirroring them into VFIO.
There are a number of FIXMEs, and the scheme involves rather more notifier
structures than I'd like, but it should make for a reasonable proof of
concept.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
So far, VFIO has a notion of different logical DMA address spaces, but
only ever uses one (system memory). This patch extends this, creating
new VFIOAddressSpace objects as necessary, according to the AddressSpace
reported by the PCI subsystem for this device's DMAs.
This isn't enough yet to support guest side IOMMUs with VFIO, but it does
mean we could now support VFIO devices on, for example, a guest side PCI
host bridge which maps system memory somewhere other than 0 in PCI
space.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The only model so far supported for VFIO passthrough devices is the model
usually used on x86, where all of the guest's RAM is mapped into the
(host) IOMMU and there is no IOMMU visible in the guest.
This patch begins to relax this model, introducing the notion of a
VFIOAddressSpace. This represents a logical DMA address space which will
be visible to one or more VFIO devices by appropriate mapping in the (host)
IOMMU. Thus the currently global list of containers becomes local to
a VFIOAddressSpace, and we verify that we don't attempt to add a VFIO
group to multiple address spaces.
For now, only one VFIOAddressSpace is created and used, corresponding to
main system memory; that will change in future patches.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This reworks vfio_connect_container() and vfio_get_group() to have a
common exit path at the end of the function bodies.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Upcoming VFIO on SPAPR PPC64 support will initialize the IOMMU
memory region with UINT64_MAX (2^64 bytes) size, so int128_get64()
will assert.
The patch takes care of this check. The existing type1 IOMMU code
is not expected to map all 64 bits of RAM, so the patch does not
touch that part.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
This device is ridiculous. It has two MMIO BARs, BAR4 and BAR2. BAR4
hosts the MSI-X table, so obviously it would be too easy to access it
directly; instead it creates a window register in BAR2 that, among
other things, provides access to the MSI-X table. This means MSI-X
doesn't work in the guest because the driver actually manages to
program the physical table. When interrupt remapping is present, the
device MSI will be blocked. The Linux driver doesn't make use of this
window, so apparently it's not required to make use of MSI-X. This
quirk makes the device work with the Windows driver that does use this
window for MSI-X, but I certainly cannot recommend this device for
assignment (the Windows 7 driver also constantly pokes PCI config
space).
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
* Remove terminating newlines from hw_error() and error_report() calls
* Fix cut-n-paste error in text (s/to/from/)
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Commit e638073c56 added a flag to track whether
a previous ROM read had failed. Accidentally, the code
ended up adding a call to vfio_load_option_rom twice. (Thanks to Alex
for spotting it.)
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Certain cards such as the Broadcom BCM57810 have ROM quirks
that exhibit unstable system behavior during device assignment. In
the particular case of the 57810, ROM execution hangs, and if an FLR
follows, the device becomes inoperable until a power cycle. This
change blacklists loading of the ROM for such cards unless the user
specifies a romfile or rombar=1 on the command line.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
readlink() returns the number of bytes written to the buffer, and it
doesn't write a terminating null byte; vfio_init() writes it itself.
This overruns the buffer when readlink() has filled it completely.
Fix this by treating a readlink() that fills the buffer completely as
an error, like we do in pci-assign.c's assign_failed_examine().
Spotted by Coverity.
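A sketch of the safe pattern (not the exact vfio_init() code):

    #include <stddef.h>
    #include <unistd.h>

    static int read_link_name(const char *path, char *buf, size_t buflen)
    {
        ssize_t len = readlink(path, buf, buflen);

        /* A return equal to the buffer size may mean truncation, so treat
         * it as an error; readlink() never writes the terminating NUL. */
        if (len < 0 || (size_t)len >= buflen) {
            return -1;
        }
        buf[len] = '\0';
        return 0;
    }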
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Change the macro to DEBUG_VFIO in vfio_msi_interrupt() so that debug
messages get printed.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
VFIO virtualizes the MSI-X table for the guest by not mapping the part of
the BAR which contains the MSI-X table. Since vfio_mmap_bar() mmaps chunks
before and after the MSI-X table, they have to be aligned to the host
page size, which may be TARGET_PAGE_SIZE (4K) or 64K in the case of PPC64.
This fixes the boundary calculations to use the real host page size.
Without the patch, the chunk before the MSI-X table may overlap with the
MSI-X table and mmap will fail in the host kernel. The result is a serious
slowdown as the whole BAR will be emulated by QEMU.
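A sketch of the boundary computation using the real host page size
(names are illustrative, not the actual vfio_mmap_bar() code):

    #include <stdint.h>
    #include <unistd.h>

    struct bar_chunks {
        uint64_t before_end;    /* mmap [0, before_end) of the BAR         */
        uint64_t after_start;   /* mmap [after_start, bar_size) of the BAR */
    };

    static struct bar_chunks split_around_msix(uint64_t table_offset,
                                               uint64_t table_size)
    {
        uint64_t pgsz = (uint64_t)sysconf(_SC_PAGESIZE);  /* 4K or 64K: ask the host */
        struct bar_chunks c;

        c.before_end  = table_offset & ~(pgsz - 1);                        /* round down */
        c.after_start = (table_offset + table_size + pgsz - 1) & ~(pgsz - 1); /* round up */
        return c;
    }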
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The vfio-pci initfn will currently succeed even if DMA mappings fail.
A typical reason for failure is if the user does not have sufficient
privilege to lock all the memory for the guest. In this case, the
device gets attached, but can only access a portion of guest memory
and is extremely unlikely to work.
DMA mappings are done via a MemoryListener, which provides no direct
error return path. We therefore stuff the errno into our container
structure and check for error after registration completes. We can
also test for mapping errors during runtime, but our only option for
resolution at that point is to kill the guest with a hw_error.
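A sketch of that error-propagation pattern (structure and names are
illustrative, not QEMU's MemoryListener API):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    struct container {
        int fd;
        int error;                      /* first mapping errno, 0 if none */
    };

    /* Listener-style callback: it cannot return an error, so remember it. */
    static void region_add(struct container *c, uint64_t iova,
                           uint64_t size, void *vaddr)
    {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)vaddr,
            .iova  = iova,
            .size  = size,
        };

        if (ioctl(c->fd, VFIO_IOMMU_MAP_DMA, &map) && c->error == 0) {
            c->error = -errno;          /* checked once registration is done */
        }
    }

After walking the existing regions, the caller would check
container->error and fail the initfn if any mapping failed.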
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Since 57271d63 we now see spurious mappings with the upper bits set
if 64bit PCI BARs are sized while enabled. The guest writes a mask
of 0xffffffff to the lower BAR to size it, then restores it, then
writes the same mask to the upper BAR resulting in a spurious BAR
mapping into the last 4G of the 64bit address space. Most
architectures do not support or make use of the full 64-bit address
space for PCI BARs, so we filter out mappings with the high bit set.
Long term, we probably need to think about vfio telling us the
address width limitations of the IOMMU.
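The filter itself is tiny; a sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sections whose address has bit 63 set are artifacts of sizing an
     * enabled 64-bit BAR, not mappings we want to hand to the IOMMU. */
    static bool skip_high_mapping(uint64_t offset_within_address_space)
    {
        return offset_within_address_space & (1ULL << 63);
    }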
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
During lazy ROM loading, if a ROM read fails and the
guest attempts a read again, vfio will attempt it again.
Add a boolean to prevent this. There could be a case where
a failed ROM read might succeed the next time because of
a device reset or such, but it's best to exclude unpredictable
behavior.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
If the device ROM can't be read, report an error to the
user. This alerts the user that the device is in a bad
state that is causing the ROM read failure, or that option
ROM loading has been disabled from the device boot menu
(among other reasons).
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Somehow this has been lurking for a while; we remove our subregions
from the base BAR and VGA region mappings, but we don't destroy them,
creating a leak and more serious problems when we try to migrate after
removing these devices. Add the trivial bit of final cleanup to
remove these entirely.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
We were relying on msix_unset_vector_notifiers() to release all the
vectors when we disable MSI-X, but this only happens when MSI-X is
still enabled on the device. Perform further cleanup by releasing
any remaining vectors listed as in-use after this call. This caused
a leak of IRQ routes on hotplug depending on how the guest OS prepared
the device for removal.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
When MSI is enabled on Nvidia GeForce cards the driver seems to
acknowledge the interrupt by writing a 0xff byte to the MSI capability
ID register using the PCI config space mirror at offset 0x88000 from
BAR0. Without this, the device will only fire a single interrupt.
VFIO handles the PCI capability ID/next registers as virtual w/o write
support, so any write through config space is currently dropped. Add
a check for this and allow the write through the BAR window. The
registers are read-only anyway.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Add and remove groups from the KVM virtual VFIO device as we make
use of them. This allows KVM to optimize for performance and
correctness based on properties of the group.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
When an assigned device is initialized it copies the device config
space into the emulated config space. Unfortunately multifunction is
set up prior to the device initfn and gets clobbered. We need to
restore it just like pci-assign does.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Bandan Das <bsd@redhat.com>
Message-id: 20131112185059.7262.33780.stgit@bling.home
Cc: qemu-stable@nongnu.org
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
Merge remote-tracking branch 'mst/tags/for_anthony' into staging
pci, pc, acpi fixes, enhancements
This includes some pretty big changes:
- pci master abort support by Marcel
- pci IRQ API rework by Marcel
- acpi generation support by myself
Everything has gone through several revisions; the latest versions have
been on the list for a while without any further comments, and have been
tested by several people.
Please pull for 1.7.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 15 Oct 2013 07:33:48 AM CEST using RSA key ID D28D5469
# gpg: Can't check signature: public key not found
* mst/tags/for_anthony: (39 commits)
ssdt-proc: update generated file
ssdt: fix PBLK length
i386: ACPI table generation code from seabios
pc: use new api to add builtin tables
acpi: add interface to access user-installed tables
hpet: add API to find it
pvpanic: add API to access io port
ich9: APIs for pc guest info
piix: APIs for pc guest info
acpi/piix: add macros for acpi property names
i386: define pc guest info
loader: allow adding ROMs in done callbacks
i386: add bios linker/loader
loader: use file path size from fw_cfg.h
acpi: ssdt pcihp: updat generated file
acpi: pre-compiled ASL files
acpi: add rules to compile ASL source
i386: add ACPI table files from seabios
q35: expose mmcfg size as a property
q35: use macro for MCFG property name
...
Message-id: 1381818560-18367-1-git-send-email-mst@redhat.com
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
pci_set_irq and the other PCI irq wrappers use the
PCI_INTERRUPT_PIN config register to compute the device
INTx pin to assert/deassert.
Save the INTx pin into the config register before calling
pci_set_irq.
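A sketch of what that amounts to (PCI_INTERRUPT_PIN is the standard
config space offset 0x3d; the helper name here is illustrative):

    #include <stdint.h>

    #define PCI_INTERRUPT_PIN 0x3d   /* standard PCI config space offset */

    /* Write the INTx pin (1 = INTA .. 4 = INTD, 0 = none) into emulated
     * config space so pci_set_irq() and friends compute the right pin. */
    static void set_interrupt_pin(uint8_t *config, uint8_t pin)
    {
        config[PCI_INTERRUPT_PIN] = pin;
    }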
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
VFIO is always little endian, so do byte swapping of our mask on the
way in and byte swapping of the size on the way out.
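A sketch of the convention, using glibc's <endian.h> helpers rather than
QEMU's (the register here is a stand-in):

    #include <endian.h>
    #include <stdint.h>

    static uint32_t device_reg;               /* stand-in for a VFIO register */

    static void reg_write(uint32_t host_val)
    {
        device_reg = htole32(host_val);       /* swap the mask on the way in  */
    }

    static uint32_t reg_read(void)
    {
        return le32toh(device_reg);           /* swap the size on the way out */
    }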
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Just to be sure we don't jump off any NULL pointer cliffs.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Memory regions can easily be 2^64 bytes long, which overflows a 64-bit
value by just a bit, but that is enough for int128_get64() to assert.
This takes care of debug printing of huge section sizes.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Now that VFIO has a PCI hot reset interface, take advantage of it.
There are two modes that we need to consider. The first is when only
one device within the set of devices affected is actually assigned to
the guest. In this case the other devices are just held by VFIO
for isolation and we can pretend they're not there, doing an entire
bus reset whenever the device reset callback is triggered. Supporting
this case separately allows us to do the best reset we can do of the
device even if the device is hotplugged.
The second mode is when multiple affected devices are all exposed to
the guest. In this case we can only do a hot reset when the entire
system is being reset. However, this also allows us to track which
individual devices are affected by a reset and only do them once.
We split our reset function into pre- and post-reset helper functions,
prioritize the types of device resets available to us, and create
separate _one vs. _multi reset interfaces to handle the distinct cases
above.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
During vfio-pci initfn, the device is not always in a state where the
option ROM can be read. In the case of graphics cards, there's often
no per function reset, which means we have host driver state affecting
whether the option ROM is usable. Ideally we want to move reading the
option ROM past any co-assigned device resets to the point where the
guest first tries to read the ROM itself.
To accomplish this, we switch the memory region for the option rom to
an I/O region rather than a memory mapped region. This has the side
benefit that we don't waste KVM memory slots for a BAR where we don't
care about performance. This also allows us to delay loading the ROM
from the device until the first read by the guest. We then use the
PCI config space size of the ROM BAR when setting up the BAR through
QEMU PCI.
Another benefit of this approach is that previously when a user set
the ROM to a file using the romfile= option, we still probed VFIO for
the parameters of the ROM, which can result in dmesg errors about an
invalid ROM. We now only probe VFIO to get the ROM contents if the
guest actually tries to read the ROM.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Not all resets are created equal. PM reset is not very reliable,
especially for GPUs, so we might want to opt for a bus reset if a
standard reset will only do a D3hot->D0 transition. We can also
use this to tell if the standard reset will do a bus reset (if
neither has_pm_reset nor has_flr is probed, but the device still
supports reset).
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
When MSI is accelerated through KVM the vectors are only programmed
when the guest first enables MSI support. Subsequent writes to the
vector address or data fields are ignored. Unfortunately that means
we're ignoring updates done to adjust the SMP affinity of the vectors.
MSI SMP affinity already works in non-KVM mode because the address
and data fields are read from their backing store on each interrupt.
This patch stores the MSIMessage programmed into KVM so that we can
determine when changes are made and update the routes.
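A sketch of the change detection (MSIMessage simplified to its two
fields):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t address;
        uint32_t data;
    } MSIMessage;

    /* Only update the KVM route when the guest actually reprogrammed the
     * vector, e.g. to move its SMP affinity to another CPU. */
    static bool msi_message_changed(const MSIMessage *cached,
                                    const MSIMessage *now)
    {
        return cached->address != now->address || cached->data != now->data;
    }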
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>