/*
 * Dedicated thread for virtio-blk I/O processing
 *
 * Copyright 2012 IBM, Corp.
 * Copyright 2012 Red Hat, Inc. and/or its affiliates
 *
 * Authors:
 *   Stefan Hajnoczi <stefanha@redhat.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2 or later.
 * See the COPYING file in the top-level directory.
 *
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/main-loop.h"
#include "qemu/thread.h"
#include "qemu/error-report.h"
#include "hw/virtio/virtio-access.h"
#include "hw/virtio/virtio-blk.h"
#include "virtio-blk.h"
#include "block/aio.h"
#include "hw/virtio/virtio-bus.h"
#include "qom/object_interfaces.h"

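/*
 * A VirtIOBlockDataPlane processes a device's virtqueues in s->ctx, the
 * AioContext of the configured IOThread (or the main loop's AioContext
 * when no iothread is given). Guest kicks arrive through ioeventfd host
 * notifiers and guest notifications are raised through irqfd, so the
 * request hot path does not have to bounce through the main loop.
 */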
struct VirtIOBlockDataPlane {
    bool starting;
    bool stopping;

    VirtIOBlkConf *conf;
    VirtIODevice *vdev;
    QEMUBH *bh;                     /* bh for guest notification */
    unsigned long *batch_notify_vqs;
    bool batch_notifications;

    /* Note that these EventNotifiers are assigned by value. This is
     * fine as long as you do not call event_notifier_cleanup on them
     * (because you don't own the file descriptor or handle; you just
     * use it).
     */
    IOThread *iothread;
    AioContext *ctx;
};

/* Raise an interrupt to signal guest, if necessary */
void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
{
    if (s->batch_notifications) {
        set_bit(virtio_get_queue_index(vq), s->batch_notify_vqs);
        qemu_bh_schedule(s->bh);
    } else {
        virtio_notify_irqfd(s->vdev, vq);
    }
}

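/*
 * Deferred notification BH: drain the bitmap of virtqueues marked by
 * virtio_blk_data_plane_notify() and raise one interrupt per pending queue.
 * virtio_notify_irqfd() also sets ISR because some Windows drivers poll ISR
 * when they believe MSI is unavailable (e.g. during crashdump/hibernation).
 */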
static void notify_guest_bh(void *opaque)
{
    VirtIOBlockDataPlane *s = opaque;
    unsigned nvqs = s->conf->num_queues;
    unsigned long bitmap[BITS_TO_LONGS(nvqs)];
    unsigned j;

    memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap));
    memset(s->batch_notify_vqs, 0, sizeof(bitmap));

    for (j = 0; j < nvqs; j += BITS_PER_LONG) {
        unsigned long bits = bitmap[j / BITS_PER_LONG];

        while (bits != 0) {
            unsigned i = j + ctzl(bits);
            VirtQueue *vq = virtio_get_queue(s->vdev, i);

            virtio_notify_irqfd(s->vdev, vq);

            bits &= bits - 1; /* clear right-most bit */
        }
    }
}

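/*
 * Example configuration that exercises this path (a sketch: the drive file
 * and IDs are illustrative, not defaults):
 *
 *   qemu-system-x86_64 -object iothread,id=io \
 *       -drive id=disk0,file=disk.img,if=none,format=raw \
 *       -device virtio-blk-pci,iothread=io,drive=disk0
 *
 * ioeventfd must remain enabled; pairing iothread=io with ioeventfd=off
 * fails with "ioeventfd is required for iothread".
 */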
/* Context: QEMU global mutex held */
bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
                                  VirtIOBlockDataPlane **dataplane,
                                  Error **errp)
{
    VirtIOBlockDataPlane *s;
    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);

    *dataplane = NULL;

    if (conf->iothread) {
        if (!k->set_guest_notifiers || !k->ioeventfd_assign) {
            error_setg(errp,
                       "device is incompatible with iothread "
                       "(transport does not support notifiers)");
            return false;
        }
        if (!virtio_device_ioeventfd_enabled(vdev)) {
            error_setg(errp, "ioeventfd is required for iothread");
            return false;
        }

        /* If dataplane is (re-)enabled while the guest is running there could
         * be block jobs that can conflict.
         */
        if (blk_op_is_blocked(conf->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
            error_prepend(errp, "cannot start virtio-blk dataplane: ");
            return false;
        }
    }
    /* Don't try if transport does not support notifiers. */
    if (!virtio_device_ioeventfd_enabled(vdev)) {
        return false;
    }

    s = g_new0(VirtIOBlockDataPlane, 1);
    s->vdev = vdev;
    s->conf = conf;

    if (conf->iothread) {
        s->iothread = conf->iothread;
        object_ref(OBJECT(s->iothread));
        s->ctx = iothread_get_aio_context(s->iothread);
    } else {
        s->ctx = qemu_get_aio_context();
    }
    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
    s->batch_notify_vqs = bitmap_new(conf->num_queues);

    *dataplane = s;

    return true;
}

/* Context: QEMU global mutex held */
void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
{
    VirtIOBlock *vblk;

    if (!s) {
        return;
    }

    vblk = VIRTIO_BLK(s->vdev);
    assert(!vblk->dataplane_started);
    g_free(s->batch_notify_vqs);
    qemu_bh_delete(s->bh);
    if (s->iothread) {
        object_unref(OBJECT(s->iothread));
    }
    g_free(s);
}

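/*
 * Host notifier callback: invoked in s->ctx when the guest kicks a
 * virtqueue. The return value reports whether any progress was made,
 * which the AioContext uses for adaptive polling.
 */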
static bool virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
                                                VirtQueue *vq)
{
    VirtIOBlock *s = (VirtIOBlock *)vdev;

    assert(s->dataplane);
    assert(s->dataplane_started);

    return virtio_blk_handle_vq(s, vq);
}

/* Context: QEMU global mutex held */
int virtio_blk_data_plane_start(VirtIODevice *vdev)
{
    VirtIOBlock *vblk = VIRTIO_BLK(vdev);
    VirtIOBlockDataPlane *s = vblk->dataplane;
    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vblk)));
    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
    AioContext *old_context;
    unsigned i;
    unsigned nvqs = s->conf->num_queues;
    Error *local_err = NULL;
    int r;

    if (vblk->dataplane_started || s->starting) {
        return 0;
    }

    s->starting = true;
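
    /*
     * Batching notifications amortizes interrupts at the cost of a little
     * completion latency; with VIRTIO_RING_F_EVENT_IDX negotiated the guest
     * already suppresses unneeded interrupts, so batch only without it.
     */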

    if (!virtio_vdev_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX)) {
        s->batch_notifications = true;
    } else {
        s->batch_notifications = false;
    }

    /* Set up guest notifier (irq) */
    r = k->set_guest_notifiers(qbus->parent, nvqs, true);
    if (r != 0) {
        error_report("virtio-blk failed to set guest notifier (%d), "
                     "ensure -accel kvm is set.", r);
        goto fail_guest_notifiers;
    }

    /*
     * Batch all the host notifiers in a single transaction to avoid
     * quadratic time complexity in address_space_update_ioeventfds().
     */
    memory_region_transaction_begin();

    /* Set up virtqueue notify */
    for (i = 0; i < nvqs; i++) {
        r = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, true);
        if (r != 0) {
            int j = i;

            fprintf(stderr, "virtio-blk failed to set host notifier (%d)\n", r);
            while (i--) {
                virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
            }

            /*
             * The transaction expects the ioeventfds to be open when it
             * commits. Do it now, before the cleanup loop.
             */
            memory_region_transaction_commit();

            while (j--) {
                virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), j);
            }
            goto fail_host_notifiers;
        }
    }

    memory_region_transaction_commit();

    s->starting = false;
    vblk->dataplane_started = true;
    trace_virtio_blk_data_plane_start(s);

    old_context = blk_get_aio_context(s->conf->conf.blk);
    aio_context_acquire(old_context);
    r = blk_set_aio_context(s->conf->conf.blk, s->ctx, &local_err);
    aio_context_release(old_context);
    if (r < 0) {
        error_report_err(local_err);
        goto fail_aio_context;
    }
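
    /*
     * Requests that were queued while the VM was stopped must be resubmitted
     * only after the BlockBackend has moved to s->ctx; starting them from
     * the main loop could let two threads process the same virtqueue in
     * parallel.
     */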

    /* Process queued requests before the ones in vring */
    virtio_blk_process_queued_requests(vblk, false);

    /* Kick right away to begin processing requests already in vring */
    for (i = 0; i < nvqs; i++) {
        VirtQueue *vq = virtio_get_queue(s->vdev, i);

        event_notifier_set(virtio_queue_get_host_notifier(vq));
    }

    /* Get this show started by hooking up our callbacks */
    aio_context_acquire(s->ctx);
    for (i = 0; i < nvqs; i++) {
        VirtQueue *vq = virtio_get_queue(s->vdev, i);

        virtio_queue_aio_set_host_notifier_handler(vq, s->ctx,
                virtio_blk_data_plane_handle_output);
    }
    aio_context_release(s->ctx);
    return 0;

  fail_aio_context:
    memory_region_transaction_begin();

    for (i = 0; i < nvqs; i++) {
        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
    }

    memory_region_transaction_commit();

    for (i = 0; i < nvqs; i++) {
        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
    }
  fail_host_notifiers:
    k->set_guest_notifiers(qbus->parent, nvqs, false);
  fail_guest_notifiers:
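    /*
     * Dataplane could not be started: fall back to the regular virtio path.
     * dataplane_started is still set below so that this function is not
     * retried on every guest kick while running in disabled mode.
     */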
    /*
     * If we failed to set up the guest notifiers queued requests will be
     * processed on the main context.
     */
    virtio_blk_process_queued_requests(vblk, false);
    vblk->dataplane_disabled = true;
    s->starting = false;
    vblk->dataplane_started = true;
    return -ENOSYS;
}

/* Stop notifications for new requests from guest.
 *
 * Context: BH in IOThread
 */
static void virtio_blk_data_plane_stop_bh(void *opaque)
{
    VirtIOBlockDataPlane *s = opaque;
    unsigned i;

    for (i = 0; i < s->conf->num_queues; i++) {
        VirtQueue *vq = virtio_get_queue(s->vdev, i);

        virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, NULL);
    }
}

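/*
 * Tear-down mirrors virtio_blk_data_plane_start(): the host notifier
 * handlers are detached from within the IOThread via aio_wait_bh_oneshot()
 * before the BlockBackend is moved back to the main loop's AioContext and
 * the notifiers are disabled.
 */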
/* Context: QEMU global mutex held */
void virtio_blk_data_plane_stop(VirtIODevice *vdev)
{
    VirtIOBlock *vblk = VIRTIO_BLK(vdev);
    VirtIOBlockDataPlane *s = vblk->dataplane;
    BusState *qbus = qdev_get_parent_bus(DEVICE(vblk));
    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
    unsigned i;
    unsigned nvqs = s->conf->num_queues;

    if (!vblk->dataplane_started || s->stopping) {
        return;
    }

    /* Better luck next time. */
    if (vblk->dataplane_disabled) {
        vblk->dataplane_disabled = false;
        vblk->dataplane_started = false;
        return;
    }
    s->stopping = true;
    trace_virtio_blk_data_plane_stop(s);

    aio_context_acquire(s->ctx);
    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);

    /* Drain and try to switch bs back to the QEMU main loop. If other users
     * keep the BlockBackend in the iothread, that's ok */
    blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL);

    aio_context_release(s->ctx);

    /*
     * Batch all the host notifiers in a single transaction to avoid
     * quadratic time complexity in address_space_update_ioeventfds().
     */
    memory_region_transaction_begin();

    for (i = 0; i < nvqs; i++) {
        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
    }

    /*
     * The transaction expects the ioeventfds to be open when it
     * commits. Do it now, before the cleanup loop.
     */
    memory_region_transaction_commit();

    for (i = 0; i < nvqs; i++) {
        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
    }
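
    /*
     * A deferred guest-notification BH may still be pending. After a system
     * reset the vring memory region caches are gone before the BH would run,
     * so cancel it and issue any final notification synchronously instead.
     */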

    qemu_bh_cancel(s->bh);
    notify_guest_bh(s); /* final chance to notify guest */

    /* Clean up guest notifier (irq) */
    k->set_guest_notifiers(qbus->parent, nvqs, false);

    vblk->dataplane_started = false;
    s->stopping = false;
}