This is the first step towards having fine-grained critical sections in
dataplane threads, which will resolve lock ordering problems between
address_space_* functions (which need the BQL when doing MMIO, even
after we complete RCU-based dispatch) and the AioContext.
Because AioContext does not use contention callbacks anymore, the
unit test has to be changed.
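As a rough sketch of the direction (the loop shape and the "stopping"
field are assumptions for illustration, not code from the patch), the
dataplane thread's event loop ends up blocking in aio_poll() without
holding the AioContext lock, taking it only for the fine-grained
critical sections that need it:

    /* rough sketch only; field names are assumed */
    while (!iothread->stopping) {
        aio_poll(iothread->ctx, true);  /* poll without the AioContext lock */
    }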
Previously applied as a0710f7995 and
then reverted.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477565348-5458-19-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Soon bdrv_drain will not call aio_poll itself on iothreads. If block
devices are left hanging off the iothread's AioContext, there will be no
one to do I/O for those poor devices.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-Id: <1477565348-5458-13-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
This will be used by BDRV_POLL_WHILE (and thus by bdrv_drain)
to choose how to wait for I/O completion.
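For context, a hedged usage sketch of the macro (the completion flag
below is made up, it would be set by an I/O completion callback):

    /* illustrative only: "done" is a hypothetical completion flag */
    BDRV_POLL_WHILE(bs, !done);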
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477565348-5458-12-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
If iothread->ctx is NULL, aio_context_unref triggers the assertion:
g_source_unref: assertion 'source != NULL' failed.
This patch fixes it.
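A minimal sketch of the kind of guard that avoids the assertion (the
surrounding code is assumed, not quoted from the patch):

    /* sketch: only drop the reference if the AioContext was actually created */
    if (iothread->ctx) {
        aio_context_unref(iothread->ctx);
    }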
Signed-off-by: Lin Ma <lma@suse.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20160926052958.10716-1-lma@suse.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Right after main_loop ends, we release various resources but keep
iothreads alive. The latter are not prepared for this sudden change of
resources. Specifically, after bdrv_close_all(), virtio-scsi dataplane
gets a surprise at the empty BlockBackend:
(gdb) bt
    at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
    at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577
This happens because d->conf.blk->root has been set to NULL, so
blk_get_aio_context() returns qemu_aio_context, whereas s->ctx still
points to the iothread:
hw/scsi/virtio-scsi.c:543:

    if (s->dataplane_started) {
        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
    }
To fix this, let's stop iothreads before doing bdrv_close_all().
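The resulting shutdown order in main() is roughly the following; the
iothread_stop_all() helper name is an assumption used here for
illustration:

    /* sketch of the intended ordering (helper name assumed) */
    iothread_stop_all();   /* quiesce the iothreads first...       */
    bdrv_close_all();      /* ...then tear down the block layer    */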
Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1473326931-9699-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-16-git-send-email-peter.maydell@linaro.org
This makes it easier to find the desired thread. Use "IO" plus the id;
even with the 14-character limit on the thread name, enough of the id
should be readable (e.g. "IO iothreadNNN" leaves three characters for
the number).
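A rough sketch of how such a name can be built and attached
(simplified; "iothread_id" stands in for the object's id string):

    /* sketch: "iothread_id" is a placeholder for the object's id */
    char *name = g_strdup_printf("IO %s", iothread_id);
    qemu_thread_create(&iothread->thread, name, iothread_run,
                       iothread, QEMU_THREAD_JOINABLE);
    g_free(name);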
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1448372804-5034-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add object_get_objects_root() function which is a convenience for
obtaining the Object * located at /objects in the object
composition tree. Convert existing code over to use the new
API where appropriate.
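As a hedged sketch, the convenience stands in for an explicit lookup of
the /objects container:

    /* sketch: both lookups should yield the /objects container object */
    Object *container = object_get_objects_root();
    Object *same      = object_resolve_path("/objects", NULL);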
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This reverts commit a0710f7995.
In qemu-devel email message <556DBF87.2020908@de.ibm.com>, Christian
Borntraeger writes:
Having many guests all with a kernel/ramdisk (via -kernel) and
several null block devices will result in hangs. All hanging
guests are in partition detection code waiting for an I/O to return,
so very early, maybe even the first I/O.
Reverting that commit "fixes" the hangs.
Reverting this commit for the 2.4 release. More time is needed to
investigate and correct this patch.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The functions tpm_backend_thread_tpm_reset() and iothread_find()
are completely unused, so let's remove them.
Signed-off-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This is the first step towards having fine-grained critical sections in
dataplane threads, which resolves lock ordering problems between
address_space_* functions (which need the BQL when doing MMIO, even
after we complete RCU-based dispatch) and the AioContext.
Because AioContext does not use contention callbacks anymore, the
unit test has to be changed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1424449612-18215-4-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On a system with a low limit of open files, the initialization
of the event notifier can fail and QEMU exits without printing any
error information to the user.
The problem can be easily reproduced by enforcing a low limit of open
files and starting QEMU with enough I/O threads to hit this limit.
The same problem arises, even without creating I/O threads, while QEMU
initializes the main event loop, if an even lower limit of open files
is enforced.
This commit adds an error message on failure:
# qemu [...] -object iothread,id=iothread0 -object iothread,id=iothread1
qemu: Failed to initialize event notifier: Too many open files in system
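A rough sketch of the failure path behind this message (the call site
and variable names are assumptions, not copied from the patch):

    /* sketch only: propagate the event notifier failure instead of
     * exiting silently (names assumed)
     */
    Error *local_err = NULL;
    iothread->ctx = aio_context_new(&local_err);
    if (!iothread->ctx) {
        error_propagate(errp, local_err);
        return;
    }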
Signed-off-by: Chrysostomos Nanakos <cnanakos@grnet.gr>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Currently, whenever aio_poll(ctx, true) has completed all pending
work it returns true *and* the next call to aio_poll(ctx, true)
will not block.
This invariant has its roots in qemu_aio_flush()'s implementation
as "while (qemu_aio_wait()) {}". However, qemu_aio_flush() does
not exist anymore and bdrv_drain_all() is implemented differently;
moreover, the invariant is complicated to maintain and subtly different
from the return value of GMainLoop's g_main_context_iteration.
All calls to aio_poll(ctx, true) except one are guarded by a
while() loop checking for a request to be incomplete, or a
BlockDriverState to be idle. The one remaining call (in
iothread.c) uses this to delay the aio_context_release/acquire
pair until the AioContext is quiescent, however:
- we can do the same just by using non-blocking aio_poll,
similar to how vl.c invokes main_loop_wait
- it is buggy, because it does not ensure that the AioContext
is released between an aio_notify and the next time the
iothread goes to sleep. This leads to hangs when stopping
the dataplane thread.
In the end, these semantics are a bad match for the current
users of AioContext. So modify that one exception in iothread.c,
which also fixes the hangs, and change the testcase so that
it uses the same idiom as the actual QEMU code.
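The guarded-blocking idiom mentioned above looks like this (the
condition name is illustrative):

    /* block in aio_poll() only while a request is still outstanding */
    while (!request_done) {
        aio_poll(ctx, true);
    }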
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make the IOThread struct definition public so objects can be embedded in
parent structs.
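For illustration, a hypothetical parent struct embedding an IOThread by
value, which is only possible once the definition is public:

    /* hypothetical parent type; requires the now-public IOThread definition */
    typedef struct MyDataplaneState {
        Object parent_obj;
        IOThread iothread;   /* embedded, not a pointer */
    } MyDataplaneState;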
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The "query-iothreads" command returns a list of information about
iothreads. See the patch for API documentation.
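An example exchange, with illustrative values:

    -> { "execute": "query-iothreads" }
    <- { "return": [ { "id": "iothread0", "thread-id": 3134 } ] }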
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Keep the thread ID around so we can report it via QMP.
There's only one problem: qemu_get_thread_id() (a gettid() wrapper on
Linux) must be called from the thread itself; there is no way to get
the thread ID from outside the thread.
This patch uses a condvar to wait for iothread_run() to populate the
thread_id inside the thread.
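A minimal sketch of that handshake on the creator's side (field names
are assumed for illustration):

    /* sketch: wait until iothread_run() has stored its thread id (names assumed) */
    qemu_mutex_lock(&iothread->init_done_lock);
    while (iothread->thread_id == -1) {
        qemu_cond_wait(&iothread->init_done_cond, &iothread->init_done_lock);
    }
    qemu_mutex_unlock(&iothread->init_done_lock);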
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a stand-in for Michael Roth's QContext. I expect this to be
replaced once QContext is completed.
The IOThread object is an AioContext event loop thread. This patch adds
the concept of multiple event loop threads, allowing users to define
them.
When SMP guests run on SMP hosts it makes sense to instantiate multiple
IOThreads. This spreads event loop processing across multiple cores.
Note that additional patches are required to actually bind a device to
an IOThread.
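For example, two event loop threads can be defined on the command line:

    # qemu [...] -object iothread,id=iothread0 -object iothread,id=iothread1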
[Andreas Färber <afaerber@suse.de> pointed out that the embedded parent
object instance should be called "parent_obj" and have a newline
afterwards. This patch has been changed to reflect this.
-- Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>