aio: document locking

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170112180800.21085-10-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Paolo Bonzini 2017-01-12 19:07:59 +01:00 committed by Stefan Hajnoczi
parent b92d9a91ab
commit 7c690fd193
2 changed files with 21 additions and 24 deletions

docs/multiple-iothreads.txt

@@ -84,9 +84,8 @@ How to synchronize with an IOThread
 AioContext is not thread-safe so some rules must be followed when using file
 descriptors, event notifiers, timers, or BHs across threads:
 
-1. AioContext functions can be called safely from file descriptor, event
-   notifier, timer, or BH callbacks invoked by the AioContext.  No locking is
-   necessary.
+1. AioContext functions can always be called safely.  They handle their
+   own locking internally.
 
 2. Other threads wishing to access the AioContext must use
    aio_context_acquire()/aio_context_release() for mutual exclusion.  Once the
@@ -94,16 +93,14 @@ context is acquired no other thread can access it or run event loop iterations
    in this AioContext.
 
 aio_context_acquire()/aio_context_release() calls may be nested.  This
-means you can call them if you're not sure whether #1 applies.
+means you can call them if you're not sure whether #2 applies.
 
 There is currently no lock ordering rule if a thread needs to acquire multiple
 AioContexts simultaneously.  Therefore, it is only safe for code holding the
 QEMU global mutex to acquire other AioContexts.
 
-Side note: the best way to schedule a function call across threads is to create
-a BH in the target AioContext beforehand and then call qemu_bh_schedule().  No
-acquire/release or locking is needed for the qemu_bh_schedule() call.  But be
-sure to acquire the AioContext for aio_bh_new() if necessary.
+Side note: the best way to schedule a function call across threads is to call
+aio_bh_schedule_oneshot().  No acquire/release or locking is needed.
 
 AioContext and the block layer
 ------------------------------
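
As an illustration of the side note above, here is a minimal sketch (not part of the patch) of scheduling work in an IOThread's AioContext from another thread with aio_bh_schedule_oneshot().  It assumes QEMU's in-tree headers; MyDevice, my_device_reset_bh() and my_device_request_reset() are hypothetical names used only for illustration.

/* Sketch only -- not part of this patch. */
#include "qemu/osdep.h"
#include "block/aio.h"

typedef struct MyDevice {
    AioContext *ctx;        /* home AioContext of the device's IOThread */
    int pending_requests;
} MyDevice;

/* Runs in the thread that owns dev->ctx, so it can touch the
 * device state without further locking.
 */
static void my_device_reset_bh(void *opaque)
{
    MyDevice *dev = opaque;
    dev->pending_requests = 0;
}

/* Called from another thread (e.g. the main loop).  Per the rules
 * above, no aio_context_acquire()/aio_context_release() is needed
 * around the aio_bh_schedule_oneshot() call itself.
 */
void my_device_request_reset(MyDevice *dev)
{
    aio_bh_schedule_oneshot(dev->ctx, my_device_reset_bh, dev);
}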

include/block/aio.h

@@ -53,18 +53,12 @@ struct LinuxAioState;
 struct AioContext {
     GSource source;
 
-    /* Protects all fields from multi-threaded access */
+    /* Used by AioContext users to protect from multi-threaded access.  */
     QemuRecMutex lock;
 
-    /* The list of registered AIO handlers */
+    /* The list of registered AIO handlers.  Protected by ctx->list_lock. */
     QLIST_HEAD(, AioHandler) aio_handlers;
 
-    /* This is a simple lock used to protect the aio_handlers list.
-     * Specifically, it's used to ensure that no callbacks are removed while
-     * we're walking and dispatching callbacks.
-     */
-    int walking_handlers;
-
     /* Used to avoid unnecessary event_notifier_set calls in aio_notify;
      * accessed with atomic primitives.  If this field is 0, everything
      * (file descriptors, bottom halves, timers) will be re-evaluated
@@ -90,9 +84,9 @@ struct AioContext {
      */
     uint32_t notify_me;
 
-    /* A lock to protect between bh's adders and deleter, and to ensure
-     * that no callbacks are removed while we're walking and dispatching
-     * them.
+    /* A lock to protect between QEMUBH and AioHandler adders and deleter,
+     * and to ensure that no callbacks are removed while we're walking and
+     * dispatching them.
      */
     QemuLockCnt list_lock;
 
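As an aside (not part of the patch), the pattern that list_lock enables for walking aio_handlers looks roughly like the sketch below.  It is loosely modelled on the dispatch loop rather than copied from it, and it assumes it lives in aio-posix.c, where AioHandler and its "node" list entry are defined; the function name is made up.

/* Sketch only: a reader takes a reference on list_lock instead of the
 * lock itself, so other threads may add handlers concurrently, while
 * removers must defer freeing until the count drops to zero.
 */
static void walk_handlers_example(AioContext *ctx)
{
    AioHandler *node;

    qemu_lockcnt_inc(&ctx->list_lock);

    QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
        /* dispatch or inspect node here */
    }

    qemu_lockcnt_dec(&ctx->list_lock);
}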
@@ -114,7 +108,9 @@ struct AioContext {
     bool notified;
     EventNotifier notifier;
 
-    /* Thread pool for performing work and receiving completion callbacks */
+    /* Thread pool for performing work and receiving completion callbacks.
+     * Has its own locking.
+     */
     struct ThreadPool *thread_pool;
 
 #ifdef CONFIG_LINUX_AIO
@@ -124,7 +120,9 @@ struct AioContext {
     struct LinuxAioState *linux_aio;
 #endif
 
-    /* TimerLists for calling timers - one per clock type */
+    /* TimerLists for calling timers - one per clock type.  Has its own
+     * locking.
+     */
     QEMUTimerListGroup tlg;
 
     int external_disable_cnt;
@@ -178,9 +176,11 @@ void aio_context_unref(AioContext *ctx);
  * automatically takes care of calling aio_context_acquire and
  * aio_context_release.
  *
- * Access to timers and BHs from a thread that has not acquired AioContext
- * is possible.  Access to callbacks for now must be done while the AioContext
- * is owned by the thread (FIXME).
+ * Note that this is separate from bdrv_drained_begin/bdrv_drained_end.  A
+ * thread still has to call those to avoid being interrupted by the guest.
+ *
+ * Bottom halves, timers and callbacks can be created or removed without
+ * acquiring the AioContext.
  */
 void aio_context_acquire(AioContext *ctx);
 
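
To round out the acquire/release documentation, here is a hedged sketch (not part of the patch) of rule 2 from docs/multiple-iothreads.txt: a thread that does not own the AioContext takes the lock before touching state bound to that context.  It assumes the block-backend API (blk_get_aio_context(), blk_flush()); the helper name is hypothetical.

/* Sketch only -- not part of this patch. */
#include "qemu/osdep.h"
#include "block/aio.h"
#include "sysemu/block-backend.h"

void example_flush_under_ctx_lock(BlockBackend *blk)
{
    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    /* Safe to operate on this backend here.  Because the calls may be
     * nested, helpers that also acquire ctx internally keep working.
     */
    blk_flush(blk);
    aio_context_release(ctx);
}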