QEMU will crash with the following backtrace if the newly created thread
exits before we call qemu_thread_set_name() for it:
(gdb) bt
#0 0x00007f9a68b095d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007f9a68b0acc8 in __GI_abort () at abort.c:90
#2 0x00007f9a69cda389 in PAT_abort () from /usr/lib64/libuvpuserhotfix.so
#3 0x00007f9a69cdda0d in patchIllInsHandler () from /usr/lib64/libuvpuserhotfix.so
#4 <signal handler called>
#5 pthread_setname_np (th=140298470549248, name=name@entry=0x8cc74a "io-task-worker") at ../nptl/sysdeps/unix/sysv/linux/pthread_setname.c:49
#6 0x00000000007f5f20 in qemu_thread_set_name (thread=thread@entry=0x7ffd2ac09680, name=name@entry=0x8cc74a "io-task-worker") at util/qemu_thread_posix.c:459
#7 0x00000000007f679e in qemu_thread_create (thread=thread@entry=0x7ffd2ac09680, name=name@entry=0x8cc74a "io-task-worker",start_routine=start_routine@entry=0x7c1300 <qio_task_thread_worker>, arg=arg@entry=0x7f99b8001720, mode=mode@entry=1) at util/qemu_thread_posix.c:498
#8 0x00000000007c15b6 in qio_task_run_in_thread (task=task@entry=0x7f99b80033d0, worker=worker@entry=0x7bd920 <qio_channel_socket_connect_worker>, opaque=0x7f99b8003370, destroy=0x7c6220 <qapi_free_SocketAddress>) at io/task.c:133
#9 0x00000000007bda04 in qio_channel_socket_connect_async (ioc=0x7f99b80014c0, addr=0x37235d0, callback=callback@entry=0x54ad00 <qemu_chr_socket_connected>, opaque=opaque@entry=0x38118b0, destroy=destroy@entry=0x0) at io/channel_socket.c:191
#10 0x00000000005487f6 in socket_reconnect_timeout (opaque=0x38118b0) at qemu_char.c:4402
#11 0x00007f9a6a1533b3 in g_timeout_dispatch () from /usr/lib64/libglib-2.0.so.0
#12 0x00007f9a6a15299a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#13 0x0000000000747386 in glib_pollfds_poll () at main_loop.c:227
#14 0x0000000000747424 in os_host_main_loop_wait (timeout=404000000) at main_loop.c:272
#15 0x0000000000747575 in main_loop_wait (nonblocking=nonblocking@entry=0) at main_loop.c:520
#16 0x0000000000557d31 in main_loop () at vl.c:2170
#17 0x000000000041c8b7 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:5083
Let's detach the new thread after calling qemu_thread_set_name().
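A minimal sketch of the resulting ordering (illustrative only; the error
handling is simplified and the joinable/detached mode argument of
qemu_thread_create() is omitted):

    /* Create the thread joinable so its handle stays valid, name it,
     * and only then detach it.  Illustrative, not the exact QEMU code. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void create_named_thread(pthread_t *thread, const char *name,
                                    void *(*start_routine)(void *), void *arg)
    {
        int err = pthread_create(thread, NULL, start_routine, arg);
        if (err) {
            fprintf(stderr, "pthread_create failed: %d\n", err);
            exit(1);
        }
    #if defined(__GLIBC__)
        /* Even if the thread has already exited, the pthread_t is still
         * valid here because the thread is neither detached nor joined. */
        pthread_setname_np(*thread, name);
    #endif
        /* Detach only after the name has been set. */
        pthread_detach(*thread);
    }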
Signed-off-by: Caoxinhua <caoxinhua@huawei.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Message-Id: <1483493521-9604-1-git-send-email-zhang.zhanghailiang@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is complex, but I think it is reasonably documented in the source.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170112180800.21085-5-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
GRecMutex is new in glib 2.32, so we cannot use it. Introduce
a recursive mutex in qemu-thread instead; it will be used in
place of RFifoLock.
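As a rough illustration (not necessarily the exact qemu-thread code), such a
recursive mutex can be layered directly on pthreads by creating the underlying
mutex with the PTHREAD_MUTEX_RECURSIVE type:

    /* Sketch of a recursive mutex on top of pthreads; the names mirror
     * the description above but the details are illustrative. */
    #include <pthread.h>

    typedef struct QemuRecMutex {
        pthread_mutex_t lock;
    } QemuRecMutex;

    void qemu_rec_mutex_init(QemuRecMutex *mutex)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* The owning thread may lock the mutex again without deadlocking;
         * it must unlock it the same number of times. */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&mutex->lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    void qemu_rec_mutex_lock(QemuRecMutex *mutex)
    {
        pthread_mutex_lock(&mutex->lock);
    }

    void qemu_rec_mutex_unlock(QemuRecMutex *mutex)
    {
        pthread_mutex_unlock(&mutex->lock);
    }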
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477565348-5458-20-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Do not use the somewhat mysterious atomic_mb_read/atomic_mb_set;
instead, make sure that the operations on QemuEvent are annotated
with the desired acquire and release semantics.
In particular, qemu_event_set wakes up the waiting thread, so it must
be a release from the POV of the waker (compare with qemu_mutex_unlock).
And it actually needs a full barrier, because that's the only thing that
provides something like a "load-release".
Use smp_mb_acquire until we have atomic_load_acquire and
atomic_store_release in atomic.h.
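The intended pairing can be pictured with C11 atomics as in the sketch below;
QEMU's real implementation uses its own atomic.h macros and sleeps in a futex
or condvar rather than spinning, so treat everything here as illustration only:

    /* Conceptual sketch of the acquire/release pairing described above,
     * written with C11 atomics.  The real code blocks in futex_wait()
     * instead of spinning, and uses QEMU's atomic.h macros. */
    #include <stdatomic.h>

    enum { EV_SET = 0, EV_FREE = 1, EV_BUSY = -1 };

    typedef struct EventSketch {
        atomic_int value;
    } EventSketch;

    static void event_set(EventSketch *ev)
    {
        /* Release from the waker's point of view, but because we also have
         * to *read* the old value, only a full barrier (here a seq_cst
         * exchange) provides the required "load-release"-like ordering. */
        if (atomic_exchange(&ev->value, EV_SET) == EV_BUSY) {
            /* There were waiters; the real code calls futex_wake() here. */
        }
    }

    static void event_wait(EventSketch *ev)
    {
        /* Acquire: writes made before the matching event_set() must be
         * visible once we observe EV_SET. */
        int value = atomic_load_explicit(&ev->value, memory_order_acquire);
        if (value == EV_FREE) {
            /* Announce a waiter (FREE -> BUSY); failure means the value
             * already changed under us, which is fine. */
            int expected = EV_FREE;
            atomic_compare_exchange_strong(&ev->value, &expected, EV_BUSY);
        }
        while (atomic_load_explicit(&ev->value, memory_order_acquire) != EV_SET) {
            /* spin; the real code blocks in futex_wait() or a condvar */
        }
    }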
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-6-git-send-email-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.
And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs
# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (44 commits)
cutils: work around platform differences in strto{l,ul,ll,ull}
cpu-exec: fix lock hierarchy for user-mode emulation
exec: make mmap_lock/mmap_unlock globally available
tcg: comment on which functions have to be called with mmap_lock held
tcg: add memory barriers in page_find_alloc accesses
remove unused spinlock.
replace spinlock by QemuMutex.
cpus: remove tcg_halt_cond and tcg_cpu_thread globals
cpus: protect work list with work_mutex
scripts/dump-guest-memory.py: fix after RAMBlock change
configure: Add support for jemalloc
add macro file for coccinelle
configure: factor out adding disas configure
vhost-scsi: fix wrong vhost-scsi firmware path
checkpatch: remove tests that are not relevant outside the kernel
checkpatch: adapt some tests to QEMU
CODING_STYLE: update mixed declaration rules
qmp: Add example usage of strto*l() qemu wrapper
cutils: Add qemu_strtoull() wrapper
cutils: Add qemu_strtoll() wrapper
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
PTHREAD_MUTEX_ERRORCHECK is completely broken with respect to fork.
The way to safely do fork is to bring all threads to a quiescent
state by acquiring locks (either in callers---as we do for the
iothread mutex---or using pthread_atfork's prepare callbacks)
and then releasing them in the child.
The problem is that releasing error-checking locks in the child
fails under glibc with EPERM, because the mutex stores a different
owner tid than the duplicated thread in the child process. We
could make it work for locks acquired via pthread_atfork, by
recreating the mutex in the child instead of unlocking it
(we know that there are no other threads that could have taken
the mutex), but when the lock is acquired in fork's caller
that would not be possible.
The simplest solution is just to forgo error checking.
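A sketch of the pthread_atfork pattern described above; the handler names and
the single hypothetical lock are for illustration only:

    /* Quiesce around fork() by taking locks in the prepare handler and
     * releasing them on both sides.  'big_lock' and the handler names are
     * made up for this sketch. */
    #include <pthread.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static void fork_prepare(void)
    {
        pthread_mutex_lock(&big_lock);
    }

    static void fork_parent(void)
    {
        pthread_mutex_unlock(&big_lock);
    }

    static void fork_child(void)
    {
        /* With the default mutex type the unlock is not owner-checked, so it
         * succeeds in the child; with PTHREAD_MUTEX_ERRORCHECK it fails with
         * EPERM because the stored owner tid does not match. */
        pthread_mutex_unlock(&big_lock);
    }

    static void install_fork_handlers(void)
    {
        pthread_atfork(fork_prepare, fork_parent, fork_child);
    }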
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This had a possible deadlock that was visible with rcutorture.
    qemu_event_set                     qemu_event_wait
    ----------------------------------------------------------------
                                       cmpxchg reads FREE, writes BUSY
                                       futex_wait: pthread_mutex_lock
                                       futex_wait: value == BUSY
    xchg reads BUSY, writes SET
    futex_wake: pthread_cond_broadcast
                                       futex_wait: pthread_cond_wait
    <deadlock>
The fix is simply to avoid condvar tricks and do the obvious locking
around pthread_cond_broadcast:
    qemu_event_set                     qemu_event_wait
    ----------------------------------------------------------------
                                       cmpxchg reads FREE, writes BUSY
                                       futex_wait: pthread_mutex_lock
                                       futex_wait: value == BUSY
    xchg reads BUSY, writes SET
    futex_wake: pthread_mutex_lock
    (blocks)
                                       futex_wait: pthread_cond_wait
                                       (mutex unlocked)
    futex_wake: pthread_cond_broadcast
    futex_wake: pthread_mutex_unlock
                                       futex_wait: pthread_mutex_unlock
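In the mutex+condvar fallback this amounts to something like the sketch below
(field and function names are illustrative, not the exact QEMU code):

    /* Fallback wake/wait with the broadcast done under the mutex, so the
     * wakeup can no longer slip in between the waiter's value check and
     * its pthread_cond_wait().  ev->value is updated by the caller (the
     * xchg shown in the diagram) before the wake function is called. */
    #include <pthread.h>

    typedef struct EventFallback {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        unsigned value;
    } EventFallback;

    static void event_futex_wake_all(EventFallback *ev)
    {
        pthread_mutex_lock(&ev->lock);
        pthread_cond_broadcast(&ev->cond);
        pthread_mutex_unlock(&ev->lock);
    }

    static void event_futex_wait(EventFallback *ev, unsigned expected)
    {
        pthread_mutex_lock(&ev->lock);
        /* Re-check under the lock: if the value already changed, the
         * wakeup has happened and we must not go to sleep. */
        while (ev->value == expected) {
            pthread_cond_wait(&ev->cond, &ev->lock);
        }
        pthread_mutex_unlock(&ev->lock);
    }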
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Destructors are the main additional feature of pthread TLS compared
to __thread. If we were using C++ (hint, hint!) we could have used
thread-local objects with a destructor. Since we are not, we instead
add a simple Notifier-based API.
Note that the notifier must be per-thread as well. We could perhaps
add a global list later, too.
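Usage of the Notifier-based API might then look roughly like this (the
cleanup callback and the per-thread data are made up for the example):

    /* Hypothetical user of the per-thread atexit notifier described above;
     * requires the QEMU headers, and the data being freed is invented. */
    #include "qemu/osdep.h"
    #include "qemu/notify.h"
    #include "qemu/thread.h"

    static __thread Notifier thread_exit_notifier;
    static __thread void *per_thread_buffer;

    static void thread_exit_cleanup(Notifier *n, void *data)
    {
        /* Runs in the exiting thread, like a pthread TLS destructor. */
        g_free(per_thread_buffer);
        per_thread_buffer = NULL;
    }

    static void per_thread_init(void)
    {
        per_thread_buffer = g_malloc0(128);
        thread_exit_notifier.notify = thread_exit_cleanup;
        /* The notifier list itself is per-thread, as noted above. */
        qemu_thread_atexit_add(&thread_exit_notifier);
    }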
The Win32 implementation has some complications because a) detached
threads used not to have a QemuThreadData; b) the main thread does
not go through win32_start_routine, so we have to use atexit too.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1417518350-6167-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Warn if no way of setting thread name is available.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
pthread_setname_np was introduced with glibc 2.12.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If enabled, set the thread name at creation (on GNU systems with
pthread_setname_np).
Fix up all the callers with a thread name.
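Conceptually the hook looks something like the sketch below; the
CONFIG_PTHREAD_SETNAME_NP guard and the name_threads flag mirror the
description, but the details are illustrative:

    /* Only name the thread when the feature was found at configure time
     * and the user turned naming on.  Illustrative sketch. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdbool.h>

    static bool name_threads;

    void qemu_thread_naming(bool enable)
    {
        name_threads = enable;
    }

    static void maybe_set_thread_name(pthread_t thread, const char *name)
    {
    #ifdef CONFIG_PTHREAD_SETNAME_NP
        if (name_threads) {
            /* Best effort: the name then shows up in ps, top and gdb. */
            pthread_setname_np(thread, name);
        }
    #endif
    }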
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Add flag storage to qemu-thread-* to store the namethreads flag
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
This emulates Win32 manual-reset events using futexes or condition
variables. Typical ways to use them are with multi-producer,
single-consumer data structures, to test for a complex condition whose
elements come from different threads:
    for (;;) {
        qemu_event_reset(ev);
        ... test complex condition ...
        if (condition is true) {
            break;
        }
        qemu_event_wait(ev);
    }
Or more efficiently (but with some duplication):
    ... evaluate condition ...
    while (!condition) {
        qemu_event_reset(ev);
        ... evaluate condition ...
        if (!condition) {
            qemu_event_wait(ev);
            ... evaluate condition ...
        }
    }
QemuEvent provides a very fast userspace path in the common case when
no other thread is waiting, or the event is not changing state.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix the following bugs in the "fallback implementation of counting semaphores
with mutex+condvar" added in c166cb72f1:
- waiting threads are not restarted properly if more than one thread
  is waiting for an unblock signal in qemu_sem_timedwait()
- possible missing pthread_cond_signal(3) calls when waiting threads
  return with ETIMEDOUT
- an uninitialized variable is used
The problem was analyzed by, and the fix provided by, Noriyuki Soda.
Also apply the additional cleanups suggested by Laszlo Ersek:
- make QemuSemaphore.count unsigned (it won't be negative)
- check the return value of pthread_cond_wait() in qemu_sem_wait()
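For reference, a mutex+condvar counting semaphore incorporating those points
might look like this sketch (not the actual QEMU code):

    /* Counting-semaphore fallback on mutex+condvar, following the fixes
     * above: the count is unsigned, every post signals, and the return
     * value of pthread_cond_wait() is checked.  Illustrative only. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct SemSketch {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        unsigned int count;        /* never negative, so keep it unsigned */
    } SemSketch;

    static void sem_post_sketch(SemSketch *sem)
    {
        pthread_mutex_lock(&sem->lock);
        sem->count++;
        /* Signal on every post so that, with several blocked waiters,
         * each available count eventually restarts one of them. */
        pthread_cond_signal(&sem->cond);
        pthread_mutex_unlock(&sem->lock);
    }

    static void sem_wait_sketch(SemSketch *sem)
    {
        pthread_mutex_lock(&sem->lock);
        while (sem->count == 0) {
            if (pthread_cond_wait(&sem->cond, &sem->lock) != 0) {
                abort();           /* check the return value, as suggested */
            }
        }
        sem->count--;
        pthread_mutex_unlock(&sem->lock);
    }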
Signed-off-by: Izumi Tsutsui <tsutsui@ceres.dti.ne.jp>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1372841894-10634-1-git-send-email-tsutsui@ceres.dti.ne.jp
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>