Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190709152053.16670-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h
in comments replaced]
It was introduced in commit ab129972c8,
with the following motivation:
Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
qemu_cpu_list_lock: together with a call to exclusive_idle (via
cpu_exec_start/end) in cpu_list_add, this protects exclusive work
against concurrent CPU addition and removal.
However, it seems to be redundant, because the cpu-exclusive
infrastructure provides sufficient protection against the newly added
CPU starting execution while the cpu-exclusive work is running, and the
aforementioned traversal of the CPU list is protected by
qemu_cpu_list_lock.
Besides, this appears to be the only place where the cpu-exclusive
section is entered with the BQL taken, which has been found to trigger
AB-BA deadlock as follows:
     vCPU thread                             main thread
     -----------                             -----------
async_safe_run_on_cpu(self,
                      async_synic_update)
...                                         [cpu hot-add]
process_queued_cpu_work()
  qemu_mutex_unlock_iothread()
                                            [grab BQL]
  start_exclusive()                         cpu_list_add()
    async_synic_update()                      finish_safe_work()
      qemu_mutex_lock_iothread()                cpu_exec_start()
So remove it. This paves the way to establishing a strict nesting rule
of never entering the exclusive section with the BQL taken.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>
Iterating over the list without using atomics is undefined behaviour,
since the list can be modified concurrently by other threads (e.g.
every time a new thread is created in user-mode).
Fix it by implementing the CPU list as an RCU QTAILQ. This requires
a little bit of extra work to traverse the list in reverse order (see
previous patch), but other than that the conversion is trivial.
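For illustration, the read side after such a conversion looks roughly like
the sketch below. This is a hedged example using QEMU's RCU helpers; the
function count_cpus_example is hypothetical, and the exact header locations
and CPU_FOREACH expansion vary between trees.

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"       /* rcu_read_lock(), rcu_read_unlock() */
    #include "hw/core/cpu.h"    /* CPUState, CPU_FOREACH (location varies by tree) */

    /* Readers walk the RCU QTAILQ inside an RCU read-side critical
     * section instead of taking the CPU list mutex. */
    static void count_cpus_example(int *n)
    {
        CPUState *cpu;

        *n = 0;
        rcu_read_lock();
        CPU_FOREACH(cpu) {      /* RCU-safe traversal after the conversion */
            (*n)++;
        }
        rcu_read_unlock();
    }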
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-12-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This changes the *_run_on_cpu APIs (and helpers) to pass data in a
run_on_cpu_data type instead of a plain void *. This is because we
sometimes want to pass a target address (target_ulong), and this fails on
32-bit hosts emulating 64-bit guests.
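Roughly, the new type is a small union plus wrapper macros, so callers pass
e.g. RUN_ON_CPU_TARGET_PTR(addr) instead of casting through void *. The
sketch below only approximates its shape; the authoritative definition lives
in QEMU's CPU header, and the helper do_flush_work is hypothetical.

    /* Approximate shape of run_on_cpu_data; not the verbatim definition. */
    typedef union {
        int            host_int;
        unsigned long  host_ulong;
        void          *host_ptr;
        uint64_t       target_ptr;  /* wide enough for 64-bit guest addresses */
    } run_on_cpu_data;

    #define RUN_ON_CPU_HOST_PTR(p)    ((run_on_cpu_data){.host_ptr = (p)})
    #define RUN_ON_CPU_TARGET_PTR(p)  ((run_on_cpu_data){.target_ptr = (p)})

    typedef void (*run_on_cpu_func)(CPUState *cpu, run_on_cpu_data data);

    /* A caller passes a guest address without truncating it on 32-bit hosts: */
    static void schedule_flush(CPUState *cpu, uint64_t guest_addr)
    {
        async_run_on_cpu(cpu, do_flush_work, RUN_ON_CPU_TARGET_PTR(guest_addr));
    }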
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Set cpu->running without taking the cpu_list lock, only requiring it if
there is a concurrent exclusive section. This requires adding a new
field to CPUState, which records whether a running CPU is being counted
in pending_cpus.
When an exclusive section is started concurrently with cpu_exec_start,
cpu_exec_start can use the new field to determine if it has to wait for
the end of the exclusive section. Likewise, cpu_exec_end can use it to
see if start_exclusive is waiting for that CPU.
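In outline, the fast path looks like the sketch below. It is simplified and
paraphrased from cpus-common.c, not the verbatim code; the atomic helpers
were spelled atomic_set/atomic_read at the time of this commit.

    /* Simplified sketch of the lock-free fast path; not the verbatim code. */
    void cpu_exec_start(CPUState *cpu)
    {
        qatomic_set(&cpu->running, true);
        smp_mb();   /* publish cpu->running before reading pending_cpus */

        if (qatomic_read(&pending_cpus)) {
            /* Slow path: an exclusive section is starting or in progress. */
            qemu_mutex_lock(&qemu_cpu_list_lock);
            if (!cpu->has_waiter) {
                /* start_exclusive did not count this CPU in pending_cpus,
                 * so step back and wait for the section to finish. */
                qatomic_set(&cpu->running, false);
                exclusive_idle();
                qatomic_set(&cpu->running, true);
            }
            /* Otherwise we are counted; cpu_exec_end will wake the waiter. */
            qemu_mutex_unlock(&qemu_cpu_list_lock);
        }
    }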
This is a separate patch for easier bisection of issues.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is not necessary to hold qemu_cpu_list_mutex throughout the
exclusive section, because no other exclusive section can run
while pending_cpus != 0.
exclusive_idle() is called in cpu_exec_start(), and that prevents
any CPUs created after start_exclusive() from entering cpu_exec()
during an exclusive section.
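The resulting locking shape is roughly the following. This is a trimmed
sketch, not the verbatim code: the real start_exclusive also counts the
running CPUs it has to wait for before returning.

    /* Trimmed sketch of exclusive-section entry/exit; not verbatim. */
    void start_exclusive(void)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        exclusive_idle();                /* wait out any previous section */
        qatomic_set(&pending_cpus, 1);   /* keeps later sections out */
        /* ... kick running CPUs and wait for them to stop ... */
        qemu_mutex_unlock(&qemu_cpu_list_lock);
        /* The exclusive work itself runs with the mutex released. */
    }

    void end_exclusive(void)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        qatomic_set(&pending_cpus, 0);
        qemu_cond_broadcast(&exclusive_resume);  /* wake waiters in exclusive_idle */
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }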
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
No need to call exclusive_idle() from cpu_exec_end, since it is done
immediately afterwards in cpu_exec_start. Any exclusive section could
run as soon as cpu_exec_end returns, because cpu->running is false and the
mutex is not taken, so the call does not add any protection either.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
async_run_on_cpu is only called from the I/O thread, not from CPU threads,
so it doesn't make any difference. It will, however, make a difference
for async_safe_run_on_cpu.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will serve as the base for async_safe_run_on_cpu. Because
start_exclusive uses CPU_FOREACH, merge exclusive_lock with
qemu_cpu_list_lock: together with a call to exclusive_idle (via
cpu_exec_start/end) in cpu_list_add, this protects exclusive work
against concurrent CPU addition and removal.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make CPU work core functions common between system and user-mode
emulation. User-mode does not use run_on_cpu, so do not implement it.
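The shared core boils down to queueing a work item on the target CPU under
its per-CPU work mutex and then kicking it, roughly as sketched below. Field
and list names are illustrative (the concrete list type has changed over
time); the real code lives in cpus-common.c.

    /* Illustrative sketch of the common work-queue path; not verbatim. */
    static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
    {
        qemu_mutex_lock(&cpu->work_mutex);
        wi->done = false;
        /* Append to the CPU's pending-work list. */
        QSIMPLEQ_INSERT_TAIL(&cpu->work_list, wi, node);
        qemu_mutex_unlock(&cpu->work_mutex);

        qemu_cpu_kick(cpu);   /* make the target CPU notice the new work */
    }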
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a mutex for the CPU list to system emulation, as it will be used to
manage safe work. Abstract manipulation of the CPU list into new functions
cpu_list_add and cpu_list_remove.
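A minimal sketch of the two helpers' shape follows; the real cpu_list_add
also assigns cpu->cpu_index, and cpu_list_remove tolerates CPUs that were
never added, so this is an approximation rather than the actual code.

    /* Minimal sketch of the new helpers; not the verbatim code. */
    void cpu_list_add(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        QTAILQ_INSERT_TAIL(&cpus, cpu, node);
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }

    void cpu_list_remove(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        QTAILQ_REMOVE(&cpus, cpu, node);
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }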
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>