cpus-common: nuke finish_safe_work
It was introduced in commit ab129972c8, with the following motivation:
Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
qemu_cpu_list_lock: together with a call to exclusive_idle (via
cpu_exec_start/end) in cpu_list_add, this protects exclusive work
against concurrent CPU addition and removal.
However, it seems to be redundant, because the cpu-exclusive
infrastructure provides sufficient protection against the newly added
CPU starting execution while the cpu-exclusive work is running, and the
aforementioned traversing of the cpu list is protected by
qemu_cpu_list_lock.
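To illustrate why that protection is sufficient, here is a minimal, self-contained model of the cpu-exclusive handshake (a sketch in plain pthreads; the names pending_cpus and exclusive_resume mirror cpus-common.c, but the bodies are simplified and are not the actual implementation):

    #include <pthread.h>

    /* Simplified model of the cpu-exclusive handshake; illustrative only. */
    static pthread_mutex_t cpu_list_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t exclusive_resume = PTHREAD_COND_INITIALIZER;
    static int pending_cpus;    /* nonzero while exclusive work is pending */

    /* A (newly added) vCPU calls this before executing guest code. */
    static void cpu_exec_start_model(void)
    {
        pthread_mutex_lock(&cpu_list_lock);
        while (pending_cpus) {
            /* Exclusive work is in flight: park here until it finishes,
             * so this CPU cannot start executing concurrently with it. */
            pthread_cond_wait(&exclusive_resume, &cpu_list_lock);
        }
        pthread_mutex_unlock(&cpu_list_lock);
    }

    /* The thread running exclusive work brackets it like this. */
    static void run_exclusive_work_model(void (*work)(void))
    {
        pthread_mutex_lock(&cpu_list_lock);
        pending_cpus = 1;   /* block newcomers in cpu_exec_start_model() */
        /* ... waiting for already-running CPUs to exit is elided ... */
        pthread_mutex_unlock(&cpu_list_lock);

        work();             /* runs with all other CPUs stopped */

        pthread_mutex_lock(&cpu_list_lock);
        pending_cpus = 0;
        pthread_cond_broadcast(&exclusive_resume);
        pthread_mutex_unlock(&cpu_list_lock);
    }

In this model a CPU added while exclusive work is pending simply waits in cpu_exec_start_model(), so the extra finish_safe_work() round-trip buys nothing.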
Besides, this appears to be the only place where the cpu-exclusive
section is entered with the BQL taken, which has been found to trigger
an AB-BA deadlock as follows:
        vCPU thread                             main thread
        -----------                             -----------
    async_safe_run_on_cpu(self,
                          async_synic_update)
    ...                                         [cpu hot-add]
    process_queued_cpu_work()
      qemu_mutex_unlock_iothread()
                                                [grab BQL]
      start_exclusive()                         cpu_list_add()
      async_synic_update()                        finish_safe_work()
        qemu_mutex_lock_iothread()                  cpu_exec_start()
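In its essential form (a hypothetical two-thread reduction, not QEMU code), the inversion looks like this, with the BQL as lock A and the exclusive section modeled as lock B:

    #include <pthread.h>

    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;       /* lock A */
    static pthread_mutex_t exclusive = PTHREAD_MUTEX_INITIALIZER; /* lock B */

    static void *vcpu_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&exclusive); /* start_exclusive() */
        pthread_mutex_lock(&bql);       /* async_synic_update() takes the BQL */
        pthread_mutex_unlock(&bql);
        pthread_mutex_unlock(&exclusive);
        return NULL;
    }

    static void *main_thread_fn(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&bql);       /* [grab BQL] for cpu hot-add */
        pthread_mutex_lock(&exclusive); /* finish_safe_work() -> cpu_exec_start() */
        pthread_mutex_unlock(&exclusive);
        pthread_mutex_unlock(&bql);
        return NULL;
    }

Each thread ends up holding one lock while blocking on the other, so neither can make progress; removing finish_safe_work() means the hot-add path never takes lock B under lock A.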
So remove it. This paves the way to establishing a strict nesting rule
of never entering the exclusive section with the BQL taken.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>
commit e533f45d7d
parent 9e9b10c649
@@ -69,12 +69,6 @@ static int cpu_get_free_index(void)
     return cpu_index;
 }
 
-static void finish_safe_work(CPUState *cpu)
-{
-    cpu_exec_start(cpu);
-    cpu_exec_end(cpu);
-}
-
 void cpu_list_add(CPUState *cpu)
 {
     qemu_mutex_lock(&qemu_cpu_list_lock);
@@ -86,8 +80,6 @@ void cpu_list_add(CPUState *cpu)
     }
     QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
     qemu_mutex_unlock(&qemu_cpu_list_lock);
-
-    finish_safe_work(cpu);
 }
 
 void cpu_list_remove(CPUState *cpu)
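The net effect, sketched below from the hunks above (the function body between the two hunks is elided here just as it is in the diff), is that cpu_list_add() becomes a pure list operation under qemu_cpu_list_lock and never enters the exclusive section:

    void cpu_list_add(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        /* ... cpu_index assignment, elided between the hunks above ... */
        QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
        qemu_mutex_unlock(&qemu_cpu_list_lock);
        /* finish_safe_work(cpu) is gone: no cpu_exec_start() under the BQL */
    }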