d157e540ed
Unless I'm missing something egregious, the jmp cache is only ever
populated with a valid entry by the same thread that reads the cache.
Therefore, the contents of any valid entry are always consistent and
there is no need for any acquire/release magic.
Indeed ->tb has to be accessed with atomics, because concurrent
invalidations would otherwise cause data races. But ->pc is only ever
accessed by one thread, and accesses to ->tb and ->pc within tb_lookup
can never race with another tb_lookup. While the TranslationBlock
(especially the flags) could be modified by a concurrent invalidation,
store-release and load-acquire operations on the cache entry would
not add any additional ordering beyond what you get from performing
the accesses within a single thread.
Because of this, there is really nothing to be gained by splitting
the CF_PCREL and !CF_PCREL paths. It is easier to just always use
the ->pc field in the jump cache.
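For illustration, here is a condensed sketch of the lookup fast path this
reasoning applies to, modeled on tb_lookup() in accel/tcg/cpu-exec.c after
this change (assertions and some details abbreviated):

    static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
                                              uint64_t cs_base, uint32_t flags,
                                              uint32_t cflags)
    {
        CPUJumpCache *jc = cpu->tb_jmp_cache;
        uint32_t hash = tb_jmp_cache_hash_func(pc);
        TranslationBlock *tb;

        /* ->tb is cleared by concurrent invalidation, hence the atomic read. */
        tb = qatomic_read(&jc->array[hash].tb);
        if (likely(tb &&
                   jc->array[hash].pc == pc &&   /* plain read: same thread */
                   tb->cs_base == cs_base &&
                   tb->flags == flags &&
                   tb_cflags(tb) == cflags)) {
            return tb;
        }

        tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
        if (tb == NULL) {
            return NULL;
        }

        /*
         * Plain store to ->pc, then atomic store to ->tb.  No release is
         * needed: only this same thread ever reads the pair back.
         */
        jc->array[hash].pc = pc;
        qatomic_set(&jc->array[hash].tb, tb);
        return tb;
    }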
I noticed this while working on splitting commit 8ed558ec0c
("accel/tcg: Introduce TARGET_TB_PCREL", 2022-10-04) into multiple
pieces, for the sake of finding a more fine-grained bisection
result for https://gitlab.com/qemu-project/qemu/-/issues/2092.
It does not (and does not intend to) fix that issue; therefore
it may make sense to not commit it until the root cause
of issue #2092 is found.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240122153409.351959-1-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
/*
 * The per-CPU TranslationBlock jump cache.
 *
 * Copyright (c) 2003 Fabrice Bellard
 *
 * SPDX-License-Identifier: GPL-2.0-or-later
 */

#ifndef ACCEL_TCG_TB_JMP_CACHE_H
#define ACCEL_TCG_TB_JMP_CACHE_H

#define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)

/*
 * Invalidated in parallel; all accesses to 'tb' must be atomic.
 * A valid entry is read/written by a single CPU, therefore there is
 * no need for qatomic_rcu_read() and pc is always consistent with a
 * non-NULL value of 'tb'. Strictly speaking pc is only needed for
 * CF_PCREL, but it's used always for simplicity.
 */
struct CPUJumpCache {
    struct rcu_head rcu;
    struct {
        TranslationBlock *tb;
        vaddr pc;
    } array[TB_JMP_CACHE_SIZE];
};

#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
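For context, a sketch of the invalidation side that the comment above alludes
to, modeled on tb_jmp_cache_inval_tb() in accel/tcg/tb-maint.c (shapes
approximate). Note that only 'tb' is ever cleared, atomically; 'pc' may go
stale, which is harmless because readers validate 'tb' first:

    static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
    {
        CPUState *cpu;

        if (tb_cflags(tb) & CF_PCREL) {
            /* A CF_PCREL TB may be cached under any virtual address. */
            CPU_FOREACH(cpu) {
                tcg_flush_jmp_cache(cpu);
            }
        } else {
            uint32_t h = tb_jmp_cache_hash_func(tb->pc);

            CPU_FOREACH(cpu) {
                CPUJumpCache *jc = cpu->tb_jmp_cache;

                /* Clear only ->tb; ->pc is left for the owner to overwrite. */
                if (qatomic_read(&jc->array[h].tb) == tb) {
                    qatomic_set(&jc->array[h].tb, NULL);
                }
            }
        }
    }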