Commit Graph

171 Commits

Richard Henderson
a976a99a29 include/hw/core: Create struct CPUJumpCache
Wrap the bare TranslationBlock pointer into a structure.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-04 12:13:12 -07:00
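
A minimal standalone sketch of the wrapping described above, with an illustrative cache size and none of QEMU's real definitions: putting the bare pointer inside a named struct lets per-entry fields be added later without changing the type every user sees.

    /* Forward declaration standing in for QEMU's TranslationBlock. */
    typedef struct TranslationBlock TranslationBlock;

    /* Before: the per-CPU jump cache was a bare array of TB pointers.
     *   TranslationBlock *tb_jmp_cache[TB_JMP_CACHE_SIZE];
     * After: each entry is a struct, so more fields can be added without
     * touching code that merely holds or clears the cache. */
    typedef struct CPUJumpCache {
        struct {
            TranslationBlock *tb;
        } array[1 << 12];        /* size is illustrative only */
    } CPUJumpCache;
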
Richard Henderson
1d41a79b3c accel/tcg: Inline tb_flush_jmp_cache
This function has two users whose requirements are incompatible.
In tlb_flush_page_by_mmuidx_async_0, when flushing a
single page, we need to flush exactly two pages.
In tlb_flush_range_by_mmuidx_async_0, when flushing a
range of pages, we need to flush N+1 pages.

This avoids double-flushing of jmp cache pages in a range.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-04 12:13:12 -07:00
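
A rough standalone model of the flush arithmetic described above, with invented names and a fixed page size, not the QEMU code itself. A translation block starting on the preceding page can overlap the flushed page, which is why a single-page flush clears two jump-cache pages and a range of N pages clears N+1.

    #include <stdint.h>

    #define PAGE_SIZE 4096u                 /* stand-in for TARGET_PAGE_SIZE */
    #define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

    /* Hypothetical helper: drop jump-cache entries whose PC lies on 'page'. */
    static void jmp_cache_clear_page(uint64_t page) { (void)page; }

    /* Single page: clear the page itself plus the one before it. */
    static void flush_one_page(uint64_t addr)
    {
        jmp_cache_clear_page((addr & PAGE_MASK) - PAGE_SIZE);
        jmp_cache_clear_page(addr & PAGE_MASK);
    }

    /* Range of N pages: one loop clears N + 1 pages, starting one page
     * early, instead of clearing two pages per iteration and hitting
     * interior pages twice. */
    static void flush_range(uint64_t addr, uint64_t len)
    {
        uint64_t page = (addr & PAGE_MASK) - PAGE_SIZE;
        for (uint64_t i = 0, n = len / PAGE_SIZE + 1; i < n; i++) {
            jmp_cache_clear_page(page);
            page += PAGE_SIZE;
        }
    }
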
Richard Henderson
93b996161b accel/tcg: Do not align tb->page_addr[0]
Let tb->page_addr[0] contain the address of the first byte of the
translated block, rather than the address of the page containing the
start of the translated block.  We need to recover this value anyway
at various points, and it is easier to discard a page offset when it
is not needed, which happens naturally via the existing find_page shift.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-04 12:13:04 -07:00
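
A small illustration of the new convention, using made-up names rather than QEMU's structures: the stored value keeps the in-page offset, and the page address is recovered with a mask only where the page alone matters.

    #include <stdint.h>

    #define PAGE_BITS 12
    #define PAGE_MASK (~(uint64_t)((1u << PAGE_BITS) - 1))

    struct tb_sketch {               /* heavily reduced TranslationBlock stand-in */
        uint64_t page_addr0;         /* now: address of the first translated byte */
    };

    /* The page containing the start of the block is recovered on demand;
     * the offset is discarded naturally wherever only the page number is
     * needed (e.g. when hashing by page). */
    static inline uint64_t tb_page0(const struct tb_sketch *tb)
    {
        return tb->page_addr0 & PAGE_MASK;
    }
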
Richard Henderson
4047368938 accel/tcg: Introduce tlb_set_page_full
Now that we have collected all of the page data into
CPUTLBEntryFull, provide an interface to record that
all in one go, instead of using 4 arguments.  This interface
allows CPUTLBEntryFull to be extended without having to
change the number of arguments.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
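
A hedged sketch of the interface shape, with simplified stand-ins for QEMU's types and names: recording the page data through one descriptor means new fields can be appended without changing the function signature at every call site.

    #include <stdint.h>

    /* Reduced stand-in for CPUTLBEntryFull. */
    typedef struct TLBEntryFullSketch {
        uint64_t phys_addr;
        int      attrs;
        int      prot;
        int      lg_page_size;
        /* New per-page fields go here without touching any caller. */
    } TLBEntryFullSketch;

    /* Old shape: each new piece of page data meant another parameter. */
    static void tlb_set_page_old(uint64_t vaddr, uint64_t paddr, int prot, int size)
    {
        (void)vaddr; (void)paddr; (void)prot; (void)size;
    }

    /* New shape: one call records the whole descriptor. */
    static void tlb_set_page_full_sketch(int mmu_idx, uint64_t vaddr,
                                         const TLBEntryFullSketch *full)
    {
        (void)mmu_idx; (void)vaddr; (void)full;
    }
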
Richard Henderson
af803a4fcb accel/tcg: Introduce probe_access_full
Add an interface to return the CPUTLBEntryFull struct
that goes with the lookup.  The result is not intended
to be valid across multiple lookups, so the user must
use the results immediately.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
Richard Henderson
c3c8bf579b accel/tcg: Suppress auto-invalidate in probe_access_internal
When PAGE_WRITE_INV is set when calling tlb_set_page,
we immediately set TLB_INVALID_MASK in order to force
tlb_fill to be called on the next lookup.  Here in
probe_access_internal, we have just called tlb_fill
and eliminated true misses, thus the lookup must be valid.

This allows us to remove a warning comment from s390x.
There doesn't seem to be a reason to change the code though.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
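
A tiny fragment modelling the idea, with an illustrative bit value only: after the fill has just succeeded, the entry is known valid for this one probe, so the "invalid" bit is dropped from the locally used flags while the stored entry keeps it and the next ordinary access still goes back through the fill path.

    #define TLB_INVALID_BIT (1u << 0)   /* illustrative bit position, not QEMU's */

    static unsigned probe_flags_after_fill(unsigned stored_flags)
    {
        /* Suppress the auto-invalidate for this lookup only. */
        return stored_flags & ~TLB_INVALID_BIT;
    }
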
Richard Henderson
37523ff734 accel/tcg: Drop addr member from SavedIOTLB
This field is only written, not read; remove it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
Richard Henderson
25d3ec5831 accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
This structure will shortly contain more than just
data for accessing MMIO.  Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
Alex Bennée
8810ee2ac0 cputlb: used cached CPUClass in our hot-paths
Before: 35.912 s ±  0.168 s
  After: 35.565 s ±  0.087 s

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220811151413.3350684-5-alex.bennee@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220923084803.498337-5-clg@kaod.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
Richard Henderson
7e0d9973ea accel/tcg: Use probe_access_internal for softmmu get_page_addr_code_hostp
Simplify the implementation of get_page_addr_code_hostp
by reusing the existing probe_access infrastructure.

Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Richard Henderson
97e03465f7 accel/tcg: Move qemu_ram_addr_from_host_nofail to physmem.c
The base qemu_ram_addr_from_host function is already in
softmmu/physmem.c; move the nofail version to be adjacent.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Richard Henderson
cdf7130851 accel/tcg: Properly implement get_page_addr_code for user-only
The current implementation is a no-op, simply returning addr.
This is incorrect, because we ought to be checking the page
permissions for execution.

Make get_page_addr_code inline for both implementations.

Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:25 +01:00
Ilya Leoshkevich
b0f650f047 accel/tcg: Fix unaligned stores to s390x low-address-protected lowcore
If low-address-protection is active, unaligned stores to non-protected
parts of lowcore lead to protection exceptions. The reason is that in
such cases the tlb_fill() call in store_helper_unaligned() covers the
[0, addr + size) range, which contains the protected portion of
lowcore. This range is too large.

The most straightforward fix would be to make sure we stay within the
original [addr, addr + size) range. However, if an unaligned access
affects a single page, we don't need to call tlb_fill() in
store_helper_unaligned() at all, since it would be identical to
the previous tlb_fill() call in store_helper(), and therefore a no-op.
If an unaligned access covers multiple pages, this situation does not
occur.

Therefore simply skip TLB handling in store_helper_unaligned() if we
are dealing with a single page.

Fixes: 2bcf018340 ("s390x/tcg: low-address protection support")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220711185640.3558813-2-iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-07-12 10:43:33 +05:30
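
A standalone sketch of the page-crossing test this relies on (names and page size are made up): the slow path only needs its own fill when the unaligned store really spans two pages.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

    /* True when [addr, addr + size) crosses a page boundary.  If it does
     * not, the fill already done for the aligned fast path covered the
     * page, so the byte-wise slow path can skip TLB handling entirely. */
    static bool store_spans_two_pages(uint64_t addr, unsigned size)
    {
        return (addr & PAGE_MASK) != ((addr + size - 1) & PAGE_MASK);
    }
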
Richard Henderson
b826044fc0 accel/tcg: Assert mmu_idx in range before use in cputlb
Coverity reports out-of-bound accesses within cputlb.c.
This should be a false positive due to how the index is
decoded from MemOpIdx.  To be fair, nothing is checking
the correct bounds during encoding either.

Assert index in range before use, both to catch user errors
and to pacify static analysis.

Fixes: Coverity CID 1487120, 1487127, 1487170, 1487196, 1487215, 1487238
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220401170813.318609-1-richard.henderson@linaro.org>
2022-04-26 19:57:56 -07:00
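
A small sketch of the assertion pattern, with placeholder constants and a simplified decode: the mmu index is extracted from the combined operand/index word and checked before it subscripts any per-index array.

    #include <assert.h>
    #include <stdint.h>

    #define NB_MMU_MODES 16                 /* illustrative upper bound */

    /* Assumes the mmu index sits in the low four bits of the word. */
    static inline unsigned get_mmuidx_sketch(uint32_t oi)
    {
        return oi & 15;
    }

    static void use_mmu_idx(uint32_t oi, void *per_idx_table[NB_MMU_MODES])
    {
        unsigned mmu_idx = get_mmuidx_sketch(oi);

        /* Catch a corrupted encoding before it becomes an out-of-bounds
         * access, and give static analysers a visible bound. */
        assert(mmu_idx < NB_MMU_MODES);
        (void)per_idx_table[mmu_idx];
    }
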
Richard Henderson
5b6af141da accel/tcg: Remove ATOMIC_MMU_IDX
The last use of this macro was removed in f3e182b100
("accel/tcg: Push trace info building into atomic_common.c.inc")

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-04-20 12:12:47 -07:00
Richard Henderson
46697cb96e accel/tcg: Fix cpu_ldq_be_mmu typo
In the conversion to cpu_ld_*_mmu, the retaddr parameter
was corrupted in the one case of cpu_ldq_be_mmu.

Fixes: f83bcecb1 ("accel/tcg: Add cpu_{ld,st}*_mmu interfaces")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/902
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220315002506.152030-1-richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-03-16 08:43:10 +01:00
Peter Maydell
50a75ff680 Fix safe_syscall_base for sparc64.
Fix host signal handling for sparc64-linux.
Speedups for jump cache and work list probing.
Fix for exception replays.
Raise guest SIGBUS for user-only misaligned accesses.

Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20220211' into staging

# gpg: Signature made Fri 11 Feb 2022 01:27:16 GMT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20220211: (34 commits)
  tests/tcg/multiarch: Add sigbus.c
  tcg/sparc: Support unaligned access for user-only
  tcg/sparc: Add tcg_out_jmpl_const for better tail calls
  tcg/sparc: Use the constant pool for 64-bit constants
  tcg/sparc: Convert patch_reloc to return bool
  tcg/sparc: Improve code gen for shifted 32-bit constants
  tcg/sparc: Add scratch argument to tcg_out_movi_int
  tcg/sparc: Split out tcg_out_movi_imm32
  tcg/sparc: Use tcg_out_movi_imm13 in tcg_out_addsub2_i64
  tcg/mips: Support unaligned access for softmmu
  tcg/mips: Support unaligned access for user-only
  tcg/arm: Support raising sigbus for user-only
  tcg/arm: Reserve a register for guest_base
  tcg/arm: Support unaligned access for softmmu
  tcg/arm: Check alignment for ldrd and strd
  tcg/arm: Remove use_armv6_instructions
  tcg/arm: Remove use_armv5t_instructions
  tcg/arm: Drop support for armv4 and armv5 hosts
  tcg/loongarch64: Support raising sigbus for user-only
  tcg/tci: Support raising sigbus for user-only
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-02-14 15:24:26 +00:00
Alex Bennée
c51e51005b tracing: remove TCG memory access tracing
If you really want to trace all memory operations, TCG plugins give
you a more flexible interface for doing so.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Luis Vilanova <vilanova@imperial.ac.uk>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220204204335.1689602-19-alex.bennee@linaro.org>
2022-02-09 12:08:42 +00:00
Idan Horowitz
cfc2a2d69d accel/tcg: Optimize jump cache flush during tlb range flush
When the length of the range is large enough, clearing the whole cache is
faster than iterating over the (possibly extremely large) set of pages
contained in the range.

This mimics the pre-existing similar optimization done on the flush of the
tlb itself.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Message-Id: <20220110164754.1066025-1-idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
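
A rough sketch of the heuristic, with invented names and a placeholder cut-off: past a certain range length, wiping the whole jump cache is cheaper than clearing it page by page, mirroring what the TLB flush itself already does.

    #include <stdint.h>

    #define PAGE_SIZE       4096u
    #define FULL_FLUSH_LEN  (256u * PAGE_SIZE)   /* placeholder threshold */

    static void jmp_cache_clear_all(void) { }
    static void jmp_cache_clear_page(uint64_t page) { (void)page; }

    static void jmp_cache_flush_range(uint64_t addr, uint64_t len)
    {
        if (len >= FULL_FLUSH_LEN) {
            /* Large range: one full clear beats walking every page. */
            jmp_cache_clear_all();
            return;
        }
        for (uint64_t p = addr; p < addr + len; p += PAGE_SIZE) {
            jmp_cache_clear_page(p);
        }
    }
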
Frédéric Pétrot
fc313c6434 exec/memop: Adding signedness to quad definitions
Rename the quad (64-bit) defines in their various forms so that their
signedness is now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing to
keep assignments aligned.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-01-08 15:46:10 +10:00
Richard Henderson
d2ba802657 tcg: Move helper_*_mmu decls to tcg/tcg-ldst.h
These functions have been replaced by cpu_*_mmu as the
most proper interface to use from target code.

Hide these declarations from code that should not use them.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 08:46:42 -07:00
Richard Henderson
f83bcecb1f accel/tcg: Add cpu_{ld,st}*_mmu interfaces
These functions are much closer to the softmmu helper
functions, in that they take the complete MemOpIdx,
and from that they may enforce required alignment.

The previous cpu_ldst.h functions did not have alignment info,
and so did not enforce it.  Retain this by adding MO_UNALN to
the MemOp that we create in calling the new functions.

Note that we are not yet enforcing alignment for user-only,
but we now have the information with which to do so.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 08:09:53 -07:00
Richard Henderson
0583f775d2 trace: Split guest_mem_before
There is no point in encoding load/store within a bit of
the memory trace info operand.  Represent atomic operations
as a single read-modify-write tracepoint.  Use MemOpIdx
instead of inventing a form specifically for traces.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
37aff08726 plugins: Reorg arguments to qemu_plugin_vcpu_mem_cb
Use the MemOpIdx directly, rather than the rearrangement
of the same bits currently done by the trace infrastructure.
Pass in enum qemu_plugin_mem_rw so that we are able to treat
read-modify-write operations as a single operation.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
b0702c91c6 trace/mem: Pass MemOpIdx to trace_mem_get_info
We (will) often have the complete MemOpIdx handy, so use that.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
9002ffcb72 tcg: Rename TCGMemOpIdx to MemOpIdx
We're about to move this out of tcg.h, so rename it
as we did when moving MemOp.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
c433e298d9 accel/tcg: Drop signness in tracing in cputlb.c
We are already inconsistent about whether or not
MO_SIGN is set in trace_mem_get_info.  Dropping it
entirely allows some simplification.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
a754f7f34e accel/tcg: Expand ATOMIC_MMU_LOOKUP_*
Unify the parameters of atomic_mmu_lookup between cputlb.c and
user-exec.c.  Call the function directly, and remove the macros.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Richard Henderson
fcff001441 accel/tcg: Remove ATOMIC_MMU_DECLS
All definitions are now empty.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Richard Henderson
48688fafeb accel/tcg: Fold EXTRA_ARGS into atomic_template.h
All instances of EXTRA_ARGS are now identical.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Richard Henderson
e28a866438 accel/tcg: Standardize atomic helpers on softmmu api
Reduce the amount of code duplication by always passing
the TCGMemOpIdx argument to helper_atomic_*.  This is not
currently used for user-only, but it's easy to ignore.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Richard Henderson
be9568b4e0 tcg: Rename helper_atomic_*_mmu and provide for user-only
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr.  Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Alex Bennée
2d93203998 plugins: fix-up handling of internal hostaddr for 32 bit
The compiler rightly complains when we build on 32 bit that casting a
uint64_t into a void pointer is a bad idea. We are really dealing with a
host pointer at this point, so treat it as such. This does involve a
uintptr_t cast of the result of applying the TLB addend, as we know that
has to point to host memory.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210709143005.1554-28-alex.bennee@linaro.org>
2021-07-14 14:33:53 +01:00
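
A minimal standalone example of the shape of the fix, not the plugin code itself: the value is really a host pointer, so it is narrowed through uintptr_t, which stays well defined on 32-bit hosts.

    #include <stdint.h>

    /* The TLB addend is defined so that host address = guest address + addend.
     * Converting through uintptr_t keeps the cast valid when host pointers
     * are only 32 bits wide. */
    static void *host_addr_from_addend(uint64_t guest_addr, uintptr_t addend)
    {
        return (void *)(uintptr_t)(guest_addr + addend);
    }
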
Richard Henderson
08dff435e2 accel/tcg: Probe the proper permissions for atomic ops
We had a single ATOMIC_MMU_LOOKUP macro that probed for
read+write on all atomic ops.  This is incorrect for
plain atomic load and atomic store.

For user-only, we rely on the host page permissions.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/390
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-19 11:09:10 -07:00
Philippe Mathieu-Daudé
e5ceadff47 accel/tcg: Keep TranslationBlock headers local to TCG
Only the TCG accelerator uses the TranslationBlock API.
Move the tb-context.h / tb-hash.h / tb-lookup.h from the
global namespace to the TCG one (in accel/tcg).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210524170453.3791436-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-05-26 15:33:59 -07:00
Richard Henderson
206a583d13 accel/tlb: Rename tlb_flush_[page_bits > range]_by_mmuidx_async_[2 > 1]
Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-9-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
6be48e45ac accel/tcg: Rename tlb_flush_[page_bits > range]_by_mmuidx_async_0
Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-8-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
c13b27d826 accel/tlb: Add tlb_flush_range_by_mmuidx_all_cpus_synced()
Forward tlb_flush_page_bits_by_mmuidx_all_cpus_synced to
tlb_flush_range_by_mmuidx_all_cpus_synced passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-7-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
600b819f23 accel/tcg: Add tlb_flush_range_by_mmuidx_all_cpus()
Forward tlb_flush_page_bits_by_mmuidx_all_cpus to
tlb_flush_range_by_mmuidx_all_cpus passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-6-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
e5b1921bd4 accel/tcg: Add tlb_flush_range_by_mmuidx()
Forward tlb_flush_page_bits_by_mmuidx to tlb_flush_range_by_mmuidx
passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-5-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
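
A sketch of the forwarding pattern shared by this group of commits, with simplified signatures: the old single-page entry point becomes a thin wrapper that passes a one-page length to the new range interface.

    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* stand-in for TARGET_PAGE_SIZE */

    /* New, more general interface: flush [addr, addr + len). */
    static void flush_range_by_mmuidx(uint64_t addr, uint64_t len,
                                      uint16_t idxmap, unsigned bits)
    {
        (void)addr; (void)len; (void)idxmap; (void)bits;
    }

    /* Old interface kept as a wrapper forwarding a one-page length. */
    static void flush_page_bits_by_mmuidx(uint64_t addr, uint16_t idxmap,
                                          unsigned bits)
    {
        flush_range_by_mmuidx(addr, PAGE_SIZE, idxmap, bits);
    }
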
Richard Henderson
d34e4d1afa accel/tcg: Remove {encode,decode}_pbm_to_runon
We will not be able to fit address + length into a 64-bit packet.
Drop this optimization before re-organizing this code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-10-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: Moved patch earlier in the series]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
3960a59f8d accel/tlb: Rename TLBFlushPageBitsByMMUIdxData -> TLBFlushRangeData
Rename the structure to match the rename of tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-4-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
3c4ddec169 accel/tcg: Pass length argument to tlb_flush_range_locked()
Rename tlb_flush_page_bits_locked() -> tlb_flush_range_locked(), and
have callers pass a length argument (currently TARGET_PAGE_SIZE) via
the TLBFlushPageBitsByMMUIdxData structure.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-3-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
Richard Henderson
6d24478861 accel/tcg: Replace g_new() + memcpy() by g_memdup()
Using g_memdup is a bit more compact than g_new + memcpy.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-2-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25 16:01:43 +01:00
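
A compact before/after sketch of this cleanup; the struct is a reduced stand-in and the example needs GLib to build.

    #include <glib.h>

    typedef struct {                 /* reduced TLBFlushRangeData stand-in */
        guint64 addr, len;
        guint16 idxmap;
    } FlushDataSketch;

    static FlushDataSketch *copy_for_async_work(const FlushDataSketch *src)
    {
        /* Before:
         *     FlushDataSketch *p = g_new(FlushDataSketch, 1);
         *     memcpy(p, src, sizeof(*p));
         *     return p;
         * After (newer GLib spells this g_memdup2): */
        return g_memdup(src, sizeof(*src));
    }
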
Thomas Huth
ee86213aa3 Do not include exec/address-spaces.h if it's not really necessary
Stop including exec/address-spaces.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-5-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-05-02 17:24:51 +02:00
Thomas Huth
2068cabd3f Do not include cpu.h if it's not really necessary
Stop including cpu.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-4-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-05-02 17:24:51 +02:00
Claudio Fontana
7827168471 cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass
We cannot, in principle, make the TCG operations field definitions
conditional on CONFIG_TCG in code that is included by both common_ss
and specific_ss modules.

Therefore, what we can do safely to restrict the TCG fields to TCG-only
builds, is to move all tcg cpu operations into a separate header file,
which is only included by TCG, target-specific code.

This leaves just a NULL pointer in the cpu.h for the non-TCG builds.

This also tidies up the code in all targets a bit, having all TCG cpu
operations neatly contained by a dedicated data struct.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-16-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:15 -10:00
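
A minimal structural sketch of the arrangement, with invented member signatures: the TCG-specific hooks live in their own struct defined by a TCG-only header, and the common CPU class just carries a pointer that non-TCG builds leave NULL.

    /* Defined only for TCG builds, in a dedicated header. */
    struct TCGCPUOpsSketch {
        void (*tlb_fill)(void *cpu, unsigned long addr, int size);
        void (*do_interrupt)(void *cpu);
        /* ... further TCG-only hooks ... */
    };

    /* Common class definition: just an opaque pointer, NULL without TCG. */
    struct CPUClassSketch {
        const struct TCGCPUOpsSketch *tcg_ops;
    };
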
Eduardo Habkost
e124536f37 cpu: Move tlb_fill to tcg_ops
[claudio: wrapped target code in CONFIG_TCG]

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210204163931.7358-7-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05 10:24:14 -10:00
Philippe Mathieu-Daudé
6526919224 accel/tcg: Restrict cpu_io_recompile() from other accelerators
As cpu_io_recompile() is only called within TCG accelerator
in cputlb.c, declare it locally.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210117164813.4101761-6-f4bug@amsat.org>
[rth: Adjust vs changed tb_flush_jmp_cache patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-23 12:13:00 -10:00
Richard Henderson
0f4abea8ef accel/tcg: Move tb_flush_jmp_cache() to cputlb.c
Move and make the function static, as the only users
are here in cputlb.c.

Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-23 12:12:59 -10:00