cputlb: Fix size operand for tlb_fill on unaligned store

We are currently passing the size of the full write to the
tlb_fill call for the second page.  Instead, pass the real
size of the write to that page.

This argument is unused within all tlb_fill implementations,
except to be logged via tracing, so in practice this makes no difference.

But in a moment we'll need the value of size2 for watchpoints,
and if we've computed the value we might as well use it.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit 8f7cd2ad4a
parent 56ad8b007d
Richard Henderson <richard.henderson@linaro.org>, 2019-08-28 15:25:28 -07:00

@@ -1504,6 +1504,8 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
         uintptr_t index2;
         CPUTLBEntry *entry2;
         target_ulong page2, tlb_addr2;
+        size_t size2;
+
     do_unaligned_access:
         /*
          * Ensure the second page is in the TLB.  Note that the first page
@@ -1511,13 +1513,14 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
          * cannot evict the first.
          */
         page2 = (addr + size) & TARGET_PAGE_MASK;
+        size2 = (addr + size) & ~TARGET_PAGE_MASK;
         index2 = tlb_index(env, mmu_idx, page2);
         entry2 = tlb_entry(env, mmu_idx, page2);
         tlb_addr2 = tlb_addr_write(entry2);
         if (!tlb_hit_page(tlb_addr2, page2)
             && !victim_tlb_hit(env, mmu_idx, index2, tlb_off,
                                page2 & TARGET_PAGE_MASK)) {
-            tlb_fill(env_cpu(env), page2, size, MMU_DATA_STORE,
+            tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE,
                      mmu_idx, retaddr);
         }
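
For reference, a minimal standalone sketch of the page-split arithmetic used above.  This is not QEMU code: TARGET_PAGE_BITS of 12 and the example address and size are assumptions chosen purely for illustration, with the macros standing in for QEMU's own definitions.

/*
 * Standalone illustration of the size2 computation (not QEMU code).
 * TARGET_PAGE_BITS = 12 is assumed only for this example.
 */
#include <inttypes.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (UINT64_C(1) << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

int main(void)
{
    uint64_t addr = 0x1ffd;  /* store begins 3 bytes before the 0x2000 page boundary */
    uint64_t size = 8;       /* 8-byte store, so it spills onto the next page */

    /* Same expressions as in store_helper() above. */
    uint64_t page2 = (addr + size) & TARGET_PAGE_MASK;   /* start of the second page: 0x2000 */
    uint64_t size2 = (addr + size) & ~TARGET_PAGE_MASK;  /* bytes landing on the second page: 5 */

    printf("page2 = 0x%" PRIx64 "\n", page2);
    printf("size2 = %" PRIu64 " (first page gets %" PRIu64 " bytes)\n",
           size2, size - size2);
    return 0;
}

In this example, tlb_fill for page2 was previously told the access was 8 bytes; with the fix it is told 5, the number of bytes that actually land on the second page, which is the value the upcoming watchpoint check will need.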