AArch64: Improve backwards memmove performance
On some microarchitectures, performance of the backwards memmove improves if
the stores use STR with decreasing addresses. So change the memmove loop
in memcpy_advsimd.S to use 2x STR rather than STP.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit bd394d131c)
parent 58c6a7ae53
commit 64458aabeb
@@ -223,12 +223,13 @@ L(copy_long_backwards):
 	b.ls	L(copy64_from_start)
 
 L(loop64_backwards):
-	stp	A_q, B_q, [dstend, -32]
+	str	B_q, [dstend, -16]
+	str	A_q, [dstend, -32]
 	ldp	A_q, B_q, [srcend, -96]
-	stp	C_q, D_q, [dstend, -64]
+	str	D_q, [dstend, -48]
+	str	C_q, [dstend, -64]!
 	ldp	C_q, D_q, [srcend, -128]
 	sub	srcend, srcend, 64
-	sub	dstend, dstend, 64
 	subs	count, count, 64
 	b.hi	L(loop64_backwards)
 
|