aarch64: Fix up __aarch64_cas16_acq_rel fallback

As mentioned in the PR, the fallback path taken when LSE is unavailable
writes the wrong registers to memory when the previous contents compare
equal to x0, x1: it stores the copies of x0, x1 saved at the start of
the function (the expected value), when it should store x2, x3 (the
desired new value).
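
For context, a minimal C sketch of what the LL/SC fallback is supposed
to do (cas16_model and its parameter names are illustrative only, not
part of libgcc):

/* Minimal C model of the intended LL/SC fallback semantics.  */
__int128
cas16_model (__int128 expected, __int128 desired, __int128 *ptr)
{
  __int128 old = *ptr;	/* LDXP: load the current contents.  */
  if (old == expected)	/* cmp/ccmp against the saved x(tmp0), x(tmp1).  */
    *ptr = desired;	/* STXP must store the desired pair (x2, x3);
			   the bug stored the expected pair instead,
			   turning a successful CAS into a no-op.  */
  return old;		/* The old value is returned in x0, x1.  */
}

Under the buggy code a successful compare-and-swap left memory
unchanged, which is exactly what the new testcase detects.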

2020-08-03  Jakub Jelinek  <jakub@redhat.com>

	PR target/96402
	* config/aarch64/lse.S (__aarch64_cas16_acq_rel): Use x2, x3 instead
	of x(tmp0), x(tmp1) in STXP arguments.

	* gcc.target/aarch64/pr96402.c: New test.

--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/pr96402.c
@@ -0,0 +1,16 @@
+/* PR target/96402 */
+/* { dg-do run { target int128 } } */
+/* { dg-options "-moutline-atomics" } */
+
+int
+main ()
+{
+  __int128 a = 0;
+  __sync_val_compare_and_swap (&a, (__int128) 0, (__int128) 1);
+  if (a != 1)
+    __builtin_abort ();
+  __sync_val_compare_and_swap (&a, (__int128) 1, (((__int128) 0xdeadbeeffeedbac1ULL) << 64) | 0xabadcafe00c0ffeeULL);
+  if (a != ((((__int128) 0xdeadbeeffeedbac1ULL) << 64) | 0xabadcafe00c0ffeeULL))
+    __builtin_abort ();
+  return 0;
+}

--- a/libgcc/config/aarch64/lse.S
+++ b/libgcc/config/aarch64/lse.S
@@ -203,7 +203,7 @@ STARTFN NAME(cas)
 	cmp	x0, x(tmp0)
 	ccmp	x1, x(tmp1), #0, eq
 	bne	1f
-	STXP	w(tmp2), x(tmp0), x(tmp1), [x4]
+	STXP	w(tmp2), x2, x3, [x4]
 	cbnz	w(tmp2), 0b
 1:	ret
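
For reference: in the outline-atomics calling convention used here,
x0/x1 carry the expected value on entry, x2/x3 the desired value, and
x4 the pointer; x(tmp0)/x(tmp1) are copies of x0/x1 taken at function
entry because LDXP overwrites x0/x1 with the old contents.  Storing
x(tmp0)/x(tmp1) therefore wrote the expected (i.e. old) value back on
success, so the corrected STXP stores x2/x3.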