The compiler builtin will use the hardware instruction cdsg if the
memory operand is properly aligned and will fall back to the
library call otherwise.
If the compiler is able to detect the alignment for one access to a
location but fails to do so for another, the hardware instruction and
the software fallback would be mixed on the same memory location.  To
avoid this, the library fallback also has to use the hardware
instruction whenever possible.
libatomic/ChangeLog:
2018-03-09 Andreas Krebbel <krebbel@linux.vnet.ibm.com>
* config/s390/exch_n.c: New file.
* configure.tgt: Add the config directory for s390.
From-SVN: r258384
gcc/
* builtins.c (fold_builtin_atomic_always_lock_free): Make "lock-free"
conditional on the existence of a fast atomic load.
* optabs-query.c (can_atomic_load_p): New function.
* optabs-query.h (can_atomic_load_p): Declare it.
* optabs.c (expand_atomic_exchange): Always delegate to libatomic if
no fast atomic load is available for the particular size of access.
(expand_atomic_compare_and_swap): Likewise.
(expand_atomic_load): Likewise.
(expand_atomic_store): Likewise.
(expand_atomic_fetch_op): Likewise.
* testsuite/lib/target-supports.exp
(check_effective_target_sync_int_128): Remove x86 because it provides
no fast atomic load.
(check_effective_target_sync_int_128_runtime): Likewise.
libatomic/
* acinclude.m4: Add #define FAST_ATOMIC_LDST_*.
* auto-config.h.in: Regenerate.
* config/x86/host-config.h (FAST_ATOMIC_LDST_16): Define to 0.
(atomic_compare_exchange_n): New.
* glfree.c (EXACT, LARGER): Change condition and add comments.
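As a hedged illustration of the effect (not part of the patch itself):
on a target without a genuinely atomic 16-byte load, such as x86-64, a
16-byte object is no longer reported as always lock-free, and the
__atomic operations on it are routed through libatomic.

  #include <stdio.h>

  int
  main (void)
  {
    /* With the change above, __atomic_always_lock_free reports 0 for
       16-byte objects on targets lacking a fast atomic 16-byte load;
       __atomic_is_lock_free consults libatomic at run time.  */
    printf ("__atomic_always_lock_free (16, 0) = %d\n",
            (int) __atomic_always_lock_free (16, 0));
    printf ("__atomic_is_lock_free (16, 0)     = %d\n",
            (int) __atomic_is_lock_free (16, 0));
    return 0;
  }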
From-SVN: r245098
ARM libatomic inline asm uses the sel, uadd8 and uadd16 instructions,
which are only available if __ARM_FEATURE_SIMD32 is defined.
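A minimal compile-time illustration of the guard (not the
libat_exchange code itself):

  #include <stdio.h>

  int
  main (void)
  {
  #ifdef __ARM_FEATURE_SIMD32
    /* sel/uadd8/uadd16 are available, so the inline-asm exchange path
       can be compiled in.  */
    puts ("__ARM_FEATURE_SIMD32 defined: inline-asm path usable");
  #else
    /* Without SIMD32 the generic fallback must be used instead.  */
    puts ("__ARM_FEATURE_SIMD32 not defined: generic fallback");
  #endif
    return 0;
  }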
libatomic/
2017-01-30 Szabolcs Nagy <szabolcs.nagy@arm.com>
PR target/78945
* config/arm/exch_n.c (libat_exchange): Check __ARM_FEATURE_SIMD32.
From-SVN: r245023