ia64: atomic.h: fix atomic_exchange_and_add 64bit handling

Way back in 2005 the atomic_exchange_and_add function was cleaned up to
avoid the explicit size checking and instead let gcc handle things itself.
Unfortunately that change ended up leaving behind a cast to int, even when
the incoming value was a long.  This has flown under the radar for a long
time because the function is not heavily used in the tree (especially on
full 64bit fields), but a recent change to semaphores made some nptl tests
fail reliably.  This is due to the code packing two 32bit values into one
64bit variable (where the high 32bits contained the number of waiters), and
then the whole variable being atomically updated between threads.  On ia64,
that meant we never atomically updated the count, so sometimes the sem_post
would not wake up the waiters.
This commit is contained in:
Mike Frysinger 2015-07-28 02:19:49 -04:00
parent 18855eca32
commit cf31a2c799
2 changed files with 6 additions and 3 deletions

ChangeLog

@@ -1,3 +1,8 @@
+2015-07-27  Mike Frysinger  <vapier@gentoo.org>
+
+	* sysdeps/ia64/bits/atomic.h (atomic_exchange_and_add): Define
+	directly in terms of __sync_fetch_and_add and delete (int) cast.
+
 2015-07-27  Mike Frysinger  <vapier@gentoo.org>
 	* sysdeps/unix/sysv/linux/ia64/Makefile (CPPFLAGS): Delete

sysdeps/ia64/bits/atomic.h

@@ -82,9 +82,7 @@ typedef uintmax_t uatomic_max_t;
   (__sync_synchronize (), __sync_lock_test_and_set (mem, value))
 
 #define atomic_exchange_and_add(mem, value) \
-  ({ __typeof (*mem) __result;					      \
-     __result = __sync_fetch_and_add ((mem), (int) (value));	      \
-     __result; })
+  __sync_fetch_and_add ((mem), (value))
 
 #define atomic_decrement_if_positive(mem) \
   ({ __typeof (*mem) __oldval, __val;				      \