locking, sparc64: Fix atomics

The patch folding the atomic ops had a silly fail in the _return primitives.

Fixes: 4f3316c2b5 ("locking,arch,sparc: Fold atomic_ops")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140902094016.GD31157@worktop.ger.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
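
For context: the fold patch generates each atomic primitive from a single macro, substituting the per-op instruction wherever the macro body says "op". The value-returning variants, however, still hard-coded an add when computing the value handed back to the caller, so every generated *_return primitive returned the result of an addition even though it stored the correct result. Below is a minimal C sketch of that pattern; sketch_atomic_t, SKETCH_OP_RETURN_BUGGY, SKETCH_OP_RETURN and the sub instantiation are made-up names for illustration, not the kernel's actual SPARC assembly macros.

/*
 * Minimal C sketch of the generate-by-macro pattern (hypothetical names,
 * not the kernel's actual SPARC assembly macros).
 */
#include <stdio.h>

typedef struct { int counter; } sketch_atomic_t;

/* Before the fix: the stored value uses the per-op operator, but the
 * value returned to the caller is computed with a hard-coded '+'. */
#define SKETCH_OP_RETURN_BUGGY(op, c_op) \
static int sketch_##op##_return_buggy(int i, sketch_atomic_t *v) \
{ \
	int old = v->counter; \
	v->counter = old c_op i; /* store: per-op operator, correct */ \
	return old + i; /* return: always add, the bug */ \
}

/* After the fix: the returned value uses the per-op operator as well. */
#define SKETCH_OP_RETURN(op, c_op) \
static int sketch_##op##_return(int i, sketch_atomic_t *v) \
{ \
	v->counter = v->counter c_op i; \
	return v->counter; /* returned and stored values agree */ \
}

SKETCH_OP_RETURN_BUGGY(sub, -)
SKETCH_OP_RETURN(sub, -)

int main(void)
{
	sketch_atomic_t v = { .counter = 10 };
	int ret;

	ret = sketch_sub_return_buggy(3, &v);
	printf("buggy: returned %d, counter %d\n", ret, v.counter); /* 13 vs 7 */

	v.counter = 10;
	ret = sketch_sub_return(3, &v);
	printf("fixed: returned %d, counter %d\n", ret, v.counter); /* 7 vs 7 */

	return 0;
}

With op = add the stored and returned values happen to coincide, which is why atomic_add_return still worked; for any other generated op the caller saw a wrong return value even though memory was updated correctly.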
Peter Zijlstra, 2014-09-02 11:40:16 +02:00 (committed by Ingo Molnar)
commit caa17d49f9, parent 560cb12a40
1 changed file with 2 additions and 2 deletions

arch/sparc/lib/atomic_64.S

@@ -37,7 +37,7 @@ ENTRY(atomic_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */ \
 	cas [%o1], %g1, %g7; \
 	cmp %g1, %g7; \
 	bne,pn %icc, BACKOFF_LABEL(2f, 1b); \
-	add %g1, %o0, %g1; \
+	op %g1, %o0, %g1; \
 	retl; \
 	sra %g1, 0, %o0; \
 2:	BACKOFF_SPIN(%o2, %o3, 1b); \
@@ -76,7 +76,7 @@ ENTRY(atomic64_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */ \
 	bne,pn %xcc, BACKOFF_LABEL(2f, 1b); \
 	nop; \
 	retl; \
-	add %g1, %o0, %o0; \
+	op %g1, %o0, %o0; \
 2:	BACKOFF_SPIN(%o2, %o3, 1b); \
 ENDPROC(atomic64_##op##_return);
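
In both hunks the corrected instruction executes in a delay slot: in the 32-bit variant it is the bne,pn delay slot, computing the new value into %g1, which the sra in the retl delay slot then sign-extends into the return register %o0; in the 64-bit variant it is the retl delay slot, writing %o0 directly. Only the returned value was wrong; the value written back by the cas/casx loop was already computed with the per-op instruction (that line is outside these hunks), which is what the C sketch above mirrors.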