Replace some instances of smp_llsc_mb() with a new macro
smp_mb__before_llsc(). It is used before ll/sc sequences that are
documented as needing write barrier semantics.
The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
so there are no changes in semantics.
Also simplify the definitions of smp_mb(), smp_rmb(), and smp_wmb() to
just barrier() in the non-SMP case.
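A minimal sketch of the resulting defaults in asm/barrier.h (the SMP-side
definitions are unchanged and omitted here; the #ifndef structure is
illustrative):

#ifndef CONFIG_SMP
/* On a non-SMP kernel only a compiler barrier is needed. */
#define smp_mb()	barrier()
#define smp_rmb()	barrier()
#define smp_wmb()	barrier()
#endif

/*
 * Default implementation: same semantics as smp_llsc_mb(), just a more
 * descriptive name at call sites that precede ll/sc sequences needing
 * write barrier semantics; being a default, a platform can provide its
 * own definition later.
 */
#define smp_mb__before_llsc()	smp_llsc_mb()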
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If __xchg() is not inlined, the out-of-line version of the function
retains a reference to __xchg_called_with_bad_pointer(), which does not
exist. Fix this by using BUILD_BUG_ON() to check for allowable operand
sizes.
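A sketch of the approach (simplified; __xchg_u32()/__xchg_u64() stand in
for the existing word/doubleword helpers):

static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
{
	switch (size) {
	case 4:
		return __xchg_u32(ptr, x);
	case 8:
		return __xchg_u64(ptr, x);
	}
	return x;
}

/*
 * Unsupported operand sizes now fail at compile time instead of leaving
 * an unresolved reference to __xchg_called_with_bad_pointer().
 */
#define xchg(ptr, x)							\
({									\
	BUILD_BUG_ON(sizeof(*(ptr)) != 4 && sizeof(*(ptr)) != 8);	\
	((__typeof__(*(ptr)))						\
		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))));	\
})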
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: http://patchwork.linux-mips.org/patch/705/
On some CPUs, it is more efficient to disable and enable interrupts in the
kernel rather than use ll/sc for atomic operations. But if we were to set
cpu_has_llsc to false, we would break the userspace futex interface (in
asm/futex.h).
We separate the two concepts with a new predicate, kernel_uses_llsc, that
lets us disable the kernel's use of ll/sc while still allowing the futex
code to use it.
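A sketch of the predicate as it would sit in asm/cpu-features.h; by
default nothing changes:

/*
 * A platform that prefers interrupt-disable sequences in the kernel can
 * define kernel_uses_llsc to 0 without touching cpu_has_llsc, so the
 * futex code in asm/futex.h keeps using ll/sc for userspace operations.
 */
#ifndef kernel_uses_llsc
#define kernel_uses_llsc	cpu_has_llsc
#endif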
There were also a couple of cases in bitops.h where ll/sc was used
unconditionally even when cpu_has_llsc was false.
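An illustrative shape of the bitops.h fix (heavily simplified: the .set
directives and the R10000_LLSC_WAR variant are omitted):

static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
{
	volatile unsigned long *word = addr + (nr / BITS_PER_LONG);
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);

	if (kernel_uses_llsc) {
		unsigned long temp;

		__asm__ __volatile__(
		"1:	ll	%0, %1		# set_bit	\n"
		"	or	%0, %2				\n"
		"	sc	%0, %1				\n"
		"	beqz	%0, 1b				\n"
		: "=&r" (temp), "+m" (*word)
		: "ir" (mask));
	} else {
		unsigned long flags;

		/* No ll/sc: make the read-modify-write atomic by
		 * disabling interrupts around it. */
		raw_local_irq_save(flags);
		*word |= mask;
		raw_local_irq_restore(flags);
	}
}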
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Here we hook into the scheduler: whenever we switch to a new process, we
check whether its watch registers should be installed, and install them
if needed.
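A sketch of the hook; the flag and helper names (TIF_LOAD_WATCH,
mips_install_watch_registers()) come from the watch-register support this
builds on, and the exact placement on the switch_to() path is simplified:

/*
 * Invoked when switching to a new task: if the incoming task has watch
 * registers configured, reload them into the hardware before it runs.
 */
#define __restore_watch()						\
do {									\
	if (unlikely(test_bit(TIF_LOAD_WATCH,				\
			      &current_thread_info()->flags)))		\
		mips_install_watch_registers();				\
} while (0)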
Signed-off-by: David Daney <ddaney@avtrex.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>