Commit Graph

9 Commits

Author SHA1 Message Date
Ralf Baechle 7837314d14 MIPS: Get rid of branches to .subsections.
It was a nice optimization - on paper at least.  In practice it results in
branches that may exceed the maximum legal range for a branch.  We can
work around that problem with -ffunction-sections, but -ffunction-sections
in turn is incompatible with -pg as used by the function tracer.

By rewriting the loop around all simple LL/SC blocks in C we reduce the
amount of inline assembler and at the same time allow GCC to often fill
the branch delay slots with something sensible, or to apply whatever other
clever optimization it may have up its sleeve.
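As a rough sketch of the pattern (illustrative, modelled on the atomic_add
case; not necessarily the exact code of this commit), the retry branch moves
out of the asm block into a C loop around a minimal LL/SC sequence:

    /* v is the atomic_t pointer, i the increment, as in atomic_add() */
    int temp;

    do {
        __asm__ __volatile__(
        "	.set	mips3			\n"
        "	ll	%0, %1		# atomic_add	\n"
        "	addu	%0, %2			\n"
        "	sc	%0, %1			\n"
        "	.set	mips0			\n"
        : "=&r" (temp), "+m" (v->counter)
        : "Ir" (i));
    } while (unlikely(!temp));

GCC then schedules the loop itself and can fill the delay slot of the branch
it generates for the while condition.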

With this optimization gone we also no longer need -ffunction-sections,
so drop it.

This optimization was originally introduced in 2.6.21, commit
5999eca25c1fd4b9b9aca7833b04d10fe4bc877d (linux-mips.org) resp.
f65e4fa8e0 (kernel.org).

The original fix for the issues which caused me to pull this optimization
was by Paul Gortmaker <paul.gortmaker@windriver.com>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:24 +01:00
David Daney f252ffd50c MIPS: New macro smp_mb__before_llsc.
Replace some instances of smp_llsc_mb() with a new macro
smp_mb__before_llsc().  It is used before ll/sc sequences that are
documented as needing write barrier semantics.

The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
so there are no changes in semantics.

Also simplify the definitions of smp_mb(), smp_rmb(), and smp_wmb() to be
just barrier() in the non-SMP case.
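As a sketch of the default described above (surrounding #ifdef plumbing in
asm/barrier.h elided), the new macro is a straight alias, and callers issue
it ahead of an ll/sc sequence that needs write barrier semantics:

    #define smp_mb__before_llsc()	smp_llsc_mb()

    /* typical call site, e.g. ahead of a cmpxchg-style ll/sc loop */
    smp_mb__before_llsc();	/* order earlier stores before the ll/sc */
    /* ... ll/sc retry loop ... */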

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-02-27 12:53:06 +01:00
Ralf Baechle c677189af9 MIPS: Fix build error if __xchg() is not getting inlined.
If __xchg() does not get inlined, the out-of-line version of the function
will retain a reference to __xchg_called_with_bad_pointer(), which does
not exist.  Fixed by using BUILD_BUG_ON() to check for allowable operand
sizes.
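A rough sketch of the idea (illustrative; the actual macro wording in this
commit may differ):

    #define xchg(ptr, x)						\
    ({									\
    	/* only 32-bit and 64-bit operands are supported */		\
    	BUILD_BUG_ON(sizeof(*(ptr)) != 4 && sizeof(*(ptr)) != 8);	\
    	(__typeof__(*(ptr)))						\
    		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));	\
    })

BUILD_BUG_ON() turns an unsupported operand size into a compile-time error,
so a dummy __xchg_called_with_bad_pointer() reference is no longer needed to
catch it at link time.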

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: http://patchwork.linux-mips.org/patch/705/
2009-12-01 16:21:25 +00:00
David Daney b791d1193a MIPS: Allow kernel use of LL/SC to be separate from the presence of LL/SC.
On some CPUs, it is more efficient to disable and enable interrupts in the
kernel rather than use ll/sc for atomic operations.  But if we were to set
cpu_has_llsc to false, we would break the userspace futex interface (in
asm/futex.h).

We separate the two concepts with a new predicate, kernel_uses_llsc, that
lets us disable the kernel's use of ll/sc while still allowing the futex
code to use it.
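As a minimal sketch of the split (illustrative; the override plumbing is
simplified):

    /* kernel use defaults to the CPU capability, but a platform	*/
    /* override can force it to 0 without affecting asm/futex.h	*/
    #ifndef kernel_uses_llsc
    #define kernel_uses_llsc	cpu_has_llsc
    #endif

Kernel atomics then test kernel_uses_llsc to pick between the ll/sc sequence
and the disable-interrupts fallback, while the futex code keeps testing
cpu_has_llsc.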

Also, there were a couple of cases in bitops.h where ll/sc was used
unconditionally even when cpu_has_llsc was false.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-09-17 20:07:50 +02:00
Ralf Baechle 43e6ae6d9f MIPS: Rewrite clearing of ll_bit on context switch in C
This also means there is now only one implementation left instead of three.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-09-17 20:07:49 +02:00
Ralf Baechle f1e39a4a61 MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler
This way it doesn't have to use CONFIG_CPU_HAS_LLSC anymore.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-09-17 20:07:49 +02:00
Ralf Baechle f4c6b6bc5a MIPS: Consolidate all CONFIG_CPU_HAS_LLSC use in a single C file.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-09-17 20:07:49 +02:00
David Daney 2c708cbaa6 MIPS: Scheduler support for HARDWARE_WATCHPOINTS.
Here we hook up the scheduler.  Whenever we switch to a new process,
we check to see if the watch registers should be installed, and do it
if needed.
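A rough sketch of the hook (the flag and helper names here are illustrative
and may not match the commit exactly):

    /* on the switch-to path, for the incoming task "next" */
    if (test_tsk_thread_flag(next, TIF_LOAD_WATCH))
        mips_install_watch_registers();	/* load the task's watch regs */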

Signed-off-by: David Daney <ddaney@avtrex.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-11 16:18:57 +01:00
Ralf Baechle 384740dc49 MIPS: Move header files to new location below arch/mips/include
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-11 16:18:52 +01:00