Commit Graph

3 Commits

Author SHA1 Message Date
Paul Gortmaker
527581b9cf MIPS: lib: Audit and remove any unnecessary uses of module.h
Historically a lot of these module.h includes existed because we did
not have a distinction between what was modular code and what was
providing support to modules via EXPORT_SYMBOL and friends.  That
changed when we forked support for the latter out into the export.h
header.

This means we should be able to reduce the usage of module.h
in code that is built obj-y in the Makefile or bool in Kconfig.
The advantage in doing so is that module.h itself pulls in about
15 other headers, adding significantly to what we feed cpp and
obscuring which headers we are actually using.

Since module.h was the source of init.h (for __init) and of
export.h (for EXPORT_SYMBOL), we check each obj-y/bool instance
for the presence of either and replace as needed.

The compiler.h additions cover the implicit presence of "notrace",
which module.h brought in but export.h does not.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14034/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-10-05 01:31:20 +02:00
Harvey Hunt
aedcfbe065 MIPS: lib: Mark intrinsics notrace
On certain MIPS32 devices, the ftrace tracer "function_graph" uses
__lshrdi3() while capturing trace data. ftrace then attempts to
trace __lshrdi3(), which leads to infinite recursion and a stack
overflow.
Fix this by marking __lshrdi3() as notrace. Mark the other compiler
intrinsics as notrace in case the compiler decides to use them in the
ftrace path.
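
A sketch of the annotation, in the spirit of the libgcc-style helper
in arch/mips/lib/lshrdi3.c (this is an illustrative little-endian
32-bit version, not the exact in-tree source); notrace keeps ftrace
from instrumenting a routine that ftrace itself depends on:

#include <linux/compiler.h>	/* notrace */

long long notrace __lshrdi3(long long u, int b)
{
	union {
		long long ll;
		struct { unsigned int low, high; } s;	/* assumes little endian */
	} uu, w;
	int bm;

	if (b == 0)
		return u;

	uu.ll = u;
	bm = 32 - b;

	if (bm <= 0) {
		/* Shift of 32 or more: only the high word contributes. */
		w.s.high = 0;
		w.s.low = uu.s.high >> -bm;
	} else {
		/* Bits of the high word that spill into the low word. */
		const unsigned int carries = uu.s.high << bm;

		w.s.high = uu.s.high >> b;
		w.s.low = (uu.s.low >> b) | carries;
	}

	return w.ll;
}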

Signed-off-by: Harvey Hunt <harvey.hunt@imgtec.com>
Cc: <linux-mips@linux-mips.org>
Cc: <linux-kernel@vger.kernel.org>
Cc: <stable@vger.kernel.org> # 4.2.x-
Patchwork: https://patchwork.linux-mips.org/patch/13354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-05-28 12:35:11 +02:00
Ralf Baechle
1ee3630a3e MIPS: Use ARCH_USE_BUILTIN_BSWAP.
ARCH_USE_BUILTIN_BSWAP will use __builtin_bswap16(), __builtin_bswap32()
and __builtin_bswap64() where available.  This allows better instruction
scheduling.  On pre-R2 processors 32-bit and 64-bit swapping will instead
be performed by calls to the __bswapsi2() and __bswapdi2() functions
respectively, so we add these, too.
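
As an illustrative sketch (this mirrors the usual open-coded byte
swap rather than quoting the in-tree arch/mips/lib/bswapsi.c
verbatim), the 32-bit helper GCC falls back to when it cannot expand
__builtin_bswap32() inline looks roughly like:

unsigned int __bswapsi2(unsigned int u)
{
	/* Swap the four bytes of u: 0xAABBCCDD -> 0xDDCCBBAA. */
	return ((u & 0xff000000) >> 24) |
	       ((u & 0x00ff0000) >>  8) |
	       ((u & 0x0000ff00) <<  8) |
	       ((u & 0x000000ff) << 24);
}

__bswapdi2() does the same over the eight bytes of a 64-bit value.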

For a 4.2 kernel with GCC 4.9 this yields the following kernel sizes:

   text    data     bss     dec     hex filename
3996071  155804   88992 4240867  40b5e3 vmlinux         ip22 baseline
3985687  159900   88992 4234579  409d53 vmlinux         ip22 + bswap patch
6913157  378552  251024 7542733  7317cd vmlinux         ip27 baseline
6878581  378552  251024 7508157  7290bd vmlinux         ip27 + bswap patch
5773777  268752  187424 6229953  5f0fc1 vmlinux         malta baseline
5773401  268752  187424 6229577  5f0e49 vmlinux         malta + bswap patch

Presumably the code size improvements yield a better cache hit rate
and thus better performance, compensating for the extra function call,
but this still needs to be benchmarked.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-10-26 09:49:43 +01:00