Commit Graph

7 Commits

Author SHA1 Message Date
Russell King 15d07dc9c5 ARM: move CP15 definitions to separate header file
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
2012-03-28 18:30:01 +01:00
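
For orientation, a minimal sketch of the kind of CP15 accessors such a private
header gathers, modeled on the kernel's get_cr()/set_cr() helpers for the
control register (the actual asm/cp15.h contains considerably more than this):

/* Sketch: CP15 control-register accessors of the sort collected into a
 * private header so drivers no longer clash with them. */
static inline unsigned long get_cr(void)
{
        unsigned long val;

        asm("mrc p15, 0, %0, c1, c0, 0  @ read control register"
            : "=r" (val) : : "cc");
        return val;
}

static inline void set_cr(unsigned long val)
{
        asm volatile("mcr p15, 0, %0, c1, c0, 0  @ write control register"
                     : : "r" (val) : "cc");
}
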
Nicolas Pitre 25cbe45440 ARM: fix cache-xsc3l2 after stack based kmap_atomic()
Since commit 3e4d3af501 "mm: stack based kmap_atomic()", it is actively
wrong to rely on fixed kmap type indices (namely KM_L2_CACHE) as
kmap_atomic() totally ignores them and a concurrent instance of it may
happily reuse any slot for any purpose.  Because kmap_atomic() is now
able to deal with reentrancy, we can get rid of the ad hoc mapping here,
and we don't even have to disable IRQs anymore (highmem case).

While the code is made much simpler, the use of __kunmap_atomic()
introduces a needless cache flush.  It is not clear whether removing
that flush would be worth the cost in code maintenance (I don't think
there are that many highmem users on that platform, if any, anyway).

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-12-19 12:57:08 -05:00
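
A sketch of the resulting highmem mapping helpers, following the
l2_map_va()/l2_unmap_va() names used in cache-xsc3l2.c; the exact kernel code
may differ in detail, but the pattern is to map each physical page on demand
with kmap_atomic_pfn() and drop it with kunmap_atomic(), with no fixed kmap
slot and no IRQ disabling:

#include <linux/highmem.h>

#ifdef CONFIG_HIGHMEM
static inline void l2_unmap_va(unsigned long va)
{
        if (va != -1)
                kunmap_atomic((void *)va);
}

static inline unsigned long l2_map_va(unsigned long pa, unsigned long prev_va)
{
        unsigned long va = prev_va & PAGE_MASK;
        unsigned long pa_offset = pa << (32 - PAGE_SHIFT);

        /* pa_offset is pa's offset within its page shifted to the top
         * bits; comparing it with prev_va's in-page offset detects a
         * page crossing without tracking the previous pfn (the offset
         * only decreases when the range walks onto a new page). */
        if (unlikely(pa_offset < (prev_va << (32 - PAGE_SHIFT)))) {
                l2_unmap_va(prev_va);
                va = (unsigned long)kmap_atomic_pfn(pa >> PAGE_SHIFT);
        }
        return va + (pa_offset >> (32 - PAGE_SHIFT));
}
#else
#define l2_unmap_va(va)         do { } while (0)
#define l2_map_va(pa, prev)     __phys_to_virt(pa)
#endif
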
Haojian Zhuang dc8601a224 [ARM] pxa: do not enable L2 after MMU is enabled
The outer cache code checked whether L2 was enabled; if L2 wasn't
enabled on XSC3, it would enable it. This operation is dangerous and can
make the system hang.

The XSC3 core documentation says the following:

"Following reset, the L2 Unified Cache Enable bit is cleared. To enable the L2
Cache, software may set the bit to a '1' before or at the same time as enabling
the MMU. Enabling the L2 Cache after the MMU has been enabled or disabling the
L2 Cache after the L2 Cache has been enabled, may result in unpredictable
behavior of the processor."

By the time the outer cache is initialized, the MMU is already enabled,
so we must not enable L2 at that point.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-01-01 15:50:34 +08:00
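
An approximate sketch of the init path after this fix, assuming the kernel's
get_cr() helper and the CR_L2 control-register bit: the outer-cache setup only
registers the L2 range operations when the boot code already enabled L2 early
enough, and never sets the enable bit itself:

static int __init xsc3_l2_init(void)
{
        if (!cpu_is_xsc3() || !xsc3_l2_present())
                return 0;

        if (get_cr() & CR_L2) {
                /* L2 was enabled before/with the MMU; safe to use it. */
                pr_info("XScale3 L2 cache enabled.\n");
                xsc3_l2_inv_all();

                outer_cache.inv_range = xsc3_l2_inv_range;
                outer_cache.clean_range = xsc3_l2_clean_range;
                outer_cache.flush_range = xsc3_l2_flush_range;
        }

        /* Deliberately never set CR_L2 here: the MMU is already on at
         * this point and enabling L2 now is unpredictable per the XSC3
         * documentation quoted above. */
        return 0;
}
core_initcall(xsc3_l2_init);
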
Nicolas Pitre 3902a15e78 [ARM] xsc3: add highmem support to L2 cache handling code
On xsc3, L2 cache ops are possible only on virtual addresses.  The code
is rearranged so as to progress linearly through the range and require
the fewest pte setups in the highmem case.  To protect the virtual
mapping thus created, interrupts currently must be disabled for up to a
page's worth of address range.

The interrupt disabling is done in a way that minimizes the overhead
within the inner loop.  The alternative would be separate code paths for
the highmem and non-highmem builds, which is less desirable.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:21 -04:00
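
An illustrative sketch of the range-walk structure this introduces
(simplified, and reusing the l2_map_va() helper sketched under the
kmap_atomic commit above; the original code installed its own fixmap pte
using the fixed KM_L2_CACHE slot and threaded the saved IRQ flags through
so interrupts could be re-enabled briefly between pages):

#define CACHE_LINE_SIZE 32

static inline void xsc3_l2_clean_mva(unsigned long addr)
{
        /* Clean L2 cache line by MVA (XSC3 CP15 encoding). */
        __asm__("mcr p15, 1, %0, c7, c11, 1" : : "r" (addr));
}

static void xsc3_l2_clean_range(unsigned long start, unsigned long end)
{
        unsigned long vaddr = -1, flags = 0;

#ifdef CONFIG_HIGHMEM
        raw_local_irq_save(flags);      /* protect the temporary mapping */
#endif

        start &= ~(CACHE_LINE_SIZE - 1);
        while (start < end) {
                /* l2_map_va() installs a mapping only when 'start'
                 * crosses into a new page, so the pte setup cost is
                 * paid once per page, not once per cache line. */
                vaddr = l2_map_va(start, vaddr);
                xsc3_l2_clean_mva(vaddr);
                start += CACHE_LINE_SIZE;
        }

#ifdef CONFIG_HIGHMEM
        raw_local_irq_restore(flags);
#endif

        dsb();
}
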
Dan Williams c7cf72dcad [ARM] xsc3: fix xsc3_l2_inv_range
When 'start' and 'end' are less than a cacheline apart and 'start' is
unaligned, we are done after cleaning and invalidating the first
cacheline.  So check for (start < end), which will not walk off into
invalid address ranges when (start > end).

This issue was caught by drivers/dma/dmatest.

2.6.27 is susceptible.

Cc: <stable@kernel.org>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Lothar Waßmann <LW@KARO-electronics.de>
Cc: Lennert Buytenhek <buytenh@marvell.com>
Cc: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-11-06 10:48:29 -07:00
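
A sketch of the fixed walk, reusing the per-line CP15 helpers and mapping
helper from the sketches above (the 2008 code operated on the addresses
directly); the point is that once the unaligned head line is handled, every
further step is guarded by a (start < end)-style test:

static inline void xsc3_l2_inv_mva(unsigned long addr)
{
        /* Invalidate L2 cache line by MVA (XSC3 CP15 encoding). */
        __asm__("mcr p15, 1, %0, c7, c7, 1" : : "r" (addr));
}

static void xsc3_l2_inv_range(unsigned long start, unsigned long end)
{
        unsigned long vaddr = -1;

        /* Partial first cache line: clean before invalidating so bytes
         * outside [start, end) are not lost, then align 'start' up. */
        if (start & (CACHE_LINE_SIZE - 1)) {
                vaddr = l2_map_va(start & ~(CACHE_LINE_SIZE - 1), vaddr);
                xsc3_l2_clean_mva(vaddr);
                xsc3_l2_inv_mva(vaddr);
                start = (start | (CACHE_LINE_SIZE - 1)) + 1;
        }

        /* Whole cache lines: the (start < ...) test stops the walk even
         * if the head handling already consumed the entire range. */
        while (start < (end & ~(CACHE_LINE_SIZE - 1))) {
                vaddr = l2_map_va(start, vaddr);
                xsc3_l2_inv_mva(vaddr);
                start += CACHE_LINE_SIZE;
        }

        /* Partial last cache line, only if anything remains. */
        if (start < end) {
                vaddr = l2_map_va(start, vaddr);
                xsc3_l2_clean_mva(vaddr);
                xsc3_l2_inv_mva(vaddr);
        }

        l2_unmap_va(vaddr);
        dsb();
}
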
Russell King 0ba8b9b273 [ARM] cputype: separate definitions, use them
Add asm/cputype.h, moving functions and definitions from asm/system.h
there.  Convert all users of 'processor_id' to the more efficient
read_cpuid_id() function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-01 12:06:23 +01:00
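
A sketch of what read_cpuid_id() boils down to: the Main ID register is CP15
c0/c0/0, so reading it is a single mrc instruction instead of a load of the
cached processor_id variable:

static inline unsigned int read_cpuid_id(void)
{
        unsigned int id;

        asm("mrc p15, 0, %0, c0, c0, 0  @ read Main ID register"
            : "=r" (id) : : "cc");
        return id;
}
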
Eric Miao 905a09d57a [ARM] pxa: add support for L2 outer cache on XScale3 (attempt 2)
(20072fd0c9 somehow lost most of its changes; it came from an mbox
archive applied with git-am.  No idea what happened.  This puts back
the missing bits.  --rmk)

The initial patch came from Lothar, Lennert turned it into a cleaner
one, and Eric Miao modified and tested it on PXA320.

This patch moves the L2 cache operations out of proc-xsc3.S into
dedicated outer cache support code.

CACHE_XSC3L2 can be deselected so that no L2-cache-specific code is
linked in and the L2 enable bit is not set.  This applies to the
following cases:

    a. _only_ PXA300/PXA310 support is included and no L2 cache is wanted
    b. PXA320 support is included, but L2 should be disabled

So the enabling of L2 depends on two things:

    - CACHE_XSC3L2 is selected
    - and L2 cache is present

The latter is only a safeguard (previous testing shows it works OK even
when this bit is turned on).

IXP-series processors with XScale3 cannot disable the L2 cache for the
moment, since they depend on it for coherent memory, so IXP should
always select CACHE_XSC3L2.

Other L2-relevant bits (i.e. those in the original code enclosed by
#if L2_CACHE_ENABLED .. #endif) are always turned on, as they showed no
side effects. Specifically, these bits are:

   - OC bits in TTBASE register (table walk outer cache attributes)
   - LLR Outer Cache Attributes (OC) in Auxiliary Control Register

Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-07-28 23:13:09 +01:00
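
A sketch of the L2 presence check this relies on, mirroring the
xsc3_l2_present() helper: the XSC3 L2 cache type register (CP15 register 1,
c0/c0/1) reports the L2 geometry, and non-zero size/way fields mean an L2 is
fitted.  The file itself is only built when CACHE_XSC3L2 is selected, so
deselecting the option links in no L2-specific code at all:

static inline int xsc3_l2_present(void)
{
        unsigned long l2ctype;

        /* L2 cache type register: describes associativity and size. */
        __asm__("mrc p15, 1, %0, c0, c0, 1" : "=r" (l2ctype));

        /* Non-zero way/size fields indicate an L2 is actually fitted. */
        return !!(l2ctype & 0xf8);
}
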