When p3_ioremap() was converted to ioremap_prot(), some breakage was
introduced: the 29-bit segmentation logic would trap the area range and
return an identity mapping without ever giving the area specification a
chance to force the mapping through page tables. This wires up a PCC
mask for the pgprot verification, in order to work out whether to
short-circuit the identity mapping on legacy parts, restoring the
previous behaviour.
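As a rough illustration of the check (a sketch only; the mask value and
helper name are hypothetical, not the in-tree definitions):

    /* Any PCC attribute in the requested pgprot forces the mapping
     * through page tables; only plain requests may take the 29-bit
     * identity-mapping shortcut on legacy parts. */
    #define _PAGE_PCC_MASK	0x0000000fUL	/* hypothetical */

    static inline int can_identity_map(pgprot_t prot)
    {
        return (pgprot_val(prot) & _PAGE_PCC_MASK) == 0;
    }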
Reported-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently it's still possible to reference the PAGE_KERNEL_PCC pgprot
encoding on X2 TLBs, which simply don't support it at all. Convert this
over to follow the nommu behaviour and hand back an invalid pgprot
value, so that we error out properly.
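In outline, the change amounts to something like this (a hedged sketch;
the invalid encoding shown is illustrative):

    #ifdef CONFIG_X2_TLB
    /* No PCC on extended-mode TLBs: hand back an invalid pgprot,
     * matching the nommu behaviour, so that callers fail cleanly. */
    #define PAGE_KERNEL_PCC(slot, type)	__pgprot(0)
    #endif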
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that sh64 has grown extended page flag support, we finally have a
free bit for _PAGE_SPECIAL. Wire it up.
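The wiring follows the usual pte accessor pattern (a sketch; the bit
position is hypothetical):

    #define _PAGE_SPECIAL	0x200	/* hypothetical free extended bit */

    static inline int pte_special(pte_t pte)
    {
        return pte_val(pte) & _PAGE_SPECIAL;
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
        return __pte(pte_val(pte) | _PAGE_SPECIAL);
    }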
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Since we no longer need to provide KM_type, the whole pte_*map_nested()
API is now redundant; remove it.
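The caller-side conversion is mechanical, loosely modelled on
copy_pte_range():

    /* Before: the nested variant implied a second, distinct KM_type
     * slot for the inner atomic kmap. */
    dst_pte = pte_offset_map(dst_pmd, addr);
    src_pte = pte_offset_map_nested(src_pmd, addr);
    /* ... copy ptes ... */
    pte_unmap_nested(src_pte);
    pte_unmap(dst_pte);

    /* After: atomic kmaps are stacked, so the plain API nests fine. */
    dst_pte = pte_offset_map(dst_pmd, addr);
    src_pte = pte_offset_map(src_pmd, addr);
    /* ... copy ptes ... */
    pte_unmap(src_pte);
    pte_unmap(dst_pte);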
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This provides a dummy value for legacy parts, which permits the entry
wiring to be open-coded. The compiler takes care of optimizing the
wiring away in these cases.
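One way to picture the pattern (a sketch; it assumes the dummy is a
zero flag definition, which is not quoted from the tree):

    #ifndef _PAGE_WIRED
    #define _PAGE_WIRED	0	/* dummy on legacy parts */
    #endif

    /* Callers can open-code the wiring unconditionally; with
     * _PAGE_WIRED == 0 the test below is constant-folded and the
     * wiring path compiles away entirely. */
    if (pte_val(pte) & _PAGE_WIRED)
        tlb_wire_entry(vma, addr, pte);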
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Provide a new extended page flag, _PAGE_WIRED, and an SH4 implementation
for wiring TLB entries, and use it in the fixmap code path so that we
can wire the fixmap TLB entry.
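In rough outline (a sketch; the flag value and the pte_mkwired() helper
are hypothetical):

    #define _PAGE_WIRED	(1 << 9)	/* hypothetical extended flag */

    static inline pte_t pte_mkwired(pte_t pte)	/* hypothetical */
    {
        return __pte(pte_val(pte) | _PAGE_WIRED);
    }

    /* Fixmap path: mark the pte wired, then have the SH4
     * tlb_wire_entry() implementation pin the corresponding UTLB
     * entry so it is never evicted. */
    static void wire_fixmap_entry(struct vm_area_struct *vma,
                                  unsigned long addr, pte_t *ptep,
                                  pte_t pte)
    {
        set_pte(ptep, pte_mkwired(pte));
        tlb_wire_entry(vma, addr, *ptep);
    }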
Signed-off-by: Matt Fleming <matt@console-pimps.org>
pte_write() should check whether the permissions include either the user
or kernel write permission bits. Likewise, pte_wrprotect() needs to
remove both the kernel and user write bits.
Without this patch, handle_tlbmiss() doesn't handle faulting in pages
from the P3 area (our vmalloc space) because of a write: mappings of the
P3 space have the _PAGE_EXT_KERN_WRITE bit set but not
_PAGE_EXT_USER_WRITE.
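The shape of the fix (a sketch; the bit values are illustrative):

    #define _PAGE_EXT_USER_WRITE	0x0020UL	/* illustrative */
    #define _PAGE_EXT_KERN_WRITE	0x0002UL	/* illustrative */
    #define _PAGE_EXT_WRITE	(_PAGE_EXT_USER_WRITE | \
                                 _PAGE_EXT_KERN_WRITE)

    static inline int pte_write(pte_t pte)
    {
        /* Writable if either write permission bit is set. */
        return pte_val(pte) & _PAGE_EXT_WRITE;
    }

    static inline pte_t pte_wrprotect(pte_t pte)
    {
        /* Clear both write bits, not just the user one. */
        return __pte(pte_val(pte) & ~_PAGE_EXT_WRITE);
    }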
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
To allow the MMU to be switched between 29-bit and 32-bit mode at
runtime, some constants need to be swapped for functions that return a
runtime value.
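The pattern is straightforward (a sketch; the names and addresses are
illustrative, not the in-tree definitions):

    /* Before: fixed at build time. */
    #define PCI_MEM_START	0xfd000000UL

    /* After: computed from the MMU mode at runtime; mmu_is_29bit()
     * is a hypothetical helper. */
    static inline unsigned long pci_mem_start(void)
    {
        return mmu_is_29bit() ? 0xfd000000UL : 0x10000000UL;
    }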
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the kmap_coherent()/kunmap_coherent() interface for recent
changes in both the page fault path and the shared cache flushers, as
well as adding some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call into update_mmu_cache() goes away
entirely, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
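The unmap side then ends up looking something like this (a sketch; the
pte lookup helper is hypothetical):

    void kunmap_coherent(void *kvaddr)
    {
        if (kvaddr >= (void *)FIXADDR_START) {
            unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;

            pte_clear(&init_mm, vaddr, kmap_coherent_pte(vaddr));
            /* The TLB flush is deferred to this point. */
            local_flush_tlb_one(get_asid(), vaddr);
        }
    }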
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Allocate one of the unused PTE bits for _PAGE_SPECIAL directly. This is
prep work for fast gup and the zero page revival.
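For context, the lockless gup walk leans on the flag roughly like so
(sketch):

    /* A special pte has no stable struct page behind it, so the
     * lockless walker gives up and falls back to the slow path. */
    if (pte_special(pte))
        return 0;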
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the sparc changes in commit a439fe51a1.
Most of the moving about was done with Sam's directions at:
http://marc.info/?l=linux-sh&m=121724823706062&w=2
with subsequent hacking and fixups entirely my fault.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>