linux/arch/powerpc/mm
Kumar Gala 37dd2badcf [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)
Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations and
kdump).  The support can be configured at compile time by setting
CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
desired.
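
A rough sketch of how the three options relate when the kernel is built
for a fixed, non-zero physical start (the constants and the to_virt()
helper below are made up for illustration; they are not the kernel's
actual macros):

  #define EXAMPLE_PAGE_OFFSET    0xc0000000UL  /* CONFIG_PAGE_OFFSET */
  #define EXAMPLE_KERNEL_START   0xc0000000UL  /* CONFIG_KERNEL_START */
  #define EXAMPLE_PHYS_START     0x10000000UL  /* CONFIG_PHYSICAL_START, 256M */

  /* Linear map: a physical address sits at a fixed offset from where
   * the kernel was linked to run. */
  static inline unsigned long to_virt(unsigned long phys)
  {
          return phys - EXAMPLE_PHYS_START + EXAMPLE_KERNEL_START;
  }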

Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
config option causes the kernel to determine at runtime the physical
addresses that correspond to CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.
If CONFIG_RELOCATABLE is set, CONFIG_PHYSICAL_START has no effect on where
the kernel actually runs.
However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
header physical address field in the resulting ELF image.
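
As a rough sketch of what the relocatable case boils down to (the names
below are illustrative only; this is not the actual early-boot code), the
kernel derives the virtual<->physical offset from wherever it actually
ended up rather than from CONFIG_PHYSICAL_START:

  #define EXAMPLE_KERNEL_START   0xc0000000UL  /* link-time virtual base */

  static unsigned long example_virt_phys_offset;

  /* Computed once at boot from where the loader actually placed us. */
  static void example_compute_offset(unsigned long actual_load_addr)
  {
          example_virt_phys_offset = EXAMPLE_KERNEL_START - actual_load_addr;
  }

The physical address recorded in the LOAD program header can be inspected
with readelf -l on the resulting vmlinux.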

Currently we are limited to running at a physical address that is a
multiple of 256M.  This is a consequence of how we use TLB entries to map
lowmem; it should be fixed to allow 64M, or maybe even 16M, alignment in
the future.  Attempting to run a kernel at a non-aligned physical address
is considered an error.
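
Expressed as a check (the name and constant below are illustrative, not
taken from the kernel):

  #define EXAMPLE_LOWMEM_ALIGN   (256UL << 20)  /* current TLB coverage granularity */

  /* The kernel's physical start must be a multiple of the lowmem mapping size. */
  static inline int example_phys_start_ok(unsigned long phys_start)
  {
          return (phys_start & (EXAMPLE_LOWMEM_ALIGN - 1)) == 0;
  }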

All the magic for this support is accomplished by proper initialization
of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
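
Roughly, ARCH_PFN_OFFSET enters the pfn<->struct page translation the same
way the generic FLATMEM helpers use it (the example_ names and sizes below
are illustrative, not the kernel's own definitions):

  struct example_page { unsigned long flags; };

  #define EXAMPLE_ARCH_PFN_OFFSET   0x10000UL  /* first PFN of RAM starting at 256M */
  static struct example_page example_mem_map[1024];

  /* mem_map[0] describes the first page of RAM, not physical address 0. */
  static inline struct example_page *example_pfn_to_page(unsigned long pfn)
  {
          return example_mem_map + (pfn - EXAMPLE_ARCH_PFN_OFFSET);
  }

  static inline unsigned long example_page_to_pfn(struct example_page *page)
  {
          return (unsigned long)(page - example_mem_map) + EXAMPLE_ARCH_PFN_OFFSET;
  }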

ARCH_PFN_OFFSET only affects normal memory, not IO mappings: ioremap goes
through map_page() and is therefore unaffected.
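
For example, a driver mapping a device register block keys off the
device's raw physical address and does not care where RAM starts (the
address and size below are made up):

  #include <linux/io.h>

  static void __iomem *example_map_device(void)
  {
          return ioremap(0xfe000000, 0x1000);  /* hypothetical MMIO window */
  }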

/dev/mem continues to allow access to any physical address in the system
regardless of how CONFIG_PHYSICAL_START is set.
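
For example, a userspace sketch that maps a page of physical memory
through /dev/mem (the physical address below is made up; any physical
address in the system can be passed as the mmap offset):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static int example_peek_phys(void)
  {
          int fd = open("/dev/mem", O_RDONLY);
          void *p;
          uint32_t val;

          if (fd < 0)
                  return -1;
          p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0x10000000UL);
          close(fd);
          if (p == MAP_FAILED)
                  return -1;
          val = *(volatile uint32_t *)p;
          munmap(p, 4096);
          return (int)val;
  }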

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-24 20:58:01 +10:00
40x_mmu.c [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr 2008-04-17 07:46:12 +10:00
44x_mmu.c [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem 2008-04-17 07:46:13 +10:00
fault.c [POWERPC] Make setjmp/longjmp code usable outside of xmon 2008-01-25 22:52:50 +11:00
fsl_booke_mmu.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
hash_low_32.S [POWERPC] Fix deadlock with mmu_hash_lock in hash_page_sync 2008-04-03 22:11:11 +11:00
hash_low_64.S [POWERPC] Provide a way to protect 4k subpages when using 64k pages 2008-01-24 10:06:01 +11:00
hash_native_64.c [POWERPC] Use 1TB segments 2007-10-12 14:05:17 +10:00
hash_utils_64.c [POWERPC] htab_remove_mapping is only used by MEMORY_HOTPLUG 2008-04-07 13:49:25 +10:00
hugetlbpage.c [POWERPC] Add hugepagesz boot-time parameter 2008-01-17 14:57:36 +11:00
init_32.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
init_64.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
Makefile [LIB]: Make PowerPC LMB code generic so sparc64 can use it too. 2008-02-13 16:56:49 -08:00
mem.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
mmap.c
mmu_context_32.c
mmu_context_64.c [POWERPC] Tidy up CONFIG_PPC_MM_SLICES code 2007-08-17 11:01:59 +10:00
mmu_decl.h [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr 2008-04-17 07:46:13 +10:00
numa.c [POWERPC] Add include of linux/of.h to numa.c 2008-04-24 20:57:32 +10:00
pgtable_32.c [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr 2008-04-17 07:46:12 +10:00
pgtable_64.c [POWERPC] Use 1TB segments 2007-10-12 14:05:17 +10:00
ppc_mmu_32.c [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr 2008-04-17 07:46:13 +10:00
slb_low.S [POWERPC] Use SLB size from the device tree 2007-12-11 13:45:56 +11:00
slb.c [POWERPC] Fix PMU + soft interrupt disable bug 2008-03-20 10:14:55 +11:00
slice.c spin_lock_unlocked cleanups 2007-10-17 08:43:01 -07:00
stab.c [LIB]: Make PowerPC LMB code generic so sparc64 can use it too. 2008-02-13 16:56:49 -08:00
subpage-prot.c [POWERPC] Provide a way to protect 4k subpages when using 64k pages 2008-01-24 10:06:01 +11:00
tlb_32.c
tlb_64.c [POWERPC] Fix CONFIG_SMP=n build error on ppc64 2007-11-13 16:22:44 +11:00