Commit Graph

30 Commits

Author SHA1 Message Date
Johannes Weiner 759496ba64 arch: mm: pass userspace fault flag to generic fault handler
Unlike global OOM handling, memory cgroup code will invoke the OOM killer
in any OOM situation because it has no way of telling faults occurring in
kernel context - which could be handled more gracefully - from
user-triggered faults.

Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to generic code so that memcg OOM
handling can be improved.
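
A minimal sketch of the conversion pattern, assuming a simplified arch
handler: FAULT_FLAG_USER is the flag this series passes down, while the
wrapper function and its body are illustrative, not copied from the patch.

  #include <linux/mm.h>
  #include <linux/sched.h>
  #include <linux/ptrace.h>

  /* Sketch only: mark faults raised from user space so that generic code
   * (and the memcg OOM path behind handle_mm_fault()) can tell them apart
   * from kernel-context faults. */
  static void sketch_do_page_fault(struct pt_regs *regs, unsigned long address)
  {
          struct mm_struct *mm = current->mm;
          struct vm_area_struct *vma;
          unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

          if (user_mode(regs))
                  flags |= FAULT_FLAG_USER;       /* the new flag */

          down_read(&mm->mmap_sem);
          vma = find_vma(mm, address);
          if (vma)
                  handle_mm_fault(mm, vma, address, flags);
          up_read(&mm->mmap_sem);
  }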

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-12 15:38:01 -07:00
Naoya Horiguchi 83467efbdb mm: migrate: check movability of hugepage in unmap_and_move_huge_page()
Currently hugepage migration works well only for pmd-based hugepages
(mainly due to lack of testing), so we had better not enable migration of
other levels of hugepages until we are ready for it.

Some users of hugepage migration (mbind, move_pages, and migrate_pages) do
a page table walk and check pud/pmd_huge() there, so they are safe.  But the
other users (softoffline and memory hotremove) don't do this, so without
this patch they can try to migrate unexpected types of hugepages.

To prevent this, we introduce hugepage_migration_support() as an
architecture-dependent check of whether hugepages are implemented on a pmd
basis or not.  And on some architectures multiple sizes of hugepages are
available, so hugepage_migration_support() also checks the hugepage size.
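
A sketch of what such a check can look like; the function name comes from
the text above, while the config symbol and the pmd-size test are assumed
for illustration.

  #include <linux/hugetlb.h>

  /* Sketch: report whether this hstate is a pmd-based hugepage size,
   * i.e. the only kind migration currently copes with. */
  static inline int sketch_hugepage_migration_support(struct hstate *h)
  {
  #ifdef CONFIG_ARCH_SUPPORTS_PMD_HUGEPAGE_MIGRATION      /* assumed symbol */
          return huge_page_shift(h) == PMD_SHIFT;
  #else
          return 0;
  #endif
  }

Callers such as unmap_and_move_huge_page(), soft-offline, and memory
hot-remove can then refuse to migrate hugepage sizes the architecture is
not ready for.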

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-11 15:57:49 -07:00
Santosh Shilimkar 374d5c9964 of: Specify initrd location using 64-bit
On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit.  These systems need the ability to specify the
initrd location using 64-bit numbers.

This patch globally modifies the early_init_dt_setup_initrd_arch() function to
use 64-bit numbers instead of the current unsigned long.

There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with the rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should not
be tied to the kernel you are booting"

More details on the discussion can be found here:
https://lkml.org/lkml/2013/6/20/690
https://lkml.org/lkml/2012/9/13/544
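
A sketch of the interface change; the prototype follows the description
above, while the body (recording the range chosen by the bootloader) is an
assumed, simplified example of an arch implementation.

  #include <linux/types.h>
  #include <linux/init.h>
  #include <linux/initrd.h>
  #include <asm/page.h>

  /* before: void early_init_dt_setup_initrd_arch(unsigned long start,
   *                                              unsigned long end);
   * after the change, 64-bit physical addresses fit even on 32-bit PAE: */
  void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
  {
          initrd_start = (unsigned long)__va(start);
          initrd_end   = (unsigned long)__va(end);
  }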

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
2013-07-24 11:10:01 +01:00
Johannes Weiner 609838cfed mm: invoke oom-killer from remaining unconverted page fault handlers
A few remaining architectures directly kill the page faulting task in an
out of memory situation.  This is usually not a good idea since that
task might not even use a significant amount of memory and so may not be
the optimal victim to resolve the situation.

Since 2.6.29's 1c0fe6e ("mm: invoke oom-killer from page fault") there
is a hook that architecture page fault handlers are supposed to call to
invoke the OOM killer and let it pick the right task to kill.  Convert
the remaining architectures over to this hook.

To have the previous behavior of simply taking out the faulting task, the
vm.oom_kill_allocating_task sysctl can be set to 1.
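
A sketch of the OOM branch after conversion; the wrapper function is
invented for illustration, pagefault_out_of_memory() is the hook referred
to above.

  #include <linux/mm.h>
  #include <linux/oom.h>
  #include <linux/sched.h>
  #include <linux/ptrace.h>

  /* Sketch: instead of killing the faulting task outright, hand the
   * decision to the OOM killer. */
  static void sketch_fault_out_of_memory(struct pt_regs *regs,
                                         struct mm_struct *mm)
  {
          up_read(&mm->mmap_sem);

          if (!user_mode(regs))
                  return;                 /* kernel faults keep their oops path */

          /* picks the best victim; vm.oom_kill_allocating_task=1 restores
           * the old "kill the faulting task" behaviour */
          pagefault_out_of_memory();
  }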

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>   [arch/arc bits]
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-09 10:33:20 -07:00
Linus Torvalds 7644a448cc Metag architecture changes for v3.11
Merge tag 'metag-for-v3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag

Pull Metag architecture changes from James Hogan:
 - Infrastructure and DT files for TZ1090 SoC (pin control drivers
   already merged via pinctrl tree).
 - Panic on boot instead of just warning if cache aliasing possible.
 - Various SMP/hotplug fixes.
 - Various other randconfig/sparse fixes.

* tag 'metag-for-v3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag: (24 commits)
  metag: move EXPORT_SYMBOL(csum_partial) to metag_ksyms.c
  metag: cpu hotplug: route_irq: preserve irq mask
  metag: kick: add missing irq_enter/exit to kick_handler()
  metag: smp: don't spin waiting for CPU to start
  metag: smp: enable irqs after set_cpu_online
  metag: use clear_tasks_mm_cpumask()
  metag: tz1090: select and instantiate pinctrl-tz1090-pdc
  metag: tz1090: select and instantiate pinctrl-tz1090
  metag: don't check for cache aliasing on smp cpu boot
  metag: panic if cache aliasing possible
  metag: *.dts: include using preprocessor
  metag: add <dt-bindings/> symlink
  metag/.gitignore: Extend the *.dtb pattern to match the dtb.S files
  metag/traps: include setup.h for the per_cpu_trap_init declaration
  metag/traps: Mark die() as __noreturn to match the declaration.
  metag/processor.h: Add missing cpuinfo_op declaration.
  metag/setup: Restrict scope for the capabilities variable
  metag/mm/cache: Restrict scope for metag_lnkget_probe
  metag/asm/irq.h: Declare init_IRQ
  metag/kernel/irq.c: Declare root_domain as static
  ...
2013-07-06 12:39:39 -07:00
Jiang Liu 5ad62f24ba mm/metag: prepare for killing free_all_bootmem_node()
Prepare for killing free_all_bootmem_node() by using free_all_bootmem().
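
A sketch of what the substitution looks like in an arch mem_init(); the
surrounding function is an assumed simplification, not the metag code.

  #include <linux/init.h>
  #include <linux/mm.h>
  #include <linux/bootmem.h>

  static void __init sketch_mem_init(void)
  {
          /* old, per-node:
           *     for_each_online_node(nid)
           *             totalram_pages += free_all_bootmem_node(NODE_DATA(nid));
           * new, one call that releases bootmem for every node:
           */
          totalram_pages += free_all_bootmem();
  }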

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 16:07:38 -07:00
Jiang Liu 132de6717c mm/metag: prepare for removing num_physpages and simplify mem_init()
Prepare for removing num_physpages and simplify mem_init().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 16:07:36 -07:00
Jiang Liu 0c98853473 mm: concentrate modification of totalram_pages into the mm core
Concentrate the code that modifies totalram_pages into the mm core, so the
arch memory initialization code doesn't need to take care of it.  With these
changes applied, only the following functions from the mm core modify the
global variable totalram_pages: free_bootmem_late(), free_all_bootmem(),
free_all_bootmem_node(), adjust_managed_page_count().

With this patch applied, it becomes much easier to keep totalram_pages and
zone->managed_pages consistent.
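
A sketch of the kind of central helper this refers to;
adjust_managed_page_count() is named above, but the simplified body (the
real one also handles locking and the highmem counter) is an assumption.

  #include <linux/mm.h>
  #include <linux/mmzone.h>

  /* Sketch: one place where both counters move together. */
  static void sketch_adjust_managed_page_count(struct page *page, long count)
  {
          page_zone(page)->managed_pages += count;
          totalram_pages += count;
  }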

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 16:07:33 -07:00
Jiang Liu 7b4b2a0d6c mm: accurately calculate zone->managed_pages for highmem zones
Commit "mm: introduce new field 'managed_pages' to struct zone" assumes
that all highmem pages will be freed into the buddy system by function
mem_init().  But that's not always true: some architectures may reserve
some highmem pages during boot.  For example, PPC may allocate highmem
pages for giant HugeTLB pages, and several architectures have code to
check the PageReserved flag to exclude highmem pages allocated during boot
when freeing highmem pages into the buddy system.

So treat highmem pages in the same way as normal pages, that is:
1) reset zone->managed_pages to zero in mem_init().
2) recalculate managed_pages when freeing pages into the buddy system, as
   sketched below.
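
A sketch of step 2, modelled on a free_highmem_page()-style helper; the
body here is simplified and partly assumed rather than quoted.

  #include <linux/mm.h>
  #include <linux/mmzone.h>
  #include <linux/highmem.h>

  /* Sketch: account a highmem page only when it actually reaches the
   * buddy allocator, instead of assuming so at boot. */
  static void sketch_free_highmem_page(struct page *page)
  {
          __free_reserved_page(page);             /* clear PG_reserved, free it */
          totalram_pages++;
          page_zone(page)->managed_pages++;       /* step 2: recalculate here */
          totalhigh_pages++;
  }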

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 16:07:33 -07:00
Jiang Liu 11199692d8 mm: change signature of free_reserved_area() to fix building warnings
Change the signature of free_reserved_area() according to Russell King's
suggestion to fix the following build warnings:

  arch/arm/mm/init.c: In function 'mem_init':
  arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
    free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
    ^
  In file included from include/linux/mman.h:4:0,
                   from arch/arm/mm/init.c:15:
  include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
   extern unsigned long free_reserved_area(unsigned long start, unsigned long end,

   mm/page_alloc.c: In function 'free_reserved_area':
>> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
   In file included from arch/mips/include/asm/page.h:49:0,
                    from include/linux/mmzone.h:20,
                    from include/linux/gfp.h:4,
                    from include/linux/mm.h:8,
                    from mm/page_alloc.c:18:
   arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
   mm/page_alloc.c: In function 'free_area_init_nodes':
   mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]

Also address some minor code review comments.
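
A sketch of the adjusted interface; the before/after prototypes are
paraphrased from the warnings above, and the exact trailing parameters are
assumed rather than quoted from the patch.

  /* before (integers):
   *     unsigned long free_reserved_area(unsigned long start,
   *                                      unsigned long end,
   *                                      int poison, char *s);
   * after (pointers), so calls like the ARM one in the warning,
   *     free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
   * no longer need casts between pointers and integers:
   */
  unsigned long free_reserved_area(void *start, void *end,
                                   int poison, char *s);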

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03 16:07:32 -07:00
Markos Chandras 42ad59e375 metag/mm/cache: Restrict scope for metag_lnkget_probe
Hide symbol since it's only used within the cache.c file

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-06-13 12:55:28 +01:00
Linus Torvalds 5a148af669 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
Pull powerpc update from Benjamin Herrenschmidt:
 "The main highlights this time around are:

   - A pile of additional POWER8 bits and nits, such as updated
     performance counter support (Michael Ellerman), new branch history
     buffer support (Anshuman Khandual), base support for the new PCI
     host bridge when not using the hypervisor (Gavin Shan) and other
     random related bits and fixes from various contributors.

   - Some rework of our page table format by Aneesh Kumar which fixes a
     thing or two and paves the way for THP support.  THP itself will
     not make it this time around however.

   - More Freescale updates, including Altivec support on the new e6500
     cores, new PCI controller support, and a pile of new boards support
     and updates.

   - The usual batch of trivial cleanups & fixes"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (156 commits)
  powerpc: Fix build error for book3e
  powerpc: Context switch the new EBB SPRs
  powerpc: Turn on the EBB H/FSCR bits
  powerpc: Replace CPU_FTR_BCTAR with CPU_FTR_ARCH_207S
  powerpc: Setup BHRB instructions facility in HFSCR for POWER8
  powerpc: Fix interrupt range check on debug exception
  powerpc: Update tlbie/tlbiel as per ISA doc
  powerpc: Print page size info during boot
  powerpc: print both base and actual page size on hash failure
  powerpc: Fix hpte_decode to use the correct decoding for page sizes
  powerpc: Decode the pte-lp-encoding bits correctly.
  powerpc: Use encode avpn where we need only avpn values
  powerpc: Reduce PTE table memory wastage
  powerpc: Move the pte free routines from common header
  powerpc: Reduce the PTE_INDEX_SIZE
  powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format
  powerpc: New hugepage directory format
  powerpc: Don't truncate pgd_index wrongly
  powerpc: Don't hard code the size of pte page
  powerpc: Save DAR and DSISR in pt_regs on MCE
  ...
2013-05-02 10:16:16 -07:00
Linus Torvalds 87c1f0f8c9 Misc arch/metag changes for v3.10-rc1
Merge tag 'metag-for-v3.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag

Pull arch/metag update from James Hogan:

 - Various fixes for the interrupting perf counter handling in metag's
   perf backend.

 - Add OProfile support based on perf.

 - Sets up cache partitions for SMP so the bootloader doesn't have to.

 - Patch from Paul Bolle to remove ARCH_POPULATES_NODE_MAP again
   (touches microblaze too).

 - Add TLS pointer regset to metag ptrace api.

 - Add exported metag DSP extended context handling header <asm/ech.h>.

 - Increase defconfig log buffer size to 128KiB.

 - Various fixes, typos, missing exports.

* tag 'metag-for-v3.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag:
  metag: defconfigs: increase log buffer 8KiB => 128KiB
  metag: avoid unnecessary builtin dtb rebuilds
  metag: add exported <asm/ech.h> for extended context handling
  metag: export _metag_da_present and cpu_2_hwthread_id
  metag: ptrace: Implement NT_METAG_TLS
  memblock: Kill ARCH_POPULATES_NODE_MAP once more
  metag: cachepart: fix get_global_dcache_size() typo
  metag: cachepart: take into account small cache bits
  metag: smp: copy cache partition and enable GCOn
  metag: OProfile support
  metag: perf: prepare for use by oprofile
  metag: perf: don't reset TXTACTCYC
  metag: perf: use hard_processor_id() to get thread
  metag: perf: fix frequency sampling (dynamic period)
  metag: perf: add missing prev_count updates
  metag: perf: fixes for interrupting perf counters
  metag: perf: fix wrap handling in delta calculation
  metag: perf: fix core internal / perf channel mux
2013-04-30 10:09:39 -07:00
Jiang Liu a564496e2c mm/metag: use free_highmem_page() to free highmem pages into buddy system
Use helper function free_highmem_page() to free highmem pages into
the buddy system.
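
A sketch of the caller side of this conversion; free_highmem_page() is the
helper named above, while the wrapper and its PFN-range parameters are
placeholders.

  #include <linux/init.h>
  #include <linux/mm.h>

  static void __init sketch_free_highpages(unsigned long highstart_pfn,
                                           unsigned long highend_pfn)
  {
          unsigned long pfn;

          /* replaces the open-coded ClearPageReserved()/__free_page()
           * sequence and keeps the page accounting in one place */
          for (pfn = highstart_pfn; pfn < highend_pfn; pfn++)
                  free_highmem_page(pfn_to_page(pfn));
  }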

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-29 15:54:32 -07:00
Jiang Liu c941836819 mm/metag: use common help functions to free reserved pages
Use common help functions to free reserved pages.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-29 15:54:31 -07:00
Paul Bolle 45b02f8d94 memblock: kill "config MAX_ACTIVE_REGIONS"
The Kconfig symbol MAX_ACTIVE_REGIONS is unused.  Commit 0ee332c145
("memblock: Kill early_node_map[]") removed the only place where it was
actually used, but it did not remove its Kconfig entries (for powerpc and
sh).

Remove those two entries (and the entry for metag, which popped up in
v3.9-rc1).

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
2013-04-18 13:03:53 +10:00
Paul Bolle 2b8660ed3b memblock: Kill ARCH_POPULATES_NODE_MAP once more
The Kconfig symbol ARCH_POPULATES_NODE_MAP was killed in v3.3. After
that it popped up again in microblaze and metag. Nobody noticed,
probably because these Kconfig symbols are entirely unused and these
architectures both select HAVE_MEMBLOCK_NODE_MAP. Anyhow, these two
entries can also be killed.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-20 13:07:58 +00:00
Paul Mundt 40f09f3cd6 metag: Inhibit NUMA balancing.
The metag NUMA implementation follows the SH model, using different nodes for
memories with different latencies. As such, we ensure that automated balancing
between nodes is inhibited, by way of the new ARCH_WANT_VARIABLE_LOCALITY.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-04 10:29:19 +00:00
James Hogan 44c2451080 metag: move mm/init.c exports out of metag_ksyms.c
It's less error-prone to have function symbols exported immediately
after the function rather than in metag_ksyms.c.  Move each EXPORT_SYMBOL
in metag_ksyms.c for a symbol defined in mm/init.c into mm/init.c.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:11:16 +00:00
James Hogan f75c28d896 metag: hugetlb: convert to vm_unmapped_area()
Convert hugetlb_get_unmapped_area_new_pmd() to use vm_unmapped_area()
rather than searching the virtual address space itself (see the sketch
after the error list below).  This fixes the following errors in
linux-next, due to the specified members being removed after other
architectures had already been converted:

arch/metag/mm/hugetlbpage.c: In function 'hugetlb_get_unmapped_area_new_pmd':
arch/metag/mm/hugetlbpage.c:199: error: 'struct mm_struct' has no member named 'cached_hole_size'
arch/metag/mm/hugetlbpage.c:200: error: 'struct mm_struct' has no member named 'free_area_cache'
arch/metag/mm/hugetlbpage.c:215: error: 'struct mm_struct' has no member named 'cached_hole_size'
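
A sketch of the converted search, roughly following how other
architectures use vm_unmapped_area(); the limits below are placeholders,
not metag's real hugepage area bounds.

  #include <linux/mm.h>
  #include <linux/sched.h>
  #include <linux/hugetlb.h>

  static unsigned long sketch_hugetlb_unmapped_area(unsigned long len)
  {
          struct vm_unmapped_area_info info;

          info.flags = 0;                         /* bottom-up search */
          info.length = len;
          info.low_limit = TASK_UNMAPPED_BASE;    /* placeholder lower bound */
          info.high_limit = TASK_SIZE;            /* placeholder upper bound */
          info.align_mask = PAGE_MASK & ~HPAGE_MASK;
          info.align_offset = 0;

          return vm_unmapped_area(&info);
  }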

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Michel Lespinasse <walken@google.com>
2013-03-02 20:11:13 +00:00
James Hogan f626dc704e metag: export metag_code_cache_flush_all
Various file systems indirectly use metag_code_cache_flush_all(), so
when they're built as modules we get build errors like the following:

ERROR: "metag_code_cache_flush_all" [fs/xfs/xfs.ko] undefined!

Therefore export this function to modules to fix the errors. This was
hit by a randconfig build.
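
The fix boils down to an export next to the function's definition; whether
the plain or GPL-only variant was used isn't stated here, so the plain one
below is an assumption.

  #include <linux/export.h>

  /* placed immediately after the definition of metag_code_cache_flush_all() */
  EXPORT_SYMBOL(metag_code_cache_flush_all);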

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:11:12 +00:00
James Hogan 883a635591 metag: add boot time LNKGET/LNKSET check
Add a boot-time check for whether LNKGET/LNKSET go through or around the
cache.  Depending on the configuration, an info message (no harm), a warning
(technically wrong but no harm), or a big WARN (expect failure in either
kernel or userland) may be emitted if the behaviour is not as expected:

Configuration                                Hardware   Response
------------------------------------------   --------   --------
AROUND_CACHE                                 through    pr_info
!AROUND_CACHE && ATOMICITY_LNKGET            around     WARN (kernel)
     "        && !ATOMICITY_LNKGET && SMP    around     WARN (user)
     "                   "         && !SMP   around     pr_warn

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:56 +00:00
James Hogan 0a38a8adc5 metag: add __init to metag_cache_probe()
metag_cache_probe() is only called from setup_arch(), so add the __init
attribute to it.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:56 +00:00
James Hogan 5633004cc2 metag: Build infrastructure
Add metag build infrastructure.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:54 +00:00
James Hogan c438b58e65 metag: TCM support
Add basic TCM (tightly coupled memory) support.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:21 +00:00
James Hogan bbc17704d5 metag: Highmem support
Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:20 +00:00
James Hogan e624e95bd8 metag: Huge TLB
Add huge TLB support to the metag architecture.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:20 +00:00
James Hogan 373cd784d0 metag: Memory handling
Meta has instructions for accessing:
 - bytes        - GETB (1 byte)
 - words        - GETW (2 bytes)
 - doublewords  - GETD (4 bytes)
 - longwords    - GETL (8 bytes)

All accesses must be aligned.  Unaligned accesses can be detected and
made to fault on Meta2; however, it isn't possible to fix up unaligned
writes, so we don't bother fixing up reads either.

This patch adds metag memory handling code including:
 - I/O memory (io.h, ioremap.c): Actually any virtual memory can be
   accessed with these helpers. A part of the non-MMUable address space
   is used for memory mapped I/O. The ioremap() function is implemented
   one to one for non-MMUable addresses.
 - User memory (uaccess.h, usercopy.c): User memory is directly
   accessible from privileged code.
 - Kernel memory (maccess.c): probe_kernel_write() needs to be
   overridden to use the I/O functions when doing a simple aligned
   write to non-writecombined memory, otherwise the write may be split
   by the generic version.

Note that because a portion of the virtual address space is non-MMUable,
and therefore always maps directly to the physical address space,
metag-specific I/O functions are made available (metag_in32,
metag_out32 etc). These cast the address argument to a pointer so that
they can be used with raw physical addresses. These accessors are only
to be used for accessing fixed core Meta architecture registers in the
non-MMU region, and not for any SoC/peripheral registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:19 +00:00
James Hogan f5df8e268f metag: Memory management
Add memory management files for metag.

Meta's 32bit virtual address space is split into two halves:
 - local (0x08000000-0x7fffffff): traditionally local to a hardware
   thread and incoherent between hardware threads. Each hardware thread
   has its own local MMU table. On Meta2 the local space can be
   globally coherent (GCOn) if the cache partitions coincide.
 - global (0x88000000-0xffff0000): coherent and traditionally global
   between hardware threads. On Meta2, each hardware thread has its own
   global MMU table.

The low 128MiB of each half is non-MMUable and maps directly to the
physical address space:
 - 0x00010000-0x07ffffff: contains Meta core registers and maps SoC bus
 - 0x80000000-0x87ffffff: contains low latency global core memories

Linux usually further splits the local virtual address space like this:
 - 0x08000000-0x3fffffff: user mappings
 - 0x40000000-0x7fffffff: kernel mappings

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:19 +00:00
James Hogan 99ef7c2ac1 metag: Cache/TLB handling
Add cache and TLB handling code for metag, including the required
callbacks used by MM switches and DMA operations. Caches can be
partitioned between the hardware threads and the global space; however,
this is usually configured by the bootloader, so Linux doesn't make any
changes to this configuration.  TLBs aren't configurable, so they only
need to be flushed.

On Meta1 the L1 cache was VIVT, which required a full flush on MM switch.
Meta2 has a VIPT L1 cache, so it doesn't require the full flush on MM
switch. Meta2 can also have a writeback L2 with hardware prefetch which
requires some special handling. Support is optional, and the L2 can be
detected and initialised by Linux.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:19 +00:00