Commit Graph

157 Commits

Anton Blanchard
315a699851 [PATCH] ppc64: use c99 initialisers in cputable code
Use c99 initialisers in the cputable code.
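
For illustration, the shape of the change (the struct layout below is a
simplified sketch, not the actual cputable entry):

	struct cpu_spec {
		unsigned int	pvr_mask;
		unsigned int	pvr_value;
		const char	*cpu_name;
	};

	/* Positional initialiser: order-dependent and fragile */
	static struct cpu_spec old_style = {
		0xffff0000, 0x00350000, "POWER4",
	};

	/* C99 designated initialiser: fields named explicitly */
	static struct cpu_spec new_style = {
		.pvr_mask	= 0xffff0000,
		.pvr_value	= 0x00350000,
		.cpu_name	= "POWER4",
	};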

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07 18:23:36 -07:00
Olaf Hering
2098eec228 [PATCH] ppc64: vdso32: fix link errors after recent toolchain changes
Patch from <amodra@bigpond.net.au>,
http://sources.redhat.com/bugzilla/show_bug.cgi?id=1042

/usr/bin/ld: arch/ppc64/kernel/vdso32/vdso32.so: The first section in the
PT_DYNAMIC segment is not the .dynamic section

Signed-off-by: Olaf Hering <olh@suse.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07 18:23:36 -07:00
Jeff Mahoney
5e6557722e [PATCH] openfirmware: generate device table for userspace
This converts the usage of struct of_match to struct of_device_id,
similar to pci_device_id.  This allows a device table to be generated,
which can be parsed by depmod(8) to generate a map file for module
loading.
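
As a hedged sketch of the new table format (the device names here are made
up for illustration, not taken from the patch):

	static const struct of_device_id example_match[] = {
		{ .name = "example-dev" },
		{ .compatible = "example,chip" },
		{ },	/* terminating entry */
	};
	MODULE_DEVICE_TABLE(of, example_match);

depmod(8) can then extract this table from the module and record which
OF device names or compatible strings the module handles.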

In order for hotplug to work with macio devices, patches to
module-init-tools and hotplug must be applied.  Those patches are
available at:

 ftp://ftp.suse.com/pub/people/jeffm/linux/macio-hotplug/

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-06 12:55:20 -07:00
Rusty Lynch
6772926bef [PATCH] kprobes: fix namespace problem and sparc64 build
The following renames arch_init, a kprobes function for performing any
architecture-specific initialization, to arch_init_kprobes in order to
clean up the namespace.

Also, this patch adds arch_init_kprobes to sparc64 to fix the sparc64 kprobes
build from the last return probe patch.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-05 19:19:00 -07:00
Linus Torvalds
12829dcb10 Merge rsync://rsync.kernel.org/pub/scm/linux/kernel/git/paulus/ppc64-2.6 2005-06-30 08:48:56 -07:00
Michael Ellerman
719d1cd867 [PATCH] ppc64: Replace custom locking code with a spinlock
The hvlpevent_queue (formerly ItLpQueue) has a member called xInUseWord
which is used for serialising access to the queue. Because it's a word
(ie. 32 bit) there's a custom 32-bit version of test_and_set_bit() or
thereabouts in ItLpQueue.c.

The xInUseWord is not shared with the hypervisor, so we can replace it
with a spinlock and remove the custom code.
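
A minimal sketch of that replacement (the lock and function names below are
illustrative, not the exact code from the patch):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(hvlpevent_queue_lock);

	void process_hvlpevents(void)
	{
		spin_lock(&hvlpevent_queue_lock);
		/* ... drain and dispatch queued HvLpEvents ... */
		spin_unlock(&hvlpevent_queue_lock);
	}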

There is also another locking mechanism (ItLpQueueInProcess). This is
redundant because it's only manipulated while the lock's held. Remove it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:17:02 +10:00
Michael Ellerman
ffe1b7e14e [PATCH] ppc64: Formatting cleanups in arch/ppc64/kernel/ItLpQueue.c
Just formatting cleanups:
 * rename some "nextLpEvent" variables to just "event"
 * make code fit in 80 columns
 * use brackets around if/else
 * use a temporary to make hvlpevent_clear_valid clearer

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:16:48 +10:00
Michael Ellerman
38fcdcfe38 [PATCH] ppc64: Cleanup whitespace in arch/ppc64/kernel/ItLpQueue.c
Just cleanup white space.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:16:28 +10:00
Michael Ellerman
9b0470200a [PATCH] ppc64: Cleanup proc printing of event types
The code that prints event counts by type uses a hand-coded number of tabs
to get the alignment right. Instead use a printf alignment which will allow
us to use the event_type strings elsewhere in the future.
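
For example (illustrative only, not the exact proc output code):

	/* "%-22s" left-aligns the event name in a fixed-width field,
	 * replacing a hand-counted run of tabs. */
	seq_printf(m, "  %-22s %10u\n", event_type_name, event_count);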

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:16:18 +10:00
Michael Ellerman
ed094150bd [PATCH] ppc64: Simplify counting of lpevents, remove lpevent_count from paca
Currently there's a per-cpu count of lpevents processed, a per-queue (ie.
global) total count, and a count by event type.

Replace all that with a count by event for each cpu. We only need to add
it up in the proc code.
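
A hedged sketch of the scheme (the names and the summing helper below are
illustrative, not the actual patch):

	#include <linux/percpu.h>

	#define NUM_LPEVENT_TYPES	9	/* illustrative value */

	static DEFINE_PER_CPU(unsigned long [NUM_LPEVENT_TYPES], lpevent_counts);

	/* In the event-processing path, on the local CPU: */
	static void count_lpevent(int type)
	{
		get_cpu_var(lpevent_counts)[type]++;
		put_cpu_var(lpevent_counts);
	}

	/* Only the proc code needs the total across CPUs: */
	static unsigned long sum_lpevents(int type)
	{
		unsigned long sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += per_cpu(lpevent_counts, cpu)[type];
		return sum;
	}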

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:16:09 +10:00
Michael Ellerman
74889802a1 [PATCH] ppc64: Don't count number of events processed for caller
Currently we count the number of lpevents processed in three separate places.

One of these counters is never read, so just remove it. This means
hvlpevent_queue_process() no longer needs to return the number of events
processed.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:15:53 +10:00
Michael Ellerman
937b31b114 [PATCH] ppc64: Rename ItLpQueue_* functions to hvlpevent_queue_*
Now that we've renamed the xItLpQueue structure, rename the functions that
operate on it also.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:15:42 +10:00
Michael Ellerman
a61874648d [PATCH] ppc64: Rename xItLpQueue to hvlpevent_queue
The xItLpQueue is a queue of HvLpEvents that we're given by the Hypervisor.
Rename xItLpQueue to hvlpevent_queue and make the type struct hvlpevent_queue.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:15:32 +10:00
Michael Ellerman
ab354b6379 [PATCH] ppc64: Move definition of xItLpQueue
The xItLpQueue is declared in LparData.c, move it into ItLpQueue.c.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:12:21 +10:00
Michael Ellerman
0f6014b37e [PATCH] ppc64: Make two ItLpQueue related functions static
External parties don't need to use ItLpQueue_getNextLpEvent() or
ItLpQueue_clearValid(); they're internal to ItLpQueue.c.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:08:56 +10:00
Michael Ellerman
7b01328d45 [PATCH] ppc64: Move xItLpQueue proc code into ItLpQueue.c
Move the code that displays xItLpQueue values in /proc into
ItLpQueue.c.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:08:44 +10:00
Michael Ellerman
512d31d6a8 [PATCH] ppc64: Move initialisation of xItLpQueue into ItLpQueue.c
The xItLpQueue is initialised manually in iSeries_setup_arch().  Move
this code into ItLpQueue.c for a cleaner separation.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:08:27 +10:00
Michael Ellerman
1b19bc7214 [PATCH] ppc64: Don't pass the pointers to xItLpQueue around
Because there's only one ItLpQueue and we know where it is, ie. xItLpQueue,
there's no point passing pointers to it around all over the place.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:07:57 +10:00
Michael Ellerman
ee48444b85 [PATCH] ppc64: Reorganise the paca initialisation macros
This patch updates the macros that initialise the paca to remove the lpq
parameter. It also rearranges them a bit with the hope of making them a
bit clearer.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:07:48 +10:00
Michael Ellerman
0c885c175c [PATCH] ppc64: Move set_spread_lpevents() into ItLpQueue.c
The only code outside ItLpQueue.c that refers to spread_lpevents is in
set_spread_lpevents(), so move it inside ItLpQueue.c and make spread_lpevents
static.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:07:33 +10:00
Michael Ellerman
fc07695386 [PATCH] ppc64: Spread lpevents by default on iSeries
With the previous patch in place, spreading lpevents by default becomes
a one liner.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:07:21 +10:00
Michael Ellerman
bea248fb30 [PATCH] ppc64: Remove lpqueue pointer from the paca on iSeries
The iSeries code keeps a pointer to the ItLpQueue in its paca struct. But
all these pointers end up pointing to the one place, ie. xItLpQueue.

So remove the pointer from the paca struct and just refer to xItLpQueue
directly where needed.

The only complication is that the spread_lpevents logic was implemented by
having a NULL lpqueue pointer in the paca on CPUs that weren't supposed to
process events. Instead we just compare the spread_lpevents value to the
processor id to get the same behaviour.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-30 15:07:09 +10:00
Alan Cox
200803dfe4 [PATCH] irqpoll
Anyone reporting a stuck IRQ should try these options.  We've found its
effectiveness varies in the Fedora case.  Quite a few systems with misdescribed
IRQ routing just work when you use irqpoll.  It also fixes up the VIA systems,
although that's now fixed with the VIA quirk (which we could just make the
default, as it's what the Redmond OS does, but Linus didn't like it
historically).

A small number of systems have jammed IRQ sources or misdescriptions that cause
an IRQ for which we have no handler registered anywhere.  In those cases it
doesn't help.

Signed-off-by: Alan Cox <number6@the-village.bc.nu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-28 21:20:35 -07:00
Olaf Hering
b1bdfbd0a2 [PATCH] remove duplicate printf in arch/ppc64/boot/main.c
The initrd size is printed as hex, so add the missing 0x prefix.
Remove a duplicate printf when an initrd is used.
Remove the use of a kernel type to access the first bytes of the initrd memory area.

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 21:01:46 +10:00
Olaf Hering
e6019db5a7 [PATCH] remove printk usage in arch/ppc64/boot/prom.c
Remove the printk usage in the zImage; we are not there yet.

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 21:01:35 +10:00
Olaf Hering
a58dfbbb2a [PATCH] remove unused arch/ppc64/boot/mknote.c
mknote is not called in arch/ppc64/boot/Makefile

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 21:01:28 +10:00
Olaf Hering
45891f7660 [PATCH] remove unused arch/ppc64/boot/piggyback.c
piggyback is not called in arch/ppc64/boot/Makefile

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 21:01:21 +10:00
Nathan Lynch
f5f1cc5437 [PATCH] ppc64: don't create spurious symlinks under node0 sysdev
On partitioned systems we can wind up creating spurious symlinks in
/sys/devices/system/node/node0 to non-present cpus.  The symlinks are
not broken; the problem is that we're potentially misinforming
userspace that there is a relationship between node0 and cpus which
are to be added later.  There's no guarantee at all that a cpu which
is added later will belong to node 0.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 20:55:25 +10:00
Arnd Bergmann
a341ad9724 [PATCH] ppc64: simplify nvram partition scanning code
Convert nvram_create_os_partition to use list_for_each_entry
instead of list_for_each, as this reduces the code size by
two lines.
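
The general shape of the conversion (the struct and list names here are
hypothetical, not the actual nvram code):

	struct nvram_partition {
		struct list_head partition;
		int signature;
	};

	/* Before: list_for_each() plus an explicit list_entry() */
	struct list_head *p;
	struct nvram_partition *part;

	list_for_each(p, &partitions) {
		part = list_entry(p, struct nvram_partition, partition);
		if (part->signature == free_sig)
			break;
	}

	/* After: list_for_each_entry() does the container lookup itself */
	list_for_each_entry(part, &partitions, partition) {
		if (part->signature == free_sig)
			break;
	}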

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-28 20:33:49 +10:00
Greg KH
8644d2a42b Merge rsync://rsync.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 2005-06-27 22:07:56 -07:00
Michael Ellerman
2311b1f2bb [PATCH] PCI: fix-pci-mmap-on-ppc-and-ppc64.patch
This is an updated version of Ben's fix-pci-mmap-on-ppc-and-ppc64.patch
which is in 2.6.12-rc4-mm1.

It fixes the patch to work on PPC iSeries, removes some debug printks
at Ben's request, and incorporates your
fix-pci-mmap-on-ppc-and-ppc64-fix.patch also.

Originally from Benjamin Herrenschmidt <benh@kernel.crashing.org>

This patch was discussed at length on linux-pci and so far, the last
iteration of it didn't raise any comment.  Its effect is a no-op on
architectures that don't define the new pci_resource_to_user() callback
anyway.  It allows architectures like ppc, which put weird things inside
PCI resource structures, to convert them to different values for the
user-visible ones.  It also fixes mmap'ing of IO space on those archs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2005-06-27 21:52:45 -07:00
Rusty Lynch
97f7943d70 [PATCH] Return probe redesign: ppc64 specific implementation
The following is a patch provided by Ananth Mavinakayanahalli that implements
the PPC64-specific parts of the new function return probe design.

NOTE: Since getting Ananth's patch, I changed trampoline_probe_handler()
      to consume each of the outstanding return probe instances (feedback
      on my original RFC after Ananth cut a patch), and also added the
      arch_init() function (adding arch-specific initialization).  I have
      cross compiled but have not tested this on a PPC64 machine.

Changes include:
 * Addition of kretprobe_trampoline to act as a dummy function for instrumented
   functions to return to, and for the return probe infrastructure to place
   a kprobe on, gaining control so that the return probe handler
   can be called, and so that the instruction pointer can be moved back
   to the original return address.
 * Addition of arch_init(), allowing a kprobe to be registered on
   kretprobe_trampoline
 * Addition of trampoline_probe_handler() which is used as the pre_handler
   for the kprobe inserted on kretprobe_implementation.  This is the function
   that handles the details for calling the return probe handler function
   and returning control back at the original return address
 * Addition of arch_prepare_kretprobe() which is setup as the pre_handler
   for a kprobe registered at the beginning of the target function by
   kernel/kprobes.c so that a return probe instance can be setup when
   a caller enters the target function.  (A return probe instance contains
    all the needed information for trampoline_probe_handler to do its job.)
 * Hooks added to the exit path of a task so that we can cleanup any left-over
   return probe instances (i.e. if a task dies while inside a targeted function
   then the return probe instance was reserved at the beginning of the function
   but the function never returns so we need to mark the instance as unused.)

Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:23:53 -07:00
Ananth N Mavinakayanahalli
9ec4b1f356 [PATCH] kprobes: fix single-step out of line - take2
Now that PPC64 has no-execute support, here is a second try to fix the
single step out of line during kprobe execution.  Kprobes on x86_64 already
solved this problem by allocating an executable page and using it as the
scratch area for stepping out of line.  Reuse that.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:23:52 -07:00
Benjamin Herrenschmidt
6ae3db110e [PATCH] ppc64: Add missing exports
This patch adds a couple of missing symbol exports.  flush_dcache_page is
used by the AGP driver and rtc_lock by the RTC driver.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:11:43 -07:00
Maneesh Soni
72414d3f1d [PATCH] kexec code cleanup
o The following patch provides purely cosmetic changes and corrects
  CodingStyle guideline issues like the ones below in the kexec-related files:

  o braces for one line "if" statements, "for" loops,
  o more than 80 column wide lines,
  o No space after "while", "for" and "switch" key words

o Changes:
  o take-2: Removed the extra tab before "case" key words.
  o take-3: Put operator at the end of line and space before "*/"

Signed-off-by: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:55 -07:00
Alexander Nyberg
6e274d1443 [PATCH] kdump: Use real pt_regs from exception
Makes kexec_crashdump() take a pt_regs * as an argument.  This allows us to
get the exact register state at the point of the crash.  If we come from a
direct panic assertion, NULL will be passed and the current registers saved
before the crashdump.

This hooks into two places:
die(): check the conditions under which we will panic when calling
do_exit and go there directly with the pt_regs that caused the fatal
fault.

die_nmi(): If we receive an NMI lockup while in the kernel use the
pt_regs and go directly to crash_kexec(). We're probably nested up badly
at this point so this might be the only chance to escape with proper
information.

Signed-off-by: Alexander Nyberg <alexn@telia.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:54 -07:00
R Sharada
fce0d57403 [PATCH] ppc64: kexec support for ppc64
This patch implements the kexec support for ppc64 platforms.

A couple of notes:

1)  We copy the pages in virtual mode, using the full base kernel
    and a statically allocated stack.   At kexec_prepare time we
    scan the pages and if any overlap our (0, _end[]) range we
    return -ETXTBSY.

    On PowerPC 64 systems running in LPAR (logical partitioning)
    mode, only a small region of memory, referred to as the RMO,
    can be accessed in real mode.  Since Linux runs with only one
    zone of memory in the memory allocator, and it can be orders of
    magnitude more memory than the RMO, looping until we allocate
    pages in the source region is not feasible.  Copying in virtual
    mode means we don't have to write hash table generation code and
    call the hypervisor to insert translations; instead we rely on the
    pinned kernel linear mapping.  The kernel already has a move to its
    linked location built in, so there is no requirement to load it at 0.
    If we want to load something other than a kernel, then a stub
    can be written to copy a linear chunk in real mode.

2)  The start entry point gets passed parameters from the kernel.
    Slaves are started at a fixed address after copying code from
    the entry point.

    All CPUs get passed their firmware assigned physical id in r3
    (most calling conventions use this register for the first
    argument).

    This is used to distinguish each CPU from all other CPUs.
    Since firmware is not around, there is no other way to obtain
    this information other than to pass it somewhere.

    A single CPU, referred to here as the master and the one executing
    the kexec call, branches to start with the address of start in r4.
    While this can be calculated, we have to load it through a gpr to
    branch to this point so defining the register this is contained
    in is free.  A stack of unspecified size is available at r1
    (also common calling convention).

    All remaining running CPUs are sent to start at absolute address
    0x60 after copying the first 0x100 bytes from start to address 0.
    This convention was chosen because it matches what the kernel
    has been doing itself.  (only gpr3 is defined).

    Note: This is not quite the convention of the kexec bootblock v2
    in the kernel.  A stub has been written to convert between them,
    and we may adjust the kernel in the future to allow this directly
    without any stub.

3)  Destination pages can be placed anywhere, even where they
    would not be accessible in real mode.  This will allow us to
    place ram disks above the RMO if we choose.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: R Sharada <sharada@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:51 -07:00
R Sharada
f4c82d5132 [PATCH] ppc64 kexec: native hash clear
Add code to clear the hash table and invalidate the tlb for native (SMP,
non-LPAR) mode.  Supports 16M and 4k pages.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: R Sharada <sharada@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:51 -07:00
Ingo Molnar
cc19ca86a0 [PATCH] consolidate PREEMPT options into kernel/Kconfig.preempt
This patch consolidates the CONFIG_PREEMPT and CONFIG_PREEMPT_BKL
preemption options into kernel/Kconfig.preempt.  This, besides reducing
source-code, also enables more centralized tweaking of preemption related
options.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:45 -07:00
Zwane Mwaikambo
f370513640 [PATCH] i386 CPU hotplug
(The i386 CPU hotplug patch provides infrastructure for some work which Pavel
is doing as well as for ACPI S3 (suspend-to-RAM) work which Li Shaohua
<shaohua.li@intel.com> is doing)

The following provides i386 architecture support for safely unregistering and
registering processors during runtime, updated for the current -mm tree.  In
order to avoid dumping cpu hotplug code into kernel/irq/* I dropped the
cpu_online check in do_IRQ() by modifying fixup_irqs().  The difference is
that on cpu offline, fixup_irqs() is called before we clear the cpu from
cpu_online_map, followed by a long delay, in order to ensure that we never
have any queued external interrupts on the APICs.  There are additional
changes to s390
and ppc64 to account for this change.

1) Add CONFIG_HOTPLUG_CPU
2) disable local APIC timer on dead cpus.
3) Disable preempt around irq balancing to prevent CPUs going down.
4) Print irq stats for all possible cpus.
5) Debugging check for interrupts on offline cpus.
6) Hacky fixup_irqs() to redirect irqs when cpus go off/online.
7) play_dead() for offline cpus to spin inside.
8) Handle offline cpus set in flush_tlb_others().
9) Grab lock earlier in smp_call_function() to prevent CPUs going down.
10) Implement __cpu_disable() and __cpu_die().
11) Enable local interrupts in cpu_enable() after fixup_irqs()
12) Don't fiddle with NMI on dead cpu, but leave intact on other cpus.
13) Program IRQ affinity whilst cpu is still in cpu_online_map on offline.

Signed-off-by: Zwane Mwaikambo <zwane@linuxpower.ca>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:29 -07:00
Michael Ellerman
856509d5da [PATCH] ppc64: Fix compile warnings in arch/ppc64/kernel/lparcfg.c
Stephen's patch to remove LparData.h missed an include in lparcfg.c.  This
fixes a few compile warnings.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:27 -07:00
Andrea Arcangeli
7286aa9b9a [PATCH] ppc64: fix seccomp with 32-bit userland
The seccomp check has to happen when entering the syscall and not when
exiting it, or regs->gpr[0] contains garbage during signal handling in
ppc64_rt_sigreturn (this actually might be a bug too, but an orthogonal
one, since we really have to run the check before invoking the syscall and
not after it).

Signed-off-by: Andrea Arcangeli <andrea@cpushare.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-24 00:05:18 -07:00
Linus Torvalds
24665cd00d Merge rsync://rsync.kernel.org/pub/scm/linux/kernel/git/paulus/ppc64-2.6 2005-06-23 09:49:55 -07:00
Prasanna S Panchamukhi
42cc20600a [PATCH] kprobes: Temporary disarming of reentrant probe for ppc64
This patch includes ppc64 architecture specific changes to support temporary
disarming on reentrancy of probes.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:25 -07:00
Rusty Lynch
7e1048b11c [PATCH] Move kprobe [dis]arming into arch specific code
The architecture independent code of the current kprobes implementation is
arming and disarming kprobes at registration time.  The problem is that the
code is assuming that arming and disarming is just done by a simple write
of some magic value to an address.  This is problematic for ia64 where our
instructions look more like structures, and we can not insert break points
by just doing something like:

*p->addr = BREAKPOINT_INSTRUCTION;

The following patch to 2.6.12-rc4-mm2 adds two new architecture dependent
functions:

     * void arch_arm_kprobe(struct kprobe *p)
     * void arch_disarm_kprobe(struct kprobe *p)

and then adds the new functions for each of the architectures that already
implement kprobes (sparc64/ppc64/i386/x86_64).
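
On i386, for instance, the two hooks reduce to something like the following
(a simplified sketch; the real functions also flush the instruction cache
and handle opcode save/restore details):

	void arch_arm_kprobe(struct kprobe *p)
	{
		*p->addr = BREAKPOINT_INSTRUCTION;
	}

	void arch_disarm_kprobe(struct kprobe *p)
	{
		*p->addr = p->opcode;	/* restore the original instruction */
	}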

I thought arch_[dis]arm_kprobe was the most descriptive of what was really
happening, but each of the architectures already had a disarm_kprobe()
function that was really a "disarm and do some other clean-up items as
needed when you stumble across a recursive kprobe." So...  I took the
liberty of changing the code that was calling disarm_kprobe() to call
arch_disarm_kprobe(), and then do the cleanup in the block of code dealing
with the recursive kprobe case.

So far this patch has been tested on i386, x86_64, and ppc64, but it still
needs to be tested on sparc64.

Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:21 -07:00
Ian Campbell
0f8e2d62fa [PATCH] use ${CROSS_COMPILE}installkernel in arch/*/boot/install.sh
The attached patch causes the various arch specific install.sh scripts to
look for ${CROSS_COMPILE}installkernel rather than just installkernel (in
both /sbin/ and ~/bin/ where the script already did this).  This allows you
to have e.g.  arm-linux-installkernel as a handy way to install on your
cross target.  It also prevents the script picking up on the host
/sbin/installkernel which causes the script to fall through and do the
install itself (which is what I actually use myself, with $INSTALL_PATH
set).

I don't believe it causes back-compatibility problems since calling the
host installkernel was never likely to work or be what you wanted when
cross compiling anyway.  If $CROSS_COMPILE isn't set then nothing changes.

I only use ARM and i386 myself but I figured it couldn't hurt to do the
whole lot.  I've cc'd those who I hope are the arch maintainers for files
that I've touched.

Signed-off-by: Ian Campbell <icampbell@arcom.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:07 -07:00
Andy Whitcroft
145e664231 [PATCH] ppc64: sparsemem memory model
Provide the architecture specific implementation for SPARSEMEM for PPC64
systems.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Mike Kravetz <kravetz@us.ibm.com> (in part)
Signed-off-by: Martin Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:06 -07:00
Andy Whitcroft
74b30be2e1 [PATCH] ppc64: add memory present
Provide hooks for PPC64 to allow memory models to be informed of installed
memory areas.  This allows SPARSEMEM to instantiate mem_map for the populated
areas.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Martin Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:05 -07:00
Andy Whitcroft
510f8fa7ba [PATCH] ppc64: add early_pfn_to_nid
Provide an implementation of early_pfn_to_nid for PPC64.  This is used by
memory models to determine the node from which to take allocations before the
memory allocators are fully initialised.
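
A hedged sketch of the shape such a function can take (the range table and
its setup are hypothetical, not the actual ppc64 implementation):

	struct early_node_range {
		unsigned long start_pfn;
		unsigned long end_pfn;
		int nid;
	};

	/* Filled in from the firmware memory/affinity information at boot. */
	static struct early_node_range early_ranges[MAX_NUMNODES];

	int early_pfn_to_nid(unsigned long pfn)
	{
		int i;

		for (i = 0; i < MAX_NUMNODES; i++)
			if (pfn >= early_ranges[i].start_pfn &&
			    pfn < early_ranges[i].end_pfn)
				return early_ranges[i].nid;

		return 0;	/* default to node 0 */
	}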

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Martin Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:05 -07:00
Andy Whitcroft
641c767389 [PATCH] sparsemem swiss cheese numa layouts
The part of the sparsemem patch which modifies memmap_init_zone() has recently
become a problem.  It changes behavior so that there is a call to
pfn_to_page() for each individual page inside of a node's range:
node_start_pfn through node_end_pfn.  It used to simply do this once, at the
beginning of the node, but having sparsemem's non-contiguous mem_map[]s inside
of a node made it necessary to change.

Mike Kravetz recently wrote a patch which made the NUMA code accept some new
kinds of layouts.  The system's memory was laid out like this, with node 0's
memory in two pieces: one before and one after node 1's memory:

	Node 0: +++++     +++++
	Node 1:      +++++

Previous behavior before Mike's patch was to assign nodes like this:

	Node 0: 00000     XXXXX
	Node 1:      11111

Where the 'X' areas were simply thrown away.  The new behavior was to make the
pg_data_t span node 0 across all of its areas, including areas that are really
node 1's:

	Node 0: 000000000000000
	Node 1:      11111

This wastes a little bit of mem_map space, but ends up being OK, and more
fully utilizes the system's memory.  memmap_init_zone() initializes all of the
"struct page"s for node 0, even for the "hole", but those never get used,
because there is no pfn_to_page() that resolves to those pages.  However,
since it only calls pfn_to_page() once, memmap_init_zone() always uses the
pages that were allocated for node0->node_mem_map, because:

	struct page *start = pfn_to_page(start_pfn);
	// effectively start = &node->node_mem_map[0]
	for (page = start; page < (start + size); page++) {
		init_page_here();...
	}

Slow, and wasteful, but generally harmless.

But, modify that to call pfn_to_page() for each loop iteration (like sparsemem
does):

	for (pfn = start_pfn; pfn < (start_pfn + size); pfn++) {
		page = pfn_to_page(pfn);
	}

And you end up trying to initialize node 1's pages too early, along with bogus
data from node 0.  This patch checks for those weird layouts and declines to
touch the pages, making the more frequent pfn_to_page() calls OK to do.

Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:05 -07:00