Merge branch 'master'

Jeff Garzik 2006-02-20 02:16:23 -05:00
commit 5b2ffed906
333 changed files with 5472 additions and 3199 deletions


@ -11,6 +11,8 @@
Joel Schopp <jschopp@austin.ibm.com>
ia64/x86_64:
Ashok Raj <ashok.raj@intel.com>
s390:
Heiko Carstens <heiko.carstens@de.ibm.com>
Authors: Ashok Raj <ashok.raj@intel.com>
Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
@ -44,9 +46,23 @@ maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
maxcpus=2 will only boot 2. You can choose to bring the
other cpus online later; read the FAQs for more info.
additional_cpus=n [x86_64 only] use this to limit hotpluggable cpus.
This option sets
cpu_possible_map = cpu_present_map + additional_cpus
additional_cpus=n [x86_64, s390 only] use this to limit hotpluggable cpus.
This option sets
cpu_possible_map = cpu_present_map + additional_cpus
ia64 and x86_64 use the number of disabled local apics in ACPI tables MADT
to determine the number of potentially hot-pluggable cpus. The implementation
should only rely on this to count the number of CPUs, but *MUST* not rely on the
apicid values in those tables for disabled apics. In the event the BIOS doesn't
mark such hot-pluggable cpus as disabled entries, one could use this
parameter "additional_cpus=x" to represent those cpus in the cpu_possible_map.
possible_cpus=n [s390 only] use this to set hotpluggable cpus.
This option sets possible_cpus bits in
cpu_possible_map, thus keeping the number of bits set
constant even if the machine gets rebooted.
This option overrides additional_cpus.
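For illustration (invented values, not from this commit): an s390 guest that
boots with 2 CPUs online could be started with

	possible_cpus=4

so that cpu_possible_map always has 4 bits set and 2 further CPUs can be
brought online later without a reboot.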
CPU maps and such
-----------------


@ -0,0 +1,234 @@
=================================
INTERNAL KERNEL ABI FOR FR-V ARCH
=================================
The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers
are used for special purposes, and the ABI is not consistent between modules and the core kernel,
or between MMU and no-MMU configurations.
This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and
most of them do not have any scratch registers, thus requiring at least one general purpose
register to be clobbered in such an event. Also, within the kernel core, it is possible to simply
jump or call directly between functions using a relative offset. This cannot be extended to
modules because the displacement is likely to be too large. Thus in modules the address of a
function to call must be calculated in a register and then used, requiring two extra instructions.
must be calculated in a register and then used, requiring two extra instructions.
This document has the following sections:
(*) System call register ABI
(*) CPU operating modes
(*) Internal kernel-mode register ABI
(*) Internal debug-mode register ABI
(*) Virtual interrupt handling
========================
SYSTEM CALL REGISTER ABI
========================
When a system call is made, the following registers are effective:
REGISTERS CALL RETURN
=============== ======================= =======================
GR7 System call number Preserved
GR8 Syscall arg #1 Return value
GR9-GR13 Syscall arg #2-6 Preserved
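A sketch of a call that follows this table. The register assignments come from
the table above; the trap mnemonic and encoding are assumptions (system calls
enter via trap #0 in this commit's trap tables), not something the table states:

	setlos	#__NR_getpid,gr7	; GR7 = system call number
	tira	gr0,#0			; enter the kernel via trap #0 (assumed)
	; on return, GR8 holds the result; GR9-GR13 are preserved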
===================
CPU OPERATING MODES
===================
The FR-V CPU has three basic operating modes. In order of increasing capability:
(1) User mode.
Basic userspace running mode.
(2) Kernel mode.
Normal kernel mode. There are many additional control registers available that may be
accessed in this mode, in addition to all the stuff available to user mode. This has two
submodes:
(a) Exceptions enabled (PSR.T == 1).
Exceptions will invoke the appropriate normal kernel mode handler. On entry to the
handler, the PSR.T bit will be cleared.
(b) Exceptions disabled (PSR.T == 0).
No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to
halt unless the CPU is told to jump into debug mode instead.
(3) Debug mode.
No exceptions may happen in this mode. Memory protection and management exceptions will be
flagged for later consideration, but the exception handler won't be invoked. Debugging traps
such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by
debugging events obtained from the other two modes.
All kernel mode registers may be accessed, plus a few extra debugging specific registers.
=================================
INTERNAL KERNEL-MODE REGISTER ABI
=================================
There are a number of permanent register assignments that are set up by entry.S in the exception
prologue. Note that there is a complete set of exception prologues for each of the user->kernel
and kernel->kernel transitions. There are also user->debug and kernel->debug mode
transition prologues.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Supervisor stack pointer
GR15 Current thread info pointer
GR16 GP-Rel base register for small data
GR28 Current exception frame pointer (__frame)
GR29 Current task pointer (current)
GR30 Destroyed by kernel mode entry
GR31 NOMMU Destroyed by debug mode entry
GR31 MMU Destroyed by TLB miss kernel mode entry
CCR.ICC2 Virtual interrupt disablement tracking
CCCR.CC3 Cleared by exception prologue (atomic op emulation)
SCR0 MMU See mmu-layout.txt.
SCR1 MMU See mmu-layout.txt.
SCR2 MMU Save for EAR0 (destroyed by icache insns in debug mode)
SCR3 MMU Save for GR31 during debug exceptions
DAMR/IAMR NOMMU Fixed memory protection layout.
DAMR/IAMR MMU See mmu-layout.txt.
Certain registers are also used or modified across function calls:
REGISTER CALL RETURN
=============== =============================== ===============================
GR0 Fixed Zero -
GR2 Function call frame pointer
GR3 Special Preserved
GR4-GR7 - Clobbered
GR8 Function call arg #1 Return value (or clobbered)
GR9 Function call arg #2 Return value MSW (or clobbered)
GR10-GR13 Function call arg #3-#6 Clobbered
GR14 - Clobbered
GR15-GR16 Special Preserved
GR17-GR27 - Preserved
GR28-GR31 Special Only accessed explicitly
LR Return address after CALL Clobbered
CCR/CCCR - Mostly Clobbered
================================
INTERNAL DEBUG-MODE REGISTER ABI
================================
This is the same as the kernel-mode register ABI for function calls. The difference is that in
debug-mode there's a different stack and a different exception frame. Almost all the global
registers from kernel-mode (including the stack pointer) may be changed.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Debug stack pointer
GR16 GP-Rel base register for small data
GR31 Current debug exception frame pointer (__debug_frame)
SCR3 MMU Saved value of GR31
Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be
exceedingly careful not to perform any operation that would interact with the main kernel in this regard. Hence
the debug mode code (gdbstub) is almost completely self-contained. The only external code used is
the sprintf family of functions.
Furthermore, break.S is so complicated because single-step mode does not switch off on entry to an
exception. That means unless manually disabled, single-stepping will blithely go on stepping into
things like interrupts. See gdbstub.txt for more information.
==========================
VIRTUAL INTERRUPT HANDLING
==========================
Because accesses to the PSR are so slow, and to disable interrupts we have to access it twice (once
to read and once to write), we don't actually disable interrupts at all if we don't have to. What
we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we
then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume
execution at the point the interrupt happened. Setting condition flags as a side effect of an
arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the
kernel - it does not affect userspace.
The flags we use are:
(*) CCR.ICC2.Z [Zero flag]
Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be
modified by logical instructions without affecting the Carry flag.
(*) CCR.ICC2.C [Carry flag]
Clear to indicate hardware interrupts are really disabled, set otherwise.
What happens is this:
(1) Normal kernel-mode operation.
ICC2.Z is 0, ICC2.C is 1.
(2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs
doing. This is done simply with an unlikely BEQ instruction.
(3) The interrupts are disabled (local_irq_disable)
ICC2.Z is set to 1.
(4) If interrupts were then re-enabled (local_irq_enable):
ICC2.Z would be set to 0.
A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if
interrupts were now virtually enabled, but physically disabled - which they're not, so the
trap isn't taken. The kernel would then be back to state (1).
(5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt
shouldn't actually have happened. It jumps aside, and there disables interrupts by setting
PSR.PIL to 14, and then clears ICC2.C.
(6) If interrupts were then saved and disabled again (local_irq_save):
ICC2.Z would be shifted into the save variable and masked off (giving a 1).
ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0).
(7) If interrupts were then restored from state (6) (local_irq_restore):
ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which
gives a result of 0 - thus leaving ICC2.Z set.
ICC2.C would remain unaffected (ie: 0).
A TIHI #2 instruction would be used to again assay the current state, but this would do
nothing as Z==1.
(8) If interrupts were then enabled (local_irq_enable):
ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0.
A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0
[interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true.
(9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to
1 and return.
(10) Immediately upon returning, the pending interrupt would be taken.
(11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is
clear, BEQ fails as per step (2)).
(12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely
enabled - or else the kernel wouldn't be here.
(13) On return from the interrupt handler, things would be back to state (1).
This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL.
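Reconstructed from the flag protocol above, a sketch of the two primitives (the
instruction choices are assumptions consistent with the text, not copied from
the kernel sources):

	# local_irq_disable: set ICC2.Z with a logical op, leaving ICC2.C alone
	andcc	gr0,gr0,gr0,icc2

	# local_irq_enable: clear ICC2.Z, then trap #2 if Z==0 && C==0
	# (i.e. an interrupt arrived whilst we were virtually disabled)
	oricc	gr0,#1,gr0,icc2
	tihi	icc2,gr0,#2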


@ -36,6 +36,10 @@ Module Parameters
(default is 1)
Use 'init=0' to bypass initializing the chip.
Try this if your computer crashes when you load the module.
* reset: int
(default is 0)
The driver used to reset the chip on load, but no longer does. Use
'reset=1' to restore the old behavior. Report if you need to do this.
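A usage sketch (the module name is a placeholder; this hunk does not identify
the driver):

	# modprobe <driver> reset=1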
Description
-----------


@ -1133,6 +1133,8 @@ running once the system is up.
Mechanism 1.
conf2 [IA-32] Force use of PCI Configuration
Mechanism 2.
nommconf [IA-32,X86_64] Disable use of MMCONFIG for PCI
Configuration
nosort [IA-32] Don't sort PCI devices according to
order given by the PCI BIOS. This sorting is
done to get a device order compatible with
@ -1636,6 +1638,9 @@ running once the system is up.
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
norandmaps Don't use address space randomization
Equivalent to echo 0 > /proc/sys/kernel/randomize_va_space
______________________________________________________________________
Changelog:


@ -136,17 +136,20 @@ Kprobes, jprobes, and return probes are implemented on the following
architectures:
- i386
- x86_64 (AMD-64, E64MT)
- x86_64 (AMD-64, EM64T)
- ppc64
- ia64 (Support for probes on certain instruction types is still in progress.)
- ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.)
3. Configuring Kprobes
When configuring the kernel using make menuconfig/xconfig/oldconfig,
ensure that CONFIG_KPROBES is set to "y". Under "Kernel hacking",
look for "Kprobes". You may have to enable "Kernel debugging"
(CONFIG_DEBUG_KERNEL) before you can enable Kprobes.
ensure that CONFIG_KPROBES is set to "y". Under "Instrumentation
Support", look for "Kprobes".
So that you can load and unload Kprobes-based instrumentation modules,
make sure "Loadable module support" (CONFIG_MODULES) and "Module
unloading" (CONFIG_MODULE_UNLOAD) are set to "y".
You may also want to ensure that CONFIG_KALLSYMS and perhaps even
CONFIG_KALLSYMS_ALL are set to "y", since kallsyms_lookup_name()
@ -262,18 +265,18 @@ at any time after the probe has been registered.
5. Kprobes Features and Limitations
As of Linux v2.6.12, Kprobes allows multiple probes at the same
address. Currently, however, there cannot be multiple jprobes on
the same function at the same time.
Kprobes allows multiple probes at the same address. Currently,
however, there cannot be multiple jprobes on the same function at
the same time.
In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section.
For obvious reasons, it's a bad idea to install a probe in
the code that implements Kprobes (mostly kernel/kprobes.c and
arch/*/kernel/kprobes.c). A patch in the v2.6.13 timeframe instructs
Kprobes to reject such requests.
The register_*probe functions will return -EINVAL if you attempt
to install a probe in the code that implements Kprobes (mostly
kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
as do_page_fault and notifier_call_chain).
If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and
@ -290,18 +293,14 @@ from the accidental ones. Don't drink and probe.
Kprobes makes no attempt to prevent probe handlers from stepping on
each other -- e.g., probing printk() and then calling printk() from a
probe handler. As of Linux v2.6.12, if a probe handler hits a probe,
that second probe's handlers won't be run in that instance.
probe handler. If a probe handler hits a probe, that second probe's
handlers won't be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.
In Linux v2.6.12 and previous versions, Kprobes' data structures are
protected by a single lock that is held during probe registration and
unregistration and while handlers are run. Thus, no two handlers
can run simultaneously. To improve scalability on SMP systems,
this restriction will probably be removed soon, in which case
multiple handlers (or multiple instances of the same handler) may
run concurrently on different CPUs. Code your handlers accordingly.
As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
the same handler) may run concurrently on different CPUs.
Kprobes does not use semaphores or allocate memory except during
Kprobes does not use mutexes or allocate memory except during
registration and unregistration.
Probe handlers are run with preemption disabled. Depending on the
@ -316,11 +315,18 @@ address instead of the real return address for kretprobed functions.
(As far as we can tell, __builtin_return_address() is used only
for instrumentation and error reporting.)
If the number of times a function is called does not match the
number of times it returns, registering a return probe on that
function may produce undesirable results. We have the do_exit()
and do_execve() cases covered. do_fork() is not an issue. We're
unaware of other specific cases where this could be a problem.
If the number of times a function is called does not match the number
of times it returns, registering a return probe on that function may
produce undesirable results. We have the do_exit() case covered.
do_execve() and do_fork() are not an issue. We're unaware of other
specific cases where this could be a problem.
If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn't support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.
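A minimal sketch of catching that error at registration time (hypothetical
module code; only the -EINVAL check is the point here):

	static struct kretprobe my_rp;	/* .kp.addr and .handler set up elsewhere */

	static int __init my_probe_init(void)
	{
		int ret = register_kretprobe(&my_rp);
		if (ret == -EINVAL)
			/* e.g. my_rp.kp.addr pointed at the x86_64 __switch_to */
			printk("probe refused on a blacklisted function\n");
		return ret;
	}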
6. Probe Overhead
@ -347,14 +353,12 @@ k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
7. TODO
a. SystemTap (http://sourceware.org/systemtap): Work in progress
to provide a simplified programming interface for probe-based
instrumentation.
b. Improved SMP scalability: Currently, work is in progress to handle
multiple kprobes in parallel.
c. Kernel return probes for sparc64.
d. Support for other architectures.
e. User-space probes.
a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
programming interface for probe-based instrumentation. Try it out.
b. Kernel return probes for sparc64.
c. Support for other architectures.
d. User-space probes.
e. Watchpoint probes (which fire on data references).
8. Kprobes Example
@ -411,8 +415,7 @@ int init_module(void)
printk("Couldn't find %s to plant kprobe\n", "do_fork");
return -1;
}
ret = register_kprobe(&kp);
if (ret < 0) {
if ((ret = register_kprobe(&kp)) < 0) {
printk("register_kprobe failed, returned %d\n", ret);
return -1;
}


@ -95,11 +95,13 @@ CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_IDEDMA_PCI_AUTO=y
CONFIG_BLK_DEV_IDE_AU1XXX=y
CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA=y
CONFIG_BLK_DEV_IDE_AU1XXX_BURSTABLE_ON=y
CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y
Also define 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to enable
the burst support on DBDMA controller.
If the system in use needs USB support, enable the following kernel configs for
high IDE-to-USB throughput.
@ -115,6 +117,8 @@ CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y
Also undefine 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to
disable the burst support on DBDMA controller.
ADD NEW HARD DISC TO WHITE OR BLACK LIST
----------------------------------------


@ -1,3 +1,26 @@
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.04
i. Support for 1078 type (ppc IOP) controller, device id : 0x60 added.
During initialization, depending on the device id, the template members
are initialized with function pointers specific to the ppc or
xscale controllers.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Fri Feb 03 14:16:25 PST 2006 - Sumant Patro
<Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.02
i. Register 16 byte CDB capability with scsi midlayer
"Ths patch properly registers the 16 byte command length capability of the
megaraid_sas controlled hardware with the scsi midlayer. All megaraid_sas
hardware supports 16 byte CDB's."
-Joshua Giles <joshua_giles@dell.com>
1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.02
3 Older Version : 00.00.02.01


@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 16
EXTRAVERSION =-rc3
EXTRAVERSION =-rc4
NAME=Sliding Snow Leopard
# *DOCUMENTATION*
@ -106,13 +106,12 @@ KBUILD_OUTPUT := $(shell cd $(KBUILD_OUTPUT) && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
$(error output directory "$(saved-output)" does not exist))
.PHONY: $(MAKECMDGOALS) cdbuilddir
$(MAKECMDGOALS) _all: cdbuilddir
.PHONY: $(MAKECMDGOALS)
cdbuilddir:
$(filter-out _all,$(MAKECMDGOALS)) _all:
$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
KBUILD_SRC=$(CURDIR) \
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $(MAKECMDGOALS)
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $@
# Leave processing to above invocation of make
skip-makefile := 1
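For reference, the usage this machinery serves -- O= on the command line sets
KBUILD_OUTPUT (standard kbuild behaviour, assumed rather than shown in this
hunk):

	$ make O=/tmp/kbuild menuconfig
	$ make O=/tmp/kbuild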


@ -111,7 +111,7 @@
CALL(sys_statfs)
/* 100 */ CALL(sys_fstatfs)
CALL(sys_ni_syscall)
CALL(OBSOLETE(sys_socketcall))
CALL(OBSOLETE(ABI(sys_socketcall, sys_oabi_socketcall)))
CALL(sys_syslog)
CALL(sys_setitimer)
/* 105 */ CALL(sys_getitimer)


@ -23,6 +23,7 @@
#include <linux/root_dev.h>
#include <linux/cpu.h>
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <asm/cpu.h>
#include <asm/elf.h>
@ -771,6 +772,10 @@ void __init setup_arch(char **cmdline_p)
paging_init(&meminfo, mdesc);
request_standard_resources(&meminfo, mdesc);
#ifdef CONFIG_SMP
smp_init_cpus();
#endif
cpu_init();
/*


@ -338,7 +338,6 @@ void __init smp_prepare_boot_cpu(void)
per_cpu(cpu_data, cpu).idle = current;
cpu_set(cpu, cpu_possible_map);
cpu_set(cpu, cpu_present_map);
cpu_set(cpu, cpu_online_map);
}


@ -64,6 +64,7 @@
* sys_connect:
* sys_sendmsg:
* sys_sendto:
* sys_socketcall:
*
* struct sockaddr_un loses its padding with EABI. Since the size of the
* structure is used as a validation test in unix_mkname(), we need to
@ -78,6 +79,7 @@
#include <linux/eventpoll.h>
#include <linux/sem.h>
#include <linux/socket.h>
#include <linux/net.h>
#include <asm/ipc.h>
#include <asm/uaccess.h>
@ -408,3 +410,31 @@ asmlinkage long sys_oabi_sendmsg(int fd, struct msghdr __user *msg, unsigned fla
return sys_sendmsg(fd, msg, flags);
}
asmlinkage long sys_oabi_socketcall(int call, unsigned long __user *args)
{
unsigned long r = -EFAULT, a[6];
switch (call) {
case SYS_BIND:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_bind(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_CONNECT:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_connect(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_SENDTO:
if (copy_from_user(a, args, 6 * sizeof(long)) == 0)
r = sys_oabi_sendto(a[0], (void __user *)a[1], a[2], a[3],
(struct sockaddr __user *)a[4], a[5]);
break;
case SYS_SENDMSG:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_sendmsg(a[0], (struct msghdr __user *)a[1], a[2]);
break;
default:
r = sys_socketcall(call, args);
}
return r;
}
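For orientation, a hypothetical userspace view of what this handler unpacks.
SYS_CONNECT and __NR_socketcall are the standard definitions; the wrapper
function itself is invented for illustration:

	#include <sys/socket.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <linux/net.h>		/* SYS_CONNECT */

	/* an OABI binary funnels connect() through the multiplexer */
	static long oabi_connect(int fd, const struct sockaddr *sa, socklen_t len)
	{
		unsigned long args[3] = { fd, (unsigned long)sa, len };

		return syscall(__NR_socketcall, SYS_CONNECT, args);
	}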


@ -140,6 +140,18 @@ static void __init poke_milo(void)
mb();
}
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int ncores = get_core_count();
@ -176,14 +188,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
max_cpus = ncores;
/*
* Initialise the possible/present maps.
* cpu_possible_map describes the set of CPUs which may be present
* cpu_present_map describes the set of CPUs populated
* Initialise the present map, which describes the set of CPUs
* actually populated at the present time.
*/
for (i = 0; i < max_cpus; i++) {
cpu_set(i, cpu_possible_map);
for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_present_map);
}
/*
* Do we need any more CPUs? If so, then let them know where


@ -13,7 +13,6 @@
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/platform_device.h>


@ -12,7 +12,6 @@
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/platform_device.h>


@ -27,8 +27,6 @@ static struct flash_platform_data nslu2_flash_data = {
};
static struct resource nslu2_flash_resource = {
.start = NSLU2_FLASH_BASE,
.end = NSLU2_FLASH_BASE + NSLU2_FLASH_SIZE,
.flags = IORESOURCE_MEM,
};
@ -116,6 +114,10 @@ static void __init nslu2_init(void)
{
ixp4xx_sys_init();
nslu2_flash_resource.start = IXP4XX_EXP_BUS_BASE(0);
nslu2_flash_resource.end =
IXP4XX_EXP_BUS_BASE(0) + ixp4xx_exp_bus_size - 1;
pm_power_off = nslu2_power_off;
platform_add_devices(nslu2_devices, ARRAY_SIZE(nslu2_devices));


@ -143,6 +143,18 @@ static void __init poke_milo(void)
mb();
}
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int ncores = get_core_count();
@ -179,14 +191,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
local_timer_setup(cpu);
/*
* Initialise the possible/present maps.
* cpu_possible_map describes the set of CPUs which may be present
* cpu_present_map describes the set of CPUs populated
* Initialise the present map, which describes the set of CPUs
* actually populated at the present time.
*/
for (i = 0; i < max_cpus; i++) {
cpu_set(i, cpu_possible_map);
for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_present_map);
}
/*
* Do we need any more CPUs? If so, then let them know where


@ -38,7 +38,6 @@
#include <linux/pm.h>
#include <linux/sched.h>
#include <linux/proc_fs.h>
#include <linux/pm.h>
#include <linux/interrupt.h>
#include <asm/io.h>


@ -25,6 +25,10 @@ config GENERIC_HARDIRQS
bool
default n
config TIME_LOW_RES
bool
default y
mainmenu "Fujitsu FR-V Kernel Configuration"
source "init/Kconfig"


@ -81,7 +81,7 @@ endif
# - reserve CC3 for use with atomic ops
# - all the extra registers are dealt with only at context switch time
CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media
CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15
CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15 -ffixed-icc2
AFLAGS += -mno-fdpic
ASFLAGS += -mno-fdpic


@ -200,12 +200,20 @@ __break_step:
movsg bpcsr,gr2
sethi.p %hi(__entry_kernel_external_interrupt),gr3
setlo %lo(__entry_kernel_external_interrupt),gr3
subcc gr2,gr3,gr0,icc0
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_uspace_external_interrupt),gr3
setlo.p %lo(__entry_uspace_external_interrupt),gr3
beq icc0,#2,__break_step_kernel_external_interrupt
sethi.p %hi(__entry_uspace_external_interrupt),gr3
setlo %lo(__entry_uspace_external_interrupt),gr3
subcc gr2,gr3,gr0,icc0
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_kernel_external_interrupt_virtually_disabled),gr3
setlo.p %lo(__entry_kernel_external_interrupt_virtually_disabled),gr3
beq icc0,#2,__break_step_uspace_external_interrupt
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_kernel_external_interrupt_virtual_reenable),gr3
setlo.p %lo(__entry_kernel_external_interrupt_virtual_reenable),gr3
beq icc0,#2,__break_step_kernel_external_interrupt_virtually_disabled
subcc gr2,gr3,gr0,icc0
beq icc0,#2,__break_step_kernel_external_interrupt_virtual_reenable
LEDS 0x2007,gr2
@ -254,6 +262,9 @@ __break_step_kernel_softprog_interrupt:
# step through an external interrupt from kernel mode
.globl __break_step_kernel_external_interrupt
__break_step_kernel_external_interrupt:
# deal with virtual interrupt disablement
beq icc2,#0,__break_step_kernel_external_interrupt_virtually_disabled
sethi.p %hi(__entry_kernel_external_interrupt_reentry),gr3
setlo %lo(__entry_kernel_external_interrupt_reentry),gr3
@ -294,6 +305,64 @@ __break_return_as_kernel_prologue:
#endif
rett #1
# we single-stepped into an interrupt handler whilst interrupts were merely virtually disabled
# need to really disable interrupts, set flag, fix up and return
__break_step_kernel_external_interrupt_virtually_disabled:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
ori gr2,#PSR_PIL_14,gr2 /* debugging interrupts only */
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subcc.p gr0,gr0,gr0,icc2 /* leave Z set, clear C */
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the interrupt happened
movsg pcsr,gr2
movgs gr2,bpcsr
lddi.p @(gr31,#REG_GR(2)),gr2
xor gr31,gr31,gr31
movgs gr0,brr
#ifdef CONFIG_MMU
movsg scr3,gr31
#endif
rett #1
# we stepped through into the virtual interrupt reenablement trap
#
# we also want to single step anyway, but after fixing up so that we get an event on the
# instruction after the broken-into exception returns
.globl __break_step_kernel_external_interrupt_virtual_reenable
__break_step_kernel_external_interrupt_virtual_reenable:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
# save the adjusted ICC2
movsg ccr,gr3
sti gr3,@(gr31,#REG_CCR)
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the trap happened
movsg pcsr,gr2
movgs gr2,bpcsr
# and then process the single step
bra __break_continue
# step through an internal exception from uspace mode
.globl __break_step_uspace_softprog_interrupt
__break_step_uspace_softprog_interrupt:


@ -116,6 +116,8 @@ __break_kerneltrap_fixup_table:
.long __break_step_uspace_external_interrupt
.section .trap.kernel
.org \tbr_tt
# deal with virtual interrupt disablement
beq icc2,#0,__entry_kernel_external_interrupt_virtually_disabled
bra __entry_kernel_external_interrupt
.section .trap.fixup.kernel
.org \tbr_tt >> 2
@ -259,25 +261,52 @@ __trap_fixup_kernel_data_tlb_miss:
.org TBR_TT_TRAP0
.rept 127
bra __entry_uspace_softprog_interrupt
bra __break_step_uspace_softprog_interrupt
.long 0,0
.long 0,0,0
.endr
.org TBR_TT_BREAK
bra __entry_break
.long 0,0,0
.section .trap.fixup.user
.org TBR_TT_TRAP0 >> 2
.rept 127
.long __break_step_uspace_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous kernel mode entry points
.section .trap.kernel
.org TBR_TT_TRAP0
.rept 127
bra __entry_kernel_softprog_interrupt
bra __break_step_kernel_softprog_interrupt
.long 0,0
.org TBR_TT_TRAP1
bra __entry_kernel_softprog_interrupt
# trap #2 in kernel - reenable interrupts
.org TBR_TT_TRAP2
bra __entry_kernel_external_interrupt_virtual_reenable
# miscellaneous kernel traps
.org TBR_TT_TRAP3
.rept 124
bra __entry_kernel_softprog_interrupt
.long 0,0,0
.endr
.org TBR_TT_BREAK
bra __entry_break
.long 0,0,0
.section .trap.fixup.kernel
.org TBR_TT_TRAP0 >> 2
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_external_interrupt_virtual_reenable
.rept 124
.long __break_step_kernel_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous debug mode entry points
.section .trap.break
.org TBR_TT_BREAK


@ -141,7 +141,10 @@ __entry_uspace_external_interrupt_reentry:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5
@ -193,9 +196,8 @@ __entry_uspace_external_interrupt_reentry:
.type __entry_kernel_external_interrupt,@function
__entry_kernel_external_interrupt:
LEDS 0x6210
sub sp,gr15,gr31
LEDS32
// sub sp,gr15,gr31
// LEDS32
# set up the stack pointer
or.p sp,gr0,gr30
@ -231,7 +233,10 @@ __entry_kernel_external_interrupt_reentry:
stdi gr24,@(gr28,#REG_GR(24))
stdi gr26,@(gr28,#REG_GR(26))
sti gr29,@(gr28,#REG_GR(29))
stdi gr30,@(gr28,#REG_GR(30))
stdi.p gr30,@(gr28,#REG_GR(30))
# note virtual interrupts will be fully enabled upon return
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
movsg tbr ,gr20
movsg psr ,gr22
@ -267,7 +272,10 @@ __entry_kernel_external_interrupt_reentry:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set the return address
sethi.p %hi(__entry_return_from_kernel_interrupt),gr4
@ -291,6 +299,45 @@ __entry_kernel_external_interrupt_reentry:
.size __entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt
###############################################################################
#
# deal with interrupts that were actually virtually disabled
# - we need to really disable them, flag the fact and return immediately
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtually_disabled
.type __entry_kernel_external_interrupt_virtually_disabled,@function
__entry_kernel_external_interrupt_virtually_disabled:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30
ori gr30,#PSR_PIL_14,gr30 ; debugging interrupts only
movgs gr30,psr
subcc gr0,gr0,gr0,icc2 ; leave Z set, clear C
rett #0
.size __entry_kernel_external_interrupt_virtually_disabled,.-__entry_kernel_external_interrupt_virtually_disabled
###############################################################################
#
# deal with re-enablement of interrupts that were pending when virtually re-enabled
# - set ICC2.C, re-enable the real interrupts and return
# - we can clear ICC2.Z because we shouldn't be here if it's not 0 [due to TIHI]
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtual_reenable
.type __entry_kernel_external_interrupt_virtual_reenable,@function
__entry_kernel_external_interrupt_virtual_reenable:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30 ; re-enable interrupts
movgs gr30,psr
subicc gr0,#1,gr0,icc2 ; clear Z, set C
rett #0
.size __entry_kernel_external_interrupt_virtual_reenable,.-__entry_kernel_external_interrupt_virtual_reenable
###############################################################################
#
@ -335,6 +382,7 @@ __entry_uspace_softprog_interrupt_reentry:
sethi.p %hi(__entry_return_from_user_exception),gr23
setlo %lo(__entry_return_from_user_exception),gr23
bra __entry_common
.size __entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt
@ -495,7 +543,10 @@ __entry_common:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# set up virtual interrupt disablement
subicc gr0,#1,gr0,icc2 /* clear Z flag, set C flag */
# set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5
@ -1418,11 +1469,27 @@ sys_call_table:
.long sys_add_key
.long sys_request_key
.long sys_keyctl
.long sys_ni_syscall // sys_vperfctr_open
.long sys_ni_syscall // sys_vperfctr_control /* 290 */
.long sys_ni_syscall // sys_vperfctr_unlink
.long sys_ni_syscall // sys_vperfctr_iresume
.long sys_ni_syscall // sys_vperfctr_read
.long sys_ioprio_set
.long sys_ioprio_get /* 290 */
.long sys_inotify_init
.long sys_inotify_add_watch
.long sys_inotify_rm_watch
.long sys_migrate_pages
.long sys_openat /* 295 */
.long sys_mkdirat
.long sys_mknodat
.long sys_fchownat
.long sys_futimesat
.long sys_newfstatat /* 300 */
.long sys_unlinkat
.long sys_renameat
.long sys_linkat
.long sys_symlinkat
.long sys_readlinkat /* 305 */
.long sys_fchmodat
.long sys_faccessat
.long sys_pselect6
.long sys_ppoll
syscall_table_size = (. - sys_call_table)


@ -513,6 +513,9 @@ __head_mmu_enabled:
movgs gr0,ccr
movgs gr0,cccr
# initialise the virtual interrupt handling
subcc gr0,gr0,gr0,icc2 /* set Z, clear C */
#ifdef CONFIG_MMU
movgs gr3,scr2
movgs gr3,scr3


@ -287,18 +287,11 @@ asmlinkage void do_IRQ(void)
struct irq_source *source;
int level, cpu;
irq_enter();
level = (__frame->tbr >> 4) & 0xf;
cpu = smp_processor_id();
#if 0
{
static u32 irqcount;
*(volatile u32 *) 0xe1200004 = ~((irqcount++ << 8) | level);
*(volatile u16 *) 0xffc00100 = (u16) ~0x9999;
mb();
}
#endif
if ((unsigned long) __frame - (unsigned long) (current + 1) < 512)
BUG();
@ -308,40 +301,12 @@ asmlinkage void do_IRQ(void)
kstat_this_cpu.irqs[level]++;
irq_enter();
for (source = frv_irq_levels[level].sources; source; source = source->next)
source->doirq(source);
irq_exit();
__clr_MASK(level);
/* only process softirqs if we didn't interrupt another interrupt handler */
if ((__frame->psr & PSR_PIL) == PSR_PIL_0)
if (local_softirq_pending())
do_softirq();
#ifdef CONFIG_PREEMPT
local_irq_disable();
while (--current->preempt_count == 0) {
if (!(__frame->psr & PSR_S) ||
current->need_resched == 0 ||
in_interrupt())
break;
current->preempt_count++;
local_irq_enable();
preempt_schedule();
local_irq_disable();
}
#endif
#if 0
{
*(volatile u16 *) 0xffc00100 = (u16) ~0x6666;
mb();
}
#endif
irq_exit();
} /* end do_IRQ() */


@ -43,15 +43,6 @@ void iounmap(void *addr)
{
}
/*
* __iounmap unmaps nearly everything, so be careful
* it doesn't currently free pointer/page tables anymore but it
* wasn't used anyway and might be added later.
*/
void __iounmap(void *addr, unsigned long size)
{
}
/*
* Set new cache mode for some kernel address space.
* The caller must push data for that range itself, if such data may already


@ -33,6 +33,10 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
default y
config ISA
bool
default y


@ -169,7 +169,7 @@ endif
config CPU_H8300H
bool
depends on (H8002 || H83007 || H83048 || H83068)
depends on (H83002 || H83007 || H83048 || H83068)
default y
config CPU_H8S

arch/i386/boot/.gitignore (new file)

@ -0,0 +1,3 @@
bootsect
bzImage
setup

arch/i386/boot/tools/.gitignore (new file)

@ -0,0 +1 @@
build

arch/i386/kernel/.gitignore (new file)

@ -0,0 +1 @@
vsyscall.lds


@ -1,4 +1,5 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <asm/processor.h>
#include <asm/msr.h>


@ -282,6 +282,10 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
if (val != CPUFREQ_RESUMECHANGE)
write_seqlock_irq(&xtime_lock);
if (!ref_freq) {
if (!freq->old){
ref_freq = freq->new;
goto end;
}
ref_freq = freq->old;
loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy;
#ifndef CONFIG_SMP
@ -307,6 +311,7 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
#endif
}
end:
if (val != CPUFREQ_RESUMECHANGE)
write_sequnlock_irq(&xtime_lock);


@ -7,6 +7,21 @@
* for details.
*/
/*
* The caller puts arg2 in %ecx, which gets pushed. The kernel will use
* %ecx itself for arg2. The pushing is because the sysexit instruction
* (found in entry.S) requires that we clobber %ecx with the desired %esp.
* User code might expect that %ecx is unclobbered though, as it would be
* for returning via the iret instruction, so we must push and pop.
*
* The caller puts arg3 in %edx, which the sysexit instruction requires
* for %eip. Thus, exactly as for arg2, we must push and pop.
*
* Arg6 is different. The caller puts arg6 in %ebp. Since the sysenter
* instruction clobbers %esp, the user's %esp won't even survive entry
* into the kernel. We store %esp in %ebp. Code in entry.S must fetch
* arg6 from the stack.
*/
.text
.globl __kernel_vsyscall
.type __kernel_vsyscall,@function


@ -20,7 +20,20 @@ struct frame_head {
} __attribute__((packed));
static struct frame_head *
dump_backtrace(struct frame_head * head)
dump_kernel_backtrace(struct frame_head * head)
{
oprofile_add_trace(head->ret);
/* frame pointers should strictly progress back up the stack
* (towards higher addresses) */
if (head >= head->ebp)
return NULL;
return head->ebp;
}
static struct frame_head *
dump_user_backtrace(struct frame_head * head)
{
struct frame_head bufhead[2];
@ -105,10 +118,10 @@ x86_backtrace(struct pt_regs * const regs, unsigned int depth)
if (!user_mode_vm(regs)) {
while (depth-- && valid_kernel_stack(head, regs))
head = dump_backtrace(head);
head = dump_kernel_backtrace(head);
return;
}
while (depth-- && head)
head = dump_backtrace(head);
head = dump_user_backtrace(head);
}


@ -761,6 +761,59 @@ int acpi_map_cpu2node(acpi_handle handle, int cpu, long physid)
return (0);
}
int additional_cpus __initdata = -1;
static __init int setup_additional_cpus(char *s)
{
if (s)
additional_cpus = simple_strtol(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
/*
* cpu_possible_map should be static; it cannot change as CPUs
* are onlined or offlined. The reason is that per-cpu data structures
* are allocated by some modules at init time, and don't expect to
* do this dynamically on cpu arrival/departure.
* cpu_present_map on the other hand can change dynamically.
* In case when cpu_hotplug is not compiled, then we resort to current
* behaviour, which is cpu_possible == cpu_present.
* - Ashok Raj
*
* Three ways to find out the number of additional hotplug CPUs:
* - If the BIOS specified disabled CPUs in ACPI/mptables use that.
* - The user can overwrite it with additional_cpus=NUM
* - Otherwise don't reserve additional CPUs.
*/
__init void prefill_possible_map(void)
{
int i;
int possible, disabled_cpus;
disabled_cpus = total_cpus - available_cpus;
if (additional_cpus == -1) {
if (disabled_cpus > 0)
additional_cpus = disabled_cpus;
else
additional_cpus = 0;
}
possible = available_cpus + additional_cpus;
if (possible > NR_CPUS)
possible = NR_CPUS;
printk(KERN_INFO "SMP: Allowing %d CPUs, %d hotplug CPUs\n",
possible, max((possible - available_cpus), 0));
for (i = 0; i < possible; i++)
cpu_set(i, cpu_possible_map);
}
int acpi_map_lsapic(acpi_handle handle, int *pcpu)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };


@ -569,7 +569,9 @@ GLOBAL_ENTRY(ia64_trace_syscall)
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value
.ret3: br.cond.sptk .work_pending_syscall_end
.ret3:
(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
br.cond.sptk .work_pending_syscall_end
strace_error:
ld8 r3=[r2] // load pt_regs.r8


@ -10,23 +10,8 @@
#include <linux/string.h>
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memchr);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
EXPORT_SYMBOL(strnlen);
EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strpbrk);
#include <asm/checksum.h>
EXPORT_SYMBOL(ip_fast_csum); /* hand-coded assembly */


@ -430,6 +430,7 @@ setup_arch (char **cmdline_p)
if (early_console_setup(*cmdline_p) == 0)
mark_bsp_online();
parse_early_param();
#ifdef CONFIG_ACPI
/* Initialize the ACPI boot-time table parser */
acpi_table_init();
@ -688,6 +689,9 @@ void
setup_per_cpu_areas (void)
{
/* start_kernel() requires this... */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
#endif
}
/*


@ -129,7 +129,7 @@ DEFINE_PER_CPU(int, cpu_state);
/* Bitmasks of currently online, and possible CPUs */
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map;
cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
@ -506,9 +506,6 @@ smp_build_cpu_map (void)
for (cpu = 0; cpu < NR_CPUS; cpu++) {
ia64_cpu_to_sapicid[cpu] = -1;
#ifdef CONFIG_HOTPLUG_CPU
cpu_set(cpu, cpu_possible_map);
#endif
}
ia64_cpu_to_sapicid[0] = boot_cpu_id;


@ -250,32 +250,27 @@ time_init (void)
set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec);
}
#define SMALLUSECS 100
/*
* Generic udelay assumes that if preemption is allowed and the thread
* migrates to another CPU, the ITC values are synchronized across
* all CPUs.
*/
static void
ia64_itc_udelay (unsigned long usecs)
{
unsigned long start = ia64_get_itc();
unsigned long end = start + usecs*local_cpu_data->cyc_per_usec;
while (time_before(ia64_get_itc(), end))
cpu_relax();
}
void (*ia64_udelay)(unsigned long usecs) = &ia64_itc_udelay;
void
udelay (unsigned long usecs)
{
unsigned long start;
unsigned long cycles;
unsigned long smallusecs;
/*
* Execute the non-preemptible delay loop (because the ITC might
* not be synchronized between CPUS) in relatively short time
* chunks, allowing preemption between the chunks.
*/
while (usecs > 0) {
smallusecs = (usecs > SMALLUSECS) ? SMALLUSECS : usecs;
preempt_disable();
cycles = smallusecs*local_cpu_data->cyc_per_usec;
start = ia64_get_itc();
while (ia64_get_itc() - start < cycles)
cpu_relax();
preempt_enable();
usecs -= smallusecs;
}
(*ia64_udelay)(usecs);
}
EXPORT_SYMBOL(udelay);
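The function pointer lets a platform substitute its own delay primitive at
boot; a sketch with hypothetical names (the SN2 timer code later in this
commit does exactly this with ia64_sn_udelay):

	static void my_udelay(unsigned long usecs)
	{
		/* spin on a clock source that is synchronized across CPUs */
	}

	void __init my_timer_init(void)
	{
		ia64_udelay = &my_udelay;
	}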


@ -16,6 +16,7 @@
#include <linux/module.h> /* for EXPORT_SYMBOL */
#include <linux/hardirq.h>
#include <linux/kprobes.h>
#include <linux/delay.h> /* for ssleep() */
#include <asm/fpswa.h>
#include <asm/ia32.h>
@ -116,6 +117,13 @@ die (const char *str, struct pt_regs *regs, long err)
bust_spinlocks(0);
die.lock_owner = -1;
spin_unlock_irq(&die.lock);
if (panic_on_oops) {
printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
ssleep(5);
panic("Fatal exception");
}
do_exit(SIGSEGV);
}


@ -23,6 +23,10 @@
#include "xtalk/hubdev.h"
#include "xtalk/xwidgetdev.h"
extern void sn_init_cpei_timer(void);
extern void register_sn_procfs(void);
static struct list_head sn_sysdata_list;
/* sysdata list struct */
@ -40,12 +44,12 @@ struct brick {
struct slab_info slab_info[MAX_SLABS + 1];
};
int sn_ioif_inited = 0; /* SN I/O infrastructure initialized? */
int sn_ioif_inited; /* SN I/O infrastructure initialized? */
struct sn_pcibus_provider *sn_pci_provider[PCIIO_ASIC_MAX_TYPES]; /* indexed by asic type */
static int max_segment_number = 0; /* Default highest segment number */
static int max_pcibus_number = 255; /* Default highest pci bus number */
static int max_segment_number; /* Default highest segment number */
static int max_pcibus_number = 255; /* Default highest pci bus number */
/*
* Hooks and struct for unsupported pci providers
@ -84,7 +88,6 @@ static inline u64
sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -94,7 +97,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
(u64) nasid, (u64) widget_num,
(u64) device_num, (u64) address, 0, 0, 0);
return ret_stuff.status;
}
/*
@ -102,7 +104,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
*/
static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -118,7 +119,6 @@ static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
*/
static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -215,7 +215,7 @@ static void __init sn_fixup_ionodes(void)
struct hubdev_info *hubdev;
u64 status;
u64 nasid;
int i, widget, device;
int i, widget, device, size;
/*
* Get SGI Specific HUB chipset information.
@ -251,48 +251,37 @@ static void __init sn_fixup_ionodes(void)
if (!hubdev->hdi_flush_nasid_list.widget_p)
continue;
size = (HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *);
hubdev->hdi_flush_nasid_list.widget_p =
kmalloc((HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *),
GFP_KERNEL);
memset(hubdev->hdi_flush_nasid_list.widget_p, 0x0,
(HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *));
kzalloc(size, GFP_KERNEL);
if (!hubdev->hdi_flush_nasid_list.widget_p)
BUG();
for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++) {
sn_flush_device_kernel = kmalloc(DEV_PER_WIDGET *
sizeof(struct
sn_flush_device_kernel),
GFP_KERNEL);
size = DEV_PER_WIDGET *
sizeof(struct sn_flush_device_kernel);
sn_flush_device_kernel = kzalloc(size, GFP_KERNEL);
if (!sn_flush_device_kernel)
BUG();
memset(sn_flush_device_kernel, 0x0,
DEV_PER_WIDGET *
sizeof(struct sn_flush_device_kernel));
dev_entry = sn_flush_device_kernel;
for (device = 0; device < DEV_PER_WIDGET;
device++,dev_entry++) {
dev_entry->common = kmalloc(sizeof(struct
sn_flush_device_common),
GFP_KERNEL);
size = sizeof(struct sn_flush_device_common);
dev_entry->common = kzalloc(size, GFP_KERNEL);
if (!dev_entry->common)
BUG();
memset(dev_entry->common, 0x0, sizeof(struct
sn_flush_device_common));
if (sn_prom_feature_available(
PRF_DEVICE_FLUSH_LIST))
status = sal_get_device_dmaflush_list(
nasid,
widget,
device,
(u64)(dev_entry->common));
nasid, widget, device,
(u64)(dev_entry->common));
else
status = sn_device_fixup_war(nasid,
widget,
device,
dev_entry->common);
widget, device,
dev_entry->common);
if (status != SALRET_OK)
panic("SAL call failed: %s\n",
ia64_sal_strerror(status));
@ -383,13 +372,12 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
pci_dev_get(dev); /* for the sysdata pointer */
pcidev_info = kzalloc(sizeof(struct pcidev_info), GFP_KERNEL);
if (pcidev_info <= 0)
if (!pcidev_info)
BUG(); /* Cannot afford to run out of memory */
sn_irq_info = kmalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
if (sn_irq_info <= 0)
sn_irq_info = kzalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
if (!sn_irq_info)
BUG(); /* Cannot afford to run out of memory */
memset(sn_irq_info, 0, sizeof(struct sn_irq_info));
/* Call to retrieve pci device information needed by kernel. */
status = sal_get_pcidev_info((u64) segment, (u64) dev->bus->number,
@ -482,13 +470,13 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
*/
void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
{
int status = 0;
int status;
int nasid, cnode;
struct pci_controller *controller;
struct sn_pci_controller *sn_controller;
struct pcibus_bussoft *prom_bussoft_ptr;
struct hubdev_info *hubdev_info;
void *provider_soft = NULL;
void *provider_soft;
struct sn_pcibus_provider *provider;
status = sal_get_pcibus_info((u64) segment, (u64) busnum,
@ -535,6 +523,8 @@ void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
bus->sysdata = controller;
if (provider->bus_fixup)
provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr, controller);
else
provider_soft = NULL;
if (provider_soft == NULL) {
/* fixup failed or not applicable */
@ -638,13 +628,8 @@ void sn_bus_free_sysdata(void)
static int __init sn_pci_init(void)
{
int i = 0;
int j = 0;
int i, j;
struct pci_dev *pci_dev = NULL;
extern void sn_init_cpei_timer(void);
#ifdef CONFIG_PROC_FS
extern void register_sn_procfs(void);
#endif
if (!ia64_platform_is("sn2") || IS_RUNNING_ON_FAKE_PROM())
return 0;
@ -700,32 +685,29 @@ static int __init sn_pci_init(void)
*/
void hubdev_init_node(nodepda_t * npda, cnodeid_t node)
{
struct hubdev_info *hubdev_info;
int size;
pg_data_t *pg;
size = sizeof(struct hubdev_info);
if (node >= num_online_nodes()) /* Headless/memless IO nodes */
hubdev_info =
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(0),
sizeof(struct
hubdev_info));
pg = NODE_DATA(0);
else
hubdev_info =
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(node),
sizeof(struct
hubdev_info));
npda->pdinfo = (void *)hubdev_info;
pg = NODE_DATA(node);
hubdev_info = (struct hubdev_info *)alloc_bootmem_node(pg, size);
npda->pdinfo = (void *)hubdev_info;
}
geoid_t
cnodeid_get_geoid(cnodeid_t cnode)
{
struct hubdev_info *hubdev;
hubdev = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
return hubdev->hdi_geoid;
}
subsys_initcall(sn_pci_init);
@ -734,3 +716,4 @@ EXPORT_SYMBOL(sn_pci_unfixup_slot);
EXPORT_SYMBOL(sn_pci_controller_fixup);
EXPORT_SYMBOL(sn_bus_store_sysdata);
EXPORT_SYMBOL(sn_bus_free_sysdata);
EXPORT_SYMBOL(sn_pcidev_info_get);


@ -75,7 +75,7 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info);
EXPORT_PER_CPU_SYMBOL(__sn_hub_info);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_NUMNODES]);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_COMPACT_NODES]);
EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid);
DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda);
@ -317,6 +317,7 @@ struct pcdp_vga_device {
#define PCDP_PCI_TRANS_IOPORT 0x02
#define PCDP_PCI_TRANS_MMIO 0x01
#if defined(CONFIG_VT) && defined(CONFIG_VGA_CONSOLE)
static void
sn_scan_pcdp(void)
{
@ -358,6 +359,7 @@ sn_scan_pcdp(void)
break; /* once we find the primary, we're done */
}
}
#endif
static unsigned long sn2_rtc_initial;


@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1999,2001-2004 Silicon Graphics, Inc. All Rights Reserved.
* Copyright (C) 1999,2001-2004, 2006 Silicon Graphics, Inc. All Rights Reserved.
*
* Module to export the system's Firmware Interface Tables, including
* PROM revision numbers and banners, in /proc
@ -190,7 +190,7 @@ static int
read_version_entry(char *page, char **start, off_t off, int count, int *eof,
void *data)
{
int len = 0;
int len;
/* data holds the NASID of the node */
len = dump_version(page, (unsigned long)data);
@ -202,7 +202,7 @@ static int
read_fit_entry(char *page, char **start, off_t off, int count, int *eof,
void *data)
{
int len = 0;
int len;
/* data holds the NASID of the node */
len = dump_fit(page, (unsigned long)data);
@ -229,13 +229,16 @@ int __init prominfo_init(void)
struct proc_dir_entry *p;
cnodeid_t cnodeid;
unsigned long nasid;
int size;
char name[NODE_NAME_LEN];
if (!ia64_platform_is("sn2"))
return 0;
proc_entries = kmalloc(num_online_nodes() * sizeof(struct proc_dir_entry *),
GFP_KERNEL);
size = num_online_nodes() * sizeof(struct proc_dir_entry *);
proc_entries = kzalloc(size, GFP_KERNEL);
if (!proc_entries)
return -ENOMEM;
sgi_prominfo_entry = proc_mkdir("sgi_prominfo", NULL);
@ -244,14 +247,12 @@ int __init prominfo_init(void)
sprintf(name, "node%d", cnodeid);
*entp = proc_mkdir(name, sgi_prominfo_entry);
nasid = cnodeid_to_nasid(cnodeid);
p = create_proc_read_entry(
"fit", 0, *entp, read_fit_entry,
(void *)nasid);
p = create_proc_read_entry("fit", 0, *entp, read_fit_entry,
(void *)nasid);
if (p)
p->owner = THIS_MODULE;
p = create_proc_read_entry(
"version", 0, *entp, read_version_entry,
(void *)nasid);
p = create_proc_read_entry("version", 0, *entp,
read_version_entry, (void *)nasid);
if (p)
p->owner = THIS_MODULE;
entp++;
@ -263,7 +264,7 @@ int __init prominfo_init(void)
void __exit prominfo_exit(void)
{
struct proc_dir_entry **entp;
unsigned cnodeid;
unsigned int cnodeid;
char name[NODE_NAME_LEN];
entp = proc_entries;


@ -46,8 +46,14 @@ DECLARE_PER_CPU(struct ptc_stats, ptcstats);
static __cacheline_aligned DEFINE_SPINLOCK(sn2_global_ptc_lock);
void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
extern unsigned long
sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
void
sn2_ptc_deadlock_recovery(short *, short, short, int,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
/*
* Note: some of the following is captured here to make debugging easier
@ -59,16 +65,6 @@ void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned lon
#define reset_max_active_on_deadlock() 1
#define PTC_LOCK(sh1) ((sh1) ? &sn2_global_ptc_lock : &sn_nodepda->ptc_lock)
static inline void ptc_lock(int sh1, unsigned long *flagp)
{
spin_lock_irqsave(PTC_LOCK(sh1), *flagp);
}
static inline void ptc_unlock(int sh1, unsigned long flags)
{
spin_unlock_irqrestore(PTC_LOCK(sh1), flags);
}
struct ptc_stats {
unsigned long ptc_l;
unsigned long change_rid;
@ -82,6 +78,8 @@ struct ptc_stats {
unsigned long shub_ptc_flushes_not_my_mm;
};
#define sn2_ptctest 0
static inline unsigned long wait_piowc(void)
{
volatile unsigned long *piows;
@ -200,7 +198,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
max_active = max_active_pio(shub1);
itc = ia64_get_itc();
ptc_lock(shub1, &flags);
spin_lock_irqsave(PTC_LOCK(shub1), flags);
itc2 = ia64_get_itc();
__get_cpu_var(ptcstats).lock_itc_clocks += itc2 - itc;
@ -258,7 +256,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
ia64_srlz_d();
}
ptc_unlock(shub1, flags);
spin_unlock_irqrestore(PTC_LOCK(shub1), flags);
preempt_enable();
}
@ -270,11 +268,12 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
* TLB flush transaction. The recovery sequence is somewhat tricky & is
* coded in assembly language.
*/
void sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid, volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
void
sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid,
volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
{
extern unsigned long sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long, volatile unsigned long *, unsigned long);
short nasid, i;
unsigned long *piows, zeroval, n;


@ -6,11 +6,11 @@
* Copyright (C) 2000-2005 Silicon Graphics, Inc. All rights reserved.
*/
#include <linux/config.h>
#include <asm/uaccess.h>
#ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <asm/uaccess.h>
#include <asm/sn/sn_sal.h>
static int partition_id_show(struct seq_file *s, void *p)
@ -90,10 +90,10 @@ static int coherence_id_open(struct inode *inode, struct file *file)
return single_open(file, coherence_id_show, NULL);
}
static struct proc_dir_entry *sn_procfs_create_entry(
const char *name, struct proc_dir_entry *parent,
int (*openfunc)(struct inode *, struct file *),
int (*releasefunc)(struct inode *, struct file *))
static struct proc_dir_entry
*sn_procfs_create_entry(const char *name, struct proc_dir_entry *parent,
int (*openfunc)(struct inode *, struct file *),
int (*releasefunc)(struct inode *, struct file *))
{
struct proc_dir_entry *e = create_proc_entry(name, 0444, parent);
@ -126,24 +126,24 @@ void register_sn_procfs(void)
return;
sn_procfs_create_entry("partition_id", sgi_proc_dir,
partition_id_open, single_release);
partition_id_open, single_release);
sn_procfs_create_entry("system_serial_number", sgi_proc_dir,
system_serial_number_open, single_release);
system_serial_number_open, single_release);
sn_procfs_create_entry("licenseID", sgi_proc_dir,
licenseID_open, single_release);
licenseID_open, single_release);
e = sn_procfs_create_entry("sn_force_interrupt", sgi_proc_dir,
sn_force_interrupt_open, single_release);
sn_force_interrupt_open, single_release);
if (e)
e->proc_fops->write = sn_force_interrupt_write_proc;
sn_procfs_create_entry("coherence_id", sgi_proc_dir,
coherence_id_open, single_release);
coherence_id_open, single_release);
sn_procfs_create_entry("sn_topology", sgi_proc_dir,
sn_topology_open, sn_topology_release);
sn_topology_open, sn_topology_release);
}
#endif /* CONFIG_PROC_FS */

View File

@ -14,6 +14,7 @@
#include <asm/hw_irq.h>
#include <asm/system.h>
#include <asm/timex.h>
#include <asm/sn/leds.h>
#include <asm/sn/shub_mmr.h>
@ -28,9 +29,27 @@ static struct time_interpolator sn2_interpolator = {
.source = TIME_SOURCE_MMIO64
};
/*
* sn udelay uses the RTC instead of the ITC because the ITC is not
* synchronized across all CPUs, and the thread may migrate to another CPU
* if preemption is enabled.
*/
static void
ia64_sn_udelay (unsigned long usecs)
{
unsigned long start = rtc_time();
unsigned long end = start +
usecs * sn_rtc_cycles_per_second / 1000000;
while (time_before((unsigned long)rtc_time(), end))
cpu_relax();
}
void __init sn_timer_init(void)
{
sn2_interpolator.frequency = sn_rtc_cycles_per_second;
sn2_interpolator.addr = RTC_COUNTER_ADDR;
register_time_interpolator(&sn2_interpolator);
ia64_udelay = &ia64_sn_udelay;
}
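The assignment of ia64_udelay above is the whole override mechanism: the platform installs its own delay loop based on a clock that is synchronized across CPUs, so the wait stays correct even if the task migrates mid-delay. A minimal sketch of the pattern; my_clock() and my_clock_hz are hypothetical stand-ins for a cross-CPU-synchronized counter and its rate:

extern void (*ia64_udelay)(unsigned long usecs);

/* Busy-wait against a counter that ticks identically on every CPU. */
static void my_platform_udelay(unsigned long usecs)
{
	unsigned long end = my_clock() + usecs * my_clock_hz / 1000000;

	while (time_before(my_clock(), end))
		cpu_relax();
}

static void __init my_timer_init(void)
{
	ia64_udelay = &my_platform_udelay;	/* install the override */
}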

View File

@ -1,7 +1,7 @@
/*
*
*
* Copyright (c) 2005 Silicon Graphics, Inc. All Rights Reserved.
* Copyright (c) 2005, 2006 Silicon Graphics, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License
@ -22,11 +22,6 @@
* License along with this program; if not, write the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
*
* Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
* Mountain View, CA 94043, or:
*
* http://www.sgi.com
*
* For further information regarding this notice, see:
*
* http://oss.sgi.com/projects/GenInfo/NoticeExplan

View File

@ -284,12 +284,10 @@ struct sn_irq_info *tiocx_irq_alloc(nasid_t nasid, int widget, int irq,
if ((nasid & 1) == 0)
return NULL;
sn_irq_info = kmalloc(sn_irq_size, GFP_KERNEL);
sn_irq_info = kzalloc(sn_irq_size, GFP_KERNEL);
if (sn_irq_info == NULL)
return NULL;
memset(sn_irq_info, 0x0, sn_irq_size);
status = tiocx_intr_alloc(nasid, widget, __pa(sn_irq_info), irq,
req_nasid, slice);
if (status) {

View File

@ -738,7 +738,9 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* make sure all activity has settled down first */
if (atomic_read(&ch->references) > 0) {
if (atomic_read(&ch->references) > 0 ||
((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE))) {
return;
}
DBUG_ON(atomic_read(&ch->kthreads_assigned) != 0);
@ -775,7 +777,7 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* both sides are disconnected now */
if (ch->flags & XPC_C_CONNECTCALLOUT) {
if (ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE) {
spin_unlock_irqrestore(&ch->lock, *irq_flags);
xpc_disconnect_callout(ch, xpcDisconnected);
spin_lock_irqsave(&ch->lock, *irq_flags);
@ -1300,7 +1302,7 @@ xpc_process_msg_IPI(struct xpc_partition *part, int ch_number)
"delivered=%d, partid=%d, channel=%d\n",
nmsgs_sent, ch->partid, ch->number);
if (ch->flags & XPC_C_CONNECTCALLOUT) {
if (ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) {
xpc_activate_kthreads(ch, nmsgs_sent);
}
}

View File

@ -750,12 +750,16 @@ xpc_daemonize_kthread(void *args)
/* let registerer know that connection has been established */
spin_lock_irqsave(&ch->lock, irq_flags);
if (!(ch->flags & XPC_C_CONNECTCALLOUT)) {
ch->flags |= XPC_C_CONNECTCALLOUT;
if (!(ch->flags & XPC_C_CONNECTEDCALLOUT)) {
ch->flags |= XPC_C_CONNECTEDCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_connected_callout(ch);
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_CONNECTEDCALLOUT_MADE;
spin_unlock_irqrestore(&ch->lock, irq_flags);
/*
* It is possible that while the callout was being
* made that the remote partition sent some messages.
@ -777,15 +781,17 @@ xpc_daemonize_kthread(void *args)
if (atomic_dec_return(&ch->kthreads_assigned) == 0) {
spin_lock_irqsave(&ch->lock, irq_flags);
if ((ch->flags & XPC_C_CONNECTCALLOUT) &&
!(ch->flags & XPC_C_DISCONNECTCALLOUT)) {
ch->flags |= XPC_C_DISCONNECTCALLOUT;
if ((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTINGCALLOUT)) {
ch->flags |= XPC_C_DISCONNECTINGCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_disconnect_callout(ch, xpcDisconnecting);
} else {
spin_unlock_irqrestore(&ch->lock, irq_flags);
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_DISCONNECTINGCALLOUT_MADE;
}
spin_unlock_irqrestore(&ch->lock, irq_flags);
if (atomic_dec_return(&part->nchannels_engaged) == 0) {
xpc_mark_partition_disengaged(part);
xpc_IPI_send_disengage(part);
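The CONNECTEDCALLOUT/CONNECTEDCALLOUT_MADE split above (mirrored by the DISCONNECTING pair) is a two-flag pattern: the first flag claims the callout under the channel lock, the second publishes its completion, so code such as xpc_process_disconnect() can wait for a connected callout to finish rather than race it. A sketch with illustrative flag and helper names:

spin_lock_irqsave(&ch->lock, irq_flags);
if (!(ch->flags & FLAG_CALLOUT)) {
	ch->flags |= FLAG_CALLOUT;		/* claim the callout */
	spin_unlock_irqrestore(&ch->lock, irq_flags);

	do_callout(ch);				/* must run without the lock */

	spin_lock_irqsave(&ch->lock, irq_flags);
	ch->flags |= FLAG_CALLOUT_MADE;		/* publish completion */
}
spin_unlock_irqrestore(&ch->lock, irq_flags);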

View File

@ -335,10 +335,10 @@ int sn_pci_legacy_read(struct pci_bus *bus, u16 port, u32 *val, u8 size)
*/
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number,
0, /* io */
0, /* read */
port, size, __pa(val));
pci_domain_nr(bus), bus->number,
0, /* io */
0, /* read */
port, size, __pa(val));
if (isrv.status == 0)
return size;
@ -381,10 +381,10 @@ int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
*/
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number,
0, /* io */
1, /* write */
port, size, __pa(&val));
pci_domain_nr(bus), bus->number,
0, /* io */
1, /* write */
port, size, __pa(&val));
if (isrv.status == 0)
return size;

View File

@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
* Copyright (C) 2001-2006 Silicon Graphics, Inc. All rights reserved.
*/
#include <linux/types.h>
@ -12,7 +12,7 @@
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
int pcibr_invalidate_ate; /* by default don't invalidate ATE on free */
/*
* mark_ate: Mark the ate as either free or inuse.
@ -20,14 +20,12 @@ int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
static void mark_ate(struct ate_resource *ate_resource, int start, int number,
u64 value)
{
u64 *ate = ate_resource->ate;
int index;
int length = 0;
for (index = start; length < number; index++, length++)
ate[index] = value;
}
/*
@ -37,7 +35,6 @@ static void mark_ate(struct ate_resource *ate_resource, int start, int number,
static int find_free_ate(struct ate_resource *ate_resource, int start,
int count)
{
u64 *ate = ate_resource->ate;
int index;
int start_free;
@ -70,12 +67,10 @@ static int find_free_ate(struct ate_resource *ate_resource, int start,
static inline void free_ate_resource(struct ate_resource *ate_resource,
int start)
{
mark_ate(ate_resource, start, ate_resource->ate[start], 0);
if ((ate_resource->lowest_free_index > start) ||
(ate_resource->lowest_free_index < 0))
ate_resource->lowest_free_index = start;
}
/*
@ -84,7 +79,6 @@ static inline void free_ate_resource(struct ate_resource *ate_resource,
static inline int alloc_ate_resource(struct ate_resource *ate_resource,
int ate_needed)
{
int start_index;
/*
@ -118,19 +112,12 @@ static inline int alloc_ate_resource(struct ate_resource *ate_resource,
*/
int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
{
int status = 0;
u64 flag;
int status;
unsigned long flags;
flag = pcibr_lock(pcibus_info);
spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count);
if (status < 0) {
/* Failed to allocate */
pcibr_unlock(pcibus_info, flag);
return -1;
}
pcibr_unlock(pcibus_info, flag);
spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
return status;
}
@ -182,7 +169,7 @@ void pcibr_ate_free(struct pcibus_info *pcibus_info, int index)
ate_write(pcibus_info, index, count, (ate & ~PCI32_ATE_V));
}
flags = pcibr_lock(pcibus_info);
spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
free_ate_resource(&pcibus_info->pbi_int_ate_resource, index);
pcibr_unlock(pcibus_info, flags);
spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
}

View File

@ -137,14 +137,12 @@ pcibr_dmatrans_direct64(struct pcidev_info * info, u64 paddr,
pci_addr |= PCI64_ATTR_VIRTUAL;
return pci_addr;
}
static dma_addr_t
pcibr_dmatrans_direct32(struct pcidev_info * info,
u64 paddr, size_t req_size, u64 flags)
{
struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
pdi_pcibus_info;
@ -171,7 +169,6 @@ pcibr_dmatrans_direct32(struct pcidev_info * info,
}
return PCI32_DIRECT_BASE | offset;
}
/*
@ -218,9 +215,8 @@ void sn_dma_flush(u64 addr)
u64 flags;
u64 itte;
struct hubdev_info *hubinfo;
volatile struct sn_flush_device_kernel *p;
volatile struct sn_flush_device_common *common;
struct sn_flush_device_kernel *p;
struct sn_flush_device_common *common;
struct sn_flush_nasid_entry *flush_nasid_list;
if (!sn_ioif_inited)
@ -310,8 +306,7 @@ void sn_dma_flush(u64 addr)
(common->sfdl_slot - 1));
}
} else {
spin_lock_irqsave((spinlock_t *)&p->sfdl_flush_lock,
flags);
spin_lock_irqsave(&p->sfdl_flush_lock, flags);
*common->sfdl_flush_addr = 0;
/* force an interrupt. */
@ -322,8 +317,7 @@ void sn_dma_flush(u64 addr)
cpu_relax();
/* okay, everything is synched up. */
spin_unlock_irqrestore((spinlock_t *)&p->sfdl_flush_lock,
flags);
spin_unlock_irqrestore(&p->sfdl_flush_lock, flags);
}
return;
}

View File

@ -163,9 +163,12 @@ pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
/* Setup the PMU ATE map */
soft->pbi_int_ate_resource.lowest_free_index = 0;
soft->pbi_int_ate_resource.ate =
kmalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
memset(soft->pbi_int_ate_resource.ate, 0,
(soft->pbi_int_ate_size * sizeof(u64)));
kzalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
if (!soft->pbi_int_ate_resource.ate) {
kfree(soft);
return NULL;
}
if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP) {
/* TIO PCI Bridge: find nearest node with CPUs */
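The allocation change above folds the kmalloc()/memset() pair into kzalloc() and adds the failure check the old code lacked, unwinding the allocation made just before it. The general shape, with abbreviated names:

p = kzalloc(size, GFP_KERNEL);	/* was: kmalloc() followed by memset(p, 0, size) */
if (!p) {
	kfree(parent);		/* release what was allocated earlier on this path */
	return NULL;
}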

View File

@ -21,6 +21,10 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
default y
config ARCH_MAY_HAVE_PC_FDC
bool
depends on Q40 || (BROKEN && SUN3X)

View File

@ -29,6 +29,10 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
default y
source "init/Kconfig"
menu "Processor type and features"

View File

@ -94,7 +94,6 @@ endif
# machines may also. Since BFD is incredibly buggy with respect to
# crossformat linking we rely on the elf2ecoff tool for format conversion.
#
cflags-y += -I $(TOPDIR)/include/asm/gcc
cflags-y += -G 0 -mno-abicalls -fno-pic -pipe
LDFLAGS_vmlinux += -G 0 -static -n -nostdlib
MODFLAGS += -mlong-calls

View File

@ -25,6 +25,7 @@
#include <linux/a.out.h>
#include <linux/init.h>
#include <linux/completion.h>
#include <linux/kallsyms.h>
#include <asm/abi.h>
#include <asm/bootinfo.h>
@ -272,46 +273,19 @@ long kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
static struct mips_frame_info {
void *func;
int omit_fp; /* compiled without fno-omit-frame-pointer */
int frame_offset;
unsigned long func_size;
int frame_size;
int pc_offset;
} schedule_frame, mfinfo[] = {
{ schedule, 0 }, /* must be first */
/* arch/mips/kernel/semaphore.c */
{ __down, 1 },
{ __down_interruptible, 1 },
/* kernel/sched.c */
#ifdef CONFIG_PREEMPT
{ preempt_schedule, 0 },
#endif
{ wait_for_completion, 0 },
{ interruptible_sleep_on, 0 },
{ interruptible_sleep_on_timeout, 0 },
{ sleep_on, 0 },
{ sleep_on_timeout, 0 },
{ yield, 0 },
{ io_schedule, 0 },
{ io_schedule_timeout, 0 },
#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
{ __preempt_spin_lock, 0 },
{ __preempt_write_lock, 0 },
#endif
/* kernel/timer.c */
{ schedule_timeout, 1 },
/* { nanosleep_restart, 1 }, */
/* lib/rwsem-spinlock.c */
{ __down_read, 1 },
{ __down_write, 1 },
};
} *schedule_frame, mfinfo[64];
static int mfinfo_num;
static int mips_frame_info_initialized;
static int __init get_frame_info(struct mips_frame_info *info)
{
int i;
void *func = info->func;
union mips_instruction *ip = (union mips_instruction *)func;
info->pc_offset = -1;
info->frame_offset = info->omit_fp ? 0 : -1;
info->frame_size = 0;
for (i = 0; i < 128; i++, ip++) {
/* if jal, jalr, jr, stop. */
if (ip->j_format.opcode == jal_op ||
@ -320,6 +294,23 @@ static int __init get_frame_info(struct mips_frame_info *info)
ip->r_format.func == jr_op)))
break;
if (info->func_size && i >= info->func_size / 4)
break;
if (
#ifdef CONFIG_32BIT
ip->i_format.opcode == addiu_op &&
#endif
#ifdef CONFIG_64BIT
ip->i_format.opcode == daddiu_op &&
#endif
ip->i_format.rs == 29 &&
ip->i_format.rt == 29) {
/* addiu/daddiu sp,sp,-imm */
if (info->frame_size)
continue;
info->frame_size = - ip->i_format.simmediate;
}
if (
#ifdef CONFIG_32BIT
ip->i_format.opcode == sw_op &&
@ -327,31 +318,20 @@ static int __init get_frame_info(struct mips_frame_info *info)
#ifdef CONFIG_64BIT
ip->i_format.opcode == sd_op &&
#endif
ip->i_format.rs == 29)
{
ip->i_format.rs == 29 &&
ip->i_format.rt == 31) {
/* sw / sd $ra, offset($sp) */
if (ip->i_format.rt == 31) {
if (info->pc_offset != -1)
continue;
info->pc_offset =
ip->i_format.simmediate / sizeof(long);
}
/* sw / sd $s8, offset($sp) */
if (ip->i_format.rt == 30) {
//#if 0 /* gcc 3.4 does aggressive optimization... */
if (info->frame_offset != -1)
continue;
//#endif
info->frame_offset =
ip->i_format.simmediate / sizeof(long);
}
if (info->pc_offset != -1)
continue;
info->pc_offset =
ip->i_format.simmediate / sizeof(long);
}
}
if (info->pc_offset == -1 || info->frame_offset == -1) {
printk("Can't analyze prologue code at %p\n", func);
if (info->pc_offset == -1 || info->frame_size == 0) {
if (func == schedule)
printk("Can't analyze prologue code at %p\n", func);
info->pc_offset = -1;
info->frame_offset = -1;
return -1;
info->frame_size = 0;
}
return 0;
@ -359,25 +339,36 @@ static int __init get_frame_info(struct mips_frame_info *info)
static int __init frame_info_init(void)
{
int i, found;
for (i = 0; i < ARRAY_SIZE(mfinfo); i++)
if (get_frame_info(&mfinfo[i]))
return -1;
schedule_frame = mfinfo[0];
/* bubble sort */
do {
struct mips_frame_info tmp;
found = 0;
for (i = 1; i < ARRAY_SIZE(mfinfo); i++) {
if (mfinfo[i-1].func > mfinfo[i].func) {
tmp = mfinfo[i];
mfinfo[i] = mfinfo[i-1];
mfinfo[i-1] = tmp;
found = 1;
}
}
} while (found);
mips_frame_info_initialized = 1;
int i;
#ifdef CONFIG_KALLSYMS
char *modname;
char namebuf[KSYM_NAME_LEN + 1];
unsigned long start, size, ofs;
extern char __sched_text_start[], __sched_text_end[];
extern char __lock_text_start[], __lock_text_end[];
start = (unsigned long)__sched_text_start;
for (i = 0; i < ARRAY_SIZE(mfinfo); i++) {
if (start == (unsigned long)schedule)
schedule_frame = &mfinfo[i];
if (!kallsyms_lookup(start, &size, &ofs, &modname, namebuf))
break;
mfinfo[i].func = (void *)(start + ofs);
mfinfo[i].func_size = size;
start += size - ofs;
if (start >= (unsigned long)__lock_text_end)
break;
if (start == (unsigned long)__sched_text_end)
start = (unsigned long)__lock_text_start;
}
#else
mfinfo[0].func = schedule;
schedule_frame = &mfinfo[0];
#endif
for (i = 0; i < ARRAY_SIZE(mfinfo) && mfinfo[i].func; i++)
get_frame_info(&mfinfo[i]);
mfinfo_num = i;
return 0;
}
@ -394,47 +385,52 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
if (t->reg31 == (unsigned long) ret_from_fork)
return t->reg31;
if (schedule_frame.pc_offset < 0)
if (!schedule_frame || schedule_frame->pc_offset < 0)
return 0;
return ((unsigned long *)t->reg29)[schedule_frame.pc_offset];
return ((unsigned long *)t->reg29)[schedule_frame->pc_offset];
}
/* get_wchan - a maintenance nightmare^W^Wpain in the ass ... */
unsigned long get_wchan(struct task_struct *p)
{
unsigned long stack_page;
unsigned long frame, pc;
unsigned long pc;
#ifdef CONFIG_KALLSYMS
unsigned long frame;
#endif
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
stack_page = (unsigned long)task_stack_page(p);
if (!stack_page || !mips_frame_info_initialized)
if (!stack_page || !mfinfo_num)
return 0;
pc = thread_saved_pc(p);
#ifdef CONFIG_KALLSYMS
if (!in_sched_functions(pc))
return pc;
frame = ((unsigned long *)p->thread.reg30)[schedule_frame.frame_offset];
frame = p->thread.reg29 + schedule_frame->frame_size;
do {
int i;
if (frame < stack_page || frame > stack_page + THREAD_SIZE - 32)
return 0;
for (i = ARRAY_SIZE(mfinfo) - 1; i >= 0; i--) {
for (i = mfinfo_num - 1; i >= 0; i--) {
if (pc >= (unsigned long) mfinfo[i].func)
break;
}
if (i < 0)
break;
if (mfinfo[i].omit_fp)
break;
pc = ((unsigned long *)frame)[mfinfo[i].pc_offset];
frame = ((unsigned long *)frame)[mfinfo[i].frame_offset];
if (!mfinfo[i].frame_size)
break;
frame += mfinfo[i].frame_size;
} while (in_sched_functions(pc));
#endif
return pc;
}
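With frame_offset gone, the walker in get_wchan() above needs only two facts per function: where $ra was saved (pc_offset) and how large the frame is (frame_size). Popping one frame then reduces to the following sketch:

pc = ((unsigned long *)frame)[info->pc_offset];	/* saved return address */
frame += info->frame_size;			/* advance to the caller's frame */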

View File

@ -623,7 +623,7 @@ einval: li v0, -EINVAL
sys sys_mknodat 4 /* 4290 */
sys sys_fchownat 5
sys sys_futimesat 3
sys sys_newfstatat 4
sys sys_fstatat64 4
sys sys_unlinkat 3
sys sys_renameat 4 /* 4295 */
sys sys_linkat 4

View File

@ -176,7 +176,7 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size)
if ((ka->sa.sa_flags & SA_ONSTACK) && (sas_ss_flags (sp) == 0))
sp = current->sas_ss_sp + current->sas_ss_size;
return (void __user *)((sp - frame_size) & (ICACHE_REFILLS_WORKAROUND_WAR ? 32 : ALMASK));
return (void __user *)((sp - frame_size) & (ICACHE_REFILLS_WORKAROUND_WAR ? ~(cpu_icache_line_size()-1) : ALMASK));
}
static inline int install_sigtramp(unsigned int __user *tramp,

View File

@ -537,7 +537,7 @@ _sys32_rt_sigreturn(nabi_no_regargs struct pt_regs regs)
/* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
goto badframe;
st.ss_size = (long) sp;
st.ss_sp = (void *)(long) sp;
if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size))
goto badframe;
if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags))

View File

@ -108,7 +108,7 @@ _sysn32_rt_sigreturn(nabi_no_regargs struct pt_regs regs)
/* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
goto badframe;
st.ss_size = (long) sp;
st.ss_sp = (void *)(long) sp;
if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size))
goto badframe;
if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags))

View File

@ -68,6 +68,8 @@ void __init sanitize_tlb_entries(void)
set_c0_mvpcontrol(MVPCONTROL_VPC);
back_to_back_c0_hazard();
/* Disable TLB sharing */
clear_c0_mvpcontrol(MVPCONTROL_STLB);
@ -102,35 +104,6 @@ void __init sanitize_tlb_entries(void)
clear_c0_mvpcontrol(MVPCONTROL_VPC);
}
#if 0
/*
* Use c0_MVPConf0 to find out how many CPUs are available, setting up
* phys_cpu_present_map and the logical/physical mappings.
*/
void __init prom_build_cpu_map(void)
{
int i, num, ncpus;
cpus_clear(phys_cpu_present_map);
/* assume we boot on cpu 0.... */
cpu_set(0, phys_cpu_present_map);
__cpu_number_map[0] = 0;
__cpu_logical_map[0] = 0;
if (cpu_has_mipsmt) {
ncpus = ((read_c0_mvpconf0() & (MVPCONF0_PVPE)) >> MVPCONF0_PVPE_SHIFT) + 1;
for (i=1, num=0; i< NR_CPUS && i<ncpus; i++) {
cpu_set(i, phys_cpu_present_map);
__cpu_number_map[i] = ++num;
__cpu_logical_map[num] = i;
}
printk(KERN_INFO "%i available secondary CPU(s)\n", num);
}
}
#endif
static void ipi_resched_dispatch (struct pt_regs *regs)
{
do_IRQ(MIPS_CPU_IPI_RESCHED_IRQ, regs);
@ -222,6 +195,9 @@ void prom_prepare_cpus(unsigned int max_cpus)
/* set config to be the same as vpe0, particularly kseg0 coherency alg */
write_vpe_c0_config( read_c0_config());
/* Propagate Config7 */
write_vpe_c0_config7(read_c0_config7());
}
}

View File

@ -471,61 +471,29 @@ struct flush_icache_range_args {
static inline void local_r4k_flush_icache_range(void *args)
{
struct flush_icache_range_args *fir_args = args;
unsigned long dc_lsize = cpu_dcache_line_size();
unsigned long ic_lsize = cpu_icache_line_size();
unsigned long sc_lsize = cpu_scache_line_size();
unsigned long start = fir_args->start;
unsigned long end = fir_args->end;
unsigned long addr, aend;
if (!cpu_has_ic_fills_f_dc) {
if (end - start > dcache_size) {
r4k_blast_dcache();
} else {
R4600_HIT_CACHEOP_WAR_IMPL;
addr = start & ~(dc_lsize - 1);
aend = (end - 1) & ~(dc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_D */
protected_writeback_dcache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
protected_blast_dcache_range(start, end);
}
if (!cpu_icache_snoops_remote_store) {
if (end - start > scache_size) {
if (end - start > scache_size)
r4k_blast_scache();
} else {
addr = start & ~(sc_lsize - 1);
aend = (end - 1) & ~(sc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_SD */
protected_writeback_scache_line(addr);
if (addr == aend)
break;
addr += sc_lsize;
}
}
else
protected_blast_scache_range(start, end);
}
}
if (end - start > icache_size)
r4k_blast_icache();
else {
addr = start & ~(ic_lsize - 1);
aend = (end - 1) & ~(ic_lsize - 1);
while (1) {
/* Hit_Invalidate_I */
protected_flush_icache_line(addr);
if (addr == aend)
break;
addr += ic_lsize;
}
}
else
protected_blast_icache_range(start, end);
}
static void r4k_flush_icache_range(unsigned long start, unsigned long end)
@ -619,27 +587,14 @@ static void r4k_flush_icache_page(struct vm_area_struct *vma,
static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{
unsigned long end, a;
/* Catch bad driver code */
BUG_ON(size == 0);
if (cpu_has_subset_pcaches) {
unsigned long sc_lsize = cpu_scache_line_size();
if (size >= scache_size) {
if (size >= scache_size)
r4k_blast_scache();
return;
}
a = addr & ~(sc_lsize - 1);
end = (addr + size - 1) & ~(sc_lsize - 1);
while (1) {
flush_scache_line(a); /* Hit_Writeback_Inv_SD */
if (a == end)
break;
a += sc_lsize;
}
else
blast_scache_range(addr, addr + size);
return;
}
@ -651,17 +606,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
if (size >= dcache_size) {
r4k_blast_dcache();
} else {
unsigned long dc_lsize = cpu_dcache_line_size();
R4600_HIT_CACHEOP_WAR_IMPL;
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end)
break;
a += dc_lsize;
}
blast_dcache_range(addr, addr + size);
}
bc_wback_inv(addr, size);
@ -669,44 +615,22 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
{
unsigned long end, a;
/* Catch bad driver code */
BUG_ON(size == 0);
if (cpu_has_subset_pcaches) {
unsigned long sc_lsize = cpu_scache_line_size();
if (size >= scache_size) {
if (size >= scache_size)
r4k_blast_scache();
return;
}
a = addr & ~(sc_lsize - 1);
end = (addr + size - 1) & ~(sc_lsize - 1);
while (1) {
flush_scache_line(a); /* Hit_Writeback_Inv_SD */
if (a == end)
break;
a += sc_lsize;
}
else
blast_scache_range(addr, addr + size);
return;
}
if (size >= dcache_size) {
r4k_blast_dcache();
} else {
unsigned long dc_lsize = cpu_dcache_line_size();
R4600_HIT_CACHEOP_WAR_IMPL;
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end)
break;
a += dc_lsize;
}
blast_dcache_range(addr, addr + size);
}
bc_inv(addr, size);
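Each rewrite above swaps an open-coded cache-line loop for one of the blast-range helpers. The helpers encapsulate the same align-and-step loop the old code repeated at every site; a sketch, parameterized over the cache op and line size (in the kernel these helpers are macro-generated per cache op, so the function pointer here is purely illustrative):

static inline void blast_range(unsigned long start, unsigned long end,
			       void (*cacheop)(unsigned long),
			       unsigned long lsize)
{
	unsigned long addr = start & ~(lsize - 1);	/* first line touched */
	unsigned long aend = (end - 1) & ~(lsize - 1);	/* last line touched */

	while (1) {
		cacheop(addr);		/* e.g. Hit_Writeback_Inv_D */
		if (addr == aend)
			break;
		addr += lsize;
	}
}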

View File

@ -44,8 +44,6 @@ __asm__ __volatile__( \
/* TX39H-style cache flush routines. */
static void tx39h_flush_icache_all(void)
{
unsigned long start = KSEG0;
unsigned long end = (start + icache_size);
unsigned long flags, config;
/* disable icache (set ICE#) */
@ -53,33 +51,18 @@ static void tx39h_flush_icache_all(void)
config = read_c0_conf();
write_c0_conf(config & ~TX39_CONF_ICE);
TX39_STOP_STREAMING();
/* invalidate icache */
while (start < end) {
cache16_unroll32(start, Index_Invalidate_I);
start += 0x200;
}
blast_icache16();
write_c0_conf(config);
local_irq_restore(flags);
}
static void tx39h_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{
unsigned long end, a;
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
/* Catch bad driver code */
BUG_ON(size == 0);
iob();
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
invalidate_dcache_line(a); /* Hit_Invalidate_D */
if (a == end) break;
a += dc_lsize;
}
blast_inv_dcache_range(addr, addr + size);
}
@ -241,42 +224,21 @@ static void tx39_flush_data_cache_page(unsigned long addr)
static void tx39_flush_icache_range(unsigned long start, unsigned long end)
{
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
unsigned long addr, aend;
if (end - start > dcache_size)
tx39_blast_dcache();
else {
addr = start & ~(dc_lsize - 1);
aend = (end - 1) & ~(dc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_D */
protected_writeback_dcache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
}
else
protected_blast_dcache_range(start, end);
if (end - start > icache_size)
tx39_blast_icache();
else {
unsigned long flags, config;
addr = start & ~(dc_lsize - 1);
aend = (end - 1) & ~(dc_lsize - 1);
/* disable icache (set ICE#) */
local_irq_save(flags);
config = read_c0_conf();
write_c0_conf(config & ~TX39_CONF_ICE);
TX39_STOP_STREAMING();
while (1) {
/* Hit_Invalidate_I */
protected_flush_icache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
protected_blast_icache_range(start, end);
write_c0_conf(config);
local_irq_restore(flags);
}
@ -311,7 +273,7 @@ static void tx39_flush_icache_page(struct vm_area_struct *vma, struct page *page
static void tx39_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{
unsigned long end, a;
unsigned long end;
if (((size | addr) & (PAGE_SIZE - 1)) == 0) {
end = addr + size;
@ -322,20 +284,13 @@ static void tx39_dma_cache_wback_inv(unsigned long addr, unsigned long size)
} else if (size > dcache_size) {
tx39_blast_dcache();
} else {
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end) break;
a += dc_lsize;
}
blast_dcache_range(addr, addr + size);
}
}
static void tx39_dma_cache_inv(unsigned long addr, unsigned long size)
{
unsigned long end, a;
unsigned long end;
if (((size | addr) & (PAGE_SIZE - 1)) == 0) {
end = addr + size;
@ -346,14 +301,7 @@ static void tx39_dma_cache_inv(unsigned long addr, unsigned long size)
} else if (size > dcache_size) {
tx39_blast_dcache();
} else {
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
invalidate_dcache_line(a); /* Hit_Invalidate_D */
if (a == end) break;
a += dc_lsize;
}
blast_inv_dcache_range(addr, addr + size);
}
}

View File

@ -29,6 +29,11 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
depends on SMP
default y
config GENERIC_ISA_DMA
bool

View File

@ -377,15 +377,15 @@
ENTRY_SAME(inotify_init)
ENTRY_SAME(inotify_add_watch) /* 270 */
ENTRY_SAME(inotify_rm_watch)
ENTRY_COMP(pselect6)
ENTRY_COMP(ppoll)
ENTRY_SAME(ni_syscall) /* 271 ENTRY_COMP(pselect6) */
ENTRY_SAME(ni_syscall) /* 272 ENTRY_COMP(ppoll) */
ENTRY_SAME(migrate_pages)
ENTRY_COMP(openat) /* 275 */
ENTRY_SAME(mkdirat)
ENTRY_SAME(mknodat)
ENTRY_SAME(fchownat)
ENTRY_COMP(futimesat)
ENTRY_COMP(newfstatat) /* 280 */
ENTRY_SAME(fstatat64) /* 280 */
ENTRY_SAME(unlinkat)
ENTRY_SAME(renameat)
ENTRY_SAME(linkat)

View File

@ -80,6 +80,10 @@ config HOTPLUG_CPU
can be controlled through /sys/devices/system/cpu/cpu#.
Say N if you want to disable CPU hotplug.
config DEFAULT_MIGRATION_COST
int
default "1000000"
config MATHEMU
bool "IEEE FPU emulation"
depends on MARCH_G5

View File

@ -905,8 +905,8 @@ asmlinkage long sys32_fstat64(unsigned long fd, struct stat64_emu31 __user * sta
return ret;
}
asmlinkage long sys32_fstatat(unsigned int dfd, char __user *filename,
struct stat64_emu31 __user* statbuf, int flag)
asmlinkage long sys32_fstatat64(unsigned int dfd, char __user *filename,
struct stat64_emu31 __user* statbuf, int flag)
{
struct kstat stat;
int error = -EINVAL;

View File

@ -1523,13 +1523,13 @@ compat_sys_futimesat_wrapper:
llgtr %r4,%r4 # struct timeval *
jg compat_sys_futimesat
.globl sys32_fstatat_wrapper
sys32_fstatat_wrapper:
.globl sys32_fstatat64_wrapper
sys32_fstatat64_wrapper:
llgfr %r2,%r2 # unsigned int
llgtr %r3,%r3 # char *
llgtr %r4,%r4 # struct stat64 *
lgfr %r5,%r5 # int
jg sys32_fstatat
jg sys32_fstatat64
.globl sys_unlinkat_wrapper
sys_unlinkat_wrapper:

View File

@ -128,8 +128,10 @@ void default_idle(void)
__ctl_set_bit(8, 15);
#ifdef CONFIG_HOTPLUG_CPU
if (cpu_is_offline(cpu))
if (cpu_is_offline(cpu)) {
preempt_enable_no_resched();
cpu_die();
}
#endif
local_mcck_disable();

View File

@ -600,6 +600,7 @@ setup_arch(char **cmdline_p)
init_mm.brk = (unsigned long) &_end;
parse_cmdline_early(cmdline_p);
parse_early_param();
setup_memory();
setup_resources();
@ -607,6 +608,7 @@ setup_arch(char **cmdline_p)
cpu_init();
__cpu_logical_map[0] = S390_lowcore.cpu_data.cpu_addr;
smp_setup_cpu_possible_map();
/*
* Create kernel page tables and switch to virtual addressing.

View File

@ -1,8 +1,7 @@
/*
* arch/s390/kernel/smp.c
*
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 1999,2006
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
* Martin Schwidefsky (schwidefsky@de.ibm.com)
* Heiko Carstens (heiko.carstens@de.ibm.com)
@ -41,8 +40,6 @@
#include <asm/cpcmd.h>
#include <asm/tlbflush.h>
/* prototypes */
extern volatile int __cpu_logical_map[];
/*
@ -51,13 +48,11 @@ extern volatile int __cpu_logical_map[];
struct _lowcore *lowcore_ptr[NR_CPUS];
cpumask_t cpu_online_map;
cpumask_t cpu_possible_map = CPU_MASK_ALL;
cpumask_t cpu_online_map = CPU_MASK_NONE;
cpumask_t cpu_possible_map = CPU_MASK_NONE;
static struct task_struct *current_set[NR_CPUS];
EXPORT_SYMBOL(cpu_online_map);
/*
* Reboot, halt and power_off routines for SMP.
*/
@ -490,10 +485,10 @@ void smp_ctl_clear_bit(int cr, int bit) {
* Lets check how many CPUs we have.
*/
void
__init smp_check_cpus(unsigned int max_cpus)
static unsigned int
__init smp_count_cpus(void)
{
int cpu, num_cpus;
unsigned int cpu, num_cpus;
__u16 boot_cpu_addr;
/*
@ -503,19 +498,20 @@ __init smp_check_cpus(unsigned int max_cpus)
boot_cpu_addr = S390_lowcore.cpu_data.cpu_addr;
current_thread_info()->cpu = 0;
num_cpus = 1;
for (cpu = 0; cpu <= 65535 && num_cpus < max_cpus; cpu++) {
for (cpu = 0; cpu <= 65535; cpu++) {
if ((__u16) cpu == boot_cpu_addr)
continue;
__cpu_logical_map[num_cpus] = (__u16) cpu;
if (signal_processor(num_cpus, sigp_sense) ==
__cpu_logical_map[1] = (__u16) cpu;
if (signal_processor(1, sigp_sense) ==
sigp_not_operational)
continue;
cpu_set(num_cpus, cpu_present_map);
num_cpus++;
}
printk("Detected %d CPU's\n",(int) num_cpus);
printk("Boot cpu address %2X\n", boot_cpu_addr);
return num_cpus;
}
/*
@ -676,6 +672,44 @@ __cpu_up(unsigned int cpu)
return 0;
}
static unsigned int __initdata additional_cpus;
static unsigned int __initdata possible_cpus;
void __init smp_setup_cpu_possible_map(void)
{
unsigned int phy_cpus, pos_cpus, cpu;
phy_cpus = smp_count_cpus();
pos_cpus = min(phy_cpus + additional_cpus, (unsigned int) NR_CPUS);
if (possible_cpus)
pos_cpus = min(possible_cpus, (unsigned int) NR_CPUS);
for (cpu = 0; cpu < pos_cpus; cpu++)
cpu_set(cpu, cpu_possible_map);
phy_cpus = min(phy_cpus, pos_cpus);
for (cpu = 0; cpu < phy_cpus; cpu++)
cpu_set(cpu, cpu_present_map);
}
#ifdef CONFIG_HOTPLUG_CPU
static int __init setup_additional_cpus(char *s)
{
additional_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
static int __init setup_possible_cpus(char *s)
{
possible_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("possible_cpus", setup_possible_cpus);
int
__cpu_disable(void)
{
@ -744,6 +778,8 @@ cpu_die(void)
for(;;);
}
#endif /* CONFIG_HOTPLUG_CPU */
/*
* Cycle through the processors and setup structures.
*/
@ -757,7 +793,6 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
/* request the 0x1201 emergency signal external interrupt */
if (register_external_interrupt(0x1201, do_ext_call_interrupt) != 0)
panic("Couldn't request external interrupt 0x1201");
smp_check_cpus(max_cpus);
memset(lowcore_ptr,0,sizeof(lowcore_ptr));
/*
* Initialize prefix pages and stacks for all possible cpus
@ -806,7 +841,6 @@ void __devinit smp_prepare_boot_cpu(void)
BUG_ON(smp_processor_id() != 0);
cpu_set(0, cpu_online_map);
cpu_set(0, cpu_present_map);
S390_lowcore.percpu_offset = __per_cpu_offset[0];
current_set[0] = current;
}
@ -845,6 +879,7 @@ static int __init topology_init(void)
subsys_initcall(topology_init);
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(cpu_possible_map);
EXPORT_SYMBOL(lowcore_ptr);
EXPORT_SYMBOL(smp_ctl_set_bit);
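The sizing rules in smp_setup_cpu_possible_map() above can be modeled in isolation. A userspace sketch of the clamping logic, with NR_CPUS and the sensed CPU count turned into parameters:

static unsigned int possible_map_size(unsigned int sensed,
				      unsigned int additional,
				      unsigned int possible,
				      unsigned int nr_cpus)
{
	unsigned int pos = sensed + additional;

	if (pos > nr_cpus)
		pos = nr_cpus;
	if (possible)		/* possible_cpus= overrides additional_cpus= */
		pos = possible < nr_cpus ? possible : nr_cpus;
	return pos;
}

For example, with 2 CPUs sensed and NR_CPUS at 64, additional_cpus=2 yields 4 possible CPUs, while possible_cpus=8 yields 8 regardless of additional_cpus.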

View File

@ -301,7 +301,7 @@ SYSCALL(sys_mkdirat,sys_mkdirat,sys_mkdirat_wrapper)
SYSCALL(sys_mknodat,sys_mknodat,sys_mknodat_wrapper) /* 290 */
SYSCALL(sys_fchownat,sys_fchownat,sys_fchownat_wrapper)
SYSCALL(sys_futimesat,sys_futimesat,compat_sys_futimesat_wrapper)
SYSCALL(sys_fstatat64,sys_newfstatat,sys32_fstatat_wrapper)
SYSCALL(sys_fstatat64,sys_newfstatat,sys32_fstatat64_wrapper)
SYSCALL(sys_unlinkat,sys_unlinkat,sys_unlinkat_wrapper)
SYSCALL(sys_renameat,sys_renameat,sys_renameat_wrapper) /* 295 */
SYSCALL(sys_linkat,sys_linkat,sys_linkat_wrapper)

View File

@ -30,7 +30,7 @@ void __delay(unsigned long loops)
*/
__asm__ __volatile__(
"0: brct %0,0b"
: /* no outputs */ : "r" (loops/2) );
: /* no outputs */ : "r" ((loops/2) + 1));
}
/*

View File

@ -446,7 +446,7 @@ endmenu
config ISA_DMA_API
bool
depends on MPC1211
depends on SH_MPC1211
default y
menu "Kernel features"

View File

@ -76,7 +76,7 @@ sys_call_table:
/*270*/ .long sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
/*275*/ .long sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
/*280*/ .long sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
/*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_newfstatat
/*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*290*/ .long sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
/*295*/ .long sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare

View File

@ -428,6 +428,27 @@ asmlinkage long compat_sys_fstat64(unsigned int fd,
return error;
}
asmlinkage long compat_sys_fstatat64(unsigned int dfd, char __user *filename,
struct compat_stat64 __user * statbuf, int flag)
{
struct kstat stat;
int error = -EINVAL;
if ((flag & ~AT_SYMLINK_NOFOLLOW) != 0)
goto out;
if (flag & AT_SYMLINK_NOFOLLOW)
error = vfs_lstat_fd(dfd, filename, &stat);
else
error = vfs_stat_fd(dfd, filename, &stat);
if (!error)
error = cp_compat_stat64(&stat, statbuf);
out:
return error;
}
asmlinkage long compat_sys_sysfs(int option, u32 arg1, u32 arg2)
{
return sys_sysfs(option, arg1, arg2);

View File

@ -77,7 +77,7 @@ sys_call_table32:
/*270*/ .word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink
.word compat_sys_mq_timedsend, compat_sys_mq_timedreceive, compat_sys_mq_notify, compat_sys_mq_getsetattr, compat_sys_waitid
/*280*/ .word sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_newfstatat
.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare
@ -146,7 +146,7 @@ sys_call_table:
/*270*/ .word sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
.word sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
/*280*/ .word sys_nis_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_newfstatat
.word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare

View File

@ -28,6 +28,10 @@ config GENERIC_IRQ_PROBE
bool
default y
config TIME_LOW_RES
bool
default y
# Turn off some random 386 crap that can affect device config
config ISA
bool

View File

@ -1,7 +1,7 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.16-rc1-git2
# Thu Jan 19 10:05:21 2006
# Linux kernel version: 2.6.16-rc3
# Mon Feb 13 22:31:24 2006
#
CONFIG_X86_64=y
CONFIG_64BIT=y
@ -21,7 +21,6 @@ CONFIG_DMI=y
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
@ -267,6 +266,7 @@ CONFIG_NET=y
#
# Networking options
#
# CONFIG_NETDEBUG is not set
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
@ -446,7 +446,6 @@ CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
CONFIG_BLK_DEV_PDC202XX_NEW=y
# CONFIG_PDC202XX_FORCE is not set
# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SIS5513 is not set
@ -573,7 +572,33 @@ CONFIG_FUSION_MAX_SGE=128
#
# IEEE 1394 (FireWire) support
#
# CONFIG_IEEE1394 is not set
CONFIG_IEEE1394=y
#
# Subsystem Options
#
# CONFIG_IEEE1394_VERBOSEDEBUG is not set
# CONFIG_IEEE1394_OUI_DB is not set
# CONFIG_IEEE1394_EXTRA_CONFIG_ROMS is not set
# CONFIG_IEEE1394_EXPORT_FULL_API is not set
#
# Device Drivers
#
#
# Texas Instruments PCILynx requires I2C
#
CONFIG_IEEE1394_OHCI1394=y
#
# Protocol Drivers
#
# CONFIG_IEEE1394_VIDEO1394 is not set
# CONFIG_IEEE1394_SBP2 is not set
# CONFIG_IEEE1394_ETH1394 is not set
# CONFIG_IEEE1394_DV1394 is not set
CONFIG_IEEE1394_RAWIO=y
#
# I2O device support
@ -772,6 +797,7 @@ CONFIG_SERIAL_8250_RUNTIME_UARTS=4
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
@ -871,6 +897,7 @@ CONFIG_HPET_MMAP=y
#
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
@ -1101,7 +1128,6 @@ CONFIG_USB_MON=y
# EDAC - error detection and reporting (RAS)
#
# CONFIG_EDAC is not set
# CONFIG_EDAC_POLL is not set
#
# Firmware Drivers
@ -1291,14 +1317,12 @@ CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_VM is not set
# CONFIG_FRAME_POINTER is not set
# CONFIG_FORCED_INLINING is not set
# CONFIG_UNWIND_INFO is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_INIT_DEBUG=y
# CONFIG_DEBUG_RODATA is not set
# CONFIG_IOMMU_DEBUG is not set

View File

@ -1152,6 +1152,7 @@ __setup("noapicmaintimer", setup_noapicmaintimer);
static __init int setup_apicpmtimer(char *s)
{
apic_calibrate_pmtmr = 1;
notsc_setup(NULL);
return setup_apicmaintimer(NULL);
}
__setup("apicpmtimer", setup_apicpmtimer);

View File

@ -554,6 +554,7 @@ iret_label:
/* running with kernel gs */
bad_iret:
movq $-9999,%rdi /* better code? */
sti
jmp do_exit
.previous

View File

@ -213,6 +213,11 @@ ENTRY(early_idt_handler)
cmpl $2,early_recursion_flag(%rip)
jz 1f
call dump_stack
#ifdef CONFIG_KALLSYMS
leaq early_idt_ripmsg(%rip),%rdi
movq 8(%rsp),%rsi # get rip again
call __print_symbol
#endif
1: hlt
jmp 1b
early_recursion_flag:
@ -220,6 +225,8 @@ early_recursion_flag:
early_idt_msg:
.asciz "PANIC: early exception rip %lx error %lx cr2 %lx\n"
early_idt_ripmsg:
.asciz "RIP %s\n"
.code32
ENTRY(no_long_mode)

View File

@ -30,6 +30,9 @@
#include <linux/mc146818rtc.h>
#include <linux/acpi.h>
#include <linux/sysdev.h>
#ifdef CONFIG_ACPI
#include <acpi/acpi_bus.h>
#endif
#include <asm/io.h>
#include <asm/smp.h>
@ -260,6 +263,8 @@ __setup("apic", enable_ioapic_setup);
And another hack to disable the IOMMU on VIA chipsets.
... and others. Really should move this somewhere else.
Kludge-O-Rama. */
void __init check_ioapic(void)
{
@ -307,6 +312,17 @@ void __init check_ioapic(void)
case PCI_VENDOR_ID_ATI:
if (apic_runs_main_timer != 0)
break;
#ifdef CONFIG_ACPI
/* Don't do this for laptops right
now because their timer
doesn't necessarily tick in C2/3 */
if (acpi_fadt.revision >= 3 &&
(acpi_fadt.plvl2_lat + acpi_fadt.plvl3_lat) < 1100) {
printk(KERN_INFO
"ATI board detected, but seems to be a laptop. Timer might be shakey, sorry\n");
break;
}
#endif
printk(KERN_INFO
"ATI board detected. Using APIC/PM timer.\n");
apic_runs_main_timer = 1;

View File

@ -288,9 +288,9 @@ static int __init smp_read_mpc(struct mp_config_table *mpc)
memcpy(str,mpc->mpc_productid,12);
str[12]=0;
printk(KERN_INFO "Product ID: %s ",str);
printk("Product ID: %s ",str);
printk(KERN_INFO "APIC at: 0x%X\n",mpc->mpc_lapic);
printk("APIC at: 0x%X\n",mpc->mpc_lapic);
/* save the local APIC address, it might be non-default */
if (!acpi_lapic)

View File

@ -236,6 +236,7 @@ static void enable_lapic_nmi_watchdog(void)
{
if (nmi_active < 0) {
nmi_watchdog = NMI_LOCAL_APIC;
touch_nmi_watchdog();
setup_apic_nmi_watchdog();
}
}
@ -456,15 +457,17 @@ static DEFINE_PER_CPU(int, nmi_touch);
void touch_nmi_watchdog (void)
{
int i;
if (nmi_watchdog > 0) {
unsigned cpu;
/*
* Tell other CPUs to reset their alert counters. We cannot
* do it ourselves because the alert count increase is not
* atomic.
*/
for (i = 0; i < NR_CPUS; i++)
per_cpu(nmi_touch, i) = 1;
/*
* Tell other CPUs to reset their alert counters. We cannot
* do it ourselves because the alert count increase is not
* atomic.
*/
for_each_present_cpu (cpu)
per_cpu(nmi_touch, cpu) = 1;
}
touch_softlockup_watchdog();
}

View File

@ -1327,8 +1327,7 @@ static int __init nohpet_setup(char *s)
__setup("nohpet", nohpet_setup);
static int __init notsc_setup(char *s)
int __init notsc_setup(char *s)
{
notsc = 1;
return 0;

View File

@ -155,7 +155,7 @@ int __init k8_scan_nodes(unsigned long start, unsigned long end)
if (!found)
return -1;
memnode_shift = compute_hash_shift(nodes, numnodes);
memnode_shift = compute_hash_shift(nodes, 8);
if (memnode_shift < 0) {
printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n");
return -1;

View File

@ -351,7 +351,7 @@ void __init init_cpu_to_node(void)
continue;
if (apicid_to_node[apicid] == NUMA_NO_NODE)
continue;
cpu_to_node[i] = apicid_to_node[apicid];
numa_set_node(i,apicid_to_node[apicid]);
}
}

View File

@ -228,7 +228,8 @@ static int nodes_cover_memory(void)
}
e820ram = end_pfn - e820_hole_size(0, end_pfn);
if (pxmram < e820ram) {
/* We seem to lose 3 pages somewhere. Allow a bit of slack. */
if ((long)(e820ram - pxmram) >= 1*1024*1024) {
printk(KERN_ERR
"SRAT: PXMs only cover %luMB of your %luMB e820 RAM. Not used.\n",
(pxmram << PAGE_SHIFT) >> 20,
@ -270,7 +271,7 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end)
return -1;
}
memnode_shift = compute_hash_shift(nodes, nodes_weight(nodes_parsed));
memnode_shift = compute_hash_shift(nodes, MAX_NUMNODES);
if (memnode_shift < 0) {
printk(KERN_ERR
"SRAT: No NUMA node hash function found. Contact maintainer\n");

View File

@ -391,8 +391,7 @@ acpi_rs_get_list_length(u8 * aml_buffer,
* Ensure a 32-bit boundary for the structure
*/
extra_struct_bytes =
ACPI_ROUND_UP_to_32_bITS(resource_length) -
resource_length;
ACPI_ROUND_UP_to_32_bITS(resource_length);
break;
case ACPI_RESOURCE_NAME_END_TAG:
@ -408,8 +407,7 @@ acpi_rs_get_list_length(u8 * aml_buffer,
* Add vendor data and ensure a 32-bit boundary for the structure
*/
extra_struct_bytes =
ACPI_ROUND_UP_to_32_bITS(resource_length) -
resource_length;
ACPI_ROUND_UP_to_32_bITS(resource_length);
break;
case ACPI_RESOURCE_NAME_ADDRESS32:

View File

@ -645,7 +645,7 @@ static void pkt_copy_bio_data(struct bio *src_bio, int seg, int offs, struct pag
* b) The data can be used as cache to avoid read requests if we receive a
* new write request for the same zone.
*/
static void pkt_make_local_copy(struct packet_data *pkt, struct page **pages, int *offsets)
static void pkt_make_local_copy(struct packet_data *pkt, struct bio_vec *bvec)
{
int f, p, offs;
@ -653,15 +653,15 @@ static void pkt_make_local_copy(struct packet_data *pkt, struct page **pages, in
p = 0;
offs = 0;
for (f = 0; f < pkt->frames; f++) {
if (pages[f] != pkt->pages[p]) {
void *vfrom = kmap_atomic(pages[f], KM_USER0) + offsets[f];
if (bvec[f].bv_page != pkt->pages[p]) {
void *vfrom = kmap_atomic(bvec[f].bv_page, KM_USER0) + bvec[f].bv_offset;
void *vto = page_address(pkt->pages[p]) + offs;
memcpy(vto, vfrom, CD_FRAMESIZE);
kunmap_atomic(vfrom, KM_USER0);
pages[f] = pkt->pages[p];
offsets[f] = offs;
bvec[f].bv_page = pkt->pages[p];
bvec[f].bv_offset = offs;
} else {
BUG_ON(offsets[f] != offs);
BUG_ON(bvec[f].bv_offset != offs);
}
offs += CD_FRAMESIZE;
if (offs >= PAGE_SIZE) {
@ -991,18 +991,17 @@ try_next_bio:
static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
{
struct bio *bio;
struct page *pages[PACKET_MAX_SIZE];
int offsets[PACKET_MAX_SIZE];
int f;
int frames_write;
struct bio_vec *bvec = pkt->w_bio->bi_io_vec;
for (f = 0; f < pkt->frames; f++) {
pages[f] = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
offsets[f] = (f * CD_FRAMESIZE) % PAGE_SIZE;
bvec[f].bv_page = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
bvec[f].bv_offset = (f * CD_FRAMESIZE) % PAGE_SIZE;
}
/*
* Fill-in pages[] and offsets[] with data from orig_bios.
* Fill-in bvec with data from orig_bios.
*/
frames_write = 0;
spin_lock(&pkt->lock);
@ -1024,11 +1023,11 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
}
if (src_bvl->bv_len - src_offs >= CD_FRAMESIZE) {
pages[f] = src_bvl->bv_page;
offsets[f] = src_bvl->bv_offset + src_offs;
bvec[f].bv_page = src_bvl->bv_page;
bvec[f].bv_offset = src_bvl->bv_offset + src_offs;
} else {
pkt_copy_bio_data(bio, segment, src_offs,
pages[f], offsets[f]);
bvec[f].bv_page, bvec[f].bv_offset);
}
src_offs += CD_FRAMESIZE;
frames_write++;
@ -1042,7 +1041,7 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
BUG_ON(frames_write != pkt->write_size);
if (test_bit(PACKET_MERGE_SEGS, &pd->flags) || (pkt->write_size < pkt->frames)) {
pkt_make_local_copy(pkt, pages, offsets);
pkt_make_local_copy(pkt, bvec);
pkt->cache_valid = 1;
} else {
pkt->cache_valid = 0;
@ -1055,17 +1054,9 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
pkt->w_bio->bi_bdev = pd->bdev;
pkt->w_bio->bi_end_io = pkt_end_io_packet_write;
pkt->w_bio->bi_private = pkt;
for (f = 0; f < pkt->frames; f++) {
if ((f + 1 < pkt->frames) && (pages[f + 1] == pages[f]) &&
(offsets[f + 1] = offsets[f] + CD_FRAMESIZE)) {
if (!bio_add_page(pkt->w_bio, pages[f], CD_FRAMESIZE * 2, offsets[f]))
BUG();
f++;
} else {
if (!bio_add_page(pkt->w_bio, pages[f], CD_FRAMESIZE, offsets[f]))
BUG();
}
}
for (f = 0; f < pkt->frames; f++)
if (!bio_add_page(pkt->w_bio, bvec[f].bv_page, CD_FRAMESIZE, bvec[f].bv_offset))
BUG();
VPRINTK("pktcdvd: vcnt=%d\n", pkt->w_bio->bi_vcnt);
atomic_set(&pkt->io_wait, 1);
@ -1548,7 +1539,7 @@ static int pkt_good_disc(struct pktcdvd_device *pd, disc_information *di)
case 0x12: /* DVD-RAM */
return 0;
default:
printk("pktcdvd: Wrong disc profile (%x)\n", pd->mmc3_profile);
VPRINTK("pktcdvd: Wrong disc profile (%x)\n", pd->mmc3_profile);
return 1;
}
@ -1894,8 +1885,8 @@ static int pkt_open_write(struct pktcdvd_device *pd)
unsigned int write_speed, media_write_speed, read_speed;
if ((ret = pkt_probe_settings(pd))) {
DPRINTK("pktcdvd: %s failed probe\n", pd->name);
return -EIO;
VPRINTK("pktcdvd: %s failed probe\n", pd->name);
return -EROFS;
}
if ((ret = pkt_set_write_settings(pd))) {
@ -2053,10 +2044,9 @@ static int pkt_open(struct inode *inode, struct file *file)
goto out_dec;
}
} else {
if (pkt_open_dev(pd, file->f_mode & FMODE_WRITE)) {
ret = -EIO;
ret = pkt_open_dev(pd, file->f_mode & FMODE_WRITE);
if (ret)
goto out_dec;
}
/*
* needed here as well, since ext2 (among others) may change
* the blocksize at mount time
@ -2436,11 +2426,12 @@ static int pkt_ioctl(struct inode *inode, struct file *file, unsigned int cmd, u
* The door gets locked when the device is opened, so we
* have to unlock it or else the eject command fails.
*/
pkt_lock_door(pd, 0);
if (pd->refcnt == 1)
pkt_lock_door(pd, 0);
return blkdev_ioctl(pd->bdev->bd_inode, file, cmd, arg);
default:
printk("pktcdvd: Unknown ioctl for %s (%x)\n", pd->name, cmd);
VPRINTK("pktcdvd: Unknown ioctl for %s (%x)\n", pd->name, cmd);
return -ENOTTY;
}
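The rewrite above retires the parallel pages[]/offsets[] stack arrays and records each frame's placement directly in the write bio's own bi_io_vec; it also drops the old segment-merging loop, whose condition used `=` where `==` was intended. The fill-and-add sequence reduces to the following sketch:

struct bio_vec *bvec = pkt->w_bio->bi_io_vec;
int f;

/* Record each frame's page/offset once, in the bio's own io_vec. */
for (f = 0; f < pkt->frames; f++) {
	bvec[f].bv_page   = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
	bvec[f].bv_offset = (f * CD_FRAMESIZE) % PAGE_SIZE;
}

/* Hand the same entries to bio_add_page(), one frame per segment. */
for (f = 0; f < pkt->frames; f++)
	if (!bio_add_page(pkt->w_bio, bvec[f].bv_page, CD_FRAMESIZE,
			  bvec[f].bv_offset))
		BUG();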

View File

@ -474,18 +474,6 @@ static int bt3c_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long
/* ======================== Card services HCI interaction ======================== */
static struct device *bt3c_device(void)
{
static struct device dev = {
.bus_id = "pcmcia",
};
kobject_set_name(&dev.kobj, "bt3c");
kobject_init(&dev.kobj);
return &dev;
}
static int bt3c_load_firmware(bt3c_info_t *info, unsigned char *firmware, int count)
{
char *ptr = (char *) firmware;
@ -574,6 +562,7 @@ static int bt3c_open(bt3c_info_t *info)
{
const struct firmware *firmware;
struct hci_dev *hdev;
client_handle_t handle;
int err;
spin_lock_init(&(info->lock));
@ -605,8 +594,10 @@ static int bt3c_open(bt3c_info_t *info)
hdev->owner = THIS_MODULE;
handle = info->link.handle;
/* Load firmware */
err = request_firmware(&firmware, "BT3CPCC.bin", bt3c_device());
err = request_firmware(&firmware, "BT3CPCC.bin", &handle_to_dev(handle));
if (err < 0) {
BT_ERR("Firmware request failed");
goto error;

View File

@ -85,7 +85,6 @@
{0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100}, \
{0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x5b60, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
{0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \
{0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \

View File

@ -150,17 +150,6 @@ static void rs_wait_until_sent(struct tty_struct *, int);
/* Standard COM flags (except for COM4, because of the 8514 problem) */
#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
/*
* tmp_buf is used as a temporary buffer by serial_write. We need to
* lock it in case the memcpy_fromfs blocks while swapping in a page,
* and some other program tries to do a serial write at the same time.
* Since the lock will only come under contention when the system is
* swapping and available memory is low, it makes sense to share one
* buffer across all the serial ports, since it significantly saves
* memory if large numbers of serial ports are open.
*/
static unsigned char *tmp_buf;
static inline int serial_paranoia_check(struct esp_struct *info,
char *name, const char *routine)
{
@ -1267,7 +1256,7 @@ static int rs_write(struct tty_struct * tty,
if (serial_paranoia_check(info, tty->name, "rs_write"))
return 0;
if (!tty || !info->xmit_buf || !tmp_buf)
if (!tty || !info->xmit_buf)
return 0;
while (1) {
@ -2291,11 +2280,7 @@ static int esp_open(struct tty_struct *tty, struct file * filp)
tty->driver_data = info;
info->tty = tty;
if (!tmp_buf) {
tmp_buf = (unsigned char *) get_zeroed_page(GFP_KERNEL);
if (!tmp_buf)
return -ENOMEM;
}
spin_unlock_irqrestore(&info->lock, flags);
/*
* Start up serial port
@ -2602,9 +2587,6 @@ static void __exit espserial_exit(void)
free_pages((unsigned long)dma_buffer,
get_order(DMA_BUFFER_SZ));
if (tmp_buf)
free_page((unsigned long)tmp_buf);
while (free_pio_buf) {
pio_buf = free_pio_buf->next;
kfree(free_pio_buf);

View File

@ -956,22 +956,18 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
}
} else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) {
struct acpi_resource_extended_irq *irqp;
int i;
int i, irq;
irqp = &res->data.extended_irq;
if (irqp->interrupt_count > 0) {
hdp->hd_nirqs = irqp->interrupt_count;
for (i = 0; i < irqp->interrupt_count; i++) {
irq = acpi_register_gsi(irqp->interrupts[i],
irqp->triggering, irqp->polarity);
if (irq < 0)
return AE_ERROR;
for (i = 0; i < hdp->hd_nirqs; i++) {
int rc =
acpi_register_gsi(irqp->interrupts[i],
irqp->triggering,
irqp->polarity);
if (rc < 0)
return AE_ERROR;
hdp->hd_irq[i] = rc;
}
hdp->hd_irq[hdp->hd_nirqs] = irq;
hdp->hd_nirqs++;
}
}

View File

@ -33,6 +33,7 @@
static int TPM_INF_DATA;
static int TPM_INF_ADDR;
static int TPM_INF_BASE;
static int TPM_INF_ADDR_LEN;
static int TPM_INF_PORT_LEN;
/* TPM header definitions */
@ -195,6 +196,7 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
int i;
int ret;
u32 size = 0;
number_of_wtx = 0;
recv_begin:
/* start receiving header */
@ -378,24 +380,35 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
if (pnp_port_valid(dev, 0) && pnp_port_valid(dev, 1) &&
!(pnp_port_flags(dev, 0) & IORESOURCE_DISABLED)) {
TPM_INF_ADDR = pnp_port_start(dev, 0);
TPM_INF_ADDR_LEN = pnp_port_len(dev, 0);
TPM_INF_DATA = (TPM_INF_ADDR + 1);
TPM_INF_BASE = pnp_port_start(dev, 1);
TPM_INF_PORT_LEN = pnp_port_len(dev, 1);
if (!TPM_INF_PORT_LEN)
return -EINVAL;
if ((TPM_INF_PORT_LEN < 4) || (TPM_INF_ADDR_LEN < 2)) {
rc = -EINVAL;
goto err_last;
}
dev_info(&dev->dev, "Found %s with ID %s\n",
dev->name, dev_id->id);
if (!((TPM_INF_BASE >> 8) & 0xff))
return -EINVAL;
if (!((TPM_INF_BASE >> 8) & 0xff)) {
rc = -EINVAL;
goto err_last;
}
/* publish my base address and request region */
tpm_inf.base = TPM_INF_BASE;
if (request_region
(tpm_inf.base, TPM_INF_PORT_LEN, "tpm_infineon0") == NULL) {
release_region(tpm_inf.base, TPM_INF_PORT_LEN);
return -EINVAL;
rc = -EINVAL;
goto err_last;
}
if (request_region(TPM_INF_ADDR, TPM_INF_ADDR_LEN,
"tpm_infineon0") == NULL) {
rc = -EINVAL;
goto err_last;
}
} else {
return -EINVAL;
rc = -EINVAL;
goto err_last;
}
/* query chip for its vendor, its version number a.s.o. */
@ -443,8 +456,8 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
dev_err(&dev->dev,
"Could not set IO-ports to 0x%lx\n",
tpm_inf.base);
release_region(tpm_inf.base, TPM_INF_PORT_LEN);
return -EIO;
rc = -EIO;
goto err_release_region;
}
/* activate register */
@ -471,14 +484,21 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
rc = tpm_register_hardware(&dev->dev, &tpm_inf);
if (rc < 0) {
release_region(tpm_inf.base, TPM_INF_PORT_LEN);
return -ENODEV;
rc = -ENODEV;
goto err_release_region;
}
return 0;
} else {
dev_info(&dev->dev, "No Infineon TPM found!\n");
return -ENODEV;
rc = -ENODEV;
goto err_release_region;
}
err_release_region:
release_region(tpm_inf.base, TPM_INF_PORT_LEN);
release_region(TPM_INF_ADDR, TPM_INF_ADDR_LEN);
err_last:
return rc;
}
static __devexit void tpm_inf_pnp_remove(struct pnp_dev *dev)
@ -518,5 +538,5 @@ module_exit(cleanup_inf);
MODULE_AUTHOR("Marcel Selhorst <selhorst@crypto.rub.de>");
MODULE_DESCRIPTION("Driver for Infineon TPM SLD 9630 TT 1.1 / SLB 9635 TT 1.2");
MODULE_VERSION("1.6");
MODULE_VERSION("1.7");
MODULE_LICENSE("GPL");
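The tpm_infineon changes above convert scattered early returns into the kernel's goto-unwind idiom: failures jump to labels that release resources in reverse order of acquisition, so no path leaks a claimed I/O region. The skeleton, with illustrative names:

rc = -EINVAL;
if (request_region(base, len, "drv") == NULL)
	goto err_last;			/* nothing claimed yet */

rc = -EIO;
if (setup_hardware() < 0)
	goto err_release_region;	/* undo the claim */

return 0;

err_release_region:
	release_region(base, len);
err_last:
	return rc;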

Some files were not shown because too many files have changed in this diff.