Merge git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86

* git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86: (890 commits)
  x86: fix nodemap_size according to nodeid bits
  x86: fix overlap between pagetable with bss section
  x86: add PCI IDs to k8topology_64.c
  x86: fix early_ioremap pagetable ops
  x86: use the same pgd_list for PAE and 64-bit
  x86: defer cr3 reload when doing pud_clear()
  x86: early boot debugging via FireWire (ohci1394_dma=early)
  x86: don't special-case pmd allocations as much
  x86: shrink some ifdefs in fault.c
  x86: ignore spurious faults
  x86: remove nx_enabled from fault.c
  x86: unify fault_32|64.c
  x86: unify fault_32|64.c with ifdefs
  x86: unify fault_32|64.c by ifdef'd function bodies
  x86: arch/x86/mm/init_32.c printk fixes
  x86: arch/x86/mm/init_32.c cleanup
  x86: arch/x86/mm/init_64.c printk fixes
  x86: unify ioremap
  x86: fixes some bugs about EFI memory map handling
  x86: use reboot_type on EFI 32
  ...
This commit is contained in:
Linus Torvalds 2008-01-31 00:40:09 +11:00
commit dd430ca20c
635 changed files with 36033 additions and 36801 deletions


@ -0,0 +1,179 @@
Using physical DMA provided by OHCI-1394 FireWire controllers for debugging
---------------------------------------------------------------------------
Introduction
------------
Basically all FireWire controllers in use today are compliant with the
OHCI-1394 specification, which defines the controller as a PCI bus
master that uses DMA to offload data transfers from the CPU and has
a "Physical Response Unit" which executes specific requests by employing
PCI bus-master DMA after applying filters defined by the OHCI-1394 driver.
Once properly configured, remote machines can send these requests to
ask the OHCI-1394 controller to perform read and write requests on
physical system memory and, for read requests, send the result of
the physical memory read back to the requester.
With that, it is possible to debug issues by reading interesting memory
locations such as buffers like the printk buffer or the process table.
Retrieving a full system memory dump is also possible over FireWire,
at data transfer rates on the order of 10 MB/s or more.
Memory access is currently limited to the low 4 GB of physical address
space. This can be a problem on IA64 machines where memory is located
mostly above that limit, but it is rarely a problem on more common
hardware such as x86, x86-64 and PowerPC machines.
Together with early initialization of the OHCI-1394 controller, this
facility has proved most useful for examining long debug logs in the printk
buffer when debugging early boot problems in areas like ACPI, where the
system fails to boot and other means of debugging (serial port) are either
not available (notebooks) or too slow for extensive debug output.
Drivers
-------
The OHCI-1394 drivers in drivers/firewire and drivers/ieee1394 initialize
the OHCI-1394 controllers to a working state and can be used to enable
physical DMA. By default you only have to load the driver, and physical
DMA access will be granted to all remote nodes; it can be turned off
when using the ohci1394 driver.
Because these drivers depend on PCI enumeration having completed, an
initialization routine was added that runs very early, long before
console_init() (which makes the printk buffer appear on the console)
can be called.
To activate it, enable CONFIG_PROVIDE_OHCI1394_DMA_INIT (Kernel hacking menu:
Provide code for enabling DMA over FireWire early on boot) and pass the
parameter "ohci1394_dma=early" to the recompiled kernel on boot.
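As a concrete illustration, the parameter is simply appended to the kernel
line of the boot entry. The entry below is a hypothetical GRUB-era example
(kernel path and root device are made up), and the sketch edits a temporary
copy so it is safe to try anywhere:

```shell
# Append ohci1394_dma=early to a kernel boot line (illustrative entry;
# adapt the path and root= to your system and bootloader).
cfg=$(mktemp)
echo 'kernel /boot/vmlinuz-2.6.24 root=/dev/sda1 ro' > "$cfg"

# Add the parameter to every "kernel ..." line.
sed -i 's/^kernel .*/& ohci1394_dma=early/' "$cfg"

grep 'ohci1394_dma=early' "$cfg"   # verify the parameter is present
```

The same one-word addition works on any bootloader that passes a kernel
command line; only the file being edited differs.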
Tools
-----
firescope - Originally developed by Benjamin Herrenschmidt; Andi Kleen ported
it from PowerPC to x86 and x86_64 and added functionality. Firescope can now
be used to view the printk buffer of a remote machine, even with live update.
Bernhard Kaindl enhanced firescope to support accessing 64-bit machines
from 32-bit firescope and vice versa:
- ftp://ftp.suse.de/private/bk/firewire/tools/firescope-0.2.2.tar.bz2
and he implemented fast system dump (alpha version - read README.txt):
- ftp://ftp.suse.de/private/bk/firewire/tools/firedump-0.1.tar.bz2
There is also a gdb proxy for FireWire which allows gdb to access
data which can be referenced from symbols found by gdb in vmlinux:
- ftp://ftp.suse.de/private/bk/firewire/tools/fireproxy-0.33.tar.bz2
The latest version of this gdb proxy (fireproxy-0.34) can communicate, not
yet stably, with kgdb over a memory-based communication module (kgdbom).
Getting Started
---------------
The OHCI-1394 specification requires the OHCI-1394 controller to
disable all physical DMA on each bus reset.
This means that if you want to debug an issue in a system state where
interrupts are disabled and where no polling of the OHCI-1394 controller
for bus resets takes place, you have to establish any FireWire cable
connections and fully initialize all FireWire hardware __before__ the
system enters such state.
Step-by-step instructions for using firescope with early OHCI initialization:
1) Verify that your hardware is supported:
Load the ohci1394 or the fw-ohci module and check your kernel logs.
You should see a line similar to
ohci1394: fw-host0: OHCI-1394 1.1 (PCI): IRQ=[18] MMIO=[fe9ff800-fe9fffff]
... Max Packet=[2048] IR/IT contexts=[4/8]
when loading the driver. If you have no supported controller, many PCI,
CardBus and even some ExpressCard cards which are fully compliant with the
OHCI-1394 specification are available. If a card requires no dedicated
driver for Windows operating systems, it is most likely OHCI-1394
compliant. Only specialized shops carry non-compliant cards; these are
based on TI PCILynx chips and require dedicated drivers for Windows
operating systems.
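The check in step 1 can be scripted. The grep pattern below is run against
the sample log line quoted above so it can be verified without hardware; on
a real system you would pipe dmesg through the same pattern (the exact log
format may differ between driver versions):

```shell
# Sample probe line from the text above (ohci1394 stack).
sample='ohci1394: fw-host0: OHCI-1394 1.1 (PCI): IRQ=[18] MMIO=[fe9ff800-fe9fffff]'

# Extract the OHCI-1394 spec version the controller reports.
echo "$sample" | grep -o 'OHCI-1394 [0-9.]*'

# On the debug target, after "modprobe ohci1394" (or fw-ohci):
#   dmesg | grep -o 'OHCI-1394 [0-9.]*'
```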
2) Establish a working FireWire cable connection:
Any FireWire cable will do, as long as it provides an electrically and
mechanically stable connection and has matching connectors (there are
small 4-pin and large 6-pin FireWire ports).
If a driver is running on both machines you should see a line like
ieee1394: Node added: ID:BUS[0-01:1023] GUID[0090270001b84bba]
on both machines in the kernel log when the cable is plugged in
and connects the two machines.
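To confirm that the two machines really see each other, the GUID can be
pulled out of the "Node added" line; each machine should log the other's
GUID. A small sketch, using the sample line from above:

```shell
# "Node added" line as logged by the ieee1394 stack (sample from above).
line='ieee1394: Node added: ID:BUS[0-01:1023]  GUID[0090270001b84bba]'

# Pull out the hex GUID between GUID[ and ].
guid=$(echo "$line" | sed -n 's/.*GUID\[\([0-9a-f]*\)\].*/\1/p')
echo "$guid"
```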
3) Test physical DMA using firescope:
On the debug host,
- load the raw1394 module,
- make sure that /dev/raw1394 is accessible,
then start firescope:
$ firescope
Port 0 (ohci1394) opened, 2 nodes detected
FireScope
---------
Target : <unspecified>
Gen : 1
[Ctrl-T] choose target
[Ctrl-H] this menu
[Ctrl-Q] quit
------> Press Ctrl-T now, the output should be similar to:
2 nodes available, local node is: 0
0: ffc0, uuid: 00000000 00000000 [LOCAL]
1: ffc1, uuid: 00279000 ba4bb801
Besides the [LOCAL] node, it must show another node without an error message.
4) Prepare for debugging with early OHCI-1394 initialization:
4.1) Kernel compilation and installation on debug target
Compile the kernel to be debugged with CONFIG_PROVIDE_OHCI1394_DMA_INIT
(Kernel hacking: Provide code for enabling DMA over FireWire early on boot)
enabled and install it on the machine to be debugged (debug target).
4.2) Transfer the System.map of the debugged kernel to the debug host
Copy the System.map of the kernel to be debugged to the debug host (the host
which is connected to the debugged machine over the FireWire cable).
5) Retrieving the printk buffer contents:
With the FireWire cable connected and the OHCI-1394 driver loaded on the
debug host, reboot the debugged machine, booting the kernel which has
CONFIG_PROVIDE_OHCI1394_DMA_INIT enabled, with the option ohci1394_dma=early.
Then, on the debugging host, run firescope, for example by using -A:
firescope -A System.map-of-debug-target-kernel
Note: -A automatically attaches to the first non-local node. It only works
reliably if exactly two machines are connected via FireWire.
After having attached to the debug target, press Ctrl-D to view the
complete printk buffer or Ctrl-U to enter auto update mode and get an
updated live view of recent kernel messages logged on the debug target.
Call "firescope -h" to get more information on firescope's options.
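System.map is what lets the debug host turn symbol names into addresses to
read over FireWire. For instance, the printk buffer can be located by its
symbol; the symbol name __log_buf below matches 2.6-era kernels but is an
assumption, and the map contents are fabricated for illustration:

```shell
# Fabricated System.map excerpt for illustration only.
map=$(mktemp)
cat > "$map" <<'EOF'
c0355e20 D log_buf_len
c0355e24 d log_buf
c04c8a40 b __log_buf
EOF

# Look up the address of the printk buffer symbol.
awk '$3 == "__log_buf" { print $1 }' "$map"
```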
Notes
-----
Documentation and specifications: ftp://ftp.suse.de/private/bk/firewire/docs
FireWire is a trademark of Apple Inc. - for more information please refer to:
http://en.wikipedia.org/wiki/FireWire


@ -416,8 +416,21 @@ and is between 256 and 4096 characters. It is defined in the file
 			[SPARC64] tick
 			[X86-64] hpet,tsc
-	code_bytes	[IA32] How many bytes of object code to print in an
-			oops report.
+	clearcpuid=BITNUM [X86]
+			Disable CPUID feature X for the kernel. See
+			include/asm-x86/cpufeature.h for the valid bit numbers.
+			Note the Linux specific bits are not necessarily
+			stable over kernel options, but the vendor specific
+			ones should be.
+			Also note that user programs calling CPUID directly
+			or using the feature without checking anything
+			will still see it. This just prevents it from
+			being used by the kernel or shown in /proc/cpuinfo.
+			Also note the kernel might malfunction if you disable
+			some critical bits.
+
+	code_bytes	[IA32/X86_64] How many bytes of object code to print
+			in an oops report.
 			Range: 0 - 8192
 			Default: 64
@ -570,6 +583,12 @@ and is between 256 and 4096 characters. It is defined in the file
 			See drivers/char/README.epca and
 			Documentation/digiepca.txt.
+	disable_mtrr_trim [X86, Intel and AMD only]
+			By default the kernel will trim any uncacheable
+			memory out of your available memory pool based on
+			MTRR settings. This parameter disables that behavior,
+			possibly causing your machine to run very slowly.
+
 	dmasound=	[HW,OSS] Sound subsystem buffers
 	dscc4.setup=	[NET]
@ -660,6 +679,10 @@ and is between 256 and 4096 characters. It is defined in the file
 	gamma=		[HW,DRM]
+	gart_fix_e820=	[X86_64] disable the fix e820 for K8 GART
+			Format: off | on
+			default: on
+
 	gdth=		[HW,SCSI]
 			See header of drivers/scsi/gdth.c.
@ -794,6 +817,16 @@ and is between 256 and 4096 characters. It is defined in the file
 			for translation below 32 bit and if not available
 			then look in the higher range.
+	io_delay=	[X86-32,X86-64] I/O delay method
+		0x80
+			Standard port 0x80 based delay
+		0xed
+			Alternate port 0xed based delay (needed on some systems)
+		udelay
+			Simple two microseconds delay
+		none
+			No delay
+
 	io7=		[HW] IO7 for Marvel based alpha systems
 			See comment before marvel_specify_io7 in
 			arch/alpha/kernel/core_marvel.c.
@ -1059,6 +1092,11 @@ and is between 256 and 4096 characters. It is defined in the file
 			Multi-Function General Purpose Timers on AMD Geode
 			platforms.
+	mfgptfix	[X86-32] Fix MFGPT timers on AMD Geode platforms when
+			the BIOS has incorrectly applied a workaround. TinyBIOS
+			version 0.98 is known to be affected, 0.99 fixes the
+			problem by letting the user disable the workaround.
+
 	mga=		[HW,DRM]
 	mousedev.tap_time=
@ -1159,6 +1197,8 @@ and is between 256 and 4096 characters. It is defined in the file
 	nodisconnect	[HW,SCSI,M68K] Disables SCSI disconnects.
+	noefi		[X86-32,X86-64] Disable EFI runtime services support.
+
 	noexec		[IA-64]
 	noexec		[X86-32,X86-64]
@ -1169,6 +1209,8 @@ and is between 256 and 4096 characters. It is defined in the file
 			register save and restore. The kernel will only save
 			legacy floating-point registers on task switch.
+	noclflush	[BUGS=X86] Don't use the CLFLUSH instruction
+
 	nohlt		[BUGS=ARM]
 	no-hlt		[BUGS=X86-32] Tells the kernel that the hlt
@ -1978,6 +2020,11 @@ and is between 256 and 4096 characters. It is defined in the file
 			vdso=1: enable VDSO (default)
 			vdso=0: disable VDSO mapping
+	vdso32=		[X86-32,X86-64]
+			vdso32=2: enable compat VDSO (default with COMPAT_VDSO)
+			vdso32=1: enable 32-bit VDSO (default)
+			vdso32=0: disable 32-bit VDSO mapping
+
 	vector=		[IA-64,SMP]
 			vector=percpu: enable percpu vector domain


@ -110,12 +110,18 @@ Idle loop
 Rebooting
-   reboot=b[ios] | t[riple] | k[bd] [, [w]arm | [c]old]
+   reboot=b[ios] | t[riple] | k[bd] | a[cpi] | e[fi] [, [w]arm | [c]old]
    bios	Use the CPU reboot vector for warm reset
    warm	Don't set the cold reboot flag
    cold	Set the cold reboot flag
    triple	Force a triple fault (init)
    kbd		Use the keyboard controller. cold reset (default)
+   acpi	Use the ACPI RESET_REG in the FADT. If ACPI is not configured or the
+		ACPI reset does not work, the reboot path attempts the reset using
+		the keyboard controller.
+   efi		Use efi reset_system runtime service. If EFI is not configured or the
+		EFI reset does not work, the reboot path attempts the reset using
+		the keyboard controller.
 
    Using warm reset will be much faster especially on big memory
    systems because the BIOS will not go through the memory check.


@ -19,6 +19,10 @@ Mechanics:
 - Build the kernel with the following configuration.
 	CONFIG_FB_EFI=y
 	CONFIG_FRAMEBUFFER_CONSOLE=y
+  If EFI runtime services are expected, the following configuration should
+  be selected.
+	CONFIG_EFI=y
+	CONFIG_EFI_VARS=y or m		# optional
 - Create a VFAT partition on the disk
 - Copy the following to the VFAT partition:
 	elilo bootloader with x86_64 support, elilo configuration file,
@ -27,3 +31,8 @@ Mechanics:
 	can be found in the elilo sourceforge project.
 - Boot to EFI shell and invoke elilo choosing the kernel image built
   in first step.
+- If some or all EFI runtime services don't work, you can try following
+  kernel command line parameters to turn off some or all EFI runtime
+  services.
+	noefi		turn off all EFI runtime services
+	reboot_type=k	turn off EFI reboot runtime service


@ -91,6 +91,11 @@ config GENERIC_IRQ_PROBE
 	bool
 	default y
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_GENERIC_SPINLOCK
 	bool
 	default y


@ -42,6 +42,11 @@ config MMU
 config SWIOTLB
 	bool
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
@ -75,6 +80,9 @@ config GENERIC_TIME_VSYSCALL
 	bool
 	default y
 
+config ARCH_SETS_UP_PER_CPU_AREA
+	def_bool y
+
 config DMI
 	bool
 	default y


@ -222,7 +222,8 @@ elf32_set_personality (void)
 }
 
 static unsigned long
-elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map(struct file *filep, unsigned long addr, struct elf_phdr *eppnt,
+	  int prot, int type, unsigned long unused)
 {
 	unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;


@ -947,7 +947,7 @@ percpu_modcopy (void *pcpudst, const void *src, unsigned long size)
 {
 	unsigned int i;
 
 	for_each_possible_cpu(i) {
-		memcpy(pcpudst + __per_cpu_offset[i], src, size);
+		memcpy(pcpudst + per_cpu_offset(i), src, size);
 	}
 }
 #endif /* CONFIG_SMP */


@ -235,6 +235,11 @@ config IRAM_SIZE
 # Define implied options from the CPU selection here
 #
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_GENERIC_SPINLOCK
 	bool
 	depends on M32R


@ -694,6 +694,11 @@ source "arch/mips/vr41xx/Kconfig"
 endmenu
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_GENERIC_SPINLOCK
 	bool
 	default y


@ -24,9 +24,7 @@ DEFINE_SPINLOCK(i8253_lock);
 static void init_pit_timer(enum clock_event_mode mode,
 			   struct clock_event_device *evt)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&i8253_lock, flags);
+	spin_lock(&i8253_lock);
 
 	switch(mode) {
 	case CLOCK_EVT_MODE_PERIODIC:
@ -55,7 +53,7 @@ static void init_pit_timer(enum clock_event_mode mode,
 		/* Nothing to do here */
 		break;
 	}
-	spin_unlock_irqrestore(&i8253_lock, flags);
+	spin_unlock(&i8253_lock);
 }
 
 /*
@ -65,12 +63,10 @@ static void init_pit_timer(enum clock_event_mode mode,
  */
 static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&i8253_lock, flags);
+	spin_lock(&i8253_lock);
 	outb_p(delta & 0xff , PIT_CH0);	/* LSB */
 	outb(delta >> 8 , PIT_CH0);	/* MSB */
-	spin_unlock_irqrestore(&i8253_lock, flags);
+	spin_unlock(&i8253_lock);
 
 	return 0;
 }


@ -19,6 +19,11 @@ config MMU
 config STACK_GROWSUP
 	def_bool y
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_GENERIC_SPINLOCK
 	def_bool y


@ -42,6 +42,9 @@ config GENERIC_HARDIRQS
 	bool
 	default y
 
+config ARCH_SETS_UP_PER_CPU_AREA
+	def_bool PPC64
+
 config IRQ_PER_CPU
 	bool
 	default y
@ -53,6 +56,11 @@ config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
 
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config ARCH_HAS_ILOG2_U32
 	bool
 	default y


@ -256,7 +256,7 @@ static int set_evrregs(struct task_struct *task, unsigned long *data)
 #endif /* CONFIG_SPE */
 
-static void set_single_step(struct task_struct *task)
+void user_enable_single_step(struct task_struct *task)
 {
 	struct pt_regs *regs = task->thread.regs;
@ -271,7 +271,7 @@ static void set_single_step(struct task_struct *task)
 	set_tsk_thread_flag(task, TIF_SINGLESTEP);
 }
 
-static void clear_single_step(struct task_struct *task)
+void user_disable_single_step(struct task_struct *task)
 {
 	struct pt_regs *regs = task->thread.regs;
@ -313,7 +313,7 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
 void ptrace_disable(struct task_struct *child)
 {
 	/* make sure the single step bit is not set. */
-	clear_single_step(child);
+	user_disable_single_step(child);
 }
 
 /*
@ -445,52 +445,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
 		break;
 	}
 
-	case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
-	case PTRACE_CONT: { /* restart after signal. */
-		ret = -EIO;
-		if (!valid_signal(data))
-			break;
-		if (request == PTRACE_SYSCALL)
-			set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
-		else
-			clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
-		child->exit_code = data;
-		/* make sure the single step bit is not set. */
-		clear_single_step(child);
-		wake_up_process(child);
-		ret = 0;
-		break;
-	}
-
-	/*
-	 * make the child exit. Best I can do is send it a sigkill.
-	 * perhaps it should be put in the status that it wants to
-	 * exit.
-	 */
-	case PTRACE_KILL: {
-		ret = 0;
-		if (child->exit_state == EXIT_ZOMBIE) /* already dead */
-			break;
-		child->exit_code = SIGKILL;
-		/* make sure the single step bit is not set. */
-		clear_single_step(child);
-		wake_up_process(child);
-		break;
-	}
-
-	case PTRACE_SINGLESTEP: { /* set the trap flag. */
-		ret = -EIO;
-		if (!valid_signal(data))
-			break;
-		clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
-		set_single_step(child);
-		child->exit_code = data;
-		/* give it a chance to run. */
-		wake_up_process(child);
-		ret = 0;
-		break;
-	}
-
 	case PTRACE_GET_DEBUGREG: {
 		ret = -EINVAL;
 		/* We only support one DABR and no IABRS at the moment */


@ -66,6 +66,9 @@ config AUDIT_ARCH
 	bool
 	default y
 
+config ARCH_SETS_UP_PER_CPU_AREA
+	def_bool y
+
 config ARCH_NO_VIRT_TO_BUS
 	def_bool y
@ -200,6 +203,11 @@ config US2E_FREQ
 	  If in doubt, say N.
 
 # Global things across all Sun machines.
+config GENERIC_LOCKBREAK
+	bool
+	default y
+	depends on SMP && PREEMPT
+
 config RWSEM_GENERIC_SPINLOCK
 	bool


@ -71,10 +71,10 @@ EXPORT_SYMBOL(dump_thread);
 
 /* required for SMP */
 
-extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
+extern void __write_lock_failed(rwlock_t *rw);
 EXPORT_SYMBOL(__write_lock_failed);
 
-extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
+extern void __read_lock_failed(rwlock_t *rw);
 EXPORT_SYMBOL(__read_lock_failed);
 
 #endif


@ -3,10 +3,10 @@
  * Licensed under the GPL
  */
 
-#include "linux/ptrace.h"
-#include "asm/unistd.h"
-#include "asm/uaccess.h"
-#include "asm/ucontext.h"
+#include <linux/ptrace.h>
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/ucontext.h>
 #include "frame_kern.h"
 #include "skas.h"
@ -18,17 +18,17 @@ void copy_sc(struct uml_pt_regs *regs, void *from)
 	REGS_FS(regs->gp) = sc->fs;
 	REGS_ES(regs->gp) = sc->es;
 	REGS_DS(regs->gp) = sc->ds;
-	REGS_EDI(regs->gp) = sc->edi;
-	REGS_ESI(regs->gp) = sc->esi;
-	REGS_EBP(regs->gp) = sc->ebp;
-	REGS_SP(regs->gp) = sc->esp;
-	REGS_EBX(regs->gp) = sc->ebx;
-	REGS_EDX(regs->gp) = sc->edx;
-	REGS_ECX(regs->gp) = sc->ecx;
-	REGS_EAX(regs->gp) = sc->eax;
-	REGS_IP(regs->gp) = sc->eip;
+	REGS_EDI(regs->gp) = sc->di;
+	REGS_ESI(regs->gp) = sc->si;
+	REGS_EBP(regs->gp) = sc->bp;
+	REGS_SP(regs->gp) = sc->sp;
+	REGS_EBX(regs->gp) = sc->bx;
+	REGS_EDX(regs->gp) = sc->dx;
+	REGS_ECX(regs->gp) = sc->cx;
+	REGS_EAX(regs->gp) = sc->ax;
+	REGS_IP(regs->gp) = sc->ip;
 	REGS_CS(regs->gp) = sc->cs;
-	REGS_EFLAGS(regs->gp) = sc->eflags;
+	REGS_EFLAGS(regs->gp) = sc->flags;
 	REGS_SS(regs->gp) = sc->ss;
 }
@ -229,18 +229,18 @@ static int copy_sc_to_user(struct sigcontext __user *to,
 	sc.fs = REGS_FS(regs->regs.gp);
 	sc.es = REGS_ES(regs->regs.gp);
 	sc.ds = REGS_DS(regs->regs.gp);
-	sc.edi = REGS_EDI(regs->regs.gp);
-	sc.esi = REGS_ESI(regs->regs.gp);
-	sc.ebp = REGS_EBP(regs->regs.gp);
-	sc.esp = sp;
-	sc.ebx = REGS_EBX(regs->regs.gp);
-	sc.edx = REGS_EDX(regs->regs.gp);
-	sc.ecx = REGS_ECX(regs->regs.gp);
-	sc.eax = REGS_EAX(regs->regs.gp);
-	sc.eip = REGS_IP(regs->regs.gp);
+	sc.di = REGS_EDI(regs->regs.gp);
+	sc.si = REGS_ESI(regs->regs.gp);
+	sc.bp = REGS_EBP(regs->regs.gp);
+	sc.sp = sp;
+	sc.bx = REGS_EBX(regs->regs.gp);
+	sc.dx = REGS_EDX(regs->regs.gp);
+	sc.cx = REGS_ECX(regs->regs.gp);
+	sc.ax = REGS_EAX(regs->regs.gp);
+	sc.ip = REGS_IP(regs->regs.gp);
 	sc.cs = REGS_CS(regs->regs.gp);
-	sc.eflags = REGS_EFLAGS(regs->regs.gp);
-	sc.esp_at_signal = regs->regs.gp[UESP];
+	sc.flags = REGS_EFLAGS(regs->regs.gp);
+	sc.sp_at_signal = regs->regs.gp[UESP];
 	sc.ss = regs->regs.gp[SS];
 	sc.cr2 = fi->cr2;
 	sc.err = fi->error_code;


@ -4,11 +4,11 @@
  * Licensed under the GPL
  */
 
-#include "linux/personality.h"
-#include "linux/ptrace.h"
-#include "asm/unistd.h"
-#include "asm/uaccess.h"
-#include "asm/ucontext.h"
+#include <linux/personality.h>
+#include <linux/ptrace.h>
+#include <asm/unistd.h>
+#include <asm/uaccess.h>
+#include <asm/ucontext.h>
 #include "frame_kern.h"
 #include "skas.h"
@ -27,16 +27,16 @@ void copy_sc(struct uml_pt_regs *regs, void *from)
 	GETREG(regs, R13, sc, r13);
 	GETREG(regs, R14, sc, r14);
 	GETREG(regs, R15, sc, r15);
-	GETREG(regs, RDI, sc, rdi);
-	GETREG(regs, RSI, sc, rsi);
-	GETREG(regs, RBP, sc, rbp);
-	GETREG(regs, RBX, sc, rbx);
-	GETREG(regs, RDX, sc, rdx);
-	GETREG(regs, RAX, sc, rax);
-	GETREG(regs, RCX, sc, rcx);
-	GETREG(regs, RSP, sc, rsp);
-	GETREG(regs, RIP, sc, rip);
-	GETREG(regs, EFLAGS, sc, eflags);
+	GETREG(regs, RDI, sc, di);
+	GETREG(regs, RSI, sc, si);
+	GETREG(regs, RBP, sc, bp);
+	GETREG(regs, RBX, sc, bx);
+	GETREG(regs, RDX, sc, dx);
+	GETREG(regs, RAX, sc, ax);
+	GETREG(regs, RCX, sc, cx);
+	GETREG(regs, RSP, sc, sp);
+	GETREG(regs, RIP, sc, ip);
+	GETREG(regs, EFLAGS, sc, flags);
 	GETREG(regs, CS, sc, cs);
 
 #undef GETREG
@ -61,16 +61,16 @@ static int copy_sc_from_user(struct pt_regs *regs,
 	err |= GETREG(regs, R13, from, r13);
 	err |= GETREG(regs, R14, from, r14);
 	err |= GETREG(regs, R15, from, r15);
-	err |= GETREG(regs, RDI, from, rdi);
-	err |= GETREG(regs, RSI, from, rsi);
-	err |= GETREG(regs, RBP, from, rbp);
-	err |= GETREG(regs, RBX, from, rbx);
-	err |= GETREG(regs, RDX, from, rdx);
-	err |= GETREG(regs, RAX, from, rax);
-	err |= GETREG(regs, RCX, from, rcx);
-	err |= GETREG(regs, RSP, from, rsp);
-	err |= GETREG(regs, RIP, from, rip);
-	err |= GETREG(regs, EFLAGS, from, eflags);
+	err |= GETREG(regs, RDI, from, di);
+	err |= GETREG(regs, RSI, from, si);
+	err |= GETREG(regs, RBP, from, bp);
+	err |= GETREG(regs, RBX, from, bx);
+	err |= GETREG(regs, RDX, from, dx);
+	err |= GETREG(regs, RAX, from, ax);
+	err |= GETREG(regs, RCX, from, cx);
+	err |= GETREG(regs, RSP, from, sp);
+	err |= GETREG(regs, RIP, from, ip);
+	err |= GETREG(regs, EFLAGS, from, flags);
 	err |= GETREG(regs, CS, from, cs);
 	if (err)
 		return 1;
@ -108,19 +108,19 @@ static int copy_sc_to_user(struct sigcontext __user *to,
 		__put_user((regs)->regs.gp[(regno) / sizeof(unsigned long)], \
 			   &(sc)->regname)
 
-	err |= PUTREG(regs, RDI, to, rdi);
-	err |= PUTREG(regs, RSI, to, rsi);
-	err |= PUTREG(regs, RBP, to, rbp);
+	err |= PUTREG(regs, RDI, to, di);
+	err |= PUTREG(regs, RSI, to, si);
+	err |= PUTREG(regs, RBP, to, bp);
 	/*
 	 * Must use orignal RSP, which is passed in, rather than what's in
 	 * the pt_regs, because that's already been updated to point at the
 	 * signal frame.
 	 */
-	err |= __put_user(sp, &to->rsp);
-	err |= PUTREG(regs, RBX, to, rbx);
-	err |= PUTREG(regs, RDX, to, rdx);
-	err |= PUTREG(regs, RCX, to, rcx);
-	err |= PUTREG(regs, RAX, to, rax);
+	err |= __put_user(sp, &to->sp);
+	err |= PUTREG(regs, RBX, to, bx);
+	err |= PUTREG(regs, RDX, to, dx);
+	err |= PUTREG(regs, RCX, to, cx);
+	err |= PUTREG(regs, RAX, to, ax);
 	err |= PUTREG(regs, R8, to, r8);
 	err |= PUTREG(regs, R9, to, r9);
 	err |= PUTREG(regs, R10, to, r10);
@ -135,8 +135,8 @@ static int copy_sc_to_user(struct sigcontext __user *to,
 	err |= __put_user(fi->error_code, &to->err);
 	err |= __put_user(fi->trap_no, &to->trapno);
 
-	err |= PUTREG(regs, RIP, to, rip);
-	err |= PUTREG(regs, EFLAGS, to, eflags);
+	err |= PUTREG(regs, RIP, to, ip);
+	err |= PUTREG(regs, EFLAGS, to, flags);
 #undef PUTREG
 
 	err |= __put_user(mask, &to->oldmask);


@ -17,81 +17,69 @@ config X86_64
### Arch settings ### Arch settings
config X86 config X86
bool def_bool y
default y
config GENERIC_LOCKBREAK
def_bool n
config GENERIC_TIME config GENERIC_TIME
bool def_bool y
default y
config GENERIC_CMOS_UPDATE config GENERIC_CMOS_UPDATE
bool def_bool y
default y
config CLOCKSOURCE_WATCHDOG config CLOCKSOURCE_WATCHDOG
bool def_bool y
default y
config GENERIC_CLOCKEVENTS config GENERIC_CLOCKEVENTS
bool def_bool y
default y
config GENERIC_CLOCKEVENTS_BROADCAST config GENERIC_CLOCKEVENTS_BROADCAST
bool def_bool y
default y
depends on X86_64 || (X86_32 && X86_LOCAL_APIC) depends on X86_64 || (X86_32 && X86_LOCAL_APIC)
config LOCKDEP_SUPPORT config LOCKDEP_SUPPORT
bool def_bool y
default y
config STACKTRACE_SUPPORT config STACKTRACE_SUPPORT
bool def_bool y
default y
config SEMAPHORE_SLEEPERS config SEMAPHORE_SLEEPERS
bool def_bool y
default y
config MMU config MMU
bool def_bool y
default y
config ZONE_DMA config ZONE_DMA
bool def_bool y
default y
config QUICKLIST config QUICKLIST
bool def_bool X86_32
default X86_32
config SBUS config SBUS
bool bool
config GENERIC_ISA_DMA config GENERIC_ISA_DMA
bool def_bool y
default y
config GENERIC_IOMAP config GENERIC_IOMAP
bool def_bool y
default y
config GENERIC_BUG config GENERIC_BUG
bool def_bool y
default y
depends on BUG depends on BUG
config GENERIC_HWEIGHT config GENERIC_HWEIGHT
bool def_bool y
default y
config GENERIC_GPIO
def_bool n
config ARCH_MAY_HAVE_PC_FDC config ARCH_MAY_HAVE_PC_FDC
bool def_bool y
default y
config DMI config DMI
bool def_bool y
default y
config RWSEM_GENERIC_SPINLOCK config RWSEM_GENERIC_SPINLOCK
def_bool !X86_XADD def_bool !X86_XADD
@ -112,6 +100,9 @@ config GENERIC_TIME_VSYSCALL
bool bool
default X86_64 default X86_64
config HAVE_SETUP_PER_CPU_AREA
def_bool X86_64
config ARCH_SUPPORTS_OPROFILE config ARCH_SUPPORTS_OPROFILE
bool bool
default y default y
@@ -144,9 +135,17 @@ config GENERIC_PENDING_IRQ
 config X86_SMP
 	bool
-	depends on X86_32 && SMP && !X86_VOYAGER
+	depends on SMP && ((X86_32 && !X86_VOYAGER) || X86_64)
 	default y
 
+config X86_32_SMP
+	def_bool y
+	depends on X86_32 && SMP
+
+config X86_64_SMP
+	def_bool y
+	depends on X86_64 && SMP
+
 config X86_HT
 	bool
 	depends on SMP
@@ -292,6 +291,18 @@ config X86_ES7000
 	  Only choose this option if you have such a system, otherwise you
 	  should say N here.
 
+config X86_RDC321X
+	bool "RDC R-321x SoC"
+	depends on X86_32
+	select M486
+	select X86_REBOOTFIXUPS
+	select GENERIC_GPIO
+	select LEDS_GPIO
+	help
+	  This option is needed for RDC R-321x system-on-chip, also known
+	  as R-8610-(G).
+	  If you don't have one of these chips, you should say N here.
+
 config X86_VSMP
 	bool "Support for ScaleMP vSMP"
 	depends on X86_64 && PCI
@@ -303,8 +314,8 @@ config X86_VSMP
 endchoice
 
 config SCHED_NO_NO_OMIT_FRAME_POINTER
-	bool "Single-depth WCHAN output"
-	default y
+	def_bool y
+	prompt "Single-depth WCHAN output"
 	depends on X86_32
 	help
 	  Calculate simpler /proc/<PID>/wchan values. If this option
@@ -314,18 +325,8 @@ config SCHED_NO_NO_OMIT_FRAME_POINTER
 	  If in doubt, say "Y".
 
-config PARAVIRT
-	bool
-	depends on X86_32 && !(X86_VISWS || X86_VOYAGER)
-	help
-	  This changes the kernel so it can modify itself when it is run
-	  under a hypervisor, potentially improving performance significantly
-	  over full virtualization.  However, when run without a hypervisor
-	  the kernel is theoretically slower and slightly larger.
-
 menuconfig PARAVIRT_GUEST
 	bool "Paravirtualized guest support"
-	depends on X86_32
 	help
 	  Say Y here to get to see options related to running Linux under
 	  various hypervisors. This option alone does not add any kernel code.
@@ -339,6 +340,7 @@ source "arch/x86/xen/Kconfig"
 config VMI
 	bool "VMI Guest support"
 	select PARAVIRT
+	depends on X86_32
 	depends on !(X86_VISWS || X86_VOYAGER)
 	help
 	  VMI provides a paravirtualized interface to the VMware ESX server
@@ -348,40 +350,43 @@ config VMI
 source "arch/x86/lguest/Kconfig"
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	depends on !(X86_VISWS || X86_VOYAGER)
+	help
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.  However, when run without a hypervisor
+	  the kernel is theoretically slower and slightly larger.
+
 endif
 
 config ACPI_SRAT
-	bool
-	default y
+	def_bool y
 	depends on X86_32 && ACPI && NUMA && (X86_SUMMIT || X86_GENERICARCH)
 	select ACPI_NUMA
 
 config HAVE_ARCH_PARSE_SRAT
-	bool
-	default y
-	depends on ACPI_SRAT
+	def_bool y
+	depends on ACPI_SRAT
 
 config X86_SUMMIT_NUMA
-	bool
-	default y
+	def_bool y
 	depends on X86_32 && NUMA && (X86_SUMMIT || X86_GENERICARCH)
 
 config X86_CYCLONE_TIMER
-	bool
-	default y
+	def_bool y
 	depends on X86_32 && X86_SUMMIT || X86_GENERICARCH
 
 config ES7000_CLUSTERED_APIC
-	bool
-	default y
+	def_bool y
 	depends on SMP && X86_ES7000 && MPENTIUMIII
 
 source "arch/x86/Kconfig.cpu"
 
 config HPET_TIMER
-	bool
+	def_bool X86_64
 	prompt "HPET Timer Support" if X86_32
-	default X86_64
 	help
 	  Use the IA-PC HPET (High Precision Event Timer) to manage
 	  time in preference to the PIT and RTC, if a HPET is
@@ -399,9 +404,8 @@ config HPET_TIMER
 	  Choose N to continue using the legacy 8254 timer.
 
 config HPET_EMULATE_RTC
-	bool
-	depends on HPET_TIMER && RTC=y
-	default y
+	def_bool y
+	depends on HPET_TIMER && (RTC=y || RTC=m)
 
 # Mark as embedded because too many people got it wrong.
 # The code disables itself when not needed.
@@ -441,8 +445,8 @@ config CALGARY_IOMMU
 	  If unsure, say Y.
 
 config CALGARY_IOMMU_ENABLED_BY_DEFAULT
-	bool "Should Calgary be enabled by default?"
-	default y
+	def_bool y
+	prompt "Should Calgary be enabled by default?"
 	depends on CALGARY_IOMMU
 	help
 	  Should Calgary be enabled by default? if you choose 'y', Calgary
@@ -486,9 +490,9 @@ config SCHED_SMT
 	  N here.
 
 config SCHED_MC
-	bool "Multi-core scheduler support"
+	def_bool y
+	prompt "Multi-core scheduler support"
 	depends on (X86_64 && SMP) || (X86_32 && X86_HT)
-	default y
 	help
 	  Multi-core scheduler support improves the CPU scheduler's decision
 	  making when dealing with multi-core CPU chips at a cost of slightly
@@ -522,19 +526,16 @@ config X86_UP_IOAPIC
 	  an IO-APIC, then the kernel will still run with no slowdown at all.
 
 config X86_LOCAL_APIC
-	bool
+	def_bool y
 	depends on X86_64 || (X86_32 && (X86_UP_APIC || ((X86_VISWS || SMP) && !X86_VOYAGER) || X86_GENERICARCH))
-	default y
 
 config X86_IO_APIC
-	bool
+	def_bool y
 	depends on X86_64 || (X86_32 && (X86_UP_IOAPIC || (SMP && !(X86_VISWS || X86_VOYAGER)) || X86_GENERICARCH))
-	default y
 
 config X86_VISWS_APIC
-	bool
+	def_bool y
 	depends on X86_32 && X86_VISWS
-	default y
 
 config X86_MCE
 	bool "Machine Check Exception"
@@ -554,17 +555,17 @@ config X86_MCE
 	  the 386 and 486, so nearly everyone can say Y here.
 
 config X86_MCE_INTEL
-	bool "Intel MCE features"
+	def_bool y
+	prompt "Intel MCE features"
 	depends on X86_64 && X86_MCE && X86_LOCAL_APIC
-	default y
 	help
 	   Additional support for intel specific MCE features such as
 	   the thermal monitor.
 
 config X86_MCE_AMD
-	bool "AMD MCE features"
+	def_bool y
+	prompt "AMD MCE features"
 	depends on X86_64 && X86_MCE && X86_LOCAL_APIC
-	default y
 	help
 	   Additional support for AMD specific MCE features such as
 	   the DRAM Error Threshold.
@@ -637,9 +638,9 @@ config I8K
 	  Say N otherwise.
 
 config X86_REBOOTFIXUPS
-	bool "Enable X86 board specific fixups for reboot"
+	def_bool n
+	prompt "Enable X86 board specific fixups for reboot"
 	depends on X86_32 && X86
-	default n
 	---help---
 	  This enables chipset and/or board specific fixups to be done
 	  in order to get reboot to work correctly. This is only needed on
@@ -648,7 +649,7 @@ config X86_REBOOTFIXUPS
 	  system.
 
 	  Currently, the only fixup is for the Geode machines using
-	  CS5530A and CS5536 chipsets.
+	  CS5530A and CS5536 chipsets and the RDC R-321x SoC.
 
 	  Say Y if you want to enable the fixup. Currently, it's safe to
 	  enable this option even if you don't need it.
@@ -672,9 +673,8 @@ config MICROCODE
 	  module will be called microcode.
 
 config MICROCODE_OLD_INTERFACE
-	bool
+	def_bool y
 	depends on MICROCODE
-	default y
 
 config X86_MSR
 	tristate "/dev/cpu/*/msr - Model-specific register support"
@@ -798,13 +798,12 @@ config PAGE_OFFSET
 	depends on X86_32
 
 config HIGHMEM
-	bool
+	def_bool y
 	depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
-	default y
 
 config X86_PAE
-	bool "PAE (Physical Address Extension) Support"
-	default n
+	def_bool n
+	prompt "PAE (Physical Address Extension) Support"
 	depends on X86_32 && !HIGHMEM4G
 	select RESOURCES_64BIT
 	help
@@ -836,10 +835,10 @@ comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
 	depends on X86_32 && X86_SUMMIT && (!HIGHMEM64G || !ACPI)
 
 config K8_NUMA
-	bool "Old style AMD Opteron NUMA detection"
-	depends on X86_64 && NUMA && PCI
-	default y
+	def_bool y
+	prompt "Old style AMD Opteron NUMA detection"
+	depends on X86_64 && NUMA && PCI
 	help
 	  Enable K8 NUMA node topology detection.  You should say Y here if
 	  you have a multi processor AMD K8 system. This uses an old
 	  method to read the NUMA configuration directly from the builtin
@@ -847,10 +846,10 @@ config K8_NUMA
 	  instead, which also takes priority if both are compiled in.
 
 config X86_64_ACPI_NUMA
-	bool "ACPI NUMA detection"
+	def_bool y
+	prompt "ACPI NUMA detection"
 	depends on X86_64 && NUMA && ACPI && PCI
 	select ACPI_NUMA
-	default y
 	help
 	  Enable ACPI SRAT based node topology detection.
@@ -864,52 +863,53 @@ config NUMA_EMU
 config NODES_SHIFT
 	int
+	range 1 15 if X86_64
 	default "6" if X86_64
 	default "4" if X86_NUMAQ
 	default "3"
 	depends on NEED_MULTIPLE_NODES
 
 config HAVE_ARCH_BOOTMEM_NODE
-	bool
+	def_bool y
 	depends on X86_32 && NUMA
-	default y
 
 config ARCH_HAVE_MEMORY_PRESENT
-	bool
+	def_bool y
 	depends on X86_32 && DISCONTIGMEM
-	default y
 
 config NEED_NODE_MEMMAP_SIZE
-	bool
+	def_bool y
 	depends on X86_32 && (DISCONTIGMEM || SPARSEMEM)
-	default y
 
 config HAVE_ARCH_ALLOC_REMAP
-	bool
+	def_bool y
 	depends on X86_32 && NUMA
-	default y
 
 config ARCH_FLATMEM_ENABLE
 	def_bool y
-	depends on (X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC) || (X86_64 && !NUMA)
+	depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC && !NUMA
 
 config ARCH_DISCONTIGMEM_ENABLE
 	def_bool y
-	depends on NUMA
+	depends on NUMA && X86_32
 
 config ARCH_DISCONTIGMEM_DEFAULT
 	def_bool y
-	depends on NUMA
+	depends on NUMA && X86_32
+
+config ARCH_SPARSEMEM_DEFAULT
+	def_bool y
+	depends on X86_64
 
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
-	depends on NUMA || (EXPERIMENTAL && (X86_PC || X86_64))
+	depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC)
 	select SPARSEMEM_STATIC if X86_32
 	select SPARSEMEM_VMEMMAP_ENABLE if X86_64
 
 config ARCH_SELECT_MEMORY_MODEL
 	def_bool y
-	depends on X86_32 && ARCH_SPARSEMEM_ENABLE
+	depends on ARCH_SPARSEMEM_ENABLE
 
 config ARCH_MEMORY_PROBE
 	def_bool X86_64
@@ -987,42 +987,32 @@ config MTRR
 	  See <file:Documentation/mtrr.txt> for more information.
 
 config EFI
-	bool "Boot from EFI support"
-	depends on X86_32 && ACPI
-	default n
+	def_bool n
+	prompt "EFI runtime service support"
+	depends on ACPI
 	---help---
-	This enables the kernel to boot on EFI platforms using
-	system configuration information passed to it from the firmware.
-
-	This also enables the kernel to use any EFI runtime services that are
+	This enables the kernel to use EFI runtime services that are
 	available (such as the EFI variable services).
 
-	This option is only useful on systems that have EFI firmware
-	and will result in a kernel image that is ~8k larger. In addition,
-	you must use the latest ELILO loader available at
-	<http://elilo.sourceforge.net> in order to take advantage of
-	kernel initialization using EFI information (neither GRUB nor LILO know
-	anything about EFI). However, even with this option, the resultant
-	kernel should continue to boot on existing non-EFI platforms.
+	This option is only useful on systems that have EFI firmware.
+	In addition, you should use the latest ELILO loader available
+	at <http://elilo.sourceforge.net> in order to take advantage
+	of EFI runtime services. However, even with this option, the
+	resultant kernel should continue to boot on existing non-EFI
+	platforms.
 
 config IRQBALANCE
-	bool "Enable kernel irq balancing"
+	def_bool y
+	prompt "Enable kernel irq balancing"
 	depends on X86_32 && SMP && X86_IO_APIC
-	default y
 	help
 	  The default yes will allow the kernel to do irq load balancing.
 	  Saying no will keep the kernel from doing irq load balancing.
 
-# turning this on wastes a bunch of space.
-# Summit needs it only when NUMA is on
-config BOOT_IOREMAP
-	bool
-	depends on X86_32 && (((X86_SUMMIT || X86_GENERICARCH) && NUMA) || (X86 && EFI))
-	default y
-
 config SECCOMP
-	bool "Enable seccomp to safely compute untrusted bytecode"
+	def_bool y
+	prompt "Enable seccomp to safely compute untrusted bytecode"
 	depends on PROC_FS
-	default y
 	help
 	  This kernel feature is useful for number crunching applications
 	  that may need to compute untrusted bytecode during their
@@ -1189,11 +1179,11 @@ config HOTPLUG_CPU
 	  suspend.
 
 config COMPAT_VDSO
-	bool "Compat VDSO support"
-	default y
-	depends on X86_32
+	def_bool y
+	prompt "Compat VDSO support"
+	depends on X86_32 || IA32_EMULATION
 	help
-	  Map the VDSO to the predictable old-style address too.
+	  Map the 32-bit VDSO to the predictable old-style address too.
 	---help---
 	  Say N here if you are running a sufficiently recent glibc
 	  version (2.3.3 or later), to remove the high-mapped
@@ -1207,30 +1197,26 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 	def_bool y
 	depends on X86_64 || (X86_32 && HIGHMEM)
 
-config MEMORY_HOTPLUG_RESERVE
-	def_bool X86_64
-	depends on (MEMORY_HOTPLUG && DISCONTIGMEM)
-
 config HAVE_ARCH_EARLY_PFN_TO_NID
 	def_bool X86_64
 	depends on NUMA
 
-config OUT_OF_LINE_PFN_TO_PAGE
-	def_bool X86_64
-	depends on DISCONTIGMEM
-
 menu "Power management options"
 	depends on !X86_VOYAGER
 
 config ARCH_HIBERNATION_HEADER
-	bool
+	def_bool y
 	depends on X86_64 && HIBERNATION
-	default y
 
 source "kernel/power/Kconfig"
 
 source "drivers/acpi/Kconfig"
 
+config X86_APM_BOOT
+	bool
+	default y
+	depends on APM || APM_MODULE
+
 menuconfig APM
 	tristate "APM (Advanced Power Management) BIOS support"
 	depends on X86_32 && PM_SLEEP && !X86_VISWS
@@ -1371,7 +1357,7 @@ menu "Bus options (PCI etc.)"
 config PCI
 	bool "PCI support" if !X86_VISWS
 	depends on !X86_VOYAGER
-	default y if X86_VISWS
+	default y
 	select ARCH_SUPPORTS_MSI if (X86_LOCAL_APIC && X86_IO_APIC)
 	help
 	  Find out whether you have a PCI motherboard. PCI is the name of a
@@ -1418,25 +1404,21 @@ config PCI_GOANY
 endchoice
 
 config PCI_BIOS
-	bool
+	def_bool y
 	depends on X86_32 && !X86_VISWS && PCI && (PCI_GOBIOS || PCI_GOANY)
-	default y
 
 # x86-64 doesn't support PCI BIOS access from long mode so always go direct.
 config PCI_DIRECT
-	bool
+	def_bool y
 	depends on PCI && (X86_64 || (PCI_GODIRECT || PCI_GOANY) || X86_VISWS)
-	default y
 
 config PCI_MMCONFIG
-	bool
+	def_bool y
 	depends on X86_32 && PCI && ACPI && (PCI_GOMMCONFIG || PCI_GOANY)
-	default y
 
 config PCI_DOMAINS
-	bool
+	def_bool y
 	depends on PCI
-	default y
 
 config PCI_MMCONFIG
 	bool "Support mmconfig PCI config space access"
@@ -1453,9 +1435,9 @@ config DMAR
 	  remapping devices.
 
 config DMAR_GFX_WA
-	bool "Support for Graphics workaround"
+	def_bool y
+	prompt "Support for Graphics workaround"
 	depends on DMAR
-	default y
 	help
 	  Current Graphics drivers tend to use physical address
 	  for DMA and avoid using DMA APIs. Setting this config
@@ -1464,9 +1446,8 @@ config DMAR_GFX_WA
 	  to use physical addresses for DMA.
 
 config DMAR_FLOPPY_WA
-	bool
+	def_bool y
 	depends on DMAR
-	default y
 	help
 	  Floppy disk drivers are know to bypass DMA API calls
 	  thereby failing to work when IOMMU is enabled. This
@@ -1479,8 +1460,7 @@ source "drivers/pci/Kconfig"
 
 # x86_64 have no ISA slots, but do have ISA-style DMA.
 config ISA_DMA_API
-	bool
-	default y
+	def_bool y
 
 if X86_32
@@ -1546,9 +1526,9 @@ config SCx200HR_TIMER
 	  other workaround is idle=poll boot option.
 
 config GEODE_MFGPT_TIMER
-	bool "Geode Multi-Function General Purpose Timer (MFGPT) events"
+	def_bool y
+	prompt "Geode Multi-Function General Purpose Timer (MFGPT) events"
 	depends on MGEODE_LX && GENERIC_TIME && GENERIC_CLOCKEVENTS
-	default y
 	help
 	  This driver provides a clock event source based on the MFGPT
 	  timer(s) in the CS5535 and CS5536 companion chip for the geode.
@@ -1575,6 +1555,7 @@ source "fs/Kconfig.binfmt"
 config IA32_EMULATION
 	bool "IA32 Emulation"
 	depends on X86_64
+	select COMPAT_BINFMT_ELF
 	help
 	  Include code to run 32-bit programs under a 64-bit kernel. You should
 	  likely turn this on, unless you're 100% sure that you don't have any
@@ -1587,18 +1568,16 @@ config IA32_AOUT
 	  Support old a.out binaries in the 32bit emulation.
 
 config COMPAT
-	bool
+	def_bool y
 	depends on IA32_EMULATION
-	default y
 
 config COMPAT_FOR_U64_ALIGNMENT
 	def_bool COMPAT
 	depends on X86_64
 
 config SYSVIPC_COMPAT
-	bool
+	def_bool y
 	depends on X86_64 && COMPAT && SYSVIPC
-	default y
 
 endmenu
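Most of the churn in this file is a single mechanical conversion: the two-line `bool` plus `default y` idiom becomes Kconfig's `def_bool` shorthand, and options that should stay user-visible keep their text via a separate `prompt` line. A minimal sketch of the equivalence, using hypothetical `FOO`/`BAR` symbols rather than anything from this commit:

```kconfig
# Before: a hidden bool that defaults to y takes two lines
config FOO
	bool
	default y

# After: def_bool declares the type and the default in one line
config FOO
	def_bool y

# For user-visible options, the prompt text moves to its own line,
# so the symbol still shows up in menuconfig
config BAR
	def_bool y
	prompt "Example visible option"
```

The two forms are semantically identical; the shorthand simply makes the hundreds of always-on helper symbols in arch Kconfig files shorter and more uniform.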


@@ -219,10 +219,10 @@ config MGEODEGX1
 	  Select this for a Geode GX1 (Cyrix MediaGX) chip.
 
 config MGEODE_LX
 	bool "Geode GX/LX"
 	depends on X86_32
 	help
 	  Select this for AMD Geode GX and LX processors.
 
 config MCYRIXIII
 	bool "CyrixIII/VIA-C3"
@@ -258,7 +258,7 @@ config MPSC
 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
 	  Xeon CPUs with Intel 64bit which is compatible with x86-64.
 	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
 	  Netburst core and shouldn't use this option. You can distinguish them
 	  using the cpu family field
 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
@@ -317,81 +317,75 @@ config X86_L1_CACHE_SHIFT
 	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7
 
 config X86_XADD
-	bool
+	def_bool y
 	depends on X86_32 && !M386
-	default y
 
 config X86_PPRO_FENCE
-	bool
+	bool "PentiumPro memory ordering errata workaround"
 	depends on M686 || M586MMX || M586TSC || M586 || M486 || M386 || MGEODEGX1
-	default y
+	help
+	  Old PentiumPro multiprocessor systems had errata that could cause memory
+	  operations to violate the x86 ordering standard in rare cases. Enabling this
+	  option will attempt to work around some (but not all) occurances of
+	  this problem, at the cost of much heavier spinlock and memory barrier
+	  operations.
+
+	  If unsure, say n here. Even distro kernels should think twice before enabling
+	  this: there are few systems, and an unlikely bug.
 
 config X86_F00F_BUG
-	bool
+	def_bool y
 	depends on M586MMX || M586TSC || M586 || M486 || M386
-	default y
 
 config X86_WP_WORKS_OK
-	bool
+	def_bool y
 	depends on X86_32 && !M386
-	default y
 
 config X86_INVLPG
-	bool
+	def_bool y
 	depends on X86_32 && !M386
-	default y
 
 config X86_BSWAP
-	bool
+	def_bool y
 	depends on X86_32 && !M386
-	default y
 
 config X86_POPAD_OK
-	bool
+	def_bool y
 	depends on X86_32 && !M386
-	default y
 
 config X86_ALIGNMENT_16
-	bool
+	def_bool y
 	depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || X86_ELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2 || MGEODEGX1
-	default y
 
 config X86_GOOD_APIC
-	bool
+	def_bool y
 	depends on MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8 || MEFFICEON || MCORE2 || MVIAC7 || X86_64
-	default y
 
 config X86_INTEL_USERCOPY
-	bool
+	def_bool y
 	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-	default y
 
 config X86_USE_PPRO_CHECKSUM
-	bool
+	def_bool y
 	depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON || MGEODE_LX || MCORE2
-	default y
 
 config X86_USE_3DNOW
-	bool
+	def_bool y
 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
-	default y
 
 config X86_OOSTORE
-	bool
+	def_bool y
 	depends on (MWINCHIP3D || MWINCHIP2 || MWINCHIPC6) && MTRR
-	default y
 
 config X86_TSC
-	bool
+	def_bool y
 	depends on ((MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
-	default y
 
 # this should be set for all -march=.. options where the compiler
 # generates cmov.
 config X86_CMOV
-	bool
+	def_bool y
 	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7)
-	default y
 
 config X86_MINIMUM_CPU_FAMILY
 	int
@@ -399,3 +393,6 @@ config X86_MINIMUM_CPU_FAMILY
 	default "4" if X86_32 && (X86_XADD || X86_CMPXCHG || X86_BSWAP || X86_WP_WORKS_OK)
 	default "3"
+
+config X86_DEBUGCTLMSR
+	def_bool y
+	depends on !(M586MMX || M586TSC || M586 || M486 || M386)


@@ -6,7 +6,7 @@ config TRACE_IRQFLAGS_SUPPORT
 source "lib/Kconfig.debug"
 
 config EARLY_PRINTK
-	bool "Early printk" if EMBEDDED && DEBUG_KERNEL && X86_32
+	bool "Early printk" if EMBEDDED
 	default y
 	help
 	  Write kernel log output directly into the VGA buffer or to a serial
@@ -40,22 +40,49 @@ comment "Page alloc debug is incompatible with Software Suspend on i386"
 config DEBUG_PAGEALLOC
 	bool "Debug page memory allocations"
-	depends on DEBUG_KERNEL && !HIBERNATION && !HUGETLBFS
-	depends on X86_32
+	depends on DEBUG_KERNEL && X86_32
 	help
 	  Unmap pages from the kernel linear mapping after free_pages().
 	  This results in a large slowdown, but helps to find certain types
 	  of memory corruptions.
 
+config DEBUG_PER_CPU_MAPS
+	bool "Debug access to per_cpu maps"
+	depends on DEBUG_KERNEL
+	depends on X86_64_SMP
+	default n
+	help
+	  Say Y to verify that the per_cpu map being accessed has
+	  been setup. Adds a fair amount of code to kernel memory
+	  and decreases performance.
+
+	  Say N if unsure.
+
 config DEBUG_RODATA
 	bool "Write protect kernel read-only data structures"
+	default y
 	depends on DEBUG_KERNEL
 	help
 	  Mark the kernel read-only data as write-protected in the pagetables,
 	  in order to catch accidental (and incorrect) writes to such const
-	  data. This option may have a slight performance impact because a
-	  portion of the kernel code won't be covered by a 2MB TLB anymore.
-	  If in doubt, say "N".
+	  data. This is recommended so that we can catch kernel bugs sooner.
+	  If in doubt, say "Y".
+
+config DEBUG_RODATA_TEST
+	bool "Testcase for the DEBUG_RODATA feature"
+	depends on DEBUG_RODATA
+	help
+	  This option enables a testcase for the DEBUG_RODATA
+	  feature as well as for the change_page_attr() infrastructure.
+	  If in doubt, say "N"
+
+config DEBUG_NX_TEST
+	tristate "Testcase for the NX non-executable stack feature"
+	depends on DEBUG_KERNEL && m
+	help
+	  This option enables a testcase for the CPU NX capability
+	  and the software setup of this feature.
+	  If in doubt, say "N"
 
 config 4KSTACKS
 	bool "Use 4Kb for kernel stacks instead of 8Kb"
@@ -75,8 +102,7 @@ config X86_FIND_SMP_CONFIG
 config X86_MPPARSE
 	def_bool y
-	depends on X86_LOCAL_APIC && !X86_VISWS
-	depends on X86_32
+	depends on (X86_32 && (X86_LOCAL_APIC && !X86_VISWS)) || X86_64
 
 config DOUBLEFAULT
 	default y
@@ -112,4 +138,91 @@ config IOMMU_LEAK
 	  Add a simple leak tracer to the IOMMU code. This is useful when you
 	  are debugging a buggy device driver that leaks IOMMU mappings.
 
+#
+# IO delay types:
+#
+
+config IO_DELAY_TYPE_0X80
+	int
+	default "0"
+
+config IO_DELAY_TYPE_0XED
+	int
+	default "1"
+
+config IO_DELAY_TYPE_UDELAY
+	int
+	default "2"
+
+config IO_DELAY_TYPE_NONE
+	int
+	default "3"
+
+choice
+	prompt "IO delay type"
+	default IO_DELAY_0XED
+
+config IO_DELAY_0X80
+	bool "port 0x80 based port-IO delay [recommended]"
+	help
+	  This is the traditional Linux IO delay used for in/out_p.
+	  It is the most tested hence safest selection here.
+
+config IO_DELAY_0XED
+	bool "port 0xed based port-IO delay"
+	help
+	  Use port 0xed as the IO delay. This frees up port 0x80 which is
+	  often used as a hardware-debug port.
+
+config IO_DELAY_UDELAY
+	bool "udelay based port-IO delay"
+	help
+	  Use udelay(2) as the IO delay method. This provides the delay
+	  while not having any side-effect on the IO port space.
+
+config IO_DELAY_NONE
+	bool "no port-IO delay"
+	help
+	  No port-IO delay. Will break on old boxes that require port-IO
+	  delay for certain operations. Should work on most new machines.
+
+endchoice
+
+if IO_DELAY_0X80
+config DEFAULT_IO_DELAY_TYPE
+	int
+	default IO_DELAY_TYPE_0X80
+endif
+
+if IO_DELAY_0XED
+config DEFAULT_IO_DELAY_TYPE
+	int
+	default IO_DELAY_TYPE_0XED
+endif
+
+if IO_DELAY_UDELAY
+config DEFAULT_IO_DELAY_TYPE
+	int
+	default IO_DELAY_TYPE_UDELAY
+endif
+
+if IO_DELAY_NONE
+config DEFAULT_IO_DELAY_TYPE
+	int
+	default IO_DELAY_TYPE_NONE
+endif
+
+config DEBUG_BOOT_PARAMS
+	bool "Debug boot parameters"
+	depends on DEBUG_KERNEL
+	depends on DEBUG_FS
+	help
+	  This option will cause struct boot_params to be exported via debugfs.
+
+config CPA_DEBUG
+	bool "CPA self test code"
+	depends on DEBUG_KERNEL
+	help
+	  Do change_page_attr self tests at boot.
+
 endmenu
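The new IO-delay section uses a common Kconfig workaround: a `choice` can only select bool (or tristate) symbols, so each selectable option is paired with a constant int symbol, and guarded `if`/`endif` blocks copy the matching constant into a single int symbol that C code can compare against. The same pattern in miniature, with hypothetical `MODE` symbols rather than the ones in this commit:

```kconfig
# Constant values, one per mode
config MODE_A_VALUE
	int
	default "0"

config MODE_B_VALUE
	int
	default "1"

# The user-facing choice selects exactly one bool symbol
choice
	prompt "Example mode"
	default MODE_A

config MODE_A
	bool "mode A"

config MODE_B
	bool "mode B"

endchoice

# Mirror the chosen bool into one int symbol the C code can test,
# e.g. via #if CONFIG_DEFAULT_MODE == 0
if MODE_A
config DEFAULT_MODE
	int
	default MODE_A_VALUE
endif

if MODE_B
config DEFAULT_MODE
	int
	default MODE_B_VALUE
endif
```

Defining `DEFAULT_MODE` once per `if` block is legal because Kconfig merges repeated definitions of a symbol; only the block whose guard is enabled contributes its `default`.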


@@ -7,13 +7,252 @@ else
KBUILD_DEFCONFIG := $(ARCH)_defconfig
endif
-# No need to remake these files
-$(srctree)/arch/x86/Makefile%: ;
# BITS is used as extension for files which are available in a 32 bit
# and a 64 bit version to simplify shared Makefiles.
# e.g.: obj-y += foo_$(BITS).o
export BITS
ifeq ($(CONFIG_X86_32),y)
BITS := 32
UTS_MACHINE := i386
-include $(srctree)/arch/x86/Makefile_32
CHECKFLAGS += -D__i386__
biarch := $(call cc-option,-m32)
KBUILD_AFLAGS += $(biarch)
KBUILD_CFLAGS += $(biarch)
ifdef CONFIG_RELOCATABLE
LDFLAGS_vmlinux := --emit-relocs
endif
KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
# prevent gcc from keeping the stack 16 byte aligned
KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
# Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
# a lot more stack due to the lack of sharing of stacklots:
KBUILD_CFLAGS += $(shell if [ $(call cc-version) -lt 0400 ] ; then \
echo $(call cc-option,-fno-unit-at-a-time); fi ;)
# CPU-specific tuning. Anything which can be shared with UML should go here.
include $(srctree)/arch/x86/Makefile_32.cpu
KBUILD_CFLAGS += $(cflags-y)
# temporary until string.h is fixed
KBUILD_CFLAGS += -ffreestanding
else
BITS := 64
UTS_MACHINE := x86_64
-include $(srctree)/arch/x86/Makefile_64
CHECKFLAGS += -D__x86_64__ -m64
KBUILD_AFLAGS += -m64
KBUILD_CFLAGS += -m64
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
cflags-$(CONFIG_MCORE2) += \
$(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
KBUILD_CFLAGS += $(cflags-y)
KBUILD_CFLAGS += -mno-red-zone
KBUILD_CFLAGS += -mcmodel=kernel
# -funit-at-a-time shrinks the kernel .text considerably
# unfortunately it makes reading oopses harder.
KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time)
# this works around some issues with generating unwind tables in older gccs
# newer gccs do it by default
KBUILD_CFLAGS += -maccumulate-outgoing-args
stackp := $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh
stackp-$(CONFIG_CC_STACKPROTECTOR) := $(shell $(stackp) \
"$(CC)" -fstack-protector )
stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(stackp) \
"$(CC)" -fstack-protector-all )
KBUILD_CFLAGS += $(stackp-y)
endif
# Stackpointer is addressed different for 32 bit and 64 bit x86
sp-$(CONFIG_X86_32) := esp
sp-$(CONFIG_X86_64) := rsp
# do binutils support CFI?
cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
KBUILD_AFLAGS += $(cfi) $(cfi-sigframe)
KBUILD_CFLAGS += $(cfi) $(cfi-sigframe)
LDFLAGS := -m elf_$(UTS_MACHINE)
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
# Speed up the build
KBUILD_CFLAGS += -pipe
# Workaround for a gcc prelease that unfortunately was shipped in a suse release
KBUILD_CFLAGS += -Wno-sign-compare
#
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
# prevent gcc from generating any FP code by mistake
KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
###
# Sub architecture support
# fcore-y is linked before mcore-y files.
# Default subarch .c files
mcore-y := arch/x86/mach-default/
# Voyager subarch support
mflags-$(CONFIG_X86_VOYAGER) := -Iinclude/asm-x86/mach-voyager
mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager/
# VISWS subarch support
mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws/
# NUMAQ subarch support
mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default/
# BIGSMP subarch support
mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default/
#Summit subarch support
mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default/
# generic subarchitecture
mflags-$(CONFIG_X86_GENERICARCH):= -Iinclude/asm-x86/mach-generic
fcore-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default/
# ES7000 subarch support
mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
fcore-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default/
# RDC R-321x subarch support
mflags-$(CONFIG_X86_RDC321X) := -Iinclude/asm-x86/mach-rdc321x
mcore-$(CONFIG_X86_RDC321X) := arch/x86/mach-default
core-$(CONFIG_X86_RDC321X) += arch/x86/mach-rdc321x/
# default subarch .h files
mflags-y += -Iinclude/asm-x86/mach-default
# 64 bit does not support subarch support - clear sub arch variables
fcore-$(CONFIG_X86_64) :=
mcore-$(CONFIG_X86_64) :=
mflags-$(CONFIG_X86_64) :=
KBUILD_CFLAGS += $(mflags-y)
KBUILD_AFLAGS += $(mflags-y)
###
# Kernel objects
head-y := arch/x86/kernel/head_$(BITS).o
head-$(CONFIG_X86_64) += arch/x86/kernel/head64.o
head-y += arch/x86/kernel/init_task.o
libs-y += arch/x86/lib/
# Sub architecture files that needs linking first
core-y += $(fcore-y)
# Xen paravirtualization support
core-$(CONFIG_XEN) += arch/x86/xen/
# lguest paravirtualization support
core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
core-y += arch/x86/kernel/
core-y += arch/x86/mm/
# Remaining sub architecture files
core-y += $(mcore-y)
core-y += arch/x86/crypto/
core-y += arch/x86/vdso/
core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
# drivers-y are linked after core-y
drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
drivers-$(CONFIG_PCI) += arch/x86/pci/
# must be linked after kernel/
drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
ifeq ($(CONFIG_X86_32),y)
drivers-$(CONFIG_PM) += arch/x86/power/
drivers-$(CONFIG_FB) += arch/x86/video/
endif
####
# boot loader support. Several targets are kept for legacy purposes
boot := arch/x86/boot
PHONY += zImage bzImage compressed zlilo bzlilo \
zdisk bzdisk fdimage fdimage144 fdimage288 isoimage install
# Default kernel to build
all: bzImage
# KBUILD_IMAGE specify target image being built
KBUILD_IMAGE := $(boot)/bzImage
zImage zlilo zdisk: KBUILD_IMAGE := arch/x86/boot/zImage
zImage bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/bzImage
compressed: zImage
zlilo bzlilo: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
zdisk bzdisk: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
fdimage fdimage144 fdimage288 isoimage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
install: vdso_install
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install
PHONY += vdso_install
vdso_install:
$(Q)$(MAKE) $(build)=arch/x86/vdso $@
archclean:
$(Q)rm -rf $(objtree)/arch/i386
$(Q)rm -rf $(objtree)/arch/x86_64
$(Q)$(MAKE) $(clean)=$(boot)
define archhelp
echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
echo ' install - Install kernel using'
echo ' (your) ~/bin/installkernel or'
echo ' (distribution) /sbin/installkernel or'
echo ' install to $$(INSTALL_PATH) and run lilo'
echo ' fdimage - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
echo ' fdimage144 - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
echo ' fdimage288 - Create 2.8MB boot floppy image (arch/x86/boot/fdimage)'
echo ' isoimage - Create a boot CD-ROM image (arch/x86/boot/image.iso)'
echo ' bzdisk/fdimage*/isoimage also accept:'
echo ' FDARGS="..." arguments for the booted kernel'
echo ' FDINITRD=file initrd for the booted kernel'
endef
CLEAN_FILES += arch/x86/boot/fdimage \
arch/x86/boot/image.iso \
arch/x86/boot/mtools.conf


@@ -1,175 +0,0 @@
#
# i386 Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to do have actions
# for "archclean" cleaning up for this architecture.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1994 by Linus Torvalds
#
# 19990713 Artur Skawina <skawina@geocities.com>
# Added '-march' and '-mpreferred-stack-boundary' support
#
# 20050320 Kianusch Sayah Karadji <kianusch@sk-tech.net>
# Added support for GEODE CPU
# BITS is used as extension for files which are available in a 32 bit
# and a 64 bit version to simplify shared Makefiles.
# e.g.: obj-y += foo_$(BITS).o
BITS := 32
export BITS
HAS_BIARCH := $(call cc-option-yn, -m32)
ifeq ($(HAS_BIARCH),y)
AS := $(AS) --32
LD := $(LD) -m elf_i386
CC := $(CC) -m32
endif
LDFLAGS := -m elf_i386
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
ifdef CONFIG_RELOCATABLE
LDFLAGS_vmlinux := --emit-relocs
endif
CHECKFLAGS += -D__i386__
KBUILD_CFLAGS += -pipe -msoft-float -mregparm=3 -freg-struct-return
# prevent gcc from keeping the stack 16 byte aligned
KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
# CPU-specific tuning. Anything which can be shared with UML should go here.
include $(srctree)/arch/x86/Makefile_32.cpu
# temporary until string.h is fixed
cflags-y += -ffreestanding
# this works around some issues with generating unwind tables in older gccs
# newer gccs do it by default
cflags-y += -maccumulate-outgoing-args
# Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
# a lot more stack due to the lack of sharing of stacklots:
KBUILD_CFLAGS += $(shell if [ $(call cc-version) -lt 0400 ] ; then echo $(call cc-option,-fno-unit-at-a-time); fi ;)
# do binutils support CFI?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_rel_offset esp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_rel_offset esp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
# is .cfi_signal_frame supported too?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
KBUILD_CFLAGS += $(cflags-y)
# Default subarch .c files
mcore-y := arch/x86/mach-default
# Voyager subarch support
mflags-$(CONFIG_X86_VOYAGER) := -Iinclude/asm-x86/mach-voyager
mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager
# VISWS subarch support
mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws
# NUMAQ subarch support
mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default
# BIGSMP subarch support
mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default
#Summit subarch support
mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default
# generic subarchitecture
mflags-$(CONFIG_X86_GENERICARCH) := -Iinclude/asm-x86/mach-generic
mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default
core-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
# ES7000 subarch support
mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default
core-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
# Xen paravirtualization support
core-$(CONFIG_XEN) += arch/x86/xen/
# lguest paravirtualization support
core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
# default subarch .h files
mflags-y += -Iinclude/asm-x86/mach-default
head-y := arch/x86/kernel/head_32.o arch/x86/kernel/init_task.o
libs-y += arch/x86/lib/
core-y += arch/x86/kernel/ \
arch/x86/mm/ \
$(mcore-y)/ \
arch/x86/crypto/
drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
drivers-$(CONFIG_PCI) += arch/x86/pci/
# must be linked after kernel/
drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
drivers-$(CONFIG_PM) += arch/x86/power/
drivers-$(CONFIG_FB) += arch/x86/video/
KBUILD_CFLAGS += $(mflags-y)
KBUILD_AFLAGS += $(mflags-y)
boot := arch/x86/boot
PHONY += zImage bzImage compressed zlilo bzlilo \
zdisk bzdisk fdimage fdimage144 fdimage288 isoimage install
all: bzImage
# KBUILD_IMAGE specify target image being built
KBUILD_IMAGE := $(boot)/bzImage
zImage zlilo zdisk: KBUILD_IMAGE := arch/x86/boot/zImage
zImage bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
$(Q)mkdir -p $(objtree)/arch/i386/boot
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/i386/boot/bzImage
compressed: zImage
zlilo bzlilo: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
zdisk bzdisk: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
fdimage fdimage144 fdimage288 isoimage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
install:
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install
archclean:
$(Q)rm -rf $(objtree)/arch/i386/boot
$(Q)$(MAKE) $(clean)=arch/x86/boot
define archhelp
echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
echo ' install - Install kernel using'
echo ' (your) ~/bin/installkernel or'
echo ' (distribution) /sbin/installkernel or'
echo ' install to $$(INSTALL_PATH) and run lilo'
echo ' bzdisk - Create a boot floppy in /dev/fd0'
echo ' fdimage - Create a boot floppy image'
echo ' isoimage - Create a boot CD-ROM image'
endef
CLEAN_FILES += arch/x86/boot/fdimage \
arch/x86/boot/image.iso \
arch/x86/boot/mtools.conf


@@ -1,144 +0,0 @@
#
# x86_64 Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to do have actions
# for "archclean" and "archdep" for cleaning up and making dependencies for
# this architecture
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1994 by Linus Torvalds
#
# 19990713 Artur Skawina <skawina@geocities.com>
# Added '-march' and '-mpreferred-stack-boundary' support
# 20000913 Pavel Machek <pavel@suse.cz>
# Converted for x86_64 architecture
# 20010105 Andi Kleen, add IA32 compiler.
# ....and later removed it again....
#
# $Id: Makefile,v 1.31 2002/03/22 15:56:07 ak Exp $
# BITS is used as extension for files which are available in a 32 bit
# and a 64 bit version to simplify shared Makefiles.
# e.g.: obj-y += foo_$(BITS).o
BITS := 64
export BITS
LDFLAGS := -m elf_x86_64
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
LDFLAGS_vmlinux :=
CHECKFLAGS += -D__x86_64__ -m64
cflags-y :=
cflags-kernel-y :=
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
# gcc doesn't support -march=core2 yet as of gcc 4.3, but I hope it
# will eventually. Use -mtune=generic as fallback
cflags-$(CONFIG_MCORE2) += \
$(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
cflags-y += -m64
cflags-y += -mno-red-zone
cflags-y += -mcmodel=kernel
cflags-y += -pipe
cflags-y += -Wno-sign-compare
cflags-y += -fno-asynchronous-unwind-tables
ifneq ($(CONFIG_DEBUG_INFO),y)
# -fweb shrinks the kernel a bit, but the difference is very small
# it also messes up debugging, so don't use it for now.
#cflags-y += $(call cc-option,-fweb)
endif
# -funit-at-a-time shrinks the kernel .text considerably
# unfortunately it makes reading oopses harder.
cflags-y += $(call cc-option,-funit-at-a-time)
# prevent gcc from generating any FP code by mistake
cflags-y += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
# this works around some issues with generating unwind tables in older gccs
# newer gccs do it by default
cflags-y += -maccumulate-outgoing-args
# do binutils support CFI?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_rel_offset rsp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_rel_offset rsp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
# is .cfi_signal_frame supported too?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
cflags-$(CONFIG_CC_STACKPROTECTOR) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh "$(CC)" -fstack-protector )
cflags-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh "$(CC)" -fstack-protector-all )
KBUILD_CFLAGS += $(cflags-y)
CFLAGS_KERNEL += $(cflags-kernel-y)
KBUILD_AFLAGS += -m64
head-y := arch/x86/kernel/head_64.o arch/x86/kernel/head64.o arch/x86/kernel/init_task.o
libs-y += arch/x86/lib/
core-y += arch/x86/kernel/ \
arch/x86/mm/ \
arch/x86/crypto/ \
arch/x86/vdso/
core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
drivers-$(CONFIG_PCI) += arch/x86/pci/
drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
boot := arch/x86/boot
PHONY += bzImage bzlilo install archmrproper \
fdimage fdimage144 fdimage288 isoimage archclean
#Default target when executing "make"
all: bzImage
BOOTIMAGE := arch/x86/boot/bzImage
KBUILD_IMAGE := $(BOOTIMAGE)
bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(BOOTIMAGE)
$(Q)mkdir -p $(objtree)/arch/x86_64/boot
$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/x86_64/boot/bzImage
bzlilo: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zlilo
bzdisk: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zdisk
fdimage fdimage144 fdimage288 isoimage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@
install: vdso_install
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@
vdso_install:
ifeq ($(CONFIG_IA32_EMULATION),y)
$(Q)$(MAKE) $(build)=arch/x86/ia32 $@
endif
$(Q)$(MAKE) $(build)=arch/x86/vdso $@
archclean:
$(Q)rm -rf $(objtree)/arch/x86_64/boot
$(Q)$(MAKE) $(clean)=$(boot)
define archhelp
echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
echo ' install - Install kernel using'
echo ' (your) ~/bin/installkernel or'
echo ' (distribution) /sbin/installkernel or'
echo ' install to $$(INSTALL_PATH) and run lilo'
echo ' bzdisk - Create a boot floppy in /dev/fd0'
echo ' fdimage - Create a boot floppy image'
echo ' isoimage - Create a boot CD-ROM image'
endef
CLEAN_FILES += arch/x86/boot/fdimage \
arch/x86/boot/image.iso \
arch/x86/boot/mtools.conf


@@ -28,9 +28,11 @@ SVGA_MODE := -DSVGA_MODE=NORMAL_VGA
targets := vmlinux.bin setup.bin setup.elf zImage bzImage
subdir- := compressed
-setup-y += a20.o apm.o cmdline.o copy.o cpu.o cpucheck.o edd.o
setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
setup-y += header.o main.o mca.o memory.o pm.o pmjump.o
-setup-y += printf.o string.o tty.o video.o version.o voyager.o
setup-y += printf.o string.o tty.o video.o version.o
setup-$(CONFIG_X86_APM_BOOT) += apm.o
setup-$(CONFIG_X86_VOYAGER) += voyager.o

# The link order of the video-*.o modules can matter. In particular,
# video-vga.o *must* be listed first, followed by video-vesa.o.

@@ -49,10 +51,7 @@ HOSTCFLAGS_build.o := $(LINUXINCLUDE)
# How to compile the 16-bit code. Note we always compile for -march=i386,
# that way we can complain to the user if the CPU is insufficient.
-cflags-$(CONFIG_X86_32) :=
-cflags-$(CONFIG_X86_64) := -m32
KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
-	$(cflags-y) \
	-Wall -Wstrict-prototypes \
	-march=i386 -mregparm=3 \
	-include $(srctree)/$(src)/code16gcc.h \
@@ -62,6 +61,7 @@ KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
	$(call cc-option, -fno-unit-at-a-time)) \
	$(call cc-option, -fno-stack-protector) \
	$(call cc-option, -mpreferred-stack-boundary=2)
KBUILD_CFLAGS += $(call cc-option,-m32)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__

$(obj)/zImage: IMAGE_OFFSET := 0x1000


@@ -19,8 +19,6 @@
#include "boot.h"

-#if defined(CONFIG_APM) || defined(CONFIG_APM_MODULE)
int query_apm_bios(void)
{
	u16 ax, bx, cx, dx, di;
@@ -95,4 +93,3 @@ int query_apm_bios(void)
	return 0;
}
-#endif


@@ -109,7 +109,7 @@ typedef unsigned int addr_t;
static inline u8 rdfs8(addr_t addr)
{
	u8 v;
-	asm volatile("movb %%fs:%1,%0" : "=r" (v) : "m" (*(u8 *)addr));
	asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
	return v;
}
static inline u16 rdfs16(addr_t addr)
@@ -127,21 +127,21 @@ static inline u32 rdfs32(addr_t addr)
static inline void wrfs8(u8 v, addr_t addr)
{
-	asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "r" (v));
	asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
}
static inline void wrfs16(u16 v, addr_t addr)
{
-	asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "r" (v));
	asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
}
static inline void wrfs32(u32 v, addr_t addr)
{
-	asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "r" (v));
	asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
}
static inline u8 rdgs8(addr_t addr)
{
	u8 v;
-	asm volatile("movb %%gs:%1,%0" : "=r" (v) : "m" (*(u8 *)addr));
	asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
	return v;
}
static inline u16 rdgs16(addr_t addr)
@@ -159,15 +159,15 @@ static inline u32 rdgs32(addr_t addr)
static inline void wrgs8(u8 v, addr_t addr)
{
-	asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "r" (v));
	asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
}
static inline void wrgs16(u16 v, addr_t addr)
{
-	asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "r" (v));
	asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
}
static inline void wrgs32(u32 v, addr_t addr)
{
-	asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "r" (v));
	asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
}

/* Note: these only return true/false, not a signed return value! */
@@ -241,6 +241,7 @@ int query_apm_bios(void);

/* cmdline.c */
int cmdline_find_option(const char *option, char *buffer, int bufsize);
int cmdline_find_option_bool(const char *option);

/* cpu.c, cpucheck.c */
int check_cpu(int *cpu_level_ptr, int *req_level_ptr, u32 **err_flags_ptr);
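The fs/gs helpers above implement real-mode far-pointer access: callers split a sub-1MiB linear address into a segment (addr >> 4) passed to set_fs() and a 4-bit offset (addr & 0xf). The arithmetic can be checked in plain C, with no segment registers involved; the function name here is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Real-mode address reassembly: a 20-bit linear address below 1 MiB
 * is segment * 16 + offset, the inverse of the >> 4 / & 0xf split. */
static uint32_t linear_addr(uint16_t seg, uint16_t off)
{
	return ((uint32_t)seg << 4) + (uint32_t)off;
}
```

This is exactly how cmdline.c (next file) reaches the kernel command line: set_fs(cmdline_ptr >> 4) plus an initial offset of cmdline_ptr & 0xf.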


@@ -95,3 +95,68 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize)
	return len;
}

/*
 * Find a boolean option (like quiet,noapic,nosmp....)
 *
 * Returns the position of that option (starts counting with 1)
 * or 0 on not found
 */
int cmdline_find_option_bool(const char *option)
{
	u32 cmdline_ptr = boot_params.hdr.cmd_line_ptr;
	addr_t cptr;
	char c;
	int pos = 0, wstart = 0;
	const char *opptr = NULL;
	enum {
		st_wordstart,	/* Start of word/after whitespace */
		st_wordcmp,	/* Comparing this word */
		st_wordskip,	/* Miscompare, skip */
	} state = st_wordstart;

	if (!cmdline_ptr || cmdline_ptr >= 0x100000)
		return -1;	/* No command line, or inaccessible */

	cptr = cmdline_ptr & 0xf;
	set_fs(cmdline_ptr >> 4);

	while (cptr < 0x10000) {
		c = rdfs8(cptr++);
		pos++;

		switch (state) {
		case st_wordstart:
			if (!c)
				return 0;
			else if (myisspace(c))
				break;

			state = st_wordcmp;
			opptr = option;
			wstart = pos;
			/* fall through */

		case st_wordcmp:
			if (!*opptr)
				if (!c || myisspace(c))
					return wstart;
				else
					state = st_wordskip;
			else if (!c)
				return 0;
			else if (c != *opptr++)
				state = st_wordskip;
			break;

		case st_wordskip:
			if (!c)
				return 0;
			else if (myisspace(c))
				state = st_wordstart;
			break;
		}
	}

	return 0;	/* Buffer overrun */
}


@@ -1,5 +1,63 @@
#
# linux/arch/x86/boot/compressed/Makefile
#
# create a compressed vmlinux image from the original vmlinux
#
targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o
KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
cflags-$(CONFIG_X86_64) := -mcmodel=small
KBUILD_CFLAGS += $(cflags-y)
KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
LDFLAGS := -m elf_$(UTS_MACHINE)
LDFLAGS_vmlinux := -T
$(obj)/vmlinux: $(src)/vmlinux_$(BITS).lds $(obj)/head_$(BITS).o $(obj)/misc.o $(obj)/piggy.o FORCE
$(call if_changed,ld)
@:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
ifeq ($(CONFIG_X86_32),y)
-include ${srctree}/arch/x86/boot/compressed/Makefile_32
targets += vmlinux.bin.all vmlinux.relocs
hostprogs-y := relocs
quiet_cmd_relocs = RELOCS $@
cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<
$(obj)/vmlinux.relocs: vmlinux $(obj)/relocs FORCE
$(call if_changed,relocs)
vmlinux.bin.all-y := $(obj)/vmlinux.bin
vmlinux.bin.all-$(CONFIG_RELOCATABLE) += $(obj)/vmlinux.relocs
quiet_cmd_relocbin = BUILD $@
cmd_relocbin = cat $(filter-out FORCE,$^) > $@
$(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE
$(call if_changed,relocbin)
ifdef CONFIG_RELOCATABLE
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,gzip)
else
-include ${srctree}/arch/x86/boot/compressed/Makefile_64
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
	$(call if_changed,gzip)
endif
LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
else
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
endif
$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,ld)


@@ -1,50 +0,0 @@
#
# linux/arch/x86/boot/compressed/Makefile
#
# create a compressed vmlinux image from the original vmlinux
#
targets := vmlinux vmlinux.bin vmlinux.bin.gz head_32.o misc_32.o piggy.o \
vmlinux.bin.all vmlinux.relocs
EXTRA_AFLAGS := -traditional
LDFLAGS_vmlinux := -T
hostprogs-y := relocs
KBUILD_CFLAGS := -m32 -D__KERNEL__ $(LINUX_INCLUDE) -O2 \
-fno-strict-aliasing -fPIC \
$(call cc-option,-ffreestanding) \
$(call cc-option,-fno-stack-protector)
LDFLAGS := -m elf_i386
$(obj)/vmlinux: $(src)/vmlinux_32.lds $(obj)/head_32.o $(obj)/misc_32.o $(obj)/piggy.o FORCE
$(call if_changed,ld)
@:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
quiet_cmd_relocs = RELOCS $@
cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<
$(obj)/vmlinux.relocs: vmlinux $(obj)/relocs FORCE
$(call if_changed,relocs)
vmlinux.bin.all-y := $(obj)/vmlinux.bin
vmlinux.bin.all-$(CONFIG_RELOCATABLE) += $(obj)/vmlinux.relocs
quiet_cmd_relocbin = BUILD $@
cmd_relocbin = cat $(filter-out FORCE,$^) > $@
$(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE
$(call if_changed,relocbin)
ifdef CONFIG_RELOCATABLE
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,gzip)
else
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
endif
LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
$(obj)/piggy.o: $(src)/vmlinux_32.scr $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,ld)


@@ -1,30 +0,0 @@
#
# linux/arch/x86/boot/compressed/Makefile
#
# create a compressed vmlinux image from the original vmlinux
#
targets := vmlinux vmlinux.bin vmlinux.bin.gz head_64.o misc_64.o piggy.o
KBUILD_CFLAGS := -m64 -D__KERNEL__ $(LINUXINCLUDE) -O2 \
-fno-strict-aliasing -fPIC -mcmodel=small \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-stack-protector)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
LDFLAGS := -m elf_x86_64
LDFLAGS_vmlinux := -T
$(obj)/vmlinux: $(src)/vmlinux_64.lds $(obj)/head_64.o $(obj)/misc_64.o $(obj)/piggy.o FORCE
$(call if_changed,ld)
@:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
$(obj)/piggy.o: $(obj)/vmlinux_64.scr $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,ld)


@@ -1,7 +1,7 @@
/*
 * misc.c
 *
 * This is a collection of several routines from gzip-1.0.3
 * adapted for Linux.
 *
 * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
@@ -9,9 +9,18 @@
 * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
 */
/*
 * we have to be careful, because no indirections are allowed here, and
 * paravirt_ops is a kind of one. As it will only run in baremetal anyway,
 * we just keep it from happening
 */
#undef CONFIG_PARAVIRT
#ifdef CONFIG_X86_64
#define _LINUX_STRING_H_ 1
#define __LINUX_BITMAP_H 1
#endif
#include <linux/linkage.h>
#include <linux/vmalloc.h>
#include <linux/screen_info.h>
#include <asm/io.h>
#include <asm/page.h>
@@ -186,10 +195,20 @@ static void *memcpy(void *dest, const void *src, unsigned n);
static void putstr(const char *);

-static unsigned long free_mem_ptr;
-static unsigned long free_mem_end_ptr;
#ifdef CONFIG_X86_64
#define memptr long
#else
#define memptr unsigned
#endif

static memptr free_mem_ptr;
static memptr free_mem_end_ptr;
#ifdef CONFIG_X86_64
#define HEAP_SIZE 0x7000
#else
#define HEAP_SIZE 0x4000
#endif
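The memptr definition above exists because free_mem_ptr holds an address as an integer: on x86-64 a 32-bit integer type would silently truncate the heap pointer. A small demonstration of that truncation, using explicit fixed-width types (illustrative, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Storing a 64-bit address in a 32-bit integer drops the upper half,
 * which is why misc.c widens memptr to 'long' for CONFIG_X86_64. */
static uint32_t narrow(uint64_t addr)
{
	return (uint32_t)addr;	/* upper 32 bits are lost here */
}
```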
static char *vidmem = (char *)0xb8000;
static int vidport;
@@ -230,7 +249,7 @@ static void gzip_mark(void **ptr)
static void gzip_release(void **ptr)
{
-	free_mem_ptr = (unsigned long) *ptr;
	free_mem_ptr = (memptr) *ptr;
}

static void scroll(void)
@@ -247,8 +266,10 @@ static void putstr(const char *s)
	int x,y,pos;
	char c;

#ifdef CONFIG_X86_32
	if (RM_SCREEN_INFO.orig_video_mode == 0 && lines == 0 && cols == 0)
		return;
#endif

	x = RM_SCREEN_INFO.orig_x;
	y = RM_SCREEN_INFO.orig_y;
@@ -261,7 +282,7 @@ static void putstr(const char *s)
				y--;
			}
		} else {
-			vidmem [ ( x + cols * y ) * 2 ] = c;
			vidmem [(x + cols * y) * 2] = c;
			if ( ++x >= cols ) {
				x = 0;
				if ( ++y >= lines ) {
@@ -276,16 +297,16 @@ static void putstr(const char *s)
	RM_SCREEN_INFO.orig_y = y;

	pos = (x + cols * y) * 2;	/* Update cursor position */
-	outb_p(14, vidport);
	outb(14, vidport);
-	outb_p(0xff & (pos >> 9), vidport+1);
	outb(0xff & (pos >> 9), vidport+1);
-	outb_p(15, vidport);
	outb(15, vidport);
-	outb_p(0xff & (pos >> 1), vidport+1);
	outb(0xff & (pos >> 1), vidport+1);
}
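The outb() pairs in putstr() program the VGA CRTC cursor-location registers (indices 14 and 15, written through the CRTC index/data port pair in vidport): pos is a byte offset into the two-byte-per-cell text buffer, so the cursor's cell index is pos >> 1, split into a high and a low byte. A sketch of just the arithmetic (function name hypothetical):

```c
#include <assert.h>

/* Recombine the two bytes putstr() writes to the CRTC: high byte is
 * pos >> 9, low byte is (pos >> 1) & 0xff, where pos = (x + cols*y)*2
 * is the byte offset into the 0xb8000 text buffer. The result is the
 * character-cell index x + cols*y. */
static unsigned cursor_cell(unsigned x, unsigned y, unsigned cols)
{
	unsigned pos = (x + cols * y) * 2;

	return ((0xff & (pos >> 9)) << 8) | (0xff & (pos >> 1));
}
```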
static void* memset(void* s, int c, unsigned n) static void* memset(void* s, int c, unsigned n)
{ {
int i; int i;
char *ss = (char*)s; char *ss = s;
for (i=0;i<n;i++) ss[i] = c; for (i=0;i<n;i++) ss[i] = c;
return s; return s;
@@ -294,7 +315,8 @@ static void* memset(void* s, int c, unsigned n)
static void* memcpy(void* dest, const void* src, unsigned n) static void* memcpy(void* dest, const void* src, unsigned n)
{ {
int i; int i;
char *d = (char *)dest, *s = (char *)src; const char *s = src;
char *d = dest;
for (i=0;i<n;i++) d[i] = s[i]; for (i=0;i<n;i++) d[i] = s[i];
return dest; return dest;
@@ -339,11 +361,13 @@ static void error(char *x)
putstr(x); putstr(x);
putstr("\n\n -- System halted"); putstr("\n\n -- System halted");
while(1); /* Halt */ while (1)
asm("hlt");
} }
asmlinkage void decompress_kernel(void *rmode, unsigned long end, asmlinkage void decompress_kernel(void *rmode, memptr heap,
uch *input_data, unsigned long input_len, uch *output) uch *input_data, unsigned long input_len,
uch *output)
{ {
real_mode = rmode; real_mode = rmode;
@@ -358,25 +382,32 @@ asmlinkage void decompress_kernel(void *rmode, unsigned long end,
lines = RM_SCREEN_INFO.orig_video_lines; lines = RM_SCREEN_INFO.orig_video_lines;
cols = RM_SCREEN_INFO.orig_video_cols; cols = RM_SCREEN_INFO.orig_video_cols;
window = output; /* Output buffer (Normally at 1M) */ window = output; /* Output buffer (Normally at 1M) */
free_mem_ptr = end; /* Heap */ free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = end + HEAP_SIZE; free_mem_end_ptr = heap + HEAP_SIZE;
inbuf = input_data; /* Input buffer */ inbuf = input_data; /* Input buffer */
insize = input_len; insize = input_len;
inptr = 0; inptr = 0;
#ifdef CONFIG_X86_64
if ((ulg)output & (__KERNEL_ALIGN - 1))
error("Destination address not 2M aligned");
if ((ulg)output >= 0xffffffffffUL)
error("Destination address too large");
#else
if ((u32)output & (CONFIG_PHYSICAL_ALIGN -1)) if ((u32)output & (CONFIG_PHYSICAL_ALIGN -1))
error("Destination address not CONFIG_PHYSICAL_ALIGN aligned"); error("Destination address not CONFIG_PHYSICAL_ALIGN aligned");
if (end > ((-__PAGE_OFFSET-(512 <<20)-1) & 0x7fffffff)) if (heap > ((-__PAGE_OFFSET-(512<<20)-1) & 0x7fffffff))
error("Destination address too large"); error("Destination address too large");
#ifndef CONFIG_RELOCATABLE #ifndef CONFIG_RELOCATABLE
if ((u32)output != LOAD_PHYSICAL_ADDR) if ((u32)output != LOAD_PHYSICAL_ADDR)
error("Wrong destination address"); error("Wrong destination address");
#endif
#endif #endif
makecrc(); makecrc();
putstr("Uncompressing Linux... "); putstr("\nDecompressing Linux... ");
gunzip(); gunzip();
putstr("Ok, booting the kernel.\n"); putstr("done.\nBooting the kernel.\n");
return; return;
} }
@@ -1,371 +0,0 @@
/*
* misc.c
*
* This is a collection of several routines from gzip-1.0.3
* adapted for Linux.
*
* malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
* puts by Nick Holloway 1993, better puts by Martin Mares 1995
* High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
*/
#define _LINUX_STRING_H_ 1
#define __LINUX_BITMAP_H 1
#include <linux/linkage.h>
#include <linux/screen_info.h>
#include <asm/io.h>
#include <asm/page.h>
/* WARNING!!
* This code is compiled with -fPIC and it is relocated dynamically
* at run time, but no relocation processing is performed.
* This means that it is not safe to place pointers in static structures.
*/
/*
* Getting to provable safe in place decompression is hard.
* Worst case behaviours need to be analyzed.
* Background information:
*
* The file layout is:
* magic[2]
* method[1]
* flags[1]
* timestamp[4]
* extraflags[1]
* os[1]
* compressed data blocks[N]
* crc[4] orig_len[4]
*
* resulting in 18 bytes of non compressed data overhead.
*
* Files divided into blocks
* 1 bit (last block flag)
* 2 bits (block type)
*
* 1 block occurs every 32K -1 bytes or when there 50% compression has been achieved.
* The smallest block type encoding is always used.
*
* stored:
* 32 bits length in bytes.
*
* fixed:
* magic fixed tree.
* symbols.
*
* dynamic:
* dynamic tree encoding.
* symbols.
*
*
* The buffer for decompression in place is the length of the
* uncompressed data, plus a small amount extra to keep the algorithm safe.
* The compressed data is placed at the end of the buffer. The output
* pointer is placed at the start of the buffer and the input pointer
* is placed where the compressed data starts. Problems will occur
* when the output pointer overruns the input pointer.
*
* The output pointer can only overrun the input pointer if the input
* pointer is moving faster than the output pointer. A condition only
* triggered by data whose compressed form is larger than the uncompressed
* form.
*
* The worst case at the block level is a growth of the compressed data
* of 5 bytes per 32767 bytes.
*
* The worst case internal to a compressed block is very hard to figure.
* The worst case can at least be boundined by having one bit that represents
* 32764 bytes and then all of the rest of the bytes representing the very
* very last byte.
*
* All of which is enough to compute an amount of extra data that is required
* to be safe. To avoid problems at the block level allocating 5 extra bytes
* per 32767 bytes of data is sufficient. To avoind problems internal to a block
* adding an extra 32767 bytes (the worst case uncompressed block size) is
* sufficient, to ensure that in the worst case the decompressed data for
* block will stop the byte before the compressed data for a block begins.
* To avoid problems with the compressed data's meta information an extra 18
* bytes are needed. Leading to the formula:
*
* extra_bytes = (uncompressed_size >> 12) + 32768 + 18 + decompressor_size.
*
* Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
* Adding 32768 instead of 32767 just makes for round numbers.
* Adding the decompressor_size is necessary as it musht live after all
* of the data as well. Last I measured the decompressor is about 14K.
* 10K of actual data and 4K of bss.
*
*/
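As a sanity check on the sizing comment above, here is a minimal standalone sketch of the stated formula; the decompressor size is a caller-supplied number (the comment's "about 14K" is only an old measurement, not a constant):

```c
#include <assert.h>

/* Safety margin for in-place gzip decompression, following the formula
 * in the comment above: 8 bytes of worst-case growth per 32K of output
 * (size >> 12), one worst-case uncompressed block (32768), 18 bytes of
 * gzip metadata, plus room for the decompressor itself at the end. */
static unsigned long extra_bytes(unsigned long uncompressed_size,
                                 unsigned long decompressor_size)
{
	return (uncompressed_size >> 12) + 32768 + 18 + decompressor_size;
}
```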
/*
* gzip declarations
*/
#define OF(args) args
#define STATIC static
#undef memset
#undef memcpy
#define memzero(s, n) memset ((s), 0, (n))
typedef unsigned char uch;
typedef unsigned short ush;
typedef unsigned long ulg;
#define WSIZE 0x80000000 /* Window size must be at least 32k,
* and a power of two
* We don't actually have a window just
* a huge output buffer so I report
* a 2G windows size, as that should
* always be larger than our output buffer.
*/
static uch *inbuf; /* input buffer */
static uch *window; /* Sliding window buffer, (and final output buffer) */
static unsigned insize; /* valid bytes in inbuf */
static unsigned inptr; /* index of next byte to be processed in inbuf */
static unsigned outcnt; /* bytes in output buffer */
/* gzip flag byte */
#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
#define COMMENT 0x10 /* bit 4 set: file comment present */
#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
#define RESERVED 0xC0 /* bit 6,7: reserved */
#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
/* Diagnostic functions */
#ifdef DEBUG
# define Assert(cond,msg) {if(!(cond)) error(msg);}
# define Trace(x) fprintf x
# define Tracev(x) {if (verbose) fprintf x ;}
# define Tracevv(x) {if (verbose>1) fprintf x ;}
# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
#else
# define Assert(cond,msg)
# define Trace(x)
# define Tracev(x)
# define Tracevv(x)
# define Tracec(c,x)
# define Tracecv(c,x)
#endif
static int fill_inbuf(void);
static void flush_window(void);
static void error(char *m);
static void gzip_mark(void **);
static void gzip_release(void **);
/*
* This is set up by the setup-routine at boot-time
*/
static unsigned char *real_mode; /* Pointer to real-mode data */
#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
#ifndef STANDARD_MEMORY_BIOS_CALL
#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
#endif
#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
extern unsigned char input_data[];
extern int input_len;
static long bytes_out = 0;
static void *malloc(int size);
static void free(void *where);
static void *memset(void *s, int c, unsigned n);
static void *memcpy(void *dest, const void *src, unsigned n);
static void putstr(const char *);
static long free_mem_ptr;
static long free_mem_end_ptr;
#define HEAP_SIZE 0x7000
static char *vidmem = (char *)0xb8000;
static int vidport;
static int lines, cols;
#include "../../../../lib/inflate.c"
static void *malloc(int size)
{
void *p;
if (size <0) error("Malloc error");
if (free_mem_ptr <= 0) error("Memory error");
free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
p = (void *)free_mem_ptr;
free_mem_ptr += size;
if (free_mem_ptr >= free_mem_end_ptr)
error("Out of memory");
return p;
}
static void free(void *where)
{ /* Don't care */
}
static void gzip_mark(void **ptr)
{
*ptr = (void *) free_mem_ptr;
}
static void gzip_release(void **ptr)
{
free_mem_ptr = (long) *ptr;
}
static void scroll(void)
{
int i;
memcpy ( vidmem, vidmem + cols * 2, ( lines - 1 ) * cols * 2 );
for ( i = ( lines - 1 ) * cols * 2; i < lines * cols * 2; i += 2 )
vidmem[i] = ' ';
}
static void putstr(const char *s)
{
int x,y,pos;
char c;
x = RM_SCREEN_INFO.orig_x;
y = RM_SCREEN_INFO.orig_y;
while ( ( c = *s++ ) != '\0' ) {
if ( c == '\n' ) {
x = 0;
if ( ++y >= lines ) {
scroll();
y--;
}
} else {
vidmem [ ( x + cols * y ) * 2 ] = c;
if ( ++x >= cols ) {
x = 0;
if ( ++y >= lines ) {
scroll();
y--;
}
}
}
}
RM_SCREEN_INFO.orig_x = x;
RM_SCREEN_INFO.orig_y = y;
pos = (x + cols * y) * 2; /* Update cursor position */
outb_p(14, vidport);
outb_p(0xff & (pos >> 9), vidport+1);
outb_p(15, vidport);
outb_p(0xff & (pos >> 1), vidport+1);
}
static void* memset(void* s, int c, unsigned n)
{
int i;
char *ss = (char*)s;
for (i=0;i<n;i++) ss[i] = c;
return s;
}
static void* memcpy(void* dest, const void* src, unsigned n)
{
int i;
char *d = (char *)dest, *s = (char *)src;
for (i=0;i<n;i++) d[i] = s[i];
return dest;
}
/* ===========================================================================
* Fill the input buffer. This is called only when the buffer is empty
* and at least one byte is really needed.
*/
static int fill_inbuf(void)
{
error("ran out of input data");
return 0;
}
/* ===========================================================================
* Write the output window window[0..outcnt-1] and update crc and bytes_out.
* (Used for the decompressed data only.)
*/
static void flush_window(void)
{
/* With my window equal to my output buffer
* I only need to compute the crc here.
*/
ulg c = crc; /* temporary variable */
unsigned n;
uch *in, ch;
in = window;
for (n = 0; n < outcnt; n++) {
ch = *in++;
c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
}
crc = c;
bytes_out += (ulg)outcnt;
outcnt = 0;
}
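flush_window() above relies on the table-driven CRC-32 step `crc_32_tab[(c ^ ch) & 0xff] ^ (c >> 8)`. A self-contained sketch of the same update, with the table built at run time rather than by makecrc():

```c
#include <assert.h>
#include <stddef.h>

/* Byte-at-a-time CRC-32 (reflected polynomial 0xedb88320), the same
 * update step flush_window() performs on each output byte. */
static unsigned long crc32_update(unsigned long crc,
				  const unsigned char *buf, size_t len)
{
	static unsigned long tab[256];

	if (!tab[1]) {			/* build the table once */
		for (int n = 0; n < 256; n++) {
			unsigned long c = n;
			for (int k = 0; k < 8; k++)
				c = (c & 1) ? 0xedb88320UL ^ (c >> 1)
					    : c >> 1;
			tab[n] = c;
		}
	}
	while (len--)
		crc = tab[(crc ^ *buf++) & 0xff] ^ (crc >> 8);
	return crc;
}
```

Callers are expected to start from ~0 and invert the result at the end, as gunzip() does via its init/final XOR.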
static void error(char *x)
{
putstr("\n\n");
putstr(x);
putstr("\n\n -- System halted");
while(1); /* Halt */
}
asmlinkage void decompress_kernel(void *rmode, unsigned long heap,
uch *input_data, unsigned long input_len, uch *output)
{
real_mode = rmode;
if (RM_SCREEN_INFO.orig_video_mode == 7) {
vidmem = (char *) 0xb0000;
vidport = 0x3b4;
} else {
vidmem = (char *) 0xb8000;
vidport = 0x3d4;
}
lines = RM_SCREEN_INFO.orig_video_lines;
cols = RM_SCREEN_INFO.orig_video_cols;
window = output; /* Output buffer (Normally at 1M) */
free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = heap + HEAP_SIZE;
inbuf = input_data; /* Input buffer */
insize = input_len;
inptr = 0;
if ((ulg)output & (__KERNEL_ALIGN - 1))
error("Destination address not 2M aligned");
if ((ulg)output >= 0xffffffffffUL)
error("Destination address too large");
makecrc();
putstr(".\nDecompressing Linux...");
gunzip();
putstr("done.\nBooting the kernel.\n");
return;
}
@@ -27,11 +27,6 @@ static unsigned long *relocs;
* absolute relocations present w.r.t these symbols. * absolute relocations present w.r.t these symbols.
*/ */
static const char* safe_abs_relocs[] = { static const char* safe_abs_relocs[] = {
"__kernel_vsyscall",
"__kernel_rt_sigreturn",
"__kernel_sigreturn",
"SYSENTER_RETURN",
"VDSO_NOTE_MASK",
"xen_irq_disable_direct_reloc", "xen_irq_disable_direct_reloc",
"xen_save_fl_direct_reloc", "xen_save_fl_direct_reloc",
}; };
@@ -45,6 +40,8 @@ static int is_safe_abs_reloc(const char* sym_name)
/* Match found */ /* Match found */
return 1; return 1;
} }
if (strncmp(sym_name, "VDSO", 4) == 0)
return 1;
if (strncmp(sym_name, "__crc_", 6) == 0) if (strncmp(sym_name, "__crc_", 6) == 0)
return 1; return 1;
return 0; return 0;
@@ -1,6 +1,6 @@
SECTIONS SECTIONS
{ {
.text.compressed : { .rodata.compressed : {
input_len = .; input_len = .;
LONG(input_data_end - input_data) input_data = .; LONG(input_data_end - input_data) input_data = .;
*(.data) *(.data)
@@ -3,17 +3,17 @@ OUTPUT_ARCH(i386)
ENTRY(startup_32) ENTRY(startup_32)
SECTIONS SECTIONS
{ {
/* Be careful parts of head.S assume startup_32 is at /* Be careful parts of head_32.S assume startup_32 is at
* address 0. * address 0.
*/ */
. = 0 ; . = 0;
.text.head : { .text.head : {
_head = . ; _head = . ;
*(.text.head) *(.text.head)
_ehead = . ; _ehead = . ;
} }
.data.compressed : { .rodata.compressed : {
*(.data.compressed) *(.rodata.compressed)
} }
.text : { .text : {
_text = .; /* Text */ _text = .; /* Text */
@@ -1,10 +0,0 @@
SECTIONS
{
.data.compressed : {
input_len = .;
LONG(input_data_end - input_data) input_data = .;
*(.data)
output_len = . - 4;
input_data_end = .;
}
}
@@ -3,15 +3,19 @@ OUTPUT_ARCH(i386:x86-64)
ENTRY(startup_64) ENTRY(startup_64)
SECTIONS SECTIONS
{ {
/* Be careful parts of head.S assume startup_32 is at /* Be careful parts of head_64.S assume startup_64 is at
* address 0. * address 0.
*/ */
. = 0; . = 0;
.text : { .text.head : {
_head = . ; _head = . ;
*(.text.head) *(.text.head)
_ehead = . ; _ehead = . ;
*(.text.compressed) }
.rodata.compressed : {
*(.rodata.compressed)
}
.text : {
_text = .; /* Text */ _text = .; /* Text */
*(.text) *(.text)
*(.text.*) *(.text.*)
@@ -129,6 +129,7 @@ void query_edd(void)
char eddarg[8]; char eddarg[8];
int do_mbr = 1; int do_mbr = 1;
int do_edd = 1; int do_edd = 1;
int be_quiet;
int devno; int devno;
struct edd_info ei, *edp; struct edd_info ei, *edp;
u32 *mbrptr; u32 *mbrptr;
@@ -140,12 +141,21 @@ void query_edd(void)
do_edd = 0; do_edd = 0;
} }
be_quiet = cmdline_find_option_bool("quiet");
edp = boot_params.eddbuf; edp = boot_params.eddbuf;
mbrptr = boot_params.edd_mbr_sig_buffer; mbrptr = boot_params.edd_mbr_sig_buffer;
if (!do_edd) if (!do_edd)
return; return;
/* Bugs in OnBoard or AddOnCards Bios may hang the EDD probe,
* so give a hint if this happens.
*/
if (!be_quiet)
printf("Probing EDD (edd=off to disable)... ");
for (devno = 0x80; devno < 0x80+EDD_MBR_SIG_MAX; devno++) { for (devno = 0x80; devno < 0x80+EDD_MBR_SIG_MAX; devno++) {
/* /*
* Scan the BIOS-supported hard disks and query EDD * Scan the BIOS-supported hard disks and query EDD
@@ -162,6 +172,9 @@ void query_edd(void)
if (do_mbr && !read_mbr_sig(devno, &ei, mbrptr++)) if (do_mbr && !read_mbr_sig(devno, &ei, mbrptr++))
boot_params.edd_mbr_sig_buf_entries = devno-0x80+1; boot_params.edd_mbr_sig_buf_entries = devno-0x80+1;
} }
if (!be_quiet)
printf("ok\n");
} }
#endif #endif
@@ -195,10 +195,13 @@ cmd_line_ptr: .long 0 # (Header version 0x0202 or later)
# can be located anywhere in # can be located anywhere in
# low memory 0x10000 or higher. # low memory 0x10000 or higher.
ramdisk_max: .long (-__PAGE_OFFSET-(512 << 20)-1) & 0x7fffffff ramdisk_max: .long 0x7fffffff
# (Header version 0x0203 or later) # (Header version 0x0203 or later)
# The highest safe address for # The highest safe address for
# the contents of an initrd # the contents of an initrd
# The current kernel allows up to 4 GB,
# but leave it at 2 GB to avoid
# possible bootloader bugs.
kernel_alignment: .long CONFIG_PHYSICAL_ALIGN #physical addr alignment kernel_alignment: .long CONFIG_PHYSICAL_ALIGN #physical addr alignment
#required for protected mode #required for protected mode
@@ -100,20 +100,32 @@ static void set_bios_mode(void)
#endif #endif
} }
static void init_heap(void)
{
char *stack_end;
if (boot_params.hdr.loadflags & CAN_USE_HEAP) {
asm("leal %P1(%%esp),%0"
: "=r" (stack_end) : "i" (-STACK_SIZE));
heap_end = (char *)
((size_t)boot_params.hdr.heap_end_ptr + 0x200);
if (heap_end > stack_end)
heap_end = stack_end;
} else {
/* Boot protocol 2.00 only, no heap available */
puts("WARNING: Ancient bootloader, some functionality "
"may be limited!\n");
}
}
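The new init_heap() fixes the old arithmetic (which subtracted STACK_SIZE from the heap end instead of clamping against the stack). A simplified sketch of the corrected logic, using plain integers in place of the real segment-relative pointers:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the fixed heap sizing: heap_end_ptr from the
 * boot header is relative to the start of the setup code minus 0x200,
 * and the resulting heap end must never run into the stack.
 * Plain size_t "addresses" here are an illustrative assumption. */
static size_t heap_end(size_t heap_end_ptr, size_t stack_end)
{
	size_t end = heap_end_ptr + 0x200;

	return end > stack_end ? stack_end : end;	/* clamp to stack */
}
```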
void main(void) void main(void)
{ {
/* First, copy the boot header into the "zeropage" */ /* First, copy the boot header into the "zeropage" */
copy_boot_params(); copy_boot_params();
/* End of heap check */ /* End of heap check */
if (boot_params.hdr.loadflags & CAN_USE_HEAP) { init_heap();
heap_end = (char *)(boot_params.hdr.heap_end_ptr
+0x200-STACK_SIZE);
} else {
/* Boot protocol 2.00 only, no heap available */
puts("WARNING: Ancient bootloader, some functionality "
"may be limited!\n");
}
/* Make sure we have all the proper CPU support */ /* Make sure we have all the proper CPU support */
if (validate_cpu()) { if (validate_cpu()) {
@@ -131,9 +143,6 @@ void main(void)
/* Set keyboard repeat rate (why?) */ /* Set keyboard repeat rate (why?) */
keyboard_set_repeat(); keyboard_set_repeat();
/* Set the video mode */
set_video();
/* Query MCA information */ /* Query MCA information */
query_mca(); query_mca();
@@ -154,6 +163,10 @@ void main(void)
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE) #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
query_edd(); query_edd();
#endif #endif
/* Set the video mode */
set_video();
/* Do the last things and invoke protected mode */ /* Do the last things and invoke protected mode */
go_to_protected_mode(); go_to_protected_mode();
} }
@@ -104,7 +104,7 @@ static void reset_coprocessor(void)
(((u64)(base & 0xff000000) << 32) | \ (((u64)(base & 0xff000000) << 32) | \
((u64)flags << 40) | \ ((u64)flags << 40) | \
((u64)(limit & 0x00ff0000) << 32) | \ ((u64)(limit & 0x00ff0000) << 32) | \
((u64)(base & 0x00ffff00) << 16) | \ ((u64)(base & 0x00ffffff) << 16) | \
((u64)(limit & 0x0000ffff))) ((u64)(limit & 0x0000ffff)))
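The one-character mask fix above matters because GDT_ENTRY() scatters the descriptor fields across a 64-bit word; with `0x00ffff00` the low eight base bits were silently dropped. A standalone sketch of the corrected packing, checked against the well-known flat 4 GB code descriptor:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

/* Descriptor packing as used by setup_gdt(): base bits 0..23 go in
 * descriptor bits 16..39 (hence the corrected 0x00ffffff mask),
 * base bits 24..31 in bits 56..63, limit split across bits 0..15
 * and 48..51, access/flags in bits 40..55. */
#define GDT_ENTRY(flags, base, limit)			\
	(((u64)((base) & 0xff000000) << 32) |		\
	 ((u64)(flags) << 40) |				\
	 ((u64)((limit) & 0x00ff0000) << 32) |		\
	 ((u64)((base) & 0x00ffffff) << 16) |		\
	 ((u64)((limit) & 0x0000ffff)))
```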
struct gdt_ptr { struct gdt_ptr {
@@ -121,6 +121,10 @@ static void setup_gdt(void)
[GDT_ENTRY_BOOT_CS] = GDT_ENTRY(0xc09b, 0, 0xfffff), [GDT_ENTRY_BOOT_CS] = GDT_ENTRY(0xc09b, 0, 0xfffff),
/* DS: data, read/write, 4 GB, base 0 */ /* DS: data, read/write, 4 GB, base 0 */
[GDT_ENTRY_BOOT_DS] = GDT_ENTRY(0xc093, 0, 0xfffff), [GDT_ENTRY_BOOT_DS] = GDT_ENTRY(0xc093, 0, 0xfffff),
/* TSS: 32-bit tss, 104 bytes, base 4096 */
/* We only have a TSS here to keep Intel VT happy;
we don't actually use it for anything. */
[GDT_ENTRY_BOOT_TSS] = GDT_ENTRY(0x0089, 4096, 103),
}; };
/* Xen HVM incorrectly stores a pointer to the gdt_ptr, instead /* Xen HVM incorrectly stores a pointer to the gdt_ptr, instead
of the gdt_ptr contents. Thus, make it static so it will of the gdt_ptr contents. Thus, make it static so it will
@@ -15,6 +15,7 @@
*/ */
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/processor-flags.h>
#include <asm/segment.h> #include <asm/segment.h>
.text .text
@@ -29,28 +30,55 @@
*/ */
protected_mode_jump: protected_mode_jump:
movl %edx, %esi # Pointer to boot_params table movl %edx, %esi # Pointer to boot_params table
movl %eax, 2f # Patch ljmpl instruction
xorl %ebx, %ebx
movw %cs, %bx
shll $4, %ebx
addl %ebx, 2f
movw $__BOOT_DS, %cx movw $__BOOT_DS, %cx
xorl %ebx, %ebx # Per the 32-bit boot protocol movw $__BOOT_TSS, %di
xorl %ebp, %ebp # Per the 32-bit boot protocol
xorl %edi, %edi # Per the 32-bit boot protocol
movl %cr0, %edx movl %cr0, %edx
orb $1, %dl # Protected mode (PE) bit orb $X86_CR0_PE, %dl # Protected mode
movl %edx, %cr0 movl %edx, %cr0
jmp 1f # Short jump to serialize on 386/486 jmp 1f # Short jump to serialize on 386/486
1: 1:
movw %cx, %ds # Transition to 32-bit mode
movw %cx, %es
movw %cx, %fs
movw %cx, %gs
movw %cx, %ss
# Jump to the 32-bit entrypoint
.byte 0x66, 0xea # ljmpl opcode .byte 0x66, 0xea # ljmpl opcode
2: .long 0 # offset 2: .long in_pm32 # offset
.word __BOOT_CS # segment .word __BOOT_CS # segment
.size protected_mode_jump, .-protected_mode_jump .size protected_mode_jump, .-protected_mode_jump
.code32
.type in_pm32, @function
in_pm32:
# Set up data segments for flat 32-bit mode
movl %ecx, %ds
movl %ecx, %es
movl %ecx, %fs
movl %ecx, %gs
movl %ecx, %ss
# The 32-bit code sets up its own stack, but this way we do have
# a valid stack if some debugging hack wants to use it.
addl %ebx, %esp
# Set up TR to make Intel VT happy
ltr %di
# Clear registers to allow for future extensions to the
# 32-bit boot protocol
xorl %ecx, %ecx
xorl %edx, %edx
xorl %ebx, %ebx
xorl %ebp, %ebp
xorl %edi, %edi
# Set up LDTR to make Intel VT happy
lldt %cx
jmpl *%eax # Jump to the 32-bit entrypoint
.size in_pm32, .-in_pm32
@@ -104,6 +104,7 @@ static int bios_probe(void)
mi = GET_HEAP(struct mode_info, 1); mi = GET_HEAP(struct mode_info, 1);
mi->mode = VIDEO_FIRST_BIOS+mode; mi->mode = VIDEO_FIRST_BIOS+mode;
mi->depth = 0; /* text */
mi->x = rdfs16(0x44a); mi->x = rdfs16(0x44a);
mi->y = rdfs8(0x484)+1; mi->y = rdfs8(0x484)+1;
nmodes++; nmodes++;
@@ -116,7 +117,7 @@ static int bios_probe(void)
__videocard video_bios = __videocard video_bios =
{ {
.card_name = "BIOS (scanned)", .card_name = "BIOS",
.probe = bios_probe, .probe = bios_probe,
.set_mode = bios_set_mode, .set_mode = bios_set_mode,
.unsafe = 1, .unsafe = 1,
@@ -79,20 +79,28 @@ static int vesa_probe(void)
/* Text Mode, TTY BIOS supported, /* Text Mode, TTY BIOS supported,
supported by hardware */ supported by hardware */
mi = GET_HEAP(struct mode_info, 1); mi = GET_HEAP(struct mode_info, 1);
mi->mode = mode + VIDEO_FIRST_VESA; mi->mode = mode + VIDEO_FIRST_VESA;
mi->x = vminfo.h_res; mi->depth = 0; /* text */
mi->y = vminfo.v_res; mi->x = vminfo.h_res;
mi->y = vminfo.v_res;
nmodes++; nmodes++;
} else if ((vminfo.mode_attr & 0x99) == 0x99) { } else if ((vminfo.mode_attr & 0x99) == 0x99 &&
(vminfo.memory_layout == 4 ||
vminfo.memory_layout == 6) &&
vminfo.memory_planes == 1) {
#ifdef CONFIG_FB #ifdef CONFIG_FB
/* Graphics mode, color, linear frame buffer /* Graphics mode, color, linear frame buffer
supported -- register the mode but hide from supported. Only register the mode if
the menu. Only do this if framebuffer is if framebuffer is configured, however,
configured, however, otherwise the user will otherwise the user will be left without a screen.
be left without a screen. */ We don't require CONFIG_FB_VESA, however, since
some of the other framebuffer drivers can use
this mode-setting, too. */
mi = GET_HEAP(struct mode_info, 1); mi = GET_HEAP(struct mode_info, 1);
mi->mode = mode + VIDEO_FIRST_VESA; mi->mode = mode + VIDEO_FIRST_VESA;
mi->x = mi->y = 0; mi->depth = vminfo.bpp;
mi->x = vminfo.h_res;
mi->y = vminfo.v_res;
nmodes++; nmodes++;
#endif #endif
} }
@@ -18,22 +18,22 @@
#include "video.h" #include "video.h"
static struct mode_info vga_modes[] = { static struct mode_info vga_modes[] = {
{ VIDEO_80x25, 80, 25 }, { VIDEO_80x25, 80, 25, 0 },
{ VIDEO_8POINT, 80, 50 }, { VIDEO_8POINT, 80, 50, 0 },
{ VIDEO_80x43, 80, 43 }, { VIDEO_80x43, 80, 43, 0 },
{ VIDEO_80x28, 80, 28 }, { VIDEO_80x28, 80, 28, 0 },
{ VIDEO_80x30, 80, 30 }, { VIDEO_80x30, 80, 30, 0 },
{ VIDEO_80x34, 80, 34 }, { VIDEO_80x34, 80, 34, 0 },
{ VIDEO_80x60, 80, 60 }, { VIDEO_80x60, 80, 60, 0 },
}; };
static struct mode_info ega_modes[] = { static struct mode_info ega_modes[] = {
{ VIDEO_80x25, 80, 25 }, { VIDEO_80x25, 80, 25, 0 },
{ VIDEO_8POINT, 80, 43 }, { VIDEO_8POINT, 80, 43, 0 },
}; };
static struct mode_info cga_modes[] = { static struct mode_info cga_modes[] = {
{ VIDEO_80x25, 80, 25 }, { VIDEO_80x25, 80, 25, 0 },
}; };
__videocard video_vga; __videocard video_vga;
@@ -293,13 +293,28 @@ static void display_menu(void)
struct mode_info *mi; struct mode_info *mi;
char ch; char ch;
int i; int i;
int nmodes;
int modes_per_line;
int col;
puts("Mode: COLSxROWS:\n"); nmodes = 0;
for (card = video_cards; card < video_cards_end; card++)
nmodes += card->nmodes;
modes_per_line = 1;
if (nmodes >= 20)
modes_per_line = 3;
for (col = 0; col < modes_per_line; col++)
puts("Mode: Resolution: Type: ");
putchar('\n');
col = 0;
ch = '0'; ch = '0';
for (card = video_cards; card < video_cards_end; card++) { for (card = video_cards; card < video_cards_end; card++) {
mi = card->modes; mi = card->modes;
for (i = 0; i < card->nmodes; i++, mi++) { for (i = 0; i < card->nmodes; i++, mi++) {
char resbuf[32];
int visible = mi->x && mi->y; int visible = mi->x && mi->y;
u16 mode_id = mi->mode ? mi->mode : u16 mode_id = mi->mode ? mi->mode :
(mi->y << 8)+mi->x; (mi->y << 8)+mi->x;
@@ -307,8 +322,18 @@ static void display_menu(void)
if (!visible) if (!visible)
continue; /* Hidden mode */ continue; /* Hidden mode */
printf("%c %04X %3dx%-3d %s\n", if (mi->depth)
ch, mode_id, mi->x, mi->y, card->card_name); sprintf(resbuf, "%dx%d", mi->y, mi->depth);
else
sprintf(resbuf, "%d", mi->y);
printf("%c %03X %4dx%-7s %-6s",
ch, mode_id, mi->x, resbuf, card->card_name);
col++;
if (col >= modes_per_line) {
putchar('\n');
col = 0;
}
if (ch == '9') if (ch == '9')
ch = 'a'; ch = 'a';
@@ -318,6 +343,8 @@ static void display_menu(void)
ch++; ch++;
} }
} }
if (col)
putchar('\n');
} }
#define H(x) ((x)-'a'+10) #define H(x) ((x)-'a'+10)
@@ -83,7 +83,8 @@ void store_screen(void);
struct mode_info { struct mode_info {
u16 mode; /* Mode number (vga= style) */ u16 mode; /* Mode number (vga= style) */
u8 x, y; /* Width, height */ u16 x, y; /* Width, height */
u16 depth; /* Bits per pixel, 0 for text mode */
}; };
struct card_info { struct card_info {
@@ -16,8 +16,6 @@
#include "boot.h" #include "boot.h"
#ifdef CONFIG_X86_VOYAGER
int query_voyager(void) int query_voyager(void)
{ {
u8 err; u8 err;
@@ -42,5 +40,3 @@ int query_voyager(void)
copy_from_fs(data_ptr, di, 7); /* Table is 7 bytes apparently */ copy_from_fs(data_ptr, di, 7); /* Table is 7 bytes apparently */
return 0; return 0;
} }
#endif /* CONFIG_X86_VOYAGER */
@@ -99,9 +99,9 @@ CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_AS=y # CONFIG_DEFAULT_AS is not set
# CONFIG_DEFAULT_DEADLINE is not set # CONFIG_DEFAULT_DEADLINE is not set
# CONFIG_DEFAULT_CFQ is not set CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set # CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory" CONFIG_DEFAULT_IOSCHED="anticipatory"
@@ -145,15 +145,6 @@ CONFIG_K8_NUMA=y
CONFIG_NODES_SHIFT=6 CONFIG_NODES_SHIFT=6
CONFIG_X86_64_ACPI_NUMA=y CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y CONFIG_NUMA_EMU=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
CONFIG_ARCH_DISCONTIGMEM_DEFAULT=y
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_SELECT_MEMORY_MODEL=y
# CONFIG_FLATMEM_MANUAL is not set
CONFIG_DISCONTIGMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_DISCONTIGMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_NEED_MULTIPLE_NODES=y CONFIG_NEED_MULTIPLE_NODES=y
# CONFIG_SPARSEMEM_STATIC is not set # CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_SPLIT_PTLOCK_CPUS=4
@@ -2,9 +2,7 @@
# Makefile for the ia32 kernel emulation subsystem. # Makefile for the ia32 kernel emulation subsystem.
# #
obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o tls32.o \ obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o
ia32_binfmt.o fpu32.o ptrace32.o syscall32.o syscall32_syscall.o \
mmap32.o
sysv-$(CONFIG_SYSVIPC) := ipc32.o sysv-$(CONFIG_SYSVIPC) := ipc32.o
obj-$(CONFIG_IA32_EMULATION) += $(sysv-y) obj-$(CONFIG_IA32_EMULATION) += $(sysv-y)
@@ -13,40 +11,3 @@ obj-$(CONFIG_IA32_AOUT) += ia32_aout.o
audit-class-$(CONFIG_AUDIT) := audit.o audit-class-$(CONFIG_AUDIT) := audit.o
obj-$(CONFIG_IA32_EMULATION) += $(audit-class-y) obj-$(CONFIG_IA32_EMULATION) += $(audit-class-y)
$(obj)/syscall32_syscall.o: \
$(foreach F,sysenter syscall,$(obj)/vsyscall-$F.so)
# Teach kbuild about targets
targets := $(foreach F,$(addprefix vsyscall-,sysenter syscall),\
$F.o $F.so $F.so.dbg)
# The DSO images are built using a special linker script
quiet_cmd_syscall = SYSCALL $@
cmd_syscall = $(CC) -m32 -nostdlib -shared \
$(call ld-option, -Wl$(comma)--hash-style=sysv) \
-Wl,-soname=linux-gate.so.1 -o $@ \
-Wl,-T,$(filter-out FORCE,$^)
$(obj)/%.so: OBJCOPYFLAGS := -S
$(obj)/%.so: $(obj)/%.so.dbg FORCE
$(call if_changed,objcopy)
$(obj)/vsyscall-sysenter.so.dbg $(obj)/vsyscall-syscall.so.dbg: \
$(obj)/vsyscall-%.so.dbg: $(src)/vsyscall.lds $(obj)/vsyscall-%.o FORCE
$(call if_changed,syscall)
AFLAGS_vsyscall-sysenter.o = -m32 -Wa,-32
AFLAGS_vsyscall-syscall.o = -m32 -Wa,-32
vdsos := vdso32-sysenter.so vdso32-syscall.so
quiet_cmd_vdso_install = INSTALL $@
cmd_vdso_install = cp $(@:vdso32-%.so=$(obj)/vsyscall-%.so.dbg) \
$(MODLIB)/vdso/$@
$(vdsos):
@mkdir -p $(MODLIB)/vdso
$(call cmd,vdso_install)
vdso_install: $(vdsos)
@@ -27,7 +27,7 @@ unsigned ia32_signal_class[] = {
int ia32_classify_syscall(unsigned syscall) int ia32_classify_syscall(unsigned syscall)
{ {
switch(syscall) { switch (syscall) {
case __NR_open: case __NR_open:
return 2; return 2;
case __NR_openat: case __NR_openat:
@@ -1,183 +0,0 @@
/*
* Copyright 2002 Andi Kleen, SuSE Labs.
* FXSAVE<->i387 conversion support. Based on code by Gareth Hughes.
* This is used for ptrace, signals and coredumps in 32bit emulation.
*/
#include <linux/sched.h>
#include <asm/sigcontext32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/i387.h>
static inline unsigned short twd_i387_to_fxsr(unsigned short twd)
{
unsigned int tmp; /* to avoid 16 bit prefixes in the code */
/* Transform each pair of bits into 01 (valid) or 00 (empty) */
tmp = ~twd;
tmp = (tmp | (tmp>>1)) & 0x5555; /* 0V0V0V0V0V0V0V0V */
/* and move the valid bits to the lower byte. */
tmp = (tmp | (tmp >> 1)) & 0x3333; /* 00VV00VV00VV00VV */
tmp = (tmp | (tmp >> 2)) & 0x0f0f; /* 0000VVVV0000VVVV */
tmp = (tmp | (tmp >> 4)) & 0x00ff; /* 00000000VVVVVVVV */
return tmp;
}
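The tag-word compression above is easy to exercise in isolation. The same bit-folding as a standalone function, with spot checks for the all-empty, all-valid, and only-st(0)-in-use cases:

```c
#include <assert.h>

/* Compress the i387 tag word (two bits per register, 0b11 = empty)
 * into the FXSR form (one bit per register, 1 = non-empty): invert,
 * OR each bit pair together, then fold the eight even-position bits
 * down into the low byte. */
static unsigned short twd_i387_to_fxsr(unsigned short twd)
{
	unsigned int tmp;	/* avoid 16-bit prefixes in the code */

	tmp = ~twd;
	tmp = (tmp | (tmp >> 1)) & 0x5555;	/* 0V0V0V0V0V0V0V0V */
	tmp = (tmp | (tmp >> 1)) & 0x3333;	/* 00VV00VV00VV00VV */
	tmp = (tmp | (tmp >> 2)) & 0x0f0f;	/* 0000VVVV0000VVVV */
	tmp = (tmp | (tmp >> 4)) & 0x00ff;	/* 00000000VVVVVVVV */
	return tmp;
}
```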
static inline unsigned long twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave)
{
struct _fpxreg *st = NULL;
unsigned long tos = (fxsave->swd >> 11) & 7;
unsigned long twd = (unsigned long) fxsave->twd;
unsigned long tag;
unsigned long ret = 0xffff0000;
int i;
#define FPREG_ADDR(f, n) ((void *)&(f)->st_space + (n) * 16);
for (i = 0 ; i < 8 ; i++) {
if (twd & 0x1) {
st = FPREG_ADDR( fxsave, (i - tos) & 7 );
switch (st->exponent & 0x7fff) {
case 0x7fff:
tag = 2; /* Special */
break;
case 0x0000:
if ( !st->significand[0] &&
!st->significand[1] &&
!st->significand[2] &&
!st->significand[3] ) {
tag = 1; /* Zero */
} else {
tag = 2; /* Special */
}
break;
default:
if (st->significand[3] & 0x8000) {
tag = 0; /* Valid */
} else {
tag = 2; /* Special */
}
break;
}
} else {
tag = 3; /* Empty */
}
ret |= (tag << (2 * i));
twd = twd >> 1;
}
return ret;
}
static inline int convert_fxsr_from_user(struct i387_fxsave_struct *fxsave,
struct _fpstate_ia32 __user *buf)
{
struct _fpxreg *to;
struct _fpreg __user *from;
int i;
u32 v;
int err = 0;
#define G(num,val) err |= __get_user(val, num + (u32 __user *)buf)
G(0, fxsave->cwd);
G(1, fxsave->swd);
G(2, fxsave->twd);
fxsave->twd = twd_i387_to_fxsr(fxsave->twd);
G(3, fxsave->rip);
G(4, v);
fxsave->fop = v>>16; /* cs ignored */
G(5, fxsave->rdp);
/* 6: ds ignored */
#undef G
if (err)
return -1;
to = (struct _fpxreg *)&fxsave->st_space[0];
from = &buf->_st[0];
for (i = 0 ; i < 8 ; i++, to++, from++) {
if (__copy_from_user(to, from, sizeof(*from)))
return -1;
}
return 0;
}
static inline int convert_fxsr_to_user(struct _fpstate_ia32 __user *buf,
struct i387_fxsave_struct *fxsave,
struct pt_regs *regs,
struct task_struct *tsk)
{
struct _fpreg __user *to;
struct _fpxreg *from;
int i;
u16 cs,ds;
int err = 0;
if (tsk == current) {
/* should be actually ds/cs at fpu exception time,
but that information is not available in 64bit mode. */
asm("movw %%ds,%0 " : "=r" (ds));
asm("movw %%cs,%0 " : "=r" (cs));
} else { /* ptrace. task has stopped. */
ds = tsk->thread.ds;
cs = regs->cs;
}
#define P(num,val) err |= __put_user(val, num + (u32 __user *)buf)
P(0, (u32)fxsave->cwd | 0xffff0000);
P(1, (u32)fxsave->swd | 0xffff0000);
P(2, twd_fxsr_to_i387(fxsave));
P(3, (u32)fxsave->rip);
P(4, cs | ((u32)fxsave->fop) << 16);
P(5, fxsave->rdp);
P(6, 0xffff0000 | ds);
#undef P
if (err)
return -1;
to = &buf->_st[0];
from = (struct _fpxreg *) &fxsave->st_space[0];
for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
if (__copy_to_user(to, from, sizeof(*to)))
return -1;
}
return 0;
}
int restore_i387_ia32(struct task_struct *tsk, struct _fpstate_ia32 __user *buf, int fsave)
{
clear_fpu(tsk);
if (!fsave) {
if (__copy_from_user(&tsk->thread.i387.fxsave,
&buf->_fxsr_env[0],
sizeof(struct i387_fxsave_struct)))
return -1;
tsk->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
set_stopped_child_used_math(tsk);
}
return convert_fxsr_from_user(&tsk->thread.i387.fxsave, buf);
}
int save_i387_ia32(struct task_struct *tsk,
struct _fpstate_ia32 __user *buf,
struct pt_regs *regs,
int fsave)
{
int err = 0;
init_fpu(tsk);
if (convert_fxsr_to_user(buf, &tsk->thread.i387.fxsave, regs, tsk))
return -1;
if (fsave)
return 0;
err |= __put_user(tsk->thread.i387.fxsave.swd, &buf->status);
err |= __put_user(X86_FXSR_MAGIC, &buf->magic);
err |= __copy_to_user(&buf->_fxsr_env[0], &tsk->thread.i387.fxsave,
sizeof(struct i387_fxsave_struct));
return err ? -1 : 1;
}
@@ -25,6 +25,7 @@
#include <linux/binfmts.h>
#include <linux/personality.h>
#include <linux/init.h>
#include <linux/jiffies.h>

#include <asm/system.h>
#include <asm/uaccess.h>
@@ -36,61 +37,67 @@
#undef WARN_OLD
#undef CORE_DUMP /* probably broken */

static int load_aout_binary(struct linux_binprm *, struct pt_regs *regs);
static int load_aout_library(struct file *);

#ifdef CORE_DUMP
static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
			  unsigned long limit);

/*
 * fill in the user structure for a core dump..
 */
static void dump_thread32(struct pt_regs *regs, struct user32 *dump)
{
	u32 fs, gs;

	/* changed the size calculations - should hopefully work better. lbt */
	dump->magic = CMAGIC;
	dump->start_code = 0;
	dump->start_stack = regs->sp & ~(PAGE_SIZE - 1);
	dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
	dump->u_dsize = ((unsigned long)
			 (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
	dump->u_dsize -= dump->u_tsize;
	dump->u_ssize = 0;
	dump->u_debugreg[0] = current->thread.debugreg0;
	dump->u_debugreg[1] = current->thread.debugreg1;
	dump->u_debugreg[2] = current->thread.debugreg2;
	dump->u_debugreg[3] = current->thread.debugreg3;
	dump->u_debugreg[4] = 0;
	dump->u_debugreg[5] = 0;
	dump->u_debugreg[6] = current->thread.debugreg6;
	dump->u_debugreg[7] = current->thread.debugreg7;

	if (dump->start_stack < 0xc0000000) {
		unsigned long tmp;

		tmp = (unsigned long) (0xc0000000 - dump->start_stack);
		dump->u_ssize = tmp >> PAGE_SHIFT;
	}

	dump->regs.bx = regs->bx;
	dump->regs.cx = regs->cx;
	dump->regs.dx = regs->dx;
	dump->regs.si = regs->si;
	dump->regs.di = regs->di;
	dump->regs.bp = regs->bp;
	dump->regs.ax = regs->ax;
	dump->regs.ds = current->thread.ds;
	dump->regs.es = current->thread.es;
	asm("movl %%fs,%0" : "=r" (fs)); dump->regs.fs = fs;
	asm("movl %%gs,%0" : "=r" (gs)); dump->regs.gs = gs;
	dump->regs.orig_ax = regs->orig_ax;
	dump->regs.ip = regs->ip;
	dump->regs.cs = regs->cs;
	dump->regs.flags = regs->flags;
	dump->regs.sp = regs->sp;
	dump->regs.ss = regs->ss;

#if 1 /* FIXME */
	dump->u_fpvalid = 0;
#else
	dump->u_fpvalid = dump_fpu(regs, &dump->i387);
#endif
}
@@ -128,15 +135,19 @@ static int dump_write(struct file *file, const void *addr, int nr)
	return file->f_op->write(file, addr, nr, &file->f_pos) == nr;
}

#define DUMP_WRITE(addr, nr)			     \
	if (!dump_write(file, (void *)(addr), (nr))) \
		goto end_coredump;

#define DUMP_SEEK(offset)						\
	if (file->f_op->llseek) {					\
		if (file->f_op->llseek(file, (offset), 0) != (offset))	\
			goto end_coredump;				\
	} else								\
		file->f_pos = (offset)

#define START_DATA(u)	(u.u_tsize << PAGE_SHIFT)
#define START_STACK(u)	(u.start_stack)

/*
 * Routine writes a core dump image in the current directory.
@@ -148,62 +159,70 @@
 * dumping of the process results in another error..
 */
static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
			  unsigned long limit)
{
	mm_segment_t fs;
	int has_dumped = 0;
	unsigned long dump_start, dump_size;
	struct user32 dump;

	fs = get_fs();
	set_fs(KERNEL_DS);
	has_dumped = 1;
	current->flags |= PF_DUMPCORE;
	strncpy(dump.u_comm, current->comm, sizeof(current->comm));
	dump.u_ar0 = (u32)(((unsigned long)(&dump.regs)) -
			   ((unsigned long)(&dump)));
	dump.signal = signr;
	dump_thread32(regs, &dump);

	/*
	 * If the size of the dump file exceeds the rlimit, then see
	 * what would happen if we wrote the stack, but not the data
	 * area.
	 */
	if ((dump.u_dsize + dump.u_ssize + 1) * PAGE_SIZE > limit)
		dump.u_dsize = 0;

	/* Make sure we have enough room to write the stack and data areas. */
	if ((dump.u_ssize + 1) * PAGE_SIZE > limit)
		dump.u_ssize = 0;

	/* make sure we actually have a data and stack area to dump */
	set_fs(USER_DS);
	if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_DATA(dump),
		       dump.u_dsize << PAGE_SHIFT))
		dump.u_dsize = 0;
	if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_STACK(dump),
		       dump.u_ssize << PAGE_SHIFT))
		dump.u_ssize = 0;

	set_fs(KERNEL_DS);
	/* struct user */
	DUMP_WRITE(&dump, sizeof(dump));
	/* Now dump all of the user data.  Include malloced stuff as well */
	DUMP_SEEK(PAGE_SIZE);
	/* now we start writing out the user space info */
	set_fs(USER_DS);
	/* Dump the data area */
	if (dump.u_dsize != 0) {
		dump_start = START_DATA(dump);
		dump_size = dump.u_dsize << PAGE_SHIFT;
		DUMP_WRITE(dump_start, dump_size);
	}

	/* Now prepare to dump the stack area */
	if (dump.u_ssize != 0) {
		dump_start = START_STACK(dump);
		dump_size = dump.u_ssize << PAGE_SHIFT;
		DUMP_WRITE(dump_start, dump_size);
	}

	/*
	 * Finally dump the task struct.  Not be used by gdb, but
	 * could be useful
	 */
	set_fs(KERNEL_DS);
	DUMP_WRITE(current, sizeof(*current));
end_coredump:
	set_fs(fs);
	return has_dumped;
@@ -217,35 +236,34 @@ end_coredump:
 */
static u32 __user *create_aout_tables(char __user *p, struct linux_binprm *bprm)
{
	u32 __user *argv, *envp, *sp;
	int argc = bprm->argc, envc = bprm->envc;

	sp = (u32 __user *) ((-(unsigned long)sizeof(u32)) & (unsigned long) p);
	sp -= envc+1;
	envp = sp;
	sp -= argc+1;
	argv = sp;
	put_user((unsigned long) envp, --sp);
	put_user((unsigned long) argv, --sp);
	put_user(argc, --sp);
	current->mm->arg_start = (unsigned long) p;
	while (argc-- > 0) {
		char c;

		put_user((u32)(unsigned long)p, argv++);
		do {
			get_user(c, p++);
		} while (c);
	}
	put_user(0, argv);
	current->mm->arg_end = current->mm->env_start = (unsigned long) p;
	while (envc-- > 0) {
		char c;

		put_user((u32)(unsigned long)p, envp++);
		do {
			get_user(c, p++);
		} while (c);
	}
	put_user(0, envp);
@@ -257,20 +275,18 @@ static u32 __user *create_aout_tables(char __user *p, struct linux_binprm *bprm)
 * These are the functions used to load a.out style executables and shared
 * libraries.  There is no binary dependent code anywhere else.
 */
static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
{
	unsigned long error, fd_offset, rlim;
	struct exec ex;
	int retval;

	ex = *((struct exec *) bprm->buf);		/* exec-header */
	if ((N_MAGIC(ex) != ZMAGIC && N_MAGIC(ex) != OMAGIC &&
	     N_MAGIC(ex) != QMAGIC && N_MAGIC(ex) != NMAGIC) ||
	    N_TRSIZE(ex) || N_DRSIZE(ex) ||
	    i_size_read(bprm->file->f_path.dentry->d_inode) <
	    ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
		return -ENOEXEC;
	}
@@ -291,13 +307,13 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
	if (retval)
		return retval;

	regs->cs = __USER32_CS;
	regs->r8 = regs->r9 = regs->r10 = regs->r11 = regs->r12 =
		regs->r13 = regs->r14 = regs->r15 = 0;

	/* OK, This is the point of no return */
	set_personality(PER_LINUX);
	set_thread_flag(TIF_IA32);
	clear_thread_flag(TIF_ABI_PENDING);

	current->mm->end_code = ex.a_text +
@@ -311,7 +327,7 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
	current->mm->mmap = NULL;
	compute_creds(bprm);
	current->flags &= ~PF_FORKNOEXEC;

	if (N_MAGIC(ex) == OMAGIC) {
		unsigned long text_addr, map_size;
@@ -338,30 +354,31 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
			send_sig(SIGKILL, current, 0);
			return error;
		}

		flush_icache_range(text_addr, text_addr+ex.a_text+ex.a_data);
	} else {
#ifdef WARN_OLD
		static unsigned long error_time, error_time2;
		if ((ex.a_text & 0xfff || ex.a_data & 0xfff) &&
		    (N_MAGIC(ex) != NMAGIC) &&
		    time_after(jiffies, error_time2 + 5*HZ)) {
			printk(KERN_NOTICE "executable not page aligned\n");
			error_time2 = jiffies;
		}

		if ((fd_offset & ~PAGE_MASK) != 0 &&
		    time_after(jiffies, error_time + 5*HZ)) {
			printk(KERN_WARNING
			       "fd_offset is not page aligned. Please convert "
			       "program: %s\n",
			       bprm->file->f_path.dentry->d_name.name);
			error_time = jiffies;
		}
#endif

		if (!bprm->file->f_op->mmap || (fd_offset & ~PAGE_MASK) != 0) {
			loff_t pos = fd_offset;
			down_write(&current->mm->mmap_sem);
			do_brk(N_TXTADDR(ex), ex.a_text+ex.a_data);
			up_write(&current->mm->mmap_sem);
@@ -376,9 +393,10 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)

		down_write(&current->mm->mmap_sem);
		error = do_mmap(bprm->file, N_TXTADDR(ex), ex.a_text,
				PROT_READ | PROT_EXEC,
				MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE |
				MAP_EXECUTABLE | MAP_32BIT,
				fd_offset);
		up_write(&current->mm->mmap_sem);

		if (error != N_TXTADDR(ex)) {
@@ -387,9 +405,10 @@ static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
		}

		down_write(&current->mm->mmap_sem);
		error = do_mmap(bprm->file, N_DATADDR(ex), ex.a_data,
				PROT_READ | PROT_WRITE | PROT_EXEC,
				MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE |
				MAP_EXECUTABLE | MAP_32BIT,
				fd_offset + ex.a_text);
		up_write(&current->mm->mmap_sem);

		if (error != N_DATADDR(ex)) {
@@ -403,9 +422,9 @@ beyond_if:
	set_brk(current->mm->start_brk, current->mm->brk);

	retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT);
	if (retval < 0) {
		/* Someone check-me: is this error path enough? */
		send_sig(SIGKILL, current, 0);
		return retval;
	}
@@ -414,10 +433,10 @@ beyond_if:
	/* start thread */
	asm volatile("movl %0,%%fs" :: "r" (0)); \
	asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS));
	load_gs_index(0);
	(regs)->ip = ex.a_entry;
	(regs)->sp = current->mm->start_stack;
	(regs)->flags = 0x200;
	(regs)->cs = __USER32_CS;
	(regs)->ss = __USER32_DS;
	regs->r8 = regs->r9 = regs->r10 = regs->r11 =
@@ -425,7 +444,7 @@ beyond_if:
	set_fs(USER_DS);
	if (unlikely(current->ptrace & PT_PTRACED)) {
		if (current->ptrace & PT_TRACE_EXEC)
			ptrace_notify((PTRACE_EVENT_EXEC << 8) | SIGTRAP);
		else
			send_sig(SIGTRAP, current, 0);
	}
@@ -434,9 +453,8 @@ beyond_if:

static int load_aout_library(struct file *file)
{
	struct inode *inode;
	unsigned long bss, start_addr, len, error;
	int retval;
	struct exec ex;
@@ -450,7 +468,8 @@ static int load_aout_library(struct file *file)
	/* We come in here for the regular a.out style of shared libraries */
	if ((N_MAGIC(ex) != ZMAGIC && N_MAGIC(ex) != QMAGIC) || N_TRSIZE(ex) ||
	    N_DRSIZE(ex) || ((ex.a_entry & 0xfff) && N_MAGIC(ex) == ZMAGIC) ||
	    i_size_read(inode) <
	    ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
		goto out;
	}
@@ -467,10 +486,10 @@ static int load_aout_library(struct file *file)
#ifdef WARN_OLD
		static unsigned long error_time;
		if (time_after(jiffies, error_time + 5*HZ)) {
			printk(KERN_WARNING
			       "N_TXTOFF is not page aligned. Please convert "
			       "library: %s\n",
			       file->f_path.dentry->d_name.name);
			error_time = jiffies;
		}
@@ -478,11 +497,12 @@ static int load_aout_library(struct file *file)
		down_write(&current->mm->mmap_sem);
		do_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss);
		up_write(&current->mm->mmap_sem);

		file->f_op->read(file, (char __user *)start_addr,
				 ex.a_text + ex.a_data, &pos);
		flush_icache_range((unsigned long) start_addr,
				   (unsigned long) start_addr + ex.a_text +
				   ex.a_data);

		retval = 0;
		goto out;
@@ -1,285 +0,0 @@
/*
* Written 2000,2002 by Andi Kleen.
*
* Loosely based on the sparc64 and IA64 32bit emulation loaders.
* This tricks binfmt_elf.c into loading 32bit binaries using lots
* of ugly preprocessor tricks. Talk about very very poor man's inheritance.
*/
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/rwsem.h>
#include <linux/sched.h>
#include <linux/compat.h>
#include <linux/string.h>
#include <linux/binfmts.h>
#include <linux/mm.h>
#include <linux/security.h>
#include <linux/elfcore-compat.h>
#include <asm/segment.h>
#include <asm/ptrace.h>
#include <asm/processor.h>
#include <asm/user32.h>
#include <asm/sigcontext32.h>
#include <asm/fpu32.h>
#include <asm/i387.h>
#include <asm/uaccess.h>
#include <asm/ia32.h>
#include <asm/vsyscall32.h>
#undef ELF_ARCH
#undef ELF_CLASS
#define ELF_CLASS ELFCLASS32
#define ELF_ARCH EM_386
#undef elfhdr
#undef elf_phdr
#undef elf_note
#undef elf_addr_t
#define elfhdr elf32_hdr
#define elf_phdr elf32_phdr
#define elf_note elf32_note
#define elf_addr_t Elf32_Off
#define ELF_NAME "elf/i386"
#define AT_SYSINFO 32
#define AT_SYSINFO_EHDR 33
int sysctl_vsyscall32 = 1;
#undef ARCH_DLINFO
#define ARCH_DLINFO do { \
if (sysctl_vsyscall32) { \
current->mm->context.vdso = (void *)VSYSCALL32_BASE; \
NEW_AUX_ENT(AT_SYSINFO, (u32)(u64)VSYSCALL32_VSYSCALL); \
NEW_AUX_ENT(AT_SYSINFO_EHDR, VSYSCALL32_BASE); \
} \
} while(0)
struct file;
#define IA32_EMULATOR 1
#undef ELF_ET_DYN_BASE
#define ELF_ET_DYN_BASE (TASK_UNMAPPED_BASE + 0x1000000)
#define jiffies_to_timeval(a,b) do { (b)->tv_usec = 0; (b)->tv_sec = (a)/HZ; }while(0)
#define _GET_SEG(x) \
({ __u32 seg; asm("movl %%" __stringify(x) ",%0" : "=r"(seg)); seg; })
/* Assumes current==process to be dumped */
#undef ELF_CORE_COPY_REGS
#define ELF_CORE_COPY_REGS(pr_reg, regs) \
pr_reg[0] = regs->rbx; \
pr_reg[1] = regs->rcx; \
pr_reg[2] = regs->rdx; \
pr_reg[3] = regs->rsi; \
pr_reg[4] = regs->rdi; \
pr_reg[5] = regs->rbp; \
pr_reg[6] = regs->rax; \
pr_reg[7] = _GET_SEG(ds); \
pr_reg[8] = _GET_SEG(es); \
pr_reg[9] = _GET_SEG(fs); \
pr_reg[10] = _GET_SEG(gs); \
pr_reg[11] = regs->orig_rax; \
pr_reg[12] = regs->rip; \
pr_reg[13] = regs->cs; \
pr_reg[14] = regs->eflags; \
pr_reg[15] = regs->rsp; \
pr_reg[16] = regs->ss;
#define elf_prstatus compat_elf_prstatus
#define elf_prpsinfo compat_elf_prpsinfo
#define elf_fpregset_t struct user_i387_ia32_struct
#define elf_fpxregset_t struct user32_fxsr_struct
#define user user32
#undef elf_read_implies_exec
#define elf_read_implies_exec(ex, executable_stack) (executable_stack != EXSTACK_DISABLE_X)
#define elf_core_copy_regs elf32_core_copy_regs
static inline void elf32_core_copy_regs(compat_elf_gregset_t *elfregs,
struct pt_regs *regs)
{
ELF_CORE_COPY_REGS((&elfregs->ebx), regs)
}
#define elf_core_copy_task_regs elf32_core_copy_task_regs
static inline int elf32_core_copy_task_regs(struct task_struct *t,
compat_elf_gregset_t* elfregs)
{
struct pt_regs *pp = task_pt_regs(t);
ELF_CORE_COPY_REGS((&elfregs->ebx), pp);
/* fix wrong segments */
elfregs->ds = t->thread.ds;
elfregs->fs = t->thread.fsindex;
elfregs->gs = t->thread.gsindex;
elfregs->es = t->thread.es;
return 1;
}
#define elf_core_copy_task_fpregs elf32_core_copy_task_fpregs
static inline int
elf32_core_copy_task_fpregs(struct task_struct *tsk, struct pt_regs *regs,
elf_fpregset_t *fpu)
{
struct _fpstate_ia32 *fpstate = (void*)fpu;
mm_segment_t oldfs = get_fs();
if (!tsk_used_math(tsk))
return 0;
if (!regs)
regs = task_pt_regs(tsk);
if (tsk == current)
unlazy_fpu(tsk);
set_fs(KERNEL_DS);
save_i387_ia32(tsk, fpstate, regs, 1);
/* Correct for i386 bug. It puts the fop into the upper 16bits of
the tag word (like FXSAVE), not into the fcs*/
fpstate->cssel |= fpstate->tag & 0xffff0000;
set_fs(oldfs);
return 1;
}
#define ELF_CORE_COPY_XFPREGS 1
#define ELF_CORE_XFPREG_TYPE NT_PRXFPREG
#define elf_core_copy_task_xfpregs elf32_core_copy_task_xfpregs
static inline int
elf32_core_copy_task_xfpregs(struct task_struct *t, elf_fpxregset_t *xfpu)
{
struct pt_regs *regs = task_pt_regs(t);
if (!tsk_used_math(t))
return 0;
if (t == current)
unlazy_fpu(t);
memcpy(xfpu, &t->thread.i387.fxsave, sizeof(elf_fpxregset_t));
xfpu->fcs = regs->cs;
xfpu->fos = t->thread.ds; /* right? */
return 1;
}
#undef elf_check_arch
#define elf_check_arch(x) \
((x)->e_machine == EM_386)
extern int force_personality32;
#undef ELF_EXEC_PAGESIZE
#undef ELF_HWCAP
#undef ELF_PLATFORM
#undef SET_PERSONALITY
#define ELF_EXEC_PAGESIZE PAGE_SIZE
#define ELF_HWCAP (boot_cpu_data.x86_capability[0])
#define ELF_PLATFORM ("i686")
#define SET_PERSONALITY(ex, ibcs2) \
do { \
unsigned long new_flags = 0; \
if ((ex).e_ident[EI_CLASS] == ELFCLASS32) \
new_flags = _TIF_IA32; \
if ((current_thread_info()->flags & _TIF_IA32) \
!= new_flags) \
set_thread_flag(TIF_ABI_PENDING); \
else \
clear_thread_flag(TIF_ABI_PENDING); \
/* XXX This overwrites the user set personality */ \
current->personality |= force_personality32; \
} while (0)
/* Override some function names */
#define elf_format elf32_format
#define init_elf_binfmt init_elf32_binfmt
#define exit_elf_binfmt exit_elf32_binfmt
#define load_elf_binary load_elf32_binary
#undef ELF_PLAT_INIT
#define ELF_PLAT_INIT(r, load_addr) elf32_init(r)
#undef start_thread
#define start_thread(regs,new_rip,new_rsp) do { \
asm volatile("movl %0,%%fs" :: "r" (0)); \
asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS)); \
load_gs_index(0); \
(regs)->rip = (new_rip); \
(regs)->rsp = (new_rsp); \
(regs)->eflags = 0x200; \
(regs)->cs = __USER32_CS; \
(regs)->ss = __USER32_DS; \
set_fs(USER_DS); \
} while(0)
#include <linux/module.h>
MODULE_DESCRIPTION("Binary format loader for compatibility with IA32 ELF binaries.");
MODULE_AUTHOR("Eric Youngdale, Andi Kleen");
#undef MODULE_DESCRIPTION
#undef MODULE_AUTHOR
static void elf32_init(struct pt_regs *);
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
#define arch_setup_additional_pages syscall32_setup_pages
extern int syscall32_setup_pages(struct linux_binprm *, int exstack);
#include "../../../fs/binfmt_elf.c"
static void elf32_init(struct pt_regs *regs)
{
struct task_struct *me = current;
regs->rdi = 0;
regs->rsi = 0;
regs->rdx = 0;
regs->rcx = 0;
regs->rax = 0;
regs->rbx = 0;
regs->rbp = 0;
regs->r8 = regs->r9 = regs->r10 = regs->r11 = regs->r12 =
regs->r13 = regs->r14 = regs->r15 = 0;
me->thread.fs = 0;
me->thread.gs = 0;
me->thread.fsindex = 0;
me->thread.gsindex = 0;
me->thread.ds = __USER_DS;
me->thread.es = __USER_DS;
}
#ifdef CONFIG_SYSCTL
/* Register vsyscall32 into the ABI table */
#include <linux/sysctl.h>
static ctl_table abi_table2[] = {
{
.procname = "vsyscall32",
.data = &sysctl_vsyscall32,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec
},
{}
};
static ctl_table abi_root_table2[] = {
{
.ctl_name = CTL_ABI,
.procname = "abi",
.mode = 0555,
.child = abi_table2
},
{}
};
static __init int ia32_binfmt_init(void)
{
register_sysctl_table(abi_root_table2);
return 0;
}
__initcall(ia32_binfmt_init);
#endif
@ -29,9 +29,8 @@
#include <asm/ia32_unistd.h> #include <asm/ia32_unistd.h>
#include <asm/user32.h> #include <asm/user32.h>
#include <asm/sigcontext32.h> #include <asm/sigcontext32.h>
#include <asm/fpu32.h>
#include <asm/proto.h> #include <asm/proto.h>
#include <asm/vsyscall32.h> #include <asm/vdso.h>
#define DEBUG_SIG 0 #define DEBUG_SIG 0
@ -43,7 +42,8 @@ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from) int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
{ {
int err; int err;
if (!access_ok (VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
return -EFAULT; return -EFAULT;
/* If you change siginfo_t structure, please make sure that /* If you change siginfo_t structure, please make sure that
@ -53,16 +53,19 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
3 ints plus the relevant union member. */ 3 ints plus the relevant union member. */
err = __put_user(from->si_signo, &to->si_signo); err = __put_user(from->si_signo, &to->si_signo);
err |= __put_user(from->si_errno, &to->si_errno); err |= __put_user(from->si_errno, &to->si_errno);
err |= __put_user((short)from->si_code, &to->si_code); err |= __put_user((short)from->si_code, &to->si_code);
if (from->si_code < 0) { if (from->si_code < 0) {
err |= __put_user(from->si_pid, &to->si_pid); err |= __put_user(from->si_pid, &to->si_pid);
err |= __put_user(from->si_uid, &to->si_uid); err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(ptr_to_compat(from->si_ptr), &to->si_ptr); err |= __put_user(ptr_to_compat(from->si_ptr), &to->si_ptr);
} else { } else {
/* First 32bits of unions are always present: /*
* si_pid === si_band === si_tid === si_addr(LS half) */ * First 32bits of unions are always present:
err |= __put_user(from->_sifields._pad[0], &to->_sifields._pad[0]); * si_pid === si_band === si_tid === si_addr(LS half)
*/
err |= __put_user(from->_sifields._pad[0],
&to->_sifields._pad[0]);
switch (from->si_code >> 16) { switch (from->si_code >> 16) {
case __SI_FAULT >> 16: case __SI_FAULT >> 16:
break; break;
@ -76,14 +79,15 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
err |= __put_user(from->si_uid, &to->si_uid); err |= __put_user(from->si_uid, &to->si_uid);
break; break;
case __SI_POLL >> 16: case __SI_POLL >> 16:
err |= __put_user(from->si_fd, &to->si_fd); err |= __put_user(from->si_fd, &to->si_fd);
break; break;
case __SI_TIMER >> 16: case __SI_TIMER >> 16:
err |= __put_user(from->si_overrun, &to->si_overrun); err |= __put_user(from->si_overrun, &to->si_overrun);
err |= __put_user(ptr_to_compat(from->si_ptr), err |= __put_user(ptr_to_compat(from->si_ptr),
&to->si_ptr); &to->si_ptr);
break; break;
case __SI_RT >> 16: /* This is not generated by the kernel as of now. */ /* This is not generated by the kernel as of now. */
case __SI_RT >> 16:
case __SI_MESGQ >> 16: case __SI_MESGQ >> 16:
err |= __put_user(from->si_uid, &to->si_uid); err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(from->si_int, &to->si_int); err |= __put_user(from->si_int, &to->si_int);
@@ -97,7 +101,8 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
{
	int err;
	u32 ptr32;
-	if (!access_ok (VERIFY_READ, from, sizeof(compat_siginfo_t)))
+
+	if (!access_ok(VERIFY_READ, from, sizeof(compat_siginfo_t)))
		return -EFAULT;
	err = __get_user(to->si_signo, &from->si_signo);
@@ -112,8 +117,7 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
	return err;
}

-asmlinkage long
-sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
+asmlinkage long sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
{
	mask &= _BLOCKABLE;
	spin_lock_irq(&current->sighand->siglock);
@@ -128,36 +132,37 @@ sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
	return -ERESTARTNOHAND;
}

-asmlinkage long
-sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
-		  stack_ia32_t __user *uoss_ptr,
-		  struct pt_regs *regs)
+asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
+				  stack_ia32_t __user *uoss_ptr,
+				  struct pt_regs *regs)
{
-	stack_t uss,uoss;
+	stack_t uss, uoss;
	int ret;
	mm_segment_t seg;
-	if (uss_ptr) {
+
+	if (uss_ptr) {
		u32 ptr;
-		memset(&uss,0,sizeof(stack_t));
-		if (!access_ok(VERIFY_READ,uss_ptr,sizeof(stack_ia32_t)) ||
+
+		memset(&uss, 0, sizeof(stack_t));
+		if (!access_ok(VERIFY_READ, uss_ptr, sizeof(stack_ia32_t)) ||
		    __get_user(ptr, &uss_ptr->ss_sp) ||
		    __get_user(uss.ss_flags, &uss_ptr->ss_flags) ||
		    __get_user(uss.ss_size, &uss_ptr->ss_size))
			return -EFAULT;
		uss.ss_sp = compat_ptr(ptr);
	}
	seg = get_fs();
	set_fs(KERNEL_DS);
-	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss, regs->rsp);
+	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss, regs->sp);
	set_fs(seg);
	if (ret >= 0 && uoss_ptr) {
-		if (!access_ok(VERIFY_WRITE,uoss_ptr,sizeof(stack_ia32_t)) ||
+		if (!access_ok(VERIFY_WRITE, uoss_ptr, sizeof(stack_ia32_t)) ||
		    __put_user(ptr_to_compat(uoss.ss_sp), &uoss_ptr->ss_sp) ||
		    __put_user(uoss.ss_flags, &uoss_ptr->ss_flags) ||
		    __put_user(uoss.ss_size, &uoss_ptr->ss_size))
			ret = -EFAULT;
	}
	return ret;
}
/*
@@ -186,87 +191,85 @@ struct rt_sigframe
	char retcode[8];
};

-static int
-ia32_restore_sigcontext(struct pt_regs *regs, struct sigcontext_ia32 __user *sc, unsigned int *peax)
+#define COPY(x)		{		\
+	unsigned int reg;		\
+	err |= __get_user(reg, &sc->x);	\
+	regs->x = reg;			\
+}
+
+#define RELOAD_SEG(seg,mask)						\
+	{ unsigned int cur;						\
+	  unsigned short pre;						\
+	  err |= __get_user(pre, &sc->seg);				\
+	  asm volatile("movl %%" #seg ",%0" : "=r" (cur));		\
+	  pre |= mask;							\
+	  if (pre != cur) loadsegment(seg, pre); }
+
+static int ia32_restore_sigcontext(struct pt_regs *regs,
+				   struct sigcontext_ia32 __user *sc,
+				   unsigned int *peax)
{
-	unsigned int err = 0;
+	unsigned int tmpflags, gs, oldgs, err = 0;
+	struct _fpstate_ia32 __user *buf;
+	u32 tmp;

	/* Always make any pending restarted system calls return -EINTR */
	current_thread_info()->restart_block.fn = do_no_restart_syscall;

#if DEBUG_SIG
-	printk("SIG restore_sigcontext: sc=%p err(%x) eip(%x) cs(%x) flg(%x)\n",
-	       sc, sc->err, sc->eip, sc->cs, sc->eflags);
+	printk(KERN_DEBUG "SIG restore_sigcontext: "
+	       "sc=%p err(%x) eip(%x) cs(%x) flg(%x)\n",
+	       sc, sc->err, sc->ip, sc->cs, sc->flags);
#endif
-#define COPY(x)		{ \
-	unsigned int reg;	\
-	err |= __get_user(reg, &sc->e ##x); \
-	regs->r ## x = reg; \
-}
-#define RELOAD_SEG(seg,mask) \
-	{ unsigned int cur; \
-	  unsigned short pre; \
-	  err |= __get_user(pre, &sc->seg); \
-	  asm volatile("movl %%" #seg ",%0" : "=r" (cur)); \
-	  pre |= mask; \
-	  if (pre != cur) loadsegment(seg,pre); }

-	/* Reload fs and gs if they have changed in the signal handler.
-	   This does not handle long fs/gs base changes in the handler, but
-	   does not clobber them at least in the normal case. */
-	{
-		unsigned gs, oldgs;
-		err |= __get_user(gs, &sc->gs);
-		gs |= 3;
-		asm("movl %%gs,%0" : "=r" (oldgs));
-		if (gs != oldgs)
-			load_gs_index(gs);
-	}
-	RELOAD_SEG(fs,3);
-	RELOAD_SEG(ds,3);
-	RELOAD_SEG(es,3);
+	/*
+	 * Reload fs and gs if they have changed in the signal
+	 * handler. This does not handle long fs/gs base changes in
+	 * the handler, but does not clobber them at least in the
+	 * normal case.
+	 */
+	err |= __get_user(gs, &sc->gs);
+	gs |= 3;
+	asm("movl %%gs,%0" : "=r" (oldgs));
+	if (gs != oldgs)
+		load_gs_index(gs);
+
+	RELOAD_SEG(fs, 3);
+	RELOAD_SEG(ds, 3);
+	RELOAD_SEG(es, 3);
	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
	COPY(dx); COPY(cx); COPY(ip);
	/* Don't touch extended registers */
	err |= __get_user(regs->cs, &sc->cs);
	regs->cs |= 3;
	err |= __get_user(regs->ss, &sc->ss);
	regs->ss |= 3;

-	{
-		unsigned int tmpflags;
-		err |= __get_user(tmpflags, &sc->eflags);
-		regs->eflags = (regs->eflags & ~0x40DD5) | (tmpflags & 0x40DD5);
-		regs->orig_rax = -1;	/* disable syscall checks */
-	}
+	err |= __get_user(tmpflags, &sc->flags);
+	regs->flags = (regs->flags & ~0x40DD5) | (tmpflags & 0x40DD5);
+	/* disable syscall checks */
+	regs->orig_ax = -1;

-	{
-		u32 tmp;
-		struct _fpstate_ia32 __user *buf;
-		err |= __get_user(tmp, &sc->fpstate);
-		buf = compat_ptr(tmp);
-		if (buf) {
-			if (!access_ok(VERIFY_READ, buf, sizeof(*buf)))
-				goto badframe;
-			err |= restore_i387_ia32(current, buf, 0);
-		} else {
-			struct task_struct *me = current;
-			if (used_math()) {
-				clear_fpu(me);
-				clear_used_math();
-			}
-		}
-	}
+	err |= __get_user(tmp, &sc->fpstate);
+	buf = compat_ptr(tmp);
+	if (buf) {
+		if (!access_ok(VERIFY_READ, buf, sizeof(*buf)))
+			goto badframe;
+		err |= restore_i387_ia32(buf);
+	} else {
+		struct task_struct *me = current;
+
+		if (used_math()) {
+			clear_fpu(me);
+			clear_used_math();
+		}
+	}

-	{
-		u32 tmp;
-		err |= __get_user(tmp, &sc->eax);
-		*peax = tmp;
-	}
+	err |= __get_user(tmp, &sc->ax);
+	*peax = tmp;
	return err;

badframe:
@@ -275,15 +278,16 @@ badframe:
asmlinkage long sys32_sigreturn(struct pt_regs *regs)
{
-	struct sigframe __user *frame = (struct sigframe __user *)(regs->rsp-8);
+	struct sigframe __user *frame = (struct sigframe __user *)(regs->sp-8);
	sigset_t set;
-	unsigned int eax;
+	unsigned int ax;

	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
		goto badframe;
	if (__get_user(set.sig[0], &frame->sc.oldmask)
	    || (_COMPAT_NSIG_WORDS > 1
-		&& __copy_from_user((((char *) &set.sig) + 4), &frame->extramask,
+		&& __copy_from_user((((char *) &set.sig) + 4),
+				    &frame->extramask,
			       sizeof(frame->extramask))))
		goto badframe;
@@ -292,24 +296,24 @@ asmlinkage long sys32_sigreturn(struct pt_regs *regs)
	current->blocked = set;
	recalc_sigpending();
	spin_unlock_irq(&current->sighand->siglock);
-	if (ia32_restore_sigcontext(regs, &frame->sc, &eax))
+	if (ia32_restore_sigcontext(regs, &frame->sc, &ax))
		goto badframe;
-	return eax;
+	return ax;

badframe:
	signal_fault(regs, frame, "32bit sigreturn");
	return 0;
}
asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
{
	struct rt_sigframe __user *frame;
	sigset_t set;
-	unsigned int eax;
+	unsigned int ax;
	struct pt_regs tregs;

-	frame = (struct rt_sigframe __user *)(regs->rsp - 4);
+	frame = (struct rt_sigframe __user *)(regs->sp - 4);

	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
		goto badframe;
@@ -321,28 +325,28 @@ asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
	current->blocked = set;
	recalc_sigpending();
	spin_unlock_irq(&current->sighand->siglock);
-	if (ia32_restore_sigcontext(regs, &frame->uc.uc_mcontext, &eax))
+	if (ia32_restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax))
		goto badframe;

	tregs = *regs;
	if (sys32_sigaltstack(&frame->uc.uc_stack, NULL, &tregs) == -EFAULT)
		goto badframe;
-	return eax;
+	return ax;

badframe:
-	signal_fault(regs,frame,"32bit rt sigreturn");
+	signal_fault(regs, frame, "32bit rt sigreturn");
	return 0;
}

/*
 * Set up a signal frame.
 */
-static int
-ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __user *fpstate,
-		      struct pt_regs *regs, unsigned int mask)
+static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
+				 struct _fpstate_ia32 __user *fpstate,
+				 struct pt_regs *regs, unsigned int mask)
{
	int tmp, err = 0;
@@ -356,26 +360,26 @@ ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __
	__asm__("movl %%es,%0" : "=r"(tmp): "0"(tmp));
	err |= __put_user(tmp, (unsigned int __user *)&sc->es);

-	err |= __put_user((u32)regs->rdi, &sc->edi);
-	err |= __put_user((u32)regs->rsi, &sc->esi);
-	err |= __put_user((u32)regs->rbp, &sc->ebp);
-	err |= __put_user((u32)regs->rsp, &sc->esp);
-	err |= __put_user((u32)regs->rbx, &sc->ebx);
-	err |= __put_user((u32)regs->rdx, &sc->edx);
-	err |= __put_user((u32)regs->rcx, &sc->ecx);
-	err |= __put_user((u32)regs->rax, &sc->eax);
+	err |= __put_user((u32)regs->di, &sc->di);
+	err |= __put_user((u32)regs->si, &sc->si);
+	err |= __put_user((u32)regs->bp, &sc->bp);
+	err |= __put_user((u32)regs->sp, &sc->sp);
+	err |= __put_user((u32)regs->bx, &sc->bx);
+	err |= __put_user((u32)regs->dx, &sc->dx);
+	err |= __put_user((u32)regs->cx, &sc->cx);
+	err |= __put_user((u32)regs->ax, &sc->ax);
	err |= __put_user((u32)regs->cs, &sc->cs);
	err |= __put_user((u32)regs->ss, &sc->ss);
	err |= __put_user(current->thread.trap_no, &sc->trapno);
	err |= __put_user(current->thread.error_code, &sc->err);
-	err |= __put_user((u32)regs->rip, &sc->eip);
-	err |= __put_user((u32)regs->eflags, &sc->eflags);
-	err |= __put_user((u32)regs->rsp, &sc->esp_at_signal);
+	err |= __put_user((u32)regs->ip, &sc->ip);
+	err |= __put_user((u32)regs->flags, &sc->flags);
+	err |= __put_user((u32)regs->sp, &sc->sp_at_signal);

-	tmp = save_i387_ia32(current, fpstate, regs, 0);
+	tmp = save_i387_ia32(fpstate);
	if (tmp < 0)
		err = -EFAULT;
	else {
		clear_used_math();
		stts();
		err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL),
@@ -392,40 +396,53 @@ ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __
/*
 * Determine which stack to use..
 */
-static void __user *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
+				 size_t frame_size)
{
-	unsigned long rsp;
+	unsigned long sp;

	/* Default to using normal stack */
-	rsp = regs->rsp;
+	sp = regs->sp;

	/* This is the X/Open sanctioned signal stack switching.  */
	if (ka->sa.sa_flags & SA_ONSTACK) {
-		if (sas_ss_flags(rsp) == 0)
-			rsp = current->sas_ss_sp + current->sas_ss_size;
+		if (sas_ss_flags(sp) == 0)
+			sp = current->sas_ss_sp + current->sas_ss_size;
	}

	/* This is the legacy signal stack switching. */
	else if ((regs->ss & 0xffff) != __USER_DS &&
		 !(ka->sa.sa_flags & SA_RESTORER) &&
-		 ka->sa.sa_restorer) {
-		rsp = (unsigned long) ka->sa.sa_restorer;
-	}
+		 ka->sa.sa_restorer)
+		sp = (unsigned long) ka->sa.sa_restorer;

-	rsp -= frame_size;
+	sp -= frame_size;
	/* Align the stack pointer according to the i386 ABI,
	 * i.e. so that on function entry ((sp + 4) & 15) == 0. */
-	rsp = ((rsp + 4) & -16ul) - 4;
-	return (void __user *) rsp;
+	sp = ((sp + 4) & -16ul) - 4;
+	return (void __user *) sp;
}

int ia32_setup_frame(int sig, struct k_sigaction *ka,
-		     compat_sigset_t *set, struct pt_regs * regs)
+		     compat_sigset_t *set, struct pt_regs *regs)
{
	struct sigframe __user *frame;
+	void __user *restorer;
	int err = 0;
+
+	/* copy_to_user optimizes that into a single 8 byte store */
+	static const struct {
+		u16 poplmovl;
+		u32 val;
+		u16 int80;
+		u16 pad;
+	} __attribute__((packed)) code = {
+		0xb858,		 /* popl %eax ; movl $...,%eax */
+		__NR_ia32_sigreturn,
+		0x80cd,		/* int $0x80 */
+		0,
+	};

	frame = get_sigframe(ka, regs, sizeof(*frame));

	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
@@ -443,64 +460,53 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
	if (_COMPAT_NSIG_WORDS > 1) {
		err |= __copy_to_user(frame->extramask, &set->sig[1],
				      sizeof(frame->extramask));
-		if (err)
-			goto give_sigsegv;
	}
+	if (err)
+		goto give_sigsegv;

-	/* Return stub is in 32bit vsyscall page */
-	{
-		void __user *restorer;
+	if (ka->sa.sa_flags & SA_RESTORER) {
+		restorer = ka->sa.sa_restorer;
+	} else {
+		/* Return stub is in 32bit vsyscall page */
		if (current->binfmt->hasvdso)
-			restorer = VSYSCALL32_SIGRETURN;
+			restorer = VDSO32_SYMBOL(current->mm->context.vdso,
+						 sigreturn);
		else
-			restorer = (void *)&frame->retcode;
-		if (ka->sa.sa_flags & SA_RESTORER)
-			restorer = ka->sa.sa_restorer;
-		err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
-	}
-	/* These are actually not used anymore, but left because some
-	   gdb versions depend on them as a marker. */
-	{
-		/* copy_to_user optimizes that into a single 8 byte store */
-		static const struct {
-			u16 poplmovl;
-			u32 val;
-			u16 int80;
-			u16 pad;
-		} __attribute__((packed)) code = {
-			0xb858,		 /* popl %eax ; movl $...,%eax */
-			__NR_ia32_sigreturn,
-			0x80cd,		/* int $0x80 */
-			0,
-		};
-		err |= __copy_to_user(frame->retcode, &code, 8);
-	}
+			restorer = &frame->retcode;
+	}
+	err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
+
+	/*
+	 * These are actually not used anymore, but left because some
+	 * gdb versions depend on them as a marker.
+	 */
+	err |= __copy_to_user(frame->retcode, &code, 8);
	if (err)
		goto give_sigsegv;

	/* Set up registers for signal handler */
-	regs->rsp = (unsigned long) frame;
-	regs->rip = (unsigned long) ka->sa.sa_handler;
+	regs->sp = (unsigned long) frame;
+	regs->ip = (unsigned long) ka->sa.sa_handler;

	/* Make -mregparm=3 work */
-	regs->rax = sig;
-	regs->rdx = 0;
-	regs->rcx = 0;
+	regs->ax = sig;
+	regs->dx = 0;
+	regs->cx = 0;

	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));

	regs->cs = __USER32_CS;
	regs->ss = __USER32_DS;

	set_fs(USER_DS);
-	regs->eflags &= ~TF_MASK;
+	regs->flags &= ~X86_EFLAGS_TF;
	if (test_thread_flag(TIF_SINGLESTEP))
		ptrace_notify(SIGTRAP);

#if DEBUG_SIG
-	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
-	       current->comm, current->pid, frame, regs->rip, frame->pretcode);
+	printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
+	       current->comm, current->pid, frame, regs->ip, frame->pretcode);
#endif
	return 0;
@@ -511,25 +517,34 @@ give_sigsegv:
}

int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
-			compat_sigset_t *set, struct pt_regs * regs)
+			compat_sigset_t *set, struct pt_regs *regs)
{
	struct rt_sigframe __user *frame;
+	struct exec_domain *ed = current_thread_info()->exec_domain;
+	void __user *restorer;
	int err = 0;
+
+	/* __copy_to_user optimizes that into a single 8 byte store */
+	static const struct {
+		u8 movl;
+		u32 val;
+		u16 int80;
+		u16 pad;
+		u8  pad2;
+	} __attribute__((packed)) code = {
+		0xb8,
+		__NR_ia32_rt_sigreturn,
+		0x80cd,
+		0,
+	};

	frame = get_sigframe(ka, regs, sizeof(*frame));

	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
		goto give_sigsegv;

-	{
-		struct exec_domain *ed = current_thread_info()->exec_domain;
-		err |= __put_user((ed
-			   && ed->signal_invmap
-			   && sig < 32
-			   ? ed->signal_invmap[sig]
-			   : sig),
-			  &frame->sig);
-	}
+	err |= __put_user((ed && ed->signal_invmap && sig < 32
+			   ? ed->signal_invmap[sig] : sig), &frame->sig);
	err |= __put_user(ptr_to_compat(&frame->info), &frame->pinfo);
	err |= __put_user(ptr_to_compat(&frame->uc), &frame->puc);
	err |= copy_siginfo_to_user32(&frame->info, info);
@@ -540,73 +555,58 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
	err |= __put_user(0, &frame->uc.uc_flags);
	err |= __put_user(0, &frame->uc.uc_link);
	err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
-	err |= __put_user(sas_ss_flags(regs->rsp),
+	err |= __put_user(sas_ss_flags(regs->sp),
			  &frame->uc.uc_stack.ss_flags);
	err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
	err |= ia32_setup_sigcontext(&frame->uc.uc_mcontext, &frame->fpstate,
				     regs, set->sig[0]);
	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
	if (err)
		goto give_sigsegv;

-	{
-		void __user *restorer = VSYSCALL32_RTSIGRETURN;
-		if (ka->sa.sa_flags & SA_RESTORER)
-			restorer = ka->sa.sa_restorer;
-		err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
-	}
-
-	/* This is movl $,%eax ; int $0x80 */
-	/* Not actually used anymore, but left because some gdb versions
-	   need it. */
-	{
-		/* __copy_to_user optimizes that into a single 8 byte store */
-		static const struct {
-			u8 movl;
-			u32 val;
-			u16 int80;
-			u16 pad;
-			u8  pad2;
-		} __attribute__((packed)) code = {
-			0xb8,
-			__NR_ia32_rt_sigreturn,
-			0x80cd,
-			0,
-		};
-		err |= __copy_to_user(frame->retcode, &code, 8);
-	}
+	if (ka->sa.sa_flags & SA_RESTORER)
+		restorer = ka->sa.sa_restorer;
+	else
+		restorer = VDSO32_SYMBOL(current->mm->context.vdso,
+					 rt_sigreturn);
+	err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
+
+	/*
+	 * Not actually used anymore, but left because some gdb
+	 * versions need it.
+	 */
+	err |= __copy_to_user(frame->retcode, &code, 8);
	if (err)
		goto give_sigsegv;

	/* Set up registers for signal handler */
-	regs->rsp = (unsigned long) frame;
-	regs->rip = (unsigned long) ka->sa.sa_handler;
+	regs->sp = (unsigned long) frame;
+	regs->ip = (unsigned long) ka->sa.sa_handler;

	/* Make -mregparm=3 work */
-	regs->rax = sig;
-	regs->rdx = (unsigned long) &frame->info;
-	regs->rcx = (unsigned long) &frame->uc;
+	regs->ax = sig;
+	regs->dx = (unsigned long) &frame->info;
+	regs->cx = (unsigned long) &frame->uc;

	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));

	regs->cs = __USER32_CS;
	regs->ss = __USER32_DS;

	set_fs(USER_DS);
-	regs->eflags &= ~TF_MASK;
+	regs->flags &= ~X86_EFLAGS_TF;
	if (test_thread_flag(TIF_SINGLESTEP))
		ptrace_notify(SIGTRAP);

#if DEBUG_SIG
-	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
-	       current->comm, current->pid, frame, regs->rip, frame->pretcode);
+	printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
+	       current->comm, current->pid, frame, regs->ip, frame->pretcode);
#endif
	return 0;


@@ -12,7 +12,6 @@
#include <asm/ia32_unistd.h>
#include <asm/thread_info.h>
#include <asm/segment.h>
-#include <asm/vsyscall32.h>
#include <asm/irqflags.h>
#include <linux/linkage.h>
@@ -104,7 +103,7 @@ ENTRY(ia32_sysenter_target)
	pushfq
	CFI_ADJUST_CFA_OFFSET 8
	/*CFI_REL_OFFSET rflags,0*/
-	movl	$VSYSCALL32_SYSEXIT, %r10d
+	movl	8*3-THREAD_SIZE+threadinfo_sysenter_return(%rsp), %r10d
	CFI_REGISTER rip,r10
	pushq	$__USER32_CS
	CFI_ADJUST_CFA_OFFSET 8
@@ -142,6 +141,8 @@ sysenter_do_call:
	andl    $~TS_COMPAT,threadinfo_status(%r10)
	/* clear IF, that popfq doesn't enable interrupts early */
	andl  $~0x200,EFLAGS-R11(%rsp)
+	movl	RIP-R11(%rsp),%edx		/* User %eip */
+	CFI_REGISTER rip,rdx
	RESTORE_ARGS 1,24,1,1,1,1
	popfq
	CFI_ADJUST_CFA_OFFSET -8
@@ -149,8 +150,6 @@ sysenter_do_call:
	popq	%rcx				/* User %esp */
	CFI_ADJUST_CFA_OFFSET -8
	CFI_REGISTER rsp,rcx
-	movl	$VSYSCALL32_SYSEXIT,%edx	/* User %eip */
-	CFI_REGISTER rip,rdx
	TRACE_IRQS_ON
	swapgs
	sti	/* sti only takes effect after the next instruction */
@@ -644,8 +643,8 @@ ia32_sys_call_table:
	.quad compat_sys_futex		/* 240 */
	.quad compat_sys_sched_setaffinity
	.quad compat_sys_sched_getaffinity
-	.quad sys32_set_thread_area
-	.quad sys32_get_thread_area
+	.quad sys_set_thread_area
+	.quad sys_get_thread_area
	.quad compat_sys_io_setup	/* 245 */
	.quad sys_io_destroy
	.quad compat_sys_io_getevents


@@ -9,9 +9,8 @@
#include <linux/ipc.h>
#include <linux/compat.h>

-asmlinkage long
-sys32_ipc(u32 call, int first, int second, int third,
-	  compat_uptr_t ptr, u32 fifth)
+asmlinkage long sys32_ipc(u32 call, int first, int second, int third,
+			  compat_uptr_t ptr, u32 fifth)
{
	int version;
@@ -19,36 +18,35 @@ sys32_ipc(u32 call, int first, int second, int third,
	call &= 0xffff;

	switch (call) {
	case SEMOP:
		/* struct sembuf is the same on 32 and 64bit :)) */
		return sys_semtimedop(first, compat_ptr(ptr), second, NULL);
	case SEMTIMEDOP:
		return compat_sys_semtimedop(first, compat_ptr(ptr), second,
					     compat_ptr(fifth));
	case SEMGET:
		return sys_semget(first, second, third);
	case SEMCTL:
		return compat_sys_semctl(first, second, third, compat_ptr(ptr));
	case MSGSND:
		return compat_sys_msgsnd(first, second, third, compat_ptr(ptr));
	case MSGRCV:
		return compat_sys_msgrcv(first, second, fifth, third,
					 version, compat_ptr(ptr));
	case MSGGET:
		return sys_msgget((key_t) first, second);
	case MSGCTL:
		return compat_sys_msgctl(first, second, compat_ptr(ptr));
	case SHMAT:
		return compat_sys_shmat(first, second, third, version,
					compat_ptr(ptr));
-		break;
	case SHMDT:
		return sys_shmdt(compat_ptr(ptr));
	case SHMGET:
		return sys_shmget(first, (unsigned)second, third);
	case SHMCTL:
		return compat_sys_shmctl(first, second, compat_ptr(ptr));
	}
	return -ENOSYS;


@@ -1,79 +0,0 @@
/*
* linux/arch/x86_64/ia32/mm/mmap.c
*
* flexible mmap layout support
*
* Based on the i386 version which was
*
* Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*
* Started by Ingo Molnar <mingo@elte.hu>
*/
#include <linux/personality.h>
#include <linux/mm.h>
#include <linux/random.h>
#include <linux/sched.h>
/*
* Top of mmap area (just below the process stack).
*
* Leave an at least ~128 MB hole.
*/
#define MIN_GAP (128*1024*1024)
#define MAX_GAP (TASK_SIZE/6*5)
static inline unsigned long mmap_base(struct mm_struct *mm)
{
unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
unsigned long random_factor = 0;
if (current->flags & PF_RANDOMIZE)
random_factor = get_random_int() % (1024*1024);
if (gap < MIN_GAP)
gap = MIN_GAP;
else if (gap > MAX_GAP)
gap = MAX_GAP;
return PAGE_ALIGN(TASK_SIZE - gap - random_factor);
}
/*
* This function, called very early during the creation of a new
* process VM image, sets up which VM layout function to use:
*/
void ia32_pick_mmap_layout(struct mm_struct *mm)
{
/*
* Fall back to the standard layout if the personality
* bit is set, or if the expected stack growth is unlimited:
*/
if (sysctl_legacy_va_layout ||
(current->personality & ADDR_COMPAT_LAYOUT) ||
current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY) {
mm->mmap_base = TASK_UNMAPPED_BASE;
mm->get_unmapped_area = arch_get_unmapped_area;
mm->unmap_area = arch_unmap_area;
} else {
mm->mmap_base = mmap_base(mm);
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
mm->unmap_area = arch_unmap_area_topdown;
}
}


@@ -1,404 +0,0 @@
/*
* 32bit ptrace for x86-64.
*
* Copyright 2001,2002 Andi Kleen, SuSE Labs.
* Some parts copied from arch/i386/kernel/ptrace.c. See that file for earlier
* copyright.
*
* This allows to access 64bit processes too; but there is no way to see the extended
* register contents.
*/
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/sched.h>
#include <linux/syscalls.h>
#include <linux/unistd.h>
#include <linux/mm.h>
#include <linux/err.h>
#include <linux/ptrace.h>
#include <asm/ptrace.h>
#include <asm/compat.h>
#include <asm/uaccess.h>
#include <asm/user32.h>
#include <asm/user.h>
#include <asm/errno.h>
#include <asm/debugreg.h>
#include <asm/i387.h>
#include <asm/fpu32.h>
#include <asm/ia32.h>
/*
* Determines which flags the user has access to [1 = access, 0 = no access].
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
* Also masks reserved bits (31-22, 15, 5, 3, 1).
*/
#define FLAG_MASK 0x54dd5UL
#define R32(l,q) \
case offsetof(struct user32, regs.l): stack[offsetof(struct pt_regs, q)/8] = val; break
static int putreg32(struct task_struct *child, unsigned regno, u32 val)
{
int i;
__u64 *stack = (__u64 *)task_pt_regs(child);
switch (regno) {
case offsetof(struct user32, regs.fs):
if (val && (val & 3) != 3) return -EIO;
child->thread.fsindex = val & 0xffff;
break;
case offsetof(struct user32, regs.gs):
if (val && (val & 3) != 3) return -EIO;
child->thread.gsindex = val & 0xffff;
break;
case offsetof(struct user32, regs.ds):
if (val && (val & 3) != 3) return -EIO;
child->thread.ds = val & 0xffff;
break;
case offsetof(struct user32, regs.es):
child->thread.es = val & 0xffff;
break;
case offsetof(struct user32, regs.ss):
if ((val & 3) != 3) return -EIO;
stack[offsetof(struct pt_regs, ss)/8] = val & 0xffff;
break;
case offsetof(struct user32, regs.cs):
if ((val & 3) != 3) return -EIO;
stack[offsetof(struct pt_regs, cs)/8] = val & 0xffff;
break;
R32(ebx, rbx);
R32(ecx, rcx);
R32(edx, rdx);
R32(edi, rdi);
R32(esi, rsi);
R32(ebp, rbp);
R32(eax, rax);
R32(orig_eax, orig_rax);
R32(eip, rip);
R32(esp, rsp);
case offsetof(struct user32, regs.eflags): {
__u64 *flags = &stack[offsetof(struct pt_regs, eflags)/8];
val &= FLAG_MASK;
*flags = val | (*flags & ~FLAG_MASK);
break;
}
case offsetof(struct user32, u_debugreg[4]):
case offsetof(struct user32, u_debugreg[5]):
return -EIO;
case offsetof(struct user32, u_debugreg[0]):
child->thread.debugreg0 = val;
break;
case offsetof(struct user32, u_debugreg[1]):
child->thread.debugreg1 = val;
break;
case offsetof(struct user32, u_debugreg[2]):
child->thread.debugreg2 = val;
break;
case offsetof(struct user32, u_debugreg[3]):
child->thread.debugreg3 = val;
break;
case offsetof(struct user32, u_debugreg[6]):
child->thread.debugreg6 = val;
break;
case offsetof(struct user32, u_debugreg[7]):
val &= ~DR_CONTROL_RESERVED;
/* See arch/i386/kernel/ptrace.c for an explanation of
* this awkward check.*/
for(i=0; i<4; i++)
if ((0x5454 >> ((val >> (16 + 4*i)) & 0xf)) & 1)
return -EIO;
child->thread.debugreg7 = val;
if (val)
set_tsk_thread_flag(child, TIF_DEBUG);
else
clear_tsk_thread_flag(child, TIF_DEBUG);
break;
default:
if (regno > sizeof(struct user32) || (regno & 3))
return -EIO;
/* Other dummy fields in the virtual user structure are ignored */
break;
}
return 0;
}
#undef R32
#define R32(l,q) \
case offsetof(struct user32, regs.l): *val = stack[offsetof(struct pt_regs, q)/8]; break
static int getreg32(struct task_struct *child, unsigned regno, u32 *val)
{
__u64 *stack = (__u64 *)task_pt_regs(child);
switch (regno) {
case offsetof(struct user32, regs.fs):
*val = child->thread.fsindex;
break;
case offsetof(struct user32, regs.gs):
*val = child->thread.gsindex;
break;
case offsetof(struct user32, regs.ds):
*val = child->thread.ds;
break;
case offsetof(struct user32, regs.es):
*val = child->thread.es;
break;
R32(cs, cs);
R32(ss, ss);
R32(ebx, rbx);
R32(ecx, rcx);
R32(edx, rdx);
R32(edi, rdi);
R32(esi, rsi);
R32(ebp, rbp);
R32(eax, rax);
R32(orig_eax, orig_rax);
R32(eip, rip);
R32(eflags, eflags);
R32(esp, rsp);
case offsetof(struct user32, u_debugreg[0]):
*val = child->thread.debugreg0;
break;
case offsetof(struct user32, u_debugreg[1]):
*val = child->thread.debugreg1;
break;
case offsetof(struct user32, u_debugreg[2]):
*val = child->thread.debugreg2;
break;
case offsetof(struct user32, u_debugreg[3]):
*val = child->thread.debugreg3;
break;
case offsetof(struct user32, u_debugreg[6]):
*val = child->thread.debugreg6;
break;
case offsetof(struct user32, u_debugreg[7]):
*val = child->thread.debugreg7;
break;
default:
if (regno > sizeof(struct user32) || (regno & 3))
return -EIO;
/* Other dummy fields in the virtual user structure are ignored */
*val = 0;
break;
}
return 0;
}
#undef R32
static long ptrace32_siginfo(unsigned request, u32 pid, u32 addr, u32 data)
{
int ret;
compat_siginfo_t __user *si32 = compat_ptr(data);
siginfo_t ssi;
siginfo_t __user *si = compat_alloc_user_space(sizeof(siginfo_t));
if (request == PTRACE_SETSIGINFO) {
memset(&ssi, 0, sizeof(siginfo_t));
ret = copy_siginfo_from_user32(&ssi, si32);
if (ret)
return ret;
if (copy_to_user(si, &ssi, sizeof(siginfo_t)))
return -EFAULT;
}
ret = sys_ptrace(request, pid, addr, (unsigned long)si);
if (ret)
return ret;
if (request == PTRACE_GETSIGINFO) {
if (copy_from_user(&ssi, si, sizeof(siginfo_t)))
return -EFAULT;
ret = copy_siginfo_to_user32(si32, &ssi);
}
return ret;
}
asmlinkage long sys32_ptrace(long request, u32 pid, u32 addr, u32 data)
{
struct task_struct *child;
struct pt_regs *childregs;
void __user *datap = compat_ptr(data);
int ret;
__u32 val;
switch (request) {
case PTRACE_TRACEME:
case PTRACE_ATTACH:
case PTRACE_KILL:
case PTRACE_CONT:
case PTRACE_SINGLESTEP:
case PTRACE_DETACH:
case PTRACE_SYSCALL:
case PTRACE_OLDSETOPTIONS:
case PTRACE_SETOPTIONS:
case PTRACE_SET_THREAD_AREA:
case PTRACE_GET_THREAD_AREA:
return sys_ptrace(request, pid, addr, data);
default:
return -EINVAL;
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA:
case PTRACE_POKEDATA:
case PTRACE_POKETEXT:
case PTRACE_POKEUSR:
case PTRACE_PEEKUSR:
case PTRACE_GETREGS:
case PTRACE_SETREGS:
case PTRACE_SETFPREGS:
case PTRACE_GETFPREGS:
case PTRACE_SETFPXREGS:
case PTRACE_GETFPXREGS:
case PTRACE_GETEVENTMSG:
break;
case PTRACE_SETSIGINFO:
case PTRACE_GETSIGINFO:
return ptrace32_siginfo(request, pid, addr, data);
}
child = ptrace_get_task_struct(pid);
if (IS_ERR(child))
return PTR_ERR(child);
ret = ptrace_check_attach(child, request == PTRACE_KILL);
if (ret < 0)
goto out;
childregs = task_pt_regs(child);
switch (request) {
case PTRACE_PEEKDATA:
case PTRACE_PEEKTEXT:
ret = 0;
if (access_process_vm(child, addr, &val, sizeof(u32), 0) != sizeof(u32))
ret = -EIO;
else
ret = put_user(val, (unsigned int __user *)datap);
break;
case PTRACE_POKEDATA:
case PTRACE_POKETEXT:
ret = 0;
if (access_process_vm(child, addr, &data, sizeof(u32), 1) != sizeof(u32))
ret = -EIO;
break;
case PTRACE_PEEKUSR:
ret = getreg32(child, addr, &val);
if (ret == 0)
ret = put_user(val, (__u32 __user *)datap);
break;
case PTRACE_POKEUSR:
ret = putreg32(child, addr, data);
break;
case PTRACE_GETREGS: { /* Get all gp regs from the child. */
int i;
if (!access_ok(VERIFY_WRITE, datap, 16*4)) {
ret = -EIO;
break;
}
ret = 0;
for (i = 0; i <= 16*4; i += sizeof(__u32)) {
getreg32(child, i, &val);
ret |= __put_user(val, (u32 __user *)datap);
datap += sizeof(u32);
}
break;
}
case PTRACE_SETREGS: { /* Set all gp regs in the child. */
unsigned long tmp;
int i;
if (!access_ok(VERIFY_READ, datap, 16*4)) {
ret = -EIO;
break;
}
ret = 0;
for (i = 0; i <= 16*4; i += sizeof(u32)) {
ret |= __get_user(tmp, (u32 __user *)datap);
putreg32(child, i, tmp);
datap += sizeof(u32);
}
break;
}
case PTRACE_GETFPREGS:
ret = -EIO;
if (!access_ok(VERIFY_READ, compat_ptr(data),
sizeof(struct user_i387_struct)))
break;
save_i387_ia32(child, datap, childregs, 1);
ret = 0;
break;
case PTRACE_SETFPREGS:
ret = -EIO;
if (!access_ok(VERIFY_WRITE, datap,
sizeof(struct user_i387_struct)))
break;
ret = 0;
/* don't check EFAULT to be bug-to-bug compatible to i386 */
restore_i387_ia32(child, datap, 1);
break;
case PTRACE_GETFPXREGS: {
struct user32_fxsr_struct __user *u = datap;
init_fpu(child);
ret = -EIO;
if (!access_ok(VERIFY_WRITE, u, sizeof(*u)))
break;
ret = -EFAULT;
if (__copy_to_user(u, &child->thread.i387.fxsave, sizeof(*u)))
break;
ret = __put_user(childregs->cs, &u->fcs);
ret |= __put_user(child->thread.ds, &u->fos);
break;
}
case PTRACE_SETFPXREGS: {
struct user32_fxsr_struct __user *u = datap;
unlazy_fpu(child);
ret = -EIO;
if (!access_ok(VERIFY_READ, u, sizeof(*u)))
break;
/* no checking to be bug-to-bug compatible with i386. */
/* but silence warning */
if (__copy_from_user(&child->thread.i387.fxsave, u, sizeof(*u)))
;
set_stopped_child_used_math(child);
child->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
ret = 0;
break;
}
case PTRACE_GETEVENTMSG:
ret = put_user(child->ptrace_message, (unsigned int __user *)compat_ptr(data));
break;
default:
BUG();
}
out:
put_task_struct(child);
return ret;
}


@@ -1,29 +1,29 @@
/*
 * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
 * sys_sparc32
 *
 * Copyright (C) 2000 VA Linux Co
 * Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
 * Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
 * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
 * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
 * Copyright (C) 2000 Hewlett-Packard Co.
 * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
 * Copyright (C) 2000,2001,2002 Andi Kleen, SuSE Labs (x86-64 port)
 *
 * These routines maintain argument size conversion between 32bit and 64bit
 * environment. In 2.5 most of this should be moved to a generic directory.
 *
 * This file assumes that there is a hole at the end of user address space.
 *
 * Some of the functions are LE specific currently. These are
 * hopefully all marked. This should be fixed.
 */

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/signal.h>
#include <linux/syscalls.h>
#include <linux/resource.h>
@@ -90,43 +90,44 @@ int cp_compat_stat(struct kstat *kbuf, struct compat_stat __user *ubuf)
	if (sizeof(ino) < sizeof(kbuf->ino) && ino != kbuf->ino)
		return -EOVERFLOW;
	if (!access_ok(VERIFY_WRITE, ubuf, sizeof(struct compat_stat)) ||
	    __put_user(old_encode_dev(kbuf->dev), &ubuf->st_dev) ||
	    __put_user(ino, &ubuf->st_ino) ||
	    __put_user(kbuf->mode, &ubuf->st_mode) ||
	    __put_user(kbuf->nlink, &ubuf->st_nlink) ||
	    __put_user(uid, &ubuf->st_uid) ||
	    __put_user(gid, &ubuf->st_gid) ||
	    __put_user(old_encode_dev(kbuf->rdev), &ubuf->st_rdev) ||
	    __put_user(kbuf->size, &ubuf->st_size) ||
	    __put_user(kbuf->atime.tv_sec, &ubuf->st_atime) ||
	    __put_user(kbuf->atime.tv_nsec, &ubuf->st_atime_nsec) ||
	    __put_user(kbuf->mtime.tv_sec, &ubuf->st_mtime) ||
	    __put_user(kbuf->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
	    __put_user(kbuf->ctime.tv_sec, &ubuf->st_ctime) ||
	    __put_user(kbuf->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
	    __put_user(kbuf->blksize, &ubuf->st_blksize) ||
	    __put_user(kbuf->blocks, &ubuf->st_blocks))
		return -EFAULT;
	return 0;
}

asmlinkage long sys32_truncate64(char __user *filename,
				 unsigned long offset_low,
				 unsigned long offset_high)
{
	return sys_truncate(filename, ((loff_t) offset_high << 32) | offset_low);
}

asmlinkage long sys32_ftruncate64(unsigned int fd, unsigned long offset_low,
				  unsigned long offset_high)
{
	return sys_ftruncate(fd, ((loff_t) offset_high << 32) | offset_low);
}

/*
 * Another set for IA32/LFS -- x86_64 struct stat is different due to
 * support for 64bit inode numbers.
 */
static int cp_stat64(struct stat64 __user *ubuf, struct kstat *stat)
{
	typeof(ubuf->st_uid) uid = 0;
	typeof(ubuf->st_gid) gid = 0;
@@ -134,38 +135,39 @@ cp_stat64(struct stat64 __user *ubuf, struct kstat *stat)
	SET_GID(gid, stat->gid);
	if (!access_ok(VERIFY_WRITE, ubuf, sizeof(struct stat64)) ||
	    __put_user(huge_encode_dev(stat->dev), &ubuf->st_dev) ||
	    __put_user(stat->ino, &ubuf->__st_ino) ||
	    __put_user(stat->ino, &ubuf->st_ino) ||
	    __put_user(stat->mode, &ubuf->st_mode) ||
	    __put_user(stat->nlink, &ubuf->st_nlink) ||
	    __put_user(uid, &ubuf->st_uid) ||
	    __put_user(gid, &ubuf->st_gid) ||
	    __put_user(huge_encode_dev(stat->rdev), &ubuf->st_rdev) ||
	    __put_user(stat->size, &ubuf->st_size) ||
	    __put_user(stat->atime.tv_sec, &ubuf->st_atime) ||
	    __put_user(stat->atime.tv_nsec, &ubuf->st_atime_nsec) ||
	    __put_user(stat->mtime.tv_sec, &ubuf->st_mtime) ||
	    __put_user(stat->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
	    __put_user(stat->ctime.tv_sec, &ubuf->st_ctime) ||
	    __put_user(stat->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
	    __put_user(stat->blksize, &ubuf->st_blksize) ||
	    __put_user(stat->blocks, &ubuf->st_blocks))
		return -EFAULT;
	return 0;
}

asmlinkage long sys32_stat64(char __user *filename,
			     struct stat64 __user *statbuf)
{
	struct kstat stat;
	int ret = vfs_stat(filename, &stat);

	if (!ret)
		ret = cp_stat64(statbuf, &stat);
	return ret;
}

asmlinkage long sys32_lstat64(char __user *filename,
			      struct stat64 __user *statbuf)
{
	struct kstat stat;
	int ret = vfs_lstat(filename, &stat);
@@ -174,8 +176,7 @@ sys32_lstat64(char __user * filename, struct stat64 __user *statbuf)
	return ret;
}

asmlinkage long sys32_fstat64(unsigned int fd, struct stat64 __user *statbuf)
{
	struct kstat stat;
	int ret = vfs_fstat(fd, &stat);
@@ -184,9 +185,8 @@ sys32_fstat64(unsigned int fd, struct stat64 __user *statbuf)
	return ret;
}

asmlinkage long sys32_fstatat(unsigned int dfd, char __user *filename,
			      struct stat64 __user *statbuf, int flag)
{
	struct kstat stat;
	int error = -EINVAL;
@@ -221,8 +221,7 @@ struct mmap_arg_struct {
	unsigned int offset;
};

asmlinkage long sys32_mmap(struct mmap_arg_struct __user *arg)
{
	struct mmap_arg_struct a;
	struct file *file = NULL;
@@ -233,33 +232,33 @@ sys32_mmap(struct mmap_arg_struct __user *arg)
		return -EFAULT;
	if (a.offset & ~PAGE_MASK)
		return -EINVAL;

	if (!(a.flags & MAP_ANONYMOUS)) {
		file = fget(a.fd);
		if (!file)
			return -EBADF;
	}

	mm = current->mm;
	down_write(&mm->mmap_sem);
	retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags,
			       a.offset>>PAGE_SHIFT);
	if (file)
		fput(file);

	up_write(&mm->mmap_sem);

	return retval;
}

asmlinkage long sys32_mprotect(unsigned long start, size_t len,
			       unsigned long prot)
{
	return sys_mprotect(start, len, prot);
}

asmlinkage long sys32_pipe(int __user *fd)
{
	int retval;
	int fds[2];
@@ -269,13 +268,13 @@ sys32_pipe(int __user *fd)
		goto out;
	if (copy_to_user(fd, fds, sizeof(fds)))
		retval = -EFAULT;
out:
	return retval;
}

asmlinkage long sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
				   struct sigaction32 __user *oact,
				   unsigned int sigsetsize)
{
	struct k_sigaction new_ka, old_ka;
	int ret;
@@ -291,12 +290,17 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
		if (!access_ok(VERIFY_READ, act, sizeof(*act)) ||
		    __get_user(handler, &act->sa_handler) ||
		    __get_user(new_ka.sa.sa_flags, &act->sa_flags) ||
		    __get_user(restorer, &act->sa_restorer) ||
		    __copy_from_user(&set32, &act->sa_mask,
				     sizeof(compat_sigset_t)))
			return -EFAULT;
		new_ka.sa.sa_handler = compat_ptr(handler);
		new_ka.sa.sa_restorer = compat_ptr(restorer);

		/*
		 * FIXME: here we rely on _COMPAT_NSIG_WORS to be >=
		 * than _NSIG_WORDS << 1
		 */
		switch (_NSIG_WORDS) {
		case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
				| (((long)set32.sig[7]) << 32);
@@ -312,7 +316,10 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
	ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);

	if (!ret && oact) {
		/*
		 * FIXME: here we rely on _COMPAT_NSIG_WORS to be >=
		 * than _NSIG_WORDS << 1
		 */
		switch (_NSIG_WORDS) {
		case 4:
			set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
@@ -328,23 +335,26 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
			set32.sig[0] = old_ka.sa.sa_mask.sig[0];
		}
		if (!access_ok(VERIFY_WRITE, oact, sizeof(*oact)) ||
		    __put_user(ptr_to_compat(old_ka.sa.sa_handler),
			       &oact->sa_handler) ||
		    __put_user(ptr_to_compat(old_ka.sa.sa_restorer),
			       &oact->sa_restorer) ||
		    __put_user(old_ka.sa.sa_flags, &oact->sa_flags) ||
		    __copy_to_user(&oact->sa_mask, &set32,
				   sizeof(compat_sigset_t)))
			return -EFAULT;
	}

	return ret;
}

asmlinkage long sys32_sigaction(int sig, struct old_sigaction32 __user *act,
				struct old_sigaction32 __user *oact)
{
	struct k_sigaction new_ka, old_ka;
	int ret;

	if (act) {
		compat_old_sigset_t mask;
		compat_uptr_t handler, restorer;
@@ -359,33 +369,35 @@ sys32_sigaction (int sig, struct old_sigaction32 __user *act, struct old_sigacti
		new_ka.sa.sa_restorer = compat_ptr(restorer);

		siginitset(&new_ka.sa.sa_mask, mask);
	}

	ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);

	if (!ret && oact) {
		if (!access_ok(VERIFY_WRITE, oact, sizeof(*oact)) ||
		    __put_user(ptr_to_compat(old_ka.sa.sa_handler),
			       &oact->sa_handler) ||
		    __put_user(ptr_to_compat(old_ka.sa.sa_restorer),
			       &oact->sa_restorer) ||
		    __put_user(old_ka.sa.sa_flags, &oact->sa_flags) ||
		    __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask))
			return -EFAULT;
	}

	return ret;
}

asmlinkage long sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
				     compat_sigset_t __user *oset,
				     unsigned int sigsetsize)
{
	sigset_t s;
	compat_sigset_t s32;
	int ret;
	mm_segment_t old_fs = get_fs();

	if (set) {
		if (copy_from_user(&s32, set, sizeof(compat_sigset_t)))
			return -EFAULT;
		switch (_NSIG_WORDS) {
		case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
@@ -394,13 +406,14 @@ sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
		case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
		}
	}
	set_fs(KERNEL_DS);
	ret = sys_rt_sigprocmask(how,
				 set ? (sigset_t __user *)&s : NULL,
				 oset ? (sigset_t __user *)&s : NULL,
				 sigsetsize);
	set_fs(old_fs);
	if (ret)
		return ret;
	if (oset) {
		switch (_NSIG_WORDS) {
		case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
@@ -408,52 +421,49 @@ sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
		case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
		case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
		}
		if (copy_to_user(oset, &s32, sizeof(compat_sigset_t)))
			return -EFAULT;
	}
	return 0;
}

static inline long get_tv32(struct timeval *o, struct compat_timeval __user *i)
{
	int err = -EFAULT;

	if (access_ok(VERIFY_READ, i, sizeof(*i))) {
		err = __get_user(o->tv_sec, &i->tv_sec);
		err |= __get_user(o->tv_usec, &i->tv_usec);
	}
	return err;
}

static inline long put_tv32(struct compat_timeval __user *o, struct timeval *i)
{
	int err = -EFAULT;

	if (access_ok(VERIFY_WRITE, o, sizeof(*o))) {
		err = __put_user(i->tv_sec, &o->tv_sec);
		err |= __put_user(i->tv_usec, &o->tv_usec);
	}
	return err;
}

asmlinkage long sys32_alarm(unsigned int seconds)
{
	return alarm_setitimer(seconds);
}

/*
 * Translations due to time_t size differences. Which affects all
 * sorts of things, like timeval and itimerval.
 */
asmlinkage long sys32_gettimeofday(struct compat_timeval __user *tv,
				   struct timezone __user *tz)
{
	if (tv) {
		struct timeval ktv;

		do_gettimeofday(&ktv);
		if (put_tv32(tv, &ktv))
			return -EFAULT;
@@ -465,14 +475,14 @@ sys32_gettimeofday(struct compat_timeval __user *tv, struct timezone __user *tz)
	return 0;
}

asmlinkage long sys32_settimeofday(struct compat_timeval __user *tv,
				   struct timezone __user *tz)
{
	struct timeval ktv;
	struct timespec kts;
	struct timezone ktz;

	if (tv) {
		if (get_tv32(&ktv, tv))
			return -EFAULT;
		kts.tv_sec = ktv.tv_sec;
@@ -494,8 +504,7 @@ struct sel_arg_struct {
	unsigned int tvp;
};

asmlinkage long sys32_old_select(struct sel_arg_struct __user *arg)
{
	struct sel_arg_struct a;
@@ -505,50 +514,45 @@ sys32_old_select(struct sel_arg_struct __user *arg)
			       compat_ptr(a.exp), compat_ptr(a.tvp));
}

asmlinkage long sys32_waitpid(compat_pid_t pid, unsigned int *stat_addr,
			      int options)
{
	return compat_sys_wait4(pid, stat_addr, options, NULL);
}

/* 32-bit timeval and related flotsam.  */

asmlinkage long sys32_sysfs(int option, u32 arg1, u32 arg2)
{
	return sys_sysfs(option, arg1, arg2);
}

asmlinkage long sys32_sched_rr_get_interval(compat_pid_t pid,
					    struct compat_timespec __user *interval)
{
	struct timespec t;
	int ret;
	mm_segment_t old_fs = get_fs();

	set_fs(KERNEL_DS);
	ret = sys_sched_rr_get_interval(pid, (struct timespec __user *)&t);
	set_fs(old_fs);
	if (put_compat_timespec(&t, interval))
		return -EFAULT;
	return ret;
}

asmlinkage long sys32_rt_sigpending(compat_sigset_t __user *set,
				    compat_size_t sigsetsize)
{
	sigset_t s;
	compat_sigset_t s32;
	int ret;
	mm_segment_t old_fs = get_fs();

	set_fs(KERNEL_DS);
	ret = sys_rt_sigpending((sigset_t __user *)&s, sigsetsize);
	set_fs(old_fs);
	if (!ret) {
		switch (_NSIG_WORDS) {
		case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
@@ -556,30 +560,29 @@ sys32_rt_sigpending(compat_sigset_t __user *set, compat_size_t sigsetsize)
		case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
		case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
		}
		if (copy_to_user(set, &s32, sizeof(compat_sigset_t)))
			return -EFAULT;
	}
	return ret;
}

asmlinkage long sys32_rt_sigqueueinfo(int pid, int sig,
				      compat_siginfo_t __user *uinfo)
{
	siginfo_t info;
	int ret;
	mm_segment_t old_fs = get_fs();

	if (copy_siginfo_from_user32(&info, uinfo))
		return -EFAULT;
	set_fs(KERNEL_DS);
	ret = sys_rt_sigqueueinfo(pid, sig, (siginfo_t __user *)&info);
	set_fs(old_fs);
	return ret;
}

/* These are here just in case some old ia32 binary calls it. */
asmlinkage long sys32_pause(void)
{
	current->state = TASK_INTERRUPTIBLE;
	schedule();
@@ -599,25 +602,25 @@ struct sysctl_ia32 {
};


asmlinkage long sys32_sysctl(struct sysctl_ia32 __user *args32)
{
	struct sysctl_ia32 a32;
	mm_segment_t old_fs = get_fs();
	void __user *oldvalp, *newvalp;
	size_t oldlen;
	int __user *namep;
	long ret;

	if (copy_from_user(&a32, args32, sizeof(a32)))
		return -EFAULT;

	/*
	 * We need to pre-validate these because we have to disable
	 * address checking before calling do_sysctl() because of
	 * OLDLEN but we can't run the risk of the user specifying bad
	 * addresses here. Well, since we're dealing with 32 bit
	 * addresses, we KNOW that access_ok() will always succeed, so
	 * this is an expensive NOP, but so what...
	 */
	namep = compat_ptr(a32.name);
	oldvalp = compat_ptr(a32.oldval);
@@ -636,34 +639,34 @@ sys32_sysctl(struct sysctl_ia32 __user *args32)
	unlock_kernel();
	set_fs(old_fs);

	if (oldvalp && put_user(oldlen, (int __user *)compat_ptr(a32.oldlenp)))
		return -EFAULT;

	return ret;
}
#endif

/* warning: next two assume little endian */
asmlinkage long sys32_pread(unsigned int fd, char __user *ubuf, u32 count,
			    u32 poslo, u32 poshi)
{
	return sys_pread64(fd, ubuf, count,
			 ((loff_t)AA(poshi) << 32) | AA(poslo));
}

asmlinkage long sys32_pwrite(unsigned int fd, char __user *ubuf, u32 count,
			     u32 poslo, u32 poshi)
{
	return sys_pwrite64(fd, ubuf, count,
			 ((loff_t)AA(poshi) << 32) | AA(poslo));
}
asmlinkage long asmlinkage long sys32_personality(unsigned long personality)
sys32_personality(unsigned long personality)
{ {
int ret; int ret;
if (personality(current->personality) == PER_LINUX32 &&
if (personality(current->personality) == PER_LINUX32 &&
personality == PER_LINUX) personality == PER_LINUX)
personality = PER_LINUX32; personality = PER_LINUX32;
ret = sys_personality(personality); ret = sys_personality(personality);
@ -672,34 +675,33 @@ sys32_personality(unsigned long personality)
 	return ret;
 }
 
-asmlinkage long
-sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset, s32 count)
+asmlinkage long sys32_sendfile(int out_fd, int in_fd,
+			       compat_off_t __user *offset, s32 count)
 {
 	mm_segment_t old_fs = get_fs();
 	int ret;
 	off_t of;
 
 	if (offset && get_user(of, offset))
 		return -EFAULT;
 
 	set_fs(KERNEL_DS);
 	ret = sys_sendfile(out_fd, in_fd, offset ? (off_t __user *)&of : NULL,
 			   count);
 	set_fs(old_fs);
 
 	if (offset && put_user(of, offset))
 		return -EFAULT;
 	return ret;
 }
 
 asmlinkage long sys32_mmap2(unsigned long addr, unsigned long len,
 			unsigned long prot, unsigned long flags,
 			unsigned long fd, unsigned long pgoff)
 {
 	struct mm_struct *mm = current->mm;
 	unsigned long error;
-	struct file * file = NULL;
+	struct file *file = NULL;
 
 	flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
 	if (!(flags & MAP_ANONYMOUS)) {
@@ -717,36 +719,35 @@ asmlinkage long sys32_mmap2(unsigned long addr, unsigned long len,
 	return error;
 }
 
-asmlinkage long sys32_olduname(struct oldold_utsname __user * name)
+asmlinkage long sys32_olduname(struct oldold_utsname __user *name)
 {
+	char *arch = "x86_64";
 	int err;
 
 	if (!name)
 		return -EFAULT;
 	if (!access_ok(VERIFY_WRITE, name, sizeof(struct oldold_utsname)))
 		return -EFAULT;
 
 	down_read(&uts_sem);
 
-	err = __copy_to_user(&name->sysname,&utsname()->sysname,
-				__OLD_UTS_LEN);
-	err |= __put_user(0,name->sysname+__OLD_UTS_LEN);
-	err |= __copy_to_user(&name->nodename,&utsname()->nodename,
-				__OLD_UTS_LEN);
-	err |= __put_user(0,name->nodename+__OLD_UTS_LEN);
-	err |= __copy_to_user(&name->release,&utsname()->release,
-				__OLD_UTS_LEN);
-	err |= __put_user(0,name->release+__OLD_UTS_LEN);
-	err |= __copy_to_user(&name->version,&utsname()->version,
-				__OLD_UTS_LEN);
-	err |= __put_user(0,name->version+__OLD_UTS_LEN);
-	{
-		char *arch = "x86_64";
-		if (personality(current->personality) == PER_LINUX32)
-			arch = "i686";
+	err = __copy_to_user(&name->sysname, &utsname()->sysname,
+			     __OLD_UTS_LEN);
+	err |= __put_user(0, name->sysname+__OLD_UTS_LEN);
+	err |= __copy_to_user(&name->nodename, &utsname()->nodename,
+			     __OLD_UTS_LEN);
+	err |= __put_user(0, name->nodename+__OLD_UTS_LEN);
+	err |= __copy_to_user(&name->release, &utsname()->release,
+			     __OLD_UTS_LEN);
+	err |= __put_user(0, name->release+__OLD_UTS_LEN);
+	err |= __copy_to_user(&name->version, &utsname()->version,
+			     __OLD_UTS_LEN);
+	err |= __put_user(0, name->version+__OLD_UTS_LEN);
 
-		err |= __copy_to_user(&name->machine, arch, strlen(arch)+1);
-	}
+	if (personality(current->personality) == PER_LINUX32)
+		arch = "i686";
+
+	err |= __copy_to_user(&name->machine, arch, strlen(arch) + 1);
 
 	up_read(&uts_sem);
@@ -755,17 +756,19 @@ asmlinkage long sys32_olduname(struct oldold_utsname __user *name)
 	return err;
 }
 
-long sys32_uname(struct old_utsname __user * name)
+long sys32_uname(struct old_utsname __user *name)
 {
 	int err;
+
 	if (!name)
 		return -EFAULT;
 	down_read(&uts_sem);
-	err = copy_to_user(name, utsname(), sizeof (*name));
+	err = copy_to_user(name, utsname(), sizeof(*name));
 	up_read(&uts_sem);
 	if (personality(current->personality) == PER_LINUX32)
 		err |= copy_to_user(&name->machine, "i686", 5);
-	return err?-EFAULT:0;
+
+	return err ? -EFAULT : 0;
 }
@@ -773,27 +776,28 @@ long sys32_ustat(unsigned dev, struct ustat32 __user *u32p)
 	struct ustat u;
 	mm_segment_t seg;
 	int ret;
 
 	seg = get_fs();
 	set_fs(KERNEL_DS);
 	ret = sys_ustat(dev, (struct ustat __user *)&u);
 	set_fs(seg);
-	if (ret >= 0) {
-		if (!access_ok(VERIFY_WRITE,u32p,sizeof(struct ustat32)) ||
-		    __put_user((__u32) u.f_tfree, &u32p->f_tfree) ||
-		    __put_user((__u32) u.f_tinode, &u32p->f_tfree) ||
-		    __copy_to_user(&u32p->f_fname, u.f_fname, sizeof(u.f_fname)) ||
-		    __copy_to_user(&u32p->f_fpack, u.f_fpack, sizeof(u.f_fpack)))
-			ret = -EFAULT;
-	}
+	if (ret < 0)
+		return ret;
+
+	if (!access_ok(VERIFY_WRITE, u32p, sizeof(struct ustat32)) ||
+	    __put_user((__u32) u.f_tfree, &u32p->f_tfree) ||
+	    __put_user((__u32) u.f_tinode, &u32p->f_tfree) ||
+	    __copy_to_user(&u32p->f_fname, u.f_fname, sizeof(u.f_fname)) ||
+	    __copy_to_user(&u32p->f_fpack, u.f_fpack, sizeof(u.f_fpack)))
+		ret = -EFAULT;
+
 	return ret;
 }
 
 asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
 			     compat_uptr_t __user *envp, struct pt_regs *regs)
 {
 	long error;
-	char * filename;
+	char *filename;
 
 	filename = getname(name);
 	error = PTR_ERR(filename);
@@ -812,18 +816,19 @@ asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
 asmlinkage long sys32_clone(unsigned int clone_flags, unsigned int newsp,
 			    struct pt_regs *regs)
 {
-	void __user *parent_tid = (void __user *)regs->rdx;
-	void __user *child_tid = (void __user *)regs->rdi;
+	void __user *parent_tid = (void __user *)regs->dx;
+	void __user *child_tid = (void __user *)regs->di;
+
 	if (!newsp)
-		newsp = regs->rsp;
+		newsp = regs->sp;
 	return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
 }
 
 /*
- * Some system calls that need sign extended arguments. This could be done by a generic wrapper.
+ * Some system calls that need sign extended arguments. This could be
+ * done by a generic wrapper.
  */
-
-long sys32_lseek (unsigned int fd, int offset, unsigned int whence)
+long sys32_lseek(unsigned int fd, int offset, unsigned int whence)
 {
 	return sys_lseek(fd, offset, whence);
 }
@@ -832,49 +837,52 @@ long sys32_kill(int pid, int sig)
 {
 	return sys_kill(pid, sig);
 }
 
 long sys32_fadvise64_64(int fd, __u32 offset_low, __u32 offset_high,
 			__u32 len_low, __u32 len_high, int advice)
 {
 	return sys_fadvise64_64(fd,
 				(((u64)offset_high)<<32) | offset_low,
 				(((u64)len_high)<<32) | len_low,
 				advice);
 }
 
 long sys32_vm86_warning(void)
 {
 	struct task_struct *me = current;
 	static char lastcomm[sizeof(me->comm)];
+
 	if (strncmp(lastcomm, me->comm, sizeof(lastcomm))) {
-		compat_printk(KERN_INFO "%s: vm86 mode not supported on 64 bit kernel\n",
-		       me->comm);
+		compat_printk(KERN_INFO
+			      "%s: vm86 mode not supported on 64 bit kernel\n",
+			      me->comm);
 		strncpy(lastcomm, me->comm, sizeof(lastcomm));
 	}
 	return -ENOSYS;
 }
 
 long sys32_lookup_dcookie(u32 addr_low, u32 addr_high,
-			  char __user * buf, size_t len)
+			  char __user *buf, size_t len)
 {
 	return sys_lookup_dcookie(((u64)addr_high << 32) | addr_low, buf, len);
 }
 
-asmlinkage ssize_t sys32_readahead(int fd, unsigned off_lo, unsigned off_hi, size_t count)
+asmlinkage ssize_t sys32_readahead(int fd, unsigned off_lo, unsigned off_hi,
+				   size_t count)
 {
 	return sys_readahead(fd, ((u64)off_hi << 32) | off_lo, count);
 }
 
 asmlinkage long sys32_sync_file_range(int fd, unsigned off_low, unsigned off_hi,
 				      unsigned n_low, unsigned n_hi, int flags)
 {
 	return sys_sync_file_range(fd,
 				   ((u64)off_hi << 32) | off_low,
 				   ((u64)n_hi << 32) | n_low, flags);
 }
 
-asmlinkage long sys32_fadvise64(int fd, unsigned offset_lo, unsigned offset_hi, size_t len,
-				int advice)
+asmlinkage long sys32_fadvise64(int fd, unsigned offset_lo, unsigned offset_hi,
+				size_t len, int advice)
 {
 	return sys_fadvise64_64(fd, ((u64)offset_hi << 32) | offset_lo,
 				len, advice);


@@ -1,83 +0,0 @@
/* Copyright 2002,2003 Andi Kleen, SuSE Labs */
/* vsyscall handling for 32bit processes. Map a stub page into it
on demand because 32bit cannot reach the kernel's fixmaps */
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/stringify.h>
#include <linux/security.h>
#include <asm/proto.h>
#include <asm/tlbflush.h>
#include <asm/ia32_unistd.h>
#include <asm/vsyscall32.h>
extern unsigned char syscall32_syscall[], syscall32_syscall_end[];
extern unsigned char syscall32_sysenter[], syscall32_sysenter_end[];
extern int sysctl_vsyscall32;
static struct page *syscall32_pages[1];
static int use_sysenter = -1;
struct linux_binprm;
/* Setup a VMA at program startup for the vsyscall page */
int syscall32_setup_pages(struct linux_binprm *bprm, int exstack)
{
struct mm_struct *mm = current->mm;
int ret;
down_write(&mm->mmap_sem);
/*
* MAYWRITE to allow gdb to COW and set breakpoints
*
* Make sure the vDSO gets into every core dump.
* Dumping its contents makes post-mortem fully interpretable later
* without matching up the same kernel and hardware config to see
* what PC values meant.
*/
/* Could randomize here */
ret = install_special_mapping(mm, VSYSCALL32_BASE, PAGE_SIZE,
VM_READ|VM_EXEC|
VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC|
VM_ALWAYSDUMP,
syscall32_pages);
up_write(&mm->mmap_sem);
return ret;
}
static int __init init_syscall32(void)
{
char *syscall32_page = (void *)get_zeroed_page(GFP_KERNEL);
if (!syscall32_page)
panic("Cannot allocate syscall32 page");
syscall32_pages[0] = virt_to_page(syscall32_page);
if (use_sysenter > 0) {
memcpy(syscall32_page, syscall32_sysenter,
syscall32_sysenter_end - syscall32_sysenter);
} else {
memcpy(syscall32_page, syscall32_syscall,
syscall32_syscall_end - syscall32_syscall);
}
return 0;
}
__initcall(init_syscall32);
/* May not be __init: called during resume */
void syscall32_cpu_init(void)
{
if (use_sysenter < 0)
use_sysenter = (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL);
/* Load these always in case some future AMD CPU supports
SYSENTER from compat mode too. */
checking_wrmsrl(MSR_IA32_SYSENTER_CS, (u64)__KERNEL_CS);
checking_wrmsrl(MSR_IA32_SYSENTER_ESP, 0ULL);
checking_wrmsrl(MSR_IA32_SYSENTER_EIP, (u64)ia32_sysenter_target);
wrmsrl(MSR_CSTAR, ia32_cstar_target);
}


@@ -1,17 +0,0 @@
/* 32bit VDSOs mapped into user space. */
.section ".init.data","aw"
.globl syscall32_syscall
.globl syscall32_syscall_end
syscall32_syscall:
.incbin "arch/x86/ia32/vsyscall-syscall.so"
syscall32_syscall_end:
.globl syscall32_sysenter
.globl syscall32_sysenter_end
syscall32_sysenter:
.incbin "arch/x86/ia32/vsyscall-sysenter.so"
syscall32_sysenter_end:


@@ -1,163 +0,0 @@
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/user.h>
#include <asm/uaccess.h>
#include <asm/desc.h>
#include <asm/system.h>
#include <asm/ldt.h>
#include <asm/processor.h>
#include <asm/proto.h>
/*
* sys_alloc_thread_area: get a yet unused TLS descriptor index.
*/
static int get_free_idx(void)
{
struct thread_struct *t = &current->thread;
int idx;
for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
if (desc_empty((struct n_desc_struct *)(t->tls_array) + idx))
return idx + GDT_ENTRY_TLS_MIN;
return -ESRCH;
}
/*
* Set a given TLS descriptor:
* When you want addresses > 32bit use arch_prctl()
*/
int do_set_thread_area(struct thread_struct *t, struct user_desc __user *u_info)
{
struct user_desc info;
struct n_desc_struct *desc;
int cpu, idx;
if (copy_from_user(&info, u_info, sizeof(info)))
return -EFAULT;
idx = info.entry_number;
/*
* index -1 means the kernel should try to find and
* allocate an empty descriptor:
*/
if (idx == -1) {
idx = get_free_idx();
if (idx < 0)
return idx;
if (put_user(idx, &u_info->entry_number))
return -EFAULT;
}
if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
return -EINVAL;
desc = ((struct n_desc_struct *)t->tls_array) + idx - GDT_ENTRY_TLS_MIN;
/*
* We must not get preempted while modifying the TLS.
*/
cpu = get_cpu();
if (LDT_empty(&info)) {
desc->a = 0;
desc->b = 0;
} else {
desc->a = LDT_entry_a(&info);
desc->b = LDT_entry_b(&info);
}
if (t == &current->thread)
load_TLS(t, cpu);
put_cpu();
return 0;
}
asmlinkage long sys32_set_thread_area(struct user_desc __user *u_info)
{
return do_set_thread_area(&current->thread, u_info);
}
/*
* Get the current Thread-Local Storage area:
*/
#define GET_BASE(desc) ( \
(((desc)->a >> 16) & 0x0000ffff) | \
(((desc)->b << 16) & 0x00ff0000) | \
( (desc)->b & 0xff000000) )
#define GET_LIMIT(desc) ( \
((desc)->a & 0x0ffff) | \
((desc)->b & 0xf0000) )
#define GET_32BIT(desc) (((desc)->b >> 22) & 1)
#define GET_CONTENTS(desc) (((desc)->b >> 10) & 3)
#define GET_WRITABLE(desc) (((desc)->b >> 9) & 1)
#define GET_LIMIT_PAGES(desc) (((desc)->b >> 23) & 1)
#define GET_PRESENT(desc) (((desc)->b >> 15) & 1)
#define GET_USEABLE(desc) (((desc)->b >> 20) & 1)
#define GET_LONGMODE(desc) (((desc)->b >> 21) & 1)
int do_get_thread_area(struct thread_struct *t, struct user_desc __user *u_info)
{
struct user_desc info;
struct n_desc_struct *desc;
int idx;
if (get_user(idx, &u_info->entry_number))
return -EFAULT;
if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
return -EINVAL;
desc = ((struct n_desc_struct *)t->tls_array) + idx - GDT_ENTRY_TLS_MIN;
memset(&info, 0, sizeof(struct user_desc));
info.entry_number = idx;
info.base_addr = GET_BASE(desc);
info.limit = GET_LIMIT(desc);
info.seg_32bit = GET_32BIT(desc);
info.contents = GET_CONTENTS(desc);
info.read_exec_only = !GET_WRITABLE(desc);
info.limit_in_pages = GET_LIMIT_PAGES(desc);
info.seg_not_present = !GET_PRESENT(desc);
info.useable = GET_USEABLE(desc);
info.lm = GET_LONGMODE(desc);
if (copy_to_user(u_info, &info, sizeof(info)))
return -EFAULT;
return 0;
}
asmlinkage long sys32_get_thread_area(struct user_desc __user *u_info)
{
return do_get_thread_area(&current->thread, u_info);
}
int ia32_child_tls(struct task_struct *p, struct pt_regs *childregs)
{
struct n_desc_struct *desc;
struct user_desc info;
struct user_desc __user *cp;
int idx;
cp = (void __user *)childregs->rsi;
if (copy_from_user(&info, cp, sizeof(info)))
return -EFAULT;
if (LDT_empty(&info))
return -EINVAL;
idx = info.entry_number;
if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
return -EINVAL;
desc = (struct n_desc_struct *)(p->thread.tls_array) + idx - GDT_ENTRY_TLS_MIN;
desc->a = LDT_entry_a(&info);
desc->b = LDT_entry_b(&info);
return 0;
}


@@ -1,143 +0,0 @@
/*
* Common code for the sigreturn entry points on the vsyscall page.
* This code uses SYSCALL_ENTER_KERNEL (either syscall or int $0x80)
* to enter the kernel.
* This file is #include'd by vsyscall-*.S to define them after the
* vsyscall entry point. The addresses we get for these entry points
* by doing ".balign 32" must match in both versions of the page.
*/
.code32
.section .text.sigreturn,"ax"
.balign 32
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
__kernel_sigreturn:
.LSTART_sigreturn:
popl %eax
movl $__NR_ia32_sigreturn, %eax
SYSCALL_ENTER_KERNEL
.LEND_sigreturn:
.size __kernel_sigreturn,.-.LSTART_sigreturn
.section .text.rtsigreturn,"ax"
.balign 32
.globl __kernel_rt_sigreturn
.type __kernel_rt_sigreturn,@function
__kernel_rt_sigreturn:
.LSTART_rt_sigreturn:
movl $__NR_ia32_rt_sigreturn, %eax
SYSCALL_ENTER_KERNEL
.LEND_rt_sigreturn:
.size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
.section .eh_frame,"a",@progbits
.LSTARTFRAMES:
.long .LENDCIES-.LSTARTCIES
.LSTARTCIES:
.long 0 /* CIE ID */
.byte 1 /* Version number */
.string "zRS" /* NUL-terminated augmentation string */
.uleb128 1 /* Code alignment factor */
.sleb128 -4 /* Data alignment factor */
.byte 8 /* Return address register column */
.uleb128 1 /* Augmentation value length */
.byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
.byte 0x0c /* DW_CFA_def_cfa */
.uleb128 4
.uleb128 4
.byte 0x88 /* DW_CFA_offset, column 0x8 */
.uleb128 1
.align 4
.LENDCIES:
.long .LENDFDE2-.LSTARTFDE2 /* Length FDE */
.LSTARTFDE2:
.long .LSTARTFDE2-.LSTARTFRAMES /* CIE pointer */
/* HACK: The dwarf2 unwind routines will subtract 1 from the
return address to get an address in the middle of the
presumed call instruction. Since we didn't get here via
a call, we need to include the nop before the real start
to make up for it. */
.long .LSTART_sigreturn-1-. /* PC-relative start address */
.long .LEND_sigreturn-.LSTART_sigreturn+1
.uleb128 0 /* Augmentation length */
/* What follows are the instructions for the table generation.
We record the locations of each register saved. This is
complicated by the fact that the "CFA" is always assumed to
be the value of the stack pointer in the caller. This means
that we must define the CFA of this body of code to be the
saved value of the stack pointer in the sigcontext. Which
also means that there is no fixed relation to the other
saved registers, which means that we must use DW_CFA_expression
to compute their addresses. It also means that when we
adjust the stack with the popl, we have to do it all over again. */
#define do_cfa_expr(offset) \
.byte 0x0f; /* DW_CFA_def_cfa_expression */ \
.uleb128 1f-0f; /* length */ \
0: .byte 0x74; /* DW_OP_breg4 */ \
.sleb128 offset; /* offset */ \
.byte 0x06; /* DW_OP_deref */ \
1:
#define do_expr(regno, offset) \
.byte 0x10; /* DW_CFA_expression */ \
.uleb128 regno; /* regno */ \
.uleb128 1f-0f; /* length */ \
0: .byte 0x74; /* DW_OP_breg4 */ \
.sleb128 offset; /* offset */ \
1:
do_cfa_expr(IA32_SIGCONTEXT_esp+4)
do_expr(0, IA32_SIGCONTEXT_eax+4)
do_expr(1, IA32_SIGCONTEXT_ecx+4)
do_expr(2, IA32_SIGCONTEXT_edx+4)
do_expr(3, IA32_SIGCONTEXT_ebx+4)
do_expr(5, IA32_SIGCONTEXT_ebp+4)
do_expr(6, IA32_SIGCONTEXT_esi+4)
do_expr(7, IA32_SIGCONTEXT_edi+4)
do_expr(8, IA32_SIGCONTEXT_eip+4)
.byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
do_cfa_expr(IA32_SIGCONTEXT_esp)
do_expr(0, IA32_SIGCONTEXT_eax)
do_expr(1, IA32_SIGCONTEXT_ecx)
do_expr(2, IA32_SIGCONTEXT_edx)
do_expr(3, IA32_SIGCONTEXT_ebx)
do_expr(5, IA32_SIGCONTEXT_ebp)
do_expr(6, IA32_SIGCONTEXT_esi)
do_expr(7, IA32_SIGCONTEXT_edi)
do_expr(8, IA32_SIGCONTEXT_eip)
.align 4
.LENDFDE2:
.long .LENDFDE3-.LSTARTFDE3 /* Length FDE */
.LSTARTFDE3:
.long .LSTARTFDE3-.LSTARTFRAMES /* CIE pointer */
/* HACK: See above wrt unwind library assumptions. */
.long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
.long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
.uleb128 0 /* Augmentation */
/* What follows are the instructions for the table generation.
We record the locations of each register saved. This is
slightly less complicated than the above, since we don't
modify the stack pointer in the process. */
do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_esp)
do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_eax)
do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ecx)
do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_edx)
do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ebx)
do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ebp)
do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_esi)
do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_edi)
do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_eip)
.align 4
.LENDFDE3:
#include "../../x86/kernel/vsyscall-note_32.S"


@@ -1,95 +0,0 @@
/*
* Code for the vsyscall page. This version uses the sysenter instruction.
*/
#include <asm/ia32_unistd.h>
#include <asm/asm-offsets.h>
.code32
.text
.section .text.vsyscall,"ax"
.globl __kernel_vsyscall
.type __kernel_vsyscall,@function
__kernel_vsyscall:
.LSTART_vsyscall:
push %ecx
.Lpush_ecx:
push %edx
.Lpush_edx:
push %ebp
.Lenter_kernel:
movl %esp,%ebp
sysenter
.space 7,0x90
jmp .Lenter_kernel
/* 16: System call normal return point is here! */
pop %ebp
.Lpop_ebp:
pop %edx
.Lpop_edx:
pop %ecx
.Lpop_ecx:
ret
.LEND_vsyscall:
.size __kernel_vsyscall,.-.LSTART_vsyscall
.section .eh_frame,"a",@progbits
.LSTARTFRAME:
.long .LENDCIE-.LSTARTCIE
.LSTARTCIE:
.long 0 /* CIE ID */
.byte 1 /* Version number */
.string "zR" /* NUL-terminated augmentation string */
.uleb128 1 /* Code alignment factor */
.sleb128 -4 /* Data alignment factor */
.byte 8 /* Return address register column */
.uleb128 1 /* Augmentation value length */
.byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
.byte 0x0c /* DW_CFA_def_cfa */
.uleb128 4
.uleb128 4
.byte 0x88 /* DW_CFA_offset, column 0x8 */
.uleb128 1
.align 4
.LENDCIE:
.long .LENDFDE1-.LSTARTFDE1 /* Length FDE */
.LSTARTFDE1:
.long .LSTARTFDE1-.LSTARTFRAME /* CIE pointer */
.long .LSTART_vsyscall-. /* PC-relative start address */
.long .LEND_vsyscall-.LSTART_vsyscall
.uleb128 0 /* Augmentation length */
/* What follows are the instructions for the table generation.
We have to record all changes of the stack pointer. */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lpush_ecx-.LSTART_vsyscall
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x08 /* RA at offset 8 now */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lpush_edx-.Lpush_ecx
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x0c /* RA at offset 12 now */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lenter_kernel-.Lpush_edx
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x10 /* RA at offset 16 now */
.byte 0x85, 0x04 /* DW_CFA_offset %ebp -16 */
/* Finally the epilogue. */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lpop_ebp-.Lenter_kernel
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x12 /* RA at offset 12 now */
.byte 0xc5 /* DW_CFA_restore %ebp */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lpop_edx-.Lpop_ebp
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x08 /* RA at offset 8 now */
.byte 0x04 /* DW_CFA_advance_loc4 */
.long .Lpop_ecx-.Lpop_edx
.byte 0x0e /* DW_CFA_def_cfa_offset */
.byte 0x04 /* RA at offset 4 now */
.align 4
.LENDFDE1:
#define SYSCALL_ENTER_KERNEL int $0x80
#include "vsyscall-sigreturn.S"


@@ -1,80 +0,0 @@
/*
* Linker script for vsyscall DSO. The vsyscall page is an ELF shared
* object prelinked to its virtual address. This script controls its layout.
*/
/* This must match <asm/fixmap.h>. */
VSYSCALL_BASE = 0xffffe000;
SECTIONS
{
. = VSYSCALL_BASE + SIZEOF_HEADERS;
.hash : { *(.hash) } :text
.gnu.hash : { *(.gnu.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
/* This linker script is used both with -r and with -shared.
For the layouts to match, we need to skip more than enough
space for the dynamic symbol table et al. If this amount
is insufficient, ld -shared will barf. Just increase it here. */
. = VSYSCALL_BASE + 0x400;
.text.vsyscall : { *(.text.vsyscall) } :text =0x90909090
/* This is an 32bit object and we cannot easily get the offsets
into the 64bit kernel. Just hardcode them here. This assumes
that all the stubs don't need more than 0x100 bytes. */
. = VSYSCALL_BASE + 0x500;
.text.sigreturn : { *(.text.sigreturn) } :text =0x90909090
. = VSYSCALL_BASE + 0x600;
.text.rtsigreturn : { *(.text.rtsigreturn) } :text =0x90909090
.note : { *(.note.*) } :text :note
.eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
.eh_frame : { KEEP (*(.eh_frame)) } :text
.dynamic : { *(.dynamic) } :text :dynamic
.useless : {
*(.got.plt) *(.got)
*(.data .data.* .gnu.linkonce.d.*)
*(.dynbss)
*(.bss .bss.* .gnu.linkonce.b.*)
} :text
}
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
*/
PHDRS
{
text PT_LOAD FILEHDR PHDRS FLAGS(5); /* PF_R|PF_X */
dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
note PT_NOTE FLAGS(4); /* PF_R */
eh_frame_hdr 0x6474e550; /* PT_GNU_EH_FRAME, but ld doesn't match the name */
}
/*
* This controls what symbols we export from the DSO.
*/
VERSION
{
LINUX_2.5 {
global:
__kernel_vsyscall;
__kernel_sigreturn;
__kernel_rt_sigreturn;
local: *;
};
}
/* The ELF entry point can be used to set the AT_SYSINFO value. */
ENTRY(__kernel_vsyscall);


@@ -1,9 +1,91 @@
-ifeq ($(CONFIG_X86_32),y)
-include ${srctree}/arch/x86/kernel/Makefile_32
-else
-include ${srctree}/arch/x86/kernel/Makefile_64
-endif
-
-# Workaround to delete .lds files with make clean
-# The problem is that we do not enter Makefile_32 with make clean.
-clean-files := vsyscall*.lds vsyscall*.so
+#
+# Makefile for the linux kernel.
+#
+
+extra-y := head_$(BITS).o init_task.o vmlinux.lds
+extra-$(CONFIG_X86_64) += head64.o
+
+CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
+CFLAGS_vsyscall_64.o := $(PROFILING) -g0
+
+obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
+obj-y += traps_$(BITS).o irq_$(BITS).o
+obj-y += time_$(BITS).o ioport.o ldt.o
+obj-y += setup_$(BITS).o i8259_$(BITS).o
+obj-$(CONFIG_X86_32) += sys_i386_32.o i386_ksyms_32.o
+obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o
+obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o setup64.o
+obj-y += pci-dma_$(BITS).o bootflag.o e820_$(BITS).o
+obj-y += quirks.o i8237.o topology.o kdebugfs.o
+obj-y += alternative.o i8253.o
+obj-$(CONFIG_X86_64) += pci-nommu_64.o bugs_64.o
+obj-y += tsc_$(BITS).o io_delay.o rtc.o
+
+obj-y += i387.o
+obj-y += ptrace.o
+obj-y += ds.o
+obj-$(CONFIG_X86_32) += tls.o
+obj-$(CONFIG_IA32_EMULATION) += tls.o
+obj-y += step.o
+obj-$(CONFIG_STACKTRACE) += stacktrace.o
+obj-y += cpu/
+obj-y += acpi/
+obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o
+obj-$(CONFIG_X86_64) += reboot.o
+obj-$(CONFIG_MCA) += mca_32.o
+obj-$(CONFIG_X86_MSR) += msr.o
+obj-$(CONFIG_X86_CPUID) += cpuid.o
+obj-$(CONFIG_MICROCODE) += microcode.o
+obj-$(CONFIG_PCI) += early-quirks.o
+obj-$(CONFIG_APM) += apm_32.o
+obj-$(CONFIG_X86_SMP) += smp_$(BITS).o smpboot_$(BITS).o tsc_sync.o
+obj-$(CONFIG_X86_32_SMP) += smpcommon_32.o
+obj-$(CONFIG_X86_64_SMP) += smp_64.o smpboot_64.o tsc_sync.o
+obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_$(BITS).o
+obj-$(CONFIG_X86_MPPARSE) += mpparse_$(BITS).o
+obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi_$(BITS).o
+obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o
+obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
+obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
+obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
+obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
+obj-$(CONFIG_X86_NUMAQ) += numaq_32.o
+obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
+obj-$(CONFIG_X86_VSMP) += vsmp_64.o
+obj-$(CONFIG_KPROBES) += kprobes.o
+obj-$(CONFIG_MODULES) += module_$(BITS).o
+obj-$(CONFIG_ACPI_SRAT) += srat_32.o
+obj-$(CONFIG_EFI) += efi.o efi_$(BITS).o efi_stub_$(BITS).o
+obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
+obj-$(CONFIG_VM86) += vm86_32.o
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_HPET_TIMER) += hpet.o
+
+obj-$(CONFIG_K8_NB) += k8.o
+obj-$(CONFIG_MGEODE_LX) += geode_32.o mfgpt_32.o
+obj-$(CONFIG_DEBUG_RODATA_TEST) += test_rodata.o
+obj-$(CONFIG_DEBUG_NX_TEST) += test_nx.o
+
+obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o
+obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o
+
+ifdef CONFIG_INPUT_PCSPKR
+obj-y += pcspeaker.o
+endif
+
+obj-$(CONFIG_SCx200) += scx200_32.o
+
+###
+# 64 bit specific files
+ifeq ($(CONFIG_X86_64),y)
+obj-y += genapic_64.o genapic_flat_64.o
+obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
+obj-$(CONFIG_AUDIT) += audit_64.o
+
+obj-$(CONFIG_PM) += suspend_64.o
+obj-$(CONFIG_HIBERNATION) += suspend_asm_64.o
+
+obj-$(CONFIG_GART_IOMMU) += pci-gart_64.o aperture_64.o
+obj-$(CONFIG_CALGARY_IOMMU) += pci-calgary_64.o tce_64.o
+obj-$(CONFIG_SWIOTLB) += pci-swiotlb_64.o
+endif


@@ -1,88 +0,0 @@
#
# Makefile for the linux kernel.
#
extra-y := head_32.o init_task.o vmlinux.lds
CPPFLAGS_vmlinux.lds += -Ui386
obj-y := process_32.o signal_32.o entry_32.o traps_32.o irq_32.o \
ptrace_32.o time_32.o ioport_32.o ldt_32.o setup_32.o i8259_32.o sys_i386_32.o \
pci-dma_32.o i386_ksyms_32.o i387_32.o bootflag.o e820_32.o\
quirks.o i8237.o topology.o alternative.o i8253.o tsc_32.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += cpu/
obj-y += acpi/
obj-$(CONFIG_X86_BIOS_REBOOT) += reboot_32.o
obj-$(CONFIG_MCA) += mca_32.o
obj-$(CONFIG_X86_MSR) += msr.o
obj-$(CONFIG_X86_CPUID) += cpuid.o
obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_PCI) += early-quirks.o
obj-$(CONFIG_APM) += apm_32.o
obj-$(CONFIG_X86_SMP) += smp_32.o smpboot_32.o tsc_sync.o
obj-$(CONFIG_SMP) += smpcommon_32.o
obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_32.o
obj-$(CONFIG_X86_MPPARSE) += mpparse_32.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic_32.o nmi_32.o
obj-$(CONFIG_X86_IO_APIC) += io_apic_32.o
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
obj-$(CONFIG_KEXEC) += machine_kexec_32.o relocate_kernel_32.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_32.o
obj-$(CONFIG_X86_NUMAQ) += numaq_32.o
obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
obj-$(CONFIG_KPROBES) += kprobes_32.o
obj-$(CONFIG_MODULES) += module_32.o
obj-y += sysenter_32.o vsyscall_32.o
obj-$(CONFIG_ACPI_SRAT) += srat_32.o
obj-$(CONFIG_EFI) += efi_32.o efi_stub_32.o
obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
obj-$(CONFIG_VM86) += vm86_32.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_HPET_TIMER) += hpet.o
obj-$(CONFIG_K8_NB) += k8.o
obj-$(CONFIG_MGEODE_LX) += geode_32.o mfgpt_32.o
obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o
obj-$(CONFIG_PARAVIRT) += paravirt_32.o
obj-y += pcspeaker.o
obj-$(CONFIG_SCx200) += scx200_32.o
# vsyscall_32.o contains the vsyscall DSO images as __initdata.
# We must build both images before we can assemble it.
# Note: kbuild does not track this dependency due to usage of .incbin
$(obj)/vsyscall_32.o: $(obj)/vsyscall-int80_32.so $(obj)/vsyscall-sysenter_32.so
targets += $(foreach F,int80 sysenter,vsyscall-$F_32.o vsyscall-$F_32.so)
targets += vsyscall-note_32.o vsyscall_32.lds
# The DSO images are built using a special linker script.
quiet_cmd_syscall = SYSCALL $@
cmd_syscall = $(CC) -m elf_i386 -nostdlib $(SYSCFLAGS_$(@F)) \
-Wl,-T,$(filter-out FORCE,$^) -o $@
export CPPFLAGS_vsyscall_32.lds += -P -C -Ui386
vsyscall-flags = -shared -s -Wl,-soname=linux-gate.so.1 \
$(call ld-option, -Wl$(comma)--hash-style=sysv)
SYSCFLAGS_vsyscall-sysenter_32.so = $(vsyscall-flags)
SYSCFLAGS_vsyscall-int80_32.so = $(vsyscall-flags)
$(obj)/vsyscall-int80_32.so $(obj)/vsyscall-sysenter_32.so: \
$(obj)/vsyscall-%.so: $(src)/vsyscall_32.lds \
$(obj)/vsyscall-%.o $(obj)/vsyscall-note_32.o FORCE
$(call if_changed,syscall)
# We also create a special relocatable object that should mirror the symbol
# table and layout of the linked DSO. With ld -R we can then refer to
# these symbols in the kernel code rather than hand-coded addresses.
extra-y += vsyscall-syms.o
$(obj)/built-in.o: $(obj)/vsyscall-syms.o
$(obj)/built-in.o: ld_flags += -R $(obj)/vsyscall-syms.o
SYSCFLAGS_vsyscall-syms.o = -r
$(obj)/vsyscall-syms.o: $(src)/vsyscall_32.lds \
$(obj)/vsyscall-sysenter_32.o $(obj)/vsyscall-note_32.o FORCE
$(call if_changed,syscall)


@ -1,45 +0,0 @@
#
# Makefile for the linux kernel.
#
extra-y := head_64.o head64.o init_task.o vmlinux.lds
CPPFLAGS_vmlinux.lds += -Ux86_64
EXTRA_AFLAGS := -traditional
obj-y := process_64.o signal_64.o entry_64.o traps_64.o irq_64.o \
ptrace_64.o time_64.o ioport_64.o ldt_64.o setup_64.o i8259_64.o sys_x86_64.o \
x8664_ksyms_64.o i387_64.o syscall_64.o vsyscall_64.o \
setup64.o bootflag.o e820_64.o reboot_64.o quirks.o i8237.o \
pci-dma_64.o pci-nommu_64.o alternative.o hpet.o tsc_64.o bugs_64.o \
i8253.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += cpu/
obj-y += acpi/
obj-$(CONFIG_X86_MSR) += msr.o
obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_X86_CPUID) += cpuid.o
obj-$(CONFIG_SMP) += smp_64.o smpboot_64.o trampoline_64.o tsc_sync.o
obj-y += apic_64.o nmi_64.o
obj-y += io_apic_64.o mpparse_64.o genapic_64.o genapic_flat_64.o
obj-$(CONFIG_KEXEC) += machine_kexec_64.o relocate_kernel_64.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_64.o
obj-$(CONFIG_PM) += suspend_64.o
obj-$(CONFIG_HIBERNATION) += suspend_asm_64.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_GART_IOMMU) += pci-gart_64.o aperture_64.o
obj-$(CONFIG_CALGARY_IOMMU) += pci-calgary_64.o tce_64.o
obj-$(CONFIG_SWIOTLB) += pci-swiotlb_64.o
obj-$(CONFIG_KPROBES) += kprobes_64.o
obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
obj-$(CONFIG_X86_VSMP) += vsmp_64.o
obj-$(CONFIG_K8_NB) += k8.o
obj-$(CONFIG_AUDIT) += audit_64.o
obj-$(CONFIG_MODULES) += module_64.o
obj-$(CONFIG_PCI) += early-quirks.o
obj-y += topology.o
obj-y += pcspeaker.o
CFLAGS_vsyscall_64.o := $(PROFILING) -g0


@ -1,5 +1,5 @@
 obj-$(CONFIG_ACPI)		+= boot.o
-obj-$(CONFIG_ACPI_SLEEP)	+= sleep_$(BITS).o wakeup_$(BITS).o
+obj-$(CONFIG_ACPI_SLEEP)	+= sleep.o wakeup_$(BITS).o
 ifneq ($(CONFIG_ACPI_PROCESSOR),)
 obj-y				+= cstate.o processor.o


@ -0,0 +1,87 @@
/*
* sleep.c - x86-specific ACPI sleep support.
*
* Copyright (C) 2001-2003 Patrick Mochel
* Copyright (C) 2001-2003 Pavel Machek <pavel@suse.cz>
*/
#include <linux/acpi.h>
#include <linux/bootmem.h>
#include <linux/dmi.h>
#include <linux/cpumask.h>
#include <asm/smp.h>
/* address in low memory of the wakeup routine. */
unsigned long acpi_wakeup_address = 0;
unsigned long acpi_realmode_flags;
extern char wakeup_start, wakeup_end;
extern unsigned long acpi_copy_wakeup_routine(unsigned long);
/**
* acpi_save_state_mem - save kernel state
*
* Create an identity mapped page table and copy the wakeup routine to
* low memory.
*/
int acpi_save_state_mem(void)
{
if (!acpi_wakeup_address) {
printk(KERN_ERR "Could not allocate memory during boot, S3 disabled\n");
return -ENOMEM;
}
memcpy((void *)acpi_wakeup_address, &wakeup_start,
&wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address);
return 0;
}
/*
* acpi_restore_state - undo effects of acpi_save_state_mem
*/
void acpi_restore_state_mem(void)
{
}
/**
* acpi_reserve_bootmem - do _very_ early ACPI initialisation
*
* We allocate a page from the first 1MB of memory for the wakeup
* routine for when we come back from a sleep state. The
* runtime allocator allows specification of <16MB pages, but not
* <1MB pages.
*/
void __init acpi_reserve_bootmem(void)
{
if ((&wakeup_end - &wakeup_start) > PAGE_SIZE*2) {
printk(KERN_ERR
"ACPI: Wakeup code way too big, S3 disabled.\n");
return;
}
acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE*2);
if (!acpi_wakeup_address)
printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
}
static int __init acpi_sleep_setup(char *str)
{
while ((str != NULL) && (*str != '\0')) {
if (strncmp(str, "s3_bios", 7) == 0)
acpi_realmode_flags |= 1;
if (strncmp(str, "s3_mode", 7) == 0)
acpi_realmode_flags |= 2;
if (strncmp(str, "s3_beep", 7) == 0)
acpi_realmode_flags |= 4;
str = strchr(str, ',');
if (str != NULL)
str += strspn(str, ", \t");
}
return 1;
}
__setup("acpi_sleep=", acpi_sleep_setup);


@ -12,76 +12,6 @@
 #include <asm/smp.h>
/* address in low memory of the wakeup routine. */
unsigned long acpi_wakeup_address = 0;
unsigned long acpi_realmode_flags;
extern char wakeup_start, wakeup_end;
extern unsigned long FASTCALL(acpi_copy_wakeup_routine(unsigned long));
/**
* acpi_save_state_mem - save kernel state
*
* Create an identity mapped page table and copy the wakeup routine to
* low memory.
*/
int acpi_save_state_mem(void)
{
if (!acpi_wakeup_address)
return 1;
memcpy((void *)acpi_wakeup_address, &wakeup_start,
&wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address);
return 0;
}
/*
* acpi_restore_state - undo effects of acpi_save_state_mem
*/
void acpi_restore_state_mem(void)
{
}
/**
* acpi_reserve_bootmem - do _very_ early ACPI initialisation
*
* We allocate a page from the first 1MB of memory for the wakeup
* routine for when we come back from a sleep state. The
* runtime allocator allows specification of <16MB pages, but not
* <1MB pages.
*/
void __init acpi_reserve_bootmem(void)
{
if ((&wakeup_end - &wakeup_start) > PAGE_SIZE) {
printk(KERN_ERR
"ACPI: Wakeup code way too big, S3 disabled.\n");
return;
}
acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE);
if (!acpi_wakeup_address)
printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
}
static int __init acpi_sleep_setup(char *str)
{
while ((str != NULL) && (*str != '\0')) {
if (strncmp(str, "s3_bios", 7) == 0)
acpi_realmode_flags |= 1;
if (strncmp(str, "s3_mode", 7) == 0)
acpi_realmode_flags |= 2;
if (strncmp(str, "s3_beep", 7) == 0)
acpi_realmode_flags |= 4;
str = strchr(str, ',');
if (str != NULL)
str += strspn(str, ", \t");
}
return 1;
}
__setup("acpi_sleep=", acpi_sleep_setup);
 /* Ouch, we want to delete this. We already have better version in userspace, in
    s2ram from suspend.sf.net project */
 static __init int reset_videomode_after_s3(const struct dmi_system_id *d)


@ -1,117 +0,0 @@
/*
* acpi.c - Architecture-Specific Low-Level ACPI Support
*
* Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
* Copyright (C) 2001 Jun Nakajima <jun.nakajima@intel.com>
* Copyright (C) 2001 Patrick Mochel <mochel@osdl.org>
* Copyright (C) 2002 Andi Kleen, SuSE Labs (x86-64 port)
* Copyright (C) 2003 Pavel Machek, SuSE Labs
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/bootmem.h>
#include <linux/acpi.h>
#include <linux/cpumask.h>
#include <asm/mpspec.h>
#include <asm/io.h>
#include <asm/apic.h>
#include <asm/apicdef.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/io_apic.h>
#include <asm/proto.h>
#include <asm/tlbflush.h>
/* --------------------------------------------------------------------------
Low-Level Sleep Support
-------------------------------------------------------------------------- */
/* address in low memory of the wakeup routine. */
unsigned long acpi_wakeup_address = 0;
unsigned long acpi_realmode_flags;
extern char wakeup_start, wakeup_end;
extern unsigned long acpi_copy_wakeup_routine(unsigned long);
/**
* acpi_save_state_mem - save kernel state
*
* Create an identity mapped page table and copy the wakeup routine to
* low memory.
*/
int acpi_save_state_mem(void)
{
memcpy((void *)acpi_wakeup_address, &wakeup_start,
&wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address);
return 0;
}
/*
* acpi_restore_state
*/
void acpi_restore_state_mem(void)
{
}
/**
* acpi_reserve_bootmem - do _very_ early ACPI initialisation
*
* We allocate a page in low memory for the wakeup
* routine for when we come back from a sleep state. The
* runtime allocator allows specification of <16M pages, but not
* <1M pages.
*/
void __init acpi_reserve_bootmem(void)
{
acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE*2);
if ((&wakeup_end - &wakeup_start) > (PAGE_SIZE*2))
printk(KERN_CRIT
"ACPI: Wakeup code way too big, will crash on attempt"
" to suspend\n");
}
static int __init acpi_sleep_setup(char *str)
{
while ((str != NULL) && (*str != '\0')) {
if (strncmp(str, "s3_bios", 7) == 0)
acpi_realmode_flags |= 1;
if (strncmp(str, "s3_mode", 7) == 0)
acpi_realmode_flags |= 2;
if (strncmp(str, "s3_beep", 7) == 0)
acpi_realmode_flags |= 4;
str = strchr(str, ',');
if (str != NULL)
str += strspn(str, ", \t");
}
return 1;
}
__setup("acpi_sleep=", acpi_sleep_setup);


@ -1,4 +1,4 @@
-.text
+.section .text.page_aligned
 #include <linux/linkage.h>
 #include <asm/segment.h>
 #include <asm/page.h>


@ -344,13 +344,13 @@ do_suspend_lowlevel:
 	call	save_processor_state
 	movq	$saved_context, %rax
-	movq	%rsp, pt_regs_rsp(%rax)
-	movq	%rbp, pt_regs_rbp(%rax)
-	movq	%rsi, pt_regs_rsi(%rax)
-	movq	%rdi, pt_regs_rdi(%rax)
-	movq	%rbx, pt_regs_rbx(%rax)
-	movq	%rcx, pt_regs_rcx(%rax)
-	movq	%rdx, pt_regs_rdx(%rax)
+	movq	%rsp, pt_regs_sp(%rax)
+	movq	%rbp, pt_regs_bp(%rax)
+	movq	%rsi, pt_regs_si(%rax)
+	movq	%rdi, pt_regs_di(%rax)
+	movq	%rbx, pt_regs_bx(%rax)
+	movq	%rcx, pt_regs_cx(%rax)
+	movq	%rdx, pt_regs_dx(%rax)
 	movq	%r8, pt_regs_r8(%rax)
 	movq	%r9, pt_regs_r9(%rax)
 	movq	%r10, pt_regs_r10(%rax)
@ -360,7 +360,7 @@ do_suspend_lowlevel:
 	movq	%r14, pt_regs_r14(%rax)
 	movq	%r15, pt_regs_r15(%rax)
 	pushfq
-	popq	pt_regs_eflags(%rax)
+	popq	pt_regs_flags(%rax)
 	movq	$.L97, saved_rip(%rip)
@ -391,15 +391,15 @@ do_suspend_lowlevel:
 	movq	%rbx, %cr2
 	movq	saved_context_cr0(%rax), %rbx
 	movq	%rbx, %cr0
-	pushq	pt_regs_eflags(%rax)
+	pushq	pt_regs_flags(%rax)
 	popfq
-	movq	pt_regs_rsp(%rax), %rsp
-	movq	pt_regs_rbp(%rax), %rbp
-	movq	pt_regs_rsi(%rax), %rsi
-	movq	pt_regs_rdi(%rax), %rdi
-	movq	pt_regs_rbx(%rax), %rbx
-	movq	pt_regs_rcx(%rax), %rcx
-	movq	pt_regs_rdx(%rax), %rdx
+	movq	pt_regs_sp(%rax), %rsp
+	movq	pt_regs_bp(%rax), %rbp
+	movq	pt_regs_si(%rax), %rsi
+	movq	pt_regs_di(%rax), %rdi
+	movq	pt_regs_bx(%rax), %rbx
+	movq	pt_regs_cx(%rax), %rcx
+	movq	pt_regs_dx(%rax), %rdx
 	movq	pt_regs_r8(%rax), %r8
 	movq	pt_regs_r9(%rax), %r9
 	movq	pt_regs_r10(%rax), %r10


@ -273,6 +273,7 @@ struct smp_alt_module {
 };
 static LIST_HEAD(smp_alt_modules);
 static DEFINE_SPINLOCK(smp_alt);
+static int smp_mode = 1;	/* protected by smp_alt */

 void alternatives_smp_module_add(struct module *mod, char *name,
 				 void *locks, void *locks_end,
@ -341,12 +342,13 @@ void alternatives_smp_switch(int smp)
 #ifdef CONFIG_LOCKDEP
 	/*
-	 * A not yet fixed binutils section handling bug prevents
-	 * alternatives-replacement from working reliably, so turn
-	 * it off:
+	 * Older binutils section handling bug prevented
+	 * alternatives-replacement from working reliably.
+	 *
+	 * If this still occurs then you should see a hang
+	 * or crash shortly after this line:
 	 */
-	printk("lockdep: not fixing up alternatives.\n");
-	return;
+	printk("lockdep: fixing up alternatives.\n");
 #endif

 	if (noreplace_smp || smp_alt_once)
@ -354,21 +356,29 @@ void alternatives_smp_switch(int smp)
 	BUG_ON(!smp && (num_online_cpus() > 1));

 	spin_lock_irqsave(&smp_alt, flags);
-	if (smp) {
+
+	/*
+	 * Avoid unnecessary switches because it forces JIT based VMs to
+	 * throw away all cached translations, which can be quite costly.
+	 */
+	if (smp == smp_mode) {
+		/* nothing */
+	} else if (smp) {
 		printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
-		clear_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
-		clear_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
+		clear_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
+		clear_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
 		list_for_each_entry(mod, &smp_alt_modules, next)
 			alternatives_smp_lock(mod->locks, mod->locks_end,
 					      mod->text, mod->text_end);
 	} else {
 		printk(KERN_INFO "SMP alternatives: switching to UP code\n");
-		set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
-		set_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
+		set_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
 		list_for_each_entry(mod, &smp_alt_modules, next)
 			alternatives_smp_unlock(mod->locks, mod->locks_end,
 						mod->text, mod->text_end);
 	}
+	smp_mode = smp;
 	spin_unlock_irqrestore(&smp_alt, flags);
 }
@ -431,8 +441,9 @@ void __init alternative_instructions(void)
 	if (smp_alt_once) {
 		if (1 == num_possible_cpus()) {
 			printk(KERN_INFO "SMP alternatives: switching to UP code\n");
-			set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
-			set_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
+			set_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
+			set_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
 			alternatives_smp_unlock(__smp_locks, __smp_locks_end,
 						_text, _etext);
 		}
@ -440,7 +451,10 @@ void __init alternative_instructions(void)
 		alternatives_smp_module_add(NULL, "core kernel",
 					    __smp_locks, __smp_locks_end,
 					    _text, _etext);
-		alternatives_smp_switch(0);
+
+		/* Only switch to UP mode if we don't immediately boot others */
+		if (num_possible_cpus() == 1 || setup_max_cpus <= 1)
+			alternatives_smp_switch(0);
 	}
 #endif
 	apply_paravirt(__parainstructions, __parainstructions_end);


@ -1,12 +1,12 @@
 /*
  * Firmware replacement code.
  *
  * Work around broken BIOSes that don't set an aperture or only set the
  * aperture in the AGP bridge.
  * If all fails map the aperture over some low memory. This is cheaper than
  * doing bounce buffering. The memory is lost. This is done at early boot
  * because only the bootmem allocator can allocate 32+MB.
  *
  * Copyright 2002 Andi Kleen, SuSE Labs.
  */
 #include <linux/kernel.h>
@ -30,7 +30,7 @@ int gart_iommu_aperture_disabled __initdata = 0;
 int gart_iommu_aperture_allowed __initdata = 0;
 int fallback_aper_order __initdata = 1; /* 64MB */
 int fallback_aper_force __initdata = 0;
 int fix_aperture __initdata = 1;
@ -49,167 +49,270 @@ static void __init insert_aperture_resource(u32 aper_base, u32 aper_size)
 /* This code runs before the PCI subsystem is initialized, so just
    access the northbridge directly. */
 static u32 __init allocate_aperture(void)
 {
 	u32 aper_size;
 	void *p;

 	if (fallback_aper_order > 7)
 		fallback_aper_order = 7;
 	aper_size = (32 * 1024 * 1024) << fallback_aper_order;

 	/*
-	 * Aperture has to be naturally aligned. This means an 2GB aperture won't
-	 * have much chance of finding a place in the lower 4GB of memory.
-	 * Unfortunately we cannot move it up because that would make the
-	 * IOMMU useless.
+	 * Aperture has to be naturally aligned. This means a 2GB aperture
+	 * won't have much chance of finding a place in the lower 4GB of
+	 * memory. Unfortunately we cannot move it up because that would
+	 * make the IOMMU useless.
 	 */
 	p = __alloc_bootmem_nopanic(aper_size, aper_size, 0);
 	if (!p || __pa(p)+aper_size > 0xffffffff) {
-		printk("Cannot allocate aperture memory hole (%p,%uK)\n",
-		       p, aper_size>>10);
+		printk(KERN_ERR
+			"Cannot allocate aperture memory hole (%p,%uK)\n",
+			p, aper_size>>10);
 		if (p)
 			free_bootmem(__pa(p), aper_size);
 		return 0;
 	}
-	printk("Mapping aperture over %d KB of RAM @ %lx\n",
+	printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
 		aper_size >> 10, __pa(p));
 	insert_aperture_resource((u32)__pa(p), aper_size);
+
 	return (u32)__pa(p);
 }
 static int __init aperture_valid(u64 aper_base, u32 aper_size)
 {
 	if (!aper_base)
 		return 0;
-	if (aper_size < 64*1024*1024) {
-		printk("Aperture too small (%d MB)\n", aper_size>>20);
-		return 0;
-	}
+
 	if (aper_base + aper_size > 0x100000000UL) {
-		printk("Aperture beyond 4GB. Ignoring.\n");
+		printk(KERN_ERR "Aperture beyond 4GB. Ignoring.\n");
 		return 0;
 	}
 	if (e820_any_mapped(aper_base, aper_base + aper_size, E820_RAM)) {
-		printk("Aperture pointing to e820 RAM. Ignoring.\n");
+		printk(KERN_ERR "Aperture pointing to e820 RAM. Ignoring.\n");
 		return 0;
 	}
+	if (aper_size < 64*1024*1024) {
+		printk(KERN_ERR "Aperture too small (%d MB)\n", aper_size>>20);
+		return 0;
+	}

 	return 1;
 }
 /* Find a PCI capability */
 static __u32 __init find_cap(int num, int slot, int func, int cap)
 {
-	u8 pos;
 	int bytes;
+	u8 pos;

-	if (!(read_pci_config_16(num,slot,func,PCI_STATUS) & PCI_STATUS_CAP_LIST))
+	if (!(read_pci_config_16(num, slot, func, PCI_STATUS) &
+						PCI_STATUS_CAP_LIST))
 		return 0;
-	pos = read_pci_config_byte(num,slot,func,PCI_CAPABILITY_LIST);
-	for (bytes = 0; bytes < 48 && pos >= 0x40; bytes++) {
+
+	pos = read_pci_config_byte(num, slot, func, PCI_CAPABILITY_LIST);
+	for (bytes = 0; bytes < 48 && pos >= 0x40; bytes++) {
 		u8 id;
-		pos &= ~3;
-		id = read_pci_config_byte(num,slot,func,pos+PCI_CAP_LIST_ID);
+
+		pos &= ~3;
+		id = read_pci_config_byte(num, slot, func, pos+PCI_CAP_LIST_ID);
 		if (id == 0xff)
 			break;
 		if (id == cap)
 			return pos;
-		pos = read_pci_config_byte(num,slot,func,pos+PCI_CAP_LIST_NEXT);
-	}
+		pos = read_pci_config_byte(num, slot, func,
+						pos+PCI_CAP_LIST_NEXT);
+	}
 	return 0;
 }
 /* Read a standard AGPv3 bridge header */
 static __u32 __init read_agp(int num, int slot, int func, int cap, u32 *order)
 {
 	u32 apsize;
 	u32 apsizereg;
 	int nbits;
 	u32 aper_low, aper_hi;
 	u64 aper;

-	printk("AGP bridge at %02x:%02x:%02x\n", num, slot, func);
-	apsizereg = read_pci_config_16(num,slot,func, cap + 0x14);
+	printk(KERN_INFO "AGP bridge at %02x:%02x:%02x\n", num, slot, func);
+	apsizereg = read_pci_config_16(num, slot, func, cap + 0x14);
 	if (apsizereg == 0xffffffff) {
-		printk("APSIZE in AGP bridge unreadable\n");
+		printk(KERN_ERR "APSIZE in AGP bridge unreadable\n");
 		return 0;
 	}

 	apsize = apsizereg & 0xfff;
 	/* Some BIOS use weird encodings not in the AGPv3 table. */
 	if (apsize & 0xff)
 		apsize |= 0xf00;
 	nbits = hweight16(apsize);
 	*order = 7 - nbits;
 	if ((int)*order < 0) /* < 32MB */
 		*order = 0;

-	aper_low = read_pci_config(num,slot,func, 0x10);
-	aper_hi = read_pci_config(num,slot,func,0x14);
+	aper_low = read_pci_config(num, slot, func, 0x10);
+	aper_hi = read_pci_config(num, slot, func, 0x14);
 	aper = (aper_low & ~((1<<22)-1)) | ((u64)aper_hi << 32);

-	printk("Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
-	       aper, 32 << *order, apsizereg);
+	printk(KERN_INFO "Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
+			aper, 32 << *order, apsizereg);
 	if (!aperture_valid(aper, (32*1024*1024) << *order))
 		return 0;
+
 	return (u32)aper;
 }

-/* Look for an AGP bridge. Windows only expects the aperture in the
-   AGP bridge and some BIOS forget to initialize the Northbridge too.
-   Work around this here.
-
-   Do an PCI bus scan by hand because we're running before the PCI
-   subsystem.
-
-   All K8 AGP bridges are AGPv3 compliant, so we can do this scan
-   generically. It's probably overkill to always scan all slots because
-   the AGP bridges should be always an own bus on the HT hierarchy,
-   but do it here for future safety. */
+/*
+ * Look for an AGP bridge. Windows only expects the aperture in the
+ * AGP bridge and some BIOS forget to initialize the Northbridge too.
+ * Work around this here.
+ *
+ * Do an PCI bus scan by hand because we're running before the PCI
+ * subsystem.
+ *
+ * All K8 AGP bridges are AGPv3 compliant, so we can do this scan
+ * generically. It's probably overkill to always scan all slots because
+ * the AGP bridges should be always an own bus on the HT hierarchy,
+ * but do it here for future safety.
+ */
 static __u32 __init search_agp_bridge(u32 *order, int *valid_agp)
 {
 	int num, slot, func;

 	/* Poor man's PCI discovery */
 	for (num = 0; num < 256; num++) {
 		for (slot = 0; slot < 32; slot++) {
 			for (func = 0; func < 8; func++) {
 				u32 class, cap;
 				u8 type;
-				class = read_pci_config(num,slot,func,
+				class = read_pci_config(num, slot, func,
 							PCI_CLASS_REVISION);
 				if (class == 0xffffffff)
 					break;

 				switch (class >> 16) {
 				case PCI_CLASS_BRIDGE_HOST:
 				case PCI_CLASS_BRIDGE_OTHER: /* needed? */
 					/* AGP bridge? */
-					cap = find_cap(num,slot,func,PCI_CAP_ID_AGP);
+					cap = find_cap(num, slot, func,
+							PCI_CAP_ID_AGP);
 					if (!cap)
 						break;
 					*valid_agp = 1;
-					return read_agp(num,slot,func,cap,order);
-				}
+					return read_agp(num, slot, func, cap,
+							order);
+				}

 				/* No multi-function device? */
-				type = read_pci_config_byte(num,slot,func,
+				type = read_pci_config_byte(num, slot, func,
 							PCI_HEADER_TYPE);
 				if (!(type & 0x80))
 					break;
 			}
 		}
 	}
-	printk("No AGP bridge found\n");
+	printk(KERN_INFO "No AGP bridge found\n");
+
 	return 0;
 }
static int gart_fix_e820 __initdata = 1;
static int __init parse_gart_mem(char *p)
{
if (!p)
return -EINVAL;
if (!strncmp(p, "off", 3))
gart_fix_e820 = 0;
else if (!strncmp(p, "on", 2))
gart_fix_e820 = 1;
return 0;
}
early_param("gart_fix_e820", parse_gart_mem);
void __init early_gart_iommu_check(void)
{
/*
* in case it is enabled before, esp for kexec/kdump,
* previous kernel already enable that. memset called
* by allocate_aperture/__alloc_bootmem_nopanic cause restart.
* or second kernel have different position for GART hole. and new
* kernel could use hole as RAM that is still used by GART set by
* first kernel
* or BIOS forget to put that in reserved.
* try to update e820 to make that region as reserved.
*/
int fix, num;
u32 ctl;
u32 aper_size = 0, aper_order = 0, last_aper_order = 0;
u64 aper_base = 0, last_aper_base = 0;
int aper_enabled = 0, last_aper_enabled = 0;
if (!early_pci_allowed())
return;
fix = 0;
for (num = 24; num < 32; num++) {
if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
continue;
ctl = read_pci_config(0, num, 3, 0x90);
aper_enabled = ctl & 1;
aper_order = (ctl >> 1) & 7;
aper_size = (32 * 1024 * 1024) << aper_order;
aper_base = read_pci_config(0, num, 3, 0x94) & 0x7fff;
aper_base <<= 25;
if ((last_aper_order && aper_order != last_aper_order) ||
(last_aper_base && aper_base != last_aper_base) ||
(last_aper_enabled && aper_enabled != last_aper_enabled)) {
fix = 1;
break;
}
last_aper_order = aper_order;
last_aper_base = aper_base;
last_aper_enabled = aper_enabled;
}
if (!fix && !aper_enabled)
return;
if (!aper_base || !aper_size || aper_base + aper_size > 0x100000000UL)
fix = 1;
if (gart_fix_e820 && !fix && aper_enabled) {
if (e820_any_mapped(aper_base, aper_base + aper_size,
E820_RAM)) {
/* reserved it, so we can resuse it in second kernel */
printk(KERN_INFO "update e820 for GART\n");
add_memory_region(aper_base, aper_size, E820_RESERVED);
update_e820();
}
return;
}
/* different nodes have different setting, disable them all at first*/
for (num = 24; num < 32; num++) {
if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
continue;
ctl = read_pci_config(0, num, 3, 0x90);
ctl &= ~1;
write_pci_config(0, num, 3, 0x90, ctl);
}
}
 void __init gart_iommu_hole_init(void)
 {
-	int fix, num;
 	u32 aper_size, aper_alloc = 0, aper_order = 0, last_aper_order = 0;
 	u64 aper_base, last_aper_base = 0;
-	int valid_agp = 0;
+	int fix, num, valid_agp = 0;
+	int node;

 	if (gart_iommu_aperture_disabled || !fix_aperture ||
 	    !early_pci_allowed())
@ -218,24 +321,26 @@ void __init gart_iommu_hole_init(void)
 	printk(KERN_INFO "Checking aperture...\n");

 	fix = 0;
+	node = 0;
 	for (num = 24; num < 32; num++) {
 		if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
 			continue;

 		iommu_detected = 1;
 		gart_iommu_aperture = 1;

 		aper_order = (read_pci_config(0, num, 3, 0x90) >> 1) & 7;
 		aper_size = (32 * 1024 * 1024) << aper_order;
 		aper_base = read_pci_config(0, num, 3, 0x94) & 0x7fff;
 		aper_base <<= 25;

+		printk(KERN_INFO "Node %d: aperture @ %Lx size %u MB\n",
+				node, aper_base, aper_size >> 20);
+		node++;
-		printk("CPU %d: aperture @ %Lx size %u MB\n", num-24,
-			aper_base, aper_size>>20);
+
 		if (!aperture_valid(aper_base, aper_size)) {
 			fix = 1;
 			break;
 		}

 		if ((last_aper_order && aper_order != last_aper_order) ||
@ -245,55 +350,64 @@ void __init gart_iommu_hole_init(void)
 		}
 		last_aper_order = aper_order;
 		last_aper_base = aper_base;
 	}

 	if (!fix && !fallback_aper_force) {
 		if (last_aper_base) {
 			unsigned long n = (32 * 1024 * 1024) << last_aper_order;

 			insert_aperture_resource((u32)last_aper_base, n);
 		}
 		return;
 	}

 	if (!fallback_aper_force)
 		aper_alloc = search_agp_bridge(&aper_order, &valid_agp);

 	if (aper_alloc) {
 		/* Got the aperture from the AGP bridge */
 	} else if (swiotlb && !valid_agp) {
 		/* Do nothing */
 	} else if ((!no_iommu && end_pfn > MAX_DMA32_PFN) ||
 		   force_iommu ||
 		   valid_agp ||
 		   fallback_aper_force) {
-		printk("Your BIOS doesn't leave a aperture memory hole\n");
-		printk("Please enable the IOMMU option in the BIOS setup\n");
-		printk("This costs you %d MB of RAM\n",
-		       32 << fallback_aper_order);
+		printk(KERN_ERR
+			"Your BIOS doesn't leave a aperture memory hole\n");
+		printk(KERN_ERR
+			"Please enable the IOMMU option in the BIOS setup\n");
+		printk(KERN_ERR
+			"This costs you %d MB of RAM\n",
+			32 << fallback_aper_order);

 		aper_order = fallback_aper_order;
 		aper_alloc = allocate_aperture();
 		if (!aper_alloc) {
-			/* Could disable AGP and IOMMU here, but it's probably
-			   not worth it. But the later users cannot deal with
-			   bad apertures and turning on the aperture over memory
-			   causes very strange problems, so it's better to
-			   panic early. */
+			/*
+			 * Could disable AGP and IOMMU here, but it's
+			 * probably not worth it. But the later users
+			 * cannot deal with bad apertures and turning
+			 * on the aperture over memory causes very
+			 * strange problems, so it's better to panic
+			 * early.
+			 */
 			panic("Not enough memory for aperture");
 		}
 	} else {
 		return;
 	}

 	/* Fix up the north bridges */
 	for (num = 24; num < 32; num++) {
 		if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
 			continue;

-		/* Don't enable translation yet. That is done later.
-		   Assume this BIOS didn't initialise the GART so
-		   just overwrite all previous bits */
+		/*
+		 * Don't enable translation yet. That is done later.
+		 * Assume this BIOS didn't initialise the GART so
+		 * just overwrite all previous bits
+		 */
 		write_pci_config(0, num, 3, 0x90, aper_order<<1);
 		write_pci_config(0, num, 3, 0x94, aper_alloc>>25);
 	}
 }


@@ -43,12 +43,10 @@
#include <mach_apicdef.h> #include <mach_apicdef.h>
#include <mach_ipi.h> #include <mach_ipi.h>
#include "io_ports.h"
/* /*
* Sanity check * Sanity check
*/ */
#if (SPURIOUS_APIC_VECTOR & 0x0F) != 0x0F #if ((SPURIOUS_APIC_VECTOR & 0x0F) != 0x0F)
# error SPURIOUS_APIC_VECTOR definition error # error SPURIOUS_APIC_VECTOR definition error
#endif #endif
@@ -57,7 +55,7 @@
* *
* -1=force-disable, +1=force-enable * -1=force-disable, +1=force-enable
*/ */
static int enable_local_apic __initdata = 0; static int enable_local_apic __initdata;
/* Local APIC timer verification ok */ /* Local APIC timer verification ok */
static int local_apic_timer_verify_ok; static int local_apic_timer_verify_ok;
@@ -101,6 +99,8 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
/* Local APIC was disabled by the BIOS and enabled by the kernel */ /* Local APIC was disabled by the BIOS and enabled by the kernel */
static int enabled_via_apicbase; static int enabled_via_apicbase;
static unsigned long apic_phys;
/* /*
* Get the LAPIC version * Get the LAPIC version
*/ */
@@ -110,7 +110,7 @@ static inline int lapic_get_version(void)
} }
/* /*
* Check, if the APIC is integrated or a seperate chip * Check, if the APIC is integrated or a separate chip
*/ */
static inline int lapic_is_integrated(void) static inline int lapic_is_integrated(void)
{ {
@@ -135,9 +135,9 @@ void apic_wait_icr_idle(void)
cpu_relax(); cpu_relax();
} }
unsigned long safe_apic_wait_icr_idle(void) u32 safe_apic_wait_icr_idle(void)
{ {
unsigned long send_status; u32 send_status;
int timeout; int timeout;
timeout = 0; timeout = 0;
@@ -154,7 +154,7 @@ unsigned long safe_apic_wait_icr_idle(void)
/** /**
* enable_NMI_through_LVT0 - enable NMI through local vector table 0 * enable_NMI_through_LVT0 - enable NMI through local vector table 0
*/ */
void enable_NMI_through_LVT0 (void * dummy) void __cpuinit enable_NMI_through_LVT0(void)
{ {
unsigned int v = APIC_DM_NMI; unsigned int v = APIC_DM_NMI;
@@ -379,8 +379,10 @@ void __init setup_boot_APIC_clock(void)
*/ */
if (local_apic_timer_disabled) { if (local_apic_timer_disabled) {
/* No broadcast on UP ! */ /* No broadcast on UP ! */
if (num_possible_cpus() > 1) if (num_possible_cpus() > 1) {
lapic_clockevent.mult = 1;
setup_APIC_timer(); setup_APIC_timer();
}
return; return;
} }
@@ -434,7 +436,7 @@ void __init setup_boot_APIC_clock(void)
"with PM Timer: %ldms instead of 100ms\n", "with PM Timer: %ldms instead of 100ms\n",
(long)res); (long)res);
/* Correct the lapic counter value */ /* Correct the lapic counter value */
res = (((u64) delta ) * pm_100ms); res = (((u64) delta) * pm_100ms);
do_div(res, deltapm); do_div(res, deltapm);
printk(KERN_INFO "APIC delta adjusted to PM-Timer: " printk(KERN_INFO "APIC delta adjusted to PM-Timer: "
"%lu (%ld)\n", (unsigned long) res, delta); "%lu (%ld)\n", (unsigned long) res, delta);
@@ -472,6 +474,19 @@ void __init setup_boot_APIC_clock(void)
local_apic_timer_verify_ok = 1; local_apic_timer_verify_ok = 1;
/*
* Do a sanity check on the APIC calibration result
*/
if (calibration_result < (1000000 / HZ)) {
local_irq_enable();
printk(KERN_WARNING
"APIC frequency too slow, disabling apic timer\n");
/* No broadcast on UP ! */
if (num_possible_cpus() > 1)
setup_APIC_timer();
return;
}
/* We trust the pm timer based calibration */ /* We trust the pm timer based calibration */
if (!pm_referenced) { if (!pm_referenced) {
apic_printk(APIC_VERBOSE, "... verify APIC timer\n"); apic_printk(APIC_VERBOSE, "... verify APIC timer\n");
@@ -563,6 +578,9 @@ static void local_apic_timer_interrupt(void)
return; return;
} }
/*
* the NMI deadlock-detector uses this.
*/
per_cpu(irq_stat, cpu).apic_timer_irqs++; per_cpu(irq_stat, cpu).apic_timer_irqs++;
evt->event_handler(evt); evt->event_handler(evt);
@@ -576,8 +594,7 @@ static void local_apic_timer_interrupt(void)
* [ if a single-CPU system runs an SMP kernel then we call the local * [ if a single-CPU system runs an SMP kernel then we call the local
* interrupt as well. Thus we cannot inline the local irq ... ] * interrupt as well. Thus we cannot inline the local irq ... ]
*/ */
void smp_apic_timer_interrupt(struct pt_regs *regs)
void fastcall smp_apic_timer_interrupt(struct pt_regs *regs)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
@@ -616,9 +633,14 @@ int setup_profiling_timer(unsigned int multiplier)
*/ */
void clear_local_APIC(void) void clear_local_APIC(void)
{ {
int maxlvt = lapic_get_maxlvt(); int maxlvt;
unsigned long v; u32 v;
/* APIC hasn't been mapped yet */
if (!apic_phys)
return;
maxlvt = lapic_get_maxlvt();
/* /*
* Masking an LVT entry can trigger a local APIC error * Masking an LVT entry can trigger a local APIC error
* if the vector is zero. Mask LVTERR first to prevent this. * if the vector is zero. Mask LVTERR first to prevent this.
@@ -976,7 +998,8 @@ void __cpuinit setup_local_APIC(void)
value |= APIC_LVT_LEVEL_TRIGGER; value |= APIC_LVT_LEVEL_TRIGGER;
apic_write_around(APIC_LVT1, value); apic_write_around(APIC_LVT1, value);
if (integrated && !esr_disable) { /* !82489DX */ if (integrated && !esr_disable) {
/* !82489DX */
maxlvt = lapic_get_maxlvt(); maxlvt = lapic_get_maxlvt();
if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */ if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
apic_write(APIC_ESR, 0); apic_write(APIC_ESR, 0);
@@ -1020,7 +1043,7 @@ void __cpuinit setup_local_APIC(void)
/* /*
* Detect and initialize APIC * Detect and initialize APIC
*/ */
static int __init detect_init_APIC (void) static int __init detect_init_APIC(void)
{ {
u32 h, l, features; u32 h, l, features;
@@ -1077,7 +1100,7 @@ static int __init detect_init_APIC (void)
printk(KERN_WARNING "Could not enable APIC!\n"); printk(KERN_WARNING "Could not enable APIC!\n");
return -1; return -1;
} }
set_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability); set_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
/* The BIOS may have set up the APIC at some other address */ /* The BIOS may have set up the APIC at some other address */
@@ -1104,8 +1127,6 @@ no_apic:
*/ */
void __init init_apic_mappings(void) void __init init_apic_mappings(void)
{ {
unsigned long apic_phys;
/* /*
* If no local APIC can be found then set up a fake all * If no local APIC can be found then set up a fake all
* zeroes page to simulate the local APIC and another * zeroes page to simulate the local APIC and another
@@ -1164,10 +1185,10 @@ fake_ioapic_page:
* This initializes the IO-APIC and APIC hardware if this is * This initializes the IO-APIC and APIC hardware if this is
* a UP kernel. * a UP kernel.
*/ */
int __init APIC_init_uniprocessor (void) int __init APIC_init_uniprocessor(void)
{ {
if (enable_local_apic < 0) if (enable_local_apic < 0)
clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability); clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
if (!smp_found_config && !cpu_has_apic) if (!smp_found_config && !cpu_has_apic)
return -1; return -1;
@@ -1179,7 +1200,7 @@ int __init APIC_init_uniprocessor (void)
APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) { APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
printk(KERN_ERR "BIOS bug, local APIC #%d not detected!...\n", printk(KERN_ERR "BIOS bug, local APIC #%d not detected!...\n",
boot_cpu_physical_apicid); boot_cpu_physical_apicid);
clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability); clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
return -1; return -1;
} }
@@ -1209,50 +1230,6 @@ int __init APIC_init_uniprocessor (void)
return 0; return 0;
} }
/*
* APIC command line parameters
*/
static int __init parse_lapic(char *arg)
{
enable_local_apic = 1;
return 0;
}
early_param("lapic", parse_lapic);
static int __init parse_nolapic(char *arg)
{
enable_local_apic = -1;
clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
return 0;
}
early_param("nolapic", parse_nolapic);
static int __init parse_disable_lapic_timer(char *arg)
{
local_apic_timer_disabled = 1;
return 0;
}
early_param("nolapic_timer", parse_disable_lapic_timer);
static int __init parse_lapic_timer_c2_ok(char *arg)
{
local_apic_timer_c2_ok = 1;
return 0;
}
early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
static int __init apic_set_verbosity(char *str)
{
if (strcmp("debug", str) == 0)
apic_verbosity = APIC_DEBUG;
else if (strcmp("verbose", str) == 0)
apic_verbosity = APIC_VERBOSE;
return 1;
}
__setup("apic=", apic_set_verbosity);
/* /*
* Local APIC interrupts * Local APIC interrupts
*/ */
@@ -1306,7 +1283,7 @@ void smp_error_interrupt(struct pt_regs *regs)
6: Received illegal vector 6: Received illegal vector
7: Illegal register address 7: Illegal register address
*/ */
printk (KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx)\n", printk(KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx)\n",
smp_processor_id(), v , v1); smp_processor_id(), v , v1);
irq_exit(); irq_exit();
} }
@@ -1393,7 +1370,7 @@ void disconnect_bsp_APIC(int virt_wire_setup)
value = apic_read(APIC_LVT0); value = apic_read(APIC_LVT0);
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING | value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR | APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED ); APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING; value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT); value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
apic_write_around(APIC_LVT0, value); apic_write_around(APIC_LVT0, value);
@@ -1565,3 +1542,46 @@ device_initcall(init_lapic_sysfs);
static void apic_pm_activate(void) { } static void apic_pm_activate(void) { }
#endif /* CONFIG_PM */ #endif /* CONFIG_PM */
/*
* APIC command line parameters
*/
static int __init parse_lapic(char *arg)
{
enable_local_apic = 1;
return 0;
}
early_param("lapic", parse_lapic);
static int __init parse_nolapic(char *arg)
{
enable_local_apic = -1;
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
return 0;
}
early_param("nolapic", parse_nolapic);
static int __init parse_disable_lapic_timer(char *arg)
{
local_apic_timer_disabled = 1;
return 0;
}
early_param("nolapic_timer", parse_disable_lapic_timer);
static int __init parse_lapic_timer_c2_ok(char *arg)
{
local_apic_timer_c2_ok = 1;
return 0;
}
early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
static int __init apic_set_verbosity(char *str)
{
if (strcmp("debug", str) == 0)
apic_verbosity = APIC_DEBUG;
else if (strcmp("verbose", str) == 0)
apic_verbosity = APIC_VERBOSE;
return 1;
}
__setup("apic=", apic_set_verbosity);

File diff suppressed because it is too large


@@ -227,6 +227,7 @@
#include <linux/dmi.h> #include <linux/dmi.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/kthread.h> #include <linux/kthread.h>
#include <linux/jiffies.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
@@ -235,8 +236,6 @@
#include <asm/paravirt.h> #include <asm/paravirt.h>
#include <asm/reboot.h> #include <asm/reboot.h>
#include "io_ports.h"
#if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT) #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
extern int (*console_blank_hook)(int); extern int (*console_blank_hook)(int);
#endif #endif
@@ -324,7 +323,7 @@ extern int (*console_blank_hook)(int);
/* /*
* Ignore suspend events for this amount of time after a resume * Ignore suspend events for this amount of time after a resume
*/ */
#define DEFAULT_BOUNCE_INTERVAL (3 * HZ) #define DEFAULT_BOUNCE_INTERVAL (3 * HZ)
/* /*
* Maximum number of events stored * Maximum number of events stored
@@ -336,7 +335,7 @@ extern int (*console_blank_hook)(int);
*/ */
struct apm_user { struct apm_user {
int magic; int magic;
struct apm_user * next; struct apm_user *next;
unsigned int suser: 1; unsigned int suser: 1;
unsigned int writer: 1; unsigned int writer: 1;
unsigned int reader: 1; unsigned int reader: 1;
@@ -372,44 +371,44 @@ struct apm_user {
static struct { static struct {
unsigned long offset; unsigned long offset;
unsigned short segment; unsigned short segment;
} apm_bios_entry; } apm_bios_entry;
static int clock_slowed; static int clock_slowed;
static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD; static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD;
static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD; static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD;
static int set_pm_idle; static int set_pm_idle;
static int suspends_pending; static int suspends_pending;
static int standbys_pending; static int standbys_pending;
static int ignore_sys_suspend; static int ignore_sys_suspend;
static int ignore_normal_resume; static int ignore_normal_resume;
static int bounce_interval __read_mostly = DEFAULT_BOUNCE_INTERVAL; static int bounce_interval __read_mostly = DEFAULT_BOUNCE_INTERVAL;
static int debug __read_mostly; static int debug __read_mostly;
static int smp __read_mostly; static int smp __read_mostly;
static int apm_disabled = -1; static int apm_disabled = -1;
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static int power_off; static int power_off;
#else #else
static int power_off = 1; static int power_off = 1;
#endif #endif
#ifdef CONFIG_APM_REAL_MODE_POWER_OFF #ifdef CONFIG_APM_REAL_MODE_POWER_OFF
static int realmode_power_off = 1; static int realmode_power_off = 1;
#else #else
static int realmode_power_off; static int realmode_power_off;
#endif #endif
#ifdef CONFIG_APM_ALLOW_INTS #ifdef CONFIG_APM_ALLOW_INTS
static int allow_ints = 1; static int allow_ints = 1;
#else #else
static int allow_ints; static int allow_ints;
#endif #endif
static int broken_psr; static int broken_psr;
static DECLARE_WAIT_QUEUE_HEAD(apm_waitqueue); static DECLARE_WAIT_QUEUE_HEAD(apm_waitqueue);
static DECLARE_WAIT_QUEUE_HEAD(apm_suspend_waitqueue); static DECLARE_WAIT_QUEUE_HEAD(apm_suspend_waitqueue);
static struct apm_user * user_list; static struct apm_user *user_list;
static DEFINE_SPINLOCK(user_list_lock); static DEFINE_SPINLOCK(user_list_lock);
static const struct desc_struct bad_bios_desc = { 0, 0x00409200 }; static const struct desc_struct bad_bios_desc = { { { 0, 0x00409200 } } };
static const char driver_version[] = "1.16ac"; /* no spaces */ static const char driver_version[] = "1.16ac"; /* no spaces */
static struct task_struct *kapmd_task; static struct task_struct *kapmd_task;
@@ -417,7 +416,7 @@ static struct task_struct *kapmd_task;
* APM event names taken from the APM 1.2 specification. These are * APM event names taken from the APM 1.2 specification. These are
* the message codes that the BIOS uses to tell us about events * the message codes that the BIOS uses to tell us about events
*/ */
static const char * const apm_event_name[] = { static const char * const apm_event_name[] = {
"system standby", "system standby",
"system suspend", "system suspend",
"normal resume", "normal resume",
@@ -435,14 +434,14 @@ static const char * const apm_event_name[] = {
typedef struct lookup_t { typedef struct lookup_t {
int key; int key;
char * msg; char *msg;
} lookup_t; } lookup_t;
/* /*
* The BIOS returns a set of standard error codes in AX when the * The BIOS returns a set of standard error codes in AX when the
* carry flag is set. * carry flag is set.
*/ */
static const lookup_t error_table[] = { static const lookup_t error_table[] = {
/* N/A { APM_SUCCESS, "Operation succeeded" }, */ /* N/A { APM_SUCCESS, "Operation succeeded" }, */
{ APM_DISABLED, "Power management disabled" }, { APM_DISABLED, "Power management disabled" },
@@ -472,24 +471,25 @@ static const lookup_t error_table[] = {
* Write a meaningful log entry to the kernel log in the event of * Write a meaningful log entry to the kernel log in the event of
* an APM error. * an APM error.
*/ */
static void apm_error(char *str, int err) static void apm_error(char *str, int err)
{ {
int i; int i;
for (i = 0; i < ERROR_COUNT; i++) for (i = 0; i < ERROR_COUNT; i++)
if (error_table[i].key == err) break; if (error_table[i].key == err)
break;
if (i < ERROR_COUNT) if (i < ERROR_COUNT)
printk(KERN_NOTICE "apm: %s: %s\n", str, error_table[i].msg); printk(KERN_NOTICE "apm: %s: %s\n", str, error_table[i].msg);
else else
printk(KERN_NOTICE "apm: %s: unknown error code %#2.2x\n", printk(KERN_NOTICE "apm: %s: unknown error code %#2.2x\n",
str, err); str, err);
} }
/* /*
* Lock APM functionality to physical CPU 0 * Lock APM functionality to physical CPU 0
*/ */
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static cpumask_t apm_save_cpus(void) static cpumask_t apm_save_cpus(void)
@@ -511,7 +511,7 @@ static inline void apm_restore_cpus(cpumask_t mask)
/* /*
* No CPU lockdown needed on a uniprocessor * No CPU lockdown needed on a uniprocessor
*/ */
#define apm_save_cpus() (current->cpus_allowed) #define apm_save_cpus() (current->cpus_allowed)
#define apm_restore_cpus(x) (void)(x) #define apm_restore_cpus(x) (void)(x)
@@ -590,7 +590,7 @@ static inline void apm_irq_restore(unsigned long flags)
* code is returned in AH (bits 8-15 of eax) and this function * code is returned in AH (bits 8-15 of eax) and this function
* returns non-zero. * returns non-zero.
*/ */
static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in, static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
u32 *eax, u32 *ebx, u32 *ecx, u32 *edx, u32 *esi) u32 *eax, u32 *ebx, u32 *ecx, u32 *edx, u32 *esi)
{ {
@@ -602,7 +602,7 @@ static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
struct desc_struct *gdt; struct desc_struct *gdt;
cpus = apm_save_cpus(); cpus = apm_save_cpus();
cpu = get_cpu(); cpu = get_cpu();
gdt = get_cpu_gdt_table(cpu); gdt = get_cpu_gdt_table(cpu);
save_desc_40 = gdt[0x40 / 8]; save_desc_40 = gdt[0x40 / 8];
@@ -616,7 +616,7 @@ static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
gdt[0x40 / 8] = save_desc_40; gdt[0x40 / 8] = save_desc_40;
put_cpu(); put_cpu();
apm_restore_cpus(cpus); apm_restore_cpus(cpus);
return *eax & 0xff; return *eax & 0xff;
} }
@@ -645,7 +645,7 @@ static u8 apm_bios_call_simple(u32 func, u32 ebx_in, u32 ecx_in, u32 *eax)
struct desc_struct *gdt; struct desc_struct *gdt;
cpus = apm_save_cpus(); cpus = apm_save_cpus();
cpu = get_cpu(); cpu = get_cpu();
gdt = get_cpu_gdt_table(cpu); gdt = get_cpu_gdt_table(cpu);
save_desc_40 = gdt[0x40 / 8]; save_desc_40 = gdt[0x40 / 8];
@@ -680,7 +680,7 @@ static u8 apm_bios_call_simple(u32 func, u32 ebx_in, u32 ecx_in, u32 *eax)
static int apm_driver_version(u_short *val) static int apm_driver_version(u_short *val)
{ {
u32 eax; u32 eax;
if (apm_bios_call_simple(APM_FUNC_VERSION, 0, *val, &eax)) if (apm_bios_call_simple(APM_FUNC_VERSION, 0, *val, &eax))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
@@ -704,16 +704,16 @@ static int apm_driver_version(u_short *val)
* that APM 1.2 is in use. If no messges are pending the value 0x80 * that APM 1.2 is in use. If no messges are pending the value 0x80
* is returned (No power management events pending). * is returned (No power management events pending).
*/ */
static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info) static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info)
{ {
u32 eax; u32 eax;
u32 ebx; u32 ebx;
u32 ecx; u32 ecx;
u32 dummy; u32 dummy;
if (apm_bios_call(APM_FUNC_GET_EVENT, 0, 0, &eax, &ebx, &ecx, if (apm_bios_call(APM_FUNC_GET_EVENT, 0, 0, &eax, &ebx, &ecx,
&dummy, &dummy)) &dummy, &dummy))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
*event = ebx; *event = ebx;
if (apm_info.connection_version < 0x0102) if (apm_info.connection_version < 0x0102)
@@ -736,10 +736,10 @@ static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info)
* The state holds the state to transition to, which may in fact * The state holds the state to transition to, which may in fact
* be an acceptance of a BIOS requested state change. * be an acceptance of a BIOS requested state change.
*/ */
static int set_power_state(u_short what, u_short state) static int set_power_state(u_short what, u_short state)
{ {
u32 eax; u32 eax;
if (apm_bios_call_simple(APM_FUNC_SET_STATE, what, state, &eax)) if (apm_bios_call_simple(APM_FUNC_SET_STATE, what, state, &eax))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
@@ -752,7 +752,7 @@ static int set_power_state(u_short what, u_short state)
* *
* Transition the entire system into a new APM power state. * Transition the entire system into a new APM power state.
*/ */
static int set_system_power_state(u_short state) static int set_system_power_state(u_short state)
{ {
return set_power_state(APM_DEVICE_ALL, state); return set_power_state(APM_DEVICE_ALL, state);
@@ -766,13 +766,13 @@ static int set_system_power_state(u_short state)
* to handle the idle request. On a success the function returns 1 * to handle the idle request. On a success the function returns 1
* if the BIOS did clock slowing or 0 otherwise. * if the BIOS did clock slowing or 0 otherwise.
*/ */
static int apm_do_idle(void) static int apm_do_idle(void)
{ {
u32 eax; u32 eax;
u8 ret = 0; u8 ret = 0;
int idled = 0; int idled = 0;
int polling; int polling;
polling = !!(current_thread_info()->status & TS_POLLING); polling = !!(current_thread_info()->status & TS_POLLING);
if (polling) { if (polling) {
@@ -799,10 +799,9 @@ static int apm_do_idle(void)
/* This always fails on some SMP boards running UP kernels. /* This always fails on some SMP boards running UP kernels.
* Only report the failure the first 5 times. * Only report the failure the first 5 times.
*/ */
if (++t < 5) if (++t < 5) {
{
printk(KERN_DEBUG "apm_do_idle failed (%d)\n", printk(KERN_DEBUG "apm_do_idle failed (%d)\n",
(eax >> 8) & 0xff); (eax >> 8) & 0xff);
t = jiffies; t = jiffies;
} }
return -1; return -1;
@@ -814,15 +813,15 @@ static int apm_do_idle(void)
/** /**
* apm_do_busy - inform the BIOS the CPU is busy * apm_do_busy - inform the BIOS the CPU is busy
* *
* Request that the BIOS brings the CPU back to full performance. * Request that the BIOS brings the CPU back to full performance.
*/ */
static void apm_do_busy(void) static void apm_do_busy(void)
{ {
u32 dummy; u32 dummy;
if (clock_slowed || ALWAYS_CALL_BUSY) { if (clock_slowed || ALWAYS_CALL_BUSY) {
(void) apm_bios_call_simple(APM_FUNC_BUSY, 0, 0, &dummy); (void)apm_bios_call_simple(APM_FUNC_BUSY, 0, 0, &dummy);
clock_slowed = 0; clock_slowed = 0;
} }
} }
@@ -833,15 +832,15 @@ static void apm_do_busy(void)
* power management - we probably want * power management - we probably want
* to conserve power. * to conserve power.
*/ */
#define IDLE_CALC_LIMIT (HZ * 100) #define IDLE_CALC_LIMIT (HZ * 100)
#define IDLE_LEAKY_MAX 16 #define IDLE_LEAKY_MAX 16
static void (*original_pm_idle)(void) __read_mostly; static void (*original_pm_idle)(void) __read_mostly;
/** /**
* apm_cpu_idle - cpu idling for APM capable Linux * apm_cpu_idle - cpu idling for APM capable Linux
* *
* This is the idling function the kernel executes when APM is available. It * This is the idling function the kernel executes when APM is available. It
* tries to do BIOS powermanagement based on the average system idle time. * tries to do BIOS powermanagement based on the average system idle time.
* Furthermore it calls the system default idle routine. * Furthermore it calls the system default idle routine.
*/ */
@@ -882,7 +881,8 @@ recalc:
t = jiffies; t = jiffies;
switch (apm_do_idle()) { switch (apm_do_idle()) {
case 0: apm_idle_done = 1; case 0:
apm_idle_done = 1;
if (t != jiffies) { if (t != jiffies) {
if (bucket) { if (bucket) {
bucket = IDLE_LEAKY_MAX; bucket = IDLE_LEAKY_MAX;
@@ -893,7 +893,8 @@ recalc:
continue; continue;
} }
break; break;
case 1: apm_idle_done = 1; case 1:
apm_idle_done = 1;
break; break;
default: /* BIOS refused */ default: /* BIOS refused */
break; break;
@@ -921,10 +922,10 @@ recalc:
* the SMP call on CPU0 as some systems will only honour this call * the SMP call on CPU0 as some systems will only honour this call
* on their first cpu. * on their first cpu.
*/ */
static void apm_power_off(void) static void apm_power_off(void)
{ {
unsigned char po_bios_call[] = { unsigned char po_bios_call[] = {
0xb8, 0x00, 0x10, /* movw $0x1000,ax */ 0xb8, 0x00, 0x10, /* movw $0x1000,ax */
0x8e, 0xd0, /* movw ax,ss */ 0x8e, 0xd0, /* movw ax,ss */
0xbc, 0x00, 0xf0, /* movw $0xf000,sp */ 0xbc, 0x00, 0xf0, /* movw $0xf000,sp */
@@ -935,13 +936,12 @@ static void apm_power_off(void)
}; };
/* Some bioses don't like being called from CPU != 0 */ /* Some bioses don't like being called from CPU != 0 */
if (apm_info.realmode_power_off) if (apm_info.realmode_power_off) {
{
(void)apm_save_cpus(); (void)apm_save_cpus();
machine_real_restart(po_bios_call, sizeof(po_bios_call)); machine_real_restart(po_bios_call, sizeof(po_bios_call));
} else {
(void)set_system_power_state(APM_STATE_OFF);
} }
else
(void) set_system_power_state(APM_STATE_OFF);
} }
#ifdef CONFIG_APM_DO_ENABLE #ifdef CONFIG_APM_DO_ENABLE
@@ -950,17 +950,17 @@ static void apm_power_off(void)
* apm_enable_power_management - enable BIOS APM power management * apm_enable_power_management - enable BIOS APM power management
* @enable: enable yes/no * @enable: enable yes/no
* *
* Enable or disable the APM BIOS power services. * Enable or disable the APM BIOS power services.
*/ */
static int apm_enable_power_management(int enable) static int apm_enable_power_management(int enable)
{ {
u32 eax; u32 eax;
if ((enable == 0) && (apm_info.bios.flags & APM_BIOS_DISENGAGED)) if ((enable == 0) && (apm_info.bios.flags & APM_BIOS_DISENGAGED))
return APM_NOT_ENGAGED; return APM_NOT_ENGAGED;
if (apm_bios_call_simple(APM_FUNC_ENABLE_PM, APM_DEVICE_BALL, if (apm_bios_call_simple(APM_FUNC_ENABLE_PM, APM_DEVICE_BALL,
enable, &eax)) enable, &eax))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
if (enable) if (enable)
apm_info.bios.flags &= ~APM_BIOS_DISABLED; apm_info.bios.flags &= ~APM_BIOS_DISABLED;
@@ -983,19 +983,19 @@ static int apm_enable_power_management(int enable)
* if reported is a lifetime in secodnds/minutes at current powwer * if reported is a lifetime in secodnds/minutes at current powwer
* consumption. * consumption.
*/ */
static int apm_get_power_status(u_short *status, u_short *bat, u_short *life) static int apm_get_power_status(u_short *status, u_short *bat, u_short *life)
{ {
u32 eax; u32 eax;
u32 ebx; u32 ebx;
u32 ecx; u32 ecx;
u32 edx; u32 edx;
u32 dummy; u32 dummy;
if (apm_info.get_power_status_broken) if (apm_info.get_power_status_broken)
return APM_32_UNSUPPORTED; return APM_32_UNSUPPORTED;
if (apm_bios_call(APM_FUNC_GET_STATUS, APM_DEVICE_ALL, 0, if (apm_bios_call(APM_FUNC_GET_STATUS, APM_DEVICE_ALL, 0,
&eax, &ebx, &ecx, &edx, &dummy)) &eax, &ebx, &ecx, &edx, &dummy))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
*status = ebx; *status = ebx;
*bat = ecx; *bat = ecx;
@@ -1011,11 +1011,11 @@ static int apm_get_power_status(u_short *status, u_short *bat, u_short *life)
static int apm_get_battery_status(u_short which, u_short *status, static int apm_get_battery_status(u_short which, u_short *status,
u_short *bat, u_short *life, u_short *nbat) u_short *bat, u_short *life, u_short *nbat)
{ {
u32 eax; u32 eax;
u32 ebx; u32 ebx;
u32 ecx; u32 ecx;
u32 edx; u32 edx;
u32 esi; u32 esi;
if (apm_info.connection_version < 0x0102) { if (apm_info.connection_version < 0x0102) {
/* pretend we only have one battery. */ /* pretend we only have one battery. */
@@ -1026,7 +1026,7 @@ static int apm_get_battery_status(u_short which, u_short *status,
} }
if (apm_bios_call(APM_FUNC_GET_STATUS, (0x8000 | (which)), 0, &eax, if (apm_bios_call(APM_FUNC_GET_STATUS, (0x8000 | (which)), 0, &eax,
&ebx, &ecx, &edx, &esi)) &ebx, &ecx, &edx, &esi))
return (eax >> 8) & 0xff; return (eax >> 8) & 0xff;
*status = ebx; *status = ebx;
*bat = ecx; *bat = ecx;
@@ -1044,10 +1044,10 @@ static int apm_get_battery_status(u_short which, u_short *status,
* Activate or deactive power management on either a specific device * Activate or deactive power management on either a specific device
* or the entire system (%APM_DEVICE_ALL). * or the entire system (%APM_DEVICE_ALL).
*/ */
static int apm_engage_power_management(u_short device, int enable) static int apm_engage_power_management(u_short device, int enable)
{ {
u32 eax; u32 eax;
if ((enable == 0) && (device == APM_DEVICE_ALL) if ((enable == 0) && (device == APM_DEVICE_ALL)
&& (apm_info.bios.flags & APM_BIOS_DISABLED)) && (apm_info.bios.flags & APM_BIOS_DISABLED))
@@ -1074,7 +1074,7 @@ static int apm_engage_power_management(u_short device, int enable)
* all video devices. Typically the BIOS will do laptop backlight and * all video devices. Typically the BIOS will do laptop backlight and
* monitor powerdown for us. * monitor powerdown for us.
*/ */
static int apm_console_blank(int blank) static int apm_console_blank(int blank)
{ {
int error = APM_NOT_ENGAGED; /* silence gcc */ int error = APM_NOT_ENGAGED; /* silence gcc */
@@ -1126,7 +1126,7 @@ static apm_event_t get_queued_event(struct apm_user *as)
static void queue_event(apm_event_t event, struct apm_user *sender) static void queue_event(apm_event_t event, struct apm_user *sender)
{ {
struct apm_user * as; struct apm_user *as;
spin_lock(&user_list_lock); spin_lock(&user_list_lock);
if (user_list == NULL) if (user_list == NULL)
@@ -1174,11 +1174,11 @@ static void reinit_timer(void)
spin_lock_irqsave(&i8253_lock, flags); spin_lock_irqsave(&i8253_lock, flags);
/* set the clock to HZ */ /* set the clock to HZ */
outb_p(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */ outb_pit(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */
udelay(10); udelay(10);
outb_p(LATCH & 0xff, PIT_CH0); /* LSB */ outb_pit(LATCH & 0xff, PIT_CH0); /* LSB */
udelay(10); udelay(10);
outb(LATCH >> 8, PIT_CH0); /* MSB */ outb_pit(LATCH >> 8, PIT_CH0); /* MSB */
udelay(10); udelay(10);
spin_unlock_irqrestore(&i8253_lock, flags); spin_unlock_irqrestore(&i8253_lock, flags);
#endif #endif
@@ -1186,7 +1186,7 @@ static void reinit_timer(void)
static int suspend(int vetoable) static int suspend(int vetoable)
{ {
int err; int err;
struct apm_user *as; struct apm_user *as;
if (pm_send_all(PM_SUSPEND, (void *)3)) { if (pm_send_all(PM_SUSPEND, (void *)3)) {
@@ -1239,7 +1239,7 @@ static int suspend(int vetoable)
static void standby(void) static void standby(void)
{ {
int err; int err;
local_irq_disable(); local_irq_disable();
device_power_down(PMSG_SUSPEND); device_power_down(PMSG_SUSPEND);
@@ -1256,8 +1256,8 @@ static void standby(void)
static apm_event_t get_event(void) static apm_event_t get_event(void)
{ {
int error; int error;
apm_event_t event = APM_NO_EVENTS; /* silence gcc */ apm_event_t event = APM_NO_EVENTS; /* silence gcc */
apm_eventinfo_t info; apm_eventinfo_t info;
static int notified; static int notified;
@@ -1275,9 +1275,9 @@ static apm_event_t get_event(void)
static void check_events(void) static void check_events(void)
{ {
apm_event_t event; apm_event_t event;
static unsigned long last_resume; static unsigned long last_resume;
static int ignore_bounce; static int ignore_bounce;
while ((event = get_event()) != 0) { while ((event = get_event()) != 0) {
if (debug) { if (debug) {
@@ -1289,7 +1289,7 @@ static void check_events(void)
"event 0x%02x\n", event); "event 0x%02x\n", event);
} }
if (ignore_bounce if (ignore_bounce
&& ((jiffies - last_resume) > bounce_interval)) && (time_after(jiffies, last_resume + bounce_interval)))
ignore_bounce = 0; ignore_bounce = 0;
switch (event) { switch (event) {
@@ -1357,7 +1357,7 @@ static void check_events(void)
/* /*
* We are not allowed to reject a critical suspend. * We are not allowed to reject a critical suspend.
*/ */
(void) suspend(0); (void)suspend(0);
break; break;
} }
} }
@@ -1365,12 +1365,12 @@ static void check_events(void)
static void apm_event_handler(void) static void apm_event_handler(void)
{ {
static int pending_count = 4; static int pending_count = 4;
int err; int err;
if ((standbys_pending > 0) || (suspends_pending > 0)) { if ((standbys_pending > 0) || (suspends_pending > 0)) {
if ((apm_info.connection_version > 0x100) && if ((apm_info.connection_version > 0x100) &&
(pending_count-- <= 0)) { (pending_count-- <= 0)) {
pending_count = 4; pending_count = 4;
if (debug) if (debug)
printk(KERN_DEBUG "apm: setting state busy\n"); printk(KERN_DEBUG "apm: setting state busy\n");
@@ -1418,9 +1418,9 @@ static int check_apm_user(struct apm_user *as, const char *func)
static ssize_t do_read(struct file *fp, char __user *buf, size_t count, loff_t *ppos) static ssize_t do_read(struct file *fp, char __user *buf, size_t count, loff_t *ppos)
{ {
struct apm_user * as; struct apm_user *as;
int i; int i;
apm_event_t event; apm_event_t event;
as = fp->private_data; as = fp->private_data;
if (check_apm_user(as, "read")) if (check_apm_user(as, "read"))
@@ -1459,9 +1459,9 @@ static ssize_t do_read(struct file *fp, char __user *buf, size_t count, loff_t *
return 0; return 0;
} }
static unsigned int do_poll(struct file *fp, poll_table * wait) static unsigned int do_poll(struct file *fp, poll_table *wait)
{ {
struct apm_user * as; struct apm_user *as;
as = fp->private_data; as = fp->private_data;
if (check_apm_user(as, "poll")) if (check_apm_user(as, "poll"))
@@ -1472,10 +1472,10 @@ static unsigned int do_poll(struct file *fp, poll_table * wait)
return 0; return 0;
} }
static int do_ioctl(struct inode * inode, struct file *filp, static int do_ioctl(struct inode *inode, struct file *filp,
u_int cmd, u_long arg) u_int cmd, u_long arg)
{ {
struct apm_user * as; struct apm_user *as;
as = filp->private_data; as = filp->private_data;
if (check_apm_user(as, "ioctl")) if (check_apm_user(as, "ioctl"))
@@ -1515,9 +1515,9 @@ static int do_ioctl(struct inode * inode, struct file *filp,
return 0; return 0;
} }
static int do_release(struct inode * inode, struct file * filp) static int do_release(struct inode *inode, struct file *filp)
{ {
struct apm_user * as; struct apm_user *as;
as = filp->private_data; as = filp->private_data;
if (check_apm_user(as, "release")) if (check_apm_user(as, "release"))
@@ -1533,11 +1533,11 @@ static int do_release(struct inode * inode, struct file * filp)
if (suspends_pending <= 0) if (suspends_pending <= 0)
(void) suspend(1); (void) suspend(1);
} }
spin_lock(&user_list_lock); spin_lock(&user_list_lock);
if (user_list == as) if (user_list == as)
user_list = as->next; user_list = as->next;
else { else {
struct apm_user * as1; struct apm_user *as1;
for (as1 = user_list; for (as1 = user_list;
(as1 != NULL) && (as1->next != as); (as1 != NULL) && (as1->next != as);
@@ -1553,9 +1553,9 @@ static int do_release(struct inode * inode, struct file * filp)
return 0; return 0;
} }
static int do_open(struct inode * inode, struct file * filp) static int do_open(struct inode *inode, struct file *filp)
{ {
struct apm_user * as; struct apm_user *as;
as = kmalloc(sizeof(*as), GFP_KERNEL); as = kmalloc(sizeof(*as), GFP_KERNEL);
if (as == NULL) { if (as == NULL) {
@@ -1569,7 +1569,7 @@ static int do_open(struct inode * inode, struct file * filp)
as->suspends_read = as->standbys_read = 0; as->suspends_read = as->standbys_read = 0;
/* /*
* XXX - this is a tiny bit broken, when we consider BSD * XXX - this is a tiny bit broken, when we consider BSD
* process accounting. If the device is opened by root, we * process accounting. If the device is opened by root, we
* instantly flag that we used superuser privs. Who knows, * instantly flag that we used superuser privs. Who knows,
* we might close the device immediately without doing a * we might close the device immediately without doing a
* privileged operation -- cevans * privileged operation -- cevans
@@ -1652,16 +1652,16 @@ static int proc_apm_show(struct seq_file *m, void *v)
8) min = minutes; sec = seconds */ 8) min = minutes; sec = seconds */
seq_printf(m, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n", seq_printf(m, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
driver_version, driver_version,
(apm_info.bios.version >> 8) & 0xff, (apm_info.bios.version >> 8) & 0xff,
apm_info.bios.version & 0xff, apm_info.bios.version & 0xff,
apm_info.bios.flags, apm_info.bios.flags,
ac_line_status, ac_line_status,
battery_status, battery_status,
battery_flag, battery_flag,
percentage, percentage,
time_units, time_units,
units); units);
return 0; return 0;
} }
@@ -1684,8 +1684,8 @@ static int apm(void *unused)
unsigned short cx; unsigned short cx;
unsigned short dx; unsigned short dx;
int error; int error;
char * power_stat; char *power_stat;
char * bat_stat; char *bat_stat;
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
/* 2002/08/01 - WT /* 2002/08/01 - WT
@@ -1744,23 +1744,41 @@ static int apm(void *unused)
} }
} }
if (debug && (num_online_cpus() == 1 || smp )) { if (debug && (num_online_cpus() == 1 || smp)) {
error = apm_get_power_status(&bx, &cx, &dx); error = apm_get_power_status(&bx, &cx, &dx);
if (error) if (error)
printk(KERN_INFO "apm: power status not available\n"); printk(KERN_INFO "apm: power status not available\n");
else { else {
switch ((bx >> 8) & 0xff) { switch ((bx >> 8) & 0xff) {
case 0: power_stat = "off line"; break; case 0:
case 1: power_stat = "on line"; break; power_stat = "off line";
case 2: power_stat = "on backup power"; break; break;
default: power_stat = "unknown"; break; case 1:
power_stat = "on line";
break;
case 2:
power_stat = "on backup power";
break;
default:
power_stat = "unknown";
break;
} }
switch (bx & 0xff) { switch (bx & 0xff) {
case 0: bat_stat = "high"; break; case 0:
case 1: bat_stat = "low"; break; bat_stat = "high";
case 2: bat_stat = "critical"; break; break;
case 3: bat_stat = "charging"; break; case 1:
default: bat_stat = "unknown"; break; bat_stat = "low";
break;
case 2:
bat_stat = "critical";
break;
case 3:
bat_stat = "charging";
break;
default:
bat_stat = "unknown";
break;
} }
printk(KERN_INFO printk(KERN_INFO
"apm: AC %s, battery status %s, battery life ", "apm: AC %s, battery status %s, battery life ",
@@ -1777,8 +1795,8 @@ static int apm(void *unused)
printk("unknown\n"); printk("unknown\n");
else else
printk("%d %s\n", dx & 0x7fff, printk("%d %s\n", dx & 0x7fff,
(dx & 0x8000) ? (dx & 0x8000) ?
"minutes" : "seconds"); "minutes" : "seconds");
} }
} }
} }
@@ -1803,7 +1821,7 @@ static int apm(void *unused)
#ifndef MODULE #ifndef MODULE
static int __init apm_setup(char *str) static int __init apm_setup(char *str)
{ {
int invert; int invert;
while ((str != NULL) && (*str != '\0')) { while ((str != NULL) && (*str != '\0')) {
if (strncmp(str, "off", 3) == 0) if (strncmp(str, "off", 3) == 0)
@@ -1828,14 +1846,13 @@ static int __init apm_setup(char *str)
if ((strncmp(str, "power-off", 9) == 0) || if ((strncmp(str, "power-off", 9) == 0) ||
(strncmp(str, "power_off", 9) == 0)) (strncmp(str, "power_off", 9) == 0))
power_off = !invert; power_off = !invert;
if (strncmp(str, "smp", 3) == 0) if (strncmp(str, "smp", 3) == 0) {
{
smp = !invert; smp = !invert;
idle_threshold = 100; idle_threshold = 100;
} }
if ((strncmp(str, "allow-ints", 10) == 0) || if ((strncmp(str, "allow-ints", 10) == 0) ||
(strncmp(str, "allow_ints", 10) == 0)) (strncmp(str, "allow_ints", 10) == 0))
apm_info.allow_ints = !invert; apm_info.allow_ints = !invert;
if ((strncmp(str, "broken-psr", 10) == 0) || if ((strncmp(str, "broken-psr", 10) == 0) ||
(strncmp(str, "broken_psr", 10) == 0)) (strncmp(str, "broken_psr", 10) == 0))
apm_info.get_power_status_broken = !invert; apm_info.get_power_status_broken = !invert;
@@ -1881,7 +1898,8 @@ static int __init print_if_true(const struct dmi_system_id *d)
*/ */
static int __init broken_ps2_resume(const struct dmi_system_id *d) static int __init broken_ps2_resume(const struct dmi_system_id *d)
{ {
printk(KERN_INFO "%s machine detected. Mousepad Resume Bug workaround hopefully not needed.\n", d->ident); printk(KERN_INFO "%s machine detected. Mousepad Resume Bug "
"workaround hopefully not needed.\n", d->ident);
return 0; return 0;
} }
@@ -1890,7 +1908,8 @@ static int __init set_realmode_power_off(const struct dmi_system_id *d)
{ {
if (apm_info.realmode_power_off == 0) { if (apm_info.realmode_power_off == 0) {
apm_info.realmode_power_off = 1; apm_info.realmode_power_off = 1;
printk(KERN_INFO "%s bios detected. Using realmode poweroff only.\n", d->ident); printk(KERN_INFO "%s bios detected. "
"Using realmode poweroff only.\n", d->ident);
} }
return 0; return 0;
} }
@@ -1900,7 +1919,8 @@ static int __init set_apm_ints(const struct dmi_system_id *d)
{ {
if (apm_info.allow_ints == 0) { if (apm_info.allow_ints == 0) {
apm_info.allow_ints = 1; apm_info.allow_ints = 1;
printk(KERN_INFO "%s machine detected. Enabling interrupts during APM calls.\n", d->ident); printk(KERN_INFO "%s machine detected. "
"Enabling interrupts during APM calls.\n", d->ident);
} }
return 0; return 0;
} }
@@ -1910,7 +1930,8 @@ static int __init apm_is_horked(const struct dmi_system_id *d)
{ {
if (apm_info.disabled == 0) { if (apm_info.disabled == 0) {
apm_info.disabled = 1; apm_info.disabled = 1;
printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident); printk(KERN_INFO "%s machine detected. "
"Disabling APM.\n", d->ident);
} }
return 0; return 0;
} }
@@ -1919,7 +1940,8 @@ static int __init apm_is_horked_d850md(const struct dmi_system_id *d)
{ {
if (apm_info.disabled == 0) { if (apm_info.disabled == 0) {
apm_info.disabled = 1; apm_info.disabled = 1;
printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident); printk(KERN_INFO "%s machine detected. "
"Disabling APM.\n", d->ident);
printk(KERN_INFO "This bug is fixed in bios P15 which is available for \n"); printk(KERN_INFO "This bug is fixed in bios P15 which is available for \n");
printk(KERN_INFO "download from support.intel.com \n"); printk(KERN_INFO "download from support.intel.com \n");
} }
@@ -1931,7 +1953,8 @@ static int __init apm_likes_to_melt(const struct dmi_system_id *d)
{ {
if (apm_info.forbid_idle == 0) { if (apm_info.forbid_idle == 0) {
apm_info.forbid_idle = 1; apm_info.forbid_idle = 1;
printk(KERN_INFO "%s machine detected. Disabling APM idle calls.\n", d->ident); printk(KERN_INFO "%s machine detected. "
"Disabling APM idle calls.\n", d->ident);
} }
return 0; return 0;
} }
@@ -1954,7 +1977,8 @@ static int __init apm_likes_to_melt(const struct dmi_system_id *d)
static int __init broken_apm_power(const struct dmi_system_id *d) static int __init broken_apm_power(const struct dmi_system_id *d)
{ {
apm_info.get_power_status_broken = 1; apm_info.get_power_status_broken = 1;
printk(KERN_WARNING "BIOS strings suggest APM bugs, disabling power status reporting.\n"); printk(KERN_WARNING "BIOS strings suggest APM bugs, "
"disabling power status reporting.\n");
return 0; return 0;
} }
@@ -1965,7 +1989,8 @@ static int __init broken_apm_power(const struct dmi_system_id *d)
static int __init swab_apm_power_in_minutes(const struct dmi_system_id *d) static int __init swab_apm_power_in_minutes(const struct dmi_system_id *d)
{ {
apm_info.get_power_status_swabinminutes = 1; apm_info.get_power_status_swabinminutes = 1;
printk(KERN_WARNING "BIOS strings suggest APM reports battery life in minutes and wrong byte order.\n"); printk(KERN_WARNING "BIOS strings suggest APM reports battery life "
"in minutes and wrong byte order.\n");
return 0; return 0;
} }
@@ -1990,8 +2015,8 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
apm_is_horked, "Dell Inspiron 2500", apm_is_horked, "Dell Inspiron 2500",
{ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"), { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"), DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"), DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
DMI_MATCH(DMI_BIOS_VERSION,"A11"), }, DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
}, },
{ /* Allow interrupts during suspend on Dell Inspiron laptops*/ { /* Allow interrupts during suspend on Dell Inspiron laptops*/
set_apm_ints, "Dell Inspiron", { set_apm_ints, "Dell Inspiron", {
@@ -2014,15 +2039,15 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
apm_is_horked, "Dell Dimension 4100", apm_is_horked, "Dell Dimension 4100",
{ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"), { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, "XPS-Z"), DMI_MATCH(DMI_PRODUCT_NAME, "XPS-Z"),
DMI_MATCH(DMI_BIOS_VENDOR,"Intel Corp."), DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
DMI_MATCH(DMI_BIOS_VERSION,"A11"), }, DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
}, },
{ /* Allow interrupts during suspend on Compaq Laptops*/ { /* Allow interrupts during suspend on Compaq Laptops*/
set_apm_ints, "Compaq 12XL125", set_apm_ints, "Compaq 12XL125",
{ DMI_MATCH(DMI_SYS_VENDOR, "Compaq"), { DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
DMI_MATCH(DMI_PRODUCT_NAME, "Compaq PC"), DMI_MATCH(DMI_PRODUCT_NAME, "Compaq PC"),
DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"), DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
DMI_MATCH(DMI_BIOS_VERSION,"4.06"), }, DMI_MATCH(DMI_BIOS_VERSION, "4.06"), },
}, },
{ /* Allow interrupts during APM or the clock goes slow */ { /* Allow interrupts during APM or the clock goes slow */
set_apm_ints, "ASUSTeK", set_apm_ints, "ASUSTeK",
@@ -2064,15 +2089,15 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
apm_is_horked, "Sharp PC-PJ/AX", apm_is_horked, "Sharp PC-PJ/AX",
{ DMI_MATCH(DMI_SYS_VENDOR, "SHARP"), { DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
DMI_MATCH(DMI_PRODUCT_NAME, "PC-PJ/AX"), DMI_MATCH(DMI_PRODUCT_NAME, "PC-PJ/AX"),
DMI_MATCH(DMI_BIOS_VENDOR,"SystemSoft"), DMI_MATCH(DMI_BIOS_VENDOR, "SystemSoft"),
DMI_MATCH(DMI_BIOS_VERSION,"Version R2.08"), }, DMI_MATCH(DMI_BIOS_VERSION, "Version R2.08"), },
}, },
{ /* APM crashes */ { /* APM crashes */
apm_is_horked, "Dell Inspiron 2500", apm_is_horked, "Dell Inspiron 2500",
{ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"), { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"), DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"), DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
DMI_MATCH(DMI_BIOS_VERSION,"A11"), }, DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
}, },
{ /* APM idle hangs */ { /* APM idle hangs */
apm_likes_to_melt, "Jabil AMD", apm_likes_to_melt, "Jabil AMD",
@@ -2203,11 +2228,11 @@ static int __init apm_init(void)
return -ENODEV; return -ENODEV;
} }
printk(KERN_INFO printk(KERN_INFO
"apm: BIOS version %d.%d Flags 0x%02x (Driver version %s)\n", "apm: BIOS version %d.%d Flags 0x%02x (Driver version %s)\n",
((apm_info.bios.version >> 8) & 0xff), ((apm_info.bios.version >> 8) & 0xff),
(apm_info.bios.version & 0xff), (apm_info.bios.version & 0xff),
apm_info.bios.flags, apm_info.bios.flags,
driver_version); driver_version);
if ((apm_info.bios.flags & APM_32_BIT_SUPPORT) == 0) { if ((apm_info.bios.flags & APM_32_BIT_SUPPORT) == 0) {
printk(KERN_INFO "apm: no 32 bit BIOS support\n"); printk(KERN_INFO "apm: no 32 bit BIOS support\n");
return -ENODEV; return -ENODEV;
@@ -2312,9 +2337,9 @@ static int __init apm_init(void)
} }
wake_up_process(kapmd_task); wake_up_process(kapmd_task);
if (num_online_cpus() > 1 && !smp ) { if (num_online_cpus() > 1 && !smp) {
printk(KERN_NOTICE printk(KERN_NOTICE
"apm: disabled - APM is not SMP safe (power off active).\n"); "apm: disabled - APM is not SMP safe (power off active).\n");
return 0; return 0;
} }
@@ -2339,7 +2364,7 @@ static int __init apm_init(void)
static void __exit apm_exit(void) static void __exit apm_exit(void)
{ {
int error; int error;
if (set_pm_idle) { if (set_pm_idle) {
pm_idle = original_pm_idle; pm_idle = original_pm_idle;
@@ -38,15 +38,15 @@ void foo(void);
void foo(void) void foo(void)
{ {
OFFSET(SIGCONTEXT_eax, sigcontext, eax); OFFSET(IA32_SIGCONTEXT_ax, sigcontext, ax);
OFFSET(SIGCONTEXT_ebx, sigcontext, ebx); OFFSET(IA32_SIGCONTEXT_bx, sigcontext, bx);
OFFSET(SIGCONTEXT_ecx, sigcontext, ecx); OFFSET(IA32_SIGCONTEXT_cx, sigcontext, cx);
OFFSET(SIGCONTEXT_edx, sigcontext, edx); OFFSET(IA32_SIGCONTEXT_dx, sigcontext, dx);
OFFSET(SIGCONTEXT_esi, sigcontext, esi); OFFSET(IA32_SIGCONTEXT_si, sigcontext, si);
OFFSET(SIGCONTEXT_edi, sigcontext, edi); OFFSET(IA32_SIGCONTEXT_di, sigcontext, di);
OFFSET(SIGCONTEXT_ebp, sigcontext, ebp); OFFSET(IA32_SIGCONTEXT_bp, sigcontext, bp);
OFFSET(SIGCONTEXT_esp, sigcontext, esp); OFFSET(IA32_SIGCONTEXT_sp, sigcontext, sp);
OFFSET(SIGCONTEXT_eip, sigcontext, eip); OFFSET(IA32_SIGCONTEXT_ip, sigcontext, ip);
BLANK(); BLANK();
OFFSET(CPUINFO_x86, cpuinfo_x86, x86); OFFSET(CPUINFO_x86, cpuinfo_x86, x86);
@@ -70,39 +70,38 @@ void foo(void)
OFFSET(TI_cpu, thread_info, cpu); OFFSET(TI_cpu, thread_info, cpu);
BLANK(); BLANK();
OFFSET(GDS_size, Xgt_desc_struct, size); OFFSET(GDS_size, desc_ptr, size);
OFFSET(GDS_address, Xgt_desc_struct, address); OFFSET(GDS_address, desc_ptr, address);
OFFSET(GDS_pad, Xgt_desc_struct, pad);
BLANK(); BLANK();
OFFSET(PT_EBX, pt_regs, ebx); OFFSET(PT_EBX, pt_regs, bx);
OFFSET(PT_ECX, pt_regs, ecx); OFFSET(PT_ECX, pt_regs, cx);
OFFSET(PT_EDX, pt_regs, edx); OFFSET(PT_EDX, pt_regs, dx);
OFFSET(PT_ESI, pt_regs, esi); OFFSET(PT_ESI, pt_regs, si);
OFFSET(PT_EDI, pt_regs, edi); OFFSET(PT_EDI, pt_regs, di);
OFFSET(PT_EBP, pt_regs, ebp); OFFSET(PT_EBP, pt_regs, bp);
OFFSET(PT_EAX, pt_regs, eax); OFFSET(PT_EAX, pt_regs, ax);
OFFSET(PT_DS, pt_regs, xds); OFFSET(PT_DS, pt_regs, ds);
OFFSET(PT_ES, pt_regs, xes); OFFSET(PT_ES, pt_regs, es);
OFFSET(PT_FS, pt_regs, xfs); OFFSET(PT_FS, pt_regs, fs);
OFFSET(PT_ORIG_EAX, pt_regs, orig_eax); OFFSET(PT_ORIG_EAX, pt_regs, orig_ax);
OFFSET(PT_EIP, pt_regs, eip); OFFSET(PT_EIP, pt_regs, ip);
OFFSET(PT_CS, pt_regs, xcs); OFFSET(PT_CS, pt_regs, cs);
OFFSET(PT_EFLAGS, pt_regs, eflags); OFFSET(PT_EFLAGS, pt_regs, flags);
OFFSET(PT_OLDESP, pt_regs, esp); OFFSET(PT_OLDESP, pt_regs, sp);
OFFSET(PT_OLDSS, pt_regs, xss); OFFSET(PT_OLDSS, pt_regs, ss);
BLANK(); BLANK();
OFFSET(EXEC_DOMAIN_handler, exec_domain, handler); OFFSET(EXEC_DOMAIN_handler, exec_domain, handler);
OFFSET(RT_SIGFRAME_sigcontext, rt_sigframe, uc.uc_mcontext); OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe, uc.uc_mcontext);
BLANK(); BLANK();
OFFSET(pbe_address, pbe, address); OFFSET(pbe_address, pbe, address);
OFFSET(pbe_orig_address, pbe, orig_address); OFFSET(pbe_orig_address, pbe, orig_address);
OFFSET(pbe_next, pbe, next); OFFSET(pbe_next, pbe, next);
/* Offset from the sysenter stack to tss.esp0 */ /* Offset from the sysenter stack to tss.sp0 */
DEFINE(TSS_sysenter_esp0, offsetof(struct tss_struct, x86_tss.esp0) - DEFINE(TSS_sysenter_sp0, offsetof(struct tss_struct, x86_tss.sp0) -
sizeof(struct tss_struct)); sizeof(struct tss_struct));
DEFINE(PAGE_SIZE_asm, PAGE_SIZE); DEFINE(PAGE_SIZE_asm, PAGE_SIZE);
@@ -111,8 +110,6 @@ void foo(void)
DEFINE(PTRS_PER_PMD, PTRS_PER_PMD); DEFINE(PTRS_PER_PMD, PTRS_PER_PMD);
DEFINE(PTRS_PER_PGD, PTRS_PER_PGD); DEFINE(PTRS_PER_PGD, PTRS_PER_PGD);
DEFINE(VDSO_PRELINK_asm, VDSO_PRELINK);
OFFSET(crypto_tfm_ctx_offset, crypto_tfm, __crt_ctx); OFFSET(crypto_tfm_ctx_offset, crypto_tfm, __crt_ctx);
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
@@ -123,7 +120,7 @@ void foo(void)
OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable); OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable); OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
OFFSET(PV_CPU_iret, pv_cpu_ops, iret); OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit); OFFSET(PV_CPU_irq_enable_syscall_ret, pv_cpu_ops, irq_enable_syscall_ret);
OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0); OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
#endif #endif
@@ -38,7 +38,6 @@ int main(void)
#define ENTRY(entry) DEFINE(tsk_ ## entry, offsetof(struct task_struct, entry)) #define ENTRY(entry) DEFINE(tsk_ ## entry, offsetof(struct task_struct, entry))
ENTRY(state); ENTRY(state);
ENTRY(flags); ENTRY(flags);
ENTRY(thread);
ENTRY(pid); ENTRY(pid);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
@@ -47,6 +46,9 @@ int main(void)
ENTRY(addr_limit); ENTRY(addr_limit);
ENTRY(preempt_count); ENTRY(preempt_count);
ENTRY(status); ENTRY(status);
#ifdef CONFIG_IA32_EMULATION
ENTRY(sysenter_return);
#endif
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
#define ENTRY(entry) DEFINE(pda_ ## entry, offsetof(struct x8664_pda, entry)) #define ENTRY(entry) DEFINE(pda_ ## entry, offsetof(struct x8664_pda, entry))
@@ -59,17 +61,31 @@ int main(void)
ENTRY(data_offset); ENTRY(data_offset);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
#ifdef CONFIG_PARAVIRT
BLANK();
OFFSET(PARAVIRT_enabled, pv_info, paravirt_enabled);
OFFSET(PARAVIRT_PATCH_pv_cpu_ops, paravirt_patch_template, pv_cpu_ops);
OFFSET(PARAVIRT_PATCH_pv_irq_ops, paravirt_patch_template, pv_irq_ops);
OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
OFFSET(PV_CPU_irq_enable_syscall_ret, pv_cpu_ops, irq_enable_syscall_ret);
OFFSET(PV_CPU_swapgs, pv_cpu_ops, swapgs);
OFFSET(PV_MMU_read_cr2, pv_mmu_ops, read_cr2);
#endif
#ifdef CONFIG_IA32_EMULATION #ifdef CONFIG_IA32_EMULATION
#define ENTRY(entry) DEFINE(IA32_SIGCONTEXT_ ## entry, offsetof(struct sigcontext_ia32, entry)) #define ENTRY(entry) DEFINE(IA32_SIGCONTEXT_ ## entry, offsetof(struct sigcontext_ia32, entry))
ENTRY(eax); ENTRY(ax);
ENTRY(ebx); ENTRY(bx);
ENTRY(ecx); ENTRY(cx);
ENTRY(edx); ENTRY(dx);
ENTRY(esi); ENTRY(si);
ENTRY(edi); ENTRY(di);
ENTRY(ebp); ENTRY(bp);
ENTRY(esp); ENTRY(sp);
ENTRY(eip); ENTRY(ip);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
DEFINE(IA32_RT_SIGFRAME_sigcontext, DEFINE(IA32_RT_SIGFRAME_sigcontext,
@@ -81,14 +97,14 @@ int main(void)
DEFINE(pbe_next, offsetof(struct pbe, next)); DEFINE(pbe_next, offsetof(struct pbe, next));
BLANK(); BLANK();
#define ENTRY(entry) DEFINE(pt_regs_ ## entry, offsetof(struct pt_regs, entry)) #define ENTRY(entry) DEFINE(pt_regs_ ## entry, offsetof(struct pt_regs, entry))
ENTRY(rbx); ENTRY(bx);
ENTRY(rbx); ENTRY(bx);
ENTRY(rcx); ENTRY(cx);
ENTRY(rdx); ENTRY(dx);
ENTRY(rsp); ENTRY(sp);
ENTRY(rbp); ENTRY(bp);
ENTRY(rsi); ENTRY(si);
ENTRY(rdi); ENTRY(di);
ENTRY(r8); ENTRY(r8);
ENTRY(r9); ENTRY(r9);
ENTRY(r10); ENTRY(r10);
@@ -97,7 +113,7 @@ int main(void)
ENTRY(r13); ENTRY(r13);
ENTRY(r14); ENTRY(r14);
ENTRY(r15); ENTRY(r15);
ENTRY(eflags); ENTRY(flags);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
#define ENTRY(entry) DEFINE(saved_context_ ## entry, offsetof(struct saved_context, entry)) #define ENTRY(entry) DEFINE(saved_context_ ## entry, offsetof(struct saved_context, entry))
@@ -108,7 +124,7 @@ int main(void)
ENTRY(cr8); ENTRY(cr8);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
DEFINE(TSS_ist, offsetof(struct tss_struct, ist)); DEFINE(TSS_ist, offsetof(struct tss_struct, x86_tss.ist));
BLANK(); BLANK();
DEFINE(crypto_tfm_ctx_offset, offsetof(struct crypto_tfm, __crt_ctx)); DEFINE(crypto_tfm_ctx_offset, offsetof(struct crypto_tfm, __crt_ctx));
BLANK(); BLANK();
@@ -1,8 +1,6 @@
/* /*
* Implement 'Simple Boot Flag Specification 2.0' * Implement 'Simple Boot Flag Specification 2.0'
*/ */
#include <linux/types.h> #include <linux/types.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/init.h> #include <linux/init.h>
@@ -14,40 +12,38 @@
#include <linux/mc146818rtc.h> #include <linux/mc146818rtc.h>
#define SBF_RESERVED (0x78) #define SBF_RESERVED (0x78)
#define SBF_PNPOS (1<<0) #define SBF_PNPOS (1<<0)
#define SBF_BOOTING (1<<1) #define SBF_BOOTING (1<<1)
#define SBF_DIAG (1<<2) #define SBF_DIAG (1<<2)
#define SBF_PARITY (1<<7) #define SBF_PARITY (1<<7)
int sbf_port __initdata = -1; /* set via acpi_boot_init() */ int sbf_port __initdata = -1; /* set via acpi_boot_init() */
static int __init parity(u8 v) static int __init parity(u8 v)
{ {
int x = 0; int x = 0;
int i; int i;
for(i=0;i<8;i++) for (i = 0; i < 8; i++) {
{ x ^= (v & 1);
x^=(v&1); v >>= 1;
v>>=1;
} }
return x; return x;
} }
static void __init sbf_write(u8 v) static void __init sbf_write(u8 v)
{ {
unsigned long flags; unsigned long flags;
if(sbf_port != -1)
{
v &= ~SBF_PARITY;
if(!parity(v))
v|=SBF_PARITY;
printk(KERN_INFO "Simple Boot Flag at 0x%x set to 0x%x\n", sbf_port, v); if (sbf_port != -1) {
v &= ~SBF_PARITY;
if (!parity(v))
v |= SBF_PARITY;
printk(KERN_INFO "Simple Boot Flag at 0x%x set to 0x%x\n",
sbf_port, v);
spin_lock_irqsave(&rtc_lock, flags); spin_lock_irqsave(&rtc_lock, flags);
CMOS_WRITE(v, sbf_port); CMOS_WRITE(v, sbf_port);
@@ -57,33 +53,41 @@ static void __init sbf_write(u8 v)
static u8 __init sbf_read(void) static u8 __init sbf_read(void)
{ {
u8 v;
unsigned long flags; unsigned long flags;
if(sbf_port == -1) u8 v;
if (sbf_port == -1)
return 0; return 0;
spin_lock_irqsave(&rtc_lock, flags); spin_lock_irqsave(&rtc_lock, flags);
v = CMOS_READ(sbf_port); v = CMOS_READ(sbf_port);
spin_unlock_irqrestore(&rtc_lock, flags); spin_unlock_irqrestore(&rtc_lock, flags);
return v; return v;
} }
static int __init sbf_value_valid(u8 v) static int __init sbf_value_valid(u8 v)
{ {
if(v&SBF_RESERVED) /* Reserved bits */ if (v & SBF_RESERVED) /* Reserved bits */
return 0; return 0;
if(!parity(v)) if (!parity(v))
return 0; return 0;
return 1; return 1;
} }
static int __init sbf_init(void) static int __init sbf_init(void)
{ {
u8 v; u8 v;
if(sbf_port == -1)
if (sbf_port == -1)
return 0; return 0;
v = sbf_read(); v = sbf_read();
if(!sbf_value_valid(v)) if (!sbf_value_valid(v)) {
printk(KERN_WARNING "Simple Boot Flag value 0x%x read from CMOS RAM was invalid\n",v); printk(KERN_WARNING "Simple Boot Flag value 0x%x read from "
"CMOS RAM was invalid\n", v);
}
v &= ~SBF_RESERVED; v &= ~SBF_RESERVED;
v &= ~SBF_BOOTING; v &= ~SBF_BOOTING;
@@ -92,7 +96,7 @@ static int __init sbf_init(void)
v |= SBF_PNPOS; v |= SBF_PNPOS;
#endif #endif
sbf_write(v); sbf_write(v);
return 0; return 0;
} }
module_init(sbf_init); module_init(sbf_init);
@@ -13,7 +13,6 @@
void __init check_bugs(void) void __init check_bugs(void)
{ {
identify_cpu(&boot_cpu_data); identify_cpu(&boot_cpu_data);
mtrr_bp_init();
#if !defined(CONFIG_SMP) #if !defined(CONFIG_SMP)
printk("CPU: "); printk("CPU: ");
print_cpu_info(&boot_cpu_data); print_cpu_info(&boot_cpu_data);
@@ -45,6 +45,6 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
&regs[CR_ECX], &regs[CR_EDX]); &regs[CR_ECX], &regs[CR_EDX]);
if (regs[cb->reg] & (1 << cb->bit)) if (regs[cb->reg] & (1 << cb->bit))
set_bit(cb->feature, c->x86_capability); set_cpu_cap(c, cb->feature);
} }
} }
@@ -63,6 +63,15 @@ static __cpuinit int amd_apic_timer_broken(void)
int force_mwait __cpuinitdata; int force_mwait __cpuinitdata;
void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
{
if (cpuid_eax(0x80000000) >= 0x80000007) {
c->x86_power = cpuid_edx(0x80000007);
if (c->x86_power & (1<<8))
set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
}
}
static void __cpuinit init_amd(struct cpuinfo_x86 *c) static void __cpuinit init_amd(struct cpuinfo_x86 *c)
{ {
u32 l, h; u32 l, h;
@@ -85,6 +94,8 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
} }
#endif #endif
early_init_amd(c);
/* /*
* FIXME: We should handle the K5 here. Set up the write * FIXME: We should handle the K5 here. Set up the write
* range and also turn on MSR 83 bits 4 and 31 (write alloc, * range and also turn on MSR 83 bits 4 and 31 (write alloc,
@@ -257,12 +268,6 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1; c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
} }
if (cpuid_eax(0x80000000) >= 0x80000007) {
c->x86_power = cpuid_edx(0x80000007);
if (c->x86_power & (1<<8))
set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
}
#ifdef CONFIG_X86_HT #ifdef CONFIG_X86_HT
/* /*
* On a AMD multi core setup the lower bits of the APIC id * On a AMD multi core setup the lower bits of the APIC id
@@ -295,12 +300,12 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
local_apic_timer_disabled = 1; local_apic_timer_disabled = 1;
#endif #endif
if (c->x86 == 0x10 && !force_mwait)
clear_bit(X86_FEATURE_MWAIT, c->x86_capability);
/* K6s reports MCEs but don't actually have all the MSRs */ /* K6s reports MCEs but don't actually have all the MSRs */
if (c->x86 < 6) if (c->x86 < 6)
clear_bit(X86_FEATURE_MCE, c->x86_capability); clear_bit(X86_FEATURE_MCE, c->x86_capability);
if (cpu_has_xmm)
set_bit(X86_FEATURE_MFENCE_RDTSC, c->x86_capability);
} }
static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size) static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
@@ -11,6 +11,7 @@
#include <linux/utsname.h> #include <linux/utsname.h>
#include <asm/bugs.h> #include <asm/bugs.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/processor-flags.h>
#include <asm/i387.h> #include <asm/i387.h>
#include <asm/msr.h> #include <asm/msr.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
@@ -35,7 +36,7 @@ __setup("mca-pentium", mca_pentium);
static int __init no_387(char *s) static int __init no_387(char *s)
{ {
boot_cpu_data.hard_math = 0; boot_cpu_data.hard_math = 0;
write_cr0(0xE | read_cr0()); write_cr0(X86_CR0_TS | X86_CR0_EM | X86_CR0_MP | read_cr0());
return 1; return 1;
} }
@@ -153,7 +154,7 @@ static void __init check_config(void)
* If we configured ourselves for a TSC, we'd better have one! * If we configured ourselves for a TSC, we'd better have one!
*/ */
#ifdef CONFIG_X86_TSC #ifdef CONFIG_X86_TSC
if (!cpu_has_tsc && !tsc_disable) if (!cpu_has_tsc)
panic("Kernel compiled for Pentium+, requires TSC feature!"); panic("Kernel compiled for Pentium+, requires TSC feature!");
#endif #endif
@@ -22,43 +22,48 @@
#include "cpu.h" #include "cpu.h"
DEFINE_PER_CPU(struct gdt_page, gdt_page) = { .gdt = { DEFINE_PER_CPU(struct gdt_page, gdt_page) = { .gdt = {
[GDT_ENTRY_KERNEL_CS] = { 0x0000ffff, 0x00cf9a00 }, [GDT_ENTRY_KERNEL_CS] = { { { 0x0000ffff, 0x00cf9a00 } } },
[GDT_ENTRY_KERNEL_DS] = { 0x0000ffff, 0x00cf9200 }, [GDT_ENTRY_KERNEL_DS] = { { { 0x0000ffff, 0x00cf9200 } } },
-	[GDT_ENTRY_DEFAULT_USER_CS] = { 0x0000ffff, 0x00cffa00 },
-	[GDT_ENTRY_DEFAULT_USER_DS] = { 0x0000ffff, 0x00cff200 },
+	[GDT_ENTRY_DEFAULT_USER_CS] = { { { 0x0000ffff, 0x00cffa00 } } },
+	[GDT_ENTRY_DEFAULT_USER_DS] = { { { 0x0000ffff, 0x00cff200 } } },
 	/*
 	 * Segments used for calling PnP BIOS have byte granularity.
 	 * They code segments and data segments have fixed 64k limits,
 	 * the transfer segment sizes are set at run time.
 	 */
-	[GDT_ENTRY_PNPBIOS_CS32] = { 0x0000ffff, 0x00409a00 },/* 32-bit code */
-	[GDT_ENTRY_PNPBIOS_CS16] = { 0x0000ffff, 0x00009a00 },/* 16-bit code */
-	[GDT_ENTRY_PNPBIOS_DS] = { 0x0000ffff, 0x00009200 }, /* 16-bit data */
-	[GDT_ENTRY_PNPBIOS_TS1] = { 0x00000000, 0x00009200 },/* 16-bit data */
-	[GDT_ENTRY_PNPBIOS_TS2] = { 0x00000000, 0x00009200 },/* 16-bit data */
+	/* 32-bit code */
+	[GDT_ENTRY_PNPBIOS_CS32] = { { { 0x0000ffff, 0x00409a00 } } },
+	/* 16-bit code */
+	[GDT_ENTRY_PNPBIOS_CS16] = { { { 0x0000ffff, 0x00009a00 } } },
+	/* 16-bit data */
+	[GDT_ENTRY_PNPBIOS_DS] = { { { 0x0000ffff, 0x00009200 } } },
+	/* 16-bit data */
+	[GDT_ENTRY_PNPBIOS_TS1] = { { { 0x00000000, 0x00009200 } } },
+	/* 16-bit data */
+	[GDT_ENTRY_PNPBIOS_TS2] = { { { 0x00000000, 0x00009200 } } },
 	/*
 	 * The APM segments have byte granularity and their bases
 	 * are set at run time. All have 64k limits.
 	 */
-	[GDT_ENTRY_APMBIOS_BASE] = { 0x0000ffff, 0x00409a00 },/* 32-bit code */
+	/* 32-bit code */
+	[GDT_ENTRY_APMBIOS_BASE] = { { { 0x0000ffff, 0x00409a00 } } },
 	/* 16-bit code */
-	[GDT_ENTRY_APMBIOS_BASE+1] = { 0x0000ffff, 0x00009a00 },
-	[GDT_ENTRY_APMBIOS_BASE+2] = { 0x0000ffff, 0x00409200 }, /* data */
+	[GDT_ENTRY_APMBIOS_BASE+1] = { { { 0x0000ffff, 0x00009a00 } } },
+	/* data */
+	[GDT_ENTRY_APMBIOS_BASE+2] = { { { 0x0000ffff, 0x00409200 } } },
 
-	[GDT_ENTRY_ESPFIX_SS] = { 0x00000000, 0x00c09200 },
-	[GDT_ENTRY_PERCPU] = { 0x00000000, 0x00000000 },
+	[GDT_ENTRY_ESPFIX_SS] = { { { 0x00000000, 0x00c09200 } } },
+	[GDT_ENTRY_PERCPU] = { { { 0x00000000, 0x00000000 } } },
 } };
 EXPORT_PER_CPU_SYMBOL_GPL(gdt_page);
 
+__u32 cleared_cpu_caps[NCAPINTS] __cpuinitdata;
+
 static int cachesize_override __cpuinitdata = -1;
-static int disable_x86_fxsr __cpuinitdata;
 static int disable_x86_serial_nr __cpuinitdata = 1;
-static int disable_x86_sep __cpuinitdata;
 struct cpu_dev * cpu_devs[X86_VENDOR_NUM] = {};
-extern int disable_pse;
 
 static void __cpuinit default_init(struct cpuinfo_x86 * c)
 {
 	/* Not much we can do here... */
@@ -207,16 +212,8 @@ static void __cpuinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
 static int __init x86_fxsr_setup(char * s)
 {
-	/* Tell all the other CPUs to not use it... */
-	disable_x86_fxsr = 1;
-
-	/*
-	 * ... and clear the bits early in the boot_cpu_data
-	 * so that the bootup process doesn't try to do this
-	 * either.
-	 */
-	clear_bit(X86_FEATURE_FXSR, boot_cpu_data.x86_capability);
-	clear_bit(X86_FEATURE_XMM, boot_cpu_data.x86_capability);
+	setup_clear_cpu_cap(X86_FEATURE_FXSR);
+	setup_clear_cpu_cap(X86_FEATURE_XMM);
 	return 1;
 }
 __setup("nofxsr", x86_fxsr_setup);
@@ -224,7 +221,7 @@ __setup("nofxsr", x86_fxsr_setup);
 static int __init x86_sep_setup(char * s)
 {
-	disable_x86_sep = 1;
+	setup_clear_cpu_cap(X86_FEATURE_SEP);
 	return 1;
 }
 __setup("nosep", x86_sep_setup);
@@ -281,6 +278,33 @@ void __init cpu_detect(struct cpuinfo_x86 *c)
 		c->x86_cache_alignment = ((misc >> 8) & 0xff) * 8;
 	}
 }
+
+static void __cpuinit early_get_cap(struct cpuinfo_x86 *c)
+{
+	u32 tfms, xlvl;
+	int ebx;
+
+	memset(&c->x86_capability, 0, sizeof c->x86_capability);
+	if (have_cpuid_p()) {
+		/* Intel-defined flags: level 0x00000001 */
+		if (c->cpuid_level >= 0x00000001) {
+			u32 capability, excap;
+			cpuid(0x00000001, &tfms, &ebx, &excap, &capability);
+			c->x86_capability[0] = capability;
+			c->x86_capability[4] = excap;
+		}
+
+		/* AMD-defined flags: level 0x80000001 */
+		xlvl = cpuid_eax(0x80000000);
+		if ((xlvl & 0xffff0000) == 0x80000000) {
+			if (xlvl >= 0x80000001) {
+				c->x86_capability[1] = cpuid_edx(0x80000001);
+				c->x86_capability[6] = cpuid_ecx(0x80000001);
+			}
+		}
+	}
+}
 
 /* Do minimum CPU detection early.
    Fields really needed: vendor, cpuid_level, family, model, mask, cache alignment.
@@ -300,6 +324,17 @@ static void __init early_cpu_detect(void)
 	cpu_detect(c);
 
 	get_cpu_vendor(c, 1);
+
+	switch (c->x86_vendor) {
+	case X86_VENDOR_AMD:
+		early_init_amd(c);
+		break;
+	case X86_VENDOR_INTEL:
+		early_init_intel(c);
+		break;
+	}
+
+	early_get_cap(c);
 }
 
 static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
@@ -357,8 +392,6 @@ static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
 		init_scattered_cpuid_features(c);
 	}
 
-	early_intel_workaround(c);
-
 #ifdef CONFIG_X86_HT
 	c->phys_proc_id = (cpuid_ebx(1) >> 24) & 0xff;
 #endif
@@ -392,7 +425,7 @@ __setup("serialnumber", x86_serial_nr_setup);
 /*
  * This does the hard work of actually picking apart the CPU stuff...
  */
-static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 {
 	int i;
@@ -418,20 +451,9 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 
 	generic_identify(c);
 
-	printk(KERN_DEBUG "CPU: After generic identify, caps:");
-	for (i = 0; i < NCAPINTS; i++)
-		printk(" %08lx", c->x86_capability[i]);
-	printk("\n");
-
-	if (this_cpu->c_identify) {
+	if (this_cpu->c_identify)
 		this_cpu->c_identify(c);
-		printk(KERN_DEBUG "CPU: After vendor identify, caps:");
-		for (i = 0; i < NCAPINTS; i++)
-			printk(" %08lx", c->x86_capability[i]);
-		printk("\n");
-	}
 
 	/*
 	 * Vendor-specific initialization. In this section we
 	 * canonicalize the feature flags, meaning if there are
@@ -453,23 +475,6 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 	 * we do "generic changes."
 	 */
 
-	/* TSC disabled? */
-	if ( tsc_disable )
-		clear_bit(X86_FEATURE_TSC, c->x86_capability);
-
-	/* FXSR disabled? */
-	if (disable_x86_fxsr) {
-		clear_bit(X86_FEATURE_FXSR, c->x86_capability);
-		clear_bit(X86_FEATURE_XMM, c->x86_capability);
-	}
-
-	/* SEP disabled? */
-	if (disable_x86_sep)
-		clear_bit(X86_FEATURE_SEP, c->x86_capability);
-
-	if (disable_pse)
-		clear_bit(X86_FEATURE_PSE, c->x86_capability);
-
 	/* If the model name is still unset, do table lookup. */
 	if ( !c->x86_model_id[0] ) {
 		char *p;
@@ -482,13 +487,6 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 			c->x86, c->x86_model);
 	}
 
-	/* Now the feature flags better reflect actual CPU features! */
-
-	printk(KERN_DEBUG "CPU: After all inits, caps:");
-	for (i = 0; i < NCAPINTS; i++)
-		printk(" %08lx", c->x86_capability[i]);
-	printk("\n");
-
 	/*
 	 * On SMP, boot_cpu_data holds the common feature set between
 	 * all CPUs; so make sure that we indicate which features are
@@ -501,8 +499,14 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 			boot_cpu_data.x86_capability[i] &= c->x86_capability[i];
 	}
 
+	/* Clear all flags overriden by options */
+	for (i = 0; i < NCAPINTS; i++)
+		c->x86_capability[i] ^= cleared_cpu_caps[i];
+
 	/* Init Machine Check Exception if available. */
 	mcheck_init(c);
+
+	select_idle_routine(c);
 }
 
 void __init identify_boot_cpu(void)
@@ -510,7 +514,6 @@ void __init identify_boot_cpu(void)
 	identify_cpu(&boot_cpu_data);
 	sysenter_setup();
 	enable_sep_cpu();
-	mtrr_bp_init();
 }
 
 void __cpuinit identify_secondary_cpu(struct cpuinfo_x86 *c)
@@ -567,6 +570,13 @@ void __cpuinit detect_ht(struct cpuinfo_x86 *c)
 }
 #endif
 
+static __init int setup_noclflush(char *arg)
+{
+	setup_clear_cpu_cap(X86_FEATURE_CLFLSH);
+	return 1;
+}
+__setup("noclflush", setup_noclflush);
+
 void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
 {
 	char *vendor = NULL;
@@ -590,6 +600,17 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
 	printk("\n");
 }
 
+static __init int setup_disablecpuid(char *arg)
+{
+	int bit;
+	if (get_option(&arg, &bit) && bit < NCAPINTS*32)
+		setup_clear_cpu_cap(bit);
+	else
+		return 0;
+	return 1;
+}
+__setup("clearcpuid=", setup_disablecpuid);
+
 cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
 
 /* This is hacky. :)
@@ -620,21 +641,13 @@ void __init early_cpu_init(void)
 	nexgen_init_cpu();
 	umc_init_cpu();
 	early_cpu_detect();
-
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	/* pse is not compatible with on-the-fly unmapping,
-	 * disable it even if the cpus claim to support it.
-	 */
-	clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
-	disable_pse = 1;
-#endif
 }
 
 /* Make sure %fs is initialized properly in idle threads */
 struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
 {
 	memset(regs, 0, sizeof(struct pt_regs));
-	regs->xfs = __KERNEL_PERCPU;
+	regs->fs = __KERNEL_PERCPU;
 	return regs;
 }
@@ -642,7 +655,7 @@ struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
  * it's on the real one. */
 void switch_to_new_gdt(void)
 {
-	struct Xgt_desc_struct gdt_descr;
+	struct desc_ptr gdt_descr;
 
 	gdt_descr.address = (long)get_cpu_gdt_table(smp_processor_id());
 	gdt_descr.size = GDT_SIZE - 1;
@@ -672,12 +685,6 @@ void __cpuinit cpu_init(void)
 	if (cpu_has_vme || cpu_has_tsc || cpu_has_de)
 		clear_in_cr4(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
 
-	if (tsc_disable && cpu_has_tsc) {
-		printk(KERN_NOTICE "Disabling TSC...\n");
-		/**** FIX-HPA: DOES THIS REALLY BELONG HERE? ****/
-		clear_bit(X86_FEATURE_TSC, boot_cpu_data.x86_capability);
-		set_in_cr4(X86_CR4_TSD);
-	}
-
 	load_idt(&idt_descr);
 	switch_to_new_gdt();
@@ -691,7 +698,7 @@ void __cpuinit cpu_init(void)
 		BUG();
 	enter_lazy_tlb(&init_mm, curr);
 
-	load_esp0(t, thread);
+	load_sp0(t, thread);
 	set_tss_desc(cpu,t);
 	load_TR_desc();
 	load_LDT(&init_mm.context);


@@ -24,5 +24,6 @@ extern struct cpu_dev * cpu_devs [X86_VENDOR_NUM];
 extern int get_model_name(struct cpuinfo_x86 *c);
 extern void display_cacheinfo(struct cpuinfo_x86 *c);
-extern void early_intel_workaround(struct cpuinfo_x86 *c);
+extern void early_init_intel(struct cpuinfo_x86 *c);
+extern void early_init_amd(struct cpuinfo_x86 *c);


@@ -67,7 +67,8 @@ struct acpi_cpufreq_data {
 	unsigned int cpu_feature;
 };
 
-static struct acpi_cpufreq_data *drv_data[NR_CPUS];
+static DEFINE_PER_CPU(struct acpi_cpufreq_data *, drv_data);
+
 /* acpi_perf_data is a pointer to percpu data. */
 static struct acpi_processor_performance *acpi_perf_data;
@@ -218,14 +219,14 @@ static u32 get_cur_val(cpumask_t mask)
 	if (unlikely(cpus_empty(mask)))
 		return 0;
 
-	switch (drv_data[first_cpu(mask)]->cpu_feature) {
+	switch (per_cpu(drv_data, first_cpu(mask))->cpu_feature) {
 	case SYSTEM_INTEL_MSR_CAPABLE:
 		cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
 		cmd.addr.msr.reg = MSR_IA32_PERF_STATUS;
 		break;
 	case SYSTEM_IO_CAPABLE:
 		cmd.type = SYSTEM_IO_CAPABLE;
-		perf = drv_data[first_cpu(mask)]->acpi_data;
+		perf = per_cpu(drv_data, first_cpu(mask))->acpi_data;
 		cmd.addr.io.port = perf->control_register.address;
 		cmd.addr.io.bit_width = perf->control_register.bit_width;
 		break;
@@ -325,7 +326,7 @@ static unsigned int get_measured_perf(unsigned int cpu)
 #endif
 
-	retval = drv_data[cpu]->max_freq * perf_percent / 100;
+	retval = per_cpu(drv_data, cpu)->max_freq * perf_percent / 100;
 
 	put_cpu();
 	set_cpus_allowed(current, saved_mask);
@@ -336,7 +337,7 @@ static unsigned int get_measured_perf(unsigned int cpu)
 static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
 {
-	struct acpi_cpufreq_data *data = drv_data[cpu];
+	struct acpi_cpufreq_data *data = per_cpu(drv_data, cpu);
 	unsigned int freq;
 
 	dprintk("get_cur_freq_on_cpu (%d)\n", cpu);
@@ -370,7 +371,7 @@ static unsigned int check_freqs(cpumask_t mask, unsigned int freq,
 static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 			       unsigned int target_freq, unsigned int relation)
 {
-	struct acpi_cpufreq_data *data = drv_data[policy->cpu];
+	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
 	struct acpi_processor_performance *perf;
 	struct cpufreq_freqs freqs;
 	cpumask_t online_policy_cpus;
@@ -466,7 +467,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 static int acpi_cpufreq_verify(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = drv_data[policy->cpu];
+	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_verify\n");
@@ -570,7 +571,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 		return -ENOMEM;
 
 	data->acpi_data = percpu_ptr(acpi_perf_data, cpu);
-	drv_data[cpu] = data;
+	per_cpu(drv_data, cpu) = data;
 
 	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC))
 		acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS;
@@ -714,20 +715,20 @@ err_unreg:
 	acpi_processor_unregister_performance(perf, cpu);
 err_free:
 	kfree(data);
-	drv_data[cpu] = NULL;
+	per_cpu(drv_data, cpu) = NULL;
 
 	return result;
 }
 
 static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = drv_data[policy->cpu];
+	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_cpu_exit\n");
 
 	if (data) {
 		cpufreq_frequency_table_put_attr(policy->cpu);
-		drv_data[policy->cpu] = NULL;
+		per_cpu(drv_data, policy->cpu) = NULL;
 		acpi_processor_unregister_performance(data->acpi_data,
 						      policy->cpu);
 		kfree(data);
@@ -738,7 +739,7 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 static int acpi_cpufreq_resume(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = drv_data[policy->cpu];
+	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_resume\n");


@@ -694,7 +694,7 @@ static acpi_status longhaul_walk_callback(acpi_handle obj_handle,
 	if ( acpi_bus_get_device(obj_handle, &d) ) {
 		return 0;
 	}
-	*return_value = (void *)acpi_driver_data(d);
+	*return_value = acpi_driver_data(d);
 	return 1;
 }


@@ -52,7 +52,7 @@
 /* serialize freq changes  */
 static DEFINE_MUTEX(fidvid_mutex);
-static struct powernow_k8_data *powernow_data[NR_CPUS];
+static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
 
 static int cpu_family = CPU_OPTERON;
@@ -1018,7 +1018,7 @@ static int transition_frequency_pstate(struct powernow_k8_data *data, unsigned i
 static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsigned relation)
 {
 	cpumask_t oldmask = CPU_MASK_ALL;
-	struct powernow_k8_data *data = powernow_data[pol->cpu];
+	struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
 	u32 checkfid;
 	u32 checkvid;
 	unsigned int newstate;
@@ -1094,7 +1094,7 @@ err_out:
 /* Driver entry point to verify the policy and range of frequencies */
 static int powernowk8_verify(struct cpufreq_policy *pol)
 {
-	struct powernow_k8_data *data = powernow_data[pol->cpu];
+	struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
 
 	if (!data)
 		return -EINVAL;
@@ -1202,7 +1202,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
 	dprintk("cpu_init done, current fid 0x%x, vid 0x%x\n",
 		data->currfid, data->currvid);
 
-	powernow_data[pol->cpu] = data;
+	per_cpu(powernow_data, pol->cpu) = data;
 
 	return 0;
@@ -1216,7 +1216,7 @@ err_out:
 static int __devexit powernowk8_cpu_exit (struct cpufreq_policy *pol)
 {
-	struct powernow_k8_data *data = powernow_data[pol->cpu];
+	struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
 
 	if (!data)
 		return -EINVAL;
@@ -1237,7 +1237,7 @@ static unsigned int powernowk8_get (unsigned int cpu)
 	cpumask_t oldmask = current->cpus_allowed;
 	unsigned int khz = 0;
 
-	data = powernow_data[first_cpu(per_cpu(cpu_core_map, cpu))];
+	data = per_cpu(powernow_data, first_cpu(per_cpu(cpu_core_map, cpu)));
 
 	if (!data)
 		return -EINVAL;


@@ -5,6 +5,7 @@
 #include <asm/dma.h>
 #include <asm/io.h>
 #include <asm/processor-cyrix.h>
+#include <asm/processor-flags.h>
 #include <asm/timer.h>
 #include <asm/pci-direct.h>
 #include <asm/tsc.h>
@@ -126,15 +127,12 @@ static void __cpuinit set_cx86_reorder(void)
 static void __cpuinit set_cx86_memwb(void)
 {
-	u32 cr0;
-
 	printk(KERN_INFO "Enable Memory-Write-back mode on Cyrix/NSC processor.\n");
 
 	/* CCR2 bit 2: unlock NW bit */
 	setCx86(CX86_CCR2, getCx86(CX86_CCR2) & ~0x04);
 	/* set 'Not Write-through' */
-	cr0 = 0x20000000;
-	write_cr0(read_cr0() | cr0);
+	write_cr0(read_cr0() | X86_CR0_NW);
 	/* CCR2 bit 2: lock NW bit and set WT1 */
 	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x14 );
 }


@@ -11,6 +11,8 @@
 #include <asm/pgtable.h>
 #include <asm/msr.h>
 #include <asm/uaccess.h>
+#include <asm/ptrace.h>
+#include <asm/ds.h>
 
 #include "cpu.h"
@@ -27,13 +29,14 @@
 struct movsl_mask movsl_mask __read_mostly;
 #endif
 
-void __cpuinit early_intel_workaround(struct cpuinfo_x86 *c)
+void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 {
-	if (c->x86_vendor != X86_VENDOR_INTEL)
-		return;
 	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
 	if (c->x86 == 15 && c->x86_cache_alignment == 64)
 		c->x86_cache_alignment = 128;
+
+	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
+	    (c->x86 == 0x6 && c->x86_model >= 0x0e))
+		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 }
 
 /*
@@ -113,6 +116,8 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	unsigned int l2 = 0;
 	char *p = NULL;
 
+	early_init_intel(c);
+
 #ifdef CONFIG_X86_F00F_BUG
 	/*
 	 * All current models of Pentium and Pentium with MMX technology CPUs
@@ -132,7 +137,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	}
 #endif
 
-	select_idle_routine(c);
 	l2 = init_intel_cacheinfo(c);
 	if (c->cpuid_level > 9 ) {
 		unsigned eax = cpuid_eax(10);
@@ -201,16 +205,13 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	}
 #endif
 
-	if (cpu_has_xmm2)
-		set_bit(X86_FEATURE_LFENCE_RDTSC, c->x86_capability);
 	if (c->x86 == 15) {
 		set_bit(X86_FEATURE_P4, c->x86_capability);
-		set_bit(X86_FEATURE_SYNC_RDTSC, c->x86_capability);
 	}
 	if (c->x86 == 6)
 		set_bit(X86_FEATURE_P3, c->x86_capability);
-	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
-		(c->x86 == 0x6 && c->x86_model >= 0x0e))
-		set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
 	if (cpu_has_ds) {
 		unsigned int l1;
 		rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
@@ -219,6 +220,9 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 		if (!(l1 & (1<<12)))
 			set_bit(X86_FEATURE_PEBS, c->x86_capability);
 	}
+
+	if (cpu_has_bts)
+		ds_init_intel(c);
 }
 
 static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
@@ -342,5 +346,22 @@ unsigned long cmpxchg_386_u32(volatile void *ptr, u32 old, u32 new)
 EXPORT_SYMBOL(cmpxchg_386_u32);
 #endif
 
+#ifndef CONFIG_X86_CMPXCHG64
+unsigned long long cmpxchg_486_u64(volatile void *ptr, u64 old, u64 new)
+{
+	u64 prev;
+	unsigned long flags;
+
+	/* Poor man's cmpxchg8b for 386 and 486. Unsuitable for SMP */
+	local_irq_save(flags);
+	prev = *(u64 *)ptr;
+	if (prev == old)
+		*(u64 *)ptr = new;
+	local_irq_restore(flags);
+	return prev;
+}
+EXPORT_SYMBOL(cmpxchg_486_u64);
+#endif
+
 // arch_initcall(intel_cpu_init);


@@ -16,7 +16,7 @@
 #include "mce.h"
 
 /* Machine Check Handler For AMD Athlon/Duron */
-static fastcall void k7_machine_check(struct pt_regs * regs, long error_code)
+static void k7_machine_check(struct pt_regs * regs, long error_code)
 {
 	int recover=1;
 	u32 alow, ahigh, high, low;
@@ -27,29 +27,32 @@ static fastcall void k7_machine_check(struct pt_regs * regs, long error_code)
 	if (mcgstl & (1<<0))	/* Recoverable ? */
 		recover=0;
 
-	printk (KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
+	printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
 		smp_processor_id(), mcgsth, mcgstl);
 
-	for (i=1; i<nr_mce_banks; i++) {
-		rdmsr (MSR_IA32_MC0_STATUS+i*4,low, high);
+	for (i = 1; i < nr_mce_banks; i++) {
+		rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
 		if (high&(1<<31)) {
+			char misc[20];
+			char addr[24];
+			misc[0] = addr[0] = '\0';
 			if (high & (1<<29))
 				recover |= 1;
 			if (high & (1<<25))
 				recover |= 2;
-			printk (KERN_EMERG "Bank %d: %08x%08x", i, high, low);
 			high &= ~(1<<31);
 			if (high & (1<<27)) {
-				rdmsr (MSR_IA32_MC0_MISC+i*4, alow, ahigh);
-				printk ("[%08x%08x]", ahigh, alow);
+				rdmsr(MSR_IA32_MC0_MISC+i*4, alow, ahigh);
+				snprintf(misc, 20, "[%08x%08x]", ahigh, alow);
 			}
 			if (high & (1<<26)) {
-				rdmsr (MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
-				printk (" at %08x%08x", ahigh, alow);
+				rdmsr(MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
+				snprintf(addr, 24, " at %08x%08x", ahigh, alow);
 			}
-			printk ("\n");
+			printk(KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
+				smp_processor_id(), i, high, low, misc, addr);
 			/* Clear it */
-			wrmsr (MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
+			wrmsr(MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
 			/* Serialize */
 			wmb();
 			add_taint(TAINT_MACHINE_CHECK);
View File

@ -8,7 +8,7 @@ void intel_p6_mcheck_init(struct cpuinfo_x86 *c);
void winchip_mcheck_init(struct cpuinfo_x86 *c); void winchip_mcheck_init(struct cpuinfo_x86 *c);
/* Call the installed machine check handler for this CPU setup. */ /* Call the installed machine check handler for this CPU setup. */
extern fastcall void (*machine_check_vector)(struct pt_regs *, long error_code); extern void (*machine_check_vector)(struct pt_regs *, long error_code);
extern int nr_mce_banks; extern int nr_mce_banks;


@@ -22,13 +22,13 @@ int nr_mce_banks;
 EXPORT_SYMBOL_GPL(nr_mce_banks);	/* non-fatal.o */
 
 /* Handle unconfigured int18 (should never happen) */
-static fastcall void unexpected_machine_check(struct pt_regs * regs, long error_code)
+static void unexpected_machine_check(struct pt_regs * regs, long error_code)
 {
 	printk(KERN_ERR "CPU#%d: Unexpected int18 (Machine Check).\n", smp_processor_id());
 }
 
 /* Call the installed machine check handler for this CPU setup. */
-void fastcall (*machine_check_vector)(struct pt_regs *, long error_code) = unexpected_machine_check;
+void (*machine_check_vector)(struct pt_regs *, long error_code) = unexpected_machine_check;
 
 /* This has to be run for each processor */
 void mcheck_init(struct cpuinfo_x86 *c)


@@ -63,7 +63,7 @@ static DECLARE_WAIT_QUEUE_HEAD(mce_wait);
  * separate MCEs from kernel messages to avoid bogus bug reports.
  */
 
-struct mce_log mcelog = {
+static struct mce_log mcelog = {
 	MCE_LOG_SIGNATURE,
 	MCE_LOG_LEN,
 };
@@ -80,7 +80,7 @@ void mce_log(struct mce *mce)
 		/* When the buffer fills up discard new entries. Assume
 		   that the earlier errors are the more interesting. */
 		if (entry >= MCE_LOG_LEN) {
-			set_bit(MCE_OVERFLOW, &mcelog.flags);
+			set_bit(MCE_OVERFLOW, (unsigned long *)&mcelog.flags);
 			return;
 		}
 		/* Old left over entry. Skip. */
@@ -110,12 +110,12 @@ static void print_mce(struct mce *m)
 	       KERN_EMERG
 	       "CPU %d: Machine Check Exception: %16Lx Bank %d: %016Lx\n",
 	       m->cpu, m->mcgstatus, m->bank, m->status);
-	if (m->rip) {
+	if (m->ip) {
 		printk(KERN_EMERG "RIP%s %02x:<%016Lx> ",
 		       !(m->mcgstatus & MCG_STATUS_EIPV) ? " !INEXACT!" : "",
-		       m->cs, m->rip);
+		       m->cs, m->ip);
 		if (m->cs == __KERNEL_CS)
-			print_symbol("{%s}", m->rip);
+			print_symbol("{%s}", m->ip);
 		printk("\n");
 	}
 	printk(KERN_EMERG "TSC %Lx ", m->tsc);
@@ -156,16 +156,16 @@ static int mce_available(struct cpuinfo_x86 *c)
 static inline void mce_get_rip(struct mce *m, struct pt_regs *regs)
 {
 	if (regs && (m->mcgstatus & MCG_STATUS_RIPV)) {
-		m->rip = regs->rip;
+		m->ip = regs->ip;
 		m->cs = regs->cs;
 	} else {
-		m->rip = 0;
+		m->ip = 0;
 		m->cs = 0;
 	}
 	if (rip_msr) {
 		/* Assume the RIP in the MSR is exact. Is this true? */
 		m->mcgstatus |= MCG_STATUS_EIPV;
-		rdmsrl(rip_msr, m->rip);
+		rdmsrl(rip_msr, m->ip);
 		m->cs = 0;
 	}
 }
@@ -192,10 +192,10 @@ void do_machine_check(struct pt_regs * regs, long error_code)
 
 	atomic_inc(&mce_entry);
 
-	if (regs)
-		notify_die(DIE_NMI, "machine check", regs, error_code, 18,
-			   SIGKILL);
-	if (!banks)
+	if ((regs
+	     && notify_die(DIE_NMI, "machine check", regs, error_code,
+			   18, SIGKILL) == NOTIFY_STOP)
+	    || !banks)
 		goto out2;
 
 	memset(&m, 0, sizeof(struct mce));
@@ -288,7 +288,7 @@ void do_machine_check(struct pt_regs * regs, long error_code)
 	 * instruction which caused the MCE.
 	 */
 	if (m.mcgstatus & MCG_STATUS_EIPV)
-		user_space = panicm.rip && (panicm.cs & 3);
+		user_space = panicm.ip && (panicm.cs & 3);
 
 	/*
 	 * If we know that the error was in user space, send a
@@ -564,7 +564,7 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
 			loff_t *off)
 {
 	unsigned long *cpu_tsc;
-	static DECLARE_MUTEX(mce_read_sem);
+	static DEFINE_MUTEX(mce_read_mutex);
 	unsigned next;
 	char __user *buf = ubuf;
 	int i, err;
@@ -573,12 +573,12 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
 	if (!cpu_tsc)
 		return -ENOMEM;
 
-	down(&mce_read_sem);
+	mutex_lock(&mce_read_mutex);
 	next = rcu_dereference(mcelog.next);
 
 	/* Only supports full reads right now */
 	if (*off != 0 || usize < MCE_LOG_LEN*sizeof(struct mce)) {
-		up(&mce_read_sem);
+		mutex_unlock(&mce_read_mutex);
 		kfree(cpu_tsc);
 		return -EINVAL;
 	}
@@ -621,7 +621,7 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
 			memset(&mcelog.entry[i], 0, sizeof(struct mce));
 		}
 	}
-	up(&mce_read_sem);
+	mutex_unlock(&mce_read_mutex);
 	kfree(cpu_tsc);
 	return err ? -EFAULT : buf - ubuf;
 }
@@ -634,8 +634,7 @@ static unsigned int mce_poll(struct file *file, poll_table *wait)
 	return 0;
 }
 
-static int mce_ioctl(struct inode *i, struct file *f,unsigned int cmd,
-		     unsigned long arg)
+static long mce_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
 {
 	int __user *p = (int __user *)arg;
@@ -664,7 +663,7 @@ static const struct file_operations mce_chrdev_ops = {
 	.release = mce_release,
 	.read = mce_read,
 	.poll = mce_poll,
-	.ioctl = mce_ioctl,
+	.unlocked_ioctl = mce_ioctl,
 };
 
 static struct miscdevice mce_log_device = {
@@ -855,8 +854,8 @@ static void mce_remove_device(unsigned int cpu)
 }
 
 /* Get notified when a cpu comes on/off. Be hotplug friendly. */
-static int
-mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+static int __cpuinit mce_cpu_callback(struct notifier_block *nfb,
+				      unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (unsigned long)hcpu;
@@ -873,7 +872,7 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	return NOTIFY_OK;
 }
 
-static struct notifier_block mce_cpu_notifier = {
+static struct notifier_block mce_cpu_notifier __cpuinitdata = {
 	.notifier_call = mce_cpu_callback,
 };
