Merge branch 'master' of /usr/src/ntfs-2.6/

Anton Altaparmakov 2006-03-23 17:06:08 +00:00
commit 92fe7b9ea8
320 changed files with 6338 additions and 4810 deletions


@ -1008,7 +1008,9 @@ running once the system is up.
noexec=on: enable non-executable mappings (default)
noexec=off: disable non-executable mappings
nofxsr [BUGS=IA-32]
nofxsr [BUGS=IA-32] Disables x86 floating point extended
register save and restore. The kernel will only save
legacy floating-point registers on task switch.
nohlt [BUGS=ARM]
@ -1053,6 +1055,8 @@ running once the system is up.
nosbagart [IA-64]
nosep [BUGS=IA-32] Disables x86 SYSENTER/SYSEXIT support.
nosmp [SMP] Tells an SMP kernel to act as a UP kernel.
nosync [HW,M68K] Disables sync negotiation for all devices.
@ -1122,6 +1126,11 @@ running once the system is up.
pas16= [HW,SCSI]
See header of drivers/scsi/pas16.c.
pause_on_oops=
Halt all CPUs after the first oops has been printed for
the specified number of seconds. This is to be used if
your oopses keep scrolling off the screen.
pcbit= [HW,ISDN]
pcd. [PARIDE]


@ -17,6 +17,11 @@ Some warnings, first.
* but it will probably only crash.
*
* (*) suspend/resume support is needed to make it safe.
*
* If you have any filesystems on USB devices mounted before suspend,
* they won't be accessible after resume and you may lose data, as though
* you had unplugged the USB devices with mounted filesystems on them
* (see the FAQ below for details).
You need to append resume=/dev/your_swap_partition to the kernel command
line. Then you suspend by
@ -27,19 +32,18 @@ echo shutdown > /sys/power/disk; echo disk > /sys/power/state
echo platform > /sys/power/disk; echo disk > /sys/power/state
. If you have SATA disks, you'll need recent kernels with SATA suspend
support. For suspend and resume to work, make sure your disk drivers
are built into the kernel -- not as modules. [There is a way to make
suspend/resume work with modular disk drivers, see the FAQ, but you
probably should not do that.]
If you want to limit the suspend image size to N bytes, do
echo N > /sys/power/image_size
before suspend (it is limited to 500 MB by default).
Encrypted suspend image:
------------------------
If you want to store your suspend image encrypted with a temporary
key to prevent data gathering after resume, you must compile the
crypto API and the AES algorithm into the kernel -- modules won't
work, as they cannot be loaded at resume time.
Article about goals and implementation of Software Suspend for Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -333,4 +337,37 @@ init=/bin/bash, then swapon and starting suspend sequence manually
usually does the trick. Then it is a good idea to try with the latest
vanilla kernel.
Q: How can distributions ship a swsusp-supporting kernel with modular
disk drivers (especially SATA)?
A: Well, it can be done: load the drivers, then do the echo into the
/sys/power/disk/resume file from the initrd. Be sure not to mount
anything, not even a read-only mount, or you are going to lose your
data.
Q: How do I make suspend more verbose?
A: If you want to see any non-error kernel messages on the virtual
terminal the kernel switches to during suspend, you have to set the
kernel console loglevel to at least 5, for example by doing
echo 5 > /proc/sys/kernel/printk
Q: Is it true that if I have a mounted filesystem on a USB device and
I suspend to disk, I can lose data unless the filesystem has been mounted
with "sync"?
A: That's right. It depends on your hardware, and it could be true even for
suspend-to-RAM. In fact, even with "-o sync" you can lose data if your
programs have information in buffers they haven't written out to disk.
If you're lucky, your hardware will support low-power modes for USB
controllers while the system is asleep. Lots of hardware doesn't,
however. Shutting off the power to a USB controller is equivalent to
unplugging all the attached devices.
Remember that it's always a bad idea to unplug a disk drive containing a
mounted filesystem. With USB that's true even when your system is asleep!
The safest thing is to unmount all USB-based filesystems before suspending
and remount them after resuming.


@ -0,0 +1,149 @@
Documentation for userland software suspend interface
(C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
First, the warnings at the beginning of swsusp.txt still apply.
Second, you should read the FAQ in swsusp.txt _now_ if you have not
done so already.
Now, to use the userland interface for software suspend you need special
utilities that will read/write the system memory snapshot from/to the
kernel. Such utilities are available, for example, from
<http://www.sisk.pl/kernel/utilities/suspend>. You may want to have
a look at them if you are going to develop your own suspend/resume
utilities.
The interface consists of a character device providing the open(),
release(), read(), and write() operations as well as several ioctl()
commands defined in kernel/power/power.h. The major and minor
numbers of the device are, respectively, 10 and 231, and they can
be read from /sys/class/misc/snapshot/dev.
The device can be opened either for reading or for writing. If opened
for reading, it is considered to be in the suspend mode. Otherwise it is
assumed to be in the resume mode. The device cannot be opened for both
reading and writing at the same time. It is also impossible to have the
device open more than once at a time.
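For illustration only, opening the device in the suspend mode could look
like the sketch below. Note that this example is not part of the
interface specification, and the /dev/snapshot path is an assumption --
only the major and minor numbers are specified above.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* O_RDONLY selects the suspend mode; O_WRONLY would select the
	   resume mode.  Opening for both reading and writing fails. */
	int fd = open("/dev/snapshot", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/snapshot");
		return 1;
	}
	/* ... SNAPSHOT_* ioctl()s and read()s go here ... */
	close(fd);
	return 0;
}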
The ioctl() commands recognized by the device are:
SNAPSHOT_FREEZE - freeze user space processes (the current process is
not frozen); this is required for SNAPSHOT_ATOMIC_SNAPSHOT
and SNAPSHOT_ATOMIC_RESTORE to succeed
SNAPSHOT_UNFREEZE - thaw user space processes frozen by SNAPSHOT_FREEZE
SNAPSHOT_ATOMIC_SNAPSHOT - create a snapshot of the system memory; the
last argument of ioctl() should be a pointer to an int variable,
the value of which will indicate whether the call returned after
creating the snapshot (1) or after restoring the system memory state
from it (0) (after resume the system finds itself finishing the
SNAPSHOT_ATOMIC_SNAPSHOT ioctl() again); after the snapshot
has been created the read() operation can be used to transfer
it out of the kernel
SNAPSHOT_ATOMIC_RESTORE - restore the system memory state from the
uploaded snapshot image; before calling it you should transfer
the system memory snapshot back to the kernel using the write()
operation; this call will not succeed if the snapshot
image is not available to the kernel
SNAPSHOT_FREE - free memory allocated for the snapshot image
SNAPSHOT_SET_IMAGE_SIZE - set the preferred maximum size of the image
(the kernel will do its best to ensure the image size will not exceed
this number, but if it turns out to be impossible, the kernel will
create the smallest image possible)
SNAPSHOT_AVAIL_SWAP - return the amount of available swap in bytes (the last
argument should be a pointer to an unsigned int variable that will
contain the result if the call is successful).
SNAPSHOT_GET_SWAP_PAGE - allocate a swap page from the resume partition
(the last argument should be a pointer to a loff_t variable that
will contain the swap page offset if the call is successful)
SNAPSHOT_FREE_SWAP_PAGES - free all swap pages allocated with
SNAPSHOT_GET_SWAP_PAGE
SNAPSHOT_SET_SWAP_FILE - set the resume partition (the last ioctl() argument
should specify the device's major and minor numbers in the old
two-byte format, as returned by the stat() function in the .st_rdev
member of the stat structure); it is recommended to always use this
call, because the code to set the resume partition could be removed from
future kernels
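As a sketch (the SNAPSHOT_SET_SWAP_FILE request number itself must be
taken from kernel/power/power.h; major() and minor() are the usual libc
macros), the old two-byte format can be built from stat() output like
this:

#include <sys/stat.h>
#include <sys/types.h>

/* Encode a device number in the old two-byte format: major number
   in the high byte, minor number in the low byte. */
static unsigned long old_two_byte_dev(const struct stat *st)
{
	return ((major(st->st_rdev) & 0xff) << 8) |
	        (minor(st->st_rdev) & 0xff);
}

The resulting value is then passed as the last argument of the
SNAPSHOT_SET_SWAP_FILE ioctl().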
The device's read() operation can be used to transfer the snapshot image from
the kernel. It has the following limitations:
- you cannot read() more than one virtual memory page at a time
- read()s across page boundaries are impossible (i.e. if you read() 1/2 of
a page in the previous call, you will only be able to read()
_at_ _most_ 1/2 of the page in the next call)
The device's write() operation is used for uploading the system memory snapshot
into the kernel. It has the same limitations as the read() operation.
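A minimal transfer loop that respects these limitations might look as
follows; this is a sketch, save_image() and out_fd are illustrative
names, and error handling is abbreviated:

#include <stdlib.h>
#include <unistd.h>

/* Copy the snapshot image from the kernel to out_fd, reading at most
   one page at a time.  A short read is handled naturally: the next
   read() will return at most the rest of that page. */
static int save_image(int snapshot_fd, int out_fd)
{
	long page_size = sysconf(_SC_PAGESIZE);
	char *buf = malloc(page_size);
	ssize_t n;
	int ret = 0;

	if (!buf)
		return -1;
	while ((n = read(snapshot_fd, buf, page_size)) > 0)
		if (write(out_fd, buf, n) != n) {
			ret = -1;
			break;
		}
	if (n < 0)
		ret = -1;
	free(buf);
	return ret;
}

The same loop shape, with the roles of read() and write() swapped, can
be used for uploading the image in the resume mode.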
The release() operation frees all memory allocated for the snapshot image
and all swap pages allocated with SNAPSHOT_GET_SWAP_PAGE (if any).
Thus it is not necessary to use either SNAPSHOT_FREE or
SNAPSHOT_FREE_SWAP_PAGES before closing the device (in fact it will also
unfreeze user space processes frozen by SNAPSHOT_FREEZE if they are
still frozen when the device is being closed).
Currently it is assumed that the userland utilities reading/writing the
snapshot image from/to the kernel will use a swap partition, called the resume
partition, as storage space. However, this is not really required, as they
can use, for example, a special (blank) suspend partition or a file on a partition
that is unmounted before SNAPSHOT_ATOMIC_SNAPSHOT and mounted afterwards.
These utilities SHOULD NOT make any assumptions regarding the ordering of
data within the snapshot image, except for the image header that MAY be
assumed to start with an swsusp_info structure, as specified in
kernel/power/power.h. This structure MAY be used by the userland utilities
to obtain some information about the snapshot image, such as the size
of the snapshot image, including the metadata and the header itself,
contained in the .size member of swsusp_info.
The snapshot image MUST be written to the kernel unaltered (i.e. all of the
image data, metadata and header MUST be written in _exactly_ the same amount,
form and order in which they have been read). Otherwise, the behavior of the
resumed system may be totally unpredictable.
While executing SNAPSHOT_ATOMIC_RESTORE the kernel checks if the
structure of the snapshot image is consistent with the information stored
in the image header. If any inconsistencies are detected,
SNAPSHOT_ATOMIC_RESTORE will not succeed. Still, this is not a fool-proof
mechanism and the userland utilities using the interface SHOULD use additional
means, such as checksums, to ensure the integrity of the snapshot image.
The suspending and resuming utilities MUST lock themselves in memory,
preferably using mlockall(), before calling SNAPSHOT_FREEZE.
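For example (a sketch using the standard mlockall() interface):

#include <stdio.h>
#include <sys/mman.h>

/* Lock all current and future pages of this process in RAM, so the
   utility itself cannot be paged out after user space has been frozen
   with SNAPSHOT_FREEZE. */
static int lock_self_in_memory(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
		perror("mlockall");
		return -1;	/* do not call SNAPSHOT_FREEZE */
	}
	return 0;
}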
The suspending utility MUST check the value stored by SNAPSHOT_ATOMIC_SNAPSHOT
in the memory location pointed to by the last argument of ioctl() and proceed
in accordance with it (a sketch follows the list):
1. If the value is 1 (i.e. the system memory snapshot has just been
created and the system is ready for saving it):
(a) The suspending utility MUST NOT close the snapshot device
_unless_ the whole suspend procedure is to be cancelled, in
which case, if the snapshot image has already been saved, the
suspending utility SHOULD destroy it, preferably by zapping
its header. If the suspend is not to be cancelled, the
system MUST be powered off or rebooted after the snapshot
image has been saved.
(b) The suspending utility SHOULD NOT attempt to perform any
file system operations (including reads) on the file systems
that were mounted before SNAPSHOT_ATOMIC_SNAPSHOT has been
called. However, it MAY mount a file system that was not
mounted at that time and perform some operations on it (e.g.
use it for saving the image).
2. If the value is 0 (i.e. the system state has just been restored from
the snapshot image), the suspending utility MUST close the snapshot
device. Afterwards it will be treated as a regular userland process,
so it need not exit.
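Putting the above together, the suspend-side control flow can be
sketched as follows. This is only an illustration: the SNAPSHOT_*
definitions below are placeholders (the authoritative ones live in
kernel/power/power.h), and save_image() is the transfer loop sketched
earlier.

#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Placeholder request numbers -- use the definitions from
   kernel/power/power.h instead. */
#ifndef SNAPSHOT_FREEZE
#define SNAPSHOT_FREEZE			_IO('3', 1)
#define SNAPSHOT_ATOMIC_SNAPSHOT	_IOW('3', 3, void *)
#endif

int save_image(int snapshot_fd, int out_fd);	/* see the read() sketch */

static int suspend_to_image(int snapshot_fd, int out_fd)
{
	int in_suspend;

	if (ioctl(snapshot_fd, SNAPSHOT_FREEZE, 0) < 0)
		return -1;
	if (ioctl(snapshot_fd, SNAPSHOT_ATOMIC_SNAPSHOT, &in_suspend) < 0)
		return -1;
	if (in_suspend) {
		/* Value 1: the snapshot has just been created.  Save it
		   and power off; do NOT close the device unless the whole
		   suspend is being cancelled (see item 1 above). */
		return save_image(snapshot_fd, out_fd);
	}
	/* Value 0: we are running again after a restore.  Close the
	   device and continue as an ordinary process (see item 2). */
	return 0;
}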
The resuming utility SHOULD NOT attempt to mount any file systems that could
be mounted before suspend and SHOULD NOT attempt to perform any operations
involving such file systems.
For details, please refer to the source code.


@ -1,7 +1,7 @@
Video issues with S3 resume
~~~~~~~~~~~~~~~~~~~~~~~~~~~
2003-2005, Pavel Machek
2003-2006, Pavel Machek
During S3 resume, hardware needs to be reinitialized. For most
devices, this is easy, and kernel driver knows how to do
@ -15,6 +15,27 @@ run normally so video card is normally initialized. It should not be
problem for S1 standby, because hardware should retain its state over
that.
We either have to run the video BIOS during early resume, interpret it
using vbetool later, or maybe do nothing at all on a particular
system because the video state is preserved. Unfortunately, different
methods work on different systems, and no known method suits all of
them.
A userland application called s2ram has been developed; it contains a
long whitelist of systems and automatically selects a working method
for a given system. It can be downloaded from CVS at
www.sf.net/projects/suspend . If you get a system that is not in the
whitelist, please try to find a working solution and submit a whitelist
entry so that the work does not need to be repeated.
Currently, the VBE_SAVE method (6 below) works on most
systems. Unfortunately, vbetool only runs after userland is resumed,
which makes debugging of early resume problems
hard or impossible. Methods that do not rely on userland are preferable.
Details
~~~~~~~
There are a few types of systems where video works after S3 resume:
(1) systems where video state is preserved over S3.
@ -104,6 +125,7 @@ HP NX7000 ??? (*)
HP Pavilion ZD7000 vbetool post needed, need open-source nv driver for X
HP Omnibook XE3 athlon version none (1)
HP Omnibook XE3GC none (1), video is S3 Savage/IX-MV
HP Omnibook 5150 none (1), (S1 also works OK)
IBM TP T20, model 2647-44G none (1), video is S3 Inc. 86C270-294 Savage/IX-MV, vesafb gets "interesting" but X works.
IBM TP A31 / Type 2652-M5G s3_mode (3) [works ok with BIOS 1.04 2002-08-23, but not at all with BIOS 1.11 2004-11-05 :-(]
IBM TP R32 / Type 2658-MMG none (1)
@ -120,18 +142,24 @@ IBM ThinkPad T42p (2373-GTG) s3_bios (2)
IBM TP X20 ??? (*)
IBM TP X30 s3_bios (2)
IBM TP X31 / Type 2672-XXH none (1), use radeontool (http://fdd.com/software/radeon/) to turn off backlight.
IBM TP X32 none (1), but backlight is on and video is trashed after long suspend
IBM TP X32 none (1), but backlight is on and video is trashed after long suspend. s3_bios,s3_mode (4) works too. Perhaps that gets better results?
IBM Thinkpad X40 Type 2371-7JG s3_bios,s3_mode (4)
IBM TP 600e none(1), but a switch to console and back to X is needed
Medion MD4220 ??? (*)
Samsung P35 vbetool needed (6)
Sharp PC-AR10 (ATI rage) none (1)
Sharp PC-AR10 (ATI rage) none (1), backlight does not switch off
Sony Vaio PCG-C1VRX/K s3_bios (2)
Sony Vaio PCG-F403 ??? (*)
Sony Vaio PCG-GRT995MP none (1), works with 'nv' X driver
Sony Vaio PCG-GR7/K none (1), but needs radeonfb, use radeontool (http://fdd.com/software/radeon/) to turn off backlight.
Sony Vaio PCG-N505SN ??? (*)
Sony Vaio vgn-s260 X or boot-radeon can init it (5)
Sony Vaio vgn-S580BH vga=normal, but suspend from X. Console will be blank unless you return to X.
Sony Vaio vgn-FS115B s3_bios (2),s3_mode (4)
Toshiba Libretto L5 none (1)
Toshiba Satellite 4030CDT s3_mode (3)
Toshiba Satellite 4080XCDT s3_mode (3)
Toshiba Portege 3020CT s3_mode (3)
Toshiba Satellite 4030CDT s3_mode (3) (S1 also works OK)
Toshiba Satellite 4080XCDT s3_mode (3) (S1 also works OK)
Toshiba Satellite 4090XCDT ??? (*)
Toshiba Satellite P10-554 s3_bios,s3_mode (4)(****)
Toshiba M30 (2) xor X with nvidia driver using internal AGP
@ -151,39 +179,3 @@ Asus A7V8X nVidia RIVA TNT2 model 64 s3_bios,s3_mode (4)
(***) To be tested with a newer kernel.
(****) Not with SMP kernel, UP only.
VBEtool details
~~~~~~~~~~~~~~~
(with thanks to Carl-Daniel Hailfinger)
First, boot into X and run the following script ONCE:
#!/bin/bash
statedir=/root/s3/state
mkdir -p $statedir
chvt 2
sleep 1
vbetool vbestate save >$statedir/vbe
To suspend and resume properly, call the following script as root:
#!/bin/bash
statedir=/root/s3/state
curcons=`fgconsole`
fuser /dev/tty$curcons 2>/dev/null|xargs ps -o comm= -p|grep -q X && chvt 2
cat /dev/vcsa >$statedir/vcsa
sync
echo 3 >/proc/acpi/sleep
sync
vbetool post
vbetool vbestate restore <$statedir/vbe
cat $statedir/vcsa >/dev/vcsa
rckbd restart
chvt $[curcons%6+1]
chvt $curcons
Unless you change your graphics card or other hardware configuration,
the state once saved will be OK for every resume afterwards.
NOTE: The "rckbd restart" command may be different for your
distribution. Simply replace it with the command you would use to
set the fonts on screen.


@ -52,9 +52,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -67,9 +66,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);


@ -75,9 +75,8 @@ int show_interrupts(struct seq_file *p, void *v)
switch (i) {
case 0:
seq_printf(p, " ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
break;
@ -100,9 +99,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i - 1]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i - 1]);
#endif
level = group->sources[ix]->level - frv_irq_levels;


@ -80,6 +80,7 @@ config X86_VOYAGER
config X86_NUMAQ
bool "NUMAQ (IBM/Sequent)"
select SMP
select NUMA
help
This option is used for getting Linux to run on a (IBM/Sequent) NUMA
@ -400,6 +401,7 @@ choice
config NOHIGHMEM
bool "off"
depends on !X86_NUMAQ
---help---
Linux can use up to 64 Gigabytes of physical memory on x86 systems.
However, the address space of 32-bit x86 processors is only 4
@ -436,6 +438,7 @@ config NOHIGHMEM
config HIGHMEM4G
bool "4GB"
depends on !X86_NUMAQ
help
Select this if you have a 32-bit processor and between 1 and 4
gigabytes of physical RAM.
@ -503,10 +506,6 @@ config NUMA
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)
# Need comments to help the hapless user trying to turn on NUMA support
comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
depends on X86_NUMAQ && (!HIGHMEM64G || !SMP)
comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
depends on X86_SUMMIT && (!HIGHMEM64G || !ACPI)
@ -660,13 +659,18 @@ config BOOT_IOREMAP
default y
config REGPARM
bool "Use register arguments (EXPERIMENTAL)"
depends on EXPERIMENTAL
default n
bool "Use register arguments"
default y
help
Compile the kernel with -mregparm=3. This uses a different ABI
and passes the first three arguments of a function call in registers.
This will probably break binary only modules.
Compile the kernel with -mregparm=3. This instructs gcc to use
a more efficient function call ABI which passes the first three
arguments of a function call via registers, which results in denser
and faster code.
If this option is disabled, then the default ABI of passing
arguments via the stack is used.
If unsure, say Y.
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"


@ -31,6 +31,15 @@ config DEBUG_STACK_USAGE
This option will slow down process creation somewhat.
config STACK_BACKTRACE_COLS
int "Stack backtraces per line" if DEBUG_KERNEL
range 1 3
default 2
help
Selects how many stack backtrace entries per line to display.
This can save screen space when displaying traces.
comment "Page alloc debug is incompatible with Software Suspend on i386"
depends on DEBUG_KERNEL && SOFTWARE_SUSPEND


@ -7,7 +7,7 @@ extra-y := head.o init_task.o vmlinux.lds
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \
pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \
quirks.o i8237.o topology.o
quirks.o i8237.o topology.o alternative.o
obj-y += cpu/
obj-y += timers/


@ -0,0 +1,321 @@
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <asm/alternative.h>
#include <asm/sections.h>
#define DEBUG 0
#if DEBUG
# define DPRINTK(fmt, args...) printk(fmt, args)
#else
# define DPRINTK(fmt, args...)
#endif
/* Use inline assembly to define this because the nops are defined
as inline assembly strings in the include files and we cannot
get them easily into strings. */
asm("\t.data\nintelnops: "
GENERIC_NOP1 GENERIC_NOP2 GENERIC_NOP3 GENERIC_NOP4 GENERIC_NOP5 GENERIC_NOP6
GENERIC_NOP7 GENERIC_NOP8);
asm("\t.data\nk8nops: "
K8_NOP1 K8_NOP2 K8_NOP3 K8_NOP4 K8_NOP5 K8_NOP6
K8_NOP7 K8_NOP8);
asm("\t.data\nk7nops: "
K7_NOP1 K7_NOP2 K7_NOP3 K7_NOP4 K7_NOP5 K7_NOP6
K7_NOP7 K7_NOP8);
extern unsigned char intelnops[], k8nops[], k7nops[];
static unsigned char *intel_nops[ASM_NOP_MAX+1] = {
NULL,
intelnops,
intelnops + 1,
intelnops + 1 + 2,
intelnops + 1 + 2 + 3,
intelnops + 1 + 2 + 3 + 4,
intelnops + 1 + 2 + 3 + 4 + 5,
intelnops + 1 + 2 + 3 + 4 + 5 + 6,
intelnops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k8_nops[ASM_NOP_MAX+1] = {
NULL,
k8nops,
k8nops + 1,
k8nops + 1 + 2,
k8nops + 1 + 2 + 3,
k8nops + 1 + 2 + 3 + 4,
k8nops + 1 + 2 + 3 + 4 + 5,
k8nops + 1 + 2 + 3 + 4 + 5 + 6,
k8nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k7_nops[ASM_NOP_MAX+1] = {
NULL,
k7nops,
k7nops + 1,
k7nops + 1 + 2,
k7nops + 1 + 2 + 3,
k7nops + 1 + 2 + 3 + 4,
k7nops + 1 + 2 + 3 + 4 + 5,
k7nops + 1 + 2 + 3 + 4 + 5 + 6,
k7nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static struct nop {
int cpuid;
unsigned char **noptable;
} noptypes[] = {
{ X86_FEATURE_K8, k8_nops },
{ X86_FEATURE_K7, k7_nops },
{ -1, NULL }
};
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
extern struct alt_instr __smp_alt_instructions[], __smp_alt_instructions_end[];
extern u8 *__smp_locks[], *__smp_locks_end[];
extern u8 __smp_alt_begin[], __smp_alt_end[];
static unsigned char** find_nop_table(void)
{
unsigned char **noptable = intel_nops;
int i;
for (i = 0; noptypes[i].cpuid >= 0; i++) {
if (boot_cpu_has(noptypes[i].cpuid)) {
noptable = noptypes[i].noptable;
break;
}
}
return noptable;
}
/* Replace instructions with better alternatives for this CPU type.
This runs before SMP is initialized to avoid SMP problems with
self-modifying code. This implies that asymmetric systems where
APs have fewer capabilities than the boot processor are not handled.
Tough. Make sure you disable such features by hand. */
void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
{
unsigned char **noptable = find_nop_table();
struct alt_instr *a;
int diff, i, k;
DPRINTK("%s: alt table %p -> %p\n", __FUNCTION__, start, end);
for (a = start; a < end; a++) {
BUG_ON(a->replacementlen > a->instrlen);
if (!boot_cpu_has(a->cpuid))
continue;
memcpy(a->instr, a->replacement, a->replacementlen);
diff = a->instrlen - a->replacementlen;
/* Pad the rest with nops */
for (i = a->replacementlen; diff > 0; diff -= k, i += k) {
k = diff;
if (k > ASM_NOP_MAX)
k = ASM_NOP_MAX;
memcpy(a->instr + i, noptable[k], k);
}
}
}
static void alternatives_smp_save(struct alt_instr *start, struct alt_instr *end)
{
struct alt_instr *a;
DPRINTK("%s: alt table %p-%p\n", __FUNCTION__, start, end);
for (a = start; a < end; a++) {
memcpy(a->replacement + a->replacementlen,
a->instr,
a->instrlen);
}
}
static void alternatives_smp_apply(struct alt_instr *start, struct alt_instr *end)
{
struct alt_instr *a;
for (a = start; a < end; a++) {
memcpy(a->instr,
a->replacement + a->replacementlen,
a->instrlen);
}
}
static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end)
{
u8 **ptr;
for (ptr = start; ptr < end; ptr++) {
if (*ptr < text)
continue;
if (*ptr > text_end)
continue;
**ptr = 0xf0; /* lock prefix */
};
}
static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end)
{
unsigned char **noptable = find_nop_table();
u8 **ptr;
for (ptr = start; ptr < end; ptr++) {
if (*ptr < text)
continue;
if (*ptr > text_end)
continue;
**ptr = noptable[1][0];
};
}
struct smp_alt_module {
/* module that registered this entry; NULL for the core kernel */
struct module *mod;
char *name;
/* ptrs to lock prefixes */
u8 **locks;
u8 **locks_end;
/* .text segment, needed to avoid patching init code ;) */
u8 *text;
u8 *text_end;
struct list_head next;
};
static LIST_HEAD(smp_alt_modules);
static DEFINE_SPINLOCK(smp_alt);
static int smp_alt_once = 0;
static int __init bootonly(char *str)
{
smp_alt_once = 1;
return 1;
}
__setup("smp-alt-boot", bootonly);
void alternatives_smp_module_add(struct module *mod, char *name,
void *locks, void *locks_end,
void *text, void *text_end)
{
struct smp_alt_module *smp;
unsigned long flags;
if (smp_alt_once) {
if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(locks, locks_end,
text, text_end);
return;
}
smp = kzalloc(sizeof(*smp), GFP_KERNEL);
if (NULL == smp)
return; /* we'll run the (safe but slow) SMP code then ... */
smp->mod = mod;
smp->name = name;
smp->locks = locks;
smp->locks_end = locks_end;
smp->text = text;
smp->text_end = text_end;
DPRINTK("%s: locks %p -> %p, text %p -> %p, name %s\n",
__FUNCTION__, smp->locks, smp->locks_end,
smp->text, smp->text_end, smp->name);
spin_lock_irqsave(&smp_alt, flags);
list_add_tail(&smp->next, &smp_alt_modules);
if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(smp->locks, smp->locks_end,
smp->text, smp->text_end);
spin_unlock_irqrestore(&smp_alt, flags);
}
void alternatives_smp_module_del(struct module *mod)
{
struct smp_alt_module *item;
unsigned long flags;
if (smp_alt_once)
return;
spin_lock_irqsave(&smp_alt, flags);
list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod)
continue;
list_del(&item->next);
spin_unlock_irqrestore(&smp_alt, flags);
DPRINTK("%s: %s\n", __FUNCTION__, item->name);
kfree(item);
return;
}
spin_unlock_irqrestore(&smp_alt, flags);
}
void alternatives_smp_switch(int smp)
{
struct smp_alt_module *mod;
unsigned long flags;
if (smp_alt_once)
return;
BUG_ON(!smp && (num_online_cpus() > 1));
spin_lock_irqsave(&smp_alt, flags);
if (smp) {
printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
clear_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
clear_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
alternatives_smp_apply(__smp_alt_instructions,
__smp_alt_instructions_end);
list_for_each_entry(mod, &smp_alt_modules, next)
alternatives_smp_lock(mod->locks, mod->locks_end,
mod->text, mod->text_end);
} else {
printk(KERN_INFO "SMP alternatives: switching to UP code\n");
set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
apply_alternatives(__smp_alt_instructions,
__smp_alt_instructions_end);
list_for_each_entry(mod, &smp_alt_modules, next)
alternatives_smp_unlock(mod->locks, mod->locks_end,
mod->text, mod->text_end);
}
spin_unlock_irqrestore(&smp_alt, flags);
}
void __init alternative_instructions(void)
{
apply_alternatives(__alt_instructions, __alt_instructions_end);
/* switch to patch-once-at-boottime-only mode and free the
* tables in case we know the number of CPUs will never ever
* change */
#ifdef CONFIG_HOTPLUG_CPU
if (num_possible_cpus() < 2)
smp_alt_once = 1;
#else
smp_alt_once = 1;
#endif
if (smp_alt_once) {
if (1 == num_possible_cpus()) {
printk(KERN_INFO "SMP alternatives: switching to UP code\n");
set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
set_bit(X86_FEATURE_UP, cpu_data[0].x86_capability);
apply_alternatives(__smp_alt_instructions,
__smp_alt_instructions_end);
alternatives_smp_unlock(__smp_locks, __smp_locks_end,
_text, _etext);
}
free_init_pages("SMP alternatives",
(unsigned long)__smp_alt_begin,
(unsigned long)__smp_alt_end);
} else {
alternatives_smp_save(__smp_alt_instructions,
__smp_alt_instructions_end);
alternatives_smp_module_add(NULL, "core kernel",
__smp_locks, __smp_locks_end,
_text, _etext);
alternatives_smp_switch(0);
}
}


@ -38,6 +38,7 @@
#include <asm/i8253.h>
#include <mach_apic.h>
#include <mach_apicdef.h>
#include <mach_ipi.h>
#include "io_ports.h"


@ -4,6 +4,7 @@
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/e820.h>
#include <asm/mtrr.h>
#include "cpu.h"
#ifdef CONFIG_X86_OOSTORE


@ -25,9 +25,10 @@ EXPORT_PER_CPU_SYMBOL(cpu_gdt_descr);
DEFINE_PER_CPU(unsigned char, cpu_16bit_stack[CPU_16BIT_STACK_SIZE]);
EXPORT_PER_CPU_SYMBOL(cpu_16bit_stack);
static int cachesize_override __devinitdata = -1;
static int disable_x86_fxsr __devinitdata = 0;
static int disable_x86_serial_nr __devinitdata = 1;
static int cachesize_override __cpuinitdata = -1;
static int disable_x86_fxsr __cpuinitdata;
static int disable_x86_serial_nr __cpuinitdata = 1;
static int disable_x86_sep __cpuinitdata;
struct cpu_dev * cpu_devs[X86_VENDOR_NUM] = {};
@ -59,7 +60,7 @@ static int __init cachesize_setup(char *str)
}
__setup("cachesize=", cachesize_setup);
int __devinit get_model_name(struct cpuinfo_x86 *c)
int __cpuinit get_model_name(struct cpuinfo_x86 *c)
{
unsigned int *v;
char *p, *q;
@ -89,7 +90,7 @@ int __devinit get_model_name(struct cpuinfo_x86 *c)
}
void __devinit display_cacheinfo(struct cpuinfo_x86 *c)
void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
{
unsigned int n, dummy, ecx, edx, l2size;
@ -130,7 +131,7 @@ void __devinit display_cacheinfo(struct cpuinfo_x86 *c)
/* in particular, if CPUID levels 0x80000002..4 are supported, this isn't used */
/* Look up CPU names by table lookup. */
static char __devinit *table_lookup_model(struct cpuinfo_x86 *c)
static char __cpuinit *table_lookup_model(struct cpuinfo_x86 *c)
{
struct cpu_model_info *info;
@ -151,7 +152,7 @@ static char __devinit *table_lookup_model(struct cpuinfo_x86 *c)
}
static void __devinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
static void __cpuinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
{
char *v = c->x86_vendor_id;
int i;
@ -187,6 +188,14 @@ static int __init x86_fxsr_setup(char * s)
__setup("nofxsr", x86_fxsr_setup);
static int __init x86_sep_setup(char * s)
{
disable_x86_sep = 1;
return 1;
}
__setup("nosep", x86_sep_setup);
/* Standard macro to see if a specific flag is changeable */
static inline int flag_is_changeable_p(u32 flag)
{
@ -210,7 +219,7 @@ static inline int flag_is_changeable_p(u32 flag)
/* Probe for the CPUID instruction */
static int __devinit have_cpuid_p(void)
static int __cpuinit have_cpuid_p(void)
{
return flag_is_changeable_p(X86_EFLAGS_ID);
}
@ -254,7 +263,7 @@ static void __init early_cpu_detect(void)
}
}
void __devinit generic_identify(struct cpuinfo_x86 * c)
void __cpuinit generic_identify(struct cpuinfo_x86 * c)
{
u32 tfms, xlvl;
int junk;
@ -307,7 +316,7 @@ void __devinit generic_identify(struct cpuinfo_x86 * c)
#endif
}
static void __devinit squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
static void __cpuinit squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
{
if (cpu_has(c, X86_FEATURE_PN) && disable_x86_serial_nr ) {
/* Disable processor serial number */
@ -335,7 +344,7 @@ __setup("serialnumber", x86_serial_nr_setup);
/*
* This does the hard work of actually picking apart the CPU stuff...
*/
void __devinit identify_cpu(struct cpuinfo_x86 *c)
void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
{
int i;
@ -405,6 +414,10 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
clear_bit(X86_FEATURE_XMM, c->x86_capability);
}
/* SEP disabled? */
if (disable_x86_sep)
clear_bit(X86_FEATURE_SEP, c->x86_capability);
if (disable_pse)
clear_bit(X86_FEATURE_PSE, c->x86_capability);
@ -417,7 +430,7 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
else
/* Last resort... */
sprintf(c->x86_model_id, "%02x/%02x",
c->x86_vendor, c->x86_model);
c->x86, c->x86_model);
}
/* Now the feature flags better reflect actual CPU features! */
@ -453,7 +466,7 @@ void __devinit identify_cpu(struct cpuinfo_x86 *c)
}
#ifdef CONFIG_X86_HT
void __devinit detect_ht(struct cpuinfo_x86 *c)
void __cpuinit detect_ht(struct cpuinfo_x86 *c)
{
u32 eax, ebx, ecx, edx;
int index_msb, core_bits;
@ -500,7 +513,7 @@ void __devinit detect_ht(struct cpuinfo_x86 *c)
}
#endif
void __devinit print_cpu_info(struct cpuinfo_x86 *c)
void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
{
char *vendor = NULL;
@ -523,7 +536,7 @@ void __devinit print_cpu_info(struct cpuinfo_x86 *c)
printk("\n");
}
cpumask_t cpu_initialized __devinitdata = CPU_MASK_NONE;
cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
/* This is hacky. :)
* We're emulating future behavior.
@ -570,7 +583,7 @@ void __init early_cpu_init(void)
* and IDT. We reload them nevertheless, this function acts as a
* 'CPU state barrier', nothing should get across.
*/
void __devinit cpu_init(void)
void __cpuinit cpu_init(void)
{
int cpu = smp_processor_id();
struct tss_struct * t = &per_cpu(init_tss, cpu);
@ -670,7 +683,7 @@ void __devinit cpu_init(void)
}
#ifdef CONFIG_HOTPLUG_CPU
void __devinit cpu_uninit(void)
void __cpuinit cpu_uninit(void)
{
int cpu = raw_smp_processor_id();
cpu_clear(cpu, cpu_initialized);


@ -1145,9 +1145,7 @@ static int __cpuinit powernowk8_init(void)
{
unsigned int i, supported_cpus = 0;
for (i=0; i<NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_cpu(i) {
if (check_supported_cpu(i))
supported_cpus++;
}


@ -29,7 +29,7 @@ extern int trap_init_f00f_bug(void);
struct movsl_mask movsl_mask __read_mostly;
#endif
void __devinit early_intel_workaround(struct cpuinfo_x86 *c)
void __cpuinit early_intel_workaround(struct cpuinfo_x86 *c)
{
if (c->x86_vendor != X86_VENDOR_INTEL)
return;
@ -44,7 +44,7 @@ void __devinit early_intel_workaround(struct cpuinfo_x86 *c)
* This is called before we do cpu ident work
*/
int __devinit ppro_with_ram_bug(void)
int __cpuinit ppro_with_ram_bug(void)
{
/* Uses data from early_cpu_detect now */
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
@ -62,7 +62,7 @@ int __devinit ppro_with_ram_bug(void)
* P4 Xeon errata 037 workaround.
* Hardware prefetcher may cause stale data to be loaded into the cache.
*/
static void __devinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
static void __cpuinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
{
unsigned long lo, hi;
@ -81,7 +81,7 @@ static void __devinit Intel_errata_workarounds(struct cpuinfo_x86 *c)
/*
* find out the number of processor cores on the die
*/
static int __devinit num_cpu_cores(struct cpuinfo_x86 *c)
static int __cpuinit num_cpu_cores(struct cpuinfo_x86 *c)
{
unsigned int eax, ebx, ecx, edx;
@ -96,7 +96,7 @@ static int __devinit num_cpu_cores(struct cpuinfo_x86 *c)
return 1;
}
static void __devinit init_intel(struct cpuinfo_x86 *c)
static void __cpuinit init_intel(struct cpuinfo_x86 *c)
{
unsigned int l2 = 0;
char *p = NULL;
@ -205,7 +205,7 @@ static unsigned int intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
return size;
}
static struct cpu_dev intel_cpu_dev __devinitdata = {
static struct cpu_dev intel_cpu_dev __cpuinitdata = {
.c_vendor = "Intel",
.c_ident = { "GenuineIntel" },
.c_models = {


@ -174,7 +174,7 @@ unsigned int __cpuinit init_intel_cacheinfo(struct cpuinfo_x86 *c)
unsigned int new_l1d = 0, new_l1i = 0; /* Cache sizes from cpuid(4) */
unsigned int new_l2 = 0, new_l3 = 0, i; /* Cache sizes from cpuid(4) */
if (c->cpuid_level > 4) {
if (c->cpuid_level > 3) {
static int is_initialized;
if (is_initialized == 0) {
@ -330,7 +330,7 @@ static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
}
}
}
static void __devinit cache_remove_shared_cpu_map(unsigned int cpu, int index)
static void __cpuinit cache_remove_shared_cpu_map(unsigned int cpu, int index)
{
struct _cpuid4_info *this_leaf, *sibling_leaf;
int sibling;


@ -40,7 +40,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
/* Other (Linux-defined) */
"cxmmx", "k6_mtrr", "cyrix_arr", "centaur_mcr",
NULL, NULL, NULL, NULL,
"constant_tsc", NULL, NULL, NULL, NULL, NULL, NULL, NULL,
"constant_tsc", "up", NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,


@ -105,7 +105,7 @@ static int crash_nmi_callback(struct pt_regs *regs, int cpu)
return 1;
local_irq_disable();
if (!user_mode(regs)) {
if (!user_mode_vm(regs)) {
crash_fixup_ss_esp(&fixed_regs, regs);
regs = &fixed_regs;
}


@ -226,6 +226,10 @@ ENTRY(system_call)
pushl %eax # save orig_eax
SAVE_ALL
GET_THREAD_INFO(%ebp)
testl $TF_MASK,EFLAGS(%esp)
jz no_singlestep
orl $_TIF_SINGLESTEP,TI_flags(%ebp)
no_singlestep:
# system call tracing in operation / emulation
/* Note, _TIF_SECCOMP is bit number 8, and so it needs testw and not testb */
testw $(_TIF_SYSCALL_EMU|_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT),TI_flags(%ebp)


@ -450,7 +450,6 @@ int_msg:
.globl boot_gdt_descr
.globl idt_descr
.globl cpu_gdt_descr
ALIGN
# early boot GDT descriptor (must use 1:1 address mapping)
@ -470,8 +469,6 @@ cpu_gdt_descr:
.word GDT_ENTRIES*8-1
.long cpu_gdt_table
.fill NR_CPUS-1,8,0 # space for the other GDT descriptors
/*
* The boot_gdt_table must mirror the equivalent in setup.S and is
* used only for booting.
@ -485,7 +482,7 @@ ENTRY(boot_gdt_table)
/*
* The Global Descriptor Table contains 28 quadwords, per-CPU.
*/
.align PAGE_SIZE_asm
.align L1_CACHE_BYTES
ENTRY(cpu_gdt_table)
.quad 0x0000000000000000 /* NULL descriptor */
.quad 0x0000000000000000 /* 0x0b reserved */


@ -351,8 +351,8 @@ static inline void rotate_irqs_among_cpus(unsigned long useful_load_threshold)
{
int i, j;
Dprintk("Rotating IRQs among CPUs.\n");
for (i = 0; i < NR_CPUS; i++) {
for (j = 0; cpu_online(i) && (j < NR_IRQS); j++) {
for_each_online_cpu(i) {
for (j = 0; j < NR_IRQS; j++) {
if (!irq_desc[j].action)
continue;
/* Is it a significant load ? */
@ -381,7 +381,7 @@ static void do_irq_balance(void)
unsigned long imbalance = 0;
cpumask_t allowed_mask, target_cpu_mask, tmp;
for (i = 0; i < NR_CPUS; i++) {
for_each_cpu(i) {
int package_index;
CPU_IRQ(i) = 0;
if (!cpu_online(i))
@ -422,9 +422,7 @@ static void do_irq_balance(void)
}
}
/* Find the least loaded processor package */
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (i != CPU_TO_PACKAGEINDEX(i))
continue;
if (min_cpu_irq > CPU_IRQ(i)) {
@ -441,9 +439,7 @@ tryanothercpu:
*/
tmp_cpu_irq = 0;
tmp_loaded = -1;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (i != CPU_TO_PACKAGEINDEX(i))
continue;
if (max_cpu_irq <= CPU_IRQ(i))
@ -619,9 +615,7 @@ static int __init balanced_irq_init(void)
if (smp_num_siblings > 1 && !cpus_empty(tmp))
physical_balance = 1;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
irq_cpu_data[i].irq_delta = kmalloc(sizeof(unsigned long) * NR_IRQS, GFP_KERNEL);
irq_cpu_data[i].last_irq = kmalloc(sizeof(unsigned long) * NR_IRQS, GFP_KERNEL);
if (irq_cpu_data[i].irq_delta == NULL || irq_cpu_data[i].last_irq == NULL) {
@ -638,9 +632,11 @@ static int __init balanced_irq_init(void)
else
printk(KERN_ERR "balanced_irq_init: failed to spawn balanced_irq");
failed:
for (i = 0; i < NR_CPUS; i++) {
for_each_cpu(i) {
kfree(irq_cpu_data[i].irq_delta);
irq_cpu_data[i].irq_delta = NULL;
kfree(irq_cpu_data[i].last_irq);
irq_cpu_data[i].last_irq = NULL;
}
return 0;
}
@ -1761,7 +1757,8 @@ static void __init setup_ioapic_ids_from_mpc(void)
* Don't check I/O APIC IDs for xAPIC systems. They have
* no meaning without the serial APIC bus.
*/
if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 < 15))
if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
|| APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
return;
/*
* This is broken; anything with a real cpu count has to


@ -84,9 +84,9 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
down(&kprobe_mutex);
mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn);
up(&kprobe_mutex);
mutex_unlock(&kprobe_mutex);
}
static inline void save_previous_kprobe(struct kprobe_ctlblk *kcb)


@ -104,26 +104,38 @@ int apply_relocate_add(Elf32_Shdr *sechdrs,
return -ENOEXEC;
}
extern void apply_alternatives(void *start, void *end);
int module_finalize(const Elf_Ehdr *hdr,
const Elf_Shdr *sechdrs,
struct module *me)
{
const Elf_Shdr *s;
const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL;
char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
/* look for .altinstructions to patch */
for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
void *seg;
if (strcmp(".altinstructions", secstrings + s->sh_name))
continue;
seg = (void *)s->sh_addr;
apply_alternatives(seg, seg + s->sh_size);
}
if (!strcmp(".text", secstrings + s->sh_name))
text = s;
if (!strcmp(".altinstructions", secstrings + s->sh_name))
alt = s;
if (!strcmp(".smp_locks", secstrings + s->sh_name))
locks= s;
}
if (alt) {
/* patch .altinstructions */
void *aseg = (void *)alt->sh_addr;
apply_alternatives(aseg, aseg + alt->sh_size);
}
if (locks && text) {
void *lseg = (void *)locks->sh_addr;
void *tseg = (void *)text->sh_addr;
alternatives_smp_module_add(me, me->name,
lseg, lseg + locks->sh_size,
tseg, tseg + text->sh_size);
}
return 0;
}
void module_arch_cleanup(struct module *mod)
{
alternatives_smp_module_del(mod);
}


@ -828,6 +828,8 @@ void __init find_smp_config (void)
smp_scan_config(address, 0x400);
}
int es7000_plat;
/* --------------------------------------------------------------------------
ACPI-based MP Configuration
-------------------------------------------------------------------------- */
@ -935,7 +937,8 @@ void __init mp_register_ioapic (
mp_ioapics[idx].mpc_apicaddr = address;
set_fixmap_nocache(FIX_IO_APIC_BASE_0 + idx, address);
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) && (boot_cpu_data.x86 < 15))
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
&& !APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
tmpid = io_apic_get_unique_id(idx, id);
else
tmpid = id;
@ -1011,8 +1014,6 @@ void __init mp_override_legacy_irq (
return;
}
int es7000_plat;
void __init mp_config_acpi_legacy_irqs (void)
{
struct mpc_config_intsrc intsrc;


@ -143,7 +143,7 @@ static int __init check_nmi_watchdog(void)
local_irq_enable();
mdelay((10*1000)/nmi_hz); // wait 10 ticks
for (cpu = 0; cpu < NR_CPUS; cpu++) {
for_each_cpu(cpu) {
#ifdef CONFIG_SMP
/* Check cpu_callin_map here because that is set
after the timer is started. */
@ -510,7 +510,7 @@ void touch_nmi_watchdog (void)
* Just reset the alert counters, (other CPUs might be
* spinning on locks we hold):
*/
for (i = 0; i < NR_CPUS; i++)
for_each_cpu(i)
alert_counter[i] = 0;
/*
@ -543,7 +543,7 @@ void nmi_watchdog_tick (struct pt_regs * regs)
/*
* die_nmi will return ONLY if NOTIFY_STOP happens..
*/
die_nmi(regs, "NMI Watchdog detected LOCKUP");
die_nmi(regs, "BUG: NMI Watchdog detected LOCKUP");
} else {
last_irq_sums[cpu] = sum;
alert_counter[cpu] = 0;


@ -295,7 +295,7 @@ void show_regs(struct pt_regs * regs)
printk("EIP: %04x:[<%08lx>] CPU: %d\n",0xffff & regs->xcs,regs->eip, smp_processor_id());
print_symbol("EIP is at %s\n", regs->eip);
if (user_mode(regs))
if (user_mode_vm(regs))
printk(" ESP: %04x:%08lx",0xffff & regs->xss,regs->esp);
printk(" EFLAGS: %08lx %s (%s %.*s)\n",
regs->eflags, print_tainted(), system_utsname.release,


@ -34,10 +34,10 @@
/*
* Determines which flags the user has access to [1 = access, 0 = no access].
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), NT(14), IOPL(12-13), IF(9).
* Also masks reserved bits (31-22, 15, 5, 3, 1).
*/
#define FLAG_MASK 0x00054dd5
#define FLAG_MASK 0x00050dd5
/* set's the trap flag. */
#define TRAP_FLAG 0x100


@ -110,11 +110,11 @@ asm(
".align 4\n"
".globl __write_lock_failed\n"
"__write_lock_failed:\n\t"
LOCK "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
LOCK_PREFIX "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
"1: rep; nop\n\t"
"cmpl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jne 1b\n\t"
LOCK "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
LOCK_PREFIX "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jnz __write_lock_failed\n\t"
"ret"
);
@ -124,11 +124,11 @@ asm(
".align 4\n"
".globl __read_lock_failed\n"
"__read_lock_failed:\n\t"
LOCK "incl (%eax)\n"
LOCK_PREFIX "incl (%eax)\n"
"1: rep; nop\n\t"
"cmpl $1,(%eax)\n\t"
"js 1b\n\t"
LOCK "decl (%eax)\n\t"
LOCK_PREFIX "decl (%eax)\n\t"
"js __read_lock_failed\n\t"
"ret"
);


@ -1377,101 +1377,6 @@ static void __init register_memory(void)
pci_mem_start, gapstart, gapsize);
}
/* Use inline assembly to define this because the nops are defined
as inline assembly strings in the include files and we cannot
get them easily into strings. */
asm("\t.data\nintelnops: "
GENERIC_NOP1 GENERIC_NOP2 GENERIC_NOP3 GENERIC_NOP4 GENERIC_NOP5 GENERIC_NOP6
GENERIC_NOP7 GENERIC_NOP8);
asm("\t.data\nk8nops: "
K8_NOP1 K8_NOP2 K8_NOP3 K8_NOP4 K8_NOP5 K8_NOP6
K8_NOP7 K8_NOP8);
asm("\t.data\nk7nops: "
K7_NOP1 K7_NOP2 K7_NOP3 K7_NOP4 K7_NOP5 K7_NOP6
K7_NOP7 K7_NOP8);
extern unsigned char intelnops[], k8nops[], k7nops[];
static unsigned char *intel_nops[ASM_NOP_MAX+1] = {
NULL,
intelnops,
intelnops + 1,
intelnops + 1 + 2,
intelnops + 1 + 2 + 3,
intelnops + 1 + 2 + 3 + 4,
intelnops + 1 + 2 + 3 + 4 + 5,
intelnops + 1 + 2 + 3 + 4 + 5 + 6,
intelnops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k8_nops[ASM_NOP_MAX+1] = {
NULL,
k8nops,
k8nops + 1,
k8nops + 1 + 2,
k8nops + 1 + 2 + 3,
k8nops + 1 + 2 + 3 + 4,
k8nops + 1 + 2 + 3 + 4 + 5,
k8nops + 1 + 2 + 3 + 4 + 5 + 6,
k8nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static unsigned char *k7_nops[ASM_NOP_MAX+1] = {
NULL,
k7nops,
k7nops + 1,
k7nops + 1 + 2,
k7nops + 1 + 2 + 3,
k7nops + 1 + 2 + 3 + 4,
k7nops + 1 + 2 + 3 + 4 + 5,
k7nops + 1 + 2 + 3 + 4 + 5 + 6,
k7nops + 1 + 2 + 3 + 4 + 5 + 6 + 7,
};
static struct nop {
int cpuid;
unsigned char **noptable;
} noptypes[] = {
{ X86_FEATURE_K8, k8_nops },
{ X86_FEATURE_K7, k7_nops },
{ -1, NULL }
};
/* Replace instructions with better alternatives for this CPU type.
This runs before SMP is initialized to avoid SMP problems with
self-modifying code. This implies that asymmetric systems where
APs have fewer capabilities than the boot processor are not handled.
Tough. Make sure you disable such features by hand. */
void apply_alternatives(void *start, void *end)
{
struct alt_instr *a;
int diff, i, k;
unsigned char **noptable = intel_nops;
for (i = 0; noptypes[i].cpuid >= 0; i++) {
if (boot_cpu_has(noptypes[i].cpuid)) {
noptable = noptypes[i].noptable;
break;
}
}
for (a = start; (void *)a < end; a++) {
if (!boot_cpu_has(a->cpuid))
continue;
BUG_ON(a->replacementlen > a->instrlen);
memcpy(a->instr, a->replacement, a->replacementlen);
diff = a->instrlen - a->replacementlen;
/* Pad the rest with nops */
for (i = a->replacementlen; diff > 0; diff -= k, i += k) {
k = diff;
if (k > ASM_NOP_MAX)
k = ASM_NOP_MAX;
memcpy(a->instr + i, noptable[k], k);
}
}
}
void __init alternative_instructions(void)
{
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
apply_alternatives(__alt_instructions, __alt_instructions_end);
}
static char * __init machine_specific_memory_setup(void);
#ifdef CONFIG_MCA
@ -1554,6 +1459,16 @@ void __init setup_arch(char **cmdline_p)
parse_cmdline_early(cmdline_p);
#ifdef CONFIG_EARLY_PRINTK
{
char *s = strstr(*cmdline_p, "earlyprintk=");
if (s) {
setup_early_printk(strchr(s, '=') + 1);
printk("early console enabled\n");
}
}
#endif
max_low_pfn = setup_memory();
/*
@ -1578,19 +1493,6 @@ void __init setup_arch(char **cmdline_p)
* NOTE: at this point the bootmem allocator is fully available.
*/
#ifdef CONFIG_EARLY_PRINTK
{
char *s = strstr(*cmdline_p, "earlyprintk=");
if (s) {
extern void setup_early_printk(char *);
setup_early_printk(strchr(s, '=') + 1);
printk("early console enabled\n");
}
}
#endif
dmi_scan_machine();
#ifdef CONFIG_X86_GENERICARCH


@ -123,7 +123,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
err |= __get_user(tmp, &sc->seg); \
loadsegment(seg,tmp); }
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_OF | X86_EFLAGS_DF | \
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_RF | \
X86_EFLAGS_OF | X86_EFLAGS_DF | \
X86_EFLAGS_TF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
@ -582,9 +583,6 @@ static void fastcall do_signal(struct pt_regs *regs)
if (!user_mode(regs))
return;
if (try_to_freeze())
goto no_signal;
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
else
@ -613,7 +611,6 @@ static void fastcall do_signal(struct pt_regs *regs)
return;
}
no_signal:
/* Did we come from a system call? */
if (regs->orig_eax >= 0) {
/* Restart the system call - no handlers present */


@ -899,6 +899,7 @@ static int __devinit do_boot_cpu(int apicid, int cpu)
unsigned short nmi_high = 0, nmi_low = 0;
++cpucount;
alternatives_smp_switch(1);
/*
* We can't use kernel_thread since we must avoid to
@ -1368,6 +1369,8 @@ void __cpu_die(unsigned int cpu)
/* They ack this in play_dead by setting CPU_DEAD */
if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
printk ("CPU %d is now offline\n", cpu);
if (1 == num_online_cpus())
alternatives_smp_switch(0);
return;
}
msleep(100);


@ -41,6 +41,15 @@ int arch_register_cpu(int num){
parent = &node_devices[node].node;
#endif /* CONFIG_NUMA */
/*
* CPU0 cannot be offlined due to several
* restrictions and assumptions in the kernel. This basically
* doesn't add a control file, so one cannot attempt to offline
* the BSP.
*/
if (!num)
cpu_devices[num].cpu.no_control = 1;
return register_cpu(&cpu_devices[num].cpu, num, parent);
}


@ -99,6 +99,8 @@ int register_die_notifier(struct notifier_block *nb)
{
int err = 0;
unsigned long flags;
vmalloc_sync_all();
spin_lock_irqsave(&die_notifier_lock, flags);
err = notifier_chain_register(&i386die_chain, nb);
spin_unlock_irqrestore(&die_notifier_lock, flags);
@ -112,12 +114,30 @@ static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
p < (void *)tinfo + THREAD_SIZE - 3;
}
static void print_addr_and_symbol(unsigned long addr, char *log_lvl)
/*
* Print CONFIG_STACK_BACKTRACE_COLS address/symbol entries per line.
*/
static inline int print_addr_and_symbol(unsigned long addr, char *log_lvl,
int printed)
{
printk(log_lvl);
if (!printed)
printk(log_lvl);
#if CONFIG_STACK_BACKTRACE_COLS == 1
printk(" [<%08lx>] ", addr);
#else
printk(" <%08lx> ", addr);
#endif
print_symbol("%s", addr);
printk("\n");
printed = (printed + 1) % CONFIG_STACK_BACKTRACE_COLS;
if (printed)
printk(" ");
else
printk("\n");
return printed;
}
static inline unsigned long print_context_stack(struct thread_info *tinfo,
@ -125,20 +145,24 @@ static inline unsigned long print_context_stack(struct thread_info *tinfo,
char *log_lvl)
{
unsigned long addr;
int printed = 0; /* nr of entries already printed on current line */
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
print_addr_and_symbol(addr, log_lvl);
printed = print_addr_and_symbol(addr, log_lvl, printed);
ebp = *(unsigned long *)ebp;
}
#else
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr))
print_addr_and_symbol(addr, log_lvl);
printed = print_addr_and_symbol(addr, log_lvl, printed);
}
#endif
if (printed)
printk("\n");
return ebp;
}
@ -166,8 +190,7 @@ static void show_trace_log_lvl(struct task_struct *task,
stack = (unsigned long*)context->previous_esp;
if (!stack)
break;
printk(log_lvl);
printk(" =======================\n");
printk("%s =======================\n", log_lvl);
}
}
@ -194,21 +217,17 @@ static void show_stack_log_lvl(struct task_struct *task, unsigned long *esp,
for(i = 0; i < kstack_depth_to_print; i++) {
if (kstack_end(stack))
break;
if (i && ((i % 8) == 0)) {
printk("\n");
printk(log_lvl);
printk(" ");
}
if (i && ((i % 8) == 0))
printk("\n%s ", log_lvl);
printk("%08lx ", *stack++);
}
printk("\n");
printk(log_lvl);
printk("Call Trace:\n");
printk("\n%sCall Trace:\n", log_lvl);
show_trace_log_lvl(task, esp, log_lvl);
}
void show_stack(struct task_struct *task, unsigned long *esp)
{
printk(" ");
show_stack_log_lvl(task, esp, "");
}
@ -233,7 +252,7 @@ void show_registers(struct pt_regs *regs)
esp = (unsigned long) (&regs->esp);
savesegment(ss, ss);
if (user_mode(regs)) {
if (user_mode_vm(regs)) {
in_kernel = 0;
esp = regs->esp;
ss = regs->xss & 0xffff;
@ -333,6 +352,8 @@ void die(const char * str, struct pt_regs * regs, long err)
static int die_counter;
unsigned long flags;
oops_enter();
if (die.lock_owner != raw_smp_processor_id()) {
console_verbose();
spin_lock_irqsave(&die.lock, flags);
@ -385,6 +406,7 @@ void die(const char * str, struct pt_regs * regs, long err)
ssleep(5);
panic("Fatal exception");
}
oops_exit();
do_exit(SIGSEGV);
}
@ -623,7 +645,7 @@ void die_nmi (struct pt_regs *regs, const char *msg)
/* If we are in kernel we are probably nested up pretty bad
* and might as well get out now while we still can.
*/
if (!user_mode(regs)) {
if (!user_mode_vm(regs)) {
current->thread.trap_no = 2;
crash_kexec(regs);
}
@ -694,6 +716,7 @@ fastcall void do_nmi(struct pt_regs * regs, long error_code)
void set_nmi_callback(nmi_callback_t callback)
{
vmalloc_sync_all();
rcu_assign_pointer(nmi_callback, callback);
}
EXPORT_SYMBOL_GPL(set_nmi_callback);


@ -68,6 +68,26 @@ SECTIONS
*(.data.init_task)
}
/* might get freed after init */
. = ALIGN(4096);
__smp_alt_begin = .;
__smp_alt_instructions = .;
.smp_altinstructions : AT(ADDR(.smp_altinstructions) - LOAD_OFFSET) {
*(.smp_altinstructions)
}
__smp_alt_instructions_end = .;
. = ALIGN(4);
__smp_locks = .;
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
*(.smp_locks)
}
__smp_locks_end = .;
.smp_altinstr_replacement : AT(ADDR(.smp_altinstr_replacement) - LOAD_OFFSET) {
*(.smp_altinstr_replacement)
}
. = ALIGN(4096);
__smp_alt_end = .;
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;


@ -21,6 +21,9 @@
* instruction clobbers %esp, the user's %esp won't even survive entry
* into the kernel. We store %esp in %ebp. Code in entry.S must fetch
* arg6 from the stack.
*
* You can not use this vsyscall for the clone() syscall because the
* three dwords on the parent stack do not get copied to the child.
*/
.text
.globl __kernel_vsyscall


@ -83,6 +83,7 @@ struct es7000_oem_table {
struct psai psai;
};
#ifdef CONFIG_ACPI
struct acpi_table_sdt {
unsigned long pa;
unsigned long count;
@ -99,6 +100,9 @@ struct oem_table {
u32 OEMTableSize;
};
extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
#endif
struct mip_reg {
unsigned long long off_0;
unsigned long long off_8;
@ -114,7 +118,6 @@ struct mip_reg {
#define MIP_FUNC(VALUE) (VALUE & 0xff)
extern int parse_unisys_oem (char *oemptr);
extern int find_unisys_acpi_oem_table(unsigned long *oem_addr);
extern void setup_unisys(void);
extern int es7000_start_cpu(int cpu, unsigned long eip);
extern void es7000_sw_apic(void);


@ -51,8 +51,6 @@ struct mip_reg *host_reg;
int mip_port;
unsigned long mip_addr, host_addr;
#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_ACPI)
/*
* GSI override for ES7000 platforms.
*/
@ -76,8 +74,6 @@ es7000_rename_gsi(int ioapic, int gsi)
return gsi;
}
#endif /* (CONFIG_X86_IO_APIC) && (CONFIG_ACPI) */
void __init
setup_unisys(void)
{
@ -160,6 +156,7 @@ parse_unisys_oem (char *oemptr)
return es7000_plat;
}
#ifdef CONFIG_ACPI
int __init
find_unisys_acpi_oem_table(unsigned long *oem_addr)
{
@ -212,6 +209,7 @@ find_unisys_acpi_oem_table(unsigned long *oem_addr)
}
return -1;
}
#endif
static void
es7000_spin(int n)


@ -214,6 +214,68 @@ static noinline void force_sig_info_fault(int si_signo, int si_code,
fastcall void do_invalid_op(struct pt_regs *, unsigned long);
static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
{
unsigned index = pgd_index(address);
pgd_t *pgd_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pgd += index;
pgd_k = init_mm.pgd + index;
if (!pgd_present(*pgd_k))
return NULL;
/*
* set_pgd(pgd, *pgd_k); here would be useless on PAE
* and redundant with the set_pmd() on non-PAE. As would
* set_pud.
*/
pud = pud_offset(pgd, address);
pud_k = pud_offset(pgd_k, address);
if (!pud_present(*pud_k))
return NULL;
pmd = pmd_offset(pud, address);
pmd_k = pmd_offset(pud_k, address);
if (!pmd_present(*pmd_k))
return NULL;
if (!pmd_present(*pmd))
set_pmd(pmd, *pmd_k);
else
BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
return pmd_k;
}
/*
* Handle a fault on the vmalloc or module mapping area
*
* This assumes no large pages in there.
*/
static inline int vmalloc_fault(unsigned long address)
{
unsigned long pgd_paddr;
pmd_t *pmd_k;
pte_t *pte_k;
/*
* Synchronize this task's top level page-table
* with the 'reference' page table.
*
* Do _not_ use "current" here. We might be inside
* an interrupt in the middle of a task switch..
*/
pgd_paddr = read_cr3();
pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
if (!pmd_k)
return -1;
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
return -1;
return 0;
}
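For reference, do_page_fault() below gates this path with (error_code & 0x0000000d): bits 0, 2 and 3 must all be clear, i.e. only a kernel-mode, not-present, non-reserved-bit fault can be a vmalloc fault. A hedged restatement with named bits (the FAULT_* macros are illustrative, not kernel identifiers):

#define FAULT_PROT 1    /* bit 0: protection violation        */
#define FAULT_USER 4    /* bit 2: fault came from user mode   */
#define FAULT_RSVD 8    /* bit 3: reserved page-table bit set */

if (!(error_code & (FAULT_PROT | FAULT_USER | FAULT_RSVD)) &&
    vmalloc_fault(address) >= 0)
        return;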
/*
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
@ -223,6 +285,8 @@ fastcall void do_invalid_op(struct pt_regs *, unsigned long);
* bit 0 == 0 means no page found, 1 means protection fault
* bit 1 == 0 means read, 1 means write
* bit 2 == 0 means kernel, 1 means user-mode
* bit 3 == 1 means use of reserved bit detected
* bit 4 == 1 means fault was an instruction fetch
*/
fastcall void __kprobes do_page_fault(struct pt_regs *regs,
unsigned long error_code)
@ -237,13 +301,6 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
/* get the address */
address = read_cr2();
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/* It's safe to allow irq's after cr2 has been saved */
if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
local_irq_enable();
tsk = current;
si_code = SEGV_MAPERR;
@ -259,17 +316,29 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
*
* This verifies that the fault happens in kernel space
* (error_code & 4) == 0, and that the fault was not a
* protection error (error_code & 1) == 0.
* protection error (error_code & 9) == 0.
*/
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 5))
goto vmalloc_fault;
/*
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 0x0000000d) && vmalloc_fault(address) >= 0)
return;
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
* fault we could otherwise deadlock.
*/
goto bad_area_nosemaphore;
}
}
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
return;
/* It's safe to allow irq's after cr2 has been saved and the vmalloc
fault has been handled. */
if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
local_irq_enable();
mm = tsk->mm;
@ -440,24 +509,31 @@ no_context:
bust_spinlocks(1);
#ifdef CONFIG_X86_PAE
if (error_code & 16) {
pte_t *pte = lookup_address(address);
if (oops_may_print()) {
#ifdef CONFIG_X86_PAE
if (error_code & 16) {
pte_t *pte = lookup_address(address);
if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
printk(KERN_CRIT "kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", current->uid);
if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
printk(KERN_CRIT "kernel tried to execute "
"NX-protected page - exploit attempt? "
"(uid: %d)\n", current->uid);
}
#endif
if (address < PAGE_SIZE)
printk(KERN_ALERT "BUG: unable to handle kernel NULL "
"pointer dereference");
else
printk(KERN_ALERT "BUG: unable to handle kernel paging"
" request");
printk(" at virtual address %08lx\n",address);
printk(KERN_ALERT " printing eip:\n");
printk("%08lx\n", regs->eip);
}
#endif
if (address < PAGE_SIZE)
printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
else
printk(KERN_ALERT "Unable to handle kernel paging request");
printk(" at virtual address %08lx\n",address);
printk(KERN_ALERT " printing eip:\n");
printk("%08lx\n", regs->eip);
page = read_cr3();
page = ((unsigned long *) __va(page))[address >> 22];
printk(KERN_ALERT "*pde = %08lx\n", page);
if (oops_may_print())
printk(KERN_ALERT "*pde = %08lx\n", page);
/*
* We must not directly access the pte in the highpte
* case, the page table might be allocated in highmem.
@ -465,7 +541,7 @@ no_context:
* it's allocated already.
*/
#ifndef CONFIG_HIGHPTE
if (page & 1) {
if ((page & 1) && oops_may_print()) {
page &= PAGE_MASK;
address &= 0x003ff000;
page = ((unsigned long *) __va(page))[address >> PAGE_SHIFT];
@ -510,51 +586,41 @@ do_sigbus:
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 14;
force_sig_info_fault(SIGBUS, BUS_ADRERR, address, tsk);
return;
}
vmalloc_fault:
{
/*
* Synchronize this task's top level page-table
* with the 'reference' page table.
*
* Do _not_ use "tsk" here. We might be inside
* an interrupt in the middle of a task switch..
*/
int index = pgd_index(address);
unsigned long pgd_paddr;
pgd_t *pgd, *pgd_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pte_t *pte_k;
#ifndef CONFIG_X86_PAE
void vmalloc_sync_all(void)
{
/*
* Note that races in the updates of insync and start aren't
* problematic: insync can only get set bits added, and updates to
* start are only improving performance (without affecting correctness
* if undone).
*/
static DECLARE_BITMAP(insync, PTRS_PER_PGD);
static unsigned long start = TASK_SIZE;
unsigned long address;
pgd_paddr = read_cr3();
pgd = index + (pgd_t *)__va(pgd_paddr);
pgd_k = init_mm.pgd + index;
BUILD_BUG_ON(TASK_SIZE & ~PGDIR_MASK);
for (address = start; address >= TASK_SIZE; address += PGDIR_SIZE) {
if (!test_bit(pgd_index(address), insync)) {
unsigned long flags;
struct page *page;
if (!pgd_present(*pgd_k))
goto no_context;
/*
* set_pgd(pgd, *pgd_k); here would be useless on PAE
* and redundant with the set_pmd() on non-PAE. As would
* set_pud.
*/
pud = pud_offset(pgd, address);
pud_k = pud_offset(pgd_k, address);
if (!pud_present(*pud_k))
goto no_context;
pmd = pmd_offset(pud, address);
pmd_k = pmd_offset(pud_k, address);
if (!pmd_present(*pmd_k))
goto no_context;
set_pmd(pmd, *pmd_k);
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
goto no_context;
return;
spin_lock_irqsave(&pgd_lock, flags);
for (page = pgd_list; page; page =
(struct page *)page->index)
if (!vmalloc_sync_one(page_address(page),
address)) {
BUG_ON(page != pgd_list);
break;
}
spin_unlock_irqrestore(&pgd_lock, flags);
if (!page)
set_bit(pgd_index(address), insync);
}
if (address == start && test_bit(pgd_index(address), insync))
start = address + PGDIR_SIZE;
}
}
#endif
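The point of vmalloc_sync_all() is to let code that will itself run from the fault path live in vmalloc space without ever vmalloc-faulting: a caller pre-populates every pgd before exposing such code. A minimal sketch under that assumption (example_chain and the surrounding helper are hypothetical):

int register_example_fault_hook(struct notifier_block *nb)
{
        /* make sure every mm already maps the vmalloc/module area,
         * so the hook can safely be invoked from do_page_fault() */
        vmalloc_sync_all();
        return notifier_chain_register(&example_chain, nb);
}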


@ -720,21 +720,6 @@ static int noinline do_test_wp_bit(void)
return flag;
}
void free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, 0xcc, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk (KERN_INFO "Freeing unused kernel memory: %dk freed\n", (__init_end - __init_begin) >> 10);
}
#ifdef CONFIG_DEBUG_RODATA
extern char __start_rodata, __end_rodata;
@ -758,17 +743,31 @@ void mark_rodata_ro(void)
}
#endif
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
for (addr = begin; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, 0xcc, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)(&__init_begin),
(unsigned long)(&__init_end));
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (start < end)
printk (KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_init_pages("initrd memory", start, end);
}
#endif
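With the helper in place, freeing any reserved, page-aligned init region is a single call; a hedged example for a hypothetical region bounded by example_begin/example_end:

free_init_pages("example region",
                (unsigned long)&example_begin,
                (unsigned long)&example_end);

Note the helper also poisons the freed pages with 0xcc (the x86 int3 opcode), so a stale jump into freed init code traps immediately instead of executing garbage.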


@ -122,7 +122,7 @@ static void nmi_save_registers(void * dummy)
static void free_msrs(void)
{
int i;
for (i = 0; i < NR_CPUS; ++i) {
for_each_cpu(i) {
kfree(cpu_msrs[i].counters);
cpu_msrs[i].counters = NULL;
kfree(cpu_msrs[i].controls);
@ -138,10 +138,7 @@ static int allocate_msrs(void)
size_t counters_size = sizeof(struct op_msr) * model->num_counters;
int i;
for (i = 0; i < NR_CPUS; ++i) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
cpu_msrs[i].counters = kmalloc(counters_size, GFP_KERNEL);
if (!cpu_msrs[i].counters) {
success = 0;


@ -46,11 +46,6 @@
#define KEYBOARD_INTR 3 /* must match with simulator! */
#define NR_PORTS 1 /* only one port for now */
#define SERIAL_INLINE 1
#ifdef SERIAL_INLINE
#define _INLINE_ inline
#endif
#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
@ -237,7 +232,7 @@ static void rs_put_char(struct tty_struct *tty, unsigned char ch)
local_irq_restore(flags);
}
static _INLINE_ void transmit_chars(struct async_struct *info, int *intr_done)
static void transmit_chars(struct async_struct *info, int *intr_done)
{
int count;
unsigned long flags;


@ -37,9 +37,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -52,9 +51,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);


@ -18,6 +18,7 @@
#include <linux/module.h>
#include <linux/mc146818rtc.h> /* For struct rtc_time and ioctls, etc */
#include <linux/smp_lock.h>
#include <linux/bcd.h>
#include <asm/bvme6000hw.h>
#include <asm/io.h>
@ -32,9 +33,6 @@
* ioctls.
*/
#define BCD2BIN(val) (((val)&15) + ((val)>>4)*10)
#define BIN2BCD(val) ((((val)/10)<<4) + (val)%10)
static unsigned char days_in_mo[] =
{0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};


@ -68,9 +68,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -83,9 +82,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);


@ -167,8 +167,8 @@ int smp_call_function (void (*func) (void *info), void *info, int retry,
mb();
/* Send a message to all other CPUs and wait for them to respond */
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i) && i != cpu)
for_each_online_cpu(i)
if (i != cpu)
core_send_ipi(i, SMP_CALL_FUNCTION);
/* Wait for response */


@ -88,12 +88,9 @@ static inline int find_level(cpuid_t *cpunum, int irq)
{
int cpu, i;
for (cpu = 0; cpu <= NR_CPUS; cpu++) {
for_each_online_cpu(cpu) {
struct slice_data *si = cpu_data[cpu].data;
if (!cpu_online(cpu))
continue;
for (i = BASE_PCI_IRQ; i < LEVELS_PER_SLICE; i++)
if (si->level_to_irq[i] == irq) {
*cpunum = cpu;


@ -298,8 +298,8 @@ send_IPI_allbutself(enum ipi_message_type op)
{
int i;
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i) && i != smp_processor_id())
for_each_online_cpu(i) {
if (i != smp_processor_id())
send_IPI_single(i, op);
}
}
@ -643,14 +643,13 @@ int sys_cpus(int argc, char **argv)
if ( argc == 1 ){
#ifdef DUMP_MORE_STATE
for(i=0; i<NR_CPUS; i++) {
for_each_online_cpu(i) {
int cpus_per_line = 4;
if(cpu_online(i)) {
if (j++ % cpus_per_line)
printk(" %3d",i);
else
printk("\n %3d",i);
}
if (j++ % cpus_per_line)
printk(" %3d",i);
else
printk("\n %3d",i);
}
printk("\n");
#else
@ -659,9 +658,7 @@ int sys_cpus(int argc, char **argv)
} else if((argc==2) && !(strcmp(argv[1],"-l"))) {
printk("\nCPUSTATE TASK CPUNUM CPUID HARDCPU(HPA)\n");
#ifdef DUMP_MORE_STATE
for(i=0;i<NR_CPUS;i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
case STATE_RENDEZVOUS:
@ -695,9 +692,7 @@ int sys_cpus(int argc, char **argv)
} else if ((argc==2) && !(strcmp(argv[1],"-s"))) {
#ifdef DUMP_MORE_STATE
printk("\nCPUSTATE CPUID\n");
for (i=0;i<NR_CPUS;i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (cpu_data[i].cpuid != NO_PROC_ID) {
switch(cpu_data[i].state) {
case STATE_RENDEZVOUS:


@ -135,9 +135,8 @@ skip:
#ifdef CONFIG_TAU_INT
if (tau_initialized){
seq_puts(p, "TAU: ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", tau_interrupts(j));
for_each_online_cpu(j)
seq_printf(p, "%10u ", tau_interrupts(j));
seq_puts(p, " PowerPC Thermal Assist (cpu temp)\n");
}
#endif


@ -81,9 +81,9 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
down(&kprobe_mutex);
mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn);
up(&kprobe_mutex);
mutex_unlock(&kprobe_mutex);
}
static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
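Another recurring conversion in this merge: binary semaphores used purely for mutual exclusion become struct mutex. The mapping, as a minimal sketch (example_lock is hypothetical):

#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);  /* was: static DECLARE_MUTEX(example_lock); */

static void touch_shared_state(void)
{
        mutex_lock(&example_lock);    /* was: down(&example_lock) */
        /* ... exclusive section ... */
        mutex_unlock(&example_lock);  /* was: up(&example_lock)   */
}

Dynamically initialized locks switch from init_MUTEX() to mutex_init(), as in the loop, nbd and pktcdvd hunks further down.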


@ -162,9 +162,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
#if defined(CONFIG_SMP) && defined(CONFIG_PPC32)
unsigned long bogosum = 0;
int i;
for (i = 0; i < NR_CPUS; ++i)
if (cpu_online(i))
bogosum += loops_per_jiffy;
for_each_online_cpu(i)
bogosum += loops_per_jiffy;
seq_printf(m, "total bogomips\t: %lu.%02lu\n",
bogosum/(500000/HZ), bogosum/(5000/HZ) % 100);
#endif /* CONFIG_SMP && CONFIG_PPC32 */


@ -272,9 +272,8 @@ int __init ppc_init(void)
if ( ppc_md.progress ) ppc_md.progress(" ", 0xffff);
/* register CPU devices */
for (i = 0; i < NR_CPUS; i++)
if (cpu_possible(i))
register_cpu(&cpu_devices[i], i, NULL);
for_each_cpu(i)
register_cpu(&cpu_devices[i], i, NULL);
/* call platform init */
if (ppc_md.init != NULL) {


@ -191,9 +191,7 @@ static void smp_psurge_message_pass(int target, int msg)
if (num_online_cpus() < 2)
return;
for (i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i))
continue;
for_each_online_cpu(i) {
if (target == MSG_ALL
|| (target == MSG_ALL_BUT_SELF && i != smp_processor_id())
|| target == i) {


@ -168,9 +168,8 @@ int show_cpuinfo(struct seq_file *m, void *v)
/* Show summary information */
#ifdef CONFIG_SMP
unsigned long bogosum = 0;
for (i = 0; i < NR_CPUS; ++i)
if (cpu_online(i))
bogosum += cpu_data[i].loops_per_jiffy;
for_each_online_cpu(i)
bogosum += cpu_data[i].loops_per_jiffy;
seq_printf(m, "total bogomips\t: %lu.%02lu\n",
bogosum/(500000/HZ), bogosum/(5000/HZ) % 100);
#endif /* CONFIG_SMP */
@ -712,9 +711,8 @@ int __init ppc_init(void)
if ( ppc_md.progress ) ppc_md.progress(" ", 0xffff);
/* register CPU devices */
for (i = 0; i < NR_CPUS; i++)
if (cpu_possible(i))
register_cpu(&cpu_devices[i], i, NULL);
for_each_cpu(i)
register_cpu(&cpu_devices[i], i, NULL);
/* call platform init */
if (ppc_md.init != NULL) {


@ -799,9 +799,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
*/
print_cpu_info(&S390_lowcore.cpu_data);
for(i = 0; i < NR_CPUS; i++) {
if (!cpu_possible(i))
continue;
for_each_cpu(i) {
lowcore_ptr[i] = (struct _lowcore *)
__get_free_pages(GFP_KERNEL|GFP_DMA,
sizeof(void*) == 8 ? 1 : 0);


@ -35,9 +35,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_puts(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}


@ -404,9 +404,8 @@ static int __init topology_init(void)
{
int cpu_id;
for (cpu_id = 0; cpu_id < NR_CPUS; cpu_id++)
if (cpu_possible(cpu_id))
register_cpu(&cpu[cpu_id], cpu_id, NULL);
for_each_cpu(cpu_id)
register_cpu(&cpu[cpu_id], cpu_id, NULL);
return 0;
}


@ -53,9 +53,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_puts(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}


@ -184,9 +184,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++) {
if (cpu_online(j))
seq_printf(p, "%10u ",
for_each_online_cpu(j) {
seq_printf(p, "%10u ",
kstat_cpu(cpu_logical_map(j)).irqs[i]);
}
#endif


@ -243,9 +243,8 @@ int setup_profiling_timer(unsigned int multiplier)
return -EINVAL;
spin_lock_irqsave(&prof_setup_lock, flags);
for(i = 0; i < NR_CPUS; i++) {
if (cpu_possible(i))
load_profile_irq(i, lvl14_resolution / multiplier);
for_each_cpu(i) {
load_profile_irq(i, lvl14_resolution / multiplier);
prof_multiplier(i) = multiplier;
}
spin_unlock_irqrestore(&prof_setup_lock, flags);
@ -273,13 +272,12 @@ void smp_bogo(struct seq_file *m)
{
int i;
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i))
seq_printf(m,
"Cpu%dBogo\t: %lu.%02lu\n",
i,
cpu_data(i).udelay_val/(500000/HZ),
(cpu_data(i).udelay_val/(5000/HZ))%100);
for_each_online_cpu(i) {
seq_printf(m,
"Cpu%dBogo\t: %lu.%02lu\n",
i,
cpu_data(i).udelay_val/(500000/HZ),
(cpu_data(i).udelay_val/(5000/HZ))%100);
}
}
@ -288,8 +286,6 @@ void smp_info(struct seq_file *m)
int i;
seq_printf(m, "State:\n");
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i))
seq_printf(m, "CPU%d\t\t: online\n", i);
}
for_each_online_cpu(i)
seq_printf(m, "CPU%d\t\t: online\n", i);
}


@ -103,11 +103,9 @@ found_it: seq_printf(p, "%3d: ", i);
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (x = 0; x < NR_CPUS; x++) {
if (cpu_online(x))
seq_printf(p, "%10u ",
kstat_cpu(cpu_logical_map(x)).irqs[i]);
}
for_each_online_cpu(x)
seq_printf(p, "%10u ",
kstat_cpu(cpu_logical_map(x)).irqs[i]);
#endif
seq_printf(p, "%c %s",
(action->flags & SA_INTERRUPT) ? '+' : ' ',


@ -249,11 +249,9 @@ void __init smp4d_boot_cpus(void)
} else {
unsigned long bogosum = 0;
for(i = 0; i < NR_CPUS; i++) {
if (cpu_isset(i, cpu_present_map)) {
bogosum += cpu_data(i).udelay_val;
smp_highest_cpu = i;
}
for_each_present_cpu(i) {
bogosum += cpu_data(i).udelay_val;
smp_highest_cpu = i;
}
SMP_PRINTK(("Total of %d Processors activated (%lu.%02lu BogoMIPS).\n", cpucount + 1, bogosum/(500000/HZ), (bogosum/(5000/HZ))%100));
printk("Total of %d Processors activated (%lu.%02lu BogoMIPS).\n",


@ -218,10 +218,8 @@ void __init smp4m_boot_cpus(void)
cpu_present_map = cpumask_of_cpu(smp_processor_id());
} else {
unsigned long bogosum = 0;
for(i = 0; i < NR_CPUS; i++) {
if (cpu_isset(i, cpu_present_map))
bogosum += cpu_data(i).udelay_val;
}
for_each_present_cpu(i)
bogosum += cpu_data(i).udelay_val;
printk("Total of %d Processors activated (%lu.%02lu BogoMIPS).\n",
cpucount + 1,
bogosum/(500000/HZ),


@ -117,9 +117,7 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++) {
if (!cpu_online(j))
continue;
for_each_online_cpu(j) {
seq_printf(p, "%10u ",
kstat_cpu(j).irqs[i]);
}


@ -57,25 +57,21 @@ void smp_info(struct seq_file *m)
int i;
seq_printf(m, "State:\n");
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i))
seq_printf(m,
"CPU%d:\t\tonline\n", i);
}
for_each_online_cpu(i)
seq_printf(m, "CPU%d:\t\tonline\n", i);
}
void smp_bogo(struct seq_file *m)
{
int i;
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i))
seq_printf(m,
"Cpu%dBogo\t: %lu.%02lu\n"
"Cpu%dClkTck\t: %016lx\n",
i, cpu_data(i).udelay_val / (500000/HZ),
(cpu_data(i).udelay_val / (5000/HZ)) % 100,
i, cpu_data(i).clock_tick);
for_each_online_cpu(i)
seq_printf(m,
"Cpu%dBogo\t: %lu.%02lu\n"
"Cpu%dClkTck\t: %016lx\n",
i, cpu_data(i).udelay_val / (500000/HZ),
(cpu_data(i).udelay_val / (5000/HZ)) % 100,
i, cpu_data(i).clock_tick);
}
void __init smp_store_cpu_info(int id)
@ -1282,7 +1278,7 @@ int setup_profiling_timer(unsigned int multiplier)
return -EINVAL;
spin_lock_irqsave(&prof_setup_lock, flags);
for (i = 0; i < NR_CPUS; i++)
for_each_cpu(i)
prof_multiplier(i) = multiplier;
current_tick_offset = (timer_tick_offset / multiplier);
spin_unlock_irqrestore(&prof_setup_lock, flags);
@ -1384,10 +1380,8 @@ void __init smp_cpus_done(unsigned int max_cpus)
unsigned long bogosum = 0;
int i;
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i))
bogosum += cpu_data(i).udelay_val;
}
for_each_online_cpu(i)
bogosum += cpu_data(i).udelay_val;
printk("Total of %ld processors activated "
"(%lu.%02lu BogoMIPS).\n",
(long) num_online_cpus(),


@ -1828,8 +1828,8 @@ void __flush_tlb_all(void)
void online_page(struct page *page)
{
ClearPageReserved(page);
set_page_count(page, 0);
free_cold_page(page);
init_page_count(page);
__free_page(page);
totalram_pages++;
num_physpages++;
}


@ -491,6 +491,16 @@ void __init check_bugs(void)
check_devanon();
}
void apply_alternatives(void *start, void *end)
void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
{
}
void alternatives_smp_module_add(struct module *mod, char *name,
void *locks, void *locks_end,
void *text, void *text_end)
{
}
void alternatives_smp_module_del(struct module *mod)
{
}


@ -17,11 +17,8 @@
#define VGABASE ((void __iomem *)0xffffffff800b8000UL)
#endif
#define MAX_YPOS max_ypos
#define MAX_XPOS max_xpos
static int max_ypos = 25, max_xpos = 80;
static int current_ypos = 1, current_xpos = 0;
static int current_ypos = 25, current_xpos = 0;
static void early_vga_write(struct console *con, const char *str, unsigned n)
{
@ -29,26 +26,26 @@ static void early_vga_write(struct console *con, const char *str, unsigned n)
int i, k, j;
while ((c = *str++) != '\0' && n-- > 0) {
if (current_ypos >= MAX_YPOS) {
if (current_ypos >= max_ypos) {
/* scroll 1 line up */
for (k = 1, j = 0; k < MAX_YPOS; k++, j++) {
for (i = 0; i < MAX_XPOS; i++) {
writew(readw(VGABASE + 2*(MAX_XPOS*k + i)),
VGABASE + 2*(MAX_XPOS*j + i));
for (k = 1, j = 0; k < max_ypos; k++, j++) {
for (i = 0; i < max_xpos; i++) {
writew(readw(VGABASE+2*(max_xpos*k+i)),
VGABASE + 2*(max_xpos*j + i));
}
}
for (i = 0; i < MAX_XPOS; i++)
writew(0x720, VGABASE + 2*(MAX_XPOS*j + i));
current_ypos = MAX_YPOS-1;
for (i = 0; i < max_xpos; i++)
writew(0x720, VGABASE + 2*(max_xpos*j + i));
current_ypos = max_ypos-1;
}
if (c == '\n') {
current_xpos = 0;
current_ypos++;
} else if (c != '\r') {
writew(((0x7 << 8) | (unsigned short) c),
VGABASE + 2*(MAX_XPOS*current_ypos +
VGABASE + 2*(max_xpos*current_ypos +
current_xpos++));
if (current_xpos >= MAX_XPOS) {
if (current_xpos >= max_xpos) {
current_xpos = 0;
current_ypos++;
}
@ -244,6 +241,7 @@ int __init setup_early_printk(char *opt)
&& SCREEN_INFO.orig_video_isVGA == 1) {
max_xpos = SCREEN_INFO.orig_video_cols;
max_ypos = SCREEN_INFO.orig_video_lines;
current_ypos = SCREEN_INFO.orig_y;
early_console = &early_vga_console;
} else if (!strncmp(buf, "simnow", 6)) {
simnow_init(buf + 6);


@ -38,9 +38,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -53,10 +52,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ",
kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
@ -68,15 +65,13 @@ skip:
spin_unlock_irqrestore(&irq_desc[i].lock, flags);
} else if (i == NR_IRQS) {
seq_printf(p, "NMI: ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", cpu_pda(j)->__nmi_count);
for_each_online_cpu(j)
seq_printf(p, "%10u ", cpu_pda(j)->__nmi_count);
seq_putc(p, '\n');
#ifdef CONFIG_X86_LOCAL_APIC
seq_printf(p, "LOC: ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", cpu_pda(j)->apic_timer_irqs);
for_each_online_cpu(j)
seq_printf(p, "%10u ", cpu_pda(j)->apic_timer_irqs);
seq_putc(p, '\n');
#endif
seq_printf(p, "ERR: %10u\n", atomic_read(&irq_err_count));


@ -222,9 +222,9 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
down(&kprobe_mutex);
mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn);
up(&kprobe_mutex);
mutex_unlock(&kprobe_mutex);
}
static inline void save_previous_kprobe(struct kprobe_ctlblk *kcb)


@ -162,9 +162,7 @@ int __init check_nmi_watchdog (void)
local_irq_enable();
mdelay((10*1000)/nmi_hz); // wait 10 ticks
for (cpu = 0; cpu < NR_CPUS; cpu++) {
if (!cpu_online(cpu))
continue;
for_each_online_cpu(cpu) {
if (cpu_pda(cpu)->__nmi_count - counts[cpu] <= 5) {
endflag = 1;
printk("CPU#%d: NMI appears to be stuck (%d->%d)!\n",


@ -443,9 +443,6 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
if (!user_mode(regs))
return 1;
if (try_to_freeze())
goto no_signal;
if (!oldset)
oldset = &current->blocked;
@ -463,7 +460,6 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
return handle_signal(signr, &info, &ka, oldset, regs);
}
no_signal:
/* Did we come from a system call? */
if ((long)regs->orig_rax >= 0) {
/* Restart the system call - no handlers present */


@ -83,9 +83,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (i == 0) {
seq_printf(p, " ");
for (j=0; j<NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "CPU%d ",j);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ",j);
seq_putc(p, '\n');
}
@ -98,9 +97,8 @@ int show_interrupts(struct seq_file *p, void *v)
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %s", action->name);
@ -113,9 +111,8 @@ skip:
spin_unlock_irqrestore(&irq_desc[i].lock, flags);
} else if (i == NR_IRQS) {
seq_printf(p, "NMI: ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", nmi_count(j));
for_each_online_cpu(j)
seq_printf(p, "%10u ", nmi_count(j));
seq_putc(p, '\n');
seq_printf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
}


@ -31,10 +31,6 @@
#include <linux/tty.h>
#include <linux/tty_flip.h>
#ifdef SERIAL_INLINE
#define _INLINE_ inline
#endif
#define SERIAL_MAX_NUM_LINES 1
#define SERIAL_TIMER_VALUE (20 * HZ)


@ -42,9 +42,9 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
return -EINVAL;
}
/* partition number in use? */
down(&bdev->bd_sem);
mutex_lock(&bdev->bd_mutex);
if (disk->part[part - 1]) {
up(&bdev->bd_sem);
mutex_unlock(&bdev->bd_mutex);
return -EBUSY;
}
/* overlap? */
@ -55,13 +55,13 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
continue;
if (!(start+length <= s->start_sect ||
start >= s->start_sect + s->nr_sects)) {
up(&bdev->bd_sem);
mutex_unlock(&bdev->bd_mutex);
return -EBUSY;
}
}
/* all seems OK */
add_partition(disk, part, start, length);
up(&bdev->bd_sem);
mutex_unlock(&bdev->bd_mutex);
return 0;
case BLKPG_DEL_PARTITION:
if (!disk->part[part-1])
@ -71,9 +71,9 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
bdevp = bdget_disk(disk, part);
if (!bdevp)
return -ENOMEM;
down(&bdevp->bd_sem);
mutex_lock(&bdevp->bd_mutex);
if (bdevp->bd_openers) {
up(&bdevp->bd_sem);
mutex_unlock(&bdevp->bd_mutex);
bdput(bdevp);
return -EBUSY;
}
@ -81,10 +81,10 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
fsync_bdev(bdevp);
invalidate_bdev(bdevp, 0);
down(&bdev->bd_sem);
mutex_lock(&bdev->bd_mutex);
delete_partition(disk, part);
up(&bdev->bd_sem);
up(&bdevp->bd_sem);
mutex_unlock(&bdev->bd_mutex);
mutex_unlock(&bdevp->bd_mutex);
bdput(bdevp);
return 0;
@ -102,10 +102,10 @@ static int blkdev_reread_part(struct block_device *bdev)
return -EINVAL;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (down_trylock(&bdev->bd_sem))
if (!mutex_trylock(&bdev->bd_mutex))
return -EBUSY;
res = rescan_partitions(disk, bdev);
up(&bdev->bd_sem);
mutex_unlock(&bdev->bd_mutex);
return res;
}
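One subtlety in the hunk above: the sense of the trylock test flips, because down_trylock() returns nonzero on failure while mutex_trylock() returns nonzero on success:

if (down_trylock(&bdev->bd_sem))      /* old: nonzero means "didn't get it" */
        return -EBUSY;

if (!mutex_trylock(&bdev->bd_mutex))  /* new: zero means "didn't get it"    */
        return -EBUSY;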


@ -8,6 +8,7 @@
*
*/
#include <linux/vt_kern.h>
#include <linux/device.h>
#include "../base.h"
#include "power.h"
@ -62,7 +63,6 @@ int suspend_device(struct device * dev, pm_message_t state)
return error;
}
/**
* device_suspend - Save state and stop all devices in system.
* @state: Power state to put each device in.
@ -82,6 +82,9 @@ int device_suspend(pm_message_t state)
{
int error = 0;
if (!is_console_suspend_safe())
return -EINVAL;
down(&dpm_sem);
down(&dpm_list_sem);
while (!list_empty(&dpm_active) && error == 0) {


@ -3268,8 +3268,8 @@ clean2:
unregister_blkdev(hba[i]->major, hba[i]->devname);
clean1:
release_io_mem(hba[i]);
free_hba(i);
hba[i]->busy_initializing = 0;
free_hba(i);
return(-1);
}
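The two-line swap above is a use-after-free fix rather than a cleanup: busy_initializing lives inside the structure that free_hba() releases, so it must be cleared while hba[i] is still valid:

hba[i]->busy_initializing = 0;  /* hba[i] still valid here     */
free_hba(i);                    /* memory gone after this call */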


@ -179,6 +179,7 @@ static int print_unex = 1;
#include <linux/devfs_fs_kernel.h>
#include <linux/platform_device.h>
#include <linux/buffer_head.h> /* for invalidate_buffers() */
#include <linux/mutex.h>
/*
* PS/2 floppies have much slower step rates than regular floppies.
@ -413,7 +414,7 @@ static struct floppy_write_errors write_errors[N_DRIVE];
static struct timer_list motor_off_timer[N_DRIVE];
static struct gendisk *disks[N_DRIVE];
static struct block_device *opened_bdev[N_DRIVE];
static DECLARE_MUTEX(open_lock);
static DEFINE_MUTEX(open_lock);
static struct floppy_raw_cmd *raw_cmd, default_raw_cmd;
/*
@ -3333,7 +3334,7 @@ static inline int set_geometry(unsigned int cmd, struct floppy_struct *g,
if (type) {
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
down(&open_lock);
mutex_lock(&open_lock);
LOCK_FDC(drive, 1);
floppy_type[type] = *g;
floppy_type[type].name = "user format";
@ -3347,7 +3348,7 @@ static inline int set_geometry(unsigned int cmd, struct floppy_struct *g,
continue;
__invalidate_device(bdev);
}
up(&open_lock);
mutex_unlock(&open_lock);
} else {
int oldStretch;
LOCK_FDC(drive, 1);
@ -3674,7 +3675,7 @@ static int floppy_release(struct inode *inode, struct file *filp)
{
int drive = (long)inode->i_bdev->bd_disk->private_data;
down(&open_lock);
mutex_lock(&open_lock);
if (UDRS->fd_ref < 0)
UDRS->fd_ref = 0;
else if (!UDRS->fd_ref--) {
@ -3684,7 +3685,7 @@ static int floppy_release(struct inode *inode, struct file *filp)
if (!UDRS->fd_ref)
opened_bdev[drive] = NULL;
floppy_release_irq_and_dma();
up(&open_lock);
mutex_unlock(&open_lock);
return 0;
}
@ -3702,7 +3703,7 @@ static int floppy_open(struct inode *inode, struct file *filp)
char *tmp;
filp->private_data = (void *)0;
down(&open_lock);
mutex_lock(&open_lock);
old_dev = UDRS->fd_device;
if (opened_bdev[drive] && opened_bdev[drive] != inode->i_bdev)
goto out2;
@ -3785,7 +3786,7 @@ static int floppy_open(struct inode *inode, struct file *filp)
if ((filp->f_mode & 2) && !(UTESTF(FD_DISK_WRITABLE)))
goto out;
}
up(&open_lock);
mutex_unlock(&open_lock);
return 0;
out:
if (UDRS->fd_ref < 0)
@ -3796,7 +3797,7 @@ out:
opened_bdev[drive] = NULL;
floppy_release_irq_and_dma();
out2:
up(&open_lock);
mutex_unlock(&open_lock);
return res;
}


@ -1144,7 +1144,7 @@ static int lo_ioctl(struct inode * inode, struct file * file,
struct loop_device *lo = inode->i_bdev->bd_disk->private_data;
int err;
down(&lo->lo_ctl_mutex);
mutex_lock(&lo->lo_ctl_mutex);
switch (cmd) {
case LOOP_SET_FD:
err = loop_set_fd(lo, file, inode->i_bdev, arg);
@ -1170,7 +1170,7 @@ static int lo_ioctl(struct inode * inode, struct file * file,
default:
err = lo->ioctl ? lo->ioctl(lo, cmd, arg) : -EINVAL;
}
up(&lo->lo_ctl_mutex);
mutex_unlock(&lo->lo_ctl_mutex);
return err;
}
@ -1178,9 +1178,9 @@ static int lo_open(struct inode *inode, struct file *file)
{
struct loop_device *lo = inode->i_bdev->bd_disk->private_data;
down(&lo->lo_ctl_mutex);
mutex_lock(&lo->lo_ctl_mutex);
lo->lo_refcnt++;
up(&lo->lo_ctl_mutex);
mutex_unlock(&lo->lo_ctl_mutex);
return 0;
}
@ -1189,9 +1189,9 @@ static int lo_release(struct inode *inode, struct file *file)
{
struct loop_device *lo = inode->i_bdev->bd_disk->private_data;
down(&lo->lo_ctl_mutex);
mutex_lock(&lo->lo_ctl_mutex);
--lo->lo_refcnt;
up(&lo->lo_ctl_mutex);
mutex_unlock(&lo->lo_ctl_mutex);
return 0;
}
@ -1233,12 +1233,12 @@ int loop_unregister_transfer(int number)
xfer_funcs[n] = NULL;
for (lo = &loop_dev[0]; lo < &loop_dev[max_loop]; lo++) {
down(&lo->lo_ctl_mutex);
mutex_lock(&lo->lo_ctl_mutex);
if (lo->lo_encryption == xfer)
loop_release_xfer(lo);
up(&lo->lo_ctl_mutex);
mutex_unlock(&lo->lo_ctl_mutex);
}
return 0;
@ -1285,7 +1285,7 @@ static int __init loop_init(void)
lo->lo_queue = blk_alloc_queue(GFP_KERNEL);
if (!lo->lo_queue)
goto out_mem4;
init_MUTEX(&lo->lo_ctl_mutex);
mutex_init(&lo->lo_ctl_mutex);
init_completion(&lo->lo_done);
init_completion(&lo->lo_bh_done);
lo->lo_number = i;


@ -459,9 +459,9 @@ static void do_nbd_request(request_queue_t * q)
req->errors = 0;
spin_unlock_irq(q->queue_lock);
down(&lo->tx_lock);
mutex_lock(&lo->tx_lock);
if (unlikely(!lo->sock)) {
up(&lo->tx_lock);
mutex_unlock(&lo->tx_lock);
printk(KERN_ERR "%s: Attempted send on closed socket\n",
lo->disk->disk_name);
req->errors++;
@ -484,7 +484,7 @@ static void do_nbd_request(request_queue_t * q)
}
lo->active_req = NULL;
up(&lo->tx_lock);
mutex_unlock(&lo->tx_lock);
wake_up_all(&lo->active_wq);
spin_lock_irq(q->queue_lock);
@ -534,9 +534,9 @@ static int nbd_ioctl(struct inode *inode, struct file *file,
case NBD_CLEAR_SOCK:
error = 0;
down(&lo->tx_lock);
mutex_lock(&lo->tx_lock);
lo->sock = NULL;
up(&lo->tx_lock);
mutex_unlock(&lo->tx_lock);
file = lo->file;
lo->file = NULL;
nbd_clear_que(lo);
@ -590,7 +590,7 @@ static int nbd_ioctl(struct inode *inode, struct file *file,
* FIXME: This code is duplicated from sys_shutdown, but
* there should be a more generic interface rather than
* calling socket ops directly here */
down(&lo->tx_lock);
mutex_lock(&lo->tx_lock);
if (lo->sock) {
printk(KERN_WARNING "%s: shutting down socket\n",
lo->disk->disk_name);
@ -598,7 +598,7 @@ static int nbd_ioctl(struct inode *inode, struct file *file,
SEND_SHUTDOWN|RCV_SHUTDOWN);
lo->sock = NULL;
}
up(&lo->tx_lock);
mutex_unlock(&lo->tx_lock);
file = lo->file;
lo->file = NULL;
nbd_clear_que(lo);
@ -683,7 +683,7 @@ static int __init nbd_init(void)
nbd_dev[i].flags = 0;
spin_lock_init(&nbd_dev[i].queue_lock);
INIT_LIST_HEAD(&nbd_dev[i].queue_head);
init_MUTEX(&nbd_dev[i].tx_lock);
mutex_init(&nbd_dev[i].tx_lock);
init_waitqueue_head(&nbd_dev[i].active_wq);
nbd_dev[i].blksize = 1024;
nbd_dev[i].bytesize = 0x7ffffc00ULL << 10; /* 2TB */


@ -56,6 +56,7 @@
#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/suspend.h>
#include <linux/mutex.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_ioctl.h>
#include <scsi/scsi.h>
@ -81,7 +82,7 @@
static struct pktcdvd_device *pkt_devs[MAX_WRITERS];
static struct proc_dir_entry *pkt_proc;
static int pkt_major;
static struct semaphore ctl_mutex; /* Serialize open/close/setup/teardown */
static struct mutex ctl_mutex; /* Serialize open/close/setup/teardown */
static mempool_t *psd_pool;
@ -2018,7 +2019,7 @@ static int pkt_open(struct inode *inode, struct file *file)
VPRINTK("pktcdvd: entering open\n");
down(&ctl_mutex);
mutex_lock(&ctl_mutex);
pd = pkt_find_dev_from_minor(iminor(inode));
if (!pd) {
ret = -ENODEV;
@ -2044,14 +2045,14 @@ static int pkt_open(struct inode *inode, struct file *file)
set_blocksize(inode->i_bdev, CD_FRAMESIZE);
}
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
return 0;
out_dec:
pd->refcnt--;
out:
VPRINTK("pktcdvd: failed open (%d)\n", ret);
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
return ret;
}
@ -2060,14 +2061,14 @@ static int pkt_close(struct inode *inode, struct file *file)
struct pktcdvd_device *pd = inode->i_bdev->bd_disk->private_data;
int ret = 0;
down(&ctl_mutex);
mutex_lock(&ctl_mutex);
pd->refcnt--;
BUG_ON(pd->refcnt < 0);
if (pd->refcnt == 0) {
int flush = test_bit(PACKET_WRITABLE, &pd->flags);
pkt_release_dev(pd, flush);
}
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
return ret;
}
@ -2596,21 +2597,21 @@ static int pkt_ctl_ioctl(struct inode *inode, struct file *file, unsigned int cm
case PKT_CTRL_CMD_SETUP:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
down(&ctl_mutex);
mutex_lock(&ctl_mutex);
ret = pkt_setup_dev(&ctrl_cmd);
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
break;
case PKT_CTRL_CMD_TEARDOWN:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
down(&ctl_mutex);
mutex_lock(&ctl_mutex);
ret = pkt_remove_dev(&ctrl_cmd);
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
break;
case PKT_CTRL_CMD_STATUS:
down(&ctl_mutex);
mutex_lock(&ctl_mutex);
pkt_get_status(&ctrl_cmd);
up(&ctl_mutex);
mutex_unlock(&ctl_mutex);
break;
default:
return -ENOTTY;
@ -2656,7 +2657,7 @@ static int __init pkt_init(void)
goto out;
}
init_MUTEX(&ctl_mutex);
mutex_init(&ctl_mutex);
pkt_proc = proc_mkdir("pktcdvd", proc_root_driver);


@ -310,12 +310,12 @@ static int rd_ioctl(struct inode *inode, struct file *file,
* cache
*/
error = -EBUSY;
down(&bdev->bd_sem);
mutex_lock(&bdev->bd_mutex);
if (bdev->bd_openers <= 2) {
truncate_inode_pages(bdev->bd_inode->i_mapping, 0);
error = 0;
}
up(&bdev->bd_sem);
mutex_unlock(&bdev->bd_mutex);
return error;
}


@ -407,7 +407,6 @@ int register_cdrom(struct cdrom_device_info *cdi)
ENSURE(get_mcn, CDC_MCN);
ENSURE(reset, CDC_RESET);
ENSURE(audio_ioctl, CDC_PLAY_AUDIO);
ENSURE(dev_ioctl, CDC_IOCTLS);
ENSURE(generic_packet, CDC_GENERIC_PACKET);
cdi->mc_flags = 0;
cdo->n_minors = 0;
@ -2196,395 +2195,586 @@ retry:
return cdrom_read_cdda_old(cdi, ubuf, lba, nframes);
}
/* Just about every imaginable ioctl is supported in the Uniform layer
* these days. ATAPI / SCSI specific code now mainly resides in
* mmc_ioctl().
*/
int cdrom_ioctl(struct file * file, struct cdrom_device_info *cdi,
struct inode *ip, unsigned int cmd, unsigned long arg)
static int cdrom_ioctl_multisession(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_device_ops *cdo = cdi->ops;
struct cdrom_multisession ms_info;
u8 requested_format;
int ret;
/* Try the generic SCSI command ioctl's first.. */
ret = scsi_cmd_ioctl(file, ip->i_bdev->bd_disk, cmd, (void __user *)arg);
if (ret != -ENOTTY)
cdinfo(CD_DO_IOCTL, "entering CDROMMULTISESSION\n");
if (!(cdi->ops->capability & CDC_MULTI_SESSION))
return -ENOSYS;
if (copy_from_user(&ms_info, argp, sizeof(ms_info)))
return -EFAULT;
requested_format = ms_info.addr_format;
if (requested_format != CDROM_MSF && requested_format != CDROM_LBA)
return -EINVAL;
ms_info.addr_format = CDROM_LBA;
ret = cdi->ops->get_last_session(cdi, &ms_info);
if (ret)
return ret;
/* the first few commands do not deal with audio drive_info, but
only with routines in cdrom device operations. */
switch (cmd) {
case CDROMMULTISESSION: {
struct cdrom_multisession ms_info;
u_char requested_format;
cdinfo(CD_DO_IOCTL, "entering CDROMMULTISESSION\n");
if (!(cdo->capability & CDC_MULTI_SESSION))
return -ENOSYS;
IOCTL_IN(arg, struct cdrom_multisession, ms_info);
requested_format = ms_info.addr_format;
if (!((requested_format == CDROM_MSF) ||
(requested_format == CDROM_LBA)))
return -EINVAL;
ms_info.addr_format = CDROM_LBA;
if ((ret=cdo->get_last_session(cdi, &ms_info)))
sanitize_format(&ms_info.addr, &ms_info.addr_format, requested_format);
if (copy_to_user(argp, &ms_info, sizeof(ms_info)))
return -EFAULT;
cdinfo(CD_DO_IOCTL, "CDROMMULTISESSION successful\n");
return 0;
}
static int cdrom_ioctl_eject(struct cdrom_device_info *cdi)
{
cdinfo(CD_DO_IOCTL, "entering CDROMEJECT\n");
if (!CDROM_CAN(CDC_OPEN_TRAY))
return -ENOSYS;
if (cdi->use_count != 1 || keeplocked)
return -EBUSY;
if (CDROM_CAN(CDC_LOCK)) {
int ret = cdi->ops->lock_door(cdi, 0);
if (ret)
return ret;
sanitize_format(&ms_info.addr, &ms_info.addr_format,
requested_format);
IOCTL_OUT(arg, struct cdrom_multisession, ms_info);
cdinfo(CD_DO_IOCTL, "CDROMMULTISESSION successful\n");
return 0;
}
}
case CDROMEJECT: {
cdinfo(CD_DO_IOCTL, "entering CDROMEJECT\n");
if (!CDROM_CAN(CDC_OPEN_TRAY))
return -ENOSYS;
if (cdi->use_count != 1 || keeplocked)
return -EBUSY;
if (CDROM_CAN(CDC_LOCK))
if ((ret=cdo->lock_door(cdi, 0)))
return ret;
return cdi->ops->tray_move(cdi, 1);
}
return cdo->tray_move(cdi, 1);
}
static int cdrom_ioctl_closetray(struct cdrom_device_info *cdi)
{
cdinfo(CD_DO_IOCTL, "entering CDROMCLOSETRAY\n");
case CDROMCLOSETRAY: {
cdinfo(CD_DO_IOCTL, "entering CDROMCLOSETRAY\n");
if (!CDROM_CAN(CDC_CLOSE_TRAY))
return -ENOSYS;
return cdo->tray_move(cdi, 0);
}
if (!CDROM_CAN(CDC_CLOSE_TRAY))
return -ENOSYS;
return cdi->ops->tray_move(cdi, 0);
}
case CDROMEJECT_SW: {
cdinfo(CD_DO_IOCTL, "entering CDROMEJECT_SW\n");
if (!CDROM_CAN(CDC_OPEN_TRAY))
return -ENOSYS;
if (keeplocked)
return -EBUSY;
cdi->options &= ~(CDO_AUTO_CLOSE | CDO_AUTO_EJECT);
if (arg)
cdi->options |= CDO_AUTO_CLOSE | CDO_AUTO_EJECT;
return 0;
}
static int cdrom_ioctl_eject_sw(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROMEJECT_SW\n");
case CDROM_MEDIA_CHANGED: {
struct cdrom_changer_info *info;
int changed;
if (!CDROM_CAN(CDC_OPEN_TRAY))
return -ENOSYS;
if (keeplocked)
return -EBUSY;
cdinfo(CD_DO_IOCTL, "entering CDROM_MEDIA_CHANGED\n");
if (!CDROM_CAN(CDC_MEDIA_CHANGED))
return -ENOSYS;
cdi->options &= ~(CDO_AUTO_CLOSE | CDO_AUTO_EJECT);
if (arg)
cdi->options |= CDO_AUTO_CLOSE | CDO_AUTO_EJECT;
return 0;
}
/* cannot select disc or select current disc */
if (!CDROM_CAN(CDC_SELECT_DISC) || arg == CDSL_CURRENT)
return media_changed(cdi, 1);
static int cdrom_ioctl_media_changed(struct cdrom_device_info *cdi,
unsigned long arg)
{
struct cdrom_changer_info *info;
int ret;
if ((unsigned int)arg >= cdi->capacity)
return -EINVAL;
cdinfo(CD_DO_IOCTL, "entering CDROM_MEDIA_CHANGED\n");
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
if (!CDROM_CAN(CDC_MEDIA_CHANGED))
return -ENOSYS;
if ((ret = cdrom_read_mech_status(cdi, info))) {
kfree(info);
return ret;
}
/* cannot select disc or select current disc */
if (!CDROM_CAN(CDC_SELECT_DISC) || arg == CDSL_CURRENT)
return media_changed(cdi, 1);
changed = info->slots[arg].change;
kfree(info);
return changed;
}
if ((unsigned int)arg >= cdi->capacity)
return -EINVAL;
case CDROM_SET_OPTIONS: {
cdinfo(CD_DO_IOCTL, "entering CDROM_SET_OPTIONS\n");
/* options need to be in sync with capability. too late for
that, so we have to check each one separately... */
switch (arg) {
case CDO_USE_FFLAGS:
case CDO_CHECK_TYPE:
break;
case CDO_LOCK:
if (!CDROM_CAN(CDC_LOCK))
return -ENOSYS;
break;
case 0:
return cdi->options;
/* default is basically CDO_[AUTO_CLOSE|AUTO_EJECT] */
default:
if (!CDROM_CAN(arg))
return -ENOSYS;
}
cdi->options |= (int) arg;
return cdi->options;
}
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
case CDROM_CLEAR_OPTIONS: {
cdinfo(CD_DO_IOCTL, "entering CDROM_CLEAR_OPTIONS\n");
cdi->options &= ~(int) arg;
return cdi->options;
}
ret = cdrom_read_mech_status(cdi, info);
if (!ret)
ret = info->slots[arg].change;
kfree(info);
return ret;
}
case CDROM_SELECT_SPEED: {
cdinfo(CD_DO_IOCTL, "entering CDROM_SELECT_SPEED\n");
if (!CDROM_CAN(CDC_SELECT_SPEED))
return -ENOSYS;
return cdo->select_speed(cdi, arg);
}
static int cdrom_ioctl_set_options(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_SET_OPTIONS\n");
case CDROM_SELECT_DISC: {
cdinfo(CD_DO_IOCTL, "entering CDROM_SELECT_DISC\n");
if (!CDROM_CAN(CDC_SELECT_DISC))
return -ENOSYS;
if ((arg != CDSL_CURRENT) && (arg != CDSL_NONE))
if ((int)arg >= cdi->capacity)
return -EINVAL;
/* cdo->select_disc is a hook to allow a driver-specific
* way of selecting disc. However, since there is no
* equiv hook for cdrom_slot_status this may not
* actually be useful...
*/
if (cdo->select_disc != NULL)
return cdo->select_disc(cdi, arg);
/* no driver specific select_disc(), call our own */
cdinfo(CD_CHANGER, "Using generic cdrom_select_disc()\n");
return cdrom_select_disc(cdi, arg);
}
case CDROMRESET: {
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
cdinfo(CD_DO_IOCTL, "entering CDROM_RESET\n");
if (!CDROM_CAN(CDC_RESET))
return -ENOSYS;
invalidate_bdev(ip->i_bdev, 0);
return cdo->reset(cdi);
}
case CDROM_LOCKDOOR: {
cdinfo(CD_DO_IOCTL, "%socking door.\n", arg ? "L" : "Unl");
/*
* Options need to be in sync with capability.
* Too late for that, so we have to check each one separately.
*/
switch (arg) {
case CDO_USE_FFLAGS:
case CDO_CHECK_TYPE:
break;
case CDO_LOCK:
if (!CDROM_CAN(CDC_LOCK))
return -EDRIVE_CANT_DO_THIS;
keeplocked = arg ? 1 : 0;
/* don't unlock the door on multiple opens,but allow root
* to do so */
if ((cdi->use_count != 1) && !arg && !capable(CAP_SYS_ADMIN))
return -EBUSY;
return cdo->lock_door(cdi, arg);
}
return -ENOSYS;
break;
case 0:
return cdi->options;
/* default is basically CDO_[AUTO_CLOSE|AUTO_EJECT] */
default:
if (!CDROM_CAN(arg))
return -ENOSYS;
}
cdi->options |= (int) arg;
return cdi->options;
}
case CDROM_DEBUG: {
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
cdinfo(CD_DO_IOCTL, "%sabling debug.\n", arg ? "En" : "Dis");
debug = arg ? 1 : 0;
return debug;
}
static int cdrom_ioctl_clear_options(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_CLEAR_OPTIONS\n");
case CDROM_GET_CAPABILITY: {
cdinfo(CD_DO_IOCTL, "entering CDROM_GET_CAPABILITY\n");
return (cdo->capability & ~cdi->mask);
}
cdi->options &= ~(int) arg;
return cdi->options;
}
/* The following function is implemented, although very few audio
static int cdrom_ioctl_select_speed(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_SELECT_SPEED\n");
if (!CDROM_CAN(CDC_SELECT_SPEED))
return -ENOSYS;
return cdi->ops->select_speed(cdi, arg);
}
static int cdrom_ioctl_select_disc(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_SELECT_DISC\n");
if (!CDROM_CAN(CDC_SELECT_DISC))
return -ENOSYS;
if (arg != CDSL_CURRENT && arg != CDSL_NONE) {
if ((int)arg >= cdi->capacity)
return -EINVAL;
}
/*
* ->select_disc is a hook to allow a driver-specific way of
* selecting disc. However, since there is no equivalent hook for
* cdrom_slot_status this may not actually be useful...
*/
if (cdi->ops->select_disc)
return cdi->ops->select_disc(cdi, arg);
cdinfo(CD_CHANGER, "Using generic cdrom_select_disc()\n");
return cdrom_select_disc(cdi, arg);
}
static int cdrom_ioctl_reset(struct cdrom_device_info *cdi,
struct block_device *bdev)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_RESET\n");
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (!CDROM_CAN(CDC_RESET))
return -ENOSYS;
invalidate_bdev(bdev, 0);
return cdi->ops->reset(cdi);
}
static int cdrom_ioctl_lock_door(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "%socking door.\n", arg ? "L" : "Unl");
if (!CDROM_CAN(CDC_LOCK))
return -EDRIVE_CANT_DO_THIS;
keeplocked = arg ? 1 : 0;
/*
* Don't unlock the door on multiple opens by default, but allow
* root to do so.
*/
if (cdi->use_count != 1 && !arg && !capable(CAP_SYS_ADMIN))
return -EBUSY;
return cdi->ops->lock_door(cdi, arg);
}
static int cdrom_ioctl_debug(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "%sabling debug.\n", arg ? "En" : "Dis");
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
debug = arg ? 1 : 0;
return debug;
}
static int cdrom_ioctl_get_capability(struct cdrom_device_info *cdi)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_GET_CAPABILITY\n");
return (cdi->ops->capability & ~cdi->mask);
}
/*
* The following function is implemented, although very few audio
* discs give Universal Product Code information, which should just be
* the Medium Catalog Number on the box. Note, that the way the code
* is written on the CD is /not/ uniform across all discs!
*/
case CDROM_GET_MCN: {
struct cdrom_mcn mcn;
cdinfo(CD_DO_IOCTL, "entering CDROM_GET_MCN\n");
if (!(cdo->capability & CDC_MCN))
return -ENOSYS;
if ((ret=cdo->get_mcn(cdi, &mcn)))
return ret;
IOCTL_OUT(arg, struct cdrom_mcn, mcn);
cdinfo(CD_DO_IOCTL, "CDROM_GET_MCN successful\n");
return 0;
}
static int cdrom_ioctl_get_mcn(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_mcn mcn;
int ret;
case CDROM_DRIVE_STATUS: {
cdinfo(CD_DO_IOCTL, "entering CDROM_DRIVE_STATUS\n");
if (!(cdo->capability & CDC_DRIVE_STATUS))
return -ENOSYS;
if (!CDROM_CAN(CDC_SELECT_DISC))
return cdo->drive_status(cdi, CDSL_CURRENT);
if ((arg == CDSL_CURRENT) || (arg == CDSL_NONE))
return cdo->drive_status(cdi, CDSL_CURRENT);
if (((int)arg >= cdi->capacity))
return -EINVAL;
return cdrom_slot_status(cdi, arg);
}
cdinfo(CD_DO_IOCTL, "entering CDROM_GET_MCN\n");
/* Ok, this is where problems start. The current interface for the
CDROM_DISC_STATUS ioctl is flawed. It makes the false assumption
that CDs are all CDS_DATA_1 or all CDS_AUDIO, etc. Unfortunately,
while this is often the case, it is also very common for CDs to
have some tracks with data, and some tracks with audio. Just
because I feel like it, I declare the following to be the best
way to cope. If the CD has ANY data tracks on it, it will be
returned as a data CD. If it has any XA tracks, I will return
it as that. Now I could simplify this interface by combining these
returns with the above, but this more clearly demonstrates
the problem with the current interface. Too bad this wasn't
designed to use bitmasks... -Erik
if (!(cdi->ops->capability & CDC_MCN))
return -ENOSYS;
ret = cdi->ops->get_mcn(cdi, &mcn);
if (ret)
return ret;
Well, now we have the option CDS_MIXED: a mixed-type CD.
User level programmers might feel the ioctl is not very useful.
---david
*/
case CDROM_DISC_STATUS: {
tracktype tracks;
cdinfo(CD_DO_IOCTL, "entering CDROM_DISC_STATUS\n");
cdrom_count_tracks(cdi, &tracks);
if (tracks.error)
return(tracks.error);
if (copy_to_user(argp, &mcn, sizeof(mcn)))
return -EFAULT;
cdinfo(CD_DO_IOCTL, "CDROM_GET_MCN successful\n");
return 0;
}
/* Policy mode on */
if (tracks.audio > 0) {
if (tracks.data==0 && tracks.cdi==0 && tracks.xa==0)
return CDS_AUDIO;
else
return CDS_MIXED;
}
if (tracks.cdi > 0) return CDS_XA_2_2;
if (tracks.xa > 0) return CDS_XA_2_1;
if (tracks.data > 0) return CDS_DATA_1;
/* Policy mode off */
static int cdrom_ioctl_drive_status(struct cdrom_device_info *cdi,
unsigned long arg)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_DRIVE_STATUS\n");
cdinfo(CD_WARNING,"This disc doesn't have any tracks I recognize!\n");
return CDS_NO_INFO;
}
if (!(cdi->ops->capability & CDC_DRIVE_STATUS))
return -ENOSYS;
if (!CDROM_CAN(CDC_SELECT_DISC) ||
(arg == CDSL_CURRENT || arg == CDSL_NONE))
return cdi->ops->drive_status(cdi, CDSL_CURRENT);
if (((int)arg >= cdi->capacity))
return -EINVAL;
return cdrom_slot_status(cdi, arg);
}
case CDROM_CHANGER_NSLOTS: {
cdinfo(CD_DO_IOCTL, "entering CDROM_CHANGER_NSLOTS\n");
return cdi->capacity;
}
/*
* Ok, this is where problems start. The current interface for the
* CDROM_DISC_STATUS ioctl is flawed. It makes the false assumption that
* CDs are all CDS_DATA_1 or all CDS_AUDIO, etc. Unfortunately, while this
* is often the case, it is also very common for CDs to have some tracks
* with data, and some tracks with audio. Just because I feel like it,
* I declare the following to be the best way to cope. If the CD has ANY
* data tracks on it, it will be returned as a data CD. If it has any XA
* tracks, I will return it as that. Now I could simplify this interface
* by combining these returns with the above, but this more clearly
* demonstrates the problem with the current interface. Too bad this
* wasn't designed to use bitmasks... -Erik
*
* Well, now we have the option CDS_MIXED: a mixed-type CD.
* User level programmers might feel the ioctl is not very useful.
* ---david
*/
static int cdrom_ioctl_disc_status(struct cdrom_device_info *cdi)
{
tracktype tracks;
cdinfo(CD_DO_IOCTL, "entering CDROM_DISC_STATUS\n");
cdrom_count_tracks(cdi, &tracks);
if (tracks.error)
return tracks.error;
/* Policy mode on */
if (tracks.audio > 0) {
if (!tracks.data && !tracks.cdi && !tracks.xa)
return CDS_AUDIO;
else
return CDS_MIXED;
}
/* use the ioctls that are implemented through the generic_packet()
interface. This may look a bit funny, but if -ENOTTY is
returned that particular ioctl is not implemented and we
let it go through the device specific ones. */
if (tracks.cdi > 0)
return CDS_XA_2_2;
if (tracks.xa > 0)
return CDS_XA_2_1;
if (tracks.data > 0)
return CDS_DATA_1;
/* Policy mode off */
cdinfo(CD_WARNING,"This disc doesn't have any tracks I recognize!\n");
return CDS_NO_INFO;
}
static int cdrom_ioctl_changer_nslots(struct cdrom_device_info *cdi)
{
cdinfo(CD_DO_IOCTL, "entering CDROM_CHANGER_NSLOTS\n");
return cdi->capacity;
}
static int cdrom_ioctl_get_subchnl(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_subchnl q;
u8 requested, back;
int ret;
/* cdinfo(CD_DO_IOCTL,"entering CDROMSUBCHNL\n");*/
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&q, argp, sizeof(q)))
return -EFAULT;
requested = q.cdsc_format;
if (requested != CDROM_MSF && requested != CDROM_LBA)
return -EINVAL;
q.cdsc_format = CDROM_MSF;
ret = cdi->ops->audio_ioctl(cdi, CDROMSUBCHNL, &q);
if (ret)
return ret;
back = q.cdsc_format; /* local copy */
sanitize_format(&q.cdsc_absaddr, &back, requested);
sanitize_format(&q.cdsc_reladdr, &q.cdsc_format, requested);
if (copy_to_user(argp, &q, sizeof(q)))
return -EFAULT;
/* cdinfo(CD_DO_IOCTL, "CDROMSUBCHNL successful\n"); */
return 0;
}
static int cdrom_ioctl_read_tochdr(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_tochdr header;
int ret;
/* cdinfo(CD_DO_IOCTL, "entering CDROMREADTOCHDR\n"); */
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&header, argp, sizeof(header)))
return -EFAULT;
ret = cdi->ops->audio_ioctl(cdi, CDROMREADTOCHDR, &header);
if (ret)
return ret;
if (copy_to_user(argp, &header, sizeof(header)))
return -EFAULT;
/* cdinfo(CD_DO_IOCTL, "CDROMREADTOCHDR successful\n"); */
return 0;
}
static int cdrom_ioctl_read_tocentry(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_tocentry entry;
u8 requested_format;
int ret;
/* cdinfo(CD_DO_IOCTL, "entering CDROMREADTOCENTRY\n"); */
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&entry, argp, sizeof(entry)))
return -EFAULT;
requested_format = entry.cdte_format;
if (requested_format != CDROM_MSF && requested_format != CDROM_LBA)
return -EINVAL;
/* make interface to low-level uniform */
entry.cdte_format = CDROM_MSF;
ret = cdi->ops->audio_ioctl(cdi, CDROMREADTOCENTRY, &entry);
if (ret)
return ret;
sanitize_format(&entry.cdte_addr, &entry.cdte_format, requested_format);
if (copy_to_user(argp, &entry, sizeof(entry)))
return -EFAULT;
/* cdinfo(CD_DO_IOCTL, "CDROMREADTOCENTRY successful\n"); */
return 0;
}
static int cdrom_ioctl_play_msf(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_msf msf;
cdinfo(CD_DO_IOCTL, "entering CDROMPLAYMSF\n");
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&msf, argp, sizeof(msf)))
return -EFAULT;
return cdi->ops->audio_ioctl(cdi, CDROMPLAYMSF, &msf);
}
static int cdrom_ioctl_play_trkind(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_ti ti;
int ret;
cdinfo(CD_DO_IOCTL, "entering CDROMPLAYTRKIND\n");
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&ti, argp, sizeof(ti)))
return -EFAULT;
ret = check_for_audio_disc(cdi, cdi->ops);
if (ret)
return ret;
return cdi->ops->audio_ioctl(cdi, CDROMPLAYTRKIND, &ti);
}
static int cdrom_ioctl_volctrl(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_volctrl volume;
cdinfo(CD_DO_IOCTL, "entering CDROMVOLCTRL\n");
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
if (copy_from_user(&volume, argp, sizeof(volume)))
return -EFAULT;
return cdi->ops->audio_ioctl(cdi, CDROMVOLCTRL, &volume);
}
static int cdrom_ioctl_volread(struct cdrom_device_info *cdi,
void __user *argp)
{
struct cdrom_volctrl volume;
int ret;
cdinfo(CD_DO_IOCTL, "entering CDROMVOLREAD\n");
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
ret = cdi->ops->audio_ioctl(cdi, CDROMVOLREAD, &volume);
if (ret)
return ret;
if (copy_to_user(argp, &volume, sizeof(volume)))
return -EFAULT;
return 0;
}
static int cdrom_ioctl_audioctl(struct cdrom_device_info *cdi,
unsigned int cmd)
{
int ret;
cdinfo(CD_DO_IOCTL, "doing audio ioctl (start/stop/pause/resume)\n");
if (!CDROM_CAN(CDC_PLAY_AUDIO))
return -ENOSYS;
ret = check_for_audio_disc(cdi, cdi->ops);
if (ret)
return ret;
return cdi->ops->audio_ioctl(cdi, cmd, NULL);
}
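Besides splitting the old monolithic switch into per-command helpers, the rewrite replaces the IOCTL_IN()/IOCTL_OUT() macros with explicit user copies. The resulting shape, as a hedged sketch (argp and the struct as in the surrounding helpers):

struct cdrom_msf msf;

if (copy_from_user(&msf, argp, sizeof(msf)))
        return -EFAULT;
/* ... validate and act on msf ... */
if (copy_to_user(argp, &msf, sizeof(msf)))
        return -EFAULT;
return 0;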
/*
* Just about every imaginable ioctl is supported in the Uniform layer
* these days.
* ATAPI / SCSI specific code now mainly resides in mmc_ioctl().
*/
int cdrom_ioctl(struct file * file, struct cdrom_device_info *cdi,
struct inode *ip, unsigned int cmd, unsigned long arg)
{
void __user *argp = (void __user *)arg;
int ret;
/*
* Try the generic SCSI command ioctl's first.
*/
ret = scsi_cmd_ioctl(file, ip->i_bdev->bd_disk, cmd, argp);
if (ret != -ENOTTY)
return ret;
switch (cmd) {
case CDROMMULTISESSION:
return cdrom_ioctl_multisession(cdi, argp);
case CDROMEJECT:
return cdrom_ioctl_eject(cdi);
case CDROMCLOSETRAY:
return cdrom_ioctl_closetray(cdi);
case CDROMEJECT_SW:
return cdrom_ioctl_eject_sw(cdi, arg);
case CDROM_MEDIA_CHANGED:
return cdrom_ioctl_media_changed(cdi, arg);
case CDROM_SET_OPTIONS:
return cdrom_ioctl_set_options(cdi, arg);
case CDROM_CLEAR_OPTIONS:
return cdrom_ioctl_clear_options(cdi, arg);
case CDROM_SELECT_SPEED:
return cdrom_ioctl_select_speed(cdi, arg);
case CDROM_SELECT_DISC:
return cdrom_ioctl_select_disc(cdi, arg);
case CDROMRESET:
return cdrom_ioctl_reset(cdi, ip->i_bdev);
case CDROM_LOCKDOOR:
return cdrom_ioctl_lock_door(cdi, arg);
case CDROM_DEBUG:
return cdrom_ioctl_debug(cdi, arg);
case CDROM_GET_CAPABILITY:
return cdrom_ioctl_get_capability(cdi);
case CDROM_GET_MCN:
return cdrom_ioctl_get_mcn(cdi, argp);
case CDROM_DRIVE_STATUS:
return cdrom_ioctl_drive_status(cdi, arg);
case CDROM_DISC_STATUS:
return cdrom_ioctl_disc_status(cdi);
case CDROM_CHANGER_NSLOTS:
return cdrom_ioctl_changer_nslots(cdi);
}
/*
* Use the ioctls that are implemented through the generic_packet()
* interface. This may look a bit funny, but if -ENOTTY is
* returned that particular ioctl is not implemented and we
* let it go through the device specific ones.
*/
if (CDROM_CAN(CDC_GENERIC_PACKET)) {
ret = mmc_ioctl(cdi, cmd, arg);
if (ret != -ENOTTY)
return ret;
}
/*
* Note: most of the cdinfo() calls are commented out here,
* because they fill up the sys log when CD players poll
* the drive.
*/
switch (cmd) {
case CDROMSUBCHNL:
return cdrom_ioctl_get_subchnl(cdi, argp);
case CDROMREADTOCHDR:
return cdrom_ioctl_read_tochdr(cdi, argp);
case CDROMREADTOCENTRY:
return cdrom_ioctl_read_tocentry(cdi, argp);
case CDROMPLAYMSF:
return cdrom_ioctl_play_msf(cdi, argp);
case CDROMPLAYTRKIND:
return cdrom_ioctl_play_trkind(cdi, argp);
case CDROMVOLCTRL:
return cdrom_ioctl_volctrl(cdi, argp);
case CDROMVOLREAD:
return cdrom_ioctl_volread(cdi, argp);
case CDROMSTART:
case CDROMSTOP:
case CDROMPAUSE:
case CDROMRESUME:
return cdrom_ioctl_audioctl(cdi, cmd);
}
return -ENOSYS;
}
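The chained dispatch above relies on the -ENOTTY convention spelled out in the comment: a handler that does not recognize a command returns -ENOTTY so the caller can try the next, more specific handler. A minimal sketch of that convention, with hypothetical handler names rather than code from this patch:

#include <linux/errno.h>

/* Hypothetical handlers, for illustration only. */
static int generic_handler(unsigned int cmd, unsigned long arg);
static int driver_handler(unsigned int cmd, unsigned long arg);

static int dispatch(unsigned int cmd, unsigned long arg)
{
	int ret;

	ret = generic_handler(cmd, arg);
	if (ret != -ENOTTY)
		return ret;		/* handled, or a real error */
	return driver_handler(cmd, arg);	/* fall through to the next layer */
}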

View File

@ -2668,7 +2668,7 @@ static int scd_audio_ioctl(struct cdrom_device_info *cdi,
return retval;
}
static int scd_read_audio(struct cdrom_device_info *cdi,
unsigned int cmd, unsigned long arg)
{
void __user *argp = (void __user *)arg;
@ -2894,11 +2894,10 @@ static struct cdrom_device_ops scd_dops = {
.get_mcn = scd_get_mcn,
.reset = scd_reset,
.audio_ioctl = scd_audio_ioctl,
.capability = CDC_OPEN_TRAY | CDC_CLOSE_TRAY | CDC_LOCK |
CDC_SELECT_SPEED | CDC_MULTI_SESSION |
CDC_MCN | CDC_MEDIA_CHANGED | CDC_PLAY_AUDIO |
CDC_RESET | CDC_DRIVE_STATUS,
.n_minors = 1,
};
@ -2936,6 +2935,9 @@ static int scd_block_ioctl(struct inode *inode, struct file *file,
case CDROMCLOSETRAY:
retval = scd_tray_move(&scd_info, 0);
break;
case CDROMREADAUDIO:
retval = scd_read_audio(&scd_info, CDROMREADAUDIO, arg);
break;
default:
retval = cdrom_ioctl(file, &scd_info, inode, cmd, arg);
}

View File

@ -1157,32 +1157,6 @@ static int cm206_audio_ioctl(struct cdrom_device_info *cdi, unsigned int cmd,
}
}
/* Ioctl. These ioctls are specific to the cm206 driver. I have made
some driver statistics accessible through ioctl calls.
*/
static int cm206_ioctl(struct cdrom_device_info *cdi, unsigned int cmd,
unsigned long arg)
{
switch (cmd) {
#ifdef STATISTICS
case CM206CTL_GET_STAT:
if (arg >= NR_STATS)
return -EINVAL;
else
return cd->stats[arg];
case CM206CTL_GET_LAST_STAT:
if (arg >= NR_STATS)
return -EINVAL;
else
return cd->last_stat[arg];
#endif
default:
debug(("Unknown ioctl call 0x%x\n", cmd));
return -EINVAL;
}
}
static int cm206_media_changed(struct cdrom_device_info *cdi, int disc_nr)
{
if (cd != NULL) {
@ -1321,11 +1295,10 @@ static struct cdrom_device_ops cm206_dops = {
.get_mcn = cm206_get_upc,
.reset = cm206_reset,
.audio_ioctl = cm206_audio_ioctl,
.capability = CDC_CLOSE_TRAY | CDC_OPEN_TRAY | CDC_LOCK |
CDC_MULTI_SESSION | CDC_MEDIA_CHANGED |
CDC_MCN | CDC_PLAY_AUDIO | CDC_SELECT_SPEED |
CDC_DRIVE_STATUS,
.n_minors = 1,
};
@ -1350,6 +1323,21 @@ static int cm206_block_release(struct inode *inode, struct file *file)
static int cm206_block_ioctl(struct inode *inode, struct file *file,
unsigned cmd, unsigned long arg)
{
switch (cmd) {
#ifdef STATISTICS
case CM206CTL_GET_STAT:
if (arg >= NR_STATS)
return -EINVAL;
return cd->stats[arg];
case CM206CTL_GET_LAST_STAT:
if (arg >= NR_STATS)
return -EINVAL;
return cd->last_stat[arg];
#endif
default:
break;
}
return cdrom_ioctl(file, &cm206_info, inode, cmd, arg);
}

File diff suppressed because it is too large

View File

@ -627,7 +627,7 @@ static struct cdrom_device_ops viocd_dops = {
.media_changed = viocd_media_changed,
.lock_door = viocd_lock_door,
.generic_packet = viocd_packet,
.capability = CDC_CLOSE_TRAY | CDC_OPEN_TRAY | CDC_LOCK | CDC_SELECT_SPEED | CDC_SELECT_DISC | CDC_MULTI_SESSION | CDC_MCN | CDC_MEDIA_CHANGED | CDC_PLAY_AUDIO | CDC_RESET | CDC_DRIVE_STATUS | CDC_GENERIC_PACKET | CDC_CD_R | CDC_CD_RW | CDC_DVD | CDC_DVD_R | CDC_DVD_RAM | CDC_RAM
};
static int __init find_capability(const char *type)

View File

@ -46,8 +46,6 @@
/* Sanity checks */
#define SERIAL_INLINE
#if defined(MODULE) && defined(SERIAL_DEBUG_MCOUNT)
#define DBG_CNT(s) printk("(%s): [%x] refc=%d, serc=%d, ttyc=%d -> %s\n", \
tty->name, (info->flags), serial_driver->refcount,info->count,tty->count,s)
@ -95,10 +93,6 @@ static char *serial_version = "4.30";
#include <asm/amigahw.h>
#include <asm/amigaints.h>
#ifdef SERIAL_INLINE
#define _INLINE_ inline
#endif
#define custom amiga_custom
static char *serial_name = "Amiga-builtin serial driver";
@ -253,14 +247,14 @@ static void rs_start(struct tty_struct *tty)
* This routine is used by the interrupt handler to schedule
* processing in the software interrupt portion of the driver.
*/
static void rs_sched_event(struct async_struct *info,
int event)
{
info->event |= 1 << event;
tasklet_schedule(&info->tlet);
}
static void receive_chars(struct async_struct *info)
{
int status;
int serdatr;
@ -349,7 +343,7 @@ out:
return;
}
static void transmit_chars(struct async_struct *info)
{
custom.intreq = IF_TBE;
mb();
@ -389,7 +383,7 @@ static _INLINE_ void transmit_chars(struct async_struct *info)
}
}
static void check_modem_status(struct async_struct *info)
{
unsigned char status = ciab.pra & (SER_DCD | SER_CTS | SER_DSR);
unsigned char dstatus;
@ -1959,7 +1953,7 @@ done:
* number, and identifies which options were configured into this
* driver.
*/
static void show_serial_version(void)
{
printk(KERN_INFO "%s version %s\n", serial_name, serial_version);
}

View File

@ -48,8 +48,8 @@ static int gs_debug;
#define NEW_WRITE_LOCKING 1
#if NEW_WRITE_LOCKING
#define DECL /* Nothing */
#define LOCKIT mutex_lock(& port->port_write_mutex);
#define RELEASEIT mutex_unlock(&port->port_write_mutex);
#else
#define DECL unsigned long flags;
#define LOCKIT save_flags (flags);cli ()
@ -124,14 +124,14 @@ int gs_write(struct tty_struct * tty,
/* get exclusive "write" access to this port (problem 3) */
/* This is not a spinlock because we can have a disk access (page
fault) in copy_from_user */
mutex_lock(& port->port_write_mutex);
while (1) {
c = count;
/* This is safe because we "OWN" the "head". Noone else can
change the "head": we own the port_write_sem. */
change the "head": we own the port_write_mutex. */
/* Don't overrun the end of the buffer */
t = SERIAL_XMIT_SIZE - port->xmit_head;
if (t < c) c = t;
@ -153,7 +153,7 @@ int gs_write(struct tty_struct * tty,
count -= c;
total += c;
}
mutex_unlock(& port->port_write_mutex);
gs_dprintk (GS_DEBUG_WRITE, "write: interrupts are %s\n",
(port->flags & GS_TX_INTEN)?"enabled": "disabled");
@ -214,7 +214,7 @@ int gs_write(struct tty_struct * tty,
c = count;
/* This is safe because we "OWN" the "head". Noone else can
change the "head": we own the port_write_sem. */
change the "head": we own the port_write_mutex. */
/* Don't overrun the end of the buffer */
t = SERIAL_XMIT_SIZE - port->xmit_head;
if (t < c) c = t;
@ -888,7 +888,7 @@ int gs_init_port(struct gs_port *port)
spin_lock_irqsave (&port->driver_lock, flags);
if (port->tty)
clear_bit(TTY_IO_ERROR, &port->tty->flags);
mutex_init(&port->port_write_mutex);
port->xmit_cnt = port->xmit_head = port->xmit_tail = 0;
spin_unlock_irqrestore(&port->driver_lock, flags);
gs_set_termios(port->tty, NULL);
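The generic_serial changes above are part of the semaphore-to-mutex conversion running through this merge; a sleeping lock is needed here (rather than a spinlock) precisely because copy_from_user() can take a page fault while the lock is held. For reference, a minimal sketch of the old-to-new mapping; the lock name is hypothetical:

#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);	/* was: static DECLARE_MUTEX(example_sem); */

static void example(void)
{
	mutex_lock(&example_lock);	/* was: down(&example_sem); */
	/* ... critical section; sleeping, e.g. a page fault inside
	 * copy_from_user(), is allowed under a mutex ... */
	mutex_unlock(&example_lock);	/* was: up(&example_sem); */
}

Locks embedded in structures are set up at run time with mutex_init() where init_MUTEX()/sema_init(sem, 1) was used before.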

View File

@ -181,7 +181,6 @@ static struct tty_driver *stli_serial;
* is already swapping a shared buffer won't make things any worse.
*/
static char *stli_tmpwritebuf;
static DECLARE_MUTEX(stli_tmpwritesem);
#define STLI_TXBUFSIZE 4096

View File

@ -132,7 +132,7 @@ static void put_tty_queue(unsigned char c, struct tty_struct *tty)
* We test the TTY_THROTTLED bit first so that it always
* indicates the current state. The decision about whether
* it is worth allowing more input has been taken by the caller.
* Can sleep, may be called under the atomic_read_lock mutex but
* this is not guaranteed.
*/
@ -1132,7 +1132,7 @@ static inline int input_available_p(struct tty_struct *tty, int amt)
* buffer, and once to drain the space from the (physical) beginning of
* the buffer to head pointer.
*
* Called under the tty->atomic_read_lock sem and with TTY_DONT_FLIP set
*
*/
@ -1262,11 +1262,11 @@ do_it_again:
* Internal serialization of reads.
*/
if (file->f_flags & O_NONBLOCK) {
if (!mutex_trylock(&tty->atomic_read_lock))
return -EAGAIN;
}
else {
if (mutex_lock_interruptible(&tty->atomic_read_lock))
return -ERESTARTSYS;
}
@ -1393,7 +1393,7 @@ do_it_again:
timeout = time;
}
clear_bit(TTY_DONT_FLIP, &tty->flags);
mutex_unlock(&tty->atomic_read_lock);
remove_wait_queue(&tty->read_wait, &wait);
if (!waitqueue_active(&tty->read_wait))

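One subtlety in the n_tty conversion above: down_trylock() returns nonzero on failure, while mutex_trylock() returns nonzero on success, so the sense of the test flips during conversion. A minimal sketch of the O_NONBLOCK acquire pattern, with a hypothetical lock:

#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);	/* hypothetical */

static int example_acquire(int nonblock)
{
	if (nonblock) {
		if (!mutex_trylock(&example_lock))	/* was: if (down_trylock(&sem)) */
			return -EAGAIN;
		return 0;
	}
	if (mutex_lock_interruptible(&example_lock))	/* was: down_interruptible(&sem) */
		return -ERESTARTSYS;
	return 0;
}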
View File

@ -27,6 +27,7 @@
#include <linux/rwsem.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include <asm/hardware/dec21285.h>
#include <asm/io.h>
@ -56,7 +57,7 @@ static int gbWriteEnable;
static int gbWriteBase64Enable;
static volatile unsigned char *FLASH_BASE;
static int gbFlashSize = KFLASH_SIZE;
static DEFINE_MUTEX(nwflash_mutex);
extern spinlock_t gpio_lock;
@ -140,7 +141,7 @@ static ssize_t flash_read(struct file *file, char __user *buf, size_t size,
/*
* We now lock against reads and writes. --rmk
*/
if (mutex_lock_interruptible(&nwflash_mutex))
return -ERESTARTSYS;
ret = copy_to_user(buf, (void *)(FLASH_BASE + p), count);
@ -149,7 +150,7 @@ static ssize_t flash_read(struct file *file, char __user *buf, size_t size,
*ppos += count;
} else
ret = -EFAULT;
mutex_unlock(&nwflash_mutex);
}
return ret;
}
@ -188,7 +189,7 @@ static ssize_t flash_write(struct file *file, const char __user *buf,
/*
* We now lock against reads and writes. --rmk
*/
if (mutex_lock_interruptible(&nwflash_mutex))
return -ERESTARTSYS;
written = 0;
@ -277,7 +278,7 @@ static ssize_t flash_write(struct file *file, const char __user *buf,
*/
leds_event(led_release);
mutex_unlock(&nwflash_mutex);
return written;
}

View File

@ -19,6 +19,7 @@
#include <linux/uio.h>
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <asm/uaccess.h>
@ -29,7 +30,7 @@ struct raw_device_data {
static struct class *raw_class;
static struct raw_device_data raw_devices[MAX_RAW_MINORS];
static DEFINE_MUTEX(raw_mutex);
static struct file_operations raw_ctl_fops; /* forward declaration */
/*
@ -53,7 +54,7 @@ static int raw_open(struct inode *inode, struct file *filp)
return 0;
}
mutex_lock(&raw_mutex);
/*
* All we need to do on open is check that the device is bound.
@ -78,7 +79,7 @@ static int raw_open(struct inode *inode, struct file *filp)
filp->f_dentry->d_inode->i_mapping =
bdev->bd_inode->i_mapping;
filp->private_data = bdev;
mutex_unlock(&raw_mutex);
return 0;
out2:
@ -86,7 +87,7 @@ out2:
out1:
blkdev_put(bdev);
out:
mutex_unlock(&raw_mutex);
return err;
}
@ -99,14 +100,14 @@ static int raw_release(struct inode *inode, struct file *filp)
const int minor= iminor(inode);
struct block_device *bdev;
mutex_lock(&raw_mutex);
bdev = raw_devices[minor].binding;
if (--raw_devices[minor].inuse == 0) {
/* Here inode->i_mapping == bdev->bd_inode->i_mapping */
inode->i_mapping = &inode->i_data;
inode->i_mapping->backing_dev_info = &default_backing_dev_info;
}
mutex_unlock(&raw_mutex);
bd_release(bdev);
blkdev_put(bdev);
@ -187,9 +188,9 @@ static int raw_ctl_ioctl(struct inode *inode, struct file *filp,
goto out;
}
mutex_lock(&raw_mutex);
if (rawdev->inuse) {
mutex_unlock(&raw_mutex);
err = -EBUSY;
goto out;
}
@ -211,11 +212,11 @@ static int raw_ctl_ioctl(struct inode *inode, struct file *filp,
bind_device(&rq);
}
}
mutex_unlock(&raw_mutex);
} else {
struct block_device *bdev;
mutex_lock(&raw_mutex);
bdev = rawdev->binding;
if (bdev) {
rq.block_major = MAJOR(bdev->bd_dev);
@ -223,7 +224,7 @@ static int raw_ctl_ioctl(struct inode *inode, struct file *filp,
} else {
rq.block_major = rq.block_minor = 0;
}
mutex_unlock(&raw_mutex);
if (copy_to_user((void __user *)arg, &rq, sizeof(rq))) {
err = -EFAULT;
goto out;

View File

@ -97,7 +97,7 @@
#include <asm/amigahw.h>
#include <linux/zorro.h>
#include <asm/irq.h>
#include <linux/mutex.h>
#include <linux/delay.h>
@ -654,7 +654,7 @@ static void a2232_init_portstructs(void)
port->gs.closing_wait = 30 * HZ;
port->gs.rd = &a2232_real_driver;
#ifdef NEW_WRITE_LOCKING
mutex_init(&(port->gs.port_write_mutex));
#endif
init_waitqueue_head(&port->gs.open_wait);
init_waitqueue_head(&port->gs.close_wait);

View File

@ -5,7 +5,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2004, 2006 Silicon Graphics, Inc. All rights reserved.
*/
/*
@ -77,7 +77,7 @@ scdrv_open(struct inode *inode, struct file *file)
scd = container_of(inode->i_cdev, struct sysctl_data_s, scd_cdev);
/* allocate memory for subchannel data */
sd = kzalloc(sizeof (struct subch_data_s), GFP_KERNEL);
if (sd == NULL) {
printk("%s: couldn't allocate subchannel data\n",
__FUNCTION__);
@ -85,7 +85,6 @@ scdrv_open(struct inode *inode, struct file *file)
}
/* initialize subch_data_s fields */
memset(sd, 0, sizeof (struct subch_data_s));
sd->sd_nasid = scd->scd_nasid;
sd->sd_subch = ia64_sn_irtr_open(scd->scd_nasid);
@ -394,7 +393,7 @@ scdrv_init(void)
sprintf(devnamep, "#%d", geo_slab(geoid));
/* allocate sysctl device data */
scd = kzalloc(sizeof (struct sysctl_data_s),
GFP_KERNEL);
if (!scd) {
printk("%s: failed to allocate device info"
@ -402,7 +401,6 @@ scdrv_init(void)
SYSCTL_BASENAME, devname);
continue;
}
memset(scd, 0, sizeof (struct sysctl_data_s));
/* initialize sysctl device data fields */
scd->scd_nasid = cnodeid_to_nasid(cnode);
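The scdrv hunks above replace the two-step kmalloc()-then-memset() idiom with kzalloc(), which allocates and zeroes in a single call. A minimal sketch with a hypothetical structure, for illustration only:

#include <linux/slab.h>

struct example_data {	/* hypothetical, for illustration */
	int nasid;
	char name[16];
};

static struct example_data *example_alloc(void)
{
	/* was:  p = kmalloc(sizeof(*p), GFP_KERNEL);
	 *       if (p)
	 *               memset(p, 0, sizeof(*p));
	 */
	return kzalloc(sizeof(struct example_data), GFP_KERNEL);
}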

View File

@ -287,7 +287,7 @@ scdrv_event_init(struct sysctl_data_s *scd)
{
int rv;
event_sd = kzalloc(sizeof (struct subch_data_s), GFP_KERNEL);
if (event_sd == NULL) {
printk(KERN_WARNING "%s: couldn't allocate subchannel info"
" for event monitoring\n", __FUNCTION__);
@ -295,7 +295,6 @@ scdrv_event_init(struct sysctl_data_s *scd)
}
/* initialize subch_data_s fields */
memset(event_sd, 0, sizeof (struct subch_data_s));
event_sd->sd_nasid = scd->scd_nasid;
spin_lock_init(&event_sd->sd_rlock);
@ -321,5 +320,3 @@ scdrv_event_init(struct sysctl_data_s *scd)
return;
}
}

View File

@ -148,7 +148,6 @@ static struct tty_driver *stl_serial;
* is already swapping a shared buffer won't make things any worse.
*/
static char *stl_tmpwritebuf;
static DECLARE_MUTEX(stl_tmpwritesem);
/*
* Define a local default termios struct. All ports will be created

View File

@ -2318,7 +2318,7 @@ static int sx_init_portstructs (int nboards, int nports)
port->board = board;
port->gs.rd = &sx_real_driver;
#ifdef NEW_WRITE_LOCKING
port->gs.port_write_mutex = MUTEX;
#endif
port->gs.driver_lock = SPIN_LOCK_UNLOCKED;
/*

View File

@ -130,7 +130,7 @@ LIST_HEAD(tty_drivers); /* linked list of tty drivers */
/* Semaphore to protect creating and releasing a tty. This is shared with
vt.c for deeply disgusting hack reasons */
DEFINE_MUTEX(tty_mutex);
#ifdef CONFIG_UNIX98_PTYS
extern struct tty_driver *ptm_driver; /* Unix98 pty masters; for /dev/ptmx */
@ -1188,11 +1188,11 @@ void disassociate_ctty(int on_exit)
lock_kernel();
mutex_lock(&tty_mutex);
tty = current->signal->tty;
if (tty) {
tty_pgrp = tty->pgrp;
mutex_unlock(&tty_mutex);
if (on_exit && tty->driver->type != TTY_DRIVER_TYPE_PTY)
tty_vhangup(tty);
} else {
@ -1200,7 +1200,7 @@ void disassociate_ctty(int on_exit)
kill_pg(current->signal->tty_old_pgrp, SIGHUP, on_exit);
kill_pg(current->signal->tty_old_pgrp, SIGCONT, on_exit);
}
mutex_unlock(&tty_mutex);
unlock_kernel();
return;
}
@ -1211,7 +1211,7 @@ void disassociate_ctty(int on_exit)
}
/* Must lock changes to tty_old_pgrp */
mutex_lock(&tty_mutex);
current->signal->tty_old_pgrp = 0;
tty->session = 0;
tty->pgrp = -1;
@ -1222,7 +1222,7 @@ void disassociate_ctty(int on_exit)
p->signal->tty = NULL;
} while_each_task_pid(current->signal->session, PIDTYPE_SID, p);
read_unlock(&tasklist_lock);
mutex_unlock(&tty_mutex);
unlock_kernel();
}
@ -1306,7 +1306,7 @@ static inline ssize_t do_tty_write(
ssize_t ret = 0, written = 0;
unsigned int chunk;
if (mutex_lock_interruptible(&tty->atomic_write_lock)) {
return -ERESTARTSYS;
}
@ -1329,7 +1329,7 @@ static inline ssize_t do_tty_write(
if (count < chunk)
chunk = count;
/* write_buf/write_cnt is protected by the atomic_write_lock mutex */
if (tty->write_cnt < chunk) {
unsigned char *buf;
@ -1338,7 +1338,7 @@ static inline ssize_t do_tty_write(
buf = kmalloc(chunk, GFP_KERNEL);
if (!buf) {
mutex_unlock(&tty->atomic_write_lock);
return -ENOMEM;
}
kfree(tty->write_buf);
@ -1374,7 +1374,7 @@ static inline ssize_t do_tty_write(
inode->i_mtime = current_fs_time(inode->i_sb);
ret = written;
}
mutex_unlock(&tty->atomic_write_lock);
return ret;
}
@ -1442,8 +1442,8 @@ static inline void tty_line_name(struct tty_driver *driver, int index, char *p)
/*
* WSH 06/09/97: Rewritten to remove races and properly clean up after a
* failed open. The new code protects the open with a mutex, so it's
* really quite straightforward. The mutex locking can probably be
* relaxed for the (most common) case of reopening a tty.
*/
static int init_dev(struct tty_driver *driver, int idx,
@ -1640,7 +1640,7 @@ fast_track:
success:
*ret_tty = tty;
/* All paths come through here to release the mutex */
end_init:
return retval;
@ -1837,7 +1837,7 @@ static void release_dev(struct file * filp)
/* Guard against races with tty->count changes elsewhere and
opens on /dev/tty */
mutex_lock(&tty_mutex);
tty_closing = tty->count <= 1;
o_tty_closing = o_tty &&
(o_tty->count <= (pty_master ? 1 : 0));
@ -1868,7 +1868,7 @@ static void release_dev(struct file * filp)
printk(KERN_WARNING "release_dev: %s: read/write wait queue "
"active!\n", tty_name(tty, buf));
mutex_unlock(&tty_mutex);
schedule();
}
@ -1934,7 +1934,7 @@ static void release_dev(struct file * filp)
read_unlock(&tasklist_lock);
}
mutex_unlock(&tty_mutex);
/* check whether both sides are closing ... */
if (!tty_closing || (o_tty && !o_tty_closing))
@ -2040,11 +2040,11 @@ retry_open:
index = -1;
retval = 0;
mutex_lock(&tty_mutex);
if (device == MKDEV(TTYAUX_MAJOR,0)) {
if (!current->signal->tty) {
mutex_unlock(&tty_mutex);
return -ENXIO;
}
driver = current->signal->tty->driver;
@ -2070,18 +2070,18 @@ retry_open:
noctty = 1;
goto got_driver;
}
mutex_unlock(&tty_mutex);
return -ENODEV;
}
driver = get_tty_driver(device, &index);
if (!driver) {
mutex_unlock(&tty_mutex);
return -ENODEV;
}
got_driver:
retval = init_dev(driver, index, &tty);
mutex_unlock(&tty_mutex);
if (retval)
return retval;
@ -2167,9 +2167,9 @@ static int ptmx_open(struct inode * inode, struct file * filp)
}
up(&allocated_ptys_lock);
mutex_lock(&tty_mutex);
retval = init_dev(ptm_driver, index, &tty);
mutex_unlock(&tty_mutex);
if (retval)
goto out;
@ -2915,8 +2915,8 @@ static void initialize_tty_struct(struct tty_struct *tty)
init_waitqueue_head(&tty->write_wait);
init_waitqueue_head(&tty->read_wait);
INIT_WORK(&tty->hangup_work, do_tty_hangup, tty);
mutex_init(&tty->atomic_read_lock);
mutex_init(&tty->atomic_write_lock);
spin_lock_init(&tty->read_lock);
INIT_LIST_HEAD(&tty->tty_files);
INIT_WORK(&tty->SAK_work, NULL, NULL);

View File

@ -184,7 +184,7 @@ static void scc_init_portstructs(void)
port->gs.closing_wait = 30 * HZ;
port->gs.rd = &scc_real_driver;
#ifdef NEW_WRITE_LOCKING
port->gs.port_write_mutex = MUTEX;
#endif
init_waitqueue_head(&port->gs.open_wait);
init_waitqueue_head(&port->gs.close_wait);

View File

@ -2489,7 +2489,7 @@ static int con_open(struct tty_struct *tty, struct file *filp)
}
/*
* We take tty_mutex in here to prevent another thread from coming in via init_dev
* and taking a ref against the tty while we're in the process of forgetting
* about it and cleaning things up.
*
@ -2497,7 +2497,7 @@ static int con_open(struct tty_struct *tty, struct file *filp)
*/
static void con_close(struct tty_struct *tty, struct file *filp)
{
mutex_lock(&tty_mutex);
acquire_console_sem();
if (tty && tty->count == 1) {
struct vc_data *vc = tty->driver_data;
@ -2507,15 +2507,15 @@ static void con_close(struct tty_struct *tty, struct file *filp)
tty->driver_data = NULL;
release_console_sem();
vcs_remove_devfs(tty);
mutex_unlock(&tty_mutex);
/*
* tty_mutex is released, but we still hold BKL, so there is
* still exclusion against init_dev()
*/
return;
}
release_console_sem();
mutex_unlock(&tty_mutex);
}
static void vc_init(struct vc_data *vc, unsigned int rows,
@ -2869,9 +2869,9 @@ void unblank_screen(void)
}
/*
* We defer the timer blanking to work queue so it can take the console mutex
* (console operations can still happen at irq time, but only from printk which
* has the console mutex. Not perfect yet, but better than no locking
*/
static void blank_screen_t(unsigned long dummy)
{
@ -3234,6 +3234,14 @@ void vcs_scr_writew(struct vc_data *vc, u16 val, u16 *org)
}
}
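As the blank_screen_t() comment above explains, the timer handler may fire in a context where sleeping is forbidden, so the actual blanking is deferred to a work queue running in process context. An illustrative sketch of that deferral pattern, with hypothetical names and the 2.6.16-era three-argument work API:

#include <linux/workqueue.h>
#include <linux/console.h>

static void example_blank_work(void *unused)
{
	acquire_console_sem();	/* takes the console lock; may sleep here */
	/* ... perform the actual screen blanking ... */
	release_console_sem();
}

static DECLARE_WORK(example_work, example_blank_work, NULL);

static void example_blank_timer(unsigned long dummy)
{
	schedule_work(&example_work);	/* cannot sleep in a timer handler */
}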
int is_console_suspend_safe(void)
{
/* It is unsafe to suspend devices while X has control of the
* hardware. Make sure we are running on a kernel-controlled console.
*/
return vc_cons[fg_console].d->vc_mode == KD_TEXT;
}
/*
* Visible symbols for modules
*/

Some files were not shown because too many files have changed in this diff.