Commit Graph

12 Commits

Author SHA1 Message Date
Shaohua Li 31ab269a03 [PATCH] x86: add MCE resume
It's widely seen that a non-fatal MCE error is reported after resume.  It
seems MCE resume handling is missing on ia32.  This patch fills that gap.
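
A minimal sketch of the kind of resume hook this adds, using the 2.6-era
sysdev API (names are illustrative, not necessarily those in the patch):

	static int mce_resume(struct sys_device *dev)
	{
		/* re-run the boot-time MCE setup so the machine-check
		 * MSRs and CR4.MCE are reprogrammed after S3 */
		mcheck_init(&boot_cpu_data);
		return 0;
	}

	static struct sysdev_class mce_sysclass = {
		.resume = mce_resume,
		set_kset_name("machinecheck"),
	};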

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:30 -08:00
Shaohua Li 08967f941a [PATCH] FPU context corrupted after resume
mxcsr_feature_mask_init isn't needed at suspend/resume time (we can use
the boot-time mask).  And it's actually harmful, as it clears a task's
saved fxsave state on resume.  This bug is widely seen by users running zsh.
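
The breakage is visible from the shape of the probe itself: it determines
the MXCSR feature mask by zeroing and fxsave-ing into the current task's
own save area, which is harmless at boot but destructive at resume.
Roughly (a sketch of the boot-time probe, not the exact source):

	void mxcsr_feature_mask_init(void)
	{
		unsigned long mask = 0;

		clts();
		if (cpu_has_fxsr) {
			/* probe by fxsave-ing into current's save area --
			 * fine at boot, clobbers live state at resume */
			memset(&current->thread.i387.fxsave, 0,
			       sizeof(struct i387_fxsave_struct));
			asm volatile("fxsave %0"
				     : : "m" (current->thread.i387.fxsave));
			mask = current->thread.i387.fxsave.mxcsr_mask;
			if (mask == 0)
				mask = 0x0000ffbf;	/* default mask */
		}
		mxcsr_feature_mask &= mask;
		stts();
	}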

(akpm: my eyes.  Fixed some surrounding whitespace mess)

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-30 17:37:11 -08:00
Al Viro ce3a161e69 [PATCH] useless includes of linux/irq.h in arch/i386
Most of these guys are simply not needed (they get pulled in by other
stuff via asm-i386/hardirq.h).  One that is not entirely useless is
hilarious - arch/i386/oprofile/nmi_timer_int.c includes linux/irq.h...
as a way to get linux/errno.h.
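
For that last one, the obvious replacement is to include what the file
actually uses (illustrative, based on the description above):

	/* arch/i386/oprofile/nmi_timer_int.c: drop <linux/irq.h> ... */
	#include <linux/errno.h>	/* ... and include the real dependency */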

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-26 18:29:50 -07:00
Sam Ravnborg 86feeaa812 kbuild: full dependency check on asm-offsets.h
Building asm-offsets.h has been moved to a separate Kbuild file
located in the top-level directory.  This allows us to share the
functionality across the architectures.

The old rules in architecture specific Makefiles will die
in subsequent patches.

Furthermore, the usual kbuild dependency tracking is now used
when deciding whether to rebuild asm-offsets.s, so we no longer risk
missing a rebuild when files that asm-offsets.c depends on are touched.

With this common rule-set we now force the same name across
all architectures. Following patches will fix the rest.
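
The mechanism being centralised is the usual asm-offsets trick: a C file
is compiled to assembly, magic marker lines are emitted, and sed turns
them into cpp #defines.  Schematically (a sketch, not the literal Kbuild
rules):

	/* asm-offsets.c is compiled with -S and never linked */
	#define DEFINE(sym, val) \
		asm volatile("\n->" #sym " %0 " #val : : "i" (val))

	void foo(void)
	{
		/* emits a "->TI_flags <value>" marker into the .s file,
		 * which the build turns into #define TI_flags <value> */
		DEFINE(TI_flags, offsetof(struct thread_info, flags));
	}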

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2005-09-09 19:28:28 +02:00
Zachary Amsden e9f86e351f [PATCH] x86: remove redundant TSS clearing
When reviewing GDT updates, I found the code:

	set_tss_desc(cpu,t);	/* This just modifies memory; ... */
	per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;

This second line is unnecessary, since set_tss_desc() has already cleared
the busy bit.

Commented disassembly, line 1:

c028b8bd:       8b 0c 86                mov    (%esi,%eax,4),%ecx
c028b8c0:       01 cb                   add    %ecx,%ebx
c028b8c2:       8d 0c 39                lea    (%ecx,%edi,1),%ecx

  => %ecx = per_cpu(cpu_gdt_table, cpu)

c028b8c5:       8d 91 80 00 00 00       lea    0x80(%ecx),%edx

  => %edx = &per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS]

c028b8cb:       66 c7 42 00 73 20       movw   $0x2073,0x0(%edx)
c028b8d1:       66 89 5a 02             mov    %bx,0x2(%edx)
c028b8d5:       c1 cb 10                ror    $0x10,%ebx
c028b8d8:       88 5a 04                mov    %bl,0x4(%edx)
c028b8db:       c6 42 05 89             movb   $0x89,0x5(%edx)

  => ((char *)%edx)[5] = 0x89
  (equivalent) ((char *)per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] = 0x89

c028b8df:       c6 42 06 00             movb   $0x0,0x6(%edx)
c028b8e3:       88 7a 07                mov    %bh,0x7(%edx)
c028b8e6:       c1 cb 10                ror    $0x10,%ebx

  => other bits

Commented disassembly, line 2:

c028b8e9:       8b 14 86                mov    (%esi,%eax,4),%edx
c028b8ec:       8d 04 3a                lea    (%edx,%edi,1),%eax

  => %eax = per_cpu(cpu_gdt_table, cpu)

c028b8ef:       81 a0 84 00 00 00 ff    andl   $0xfffffdff,0x84(%eax)

  => per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;
  (equivalent) ((char *)per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] &= 0xfd

Note that (0x89 & ~0xfd) == 0; i.e., set_tss_desc(cpu,t) has already stored
the type field in the GDT with the busy bit clear.
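
In descriptor terms, byte 5 is the access byte, and the TSS busy flag is
bit 1 of it (type 0x9 = available 32-bit TSS, type 0xB = busy).  A
standalone check of the arithmetic (illustrative only):

	#include <assert.h>

	int main(void)
	{
		unsigned char access = 0x89;	/* P=1, DPL=0, type 0x9 */
		unsigned char busy   = 0x02;	/* busy flag: bit 1 */

		assert((access & busy) == 0);	/* already clear, so the
						   andl above is a no-op */
		return 0;
	}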

Eliminating redundant and obscure code is always a good thing; in fact, I
pointed out this same optimization many moons ago in arch/i386/setup.c,
back when it used to be called that.

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05 00:06:13 -07:00
Zachary Amsden 4d37e7e3fd [PATCH] i386: inline assembler: cleanup and encapsulate descriptor and task register management
i386 inline assembler cleanup.

This change encapsulates descriptor and task register management.  It also
improves assembler generation in two cases: savesegment may store the
value in a register instead of a memory location, which allows GCC to
optimize stack variables into registers, and MOV MEM, SEG is always a
16-bit write to memory, making the casting in math-emu unnecessary.
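
The savesegment half comes down to the output constraint: with "=rm"
(rather than "=m") the compiler is free to pick a register.  A sketch of
the macro (the constraint is the point; the exact spelling may differ):

	#define savesegment(seg, value) \
		asm volatile("mov %%" #seg ",%0" : "=rm" (value))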

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05 00:06:11 -07:00
Zachary Amsden 4bb0d3ec3e [PATCH] i386: inline asm cleanup
i386 Inline asm cleanup.  Use cr/dr accessor functions.

Also a potential bugfix: some CR accessors really should be volatile.
Reads from CR0 (numeric state may change in an exception handler), writes
to CR4 (flipping CR4.TSD) and reads from CR2 (page fault) must not be
reordered by the compiler, which volatile prevents.  I did not add a memory
clobber to the CR3 / CR4 / CR0 updates, as none was there to begin with,
and in no case should kernel memory be clobbered, except when doing a TLB
flush, which already has a memory clobber.

I noticed that page invalidation does not have a memory clobber.  I can't find
a bug as a result, but there is definitely a potential for a bug here:

#define __flush_tlb_single(addr) \
	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
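
For comparison, the defensive form would add an explicit clobber so the
compiler cannot move memory accesses across the invalidation (illustrative;
not part of this patch):

	#define __flush_tlb_single(addr) \
		__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr) : "memory")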

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05 00:06:11 -07:00
Shaohua Li 3b520b238e [PATCH] MTRR suspend/resume cleanup
There has been some discussion about solving the SMP MTRR suspend/resume
breakage, but I didn't find a patch for it.  This is an attempt at it.  The
basic idea is to move MTRR initialization into cpu_identify for all APs (so
it works for CPU hotplug).  For the BP, restore_processor_state is
responsible for restoring the MTRRs.
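
Schematically, the AP half looks something like this (a sketch following
the description above; the exact hook names may differ):

	/* Called from the cpu_identify() path, so freshly hotplugged
	 * CPUs pick up the MTRR state too. */
	void mtrr_ap_init(void)
	{
		unsigned long flags;

		if (!mtrr_if || !use_intel())
			return;
		local_irq_save(flags);
		set_mtrr(~0U, 0, 0, 0);	/* replay the saved MTRR state */
		local_irq_restore(flags);
	}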

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Acked-by: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07 18:23:42 -07:00
Pavel Machek 8d783b3e02 [PATCH] swsusp: clean assembly parts
This patch fixes register saving so that each register is only saved once,
and adds the missing saving of %cr8 on x86-64.  It also reorders things so
that save/restore is more logical and safer (segment registers should be
restored after the GDT).
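
The ordering constraint is that reloading a segment register consults the
GDT, so the table must be back in place first.  In C-with-inline-asm terms
(variable names are illustrative):

	asm volatile("lgdt %0" : : "m" (saved_gdt));	  /* GDT first */
	asm volatile("movw %0, %%ds" : : "r" (saved_ds)); /* then selectors */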

Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:33 -07:00
Li Shaohua 6fe940d6c3 [PATCH] sep initializing rework
Make SEP init per-cpu, so it is hotplug safe.
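
Per-cpu here means programming the SYSENTER MSRs, which are per-CPU state,
on each processor as it comes up rather than once at boot.  A sketch of the
per-cpu enable (close to, but not necessarily identical to, the patch):

	void enable_sep_cpu(void *info)
	{
		int cpu = get_cpu();
		struct tss_struct *tss = &per_cpu(init_tss, cpu);

		tss->ss1 = __KERNEL_CS;
		tss->esp1 = sizeof(struct tss_struct) + (unsigned long) tss;
		wrmsr(MSR_IA32_SYSENTER_CS, __KERNEL_CS, 0);
		wrmsr(MSR_IA32_SYSENTER_ESP, tss->esp1, 0);
		wrmsr(MSR_IA32_SYSENTER_EIP, (unsigned long) sysenter_entry, 0);
		put_cpu();
	}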

Signed-off-by: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:29 -07:00
Vincent Hanquez 1cc6f12e03 [PATCH] xen: x86: Use new macro for debugreg
Make use of the two new macros, set_debugreg and get_debugreg.
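
The macros wrap the privileged moves to and from the %db registers; a
sketch of their shape (the real definitions live in the i386 headers):

	#define get_debugreg(var, register)			\
		__asm__("movl %%db" #register ", %0"		\
			: "=r" (var))
	#define set_debugreg(value, register)			\
		__asm__("movl %0, %%db" #register		\
			: /* no output */			\
			: "r" (value))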

Signed-off-by: Vincent Hanquez <vincent.hanquez@cl.cam.ac.uk>
Cc: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:13 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00