Merge branches 'tracing/ftrace', 'tracing/fastboot', 'tracing/nmisafe' and 'tracing/urgent' into tracing/core

Ingo Molnar 2008-11-08 09:34:35 +01:00
commit a6b0786f7f
45 changed files with 724 additions and 352 deletions

View File

@ -8,7 +8,7 @@ Copyright 2008 Red Hat Inc.
Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton,
John Kacur, and David Teigland.
Written for: 2.6.27-rc1
Written for: 2.6.28-rc2
Introduction
------------
@ -50,26 +50,26 @@ of ftrace. Here is a list of some of the key files:
Note: all time values are in microseconds.
current_tracer : This is used to set or display the current tracer
current_tracer: This is used to set or display the current tracer
that is configured.
available_tracers : This holds the different types of tracers that
available_tracers: This holds the different types of tracers that
have been compiled into the kernel. The tracers
listed here can be configured by echoing their name
into current_tracer.
tracing_enabled : This sets or displays whether the current_tracer
tracing_enabled: This sets or displays whether the current_tracer
is activated and tracing or not. Echo 0 into this
file to disable the tracer or 1 to enable it.
trace : This file holds the output of the trace in a human readable
trace: This file holds the output of the trace in a human readable
format (described below).
latency_trace : This file shows the same trace but the information
latency_trace: This file shows the same trace but the information
is organized more to display possible latencies
in the system (described below).
trace_pipe : The output is the same as the "trace" file but this
trace_pipe: The output is the same as the "trace" file but this
file is meant to be streamed with live tracing.
Reads from this file will block until new data
is retrieved. Unlike the "trace" and "latency_trace"
@ -82,11 +82,11 @@ of ftrace. Here is a list of some of the key files:
tracer is not adding more data, they will display
the same information every time they are read.
iter_ctrl : This file lets the user control the amount of data
iter_ctrl: This file lets the user control the amount of data
that is displayed in one of the above output
files.
trace_max_latency : Some of the tracers record the max latency.
trace_max_latency: Some of the tracers record the max latency.
For example, the time interrupts are disabled.
This time is saved in this file. The max trace
will also be stored, and displayed by either
@ -94,29 +94,26 @@ of ftrace. Here is a list of some of the key files:
only be recorded if the latency is greater than
the value in this file. (in microseconds)
trace_entries : This sets or displays the number of trace
entries each CPU buffer can hold. The tracer buffers
are the same size for each CPU. The displayed number
is the size of the CPU buffer and not total size. The
trace_entries: This sets or displays the number of bytes each CPU
buffer can hold. The tracer buffers are the same size
for each CPU. The displayed number is the size of the
CPU buffer and not total size of all buffers. The
trace buffers are allocated in pages (blocks of memory
that the kernel uses for allocation, usually 4 KB in size).
Since each entry is smaller than a page, if the last
allocated page has room for more entries than were
requested, the rest of the page is used to allocate
entries.
If the last page allocated has room for more bytes
than requested, the rest of the page will be used,
making the actual allocation bigger than requested.
(Note, the size may not be a multiple of the page size due
to buffer management overhead.)
This can only be updated when the current_tracer
is set to "none".
is set to "nop".
NOTE: It is planned to change the allocated buffers
from being sized by the number of possible CPUS to
the number of online CPUS.
tracing_cpumask : This is a mask that lets the user only trace
tracing_cpumask: This is a mask that lets the user only trace
on specified CPUS. The format is a hex string
representing the CPUS.
set_ftrace_filter : When dynamic ftrace is configured in (see the
set_ftrace_filter: When dynamic ftrace is configured in (see the
section below "dynamic ftrace"), the code is dynamically
modified (code text rewrite) to disable calling of the
function profiler (mcount). This lets tracing be configured
@ -130,14 +127,11 @@ of ftrace. Here is a list of some of the key files:
be traced. If a function exists in both set_ftrace_filter
and set_ftrace_notrace, the function will _not_ be traced.
available_filter_functions : When a function is encountered the first
time by the dynamic tracer, it is recorded and
later the call is converted into a nop. This file
lists the functions that have been recorded
by the dynamic tracer and these functions can
be used to set the ftrace filter by the above
"set_ftrace_filter" file. (See the section "dynamic ftrace"
below for more details).
available_filter_functions: This lists the functions that ftrace
has processed and can trace. These are the function
names that you can pass to "set_ftrace_filter" or
"set_ftrace_notrace". (See the section "dynamic ftrace"
below for more details.)
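As an illustration of how the files listed above fit together, here is a minimal user-space sketch (assuming debugfs is mounted at /debug; the function names written to the filter are examples only, any name listed in available_filter_functions will do):

#include <stdio.h>

int main(void)
{
        FILE *f;

        /* Trace only a couple of functions (example symbols). */
        f = fopen("/debug/tracing/set_ftrace_filter", "w");
        if (!f) {
                perror("set_ftrace_filter");
                return 1;
        }
        fputs("schedule\ndo_IRQ\n", f);
        fclose(f);

        /* Select the function tracer. */
        f = fopen("/debug/tracing/current_tracer", "w");
        if (f) {
                fputs("function\n", f);
                fclose(f);
        }
        return 0;
}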
The Tracers
@ -145,7 +139,7 @@ The Tracers
Here is the list of current tracers that may be configured.
ftrace - function tracer that uses mcount to trace all functions.
function - function tracer that uses mcount to trace all functions.
sched_switch - traces the context switches between tasks.
@ -166,8 +160,8 @@ Here is the list of current tracers that may be configured.
the highest priority task to get scheduled after
it has been woken up.
none - This is not a tracer. To remove all tracers from tracing
simply echo "none" into current_tracer.
nop - This is not a tracer. To remove all tracers from tracing
simply echo "nop" into current_tracer.
Examples of using the tracer
@ -182,7 +176,7 @@ Output format:
Here is an example of the output format of the file "trace"
--------
# tracer: ftrace
# tracer: function
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
@ -192,7 +186,7 @@ Here is an example of the output format of the file "trace"
--------
A header is printed with the tracer name that is represented by the trace.
In this case the tracer is "ftrace". Then a header showing the format. Task
In this case the tracer is "function". Then a header showing the format. Task
name "bash", the task PID "4251", the CPU that it was running on
"01", the timestamp in <secs>.<usecs> format, the function name that was
traced "path_put" and the parent function that called this function
@ -1003,22 +997,20 @@ is the stack for the hard interrupt. This hides the fact that NEED_RESCHED
has been set. We do not see the 'N' until we switch back to the task's
assigned stack.
ftrace
------
function
--------
ftrace is not only the name of the tracing infrastructure, but it
is also a name of one of the tracers. The tracer is the function
tracer. Enabling the function tracer can be done from the
debug file system. Make sure the ftrace_enabled is set otherwise
this tracer is a nop.
This tracer is the function tracer. Enabling the function tracer
can be done from the debug file system. Make sure the ftrace_enabled is
set; otherwise this tracer is a nop.
# sysctl kernel.ftrace_enabled=1
# echo ftrace > /debug/tracing/current_tracer
# echo function > /debug/tracing/current_tracer
# echo 1 > /debug/tracing/tracing_enabled
# usleep 1
# echo 0 > /debug/tracing/tracing_enabled
# cat /debug/tracing/trace
# tracer: ftrace
# tracer: function
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
@ -1040,10 +1032,10 @@ this tracer is a nop.
[...]
Note: ftrace uses ring buffers to store the above entries. The newest data
may overwrite the oldest data. Sometimes using echo to stop the trace
is not sufficient because the tracing could have overwritten the data
that you wanted to record. For this reason, it is sometimes better to
Note: function tracer uses ring buffers to store the above entries.
The newest data may overwrite the oldest data. Sometimes using echo to
stop the trace is not sufficient because the tracing could have overwritten
the data that you wanted to record. For this reason, it is sometimes better to
disable tracing directly from a program. This allows you to stop the
tracing at the point that you hit the part that you are interested in.
To disable the tracing directly from a C program, something like following
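(A minimal sketch of such a program, assuming debugfs is mounted at /debug; only the open/write calls matter here.)

#include <fcntl.h>
#include <unistd.h>

int trace_fd = -1;

/* Open the control file once at start-up. */
void trace_setup(void)
{
        trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);
}

/* Stop tracing the moment the interesting code path is hit. */
void trace_off(void)
{
        if (trace_fd >= 0)
                write(trace_fd, "0", 1);
}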
@ -1077,18 +1069,31 @@ every kernel function, produced by the -pg switch in gcc), starts
of pointing to a simple return. (Enabling FTRACE will include the
-pg switch in the compiling of the kernel.)
When dynamic ftrace is initialized, it calls kstop_machine to make
the machine act like a uniprocessor so that it can freely modify code
without worrying about other processors executing that same code. At
initialization, the mcount calls are changed to call a "record_ip"
function. After this, the first time a kernel function is called,
it has the calling address saved in a hash table.
At compile time every C file object is run through the
recordmcount.pl script (located in the scripts directory). This
script will process the C object using objdump to find all the
locations in the .text section that call mcount. (Note, only
the .text section is processed, since processing other sections
like .init.text may cause races due to those sections being freed).
Later on the ftraced kernel thread is awoken and will again call
kstop_machine if new functions have been recorded. The ftraced thread
will change all calls to mcount to "nop". Just calling mcount
and having mcount return has shown a 10% overhead. By converting
it to a nop, there is no measurable overhead to the system.
A new section called "__mcount_loc" is created that holds references
to all the mcount call sites in the .text section. This section is
compiled back into the original object. The final linker will add
all these references into a single table.
On boot up, before SMP is initialized, the dynamic ftrace code
scans this table and updates all the locations into nops. It also
records the locations, which are added to the available_filter_functions
list. Modules are processed as they are loaded and before they are
executed. When a module is unloaded, it also removes its functions from
the ftrace function list. This is automatic in the module unload
code, and the module author does not need to worry about it.
When tracing is enabled, kstop_machine is called to prevent races
with the CPUS executing code being modified (which can cause the
CPU to do undesirable things), and the nops are patched back
to calls. But this time, they do not call mcount (which is just
a function stub). They now call into the ftrace infrastructure.
One special side-effect to the recording of the functions being
traced is that we can now selectively choose which functions we
@ -1251,36 +1256,6 @@ Produces:
We can see that there's no more lock or preempt tracing.
ftraced
-------
As mentioned above, when dynamic ftrace is configured in, a kernel
thread wakes up once a second and checks to see if there are mcount
calls that need to be converted into nops. If there are not any, then
it simply goes back to sleep. But if there are some, it will call
kstop_machine to convert the calls to nops.
There may be a case in which you do not want this added latency.
Perhaps you are doing some audio recording and this activity might
cause skips in the playback. There is an interface to disable
and enable the "ftraced" kernel thread.
# echo 0 > /debug/tracing/ftraced_enabled
This will disable the calling of kstop_machine to update the
mcount calls to nops. Remember that there is a large overhead
to calling mcount. Without this kernel thread, that overhead will
exist.
If there are recorded calls to mcount, any write to the ftraced_enabled
file will cause the kstop_machine to run. This means that a
user can manually perform the updates when they want to by simply
echoing a '0' into the ftraced_enabled file.
The updates are also done at the beginning of enabling a tracer
that uses ftrace function recording.
trace_pipe
----------
@ -1289,14 +1264,14 @@ on the tracing is different. Every read from trace_pipe is consumed.
This means that subsequent reads will be different. The trace
is live.
# echo ftrace > /debug/tracing/current_tracer
# echo function > /debug/tracing/current_tracer
# cat /debug/tracing/trace_pipe > /tmp/trace.out &
[1] 4153
# echo 1 > /debug/tracing/tracing_enabled
# usleep 1
# echo 0 > /debug/tracing/tracing_enabled
# cat /debug/tracing/trace
# tracer: ftrace
# tracer: function
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
@ -1317,7 +1292,7 @@ is live.
Note, reading the trace_pipe file will block until more input is added.
By changing the tracer, trace_pipe will issue an EOF. We needed
to set the ftrace tracer _before_ cating the trace_pipe file.
to set the function tracer _before_ we "cat" the trace_pipe file.
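The same can be done from C; a minimal sketch (debugfs assumed at /debug), where each successful read consumes the entries it returns and blocks when the buffer is empty:

#include <fcntl.h>
#include <unistd.h>

void drain_trace_pipe(void)
{
        char buf[4096];
        ssize_t n;
        int fd = open("/debug/tracing/trace_pipe", O_RDONLY);

        if (fd < 0)
                return;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
                write(STDOUT_FILENO, buf, n);
        close(fd);
}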
trace entries
@ -1334,10 +1309,10 @@ number of entries.
65620
Note, to modify this, you must have tracing completely disabled. To do that,
echo "none" into the current_tracer. If the current_tracer is not set
to "none", an EINVAL error will be returned.
echo "nop" into the current_tracer. If the current_tracer is not set
to "nop", an EINVAL error will be returned.
# echo none > /debug/tracing/current_tracer
# echo nop > /debug/tracing/current_tracer
# echo 100000 > /debug/tracing/trace_entries
# cat /debug/tracing/trace_entries
100045

View File

@ -0,0 +1,82 @@
The io_mapping functions in linux/io-mapping.h provide an abstraction for
efficiently mapping small regions of an I/O device to the CPU. The initial
usage is to support the large graphics aperture on 32-bit processors where
ioremap_wc cannot be used to statically map the entire aperture to the CPU
as it would consume too much of the kernel address space.
A mapping object is created during driver initialization using
struct io_mapping *io_mapping_create_wc(unsigned long base,
unsigned long size)
'base' is the bus address of the region to be made
mappable, while 'size' indicates how large a mapping region to
enable. Both are in bytes.
This _wc variant provides a mapping which may only be used with
io_mapping_map_atomic_wc or io_mapping_map_wc.
With this mapping object, individual pages can be mapped either atomically
or not, depending on the necessary scheduling environment. Of course, atomic
maps are more efficient:
void *io_mapping_map_atomic_wc(struct io_mapping *mapping,
unsigned long offset)
'offset' is the offset within the defined mapping region.
Accessing addresses beyond the region specified in the
creation function yields undefined results. Using an offset
which is not page aligned yields an undefined result. The
return value points to a single page in CPU address space.
This _wc variant returns a write-combining map to the
page and may only be used with mappings created by
io_mapping_create_wc
Note that the task may not sleep while holding this page
mapped.
void io_mapping_unmap_atomic(void *vaddr)
'vaddr' must be the value returned by the last
io_mapping_map_atomic_wc call. This unmaps the specified
page and allows the task to sleep once again.
If you need to sleep while holding a mapping, you can use the non-atomic
variant, although it may be significantly slower.
void *io_mapping_map_wc(struct io_mapping *mapping,
unsigned long offset)
This works like io_mapping_map_atomic_wc except it allows
the task to sleep while holding the page mapped.
void io_mapping_unmap(void *vaddr)
This works like io_mapping_unmap_atomic, except it is used
for pages mapped with io_mapping_map_wc.
At driver close time, the io_mapping object must be freed:
void io_mapping_free(struct io_mapping *mapping)
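Putting the calls together, a minimal driver-side sketch (the example_* names and the base/size values are placeholders; error handling is trimmed):

#include <linux/errno.h>
#include <linux/io-mapping.h>

static struct io_mapping *example_map;

/* Driver init: create the mapping object for the device aperture. */
static int example_init(unsigned long aperture_base, unsigned long aperture_size)
{
        example_map = io_mapping_create_wc(aperture_base, aperture_size);
        if (!example_map)
                return -ENOMEM;
        return 0;
}

/* Write one 32-bit value at a byte offset inside the aperture. */
static void example_write32(unsigned long offset, u32 val)
{
        void *vaddr = io_mapping_map_atomic_wc(example_map, offset & PAGE_MASK);

        writel(val, (u32 __iomem *)(vaddr + (offset & ~PAGE_MASK)));
        io_mapping_unmap_atomic(vaddr);
}

/* Driver close: free the mapping object. */
static void example_fini(void)
{
        io_mapping_free(example_map);
}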
Current Implementation:
The initial implementation of these functions uses existing mapping
mechanisms and so provides only an abstraction layer and no new
functionality.
On 64-bit processors, io_mapping_create_wc calls ioremap_wc for the whole
range, creating a permanent kernel-visible mapping to the resource. The
map_atomic and map functions add the requested offset to the base of the
virtual address returned by ioremap_wc.
On 32-bit processors with HIGHMEM defined, io_mapping_map_atomic_wc uses
kmap_atomic_pfn to map the specified page in an atomic fashion;
kmap_atomic_pfn isn't really supposed to be used with device pages, but it
provides an efficient mapping for this usage.
On 32-bit processors without HIGHMEM defined, io_mapping_map_atomic_wc and
io_mapping_map_wc both use ioremap_wc, a terribly inefficient function which
performs an IPI to inform all processors about the new mapping. This results
in a significant performance penalty.

View File

@ -1,11 +1,6 @@
#ifndef _ASM_ARM_FTRACE
#define _ASM_ARM_FTRACE
#ifndef __ASSEMBLY__
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#ifdef CONFIG_FUNCTION_TRACER
#define MCOUNT_ADDR ((long)(mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */

View File

@ -1,11 +1,6 @@
#ifndef _ASM_POWERPC_FTRACE
#define _ASM_POWERPC_FTRACE
#ifndef __ASSEMBLY__
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#ifdef CONFIG_FUNCTION_TRACER
#define MCOUNT_ADDR ((long)(_mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */

View File

@ -1,11 +1,6 @@
#ifndef __ASM_SH_FTRACE_H
#define __ASM_SH_FTRACE_H
#ifndef __ASSEMBLY__
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#ifndef __ASSEMBLY__
extern void mcount(void);
#endif

View File

@ -1,11 +1,6 @@
#ifndef _ASM_SPARC64_FTRACE
#define _ASM_SPARC64_FTRACE
#ifndef __ASSEMBLY__
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#ifdef CONFIG_MCOUNT
#define MCOUNT_ADDR ((long)(_mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */

View File

@ -1895,6 +1895,10 @@ config SYSVIPC_COMPAT
endmenu
config HAVE_ATOMIC_IOMAP
def_bool y
depends on X86_32
source "net/Kconfig"
source "drivers/Kconfig"

View File

@ -9,6 +9,10 @@
extern int fixmaps_set;
extern pte_t *kmap_pte;
extern pgprot_t kmap_prot;
extern pte_t *pkmap_page_table;
void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
void native_set_fixmap(enum fixed_addresses idx,
unsigned long phys, pgprot_t flags);

View File

@ -28,10 +28,8 @@ extern unsigned long __FIXADDR_TOP;
#include <asm/acpi.h>
#include <asm/apicdef.h>
#include <asm/page.h>
#ifdef CONFIG_HIGHMEM
#include <linux/threads.h>
#include <asm/kmap_types.h>
#endif
/*
* Here we define all the compile-time 'special' virtual
@ -75,10 +73,8 @@ enum fixed_addresses {
#ifdef CONFIG_X86_CYCLONE_TIMER
FIX_CYCLONE_TIMER, /*cyclone timer register*/
#endif
#ifdef CONFIG_HIGHMEM
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
#endif
#ifdef CONFIG_PCI_MMCONFIG
FIX_PCIE_MCFG,
#endif

View File

@ -17,23 +17,7 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
*/
return addr - 1;
}
#ifdef CONFIG_DYNAMIC_FTRACE
extern void ftrace_nmi_enter(void);
extern void ftrace_nmi_exit(void);
#else
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#endif /* __ASSEMBLY__ */
#else /* CONFIG_FUNCTION_TRACER */
#ifndef __ASSEMBLY__
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#endif /* CONFIG_FUNCTION_TRACER */
#endif /* _ASM_X86_FTRACE_H */

View File

@ -25,14 +25,11 @@
#include <asm/kmap_types.h>
#include <asm/tlbflush.h>
#include <asm/paravirt.h>
#include <asm/fixmap.h>
/* declarations for highmem.c */
extern unsigned long highstart_pfn, highend_pfn;
extern pte_t *kmap_pte;
extern pgprot_t kmap_prot;
extern pte_t *pkmap_page_table;
/*
* Right now we initialize only a single pte table. It can be extended
* easily, subsequent pte tables have to be allocated in one physical

View File

@ -1,7 +1,7 @@
obj-y := init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
pat.o pgtable.o gup.o
obj-$(CONFIG_X86_32) += pgtable_32.o
obj-$(CONFIG_X86_32) += pgtable_32.o iomap_32.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_X86_PTDUMP) += dump_pagetables.o

View File

@ -334,7 +334,6 @@ int devmem_is_allowed(unsigned long pagenr)
return 0;
}
#ifdef CONFIG_HIGHMEM
pte_t *kmap_pte;
pgprot_t kmap_prot;
@ -357,6 +356,7 @@ static void __init kmap_init(void)
kmap_prot = PAGE_KERNEL;
}
#ifdef CONFIG_HIGHMEM
static void __init permanent_kmaps_init(pgd_t *pgd_base)
{
unsigned long vaddr;
@ -436,7 +436,6 @@ static void __init set_highmem_pages_init(void)
#endif /* !CONFIG_NUMA */
#else
# define kmap_init() do { } while (0)
# define permanent_kmaps_init(pgd_base) do { } while (0)
# define set_highmem_pages_init() do { } while (0)
#endif /* CONFIG_HIGHMEM */

arch/x86/mm/iomap_32.c (new file, 59 lines)
View File

@ -0,0 +1,59 @@
/*
* Copyright © 2008 Ingo Molnar
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
*/
#include <asm/iomap.h>
#include <linux/module.h>
/* Map 'pfn' using fixed map 'type' and protections 'prot'
*/
void *
iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot)
{
enum fixed_addresses idx;
unsigned long vaddr;
pagefault_disable();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
set_pte(kmap_pte-idx, pfn_pte(pfn, prot));
arch_flush_lazy_mmu_mode();
return (void*) vaddr;
}
EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn);
void
iounmap_atomic(void *kvaddr, enum km_type type)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
/*
* Force other mappings to Oops if they try to access this pte
* without first remapping it. Keeping stale mappings around is also
* a bad idea, in case the page changes cacheability attributes or becomes
* a protected page in a hypervisor.
*/
if (vaddr == __fix_to_virt(FIX_KMAP_BEGIN+idx))
kpte_clear_flush(kmap_pte-idx, vaddr);
arch_flush_lazy_mmu_mode();
pagefault_enable();
}
EXPORT_SYMBOL_GPL(iounmap_atomic);

View File

@ -3,13 +3,14 @@
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
ccflags-y := -Iinclude/drm
i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o i915_opregion.o \
i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o \
i915_suspend.o \
i915_gem.o \
i915_gem_debug.o \
i915_gem_proc.o \
i915_gem_tiling.o
i915-$(CONFIG_ACPI) += i915_opregion.o
i915-$(CONFIG_COMPAT) += i915_ioc32.o
obj-$(CONFIG_DRM_I915) += i915.o

View File

@ -960,6 +960,7 @@ struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF(DRM_I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_SET_TILING, i915_gem_set_tiling, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_TILING, i915_gem_get_tiling, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, 0),
};
int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);

View File

@ -31,6 +31,7 @@
#define _I915_DRV_H_
#include "i915_reg.h"
#include <linux/io-mapping.h>
/* General customization:
*/
@ -246,6 +247,8 @@ typedef struct drm_i915_private {
struct {
struct drm_mm gtt_space;
struct io_mapping *gtt_mapping;
/**
* List of objects currently involved in rendering from the
* ringbuffer.
@ -502,6 +505,8 @@ int i915_gem_set_tiling(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_get_tiling(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void i915_gem_load(struct drm_device *dev);
int i915_gem_proc_init(struct drm_minor *minor);
void i915_gem_proc_cleanup(struct drm_minor *minor);
@ -539,11 +544,18 @@ extern int i915_restore_state(struct drm_device *dev);
extern int i915_save_state(struct drm_device *dev);
extern int i915_restore_state(struct drm_device *dev);
#ifdef CONFIG_ACPI
/* i915_opregion.c */
extern int intel_opregion_init(struct drm_device *dev);
extern void intel_opregion_free(struct drm_device *dev);
extern void opregion_asle_intr(struct drm_device *dev);
extern void opregion_enable_asle(struct drm_device *dev);
#else
static inline int intel_opregion_init(struct drm_device *dev) { return 0; }
static inline void intel_opregion_free(struct drm_device *dev) { return; }
static inline void opregion_asle_intr(struct drm_device *dev) { return; }
static inline void opregion_enable_asle(struct drm_device *dev) { return; }
#endif
/**
* Lock test for when it's just for synchronization of ring access.

View File

@ -79,6 +79,28 @@ i915_gem_init_ioctl(struct drm_device *dev, void *data,
return 0;
}
int
i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_gem_get_aperture *args = data;
struct drm_i915_gem_object *obj_priv;
if (!(dev->driver->driver_features & DRIVER_GEM))
return -ENODEV;
args->aper_size = dev->gtt_total;
args->aper_available_size = args->aper_size;
list_for_each_entry(obj_priv, &dev_priv->mm.active_list, list) {
if (obj_priv->pin_count > 0)
args->aper_available_size -= obj_priv->obj->size;
}
return 0;
}
/**
* Creates a new mm object and returns a handle to it.
@ -171,35 +193,50 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
return 0;
}
/*
* Try to write quickly with an atomic kmap. Return true on success.
*
* If this fails (which includes a partial write), we'll redo the whole
* thing with the slow version.
*
* This is a workaround for the low performance of iounmap (approximate
* 10% cpu cost on normal 3D workloads). kmap_atomic on HIGHMEM kernels
* happens to let us map card memory without taking IPIs. When the vmap
* rework lands we should be able to dump this hack.
/* This is the fast write path which cannot handle
* page faults in the source data
*/
static inline int fast_user_write(unsigned long pfn, char __user *user_data,
int l, int o)
{
#ifdef CONFIG_HIGHMEM
unsigned long unwritten;
char *vaddr_atomic;
vaddr_atomic = kmap_atomic_pfn(pfn, KM_USER0);
#if WATCH_PWRITE
DRM_INFO("pwrite i %d o %d l %d pfn %ld vaddr %p\n",
i, o, l, pfn, vaddr_atomic);
#endif
unwritten = __copy_from_user_inatomic_nocache(vaddr_atomic + o, user_data, l);
kunmap_atomic(vaddr_atomic, KM_USER0);
return !unwritten;
#else
static inline int
fast_user_write(struct io_mapping *mapping,
loff_t page_base, int page_offset,
char __user *user_data,
int length)
{
char *vaddr_atomic;
unsigned long unwritten;
vaddr_atomic = io_mapping_map_atomic_wc(mapping, page_base);
unwritten = __copy_from_user_inatomic_nocache(vaddr_atomic + page_offset,
user_data, length);
io_mapping_unmap_atomic(vaddr_atomic);
if (unwritten)
return -EFAULT;
return 0;
}
/* Here's the write path which can sleep for
* page faults
*/
static inline int
slow_user_write(struct io_mapping *mapping,
loff_t page_base, int page_offset,
char __user *user_data,
int length)
{
char __iomem *vaddr;
unsigned long unwritten;
vaddr = io_mapping_map_wc(mapping, page_base);
if (vaddr == NULL)
return -EFAULT;
unwritten = __copy_from_user(vaddr + page_offset,
user_data, length);
io_mapping_unmap(vaddr);
if (unwritten)
return -EFAULT;
return 0;
#endif
}
static int
@ -208,10 +245,12 @@ i915_gem_gtt_pwrite(struct drm_device *dev, struct drm_gem_object *obj,
struct drm_file *file_priv)
{
struct drm_i915_gem_object *obj_priv = obj->driver_private;
drm_i915_private_t *dev_priv = dev->dev_private;
ssize_t remain;
loff_t offset;
loff_t offset, page_base;
char __user *user_data;
int ret = 0;
int page_offset, page_length;
int ret;
user_data = (char __user *) (uintptr_t) args->data_ptr;
remain = args->size;
@ -235,57 +274,37 @@ i915_gem_gtt_pwrite(struct drm_device *dev, struct drm_gem_object *obj,
obj_priv->dirty = 1;
while (remain > 0) {
unsigned long pfn;
int i, o, l;
/* Operation in this page
*
* i = page number
* o = offset within page
* l = bytes to copy
* page_base = page offset within aperture
* page_offset = offset within page
* page_length = bytes to copy for this page
*/
i = offset >> PAGE_SHIFT;
o = offset & (PAGE_SIZE-1);
l = remain;
if ((o + l) > PAGE_SIZE)
l = PAGE_SIZE - o;
page_base = (offset & ~(PAGE_SIZE-1));
page_offset = offset & (PAGE_SIZE-1);
page_length = remain;
if ((page_offset + remain) > PAGE_SIZE)
page_length = PAGE_SIZE - page_offset;
pfn = (dev->agp->base >> PAGE_SHIFT) + i;
ret = fast_user_write (dev_priv->mm.gtt_mapping, page_base,
page_offset, user_data, page_length);
if (!fast_user_write(pfn, user_data, l, o)) {
unsigned long unwritten;
char __iomem *vaddr;
vaddr = ioremap_wc(pfn << PAGE_SHIFT, PAGE_SIZE);
#if WATCH_PWRITE
DRM_INFO("pwrite slow i %d o %d l %d "
"pfn %ld vaddr %p\n",
i, o, l, pfn, vaddr);
#endif
if (vaddr == NULL) {
ret = -EFAULT;
/* If we get a fault while copying data, then (presumably) our
* source page isn't available. In this case, use the
* non-atomic function
*/
if (ret) {
ret = slow_user_write (dev_priv->mm.gtt_mapping,
page_base, page_offset,
user_data, page_length);
if (ret)
goto fail;
}
unwritten = __copy_from_user(vaddr + o, user_data, l);
#if WATCH_PWRITE
DRM_INFO("unwritten %ld\n", unwritten);
#endif
iounmap(vaddr);
if (unwritten) {
ret = -EFAULT;
goto fail;
}
}
remain -= l;
user_data += l;
offset += l;
remain -= page_length;
user_data += page_length;
offset += page_length;
}
#if WATCH_PWRITE && 1
i915_gem_clflush_object(obj);
i915_gem_dump_object(obj, args->offset + args->size, __func__, ~0);
i915_gem_clflush_object(obj);
#endif
fail:
i915_gem_object_unpin(obj);
@ -1503,12 +1522,12 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
struct drm_i915_gem_exec_object *entry)
{
struct drm_device *dev = obj->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_gem_relocation_entry reloc;
struct drm_i915_gem_relocation_entry __user *relocs;
struct drm_i915_gem_object *obj_priv = obj->driver_private;
int i, ret;
uint32_t last_reloc_offset = -1;
void __iomem *reloc_page = NULL;
void __iomem *reloc_page;
/* Choose the GTT offset for our buffer and put it there. */
ret = i915_gem_object_pin(obj, (uint32_t) entry->alignment);
@ -1631,26 +1650,11 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
* perform.
*/
reloc_offset = obj_priv->gtt_offset + reloc.offset;
if (reloc_page == NULL ||
(last_reloc_offset & ~(PAGE_SIZE - 1)) !=
(reloc_offset & ~(PAGE_SIZE - 1))) {
if (reloc_page != NULL)
iounmap(reloc_page);
reloc_page = ioremap_wc(dev->agp->base +
(reloc_offset &
~(PAGE_SIZE - 1)),
PAGE_SIZE);
last_reloc_offset = reloc_offset;
if (reloc_page == NULL) {
drm_gem_object_unreference(target_obj);
i915_gem_object_unpin(obj);
return -ENOMEM;
}
}
reloc_page = io_mapping_map_atomic_wc(dev_priv->mm.gtt_mapping,
(reloc_offset &
~(PAGE_SIZE - 1)));
reloc_entry = (uint32_t __iomem *)(reloc_page +
(reloc_offset & (PAGE_SIZE - 1)));
(reloc_offset & (PAGE_SIZE - 1)));
reloc_val = target_obj_priv->gtt_offset + reloc.delta;
#if WATCH_BUF
@ -1659,6 +1663,7 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
readl(reloc_entry), reloc_val);
#endif
writel(reloc_val, reloc_entry);
io_mapping_unmap_atomic(reloc_page);
/* Write the updated presumed offset for this entry back out
* to the user.
@ -1674,9 +1679,6 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
drm_gem_object_unreference(target_obj);
}
if (reloc_page != NULL)
iounmap(reloc_page);
#if WATCH_BUF
if (0)
i915_gem_dump_object(obj, 128, __func__, ~0);
@ -2518,6 +2520,10 @@ i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
if (ret != 0)
return ret;
dev_priv->mm.gtt_mapping = io_mapping_create_wc(dev->agp->base,
dev->agp->agp_info.aper_size
* 1024 * 1024);
mutex_lock(&dev->struct_mutex);
BUG_ON(!list_empty(&dev_priv->mm.active_list));
BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
@ -2535,11 +2541,13 @@ int
i915_gem_leavevt_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
drm_i915_private_t *dev_priv = dev->dev_private;
int ret;
ret = i915_gem_idle(dev);
drm_irq_uninstall(dev);
io_mapping_free(dev_priv->mm.gtt_mapping);
return ret;
}

View File

@ -653,15 +653,16 @@ static void radeon_cp_init_ring_buffer(struct drm_device * dev,
RADEON_WRITE(RADEON_SCRATCH_UMSK, 0x7);
/* Turn on bus mastering */
if (((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS400) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS690) ||
if (((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS690) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS740)) {
/* rs400, rs690/rs740 */
tmp = RADEON_READ(RADEON_BUS_CNTL) & ~RS400_BUS_MASTER_DIS;
/* rs600/rs690/rs740 */
tmp = RADEON_READ(RADEON_BUS_CNTL) & ~RS600_BUS_MASTER_DIS;
RADEON_WRITE(RADEON_BUS_CNTL, tmp);
} else if (!(((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RV380) ||
((dev_priv->flags & RADEON_FAMILY_MASK) >= CHIP_R423))) {
/* r1xx, r2xx, r300, r(v)350, r420/r481, rs480 */
} else if (((dev_priv->flags & RADEON_FAMILY_MASK) <= CHIP_RV350) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_R420) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS400) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RS480)) {
/* r1xx, r2xx, r300, r(v)350, r420/r481, rs400/rs480 */
tmp = RADEON_READ(RADEON_BUS_CNTL) & ~RADEON_BUS_MASTER_DIS;
RADEON_WRITE(RADEON_BUS_CNTL, tmp);
} /* PCIE cards appears to not need this */

View File

@ -447,12 +447,12 @@ extern int r300_do_cp_cmdbuf(struct drm_device *dev,
* handling, not bus mastering itself.
*/
#define RADEON_BUS_CNTL 0x0030
/* r1xx, r2xx, r300, r(v)350, r420/r481, rs480 */
/* r1xx, r2xx, r300, r(v)350, r420/r481, rs400/rs480 */
# define RADEON_BUS_MASTER_DIS (1 << 6)
/* rs400, rs690/rs740 */
# define RS400_BUS_MASTER_DIS (1 << 14)
# define RS400_MSI_REARM (1 << 20)
/* see RS480_MSI_REARM in AIC_CNTL for rs480 */
/* rs600/rs690/rs740 */
# define RS600_BUS_MASTER_DIS (1 << 14)
# define RS600_MSI_REARM (1 << 20)
/* see RS400_MSI_REARM in AIC_CNTL for rs480 */
#define RADEON_BUS_CNTL1 0x0034
# define RADEON_PMI_BM_DIS (1 << 2)
@ -937,7 +937,7 @@ extern int r300_do_cp_cmdbuf(struct drm_device *dev,
#define RADEON_AIC_CNTL 0x01d0
# define RADEON_PCIGART_TRANSLATE_EN (1 << 0)
# define RS480_MSI_REARM (1 << 3)
# define RS400_MSI_REARM (1 << 3)
#define RADEON_AIC_STAT 0x01d4
#define RADEON_AIC_PT_BASE 0x01d8
#define RADEON_AIC_LO_ADDR 0x01dc

View File

@ -1,43 +1,45 @@
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/time.h>
#include <asm/cputime.h>
static int uptime_proc_show(struct seq_file *m, void *v)
static int proc_calc_metrics(char *page, char **start, off_t off,
int count, int *eof, int len)
{
if (len <= off + count)
*eof = 1;
*start = page + off;
len -= off;
if (len > count)
len = count;
if (len < 0)
len = 0;
return len;
}
static int uptime_read_proc(char *page, char **start, off_t off, int count,
int *eof, void *data)
{
struct timespec uptime;
struct timespec idle;
int len;
cputime_t idletime = cputime_add(init_task.utime, init_task.stime);
do_posix_clock_monotonic_gettime(&uptime);
monotonic_to_bootbased(&uptime);
cputime_to_timespec(idletime, &idle);
seq_printf(m, "%lu.%02lu %lu.%02lu\n",
len = sprintf(page, "%lu.%02lu %lu.%02lu\n",
(unsigned long) uptime.tv_sec,
(uptime.tv_nsec / (NSEC_PER_SEC / 100)),
(unsigned long) idle.tv_sec,
(idle.tv_nsec / (NSEC_PER_SEC / 100)));
return 0;
return proc_calc_metrics(page, start, off, count, eof, len);
}
static int uptime_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, uptime_proc_show, NULL);
}
static const struct file_operations uptime_proc_fops = {
.open = uptime_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init proc_uptime_init(void)
{
proc_create("uptime", 0, NULL, &uptime_proc_fops);
create_proc_read_entry("uptime", 0, NULL, uptime_read_proc, NULL);
return 0;
}
module_init(proc_uptime_init);

include/asm-x86/iomap.h (new file, 30 lines)
View File

@ -0,0 +1,30 @@
/*
* Copyright © 2008 Ingo Molnar
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
*/
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include <asm/cacheflush.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
void *
iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);
void
iounmap_atomic(void *kvaddr, enum km_type type);

View File

@ -159,6 +159,7 @@ typedef struct _drm_i915_sarea {
#define DRM_I915_GEM_SW_FINISH 0x20
#define DRM_I915_GEM_SET_TILING 0x21
#define DRM_I915_GEM_GET_TILING 0x22
#define DRM_I915_GEM_GET_APERTURE 0x23
#define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
#define DRM_IOCTL_I915_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH)
@ -190,6 +191,7 @@ typedef struct _drm_i915_sarea {
#define DRM_IOCTL_I915_GEM_SW_FINISH DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_SW_FINISH, struct drm_i915_gem_sw_finish)
#define DRM_IOCTL_I915_GEM_SET_TILING DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_SET_TILING, struct drm_i915_gem_set_tiling)
#define DRM_IOCTL_I915_GEM_GET_TILING DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling)
#define DRM_IOCTL_I915_GEM_GET_APERTURE DRM_IOR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_APERTURE, struct drm_i915_gem_get_aperture)
/* Allow drivers to submit batchbuffers directly to hardware, relying
* on the security mechanisms provided by hardware.
@ -600,4 +602,15 @@ struct drm_i915_gem_get_tiling {
uint32_t swizzle_mode;
};
struct drm_i915_gem_get_aperture {
/** Total size of the aperture used by i915_gem_execbuffer, in bytes */
uint64_t aper_size;
/**
* Available space in the aperture used by i915_gem_execbuffer, in
* bytes
*/
uint64_t aper_available_size;
};
#endif /* _I915_DRM_H_ */

View File

@ -74,7 +74,6 @@ static inline void ftrace_start(void) { }
#endif /* CONFIG_FUNCTION_TRACER */
#ifdef CONFIG_DYNAMIC_FTRACE
enum {
FTRACE_FL_FREE = (1 << 0),
FTRACE_FL_FAILED = (1 << 1),
@ -135,7 +134,6 @@ extern void ftrace_release(void *start, unsigned long size);
extern void ftrace_disable_daemon(void);
extern void ftrace_enable_daemon(void);
#else
# define skip_trace(ip) ({ 0; })
# define ftrace_force_update() ({ 0; })

View File

@ -0,0 +1,13 @@
#ifndef _LINUX_FTRACE_IRQ_H
#define _LINUX_FTRACE_IRQ_H
#ifdef CONFIG_DYNAMIC_FTRACE
extern void ftrace_nmi_enter(void);
extern void ftrace_nmi_exit(void);
#else
static inline void ftrace_nmi_enter(void) { }
static inline void ftrace_nmi_exit(void) { }
#endif
#endif /* _LINUX_FTRACE_IRQ_H */

View File

@ -4,8 +4,8 @@
#include <linux/preempt.h>
#include <linux/smp_lock.h>
#include <linux/lockdep.h>
#include <linux/ftrace_irq.h>
#include <asm/hardirq.h>
#include <asm/ftrace.h>
#include <asm/system.h>
/*

include/linux/io-mapping.h (new file, 125 lines)
View File

@ -0,0 +1,125 @@
/*
* Copyright © 2008 Keith Packard <keithp@keithp.com>
*
* This file is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef _LINUX_IO_MAPPING_H
#define _LINUX_IO_MAPPING_H
#include <linux/types.h>
#include <asm/io.h>
#include <asm/page.h>
#include <asm/iomap.h>
/*
* The io_mapping mechanism provides an abstraction for mapping
* individual pages from an io device to the CPU in an efficient fashion.
*
* See Documentation/io_mapping.txt
*/
/* this struct isn't actually defined anywhere */
struct io_mapping;
#ifdef CONFIG_HAVE_ATOMIC_IOMAP
/*
* For small address space machines, mapping large objects
* into the kernel virtual space isn't practical. Where
* available, use fixmap support to dynamically map pages
* of the object at run time.
*/
static inline struct io_mapping *
io_mapping_create_wc(unsigned long base, unsigned long size)
{
return (struct io_mapping *) base;
}
static inline void
io_mapping_free(struct io_mapping *mapping)
{
}
/* Atomic map/unmap */
static inline void *
io_mapping_map_atomic_wc(struct io_mapping *mapping, unsigned long offset)
{
offset += (unsigned long) mapping;
return iomap_atomic_prot_pfn(offset >> PAGE_SHIFT, KM_USER0,
__pgprot(__PAGE_KERNEL_WC));
}
static inline void
io_mapping_unmap_atomic(void *vaddr)
{
iounmap_atomic(vaddr, KM_USER0);
}
static inline void *
io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset)
{
offset += (unsigned long) mapping;
return ioremap_wc(offset, PAGE_SIZE);
}
static inline void
io_mapping_unmap(void *vaddr)
{
iounmap(vaddr);
}
#else
/* Create the io_mapping object*/
static inline struct io_mapping *
io_mapping_create_wc(unsigned long base, unsigned long size)
{
return (struct io_mapping *) ioremap_wc(base, size);
}
static inline void
io_mapping_free(struct io_mapping *mapping)
{
iounmap(mapping);
}
/* Atomic map/unmap */
static inline void *
io_mapping_map_atomic_wc(struct io_mapping *mapping, unsigned long offset)
{
return ((char *) mapping) + offset;
}
static inline void
io_mapping_unmap_atomic(void *vaddr)
{
}
/* Non-atomic map/unmap */
static inline void *
io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset)
{
return ((char *) mapping) + offset;
}
static inline void
io_mapping_unmap(void *vaddr)
{
}
#endif /* HAVE_ATOMIC_IOMAP */
#endif /* _LINUX_IO_MAPPING_H */

View File

@ -1027,8 +1027,23 @@ rb_reserve_next_event(struct ring_buffer_per_cpu *cpu_buffer,
struct ring_buffer_event *event;
u64 ts, delta;
int commit = 0;
int nr_loops = 0;
again:
/*
* We allow for interrupts to reenter here and do a trace.
* If one does, it will cause this original code to loop
* back here. Even with heavy interrupts happening, this
* should only happen a few times in a row. If this happens
* 1000 times in a row, there must be either an interrupt
* storm or we have something buggy.
* Bail!
*/
if (unlikely(++nr_loops > 1000)) {
RB_WARN_ON(cpu_buffer, 1);
return NULL;
}
ts = ring_buffer_time_stamp(cpu_buffer->cpu);
/*
@ -1526,11 +1541,24 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
{
struct buffer_page *reader = NULL;
unsigned long flags;
int nr_loops = 0;
local_irq_save(flags);
__raw_spin_lock(&cpu_buffer->lock);
again:
/*
* This should normally only loop twice. But because the
* start of the reader inserts an empty page, it causes
* a case where we will loop three times. There should be no
* reason to loop four times (that I know of).
*/
if (unlikely(++nr_loops > 3)) {
RB_WARN_ON(cpu_buffer, 1);
reader = NULL;
goto out;
}
reader = cpu_buffer->reader_page;
/* If there's more to read, return this page */
@ -1661,6 +1689,7 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts)
struct ring_buffer_per_cpu *cpu_buffer;
struct ring_buffer_event *event;
struct buffer_page *reader;
int nr_loops = 0;
if (!cpu_isset(cpu, buffer->cpumask))
return NULL;
@ -1668,6 +1697,19 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts)
cpu_buffer = buffer->buffers[cpu];
again:
/*
* We repeat when a timestamp is encountered. It is possible
* to get multiple timestamps from an interrupt entering just
* as one timestamp is about to be written. The max times
* that this can happen is the number of nested interrupts we
* can have. Nesting 10 deep of interrupts is clearly
* an anomaly.
*/
if (unlikely(++nr_loops > 10)) {
RB_WARN_ON(cpu_buffer, 1);
return NULL;
}
reader = rb_get_reader_page(cpu_buffer);
if (!reader)
return NULL;
@ -1718,6 +1760,7 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
struct ring_buffer *buffer;
struct ring_buffer_per_cpu *cpu_buffer;
struct ring_buffer_event *event;
int nr_loops = 0;
if (ring_buffer_iter_empty(iter))
return NULL;
@ -1726,6 +1769,19 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
buffer = cpu_buffer->buffer;
again:
/*
* We repeat when a timestamp is encountered. It is possible
* to get multiple timestamps from an interrupt entering just
* as one timestamp is about to be written. The max times
* that this can happen is the number of nested interrupts we
* can have. Nesting 10 deep of interrupts is clearly
* an anomaly.
*/
if (unlikely(++nr_loops > 10)) {
RB_WARN_ON(cpu_buffer, 1);
return NULL;
}
if (rb_per_cpu_empty(cpu_buffer))
return NULL;

View File

@ -1231,17 +1231,20 @@ static void s_stop(struct seq_file *m, void *p)
mutex_unlock(&trace_types_lock);
}
#define KRETPROBE_MSG "[unknown/kretprobe'd]"
#ifdef CONFIG_KRETPROBES
static inline int kretprobed(unsigned long addr)
static inline const char *kretprobed(const char *name)
{
return addr == (unsigned long)kretprobe_trampoline;
static const char tramp_name[] = "kretprobe_trampoline";
int size = sizeof(tramp_name);
if (strncmp(tramp_name, name, size) == 0)
return "[unknown/kretprobe'd]";
return name;
}
#else
static inline int kretprobed(unsigned long addr)
static inline const char *kretprobed(const char *name)
{
return 0;
return name;
}
#endif /* CONFIG_KRETPROBES */
@ -1250,10 +1253,13 @@ seq_print_sym_short(struct trace_seq *s, const char *fmt, unsigned long address)
{
#ifdef CONFIG_KALLSYMS
char str[KSYM_SYMBOL_LEN];
const char *name;
kallsyms_lookup(address, NULL, NULL, NULL, str);
return trace_seq_printf(s, fmt, str);
name = kretprobed(str);
return trace_seq_printf(s, fmt, name);
#endif
return 1;
}
@ -1264,9 +1270,12 @@ seq_print_sym_offset(struct trace_seq *s, const char *fmt,
{
#ifdef CONFIG_KALLSYMS
char str[KSYM_SYMBOL_LEN];
const char *name;
sprint_symbol(str, address);
return trace_seq_printf(s, fmt, str);
name = kretprobed(str);
return trace_seq_printf(s, fmt, name);
#endif
return 1;
}
@ -1520,10 +1529,7 @@ print_lat_fmt(struct trace_iterator *iter, unsigned int trace_idx, int cpu)
seq_print_ip_sym(s, field->ip, sym_flags);
trace_seq_puts(s, " (");
if (kretprobed(field->parent_ip))
trace_seq_puts(s, KRETPROBE_MSG);
else
seq_print_ip_sym(s, field->parent_ip, sym_flags);
seq_print_ip_sym(s, field->parent_ip, sym_flags);
trace_seq_puts(s, ")\n");
break;
}
@ -1639,12 +1645,9 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
ret = trace_seq_printf(s, " <-");
if (!ret)
return TRACE_TYPE_PARTIAL_LINE;
if (kretprobed(field->parent_ip))
ret = trace_seq_puts(s, KRETPROBE_MSG);
else
ret = seq_print_ip_sym(s,
field->parent_ip,
sym_flags);
ret = seq_print_ip_sym(s,
field->parent_ip,
sym_flags);
if (!ret)
return TRACE_TYPE_PARTIAL_LINE;
}
@ -1895,7 +1898,7 @@ static enum print_line_t print_bin_fmt(struct trace_iterator *iter)
return TRACE_TYPE_HANDLED;
SEQ_PUT_FIELD_RET(s, entry->pid);
SEQ_PUT_FIELD_RET(s, iter->cpu);
SEQ_PUT_FIELD_RET(s, entry->cpu);
SEQ_PUT_FIELD_RET(s, iter->ts);
switch (entry->type) {

View File

@ -176,7 +176,7 @@ int soundbus_add_one(struct soundbus_dev *dev)
return -EINVAL;
}
snprintf(dev->ofdev.dev.bus_id, BUS_ID_SIZE, "soundbus:%x", ++devcount);
dev_set_name(&dev->ofdev.dev, "soundbus:%x", ++devcount);
dev->ofdev.dev.bus = &soundbus_bus_type;
return of_device_register(&dev->ofdev);
}

View File

@ -148,6 +148,8 @@ static int snd_rawmidi_runtime_free(struct snd_rawmidi_substream *substream)
static inline void snd_rawmidi_output_trigger(struct snd_rawmidi_substream *substream,int up)
{
if (!substream->opened)
return;
if (up) {
tasklet_hi_schedule(&substream->runtime->tasklet);
} else {
@ -158,6 +160,8 @@ static inline void snd_rawmidi_output_trigger(struct snd_rawmidi_substream *subs
static void snd_rawmidi_input_trigger(struct snd_rawmidi_substream *substream, int up)
{
if (!substream->opened)
return;
substream->ops->trigger(substream, up);
if (!up && substream->runtime->event)
tasklet_kill(&substream->runtime->tasklet);
@ -857,6 +861,8 @@ int snd_rawmidi_receive(struct snd_rawmidi_substream *substream,
int result = 0, count1;
struct snd_rawmidi_runtime *runtime = substream->runtime;
if (!substream->opened)
return -EBADFD;
if (runtime->buffer == NULL) {
snd_printd("snd_rawmidi_receive: input is not active!!!\n");
return -EINVAL;
@ -1126,6 +1132,8 @@ int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
int snd_rawmidi_transmit(struct snd_rawmidi_substream *substream,
unsigned char *buffer, int count)
{
if (!substream->opened)
return -EBADFD;
count = snd_rawmidi_transmit_peek(substream, buffer, count);
if (count < 0)
return count;

View File

@ -1153,7 +1153,7 @@ snd_ml403_ac97cr_create(struct snd_card *card, struct platform_device *pfdev,
/* get irq */
irq = platform_get_irq(pfdev, 0);
if (request_irq(irq, snd_ml403_ac97cr_irq, IRQF_DISABLED,
pfdev->dev.bus_id, (void *)ml403_ac97cr)) {
dev_name(&pfdev->dev), (void *)ml403_ac97cr)) {
snd_printk(KERN_ERR SND_ML403_AC97CR_DRIVER ": "
"unable to grab IRQ %d\n",
irq);
@ -1166,7 +1166,7 @@ snd_ml403_ac97cr_create(struct snd_card *card, struct platform_device *pfdev,
ml403_ac97cr->irq);
irq = platform_get_irq(pfdev, 1);
if (request_irq(irq, snd_ml403_ac97cr_irq, IRQF_DISABLED,
pfdev->dev.bus_id, (void *)ml403_ac97cr)) {
dev_name(&pfdev->dev), (void *)ml403_ac97cr)) {
snd_printk(KERN_ERR SND_ML403_AC97CR_DRIVER ": "
"unable to grab IRQ %d\n",
irq);

View File

@ -24,13 +24,13 @@ static void pcspkr_do_sound(unsigned int count)
spin_lock_irqsave(&i8253_lock, flags);
if (count) {
/* enable counter 2 */
outb_p(inb_p(0x61) | 3, 0x61);
/* set command for counter 2, 2 byte write */
outb_p(0xB6, 0x43);
/* select desired HZ */
outb_p(count & 0xff, 0x42);
outb((count >> 8) & 0xff, 0x42);
/* enable counter 2 */
outb_p(inb_p(0x61) | 3, 0x61);
} else {
/* disable counter 2 */
outb(inb_p(0x61) & 0xFC, 0x61);

View File

@ -70,15 +70,15 @@ static int __devinit snd_ad1848_match(struct device *dev, unsigned int n)
return 0;
if (port[n] == SNDRV_AUTO_PORT) {
snd_printk(KERN_ERR "%s: please specify port\n", dev->bus_id);
dev_err(dev, "please specify port\n");
return 0;
}
if (irq[n] == SNDRV_AUTO_IRQ) {
snd_printk(KERN_ERR "%s: please specify irq\n", dev->bus_id);
dev_err(dev, "please specify irq\n");
return 0;
}
if (dma1[n] == SNDRV_AUTO_DMA) {
snd_printk(KERN_ERR "%s: please specify dma1\n", dev->bus_id);
dev_err(dev, "please specify dma1\n");
return 0;
}
return 1;

View File

@ -36,7 +36,7 @@ static int __devinit snd_adlib_match(struct device *dev, unsigned int n)
return 0;
if (port[n] == SNDRV_AUTO_PORT) {
snd_printk(KERN_ERR "%s: please specify port\n", dev->bus_id);
dev_err(dev, "please specify port\n");
return 0;
}
return 1;
@ -55,13 +55,13 @@ static int __devinit snd_adlib_probe(struct device *dev, unsigned int n)
card = snd_card_new(index[n], id[n], THIS_MODULE, 0);
if (!card) {
snd_printk(KERN_ERR "%s: could not create card\n", dev->bus_id);
dev_err(dev, "could not create card\n");
return -EINVAL;
}
card->private_data = request_region(port[n], 4, CRD_NAME);
if (!card->private_data) {
snd_printk(KERN_ERR "%s: could not grab ports\n", dev->bus_id);
dev_err(dev, "could not grab ports\n");
error = -EBUSY;
goto out;
}
@ -73,13 +73,13 @@ static int __devinit snd_adlib_probe(struct device *dev, unsigned int n)
error = snd_opl3_create(card, port[n], port[n] + 2, OPL3_HW_AUTO, 1, &opl3);
if (error < 0) {
snd_printk(KERN_ERR "%s: could not create OPL\n", dev->bus_id);
dev_err(dev, "could not create OPL\n");
goto out;
}
error = snd_opl3_hwdep_new(opl3, 0, 0, NULL);
if (error < 0) {
snd_printk(KERN_ERR "%s: could not create FM\n", dev->bus_id);
dev_err(dev, "could not create FM\n");
goto out;
}
@ -87,7 +87,7 @@ static int __devinit snd_adlib_probe(struct device *dev, unsigned int n)
error = snd_card_register(card);
if (error < 0) {
snd_printk(KERN_ERR "%s: could not register card\n", dev->bus_id);
dev_err(dev, "could not register card\n");
goto out;
}

View File

@ -74,15 +74,15 @@ static int __devinit snd_cs4231_match(struct device *dev, unsigned int n)
return 0;
if (port[n] == SNDRV_AUTO_PORT) {
snd_printk(KERN_ERR "%s: please specify port\n", dev->bus_id);
dev_err(dev, "please specify port\n");
return 0;
}
if (irq[n] == SNDRV_AUTO_IRQ) {
snd_printk(KERN_ERR "%s: please specify irq\n", dev->bus_id);
dev_err(dev, "please specify irq\n");
return 0;
}
if (dma1[n] == SNDRV_AUTO_DMA) {
snd_printk(KERN_ERR "%s: please specify dma1\n", dev->bus_id);
dev_err(dev, "please specify dma1\n");
return 0;
}
return 1;
@ -133,7 +133,7 @@ static int __devinit snd_cs4231_probe(struct device *dev, unsigned int n)
mpu_port[n], 0, mpu_irq[n],
mpu_irq[n] >= 0 ? IRQF_DISABLED : 0,
NULL) < 0)
printk(KERN_WARNING "%s: MPU401 not detected\n", dev->bus_id);
dev_warn(dev, "MPU401 not detected\n");
}
snd_card_set_dev(card, dev);

View File

@ -488,19 +488,19 @@ static int __devinit snd_cs423x_isa_match(struct device *pdev,
return 0;
if (port[dev] == SNDRV_AUTO_PORT) {
snd_printk(KERN_ERR "%s: please specify port\n", pdev->bus_id);
dev_err(pdev, "please specify port\n");
return 0;
}
if (cport[dev] == SNDRV_AUTO_PORT) {
snd_printk(KERN_ERR "%s: please specify cport\n", pdev->bus_id);
dev_err(pdev, "please specify cport\n");
return 0;
}
if (irq[dev] == SNDRV_AUTO_IRQ) {
snd_printk(KERN_ERR "%s: please specify irq\n", pdev->bus_id);
dev_err(pdev, "please specify irq\n");
return 0;
}
if (dma1[dev] == SNDRV_AUTO_DMA) {
snd_printk(KERN_ERR "%s: please specify dma1\n", pdev->bus_id);
dev_err(pdev, "please specify dma1\n");
return 0;
}
return 1;

View File

@ -88,16 +88,14 @@ static int __devinit snd_es1688_legacy_create(struct snd_card *card,
if (irq[n] == SNDRV_AUTO_IRQ) {
irq[n] = snd_legacy_find_free_irq(possible_irqs);
if (irq[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free IRQ\n",
dev->bus_id);
dev_err(dev, "unable to find a free IRQ\n");
return -EBUSY;
}
}
if (dma8[n] == SNDRV_AUTO_DMA) {
dma8[n] = snd_legacy_find_free_dma(possible_dmas);
if (dma8[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free DMA\n",
dev->bus_id);
dev_err(dev, "unable to find a free DMA\n");
return -EBUSY;
}
}
@ -147,8 +145,7 @@ static int __devinit snd_es1688_probe(struct device *dev, unsigned int n)
if (snd_opl3_create(card, chip->port, chip->port + 2,
OPL3_HW_OPL3, 0, &opl3) < 0)
printk(KERN_WARNING "%s: opl3 not detected at 0x%lx\n",
dev->bus_id, chip->port);
dev_warn(dev, "opl3 not detected at 0x%lx\n", chip->port);
else {
error = snd_opl3_hwdep_new(opl3, 0, 1, NULL);
if (error < 0)

View File

@ -90,24 +90,21 @@ static int __devinit snd_gusclassic_create(struct snd_card *card,
if (irq[n] == SNDRV_AUTO_IRQ) {
irq[n] = snd_legacy_find_free_irq(possible_irqs);
if (irq[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free IRQ\n",
dev->bus_id);
dev_err(dev, "unable to find a free IRQ\n");
return -EBUSY;
}
}
if (dma1[n] == SNDRV_AUTO_DMA) {
dma1[n] = snd_legacy_find_free_dma(possible_dmas);
if (dma1[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free DMA1\n",
dev->bus_id);
dev_err(dev, "unable to find a free DMA1\n");
return -EBUSY;
}
}
if (dma2[n] == SNDRV_AUTO_DMA) {
dma2[n] = snd_legacy_find_free_dma(possible_dmas);
if (dma2[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free DMA2\n",
dev->bus_id);
dev_err(dev, "unable to find a free DMA2\n");
return -EBUSY;
}
}
@ -174,8 +171,8 @@ static int __devinit snd_gusclassic_probe(struct device *dev, unsigned int n)
error = -ENODEV;
if (gus->max_flag || gus->ess_flag) {
snd_printk(KERN_ERR "%s: GUS Classic or ACE soundcard was "
"not detected at 0x%lx\n", dev->bus_id, gus->gf1.port);
dev_err(dev, "GUS Classic or ACE soundcard was "
"not detected at 0x%lx\n", gus->gf1.port);
goto out;
}

View File

@ -106,16 +106,14 @@ static int __devinit snd_gusextreme_es1688_create(struct snd_card *card,
if (irq[n] == SNDRV_AUTO_IRQ) {
irq[n] = snd_legacy_find_free_irq(possible_irqs);
if (irq[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free IRQ "
"for ES1688\n", dev->bus_id);
dev_err(dev, "unable to find a free IRQ for ES1688\n");
return -EBUSY;
}
}
if (dma8[n] == SNDRV_AUTO_DMA) {
dma8[n] = snd_legacy_find_free_dma(possible_dmas);
if (dma8[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free DMA "
"for ES1688\n", dev->bus_id);
dev_err(dev, "unable to find a free DMA for ES1688\n");
return -EBUSY;
}
}
@ -143,16 +141,14 @@ static int __devinit snd_gusextreme_gus_card_create(struct snd_card *card,
if (gf1_irq[n] == SNDRV_AUTO_IRQ) {
gf1_irq[n] = snd_legacy_find_free_irq(possible_irqs);
if (gf1_irq[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free IRQ "
"for GF1\n", dev->bus_id);
dev_err(dev, "unable to find a free IRQ for GF1\n");
return -EBUSY;
}
}
if (dma1[n] == SNDRV_AUTO_DMA) {
dma1[n] = snd_legacy_find_free_dma(possible_dmas);
if (dma1[n] < 0) {
snd_printk(KERN_ERR "%s: unable to find a free DMA "
"for GF1\n", dev->bus_id);
dev_err(dev, "unable to find a free DMA for GF1\n");
return -EBUSY;
}
}
@ -278,8 +274,8 @@ static int __devinit snd_gusextreme_probe(struct device *dev, unsigned int n)
error = -ENODEV;
if (!gus->ess_flag) {
snd_printk(KERN_ERR "%s: GUS Extreme soundcard was not "
"detected at 0x%lx\n", dev->bus_id, gus->gf1.port);
dev_err(dev, "GUS Extreme soundcard was not "
"detected at 0x%lx\n", gus->gf1.port);
goto out;
}
gus->codec_flag = 1;
@ -310,8 +306,7 @@ static int __devinit snd_gusextreme_probe(struct device *dev, unsigned int n)
if (snd_opl3_create(card, es1688->port, es1688->port + 2,
OPL3_HW_OPL3, 0, &opl3) < 0)
printk(KERN_ERR "%s: opl3 not detected at 0x%lx\n",
dev->bus_id, es1688->port);
dev_warn(dev, "opl3 not detected at 0x%lx\n", es1688->port);
else {
error = snd_opl3_hwdep_new(opl3, 0, 2, NULL);
if (error < 0)

View File

@ -85,11 +85,11 @@ static int __devinit snd_sb8_match(struct device *pdev, unsigned int dev)
if (!enable[dev])
return 0;
if (irq[dev] == SNDRV_AUTO_IRQ) {
snd_printk(KERN_ERR "%s: please specify irq\n", pdev->bus_id);
dev_err(pdev, "please specify irq\n");
return 0;
}
if (dma8[dev] == SNDRV_AUTO_DMA) {
snd_printk(KERN_ERR "%s: please specify dma8\n", pdev->bus_id);
dev_err(pdev, "please specify dma8\n");
return 0;
}
return 1;

View File

@ -1464,6 +1464,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
.ca0151_chip = 1,
.spk71 = 1,
.spdif_bug = 1,
.invert_shared_spdif = 1, /* digital/analog switch swapped */
.ac97_chip = 1} ,
{.vendor = 0x1102, .device = 0x0004, .subsystem = 0x20021102,
.driver = "Audigy2", .name = "Audigy 2 ZS [SB0350]",
@ -1473,6 +1474,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
.ca0151_chip = 1,
.spk71 = 1,
.spdif_bug = 1,
.invert_shared_spdif = 1, /* digital/analog switch swapped */
.ac97_chip = 1} ,
{.vendor = 0x1102, .device = 0x0004, .subsystem = 0x20011102,
.driver = "Audigy2", .name = "Audigy 2 ZS [2001]",
@ -1482,6 +1484,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
.ca0151_chip = 1,
.spk71 = 1,
.spdif_bug = 1,
.invert_shared_spdif = 1, /* digital/analog switch swapped */
.ac97_chip = 1} ,
/* Audigy 2 */
/* Tested by James@superbug.co.uk 3rd July 2005 */
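
The new .invert_shared_spdif flag marks Audigy 2 ZS models whose shared digital/analog output switch is wired the opposite way around. A hypothetical sketch of how such a quirk might be consumed when reporting the switch state; the helper below is illustrative, not the driver's actual callback.

	/* Hypothetical helper: flip the reported switch state when the
	 * matched snd_emu_chip_details entry flags the wiring as swapped.
	 */
	static int shared_spdif_state(const struct snd_emu_chip_details *details,
				      int raw_state)
	{
		return details->invert_shared_spdif ? !raw_state : raw_state;
	}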

View File

@ -829,6 +829,7 @@ static void alc_sku_automute(struct hda_codec *codec)
spec->jack_present ? 0 : PIN_OUT);
}
#if 0 /* it's broken in some cases -- temporarily disabled */
static void alc_mic_automute(struct hda_codec *codec)
{
struct alc_spec *spec = codec->spec;
@ -849,6 +850,9 @@ static void alc_mic_automute(struct hda_codec *codec)
snd_hda_codec_amp_stereo(codec, 0x0b, HDA_INPUT, capsrc_idx_fmic,
HDA_AMP_MUTE, present ? HDA_AMP_MUTE : 0);
}
#else
#define alc_mic_automute(codec) /* NOP */
#endif /* disabled */
/* unsolicited event for HP jack sensing */
static void alc_sku_unsol_event(struct hda_codec *codec, unsigned int res)
@ -1058,12 +1062,14 @@ do_sku:
AC_VERB_SET_UNSOLICITED_ENABLE,
AC_USRSP_EN | ALC880_HP_EVENT);
#if 0 /* it's broken in some cases -- temporarily disabled */
if (spec->autocfg.input_pins[AUTO_PIN_MIC] &&
spec->autocfg.input_pins[AUTO_PIN_FRONT_MIC])
snd_hda_codec_write(codec,
spec->autocfg.input_pins[AUTO_PIN_MIC], 0,
AC_VERB_SET_UNSOLICITED_ENABLE,
AC_USRSP_EN | ALC880_MIC_EVENT);
#endif /* disabled */
spec->unsol_event = alc_sku_unsol_event;
}
@ -8408,6 +8414,7 @@ static const char *alc883_models[ALC883_MODEL_LAST] = {
static struct snd_pci_quirk alc883_cfg_tbl[] = {
SND_PCI_QUIRK(0x1019, 0x6668, "ECS", ALC883_3ST_6ch_DIG),
SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_ACER_ASPIRE),
SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_ACER_ASPIRE),
SND_PCI_QUIRK(0x1025, 0x0110, "Acer Aspire", ALC883_ACER_ASPIRE),
SND_PCI_QUIRK(0x1025, 0x0112, "Acer Aspire 9303", ALC883_ACER_ASPIRE),
SND_PCI_QUIRK(0x1025, 0x0121, "Acer Aspire 5920G", ALC883_ACER_ASPIRE),
@ -12238,8 +12245,26 @@ static int alc269_auto_create_multi_out_ctls(struct alc_spec *spec,
return 0;
}
#define alc269_auto_create_analog_input_ctls \
alc880_auto_create_analog_input_ctls
static int alc269_auto_create_analog_input_ctls(struct alc_spec *spec,
const struct auto_pin_cfg *cfg)
{
int err;
err = alc880_auto_create_analog_input_ctls(spec, cfg);
if (err < 0)
return err;
/* digital-mic input pin is excluded in alc880_auto_create..()
* because it's under 0x18
*/
if (cfg->input_pins[AUTO_PIN_MIC] == 0x12 ||
cfg->input_pins[AUTO_PIN_FRONT_MIC] == 0x12) {
struct hda_input_mux *imux = &spec->private_imux;
imux->items[imux->num_items].label = "Int Mic";
imux->items[imux->num_items].index = 0x05;
imux->num_items++;
}
return 0;
}
#ifdef CONFIG_SND_HDA_POWER_SAVE
#define alc269_loopbacks alc880_loopbacks
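
Two things happen in this file: the broken mic-automute path is compiled out with #if 0 while its callers keep building, because the function is defined away as an empty macro, and the ALC269 input-control alias becomes a real function so that the digital-mic pin 0x12 can be appended to the input mux. A minimal sketch of the first idiom, with hypothetical names:

	/* Hypothetical example of the "compile out but keep call sites" idiom. */
	#if 0	/* feature temporarily disabled */
	static void feature_automute(struct hda_codec *codec)
	{
		/* real implementation would live here */
	}
	#else
	#define feature_automute(codec)		/* NOP: callers build unchanged */
	#endif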

View File

@ -69,6 +69,7 @@ enum {
enum {
STAC_92HD73XX_REF,
STAC_DELL_M6,
STAC_DELL_EQ,
STAC_92HD73XX_MODELS
};
@ -773,9 +774,7 @@ static struct hda_verb dell_eq_core_init[] = {
};
static struct hda_verb dell_m6_core_init[] = {
/* set master volume to max value without distortion
* and direct control */
{ 0x1f, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0xec},
{ 0x1f, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0xff},
/* setup audio connections */
{ 0x0d, AC_VERB_SET_CONNECT_SEL, 0x00},
{ 0x0a, AC_VERB_SET_CONNECT_SEL, 0x01},
@ -1600,11 +1599,13 @@ static unsigned int dell_m6_pin_configs[13] = {
static unsigned int *stac92hd73xx_brd_tbl[STAC_92HD73XX_MODELS] = {
[STAC_92HD73XX_REF] = ref92hd73xx_pin_configs,
[STAC_DELL_M6] = dell_m6_pin_configs,
[STAC_DELL_EQ] = dell_m6_pin_configs,
};
static const char *stac92hd73xx_models[STAC_92HD73XX_MODELS] = {
[STAC_92HD73XX_REF] = "ref",
[STAC_DELL_M6] = "dell-m6",
[STAC_DELL_EQ] = "dell-eq",
};
static struct snd_pci_quirk stac92hd73xx_cfg_tbl[] = {
@ -4131,12 +4132,17 @@ again:
sizeof(stac92hd73xx_dmux));
switch (spec->board_config) {
case STAC_DELL_M6:
case STAC_DELL_EQ:
spec->init = dell_eq_core_init;
/* fallthru */
case STAC_DELL_M6:
spec->num_smuxes = 0;
spec->mixer = &stac92hd73xx_6ch_mixer[DELL_M6_MIXER];
spec->amp_nids = &stac92hd73xx_amp_nids[DELL_M6_AMP];
spec->num_amps = 1;
if (!spec->init)
spec->init = dell_m6_core_init;
switch (codec->subsystem_id) {
case 0x1028025e: /* Analog Mics */
case 0x1028025f:
@ -4146,8 +4152,6 @@ again:
break;
case 0x10280271: /* Digital Mics */
case 0x10280272:
spec->init = dell_m6_core_init;
/* fall-through */
case 0x10280254:
case 0x10280255:
stac92xx_set_config_reg(codec, 0x13, 0x90A60160);
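
The new STAC_DELL_EQ board config reuses the Dell M6 pin configs but selects dell_eq_core_init before falling through into the common M6 setup, which only assigns dell_m6_core_init when no init verbs were chosen yet. A hypothetical sketch of that fallthrough pattern; the function, labels, and variables here are illustrative:

	/* Illustrative only: the variant case picks its init table, then falls
	 * through; the base case fills in a default only if still unset.
	 */
	static struct hda_verb *pick_init(int board, struct hda_verb *variant_init,
					  struct hda_verb *base_init)
	{
		struct hda_verb *init = NULL;

		switch (board) {
		case 1:				/* hypothetical "EQ" variant */
			init = variant_init;
			/* fallthru */
		case 0:				/* hypothetical base model */
			if (!init)
				init = base_init;
			break;
		}
		return init;
	}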

View File

@ -95,8 +95,8 @@ static int soc_ac97_dev_register(struct snd_soc_codec *codec)
codec->ac97->dev.parent = NULL;
codec->ac97->dev.release = soc_ac97_device_release;
snprintf(codec->ac97->dev.bus_id, BUS_ID_SIZE, "%d-%d:%s",
codec->card->number, 0, codec->name);
dev_set_name(&codec->ac97->dev, "%d-%d:%s",
codec->card->number, 0, codec->name);
err = device_register(&codec->ac97->dev);
if (err < 0) {
snd_printk(KERN_ERR "Can't register ac97 bus\n");
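
The same device-naming cleanup closes the series: instead of snprintf()ing into the fixed-size bus_id field, the AC97 device name is set with dev_set_name(), which takes printf-style arguments and manages the storage itself. A minimal sketch, assuming a generic struct device already initialised by its caller; the wrapper and its name are illustrative:

	/* Illustrative only: name a device via dev_set_name() rather than
	 * writing dev->bus_id directly.
	 */
	static void example_name_device(struct device *dev, int card,
					const char *name)
	{
		dev_set_name(dev, "%d-%d:%s", card, 0, name);
	}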