The transparent huge zero page is used during page faults rather than in
khugepaged.
# ls /sys/kernel/mm/transparent_hugepage/
defrag enabled khugepaged use_zero_page
# ls /sys/kernel/mm/transparent_hugepage/khugepaged/
alloc_sleep_millisecs defrag full_scans max_ptes_none pages_collapsed pages_to_scan scan_sleep_millisecs
This patch corrects the documentation to match what the code actually does.
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The default zonelist order selector will select "node" order if any node's
DMA zone comprises more than 70% of its local memory, not 60%, per the
check default_zonelist_order::low_kmem_size > total * 70/100.
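For illustration, the test boils down to something like the following
sketch (function and variable names here are made up, not the kernel's):

static int node_is_dma_heavy(unsigned long low_kmem_size,
                             unsigned long total_size)
{
        /* More than 70% of the node's memory sits in low/DMA zones. */
        return low_kmem_size > total_size * 70 / 100;
}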
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After commit 839a8e8660 ("writeback: replace custom worker pool
implementation with unbound workqueue"), there is no bdi forker thread
any more. However, WB_REASON_FORKER_THREAD is still used because the
tracepoints are visible to userland and we won't be exposing exactly the
same information under a different name.
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After commit 839a8e8660 ("writeback: replace custom worker pool
implementation with unbound workqueue"), bdi_writeback_workfn runs off
bdi_writeback->dwork; on each execution it processes bdi->work_list and
reschedules if there is more to do, instead of flushing any work that
races with us exiting. It is unnecessary to check force_wait in
wb_do_writeback() since it is always 0 after the mentioned commit, so this
patch removes force_wait from wb_do_writeback().
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
wb_reason_name is not used any more - remove it.
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It's not used globally and could be static.
Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With CONFIG_MEMORY_HOTREMOVE unset, there is a compile warning:
mm/sparse.c:755: warning: `clear_hwpoisoned_pages' defined but not used
Bisecting ended up pointing to 4edd7ceff ("mm, hotplug: avoid
compiling memory hotremove functions when disabled").
This is because the commit above put sparse_remove_one_section() within
the protection of CONFIG_MEMORY_HOTREMOVE but the only user of
clear_hwpoisoned_pages() is sparse_remove_one_section(), and it is not
within the protection of CONFIG_MEMORY_HOTREMOVE.
So putting clear_hwpoisoned_pages() within CONFIG_MEMORY_HOTREMOVE should
fix the warning.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Acked-by: Toshi Kani <toshi.kani@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This function is unused, and its name is easily confused with put_page()
in mm/swap.c, so remove it.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
vfree() only needs schedule_work(&p->wq) if p->list was empty, otherwise
vfree_deferred->wq is already pending or it is running and didn't do
llist_del_all() yet.
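A minimal sketch of the resulting pattern, assuming the vfree_deferred
machinery described above (the helper name is hypothetical):

static void queue_deferred_vfree(struct vfree_deferred *p, const void *addr)
{
        /*
         * llist_add() returns true only if the list was empty beforehand;
         * that is the only case where the work item still needs to be
         * scheduled -- otherwise it is already pending or running and has
         * not yet done llist_del_all().
         */
        if (llist_add((struct llist_node *)addr, &p->list))
                schedule_work(&p->wq);
}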
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In __rmqueue_fallback(), current_order loops down from MAX_ORDER - 1 to
the order passed. MAX_ORDER is typically 11 and pageblock_order is
typically 9 on x86. Integer division truncates, so pageblock_order / 2
is 4. For the first eight iterations, it's guaranteed that
current_order >= pageblock_order / 2 if it even gets that far!
So just remove the unlikely(), it's completely bogus.
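The arithmetic is easy to check with a throwaway user-space program (the
values below are the typical x86 ones quoted above, not read from the
kernel):

#include <stdio.h>

int main(void)
{
        int max_order = 11, pageblock_order = 9, order = 0;
        int current_order;

        /* Integer division truncates: 9 / 2 == 4. */
        for (current_order = max_order - 1; current_order >= order;
             current_order--)
                printf("current_order=%2d  >= %d: %s\n", current_order,
                       pageblock_order / 2,
                       current_order >= pageblock_order / 2 ? "yes" : "no");
        return 0;
}

The test holds for the bulk of the loop's range, which is why marking it
unlikely() makes no sense.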
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Suggested-by: David Rientjes <rientjes@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
These functions are nowhere used, so remove them.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The callers of build_zonelists_node() always pass MAX_NR_ZONES - 1 as the
zone_type argument, so we can use that value directly in
build_zonelists_node() and remove the zone_type argument.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add maintainer information for zswap and zbud into the MAINTAINERS file.
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
0xc just means MOVABLE + DMA32, which results in zone DMA32.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The memory we used to hold the memcg arrays is currently accounted to
the current memcg. But that creates a problem, because that memory can
only be freed after the last user is gone. Our only way to know who the
last user is, is to hook into freeing time, but the fact that we still
have some in-flight kmallocs will prevent the freeing from happening. I
therefore believe it is just easier to account this memory as global
overhead.
Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The memory we used to hold the memcg arrays is currently accounted to
the current memcg. But that creates a problem, because that memory can
only be freed after the last user is gone. Our only way to know who the
last user is, is to hook into freeing time, but the fact that we still
have some in-flight kmallocs will prevent the freeing from happening. I
therefore believe it is just easier to account this memory as global
overhead.
This patch (of 2):
Disabling accounting is only relevant for some specific memcg internal
allocations. Therefore we would initially not have such a check at
memcg_kmem_newpage_charge, since direct calls to the page allocator that
are marked with GFP_KMEMCG only happen outside memcg core. We are
mostly concerned with cache allocations and by having this test at
memcg_kmem_get_cache we are already able to relay the allocation to the
root cache and bypass the memcg caches altogether.
There is one exception, though: the SLUB allocator does not create
large-order caches, but rather services large kmallocs directly from the
page allocator. Therefore, the following sequence, when backed by the SLUB
allocator:
memcg_stop_kmem_account();
kmalloc(<large_number>)
memcg_resume_kmem_account();
would effectively ignore the fact that we should skip accounting, since
it will drive us directly to this function without passing through the
cache selector memcg_kmem_get_cache. Such large allocations are
extremely rare but can happen, for instance, for the cache arrays.
This was never a problem in practice, because we weren't skipping
accounting for the cache arrays. All the allocations we were skipping
were fairly small. However, the fact that we were not skipping those
allocations is a problem and can prevent the memcgs from going away.
As we fix that, we need to make sure that the fix will also work with
the SLUB allocator.
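A rough sketch of the kind of check this implies in the page-allocator
charge path, assuming the per-task memcg_kmem_skip_account marker used by
memcg_stop/resume_kmem_account() (not the exact final code):

static bool kmem_charge_allowed(void)
{
        /*
         * Inside a memcg_stop_kmem_account() section: skip accounting,
         * even for allocations that come straight from the page allocator
         * (SLUB's large kmallocs never pass through memcg_kmem_get_cache()).
         */
        if (current->memcg_kmem_skip_account)
                return false;
        return true;
}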
Signed-off-by: Glauber Costa <glommer@openvz.org>
Reported-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We should check the VM_UNINITIALIZED flag in s_show(). If this flag is
set, the vm_struct is not fully initialized yet, so it is unnecessary to
try to show the information it contains.
We checked this flag in show_numa_info(), but I think it's better to
check it earlier.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
VM_UNLIST was used to indicate that the vm_struct is not listed in
vmlist.
But after commit 4341fa4547 ("mm, vmalloc: remove list management of
vmlist after initializing vmalloc"), the meaning of this flag changed.
It now means the vm_struct is not fully initialized. So renaming it to
VM_UNINITIALIZED seems more reasonable.
Also change clear_vm_unlist to clear_vm_uninitialized_flag.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use goto to jump to the fail label to give a failure message before
returning NULL. This makes the failure handling in this function
consistent.
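As a generic illustration of the pattern (not the kernel function itself):

#include <stdio.h>
#include <stdlib.h>

static void *setup_buffer(size_t size)
{
        void *buf;

        if (size == 0)
                goto fail;
        buf = malloc(size);
        if (!buf)
                goto fail;
        return buf;
fail:
        /* Single exit point for all failures: warn, then return NULL. */
        fprintf(stderr, "setup_buffer: failed to allocate %zu bytes\n", size);
        return NULL;
}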
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Now that the dead code in vb_alloc() has been removed, nothing uses
alloc_map any more, so there is no reason to keep maintaining it in
vmap_block.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This function is nowhere used now, so remove it.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Space in a vmap block that was once allocated is considered dirty and
not made available for allocation again before the whole block is
recycled. The result is that free space within a vmap block is always
contiguous.
So if a vmap block has enough free space for an allocation, that
allocation cannot fail. Thus the fragmented-block purging was never
invoked from vb_alloc(); remove this dead code.
[ Same patches also sent by:
Chanho Min <chanho.min@lge.com>
Johannes Weiner <hannes@cmpxchg.org>
but git doesn't do "multiple authors" ]
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There is an extra semi-colon so the function always returns.
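For reference, the general shape of this class of bug (illustrative code,
not the actual function):

static int check_limit(unsigned long addr, unsigned long limit)
{
        if (addr > limit);      /* bug: the stray ';' ends the if statement */
                return -1;      /* so this return is always taken */
        return 0;               /* and this line is unreachable */
}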
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When calculating pages in a node, for each zone in that node, we will
have
zone_spanned_pages_in_node
--> get_pfn_range_for_nid
zone_absent_pages_in_node
--> get_pfn_range_for_nid
That is to say, we call get_pfn_range_for_nid() to get the node's
start_pfn and end_pfn MAX_NR_ZONES * 2 times, which is totally
unnecessary. If we call get_pfn_range_for_nid() once before
zone_*_pages_in_node() and add two extra arguments, node_start_pfn and
node_end_pfn, to zone_*_pages_in_node(), then we can remove the
get_pfn_range_for_nid() calls inside zone_*_pages_in_node().
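A sketch of the resulting call shape (the wrapper and argument lists are
illustrative, not the exact kernel signatures):

static unsigned long node_pages_sketch(int nid, unsigned long *zones_size,
                                       unsigned long *zholes_size)
{
        unsigned long node_start_pfn, node_end_pfn, present = 0;
        int j;

        /* Look the node's PFN range up once, not 2 * MAX_NR_ZONES times. */
        get_pfn_range_for_nid(nid, &node_start_pfn, &node_end_pfn);
        for (j = 0; j < MAX_NR_ZONES; j++)
                present += zone_spanned_pages_in_node(nid, j, node_start_pfn,
                                                      node_end_pfn, zones_size)
                         - zone_absent_pages_in_node(nid, j, node_start_pfn,
                                                     node_end_pfn, zholes_size);
        return present;
}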
[akpm@linux-foundation.org: make definitions more readable]
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A few remaining architectures directly kill the page faulting task in an
out of memory situation. This is usually not a good idea since that
task might not even use a significant amount of memory and so may not be
the optimal victim to resolve the situation.
Since 2.6.29's 1c0fe6e ("mm: invoke oom-killer from page fault") there
is a hook that architecture page fault handlers are supposed to call to
invoke the OOM killer and let it pick the right task to kill. Convert
the remaining architectures over to this hook.
To get the previous behavior of simply killing the faulting task, the
vm.oom_kill_allocating_task sysctl can be set to 1.
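The converted handlers all end up with roughly this shape (a schematic
excerpt; labels and surrounding details vary per architecture):

        if (fault & VM_FAULT_OOM) {
                up_read(&mm->mmap_sem);
                if (!user_mode(regs))
                        goto no_context;
                /* Let the generic OOM killer pick a sensible victim. */
                pagefault_out_of_memory();
                return;
        }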
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc bits]
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove struct mem_cgroup_lru_info and fold its single member, the
variably sized nodeinfo[0], directly into struct mem_cgroup. This
should make it more obvious why it has to be the last member there.
Also move the comment that sits above that special last member to just
below it, so it is more visible to anybody who considers appending to
struct mem_cgroup.
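The constraint being highlighted is the usual C rule for variably sized
trailing arrays; the struct names below are illustrative, not the kernel's:

struct node_info_example {
        unsigned long stat;
};

struct cgroup_example {
        unsigned long counters[4];
        /* WARNING: must be the last member -- its real size is only
         * decided at allocation time. */
        struct node_info_example nodeinfo[0];
};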
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Glauber Costa <glommer@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch is very similar to commit 84d96d8976 ("mm: madvise:
complete input validation before taking lock"): perform some basic
validation of the input to mremap() before taking the
current->mm->mmap_sem lock.
This also makes the MREMAP_FIXED => MREMAP_MAYMOVE dependency slightly
more explicit.
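A simplified sketch of the resulting ordering (not the complete set of
checks):

        if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE))
                return -EINVAL;
        /* MREMAP_FIXED only makes sense if the mapping may move. */
        if ((flags & MREMAP_FIXED) && !(flags & MREMAP_MAYMOVE))
                return -EINVAL;
        if (addr & ~PAGE_MASK)
                return -EINVAL;

        down_write(&current->mm->mmap_sem);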
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix two obvious problems:
1. msm_iommu_driver is registered first, so it needs to be unregistered
   when registering msm_iommu_ctx_driver fails.
2. There is no need to kfree drvdata before the kzalloc has succeeded.
[akpm@linux-foundation.org: remove now-unneeded initialization of ctx_drvdata, remove unneeded braces]
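The first fix amounts to the usual unwind pattern; a sketch using the
driver names from the description above (details may differ in the real
file):

static int __init msm_iommu_init_sketch(void)
{
        int ret;

        ret = platform_driver_register(&msm_iommu_driver);
        if (ret)
                return ret;

        ret = platform_driver_register(&msm_iommu_ctx_driver);
        if (ret)
                /* Undo the first registration if the second one fails. */
                platform_driver_unregister(&msm_iommu_driver);

        return ret;
}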
Signed-off-by: Libo Chen <libo.chen@huawei.com>
Acked-by: David Brown <davidb@codeaurora.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There have been changes in the locking scheme of fsnotify but the
comments in the source code have not been updated yet. This patch
corrects this.
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In inotify_new_watch() the number of watches for a group is compared
against the max number of allowed watches and increased afterwards. The
check and the increment are not done atomically, so it is possible for
multiple concurrent threads to pass the check and push the number of
marks above the allowed maximum.
This patch uses an inotify group's mark_lock to ensure that the check and
the increment are done atomically. Furthermore, we no longer have to
worry about the race that allows a concurrent thread to add a watch just
after inotify_update_existing_watch() has returned -ENOENT, since this is
now also synchronized by the group's mark mutex.
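The general shape of the fix is a plain check-and-increment under one
lock; the structure and field names below are hypothetical, not the real
inotify ones:

struct watch_group_example {
        spinlock_t mark_lock;
        unsigned int nr_watches;
};

static int charge_watch(struct watch_group_example *g, unsigned int max)
{
        int ret = 0;

        spin_lock(&g->mark_lock);
        if (g->nr_watches >= max)
                ret = -ENOSPC;          /* limit reached */
        else
                g->nr_watches++;        /* check and increment atomically */
        spin_unlock(&g->mark_lock);

        return ret;
}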
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There is no need to use a special mutex to protect against the
fcntl/close race (see dnotify.c for a description of this race).
Instead, the dnotify_group's mark mutex can be used.
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The code under the group's mark_mutex in fanotify_add_inode_mark() and
fanotify_add_vfsmount_mark() is almost identical, so put it into a
separate function.
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
For both adding an event to an existing mark and destroying a mark we
first have to find it via fsnotify_find_[inode|vfsmount]_mark(). But
getting the mark and adding an event (or destroying it) is not done
atomically. This opens a race where a thread is about to destroy a mark
while another thread still finds the same mark and adds an event to its
mask although it will be destroyed.
Another race concerns exceeding a group's limit on the number of marks:
when a mark is added, the number of group marks is checked against the
maximum number of marks per group and increased afterwards. Since the
check and the increment are likewise not done atomically, two or more
processes may pass the check at the same time and push the number of
group marks above the allowed limit.
With this patch both races are avoided by performing the relevant
operations with the group's mark mutex held.
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The ->reserved field isn't cleared so we leak one byte of stack
information to userspace.
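The fix for this class of leak is to initialize every byte that reaches
userspace explicitly; a sketch along the lines of the fanotify metadata
setup (not the exact code):

        metadata->event_len = FAN_EVENT_METADATA_LEN;
        metadata->metadata_len = FAN_EVENT_METADATA_LEN;
        metadata->vers = FANOTIFY_METADATA_VERSION;
        metadata->reserved = 0;         /* previously left uninitialized */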
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use a properly typed (unsigned) constant for the comparison with a u32.
The compilation warning was introduced by 780a7654 ("audit: Make testing
for a valid loginuid explicit."):
kernel/auditfilter.c: In function 'audit_data_to_entry':
kernel/auditfilter.c:426:3: warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
if ((f->type == AUDIT_LOGINUID) && (f->val == 4294967295)) {
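The warning is easy to reproduce and silence in isolation:

#include <stdio.h>

int main(void)
{
        unsigned int val = 4294967295U;         /* a u32, in kernel terms */

        /* An explicitly unsigned constant (or (unsigned int)-1) keeps the
         * comparison unsigned and avoids the C90 warning. */
        if (val == 4294967295U)
                printf("matches the invalid/unset loginuid sentinel\n");
        return 0;
}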
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric Paris <eparis@redhat.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If both 'tree' and 'watch' are valid we must call audit_put_tree(), just
like the preceding code within audit_add_rule().
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric Paris <eparis@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel/auditfilter.c:426: warning: this decimal constant is unsigned only in ISO C90
Signed-off-by: Raphael S. Carvalho <raphael.scarv@gmail.com>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The old audit PATH records for mq_open looked like this:
type=PATH msg=audit(1366282323.982:869): item=1 name=(null) inode=6777
dev=00:0c mode=041777 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tmpfs_t:s15:c0.c1023
type=PATH msg=audit(1366282323.982:869): item=0 name="test_mq" inode=26732
dev=00:0c mode=0100700 ouid=0 ogid=0 rdev=00:00
obj=staff_u:object_r:user_tmpfs_t:s15:c0.c1023
...with the audit-related changes that went into 3.7, they now look like this:
type=PATH msg=audit(1366282236.776:3606): item=2 name=(null) inode=66655
dev=00:0c mode=0100700 ouid=0 ogid=0 rdev=00:00
obj=staff_u:object_r:user_tmpfs_t:s15:c0.c1023
type=PATH msg=audit(1366282236.776:3606): item=1 name=(null) inode=6926
dev=00:0c mode=041777 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tmpfs_t:s15:c0.c1023
type=PATH msg=audit(1366282236.776:3606): item=0 name="test_mq"
Both of these look wrong to me. As Steve Grubb pointed out:
"What we need is 1 PATH record that identifies the MQ. The other PATH
records probably should not be there."
Fix it to record the mq root as a parent, and flag it such that it
should be hidden from view when the names are logged, since the root of
the mq filesystem isn't terribly interesting. With this change, we get
a single PATH record that looks more like this:
type=PATH msg=audit(1368021604.836:484): item=0 name="test_mq" inode=16914
dev=00:0c mode=0100644 ouid=0 ogid=0 rdev=00:00
obj=unconfined_u:object_r:user_tmpfs_t:s0
In order to do this, a new audit_inode_parent_hidden() function is
added. Doing it this way avoids requiring the existing callers of
audit_inode() to do any sort of flag conversion when auditing is
inactive.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Reported-by: Jiri Jaburek <jjaburek@redhat.com>
Cc: Steve Grubb <sgrubb@redhat.com>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The recent "drivers/dma: remove unused support for MEMSET operations"
change has fallout from lack of build testing by the author. This
fixes:
drivers/dma/iop-adma.c:1020:13: warning: unused variable 'dma_addr' [-Wunused-variable]
drivers/dma/iop-adma.c:1519:2: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull slave-dmaengine updates from Vinod Koul:
"Once you have some time from extended weekend celebrations please
consider pulling the following to get:
- Various fixes and PCI driver for dw_dmac by Andy
- DT binding for imx-dma by Markus & imx-sdma by Shawn
- DT fixes for dmaengine by Lars
- jz4740 dmac driver by Lars
- and various fixes across the drivers"
What "extended weekend celebrations"? I'm in the merge window, who has
time for extended celebrations..
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
DMA: shdma: add DT support
DMA: shdma: shdma_chan_filter() has to be in shdma-base.h
DMA: shdma: (cosmetic) don't re-calculate a pointer
dmaengine: at_hdmac: prepare clk before calling enable
dmaengine/trivial: at_hdmac: add curly brackets to if/else expressions
dmaengine: at_hdmac: remove unused atc_cleanup_descriptors()
dmaengine: at_hdmac: add FIFO configuration parameter to DMA DT binding
ARM: at91: dt: add header to define at_hdmac configuration
MIPS: jz4740: Correct clock gate bit for DMA controller
MIPS: jz4740: Remove custom DMA API
MIPS: jz4740: Register jz4740 DMA device
dma: Add a jz4740 dmaengine driver
MIPS: jz4740: Acquire and enable DMA controller clock
dma: mmp_tdma: disable irq when disabling dma channel
dmaengine: PL08x: Avoid collisions with get_signal() macro
dmaengine: dw: select DW_DMAC_BIG_ENDIAN_IO automagically
dma: dw: add PCI part of the driver
dma: dw: split driver to library part and platform code
dma: move dw_dmac driver to an own directory
dw_dmac: don't check resource with devm_ioremap_resource
...
Pull first stage of __cpuinit removal from Paul Gortmaker:
"The two commits here 1) dummy out all the __cpuinit macros so that we
no longer generate such sections, and then 2) remove all the section
processing that we used to do for those sections.
This makes all the __cpuinit and friends no-ops, so that we can remove
the use cases of it at our leisure. Expect stage 2, which does the
tree-wide removal sweep at the end of the merge window."
* 'cpuinit-delete' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
modpost: remove all traces of cpuinit/cpuexit sections
init.h: remove __cpuinit sections from the kernel
Pull timer core updates from Thomas Gleixner:
"The timer changes contain:
- posix timer code consolidation and fixes for odd corner cases
- sched_clock implementation moved from ARM to core code to avoid
duplication by other architectures
- alarm timer updates
- clocksource and clockevents unregistration facilities
- clocksource/events support for new hardware
- precise nanoseconds RTC readout (Xen feature)
- generic support for Xen suspend/resume oddities
- the usual lot of fixes and cleanups all over the place
The parts which touch other areas (ARM/XEN) have been coordinated with
the relevant maintainers. Though this results in a handful of
trivial-to-solve merge conflicts, which we preferred over nasty
cross-tree merge dependencies.
The patches which have been committed in the last few days are bug
fixes plus the posix timer lot. The latter was in akpm's queue and
-next for quite some time; they just got forgotten and Frederic
collected them at the last minute."
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (59 commits)
hrtimer: Remove unused variable
hrtimers: Move SMP function call to thread context
clocksource: Reselect clocksource when watchdog validated high-res capability
posix-cpu-timers: don't account cpu timer after stopped thread runtime accounting
posix_timers: fix racy timer delta caching on task exit
posix-timers: correctly get dying task time sample in posix_cpu_timer_schedule()
selftests: add basic posix timers selftests
posix_cpu_timers: consolidate expired timers check
posix_cpu_timers: consolidate timer list cleanups
posix_cpu_timer: consolidate expiry time type
tick: Sanitize broadcast control logic
tick: Prevent uncontrolled switch to oneshot mode
tick: Make oneshot broadcast robust vs. CPU offlining
x86: xen: Sync the CMOS RTC as well as the Xen wallclock
x86: xen: Sync the wallclock when the system time is set
timekeeping: Indicate that clock was set in the pvclock gtod notifier
timekeeping: Pass flags instead of multiple bools to timekeeping_update()
xen: Remove clock_was_set() call in the resume path
hrtimers: Support resuming with two or more CPUs online (but stopped)
timer: Fix jiffies wrap behavior of round_jiffies_common()
...
Pull ARM DMA mapping updates from Marek Szyprowski:
"This contains important bugfixes and an update for IOMMU integration
support for ARM architecture"
* 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
ARM: dma: Drop __GFP_COMP for iommu dma memory allocations
ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach
ARM: dma-mapping: convert DMA direction into IOMMU protection attributes
ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool
Merge tag 'metag-for-v3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag
Pull Metag architecture changes from James Hogan:
- Infrastructure and DT files for TZ1090 SoC (pin control drivers
already merged via pinctrl tree).
- Panic on boot instead of just warning if cache aliasing possible.
- Various SMP/hotplug fixes.
- Various other randconfig/sparse fixes.
* tag 'metag-for-v3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag: (24 commits)
metag: move EXPORT_SYMBOL(csum_partial) to metag_ksyms.c
metag: cpu hotplug: route_irq: preserve irq mask
metag: kick: add missing irq_enter/exit to kick_handler()
metag: smp: don't spin waiting for CPU to start
metag: smp: enable irqs after set_cpu_online
metag: use clear_tasks_mm_cpumask()
metag: tz1090: select and instantiate pinctrl-tz1090-pdc
metag: tz1090: select and instantiate pinctrl-tz1090
metag: don't check for cache aliasing on smp cpu boot
metag: panic if cache aliasing possible
metag: *.dts: include using preprocessor
metag: add <dt-bindings/> symlink
metag/.gitignore: Extend the *.dtb pattern to match the dtb.S files
metag/traps: include setup.h for the per_cpu_trap_init declaration
metag/traps: Mark die() as __noreturn to match the declaration.
metag/processor.h: Add missing cpuinfo_op declaration.
metag/setup: Restrict scope for the capabilities variable
metag/mm/cache: Restrict scope for metag_lnkget_probe
metag/asm/irq.h: Declare init_IRQ
metag/kernel/irq.c: Declare root_domain as static
...
Merge tag 'xenarm-for-3.11-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen
Pull Xen ARM update from Stefano Stabellini:
"Just one commit this time: the implementation of the tmem hypercall
for arm and arm64"
* tag 'xenarm-for-3.11-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen:
xen/arm and xen/arm64: implement HYPERVISOR_tmem_op
Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux
Pull irqdomain refactoring from Grant Likely:
"This is the long awaited simplification of irqdomain. It gets rid of
the different types of irq domains and instead both linear and tree
mappings can be supported in a single domain. Doing this removes a
lot of special case code and makes irq domains simpler to understand
overall"
* tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux:
irq: fix checkpatch error
irqdomain: Include hwirq number in /proc/interrupts
irqdomain: make irq_linear_revmap() a fast path again
irqdomain: remove irq_domain_generate_simple()
irqdomain: Refactor irq_domain_associate_many()
irqdomain: Beef up debugfs output
irqdomain: Clean up aftermath of irq_domain refactoring
irqdomain: Eliminate revmap type
irqdomain: merge linear and tree reverse mappings.
irqdomain: Add a name field
irqdomain: Replace LEGACY mapping with LINEAR
irqdomain: Relax failure path on setting up mappings
Merge tag 'upstream-3.11-rc1' of git://git.infradead.org/linux-ubi
Pull ubi fixes from Artem Bityutskiy:
"A couple of fixes and clean-ups, allow for assigning user-defined UBI
device numbers when attaching MTD devices by using the "mtd=" module
parameter"
* tag 'upstream-3.11-rc1' of git://git.infradead.org/linux-ubi:
UBI: support ubi_num on mtd.ubi command line
UBI: fastmap break out of used PEB search
UBI: document UBI_IOCVOLUP better in user header
UBI: do not abort init when ubi.mtd devices cannot be found
UBI: drop redundant "UBI error" string