fa0d7e3de6
RCU free the struct inode. This will allow:

- Subsequent store-free path walking patch. The inode must be consulted for permissions when walking, so an RCU inode reference is a must.
- sb_inode_list_lock to be moved inside i_lock, because sb list walkers who want to take i_lock no longer need to take sb_inode_list_lock to walk the list in the first place. This will simplify and optimize locking.
- Removal of some nested trylock loops in dcache code.
- Potential simplification in VM land: there would be no need to take the page lock to follow page->mapping.

The downside is the performance cost of using RCU. In a simple creat/unlink microbenchmark, performance drops by about 10% due to the inability to reuse cache-hot slab objects. As iterations increase and RCU freeing starts kicking in, this grows to about 20%. In cases where inode lifetimes are longer (i.e. many inodes may be allocated during the average life span of a single inode), much of this cache reuse is not applicable, so the regression caused by this patch is smaller.

The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU, but that adds some complexity to list walking and store-free path walking, so I prefer to implement it at a later date if it is shown to be a win in real situations. I haven't found a regression in any non-micro benchmark, so I doubt it will be a problem.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
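The core mechanism is the standard call_rcu() deferral pattern, sketched below. This is an illustration rather than the verbatim diff: it assumes struct inode embeds a struct rcu_head named i_rcu and that inode_cachep is the inode slab cache owned by fs/inode.c.

```c
#include <linux/fs.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* The inode slab cache; in the kernel proper this lives in fs/inode.c. */
static struct kmem_cache *inode_cachep;

/* RCU callback: runs after a grace period, when no lock-free walker can
 * still be dereferencing this inode. Only then is the slab object freed. */
static void i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);
	kmem_cache_free(inode_cachep, inode);
}

/* Simplified: the real destroy_inode() also honours a filesystem's own
 * ->destroy_inode method and tears down generic inode state first. */
static void destroy_inode(struct inode *inode)
{
	/* Defer the free instead of calling kmem_cache_free() directly,
	 * so RCU readers holding a reference never see recycled memory. */
	call_rcu(&inode->i_rcu, i_callback);
}
```

For contrast, SLAB_DESTROY_BY_RCU defers freeing of the slab page rather than of each object, so objects can be reused immediately; walkers must then revalidate an object after taking a reference, which is the extra list-walking and path-walking complexity the message defers to a later date.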
Tree listing (fs/proc):

- array.c
- base.c
- cmdline.c
- cpuinfo.c
- devices.c
- generic.c
- inode.c
- internal.h
- interrupts.c
- Kconfig
- kcore.c
- kmsg.c
- loadavg.c
- Makefile
- meminfo.c
- mmu.c
- nommu.c
- page.c
- proc_devtree.c
- proc_net.c
- proc_sysctl.c
- proc_tty.c
- root.c
- softirqs.c
- stat.c
- task_mmu.c
- task_nommu.c
- uptime.c
- version.c
- vmcore.c