linux/mm
Linus Torvalds 6547ec4286 Revert "mm, slub: consider rest of partial list if acquire_slab() fails"
commit 9b1ea29bc0d7b94d420f96a0f4121403efc3dd85 upstream.

This reverts commit 8ff60eb052eeba95cfb3efe16b08c9199f8121cf.

The kernel test robot reports a huge performance regression due to the
commit, and the reason seems fairly straightforward: when there is
contention on the page list (which is what causes acquire_slab() to
fail), we do _not_ want to just loop and try again, because that will
transfer the contention to the 'n->list_lock' spinlock we hold, and
just make things even worse.

This is admittedly likely a problem only on big machines - the kernel
test robot report comes from a 96-thread dual socket Intel Xeon Gold
6252 setup, but the regression there really is quite noticeable:

   -47.9% regression of stress-ng.rawpkt.ops_per_sec

and the commit that was marked as being fixed (7ced37197196: "slub:
Acquire_slab() avoid loop") actually did the loop exit early very
intentionally (the hint being that "avoid loop" part of that commit
message), exactly to avoid this issue.

The correct thing to do may be to pick some kind of reasonable middle
ground: instead of breaking out of the loop on the very first sign of
contention, or trying over and over and over again, the right thing may
be to re-try _once_, and then give up on the second failure (or pick
your favorite value for "once"..).

Reported-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/lkml/20210301080404.GF12822@xsang-OptiPlex-9020/
Cc: Jann Horn <jannh@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-17 17:03:34 +01:00
kasan kasan: fix incorrect arguments passing in kasan_add_zero_shadow 2021-01-27 11:47:53 +01:00
Kconfig mm/zsmalloc.c: drop ZSMALLOC_PGTABLE_MAPPING 2020-12-16 10:56:59 +01:00
Kconfig.debug
Makefile
backing-dev.c bdi: add a ->dev_name field to struct backing_dev_info 2020-05-14 07:58:30 +02:00
balloon_compaction.c
cleancache.c
cma.c cma: don't quit at first error when activating reserved areas 2020-09-03 11:26:51 +02:00
cma.h
cma_debug.c
compaction.c mm/compaction: fix misbehaviors of fast_find_migrateblock() 2021-03-04 10:26:39 +01:00
debug.c mm/debug.c: always print flags in dump_page() 2020-03-05 16:43:51 +01:00
debug_page_ref.c
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c mm/error_inject: Fix allow_error_inject function signatures. 2020-10-29 09:57:37 +01:00
frame_vector.c
frontswap.c
gup.c mm/gup: fix gup_fast with dynamic page table folding 2020-10-01 13:18:24 +02:00
gup_benchmark.c
highmem.c
hmm.c
huge_memory.c mm: thp: fix MADV_REMOVE deadlock on shmem THP 2021-02-10 09:25:31 +01:00
hugetlb.c mm/hugetlb.c: fix unnecessary address expansion of pmd sharing 2021-03-07 12:20:43 +01:00
hugetlb_cgroup.c
hwpoison-inject.c
init-mm.c
internal.h
interval_tree.c
khugepaged.c mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged 2020-10-14 10:33:05 +02:00
kmemleak-test.c
kmemleak.c mm/kmemleak.c: use address-of operator on section symbols 2020-10-01 13:17:53 +02:00
ksm.c mm/ksm: fix NULL pointer dereference when KSM zero page is enabled 2020-04-29 16:33:15 +02:00
list_lru.c mm: list_lru: set shrinker map bit when child nr_items is not zero 2020-12-11 13:23:31 +01:00
maccess.c
madvise.c mm: validate pmd after splitting 2020-10-01 13:18:21 +02:00
memblock.c memblock: do not start bottom-up allocations with kernel_end 2021-02-10 09:25:28 +01:00
memcontrol.c mm: memcg/slab: fix root memcg vmstats 2020-11-24 13:29:24 +01:00
memfd.c
memory-failure.c
memory.c hugetlb: fix copy_huge_page_from_user contig page struct assumption 2021-03-04 10:26:49 +01:00
memory_hotplug.c mm: don't rely on system state to detect hot-plug operations 2020-10-07 08:01:30 +02:00
mempolicy.c mm: mempolicy: fix potential pte_unmap_unlock pte error 2020-11-10 12:37:27 +01:00
mempool.c
memremap.c
memtest.c
migrate.c mm: move_pages: report the number of non-attempted pages 2020-02-11 04:35:13 -08:00
mincore.c
mlock.c
mm_init.c
mmap.c mm/mmap.c: initialize align_offset explicitly for vm_unmapped_area 2020-10-01 13:17:54 +02:00
mmu_context.c mm: fix kthread_use_mm() vs TLB invalidate 2020-09-03 11:26:51 +02:00
mmu_gather.c mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush 2020-02-11 04:35:42 -08:00
mmu_notifier.c
mmzone.c
mprotect.c mm, numa: fix bad pmd by atomically check for pmd_trans_huge when marking page tables prot_numa 2020-03-12 13:00:19 +01:00
mremap.c mm: Fix mremap not considering huge pmd devmap 2020-06-07 13:18:46 +02:00
msync.c
nommu.c x86/mm: split vmalloc_sync_all() 2020-03-25 08:25:58 +01:00
oom_kill.c mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary 2020-10-29 09:57:45 +01:00
page-writeback.c
page_alloc.c mm: don't wake kswapd prematurely when watermark boosting is disabled 2020-12-30 11:51:27 +01:00
page_counter.c mm/page_counter.c: fix protection usage propagation 2020-08-21 13:05:27 +02:00
page_ext.c
page_idle.c
page_io.c swap: fix swapfile read/write offset 2021-03-07 12:20:49 +01:00
page_isolation.c mm/memory_hotplug: drain per-cpu pages again during memory offline 2020-09-23 12:40:47 +02:00
page_owner.c mm/page_owner: change split_page_owner to take a count 2020-10-29 09:57:52 +01:00
page_poison.c
page_vma_mapped.c
pagewalk.c mm: pagewalk: fix termination condition in walk_pte_range() 2020-10-01 13:17:30 +02:00
percpu-internal.h
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c percpu: fix first chunk size calculation for populated bitmap 2020-09-23 12:40:45 +02:00
pgtable-generic.c
process_vm_access.c
readahead.c
rmap.c
rodata_test.c
shmem.c shmem: fix possible deadlocks on shmlock_user_lock 2020-05-20 08:20:03 +02:00
shuffle.c mm/shuffle: don't move pages between zones and don't read garbage memmaps 2020-09-03 11:26:51 +02:00
shuffle.h
slab.c
slab.h
slab_common.c mm: memcg/slab: fix memory leak at non-root kmem_cache destroy 2020-07-29 10:18:44 +02:00
slob.c
slub.c Revert "mm, slub: consider rest of partial list if acquire_slab() fails" 2021-03-17 17:03:34 +01:00
sparse-vmemmap.c
sparse.c mm/sparse: fix kernel crash with pfn_section_valid check 2020-04-01 11:02:03 +02:00
swap.c
swap_cgroup.c
swap_slots.c
swap_state.c mm/swap_state: fix a data race in swapin_nr_pages 2020-10-01 13:18:08 +02:00
swapfile.c swap: fix swapfile read/write offset 2021-03-07 12:20:49 +01:00
truncate.c
usercopy.c
userfaultfd.c
util.c mm: add kvfree_sensitive() for freeing sensitive data objects 2020-06-17 16:40:23 +02:00
vmacache.c
vmalloc.c mm/vunmap: add cond_resched() in vunmap_pmd_range 2020-09-03 11:26:52 +02:00
vmpressure.c
vmscan.c mm/vmscan.c: fix data races using kswapd_classzone_idx 2020-10-01 13:17:53 +02:00
vmstat.c
workingset.c
z3fold.c
zbud.c
zpool.c
zsmalloc.c zsmalloc: account the number of compacted pages correctly 2021-03-07 12:20:49 +01:00
zswap.c