mm, page_alloc: more extensive free page checking with debug_pagealloc

The page allocator checks struct pages for expected state (mapcount,
flags etc) as pages are being allocated (check_new_page()) and freed
(free_pages_check()) to provide some defense against errors in page
allocator users.

Prior to commits 479f854a20 ("mm, page_alloc: defer debugging checks of
pages allocated from the PCP") and 4db7548ccb ("mm, page_alloc: defer
debugging checks of freed pages until a PCP drain"), this happened for
order-0 pages as they were allocated from or freed to the per-cpu
caches (pcplists).  Since those are fast paths, the checks are now
performed only when pages are moved between pcplists and the global free
lists.  This however lowers the chances of catching errors soon enough.

To increase the chances of the checks catching errors, the kernel has
to be rebuilt with CONFIG_DEBUG_VM, which however also enables multiple
other internal debug checks (VM_BUG_ON() etc.).  That is suboptimal when
the goal is to catch errors in mm users, not in mm code itself.

To catch some wrong users of the page allocator we have
CONFIG_DEBUG_PAGEALLOC, which is designed to have virtually no overhead
unless enabled at boot time.  Memory corruptions from writing to freed
pages often have the same underlying causes (use-after-free, double free)
as corruptions of the corresponding struct pages, so this existing
debugging functionality is a good fit to extend by also performing the
struct page checks at least as often as if CONFIG_DEBUG_VM were enabled.

Specifically, after this patch, when debug_pagealloc is enabled on boot,
and CONFIG_DEBUG_VM disabled, pages are checked when allocated from or
freed to the pcplists *in addition* to being moved between pcplists and
free lists.  When both debug_pagealloc and CONFIG_DEBUG_VM are enabled,
pages are checked when being moved between pcplists and free lists *in
addition* to when allocated from or freed to the pcplists.

When debug_pagealloc is not enabled on boot, the overhead in the fast
paths should be virtually none thanks to the use of a static key.

Link: http://lkml.kernel.org/r/20190603143451.27353-3-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 4462b32c92 (parent 96a2b03f28)
Vlastimil Babka, 2019-07-11 20:55:09 -07:00, committed by Linus Torvalds
2 changed files with 52 additions and 14 deletions

--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -19,12 +19,17 @@ config DEBUG_PAGEALLOC
 	  Depending on runtime enablement, this results in a small or large
 	  slowdown, but helps to find certain types of memory corruption.
 
+	  Also, the state of page tracking structures is checked more often as
+	  pages are being allocated and freed, as unexpected state changes
+	  often happen for same reasons as memory corruption (e.g. double free,
+	  use-after-free).
+
 	  For architectures which don't enable ARCH_SUPPORTS_DEBUG_PAGEALLOC,
 	  fill the pages with poison patterns after free_pages() and verify
-	  the patterns before alloc_pages().  Additionally,
-	  this option cannot be enabled in combination with hibernation as
-	  that would result in incorrect warnings of memory corruption after
-	  a resume because free pages are not saved to the suspend image.
+	  the patterns before alloc_pages().  Additionally, this option cannot
+	  be enabled in combination with hibernation as that would result in
+	  incorrect warnings of memory corruption after a resume because free
+	  pages are not saved to the suspend image.
 
 	  By default this option will have a small overhead, e.g. by not
 	  allowing the kernel mapping to be backed by large pages on some
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1160,19 +1160,36 @@ static __always_inline bool free_pages_prepare(struct page *page,
 }
 
 #ifdef CONFIG_DEBUG_VM
-static inline bool free_pcp_prepare(struct page *page)
+/*
+ * With DEBUG_VM enabled, order-0 pages are checked immediately when being freed
+ * to pcp lists. With debug_pagealloc also enabled, they are also rechecked when
+ * moved from pcp lists to free lists.
+ */
+static bool free_pcp_prepare(struct page *page)
 {
 	return free_pages_prepare(page, 0, true);
 }
 
-static inline bool bulkfree_pcp_prepare(struct page *page)
+static bool bulkfree_pcp_prepare(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return free_pages_check(page);
+	else
+		return false;
 }
 #else
+/*
+ * With DEBUG_VM disabled, order-0 pages being freed are checked only when
+ * moving from pcp lists to free list in order to reduce overhead. With
+ * debug_pagealloc enabled, they are checked also immediately when being freed
+ * to the pcp lists.
+ */
 static bool free_pcp_prepare(struct page *page)
 {
-	return free_pages_prepare(page, 0, false);
+	if (debug_pagealloc_enabled())
+		return free_pages_prepare(page, 0, true);
+	else
+		return free_pages_prepare(page, 0, false);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -2035,23 +2052,39 @@ static inline bool free_pages_prezeroed(void)
 }
 
 #ifdef CONFIG_DEBUG_VM
-static bool check_pcp_refill(struct page *page)
+/*
+ * With DEBUG_VM enabled, order-0 pages are checked for expected state when
+ * being allocated from pcp lists. With debug_pagealloc also enabled, they are
+ * also checked when pcp lists are refilled from the free lists.
+ */
+static inline bool check_pcp_refill(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return check_new_page(page);
+	else
+		return false;
 }
 
-static bool check_new_pcp(struct page *page)
+static inline bool check_new_pcp(struct page *page)
 {
 	return check_new_page(page);
 }
 #else
-static bool check_pcp_refill(struct page *page)
+/*
+ * With DEBUG_VM disabled, free order-0 pages are checked for expected state
+ * when pcp lists are being refilled from the free lists. With debug_pagealloc
+ * enabled, they are also checked when being allocated from the pcp lists.
+ */
+static inline bool check_pcp_refill(struct page *page)
 {
 	return check_new_page(page);
 }
 
-static bool check_new_pcp(struct page *page)
+static inline bool check_new_pcp(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return check_new_page(page);
+	else
+		return false;
 }
 #endif /* CONFIG_DEBUG_VM */