mlock: replace stale comments in munlock_vma_page()

Clean up the stale comments on munlock_vma_page().

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Lee Schermerhorn, 2009-12-14 17:59:55 -08:00, committed by Linus Torvalds
parent 418b27ef50
commit 6927c1dd93
1 changed file with 19 additions and 22 deletions


@@ -88,23 +88,20 @@ void mlock_vma_page(struct page *page)
 	}
 }
 
-/*
- * called from munlock()/munmap() path with page supposedly on the LRU.
+/**
+ * munlock_vma_page - munlock a vma page
+ * @page - page to be unlocked
  *
- * Note: unlike mlock_vma_page(), we can't just clear the PageMlocked
- * [in try_to_munlock()] and then attempt to isolate the page. We must
- * isolate the page to keep others from messing with its unevictable
- * and mlocked state while trying to munlock. However, we pre-clear the
- * mlocked state anyway as we might lose the isolation race and we might
- * not get another chance to clear PageMlocked. If we successfully
- * isolate the page and try_to_munlock() detects other VM_LOCKED vmas
- * mapping the page, it will restore the PageMlocked state, unless the page
- * is mapped in a non-linear vma. So, we go ahead and ClearPageMlocked(),
- * perhaps redundantly.
- * If we lose the isolation race, and the page is mapped by other VM_LOCKED
- * vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
- * either of which will restore the PageMlocked state by calling
- * mlock_vma_page() above, if it can grab the vma's mmap sem.
+ * called from munlock()/munmap() path with page supposedly on the LRU.
+ * When we munlock a page, because the vma where we found the page is being
+ * munlock()ed or munmap()ed, we want to check whether other vmas hold the
+ * page locked so that we can leave it on the unevictable lru list and not
+ * bother vmscan with it. However, to walk the page's rmap list in
+ * try_to_munlock() we must isolate the page from the LRU. If some other
+ * task has removed the page from the LRU, we won't be able to do that.
+ * So we clear the PageMlocked as we might not get another chance. If we
+ * can't isolate the page, we leave it for putback_lru_page() and vmscan
+ * [page_referenced()/try_to_unmap()] to deal with.
  */
 void munlock_vma_page(struct page *page)
 {
@@ -123,12 +120,12 @@ void munlock_vma_page(struct page *page)
 			putback_lru_page(page);
 		} else {
 			/*
-			 * We lost the race. let try_to_unmap() deal
-			 * with it. At least we get the page state and
-			 * mlock stats right. However, page is still on
-			 * the noreclaim list. We'll fix that up when
-			 * the page is eventually freed or we scan the
-			 * noreclaim list.
+			 * Some other task has removed the page from the LRU.
+			 * putback_lru_page() will take care of removing the
+			 * page from the unevictable list, if necessary.
+			 * vmscan [page_referenced()] will move the page back
+			 * to the unevictable list if some other vma has it
+			 * mlocked.
 			 */
 			if (PageUnevictable(page))
 				count_vm_event(UNEVICTABLE_PGSTRANDED);