Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmscan.c    Create Date: 2022-07-28 14:17:56
Last Modified: 2022-05-23 13:41:30    Copyright © Brick

Name: shrink_page_list() - returns the number of reclaimed pages

Proto:static unsigned long shrink_page_list(struct list_head *page_list, struct pglist_data *pgdat, struct scan_control *sc, enum ttu_flags ttu_flags, struct reclaim_stat *stat, bool ignore_references)

Type:unsigned long

Parameter:

Type                    Name
struct list_head *      page_list
struct pglist_data *    pgdat
struct scan_control *   sc
enum ttu_flags          ttu_flags
struct reclaim_stat *   stat
bool                    ignore_references
1090  LIST_HEAD(ret_pages)
1091  LIST_HEAD(free_pages)
1092  nr_reclaimed = 0
1093  pgactivate = 0
1095  memset(stat, 0, sizeof(*stat))
1096  cond_resched()
1098  While Not list_empty(page_list): loop over the candidate pages (list_empty tests whether a list is empty)
1102  references = PAGEREF_RECLAIM
1106  cond_resched()
1108  page = lru_to_page(page_list)
1109  list_del(&page->lru): delete the entry from the list
1111  If Not trylock_page(page) (returns true if the page was successfully locked) Then Go to keep
1114  VM_BUG_ON_PAGE(PageActive(page), page)
1116  nr_pages = compound_nr(page): the number of pages in this potentially compound page
1119  sc->nr_scanned += nr_pages: incremented by the number of inactive pages that were scanned
1121  If unlikely(!page_evictable(page)) Then Go to activate_locked
1124  If Not sc->may_unmap (can mapped pages be reclaimed?) && page_mapped(page) Then Go to keep_locked
1127  may_enter_fs = (sc->gfp_mask & __GFP_FS) || (PageSwapCache(page) && (sc->gfp_mask & __GFP_IO)). __GFP_IO permits starting physical IO; __GFP_FS permits calling down to the low-level FS. Clearing the flag avoids the allocator recursing into a filesystem which might already be holding locks.
1136  page_check_dirty_writeback(page, &dirty, &writeback): check if the page is dirty or under writeback
1137  If dirty || writeback Then nr_dirty++
1140  If dirty && Not writeback Then nr_unqueued_dirty++
1149  mapping = page_mapping(page)
1150  If ((dirty || writeback) && mapping && inode_write_congested(mapping->host)) || (writeback && PageReclaim(page)) Then nr_congested++ (PG_readahead is only used for reads; PG_reclaim is only for writes)
1202  nr_immediate++
1203  Go to activate_locked
1224  Else
1233  If Not ignore_references Then references = page_check_references(page, sc)
1237  Case references == PAGEREF_ACTIVATE
1238  Go to activate_locked
1239  Case references == PAGEREF_KEEP
1240  nr_ref_keep += nr_pages
1241  Go to keep_locked
1242  Case references == PAGEREF_RECLAIM
1243  Case references == PAGEREF_RECLAIM_CLEAN
1252  If PageAnon(page) && PageSwapBacked(page) Then
1253  If Not PageSwapCache(page) Then
1284  may_enter_fs = 1
1304  nr_pages = 1
1311  If page_mapped(page) Then
1316  If Not try_to_unmap(page, flags) Then
1318  Go to activate_locked
1322  If PageDirty(page) Then
1348  If references == PAGEREF_RECLAIM_CLEAN Then Go to keep_locked
1350  If Not may_enter_fs Then Go to keep_locked
1363  Go to keep_locked
1365  Go to activate_locked
1410  If Not mapping && page_count(page) == 1 Then
1428  If PageAnon(page) && Not PageSwapBacked(page) Then
1430  If Not page_ref_freeze(page, 1) Then Go to keep_locked
1432  If PageDirty(page) Then
1433  page_ref_unfreeze(page, 1)
1434  Go to keep_locked
1439  Else if Not mapping || Not __remove_mapping(mapping, page, true) Then Go to keep_locked (__remove_mapping is the same as remove_mapping, but if the page is removed from the mapping it gets returned with a refcount of 0)
1443  unlock_page(page): unlocks the page and wakes up sleepers in ___wait_on_page_locked(); also wakes sleepers in wait_on_page_writeback(), because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared
1444  free_it :
1449  nr_reclaimed += nr_pages
1455  If unlikely(PageTransHuge(page)) Then (*get_compound_page_dtor(page))(page). PageHuge() only returns true for hugetlbfs pages; PageTransHuge() returns true for both transparent huge and hugetlbfs pages, but not normal pages.
1457  Else list_add(&page->lru, &free_pages): add the page to the free list
1459  Continue
1461  activate_locked_split :
1466  If nr_pages > 1 Then
1468  nr_pages = 1
1470  activate_locked :
1472  If PageSwapCache(page) && (mem_cgroup_swap_full(page) || PageMlocked(page)) Then try_to_free_swap(page)
1475  VM_BUG_ON_PAGE(PageActive(page), page)
1476  If Not PageMlocked(page) Then
1482  keep_locked :
1483  unlock_page(page)
1484  keep :
1485  list_add(&page->lru, &ret_pages): add the page to the return list
1486  VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page)
1489  pgactivate = nr_activate[0] + nr_activate[1]
1491  mem_cgroup_uncharge_list( & free_pages)
1492  try_to_unmap_flush()
1493  free_unref_page_list( & free_pages)
1495  list_splice(&ret_pages, page_list): join the kept pages back onto page_list
1496  count_vm_events(PGACTIVATE, pgactivate)
1498  Return nr_reclaimed
Caller
Name                    Description
reclaim_clean_pages_from_list
shrink_inactive_list    shrink_inactive_list() is a helper for shrink_node(); it returns the number of reclaimed pages
reclaim_pages