Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmscan.c    Create Date: 2022-07-28 14:18:28
Last Modify: 2022-05-23 13:41:30    Copyright © Brick

Name: shrink_inactive_list() is a helper for shrink_node(); it returns the number of reclaimed pages.

Proto: static __attribute__((__noinline__)) unsigned long shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, struct scan_control *sc, enum lru_list lru)

Type: unsigned long

Parameter:

Type                     Parameter Name
unsigned long            nr_to_scan
struct lruvec *          lruvec
struct scan_control *    sc
enum lru_list            lru
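
For reference, a lightly abridged sketch of how the declaration and locals read in mm/vmscan.c for this kernel (noinline_for_stack is the kernel macro behind the attribute expanded in the Proto line above):

    static noinline_for_stack unsigned long
    shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
                         struct scan_control *sc, enum lru_list lru)
    {
            LIST_HEAD(page_list);           /* batch of isolated pages */
            unsigned long nr_scanned;
            unsigned long nr_reclaimed = 0;
            unsigned long nr_taken;
            struct reclaim_stat stat;       /* filled in by shrink_page_list() */
            int file = is_file_lru(lru);
            enum vm_event_item item;
            struct pglist_data *pgdat = lruvec_pgdat(lruvec);
            struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
            bool stalled = false;
            ...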
1916  LIST_HEAD(page_list)
1918  nr_reclaimed = 0
1921  file = is_file_lru(lru)
1923  pgdat = lruvec_pgdat(lruvec)
1924  reclaim_stat = &lruvec->reclaim_stat
1925  bool stalled = false
1927  While unlikely(too_many_isolated(pgdat, file, sc)) loop: a direct reclaimer may isolate SWAP_CLUSTER_MAX pages from the LRU list and then get rescheduled, so throttle while too many pages are already isolated
1928  If stalled Then Return 0
1932  msleep(100): sleep safely (even with waitqueue interruptions) for 100 milliseconds, waiting a bit for the reclaimer
1933  stalled = true
1936  If fatal_signal_pending(current) Then Return SWAP_CLUSTER_MAX (the task is about to die and free its memory, so return now)
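
Lines 1927 through 1936 form the throttling loop; a sketch of how the span reads in the source:

    while (unlikely(too_many_isolated(pgdat, file, sc))) {
            if (stalled)
                    return 0;

            /* wait a bit for the reclaimer. */
            msleep(100);
            stalled = true;

            /* We are about to die and free our memory. Return now. */
            if (fatal_signal_pending(current))
                    return SWAP_CLUSTER_MAX;
    }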
1940  lru_add_drain()
1942  spin_lock_irq(&pgdat->lru_lock): take the lock protecting the write-intensive fields used by page reclaim
1944  nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list, &nr_scanned, sc, lru): pgdat->lru_lock is heavily contended, so the shrinkers perform better by taking out a batch of pages and working on them outside the LRU lock; for pagecache-intensive workloads this function is the hottest
1947  __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken)
1948  reclaim_stat->recent_scanned[file] += nr_taken
1950  item = If current_is_kswapd() Then PGSCAN_KSWAPD Else PGSCAN_DIRECT
1951  If Not cgroup_reclaim(sc) Then __count_vm_events(item, nr_scanned)
1953  __count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned)
1954  spin_unlock_irq(&pgdat->lru_lock)
1956  If nr_taken == 0 Then Return 0
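
Lines 1942 through 1957 are the isolation phase under the LRU lock; a sketch of the span as it reads in the v5.5 source:

    spin_lock_irq(&pgdat->lru_lock);

    nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
                                 &nr_scanned, sc, lru);

    __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
    reclaim_stat->recent_scanned[file] += nr_taken;

    item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
    if (!cgroup_reclaim(sc))
            __count_vm_events(item, nr_scanned);
    __count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
    spin_unlock_irq(&pgdat->lru_lock);

    if (nr_taken == 0)
            return 0;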
1959  nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, 0, &stat, false): shrink_page_list() returns the number of reclaimed pages
1962  spin_lock_irq(&pgdat->lru_lock)
1964  item = If current_is_kswapd() Then PGSTEAL_KSWAPD Else PGSTEAL_DIRECT
1965  If Not cgroup_reclaim(sc) Then __count_vm_events(item, nr_reclaimed)
1967  __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed)
1968  reclaim_stat->recent_rotated[0] += stat.nr_activate[0] (the anon LRU stats live in [0])
1969  reclaim_stat->recent_rotated[1] += stat.nr_activate[1] (the file LRU stats live in [1]); the pageout code tracks how many of the mem/swap-backed and file-backed pages are referenced, and the higher the rotated/scanned ratio, the more valuable that cache is
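
Lines 1962 through 1969 retake the lock and account the steal events and rotation stats; sketched from the source:

    spin_lock_irq(&pgdat->lru_lock);

    item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
    if (!cgroup_reclaim(sc))
            __count_vm_events(item, nr_reclaimed);
    __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
    reclaim_stat->recent_rotated[0] += stat.nr_activate[0];  /* anon */
    reclaim_stat->recent_rotated[1] += stat.nr_activate[1];  /* file */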
1971  move_pages_to_lru(lruvec, &page_list): moves the pages remaining on page_list back to their corresponding LRU lists
1973  __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken)
1975  spin_unlock_irq(&pgdat->lru_lock)
1977  mem_cgroup_uncharge_list( & page_list)
1978  free_unref_page_list( & page_list)
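
Lines 1971 through 1978 put the surviving pages back on their LRU lists and free the reclaimed ones; a sketch of the span:

    move_pages_to_lru(lruvec, &page_list);

    __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);

    spin_unlock_irq(&pgdat->lru_lock);

    mem_cgroup_uncharge_list(&page_list);
    free_unref_page_list(&page_list);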
1991  If stat.nr_unqueued_dirty == nr_taken Then wakeup_flusher_threads(WB_REASON_VMSCAN): dirty pages scanned but not yet queued for IO imply the flusher threads are not keeping up, so nudge them in case they are asleep
1994  sc->nr.dirty += stat.nr_dirty
1995  sc->nr.congested += stat.nr_congested
1996  sc->nr.unqueued_dirty += stat.nr_unqueued_dirty
1997  sc->nr.writeback += stat.nr_writeback
1998  sc->nr.immediate += stat.nr_immediate
1999  sc->nr.taken += nr_taken
2000  If file Then sc->nr.file_taken += nr_taken
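
Lines 1991 through 2000 nudge the flusher threads when needed and fold the batch statistics into the scan_control for shrink_node() to act on; sketched from the source:

    if (stat.nr_unqueued_dirty == nr_taken)
            wakeup_flusher_threads(WB_REASON_VMSCAN);

    sc->nr.dirty += stat.nr_dirty;
    sc->nr.congested += stat.nr_congested;
    sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
    sc->nr.writeback += stat.nr_writeback;
    sc->nr.immediate += stat.nr_immediate;
    sc->nr.taken += nr_taken;
    if (file)
            sc->nr.file_taken += nr_taken;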
2003  trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id, nr_scanned, nr_reclaimed, &stat, sc->priority, file): sc->priority means scan (total_size >> priority) pages at once
2005  Return nr_reclaimed
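
The function ends by firing the tracepoint and returning the reclaim count; sketched from the source:

    trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
                    nr_scanned, nr_reclaimed, &stat, sc->priority, file);
    return nr_reclaimed;
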
Caller
Name           Description
shrink_list