Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code:mm/vmscan.c Create Date:2022-07-28 14:19:16
Last Modify:2022-05-23 13:41:30 Copyright©Brick

Name:shrink_node

Proto:static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)

Type:bool

Parameter:

Type | Name
pg_data_t * | pgdat
struct scan_control * | sc
2703  reclaim_state = current->reclaim_state
2706  bool reclaimable = false
2709  target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat) — get the LRU list vector for the target memcg & node; this is the node lruvec if the memory controller is disabled
2711  again:
2712  memset(&sc->nr, 0, sizeof(sc->nr))
2714  nr_reclaimed = sc->nr_reclaimed — number of pages freed so far during this call to shrink_zones()
2715  nr_scanned = sc->nr_scanned — number of inactive pages scanned so far
2721  If Not sc->force_deactivate Then
2724  If inactive_is_low(target_lruvec, LRU_INACTIVE_ANON) Then sc->may_deactivate |= DEACTIVATE_ANON — the inactive anon list should be small enough that the VM never has to do too much work
2726  Else sc->may_deactivate &= ~DEACTIVATE_ANON
2734  refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE)
2736  If refaults != target_lruvec->refaults || inactive_is_low(target_lruvec, LRU_INACTIVE_FILE) Then sc->may_deactivate |= DEACTIVATE_FILE — refaults since the last reclaim cycle, or an inactive file list too small to feed the workingset, allow deactivating active file pages
2739  Else sc->may_deactivate &= ~DEACTIVATE_FILE
2741  Else sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE
2749  file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE)
2750  If file >> sc->priority && Not (sc->may_deactivate & DEACTIVATE_FILE) Then sc->cache_trim_mode = 1 — there is easily reclaimable cold cache in the current node
2752  Else sc->cache_trim_mode = 0
2764  If Not cgroup_reclaim(sc) Then
2765  total_high_wmark = 0
2769  free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES)
2770  file = node_page_state(pgdat, NR_ACTIVE_FILE) + node_page_state(pgdat, NR_INACTIVE_FILE)
2773  When z < MAX_NR_ZONES cycle
2774  zone = &pgdat->node_zones[z]
2786  anon = node_page_state(pgdat, NR_INACTIVE_ANON)
2788  sc->file_is_tiny = file + free <= total_high_wmark && Not (sc->may_deactivate & DEACTIVATE_ANON) && anon >> sc->priority — the file pages on the current node are dangerously low
2794  shrink_node_memcgs(pgdat, sc)
2796  If reclaim_state Then
2797  sc->nr_reclaimed += reclaim_state->reclaimed_slab — count slab pages freed during this call
2798  reclaim_state->reclaimed_slab = 0
2802  vmpressure(sc->gfp_mask, sc->target_mem_cgroup, true, sc->nr_scanned - nr_scanned, sc->nr_reclaimed - nr_reclaimed) — report pressure for the memcg that hit its limit and is the primary target of this reclaim invocation
2806  If sc->nr_reclaimed - nr_reclaimed Then reclaimable = true
2809  If current_is_kswapd() Then
2827  If sc->nr.writeback && sc->nr.writeback == sc->nr.taken Then set_bit(PGDAT_WRITEBACK, &pgdat->flags) — atomically mark the node: every page taken off the LRU was under writeback
2831  If sc->nr.unqueued_dirty == sc->nr.file_taken Then set_bit(PGDAT_DIRTY, &pgdat->flags) — every file page taken was dirty but not queued for IO
2840  If sc->nr.immediate Then congestion_wait(BLK_RW_ASYNC, HZ/10) — wait for up to the timeout for a backing_dev to exit write congestion
2852  If (current_is_kswapd() || (cgroup_reclaim(sc) && writeback_throttling_sane(sc))) && sc->nr.dirty && sc->nr.dirty == sc->nr.congested Then set_bit(LRUVEC_CONGESTED, &target_lruvec->flags) — writeback_throttling_sane() checks whether the usual dirty-throttling mechanism in balance_dirty_pages() is available; it is completely broken with the legacy memcg and direct stalling is used instead
2863  If Not current_is_kswapd() && current_may_throttle() && Not sc->hibernation_mode && test_bit(LRUVEC_CONGESTED, &target_lruvec->flags) Then wait_iff_congested(BLK_RW_ASYNC, HZ/10) — a kernel thread (such as nfsd for loop-back mounts) servicing a backing device by writing to the page cache sets PF_LESS_THROTTLE; in that case throttle only if the backing device it is writing to is congested
2868  If should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed, sc) Then Go to again — reclaim/compaction is used for high-order allocation requests; it reclaims order-0 pages before compacting the zone, and should_continue_reclaim() returns true if more pages should be reclaimed
2878  If reclaimable Then pgdat->kswapd_failures = 0 — reset the number of 'reclaimed == 0' runs; kswapd gives up on balancing a node after too many such failures
2881  Return reclaimable
Caller
Name | Describe
shrink_zones | The direct reclaim path, for page-allocating processes; only tries to reclaim pages from zones which will satisfy the caller's allocation request
kswapd_shrink_node | kswapd shrinks a node of pages that are at or below the highest usable zone that is currently unbalanced; returns true if kswapd scanned at least the requested number of pages to reclaim, or if the lack of progress was due to pages under writeback
__node_reclaim | Try to free up some pages from this node through reclaim