Function report

Linux Kernel v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c  Create Date: 2022-07-28 16:03:26
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: deferred_split_scan

Proto: static unsigned long deferred_split_scan(struct shrinker *shrink, struct shrink_control *sc)

Type: unsigned long

Parameter:

Type                     Name
struct shrinker *        shrink
struct shrink_control *  sc
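
The numbered notes that follow walk the function body line by line. For orientation, here is a compact sketch of the whole routine, reconstructed from those notes and the v5.5-era mm/huge_memory.c; treat it as a reading aid rather than a verbatim copy of the v5.5.9 file.

    static unsigned long deferred_split_scan(struct shrinker *shrink,
                                             struct shrink_control *sc)
    {
        struct pglist_data *pgdata = NODE_DATA(sc->nid);
        struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
        unsigned long flags;
        LIST_HEAD(list), *pos, *next;
        struct page *page;
        int split = 0;

    #ifdef CONFIG_MEMCG
        if (sc->memcg)
            ds_queue = &sc->memcg->deferred_split_queue;
    #endif

        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        /* First pass: pin every head page so it cannot be freed under us. */
        list_for_each_safe(pos, next, &ds_queue->split_queue) {
            page = list_entry((void *)pos, struct page, mapping);
            page = compound_head(page);
            if (get_page_unless_zero(page)) {
                list_move(page_deferred_list(page), &list);
            } else {
                /* The page is already on its way to being freed. */
                list_del_init(page_deferred_list(page));
                ds_queue->split_queue_len--;
            }
            if (!--sc->nr_to_scan)
                break;
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

        /* Second pass: try to split each pinned page, lock not held. */
        list_for_each_safe(pos, next, &list) {
            page = list_entry((void *)pos, struct page, mapping);
            if (!trylock_page(page))
                goto next;
            /* split_huge_page() removes the page from the list on success. */
            if (!split_huge_page(page))
                split++;
            unlock_page(page);
    next:
            put_page(page);
        }

        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        /* Whatever could not be split goes back on the deferred queue. */
        list_splice_tail(&list, &ds_queue->split_queue);
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

        /*
         * Nothing was split but the queue is now empty: the pages must
         * have been freed under us, so tell the shrinker core to stop.
         */
        if (!split && list_empty(&ds_queue->split_queue))
            return SHRINK_STOP;
        return split;
    }

The two-pass structure is deliberate: the first loop only pins pages and moves their queue entries onto a private list while split_queue_lock is held, so the expensive split_huge_page() calls in the second loop can run without the lock.
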
2905  pgdata = NODE_DATA(sc->nid) (nid is the node being shrunk, for NUMA-aware shrinkers)
2906  ds_queue = &pgdata->deferred_split_queue
2908  LIST_HEAD(list), *pos, *next
2910  split = 0
2913  If sc->memcg (the memcg being shrunk, for memcg-aware shrinkers) Then ds_queue = &sc->memcg->deferred_split_queue
2917  spin_lock_irqsave(&ds_queue->split_queue_lock, flags)
2920  page = list_entry((void *)pos, struct page, mapping) (get the struct page the queue entry is embedded in)
2921  page = compound_head(page)
2922  If get_page_unless_zero(page) (take a pin on the head page unless its refcount is already zero) Then
2923  list_move(page_deferred_list(page), &list) (move the entry onto the local list)
2924  Else
2926  list_del_init(page_deferred_list(page)) (the page is already being freed; drop the stale entry)
2927  ds_queue->split_queue_len--
2929  If Not --sc->nr_to_scan (scan budget from the shrink_control is used up) Then Break
2932  spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags)
2935  page = list_entry((void *)pos, struct page, mapping)
2936  If Not trylock_page(page) Then Go to next
2939  If Not split_huge_page(page) Then split++
2941  unlock_page(page) (unlock the page and wake any waiters)
2942  next:
2943  put_page(page) (drop the pin taken earlier; frees the page, and any associated swap cache, if this was the last reference)
2946  spin_lock_irqsave(&ds_queue->split_queue_lock, flags)
2947  list_splice_tail(&list, &ds_queue->split_queue) (put the entries that were not split back on the queue)
2948  spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags)
2954  If Not split && list_empty(&ds_queue->split_queue) (nothing was split and the queue is now empty, e.g. because the pages were freed under us) Then Return SHRINK_STOP
2956  Return split
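
For context, deferred_split_scan() is the .scan_objects callback of the deferred-split shrinker. A minimal sketch of the surrounding declarations, as they appear in the v5.5-era tree (field names and flag values are taken from that tree and may differ slightly in v5.5.9):

    /*
     * Queue of THPs whose splitting was deferred; one per node and,
     * with CONFIG_MEMCG, one per memory cgroup.
     */
    struct deferred_split {
        spinlock_t split_queue_lock;
        struct list_head split_queue;
        unsigned long split_queue_len;
    };

    static struct shrinker deferred_split_shrinker = {
        .count_objects = deferred_split_count,
        .scan_objects  = deferred_split_scan,
        .seeks         = DEFAULT_SEEKS,
        .flags         = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
                         SHRINKER_NONSLAB,
    };

Under memory pressure the shrinker core (do_shrink_slab()) fills a struct shrink_control with the target node (nid), the target memcg (memcg) and a scan budget (nr_to_scan), then invokes the callback; returning SHRINK_STOP ends the scan early.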