Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/swapfile.c Create Date: 2022-07-28 15:20:19
Last Modify:2020-03-17 22:19:49 Copyright©Brick

Name: add_swap_count_continuation - called when a swap count is duplicated beyond SWAP_MAP_MAX; it allocates a new page and links that to the entry's page of the original vmalloc'ed swap_map, to hold the continuation count.

Proto:int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)

Type:int

Parameter:

Type           Name
swp_entry_t    entry
gfp_t          gfp_mask
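The walkthrough below keeps referring to the encoding of a swap_map count byte. For reference, these are the flag and limit values from include/linux/swap.h in v5.5 (the trailing comments are the kernel's own and match the descriptions used in the annotations below):

    #define SWAP_MAP_MAX    0x3e    /* Max duplication count, in first swap_map */
    #define SWAP_MAP_BAD    0x3f    /* Note pageblock is bad, in first swap_map */
    #define SWAP_HAS_CACHE  0x40    /* Flag page is cached, in first swap_map */
    #define SWAP_CONT_MAX   0x7f    /* Max count, in each swap_map continuation */
    #define COUNT_CONTINUED 0x80    /* See swap_map continuation for full count */
    #define SWAP_MAP_SHMEM  0xbf    /* Owned by shmem/tmpfs, in first swap_map */

In other words, the low bits of a first-level swap_map byte hold the duplication count, SWAP_HAS_CACHE marks a swap-cache page, and COUNT_CONTINUED means the full count continues in a continuation page chained off that swap_map page.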
3538  ret = 0
3544  page = alloc_page(gfp_mask | __GFP_HIGHMEM)
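Nothing is locked yet at line 3544, so the page can be allocated up front; the kernel's comment above this allocation explains why __GFP_ZERO is deliberately not used:

    /*
     * When debugging, it's easier to use __GFP_ZERO here; but it's better
     * for latency not to zero a page while GFP_ATOMIC and holding locks.
     */
    page = alloc_page(gfp_mask | __GFP_HIGHMEM);

__GFP_HIGHMEM is acceptable here because the continuation page is only ever touched through kmap_atomic() later on.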
3546  si = get_swap_device(entry) - check whether the swap entry is valid in its swap device and take a reference on that device
3547  If Not si Then
3552  Go to outer - an acceptable race has occurred since the failing __swap_duplicate(): the swap device may have been swapped off
3554  spin_lock(&si->lock) - si->lock protects the map scan related fields such as swap_map, lowest_bit, highest_bit, inuse_pages, cluster_next, cluster_nr and the free/discard cluster lists; other fields change only at swapon/swapoff under swap_lock
3556  offset = swp_offset(entry) - extract the `offset' field from the arch-independent swp_entry_t format
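swp_offset() just masks the offset bits out of the single-word entry value. A conceptual sketch only (SWP_OFFSET_BITS is a stand-in name for illustration, not the exact v5.5 macro):

    /* swp_entry_t packs the swap type in the high bits and the page offset
     * within that swap area in the low bits of entry.val (arch-independent
     * format). SWP_OFFSET_BITS is a hypothetical name for this sketch. */
    static inline pgoff_t swp_offset_sketch(swp_entry_t entry)
    {
        return entry.val & ((1UL << SWP_OFFSET_BITS) - 1);  /* keep only the offset bits */
    }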
3558  ci = lock_cluster(si, offset)
3560  count = si->swap_map[offset] & ~SWAP_HAS_CACHE - swap_map is the vmalloc'ed array of usage counts; SWAP_HAS_CACHE flags that the page is in the swap cache
3562  If (count & ~COUNT_CONTINUED) != SWAP_MAP_MAX Then
3568  Go to out
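The kernel's comment at this check explains the early bail-out: by the time the locks are held, another task may already have attached a continuation (or the count may have changed), and only a count that still reads exactly SWAP_MAP_MAX needs a new page. Restored from the v5.5 source, lines 3560-3568 read:

    count = si->swap_map[offset] & ~SWAP_HAS_CACHE;

    if ((count & ~COUNT_CONTINUED) != SWAP_MAP_MAX) {
        /*
         * The higher the swap count, the more likely it is that tasks
         * will race to add swap count continuation: we need to avoid
         * over-provisioning.
         */
        goto out;
    }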
3571  If Not page Then
3572  ret = -ENOMEM
3573  Go to out
3581  head = vmalloc_to_page(si->swap_map + offset) - walk the vmalloc'ed swap_map address to the struct page that backs it
3582  offset &= ~PAGE_MASK
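A continuation page mirrors the swap_map page it extends: the extra count digits for swap_map[offset] live at the same byte offset inside every page chained on head->lru. A short sketch of the addressing done in lines 3581-3582 (v5.5 layout):

    head = vmalloc_to_page(si->swap_map + offset);   /* struct page backing this part of swap_map */
    offset &= ~PAGE_MASK;                            /* ~PAGE_MASK == PAGE_SIZE - 1, i.e. offset % PAGE_SIZE */
    /* each continuation page chained on head->lru holds one extra count byte at this same offset */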
3584  spin_lock(&si->cont_lock) - cont_lock protects the swap count continuation page list
3589  If Not page_private(head) Then
3590  BUG_ON(count & COUNT_CONTINUED)
3591  INIT_LIST_HEAD(&head->lru)
3592  set_page_private(head, SWP_CONTINUED)
3593  si->flags |= SWP_CONTINUED
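Restored from the v5.5 source, the whole initialization branch (the comment plus lines 3589-3593) reads as below; page_private(head) serves as the "already initialized" test because page allocation always resets the private field, while the lru field is left alone:

    /*
     * Page allocation does not initialize the page's lru field,
     * but it does always reset its private field.
     */
    if (!page_private(head)) {
        BUG_ON(count & COUNT_CONTINUED);
        INIT_LIST_HEAD(&head->lru);             /* head->lru will chain the continuation pages */
        set_page_private(head, SWP_CONTINUED);  /* mark this swap_map page as having continuations */
        si->flags |= SWP_CONTINUED;             /* record on the device that continuation pages exist */
    }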
3603  If Not (count & COUNT_CONTINUED) Then Go to out_unlock_cont - the previous map said no continuation, but a continuation page exists here: use it and free the new allocation
3606  map = kmap_atomic(list_page) + offset
3607  count = *map
3608  kunmap_atomic(map) - kunmap_atomic() must be given the address returned by kmap_atomic(), not the page
3614  If (count & ~COUNT_CONTINUED) != SWAP_CONT_MAX Then Go to out_unlock_cont - this continuation count still has space in it: use this page
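Lines 3603-3614 do not stand on their own: in the source they run inside a list_for_each_entry() walk over the continuation pages already chained on head->lru, looking for one that still has room before falling through to attach the newly allocated page. Restored from the v5.5 source:

    list_for_each_entry(list_page, &head->lru, lru) {
        unsigned char *map;

        /* the previous map said no continuation, but a continuation page
         * exists: free our allocation and use this one */
        if (!(count & COUNT_CONTINUED))
            goto out_unlock_cont;

        map = kmap_atomic(list_page) + offset;   /* kunmap_atomic() needs this address, not the page */
        count = *map;
        kunmap_atomic(map);

        /* this continuation count still has some space in it: use this page */
        if ((count & ~COUNT_CONTINUED) != SWAP_CONT_MAX)
            goto out_unlock_cont;
    }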
3618  list_add_tail(&page->lru, &head->lru) - add the new continuation page at the tail of head's lru list
3619  page = NULL - now attached to the list; don't free it at outer
3620  out_unlock_cont :
3621  spin_unlock(&si->cont_lock)
3622  out :
3623  unlock_cluster(ci)
3624  spin_unlock(&si->lock)
3625  put_swap_device(si)
3626  outer :
3627  If page Then __free_page(page)
3629  Return ret
Caller
Name              Description
swap_duplicate    Increase reference count of swap entry by 1
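For context, the one caller listed above simply retries __swap_duplicate() after each successful continuation; restored from mm/swapfile.c (v5.5), with a shortened comment:

    /* Increase reference count of swap entry by 1: add a continuation page
     * and retry whenever the first-level count has hit SWAP_MAP_MAX. */
    int swap_duplicate(swp_entry_t entry)
    {
        int err = 0;

        while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
            err = add_swap_count_continuation(entry, GFP_ATOMIC);
        return err;
    }

Note that -EINVAL and -ENOENT from __swap_duplicate() end the loop with err still 0, so swap_duplicate() itself only reports the -ENOMEM case from add_swap_count_continuation().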