Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/swapfile.c  Create Date: 2022-07-28 15:17:05
Last Modify: 2020-03-17 22:19:49  Copyright © Brick

Name:scan_swap_map_slots

Proto:static int scan_swap_map_slots(struct swap_info_struct *si, unsigned char usage, int nr, swp_entry_t slots[])

Type:int

Parameter:

Type                        Parameter Name
struct swap_info_struct *   si
unsigned char               usage
int                         nr
swp_entry_t                 slots[]
739  last_in_cluster = 0
740  latency_ration = LATENCY_LIMIT
741  n_ret = 0
743  If nr > SWAP_BATCH Then nr = SWAP_BATCH
757  si->flags += SWP_SCANNING (mark the device as being scanned)
758  scan_base = offset = si->cluster_next (likely index for the next allocation)
761  If si->cluster_info (SSD algorithm; cluster info exists only for SSDs) Then
762  If scan_swap_map_try_ssd_cluster(si, &offset, &scan_base) (try to get a swap entry from the current CPU's swap entry pool, a cluster; this might involve allocating a new cluster for the current CPU) Then Go to checks
764  Else Go to scan
768  If unlikely(!si->cluster_nr--) (countdown to the next cluster search has expired) Then
771  Go to checks
774  spin_unlock(&si->lock) (drop the lock protecting the map-scan fields: swap_map, lowest_bit, highest_bit, inuse_pages, cluster_next, cluster_nr and the free/discard cluster lists)
782  scan_base = offset = si->lowest_bit (index of the first free slot in swap_map)
783  last_in_cluster = offset + SWAPFILE_CLUSTER - 1
789  Else if offset == last_in_cluster Then
802  offset = scan_base
803  spin_lock(&si->lock)
804  si->cluster_nr = SWAPFILE_CLUSTER - 1
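The non-SSD prologue above (lines 782-804) searches for the first entirely free, unaligned cluster: any used slot pushes the candidate cluster end (`last_in_cluster`) past itself, so a whole run of free slots has been seen exactly when `offset` catches up with `last_in_cluster`. A minimal user-space sketch of that search; `DEMO_CLUSTER` and `find_free_cluster` are hypothetical stand-ins (the kernel's `SWAPFILE_CLUSTER` is 256):

```c
#include <assert.h>

#define DEMO_CLUSTER 4 /* hypothetical stand-in for SWAPFILE_CLUSTER (256) */

/* Locate the first run of DEMO_CLUSTER consecutive free (zero) slots in
 * map[lowest..highest], using the kernel's last_in_cluster trick: every
 * used slot pushes the candidate cluster end past itself. Returns the
 * first offset of the free cluster, or -1 if no such run exists. */
static long find_free_cluster(const unsigned char *map,
			      unsigned long lowest, unsigned long highest)
{
	unsigned long offset = lowest;
	unsigned long last_in_cluster = offset + DEMO_CLUSTER - 1;

	for (; last_in_cluster <= highest; offset++) {
		if (map[offset])
			last_in_cluster = offset + DEMO_CLUSTER;
		else if (offset == last_in_cluster)
			return (long)(offset - (DEMO_CLUSTER - 1));
	}
	return -1;
}
```

Note the single pass: the scan never re-examines a slot, which is why the real function can afford to run it with `si->lock` dropped.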
807  checks :
808  If si->cluster_info Then
811  If n_ret Then Go to done (take a break if we already got some slots)
818  If Not (si->flags & SWP_WRITEOK) Then Go to no_page
820  If Not si->highest_bit (index of the last free slot in swap_map) Then Go to no_page
822  If offset > si->highest_bit Then scan_base = offset = si->lowest_bit
825  ci = lock_cluster(si, offset)
827  If vm_swap_full() && si->swap_map[offset] == SWAP_HAS_CACHE (the slot holds a cache-only swap entry) Then
829  unlock_cluster(ci)
830  spin_unlock(&si->lock)
831  swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY) (returns 1 if the swap entry was freed)
832  spin_lock(&si->lock)
834  If swap_was_freed Then Go to checks
836  Go to scan
839  If si->swap_map[offset] (the slot is already in use) Then
840  unlock_cluster(ci)
841  If Not n_ret Then Go to scan
843  Else Go to done
846  si->swap_map[offset] = usage
847  inc_cluster_info_page(si, si->cluster_info, offset) (the cluster corresponding to offset will be used: it is removed from the free-cluster list and its usage counter is increased)
848  unlock_cluster(ci)
850  swap_range_alloc(si, offset, 1)
851  si->cluster_next = offset + 1 (likely index for the next allocation)
852  slots[n_ret++] = swp_entry(si->type, offset) (store a type+offset pair into a swp_entry_t in an arch-independent format)
855  If n_ret == nr || offset >= si->highest_bit Then Go to done
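Line 852 packs the swap device type and the page offset into a single `swp_entry_t` value. A hedged user-space sketch of that encoding; `DEMO_TYPE_SHIFT` is a hypothetical stand-in for the kernel's `SWP_TYPE_SHIFT`, and the helper names are illustrative:

```c
#include <assert.h>

/* Sketch of the arch-independent swp_entry_t encoding: the device
 * "type" lives in the top bits and the page offset in the low bits of
 * one unsigned long. DEMO_TYPE_SHIFT is an assumption for this sketch;
 * the real value is SWP_TYPE_SHIFT in include/linux/swapops.h. */
typedef struct { unsigned long val; } swp_entry_t;

#define DEMO_TYPE_SHIFT 57UL
#define DEMO_OFFSET_MASK ((1UL << DEMO_TYPE_SHIFT) - 1)

static swp_entry_t demo_swp_entry(unsigned long type, unsigned long offset)
{
	swp_entry_t ret;

	ret.val = (type << DEMO_TYPE_SHIFT) | (offset & DEMO_OFFSET_MASK);
	return ret;
}

static unsigned long demo_swp_type(swp_entry_t entry)
{
	return entry.val >> DEMO_TYPE_SHIFT;
}

static unsigned long demo_swp_offset(swp_entry_t entry)
{
	return entry.val & DEMO_OFFSET_MASK;
}
```

Because the value is a plain integer, a swap entry can be stored anywhere a page table entry or XArray value fits, and decoding it back to (type, offset) is two mask/shift operations.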
861  If unlikely(--latency_ration < 0) (time to take a break?) Then
862  If n_ret Then Go to done
864  spin_unlock(&si->lock)
865  cond_resched()
866  spin_lock(&si->lock)
867  latency_ration = LATENCY_LIMIT
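Lines 861-867 implement the latency-bounding pattern used throughout this function: every `LATENCY_LIMIT` probes, the scan drops `si->lock` and calls `cond_resched()` so that neither the lock nor the CPU is held for too long. A stub user-space sketch that just counts the breaks; `demo_scan` is a hypothetical helper, and `DEMO_LATENCY_LIMIT` mirrors the kernel's `LATENCY_LIMIT` of 256:

```c
#include <assert.h>

#define DEMO_LATENCY_LIMIT 256 /* matches the kernel's LATENCY_LIMIT */

/* Count how many lock-drop/cond_resched() breaks a scan of nr_probes
 * slots would take under the kernel's latency-bounding pattern. */
static int demo_scan(unsigned long nr_probes)
{
	int latency_ration = DEMO_LATENCY_LIMIT;
	int breaks = 0;
	unsigned long i;

	for (i = 0; i < nr_probes; i++) {
		/* ... probe one slot (held under si->lock in the kernel) ... */
		if (--latency_ration < 0) {
			/* the real function does: spin_unlock(&si->lock);
			 * cond_resched(); spin_lock(&si->lock); here */
			breaks++;
			latency_ration = DEMO_LATENCY_LIMIT;
		}
	}
	return breaks;
}
```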
871  If si->cluster_info (try to get more slots from the SSD cluster) Then
872  If scan_swap_map_try_ssd_cluster(si, &offset, &scan_base) Then Go to checks
874  Else Go to done
878  ++offset
881  If si->cluster_nr && Not si->swap_map[offset] (non-SSD case: still more free slots in this cluster) Then
882  --si->cluster_nr
883  Go to checks
886  done :
887  si->flags -= SWP_SCANNING
888  Return n_ret
890  scan :
891  spin_unlock(&si->lock)
892  While ++offset <= si->highest_bit
895  Go to checks
899  Go to checks
902  cond_resched()
906  offset = si->lowest_bit (wrap around and scan the low end)
907  While offset < scan_base
910  Go to checks
914  Go to checks
917  cond_resched()
920  offset++
922  spin_lock(&si->lock)
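The scan section (lines 891-922) probes forward from the current offset up to `highest_bit`, then wraps to `lowest_bit` and continues up to (but not including) `scan_base`, so every slot is visited at most once. A simplified sketch of that wraparound order, without the locking, reclaim checks, and latency breaks; `wrap_scan` is a hypothetical helper:

```c
#include <assert.h>

/* Probe map[offset+1 .. highest], then wrap to map[lowest .. scan_base-1],
 * mirroring the two loops of the scan: section. Returns the first free
 * (zero) slot found, or -1 if the whole range is in use. */
static long wrap_scan(const unsigned char *map, unsigned long lowest,
		      unsigned long highest, unsigned long offset,
		      unsigned long scan_base)
{
	while (++offset <= highest)
		if (!map[offset])
			return (long)offset;
	for (offset = lowest; offset < scan_base; offset++)
		if (!map[offset])
			return (long)offset;
	return -1;
}
```

Stopping the second loop at `scan_base` rather than at the starting offset is what guarantees termination: `scan_base` records where the whole allocation attempt began.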
924  no_page :
925  si->flags -= SWP_SCANNING
926  Return n_ret
Caller

Name             Describe
scan_swap_map    Allocate a single swap slot
get_swap_pages   Allocate a batch of swap slots