Function report

Linux Kernel v5.5.9

Source Code: include/linux/list.h

Name: list_move - delete from one list and add as another's head
  @list: the entry to move
  @head: the head that will precede our entry

Proto: static inline void list_move(struct list_head *list, struct list_head *head)

Type: void

Parameter:

Type | Parameter Name
struct list_head * | list
struct list_head * | head
201  deletes entry from list
202  list_add - add a new entry after the specified head; inserting after the head is good for implementing stacks
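
Taken together, the annotated lines 201-202 implement the classic unlink-then-relink move: the entry is deleted from whatever list currently holds it, then re-added immediately after @head. As a usage sketch (the struct, list names, and helper below are hypothetical, not taken from this report), list_move() lets a caller shift an element from one queue to the front of another without freeing and reallocating it:

    #include <linux/list.h>

    struct item {
            int value;
            struct list_head node;  /* linkage that list_move() operates on */
    };

    static LIST_HEAD(pending);      /* hypothetical source list */
    static LIST_HEAD(active);       /* hypothetical destination list */

    /* Promote the oldest pending item: unlink it from 'pending' and
     * re-insert it at the head of 'active' in a single call. */
    static void promote_first_pending(void)
    {
            struct item *it;

            if (list_empty(&pending))
                    return;

            it = list_first_entry(&pending, struct item, node);
            list_move(&it->node, &active);
    }

Because the insertion goes after @head, repeated moves onto the same list behave like pushes onto a stack, which is why the list_add annotation above mentions stacks.
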
Caller
Name | Describe
lc_del | Removes an element from the cache. @e must be unused (refcnt == 0); moves @e from the "lru" to the "free" list and sets @e->enr to %LC_FREE.
lc_prepare_for_change
__lc_get
lc_committed | Tells @lc that pending changes have been recorded. The user is expected to serialize on an explicit lc_try_lock_for_transaction() before the transaction is started, and later needs to lc_unlock() explicitly.
lc_put | Gives up a refcnt of @e. If the refcnt reaches zero, the element is moved to the lru list and a %LC_STARVING flag (if set) is cleared. Returns the new (post-decrement) refcnt. (See the sketch after this table.)
lc_set | Associates an index with a label. Used to initialize the active set to some previously recorded state.
parman_prio_shift_up
list_test_list_move
arch_unoptimize_kprobes | Recovers original instructions and breakpoints from relative jumps. The caller must hold kprobe_mutex.
requeue_rt_entity | Puts a task at the head or the end of the run list without the overhead of a dequeue followed by an enqueue.
torture_ww_mutex_lock
stress_reorder_work
audit_remove_tree_rule | Called with audit_filter_mutex held.
evict_chunk | Handles the work that is asynchronous to auditctl operations.
unoptimize_kprobe | Unoptimizes a kprobe if @p is optimized.
__bpf_lru_node_move_to_free
__bpf_lru_node_move_in | Moves nodes from the local list to the LRU list.
__bpf_lru_node_move | Moves nodes between or within the active and inactive lists (e.g. active to inactive, inactive to active, or from the tail of the active list back to its head).
bpf_common_lru_push_free
perf_event_release_kernel | Kills an event dead; while event:refcount will preserve the event object, it will not preserve its functionality. Once the last 'user' gives up the object, we'll destroy the thing.
reclaim_clean_pages_from_list
isolate_lru_pages | pgdat->lru_lock is heavily contended; some of the functions that shrink the lists perform better by taking out a batch of pages and working on them outside the LRU lock. For pagecache-intensive workloads, this function is the hottest.
move_pages_to_lru | Moves pages from @list to the corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast.
reclaim_pages
__pcpu_chunk_move
pcpu_balance_workfn | Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations, and at most one empty chunk.
list_lru_isolate_move
enqueue_huge_page
dequeue_huge_page_node_exact
alloc_huge_page
__ksm_exit
free_block | The caller needs to acquire the correct kmem_cache_node's list_lock; @list collects detached free slabs, which should be freed by the caller.
__kmem_cache_shrink | kmem_cache_shrink discards empty slabs and promotes the most-filled slabs to the head of the partial lists. New allocations will then fill those up, so they can be removed from the partial lists. The slabs with the fewest items are placed last.
deferred_split_scan
hugetlb_cgroup_migrate | hugetlb_lock makes sure a parallel cgroup rmdir won't happen while we migrate hugepages.
dd_merged_requests
kyber_insert_requests
__d_move | Moves a dentry (@dentry: entry to move; @target: new dentry; @exchange: exchange the two dentries) and updates the dcache to reflect the move of a file name.
umount_tree | mount_lock must be held; namespace_sem must be held for write.
mark_mounts_for_expiry | Processes a list of expirable mountpoints with the intent of discarding any that aren't in use and haven't been touched since we last came here.
scan_positives | Returns an element of the siblings' list: looks for the next positive dentry after the given position. If found, the dentry is grabbed and returned to the caller; if no such element exists, NULL is returned.
dcache_dir_lseek
inode_io_list_move_locked | Moves an inode onto a bdi_writeback IO list: moves @inode->i_io_list onto @head (one of @wb->b_{dirty|io|more_io|dirty_time}) and sets %WB_has_dirty_io.
move_expired_inodes | Moves expired (dirtied before work->older_than_this) dirty inodes from @delaying_queue to @dispatch_queue.
do_make_slave
fsnotify_clear_marks_by_group | Clears any marks in a group with the given type mask.
userfaultfd_ctx_read
io_cqring_overflow_flush | Returns true if there are no backlogged entries after the flush.
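
Several of the callers above (lc_put, __bpf_lru_node_move_to_free, list_lru_isolate_move) share the refcount-to-lru pattern that lc_put describes: when an element becomes unused, it is moved onto an lru or free list via list_move() instead of being freed outright. Below is a minimal sketch of that pattern, with hypothetical types and list names rather than the actual lru_cache implementation:

    #include <linux/list.h>

    struct cached_elem {
            unsigned int refcnt;
            struct list_head list;
    };

    static LIST_HEAD(in_use);       /* hypothetical list of referenced elements */
    static LIST_HEAD(lru);          /* hypothetical lru list of idle elements */

    /* Drop one reference; once the element is unused, park it on the lru
     * list with list_move() so it can later be reclaimed or reused. */
    static unsigned int elem_put(struct cached_elem *e)
    {
            if (--e->refcnt == 0)
                    list_move(&e->list, &lru);
            return e->refcnt;       /* post-decrement refcnt, as lc_put does */
    }

Because list_move() only relinks the node, the element's storage stays valid across the move, which is what makes this cheap compared to a free-and-reallocate cycle.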