Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h    Create Date: 2022-07-27 06:38:25
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: list_move - move a list entry to the head of another list

Function prototype: static inline void list_move(struct list_head *list, struct list_head *head)

Return type: void

Parameters:

Type                 Name
struct list_head *   list
struct list_head *   head
201  delete the entry from its current list
202  add the entry to the head of the new list
Callers

Name  Description
lc_del - removes an element from the cache
lc_prepare_for_change
__lc_get
lc_committed - tell @lc that pending changes have been recorded. @lc: the lru cache to operate on. User is expected to serialize on explicit lc_try_lock_for_transaction() before the transaction is started, and later needs to lc_unlock() explicitly
lc_put - release the usage counter
lc_set - set the associated index
parman_prio_shift_up
list_test_list_move
arch_unoptimize_kprobes - Recover original instructions and breakpoints from relative jumps. Caller must call with locking kprobe_mutex
requeue_rt_entity - Put task to the head or the end of the run list without the overhead of dequeue followed by enqueue
torture_ww_mutex_lock
stress_reorder_work
audit_remove_tree_rule - Called with audit_filter_mutex
evict_chunk - Here comes the stuff asynchronous to auditctl operations
unoptimize_kprobe - Unoptimize a kprobe if p is optimized
__bpf_lru_node_move_to_free
__bpf_lru_node_move_in - Move nodes from local list to the LRU list
__bpf_lru_node_move - Move nodes between or within active and inactive list (like active to inactive, inactive to active or tail of active back to the head of active)
bpf_common_lru_push_free
perf_event_release_kernel - Kill an event dead; while event:refcount will preserve the event object, it will not preserve its functionality. Once the last 'user' gives up the object, we'll destroy the thing
reclaim_clean_pages_from_list
isolate_lru_pages - pgdat->lru_lock is heavily contended. Some of the functions that shrink the lists perform better by taking out a batch of pages and working on them outside the LRU lock. For pagecache intensive workloads, this function is the hottest
move_pages_to_lru - This moves pages from @list to corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast
reclaim_pages
__pcpu_chunk_move
pcpu_balance_workfn - Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one empty chunk
list_lru_isolate_move
enqueue_huge_page
dequeue_huge_page_node_exact
alloc_huge_page
__ksm_exit
free_block - Caller needs to acquire correct kmem_cache_node's list_lock. @list: List of detached free slabs should be freed by caller
__kmem_cache_shrink - kmem_cache_shrink discards empty slabs and promotes the slabs filled up most to the head of the partial lists. New allocations will then fill those up and thus they can be removed from the partial lists. The slabs with the least items are placed last
deferred_split_scan
hugetlb_cgroup_migrate - hugetlb_lock will make sure a parallel cgroup rmdir won't happen when we migrate hugepages
dd_merged_requests
kyber_insert_requests
__d_move - move a dentry. @dentry: entry to move. @target: new dentry. @exchange: exchange the two dentries. Update the dcache to reflect the move of a file name
umount_tree - mount_lock must be held; namespace_sem must be held for write
mark_mounts_for_expiry - process a list of expirable mountpoints with the intent of discarding any mountpoints that aren't in use and haven't been touched since last we came here
scan_positives - Returns an element of siblings' list. We are looking for <n>th positive after <p>; if found, dentry is grabbed and returned to caller. If no such element exists, NULL is returned
dcache_dir_lseek
inode_io_list_move_locked - move an inode onto a bdi_writeback IO list. @inode: inode to be moved. @wb: target bdi_writeback. @head: one of @wb->b_{dirty|io|more_io|dirty_time}. Move @inode->i_io_list to @list of @wb and set %WB_has_dirty_io
move_expired_inodes - Move expired (dirtied before work->older_than_this) dirty inodes from @delaying_queue to @dispatch_queue
do_make_slave
fsnotify_clear_marks_by_group - Clear any marks in a group with given type mask
userfaultfd_ctx_read
io_cqring_overflow_flush - Returns true if there are no backlogged entries after the flush
move_to_free_area - Used for pages which are on another list
flow_block_cb_remove