Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h    Create Date: 2022-07-27 06:38:25
Last Modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: list_del_init (deletes an entry from its list and reinitializes it)

Prototype: static inline void list_del_init(struct list_head *entry)

Return type: void

Parameters:

Type                  Name
struct list_head *    entry

190  delete the entry from its list
191  reinitialize the list head
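The two annotated source lines correspond to the two steps of the helper: unlink the entry from whatever list it is on, then reinitialize it so it points back at itself. A minimal sketch of the body in include/linux/list.h for kernels of this era:

    static inline void list_del_init(struct list_head *entry)
    {
            __list_del_entry(entry);   /* line 190: unlink entry from its list */
            INIT_LIST_HEAD(entry);     /* line 191: entry->next = entry->prev = entry */
    }

Unlike list_del(), which poisons the pointers, list_del_init() leaves the entry in a valid empty state, so a caller may test it with list_empty() and add it to a list again later.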
Callers

Name: Description
plist_del: remove a node from the plist
kobj_kset_leave: move the kobject from its kset's list
test_update_node
shadow_remove
module_unload_ei_list
ddebug_table_free
sbitmap_del_wait_queue
list_test_list_del_init
arch_optimize_kprobes: replace breakpoints (int3) with relative jumps; caller must call with kprobe_mutex and text_mutex locked
__unhash_process
find_child_reaper
exit_notify: send signals to all our closest relatives so that they know to properly mourn us
__ptrace_unlink: unlink ptracee and restore its execution state; @child: ptracee to be unlinked; remove @child from the ptrace list, move it back to the original parent, and restore the execution state so that it conforms to the group stop state
flush_sigqueue
collect_signal
dequeue_synchronous_signal
flush_sigqueue_mask: remove signals in mask from the pending set and queue; returns 1 if any signals were found; all callers must be holding the siglock
try_to_grab_pending: steal work item from worklist and disable irq; @work: work item to steal; @is_dwork: @work is a delayed_work; @flags: place to store irq state; try to grab the PENDING bit of @work; this function can handle @work in any
worker_leave_idle: leave idle state; @worker: worker which is leaving idle state; update stats; LOCKING: spin_lock_irq(pool->lock)
destroy_worker: destroy a workqueue worker; @worker: worker to be destroyed; destroy @worker and adjust @pool stats accordingly; the worker should be idle; CONTEXT: spin_lock_irq(pool->lock)
process_one_work: process single work; @worker: self; @work: work to process
rescuer_thread: the rescuer thread function; @__rescuer: self; workqueue rescuer thread function
flush_workqueue: ensure that any scheduled work has run to completion; @wq: workqueue to flush; this function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones
kthreadd
kthread_worker_fn: kthread function to process a kthread_worker; @worker_ptr: pointer to initialized kthread_worker; this function implements the main cycle of the kthread worker, processing work_list until it is stopped with kthread_stop()
kthread_delayed_work_timer_fn: callback that queues the associated kthread delayed work when the timer expires; @t: pointer to the expired timer; the format of the function is defined by struct timer_list
__kthread_cancel_work: removes the work from the worker queue
async_run_entry_fn: pick the first pending entry and run it
__delist_rt_entity
prepare_to_wait_event
finish_wait: clean up after waiting in a queue; @wq_head: waitqueue waited on; @wq_entry: wait descriptor; sets the current thread back to the running state and removes the wait descriptor from the given waitqueue if still queued
autoremove_wake_function
swake_up_locked: the thing about the wake_up_state() return value; I think we can ignore it; if for some reason it would return 0, that means the previously waiting task is already running, so it will observe condition true (or has already)
swake_up_all: does not allow usage from IRQ disabled, since we must be able to release IRQs to guarantee bounded hold time
prepare_to_swait_event
__finish_swait
finish_swait
mutex_remove_waiter
srcu_init
srcu_init: queue work for srcu_struct structures with early boot callbacks; the work won't actually execute until the workqueue initialization phase that takes place after the scheduler starts
rcu_torture_alloc: allocate an element from the rcu_tortures pool
clocksource_unbind: unbind clocksource @cs; called with clocksource_mutex held
run_posix_cpu_timers: this is called from the timer interrupt handler; the irq handler has already updated our counts; we need to check if any timers fire now; interrupts are disabled
clockevents_replace: try to install a replacement clock event device
__clockevents_try_unbind: called with clockevents_mutex and clockevents_lock held
put_pi_state: drops a reference to the pi_state object and frees or caches it when the last reference is gone
wake_futex_pi: caller must hold a reference on @pi_state
fixup_pi_state_owner
css_set_move_task: move a task from one css_set to another; @task: task being moved; @from_cset: css_set @task currently belongs to (may be NULL); @to_cset: new css_set @task is being moved to (may be NULL); @use_mg_tasks: move to @to_cset->mg_tasks instead
cgroup_migrate_execute (cgroup_taskset_migrate): migrate a taskset; @mgctx: migration context; migrate tasks in @mgctx as setup by migration preparation functions; this function fails iff one of the ->can_attach callbacks fails and
cgroup_migrate_finish: cleanup after attach; @mgctx: migration context; undo cgroup_migrate_add_src() and cgroup_migrate_prepare_dst(); see those functions for details
cgroup_migrate_prepare_dst: prepare destination css_sets for migration; @mgctx: migration context; tasks are about to be moved and all the source css_sets have been preloaded to @mgctx->preloaded_src_csets
cgroup_release
rdmacg_unregister_device: unregister rdmacg device from rdma controller
cpu_stopper_thread
remove_chunk_node
untag_chunk
kill_rules
trim_marked: trim the uncommitted chunks from tree
audit_remove_tree_rule: called with audit_filter_mutex
prune_tree_thread: that gets run when evict_chunk() ends up needing to kill audit_tree; runs from a separate thread
audit_add_tree_rule: called with audit_filter_mutex
audit_kill_trees: ... and that one is done if evict_chunk() decides to delay until the end of syscall; runs synchronously
evict_chunk: here comes the stuff asynchronous to auditctl operations
do_unoptimize_kprobes: unoptimize (replace a jump with a breakpoint and remove the breakpoint if needed) kprobes listed on unoptimizing_list
do_free_cleaned_kprobes: reclaim all kprobes on the free_list
optimize_kprobe: optimize kprobe if p is ready to be optimized
unoptimize_kprobe: unoptimize a kprobe if p is optimized
kill_optimized_kprobe: remove optimized instructions
__rb_allocate_pages
rb_free_cpu_buffer
rb_insert_pages
ring_buffer_resize: resize the ring buffer; @buffer: the buffer to resize; @size: the new size; @cpu_id: the cpu buffer to resize; minimum size is 2 * BUF_PAGE_SIZE; returns 0 on success and < 0 on failure
unregister_event_command: currently we only unregister event commands from __init, so mark this __init too
trace_probe_append
trace_probe_unlink
prog_array_map_poke_untrack
prog_array_map_free
__bpf_prog_offload_destroy
__bpf_map_offload_destroy
perf_event_ctx_deactivate
perf_group_detach
event_sched_out
perf_remove_from_owner: remove user event from the owner task
perf_event_exit_event
perf_event_exit_task: when a child task exits, feed back event values to parent events; can be called with cred_guard_mutex held when called from install_exec_creds()
perf_free_event
padata_parallel_worker
padata_find_next: find the next object that needs serialization
padata_serial_worker
list_lru_del
list_lru_isolate
drain_mmlist: after a successful try_to_unuse, if no swap is now in use, we know we can empty the mmlist; swap_lock must be held on entry and exit; note that mmlist_lock nests inside swap_lock, and an mm must be
deferred_split_scan
memcg_event_wake: gets called on EPOLLHUP on eventfd when user closes it; called with wqh->lock held and interrupts disabled
mem_cgroup_css_offline
remove_zspage: this function removes the given zspage from the freelist identified by .
__release_z3fold_page
release_z3fold_page_locked_list
do_compact_page
__z3fold_alloc: returns _locked_ z3fold page header or NULL
z3fold_free: frees the allocation associated with the given handle; @pool: pool in which the allocation resided; @handle: handle associated with the allocation returned by z3fold_alloc(); in the case that the z3fold page in which the allocation resides
z3fold_reclaim_page: evicts allocations from a pool page and frees it; @pool: pool from which a page will attempt to be evicted; @retries: number of pages on the LRU list for which eviction will be attempted before failing; z3fold reclaim is different
z3fold_page_isolate
z3fold_page_putback
elv_unregister
blk_flush_complete_seq: complete flush sequence; @rq: PREFLUSH/FUA request being sequenced; @fq: flush queue; @seq: sequences to complete (mask of %REQ_FSEQ_*, can be zero); @error: whether an error occurred; @rq just completed @seq part of its flush sequence
ioc_destroy_icq: release an icq; called with ioc locked for blk-mq, and with both ioc and queue locked for legacy
blk_done_softirq: softirq action handler; move entries to a local list and loop over them while passing them to the queue's registered handler
blk_mq_requeue_work
dispatch_rq_from_ctx
blk_mq_dispatch_wake
blk_mq_mark_tag_wait: mark us waiting for a tag; for shared tags, this involves hooking us into the tag wakeups; for non-shared tags, we can simply mark us needing a restart; for both cases, take care to check the condition again after marking us as waiting
blk_mq_dispatch_rq_list: returns true if we did some work AND can potentially do more
blk_mq_flush_plug_list
blk_mq_try_issue_list_directly
blk_mq_make_request
blk_mq_free_rqs
blk_mq_release: it is the actual release handler for mq, but we do it from the request queue's release handler for avoiding use-after-free and headache because q->mq_kobj shouldn't have been introduced, but we can't group ctx/kctx kobj without it
blk_mq_alloc_and_init_hctx
disk_del_events
rq_qos_wake_function
blkg_destroy
throtl_pop_queued: pop the first bio from a qnode list; @queued: the qnode list to pop a bio from; @tg_to_put: optional out argument for throtl_grp to put; pop the first bio from the qnode list @queued
iocg_wake_fn
ioc_timer_fn
ioc_pd_free
deadline_remove_request: remove rq from rbtree and fifo
__dd_dispatch_request: deadline_dispatch_requests selects the best request according to read/write expire, fifo_batch, etc.
dd_insert_requests
kyber_dispatch_cur_domain
bfq_remove_request
bfq_requests_merged: this function is called to notify the scheduler that the requests rq and 'next' have been merged, with 'next' going away
__bfq_dispatch_request
bfq_insert_requests
unregister_key_type: unregister a type of key; @ktype: the key type; unregister a key type and mark all the extant keys of this type as dead; those keys of this type are then destroyed to get rid of their payloads and
key_free_user_ns: clean up the bits of user_namespace that belong to us
inode_free_security
sb_finish_set_opts
tomoyo_write_answer: write the supervisor's decision; @head: pointer to "struct tomoyo_io_buffer"; returns 0 on success, -EINVAL otherwise
__aa_fs_remove_rawdata
__replace_profile: replace @old with @new on a list; @old: profile to be replaced (NOT NULL); @new: profile to replace @old with (NOT NULL); @share_proxy: transfer @old->proxy to @new; will duplicate and refcount elements that @new inherits from @old
aa_replace_profiles: replace profile(s) on the profile list; @policy_ns: namespace the load is occurring on; @label: label that is attempting to load/replace policy; @mask: permission mask; @udata: serialized data stream (NOT NULL); unpack and replace a profile
aa_unpack: unpack packed binary profile(s) data loaded from user space; @udata: user data copied to kmem (NOT NULL); @lh: list to place unpacked profiles in a aa_repl_ws; @ns: returns namespace profile is in if specified else NULL (NOT NULL)
__put_super: drop a superblock's refcount; the caller must hold sb_lock
cd_forget
cdev_purge
d_shrink_del
inode_sb_list_del
dispose_list: dispose of the contents of a local list; @head: the head of the list to free; dispose_list gets a local list with local inodes in it, so it doesn't need to worry about list corruption and SMP locks
unhash_mnt: vfsmount lock must be held for write
mnt_change_mountpoint
umount_tree: mount_lock must be held; namespace_sem must be held for write
attach_recursive_mnt: @source_mnt: mount tree to be attached; @nd: place the mount tree @source_mnt is attached; @parent_nd: if non-null, detach the source_mnt from its parent and store the parent mount and mountpoint dentry
do_move_mount
finish_automount
SYSCALL_DEFINE2 (pivot_root): semantics: moves the root file system of the current process to the directory put_old, makes new_root the new root file system of the current process, and sets root/cwd of all processes which had them on the current root to new_root
dcache_dir_lseek
dcache_readdir: directory is locked and all positive dentries in it are safe, since for ramfs-type trees they can't go away without unlink() or rmdir(), both impossible due to the lock on directory
inode_io_list_del_locked: remove an inode from its bdi_writeback IO list; @inode: inode to be removed; @wb: bdi_writeback @inode is being removed from; remove @inode which may be on one of @wb->b_{dirty|io|more_io} lists and
sb_clear_inode_writeback: clear an inode as under writeback on the sb
get_next_work_item: return the next wb_writeback_work struct that hasn't been processed yet
do_make_slave
change_mnt_propagation: vfsmount lock must be held for write
umount_one
restore_mounts
cleanup_umount_visitations
__remove_assoc_queue: the buffer's backing address_space's private_lock must be held
__bforget: bforget() is like brelse(), except it discards any potentially dirty data
bdev_evict_inode
fsnotify_remove_queued_event
fsnotify_detach_mark: mark mark as detached, remove it from group list
fsnotify_add_mark_locked: attach an initialized mark to a given group and fs object; these marks may be used for the fsnotify backend to determine which event types should be delivered to which group
fsnotify_mark_destroy_workfn
process_access_response
fanotify_release
ep_remove: removes a "struct epitem" from the eventpoll RB tree and deallocates all the associated resources; must be called with "mtx" held
ep_read_events_proc
ep_poll_callback: this is the callback that is passed to the wait queue wakeup mechanism
ep_insert: must be called with "mtx" held
ep_send_events_proc
clear_tfile_check_list
userfaultfd_wake_function
free_ioctx_users: when this function runs, the kioctx has been removed from the "hash table" and ctx->users has dropped to 0, so we know no more kiocbs can be submitted; now it's safe to cancel any that need to be
aio_poll_complete_work
aio_poll_cancel: assumes we are called with irqs disabled
aio_poll_wake
aio_poll
SYSCALL_DEFINE3 (sys_io_cancel): attempts to cancel an iocb previously passed to io_submit; if the operation is successfully cancelled, the resulting event is copied into the memory pointed to by result without being placed into the completion queue and 0 is returned
io_get_deferred_req
io_get_timeout_req
io_kill_timeout
io_req_link_next
io_fail_links: called if REQ_F_LINK is set, and we fail the head request
io_poll_remove_one
io_poll_wake
io_poll_add
io_timeout_fn
io_timeout_cancel
io_link_timeout_fn
locks_dispose_list
__locks_delete_block: remove waiter from blocker's block list; when blocker ends up pointing to itself then the list is empty; must be called with blocked_lock_lock held
locks_unlink_lock_ctx
kill_node
mb_cache_entry_delete: remove a cache entry; @cache: cache we work with; @key: key; @value: value; remove entry from cache @cache with key @key and value @value
mb_cache_shrink
locks_end_grace: @net: net namespace that this lock manager belongs to; @lm: who this grace period is for; call this function to state that the given lock manager is ready to resume regular locking; the grace period will not end until all lock
iomap_finish_ioends
iomap_writepage_map: we implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations which can violate forward progress guarantees we need to provide
remove_free_dquot
clear_dquot_dirty
put_dquot_list: free list of dquots; dquots are removed from inodes and no new references can be got so we are the only ones holding reference
dyn_event_remove
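Most of the callers above follow the same pattern: detach an item from a shared list under a lock, leaving the node in a reusable state so it can be tested with list_empty() or queued again later. The following is a hypothetical illustration (the struct, list, and function names are invented for this example and do not appear in the kernel):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct work_item {
            struct list_head node;   /* linked into the pending list */
            int payload;
    };

    static LIST_HEAD(pending);               /* hypothetical pending list */
    static DEFINE_SPINLOCK(pending_lock);

    /* Detach one item; the node stays valid, so it can be re-queued later. */
    static struct work_item *grab_pending(void)
    {
            struct work_item *item = NULL;

            spin_lock(&pending_lock);
            if (!list_empty(&pending)) {
                    item = list_first_entry(&pending, struct work_item, node);
                    list_del_init(&item->node);   /* unlink and reinitialize */
            }
            spin_unlock(&pending_lock);
            return item;
    }

Because list_del_init() reinitializes the node rather than poisoning it, a later list_empty(&item->node) check reliably reports whether the item is still queued.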