Function Logic Report
Source Code: include/linux/list.h
Create Date: 2022-07-27 06:38:25
Last Modified: 2020-03-12 14:18:49 | Copyright © Brick
Function name: list_del_init (delete the list entry and reinitialize it)
Function prototype: static inline void list_del_init(struct list_head *entry)
Return type: void
Parameters:

Type | Name |
---|---|
struct list_head * | entry |
Line | Logic |
---|---|
190 | Delete the list entry |
191 | Initialize the list head |
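For reference, a minimal sketch of the body those two annotated lines describe, matching include/linux/list.h in kernels of this era (exact line numbers vary by version):

```c
/**
 * list_del_init - deletes entry from list and reinitialize it.
 * @entry: the element to delete from the list.
 */
static inline void list_del_init(struct list_head *entry)
{
	__list_del_entry(entry);	/* line 190: unlink entry from its list */
	INIT_LIST_HEAD(entry);		/* line 191: point entry back at itself, leaving it an empty, reusable list */
}
```

Unlike list_del(), which poisons the entry's pointers, list_del_init() leaves @entry in a valid empty state, so it can be re-added or tested with list_empty() afterwards.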
Callers:

Name | Description |
---|---|
plist_del | Remove a node from a plist |
kobj_kset_leave | move the kobject from its kset's list |
test_update_node | |
shadow_remove | |
module_unload_ei_list | |
ddebug_table_free | |
sbitmap_del_wait_queue | |
list_test_list_del_init | |
arch_optimize_kprobes | Replace breakpoints (int3) with relative jumps. Caller must call with kprobe_mutex and text_mutex locked. |
__unhash_process | |
find_child_reaper | |
exit_notify | Send signals to all our closest relatives so that they know to properly mourn us. |
__ptrace_unlink | __ptrace_unlink - unlink ptracee and restore its execution state. @child: ptracee to be unlinked. Remove @child from the ptrace list, move it back to the original parent, and restore the execution state so that it conforms to the group stop state |
flush_sigqueue | |
collect_signal | |
dequeue_synchronous_signal | |
flush_sigqueue_mask | Remove signals in mask from the pending set and queue. Returns 1 if any signals were found. All callers must be holding the siglock. |
try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq. @work: work item to steal; @is_dwork: @work is a delayed_work; @flags: place to store irq state. Try to grab the PENDING bit of @work. This function can handle @work in any stable state. |
worker_leave_idle | worker_leave_idle - leave idle state. @worker: worker which is leaving idle state. Update stats. LOCKING: spin_lock_irq(pool->lock). |
destroy_worker | destroy_worker - destroy a workqueue worker. @worker: worker to be destroyed. Destroy @worker and adjust @pool stats accordingly. The worker should be idle. CONTEXT: spin_lock_irq(pool->lock). |
process_one_work | process_one_work - process single work. @worker: self; @work: work to process. Process @work |
rescuer_thread | rescuer_thread - the rescuer thread function. @__rescuer: self. Workqueue rescuer thread function |
flush_workqueue | flush_workqueue - ensure that any scheduled work has run to completion. @wq: workqueue to flush. This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones. |
kthreadd | |
kthread_worker_fn | kthread_worker_fn - kthread function to process kthread_worker. @worker_ptr: pointer to initialized kthread_worker. This function implements the main cycle of kthread worker. It processes work_list until it is stopped with kthread_stop() |
kthread_delayed_work_timer_fn | kthread_delayed_work_timer_fn - callback that queues the associated kthread delayed work when the timer expires. @t: pointer to the expired timer. The format of the function is defined by struct timer_list. |
__kthread_cancel_work | This function removes the work from the worker queue |
async_run_entry_fn | pick the first pending entry and run it |
__delist_rt_entity | |
prepare_to_wait_event | |
finish_wait | finish_wait - clean up after waiting in a queue. @wq_head: waitqueue waited on; @wq_entry: wait descriptor. Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued. |
autoremove_wake_function | |
swake_up_locked | The thing about the wake_up_state() return value; I think we can ignore it. If for some reason it would return 0, that means the previously waiting task is already running, so it will observe condition true (or has already). |
swake_up_all | Does not allow usage from IRQ disabled, since we must be able to release IRQs to guarantee bounded hold time. |
prepare_to_swait_event | |
__finish_swait | |
finish_swait | |
mutex_remove_waiter | |
srcu_init | |
srcu_init | Queue work for srcu_struct structures with early boot callbacks. The work won't actually execute until the workqueue initialization phase that takes place after the scheduler starts. |
rcu_torture_alloc | Allocate an element from the rcu_tortures pool. |
clocksource_unbind | Unbind clocksource @cs. Called with clocksource_mutex held |
run_posix_cpu_timers | This is called from the timer interrupt handler. The irq handler has already updated our counts. We need to check if any timers fire now. Interrupts are disabled. |
clockevents_replace | Try to install a replacement clock event device |
__clockevents_try_unbind | Called with clockevents_mutex and clockevents_lock held |
put_pi_state | Drops a reference to the pi_state object and frees or caches it when the last reference is gone. |
wake_futex_pi | Caller must hold a reference on @pi_state. |
fixup_pi_state_owner | |
css_set_move_task | css_set_move_task - move a task from one css_set to another. @task: task being moved; @from_cset: css_set @task currently belongs to (may be NULL); @to_cset: new css_set @task is being moved to (may be NULL); @use_mg_tasks: move to @to_cset->mg_tasks instead |
cgroup_migrate_execute | cgroup_taskset_migrate - migrate a taskset. @mgctx: migration context. Migrate tasks in @mgctx as set up by migration preparation functions. This function fails iff one of the ->can_attach callbacks fails |
cgroup_migrate_finish | cgroup_migrate_finish - cleanup after attach. @mgctx: migration context. Undo cgroup_migrate_add_src() and cgroup_migrate_prepare_dst(). See those functions for details. |
cgroup_migrate_prepare_dst | cgroup_migrate_prepare_dst - prepare destination css_sets for migration. @mgctx: migration context. Tasks are about to be moved and all the source css_sets have been preloaded to @mgctx->preloaded_src_csets |
cgroup_release | |
rdmacg_unregister_device | rdmacg_unregister_device - unregister rdmacg device from rdma controller |
cpu_stopper_thread | |
remove_chunk_node | |
untag_chunk | |
kill_rules | |
trim_marked | Trim the uncommitted chunks from the tree |
audit_remove_tree_rule | Called with audit_filter_mutex |
prune_tree_thread | That gets run when evict_chunk() ends up needing to kill audit_tree. Runs from a separate thread. |
audit_add_tree_rule | Called with audit_filter_mutex |
audit_kill_trees | ... and that one is done if evict_chunk() decides to delay until the end of syscall. Runs synchronously. |
evict_chunk | Here comes the stuff asynchronous to auditctl operations |
do_unoptimize_kprobes | Unoptimize (replace a jump with a breakpoint and remove the breakpoint if needed) kprobes listed on unoptimizing_list. |
do_free_cleaned_kprobes | Reclaim all kprobes on the free_list |
optimize_kprobe | Optimize kprobe if p is ready to be optimized |
unoptimize_kprobe | Unoptimize a kprobe if p is optimized |
kill_optimized_kprobe | Remove optimized instructions |
__rb_allocate_pages | |
rb_free_cpu_buffer | |
rb_insert_pages | |
ring_buffer_resize | ring_buffer_resize - resize the ring buffer. @buffer: the buffer to resize; @size: the new size; @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure. |
unregister_event_command | Currently we only unregister event commands from __init, so mark* this __init too. |
trace_probe_append | |
trace_probe_unlink | |
prog_array_map_poke_untrack | |
prog_array_map_free | |
__bpf_prog_offload_destroy | |
__bpf_map_offload_destroy | |
perf_event_ctx_deactivate | |
perf_group_detach | |
event_sched_out | |
perf_remove_from_owner | Remove user event from the owner task. |
perf_event_exit_event | |
perf_event_exit_task | When a child task exits, feed back event values to parent events. Can be called with cred_guard_mutex held when called from install_exec_creds(). |
perf_free_event | |
padata_parallel_worker | |
padata_find_next | padata_find_next - Find the next object that needs serialization |
padata_serial_worker | |
list_lru_del | |
list_lru_isolate | |
drain_mmlist | After a successful try_to_unuse, if no swap is now in use, we know we can empty the mmlist. swap_lock must be held on entry and exit. Note that mmlist_lock nests inside swap_lock, and an mm must be |
deferred_split_scan | |
memcg_event_wake | Gets called on EPOLLHUP on eventfd when user closes it. Called with wqh->lock held and interrupts disabled. |
mem_cgroup_css_offline | |
remove_zspage | This function removes the given zspage from the freelist identified by |
__release_z3fold_page | |
release_z3fold_page_locked_list | |
do_compact_page | |
__z3fold_alloc | Returns _locked_ z3fold page header or NULL |
z3fold_free | z3fold_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided; @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides |
z3fold_reclaim_page | z3fold_reclaim_page() - evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted; @retries: number of pages on the LRU list for which eviction will be attempted before failing. z3fold reclaim is different |
z3fold_page_isolate | |
z3fold_page_putback | |
elv_unregister | |
blk_flush_complete_seq | blk_flush_complete_seq - complete flush sequence. @rq: PREFLUSH/FUA request being sequenced; @fq: flush queue; @seq: sequences to complete (mask of %REQ_FSEQ_*, can be zero); @error: whether an error occurred. @rq just completed @seq part of its flush sequence |
ioc_destroy_icq | Release an icq. Called with ioc locked for blk-mq, and with both ioc and queue locked for legacy. |
blk_done_softirq | Softirq action handler - move entries to local list and loop over them while passing them to the queue registered handler. |
blk_mq_requeue_work | |
dispatch_rq_from_ctx | |
blk_mq_dispatch_wake | |
blk_mq_mark_tag_wait | Mark us waiting for a tag. For shared tags, this involves hooking us into the tag wakeups. For non-shared tags, we can simply mark us needing a restart. For both cases, take care to check the condition again after marking us as waiting. |
blk_mq_dispatch_rq_list | Returns true if we did some work AND can potentially do more. |
blk_mq_flush_plug_list | |
blk_mq_try_issue_list_directly | |
blk_mq_make_request | |
blk_mq_free_rqs | |
blk_mq_release | It is the actual release handler for mq, but we do it from request queue's release handler for avoiding use-after-free and headache because q->mq_kobj shouldn't have been introduced, but we can't group ctx/kctx kobj without it. |
blk_mq_alloc_and_init_hctx | |
disk_del_events | |
rq_qos_wake_function | |
blkg_destroy | |
throtl_pop_queued | throtl_pop_queued - pop the first bio from a qnode list. @queued: the qnode list to pop a bio from; @tg_to_put: optional out argument for throtl_grp to put. Pop the first bio from the qnode list @queued |
iocg_wake_fn | |
ioc_timer_fn | |
ioc_pd_free | |
deadline_remove_request | remove rq from rbtree and fifo. |
__dd_dispatch_request | deadline_dispatch_requests selects the best request according to read/write expire, fifo_batch, etc. |
dd_insert_requests | |
kyber_dispatch_cur_domain | |
bfq_remove_request | |
bfq_requests_merged | This function is called to notify the scheduler that the requests rq and 'next' have been merged, with 'next' going away |
__bfq_dispatch_request | |
bfq_insert_requests | |
unregister_key_type | unregister_key_type - Unregister a type of key. @ktype: The key type. Unregister a key type and mark all the extant keys of this type as dead. Those keys of this type are then destroyed to get rid of their payloads and |
key_free_user_ns | Clean up the bits of user_namespace that belong to us. |
inode_free_security | |
sb_finish_set_opts | |
tomoyo_write_answer | tomoyo_write_answer - Write the supervisor's decision. @head: Pointer to "struct tomoyo_io_buffer". Returns 0 on success, -EINVAL otherwise. |
__aa_fs_remove_rawdata | |
__replace_profile | __replace_profile - replace @old with @new on a list. @old: profile to be replaced (NOT NULL); @new: profile to replace @old with (NOT NULL); @share_proxy: transfer @old->proxy to @new. Will duplicate and refcount elements that @new inherits from @old |
aa_replace_profiles | aa_replace_profiles - replace profile(s) on the profile list. @policy_ns: namespace load is occurring on; @label: label that is attempting to load/replace policy; @mask: permission mask; @udata: serialized data stream (NOT NULL). Unpack and replace a profile |
aa_unpack | aa_unpack - unpack packed binary profile(s) data loaded from user space. @udata: user data copied to kmem (NOT NULL); @lh: list to place unpacked profiles in an aa_repl_ws; @ns: Returns namespace profile is in if specified else NULL (NOT NULL). Unpack user |
__put_super | Drop a superblock's refcount. The caller must hold sb_lock. |
cd_forget | |
cdev_purge | |
d_shrink_del | |
inode_sb_list_del | |
dispose_list | dispose_list - dispose of the contents of a local list. @head: the head of the list to free. Dispose-list gets a local list with local inodes in it, so it doesn't need to worry about list corruption and SMP locks. |
unhash_mnt | vfsmount lock must be held for write |
mnt_change_mountpoint | |
umount_tree | mount_lock must be held; namespace_sem must be held for write |
attach_recursive_mnt | @source_mnt: mount tree to be attached; @nd: place the mount tree @source_mnt is attached; @parent_nd: if non-null, detach the source_mnt from its parent and store the parent mount and mountpoint dentry |
do_move_mount | |
finish_automount | |
SYSCALL_DEFINE2 | pivot_root semantics: Moves the root file system of the current process to the directory put_old, makes new_root the new root file system of the current process, and sets root/cwd of all processes which had them on the current root to new_root |
dcache_dir_lseek | |
dcache_readdir | Directory is locked and all positive dentries in it are safe, since for ramfs-type trees they can't go away without unlink() or rmdir(), both impossible due to the lock on directory. |
inode_io_list_del_locked | inode_io_list_del_locked - remove an inode from its bdi_writeback IO list. @inode: inode to be removed; @wb: bdi_writeback @inode is being removed from. Remove @inode which may be on one of @wb->b_{dirty|io|more_io} lists and |
sb_clear_inode_writeback | Clear an inode as under writeback on the sb |
get_next_work_item | Return the next wb_writeback_work struct that hasn't been processed yet. |
do_make_slave | |
change_mnt_propagation | vfsmount lock must be held for write |
umount_one | |
restore_mounts | |
cleanup_umount_visitations | |
__remove_assoc_queue | The buffer's backing address_space's private_lock must be held |
__bforget | bforget() is like brelse(), except it discards any potentially dirty data. |
bdev_evict_inode | |
fsnotify_remove_queued_event | |
fsnotify_detach_mark | Mark mark as detached, remove it from group list |
fsnotify_add_mark_locked | Attach an initialized mark to a given group and fs object. These marks may be used for the fsnotify backend to determine which event types should be delivered to which group. |
fsnotify_mark_destroy_workfn | |
process_access_response | |
fanotify_release | |
ep_remove | Removes a "struct epitem" from the eventpoll RB tree and deallocates all the associated resources. Must be called with "mtx" held. |
ep_read_events_proc | |
ep_poll_callback | This is the callback that is passed to the wait queue wakeup mechanism |
ep_insert | Must be called with "mtx" held. |
ep_send_events_proc | |
clear_tfile_check_list | |
userfaultfd_wake_function | |
free_ioctx_users | When this function runs, the kioctx has been removed from the "hash table" and ctx->users has dropped to 0, so we know no more kiocbs can be submitted - now it's safe to cancel any that need to be. |
aio_poll_complete_work | |
aio_poll_cancel | assumes we are called with irqs disabled |
aio_poll_wake | |
aio_poll | |
SYSCALL_DEFINE3 | sys_io_cancel: Attempts to cancel an iocb previously passed to io_submit. If the operation is successfully cancelled, the resulting event is copied into the memory pointed to by result without being placed into the completion queue and 0 is returned |
io_get_deferred_req | |
io_get_timeout_req | |
io_kill_timeout | |
io_req_link_next | |
io_fail_links | Called if REQ_F_LINK is set, and we fail the head request |
io_poll_remove_one | |
io_poll_wake | |
io_poll_add | |
io_timeout_fn | |
io_timeout_cancel | |
io_link_timeout_fn | |
locks_dispose_list | |
__locks_delete_block | Remove waiter from blocker's block list. When blocker ends up pointing to itself then the list is empty. Must be called with blocked_lock_lock held. |
locks_unlink_lock_ctx | |
kill_node | |
mb_cache_entry_delete | mb_cache_entry_delete - remove a cache entry. @cache: cache we work with; @key: key; @value: value. Remove entry from cache @cache with key @key and value @value. |
mb_cache_shrink | |
locks_end_grace | locks_end_grace. @net: net namespace that this lock manager belongs to; @lm: who this grace period is for. Call this function to state that the given lock manager is ready to resume regular locking. The grace period will not end until all lock |
iomap_finish_ioends | |
iomap_writepage_map | We implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations which can violate forward progress guarantees we need to provide |
remove_free_dquot | |
clear_dquot_dirty | |
put_dquot_list | Free list of dquots. Dquots are removed from inodes and no new references can be got so we are the only ones holding reference |
dyn_event_remove | |
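A pattern common to the callers above is pairing list_del_init() with a later list_empty() check on the node itself: because the node is reinitialized rather than poisoned, list_empty(&node) can answer "is this still queued?". A minimal self-contained sketch of that pattern (a userspace re-implementation for illustration only; the types and helpers mirror include/linux/list.h):

```c
#include <stdio.h>

/* Userspace mirror of the kernel's circular doubly linked list. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void __list_del_entry(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Unlink entry and reinitialize it, as in the reported function. */
static void list_del_init(struct list_head *entry)
{
	__list_del_entry(entry);
	INIT_LIST_HEAD(entry);
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

int main(void)
{
	struct list_head queue, node;

	INIT_LIST_HEAD(&queue);
	INIT_LIST_HEAD(&node);

	list_add_tail(&node, &queue);
	printf("queued: %d\n", !list_empty(&node));	/* queued: 1 */

	list_del_init(&node);	/* node stays valid and re-addable */
	printf("queued: %d\n", !list_empty(&node));	/* queued: 0 */
	return 0;
}
```

This is why so many of the callers listed above (wait queues, work items, timeouts) favor list_del_init() over list_del(): after removal the node remains a valid empty list that can be tested or re-queued without being re-initialized by hand.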