Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h    Create Date: 2022-07-27 06:38:27
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: hlist_add_head

Function prototype: static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)

Return type: void

Parameters:

Type                 Parameter name
struct hlist_node *  n
struct hlist_head *  h
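For reference, the two parameter types are the generic kernel hlist types. In v5.5 they are defined in include/linux/types.h essentially as follows (reproduced here for context only, not part of list.h):

struct hlist_head {
    struct hlist_node *first;    /* first node in the chain, or NULL if the list is empty */
};

struct hlist_node {
    struct hlist_node *next;     /* next node in the chain, or NULL */
    struct hlist_node **pprev;   /* address of the previous node's next field (or of head->first) */
};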
784  Read the head's current first node: first = h->first
785  Point the new node's next at the old first node: n->next = first
786  If first is not NULL (the list was not empty), set first->pprev = &n->next
788  Publish the new node as the list head: WRITE_ONCE(h->first, n)
789  Point the new node's pprev back at the head: n->pprev = &h->first
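Putting these steps together, the body of hlist_add_head in include/linux/list.h (v5.5) reads approximately as follows; line 787, the assignment guarded by the test on line 786, is shown explicitly:

static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
    struct hlist_node *first = h->first;    /* 784: current first node, may be NULL */

    n->next = first;                        /* 785: the new node will precede the old first */
    if (first)                              /* 786: only if the list was not empty */
        first->pprev = &n->next;            /* 787: old first now points back at n->next */
    WRITE_ONCE(h->first, n);                /* 788: publish n as the new first node */
    n->pprev = &h->first;                   /* 789: n points back at the head itself */
}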
Callers
Name  Description
fill_pool
alloc_object  Allocate a new object. If the pool is empty, switch off the debugger. Must be called with interrupts disabled.
free_obj_work
__free_object
debug_objects_early_init  Initialize debug kernel related information
debug_objects_replace_static_objects  Convert the statically allocated objects to dynamic ones
lc_prepare_for_change
lc_set  Set the associated index
kvm_async_pf_task_wait  @interrupt_kernel: Is this called from a routine which interrupts the kernel (other than user space)?
kvm_async_pf_task_wake
copy_process  Create a process
__cpuhp_state_add_instance_cpuslocked
uid_hash_insert  These routines must be called with the uidhash spinlock held!
get_ucounts
enqueue_timer  Enqueue the timer into the hash bucket, mark it pending in the bitmap and store the index in the timer flags.
recycle_rp_inst
register_trace_event  Register output for an event type. @event: the event type to register. Event types are stored in a hash and this hash is used to find a way to print an event
bpf_trampoline_lookup
bpf_trampoline_link_prog
user_return_notifier_register  Request a notification when the current cpu returns to userspace. Must be called in atomic context. The notifier will also be called in atomic context.
__mmu_interval_notifier_insert
mmu_interval_notifier_remove  Remove an interval notifier. @mni: Interval notifier to unregister. This function must be paired with mmu_interval_notifier_insert()
stable_node_chain_add_dup
stable_node_dup
stable_tree_append  Add another rmap_item to the linked list of rmap_items hanging off a given node of the stable tree, all sharing the same ksm page.
add_scan_area  Add a scanning area to the object. If at least one such area is added, kmemleak will only scan these ranges rather than the whole memory block.
cma_add_to_cma_mem_list
ioc_create_icq  Create and link io_cq. @ioc: io_context of interest. @q: request_queue of interest. @gfp_mask: allocation mask. Make sure io_cq linking @ioc and @q exists
bsg_add_device
bfq_reset_burst_list  Empty burst list and add just bfqq (see comments on bfq_handle_burst)
bfq_add_to_burst  Add bfqq to the list of queues in current burst (see bfq_handle_burst)
bfq_add_request
bfq_get_bfqq_handle_split
sget_fc  Find or create a superblock. @fc: Filesystem context
sget  Find or create a superblock
__d_instantiate
__d_instantiate_anon
__d_add
__insert_inode_hash  Insert an inode into the hash table
inode_insert5  Obtain an inode from a mounted file system. @inode: pre-allocated inode to use for insert to cache. @hashval: hash value (usually inode number) to get. @test: callback used for comparisons between inodes. @set: callback used to initialize a new
iget_locked  Obtain an inode from the file system
insert_inode_locked
get_mountpoint
mnt_set_mountpoint  vfsmount lock must be held for write
mntput_no_expire
umount_tree  mount_lock must be held; namespace_sem must be held for write
__detach_mounts  Lazily unmount all mounts on the specified dentry. During unlink, rmdir, and d_drop it is possible to lose the path to an existing mountpoint, and wind up leaking the mount
propagate_one
pin_insert
io_poll_req_insert
locks_insert_global_locks  Must be called with the flc_lock held!
insert_dquot_hash  Following list functions expect dq_list_lock to be held
__sk_add_node
sk_add_bind_node
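Every caller above follows the same pattern: it embeds a struct hlist_node in its own object and uses hlist_add_head to push that object onto a bucket's struct hlist_head, typically under a lock protecting the bucket. A minimal, self-contained sketch of that pattern (the struct, table and function names here are hypothetical, not taken from any caller above):

#include <linux/list.h>

#define MY_HASH_BITS  8
#define MY_HASH_SIZE  (1U << MY_HASH_BITS)

struct my_entry {
    unsigned long key;
    struct hlist_node node;                       /* links the entry into one hash bucket */
};

static struct hlist_head my_table[MY_HASH_SIZE];  /* each bucket is an hlist_head, zero-initialized (empty) */

/* Insert the entry at the front of its bucket; the caller is assumed to
 * hold whatever lock protects my_table. */
static void my_table_insert(struct my_entry *e)
{
    struct hlist_head *bucket = &my_table[e->key & (MY_HASH_SIZE - 1)];

    hlist_add_head(&e->node, bucket);
}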