Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h    Create Date: 2022-07-27 06:38:26
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: list_empty (test whether a list is empty)

Prototype: static inline int list_empty(const struct list_head *head)

Return type: int

Parameters:

Type                          Name
const struct list_head *      head
268  Return: READ_ONCE(head->next) == head
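
The body reported above (source line 268) is a single comparison: the list is empty when its head node points back to itself. A minimal sketch of the definition, reconstructed from the prototype and the return expression given in this report (READ_ONCE() makes the lockless read of head->next explicit):

static inline int list_empty(const struct list_head *head)
{
        /* Empty list: the head's next pointer refers back to the head itself. */
        return READ_ONCE(head->next) == head;
}

Because the read is not serialised against concurrent writers, callers that need a stable answer typically re-check under the lock that protects the list.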
Callers

Name and description (a representative usage sketch follows this list)
radix_tree_shrinkShrink the height of a radix tree to the minimum
delete_node
radix_tree_free_nodes
xa_node_free
xas_destroyxas_destroy() - Free any resources allocated during the XArray operation.*@xas: XArray operation state.* This function is now internal-only.
xas_nomemxas_nomem() - Allocate memory if needed.*@xas: XArray operation state.*@gfp: Memory allocation flags.* If we need to add new nodes to the XArray, we try to allocate memory* with GFP_NOWAIT while holding the lock, which will usually succeed.
__xas_nomem__xas_nomem() - Drop locks and allocate memory if needed.*@xas: XArray operation state.*@gfp: Memory allocation flags.* Internal variant of xas_nomem().* Return: true if memory was needed, and was successfully allocated.
xas_update
xas_alloc
plist_addAdd a node to the head
plist_delRemove a node from the plist
test_update_node
check_workingset
kunit_cleanup
string_stream_is_empty
kunit_resource_test_init_resources
kunit_resource_test_destroy_resource
kunit_resource_test_cleanup_resources
ddebug_iter_firstSet the iterator to point to the first _ddebug object* and return a pointer to that first object. Returns* NULL if there are no _ddebugs at all.
ddebug_remove_all_tables
lc_prepare_for_change
lc_unused_element_available
irq_poll_softirq
parman_prio_used
parman_destroyparman_destroy - destroys existing parman instance*@parman: parman instance* Note: all locking must be provided by the caller.
objagg_destroyobjagg_destroy - destroys an objagg instance*@objagg: objagg instance* Note: all locking must be provided by the caller.
list_test_list_move
list_test_list_move_tail
list_test_list_empty
list_test_list_for_each_safe
list_test_list_for_each_prev_safe
__register_nmi_handler
show_saved_mc
mkdir_mondata_subdir
rdtgroup_rmdir_mon
alloc_rmidAs of now the RMIDs allocation is global.* However we keep track of which packages the RMIDs* are used to optimize the limbo list management.
rdtgroup_monitor_in_progressrdtgroup_monitor_in_progress - Test if monitoring in progress*@r: resource group being queried* Return: 1 if monitor groups have been created for this resource* group, 0 otherwise.
print_IO_APICs
mp_irqdomain_free
__recover_optprobed_insn
__mmput
forget_original_parentThis does two things:* A. Make init inherit all the child processes* B. Check to see if any process groups have become orphaned* as a result of our exiting, and if they have any stopped* jobs, send them a SIGHUP and then a SIGCONT. (POSIX 3.2.2.2)
__ptrace_link
flush_sigqueue
sigqueue_free
send_sigqueue
need_more_workerNeed to wake up a worker? Called from anything but currently* running workers.* Note that, because unbound workers never contribute to nr_running, this* function will always return %true for unbound pools as long as the* worklist isn't empty.
keep_workingDo I need to keep working? Called from currently running workers.
first_idle_workerReturn the first idle worker. Safe with preemption disabled
wq_worker_sleepingA worker is preparing to go to sleep
pwq_activate_delayed_work
pwq_dec_nr_in_flightpwq_dec_nr_in_flight - decrement pwq's nr_in_flight*@pwq: pwq of interest*@color: color of work which left the queue* A work either has completed or is removed from pending queue,* decrement nr_in_flight of its pwq and handle workqueue flushing.* CONTEXT:
__queue_work
__queue_delayed_work
worker_enter_idleworker_enter_idle - enter idle state*@worker: worker which is entering idle state*@worker is entering idle state. Update stats and idle timer if* necessary.* LOCKING:* spin_lock_irq(pool->lock).
worker_detach_from_poolworker_detach_from_pool() - detach a worker from its pool*@worker: worker which is attached to its pool* Undo the attaching which had been done in worker_attach_to_pool(). The* caller worker shouldn't access to the pool after detached except it has
destroy_workerdestroy_worker - destroy a workqueue worker*@worker: worker to be destroyed* Destroy @worker and adjust @pool stats accordingly. The worker should* be idle.* CONTEXT:* spin_lock_irq(pool->lock).
send_mayday
process_scheduled_worksprocess_scheduled_works - process scheduled works*@worker: self* Process all scheduled works
worker_thread
rescuer_threadrescuer_thread - the rescuer thread function*@__rescuer: self* Workqueue rescuer thread function
flush_workqueueflush_workqueue - ensure that any scheduled work has run to completion.*@wq: workqueue to flush* This function sleeps until all work items which were queued on entry* have finished execution, but it is not livelocked by new incoming ones.
drain_workqueuedrain_workqueue - drain a workqueue*@wq: workqueue to drain* Wait until the workqueue becomes empty. While draining is in progress,* only chain queueing is allowed. IOW, only currently pending or running
put_unbound_poolput_unbound_pool - put a worker_pool*@pool: worker_pool to put* Put @pool
pwq_unbound_release_workfnScheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt* and needs to be destroyed.
pwq_adjust_max_activepwq_adjust_max_active - update a pwq's max_active to the current setting*@pwq: target pool_workqueue* If @pwq isn't freezing, set @pwq->max_active to the associated* workqueue's saved_max_active and activate delayed work items* accordingly
link_pwqsync @pwq with the current state of its associated wq and link it
apply_workqueue_attrs_locked
pwq_busy
workqueue_congestedworkqueue_congested - test whether a workqueue is congested*@cpu: CPU in question*@wq: target workqueue* Test whether @wq's cpu workqueue for @cpu is congested. There is* no synchronization around this function and the test result is
show_pwq
show_workqueue_stateshow_workqueue_state - dump workqueue state* Called from a sysrq handler or try_to_freeze_tasks() and prints out* all busy workqueues and pools.
kthreadd
kthread_worker_fnkthread_worker_fn - kthread function to process kthread_worker*@worker_ptr: pointer to initialized kthread_worker* This function implements the main cycle of kthread worker. It processes* work_list until it is stopped with kthread_stop()
queuing_blockedReturns true when the work could not be queued at the moment.* It happens when it is already pending in a worker list* or when it is being cancelled.
kthread_insert_work_sanity_check
kthread_delayed_work_timer_fnkthread_delayed_work_timer_fn - callback that queues the associated kthread* delayed work when the timer expires.*@t: pointer to the expired timer* The format of the function is defined by struct timer_list.
kthread_flush_workkthread_flush_work - flush a kthread_work*@work: work to flush* If @work is queued or executing, wait for it to finish execution.
__kthread_cancel_workThis function removes the work from the worker queue
kthread_destroy_workerkthread_destroy_worker - destroy a kthread worker*@worker: worker to be destroyed* Flush and destroy @worker. The simple flush is enough because the kthread* worker API is used only in trivial scenarios. There are no multi-step state* machines needed.
lowest_in_progress
async_unregister_domainasync_unregister_domain - ensure no more anonymous waiters on this domain*@domain: idle domain to flush out of any async_synchronize_full instances* async_synchronize_{cookie|full}_domain() are not flushed since callers* of these routines should know the
__delist_rt_entity
prepare_to_waitNote: we use "set_current_state()" _after_ the wait-queue add,* because we need a memory barrier there on SMP, so that any* wake-function that tests for the wait-queue being active* will be guaranteed to see waitqueue addition _or_ subsequent
prepare_to_wait_exclusive
prepare_to_wait_event
do_wait_intrNote! These two wait functions are entered with the* wait-queue lock held (and interrupts off in the _irq* case), so there is no race with testing the wakeup* condition in the caller before they add the wait* entry to the wake queue.
do_wait_intr_irq
swake_up_lockedThe thing about the wake_up_state() return value; I think we can ignore it.* If for some reason it would return 0, that means the previously waiting* task is already running, so it will observe condition true (or has already).
swake_up_allDoes not allow usage from IRQ disabled, since we must be able to* release IRQs to guarantee bounded hold time.
__prepare_to_swait
__finish_swait
psi_trigger_destroy
__mutex_lock_commonLock a mutex (possibly interruptible), slowpath:
__mutex_unlock_slowpath
upup - release the semaphore*@sem: the semaphore to release* Release the semaphore. Unlike mutexes, up() may be called from any* context and even by tasks which have never called down().
rwsem_mark_wakehandle the lock release when processes blocked on it that can now run* - if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must* have been set
rwsem_down_read_slowpathWait for the read lock to be granted
rwsem_down_write_slowpathWait until we successfully acquire the write lock
rwsem_wakehandle waking up a waiter on the semaphore* - up_read/up_write has decremented the active part of count if we come here
rwsem_downgrade_wakedowngrade a write lock into a read lock* - caller incremented waiting part of count and discovered it still negative* - just wake up any readers at the front of the queue
debug_mutex_wake_waiter
debug_mutex_free_waiter
mutex_remove_waiter
register_lock_classRegister a lock's class in the hash-table, if the class is not present* yet. Otherwise we look it up. We cache the result in the lock object* itself, so actual lookup of the hash should be once per lock object.
zap_classRemove all references to a lock class. The caller must hold the graph lock.
reinit_class
call_rcu_zappedSchedule an RCU callback if no RCU callback is pending. Must be called with* the graph lock held.
pm_qos_update_flagspm_qos_update_flags - Update a set of PM QoS flags
pm_vt_switchThere are three cases when a VT switch on suspend/resume are required:* 1) no driver has indicated a requirement one way or another, so preserve* the old behavior* 2) console suspend is disabled, we want to see debug messages across* suspend/resume* 3)
__register_nosave_region__register_nosave_region - Register a region of unsaveable memory.* Register a range of page frames the contents of which should not be saved* during hibernation (to be used in the early initialization code).
mark_nosave_pagesmark_nosave_pages - Mark pages that should not be saved.*@bm: Memory bitmap.* Set the bits in @bm that correspond to the page frames the contents of which* should not be saved.
srcu_funnel_gp_startFunnel-locking scheme to scalably mediate many concurrent grace-period* requests
srcu_init
call_srcuEnqueue an SRCU callback on the specified srcu_struct structure,* initiating grace-period processing if it is not already running.
srcu_initQueue work for srcu_struct structures with early boot callbacks.* The work won't actually execute until the workqueue initialization* phase that takes place after the scheduler starts.
rcu_torture_allocAllocate an element from the rcu_tortures pool.
rcu_torture_writerRCU torture writer kthread. Repeatedly substitutes a new structure* for that pointed to by rcu_torture_current, freeing the old structure* after a series of grace periods (the "pipeline").
rcu_torture_stats_printPrint torture statistics
klp_free_patch_startThis function implements the free operations that can be called safely* under klp_mutex.* The operation must be completed by calling klp_free_patch_finish()* outside klp_mutex.
clocksource_find_best
clocksource_unregisterclocksource_unregister - remove a registered clocksource*@cs: clocksource to be unregistered
exit_itimersThis is called by do_exit or de_thread, only when there are no more* references to the shared signal_struct.
clockevents_notify_releasedCalled after a notify add to make devices available which were* released from the notifier call.
attach_to_pi_ownerLookup the task for the TID provided from user space and attach to* it after doing proper sanity checks.
wake_futex_piCaller must hold a reference on @pi_state.
fixup_pi_state_owner
futex_cleanup
kexec_add_bufferkexec_add_buffer - place a buffer in a kexec segment*@kbuf: Buffer contents and memory parameters.* This function assumes that kexec_mutex is held.* On successful return, @kbuf->mem will have the physical address of* the buffer in memory.
css_set_populatedcss_set_populated - does a css_set contain any tasks?*@cset: target css_set* css_set_populated() should be the same as !!cset->nr_tasks at steady* state
css_set_move_taskcss_set_move_task - move a task from one css_set to another*@task: task being moved*@from_cset: css_set @task currently belongs to (may be NULL)*@to_cset: new css_set @task is being moved to (may be NULL)*@use_mg_tasks: move to @to_cset->mg_tasks instead
put_css_set_locked
link_css_setlink_css_set - a helper function to link a css_set to a cgroup*@tmp_links: cgrp_cset_link objects allocated by allocate_cgrp_cset_links()*@cset: the css_set to be linked*@cgrp: the destination cgroup
find_css_setfind_css_set - return a new css_set with one cgroup updated*@old_cset: the baseline css_set*@cgrp: the cgroup to be updated* Return a new css_set that's equivalent to @old_cset, but with @cgrp* substituted into the appropriate hierarchy.
cgroup_destroy_root
cgroup_setup_root
cgroup_kill_sb
cgroup_migrate_add_taskcgroup_migrate_add_task - add a migration target task to a migration context*@task: target task*@mgctx: target migration context* Add @task, which is a migration target, to @mgctx->tset. This function* becomes noop if @task doesn't need to be migrated
cgroup_migrate_add_srccgroup_migrate_add_src - add a migration source css_set*@src_cset: the source css_set to add*@dst_cgrp: the destination cgroup*@mgctx: migration context* Tasks belonging to @src_cset are about to be migrated to @dst_cgrp
cgroup_migrate_prepare_dstcgroup_migrate_prepare_dst - prepare destination css_sets for migration*@mgctx: migration context* Tasks are about to be moved and all the source css_sets have been* preloaded to @mgctx->preloaded_src_csets
css_task_iter_advance_css_setcss_task_iter_advance_css_set - advance a task iterator to the next css_set*@it: the iterator to advance* Advance @it to the next css_set to walk.
css_release_work_fn
cgroup_init_subsys
cgroup_post_forkcgroup_post_fork - called on a new task after adding it to the task list*@child: the task in question* Adds the task to the list running through its css_set if necessary and* call the subsystem fork() callbacks
cgroup_exitcgroup_exit - detach cgroup from exiting task*@tsk: pointer to task_struct of exiting process* Description: Detach cgroup from @tsk.
cgroup1_pidlist_destroy_allUsed to destroy all pidlists lingering waiting for destroy timer. None* should be left afterwards.
cgroup1_reconfigure
cgroup_css_links_read
cpu_stop_should_run
cpu_stopper_thread
cpu_stop_park
__audit_free__audit_free - free a per-task audit context*@tsk: task whose audit context block to free* Called from copy_process and do_exit
__audit_syscall_exit__audit_syscall_exit - deallocate audit context after a system call*@success: success value of the syscall*@return_code: return value of the syscall* Tear down after system call
audit_free_parent
audit_put_watch
audit_remove_watch_rule
prune_tree_chunksRemove tree from chunks. If 'tagged' is set, remove tree only from tagged* chunks. The function expects tagged chunks are all at the beginning of the* chunks list.
audit_remove_tree_ruleCalled with audit_filter_mutex
prune_tree_threadThat gets run when evict_chunk() ends up needing to kill audit_tree.* Runs from a separate thread.
audit_add_tree_ruleCalled with audit_filter_mutex
audit_kill_trees... and that one is done if evict_chunk() decides to delay until the end* of syscall. Runs synchronously.
evict_chunkHere comes the stuff asynchronous to auditctl operations
remove_node
reset_writewrite() implementation for reset file. Reset all profiling data to zero* and remove nodes for which all associated object files are unloaded.
kprobe_unusedReturn true(!0) if the kprobe is unused
kprobe_disarmedReturn true(!0) if the kprobe is disarmed. Note: p must be on hash list
kprobe_queuedReturn true(!0) if the probe is queued on (un)optimizing lists
do_optimize_kprobesOptimize (replace a breakpoint with a jump) kprobes listed on* optimizing_list.
do_unoptimize_kprobesUnoptimize (replace a jump with a breakpoint and remove the breakpoint* if need) kprobes listed on unoptimizing_list.
kprobe_optimizer
wait_for_kprobe_optimizerWait for completing optimization and unoptimization
optimize_kprobeOptimize kprobe if p is ready to be optimized
unoptimize_kprobeUnoptimize a kprobe if p is optimized
reuse_unused_kprobeCancel unoptimizing for reusing
kill_optimized_kprobeRemove optimized instructions
__unregister_kprobe_bottom
taskstats_exitSend pid data out on exit
rb_remove_pages
ring_buffer_resizering_buffer_resize - resize the ring buffer*@buffer: the buffer to resize.*@size: the new size.*@cpu_id: the cpu buffer to resize* Minimum size is 2 * BUF_PAGE_SIZE.* Returns 0 on success and < 0 on failure.
tracing_mark_write
trace_search_list
find_next_mod_formatThe debugfs/tracing/printk_formats file maps the addresses with* the ASCII formats that are used in the bprintk events in the* buffer. For userspace tools to be able to decode the events from
event_create_dir
event_triggers_callevent_triggers_call - Call triggers associated with a trace event*@file: The trace_event_file associated with the event*@rec: The trace entry for the event, NULL for unconditional invocation* For each trigger associated with an event, invoke the trigger
trigger_start
trace_kprobe_is_registered
trace_probe_unlink
trace_probe_remove_file
uprobe_filter_is_empty
__bpf_lru_list_rotate_inactiveRotate the inactive list. It starts from the next_inactive_rotation* 1. If the node has ref bit set, it will be moved to the head* of active list with the ref bit cleared.* 2. If the node does not have ref bit set, it will leave it
__bpf_lru_list_shrinkCalls __bpf_lru_list_shrink_inactive() to shrink some* ref-bit-cleared nodes and move them to the designated* free list
bpf_percpu_lru_pop_free
cgroup_storage_get_next_key
cgroup_storage_map_free
dev_map_free
cpu_map_free
bpf_offload_dev_netdev_unregister
bpf_offload_dev_destroy
__cgroup_bpf_attach__cgroup_bpf_attach() - Attach the program to a cgroup, and* propagate the change to descendants*@cgrp: The cgroup which descendants to traverse*@prog: A program to attach*@type: Type of attach operation*@flags: Option flags
__cgroup_bpf_detach__cgroup_bpf_detach() - Detach the program from a cgroup, and* propagate the change to descendants*@cgrp: The cgroup which descendants to traverse*@prog: A program to detach or NULL*@type: Type of detach operation* Must be called with cgroup_mutex held.
perf_event_ctx_activateperf_event_ctx_activate(), perf_event_ctx_deactivate(), and* perf_event_task_tick() are fully serialized because they're strictly cpu* affine and perf_event_ctx{activate,deactivate} are called with IRQs
perf_event_ctx_deactivate
is_event_hup
ring_buffer_put
padata_parallel_worker
padata_find_nextpadata_find_next - Find the next object that needs serialization
padata_reorder
padata_serial_worker
__padata_free
wait_on_page_bit_common
read_cache_pages_invalidate_pagesRelease a list of pages, invalidating them first if need be
read_cache_pagesread_cache_pages - populate an address space with some pages & start reads against them*@mapping: the address_space*@pages: The address of a list_head which contains the target pages. These* pages have their ->index populated and are otherwise uninitialised.
__do_page_cache_readahead__do_page_cache_readahead() actually reads a chunk of disk. It allocates* the pages first, then submits them for I/O. This avoids the very bad* behaviour which would occur if page allocations are causing VM writeback.
put_pages_listput_pages_list() - release a list of pages*@pages: list of pages threaded on page->lru* Release a list of pages which are strung together on page.lru. Currently* used by read_cache_pages() and related error recovery code.
shrink_page_listshrink_page_list() returns the number of reclaimed pages
isolate_lru_pagespgdat->lru_lock is heavily contended. Some of the functions that* shrink the lists perform better by taking out a batch of pages* and working on them outside the LRU lock.* For pagecache intensive workloads, this function is the hottest
move_pages_to_lruThis moves pages from @list to corresponding LRU list.* We move them the other way if the page is referenced by one or more* processes, from rmap.* If the pages are mostly unmapped, the processing is fast and it is
shrink_active_list
reclaim_pages
wb_shutdownRemove bdi from the global list and shutdown any threads we have running
pcpu_allocpcpu_alloc - the percpu allocator*@size: size of area to allocate in bytes*@align: alignment of area (max PAGE_SIZE)*@reserved: allocate from the reserved chunk if available*@gfp: allocation flags* Allocate percpu area of @size bytes aligned at @align
slab_caches_to_rcu_destroy_workfn
list_lru_add
list_lru_del
workingset_update_node
check_and_migrate_cma_pages
copy_one_pteCopy one vm_area from one task to the other. Assumes the page tables* already present in the new task to be cleared in the whole range* covered by this vma.
try_to_unmap_one@arg: enum ttu_flags will be passed to this argument
free_pcppages_bulkFrees a number of pages from the PCP lists* Assumes all pages on list are in same zone, and of same order.* count is the number of pages to free.* If the zone was previously in an "all pages pinned" state then look to
__rmqueue_pcplistRemove page from the per-cpu list, caller must protect the list
__zone_watermark_okReturn true if free base pages are above 'mark'. For high-order checks it* will return true of the order-0 watermark is reached and there is at least* one free page of a suitable size. Checking now avoids taking the zone lock
dma_pool_createdma_pool_create - Creates a pool of consistent memory blocks, for dma
dma_pool_destroydma_pool_destroy - destroys a pool of dma memory blocks.*@pool: dma pool that will be destroyed* Context: !in_interrupt()* Caller guarantees that no more memory from the pool is in use,* and that nothing will try to use the pool after this call.
free_pool_huge_pageFree huge page from pool from next node to free.* Attempt to keep persistent huge pages more or less* balanced over allowed nodes.* Called with hugetlb_lock locked.
migrate_to_nodeMigrate pages from one node to a target node.* Returns error or the number of pages not migrated.
do_mbind
scan_get_next_rmap_item
ksmd_should_run
__ksm_enter
drain_cache_node_nodeDrains freelist for a node on each slab cache, used for memory hot-remove.* Returns -EBUSY if all objects cannot be drained so that the node is not* removed.* Must hold slab_mutex.
drain_freelist
__kmem_cache_empty
__kmem_cache_shrink
free_blockCaller needs to acquire correct kmem_cache_node's list_lock*@list: List of detached free slabs should be freed by caller
do_move_pages_to_node
do_pages_moveMigrate an array of page address onto an array of nodes and fill* the corresponding array of status.
split_huge_page_to_listThis function splits huge page into normal pages. @page can point to any* subpage of huge page to split. Split doesn't change the position of @page.* Only caller must hold pin on the @page, otherwise split fails with -EBUSY.* The huge page must be locked.
free_transhuge_page
deferred_split_huge_page
deferred_split_scan
__khugepaged_enter
khugepaged_has_work
khugepaged_wait_event
start_stop_khugepaged
memcg_event_wakeGets called on EPOLLHUP on eventfd when user closes it.* Called with wqh->lock held and interrupts disabled.
mem_cgroup_uncharge_listmem_cgroup_uncharge_list - uncharge a list of pages*@page_list: list of pages to uncharge* Uncharge a list of pages previously charged with* mem_cgroup_try_charge() and mem_cgroup_commit_charge().
soft_offline_huge_page
__soft_offline_page
scan_gray_listScan the objects already referenced (gray objects). More objects will be* referenced and, if there are no memory leaks, all the objects are scanned.
zbud_alloczbud_alloc() - allocates a region of a given size*@pool: zbud pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt to
zbud_reclaim_pagezbud_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* zbud reclaim is different from
remove_zspageThis function removes the given zspage from the freelist identified* by the given class and fullness group.
free_zspage
zs_destroy_pool
__release_z3fold_page
free_pages_work
z3fold_allocz3fold_alloc() - allocates a region of a given size*@pool: z3fold pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt
z3fold_reclaim_pagez3fold_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* z3fold reclaim is different
z3fold_page_isolate
z3fold_page_putback
balloon_page_dequeueballoon_page_dequeue - removes a page from balloon's page list and returns* its address to allow the driver to release the page
check_restartcheck_restart(sma, q)*@sma: semaphore array*@q: the operation that just completed* update_queue is O(N^2) when it restarts scanning the whole queue of* waiting operations. Therefore this function checks if the restart is* really necessary
do_smart_updatedo_smart_update - optimized update_queue*@sma: semaphore array*@sops: operations that were performed*@nsops: number of operations*@otime: force setting otime*@wake_q: lockless wake-queue head* do_smart_update() does the required calls to update_queue and
exit_shmLocking assumes this will only be called with task == current
msg_get
flush_plug_callbacks
blk_flush_plug_listFlush the plugged requests to the request queue
blk_flush_complete_seqblk_flush_complete_seq - complete flush sequence*@rq: PREFLUSH/FUA request being sequenced*@fq: flush queue*@seq: sequences to complete (mask of %REQ_FSEQ_*, can be zero)*@error: whether an error occurred*@rq just completed @seq part of its flush sequence,
blk_kick_flush
__ioc_clear_queue
blk_done_softirqSoftirq action handler - move entries to local list and loop over them* while passing them to the queue registered handler.
blk_mq_requeue_request
blk_mq_requeue_work
dispatch_rq_from_ctx
blk_mq_dispatch_wake
blk_mq_mark_tag_waitMark us waiting for a tag. For shared tags, this involves hooking us into* the tag wakeups. For non-shared tags, we can simply mark us needing a* restart. For both cases, take care to check the condition again after* marking us as waiting.
blk_mq_dispatch_rq_listReturns true if we did some work AND can potentially do more.
blk_mq_flush_plug_list
blk_mq_try_issue_list_directly
blk_mq_make_request
blk_mq_free_rqs
blk_mq_hctx_notify_dead'cpu' is going away. splice any existing rq_list entries from this* software queue to the hw queue dispatch list, and ensure that it* gets run.
blk_mq_add_queue_tag_set
blk_mq_releaseIt is the actual release handler for mq, but we do it from* request queue's release handler for avoiding use-after-free* and headache because q->mq_kobj shouldn't have been introduced,* but we can't group ctx/kctx kobj without it.
blk_stat_remove_callback
blk_free_queue_stats
blk_mq_sched_dispatch_requests
blk_mq_sched_insert_requests
blkg_destroy
tg_bps_limit
tg_iops_limit
throtl_qnode_add_biothrotl_qnode_add_bio - add a bio to a throtl_qnode and activate it*@bio: bio being added*@qn: qnode to add bio to*@queued: the service_queue->queued[] list @qn belongs to* Add @bio to @qn and put @qn on @queued if it's not already on.
throtl_peek_queuedthrotl_peek_queued - peek the first bio on a qnode list*@queued: the qnode list to peek
throtl_pop_queuedthrotl_pop_queued - pop the first bio from a qnode list*@queued: the qnode list to pop a bio from*@tg_to_put: optional out argument for throtl_grp to put* Pop the first bio from the qnode list @queued
throtl_can_upgrade
throtl_tg_can_downgrade
throtl_downgrade_check
iocg_activate
ioc_timer_fn
ioc_rqos_throttle
ioc_pd_free
dd_merged_requests
deadline_fifo_requestFor the specified data direction, return the next request to* dispatch using arrival ordered lists.
__dd_dispatch_requestdeadline_dispatch_requests selects the best request according to* read/write expire, fifo_batch, etc
dd_exit_queue
dd_insert_requests
dd_finish_requestFor zoned block devices, write unlock the target zone of* completed write requests. Do this while holding the zone lock* spinlock so that the zone is never unlocked while deadline_fifo_request()* or deadline_next_request() are executing
bfq_requests_mergedThis function is called to notify the scheduler that the requests* rq and 'next' have been merged, with 'next' going away
__bfq_dispatch_request
bfq_insert_requests
key_gc_unused_keysGarbage collect a list of unreferenced, detached keys
key_garbage_collectorReaper for unused keys.
keyring_destroy
sb_finish_set_opts
inode_doinit_with_dentry
flush_unauthorized_filesDerived from fs/exec.c:flush_old_files.
smack_setprocattrsmack_setprocattr - Smack process attribute setting*@name: the name of the attribute in /proc/
smack_privileged_credsmack_privileged_cred - are all privilege requirements met by cred*@cap: The requested capability*@cred: the credential to use* Is the task privileged and allowed to be privileged* by the onlycap rule.
smk_net4addr_insertsmk_net4addr_insert*@new : netlabel to insert* This helper insert netlabel in the smack_net4addrs list* sorted by netmask length (longest to smallest)* locked by &smk_net4addr_lock in smk_write_net4addr
smk_net6addr_insertsmk_net6addr_insert*@new : entry to insert* This inserts an entry in the smack_net6addrs list* sorted by netmask length (longest to smallest)* locked by &smk_net6addr_lock in smk_write_net6addr
smk_list_swap_rcusmk_list_swap_rcu - swap public list with a private one in RCU-safe way* The caller must hold appropriate mutex to prevent concurrent modifications* to the public list
smk_write_onlycapsmk_write_onlycap - write() for smackfs/onlycap*@file: file pointer, not actually used*@buf: where to get the data from*@count: bytes sent*@ppos: where to start* Returns number of bytes written or error code, as appropriate
smk_write_relabel_selfsmk_write_relabel_self - write() for /smack/relabel-self*@file: file pointer, not actually used*@buf: where to get the data from*@count: bytes sent*@ppos: where to start - must be 0
tomoyo_read_logtomoyo_read_log - Read an audit log.*@head: Pointer to "struct tomoyo_io_buffer".* Returns nothing.
tomoyo_init_policy_namespacetomoyo_init_policy_namespace - Initialize namespace.*@ns: Pointer to "struct tomoyo_policy_namespace".* Returns nothing.
tomoyo_poll_querytomoyo_poll_query - poll() for /sys/kernel/security/tomoyo/query.*@file: Pointer to "struct file".*@wait: Pointer to "poll_table".* Returns EPOLLIN | EPOLLRDNORM when ready to read, 0 otherwise.
tomoyo_collect_entrytomoyo_collect_entry - Try to kfree() deleted elements.* Returns nothing.
__next_ns__next_ns - find the next namespace to list*@root: root namespace to stop search at (NOT NULL)*@ns: current ns position (NOT NULL)* Find the next namespace from @ns under @root and handle all locking needed* while switching current namespace
__first_profile__first_profile - find the first profile in a namespace*@root: namespace that is root of profiles being displayed (NOT NULL)*@ns: namespace to start in (NOT NULL)* Returns: unrefcounted profile or NULL if no profile* Requires: profile->ns.lock to be held
__next_profile__next_profile - step to the next profile in a profile tree*@profile: current profile in tree (NOT NULL)* Perform a depth first traversal on the profile tree in a namespace* Returns: next profile or NULL if done* Requires: profile->ns.lock to be held
change_hathelper fn for changing into a hat* Returns: label for hat transition or ERR_PTR. Does not return NULL
__replace_profile__replace_profile - replace @old with @new on a list*@old: profile to be replaced (NOT NULL)*@new: profile to replace @old with (NOT NULL)*@share_proxy: transfer @old->proxy to @new* Will duplicate and refcount elements that @new inherits from @old
aa_get_buffer
destroy_buffers
revalidate_tty
ima_check_policyMake sure we have a valid policy, at least containing some rules.
ima_init_template_list
init_evm
__put_superDrop a superblock's refcount. The caller must hold sb_lock.
generic_shutdown_supergeneric_shutdown_super - common helper for ->kill_sb()*@sb: superblock to kill* generic_shutdown_super() does all fs-independent work on superblock* shutdown
cdev_purge
dentry_unlist
shrink_dentry_list
d_walkd_walk - walk the dentry tree*@parent: start of walk*@data: data passed to @enter() and @finish()*@enter: callback when first entering the dentry* The @enter() callbacks are called with d_lock held.
select_collect
select_collect2
shrink_dcache_parentPrune the dentry cache
umount_check
destroy_inode
inode_sb_list_del
clear_inodeClear an inode
evictFree the inode passed in, removing it from the lists it is still connected* to
dispose_listdispose_list - dispose of the contents of a local list*@head: the head of the list to free* Dispose-list gets a local list with local inodes in it, so it doesn't* need to worry about list corruption and SMP locks.
iput_finalCalled when we're dropping the last reference* to an inode
clone_mnt
mntput_no_expire
umount_treemount_lock must be held* namespace_sem must be held for write
do_umount
lock_mnt_tree
finish_automount
mark_mounts_for_expiryprocess a list of expirable mountpoints with the intent of discarding any* mountpoints that aren't in use and haven't been touched since last we came* here
select_submountsRipoff of 'select_parent()'* search the list of submounts for a given mountpoint, and move any* shrinkable submounts to the 'graveyard' list.
shrink_submounts
dcache_readdirDirectory is locked and all positive dentries in it are safe, since* for ramfs-type trees they can't go away without unlink() or rmdir(),* both impossible due to the lock on directory.
wb_io_lists_depopulated
sb_mark_inode_writebackmark an inode as under writeback on the sb
sb_clear_inode_writebackClear an inode as under writeback on the sb
redirty_tailRedirty an inode: set its when-it-was dirtied timestamp and move it to the* furthest end of its superblock's dirty-inode list.* Before stamping the inode's ->dirtied_when, we check to see whether it is
move_expired_inodesMove expired (dirtied before work->older_than_this) dirty inodes from*@delaying_queue to @dispatch_queue.
writeback_sb_inodesWrite a portion of b_io inodes which belong to @sb.* Return the number of pages and/or inodes written.* NOTE! This is called with wb->list_lock held, and will* unlock and relock that for each inode it ends up doing* IO for.
__writeback_inodes_wb
writeback_inodes_wb
wb_writebackExplicit flushing or periodic writeback of "old" data
get_next_work_itemReturn the next wb_writeback_work struct that hasn't been processed yet.
wb_workfnHandle writeback of dirty data for the device backed by this bdi. Also* reschedules periodically and does kupdated style flushing.
wakeup_dirtytime_writebackWake up bdi's periodically to make sure dirtytime inodes gets* written back periodically. We deliberately do *not* check the* b_dirtytime list in wb_has_dirty_io(), since this would cause the* kernel to be constantly waking up once there are any dirtytime
wait_sb_inodesThe @s_sync_lock is used to serialise concurrent sync operations* to avoid lock contention problems with concurrent wait_sb_inodes() calls.* Concurrent callers will block on the s_sync_lock rather than doing contending* walks
do_make_slave
propagation_nextget the next mount in the propagation tree
skip_propagation_subtree
next_group
propagate_mount_busyCheck if the mount 'mnt' can be unmounted successfully.*@mnt: the mount to be checked for unmount* NOTE: unmounting 'mnt' would naturally propagate to all* other mounts its parent propagates to.* Check if any of these mounts that **do not have submounts**
__propagate_umountNOTE: unmounting 'mnt' naturally propagates to all other mounts its* parent propagates to.
restore_mounts
cleanup_umount_visitations
propagate_umountCollect all mounts that receive propagation from the mount in @list,* and return these additional mounts in the same list.*@list: the list of mounts to be unmounted.* vfsmount lock must be held for write
pin_kill
inode_has_buffers
sync_mapping_bufferssync_mapping_buffers - write out & wait upon a mapping's "associated" buffers*@mapping: the mapping which wants those buffers written* Starts I/O against the buffers at mapping->private_list, and waits upon* that I/O.
fsync_buffers_list
invalidate_inode_buffersInvalidate any and all dirty buffers on a given inode. We are* probably unmounting the fs, but that doesn't mean we have already* done a sync(). Just drop the buffers from the inode list.* NOTE: we take the inode's blockdev's mapping's private_lock. Which
remove_inode_buffersRemove any clean buffers from the inode's buffer list. This is called* when we're trying to free the inode itself. Those buffers can pin it.* Returns true if all buffers were removed.
free_buffer_head
mpage_readpagesmpage_readpages - populate an address space with some pages & start reads against them*@mapping: the address_space*@pages: The address of a list_head which contains the target pages. These
fsnotify_notify_queue_is_emptyReturn true if the notify queue is empty, false otherwise
fsnotify_destroy_event
fsnotify_add_eventAdd an event to the group notification queue
fsnotify_clear_marks_by_groupClear any marks in a group with given type mask
fanotify_release
ep_is_linkedTells us if the item is currently linked
ep_unregister_pollwaitThis function unregisters poll callbacks from the associated file* descriptor. Must be called with "mtx" held (or "epmutex" if called from* ep_free).
reverse_path_check_proc
ep_loop_check_procep_loop_check_proc - Callback function to be passed to the @ep_call_nested()* API, to verify that adding an epoll file inside another* epoll structure, does not violate the constraints, in* terms of closed loops, or too deep chains (which can
clear_tfile_check_list
SYSCALL_DEFINE4The following function implements the controller interface for* the eventpoll file that enables the insertion/removal/change of* file descriptors inside the interest set.
userfaultfd_ctx_read
kiocb_set_cancel_fn
free_ioctx_usersWhen this function runs, the kioctx has been removed from the "hash table"* and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -* now it's safe to cancel any that need to be.
aio_poll_cancelassumes we are called with irqs disabled
aio_poll
io_cqring_overflow_flushReturns true if there are no backlogged entries after the flush
io_req_link_next
io_fail_linksCalled if REQ_F_LINK is set, and we fail the head request
io_cqring_events
io_iopoll_completeFind and free completed poll iocbs
io_do_iopoll
io_iopoll_geteventsPoll for a minimum of 'min' events. Note that if min == 0 we consider that a* non-spinning poll check - we'll still enter the driver poll loop, but only* as a non-spinning completion check.
io_iopoll_reap_eventsWe can't just wait for polled events to come to us, we have to actively* find and complete them.
io_iopoll_req_issuedAfter the iocb has been issued, it's safe to be found on the poll list.* Adding the kiocb to the list AFTER submission ensures that we don't* find it from a io_iopoll_getevents() thread before the issuer is done* accessing the kiocb cookie.
io_poll_remove_one
io_poll_add
io_timeout_fn
io_req_defer
io_link_timeout_fn
io_queue_linked_timeout
io_submit_sqes
io_sq_thread
locks_check_ctx_lists
locks_release_private
locks_dispose_list
locks_move_blocks
__locks_wake_up_blocks
locks_delete_blocklocks_delete_lock - stop waiting for a file lock*@waiter: the lock which was waiting* lockd/nfsd need to disconnect the lock while working on it.
__locks_insert_blockInsert waiter into blocker's block list.* We use a circular list so that processes can be easily woken up in* the order they blocked. The documentation doesn't require this but* it seems like the reasonable thing to do.
locks_wake_up_blocksWake up processes blocked waiting for blocker.* Must be called with the inode->flc_lock held!
__break_leaseRevoke all outstanding leases on the file
locks_remove_posixThis function is called when the file is being removed* from the task's fd array. POSIX locks belonging to this task* are deleted at this time.
locks_remove_flockThe i_flctx must be valid when calling into here
locks_remove_leaseThe i_flctx must be valid when calling into here
bm_entry_write
bm_status_write
mb_cache_entry_deletemb_cache_entry_delete - remove a cache entry*@cache - cache we work with*@key - key*@value - value* Remove entry from cache @cache with key @key and value @value.
mb_cache_shrink
locks_start_gracelocks_start_grace*@net: net namespace that this lock manager belongs to*@lm: who this grace period is for* A grace period is a period during which locks should not be given* out
__state_in_grace
grace_exit_net
iomap_next_page
iomap_readpages
iomap_finish_ioends
iomap_writepage_mapWe implement an immediate ioend submission policy here to avoid needing to* chain multiple ioends and hence nest mempool allocations which can violate* forward progress guarantees we need to provide
remove_free_dquot
dquot_writeback_dquotsWrite all dquot structures to quota files
dqcache_shrink_scan
remove_inode_dquot_refRemove references to dquots from inode and add dquot to list for freeing* if we have the last reference to dquot
is_live
list_rotate_leftRotate the list to the left
list_is_singularTest whether the list has exactly one entry
list_cut_positionlist_cut_position - cut a list into two*@list: a new list to add all removed entries*@head: a list with entries*@entry: an entry within head, could be the head itself* and if so we won't cut the list* This helper moves the initial part of @head, up to and
list_spliceJoin a second list, designed for stacks
list_splice_tailJoin two lists
list_splice_initJoin two lists and reinitialise the emptied list
list_splice_tail_initJoin two lists and reinitialise the emptied list
waitqueue_activewaitqueue_active -- locklessly test for waiters on the queue*@wq_head: the waitqueue to test for waiters* returns true if the wait list is not empty* NOTE: this function is lockless and requires care, incorrect usage _will_
rwsem_is_contendedThis is the same regardless of which rwsem implementation is being used.* It is just a heuristic meant to be called by somebody already holding the* rwsem to see if somebody from an incompatible type is wanting access to the* lock.
swait_activeswait_active -- locklessly test for waiters on the queue*@wq: the waitqueue to test for waiters* returns true if the wait list is not empty* NOTE: this function is lockless and requires care, incorrect usage _will_
free_area_empty
list_splice_init_rculist_splice_init_rcu - splice an RCU-protected list into an existing list,* designed for stacks.*@list: the RCU-protected list to splice*@head: the place in the existing list to splice the first list into
list_splice_tail_init_rculist_splice_tail_init_rcu - splice an RCU-protected list into an existing* list, designed for queues.*@list: the RCU-protected list to splice*@head: the place in the existing list to splice the first list into
plist_head_emptyTest whether the plist is empty
plist_node_emptyTest whether the node is on no list
thread_group_empty
blk_needs_flush_plugCalled when a task is going to sleep
ptrace_release_taskRelease ptrace tracking of the task
first_net_device
dump_blkd_tasksDump the guaranteed-empty blocked-tasks state. Trust but verify.
eventpoll_releaseThis is called from inside fs/file_table.c:__fput() to unlink files* from the eventpoll interface. We need to have this facility to cleanup* correctly files that are closed without being removed from the eventpoll* interface.
top_trace_arrayThe global tracer (top) should be the first trace array added,* but we check the flag anyway.
trace_probe_has_sibling
sk_psock_queue_empty
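
Representative usage: most of the callers listed above use list_empty() either as a cheap lockless hint before taking a lock, or as the termination test of a drain loop under the lock. The sketch below is illustrative only and is not taken from any specific caller; struct item, the caller-supplied spinlock and free_item() are hypothetical names.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
        struct list_head node;          /* linked into the list being drained */
};

static void free_item(struct item *it)  /* hypothetical per-item cleanup */
{
        kfree(it);
}

static void drain_items(struct list_head *head, spinlock_t *lock)
{
        /* Lockless hint: skip taking the lock when the list looks empty. */
        if (list_empty(head))
                return;

        spin_lock(lock);
        while (!list_empty(head)) {     /* authoritative test, under the lock */
                struct item *it = list_first_entry(head, struct item, node);

                list_del(&it->node);    /* detach while still holding the lock */
                spin_unlock(lock);

                free_item(it);          /* never free while holding the lock */

                spin_lock(lock);
        }
        spin_unlock(lock);
}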