Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/sched/wait.c    Create Date: 2022-07-28 09:40:49
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: finish_wait - clean up after waiting in a queue
@wq_head: waitqueue waited on
@wq_entry: wait descriptor
Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.

Proto: void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)

Type: void

Parameter:

Type                        Name
struct wait_queue_head *    wq_head
struct wait_queue_entry *   wq_entry
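For context, both parameter types come from include/linux/wait.h; the following is a trimmed sketch of their v5.5 layout (the wake-function typedef is elided). The walkthrough below references wq_entry->entry (the list link) and wq_head->lock:

    struct wait_queue_entry {
        unsigned int        flags;
        void                *private;   /* usually the waiting task (current) */
        wait_queue_func_t   func;       /* wake callback, e.g. autoremove_wake_function */
        struct list_head    entry;      /* link into wait_queue_head::head */
    };

    struct wait_queue_head {
        spinlock_t          lock;       /* protects the list below */
        struct list_head    head;
    };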
365  __set_current_state(TASK_RUNNING): put the current thread back into the running state. For comparison, set_current_state() includes a barrier so that the write of current->state is correctly serialised wrt the caller's subsequent test of whether to actually sleep:

    for (;;) {
        set_current_state(TASK_UNINTERRUPTIBLE);
        if (!need_sleep)
            break;
        schedule();
    }
    __set_current_state(TASK_RUNNING);

finish_wait() forms the tail of exactly this pattern, so the unbarriered __set_current_state() suffices here.
379  If !list_empty_careful(&wq_entry->entry). list_empty_careful() tests whether a list is empty and not being modified: it checks that no other CPU might be in the process of modifying either member (next or prev). Note that using it without synchronization is only safe if the only activity that can happen to the entry is list_del_init(), which holds here because list_del_init() is performed only by the waiting thread itself while every other user takes the lock.
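The careful check itself is short; as implemented in include/linux/list.h it reads both pointers and reports empty only when they agree:

    static inline int list_empty_careful(const struct list_head *head)
    {
        struct list_head *next = head->next;
        return (next == head) && (next == head->prev);
    }

If the entry is still queued, it is removed under the waitqueue lock: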
380  spin_lock_irqsave(&wq_head->lock, flags)
381  list_del_init(&wq_entry->entry): deletes the entry from the list and reinitializes it, so the entry reads as empty afterwards.
382  spin_unlock_irqrestore(&wq_head->lock, flags)
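Assembled from the walkthrough above, the function body reads as follows (a reconstruction; the long in-source comment between lines 365 and 379 is condensed into one note):

    void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
    {
        unsigned long flags;

        __set_current_state(TASK_RUNNING);
        /*
         * The lockless emptiness check is safe: list_del_init() is
         * only done by the waiting thread itself, and all other
         * users of the entry take wq_head->lock.
         */
        if (!list_empty_careful(&wq_entry->entry)) {
            spin_lock_irqsave(&wq_head->lock, flags);
            list_del_init(&wq_entry->entry);
            spin_unlock_irqrestore(&wq_head->lock, flags);
        }
    }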
Caller
Name - Description
usermodehelper_read_lock_wait
__cancel_work_timer
__wait_on_bit - To allow interruptible waiting and asynchronous (i.e. nonblocking) waiting, the actions of __wait_on_bit() and __wait_on_bit_lock() are permitted return codes. Nonzero return codes halt waiting and return.
__wait_on_bit_lock
cgroup_lock_and_drain_offline - lock cgroup_mutex and drain offlined csses. @cgrp: root of the target subtree. Because css offlining is asynchronous, userland may try to re-enable a controller while the previous css is still around. This function grabs cgroup_mutex and drains the previous css instances of @cgrp's subtree.
ring_buffer_wait - wait for input to the ring buffer. @buffer: buffer to wait on. @cpu: the cpu buffer to wait on. @full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS. If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon as data is added to any of the specific cpu buffers.
mempool_alloc - allocate an element from a specific memory pool. @pool: pointer to the memory pool which was allocated via mempool_create(). @gfp_mask: the usual allocation bitmask. This function only sleeps if the alloc_fn() function sleeps or returns NULL.
kswapd_try_to_sleep
congestion_wait - wait for a backing_dev to become uncongested. @sync: SYNC or ASYNC IO. @timeout: timeout in jiffies. Waits for up to @timeout jiffies for a backing_dev (any backing_dev) to exit write congestion. If no backing_devs are congested then just wait for the next write to be completed.
wait_iff_congested - conditionally wait for a backing_dev to become uncongested or a pgdat to complete writes. @sync: SYNC or ASYNC IO. @timeout: timeout in jiffies. In the event of a congested backing_dev (any backing_dev) this waits for up to @timeout jiffies for either a BDI to exit congestion of the given @sync queue or a write to complete.
mem_cgroup_wait_acct_move
mem_cgroup_oom_synchronize - complete memcg OOM handling. @handle: actually kill/wait or just clean up the OOM state. This has to be called at the end of a page fault if the memcg OOM handler was enabled.
rq_qos_wait - throttle on a rqw if we need to. @rqw: rqw to throttle on. @private_data: caller provided specific data. @acquire_inflight_cb: inc the rqw->inflight counter if we can. @cleanup_cb: the callback to cleanup in case we race with a waker.
ioc_rqos_throttle
__wait_on_freeing_inode
__inode_dio_wait - direct I/O helper function.
inode_sleep_on_writeback - sleep until I_SYNC is cleared. This function must be called with i_lock held and drops it. It is aimed at callers not holding any inode reference, so once i_lock is dropped, the inode can go away.
bd_prepare_to_claim - prepare to claim a block device. @bdev: block device of interest. @whole: the whole device containing @bdev, may equal @bdev. @holder: holder trying to claim @bdev. Prepares to claim @bdev.
io_sq_thread
io_cqring_wait - wait until events become available, if we don't already have some. The application must reap them itself, as they reside on the shared cq ring.
io_uring_cancel_files
get_unlocked_entry - look up entry in page cache, wait for it to become unlocked if it is a DAX entry and return it. The caller must subsequently call put_unlocked_entry() if it did not lock the entry, or dax_unlock_entry() if it did.
wait_entry_unlocked - the only thing keeping the address space around is the i_pages lock (it's cycled in clear_inode() after removing the entries from i_pages). After we call xas_unlock_irq(), we cannot touch xas->xa.
sbitmap_finish_wait
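Nearly all of these callers invoke finish_wait() as the tail of the standard open-coded wait loop. A minimal sketch, assuming a caller-provided wq_head and a hypothetical condition flag:

    #include <linux/sched.h>
    #include <linux/wait.h>

    static void wait_for_flag(struct wait_queue_head *wq_head, bool *flag)
    {
        /* Stack-allocated entry; DEFINE_WAIT() sets private = current
         * and func = autoremove_wake_function. */
        DEFINE_WAIT(wait);

        for (;;) {
            /* Queue ourselves and set the sleep state before testing
             * the condition, so a concurrent wakeup is not lost. */
            prepare_to_wait(wq_head, &wait, TASK_UNINTERRUPTIBLE);
            if (*flag)
                break;
            schedule();
        }
        /* Back to TASK_RUNNING; dequeues the entry if a waker has not
         * already removed it via autoremove_wake_function(). */
        finish_wait(wq_head, &wait);
    }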