Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h  Create Date: 2022-07-28 05:34:30
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: list_splice_init - join two lists and reinitialise the emptied list.
@list: the new list to add.
@head: the place to add it in the first list.
The list at @list is reinitialised.

Proto: static inline void list_splice_init(struct list_head *list, struct list_head *head)

Type: void

Parameter:

Type                 Parameter Name
struct list_head *   list
struct list_head *   head
449  If !list_empty(list) (list_empty - tests whether a list is empty, @head: the list to test), then:
450      __list_splice(list, head, head->next)
451      INIT_LIST_HEAD(list) - reinitialise the emptied list head
Caller
Name: Describe
irq_poll_cpu_dead
list_test_list_splice_init
swake_up_all: Does not allow usage from IRQ disabled, since we must be able to release IRQs to guarantee bounded hold time.
__free_zapped_classes: The caller must hold the graph lock. May be called from RCU context.
cgroup_migrate_execute: cgroup_taskset_migrate - migrate a taskset. @mgctx: migration context. Migrate tasks in @mgctx as set up by migration preparation functions. This function fails iff one of the ->can_attach callbacks fails and
replace_chunk
bpf_offload_dev_netdev_unregister
perf_addr_filters_splice
slab_caches_to_rcu_destroy_workfn
merge_queues: merge_queues - merge single semop queues into global queue. @sma: semaphore array. This function merges all per-semaphore queues into the global queue. It is necessary to achieve FIFO ordering for the pending single-sop
flush_plug_callbacks
ioc_clear_queue: ioc_clear_queue - break any ioc association with the specified queue. @q: request_queue being cleared. Walk @q->icq_list and exit all io_cq's.
blk_softirq_cpu_dead
blk_mq_requeue_work
blk_mq_dispatch_rq_list: Returns true if we did some work AND can potentially do more.
blk_mq_flush_plug_list
blk_mq_hctx_notify_dead: 'cpu' is going away. Splice any existing rq_list entries from this software queue to the hw queue dispatch list, and ensure that it gets run.
blk_mq_sched_dispatch_requests
namespace_unlock
queue_io: Queue all expired dirty inodes for io, eldest first. Before: b_dirty holds "gf", b_io holds "edc", b_more_io holds "BA". After: b_dirty holds "g", b_io holds "fBAedc", which is dequeued for IO.
wait_sb_inodes: The @s_sync_lock is used to serialise concurrent sync operations to avoid lock contention problems with concurrent wait_sb_inodes() calls. Concurrent callers will block on the s_sync_lock rather than doing contending walks.
ep_scan_ready_list: ep_scan_ready_list - Scans the ready list in a way that makes possible for the scan code to call f_op->poll(). Also allows for O(NumReady) performance. @ep: Pointer to the epoll private data structure. @sproc: Pointer to the scan callback.
locks_move_blocks