Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/list.h  Create Date: 2022-07-28 05:34:29
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: list_move_tail - delete from one list and add as another's tail
@list: the entry to move
@head: the head that will follow our entry

Proto: static inline void list_move_tail(struct list_head *list, struct list_head *head)

Type: void

Parameter:

Type                Name
struct list_head *  list
struct list_head *  head
213  deletes entry from list
214  list_add_tail - add a new entry
     @new: new entry to be added
     @head: list head to add it before
     Insert a new entry before the specified head. This is useful for implementing queues.
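The two annotated lines above are the entire body: unlink the entry from whatever list it is on, then re-insert it before @head. A minimal, self-contained userspace sketch of the same delete-then-append behaviour (the helpers are simplified re-implementations for illustration, and `demo_move_tail` is a hypothetical demo function, not kernel code):

```c
/* Minimal circular doubly-linked list, modelled on include/linux/list.h. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

/* list_add_tail - insert @new before @head, i.e. at the end of the list. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* __list_del_entry - unlink @entry from whatever list it is on. */
static void __list_del_entry(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* list_move_tail - delete from one list and add as another's tail. */
static void list_move_tail(struct list_head *list, struct list_head *head)
{
	__list_del_entry(list);
	list_add_tail(list, head);
}

/* Demo: move @a from list @src to the tail of @dst; returns 1 on success. */
static int demo_move_tail(void)
{
	struct list_head src, dst, a, b;

	INIT_LIST_HEAD(&src);
	INIT_LIST_HEAD(&dst);
	list_add_tail(&a, &src);	/* src: [a] */
	list_add_tail(&b, &dst);	/* dst: [b] */

	list_move_tail(&a, &dst);	/* dst: [b, a]; src now empty */

	return src.next == &src		/* src is empty */
	    && dst.prev == &a		/* a is dst's new tail */
	    && a.prev == &b;		/* b precedes a */
}
```

Because the list is circular with a sentinel head, both the unlink and the re-insert are O(1) pointer updates, which is why so many subsystems below use this for cheap requeueing.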
Caller
Name  Describe
irq_poll_softirq
parman_prio_shift_down
list_test_list_move_tail
move_linked_works  move linked works to a list. @work: start of series of works to be scheduled. @head: target list to append @work to. @nextp: out parameter for nested worklist walking. Schedule linked works starting from @work to @head.
requeue_rt_entity  Put task to the head or the end of the run list without the overhead of dequeue followed by enqueue.
rwsem_mark_wake  Handle the lock release when processes blocked on it can now run; if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must have been set.
register_lock_class  Register a lock's class in the hash-table, if the class is not present yet. Otherwise we look it up. We cache the result in the lock object itself, so actual lookup of the hash should be once per lock object.
zap_class  Remove all references to a lock class. The caller must hold the graph lock.
link_css_set  A helper function to link a css_set to a cgroup. @tmp_links: cgrp_cset_link objects allocated by allocate_cgrp_cset_links(). @cset: the css_set to be linked. @cgrp: the destination cgroup.
rebind_subsystems
cgroup_migrate_add_task  Add a migration target task to a migration context. @task: target task. @mgctx: target migration context. Adds @task, which is a migration target, to @mgctx->tset. This function becomes a noop if @task doesn't need to be migrated.
__pcpu_chunk_move
__list_lru_walk_one
isolate_huge_page
putback_active_hugepage
ss_wakeup
blk_flush_complete_seq  Complete flush sequence. @rq: PREFLUSH/FUA request being sequenced. @fq: flush queue. @seq: sequences to complete (mask of %REQ_FSEQ_*, can be zero). @error: whether an error occurred. @rq just completed @seq part of its flush sequence.
throtl_pop_queued  Pop the first bio from a qnode list. @queued: the qnode list to pop a bio from. @tg_to_put: optional out argument for throtl_grp to put.
kyber_insert_requests
select_submounts  Ripoff of select_parent(): search the list of submounts for a given mountpoint, and move any shrinkable submounts to the 'graveyard' list.
dcache_readdir  Directory is locked and all positive dentries in it are safe, since for ramfs-type trees they can't go away without unlink() or rmdir(), both impossible due to the lock on the directory.
wait_sb_inodes  The @s_sync_lock is used to serialise concurrent sync operations, to avoid lock contention problems with concurrent wait_sb_inodes() calls; concurrent callers will block on the s_sync_lock rather than doing contending walks.
umount_one
__propagate_umount  NOTE: unmounting 'mnt' naturally propagates to all other mounts its parent propagates to.
umount_list
mark_buffer_dirty_inode
io_do_iopoll
mb_cache_shrink
iomap_ioend_try_merge
list_rotate_left  Rotate the list to the left. @head: the head of the list.
list_rotate_to_front  Rotate list to a specific item. @list: the desired new front of the list. @head: the head of the list. Rotates the list so that @list becomes the new front of the list.
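In v5.5, list_rotate_left is itself implemented with list_move_tail: when the list is non-empty, it moves head->next to the tail. A self-contained sketch (the helpers are simplified re-implementations for illustration, and `demo_rotate_left` is a hypothetical demo, not kernel code):

```c
/* Minimal circular doubly-linked list, modelled on include/linux/list.h. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_move_tail(struct list_head *list, struct list_head *head)
{
	/* unlink from the current position */
	list->prev->next = list->next;
	list->next->prev = list->prev;
	/* re-insert as @head's tail */
	list_add_tail(list, head);
}

/* list_rotate_left as in v5.5: move the first entry to the tail. */
static void list_rotate_left(struct list_head *head)
{
	if (!list_empty(head))
		list_move_tail(head->next, head);
}

/* Demo: rotating [a, b, c] once yields [b, c, a]; returns 1 on success. */
static int demo_rotate_left(void)
{
	struct list_head head, a, b, c;

	INIT_LIST_HEAD(&head);
	list_add_tail(&a, &head);
	list_add_tail(&b, &head);
	list_add_tail(&c, &head);

	list_rotate_left(&head);

	return head.next == &b		/* b is the new front */
	    && head.prev == &a		/* a is the new tail */
	    && c.next == &a;		/* c precedes a */
}
```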