Function report

Linux Kernel v5.5.9

Source Code: include/linux/list.h

Name: list_add - add a new entry
@new: new entry to be added
@head: list head to add it after
Insert a new entry after the specified head. This is good for implementing stacks.

Proto: static inline void list_add(struct list_head *new, struct list_head *head)

Type: void

Parameter:

Type                  Parameter Name
struct list_head *    new
struct list_head *    head
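Since @new is normally a struct list_head embedded in a larger object, a caller supplies the two parameters as in the minimal sketch below (struct my_entry and my_push are hypothetical, for illustration only):

    #include <linux/list.h>

    /* Hypothetical entry type: any struct may embed a struct list_head. */
    struct my_entry {
            int value;
            struct list_head node;
    };

    static LIST_HEAD(my_stack);             /* the @head argument */

    /* Adding at the front of the list gives stack (LIFO) behaviour. */
    static void my_push(struct my_entry *e)
    {
            list_add(&e->node, &my_stack);  /* &e->node is the @new argument */
    }

Entries are later recovered with list_first_entry() and removed with list_del(), which is what makes the head-insertion pattern stack-like.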
Line 79: Insert a new entry between two known consecutive entries. This is only for internal list manipulation where we know the prev/next entries already!
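For reference, the v5.5.9 definition in include/linux/list.h is essentially the following sketch (__list_add_valid() is the CONFIG_DEBUG_LIST sanity check and is a constant true when that option is off):

    static inline void __list_add(struct list_head *new,
                                  struct list_head *prev,
                                  struct list_head *next)
    {
            if (!__list_add_valid(new, prev, next))
                    return;

            next->prev = new;
            new->next = next;
            new->prev = prev;
            prev->next = new;
    }

    static inline void list_add(struct list_head *new, struct list_head *head)
    {
            __list_add(new, head, head->next);
    }

list_add() itself only chooses the insertion point (between @head and @head->next); the four pointer assignments in __list_add() perform the actual splice.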
Caller
Name: Describe
plist_del: Remove a @node from plist. @node: &struct plist_node pointer - entry to be removed. @head: &struct plist_head pointer - list head.
add_head
klist_add_behind: Init a klist_node and add it after an existing node. @n: node we're adding. @pos: node to put @n after.
klist_remove: Decrement the refcount of node and wait for it to go away. @n: node we're removing.
rhashtable_walk_enter: Initialise an iterator. @ht: Table to walk over. @iter: Hash table iterator. This function prepares a hash table walk. Note that if you restart a walk after rhashtable_walk_stop you may see the same object twice.
rhashtable_walk_stop: Finish a hash table walk. @iter: Hash table iterator. Does not reset the iterator to the start of the hash table.
test_update_node
ptr_id
codec_init: Initialize a Reed-Solomon codec. @symsize: symbol size, bits (1-8). @gfpoly: Field generator polynomial coefficients. @gffunc: Field generator function. @fcr: first root of RS code generator polynomial, index form. @prim: primitive element to…
lc_create: prepares to track objects in an active set. @name: descriptive name only used in lc_seq_printf_stats and lc_seq_dump_details. @max_pending_changes: maximum changes to accumulate until a transaction is required. @e_count: number of elements…
lc_reset: does a full reset for @lc and the hash table slots. @lc: the lru cache to operate on. It is roughly the equivalent of re-allocating a fresh lru_cache object, basically a short cut to lc_destroy(lc); lc = lc_create(...);
objagg_obj_create
objagg_hints_node_create
list_test_list_add
allocate_threshold_blocks
rdtgroup_mkdir_ctrl_mon: These are rdtgroups created under the root directory. Can be used to allocate and monitor resources.
rdtgroup_setup_root
pseudo_lock_cstates_constrain: Restrict cores from entering C6. To prevent the cache from being affected by power management, entering C6 has to be avoided.
reparent_leader: Any that need to be release_task'd are put on the @dead list.
exit_notify: Send signals to all our closest relatives so that they know to properly mourn us.
__ptrace_link
exit_ptrace: Detach all tasks we were using ptrace on. Called with tasklist held for writing.
fork_usermode_blob: fork a blob of bytes as a usermode process. @data: a blob of bytes that can be do_execv-ed as a file. @len: length of the blob. @info: information about usermode process (shouldn't be NULL). If info->cmdline is set it will be used as…
worker_enter_idle: enter idle state. @worker: worker which is entering idle state. Update stats and idle timer if necessary. LOCKING: spin_lock_irq(pool->lock).
kmalloc_parameter
__kthread_queue_delayed_work
smpboot_register_percpu_thread: Register a per_cpu thread related to hotplug. @plug_thread: Hotplug thread descriptor. Creates and starts the threads on all online cpus.
__enqueue_rt_entity
psi_trigger_create
torture_ww_mutex_lock
stress_reorder_work
pm_vt_switch_required: indicate VT switch at suspend requirements. @dev: device. @required: if true, caller needs VT switch at suspend/resume time. The different console drivers may or may not require VT switches across suspend/resume, depending on how…
__irq_domain_add: Allocate a new irq_domain data structure. @fwnode: firmware node for the interrupt controller. @size: Size of linear map; 0 for radix mapping only. @hwirq_max: Maximum number of interrupts supported by controller. @direct_max: Maximum…
srcu_funnel_gp_start: Funnel-locking scheme to scalably mediate many concurrent grace-period requests.
call_srcu: Enqueue an SRCU callback on the specified srcu_struct structure, initiating grace-period processing if it is not already running.
rcu_torture_pipe_update: Update all callbacks in the pipe. Suitable for synchronous grace-period primitives.
klp_patch_func
dma_entry_free
clocksource_enqueue: Enqueue the clocksource sorted by rating.
do_timer_create: Create a POSIX.1b interval timer.
clockevents_notify_released: Called after a notify add to make devices available which were released from the notifier call.
clockevents_register_device: register a clock event device. @dev: device to register.
clockevents_exchange_device: release and request clock devices. @old: device to release (can be NULL). @new: device to request (can be NULL). Called from various tick functions with clockevents_lock held and interrupts disabled.
attach_to_pi_owner: Lookup the task for the TID provided from user space and attach to it after doing proper sanity checks.
wake_futex_pi: Caller must hold a reference on @pi_state.
fixup_pi_state_owner
kimage_alloc_normal_control_pages
kimage_alloc_page
allocate_cgrp_cset_links: allocate cgrp_cset_links. @count: the number of links to allocate. @tmp_links: list_head the allocated links are put on. Allocate @count cgrp_cset_link structures and chain them on @tmp_links through ->cset_link…
cgroup_setup_root
css_task_iter_advance_css_set: advance a task iterator to the next css_set. @it: the iterator to advance. Advance @it to the next css_set to walk.
cgroup_pidlist_find_create: find the appropriate pidlist for our purpose (given procs vs tasks); returns with the lock on that pidlist already held, and takes care of the use count, or returns NULL with no locks held if we're out of memory.
audit_add_rule: Add rule to given filterlist if not a duplicate.
audit_update_watch: Update inode info in audit rules based on filesystem event.
audit_add_to_parent: Associate the given rule with an existing parent. Caller must hold audit_filter_mutex.
create_chunk: Call with group->mark_mutex held, releases it.
tag_chunk: the first tagged inode becomes root of tree.
trim_marked: trim the uncommitted chunks from tree.
audit_trim_trees
audit_add_tree_rule: called with audit_filter_mutex.
audit_tag_tree
new_node: Create a new node and associated debugfs entry. Needs to be called with node_lock held.
kcov_remote_area_put: Must be called with kcov_remote_lock locked.
optimize_kprobe: Optimize kprobe if p is ready to be optimized.
unoptimize_kprobe: Unoptimize a kprobe if p is optimized.
kill_optimized_kprobe: Remove optimized instructions.
relay_open: create a new relay channel. @base_filename: base name of files to create, %NULL for buffering only. @parent: dentry of parent directory, %NULL for root directory or buffer. @subbuf_size: size of sub-buffers. @n_subbufs: number of sub-buffers. @cb:…
add_del_listener
__rb_allocate_pages
trace_array_create
tracer_alloc_buffers
__trace_define_field
create_new_subsystem
event_subsystem_dir
__register_event
trace_create_new_event
event_trace_enable
register_event_command: Currently we only register event commands from __init, so mark this __init too.
save_named_trigger: save the trigger in the named trigger list. @name: The name of the named trigger set. @data: The trigger data to save. Return: 0 if successful, negative error otherwise.
save_hist_vars
bpf_event_notify
trace_probe_init
__local_list_add_pending
bpf_common_lru_populate
bpf_percpu_lru_populate
bpf_cgroup_storage_link
bq_enqueue: Runs under RCU-read-side, plus in softirq under NAPI protection. Thus, safe percpu variable access.
bq_enqueue: Runs under RCU-read-side, plus in softirq under NAPI protection. Thus, safe percpu variable access.
bpf_offload_dev_netdev_register
perf_event_ctx_activate: perf_event_ctx_activate(), perf_event_ctx_deactivate(), and perf_event_task_tick() are fully serialized because they're strictly cpu affine and perf_event_ctx{activate,deactivate} are called with IRQs…
perf_sched_cb_inc
perf_pmu_migrate_context
delayed_uprobe_add
build_probe_list: For a given range in vma, build a list of probes that need to be inserted.
padata_do_serial: padata serialization function. @padata: object to be serialized. padata_do_serial must be called for every parallelized object. The serialization callback function will run with BHs off.
padata_alloc_shell: Allocate and initialize padata shell. @pinst: Parent padata_instance object.
torture_shuffle_task_register: Register a task to be shuffled. If there is no memory, just splat and don't bother registering.
dir_add
__do_page_cache_readahead: actually reads a chunk of disk. It allocates the pages first, then submits them for I/O. This avoids the very bad behaviour which would occur if page allocations are causing VM writeback.
release_pages: batched put_page(). @pages: array of pages to release. @nr: number of pages. Decrement the reference count on all the pages in @pages. If it fell to zero, remove the page from the LRU and free it.
shrink_page_list: returns the number of reclaimed pages.
move_pages_to_lru: This moves pages from @list to corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast and it is…
shrink_active_list
create_cache
split_map_pages
isolate_migratepages_block: isolate all migrate-able pages within a single pageblock. @cc: Compaction control structure. @low_pfn: The first PFN to isolate. @end_pfn: The one-past-the-last PFN to isolate, within same pageblock.
copy_one_pte: copy one vm_area from one task to the other. Assumes the page tables already present in the new task to be cleared in the whole range covered by this vma.
pgtable_trans_huge_deposit
anon_vma_chain_link
try_to_unmap_one: @arg: enum ttu_flags will be passed to this argument.
link_va
free_unref_page_commit
madvise_cold_or_pageout_pte_range
init_zswap: module init and exit.
dma_pool_create: Creates a pool of consistent memory blocks, for dma.
dma_pool_alloc: get a block of consistent memory. @pool: dma pool that will produce the block. @mem_flags: GFP_* bitmask. @handle: pointer to dma address of block. Return: the kernel virtual address of a currently unused block…
region_add: Add the huge page range represented by [f, t) to the reserve map.
region_chg: Examine the existing reserve map and determine how many huge pages in the specified range [f, t) are NOT currently represented. This routine is called before a subsequent call to region_add that will actually modify the reserve…
region_del: Delete the specified range [f, t) from the reserve map. If the t parameter is LONG_MAX, this indicates that ALL regions after f should be deleted. Locate the regions which intersect [f, t) and either trim, delete or split the existing regions.
resv_map_alloc
gather_surplus_pages: Increase the hugetlb pool such that it can accommodate a reservation of size 'delta'.
__alloc_bootmem_huge_page
set_slob_page_free
stable_tree_search: search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now.
cmp_and_merge_page: first see if page can be merged into the stable tree; if not, compare checksum to previous and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and…
kmem_cache_init: Initialisation. Called after the page allocator has been initialised and before smp_init().
fixup_slab_list
free_block: Caller needs to acquire correct kmem_cache_node's list_lock. @list: List of detached free slabs, should be freed by caller.
__add_partial: Management of partially allocated slabs.
free_partial: Attempt to free all partial slabs on a node. This is called from __kmem_cache_shutdown(). We must take list_lock because sysfs file might still access the partial list after the shutdown.
bootstrap: Used for early kmem_cache structures that were allocated using the page allocator. Allocate them properly then fix up the pointers that may be pointing to the wrong kmem_cache structure.
mem_cgroup_oom_register_event
memcg_write_event_control: DO NOT USE IN NEW FILES. Parse input and register new cgroup event handler. Input must be in format '<event_fd> <control_fd> <args>'. Interpretation of args is defined by control file implementation.
vmpressure_register_event: Bind vmpressure notifications to an eventfd. @memcg: memcg that is interested in vmpressure notifications. @eventfd: eventfd context to link notifications with. @args: event arguments (pressure level threshold, optional mode)…
__soft_offline_page
mem_pool_free: Return the object to either the slab allocator or the memory pool.
zpool_register_driver: register a zpool implementation. @driver: driver to register.
zpool_create_pool: Create a new zpool. @type: The type of the zpool to create (e.g. zbud, zsmalloc). @name: The name of the zpool (e.g. zram0, zswap). @gfp: The GFP flags to use when allocating the pool. @ops: The optional ops callback.
zbud_alloc: allocates a region of a given size. @pool: zbud pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt to…
zbud_free: frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by zbud_alloc(). In the case that the zbud page in which the allocation resides is…
zbud_reclaim_page: evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted. @retries: number of pages on the LRU list for which eviction will be attempted before failing. zbud reclaim is different from…
insert_zspage: Each size class maintains various freelists and zspages are assigned to one of these freelists based on the number of live objects they have. This function inserts the given zspage into the freelist identified by <class, fullness_group>.
__release_z3fold_page
add_to_unbuddied: Add to the appropriate unbuddied list.
z3fold_alloc: allocates a region of a given size. @pool: z3fold pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt…
z3fold_reclaim_page: evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted. @retries: number of pages on the LRU list for which eviction will be attempted before failing. z3fold reclaim is different…
z3fold_page_migrate
z3fold_page_putback
balloon_page_list_dequeue: removes pages from balloon's page list and returns a list of the pages. @b_dev_info: balloon device descriptor where we will grab a page from. @pages: pointer to the list of pages that would be returned to the caller.
find_alloc_undo: lookup (and if not present create) undo array. @ns: namespace. @semid: semaphore array id. The function looks up (and if not present creates) the undo structure. The size of the undo structure depends on the size of the semaphore…
newseg: Create a new shared memory segment. @ns: namespace. @params: ptr to the structure that contains key, size and shmflg. Called with shm_ids.rwsem held as a writer.
blk_check_plugged
ioc_create_icq: create and link io_cq. @ioc: io_context of interest. @q: request_queue of interest. @gfp_mask: allocation mask. Make sure io_cq linking @ioc and @q exists.
blk_mq_add_to_requeue_list
blk_mq_dispatch_rq_list: Returns true if we did some work AND can potentially do more.
__blk_mq_insert_req_list
blk_mq_exit_hctx
blk_mq_elv_switch_none: Cache the elevator_type in qe pair list and switch the io scheduler to 'none'.
blk_mq_do_dispatch_sched: Only SCSI implements .get_budget and .put_budget, and SCSI restarts its queue by itself in its completion handler, so we don't need to restart the queue if .get_budget() returns BLK_STS_NO_RESOURCE.
blk_mq_do_dispatch_ctx: Only SCSI implements .get_budget and .put_budget, and SCSI restarts its queue by itself in its completion handler, so we don't need to restart the queue if .get_budget() returns BLK_STS_NO_RESOURCE.
blk_mq_sched_bypass_insert
blk_mq_sched_insert_request
ldm_ldmdb_add: Adds a raw VBLK entry to the ldmdb database. @data: Raw VBLK to add to the database. @len: Size of the raw VBLK. @ldb: Cache of the database structures. The VBLKs are sorted into categories. Partitions are also sorted by offset…
blkg_create: If @new_blkg is %NULL, this function tries to allocate a new one as necessary using %GFP_NOWAIT. @new_blkg is always consumed on return.
iocg_activate
dd_insert_request: add rq to rbtree and fifo.
bfq_insert_request
bfq_active_insert: insert an entity in the active tree of its group/device.
bfq_idle_insert: insert an entity into the idle tree. @st: the service tree containing the tree. @entity: the entity to insert.
register_key_type: Register a new key type. @ktype: The new key type. Returns 0 on success or -EEXIST if a type of this name already exists.
avc_add_xperms_decision
avc_xperms_populate
inode_doinit_with_dentry
smk_copy_relabel: copy smk_relabel labels list. @nhead: new rules header pointer. @ohead: old rules header pointer. @gfp: type of the memory for the allocation. Returns 0 on success, -ENOMEM on error.
smk_parse_label_list: parse list of Smack labels, separated by spaces. @data: the string to parse. @private: destination list. Returns zero on success or error code, as appropriate.
tomoyo_commit_condition: Commit "struct tomoyo_condition". @entry: Pointer to "struct tomoyo_condition". Returns pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries. It returns NULL if…
tomoyo_notify_gc: Register/unregister /sys/kernel/security/tomoyo/ users. @head: Pointer to "struct tomoyo_io_buffer". @is_register: True if register, false if unregister. Returns nothing.
__aa_fs_create_rawdata
aa_put_buffer
chrdev_open: Called every time a character special file is opened.
__register_binfmt
d_shrink_add
d_alloc: allocate a dcache entry.
inode_sb_list_add: add inode to the superblock list of inodes. @inode: inode to add.
evict_inodes: evict all evictable inodes for a superblock. @sb: superblock to operate on. Make sure that no inodes with zero refcount are retained.
invalidate_inodes: attempt to free all inodes on a superblock. @sb: superblock to operate on. @kill_dirty: flag to guide handling of dirty inodes. Attempts to free all inodes for a given superblock. If there were any…
clone_mnt
mount_subtree
SYSCALL_DEFINE3: Create a kernel mount representation for a new, prepared superblock (specified by fs_fd) and attach to an open_tree-like file descriptor.
init_mount_tree
simple_xattr_set: xattr SET operation for in-memory/pseudo filesystems. @xattrs: target simple_xattr list. @name: name of the extended attribute. @value: value of the xattr…
simple_xattr_list_add: Adds an extended attribute to the list.
fsync_buffers_list
bdget
fsnotify_put_mark
fsnotify_add_mark_locked: Attach an initialized mark to a given group and fs object. These marks may be used for the fsnotify backend to determine which event types should be delivered to which group.
ep_call_nested: Perform a bound (possibly) nested call, by checking that the recursion limit is not exceeded, and that the same nested call (by the meaning of same cookie) is not re-entered.
ep_scan_ready_list: Scans the ready list in a way that makes it possible for the scan code to call f_op->poll(). Also allows for O(NumReady) performance. @ep: Pointer to the epoll private data structure. @sproc: Pointer to the scan callback.
ep_send_events_proc
ep_loop_check_proc: Callback function to be passed to the @ep_call_nested() API, to verify that adding an epoll file inside another epoll structure does not violate the constraints, in terms of closed loops, or too deep chains (which can…
SYSCALL_DEFINE4: The following function implements the controller interface for the eventpoll file that enables the insertion/removal/change of file descriptors inside the interest set.
io_iopoll_req_issued: After the iocb has been issued, it's safe to be found on the poll list. Adding the kiocb to the list AFTER submission ensures that we don't find it from an io_iopoll_getevents() thread before the issuer is done accessing the kiocb cookie.
io_timeout
io_grab_files
fscrypt_get_encryption_info
locks_delete_lock_ctx
bm_register_write: /register
locks_start_grace: @net: net namespace that this lock manager belongs to. @lm: who this grace period is for. A grace period is a period during which locks should not be given out…
iomap_add_to_ioend: Test to see if we have an existing ioend structure that we could append to first, otherwise finish off the current ioend and start another.
dquot_mark_dquot_dirty: Mark dquot dirty in atomic manner, and return its old dirty flag state.
remove_inode_dquot_ref: Remove references to dquots from inode and add dquot to list for freeing if we have the last reference to dquot.
hash_dcookie
dcookie_register
list_swap: replace entry1 with entry2 and re-add entry1 at entry2's position. @entry1: the location to place entry2. @entry2: the location to place entry1.
list_move: delete from one list and add as another's head. @list: the entry to move. @head: the head that will precede our entry.
__add_wait_queue
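To show how the simplest callers in this table reduce to list_add(), here is a sketch of list_move() and list_swap() along the lines of their kernel-doc above (it relies on list_del(), __list_del_entry() and list_replace() from the same header):

    static inline void list_move(struct list_head *list, struct list_head *head)
    {
            __list_del_entry(list);         /* unlink from the old list */
            list_add(list, head);           /* re-link right after @head */
    }

    static inline void list_swap(struct list_head *entry1,
                                 struct list_head *entry2)
    {
            struct list_head *pos = entry2->prev;

            list_del(entry2);
            list_replace(entry1, entry2);
            if (pos == entry1)              /* the entries were adjacent */
                    pos = entry2;
            list_add(entry1, pos);          /* re-add entry1 at entry2's old position */
    }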