Caller name | Description |
uevent_net_exit | |
klist_release | |
rhashtable_walk_exit | Free the iterator |
rhashtable_walk_start_check | rhashtable_walk_start_check - Start a hash table walk. @iter: Hash table iterator. Start a hash table walk at the current iterator position. Note that we take the RCU lock in all cases including when we return an error, so you must always call rhashtable_walk_stop to clean up. |
test_kmod_exit | |
free_ptr_list | |
kunit_resource_remove | |
kunit_cleanup | |
string_stream_fragment_free | |
gen_pool_destroy | Destroy the memory pool |
free_rs | Free the rs control structure once it is no longer used |
__irq_poll_complete | __irq_poll_complete - Mark this @iop as un-polled again. @iop: The parent iopoll structure. Description: See irq_poll_complete(). This function must be called with interrupts disabled. |
parman_prio_item_remove | |
parman_prio_fini | parman_prio_fini - finalizes use of parman priority chunk. @prio: parman prio structure. Note: all locking must be provided by the caller. |
objagg_obj_destroy | |
objagg_hints_flush | |
list_test_list_del | |
list_test_list_for_each_safe | |
list_test_list_for_each_prev_safe | |
allocate_threshold_blocks | |
deallocate_threshold_block | |
domain_remove_cpu | |
free_all_child_rdtgrp | |
rmdir_all_sub | Forcibly remove all subdirectories under root. |
rdtgroup_mkdir_ctrl_mon | These are rdtgroups created under the root directory. Can be used to allocate and monitor resources. |
rdtgroup_rmdir_mon | |
rdtgroup_ctrl_remove | |
alloc_rmid | As of now the RMIDs allocation is global. However, we keep track of which packages the RMIDs are used on, to optimize the limbo list management. |
dom_data_init | |
pseudo_lock_cstates_relax | |
__remove_pin_from_irq | |
__mmput | |
__exit_umh | |
worker_detach_from_pool | worker_detach_from_pool() - detach a worker from its pool. @worker: worker which is attached to its pool. Undo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access the pool after detaching unless it holds another reference to it. |
maybe_kfree_parameter | Does nothing if parameter wasn't kmalloced above. |
smpboot_unregister_percpu_thread | smpboot_unregister_percpu_thread - Unregister a per_cpu thread related to hotplug. @plug_thread: Hotplug thread descriptor. Stops all threads on all possible cpus. |
__wake_up_common | The core wakeup function |
psi_trigger_destroy | |
__down_common | Because this function is inlined, the 'state' parameter will be constant, and thus optimised away by the compiler. Likewise the 'timeout' parameter for the cases without timeouts. |
__up | |
rwsem_down_read_slowpath | Wait for the read lock to be granted |
rwsem_down_write_slowpath | Wait until we successfully acquire the write lock |
pm_qos_flags_remove_req | pm_qos_flags_remove_req - Remove device PM QoS flags request. @pqf: Device PM QoS flags set to remove the request from. @req: Request to remove from the set. |
pm_vt_switch_unregister | pm_vt_switch_unregister - stop tracking a device's VT switching needs. @dev: device. Remove @dev from the vt switch list. |
free_mem_extents | free_mem_extents - Free a list of memory extents. @list: List of extents to free. |
create_mem_extents | create_mem_extents - Create a list of memory extents. @list: List to put the extents into. @gfp_mask: Mask to use for memory allocations. The extents represent contiguous ranges of PFNs. |
irq_remove_generic_chip | irq_remove_generic_chip - Remove a chip. @gc: Generic irq chip holding all data. @msk: Bitmask holding the irqs to initialize relative to gc->irq_base. @clr: IRQ_* bits to clear. @set: IRQ_* bits to set. Remove up to 32 interrupts starting from gc->irq_base. |
irq_domain_remove | irq_domain_remove() - Remove an irq domain. @domain: domain to remove. This routine is used to remove an irq domain. The caller must ensure that all mappings within the domain have been disposed of prior to use, depending on the revmap type. |
rcu_torture_pipe_update | Update all callbacks in the pipe. Suitable for synchronous grace-period primitives. |
__klp_free_funcs | |
__klp_free_objects | |
klp_free_patch_start | This function implements the free operations that can be called safely under klp_mutex. The operation must be completed by calling klp_free_patch_finish() outside klp_mutex. |
klp_unpatch_func | |
klp_patch_func | |
hash_bucket_del | Remove entry from a hash bucket list |
__dma_entry_alloc | |
__clocksource_change_rating | |
SYSCALL_DEFINE1 | Delete a POSIX.1b interval timer. |
itimer_delete | Return timer owned by the process; used by exit_itimers |
clockevents_notify_released | Called after a notify add to make devices available which were released from the notifier call. |
clockevents_exchange_device | clockevents_exchange_device - release and request clock devices. @old: device to release (can be NULL). @new: device to request (can be NULL). Called from various tick functions with clockevents_lock held and interrupts disabled. |
kimage_free_page_list | |
kimage_alloc_page | |
put_css_set_locked | |
free_cgrp_cset_links | |
cgroup_destroy_root | |
cgroup_rm_cftypes_locked | |
css_task_iter_advance_css_set | css_task_iter_advance_css_set - advance a task iterator to the next css_set. @it: the iterator to advance. Advance @it to the next css_set to walk. |
css_task_iter_end | css_task_iter_end - finish task iteration. @it: the task iterator to finish. Finish task iteration started by css_task_iter_start(). |
cgroup_pidlist_destroy_work_fn | |
free_cg_rpool_locked | |
audit_del_rule | Remove an existing rule from filterlist. |
update_lsm_rule | |
audit_free_names | |
audit_remove_watch | |
audit_update_watch | Update inode info in audit rules based on filesystem event. |
audit_remove_parent_watches | Remove all watches & rules associated with a parent that is going away. |
audit_remove_watch_rule | |
kill_rules | |
audit_trim_trees | |
audit_tag_tree | |
release_node | Remove node from all lists and debugfs and release associated resources. Needs to be called with node_lock held. |
gcov_info_free | gcov_info_free - release memory for profiling data set duplicate. @info: profiling data set duplicate to free. |
kcov_remote_area_get | Must be called with kcov_remote_lock locked. |
__unregister_kprobe_bottom | |
fei_attr_remove | |
relay_close | relay_close - close the channel. @chan: the channel. Closes all channel buffers and frees the channel. |
send_cpu_listeners | Send taskstats data in @skb to listeners registered for @cpu's exit data |
add_del_listener | |
tracepoint_module_going | |
rb_allocate_pages | |
get_tracing_log_err | |
clear_tracing_err_log | |
__remove_instance | |
__unregister_trace_event | Used by module code with the trace_event_sem held for write. |
unregister_stat_tracer | |
trace_destroy_fields | |
__put_system | |
remove_subsystem | |
remove_event_file_dir | |
event_remove | |
process_system_preds | |
del_named_trigger | del_named_trigger - delete a trigger from the named trigger list. @data: The trigger data to delete. |
remove_hist_vars | |
bpf_event_notify | |
__local_list_pop_free | |
__local_list_pop_pending | |
bpf_cgroup_storage_unlink | |
xsk_map_sock_delete | |
bpf_offload_dev_netdev_unregister | |
cgroup_bpf_release | cgroup_bpf_release() - put references of all bpf programs and release all cgroup bpf data. @work: work structure embedded into the cgroup to modify. |
__cgroup_bpf_attach | __cgroup_bpf_attach() - Attach the program to a cgroup, and propagate the change to descendants. @cgrp: The cgroup which descendants to traverse. @prog: A program to attach. @type: Type of attach operation. @flags: Option flags. |
__cgroup_bpf_detach | __cgroup_bpf_detach() - Detach the program from a cgroup, and propagate the change to descendants. @cgrp: The cgroup which descendants to traverse. @prog: A program to detach or NULL. @type: Type of detach operation. Must be called with cgroup_mutex held. |
perf_sched_cb_dec | |
perf_event_release_kernel | Kill an event dead; while event::refcount will preserve the event object, it will not preserve its functionality. Once the last 'user' gives up the object, we'll destroy the thing. |
free_filters_list | |
perf_pmu_migrate_context | |
toggle_bp_slot | Add/remove the given breakpoint in our constraint table |
delayed_uprobe_delete | |
padata_free_shell | padata_free_shell - free a padata shell. @ps: padata shell to free. |
torture_shuffle_task_unregister_all | Unregister all tasks, for example, at the end of the torture run. |
dir_utime | |
read_cache_pages_invalidate_pages | Release a list of pages, invalidating them first if need be |
read_cache_pages | read_cache_pages - populate an address space with some pages & start reads against them. @mapping: the address_space. @pages: The address of a list_head which contains the target pages. These pages have their ->index populated and are otherwise uninitialised. |
read_pages | |
put_pages_list | put_pages_list() - release a list of pages. @pages: list of pages threaded on page->lru. Release a list of pages which are strung together on page.lru. Currently used by read_cache_pages() and related error recovery code. |
unregister_shrinker | Remove one shrinker |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
move_pages_to_lru | This moves pages from @list to the corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast and it is appropriate to hold the LRU lock across the whole operation. |
shrink_active_list | |
reclaim_pages | |
shutdown_cache | |
release_freepages | |
split_map_pages | |
pgtable_trans_huge_withdraw | No "address" argument, so destroys page coloring of some arch |
unlink_anon_vmas | |
unlink_va | |
purge_fragmented_blocks | |
free_pcppages_bulk | Frees a number of pages from the PCP lists. Assumes all pages on list are in same zone, and of same order. count is the number of pages to free. If the zone was previously in an "all pages pinned" state then look to see if this freeing clears that state. |
free_unref_page_list | Free a list of 0-order pages |
__rmqueue_pcplist | Remove page from the per-cpu list, caller must protect the list |
free_swap_count_continuations | free_swap_count_continuations - swapoff frees all the continuation pages appended to the swap_map, after swap_map is quiesced, before vfree'ing it. |
dma_pool_create | dma_pool_create - Creates a pool of consistent memory blocks, for dma |
pool_free_page | |
dma_pool_destroy | dma_pool_destroy - destroys a pool of dma memory blocks. @pool: dma pool that will be destroyed. Context: !in_interrupt(). Caller guarantees that no more memory from the pool is in use, and that nothing will try to use the pool after this call. |
add_reservation_in_range | Must be called with resv->lock held. Calling this with count_only == true will count the number of pages to be added but will not modify the linked list. |
region_add | Add the huge page range represented by [f, t) to the reserve map |
region_del | Delete the specified range [f, t) from the reserve map. If the t parameter is LONG_MAX, this indicates that ALL regions after f should be deleted. Locate the regions which intersect [f, t) and either trim, delete or split the existing regions. |
resv_map_release | |
__free_huge_page | |
free_pool_huge_page | Free huge page from pool from next node to free. Attempt to keep persistent huge pages more or less balanced over allowed nodes. Called with hugetlb_lock locked. |
dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. This function returns values like below: -EBUSY: failed to dissolve free hugepages or the hugepage is in use. |
clear_slob_page_free | |
remove_node_from_stable_tree | |
stable_tree_search | stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now. |
scan_get_next_rmap_item | |
__ksm_exit | |
slabs_destroy | |
drain_freelist | |
fixup_slab_list | |
get_valid_first_slab | Try to find non-pfmemalloc slab if needed |
free_block | Caller needs to acquire the correct kmem_cache_node's list_lock. @list: List of detached free slabs, to be freed by the caller. |
remove_partial | |
putback_movable_pages | Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon, hugetlbfs page. |
unmap_and_move | Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage. |
split_huge_page_to_list | This function splits huge page into normal pages. @page can point to any subpage of huge page to split. Split doesn't change the position of @page. Only caller must hold pin on the @page, otherwise split fails with -EBUSY. The huge page must be locked. |
free_transhuge_page | |
__khugepaged_exit | |
collect_mm_slot | |
mem_cgroup_oom_unregister_event | |
vmpressure_unregister_event | vmpressure_unregister_event() - Unbind eventfd from vmpressure. @memcg: memcg handle. @eventfd: eventfd context that was used to link vmpressure with the @cg. This function does internal manipulations to detach the @eventfd from the vmpressure. |
mem_pool_alloc | Memory pool allocation and freeing. kmemleak_lock must not be held. |
scan_gray_list | Scan the objects already referenced (gray objects). More objects will be referenced and, if there are no memory leaks, all the objects are scanned. |
kmemleak_test_exit | |
zpool_unregister_driver | zpool_unregister_driver() - unregister a zpool implementation |
zpool_destroy_pool | zpool_destroy_pool() - Destroy a zpool. @zpool: The zpool to destroy. Implementations must guarantee this to be thread-safe, however only when destroying different pools. The same pool should only be destroyed once, and should not be used after it is destroyed. |
zbud_alloc | zbud_alloc() - allocates a region of a given size. @pool: zbud pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt to find a free region in the pool large enough to satisfy the allocation request. |
zbud_free | zbud_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by zbud_alloc(). In the case that the zbud page in which the allocation resides is under reclaim, this function only marks the allocation freed. |
zbud_reclaim_page | zbud_reclaim_page() - evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted. @retries: number of pages on the LRU list for which eviction will be attempted before failing. zbud reclaim is different from normal system reclaim in that it is done from the bottom, up. |
free_pages_work | |
z3fold_alloc | z3fold_alloc() - allocates a region of a given size. @pool: z3fold pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt to find a free region in the pool large enough to satisfy the allocation request. |
z3fold_free | z3fold_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides is under reclaim, this function only marks the allocation freed. |
balloon_page_list_enqueue | balloon_page_list_enqueue() - inserts a list of pages into the balloon page list. |
ss_del | |
pipelined_send | |
do_msgrcv | |
unlink_queue | |
freeary | Free a semaphore set. freeary() is called with sem_ids.rwsem locked as a writer and the spinlock for this semaphore set held. sem_ids.rwsem remains locked on exit. |
exit_sem | add semadj values to semaphores, free undo structures |
shm_rmid | |
exit_shm | Locking assumes this will only be called with task == current |
msg_get | |
mqueue_evict_inode | |
wq_sleep | Puts current task to sleep. Caller must hold queue lock. After return the lock isn't held. |
pipelined_send | pipelined_send() - send a message directly to the task waiting in sys_mq_timedreceive() (without inserting message into a queue). |
pipelined_receive | pipelined_receive() - if there is a task waiting in sys_mq_timedsend(), gets its message and puts it to the queue (we have one free place for sure). |
flush_plug_callbacks | |
blk_mq_elv_switch_back | |
blkcg_css_free | |
bfq_idle_extract | bfq_idle_extract - extract an entity from the idle tree. @st: the service tree of the owning @entity. @entity: the entity being removed. |
bfq_active_extract | bfq_active_extract - remove an entity from the active tree. @st: the service_tree containing the tree. @entity: the entity being removed. |
add_suspend_info | |
clean_opal_dev | |
key_gc_unused_keys | Garbage collect a list of unreferenced, detached keys |
keyring_destroy | |
avc_xperms_free | |
smack_cred_free | smack_cred_free - "free" task-level security credentials. @cred: the credentials in question. |
tomoyo_read_log | tomoyo_read_log - Read an audit log. @head: Pointer to "struct tomoyo_io_buffer". Returns nothing. |
tomoyo_supervisor | tomoyo_supervisor - Ask for the supervisor's decision |
tomoyo_gc_thread | tomoyo_gc_thread - Garbage collector thread function. @unused: Unused. Returns 0. |
tomoyo_notify_gc | tomoyo_notify_gc - Register/unregister /sys/kernel/security/tomoyo/ users. @head: Pointer to "struct tomoyo_io_buffer". @is_register: True if register, false if unregister. Returns nothing. |
aa_get_buffer | |
destroy_buffers | |
dev_exceptions_copy | Called under devcgroup_mutex |
ima_delete_rules | ima_delete_rules() called to clean up invalid in-flight policy. We don't need locking as we operate on the temp list, which is different from the active one. There is also only one user of ima_delete_rules() at a time. |
init_evm | |
unregister_binfmt | |
mntput_no_expire | |
simple_xattr_set | simple_xattr_set - xattr SET operation for in-memory/pseudo filesystems. @xattrs: target simple_xattr list. @name: name of the extended attribute. @value: value of the xattr. |
mpage_readpages | mpage_readpages - populate an address space with some pages & start reads against them. @mapping: the address_space. @pages: The address of a list_head which contains the target pages. These pages have their ->index populated and are otherwise uninitialised. |
ep_call_nested | ep_call_nested - Perform a bound (possibly) nested call, by checking that the recursion limit is not exceeded, and that the same nested call (by the meaning of same cookie) is not re-entered. |
ep_unregister_pollwait | This function unregisters poll callbacks from the associated file descriptor. Must be called with "mtx" held (or "epmutex" if called from ep_free). |
ep_loop_check | ep_loop_check - Performs a check to verify that adding an epoll file (@file) onto another epoll file (represented by @ep) does not create closed loops or too deep chains. @ep: Pointer to the epoll private data structure. |
handle_userfault | The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward. |
dup_userfaultfd_complete | |
userfaultfd_unmap_complete | |
userfaultfd_ctx_read | |
aio_remove_iocb | |
aio_poll_wake | |
io_cqring_overflow_flush | Returns true if there are no backlogged entries after the flush |
__io_free_req | |
io_iopoll_complete | Find and free completed poll iocbs |
put_crypt_info | |
mb_cache_destroy | mb_cache_destroy - destroy cache. @cache: the cache to destroy. Free all entries in cache and cache itself. Caller must make sure nobody (except shrinker) can reach @cache when calling this. |
iomap_next_page | |
remove_inuse | |
dcookie_exit | |
dcookie_unregister | |
list_swap | list_swap - replace entry1 with entry2 and re-add entry1 at entry2's position. @entry1: the location to place entry2. @entry2: the location to place entry1. |
__remove_wait_queue | |
del_page_from_free_area | |
tcp_rtx_queue_unlink_and_free | |
resource_list_del | |
del_page_from_lru_list | |
balloon_page_delete | |
balloon_page_pop | balloon_page_pop - remove a page from a page list. @head: pointer to list. @page: page to be removed. Caller must ensure the page is private and protect the list. |