Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/spinlock.h Create Date: 2022-07-28 05:35:20
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name:spin_unlock_irqrestore

Proto:static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)

Type:void

Parameter:

Type            Parameter Name
spinlock_t *    lock
unsigned long   flags
393  raw_spin_unlock_irqrestore(&lock->rlock, flags);
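As the single-line body above shows, spin_unlock_irqrestore() hands off to raw_spin_unlock_irqrestore() on the embedded raw lock, releasing the lock and restoring the interrupt state that the matching spin_lock_irqsave() saved into flags. A minimal usage sketch follows; the demo_dev structure and the demo_dev_count_event() helper are hypothetical names used only to illustrate the save/restore pairing.

    #include <linux/spinlock.h>

    /* Hypothetical driver state guarded by a spinlock (illustration only). */
    struct demo_dev {
            spinlock_t lock;        /* initialized elsewhere with spin_lock_init() */
            unsigned int events;
    };

    static void demo_dev_count_event(struct demo_dev *dev)
    {
            unsigned long flags;

            /*
             * spin_lock_irqsave() records the current IRQ state in flags and
             * disables local interrupts before taking the lock, so this section
             * cannot deadlock against an interrupt handler using the same lock.
             */
            spin_lock_irqsave(&dev->lock, flags);
            dev->events++;
            /*
             * spin_unlock_irqrestore() releases the lock and restores the IRQ
             * state saved in flags; interrupts are re-enabled only if they
             * were enabled before the lock was taken.
             */
            spin_unlock_irqrestore(&dev->lock, flags);
    }

Passing the same flags value to the unlock that the lock call filled in is what makes this pair safe to use in sections that may already run with interrupts disabled.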
Caller
Name - Description
_atomic_dec_and_lock_irqsave
klist_prev - Ante up prev node in list. @i: Iterator structure. First grab list lock. Decrement the reference count of the previous node, if there was one. Grab the prev node, increment its reference count, drop the lock, and return that prev node.
klist_next - Ante up next node in list. @i: Iterator structure. First grab list lock. Decrement the reference count of the previous node, if there was one. Grab the next node, increment its reference count, drop the lock, and return that next node.
__prandom_reseed - Generate better values after the random number generator is fully initialized.
percpu_ref_switch_to_atomic - switch a percpu_ref to atomic mode. @ref: percpu_ref to switch to atomic mode. @confirm_switch: optional confirmation callback. There's no reason to use this function for the usual reference counting.
percpu_ref_switch_to_percpu - switch a percpu_ref to percpu mode. @ref: percpu_ref to switch to percpu mode. There's no reason to use this function for the usual reference counting. To re-use an expired ref, use percpu_ref_reinit().
percpu_ref_kill_and_confirm - drop the initial ref and schedule confirmation. @ref: percpu_ref to kill. @confirm_kill: optional confirmation callback. Equivalent to percpu_ref_kill() but also schedules kill confirmation if @confirm_kill is not NULL.
percpu_ref_resurrect - modify a percpu refcount from dead to live. @ref: percpu_ref to resurrect. Modify @ref so that it's in the same state as before percpu_ref_kill() was called.
__do_once_start
__do_once_done
refcount_dec_and_lock_irqsave - return holding spinlock with disabled interrupts if able to decrement refcount to 0. @r: the refcount. @lock: the spinlock to be locked. @flags: saved IRQ-flags if the lock is acquired. Same as refcount_dec_and_lock() above except that the spinlock is acquired with interrupts disabled.
stack_depot_save - Save a stack trace from an array. @entries: Pointer to storage array. @nr_entries: Size of the storage array. @alloc_flags: Allocation gfp flags. Return: The handle of the stack struct stored in depot.
sbitmap_deferred_clear - See if we have deferred clears that we can batch move.
sbf_write
sbf_read
mach_get_cmos_time
smpboot_setup_warm_reset_vector
smpboot_restore_warm_reset_vector
irq_handler
free_vm86_irq
get_and_reset_irq
amd_flush_garts
alloc_iommu
free_iommu
flush_gart - Use global flush state to avoid races with multiple flushers.
do_oops_enter_exit - It just happens that oops_enter() and oops_exit() are identically implemented.
free_user - IRQs are disabled and uidhash_lock is held upon function entry. IRQ state (as stored in flags) is restored and uidhash_lock released upon function exit.
find_user - Locate the user_struct for the passed UID. If found, take a ref on it. The caller must undo that ref with free_uid(). If the user_struct could not be found, return NULL.
flush_signals - Flush all pending signals for this kthread.
force_sig_info_to_task - Force a signal that the process can't ignore: if necessary we unblock the signal and change any SIG_IGN to SIG_DFL. Note: If we unblock the signal, we always reset it to SIG_DFL, since we do not want to have a signal handler that was blocked.
__lock_task_sighand
force_sigsegv - When things go south during signal handling, we will force a SIGSEGV. And if the signal that caused the problem was already a SIGSEGV, we'll want to make sure we don't even try to deliver the signal.
sigqueue_free
do_notify_parent - Let a parent know about the death of a child. For a stopped/continued status change, use do_notify_parent_cldstop instead. Returns true if our parent ignored us and so we've switched to self-reaping.
do_notify_parent_cldstop - notify parent of stopped/continued state change. @tsk: task reporting the state change. @for_ptracer: the notification is for ptracer. @why: CLD_{CONTINUED|STOPPED|TRAPPED} to report.
pwq_adjust_max_active - update a pwq's max_active to the current setting. @pwq: target pool_workqueue. If @pwq isn't freezing, set @pwq->max_active to the associated workqueue's saved_max_active and activate delayed work items accordingly.
work_busy - test whether a work is currently pending or running. @work: the work to be tested. Test whether @work is currently pending or running. There is no synchronization around this function and the test result is unreliable.
show_workqueue_state - dump workqueue state. Called from a sysrq handler or try_to_freeze_tasks() and prints out all busy workqueues and pools.
free_pid
atomic_notifier_chain_register - Add notifier to an atomic notifier chain. @nh: Pointer to head of the atomic notifier chain. @n: New entry in notifier chain. Adds a notifier to an atomic notifier chain. Currently always returns zero.
atomic_notifier_chain_unregister - Remove notifier from an atomic notifier chain. @nh: Pointer to head of the atomic notifier chain. @n: Entry to remove from notifier chain. Removes a notifier from an atomic notifier chain.
lowest_in_progress
async_run_entry_fn - pick the first pending entry and run it.
async_schedule_node_domain - NUMA specific version of async_schedule_domain. @func: function to execute asynchronously. @data: data pointer to pass to the function. @node: NUMA node that we want to schedule this on or close to. @domain: the domain.
put_ucounts
add_wait_queue
add_wait_queue_exclusive
remove_wait_queue
__wake_up_common_lock
prepare_to_wait - Note: we use "set_current_state()" _after_ the wait-queue add, because we need a memory barrier there on SMP, so that any wake-function that tests for the wait-queue being active will be guaranteed to see waitqueue addition _or_ subsequent tests.
prepare_to_wait_exclusive
prepare_to_wait_event
finish_wait - clean up after waiting in a queue. @wq_head: waitqueue waited on. @wq_entry: wait descriptor. Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.
complete - signals a single thread waiting on this completion. @x: holds the state of this particular completion. This will wake up a single thread waiting on this completion. Threads will be awakened in the same order in which they were queued.
complete_all - signals all threads waiting on this completion. @x: holds the state of this particular completion. This will wake up all threads waiting on this particular completion event.
try_wait_for_completion - try to decrement a completion without blocking. @x: completion structure. Return: 0 if a decrement cannot be done without blocking, 1 if a decrement succeeded. Used when a completion serves as a counting completion.
completion_done - Test to see if a completion has any waiters. @x: completion structure. Return: 0 if there are waiters (wait_for_completion() in progress), 1 if there are no waiters. Note, this will always return true if complete_all() was called on @x.
print_cpu
torture_lock_spin_write_unlock_irq
pm_qos_debug_show
pm_qos_update_target - manages the constraints list and calls the notifiers if needed. @c: constraints data struct. @node: request to add to the list, to update or to remove. @action: action to take on the constraints list. @value: value of the request.
pm_qos_update_flags - Update a set of PM QoS flags.
pm_qos_power_read
kmsg_dump_register - register a kernel log dumper. @dumper: pointer to the kmsg_dumper structure. Adds a kernel log dumper to the system. The dump callback in the structure will be called when the kernel oopses or panics and must be set.
kmsg_dump_unregister - unregister a kmsg dumper. @dumper: pointer to the kmsg_dumper structure. Removes a dump device from the system. Returns zero on success and %-EINVAL otherwise.
rcu_sync_func
rcu_torture_fwd_cb_cr - Callback function for continuous-flood RCU callbacks.
rcu_torture_fwd_prog_cbfree - Free all callbacks on the rcu_fwd_cb_head list, either because the test is over or because we hit an OOM event.
__klp_shadow_get_or_alloc
klp_shadow_free() - detach and free a shadow variable. @obj: pointer to parent object. @id: data identifier. @dtor: custom callback that can be used to unregister the variable and/or free data that the shadow variable points to (optional).
klp_shadow_free_all() - detach and free all <*, id> shadow variables. @id: data identifier. @dtor: custom callback that can be used to unregister the variable and/or free data that the shadow variable points to (optional). This function releases the memory.
__dma_alloc_from_coherent
__dma_release_from_coherent
put_hash_bucket - Give up exclusive access to the hash bucket.
debug_dma_dump_mappings - Dump mapping entries for debugging purposes.
active_cacheline_insert
active_cacheline_remove
debug_dma_assert_idle() - assert that a page is not undergoing dma. @page: page to lookup in the dma_active_cacheline tree. Place a call to this routine in cases where the cpu touching the page before the dma completes (page is dma_unmapped) will lead to corruption.
dma_entry_alloc - struct dma_entry allocator. The next two functions implement the allocator for struct dma_debug_entries.
dma_entry_free
dump_show
device_dma_allocations
swiotlb_tbl_map_single
swiotlb_tbl_unmap_single - tlb_addr is the physical address of the bounce buffer to unmap.
freeze_task - send a freeze request to given task. @p: task to send the request to. If @p is freezing, the freeze request is sent either by sending a fake signal (if it's not a kernel thread) or waking it up (if it's a kernel thread).
__thaw_task
alarmtimer_fired - Handles alarm hrtimer being fired.
alarm_start - Sets an absolute alarm to fire. @alarm: ptr to alarm to set. @start: time to run the alarm.
alarm_restart
alarm_try_to_cancel - Tries to cancel an alarm timer. @alarm: ptr to alarm to be canceled. Returns 1 if the timer was canceled, 0 if it was not running, and -1 if the callback was running.
unlock_timer
release_posix_timer
__lock_timer - CLOCKs: The POSIX standard calls for a couple of clocks and allows us to implement others.
cgroup_file_notify - generate a file modified event for a cgroup_file. @cfile: target cgroup_file. @cfile must have been obtained by setting cftype->file_offset.
cgroup_rstat_flush_irqsafe - irqsafe version of cgroup_rstat_flush(). @cgrp: target cgroup. This function can be called from any context.
cpuset_cpus_allowed - return cpus_allowed mask from a task's cpuset.
cpuset_mems_allowed - return mems_allowed mask from a task's cpuset. @tsk: pointer to task_struct from which to obtain cpuset->mems_allowed. Description: Returns the nodemask_t mems_allowed of the cpuset attached to the specified @tsk.
__cpuset_node_allowed - Can we allocate on a memory node? @node: is this an allowed node? @gfp_mask: memory allocation flags. If we're in interrupt, yes, we can always allocate. If @node is set in current's mems_allowed, yes.
audit_rate_check
audit_log_lost - conditionally log lost audit message event. @message: the message stating reason for lost audit message. Emit at least 1 message per second, even if audit_rate_check is throttling. Always increment the lost messages counter.
auditd_set - Set/Reset the auditd connection state. @pid: auditd PID. @portid: auditd netlink portid. @net: auditd network namespace pointer. Description: This function will obtain and drop network namespace references as necessary.
auditd_reset - Disconnect the auditd connection. @ac: auditd connection state. Description: Break the auditd/kauditd connection and move all the queued records into the hold queue in case auditd reconnects.
audit_get_tty
fill_tgid_exit
output_printk
bpf_map_free_id
btf_free_id
dev_map_hash_delete_elem
__dev_map_hash_update_elem
dev_map_hash_remove_netdev
ring_buffer_attach
wake_up_page_bit
add_page_wait_queue - Add an arbitrary waiter to a page's wait queue. @page: Page defining the wait queue of interest. @waiter: Waiter to add to the queue. Add an arbitrary @waiter to the wait queue for the nominated @page.
mempool_resize - resize an existing memory pool. @pool: pointer to the memory pool which was allocated via mempool_create(). @new_min_nr: the new minimum number of elements guaranteed to be allocated for this pool. This function shrinks/grows the pool.
mempool_alloc - allocate an element from a specific memory pool. @pool: pointer to the memory pool which was allocated via mempool_create(). @gfp_mask: the usual allocation bitmask. This function only sleeps if the alloc_fn() function sleeps.
mempool_free - return an element to the pool. @element: pool element pointer. @pool: pointer to the memory pool which was allocated via mempool_create(). This function only sleeps if the free_fn() function sleeps.
__page_cache_release - This path almost never happens for VM activity; pages are normally freed via pagevecs. But it gets used by networking.
pagevec_lru_move_fn
release_pages - batched put_page(). @pages: array of pages to release. @nr: number of pages. Decrement the reference count on all the pages in @pages. If it fell to zero, remove the page from the LRU and free it.
balance_pgdat - For kswapd, balance_pgdat() will reclaim pages across a node from zones that are eligible for use by the caller until at least one zone is balanced. Returns the order kswapd finished reclaiming at.
walk_zones_in_node - Walk zones in a node and print using a callback. If @assert_populated is true, only use callback for zones that are populated.
pcpu_alloc - the percpu allocator. @size: size of area to allocate in bytes. @align: alignment of area (max PAGE_SIZE). @reserved: allocate from the reserved chunk if available. @gfp: allocation flags. Allocate percpu area of @size bytes aligned at @align.
free_percpu - free previously allocated percpu memory.
compact_unlock_should_abort - Compaction requires the taking of some coarse locks that are potentially very heavily contended. The lock should be periodically unlocked to avoid having disabled IRQs for a long time, even when there is nobody waiting on the lock.
isolate_freepages_block - Isolate free pages onto a private freelist. If @strict is true, will abort returning 0 on any invalid PFNs or non-free pages inside of the pageblock (even though it may still end up isolating some pages).
isolate_migratepages_block() - isolate all migrate-able pages within a single pageblock. @cc: Compaction control structure. @low_pfn: The first PFN to isolate. @end_pfn: The one-past-the-last PFN to isolate, within same pageblock.
reserve_highatomic_pageblock - Reserve a pageblock for exclusive use of high-order atomic allocations if there are no empty page blocks that contain a page with a suitable order.
unreserve_highatomic_pageblock - Used when an allocation is about to fail under memory pressure.
show_free_areas - Show free area list (used inside shift_scroll-lock stuff). We also calculate the percentage fragmentation.
__setup_per_zone_wmarks
is_free_buddy_page
set_hwpoison_free_buddy_page - Set PG_hwpoison flag if a given page is confirmed to be a free page. This test is performed under the zone lock to prevent a race against page allocation.
__shuffle_zone
dma_pool_alloc - get a block of consistent memory. @pool: dma pool that will produce the block. @mem_flags: GFP_* bitmask. @handle: pointer to dma address of block. Return: the kernel virtual address of a currently unused block.
dma_pool_free - put block back into dma pool. @pool: the dma pool holding the block. @vaddr: virtual address of block. @dma: dma address of block. Caller promises neither device nor driver will again touch this block unless it is first re-allocated.
slob_alloc - entry point into the slob allocator.
slob_free - entry point into the slob allocator.
drain_alien_cache
__slab_free - Slow path handling. This may still be called frequently since objects have a longer lifetime than the cpu slabs in most processing loads. So we still attempt to reduce cache line usage. Just take the slab lock and free the item.
__kmem_cache_shrink - kmem_cache_shrink discards empty slabs and promotes the slabs filled up most to the head of the partial lists. New allocations will then fill those up and thus they can be removed from the partial lists. The slabs with the least items are placed last.
end_report
__split_huge_page
split_huge_page_to_list - This function splits a huge page into normal pages. @page can point to any subpage of the huge page to split. Split doesn't change the position of @page. The caller must hold a pin on the @page, otherwise split fails with -EBUSY. The huge page must be locked.
free_transhuge_page
deferred_split_huge_page
deferred_split_scan
mem_cgroup_remove_exceeded
mem_cgroup_update_tree
lock_page_memcg - lock a page->mem_cgroup binding. @page: the page. This function protects unlocked LRU pages from being moved to another cgroup.
__unlock_page_memcg - unlock and unpin a memcg. @memcg: the memcg. Unlock and unpin a memcg returned by lock_page_memcg().
mem_cgroup_move_account - move account of the page. @page: the page. @compound: charge the page as compound or small page. @from: mem_cgroup which the page is moved from. @to: mem_cgroup which the page is moved to. @from != @to.
swap_cgroup_cmpxchg - cmpxchg mem_cgroup's id for this swp_entry. @ent: swap entry to be cmpxchged. @old: old id. @new: new id. Returns old id at success, 0 at failure. (There is no mem_cgroup using 0 as its id.)
swap_cgroup_record - record mem_cgroup for a set of swap entries. @ent: the first swap entry to be recorded into. @id: mem_cgroup to be recorded. @nr_ents: number of swap entries to be recorded. Returns old value at success, 0 at failure.
memory_failure_queue - Schedule handling memory failure of a page. @pfn: Page Number of the corrupted page. @flags: Flags for memory failure handling. This function is called by the low level hardware error handler.
memory_failure_work_func
__delete_object - Mark the object as not allocated and schedule RCU freeing via put_object().
paint_it
add_scan_area - Add a scanning area to the object. If at least one such area is added, kmemleak will only scan these ranges rather than the whole memory block.
object_set_excess_ref - Any surplus references (object already gray) to 'ptr' are passed to 'excess_ref'. This is used in the vmalloc() case where a pointer to vm_struct may be used as an alternative reference to the vmalloc'ed object (see free_thread_stack()).
object_no_scan - Set the OBJECT_NO_SCAN flag for the object corresponding to the given pointer. Such an object will not be scanned by kmemleak but references to it are searched.
kmemleak_update_trace - update object allocation stack trace. @ptr: pointer to beginning of the object. Override the object allocation stack trace for cases where the actual allocation place is not always useful.
scan_object - Scan a memory block corresponding to a kmemleak_object. A condition is that object->use_count >= 1.
kmemleak_scan - Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held.
kmemleak_seq_show - Print the information for an unreferenced object to the seq file.
dump_str_object_info
kmemleak_clear - We use grey instead of black to ensure we can do future scans on the same objects. If we did not do future scans these black objects could potentially contain references to newly allocated objects in the future and we'd end up with false positives.
set_migratetype_isolate
unset_migratetype_isolate
test_pages_isolated - Caller should ensure that requested range is in a single zone.
balloon_page_list_enqueue() - inserts a list of pages into the balloon page list.
balloon_page_list_dequeue() - removes pages from balloon's page list and returns a list of the pages. @b_dev_info: balloon device descriptor where we will grab a page from. @pages: pointer to the list of pages that would be returned to the caller.
balloon_page_enqueue - inserts a new page into the balloon page list.
balloon_page_dequeue - removes a page from balloon's page list and returns its address to allow the driver to release the page.
bio_check_pages_dirty
flush_end_io
mq_flush_data_end_io
ioc_release_fn - Slow path for ioc release in put_io_context(). Performs double-lock dancing to unlink all icq's and then frees ioc.
put_io_context - put a reference of io_context. @ioc: io_context to put. Decrement reference count of @ioc and release it if the count reaches zero.
put_io_context_active - put active reference on ioc. @ioc: ioc of interest. Undo get_io_context_active(). If active reference reaches zero after put, @ioc can never issue further IOs and ioscheds are notified.
__ioc_clear_queue
blk_mq_add_to_requeue_list
disk_block_events - block and flush disk event checking. @disk: disk to block events for. On return from this function, it is guaranteed that event checking isn't in progress and won't happen until unblocked by disk_unblock_events().
__disk_unblock_events
blkg_lookup_create - find or create a blkg. @blkcg: target block cgroup. @q: target request_queue. This looks up or creates the blkg representing the unique pair of the blkcg and the request_queue.
iolatency_check_latencies
blkiolatency_timer_fn
iocg_waitq_timer_fn
ioc_pd_init
deadline_fifo_request - For the specified data direction, return the next request to dispatch using arrival ordered lists.
deadline_next_request - For the specified data direction, return the next request to dispatch using sector position sorted lists.
dd_finish_request - For zoned block devices, write unlock the target zone of completed write requests. Do this while holding the zone lock spinlock so that the zone is never unlocked while deadline_fifo_request() or deadline_next_request() are executing.
bfq_bic_lookup - search into @ioc a bic associated to @bfqd. @bfqd: the lookup key. @ioc: the io_context of the process doing I/O. @q: the request queue.
bfq_exit_icq_bfqq
bfq_finish_requeue_request - Handle either a requeue or a finish for rq. The things to do are the same in both cases: all references to rq are to be dropped. In particular, rq is considered completed from the point of view of the scheduler.
bfq_idle_slice_timer_body
avc_reclaim_node
avc_latest_notif_update
avc_insert - Insert an AVC entry. @ssid: source security identifier. @tsid: target security identifier. @tclass: target security class. @avd: resulting av decision. @xp_node: resulting extended permissions. Insert an AVC entry for the SID pair.
avc_update_node - Update an AVC entry. @event: Updating event. @perms: Permission mask bits. @ssid, @tsid, @tclass: identifier of an AVC entry. @seqno: sequence number when decision was made. @xpd: extended_perms_decision to be added to the node.
avc_flush - Flush the cache.
sel_ib_pkey_sid_slow - Lookup the SID of a pkey using the policy. @subnet_prefix: subnet prefix. @pkey_num: pkey number. @sid: pkey SID. Description: This function determines the SID of a pkey by querying the security policy.
sel_ib_pkey_flush - Flush the entire pkey table. Description: Remove all entries from the pkey table.
aa_secid_update - update a secid mapping to a new label. @secid: secid to update. @label: label the secid will now map to.
aa_alloc_secid - allocate a new secid for a profile. @label: the label to allocate a secid for. @gfp: memory allocation flags. Returns: 0 with @label->secid initialized; <0 returns error with @label->secid set to AA_SECID_INVALID.
aa_free_secid - free a secid. @secid: secid to free.
sb_mark_inode_writeback - mark an inode as under writeback on the sb.
sb_clear_inode_writeback - clear an inode as under writeback on the sb.
dio_bio_end_aio - Asynchronous IO callback.
dio_bio_end_io - The BIO completion handler simply queues the BIO up for the process-context handler. During I/O bi_private points at the dio. After I/O, bi_private is used to implement a singly-linked list of completed BIOs, at dio->bio_list.
dio_bio_submit - In the AIO read case we speculatively dirty the pages before starting IO. During IO completion, any of these pages which happen to have been written back will be redirtied by bio_check_pages_dirty().
dio_await_one - Wait for the next BIO to complete. Remove it and return it. NULL is returned once all BIOs have been completed. This must only be called once all bios have been issued so that dio->refcount can only decrease.
dio_bio_reap - A really large O_DIRECT read or write can generate a lot of BIOs. So to keep the memory consumption sane we periodically reap any completed BIOs during the BIO generation phase. This also helps to limit the peak amount of pinned userspace memory.
drop_refcount
ep_call_nested - Perform a bound (possibly) nested call, by checking that the recursion limit is not exceeded, and that the same nested call (by the meaning of same cookie) is not re-entered.
timerfd_triggered - This gets called when the timer event triggers. We set the "expired" flag, but we do not re-arm the timer (in case it's necessary, tintv != 0) until the timer is accessed.
timerfd_clock_was_set - Called when the clock was set to cancel the timers in the cancel list. This will wake up processes waiting on these timers. The wake-up requires ctx->ticks to be non zero, therefore we increment it before calling wake_up_locked().
timerfd_poll
eventfd_signal - Adds @n to the eventfd counter.
eventfd_ctx_remove_wait_queue - Read the current counter and remove the wait queue.
aio_migratepage
kiocb_set_cancel_fn
aio_complete - Called when the io request on the given iocb is complete.
aio_remove_iocb
aio_poll_wake
io_cqring_overflow_flush - Returns true if there are no backlogged entries after the flush.
io_cqring_add_event
__io_free_req
io_fail_links - Called if REQ_F_LINK is set, and we fail the head request.
io_req_find_next
io_poll_wake
io_timeout_fn
io_async_find_and_cancel
io_link_timeout_fn
io_wqe_enqueue
io_work_cancel
io_wqe_cancel_cb_work
io_wq_worker_cancel
io_wqe_cancel_work
iomap_iop_set_range_uptodate
write_sequnlock_irqrestore
read_sequnlock_excl_irqrestore