Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/spinlock.h    Create Date: 2022-07-28 05:35:20
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: spin_unlock_irq

Proto: static __always_inline void spin_unlock_irq(spinlock_t *lock)

Type: void

Parameter:

Type            Name
spinlock_t *    lock

388  raw_spin_unlock_irq(&lock->rlock);
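
The body simply forwards to the raw spinlock API on the raw_spinlock_t embedded in spinlock_t. Below is a minimal usage sketch (hypothetical driver code, not from the kernel tree; demo_lock, demo_counter and demo_update are made-up names): spin_lock_irq()/spin_unlock_irq() bracket a critical section on data shared with a hard-irq handler. Because spin_unlock_irq() unconditionally re-enables local interrupts, this pair is only safe where interrupts are known to be enabled on entry; otherwise spin_lock_irqsave()/spin_unlock_irqrestore() is the appropriate choice.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);   /* hypothetical example lock */
static int demo_counter;             /* data also touched from an irq handler */

static void demo_update(void)
{
	spin_lock_irq(&demo_lock);   /* disable local irqs, then take the lock */
	demo_counter++;              /* critical section */
	spin_unlock_irq(&demo_lock); /* drop the lock, re-enable local irqs */
}
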
Caller
Name - Describe
copy_sighand
copy_process - Create a new process
do_group_exit - Take down every thread in the group. This is called by fatal signals as well as by sys_exit_group (below).
wait_task_zombie - Handle sys_wait4 work for one task in state EXIT_ZOMBIE. We hold read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have released the lock and the system call should return.
wait_task_stopped - Wait for %TASK_STOPPED or %TASK_TRACED. @wo: wait options; @ptrace: is the wait for ptrace; @p: task to wait for. Handle sys_wait4() work for %p in state %TASK_STOPPED or %TASK_TRACED.
wait_task_continued - Handle do_wait work for one task in a live, non-stopped state. read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have released the lock and the system call should return.
ptrace_freeze_traced - Ensure that nothing can wake it up, even SIGKILL
ptrace_unfreeze_traced
ptrace_peek_siginfo
ptrace_resume
ptrace_request
alloc_uid
uid_cache_init
calculate_sigpending
ptrace_stop - This must be called with current->sighand->siglock held. This should be the path for all ptrace stops. We always set current->last_siginfo while stopped here. That makes it a way to test a stopped process for being ptrace-stopped vs being job-control-stopped.
ptrace_notify
do_signal_stop - handle group stop for SIGSTOP and other stop signals. @signr: signr causing group stop if initiating. If %JOBCTL_STOP_PENDING is not set yet, initiate group stop with @signr and participate in it.
do_freezer_trap - handle the freezer jobctl trap. Puts the task into frozen state, if only the task is not about to quit. In this case it drops JOBCTL_TRAP_FREEZE. CONTEXT: Must be called with @current->sighand->siglock held, which is always released before returning.
get_signal
exit_signals
__set_current_blocked
do_sigpending
do_sigtimedwait - wait for queued signals specified in @which. @which: queued signals to wait for; @info: if non-null, the signal's siginfo is returned here; @ts: upper bound on process time suspension.
kernel_sigaction - For kthreads only, must not be used if cloned with CLONE_SIGHAND
do_sigaction
call_usermodehelper_exec_async - This is the task which runs the usermode application
wq_worker_sleeping - a worker is going to sleep. @task: task going to sleep. This function is called from schedule() when a busy worker is going to sleep.
put_pwq_unlocked - put_pwq() with surrounding pool lock/unlock. @pwq: pool_workqueue to put (can be %NULL). put_pwq() with locking; this function also allows %NULL @pwq.
create_worker - create a new workqueue worker. @pool: pool the new worker will belong to. Create and start a new worker which is attached to @pool. CONTEXT: Might sleep. Does GFP_KERNEL allocations. Return: Pointer to the newly created worker.
idle_worker_timeout
pool_mayday_timeout
maybe_create_worker - create a new worker if necessary. @pool: pool to create a new worker for. Create a new worker for @pool if necessary.
process_one_work - process single work. @worker: self; @work: work to process. Process @work.
worker_thread
rescuer_thread - the rescuer thread function. @__rescuer: self. Workqueue rescuer thread function.
flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing. @wq: workqueue being flushed; @flush_color: new flush color, < 0 for no-op; @work_color: new work color, < 0 for no-op. Prepare pwqs for workqueue flushing.
drain_workqueue - drain a workqueue. @wq: workqueue to drain. Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on @wq can queue further work items on it.
start_flush_work
put_unbound_pool - put a worker_pool. @pool: worker_pool to put. Put @pool.
wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug. @wq: the target workqueue; @cpu: the CPU coming up or going down; @online: whether @cpu is coming up or going down. This function is to be called from %CPU_DOWN_PREPARE, %CPU_ONLINE.
destroy_workqueue - safely terminate a workqueue. @wq: target workqueue. Safely destroy a workqueue. All work currently pending will be done first.
wq_worker_comm - used to show worker information through /proc/PID/{comm,stat,status}
alloc_pid
disable_pid_allocation
async_unregister_domain - ensure no more anonymous waiters on this domain. @domain: idle domain to flush out of any async_synchronize_full instances. async_synchronize_{cookie|full}_domain() are not flushed since callers of these routines should know the existence of the domain.
get_ucounts
do_wait_intr_irq
do_wait_for_common
__wait_for_common
rcu_sync_enter - Force readers onto slowpath. @rsp: Pointer to rcu_sync structure to use for synchronization. This function is used by updaters who need readers to make use of a slowpath during the update.
rcu_sync_exit - Allow readers back onto fast path after grace period. @rsp: Pointer to rcu_sync structure to use for synchronization. This function is used by updaters who have completed, and can therefore now allow readers to make use of their fastpaths.
rcu_sync_dtor - Clean up an rcu_sync structure. @rsp: Pointer to rcu_sync structure to be cleaned up.
klp_send_signals - Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set. Kthreads with TIF_PATCH_PENDING set are woken up.
__refrigerator - The refrigerator is the place where frozen processes are stored :-).
set_freezable - make %current freezable. Mark %current freezable and enter refrigerator if necessary.
do_timer_create - Create a POSIX.1b interval timer.
itimer_delete - return timer owned by the process, used by exit_itimers
update_rlimit_cpu - Called after updating RLIMIT_CPU to run cpu timer and update tsk->signal->posix_cputimers.bases[clock].nextevt expiration cache if necessary. Needs siglock protection since other code may update the expiration cache as well.
do_cpu_nanosleep
get_cpu_itimer
do_getitimer
set_cpu_itimer
do_setitimer
fill_ac - Write an accounting entry for an exiting process. The acct_process() call is the workhorse of the process accounting system. The struct acct is built here and then written into the accounting file. This function should only be called from do_exit() or when switching to a different output file.
acct_collect - collect accounting information into pacct_struct. @exitcode: task exit code; @group_dead: not 0, if this thread is the last one in the process.
cgroup_task_count - count the number of tasks in a cgroup. @cgrp: the cgroup in question
find_css_set - return a new css_set with one cgroup updated. @old_cset: the baseline css_set; @cgrp: the cgroup to be updated. Return a new css_set that's equivalent to @old_cset, but with @cgrp substituted into the appropriate hierarchy.
cgroup_destroy_root
cgroup_rm_file
rebind_subsystems
cgroup_show_path
cgroup_setup_root
cgroup_do_get_tree
cgroup_path_ns
task_cgroup_path - cgroup path of a task in the first cgroup hierarchy. @task: target task; @buf: the buffer to write the path into; @buflen: the length of the buffer. Determine @task's cgroup on the first (the one with the lowest non-zero hierarchy_id) cgroup hierarchy and copy its path into @buf.
cgroup_migrate_execute - migrate a taskset. @mgctx: migration context. Migrate tasks in @mgctx as setup by migration preparation functions. This function fails iff one of the ->can_attach callbacks fails.
cgroup_migrate_finish - cleanup after attach. @mgctx: migration context. Undo cgroup_migrate_add_src() and cgroup_migrate_prepare_dst(). See those functions for details.
cgroup_migrate - migrate a process or task to a cgroup. @leader: the leader of the process or the task to migrate; @threadgroup: whether @leader points to the whole process or a single task; @mgctx: migration context
cgroup_attach_task - attach a task or a whole threadgroup to a cgroup. @dst_cgrp: the cgroup to attach to; @leader: the task or the leader of the threadgroup to be attached; @threadgroup: attach the whole threadgroup?
cgroup_update_dfl_csses - update css assoc of a subtree in default hierarchy. @cgrp: root of the subtree to update csses for. @cgrp's control masks have changed and its subtree's css associations need to be updated accordingly.
cgroup_add_file
css_task_iter_start - initiate task iteration. @css: the css to walk tasks of; @flags: CSS_TASK_ITER_* flags; @it: the task iterator to use. Initiate iteration through the tasks of @css.
css_task_iter_next - return the next task for the iterator. @it: the task iterator being iterated. The "next" function for task iteration. @it should have been initialized via css_task_iter_start(). Returns NULL when the iteration reaches the end.
css_task_iter_end - finish task iteration. @it: the task iterator to finish. Finish task iteration started by css_task_iter_start().
cgroup_procs_write
cgroup_threads_write
css_release_work_fn
cgroup_create - The returned cgroup is fully initialized including its control mask, but it isn't associated with its kernfs_node and doesn't have the control mask applied.
cgroup_destroy_locked - the first stage of cgroup destruction. @cgrp: cgroup to be destroyed. css's make use of percpu refcnts whose killing latency shouldn't be exposed to userland and are RCU protected.
proc_cgroup_show - Print task's cgroup paths into seq_file, one line for each hierarchy. Used for /proc/<pid>/cgroup.
cgroup_post_fork - called on a new task after adding it to the task list. @child: the task in question. Adds the task to the list running through its css_set if necessary and call the subsystem fork() callbacks.
cgroup_exit - detach cgroup from exiting task. @tsk: pointer to task_struct of exiting process. Description: Detach cgroup from @tsk.
cgroup_release
cgroup_rstat_flush_locked - see cgroup_rstat_flush()
cgroup_rstat_flush - flush stats in @cgrp's subtree. @cgrp: target cgroup. Collect all per-cpu stats in @cgrp's subtree into the global counters and propagate them upwards.
cgroup_rstat_flush_release - release cgroup_rstat_flush_hold()
copy_cgroup_ns
cgroup_attach_task_all - attach task 'tsk' to all cgroups of task 'from'. @from: attach to all cgroups of a given task; @tsk: the task to be attached.
cgroup_transfer_tasks - move tasks from one cgroup to another. @to: cgroup to which the tasks will be moved; @from: cgroup in which the tasks currently reside. Locking rules between cgroup_post_fork() and the migration path guarantee that, if a task is forked while being migrated, it cannot escape the migration.
cgroup1_release_agent - Notify userspace when a cgroup is released, by running the configured release agent with the name of the cgroup (path relative to the root of cgroup file system) as the argument.
cgroup_enter_frozen - Enter frozen/stopped state, if not yet there. Update cgroup's counters, and revisit the state of the cgroup, if necessary.
cgroup_leave_frozen - Conditionally leave frozen/stopped state
cgroup_do_freeze - Freeze or unfreeze all tasks in the given cgroup.
update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset. @cpuset: The cpuset that requests change in partition root state; @cmd: Partition root state change command; @newmask: Optional new cpumask for partcmd_update; @tmp: Temporary addmask
update_cpumasks_hier - Update effective cpumasks and tasks in the subtree. @cs: the cpuset to consider; @tmp: temp variables for calculating effective_cpus & partition setup. When the configured cpumask is changed, the effective cpumasks of this cpuset are updated.
update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it. @cs: the cpuset to consider; @trialcs: trial cpuset; @buf: buffer of cpu numbers written to this cpuset
update_nodemasks_hier - Update effective nodemasks and tasks in the subtree. @cs: the cpuset to consider; @new_mems: a temp variable for calculating new effective_mems. When the configured nodemask is changed, the effective nodemasks of this cpuset are updated.
update_nodemask - Handle user request to change the 'mems' memory placement of a cpuset.
update_flag - read a 0 or a 1 in a file and update associated flag. Call with cpuset_mutex held.
cpuset_common_seq_show - These ascii lists should be read in a single call, by using a user buffer large enough to hold the entire map.
cpuset_css_online
cpuset_bind
hotplug_update_tasks_legacy
hotplug_update_tasks
cpuset_hotplug_workfn - CPU / memory hotplug is handled asynchronously.
current_css_set_read
current_css_set_cg_links_read
cgroup_css_links_read
zap_pid_ns_processes
seccomp_set_mode_strict - internal function for setting strict seccomp. Once current->seccomp.mode is non-zero, it may not be changed. Returns 0 on success or -EINVAL on failure.
taskstats_tgid_alloc
uprobe_deny_signal - If we are singlestepping, then ensure this thread is not connected to non-fatal signals until completion of singlestep. When the xol insn itself triggers the signal, restart the original insn.
handle_singlestep - Perform required fix-ups and disable singlestep. Allow pending signals to take effect.
wait_on_page_bit_common
activate_page
isolate_lru_page - tries to isolate a page from its LRU list. @page: page to isolate from its LRU list. Isolates a @page from an LRU list, clears PageLRU and adjusts the vmstat statistic corresponding to whatever LRU list the page was on.
move_pages_to_lru - This moves pages from @list to corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast.
shrink_inactive_list - a helper for shrink_node(); it returns the number of reclaimed pages.
shrink_active_list
get_scan_count - Determine how aggressively the anon and file LRU lists should be scanned.
check_move_unevictable_pages - check pages for evictability and move to appropriate zone lru list. @pvec: pagevec with lru pages to check. Checks pages for evictability; if an evictable page is in the unevictable lru list, it is moved to the appropriate evictable lru list.
pagetypeinfo_showfree_print
pcpu_balance_workfn - Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one empty chunk.
list_lru_walk_one_irq
shadow_lru_isolate
munlock_vma_page - munlock a vma page. @page: page to be unlocked, either a normal page or THP page head. Returns the size of the page as a page mask (0 for normal page, HPAGE_PMD_NR - 1 for THP head page).
__munlock_pagevec - Munlock a batch of pages from the same zone. The work is split to two main phases.
drain_slots_cache_cpu
free_swap_slot
show_pools
reap_alien - Called from cache_reap() to regularly drain alien caches round robin.
init_cache_node
setup_kmem_cache_node
drain_cpu_caches
drain_freelist
__do_tune_cpucache - Always called with the slab_mutex held
drain_array - Drain an array if it contains any elements taking the node lock only if necessary. Note that the node listlock also protects the array_cache if drain_array() is used on the shared array.
get_slabinfo
free_partial - Attempt to free all partial slabs on a node. This is called from __kmem_cache_shutdown(). We must take list_lock because sysfs file might still access partial list after the shutdown.
mem_cgroup_largest_soft_limit_node
unlock_page_lru
mem_cgroup_soft_limit_reclaim
page_idle_get_page - Idle page tracking only considers user memory pages; for other types of pages the idle flag is always unset and an attempt to set it is silently ignored.
percpu_stats_show
bio_dirty_fn - bio_check_pages_dirty() will check that all the BIO's pages are still dirty. If they are, then fine. If, however, some pages are clean then they must have been written out during the direct-IO read. So we take another ref on the BIO and re-dirty the pages in process context.
queue_max_sectors_store
blk_insert_flush - insert a new PREFLUSH/FUA request. @rq: request to insert. To be called from __elv_add_request() for %ELEVATOR_INSERT_FLUSH insertions, or __blk_mq_run_hw_queue() to dispatch request. @rq is being submitted.
ioc_clear_queue - break any ioc association with the specified queue. @q: request_queue being cleared. Walk @q->icq_list and exit all io_cq's.
ioc_create_icq - create and link io_cq. @ioc: io_context of interest; @q: request_queue of interest; @gfp_mask: allocation mask. Make sure io_cq linking @ioc and @q exists.
blk_mq_requeue_work
blk_mq_mark_tag_wait - Mark us waiting for a tag. For shared tags, this involves hooking us into the tag wakeups. For non-shared tags, we can simply mark us needing a restart. For both cases, take care to check the condition again after marking us as waiting.
blk_mq_sched_assign_ioc
disk_flush_events - schedule immediate event checking and flushing. @disk: disk to check and flush events for; @mask: events to flush. Schedule immediate event checking on @disk if not blocked. Events in @mask are scheduled to be cleared from the driver.
disk_clear_events - synchronously check, clear and return pending events. @disk: disk to fetch and clear events from; @mask: mask of events to be fetched and cleared. Disk events are synchronously checked and pending events in @mask are cleared and returned.
disk_check_events
bsg_set_command_q
blkg_destroy_all - destroy all blkgs associated with a request_queue. @q: request_queue of interest. Destroy all blkgs associated with @q.
blkcg_reset_stats
blkcg_print_blkgs - helper for printing per-blkg data. @sf: seq_file to print to; @blkcg: blkcg of interest; @prfill: fill function to print out a blkg; @pol: policy in question; @data: data to be passed to @prfill; @show_total: to print out sum of prfill return values or not.
blkg_conf_prep - parse and prepare for per-blkg config update. @blkcg: target block cgroup; @pol: target policy; @input: input string; @ctx: blkg_conf_ctx to be filled. Parse per-blkg config update from @input and initialize @ctx with the result.
blkg_conf_finish - finish up per-blkg config update. @ctx: blkg_conf_ctx initialized by blkg_conf_prep(). Finish up after per-blkg config update. This function must be paired with blkg_conf_prep().
blkcg_print_stat
blkcg_destroy_blkgs - responsible for shooting down blkgs. @blkcg: blkcg of interest. blkgs should be removed while holding both q and blkcg locks.
blkcg_init_queue - initialize blkcg part of request queue. @q: request_queue to initialize. Called from blk_alloc_queue_node(). Responsible for initializing blkcg part of new request_queue @q. RETURNS: 0 on success, -errno on failure.
blkcg_activate_policy - activate a blkcg policy on a request_queue. @q: request_queue of interest; @pol: blkcg policy to activate. Activate @pol on @q.
blkcg_deactivate_policy - deactivate a blkcg policy on a request_queue. @q: request_queue of interest; @pol: blkcg policy to deactivate. Deactivate @pol on @q. Follows the same synchronization rules as blkcg_activate_policy().
throtl_pending_timer_fn
blk_throtl_dispatch_work_fn - work function for throtl_data->dispatch_work. @work: work item being executed. This function is queued for execution when bios reach the bio_lists[] of throtl_data->service_queue. Those bios are ready and issued by this function.
blk_throtl_bio
blk_throtl_drain - drain throttled bios. @q: request_queue to drain throttled bios for. Dispatch all currently throttled bios on @q through ->make_request_fn().
iocg_activate
ioc_timer_fn
ioc_rqos_throttle
ioc_rqos_queue_depth_changed
ioc_rqos_exit
blk_iocost_init
ioc_weight_write
ioc_qos_write
ioc_cost_model_write
kyber_get_domain_token
bfq_bio_merge
bfq_end_wr
bfq_dispatch_request
bfq_insert_request
bfq_exit_queue
bfq_init_queue
queue_requeue_list_stop
blk_pre_runtime_suspend - Pre runtime suspend check. @q: the queue of the device. Description: This function will check if runtime suspend is allowed for the device by examining if there are any requests pending in the queue.
blk_post_runtime_suspend - Post runtime suspend processing. @q: the queue of the device; @err: return value of the device's runtime_suspend function. Description: Update the queue's runtime status according to the return value of the device's runtime_suspend function.
blk_pre_runtime_resume - Pre runtime resume processing. @q: the queue of the device. Description: Update the queue's runtime status to RESUMING in preparation for the runtime resume of the device.
blk_post_runtime_resume - Post runtime resume processing. @q: the queue of the device; @err: return value of the device's runtime_resume function. Description: Update the queue's runtime status according to the return value of the device's runtime_resume function.
blk_set_runtime_active - Force runtime status of the queue to be active. @q: the queue of the device. If the device is left runtime suspended during system suspend the resume hook typically resumes the device and corrects runtime status accordingly.
selinux_bprm_committed_creds - Clean up the process immediately after the installation of new credentials due to exec.
de_thread - This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes. (Other processes might share the signal table via the CLONE_SIGHAND option to clone().)
pipe_read
pipe_write
wait_sb_inodes - The @s_sync_lock is used to serialise concurrent sync operations to avoid lock contention problems with concurrent wait_sb_inodes() calls. Concurrent callers will block on the s_sync_lock rather than doing contending walks.
pin_remove
pin_kill
ep_poll - Retrieves ready events, and delivers them to the caller-supplied event buffer. @ep: Pointer to the eventpoll context; @events: Pointer to the userspace buffer where the ready events should be stored.
signalfd_poll
signalfd_dequeue
do_signalfd4
timerfd_read
timerfd_show
do_timerfd_settime
do_timerfd_gettime
eventfd_read
eventfd_write
eventfd_show_fdinfo
handle_userfault - The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward.
userfaultfd_event_wait_completion
userfaultfd_release
userfaultfd_ctx_read
__wake_userfault
userfaultfd_show_fdinfo
free_ioctx_users - When this function runs, the kioctx has been removed from the "hash table" and ctx->users has dropped to 0, so we know no more kiocbs can be submitted; now it's safe to cancel any that need to be.
user_refill_reqs_available - Called to refill reqs_available when aio_get_req() encounters an out of space in the completion ring.
aio_poll_complete_work
aio_poll
SYSCALL_DEFINE3 - sys_io_cancel: Attempts to cancel an iocb previously passed to io_submit. If the operation is successfully cancelled, the resulting event is copied into the memory pointed to by result without being placed into the completion queue and 0 is returned.
io_kill_timeouts
io_poll_remove_all
io_poll_remove - Find a running poll command that matches one specified in sqe->addr, and remove it if found.
io_poll_complete_work
io_poll_add
io_timeout_remove - Remove or update an existing timeout command
io_timeout
io_req_defer
io_grab_files
io_queue_linked_timeout
io_uring_cancel_files
__io_worker_unuse - Note: drops the wqe->lock if returning true! The caller must re-acquire the lock in that case. Some callers need to restart handling if this happens, so we can't just re-acquire the lock on behalf of the caller.
io_worker_exit
io_worker_handle_work
io_wqe_worker
io_wq_worker_sleeping - Called when worker is going to sleep. If there are no workers currently running and we have work pending, wake up a free one or have the manager set one up.
create_io_worker
io_wq_manager - Manager thread. Tasked with creating new workers, if we need them.
zap_threads
coredump_finish
write_sequnlock_irq
read_sequnlock_excl_irq