Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: fs/io-wq.c Create Date: 2022-07-28 20:22:38
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: io_worker_exit

Proto: static void io_worker_exit(struct io_worker *worker)

Type: void

Parameter:

Type | Name
struct io_worker * | worker
198  wqe = worker->wqe
199  acct = io_wqe_get_acct(wqe, worker)
206  set_current_state(TASK_INTERRUPTIBLE)
207  If Not refcount_dec_and_test(&worker->ref) Then schedule() — refcount_dec_and_test() decrements a refcount and tests whether it reached 0; like atomic_dec_and_test(), it WARNs on underflow and fails to decrement when saturated at REFCOUNT_SATURATED. If someone else still holds a brief reference to the worker, the worker sleeps until that reference is dropped.
209  __set_current_state(TASK_RUNNING) — set_current_state() includes a barrier so that the write of current->state is correctly serialised wrt the caller's subsequent test of whether to actually sleep; here the task is simply restored to TASK_RUNNING.
211  preempt_disable() — even without preemption, preempt disable/enable must act as barriers, so that operations such as get_user()/put_user(), which can fault and schedule, do not migrate into the preempt-protected region.
212  current->flags &= ~PF_IO_WORKER (clear the "task is an IO worker" flag)
213  If worker->flags & IO_WORKER_F_RUNNING Then atomic_dec(&acct->nr_running)
215  If Not (worker->flags & IO_WORKER_F_BOUND) Then atomic_dec(&wqe->wq->user->processes)
217  worker->flags = 0
218  preempt_enable()
220  spin_lock_irq(&wqe->lock)
221  hlist_nulls_del_rcu(&worker->nulls_node) — deletes the entry from the hash list without re-initialization; note that hlist_nulls_unhashed() on the entry does not return true after this, as the entry is left in an undefined state
222  list_del_rcu(&worker->all_list) — deletes the entry from the list without re-initialization
223  If __io_worker_unuse(wqe, worker) Then — note: this helper drops wqe->lock when it returns true! The caller must re-acquire the lock in that case; some callers need to restart handling when this happens, so the lock cannot simply be re-acquired on the helper's behalf
224  __release(&wqe->lock)
225  spin_lock_irq(&wqe->lock)
227  acct->nr_workers--
228  nr_workers = wqe->acct[IO_WQ_ACCT_BOUND].nr_workers + wqe->acct[IO_WQ_ACCT_UNBOUND].nr_workers
230  spin_unlock_irq(&wqe->lock)
233  If Not nr_workers && refcount_dec_and_test(&wqe->wq->refs) Then complete(&wqe->wq->done) — complete() signals a single thread waiting on this completion; threads are awakened in the order in which they were queued. All workers are gone, so workqueue exit can proceed.
236  kfree_rcu(worker, rcu) — frees the worker object after an RCU grace period
Caller
Name | Describe
io_wqe_worker