Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code:kernel/workqueue.c Create Date:2022-07-28 09:26:41
Last Modify:2020-03-12 14:18:49 Copyright©Brick

Name:process_one_work - process a single work item (@worker: self, @work: work to process). Process @work.

Proto:static void process_one_work(struct worker *worker, struct work_struct *work)__releases(&pool->lock) __acquires(&pool->lock)

Type:void

Parameter:

Type                    Name
struct worker *         worker
struct work_struct *    work
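
For reference, a minimal sketch of the declaration as it would appear in kernel/workqueue.c, with the sparse annotations from the Proto line above documenting that the function temporarily releases and then reacquires pool->lock; the body is elided here and walked through line by line below:

    static void process_one_work(struct worker *worker, struct work_struct *work)
    __releases(&pool->lock)
    __acquires(&pool->lock)
    {
            /* ... body, see the line-by-line walkthrough below ... */
    }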
2161  pwq = get_work_pwq(work)
2162  pool = worker->pool (the associated pool)
2163  cpu_intensive = pwq->wq->flags & WQ_CPU_INTENSIVE
2179  WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) && raw_smp_processor_id() != pool->cpu) - ensure we are running on the pool's CPU unless the pool is disassociated
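
A sketch of how the setup and sanity check above (lines 2161-2179) read in the source; comments and spacing may differ slightly from the v5.5 code:

    struct pool_workqueue *pwq = get_work_pwq(work);
    struct worker_pool *pool = worker->pool;                 /* the associated pool */
    bool cpu_intensive = pwq->wq->flags & WQ_CPU_INTENSIVE;

    /* ensure we're on the correct CPU, unless the pool is disassociated */
    WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
                 raw_smp_processor_id() != pool->cpu);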
2188  collision = find_worker_executing_work(pool, work) - find the worker, if any, which is already executing @work on @pool by searching pool->busy_hash, which is keyed by the address of @work
2189  If unlikely(collision) Then
2190  move_linked_works(work, &collision->scheduled, NULL) - defer @work (and any linked works) to the worker that is already executing it
2191  Return
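
A sketch of the collision handling (lines 2188-2191): a single work item must not run concurrently on multiple workers of one pool, so if another worker is already executing @work, the item is deferred to that worker's scheduled list and we return:

    collision = find_worker_executing_work(pool, work);
    if (unlikely(collision)) {
            /* defer to the worker already executing this work */
            move_linked_works(work, &collision->scheduled, NULL);
            return;
    }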
2195  debug_work_deactivate(work)
2196  hash_add(pool->busy_hash, &worker->hentry, (unsigned long)work) - mark this worker as busy on @work (a worker is either on busy_hash or idle_list)
2197  worker->current_work = work (work being processed)
2198  worker->current_func = work->func
2199  worker->current_pwq = pwq
2200  work_color = get_work_color(work)
2206  strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN) - record the workqueue name in the worker's description buffer for debug reporting
2208  list_del_init(&work->entry) - remove @work from the pending list and reinitialize its entry
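
A sketch of the claim-and-dequeue step (lines 2195-2208), which marks the worker busy on @work and removes the item from the pending list; the WORKER_DESC_LEN bound is taken from the size of worker->desc:

    debug_work_deactivate(work);
    hash_add(pool->busy_hash, &worker->hentry, (unsigned long)work);
    worker->current_work = work;
    worker->current_func = work->func;
    worker->current_pwq = pwq;
    work_color = get_work_color(work);

    /* record the workqueue name for debug reporting */
    strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN);

    list_del_init(&work->entry);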
2216  If unlikely(cpu_intensive) Then worker_set_flags(worker, WORKER_CPU_INTENSIVE) - CPU-intensive works don't participate in concurrency management; set the flag and adjust nr_running accordingly (CONTEXT: spin_lock_irq(pool->lock))
2226  If need_more_worker(pool) Then wake_up_worker(pool) - wake up the first idle worker of @pool; because unbound workers never contribute to nr_running, this condition is always true for unbound pools as long as the worklist isn't empty
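
A sketch of the concurrency-management step (lines 2216-2226): CPU-intensive works are taken out of concurrency management, and another worker is woken if the pool still needs one:

    /* CPU-intensive works don't participate in concurrency management */
    if (unlikely(cpu_intensive))
            worker_set_flags(worker, WORKER_CPU_INTENSIVE);

    /* chain execution of the remaining pending work items if needed */
    if (need_more_worker(pool))
            wake_up_worker(pool);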
2235  set_work_pool_and_clear_pending(work, pool->id)
2237  spin_unlock_irq(&pool->lock)
2239  lock_map_acquire(&pwq->wq->lockdep_map)
2240  lock_map_acquire(&lockdep_map)
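
A sketch of lines 2235-2240: the last update to @work records the pool ID and clears PENDING while pool->lock is still held, then the lock is dropped and the lockdep maps are acquired; the two acquires on lines 2239/2240 are assumed to target the workqueue-level map and the per-work map respectively:

    /* record the pool and clear PENDING as the last update to @work */
    set_work_pool_and_clear_pending(work, pool->id);

    spin_unlock_irq(&pool->lock);

    lock_map_acquire(&pwq->wq->lockdep_map);
    lock_map_acquire(&lockdep_map);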
2262  lockdep_invariant_state(true)
2263  trace_workqueue_execute_start(work) - tracepoint called immediately before the workqueue callback; allows tracking workqueue execution
2264  worker->current_func(work) - invoke the work function itself
2269  trace_workqueue_execute_end(work) - tracepoint called immediately after the workqueue callback
2270  lock_map_release(&lockdep_map)
2271  lock_map_release(&pwq->wq->lockdep_map)
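
A sketch of the execution step (lines 2262-2271), bracketing the callback with tracepoints and releasing the lockdep maps afterwards:

    lockdep_invariant_state(true);
    trace_workqueue_execute_start(work);
    worker->current_func(work);
    /* @work may be freed by the callback; the end tracepoint only records its address */
    trace_workqueue_execute_end(work);
    lock_map_release(&lockdep_map);
    lock_map_release(&pwq->wq->lockdep_map);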
2273  If unlikely(in_atomic() || lockdep_depth(current) > 0) Then - the work function returned while atomic or with locks still held
2274  pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d\n     last function: %ps\n", current->comm, preempt_count(), task_pid_nr(current), worker->current_func)
2278  debug_show_held_locks(current)
2279  dump_stack()
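
A sketch of the leak check (lines 2273-2279), which reports a work function that returned in atomic context or with locks held and dumps the offender's state:

    if (unlikely(in_atomic() || lockdep_depth(current) > 0)) {
            pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d\n"
                   "     last function: %ps\n",
                   current->comm, preempt_count(), task_pid_nr(current),
                   worker->current_func);
            debug_show_held_locks(current);
            dump_stack();
    }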
2290  cond_resched()
2292  spin_lock_irq(&pool->lock)
2295  If unlikely(cpu_intensive) Then worker_clr_flags(worker, WORKER_CPU_INTENSIVE) - clear the CPU-intensive status and adjust nr_running accordingly (CONTEXT: spin_lock_irq(pool->lock))
2299  worker->last_func = worker->current_func (used by the scheduler to determine the worker's last known identity)
2302  hash_del(&worker->hentry) - remove this worker from pool->busy_hash
2303  worker->current_work = NULL
2304  worker->current_func = NULL
2305  worker->current_pwq = NULL
2306  pwq_dec_nr_in_flight(pwq, work_color) - the work has either completed or been removed from the pending queue, so decrement nr_in_flight of its pwq and handle workqueue flushing
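
A sketch of the epilogue (lines 2290-2306): yield the CPU if needed, retake pool->lock, drop the CPU-intensive flag, remember the last function for the scheduler, and release the work item:

    cond_resched();

    spin_lock_irq(&pool->lock);

    /* clear cpu intensive status */
    if (unlikely(cpu_intensive))
            worker_clr_flags(worker, WORKER_CPU_INTENSIVE);

    /* tag the worker for identification in schedule() */
    worker->last_func = worker->current_func;

    /* we're done with it, release */
    hash_del(&worker->hentry);
    worker->current_work = NULL;
    worker->current_func = NULL;
    worker->current_pwq = NULL;
    pwq_dec_nr_in_flight(pwq, work_color);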
Caller
Name                        Describe
process_scheduled_works     process scheduled works (@worker: self) - process all of the worker's scheduled works
worker_thread
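
For context, a sketch of how the first caller drives this function, assuming the usual process_scheduled_works() loop over worker->scheduled; worker_thread() calls process_one_work() directly for a plain work item and falls back to this loop for linked works:

    static void process_scheduled_works(struct worker *worker)
    {
            while (!list_empty(&worker->scheduled)) {
                    struct work_struct *work =
                            list_first_entry(&worker->scheduled,
                                             struct work_struct, entry);
                    process_one_work(worker, work);
            }
    }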