Function report

Linux Kernel

v5.5.9


Source Code: kernel/sched/psi.c

Name: psi_poll_work

Proto: static void psi_poll_work(struct kthread_work *work)

Return Type: void

Parameter:

Type                     Name
struct kthread_work *    work
586  dwork = container_of(work, struct kthread_delayed_work, work), where container_of() casts a pointer to a member out to the structure that embeds it
587  group = container_of(dwork, struct psi_group, poll_work)
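
For readers unfamiliar with the idiom: container_of() recovers a pointer to an enclosing structure from a pointer to one of its embedded members, by subtracting the member's offset. Below is a minimal standalone sketch of the two hops performed at lines 586-587, with the kernel structures reduced to stubs (the real definitions carry many more fields):

        #include <stddef.h>

        /* Simplified stand-ins for the kernel types involved. */
        struct kthread_work { int pending; };
        struct kthread_delayed_work {
                struct kthread_work work;       /* member handed to the callback */
        };
        struct psi_group {
                struct kthread_delayed_work poll_work;  /* embedded delayed work */
        };

        /* container_of, reduced to the pointer arithmetic (the kernel macro
         * additionally type-checks ptr against the member). */
        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

        static struct psi_group *group_from_work(struct kthread_work *work)
        {
                /* Hop 1: from the kthread_work member to its delayed-work wrapper. */
                struct kthread_delayed_work *dwork =
                        container_of(work, struct kthread_delayed_work, work);
                /* Hop 2: from the delayed work to the psi_group that embeds it. */
                return container_of(dwork, struct psi_group, poll_work);
        }

This is why the callback can receive a bare struct kthread_work * and still find its psi_group: the work item is embedded in the group, so both offsets are known at compile time.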
589  atomic_set(&group->poll_scheduled, 0)
591  mutex_lock(&group->trigger_lock)
593  now = sched_clock(), the scheduler clock, which returns the current time in nanosecond units (default implementation; architectures and sub-architectures can override it)
595  collect_percpu_times(group, PSI_POLL, &changed_states)
597  If changed_states & group->poll_states Then
599  If now > group->polling_until Then init_triggers(group, now), initializing the trigger windows when entering polling mode
607  group->polling_until = now + group->poll_min_period * UPDATES_PER_WINDOW (10 updates per window), keeping the monitor active for at least the minimum tracking window while monitored states keep changing
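
As a worked example with hypothetical numbers: for a single trigger created with a 500 ms tracking window, psi_trigger_create() sets poll_min_period to 500 ms / UPDATES_PER_WINDOW = 50 ms, so every time a monitored state changes, polling_until is pushed out to now + 50 ms * 10 = now + 500 ms. The poller therefore stays armed for at least one full tracking window after the last observed state change before it lets itself go idle.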
611  If now > group->polling_until Then
612  group->polling_next_update = ULLONG_MAX
613  Go to out
616  If now >= group->polling_next_update Then group->polling_next_update = update_triggers(group, now)
619  psi_schedule_poll_work(group, nsecs_to_jiffies(group->polling_next_update - now) + 1), which schedules polling if it is not already scheduled; this is safe to call even from the hotpath because, although kthread_queue_delayed_work() takes worker->lock, that spinlock is never contended thanks to the poll_scheduled atomic preventing such competition
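
The "poll_scheduled atomic" handshake mentioned above lives on the other side, in psi_schedule_poll_work(). A condensed sketch of the v5.5-era logic, abbreviated from the upstream source and best treated as illustrative rather than verbatim:

        static void psi_schedule_poll_work(struct psi_group *group,
                                           unsigned long delay)
        {
                struct kthread_worker *kworker;

                /* Do not reschedule if already scheduled: only the 0 -> 1
                 * transition queues work, so worker->lock is never contended. */
                if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0)
                        return;

                rcu_read_lock();
                kworker = rcu_dereference(group->poll_kworker);
                /* poll_kworker can be NULL if psi_trigger_destroy() races with
                 * the task hotpath, which cannot take locks. */
                if (kworker)
                        kthread_queue_delayed_work(kworker, &group->poll_work,
                                                   delay);
                else
                        atomic_set(&group->poll_scheduled, 0);
                rcu_read_unlock();
        }

psi_poll_work() clearing poll_scheduled back to 0 at line 589 is what re-opens this gate for the next scheduling attempt.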
622  out:
623  mutex_unlock(&group->trigger_lock)
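
Putting the annotated lines back together, the whole function reads roughly as follows. The structure member accesses (group->...) and the psi_schedule_poll_work() call are restored from the upstream v5.5 source; treat this as a reconstruction for reading convenience, not a verbatim copy:

        static void psi_poll_work(struct kthread_work *work)
        {
                struct kthread_delayed_work *dwork;
                struct psi_group *group;
                u32 changed_states;
                u64 now;

                dwork = container_of(work, struct kthread_delayed_work, work);
                group = container_of(dwork, struct psi_group, poll_work);

                /* Allow the next psi_schedule_poll_work() call to queue us again. */
                atomic_set(&group->poll_scheduled, 0);

                mutex_lock(&group->trigger_lock);

                now = sched_clock();

                collect_percpu_times(group, PSI_POLL, &changed_states);

                if (changed_states & group->poll_states) {
                        /* Initialize trigger windows when entering polling mode. */
                        if (now > group->polling_until)
                                init_triggers(group, now);

                        /* Keep the monitor active for at least the minimum
                         * tracking window while monitored states keep changing. */
                        group->polling_until = now +
                                group->poll_min_period * UPDATES_PER_WINDOW;
                }

                if (now > group->polling_until) {
                        /* Window expired with no changes: disarm the poller. */
                        group->polling_next_update = ULLONG_MAX;
                        goto out;
                }

                if (now >= group->polling_next_update)
                        group->polling_next_update = update_triggers(group, now);

                psi_schedule_poll_work(group,
                        nsecs_to_jiffies(group->polling_next_update - now) + 1);

        out:
                mutex_unlock(&group->trigger_lock);
        }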