Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code:kernel/exit.c Create Date:2022-07-28 09:02:19
Last Modify:2020-03-17 11:17:32 Copyright©Brick

Name:__exit_signal

Description:This function expects the tasklist_lock write-locked.

Proto:static void __exit_signal(struct task_struct *tsk)

Type:void

Parameter:

Type                    Parameter Name
struct task_struct *    tsk
94  sig = tsk->signal
95  group_dead = thread_group_leader(tsk)
97  struct tty_struct *tty
100  sighand = rcu_dereference_check(tsk->sighand, lockdep_tasklist_lock_is_held()) - rcu_dereference() with debug checking: the dereference is legal here because tasklist_lock is held
102  spin_lock(&sighand->siglock)
119  If group_dead Then
120  tty = sig->tty
121  sig->tty = NULL
122  Else
127  If sig->notify_count > 0 && !--sig->notify_count Then wake_up_process(sig->group_exit_task) - notify group_exit_task when notify_count reaches zero; everyone except group_exit_task is stopped during delivery of fatal signals, and group_exit_task processes the signal
130  If tsk == sig->curr_target Then sig->curr_target = next_thread(tsk) - curr_target is the thread group's signal load-balancing target
134  add_device_randomness((const void *)&tsk->se.sum_exec_runtime, sizeof(unsigned long long))
143  task_cputime(tsk, & utime, & stime)
144  write_seqlock(&sig->stats_lock) - lock out other writers and update the count; acts like a normal spin_lock/unlock (no preempt_disable() needed, the spin_lock already provides it)
145  sig->utime += utime
146  sig->stime += stime
147  sig->gtime += task_gtime(tsk)
148  sig->min_flt += tsk->min_flt - MM fault and swap info; arguably either mm-specific or thread-specific
149  sig->maj_flt += tsk->maj_flt
150  sig->nvcsw += tsk->nvcsw - context switch counts
151  sig->nivcsw += tsk->nivcsw
152  sig->inblock += task_io_get_inblock(tsk)
153  sig->oublock += task_io_get_oublock(tsk)
154  task_io_accounting_add(&sig->ioac, &tsk->ioac)
155  sig->sum_sched_runtime += tsk->se.sum_exec_runtime - cumulative ns of scheduled CPU time of dead threads in the group, not including a zombie group leader (this only differs from jiffies_to_ns(utime + stime) if sched_clock uses something other than jiffies)
156  sig->nr_threads--
157  __unhash_process(tsk, group_dead)
158  write_sequnlock(&sig->stats_lock) - stats_lock guards the cumulative resource counters for dead threads in the group and for reaped dead child processes; live threads maintain their own counters and add to these in __exit_signal, except for the group leader
164  flush_sigqueue(&tsk->pending)
165  tsk->sighand = NULL
166  spin_unlock(&sighand->siglock)
168  __cleanup_sighand(sighand)
169  clear_tsk_thread_flag(tsk, TIF_SIGPENDING)
170  If group_dead Then
171  flush_sigqueue(&sig->shared_pending) - shared signal handling queue
172  tty_kref_put(tty)
Caller

Name            Describe
release_task