Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: fs/exec.c  Create Date: 2022-07-28 20:03:59
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes. (Other processes might share the signal table via the CLONE_SIGHAND option to clone().)

Proto:static int de_thread(struct task_struct *tsk)

Type:int

Parameter:

Type                    Name
struct task_struct *    tsk
1062  sig = tsk->signal
1063  oldsighand = tsk->sighand
1064  lock = &oldsighand->siglock
1066  If thread_group_empty(tsk) Then Go to no_thread_group
1072  spin_lock_irq(lock)
1073  If signal_group_exit(sig) (true when all threads except ->group_exit_task have pending SIGKILL, i.e. another group action is in progress) Then
1078  spin_unlock_irq(lock)
1079  Return -EAGAIN
1082  group_exit_task = tsk
1083  notify_count = zap_other_threads(tsk) (nuke all other threads in the group; group_exit_task is notified when ->count equals notify_count)
1084  If Not thread_group_leader(tsk) Then notify_count--
1087  When notify_count cycle
1088  __set_current_state(TASK_KILLABLE)
1089  spin_unlock_irq(lock)
1090  schedule()
1091  If __fatal_signal_pending(tsk) Then Go to killed
1093  spin_lock_irq(lock)
1095  spin_unlock_irq(lock)
1102  If Not thread_group_leader(tsk) Then
1103  leader = tsk->group_leader
1105  cycle
1118  schedule()
1119  If __fatal_signal_pending(tsk) Then Go to killed
1133  start_time = leader->start_time
1134  start_boottime = leader->start_boottime
1136  BUG_ON(!same_thread_group(leader, tsk))
1137  BUG_ON(has_group_leader_pid(tsk)) (due to the insanities of de_thread it is possible for a process to have the pid of the thread group leader without actually being the thread group leader)
1150  pid = leader->pid
1151  change_pid(tsk, PIDTYPE_PID, task_pid(leader))
1152  transfer_pid(leader, tsk, PIDTYPE_TGID) (transfer_pid is an optimization of attach_pid(new) followed by detach_pid(old))
1153  transfer_pid(leader, tsk, PIDTYPE_PGID)
1154  transfer_pid(leader, tsk, PIDTYPE_SID)
1156  list_replace_rcu(&leader->tasks, &tsk->tasks) (the old entry is replaced with the new one atomically; concurrent RCU readers see either the old or the new entry)
1157  list_replace_init(&leader->sibling, &tsk->sibling)
1159  tsk->group_leader = tsk
1160  leader->group_leader = tsk
1162  tsk->exit_signal = SIGCHLD
1163  leader->exit_signal = -1
1165  BUG_ON(leader->exit_state != EXIT_ZOMBIE)
1166  leader->exit_state = EXIT_DEAD
1173  If unlikely(leader->ptrace) Then __wake_up_parent(leader, leader->parent)
1175  write_unlock_irq( & tasklist_lock)
1176  cgroup_threadgroup_change_end(tsk) (counterpart of cgroup_threadgroup_change_begin())
1178  release_task(leader)
1181  group_exit_task = NULL
1182  notify_count = 0
1184  no_thread_group :
1186  exit_signal = SIGCHLD
1193  If refcount_read(&oldsighand->count) != 1 (the sighand is shared with a CLONE_SIGHAND but not CLONE_THREAD task) Then
1199  newsighand = kmem_cache_alloc(sighand_cachep, GFP_KERNEL) (SLAB cache for sighand_struct structures)
1200  If Not newsighand Then Return -ENOMEM
1203  refcount_set(&newsighand->count, 1)
1204  memcpy(newsighand->action, oldsighand->action, sizeof(newsighand->action))
1207  write_lock_irq( & tasklist_lock)
1208  spin_lock(&oldsighand->siglock)
1209  rcu_assign_pointer(tsk->sighand, newsighand) (publishes the new pointer so that concurrent RCU readers see any prior initialization)
1210  spin_unlock(&oldsighand->siglock)
1211  write_unlock_irq( & tasklist_lock)
1213  __cleanup_sighand(oldsighand)
1216  BUG_ON(!thread_group_leader(tsk))
1217  Return 0
1219  killed :
1221  read_lock( & tasklist_lock)
1222  group_exit_task = NULL
1223  notify_count = 0
1224  read_unlock( & tasklist_lock)
1225  Return -EAGAIN
Caller
Name              Describe
flush_old_exec    Calling this is the point of no return. None of the failures will be seen by userspace, since either the process is already taking a fatal signal (via de_thread() or coredump) or will have SEGV raised.