Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/exit.c    Create Date: 2022-07-28 09:03:18
Last Modify: 2020-03-17 11:17:32    Copyright © Brick

Name:do_exit

Proto:void __noreturn do_exit(long code)

Type:void

Parameter:

Type    Name
long    code
713  tsk = current process
716  profile_task_exit(tsk)
717  kcov_task_exit(tsk)
719  WARN_ON(blk_needs_flush_plug(tsk))
721  If unlikely(in_interrupt()) Then panic("Aiee, killing interrupt handler!") - halt the system; panic() never returns
723  If unlikely(!tsk->pid) Then panic("Attempted to kill the idle task!") - halt the system; panic() never returns
733  set_fs(USER_DS)
735  ptrace_event(PTRACE_EVENT_EXIT, code) - if PTRACE_EVENT_EXIT is enabled, report the event and the exit code to the ptrace parent
737  validate_creds_for_do_exit(tsk)
743  If unlikely(tsk->flags & PF_EXITING) Then - we are taking recursive faults in do_exit; safest is to leave this task alone and wait for reboot
744  pr_alert("Fixing recursive fault but reboot is needed!\n")
745  futex_exit_recursive(tsk)
746  set_current_state(TASK_UNINTERRUPTIBLE)
747  schedule()
750  exit_signals(tsk)
752  If unlikely(in_atomic()) Then - note: in_atomic() cannot always detect atomic context, e.g. spinlocks held on non-preemptible kernels
753  pr_info("note: %s[%d] exited with preempt_count %d\n", tsk->comm, task_pid_nr(current), preempt_count())
756  preempt_count_set(PREEMPT_ENABLED)
760  If tsk->mm Then sync_mm_rss(tsk->mm) - sync the mm's RSS counters before statistics gathering
762  acct_update_integrals(tsk)
763  group_dead = atomic_dec_and_test( & tsk->signal->live)
764  If group_dead Then
769  If unlikely(is_global_init(tsk)) Then panic("Attempted to kill init! exitcode=0x%08x\n", ...) - if the last thread of global init has exited, panic immediately to get a usable coredump
777  If tsk->mm Then setmax_mm_hiwater_rss( & tsk->signal->maxrss, tsk->mm)
780  acct_collect(code, group_dead)
781  If group_dead Then tty_audit_exit()
783  audit_free(tsk)
785  tsk->exit_code = code
786  taskstats_exit(tsk, group_dead)
788  exit_mm(tsk) - turn us into a lazy TLB process if we aren't already
790  If group_dead Then acct_process()
792  trace_sched_process_exit(tsk) - tracepoint for a task exiting
794  exit_sem(tsk)
795  exit_shm(tsk)
796  exit_files(tsk)
797  exit_fs(tsk)
798  If group_dead Then disassociate_ctty(1)
800  exit_task_namespaces(tsk)
801  exit_task_work(tsk)
802  exit_thread(tsk) - free current thread data structures etc.
803  exit_umh(tsk)
811  perf_event_exit_task(tsk)
813  sched_autogroup_exit_task(tsk)
814  cgroup_exit(tsk)
819  flush_ptrace_hw_breakpoint(tsk)
821  exit_tasks_rcu_start()
822  exit_notify(tsk, group_dead) - send signals to all our closest relatives so that they know to properly mourn us
823  proc_exit_connector(tsk)
824  mpol_put_task_policy(tsk)
832  debug_check_no_locks_held()
834  If tsk->io_context Then exit_io_context(tsk)
837  If tsk->splice_pipe Then free_pipe_info(tsk->splice_pipe) - release the pipe cached for splice()
840  If tsk->task_frag.page Then put_page(tsk->task_frag.page)
843  validate_creds_for_do_exit(tsk)
845  check_stack_usage()
846  preempt_disable() - even without preemption, preempt disable/enable must act as barriers, so that faulting get_user/put_user cannot schedule/migrate us out of the preempt-protected region
847  If tsk->nr_dirtied Then __this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied) - account dirtied pages that never triggered a balance_dirty_pages() throttling pause
849  exit_rcu() - without preemptible RCU, a task cannot exit while inside an RCU read-side critical section
850  exit_tasks_rcu_finish()
852  lockdep_free_task(tsk)
853  do_task_dead()
Caller
Name    Describe
save_v86_state
complete_and_exit
SYSCALL_DEFINE1
do_group_exit - Take down every thread in the group. This is called by fatal signals as well as by sys_exit_group (below).
call_usermodehelper_exec_async - This is the task which runs the usermode application.
kthread
SYSCALL_DEFINE4 (reboot) - Reboot system call: for obvious reasons only root may call it, and even root needs to set up some magic numbers in the registers so that some mistake won't make this reboot the whole machine. You can also set the meaning of the ctrl-alt-del-key here.
__module_put_and_exit - A thread that wants to hold a reference to a module only while it is running can call this to safely exit. nfsd and lockd use this.
reboot_pid_ns
__secure_computing_strict