Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/sched/task.h Create Date: 2022-07-28 05:38:59
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: get_task_struct

Proto: static inline struct task_struct *get_task_struct(struct task_struct *t)

Type: struct task_struct *

Parameter:

Type                  Parameter
struct task_struct *  t
113  refcount_inc - increment a refcount. @r: the refcount to increment. Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
114  Return t.
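
Taken together, those two annotated lines are the whole function. A minimal sketch of how the body reads in this kernel series (reconstructed from the Proto and the annotations above; the t->usage field name is assumed from the v5.x task_struct layout, it is not quoted by this report):

    static inline struct task_struct *get_task_struct(struct task_struct *t)
    {
            refcount_inc(&t->usage); /* line 113: take a reference; saturates at REFCOUNT_SATURATED instead of overflowing */
            return t;                /* line 114: hand the now-pinned task back to the caller */
    }

The returned pointer stays usable until a matching put_task_struct() drops the reference.
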
Caller
Name - Describe
rdtgroup_move_task
_do_fork - fork routine
mm_update_next_owner - A task is exiting. If it owned this mm, find a new owner for the mm.
wait_task_zombie - Handle sys_wait4 work for one task in state EXIT_ZOMBIE. We hold read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have
wait_task_stopped - Wait for %TASK_STOPPED or %TASK_TRACED. @wo: wait options. @ptrace: is the wait for ptrace. @p: task to wait for. Handle sys_wait4() work for %p in state %TASK_STOPPED or %TASK_TRACED
wait_task_continued - Handle do_wait work for one task in a live, non-stopped state. read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have
SYSCALL_DEFINE4
find_get_task_by_vpid
get_pid_task
kthread_stop - stop a thread
__smpboot_create_thread
wake_q_add - queue a wakeup for 'later' waking
do_sched_setscheduler
SYSCALL_DEFINE3 - sys_sched_setattr - same as above, but with extended sched_attr. @pid: the pid in question. @uattr: structure containing the extended parameters. @flags: for future extension.
sched_setaffinity
task_non_contending - The utilization of a task cannot be immediately removed from the rq active utilization (running_bw) when the task blocks
start_dl_timer - If the entity depleted all its runtime, and if we want it to sleep while waiting for some new execution time to become available, we set the bandwidth replenishment timer to the replenishment instant and try to activate it
rwsem_mark_wake - handle the lock release when processes blocked on it that can now run - if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must have been set
rt_mutex_adjust_prio_chain - Adjust the priority chain
task_blocks_on_rt_mutex - Task blocks on lock. Prepare waiter and propagate pi chain. This must be called with lock->wait_lock held and interrupts disabled
remove_waiter - Remove a waiter from a lock and give up. Must be called with lock->wait_lock held and interrupts disabled. I must have just failed to try_to_take_rt_mutex().
rt_mutex_adjust_pi - Recheck the pi chain, in case we got a priority setting. Called from sched_setscheduler
setup_irq_thread
SYSCALL_DEFINE5
__get_task_for_clock
mark_wake_futex - The hash bucket lock must be held when this is called. Afterwards, the futex_q must not be accessed. Callers must ensure to later call wake_up_q() for the actual wakeups to occur.
cgroup_procs_write_start
css_task_iter_next - return the next task for the iterator. @it: the task iterator being iterated. The "next" function for task iteration. @it should have been initialized via css_task_iter_start(). Returns NULL when the iteration reaches the end.
cgroup_transfer_tasks - move tasks from one cgroup to another. @to: cgroup to which the tasks will be moved. @from: cgroup in which the tasks currently reside. Locking rules between cgroup_post_fork() and the migration path guarantee that, if a task is
rcu_lock_break - To avoid extending the RCU grace period for an unbounded amount of time, periodically exit the critical section and enter a new one. For preemptible RCU it is sufficient to call rcu_read_unlock in order to exit the grace period
probe_wakeup
alloc_perf_context
find_lively_task_by_vpid
perf_remove_from_owner - Remove user event from the owner task.
perf_event_alloc - Allocate and initialize an event structure
oom_evaluate_task
wake_oom_reaper
__oom_kill_process
oom_kill_memcg_member - Kill provided task unless it's secured by setting oom_score_adj to OOM_SCORE_ADJ_MIN.
out_of_memory - kill the "best" process when we run out of memory. @oc: pointer to struct oom_control. If we run out of memory, we have the choice between either killing a random task (bad), letting the system crash (worse)
swap_readpage
kernel_migrate_pages
kernel_move_pages - Move a list of pages in the address space of the currently executing process.
add_to_kill - Schedule a process for later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
report_access - defers execution because cmdline access can sleep
yama_task_prctl - check for Yama-specific prctl operations. @option: operation. @arg2: argument. @arg3: argument. @arg4: argument. @arg5: argument. Return 0 on success, -ve on error. -ENOSYS is returned when Yama does not handle the given option.
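
Most of the callers above share one pattern: look the task up while a lock or an RCU read-side critical section keeps it alive, pin it with get_task_struct(), drop the lock, and release the reference later with put_task_struct(). A minimal sketch of that pattern, modeled on what get_pid_task() does (pin_task_of_pid is a hypothetical name invented for this sketch, not a function from this report):

    #include <linux/pid.h>
    #include <linux/rcupdate.h>
    #include <linux/sched/task.h>

    static struct task_struct *pin_task_of_pid(struct pid *pid)
    {
            struct task_struct *p;

            rcu_read_lock();
            p = pid_task(pid, PIDTYPE_PID);   /* lookup is only valid inside the RCU read-side section */
            if (p)
                    get_task_struct(p);       /* take the reference while the task is guaranteed alive */
            rcu_read_unlock();

            return p;                         /* caller must drop the reference with put_task_struct(p) */
    }

Holding that reference is what lets paths like the wait and OOM callers in the list keep using the task_struct after they have dropped tasklist_lock or left the RCU read-side section.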