Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/sched/core.c    Create Date: 2022-07-28 09:36:10
Last Modify: 2022-05-22 13:40:38    Copyright © Brick

Name:schedule

Proto:asmlinkage __visible void __sched schedule(void)

Type:void

Parameter:Nothing

4155  tsk = current (the currently running task)
4157  sched_submit_work(tsk)
4158  do {
4159  preempt_disable() - even if we don't have any preemption, we need preempt disable/enable to be barriers, so that we don't have things like get_user/put_user that can cause faults and scheduling migrate into our preempt-protected region.
4160  __schedule(false) - __schedule() is the main scheduler function. The main means of driving the scheduler and thus entering this function are: 1. Explicit blocking: mutex, semaphore, waitqueue, etc. 2. The TIF_NEED_RESCHED flag is checked on interrupt and userspace return.
4161  sched_preempt_enable_no_resched()
4162  } while (need_resched()) - repeat while a reschedule is still pending.
4163  sched_update_worker(tsk)
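
Putting the steps above together, the body of schedule() in v5.5 reads essentially as follows. This is a sketch reconstructed from the walkthrough above rather than a verbatim copy of the report's listing; the comments are added for orientation.

asmlinkage __visible void __sched schedule(void)
{
	struct task_struct *tsk = current;

	sched_submit_work(tsk);		/* notify worker bookkeeping and flush plugged I/O before blocking */
	do {
		preempt_disable();		/* line 4159: disable/enable act as barriers */
		__schedule(false);		/* false: not entered from a preemption path */
		sched_preempt_enable_no_resched();
	} while (need_resched());	/* line 4162: loop while a reschedule is still needed */
	sched_update_worker(tsk);	/* tell worker bookkeeping the task is running again */
}
EXPORT_SYMBOL(schedule);
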
Caller
Name: Describe
(Most of the callers below wrap schedule() in a prepare-to-wait loop; a minimal sketch of that pattern follows the table.)
threadfunc
do_boot_cpu: NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad (ie clustered apic addressing mode), this is a LOGICAL apic ID. Returns zero if CPU booted OK, else error code from ->wakeup_secondary_cpu.
kvm_async_pf_task_wait: @interrupt_kernel: Is this called from a routine which interrupts the kernel (other than user space)?
do_wait
__request_region: __request_region - create a new busy resource region. @parent: parent resource descriptor. @start: resource start address. @n: resource region size. @name: reserving caller's ID string. @flags: IO resource flags
sys_pause
sigsuspend
usermodehelper_read_trylock
worker_thread
rescuer_thread: rescuer_thread - the rescuer thread function. @__rescuer: self. Workqueue rescuer thread function
__cancel_work_timer
__kthread_parkme
kthread
kthreadd
kthread_worker_fn: kthread_worker_fn - kthread function to process kthread_worker. @worker_ptr: pointer to initialized kthread_worker. This function implements the main cycle of kthread worker. It processes work_list until it is stopped with kthread_stop()
smpboot_thread_fn: smpboot_thread_fn - percpu hotplug thread loop function. @data: thread data pointer. Checks for thread stop and park conditions. Calls the necessary setup, cleanup, park and unpark functions for the registered thread.
schedule_preempt_disabled: schedule_preempt_disabled - called with preemption disabled. Returns with preemption disabled. Note: preempt_count must be 1
do_sched_yield: sys_sched_yield - yield the current processor to other threads. This function yields the current CPU to other tasks. If there are no other threads running on this CPU then this function will return. Return: 0.
yield_to: yield_to - yield the current processor to another thread in your thread group, or accelerate that thread toward the processor it's on
io_schedule
do_wait_intr: Note! These two wait functions are entered with the wait-queue lock held (and interrupts off in the _irq case), so there is no race with testing the wakeup condition in the caller before they add the wait entry to the wake queue.
do_wait_intr_irq
bit_wait
rwsem_down_read_slowpath: Wait for the read lock to be granted
rwsem_down_write_slowpath: Wait until we successfully acquire the write lock
__rt_mutex_slowlock: __rt_mutex_slowlock() - Perform the wait-wake-try-to-take loop. @lock: the rt_mutex to take. @state: the state the task should block in (TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE). @timeout: the pre-initialized and started timer, or NULL for none. @waiter:
rt_mutex_handle_deadlock
thaw_processes
thaw_kernel_threads
irq_wait_for_interrupt
rcu_torture_fwd_prog_cond_resched: Give the scheduler a chance, even on nohz_full CPUs.
__refrigerator: Refrigerator is place where frozen processes are stored :-).
schedule_timeout: schedule_timeout - sleep until timeout. @timeout: timeout value in jiffies. Make the current task sleep until @timeout jiffies have elapsed
schedule_hrtimeout_range_clock: schedule_hrtimeout_range_clock - sleep until timeout. @expires: timeout value (ktime_t). @delta: slack in expires timeout (ktime_t). @mode: timer mode. @clock_id: timer clock to be used
do_cpu_nanosleep
cgroup_lock_and_drain_offline: cgroup_lock_and_drain_offline - lock cgroup_mutex and drain offlined csses. @cgrp: root of the target subtree. Because css offlining is asynchronous, userland may try to re-enable a controller while the previous css is still around. This function grabs
zap_pid_ns_processes
prune_tree_thread: That gets run when evict_chunk() ends up needing to kill audit_tree. Runs from a separate thread.
ring_buffer_wait: ring_buffer_wait - wait for input to the ring buffer. @buffer: buffer to wait on. @cpu: the cpu buffer to wait on. @full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS. If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon
ring_buffer_consumer
wait_to_die
ring_buffer_consumer_thread
cpu_map_kthread_run
mem_cgroup_wait_acct_move
mem_cgroup_oom_synchronize: mem_cgroup_oom_synchronize - complete memcg OOM handling. @handle: actually kill/wait or just clean up the OOM state. This has to be called at the end of a page fault if the memcg OOM handler was enabled
do_msgrcv
do_semtimedop
de_thread: This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes. (Other processes might share the signal table via the CLONE_SIGHAND option to clone().)
pipe_wait: Drop the inode semaphore and wait for a pipe event, atomically
d_wait_lookup
__wait_on_freeing_inode
__inode_dio_wait: Direct i/o helper functions
inode_sleep_on_writeback: Sleep until I_SYNC is cleared. This function must be called with i_lock held and drops it. It is aimed for callers not holding any inode reference, so once i_lock is dropped, the inode can go away.
pin_kill
bd_prepare_to_claim: bd_prepare_to_claim - prepare to claim a block device. @bdev: block device of interest. @whole: the whole device containing @bdev, may equal @bdev. @holder: holder trying to claim @bdev. Prepare to claim @bdev
signalfd_dequeue
eventfd_read
eventfd_write
handle_userfault: The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward
userfaultfd_event_wait_completion
userfaultfd_ctx_read
io_sq_thread
io_cqring_wait: Wait until events become available, if we don't already have some. The application must reap them itself, as they reside on the shared cq ring.
io_uring_cancel_files
io_worker_exit
get_unlocked_entry: Look up entry in page cache, wait for it to become unlocked if it is a DAX entry and return it. The caller must subsequently call put_unlocked_entry() if it did not lock the entry or dax_unlock_entry() if it did
wait_entry_unlocked: The only thing keeping the address space around is the i_pages lock (it's cycled in clear_inode() after removing the entries from i_pages). After we call xas_unlock_irq(), we cannot touch xas->xa.
dqget: Get reference to dquot. Locking is slightly tricky here. We are guarded from parallel quotaoff() destroying our dquot by: a) checking for quota flags under dq_list_lock and b) getting a reference to dquot before we release dq_list_lock
kernel_signal_stop
klist_remove: klist_remove - Decrement the refcount of node and wait for it to go away. @n: node we're removing.
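
Most of the callers above place schedule() inside a prepare-to-wait loop. The following is a minimal sketch of that pattern, not code taken from this report; the wait queue example_wq, the flag example_done, and the function example_wait_for_event are hypothetical names used only for illustration.

#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_wq);	/* hypothetical wait queue */
static bool example_done;			/* hypothetical wake-up condition */

static void example_wait_for_event(void)
{
	DEFINE_WAIT(wait);

	/* Mark the task as sleeping before re-checking the condition,
	 * so a wake-up between the check and schedule() is not lost. */
	prepare_to_wait(&example_wq, &wait, TASK_INTERRUPTIBLE);
	if (!example_done)
		schedule();	/* give up the CPU until woken */
	finish_wait(&example_wq, &wait);
}

The waker side would set example_done = true and call wake_up(&example_wq), which moves the sleeper back onto a runqueue so a later __schedule() can pick it again.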