Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/sched.h    Create Date: 2022-07-28 05:59:47
Last Modify: 2021-07-28 10:30:23    Copyright © Brick

Name: need_resched

Proto: static __always_inline bool need_resched(void)

Type: bool

Parameter: Nothing

1824  Return Value: unlikely(tif_need_resched()) - the result of tif_need_resched(), annotated so that at compile time the false (no reschedule pending) case is treated as the more likely one.
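
Putting the prototype and the return-value note together, the whole helper is a one-line wrapper; the following is a sketch reconstructed from the lines above (v5.5 include/linux/sched.h):

    static __always_inline bool need_resched(void)
    {
            /* True when the current task has TIF_NEED_RESCHED set. */
            return unlikely(tif_need_resched());
    }
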
Caller
Name | Description
mwait_idle | MONITOR/MWAIT with no hints, used for default C1 state. This invokes MWAIT with interrupts enabled and no flags, which is backwards compatible with the original MWAIT implementation.
apm_do_idle | apm_do_idle - perform power saving. This function notifies the BIOS that the processor is (in the view of the OS) idle. It returns -1 in the event that the BIOS refuses to handle the idle request. On a success the function returns 1.
apm_cpu_idle
__do_softirq
schedule
schedule_idle | synchronize_rcu_tasks() makes sure that no task is stuck in preempted state (have scheduled out non-voluntarily) by making sure that all tasks have either left the run queue or have gone into user space.
preempt_schedule_common
preempt_schedule_irq | This is the entry point to schedule() from kernel preemption off of irq context. Note that this is called and returns with irqs disabled; this protects us against recursive calling from irq.
cpuidle_idle_call | cpuidle_idle_call - the main idle function. NOTE: no locks or semaphores should be used here. On archs that support TIF_POLLING_NRFLAG, this is called with polling set, and it returns with polling set. If it ever stops polling, it must clear the polling bit.
do_idle | Generic idle loop implementation. Called with polling cleared.
osq_lock
rcu_torture_fwd_prog_cond_resched | Give the scheduler a chance, even on nohz_full CPUs.
rcu_torture_fwd_prog_nr | Carry out need_resched()/cond_resched() forward-progress testing.
rcu_do_batch | Invoke any RCU callbacks that have made it to the end of their grace period. Throttle as specified by rdp->blimit.
cgroup_rstat_flush_locked | See cgroup_rstat_flush().
do_check
copy_pte_range
zap_pte_range
blk_poll | blk_poll - poll for IO completions. @q: the queue. @cookie: cookie passed back at IO submission time. @spin: whether to spin for completions. Description: Poll for completions on the passed in queue. Returns number of completed entries found.
key_garbage_collector | Reaper for unused keys.
do_select
do_poll
select_collect
select_collect2
evict_inodes | evict_inodes - evict all evictable inodes for a superblock. @sb: superblock to operate on. Make sure that no inodes with zero refcount are retained.
invalidate_inodes | invalidate_inodes - attempt to free all inodes on a superblock. @sb: superblock to operate on. @kill_dirty: flag to guide handling of dirty inodes. Attempts to free all inodes for a given superblock. If there were any ...
scan_positives | Returns an element of siblings' list. We are looking for the <count>-th positive after <p>; if found, dentry is grabbed and returned to caller. If no such element exists, NULL is returned.
writeback_sb_inodes | Write a portion of b_io inodes which belong to @sb. Return the number of pages and/or inodes written. NOTE! This is called with wb->list_lock held, and will unlock and relock that for each inode it ends up doing IO for.
io_iopoll_getevents | Poll for a minimum of 'min' events. Note that if min == 0 we consider that a non-spinning poll check - we'll still enter the driver poll loop, but only as a non-spinning completion check.
io_iopoll_check
io_worker_spin_for_work
drop_pagecache_sb
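
Most of these callers use need_resched() the same way: inside a potentially long-running loop, to decide when to stop or yield so that a pending reschedule can take effect. The sketch below illustrates that pattern only; process_all_units(), work_to_do() and do_one_unit() are hypothetical helpers, not functions from the kernel source.

    /*
     * Illustrative pattern only, in the spirit of callers such as
     * copy_pte_range or rcu_do_batch: do bounded chunks of work and
     * back off as soon as a reschedule is pending.
     */
    static void process_all_units(void)
    {
            while (work_to_do()) {
                    do_one_unit();

                    /* Stop early if the scheduler wants the CPU back. */
                    if (need_resched())
                            break;
            }
    }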