Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: arch/x86/include/asm/processor.h    Create Date: 2022-07-27 06:39:10
Last Modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: cpu_relax

Function prototype: static __always_inline void cpu_relax(void)

Return type: void

Parameters: none

Line 684: REP NOP (PAUSE) is a good thing to insert into busy-wait loops.
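
For reference, the function body is a thin wrapper around the PAUSE instruction. The following sketch matches the v5.5 x86 source of arch/x86/include/asm/processor.h (quoted here as an illustration, not an authoritative excerpt):

/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
static __always_inline void rep_nop(void)
{
	/* "rep; nop" (0xF3 0x90) is the legacy encoding of PAUSE. */
	asm volatile("rep; nop" ::: "memory");
}

static __always_inline void cpu_relax(void)
{
	rep_nop();
}

The "memory" clobber makes every call a compiler barrier, so a spinning caller re-reads the condition it is waiting on in each iteration; PAUSE itself hints to the CPU that it is inside a spin-wait loop, which saves power and avoids the memory-order mis-speculation penalty when the loop exits.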
Callers
Name / Description
set_bits_ll
clear_bits_ll
raid6_choose_gen
serial_putchar: These functions are in .init.text so they can be used to signal error during initialization.
default_do_nmi
mach_get_cmos_time
native_cpu_up
native_apic_wait_icr_idle
calibrate_APIC_clock
__xapic_wait_icr_idle
early_serial_putc
amd_flush_garts
kvm_async_pf_task_wake
panic_smp_self_stop: Stop ourself in panic; architecture code may override this.
try_to_grab_pending: steal work item from worklist and disable irq. Tries to grab the PENDING bit of @work.
__queue_work
__task_rq_lock: lock the rq @p resides on.
task_rq_lock: lock p->pi_lock and lock the rq @p resides on.
do_task_dead
__cond_resched_lock: if a reschedule is pending, drop the given lock, call schedule, and on return reacquire the lock.
cpu_idle_poll
osq_wait_next: Get a stable @node->next pointer, either for unlock() or unqueue() purposes. Can return NULL in case we were the last queued and we updated @lock instead.
osq_lock
queued_spin_lock_slowpath: acquire the queued spinlock; @val is the current value of the queued spinlock 32-bit word (queue tail, pending bit, lock value).
rt_mutex_adjust_prio_chain: Adjust the priority chain.
power_down: Shut the machine down for hibernation. Use the platform driver, if configured, to put the system into the sleep state corresponding to hibernation, or try to power it off or reboot, depending on the value of hibernation_mode.
console_trylock_spinning: try to get console_lock by busy waiting. This allows busy waiting for the console_lock when the current owner is running in specially marked sections.
__synchronize_hardirq
irq_finalize_oneshot: Oneshot interrupts keep the irq line masked until the threaded handler has finished; unmask if the interrupt has not been disabled and is marked MASKED.
lock_timer_base: We are using hashed locking: holding per_cpu(timer_bases[x]).lock means that all timers which are tied to this base are locked, and the base itself is locked too, so __run_timers/migrate_timers can safely modify all timers tied to this base.
acct_get
cgroup_rstat_flush_locked: see cgroup_rstat_flush().
stop_machine_yield
cpu_stop_queue_two_works
stop_machine_from_inactive_cpu: stop_machine() from an inactive CPU. This is identical to stop_machine() but can be called from a CPU which is not active.
wait_for_kprobe_optimizer: Wait for optimization and unoptimization to complete.
kdb_dump_stack_on_cpu
kgdb_cpu_enter
vkdb_printf
kdb_reboot: This function implements the 'reboot' command; reboot the system immediately, or loop forever on failure.
kdb_kbd_cleanup_state: Best-effort cleanup of ENTER break codes on leaving KDB. Called on exiting KDB, when we know we processed an ENTER or KP ENTER scan code.
irq_work_sync: Synchronize against the irq_work @entry; ensures the entry is not currently in use.
wake_up_page_bit
get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it; in which case we can trust the content of the page, and it returns the gotten page.
__cmpxchg_double_slab: Interrupts must be disabled (for the fallback code to work right).
cmpxchg_double_slab
__get_z3fold_header
ioc_release_fn: Slow path for ioc release in put_io_context(); performs double-lock dancing to unlink all icq's and then frees the ioc.
blk_poll: poll for IO completions on the passed-in queue; returns the number of completed entries found.
blkcg_destroy_blkgs: responsible for shooting down blkgs; blkgs should be removed while holding both q and blkcg locks.
throtl_pending_timer_fn
__d_lookup_rcu: search for a dentry (racy, store-free); returns the dentry, or NULL. __d_lookup_rcu is the dcache lookup function.
start_dir_add
__mnt_want_write: get write access to a mount without freeze protection. This tells the low-level filesystem that a write is about to be performed to it, and makes sure that writes are allowed (the mount is read-write).
__ns_get_path
io_ring_ctx_wait_and_kill
io_worker_spin_for_work
get_cached_acl
__read_seqcount_begin: begin a seq-read critical section (without barrier); returns a count to be passed to read_seqcount_retry. __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb() barrier.
hrtimer_cancel_wait_running
task_get_css: find and get the css for (task, subsys). Find the css for the (@task, @subsys_id) combination, increment a reference on it, and return it; this function is guaranteed to return a valid css.
lock_cmos: All of these below must be called with interrupts off, preempt disabled, etc.
vdso_read_begin
mcs_spin_unlock: Releases the lock. The caller should pass in the corresponding node that was used to acquire the lock.
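
Every caller in the table above follows the same pattern: spin on a condition and call cpu_relax() in the loop body. A minimal sketch of that pattern (the wait_for_flag() helper and its flag parameter are hypothetical, for illustration only):

/* Hypothetical illustration: spin until another CPU sets *flag. */
static void wait_for_flag(int *flag)
{
	/* READ_ONCE() forces a fresh load of *flag on every pass;
	 * cpu_relax() issues PAUSE and doubles as a compiler barrier. */
	while (!READ_ONCE(*flag))
		cpu_relax();
}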