Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/asm-generic/atomic-instrumented.h  Create Date: 2022-07-28 05:34:50
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name:atomic_dec

Proto:static inline void atomic_dec(atomic_t *v)

Type:void

Parameter:

Type          Name
atomic_t *    v
329  kasan_check_write(v, sizeof(*v))
330  arch_atomic_dec(v) - decrement atomic variable. @v: pointer of type atomic_t. Atomically decrements @v by 1.
Caller
Name - Describe
test_rht_init
sbitmap_del_wait_queue
sbitmap_finish_wait
mce_unregister_decode_chain
mce_intel_hcpu_update
__rdtgroup_move_task
pseudo_lock_dev_release
dup_mmap
copy_process - Create a new process
set_cpu_online
release_task
__sigqueue_alloc - allocate a new signal queue record. This may be called without locks if and only if t == current; otherwise an appropriate lock must be held to stop the target task from exiting.
__sigqueue_free
worker_set_flags - set worker flags and adjust nr_running accordingly. @worker: self; @flags: flags to set. Set @flags in @worker->flags and adjust nr_running accordingly. CONTEXT: spin_lock_irq(pool->lock).
commit_creds - install new credentials upon the current task. @new: the credentials to be assigned. Install a new set of credentials to the current task, using RCU to replace the old set; both the objective and the subjective credentials pointers are updated.
async_run_entry_fn - pick the first pending entry and run it
inc_ucount
try_to_wake_up - wake up a thread. @p: the thread to be awakened; @state: the mask of task states that can be woken; @wake_flags: wake modifier flags (WF_*). If (@state & @p->state), @p->state = TASK_RUNNING.
cpupri_set - update the CPU priority setting. @cp: the cpupri context; @cpu: the target CPU; @newpri: the priority (INVALID-RT99) to assign to this CPU. Note: assumes cpu_rq(cpu)->lock is locked. Returns: (void).
thaw_processes
misrouted_irq
poll_spurious_irqs
rcu_unexpedite_gp - cancel a prior rcu_expedite_gp() invocation. Undo a prior call to rcu_expedite_gp().
srcu_barrier - wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks.
rcu_perf_async_cb - callback function for asynchronous grace periods from rcu_perf_writer().
css_free_rwork_fn - css destruction is a four-stage process
freezer_css_offline - initiate destruction of a freezer css. @css: css being destroyed. @css is going away; mark it dead and decrement system_freezing_count if it was holding one.
freezer_apply_state - apply a state change to a single cgroup_freezer. @freezer: freezer to apply the state change to; @freeze: whether to freeze or unfreeze; @state: CGROUP_FREEZING_* flag to set or clear. Set or clear @state on @cgroup according to @freeze.
kgdb_cpu_enter
kgdb_breakpoint - generate breakpoint exception. This function will generate a breakpoint exception. It is used at the beginning of a program to sync up with a debugger and can be used otherwise as a quick means to stop program execution and "break" into the debugger.
hardlockup_detector_perf_disable - disable the local event
rb_remove_pages
ring_buffer_resize - resize the ring buffer. @buffer: the buffer to resize; @size: the new size; @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure.
ring_buffer_record_enable - enable writes to the buffer. @buffer: the ring buffer to enable writes to. Note, multiple disables will need the same number of enables to truly enable the writing (much like preempt_disable).
ring_buffer_record_enable_cpu - enable writes to the buffer. @buffer: the ring buffer to enable writes to; @cpu: the CPU to enable. Note, multiple disables will need the same number of enables to truly enable the writing (much like preempt_disable).
ring_buffer_read_finish - finish reading the iterator of the buffer. @iter: the iterator retrieved by ring_buffer_start. This re-enables recording to the buffer and frees the iterator.
ring_buffer_reset_cpu - reset a ring buffer per-CPU buffer. @buffer: the ring buffer to reset a per-CPU buffer of; @cpu: the CPU buffer to be reset.
s_stop
tracing_cpumask_write
ftrace_dump
function_stack_trace_call
func_prolog_dec - prologue for the preempt and irqs-off function tracers. Returns 1 if it is OK to continue and data->disabled is incremented; 0 if the trace is to be ignored and data->disabled is kept the same. Note, this function is also used outside this ifdef.
irqsoff_tracer_call - irqsoff uses its own tracer function to keep the overhead down
start_critical_timing
stop_critical_timing
func_prolog_preempt_disable - prologue for the wakeup function tracers
wakeup_tracer_call - wakeup uses its own tracer function to keep the overhead down
probe_wakeup_sched_switch
probe_wakeup
trace_graph_entry
trace_graph_return
kdb_ftdump - dump the ftrace log buffer
free_htab_elem
alloc_htab_elem
unaccount_event_cpu
unaccount_freq_event
unaccount_event
exclusive_event_destroy
perf_mmap_close - a buffer can be mmap()ed multiple times, either directly through the same event or through other events by use of perf_event_set_output(); undoes the VM accounting done by perf_mmap().
perf_mmap
get_callchain_buffers
xol_free_insn_slot - if the slot was earlier allocated by xol_get_insn_slot(), make the slot available for subsequent requests.
padata_find_next - find the next object that needs serialization
clear_wb_congested
__vma_link_file
page_remove_file_rmap
SYSCALL_DEFINE1
SYSCALL_DEFINE2
__frontswap_clear
zswap_free_entry - carries out the common pattern of freeing an entry's zpool allocation, freeing the entry itself, and decrementing the number of stored pages.
__split_huge_pmd_locked
mem_cgroup_move_charge
zpool_put_driver
freeque - freeque() wakes up waiters on the sender and receiver waiting queues, removes the message queue from the message queue ID IDR, and cleans up all the messages associated with this queue. msg_ids.rwsem (writer) and the spinlock for this message queue are held.
do_msgrcv
exit_io_context - called by the exiting task
blk_mq_free_request
blk_mq_dispatch_wake
blk_mq_mark_tag_wait - mark us waiting for a tag. For shared tags, this involves hooking us into the tag wakeups. For non-shared tags, we can simply mark us as needing a restart. For both cases, take care to check the condition again after marking us as waiting.
__blk_mq_tag_idle - if a previously busy queue goes inactive, potential waiters could now be allowed to queue. Wake them up and check.
iolat_cleanup_cb
scale_cookie_change - we scale the qd down faster than we scale up, so we need this helper to adjust the scale_cookie accordingly, so we don't prematurely get scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much.
iolatency_set_limit
iolatency_pd_offline
key_gc_unused_keys - garbage collect a list of unreferenced, detached keys
keyctl_chown_key - change the ownership of a key. The key must grant the caller Setattr permission for this to work, though the key need not be fully instantiated yet. For the UID to be changed, or for the GID to be changed to a group the caller is not a member of, the caller must have sysadmin capability.
avc_node_delete
avc_node_kill
avc_node_replace
selinux_secmark_refcount_dec
selinux_xfrm_free - free the xfrm_sec_ctx structure.
tomoyo_write_self - write() for the /sys/kernel/security/tomoyo/self_domain interface.
tomoyo_domain - get "struct tomoyo_domain_info" for the current thread. Returns a pointer to "struct tomoyo_domain_info" for the current thread.
tomoyo_cred_prepare - target for security_prepare_creds(). @new: pointer to "struct cred"; @old: pointer to "struct cred"; @gfp: memory allocation flags. Returns 0.
tomoyo_bprm_committed_creds - target for security_bprm_committed_creds(). @bprm: pointer to "struct linux_binprm".
tomoyo_task_free - target for security_task_free(). @task: pointer to "struct task_struct".
wb_wait_for_completion - wait for completion of bdi_writeback_works. @done: target wb_completion. Wait for one or more work items issued to @bdi with their ->done field set to @done, which should have been initialized with DEFINE_WB_COMPLETION().
fsnotify_detach_mark - mark mark as detached and remove it from the group list
fsnotify_add_mark_locked - attach an initialized mark to a given group and fs object. These marks may be used by the fsnotify backend to determine which event types should be delivered to which group.
fanotify_free_group_priv
io_worker_exit
__io_worker_busy - worker will start processing some work; move it to the busy list if it's currently on the freelist.
mb_cache_entry_delete - remove a cache entry. @cache: cache we work with; @key: key; @value: value. Remove the entry from cache @cache with key @key and value @value.
mb_cache_shrink
mb_cache_destroy - destroy cache. @cache: the cache to destroy. Free all entries in the cache and the cache itself. The caller must make sure nobody (except the shrinker) can reach @cache when calling this.
do_coredump
dqput - put a reference to a dquot
devpts_new_index - the normal naming convention is simply /dev/pts/; this conforms to the System V naming convention.
devpts_kill_index
atomic_long_dec
static_key_slow_dec