Function report
Source Code: include/asm-generic/atomic-instrumented.h
Create Date: 2022-07-28 05:34:50
Last Modify: 2020-03-12 14:18:49
Name: atomic_dec
Proto: static inline void atomic_dec(atomic_t *v)
Type: void
Parameter:
Type | Name
---|---
atomic_t * | v
329 | kasan_check_write(v, sizeof(*v));
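Only line 329 of the body is shown above. For context, the instrumented wrappers in include/asm-generic/atomic-instrumented.h follow a fixed pattern: a KASAN check on the whole atomic_t, then a call to the architecture primitive. A minimal sketch of the expected definition, reconstructed from the prototype and the line above (a sketch, not the verbatim source of this exact kernel version):

    static inline void atomic_dec(atomic_t *v)
    {
            kasan_check_write(v, sizeof(*v));  /* let KASAN validate the write target first */
            arch_atomic_dec(v);                /* architecture-specific atomic decrement */
    }

This indirection is what makes every caller in the list below visible to the Kernel Address Sanitizer without touching the per-architecture implementations.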
Name | Description
---|---|
test_rht_init | |
sbitmap_del_wait_queue | |
sbitmap_finish_wait | |
mce_unregister_decode_chain | |
mce_intel_hcpu_update | |
__rdtgroup_move_task | |
pseudo_lock_dev_release | |
dup_mmap | |
copy_process | Create a new process |
set_cpu_online | |
release_task | |
__sigqueue_alloc | Allocate a new signal queue record. This may be called without locks if and only if t == current; otherwise an appropriate lock must be held to stop the target task from exiting.
__sigqueue_free | |
worker_set_flags | Set worker flags and adjust nr_running accordingly. @worker: self; @flags: flags to set. Set @flags in @worker->flags and adjust nr_running accordingly. Context: spin_lock_irq(pool->lock).
commit_creds | Install new credentials upon the current task. @new: the credentials to be assigned. Install a new set of credentials to the current task, using RCU to replace the old set. Both the objective and the subjective credentials pointers are updated.
async_run_entry_fn | pick the first pending entry and run it |
inc_ucount | |
try_to_wake_up | Wake up a thread. @p: the thread to be awakened; @state: the mask of task states that can be woken; @wake_flags: wake modifier flags (WF_*). If (@state & @p->state), @p->state = TASK_RUNNING.
cpupri_set | Update the CPU priority setting. @cp: the cpupri context; @cpu: the target CPU; @newpri: the priority (INVALID-RT99) to assign to this CPU. Note: assumes cpu_rq(cpu)->lock is locked. Returns: (void).
thaw_processes | |
misrouted_irq | |
poll_spurious_irqs | |
rcu_unexpedite_gp | Cancel a prior rcu_expedite_gp() invocation; undo a prior call to rcu_expedite_gp().
srcu_barrier | Wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks.
rcu_perf_async_cb | Callback function for asynchronous grace periods from rcu_perf_writer(). |
css_free_rwork_fn | css destruction is a four-stage process.
freezer_css_offline | Initiate destruction of a freezer css. @css: css being destroyed. @css is going away; mark it dead and decrement system_freezing_count if it was holding one.
freezer_apply_state | Apply a state change to a single cgroup_freezer. @freezer: freezer to apply state change to; @freeze: whether to freeze or unfreeze; @state: CGROUP_FREEZING_* flag to set or clear. Set or clear @state on @cgroup according to @freeze.
kgdb_cpu_enter | |
kgdb_breakpoint | Generate a breakpoint exception. It is used at the beginning of a program to sync up with a debugger, and can be used otherwise as a quick means to stop program execution and "break" into the debugger.
hardlockup_detector_perf_disable | hardlockup_detector_perf_disable - Disable the local event |
rb_remove_pages | |
ring_buffer_resize | Resize the ring buffer. @buffer: the buffer to resize; @size: the new size; @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure.
ring_buffer_record_enable | Enable writes to the buffer. @buffer: the ring buffer to enable writes to. Note, multiple disables will need the same number of enables to truly enable the writing (much like preempt_disable).
ring_buffer_record_enable_cpu | Enable writes to the buffer. @buffer: the ring buffer to enable writes to; @cpu: the CPU to enable. Note, multiple disables will need the same number of enables to truly enable the writing (much like preempt_disable).
ring_buffer_read_finish | Finish reading the iterator of the buffer. @iter: the iterator retrieved by ring_buffer_start. This re-enables the recording to the buffer, and frees the iterator.
ring_buffer_reset_cpu | Reset a ring buffer per-CPU buffer. @buffer: the ring buffer to reset a per-CPU buffer of; @cpu: the CPU buffer to be reset.
s_stop | |
tracing_cpumask_write | |
ftrace_dump | |
function_stack_trace_call | |
func_prolog_dec | Prologue for the preempt and irqs off function tracers. Returns 1 if it is OK to continue, and data->disabled is incremented; returns 0 if the trace is to be ignored, and data->disabled is kept the same. Note, this function is also used outside this ifdef.
irqsoff_tracer_call | irqsoff uses its own tracer function to keep the overhead down.
start_critical_timing | |
stop_critical_timing | |
func_prolog_preempt_disable | Prologue for the wakeup function tracers |
wakeup_tracer_call | wakeup uses its own tracer function to keep the overhead down: |
probe_wakeup_sched_switch | |
probe_wakeup | |
trace_graph_entry | |
trace_graph_return | |
kdb_ftdump | kdb_ftdump - Dump the ftrace log buffer |
free_htab_elem | |
alloc_htab_elem | |
unaccount_event_cpu | |
unaccount_freq_event | |
unaccount_event | |
exclusive_event_destroy | |
perf_mmap_close | A buffer can be mmap()ed multiple times, either directly through the same event, or through other events by use of perf_event_set_output(); perf_mmap_close() undoes the VM accounting done by perf_mmap().
perf_mmap | |
get_callchain_buffers | |
xol_free_insn_slot | If the slot was earlier allocated by xol_get_insn_slot(), make the slot available for subsequent requests.
padata_find_next | padata_find_next - Find the next object that needs serialization |
clear_wb_congested | |
__vma_link_file | |
page_remove_file_rmap | |
SYSCALL_DEFINE1 | |
SYSCALL_DEFINE2 | |
__frontswap_clear | |
zswap_free_entry | Carries out the common pattern of freeing an entry's zpool allocation, freeing the entry itself, and decrementing the number of stored pages.
__split_huge_pmd_locked | |
mem_cgroup_move_charge | |
zpool_put_driver | |
freeque | freeque() wakes up waiters on the sender and receiver waiting queue, removes the message queue from the message queue ID IDR, and cleans up all the messages associated with this queue. msg_ids.rwsem (writer) and the spinlock for this message queue are held.
do_msgrcv | |
exit_io_context | Called by the exiting task |
blk_mq_free_request | |
blk_mq_dispatch_wake | |
blk_mq_mark_tag_wait | Mark us waiting for a tag. For shared tags, this involves hooking us into* the tag wakeups. For non-shared tags, we can simply mark us needing a* restart. For both cases, take care to check the condition again after* marking us as waiting. |
__blk_mq_tag_idle | If a previously busy queue goes inactive, potential waiters could now* be allowed to queue. Wake them up and check. |
iolat_cleanup_cb | |
scale_cookie_change | We scale the qd down faster than we scale up, so we need to use this helper* to adjust the scale_cookie accordingly so we don't prematurely get* scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much |
iolatency_set_limit | |
iolatency_pd_offline | |
key_gc_unused_keys | Garbage collect a list of unreferenced, detached keys |
keyctl_chown_key | Change the ownership of a key. The key must grant the caller Setattr permission for this to work, though the key need not be fully instantiated yet.
avc_node_delete | |
avc_node_kill | |
avc_node_replace | |
selinux_secmark_refcount_dec | |
selinux_xfrm_free | Free the xfrm_sec_ctx structure. |
tomoyo_write_self | write() for the /sys/kernel/security/tomoyo/self_domain interface.
tomoyo_domain | Get "struct tomoyo_domain_info" for the current thread. Returns a pointer to "struct tomoyo_domain_info" for the current thread.
tomoyo_cred_prepare | Target for security_prepare_creds(). @new: pointer to "struct cred"; @old: pointer to "struct cred"; @gfp: memory allocation flags. Returns 0.
tomoyo_bprm_committed_creds | Target for security_bprm_committed_creds(). @bprm: pointer to "struct linux_binprm".
tomoyo_task_free | Target for security_task_free(). @task: pointer to "struct task_struct".
wb_wait_for_completion | Wait for completion of bdi_writeback_works. @done: target wb_completion. Wait for one or more work items issued to @bdi with their ->done field set to @done, which should have been initialized with DEFINE_WB_COMPLETION().
fsnotify_detach_mark | Mark mark as detached, remove it from group list |
fsnotify_add_mark_locked | Attach an initialized mark to a given group and fs object.* These marks may be used for the fsnotify backend to determine which* event types should be delivered to which group. |
fanotify_free_group_priv | |
io_worker_exit | |
__io_worker_busy | Worker will start processing some work. Move it to the busy list, if* it's currently on the freelist |
mb_cache_entry_delete | Remove a cache entry. @cache: cache we work with; @key: key; @value: value. Remove the entry from cache @cache with key @key and value @value.
mb_cache_shrink | |
mb_cache_destroy | Destroy a cache. @cache: the cache to destroy. Free all entries in the cache and the cache itself. Caller must make sure nobody (except the shrinker) can reach @cache when calling this.
do_coredump | |
dqput | Put reference to dquot |
devpts_new_index | The normal naming convention is simply /dev/pts/ |
devpts_kill_index | |
atomic_long_dec | |
static_key_slow_dec | |
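The common thread in this caller list is dropping a counter whose resulting value is not needed at the call site. A minimal usage sketch (the counter name and the get/put helpers here are illustrative, not taken from the kernel):

    #include <linux/atomic.h>

    static atomic_t active_users = ATOMIC_INIT(0);

    static void user_get(void)
    {
            atomic_inc(&active_users);   /* take a reference */
    }

    static void user_put(void)
    {
            /*
             * atomic_dec() returns nothing; a caller that must act on the
             * transition to zero would use atomic_dec_and_test() instead.
             */
            atomic_dec(&active_users);
    }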