Function Logic Report
Source Code: include/asm-generic/atomic-instrumented.h
Create Date: 2022-07-27 06:38:47
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function Name: atomic_inc
Prototype: static inline void atomic_inc(atomic_t *v)
Return Type: void
Parameters:
Type | Name |
---|---|
atomic_t * | v |
Line | Call |
---|---|
239 | kasan_check_write(v, sizeof(*v)) |
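For context, here is a minimal sketch of what this instrumented wrapper looks like in include/asm-generic/atomic-instrumented.h around this kernel version. The KASAN write check on line 239 precedes the architecture-specific operation; note that newer kernels issue the check via instrument_atomic_read_write() instead, so treat this as illustrative rather than exact:

```c
/* Sketch of the instrumented wrapper: the KASAN write check on line 239
 * validates the memory at *v before the real atomic op is performed. */
static inline void atomic_inc(atomic_t *v)
{
	kasan_check_write(v, sizeof(*v)); /* report invalid writes to *v */
	arch_atomic_inc(v);               /* architecture-provided atomic increment */
}
```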
Callers:
Name | Description |
---|---|
rhashtable_insert_one | |
do_concurrent_test | |
add_template | |
add_repeat_template | |
add_short_data_template | |
add_zeros_template | |
add_end_template | |
do_op | |
sw842_decompress | Decompress the 842-compressed buffer of length @ilen at @in to the output buffer @out, using no more than @olen bytes. The compressed buffer must be only a single 842-compressed buffer… |
sbitmap_add_wait_queue | |
sbitmap_prepare_to_wait | |
mask_and_ack_8259A | Careful! The 8259A is a fragile beast; it pretty much _has_ to be done exactly like this (mask it first, _then_ send the EOI), and the order of EOI to the two 8259s is important! |
tboot_dying_cpu | |
mce_register_decode_chain | |
mce_end | Synchronize between CPUs after the main scanning loop. This invokes the bulk of the Monarch processing. |
__wait_for_cpus | |
__rdtgroup_move_task | |
rdtgroup_kn_lock_live | |
pseudo_lock_dev_open | |
smp_error_interrupt | This interrupt should never happen with our APIC/SMP architecture |
ioapic_ack_level | |
dup_mmap | |
copy_files | 复制打开文件信息 |
copy_process | 创建进程 |
set_cpu_online | |
__sigqueue_alloc | Allocate a new signal queue record. This may be called without locks if and only if t == current; otherwise an appropriate lock must be held to stop the target task from exiting. |
helper_lock | |
wq_worker_running | wq_worker_running - a worker is running again. @task: task waking up. This function is called when a worker returns from schedule(). |
worker_clr_flags | worker_clr_flags - clear worker flags and adjust nr_running accordingly. @worker: self. @flags: flags to clear. Clear @flags in @worker->flags and adjust nr_running accordingly. CONTEXT: spin_lock_irq(pool->lock). |
flush_workqueue_prep_pwqs | flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing. @wq: workqueue being flushed. @flush_color: new flush color, < 0 for no-op. @work_color: new work color, < 0 for no-op. Prepare pwqs for workqueue flushing. |
copy_creds | Copy credentials |
commit_creds | commit_creds - Install new credentials upon the current task. @new: The credentials to be assigned. Install a new set of credentials to the current task, using RCU to replace the old set. Both the objective and the subjective credentials pointers are updated. |
async_schedule_node_domain | async_schedule_node_domain - NUMA specific version of async_schedule_domain. @func: function to execute asynchronously. @data: data pointer to pass to the function. @node: NUMA node that we want to schedule this on or close to. @domain: the domain… |
__request_module | __request_module - try to load a kernel module. @wait: wait (or not) for the operation to complete. @fmt: printf style format string for the name of the module. @...: arguments as specified in the format string. |
__schedule | The scheduler |
cpupri_set | cpupri_set - update the CPU priority setting. @cp: The cpupri context. @cpu: The target CPU. @newpri: The priority (INVALID-RT99) to assign to this CPU. Note: Assumes cpu_rq(cpu)->lock is locked. Returns: (void). |
rq_attach_root | |
sched_get_rd | |
build_group_from_child_sched_domain | XXX: This creates per-node group entries; since the load-balancer will immediately access remote memory to construct this group's load-balance statistics, having the groups node-local is of dubious benefit. |
sd_init | |
__torture_print_stats | Create a lock-torture-statistics message in the specified buffer. |
freeze_processes | freeze_processes - Signal user space processes to enter the refrigerator. The current thread will not be frozen. The same process that calls freeze_processes must later call thaw_processes. On success, returns 0. |
hibernate | hibernate - Carry out system hibernation, including saving the image. |
software_resume | software_resume - Resume from a saved hibernation image. This routine is called as a late initcall, when all devices have been discovered and initialized already. The image reading code is called to see if there is a hibernation image… |
hib_submit_io | |
snapshot_open | |
snapshot_release | |
printk_safe_log_store | Add a message to per-CPU context-dependent buffer |
__irq_wake_thread | |
irq_forced_thread_fn | Interrupts which are not explicitly requested as threaded interrupts rely on the implicit bh/preempt disable of the hard irq context. So we need to disable bh here to avoid deadlocks and other side effects. |
irq_thread_fn | Interrupts explicitly requested as threaded interrupts want to be preemptible - many of them need to sleep and wait for slow buses to complete. |
rcu_expedite_gp | rcu_expedite_gp - Expedite future RCU grace periods. After a call to this function, future calls to synchronize_rcu() and friends act as if the corresponding synchronize_rcu_expedited() function had instead been called. |
srcu_barrier | srcu_barrier - Wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks. |
rcu_torture_alloc | Allocate an element from the rcu_tortures pool. |
rcu_torture_free | Free an element to the rcu_tortures pool. |
rcu_torture_pipe_update_one | Update callback in the pipe. This should be invoked after a grace period. |
rcu_torture_writer | RCU torture writer kthread. Repeatedly substitutes a new structure for that pointed to by rcu_torture_current, freeing the old structure after a series of grace periods (the "pipeline"). |
rcu_torture_one_read | Do one read-side critical section, returning false if there was no data to read. Can be invoked both from process context and from a timer handler. |
rcu_torture_stats_print | Print torture statistics |
rcu_torture_barrier_cbf | Callback function for RCU barrier testing. |
rcu_perf_reader | RCU perf reader kthread. Repeatedly does an empty RCU read-side critical section, minimizing update-side interference. |
rcu_perf_writer | RCU perf writer kthread. Repeatedly does a grace period. |
rcu_barrier_func | Called with preemption disabled, and from cross-cpu IRQ context. |
online_css | Invoke ->css_online() on a new CSS and mark it online if successful. |
cgroup_create | The returned cgroup is fully initialized including its control mask, but it isn't associated with its kernfs_node and doesn't have the control mask applied. |
freezer_css_online | freezer_css_online - commit creation of a freezer css. @css: css being created. We're committing to creation of @css. Mark it online and inherit parent's freezing state while holding both parent's and our freezer->lock. |
freezer_apply_state | freezer_apply_state - apply state change to a single cgroup_freezer. @freezer: freezer to apply state change to. @freeze: whether to freeze or unfreeze. @state: CGROUP_FREEZING_* flag to set or clear. Set or clear @state on @cgroup according to @freeze… |
audit_log_lost | audit_log_lost - conditionally log lost audit message event. @message: the message stating reason for lost audit message. Emit at least 1 message per second, even if audit_rate_check is throttling. Always increment the lost messages counter. |
kgdb_cpu_enter | |
kgdb_schedule_breakpoint | |
kgdb_breakpoint | kgdb_breakpoint - generate breakpoint exception. This function will generate a breakpoint exception. It is used at the beginning of a program to sync up with a debugger and can be used otherwise as a quick means to stop program execution and "break" into the debugger. |
rb_remove_pages | |
ring_buffer_resize | ring_buffer_resize - resize the ring buffer. @buffer: the buffer to resize. @size: the new size. @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure. |
ring_buffer_record_disable | ring_buffer_record_disable - stop all writes into the buffer. @buffer: The ring buffer to stop writes to. This prevents all writes to the buffer. Any attempt to write to the buffer after this will fail and return NULL. |
ring_buffer_record_disable_cpu | ring_buffer_record_disable_cpu - stop all writes into the cpu_buffer. @buffer: The ring buffer to stop writes to. @cpu: The CPU buffer to stop. This prevents all writes to the buffer. Any attempt to write to the buffer after this will fail and return NULL. |
rb_reader_lock | |
ring_buffer_read_prepare | ring_buffer_read_prepare - Prepare for a non-consuming read of the buffer. @buffer: The ring buffer to read from. @cpu: The cpu buffer to iterate over. @flags: gfp flags to use for memory allocation. This performs the initial preparations necessary to iterate… |
ring_buffer_reset_cpu | ring_buffer_reset_cpu - reset a ring buffer per CPU buffer. @buffer: The ring buffer to reset a per cpu buffer of. @cpu: The CPU buffer to be reset. |
s_start | The current tracer is copied to avoid global locking all around. |
tracing_cpumask_write | |
ftrace_dump | |
start_critical_timing | |
stop_critical_timing | |
__trace_mmiotrace_rw | |
__trace_mmiotrace_map | |
ftrace_push_return_trace | Add a function return address to the trace stack on thread info. |
kdb_ftdump | kdb_ftdump - Dump the ftrace log buffer |
get_cpu_map_entry | |
exclusive_event_destroy | |
perf_mmap_open | |
perf_mmap | |
account_event_cpu | |
account_freq_event | |
account_event | |
xol_take_insn_slot | Search for a free slot. |
padata_do_parallel | padata_do_parallel - padata parallelization function. @ps: padata shell. @padata: object to be parallelized. @cb_cpu: pointer to the CPU that the serialization callback function should run on. If it's not in the serial cpumask of @pinst… |
padata_do_serial | padata_do_serial - padata serialization function. @padata: object to be serialized. padata_do_serial must be called for every parallelized object. The serialization callback function will run with BHs off. |
static_key_slow_inc | |
mark_oom_victim | mark_oom_victim - mark the given task as OOM victim. @tsk: task to mark. Has to be called with oom_lock held and never after oom has been disabled already. tsk->mm has to be non-NULL and the caller has to guarantee it is stable (either under task_lock or operate on the current). |
set_wb_congested | |
__remove_shared_vm_struct | Requires inode->i_mapping->i_mmap_rwsem |
__vma_link_file | |
lookup_swap_cache | Lookup a swap entry in the swap cache. A found page will be returned unlocked and with its refcount incremented - we rely on the kernel lock getting page table operations atomic even if we drop the page lock before returning. |
SYSCALL_DEFINE1 | |
SYSCALL_DEFINE2 | |
__frontswap_set | |
zswap_frontswap_store | Attempts to compress and store a single page. |
__split_huge_pmd_locked | |
mem_cgroup_move_charge | |
zpool_get_driver | This assumes @type is null-terminated. |
do_msgsnd | |
blk_set_pm_only | blk_set_pm_only - increment pm_only counter. @q: request queue pointer. |
blk_mq_rq_ctx_init | |
blk_mq_get_driver_tag | |
blk_mq_mark_tag_wait | Mark us waiting for a tag. For shared tags, this involves hooking us into the tag wakeups. For non-shared tags, we can simply mark us needing a restart. For both cases, take care to check the condition again after marking us as waiting. |
__blk_mq_tag_busy | If a previously inactive queue goes active, bump the active user count. We need to do this before trying to allocate a driver tag, so that even if we fail to get a tag the first time, the other shared-tag users can reserve budget for it. |
__blkcg_iolatency_throttle | |
scale_cookie_change | We scale the qd down faster than we scale up, so we need to use this helper to adjust the scale_cookie accordingly, so we don't prematurely get scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much. |
iolatency_set_limit | |
iolatency_pd_offline | |
commit_active_weights | |
add_latency_sample | |
key_alloc | key_alloc - Allocate a key of the specified type. @type: The type of key to allocate. @desc: The key description to allow the key to be searched out. @uid: The owner of the new key. @gid: The group ID for the new key's group permissions. |
__key_instantiate_and_link | Instantiate a key and link it into the target keyring atomically. Must be called with the target keyring's semaphore write-locked. The target key's semaphore need not be locked as instantiation is serialised by key_construction_mutex. |
key_reject_and_link | key_reject_and_link - Negatively instantiate a key and link it into the keyring. @key: The key to instantiate. @timeout: The timeout on the negative key. @error: The error to return when the key is hit. |
keyctl_chown_key | Change the ownership of a key. The key must grant the caller Setattr permission for this to work, though the key need not be fully instantiated yet. For the UID to be changed, or for the GID to be changed to a group the caller is not a member of, the caller must have sysadmin capability. |
selinux_secmark_refcount_inc | |
selinux_xfrm_alloc_user | Allocates a xfrm_sec_state and populates it using the supplied security xfrm_user_sec_ctx context. |
selinux_xfrm_policy_clone | LSM hook implementation that copies security data structure from old to new* for policy cloning. |
selinux_xfrm_state_alloc_acquire | LSM hook implementation that allocates a xfrm_sec_state and populates based* on a secid. |
tomoyo_update_stat | tomoyo_update_stat - Update statistic counters. @index: Index for policy type. Returns nothing. |
tomoyo_open_control | tomoyo_open_control - open() for /sys/kernel/security/tomoyo/ interface. @type: Type of interface. @file: Pointer to "struct file". Returns 0 on success, negative value otherwise. |
tomoyo_commit_condition | tomoyo_commit_condition - Commit "struct tomoyo_condition". @entry: Pointer to "struct tomoyo_condition". Returns pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries. This function returns NULL if… |
tomoyo_find_next_domain | tomoyo_find_next_domain - Find a domain. @bprm: Pointer to "struct linux_binprm". Returns 0 on success, negative value otherwise. Caller holds tomoyo_read_lock(). |
tomoyo_get_group | tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group". @param: Pointer to "struct tomoyo_acl_param". @idx: Index number. Returns pointer to "struct tomoyo_group" on success, NULL otherwise. |
tomoyo_get_name | tomoyo_get_name - Allocate permanent memory for string data. @name: The string to store into the permanent memory. Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise. |
tomoyo_write_self | tomoyo_write_self - write() for /sys/kernel/security/tomoyo/self_domain interface. |
tomoyo_task_alloc | tomoyo_task_alloc - Target for security_task_alloc(). @task: Pointer to "struct task_struct". @flags: clone() flags. Returns 0. |
tomoyo_init | tomoyo_init - Register TOMOYO Linux as a LSM module. Returns 0. |
freeze_super | freeze_super - lock the filesystem and force it into a consistent state. @sb: the super to lock. Syncs the super to make sure the filesystem is consistent and calls the fs's freeze_fs. Subsequent calls to this without first thawing the fs will return -EBUSY. |
take_dentry_name_snapshot | |
copy_name | |
__iget | |
iput | Put an inode (drop a reference to it) |
get_files_struct | |
vfs_create_mount | vfs_create_mount - Create a mount for a configured superblock. @fc: The configuration context with the superblock attached. Create a mount to an already configured superblock. If necessary, the caller should invoke vfs_get_tree() before calling this. |
clone_mnt | |
mount_subtree | |
wb_queue_work | |
alloc_fs_context | alloc_fs_context - Create a filesystem context. @fs_type: The filesystem type. @reference: The dentry from which this one derives (or NULL). @sb_flags: Filesystem/superblock flags (SB_*). @sb_flags_mask: Applicable members of @sb_flags. |
__blkdev_direct_IO | |
fsnotify_get_mark_safe | Get mark reference when we found the mark via lockless traversal of object list. Mark can be already removed from the list by now and on its way to be destroyed once the SRCU period ends. Also pin the group so it doesn't disappear under us. |
fsnotify_add_mark_locked | Attach an initialized mark to a given group and fs object. These marks may be used for the fsnotify backend to determine which event types should be delivered to which group. |
SYSCALL_DEFINE2 | fanotify syscalls |
io_kill_timeout | |
io_timeout_fn | |
io_wqe_inc_running | |
__io_worker_busy | Worker will start processing some work. Move it to the busy list, if it's currently on the freelist. |
create_io_worker | |
mb_cache_entry_create | mb_cache_entry_create - create entry in cache. @cache: cache where the entry should be created. @mask: gfp mask with which the entry should be allocated. @key: key of the entry. @value: value of the entry. @reusable: is the entry reusable by others? |
__entry_find | |
mb_cache_entry_get | mb_cache_entry_get - get a cache entry by value (and key). @cache: cache we work with. @key: key. @value: value. |
iomap_readpage_actor | |
iomap_add_to_ioend | Test to see if we have an existing ioend structure that we could append to first; otherwise finish off the current ioend and start another. |
iomap_dio_submit_bio | |
dquot_scan_active | Call callback for every active dquot on given filesystem |
dqget | Get reference to dquot. Locking is slightly tricky here. We are guarded from parallel quotaoff() destroying our dquot by: a) checking for quota flags under dq_list_lock and b) getting a reference to dquot before we release dq_list_lock. |
devpts_acquire | |
atomic_long_inc | |
static_key_slow_inc | |
inc_tlb_flush_pending | |
get_group_info | get_group_info - Get a reference to a group info structure. @group_info: The group info to reference. This gets a reference to a set of supplementary groups. If the caller is accessing a task's credentials, they must hold the RCU read lock when reading. |
get_new_cred | Get a reference to a new set of credentials |
mmgrab | mmgrab() - Pin a &struct mm_struct |
mmget | mmget() - Pin the address space associated with a &struct mm_struct. @mm: The address space to pin. Make sure that the address space of the given &struct mm_struct doesn't go away. This does not protect against parts of the address space being modified or freed, however. |
page_ref_inc | |
get_io_context_active | Get an active reference to the I/O context |
ioc_task_link | |
mapping_allow_writable | |
allow_write_access | |
i_readcount_inc | |
inode_dio_begin | Signal the start of a direct I/O request |
bio_get | Get a reference to a bio, so it won't disappear. The intended use is something like: bio_get(bio); submit_bio(rw, bio); if (bio->bi_flags ...) do_something; bio_put(bio); Without the bio_get(), it could potentially complete I/O before submit_bio returns. |
bio_inc_remaining | Increment chain count for the bio. Make sure the CHAIN flag update* is visible before the raised count. |
get_nsproxy | |
nf_conntrack_get | |
rt_genid_bump_ipv4 | |
fnhe_genid_bump | |
mpol_get | |
tasklet_disable_nosync | |
blkcg_use_delay | |
__rhashtable_insert_fast | Internal function, please use rhashtable_insert_fast() instead. This function returns the existing element already in the hash if there is a clash, otherwise it returns an error via ERR_PTR(). |
reqsk_queue_added | |
fscache_get_retrieval | Get an extra reference to a retrieval operation |
__fscache_use_cookie | |
tcp_listendrop | TCP listen path runs lockless. We forced "struct sock" to be const qualified to make sure we don't modify one of its fields by mistake. Here, we increment sk_drops, which is an atomic_t, so we can safely make the sock writable again. |
get_anon_vma | |
page_dup_rmap | |
get_bh | |
get_mnt_ns | |
dqgrab | |
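Nearly all of the callers above use atomic_inc for the same idiom: taking an extra reference on an object whose lifetime is tracked by an atomic_t counter (get_bh, bio_get, get_anon_vma, and so on). Below is a minimal sketch of that pattern; struct myobj and myobj_get are hypothetical names for illustration, not kernel objects from the list:

```c
#include <linux/atomic.h>

/* Hypothetical reference-counted object, for illustration only. */
struct myobj {
	atomic_t refcount;
};

/* Take an extra reference, mirroring helpers such as get_bh()/bio_get():
 * atomic_inc() guarantees the increment is not lost even when several
 * CPUs grab references concurrently. */
static inline void myobj_get(struct myobj *obj)
{
	atomic_inc(&obj->refcount);
}
```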