Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/asm-generic/atomic-instrumented.h    Create Date: 2022-07-28 05:34:49
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:atomic_set

Proto:static inline void atomic_set(atomic_t *v, int i)

Type:void

Parameter:

Type        Name
atomic_t *  v
int         i
44  kasan_check_write(v, sizeof(*v))
45  arch_atomic_set(v, i) - set atomic variable. @v: pointer of type atomic_t; @i: required value. Atomically sets the value of @v to @i.
Caller
Name - Description
rhashtable_init - Initialize a new hash table based on the provided configuration parameters. @ht: hash table to be initialized; @params: configuration parameters.
test_rht_init
setup_fault_attr - Helper function for various __setup handlers; returns 0 on error, because that is what __setup handlers do.
sbitmap_queue_init_node
sbitmap_queue_update_wake_batch
sbq_wake_ptr
tboot_late_init
mce_start - Start of Monarch synchronization. Waits until all CPUs have entered the exception handler, then determines whether any of them saw a fatal event that requires a panic, and executes them in entry order. TBD: double-check parallel CPU hot-unplug.
mce_end - Synchronize between CPUs after the main scanning loop. Invokes the bulk of the Monarch processing.
intel_init_thermal
microcode_reload_late - Reload microcode late on all CPUs; waits up to a second until they all gather together.
kgdb_arch_handle_exception - Handle architecture-specific GDB packets. @e_vector: error vector of the exception that happened; @signo: signal number of the exception; @err_code: error code of the exception.
mm_init
copy_signal
tasklet_init
flush_workqueue_prep_pwqs - Prepare pwqs for workqueue flushing. @wq: workqueue being flushed; @flush_color: new flush color (< 0 for no-op); @work_color: new work color (< 0 for no-op).
alloc_workqueue
create_nsproxy
cred_alloc_blank - Allocate blank credentials, such that the credentials can be filled in at a later date without risk of ENOMEM.
prepare_creds - Prepare a new set of task credentials for modification.
prepare_kernel_cred - Prepare a set of credentials for a kernel service. @daemon: a userspace daemon to be used as a reference.
cpu_check_up_prepare - If the CPU has died properly, set its state to CPU_UP_PREPARE and return success.
groups_alloc
sched_init
cpupri_init - Initialize the cpupri structure. @cp: the cpupri context. Return: -ENOMEM on memory allocation failure.
init_defrootdomain
sd_init
membarrier_exec_mmap
group_init
psi_schedule_poll_work - Schedule polling if it is not already scheduled. Safe to call even from a hotpath: although kthread_queue_delayed_work takes the worker->lock spinlock, that spinlock is never contended, because the poll_scheduled atomic prevents such competition.
psi_poll_work
psi_trigger_destroy
hib_init_batch
crc32_threadfn - CRC32 update function that runs in its own thread.
lzo_compress_threadfn - Compression function that runs in its own thread.
save_image_lzo - Save the suspend image data compressed with LZO. @handle: swap map handle to use for saving the image; @snapshot: image to read data from; @nr_to_write: number of pages to save.
lzo_decompress_threadfn - Decompression function that runs in its own thread.
load_image_lzo - Load compressed image data and decompress it with LZO. @handle: swap map handle to use for loading data; @snapshot: image to copy uncompressed data into; @nr_to_read: number of pages to load.
init_srcu_struct_fields - Initialize non-compile-time-initialized fields, including the associated srcu_node and srcu_data structures. The is_static parameter is passed through to init_srcu_struct_nodes() and also tells us that ->sda has already been wired up to srcu_data.
srcu_barrier - Wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks.
rcu_torture_barrier - kthread function to drive and coordinate RCU barrier testing.
rcu_torture_barrier_init - Initialize RCU barrier testing.
rcu_torture_init
rcu_perf_init
rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
futex_init
crash_kexec
init_cgroup_root
init_and_link_css
create_user_ns - Create a new user namespace, deriving the creator from the user in the passed credentials and replacing that user with the new root user for the new namespace. Called by copy_creds(), which will finish setting the target task's credentials.
cpu_stop_init_done
set_state
kgdb_cpu_enter
kgdb_tasklet_bpt - There are times a tasklet needs to be used instead of a compiled-in breakpoint, so as to cause an exception outside a kgdb I/O module; such is the case with kgdboe, where calling a breakpoint in the I/O driver itself would be fatal.
kdb_disable_nmi
reset_hung_task_detector
tracing_map_clear - Clear a tracing_map. @map: the tracing_map to clear. Resets the tracing map to a cleared or initial state.
tracing_map_create - Create a lock-free map and element pool. @map_bits: the size of the map (2 ** map_bits); @key_size: the size of the key for the map in bytes; @ops: optional client-defined tracing_map_ops instance; @private_data: client data associated with the map.
alloc_retstack_tasklist - Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks.
graph_init_task
trace_create_new_event
perf_mmap
perf_pmu_register
perf_output_wakeup
__create_xol_area
padata_init_pqueues - Initialize all percpu queues used by parallel workers.
padata_alloc_pd - Allocate and initialize the internal cpumask-dependent resources.
static_key_slow_inc
static_key_enable_cpuslocked
anon_vma_alloc
anon_vma_ctor
page_add_new_anon_rmap - Add pte mapping to a new anonymous page. @page: the page to add the mapping to; @vma: the vm area in which the mapping is added; @address: the user virtual address mapped; @compound: charge the page as compound or small page.
hugepage_add_new_anon_rmap
prep_compound_page
swapin_nr_pages
init_swap_address_space
__frontswap_invalidate_area - Invalidate all data from frontswap associated with all offsets for the specified swap type.
prep_compound_gigantic_page
mpol_new - Just creates a new policy, does some checks and simple initialization. You must invoke mpol_set_nodemask() to set nodes.
__mpol_dup - Slow path of a mempolicy duplicate.
shared_policy_replace - Replace a policy range.
get_huge_zero_page
create_object - Create the metadata (struct kmemleak_object) corresponding to an allocated memory block and add it to the object_list and object_tree_root.
zpool_register_driver - Register a zpool implementation. @driver: driver to register.
msg_init_ns
bio_init - Users of this function have their own bio allocation. Subsequently, they must remember to pair any call to bio_init() with bio_uninit() when IO has completed, or when the bio is released.
bio_reset - Reinitialize a bio. @bio: bio to reset. After calling bio_reset(), @bio will be in the same state as a freshly allocated bio returned by bio_alloc_bioset(); the only fields preserved are the ones initialized by bio_alloc_bioset().
create_task_io_context
blk_mq_alloc_hctx
scale_cookie_change - We scale the qd down faster than we scale up, so we need to use this helper to adjust the scale_cookie accordingly, so we don't prematurely get scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much.
iolatency_clear_scaling
iolatency_pd_init
blk_iocost_init
kyber_init_hctx
key_user_lookup - Get the key quota record for a user, allocating a new record if one doesn't already exist.
selinux_avc_init
tomoyo_commit_condition - Commit "struct tomoyo_condition". @entry: pointer to "struct tomoyo_condition". Returns a pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries.
tomoyo_collect_entry - Try to kfree() deleted elements. Returns nothing.
tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group". @param: pointer to "struct tomoyo_acl_param"; @idx: index number. Returns a pointer to "struct tomoyo_group" on success, NULL otherwise.
tomoyo_get_name - Allocate permanent memory for string data. @name: the string to store into the permanent memory. Returns a pointer to "struct tomoyo_path_info" on success, NULL otherwise.
alloc_ns - Allocate, initialize and return a new namespace. @prefix: parent namespace name (may be NULL); @name: a preallocated name (not NULL). Returns a refcounted namespace or NULL on failure.
alloc_super - Create a new superblock. @type: filesystem type the superblock should belong to; @flags: the mount flags; @user_ns: user namespace for the super_block. Allocates and initializes a new struct super_block.
__d_alloc - Allocate a dcache entry. @sb: filesystem it will belong to; @name: qstr of the name. Allocates a dentry. Returns %NULL if there is insufficient memory available; on success the dentry is returned. The name passed in is copied.
inode_init_always - Perform inode structure initialisation. @sb: superblock the inode belongs to; @inode: inode to initialise. These are initializations that need to be done on every inode allocation, as the fields are not initialised by slab allocation.
dup_fd - Allocate a new files structure and copy contents from the passed-in files structure. errorp will be valid only when the returned files_struct is NULL.
alloc_mnt_ns
__blkdev_direct_IO
fsnotify_alloc_group - Create a new fsnotify_group and hold a reference for the group returned.
ioctx_alloc - Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
exit_aio - Called when the last user of mm goes away. At this point, there is no way for any new requests to be submitted or any of the io_* syscalls to be called on the context. There may be outstanding kiocbs, but free_ioctx() will explicitly wait them out.
SYSCALL_DEFINE1 (sys_io_destroy) - Destroy the aio_context specified. May cancel any outstanding AIOs and block on completion. Will fail with -ENOSYS if not implemented; may fail with -EINVAL if the context pointed to is invalid.
io_wq_create
mb_cache_entry_create - Create an entry in cache. @cache: cache where the entry should be created; @mask: gfp mask with which the entry should be allocated; @key: key of the entry; @value: value of the entry; @reusable: is the entry reusable by others?
zap_threads
iomap_page_create
iomap_dio_rw - Always completes O_[D]SYNC writes regardless of whether the IO is being issued as AIO or not. This allows us to optimise pure data writes to use REQ_FUA rather than requiring generic_write_sync() to issue a REQ_FLUSH post write.
get_empty_dquot
atomic_long_set
static_key_enable
static_key_disable
osq_lock_init