Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/asm-generic/atomic-long.h  Create Date: 2022-07-28 05:34:52
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: atomic_long_set

Proto: static inline void atomic_long_set(atomic_long_t *v, long i)

Type: void

Parameter:

Type             Parameter Name
atomic_long_t *  v
long             i

Body:

534  atomic_set(v, i)
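For context, atomic_long_t is a typedef in this generated header: atomic64_t on 64-bit kernels and atomic_t on 32-bit kernels, with atomic_long_set() forwarding to atomic64_set() or atomic_set() accordingly. A minimal usage sketch follows; the counter name and init function are hypothetical, not from the kernel source:

    #include <linux/atomic.h>

    /* Hypothetical module-local statistics counter. */
    static atomic_long_t demo_counter;

    static void demo_counter_init(void)
    {
            /*
             * atomic_long_set() is a plain atomic store (a WRITE_ONCE()
             * of the counter underneath); it implies no memory barrier,
             * so callers that need ordering must pair it with smp_mb()
             * or use atomic_long_set_release().
             */
            atomic_long_set(&demo_counter, 0);
    }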
Caller
Name - Description
percpu_ref_init - Initialize a percpu refcount. @ref: percpu_ref to initialize; @release: function which will be called when the refcount hits 0; @flags: PERCPU_REF_INIT_* flags; @gfp: allocation mask to use. Initializes @ref.
gen_pool_add_owner - Add a new chunk of special memory to the pool. @pool: pool to add new memory chunk to; @virt: virtual starting address of memory chunk to add to pool; @phys: physical starting address of memory chunk to add to pool; @size: size in bytes of the memory chunk to add to pool.
set_work_data - While queued, %WORK_STRUCT_PWQ is set and the non-flag bits of a work's data contain the pointer to the queued pwq.
__mutex_init
rwsem_set_owner - All writes to owner are protected by WRITE_ONCE() to make sure that store tearing can't happen, as optimistic spinners may read and use the owner value concurrently without the lock.
rwsem_clear_owner
__rwsem_set_reader_owned - The task_struct pointer of the last owning reader will be left in the owner field. Note that the owner value just indicates the task has owned the rwsem previously; it may not be the real owner or one of the real owners.
__init_rwsem - Initialize an rwsem.
acct_on
alloc_chunk
perf_event_alloc - Allocate and initialize an event structure.
zero_zone_numa_counters - Zero NUMA counters within a zone.
zero_global_numa_counters - Zero global NUMA counters.
set_iounmap_nonlazy - Called before a call to iounmap() if the caller wants vm_area_structs immediately freed.
zone_init_internals
reset_node_managed_pages
lookup_swap_cache - Look up a swap entry in the swap cache. A found page will be returned unlocked and with its refcount incremented; we rely on the kernel lock getting page table operations atomic even if we drop the page lock before returning.
swap_ra_info
create_task_io_context
__alloc_file
ns_prune_dentry
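A recurring pattern in this caller list is using atomic_long_set() to publish a pointer-sized value tear-free, as the rwsem helpers above do with the owner field. A sketch in that spirit, assuming the v5.5 layout where struct rw_semaphore keeps its owner in an atomic_long_t (the helper name here is illustrative, not the kernel's):

    #include <linux/rwsem.h>
    #include <linux/sched.h>

    /* Publish the writer's task_struct pointer into the owner field.
     * atomic_long_set() guarantees the store cannot tear, so optimistic
     * spinners reading the owner concurrently see a consistent value. */
    static inline void rwsem_set_owner_sketch(struct rw_semaphore *sem)
    {
            atomic_long_set(&sem->owner, (long)current);
    }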