Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: lib/xarray.c Create Date: 2022-07-28 06:13:27
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: xas_store() - Store this entry in the XArray

Proto: void *xas_store(struct xa_state *xas, void *entry)

Return Type: void *

Parameter:

Type                Parameter Name
struct xa_state *   xas
void *              entry
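For context, xas_store() is an advanced-API entry point and must be called with the xa_lock held. A minimal sketch of the usual calling pattern, modelled on the kernel's own __xa_store() and Documentation/core-api/xarray.rst (error handling elided; kernel code, not standalone-compilable here):

```c
/* Sketch only: requires kernel context (linux/xarray.h). */
XA_STATE(xas, xa, index);        /* operation state for @xa at @index */
void *curr;

do {
	xas_lock(&xas);          /* xas_store() requires the xa_lock */
	curr = xas_store(&xas, entry);
	xas_unlock(&xas);
} while (xas_nomem(&xas, GFP_KERNEL));  /* retry if a node allocation was needed */
```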
772  slot = &xas->xa->xa_head
774  count = 0
775  values = 0
777  value = xa_is_value(entry) - is the new entry a value (tagged integer) rather than a pointer?
779  If entry Then
780  allow_root = Not xa_is_node(entry) && Not xa_is_zero(entry) - the entry may be stored in the root only if it is neither an internal node nor a zero entry
781  first = xas_create(xas, allow_root) - create a slot to store the entry in, walking (and allocating) the tree down to xa_index
782  Else
783  first = xas_load(xas) - load the entry currently stored at xa_index; does nothing and returns %NULL if @xas is in an error state
786  If xas_invalid(xas) Then Return first - bail out if the xas is in a retry or error state
788  node = xas->xa_node
789  If node && xas->xa_shift < node->shift Then xas->xa_sibs = 0
791  If first == entry && Not xas->xa_sibs Then Return first - storing the same entry with no sibling slots is a no-op
794  next = first
795  offset = xas->xa_offset
796  max = xas->xa_offset + xas->xa_sibs
797  If node Then
798  slot = &node->slots[offset]
799  If xas->xa_sibs Then xas_squash_marks(xas) - merge all marks to the first entry: set a mark on the first entry if any sibling has it set, then clear the marks on all sibling entries
802  If Not entry Then xas_init_marks(xas) - erasing, so initialise (clear) all marks for the entry
805  Loop - the marks must be handled before the entry is stored: rcu_assign_pointer() contains a release barrier, and clearing marks after storing %NULL could let xas_for_each_marked() find a %NULL entry and stop early
813  rcu_assign_pointer(*slot, entry) - publish the new entry to the RCU-protected slot, ensuring concurrent RCU readers see any prior initialisation
814  If xa_is_node(next) && (Not node || node->shift) Then xas_free_nodes(xas, xa_to_node(next)) - the old subtree has been removed from the tree, so free it and all of its subnodes
816  If Not node Then Break
818  count += Not next - Not entry - net change in the number of present entries in this node
819  values += Not xa_is_value(first) - Not value - net change in the number of value entries in this node
820  If entry Then
821  If offset == max Then Break
823  If Not xa_is_sibling(entry) Then entry = xa_mk_sibling(xas->xa_offset) - subsequent slots of a multi-index entry hold sibling entries pointing back at the first slot
825  Else
826  If offset == XA_CHUNK_MASK Then Break
829  next = xa_entry_locked(xas->xa, node, ++offset)
830  If Not xa_is_sibling(next) Then
831  If Not entry && offset > max Then Break
833  first = next
835  slot++
838  update_node(xas, node, count, values) - propagate the count/values deltas to the node and notify the update callback
839  Return first - the entry previously stored at xa_index
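As the caller list below shows, erasing is just a store of %NULL; for example, __xa_erase() in v5.5 is essentially (kernel code, not standalone-compilable here):

```c
/* Sketch from v5.5 lib/xarray.c: erase = store NULL, returning the old entry. */
void *__xa_erase(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);
	return xas_result(&xas, xas_store(&xas, NULL));
}
```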
Caller
Name                Description
__xa_erase          Erase this entry from the XArray while locked. After this function returns, loading from @index will return %NULL; if the index is part of a multi-index entry, all indices will be erased.
__xa_store          Store this entry in the XArray.
__xa_cmpxchg        Store this entry in the XArray.
__xa_insert         Store this entry in the XArray if no entry is present. Inserting a %NULL entry will store a reserved entry (like xa_reserve()) if no entry is present.
xa_store_range      Store this entry at a range of indices in the XArray.
__xa_alloc
ida_alloc_range     Allocate an unused ID between @min and @max, inclusive.
ida_free            Release a previously allocated ID. Context: any context.
ida_destroy         Free all IDs and release all resources used by an IDA. When this call returns, the IDA is empty and can be reused or freed; if the IDA is already empty, there is no need to call this.
xa_store_order      "If anyone needs this, please move it to xarray.c. We have no current users outside the test suite because all current multislot users want to use the advanced API."
check_xas_retry
check_xa_shrink
check_xas_erase
check_multi_store_1
check_multi_store_2
__check_store_iter
xa_store_many_order
check_create_range_4
shadow_remove
check_workingset
page_cache_delete_batch     Delete several pages from the page cache: walks over mapping->i_pages and removes the pages passed in @pvec from the mapping.
replace_page_cache_page     Replace a pagecache page with a new one.
__add_to_page_cache_locked
__clear_shadow_entry        Regular page slots are stabilized by the page lock even without the tree itself locked; these unlocked entries need verification under the tree lock.
shadow_lru_isolate
add_to_swap_cache           Resembles add_to_page_cache_locked on swapper_space, but sets the SwapCache flag and private instead of mapping and index.
__delete_from_swap_cache    Must be called only on pages that have been verified to be in the swap cache.
migrate_page_move_mapping   Replace the page in the mapping. The number of remaining references must be: 1 for anonymous pages without a mapping, 2 for pages with a mapping, 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
migrate_huge_page_move_mapping  The expected number of remaining references is the same as that of migrate_page_move_mapping().
dax_lock_entry              Return: the entry stored at this location before it was locked.
grab_mapping_entry          Find the page cache entry at the given index. If it is a DAX entry, return it with the entry locked; if the page cache doesn't contain an entry at that index, add a locked empty entry. When requesting an entry with size DAX_PMD, grab_mapping_entry() will…
__dax_invalidate_entry
dax_writeback_one