Caller | Description |
__xa_erase | __xa_erase() - Erase this entry from the XArray while locked. @xa: XArray. @index: Index into array. After this function returns, loading from @index will return %NULL. If the index is part of a multi-index entry, all indices will be erased. (See the locked-API sketch after this table.) |
__xa_store | __xa_store() - Store this entry in the XArray |
__xa_cmpxchg | __xa_cmpxchg() - Store this entry in the XArray |
__xa_insert | __xa_insert() - Store this entry in the XArray if no entry is present. @xa: XArray. @index: Index into array. @entry: New entry. @gfp: Memory allocation flags. Inserting a NULL entry will store a reserved entry (like xa_reserve()) if no entry is present. |
xa_store_range | xa_store_range() - Store this entry at a range of indices in the XArray |
__xa_alloc | |
ida_alloc_range | ida_alloc_range() - Allocate an unused ID. @ida: IDA handle. @min: Lowest ID to allocate. @max: Highest ID to allocate. @gfp: Memory allocation flags. Allocate an ID between @min and @max, inclusive. The allocated ID will not exceed %INT_MAX, even if @max is larger. (See the IDA sketch after this table.) |
ida_free | ida_free() - Release an allocated ID. @ida: IDA handle. @id: Previously allocated ID. Context: Any context. |
ida_destroy | ida_destroy() - Free all IDs. @ida: IDA handle. Calling this function frees all IDs and releases all resources used by an IDA. When this call returns, the IDA is empty and can be reused or freed. If the IDA is already empty, there is no need to call this function. |
xa_store_order | If anyone needs this, please move it to xarray.c. We have no current users outside the test suite because all current multislot users want to use the advanced API. |
check_xas_retry | |
check_xa_shrink | |
check_xas_erase | |
check_multi_store_1 | |
check_multi_store_2 | |
__check_store_iter | |
xa_store_many_order | |
check_create_range_4 | |
shadow_remove | |
check_workingset | |
page_cache_delete_batch | page_cache_delete_batch - delete several pages from page cache. @mapping: the mapping to which pages belong. @pvec: pagevec with pages to delete. The function walks over mapping->i_pages and removes pages passed in @pvec from the mapping. |
replace_page_cache_page | replace_page_cache_page - replace a pagecache page with a new one. @old: page to be replaced. @new: page to replace with. @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one. |
__add_to_page_cache_locked | |
__clear_shadow_entry | Regular page slots are stabilized by the page lock even without the tree itself locked. These unlocked entries need verification under the tree lock. |
shadow_lru_isolate | |
add_to_swap_cache | add_to_swap_cache resembles add_to_page_cache_locked on swapper_space, but sets SwapCache flag and private instead of mapping and index. |
__delete_from_swap_cache | This must be called only on pages that have been verified to be in the swap cache. |
migrate_page_move_mapping | Replace the page in the mapping. The number of remaining references must be: 1 for anonymous pages without a mapping, 2 for pages with a mapping, 3 for pages with a mapping and PagePrivate/PagePrivate2 set. |
migrate_huge_page_move_mapping | The expected number of remaining references is the same as that of migrate_page_move_mapping(). |
dax_lock_entry | Return: The entry stored at this location before it was locked. |
grab_mapping_entry | Find page cache entry at given index. If it is a DAX entry, return it with the entry locked. If the page cache doesn't contain an entry at that index, add a locked empty entry. grab_mapping_entry() also accepts requests for DAX_PMD-sized entries. |
__dax_invalidate_entry | |
dax_writeback_one | |
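
The `__xa_*` rows above belong to the XArray's locked ("advanced") API: the caller takes the xa_lock itself so that several updates can be combined under one critical section. The following is a minimal sketch of that pattern, assuming a kernel build context; `my_objects`, `struct my_obj`, `my_obj_publish()` and `my_obj_unpublish()` are hypothetical names used only for illustration.

```c
/*
 * Sketch only: publish/unpublish objects in an XArray using the locked
 * __xa_insert()/__xa_erase() calls listed in the table. The names
 * my_objects, struct my_obj, my_obj_publish() and my_obj_unpublish()
 * are hypothetical.
 */
#include <linux/xarray.h>

static DEFINE_XARRAY(my_objects);

struct my_obj {
	unsigned long id;
	/* ... */
};

static int my_obj_publish(struct my_obj *obj)
{
	int err;

	xa_lock(&my_objects);
	/*
	 * Fails with -EBUSY if an entry (or reservation) already exists at
	 * obj->id. With GFP_KERNEL the XArray may drop and re-take the lock
	 * to allocate internal nodes.
	 */
	err = __xa_insert(&my_objects, obj->id, obj, GFP_KERNEL);
	xa_unlock(&my_objects);

	return err;
}

static void my_obj_unpublish(struct my_obj *obj)
{
	xa_lock(&my_objects);
	/* After __xa_erase(), xa_load(&my_objects, obj->id) returns NULL */
	__xa_erase(&my_objects, obj->id);
	xa_unlock(&my_objects);
}
```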
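The IDA rows (ida_alloc_range(), ida_free(), ida_destroy()) cover the usual lifetime of small integer IDs. A minimal sketch follows, again assuming kernel context; `my_ids` and the `my_dev_*()` helpers are hypothetical.

```c
/*
 * Sketch only: typical IDA lifetime using ida_alloc_range(), ida_free()
 * and ida_destroy(). my_ids and the my_dev_*() helpers are hypothetical.
 */
#include <linux/idr.h>

static DEFINE_IDA(my_ids);

/* Allocate an ID between 0 and 255, inclusive */
static int my_dev_register(void)
{
	/* Returns the allocated ID, or -ENOMEM / -ENOSPC on failure */
	return ida_alloc_range(&my_ids, 0, 255, GFP_KERNEL);
}

/* Return a previously allocated ID to the pool */
static void my_dev_unregister(int id)
{
	ida_free(&my_ids, id);
}

/* On teardown, free all IDs and the IDA's internal memory */
static void my_dev_cleanup(void)
{
	ida_destroy(&my_ids);
}
```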