Caller Name | Description |
__replace_page | __replace_page - replace a page in a vma with a new page. |
page_endio | After completing I/O on a page, call this routine to update the page flags appropriately. |
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased refcount. |
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search. @offset: the page index. @fgp_flags: FGP flags. @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset. (See the lookup sketch after this table.) |
generic_file_buffered_read | generic_file_buffered_read - generic file read routine. @iocb: the iocb to read. @iter: data destination. @written: already copied. This is a generic file read routine, and uses the mapping->a_ops->readpage() function for the actual low-level stuff. |
filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault. |
filemap_map_pages | Map already-cached pages around the fault address into the page table without blocking, to reduce the number of subsequent minor faults. |
filemap_page_mkwrite | Notification that a file-backed page is about to become writable: lock the page, check that it is still mapped, and mark it dirty. |
do_read_cache_page | Find or create a page at the given index and, if it is not Uptodate, fill it with the supplied filler and wait for the read to complete. (The read_mapping_page() sketch after this table shows the usual consumer pattern.) |
write_cache_pages | write_cache_pages - walk the list of dirty pages of the given address space and write all of them |
write_one_page | write_one_page - write out a single page and wait on I/O. @page: the page to write. The page must be locked by the caller and will be unlocked upon return. (See the locking-contract sketch after this table.) |
set_page_dirty_lock | set_page_dirty() is racy if the caller has no reference against page->mapping->host and the page is unlocked: another CPU could truncate the page off the mapping and then free the mapping. Usually, the page _is_ locked, or the caller is a user-space process which holds a reference on the inode by having an open file. |
read_cache_pages_invalidate_page | See if a page needs releasing upon read_cache_pages() failure. The caller of read_cache_pages() may have set PG_private or PG_fscache before calling (such as the NFS fs marking pages that are cached locally on disk), thus we need to give the fs a chance to clean up in the event of an error. |
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate. @lstart: offset from which to truncate. @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that are between the specified offsets (and zeroing out partial pages if lstart or lend + 1 is not page aligned). (See the teardown sketch after this table.) |
invalidate_mapping_pages | invalidate_mapping_pages - invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate. @start: the offset 'from' which to invalidate. @end: the offset 'to' which to invalidate (inclusive). This function only removes the unlocked pages; to remove all the pages of one inode, use truncate_inode_pages(). |
invalidate_inode_pages2_range | invalidate_inode_pages2_range - remove range of pages from an address_space. @mapping: the address_space. @start: the page offset 'from' which to invalidate. @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped into page tables are unmapped prior to invalidation. |
pagecache_isize_extended | pagecache_isize_extended - update pagecache after extension of i_size. @inode: inode for which i_size was extended. @from: original inode size. @to: new inode size. Handle extension of inode size either caused by extending truncate or by a write starting after the current i_size. |
handle_write_error | We detected a synchronous write error writing a page out, probably -ENOSPC. We need to propagate that into the address_space for a subsequent fsync(), msync() or close(). The tricky part is that after writepage we cannot touch the mapping: nothing prevents it from being freed up. |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
__isolate_lru_page | Attempt to remove the specified page from its LRU. Only take this page if it is of the appropriate PageActive status. Pages which are being freed elsewhere are also ignored. Returns 0 on success, a negative errno on failure. |
shrink_active_list | Scan the active LRU list, moving pages that have not been referenced recently down to the inactive list. |
follow_page_pte | PTE-level helper of follow_page(): look up the page mapped at the address, honouring the FOLL_* flags. |
follow_pmd_mask | PMD-level helper of follow_page(): handles huge PMDs and otherwise descends to follow_page_pte(). |
do_page_mkwrite | Notify the address space that the page is about to become writable so that it can prohibit this or wait for the page to get into an appropriate state. We do this without the lock held, so that it can sleep if it needs to. |
fault_dirty_shared_page | Handle dirtying of a page in a shared file mapping on a write fault. The function expects the page to be locked and unlocks it. |
wp_page_copy | Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High-level logic flow: allocate a page and copy the content of the old page to the new one. |
wp_page_shared | Handle a write fault on a shared writable page: reuse the existing page, issuing the page_mkwrite notification where the mapping requires it. |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been done by the caller (the low-level page fault routine in most cases). |
do_swap_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases as does filemap_fault(). |
__do_fault | The mmap_sem must have been held on entry, and may have been released depending on flags and vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_or_retry(). |
do_read_fault | Handle a read fault on a file-backed VMA: try fault-around via ->map_pages first, otherwise read the page in with __do_fault() and map it. |
do_cow_fault | Handle a write fault on a private file mapping: read the page in, copy it into a freshly allocated anonymous page, and map the copy. |
do_shared_fault | Handle a write fault on a shared file mapping: read the page in, let the filesystem prepare it via page_mkwrite, then map it writable and dirty. |
__putback_lru_fast_prepare | Prepare page for fast batched LRU putback via putback_lru_evictable_pagevec(). The fast path is available only for evictable pages with single mapping; then we can bypass the per-cpu pvec and get better performance. |
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split into two main phases: the first clears the Mlocked flag and attempts to isolate the pages, all under a single zone lru lock; the second finishes the munlock only for pages where isolation succeeded. |
munlock_vma_pages_range | munlock_vma_pages_range() - munlock all pages in the vma range. @vma: vma containing range to be munlock()ed. @start: start address in @vma of the range. @end: end of range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED. |
page_referenced | page_referenced - test if the page was referenced. @page: the page to test. @is_locked: caller holds lock on the page. @memcg: target memory cgroup. @vm_flags: collect the vm_flags of the vmas which actually referenced the page. Quick test_and_clear_referenced for all mappings of a page; returns the number of ptes which referenced the page. |
madvise_cold_or_pageout_pte_range | PTE-range walker for MADV_COLD and MADV_PAGEOUT: deactivates the covered pages, or reclaims them immediately. |
madvise_free_pte_range | PTE-range walker for MADV_FREE: marks clean anonymous pages as lazily freeable and drops their swap entries. |
end_swap_bio_read | bio completion handler for swap-in reads: marks the page Uptodate on success and unlocks it. |
swap_writepage | We may have stale swap cache pages in memory: notice them here and get rid of the unnecessary final write. |
__swap_writepage | Submit the actual swap-out I/O for a page: through the swap file's ->direct_IO, through bdev_write_page(), or with a regular bio. |
swap_readpage | Read a page in from swap, either synchronously or asynchronously depending on the caller. |
free_swap_cache | If we are the only user, then try to free up the swap cache. It's OK to check for PageSwapCache without the page lock here because we are going to recheck again inside try_to_free_swap() _with_ the lock. - Marcelo |
__try_to_reclaim_swap | Returns 1 if the swap entry is freed. |
unuse_pte | No need to decide whether this PTE shares the swap entry with others; just let do_wp_page work it out if a write is requested later. To force COW, vm_page_prot omits write permission from any private vma. |
unuse_pte_range | Walk a PTE range looking for the given swap entry and call unuse_pte() on each match. |
try_to_unuse | If the boolean frontswap is true, only unuse pages_to_unuse pages;* pages_to_unuse==0 means all pages; ignored if frontswap is false |
hugetlb_no_page | Handle a hugetlb fault for which no page is present: allocate (or look up) a huge page, taking care of reservations, and map it. |
hugetlb_fault | Top-level hugetlb page fault handler: serializes faults on a hash of fault mutexes, then dispatches to hugetlb_no_page() or the CoW path. |
hugetlb_mcopy_atomic_pte | Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte, with modifications for huge pages. |
get_ksm_page | get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In which case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped, it removes the stale node from the stable tree and returns NULL. |
remove_rmap_item_from_tree | Remove an rmap_item from the stable or unstable tree, cleaning its information out of that tree. |
try_to_merge_one_page | try_to_merge_one_page - take two pages and merge them into one. @vma: the vma that holds the pte pointing to page. @page: the PageAnon page that we want to replace with kpage. @kpage: the PageKsm page that we want to map instead of page, or NULL the first time, when we want to use page as the kpage. |
stable_tree_search | stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now. |
cmp_and_merge_page | cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous and, if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and both transferred to the stable tree. |
isolate_movable_page | Isolate a non-LRU movable page for migration: take a reference and invoke the driver's ->isolate_page() callback. |
putback_movable_pages | Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from LRU, balloon or hugetlbfs pages. |
__unmap_and_move | The core of page migration: lock the old page, unmap its ptes, and move its contents and state over to the new page. |
unmap_and_move | Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage. |
unmap_and_move_huge_page | Counterpart of unmap_and_move() for hugepage migration. |
migrate_pages | migrate_pages - migrate the pages specified in a list, to the free pages supplied as the target for the page migration. @from: The list of pages to be migrated. @get_new_page: The function used to allocate free pages to be used as the target of the page migration. (See the migration sketch after this table.) |
do_huge_pmd_wp_page | Handle a write fault on a transparent huge page: reuse it when we are the only mapper, otherwise fall back to copying or splitting. |
follow_trans_huge_pmd | follow_page() helper for an address mapped by a transparent huge PMD. |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
madvise_free_huge_pmd | Return true if we do MADV_FREE successfully on the entire pmd page; otherwise, return false. |
__split_huge_page | Carve a THP whose mappings have been frozen up into independent base pages and release them. |
deferred_split_scan | Shrinker scan callback that splits the THPs queued on the deferred-split list in order to free memory. |
release_pte_page | khugepaged helper: unlock and put back a small page that was isolated for collapse. |
__collapse_huge_page_isolate | khugepaged helper: check and isolate the small pages of a candidate range before they are collapsed into a huge page. |
mem_cgroup_move_account | mem_cgroup_move_account - move account of the page. @page: the page. @compound: charge the page as compound or small page. @from: mem_cgroup which the page is moved from. @to: mem_cgroup which the page is moved to; @from != @to. |
me_huge_page | Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit), so to narrow the kill region down to one page we would need to break up the pmd. |
memory_failure_hugetlb | memory_failure() handling for hugetlb pages. |
memory_failure | memory_failure - Handle memory failure of a page. @pfn: Page Number of the corrupted page. @flags: fine tune action taken. This function is called by the low level machine check code of an architecture when it detects hardware memory corruption of a page. |
unpoison_memory | unpoison_memory - Unpoison a previously poisoned page. @pfn: Page number of the to-be-unpoisoned page. Software-unpoison a page that has been poisoned by memory_failure() earlier. |
soft_offline_huge_page | Soft-offline a huge page: migrate its contents away and mark the original huge page HWPoison. |
__soft_offline_page | Soft-offline an in-use base page: try to invalidate it, and if that fails migrate its contents away and poison the original page. |
soft_offline_in_use_page | Dispatch the soft offlining of an in-use page to the huge page or base page handler. |
trylock_zspage | Try to lock every component page of a zspage, unlocking the ones already taken on failure. |
__free_zspage | Free all component pages of a zspage and update the pool's statistics. |
free_z3fold_page | Resets the struct page fields and frees the page |
z3fold_alloc | z3fold_alloc() - allocates a region of a given size. @pool: z3fold pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt to find a free region in the pool large enough to satisfy the allocation request. |
balloon_page_enqueue_one | Insert one page into the balloon device's page list and account a balloon inflation. |
balloon_page_list_dequeue | balloon_page_list_dequeue() - removes pages from the balloon's page list and returns a list of the pages. @b_dev_info: balloon device descriptor where we will grab a page from. @pages: pointer to the list of pages that would be returned to the caller. |
page_idle_clear_pte_refs | Clear the referenced (accessed) bit in all ptes mapping the page, for the idle page tracking facility. |
vfs_unlock_two_pages | Unlock two pages, being careful not to unlock the same page twice. |
simple_readpage | Trivial ->readpage for memory-backed filesystems: zero-fill the page and mark it Uptodate. |
simple_write_end | simple_write_end - .write_end helper for non-block-device FSes. @file, @mapping, @pos, @len, @copied, @page, @fsdata: see .write_end of address_space_operations. simple_write_end does the minimum needed for updating a page after writing is done. |
page_cache_pipe_buf_steal | Attempt to steal a page from a pipe buffer. This should perhaps go into a vm helper function; it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse this page for another destination. |
page_cache_pipe_buf_confirm | Check whether the contents of buf are OK to access. Since the content is a page cache page, IO may be in flight. |
end_buffer_async_read | Async read completion handler for block_read_full_page(): once every buffer on the page has completed, mark the page Uptodate (if no error) and unlock it. |
grow_dev_page | Create the page-cache page that contains the requested block. This is used purely for blockdev mappings. |
clean_bdev_aliases | clean_bdev_aliases: clean a range of buffers in block device. @bdev: Block device to clean buffers in. @block: Start of a range of blocks to clean. @len: Number of blocks to clean. We are taking a range of blocks for data and we don't want writeback of any buffer-cache aliases starting from return from this function until I/O is finished. |
__block_write_full_page | While block_write_full_page is writing back the dirty buffers under the page lock, whoever dirtied the buffers may decide to clean them again at any time. |
block_write_begin | block_write_begin takes care of the basic task of block allocation and bringing partial write blocks uptodate first. The filesystem needs to handle block truncation upon failure. (See the aops wiring sketch after this table.) |
generic_write_end | Generic .write_end for block-based filesystems: commit the copied data via block_write_end() and update i_size when the write extends the file. |
block_read_full_page | Generic "read page" function for block devices that have the normal get_block functionality. |
block_page_mkwrite | block_page_mkwrite() is not allowed to change the file size as it gets called from a page fault handler when a page is first dirtied. (See the ->page_mkwrite sketch after this table.) |
nobh_write_begin | On entry, the page is fully not uptodate. On exit the page is fully uptodate in the areas outside (from,to). The filesystem needs to handle block truncation upon failure. |
nobh_write_end | The .write_end counterpart of nobh_write_begin(): commits the write without attaching buffer heads to the page. |
nobh_writepage | nobh_writepage() - based on block_write_full_page(), except that it tries to operate without attaching bufferheads to the page. |
nobh_truncate_page | Zero out the tail of a partially truncated page, for filesystems using the nobh helpers. |
block_truncate_page | Zero out the partial block at the new end-of-file on truncate, for block-based filesystems. |
block_write_full_page | The generic ->writepage function for buffer-backed address_spaces |
blkdev_write_end | .write_end of the block device mapping: commits the data via block_write_end() and drops the page lock and reference. |
bdev_write_page | bdev_write_page() - Start writing a page to a block device. @bdev: The device to write the page to. @sector: The offset on the device to write the page to (need not be aligned). @page: The page to write. @wbc: The writeback_control for the write. |
do_mpage_readpage | This is the worker routine which does all the work of mapping the disk blocks and constructs the largest possible bios, submitting them for IO if the blocks are not contiguous on the disk. |
__mpage_writepage | Write out a single page for mpage_writepages(), adding it to the largest possible bio. |
aio_setup_ring | Allocate the AIO ring buffer pages and map them into both the kernel and the submitting process's address space. |
iomap_read_finish | Read-completion helper: drop the outstanding read count on the iomap_page and unlock the page once all reads for it have finished. |
iomap_readpage | iomap-based ->readpage implementation. (See the iomap wiring sketch after this table.) |
iomap_readpages_actor | iomap_apply() actor for iomap_readpages(): issues read I/O for each page of the mapped range. |
iomap_readpages | iomap-based ->readpages implementation, used for readahead. |
iomap_write_begin | Prepare a page for a buffered write: find or create it, and read in or zero the parts that the write will not cover. |
iomap_write_end | Commit a buffered write to the page, marking the written range uptodate and the page dirty. |
iomap_page_mkwrite | iomap-based ->page_mkwrite: make sure blocks are allocated for the faulted page and mark it dirty. |
iomap_writepage_map | We implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations, which can violate the forward progress guarantees we need to provide. |
iomap_do_writepage | Write out a dirty page. For delalloc space on the page we need to allocate space and flush it. For unwritten space on the page we need to start the conversion to regular allocated space. |
page_seek_hole_data | Seek for SEEK_DATA / SEEK_HOLE within @page, starting at @lastoff. Returns true if found, and updates @lastoff to the offset in the file. |
ramfs_nommu_expand_for_mapping | Add a contiguous set of pages into a ramfs inode when it's truncated from size 0, on the assumption that it's going to be used for an mmap of shared memory. |
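
The lookup sketch referenced at pagecache_get_page: a minimal find-or-create of a locked page cache page, assuming the v5.4-era four-argument signature (FGP flag and gfp handling vary across kernel versions); `get_locked_page` is a hypothetical wrapper, not a kernel symbol.

```c
#include <linux/pagemap.h>

/* Hypothetical wrapper: find-or-create a locked page at @index. */
static struct page *get_locked_page(struct address_space *mapping,
				    pgoff_t index)
{
	/* FGP_LOCK: return the page locked; FGP_CREAT: allocate on miss;
	 * FGP_ACCESSED: mark the page referenced for reclaim purposes. */
	return pagecache_get_page(mapping, index,
				  FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				  mapping_gfp_mask(mapping));
}
```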
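
The read_mapping_page() sketch referenced at do_read_cache_page: the usual consumer pattern sitting above do_read_cache_page(), which hands back the page Uptodate but unlocked, or an ERR_PTR. `read_page_idx` is a hypothetical helper.

```c
#include <linux/err.h>
#include <linux/pagemap.h>

static struct page *read_page_idx(struct address_space *mapping, pgoff_t idx)
{
	/* Fills the page via mapping->a_ops->readpage if it is not
	 * already cached and Uptodate. */
	struct page *page = read_mapping_page(mapping, idx, NULL);

	if (IS_ERR(page))
		return page;	/* allocation or I/O error */

	/* Use the page (kmap() etc.), then drop it with put_page(). */
	return page;
}
```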
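
The locking-contract sketch referenced at write_one_page, assuming the post-4.12 signature (no writeback_control argument); `flush_page_sync` is hypothetical.

```c
#include <linux/mm.h>
#include <linux/pagemap.h>

static int flush_page_sync(struct page *page)
{
	lock_page(page);		/* caller must hold the page lock */
	/* Writes the page back, unlocks it, and waits for the I/O. */
	return write_one_page(page);
}
```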
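
The teardown sketch referenced at truncate_inode_pages_range, contrasting the two helpers: truncation removes pages unconditionally (zeroing unaligned edges), while invalidation only drops clean, unlocked, unmapped pages. Signatures are as in v5.4-era kernels; the wrapper names are hypothetical.

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static void drop_range_hard(struct inode *inode, loff_t lstart, loff_t lend)
{
	/* Removes every page in [lstart, lend], zeroing partial pages. */
	truncate_inode_pages_range(inode->i_mapping, lstart, lend);
}

static unsigned long drop_clean_pages(struct inode *inode)
{
	/* Skips locked, dirty and mapped pages; returns the count dropped. */
	return invalidate_mapping_pages(inode->i_mapping, 0, (pgoff_t)-1);
}
```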
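
The migration sketch referenced at migrate_pages, assuming the v5.4-era six-argument form (the signature has changed in later kernels); `alloc_target_page` and `migrate_list` are hypothetical names.

```c
#include <linux/gfp.h>
#include <linux/migrate.h>

/* new_page_t callback: allocate a destination page for @page. */
static struct page *alloc_target_page(struct page *page, unsigned long private)
{
	return alloc_page(GFP_HIGHUSER_MOVABLE);
}

static int migrate_list(struct list_head *pagelist)
{
	int err;

	/* Pages on @pagelist must already be isolated (e.g. from the LRU). */
	err = migrate_pages(pagelist, alloc_target_page, NULL, 0,
			    MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
	if (err)
		/* Put pages that could not be migrated back on their lists. */
		putback_movable_pages(pagelist);
	return err;
}
```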
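
The aops wiring sketch referenced at block_write_begin: how a simple block-based filesystem might pair block_write_begin() with generic_write_end() (v5.4-era signatures). The `myfs_*` names and the get_block callback are hypothetical, not kernel symbols.

```c
#include <linux/buffer_head.h>
#include <linux/fs.h>

/* Hypothetical get_block_t: maps a file block to a disk block. */
static int myfs_get_block(struct inode *inode, sector_t block,
			  struct buffer_head *bh_result, int create);

static int myfs_write_begin(struct file *file, struct address_space *mapping,
			    loff_t pos, unsigned len, unsigned flags,
			    struct page **pagep, void **fsdata)
{
	/* Allocates blocks and brings partially written blocks uptodate. */
	return block_write_begin(mapping, pos, len, flags, pagep,
				 myfs_get_block);
}

static const struct address_space_operations myfs_aops = {
	.write_begin	= myfs_write_begin,
	.write_end	= generic_write_end,	/* commits copy, updates i_size */
};
```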
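
The ->page_mkwrite sketch referenced at block_page_mkwrite, reusing the hypothetical `myfs_get_block` above; block_page_mkwrite_return() converts the errno-style result into a VM_FAULT_* code (v5.4-era API).

```c
#include <linux/buffer_head.h>
#include <linux/mm.h>

static vm_fault_t myfs_page_mkwrite(struct vm_fault *vmf)
{
	/* May allocate blocks, but must not change the file size. */
	int err = block_page_mkwrite(vmf->vma, vmf, myfs_get_block);

	return block_page_mkwrite_return(err);
}
```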
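
The iomap wiring sketch referenced at iomap_readpage, assuming the v5.4-era read-side API (iomap_readpages was later replaced by iomap_readahead); `myfs_iomap_ops` is a hypothetical struct iomap_ops supplied by the filesystem's block-mapping code.

```c
#include <linux/iomap.h>

extern const struct iomap_ops myfs_iomap_ops;	/* hypothetical */

static int myfs_readpage(struct file *unused, struct page *page)
{
	return iomap_readpage(page, &myfs_iomap_ops);
}

static int myfs_readpages(struct file *unused, struct address_space *mapping,
			  struct list_head *pages, unsigned nr_pages)
{
	return iomap_readpages(mapping, pages, nr_pages, &myfs_iomap_ops);
}
```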