| Caller | Description |
|---|---|
| uprobe_write_opcode | NOTE: Expect the breakpoint instruction to be the smallest size instruction for the architecture. | 
| unaccount_page_cache_page |  | 
| wait_on_page_bit_common |  | 
| generic_file_buffered_read | generic_file_buffered_read - generic file read routine. @iocb: the iocb to read; @iter: data destination; @written: already copied. This is a generic file read routine, and uses the mapping->a_ops->readpage() function for the actual low-level stuff. | 
| filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault. | 
| filemap_map_pages |  | 
| wait_on_page_read |  | 
| do_read_cache_page |  | 
| __set_page_dirty_nobuffers | For address_spaces which do not use buffers. Just tag the page as dirty in the xarray. This is also used when a single buffer is being dirtied: we want to set the page dirty in that case, but not all the buffers. This is a "bottom-up" dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying. | 
| do_swap_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases as does filemap_fault(). | 
| mincore_page | Later we can get more picky about what "in core" means precisely. For now, simply check to see if the page is in the page cache and is up to date; i.e. that no page-in operation would be required at this time if the page were accessed. | 
| swap_readpage |  | 
| add_to_swap | add_to_swap - allocate swap space for a page. @page: page we want to move to swap. Allocate swap space for the page and add the page to the swap cache. Caller needs to hold the page lock. | 
| ksm_might_need_to_copy |  | 
| migrate_page_states | Copy the page to its new location | 
| vfs_dedupe_get_page | Read a page's worth of file data into the page cache. | 
| vfs_dedupe_file_range_compare | Compare extents of two files to see if they are the same. Caller must have locked both inodes to prevent write races. | 
| page_get_link | Get the link contents into the pagecache. | 
| simple_write_begin |  | 
| simple_write_end | simple_write_end - .write_end helper for non-block-device FSes. @file, @mapping, @pos, @len, @copied, @page, @fsdata: see .write_end of address_space_operations. simple_write_end does the minimum needed for updating a page after writing is done. | 
| page_cache_pipe_buf_steal | Attempt to steal a page from a pipe buffer. This should perhaps go into a vm helper function; it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse this page for another destination. | 
| page_cache_pipe_buf_confirm | Check whether the contents of buf are OK to access. Since the content is a page cache page, IO may be in flight. | 
| __set_page_dirty | Mark the page dirty, and set it dirty in the page cache, and mark the inode dirty. If warn is true, then emit a warning if the page is not uptodate and has not been truncated. The caller must hold lock_page_memcg(). | 
| init_page_buffers | Initialise the state of a blockdev page's buffers. | 
| create_empty_buffers | We attach and possibly dirty the buffers atomically wrt __set_page_dirty_buffers() via private_lock. try_to_free_buffers is already excluded via the page lock. | 
| page_zero_new_buffers | If a page has any new buffers, zero them out here, and mark them uptodate and dirty so they'll be written out (in order to prevent uninitialised block data from leaking). And clear the new bit. | 
| __block_write_begin_int |  | 
| block_write_end |  | 
| nobh_write_begin | On entry, the page is fully not uptodate. On exit, the page is fully uptodate in the areas outside (from, to). The filesystem needs to handle block truncation upon failure. | 
| nobh_truncate_page |  | 
| block_truncate_page |  | 
| do_mpage_readpage | This is the worker routine which does all the work of mapping the disk blocks; it constructs the largest possible bios and submits them for IO if the blocks are not contiguous on the disk. | 
| clean_buffers | We have our BIO, so we can now mark the buffers clean. Make sure to only clean buffers which we know we'll be writing. | 
| __mpage_writepage |  | 
| verify_page | Verify a single data page against the file's Merkle tree | 
| iomap_read_inline_data |  | 
| __iomap_write_begin |  | 
| __iomap_write_end |  | 
| iomap_write_end_inline |  | 
| iomap_page_mkwrite_actor |  | 
| page_seek_hole_data | Seek for SEEK_DATA / SEEK_HOLE within @page, starting at @lastoff. Returns true if found and updates @lastoff to the offset in file. |
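
Taken together, the callers above all touch the page-cache uptodate machinery in some form: readers must not trust a page's contents until the uptodate flag is set, and writers set or clear it as they fill pages. The sketch below is a minimal, illustrative composite of the read-side pattern visible in callers such as do_read_cache_page and generic_file_buffered_read; `read_page_once` is a hypothetical helper for illustration, not a kernel function, and its error handling is simplified.

```c
#include <linux/pagemap.h>
#include <linux/err.h>

/*
 * Hypothetical helper (not a kernel function): the read-side pattern
 * shared by callers such as do_read_cache_page. A page found in the
 * page cache may still be under I/O, so we lock it, re-check
 * PageUptodate(), and only trust its contents once the flag is set.
 */
static struct page *read_page_once(struct address_space *mapping, pgoff_t index)
{
	struct page *page = find_get_page(mapping, index);
	int err;

	if (!page)
		return ERR_PTR(-ENOENT); /* simplified: real callers allocate and insert a page */

	if (PageUptodate(page))
		return page;             /* fast path: contents already valid */

	lock_page(page);
	if (PageUptodate(page)) {
		unlock_page(page);       /* another reader brought it uptodate while we waited */
		return page;
	}

	/* ->readpage() is called with the page locked and unlocks it on completion */
	err = mapping->a_ops->readpage(NULL, page);
	if (err) {
		put_page(page);
		return ERR_PTR(err);
	}

	wait_on_page_locked(page);       /* wait for the read I/O to finish */
	if (!PageUptodate(page)) {       /* I/O completed but failed */
		put_page(page);
		return ERR_PTR(-EIO);
	}
	return page;
}
```

On success the caller owns a reference and must put_page() the result, mirroring the contract of do_read_cache_page; the lock/recheck sequence is what makes the fast unlocked PageUptodate() test safe against a read that is still in flight.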