Caller name | Description |
--- | --- |
do_exit | |
do_mount_root | |
page_cache_free_page | |
bio_release_pages | |
tomoyo_dump_page | tomoyo_dump_page - Dump a page to buffer. @bprm: Pointer to "struct linux_binprm". @pos: Location to dump. @dump: Pointer to "struct tomoyo_page_dump". Returns true on success, false otherwise. |
vfs_dedupe_get_page | Read a page's worth of file data into the page cache. |
hib_end_io | |
get_futex_key | get_futex_key() - Get parameters which are the keys for a futex. @uaddr: virtual address of the futex. @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED. @key: address where result is stored. |
stack_map_get_build_id | Parse build ID of ELF file mapped to vma |
perf_virt_to_phys | |
__replace_page | __replace_page - replace page in vma by new page |
__update_ref_ctr | |
uprobe_write_opcode | NOTE: Expect the breakpoint instruction to be the smallest size instruction for the architecture. |
__copy_insn | |
uprobe_clear_state | uprobe_clear_state - Free the area allocated for slots. |
is_trap_at_addr | |
mount_block_root | |
replace_page_cache_page | replace_page_cache_page - replace a pagecache page with a new one. @old: page to be replaced. @new: page to replace with. @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one. |
__add_to_page_cache_locked | |
wait_on_page_bit_common | |
find_get_entry | find_get_entry - find and get a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. |
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased refcount. |
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search. @offset: the page index. @fgp_flags: PCG flags. @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset (a find-or-create usage sketch follows the table). |
find_get_entries | find_get_entries - gang pagecache lookup. @mapping: The address_space to search. @start: The starting page cache index. @nr_entries: The maximum number of entries. @entries: Where the resulting entries are placed. @indices: The cache indices corresponding to the entries in @entries. |
find_get_pages_range | find_get_pages_range - gang pagecache lookup. @mapping: The address_space to search. @start: The starting page index. @end: The final page index (inclusive). @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed. |
find_get_pages_contig | find_get_pages_contig - gang contiguous pagecache lookup. @mapping: The address_space to search. @index: The starting page index. @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed. find_get_pages_contig() works exactly like find_get_pages(), except that the returned pages are guaranteed to be contiguous. |
find_get_pages_range_tag | find_get_pages_range_tag - find and return pages in given range matching @tag. @mapping: the address_space to search. @index: the starting page index. @end: The final page index (inclusive). @tag: the tag index. @nr_pages: the maximum number of pages. @pages: where the resulting pages are placed (a gang-lookup sketch follows the table). |
generic_file_buffered_read | generic_file_buffered_read - generic file read routine. @iocb: the iocb to read. @iter: data destination. @written: already copied. This is a generic file read routine, and uses the mapping->a_ops->readpage() function for the actual low-level stuff. |
filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault. |
filemap_map_pages | |
wait_on_page_read | |
do_read_cache_page | |
write_one_page | write_one_page - write out a single page and wait on I/O. @page: the page to write. The page must be locked by the caller and will be unlocked upon return. |
read_cache_pages_invalidate_page | see if a page needs releasing upon read_cache_pages() failure - the caller of read_cache_pages() may have set PG_private or PG_fscache before calling, such as the NFS fs marking pages that are cached locally on disk, thus we need to give the fs a chance to clean up in the event of an error. |
read_cache_pages | read_cache_pages - populate an address space with some pages & start reads against them. @mapping: the address_space. @pages: The address of a list_head which contains the target pages. These pages have their ->index populated and are otherwise uninitialised. |
read_pages | |
put_pages_list | put_pages_list() - release a list of pages. @pages: list of pages threaded on page->lru. Release a list of pages which are strung together on page->lru. Currently used by read_cache_pages() and related error recovery code (a usage sketch follows the table). |
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate. @lstart: offset from which to truncate. @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that are within the specified offsets. |
invalidate_mapping_pages | invalidate_mapping_pages - Invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate. @start: the offset 'from' which to invalidate. @end: the offset 'to' which to invalidate (inclusive). This function only removes the unlocked pages. |
invalidate_complete_page2 | This is like invalidate_complete_page(), except it ignores the page's refcount. |
pagecache_isize_extended | pagecache_isize_extended - update pagecache after extension of i_size. @inode: inode for which i_size was extended. @from: original inode size. @to: new inode size. Handle extension of inode size either caused by extending truncate or by write starting after current i_size. |
putback_lru_page | putback_lru_page - put previously isolated page onto appropriate LRU list. @page: page to be put back to appropriate lru list. Add previously isolated @page to appropriate LRU list. Page may still be unevictable for other reasons. |
follow_page_pte | |
follow_pmd_mask | |
check_and_migrate_cma_pages | |
__gup_longterm_locked | __gup_longterm_locked() is a wrapper for __get_user_pages_locked which allows us to process the FOLL_LONGTERM flag. |
free_page_series | free a contiguous series of pages |
zap_pte_range | |
wp_page_copy | Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High level logic flow: - Allocate a page, copy the content of the old page to the new one. |
wp_page_shared | |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been done by the caller. |
do_swap_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases as does filemap_fault(). |
do_anonymous_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked. |
__do_fault | The mmap_sem must have been held on entry, and may have been released depending on flags and vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_retry(). |
do_read_fault | |
do_cow_fault | |
do_shared_fault | |
do_numa_page | |
__access_remote_vm | Access another process' address space as given in mm. If non-NULL, use the given task for page fault accounting. |
mincore_page | Later we can get more picky about what "in core" means precisely. For now, simply check to see if the page is in the page cache, and is up to date; i.e. that no page-in operation would be required. |
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split into two main phases. |
munlock_vma_pages_range | munlock_vma_pages_range() - munlock all pages in the vma range. @vma - vma containing range to be munlock()ed. @start - start address in @vma of the range. @end - end of range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED. |
try_to_unmap_one | @arg: enum ttu_flags will be passed to this argument |
process_vm_rw_single_vec | process_vm_rw_single_vec - read/write pages from task specified. @addr: start memory address of target process. @len: size of area to copy to/from. @iter: where to copy to/from locally. @process_pages: struct pages area that can store at least nr_pages_to_copy struct page pointers. |
madvise_cold_or_pageout_pte_range | |
madvise_free_pte_range | |
madvise_inject_error | Error injection support for memory error handling. |
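Several rows above describe page-cache lookup helpers such as pagecache_get_page(). As a minimal sketch of its documented find-or-create contract (the `example_get_locked_page()` wrapper and its error handling are assumptions, not code from any caller listed here):

```c
#include <linux/pagemap.h>
#include <linux/err.h>

/*
 * Hypothetical wrapper: look up the page cache slot at @index and,
 * with FGP_LOCK | FGP_CREAT, create and lock a page if none is cached.
 */
static struct page *example_get_locked_page(struct address_space *mapping,
					    pgoff_t index)
{
	struct page *page;

	page = pagecache_get_page(mapping, index, FGP_LOCK | FGP_CREAT,
				  mapping_gfp_mask(mapping));
	if (!page)
		return ERR_PTR(-ENOMEM);

	/* The caller must unlock_page() and put_page() when done. */
	return page;
}
```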
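The gang-lookup entries (find_get_pages_range, find_get_pages_range_tag, and friends) all follow the same batching pattern. A hedged sketch of how writeback-style code might walk the dirty pages in a range with find_get_pages_range_tag(); the per-page body is a placeholder and `example_walk_dirty_pages()` is not from the table's source:

```c
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>

/*
 * Batch up references to dirty pages in [start, end], process them,
 * and drop each reference the lookup took.
 */
static void example_walk_dirty_pages(struct address_space *mapping,
				     pgoff_t start, pgoff_t end)
{
	struct page *pages[PAGEVEC_SIZE];
	pgoff_t index = start;
	unsigned int i, nr;

	while ((nr = find_get_pages_range_tag(mapping, &index, end,
					      PAGECACHE_TAG_DIRTY,
					      PAGEVEC_SIZE, pages))) {
		for (i = 0; i < nr; i++) {
			/* ... write back or otherwise handle pages[i] ... */
			put_page(pages[i]);	/* drop the lookup's reference */
		}
		cond_resched();
	}
}
```

The lookup advances @index past the last page returned, so the loop terminates once the range is exhausted.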
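Finally, a sketch of the put_pages_list() contract quoted above: pages threaded on page->lru are each released and the list is left empty. The allocation loop exists only to build an example list and is an assumption, not usage taken from read_cache_pages():

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/list.h>

static void example_put_pages_list(void)
{
	LIST_HEAD(pages);
	int i;

	/* Build a small list of freshly allocated pages on page->lru. */
	for (i = 0; i < 4; i++) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			break;
		list_add(&page->lru, &pages);
	}

	put_pages_list(&pages);	/* drops one reference per page */
}
```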