Caller | Description |
unaccount_page_cache_page | |
page_cache_free_page | |
replace_page_cache_page | replace_page_cache_page - replace a pagecache page with a new one. @old: page to be replaced; @new: page to replace with; @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one. |
__add_to_page_cache_locked | |
__put_compound_page | |
page_mapped | Return true if this page is mapped into pagetables. For a compound page, it returns true if any subpage of the compound page is mapped. |
__page_mapcount | Slow path of page_mapcount() for compound pages |
new_non_cma_page | |
check_and_migrate_cma_pages | |
page_vma_mapped_walk | page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at @pvmw->address. @pvmw: pointer to struct page_vma_mapped_walk; page, vma, address and flags must be set, while pmd, pte and ptl must be NULL. Returns true if the page is mapped in the vma. |
page_remove_file_rmap | |
page_remove_anon_compound_rmap | |
try_to_unmap_one | @arg: enum ttu_flags will be passed to this argument |
has_unmovable_pages | This function checks whether a pageblock includes unmovable pages or not. If @count is not zero, it is okay for the pageblock to include up to @count unmovable pages. A PageLRU check without isolation or the lru_lock could race, so the result is best-effort. |
page_trans_huge_map_swapcount | |
page_huge_active | Test whether the hugepage is "active/in-use" (i.e. linked to hstate->hugepage_activelist). This function can be called for tail pages, but never returns true for them. |
PageHugeTemporary | Internal hugetlb-specific page flag. Do not use outside of the hugetlb code. |
__basepage_index | |
dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. Return values include -EBUSY, meaning it failed to dissolve the free hugepage or the hugepage is in use. |
alloc_new_node_page | page allocation callback for NUMA node migration |
new_page | Allocate a new page for page migration based on vma policy |
putback_movable_pages | Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function should be used whenever the isolated pageset has been built from lru, balloon, or hugetlbfs pages. |
remove_migration_pte | Restore a potential migration pte to a working pte entry |
copy_huge_page | |
migrate_page_copy | |
migrate_pages | migrate_pages - migrate the pages specified in a list to the free pages supplied as the target for the page migration. @from: the list of pages to be migrated; @get_new_page: the function used to allocate the free pages to be used. |
add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU, and puts it on the given pagelist. |
total_mapcount | |
page_trans_huge_mapcount | This calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount(), which isn't fully accurate). |
hugetlb_cgroup_migrate | hugetlb_lock will make sure a parallel cgroup rmdir won't happen* when we migrate hugepages |
shake_page | When an unknown page type is encountered, drain as many buffers as possible in the hope of turning the page into an LRU or free page, which we can handle. |
me_huge_page | Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw-page unit), so to narrow the kill region down to one page we need to break up the pmd. |
get_hwpoison_page | get_hwpoison_page() - get a refcount for memory error handling. @page: raw error page (hit by memory error). Return: 0 if it failed to grab the refcount, otherwise true (some non-zero value). |
hwpoison_user_mappings | Do all that is necessary to remove user space mappings: unmap the pages and send SIGBUS to the processes if the data was dirty. |
memory_failure | memory_failure - handle the memory failure of a page. @pfn: page number of the corrupted page; @flags: fine-tune the action taken. This function is called by the low-level machine check code of an architecture when it detects hardware memory corruption of a page. |
unpoison_memory | unpoison_memory - unpoison a previously poisoned page. @pfn: page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier. |
__get_any_page | Safely get a reference count of an arbitrary page. Returns 0 for a free page, -EIO for a zero-refcount page that is not free, and 1 for any other page type. For 1, the page is returned with an increased page count; otherwise not. |
get_any_page | |
soft_offline_in_use_page | |
hwpoison_inject | |
page_cache_delete | Lock ordering: ->i_mmap_rwsem (truncate_pagecache), then ->private_lock (__free_pte->__set_page_dirty_buffers), then ->swap_lock (exclusive_swap_page, others), then the ->i_pages lock. Also: ->i_mutex before ->i_mmap_rwsem (truncate->unmap_mapping_range), and ->mmap_sem before ->i_mmap_rwsem. |
find_subpage | |
page_hstate | |
new_page_nodemask | |