| Caller | Description |
|---|---|
| page_copy_sane |  | 
| get_futex_key | get_futex_key() - get the parameters which are the keys for a futex. @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where the result is stored. | 
| __replace_page | __replace_page - replace page in vma by new page | 
| put_and_wait_on_page_locked | put_and_wait_on_page_locked - drop a reference and wait for it to be unlocked. @page: the page to wait for. | 
| unlock_page | unlock_page - unlock a locked page. @page: the page. Unlocks the page and wakes up sleepers in ___wait_on_page_locked(). Also wakes sleepers in wait_on_page_writeback(), because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared. | 
| __lock_page | __lock_page - get a lock on the page, assuming we need to sleep to get it. @__page: the page to lock. | 
| __lock_page_killable |  | 
| pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset. | 
| filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault. | 
| set_page_dirty | Dirty a page | 
| activate_page |  | 
| mark_page_accessed | Mark a page as having seen activity: inactive,unreferenced -> inactive,referenced; inactive,referenced -> active,unreferenced; active,unreferenced -> active,referenced. When a newly allocated page is not yet visible, non-atomic ops are safe. | 
| release_pages | release_pages - batched put_page(). @pages: array of pages to release; @nr: number of pages. Decrement the reference count on all the pages in @pages. If it fell to zero, remove the page from the LRU and free it. | 
| page_rmapping | Neutral page->mapping pointer to address_space or anon_vma or other | 
| page_mapped | Return true if this page is mapped into pagetables. For a compound page, it returns true if any subpage of the compound page is mapped. | 
| page_anon_vma |  | 
| page_mapping |  | 
| __page_mapcount | Slow path of page_mapcount() for compound pages | 
| put_user_pages_dirty_lock | put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages. @pages: array of pages to be maybe marked dirty, and definitely released. | 
| check_and_migrate_cma_pages |  | 
| page_move_anon_rmap | page_move_anon_rmap - move a page to our anon_vma. @page: the page to move to our anon_vma; @vma: the vma the page belongs to. When a page belongs exclusively to one process after a COW event, that page can be moved into the anon_vma that belongs to just that process. | 
| page_add_file_rmap | page_add_file_rmap - add a pte mapping to a file page. @page: the page to add the mapping to; @compound: charge the page as compound or small page. The caller needs to hold the pte lock. | 
| page_remove_rmap | page_remove_rmap - take down a pte mapping from a page. @page: the page to remove the mapping from; @compound: uncharge the page as compound or small page. The caller needs to hold the pte lock. | 
| free_tail_pages_check |  | 
| has_unmovable_pages | Checks whether the pageblock includes unmovable pages or not. If @count is not zero, it is okay to include fewer than @count unmovable pages. A PageLRU check without isolation or lru_lock could race. | 
| madvise_inject_error | Error injection support for memory error handling. | 
| page_swapped |  | 
| page_trans_huge_map_swapcount |  | 
| reuse_swap_page | We can write to an anon page without COW if there are no other references to it. And as a side effect, free up its swap: the old content on disk will never be read, so seeking back there to write new content would be a waste of time. | 
| try_to_free_swap | If swap is getting full, or if there are no more mappings of this page, then try_to_free_swap is called to free its swap space. | 
| PageHuge | PageHuge() only returns true for hugetlbfs pages, not for normal or transparent huge pages. See the PageTransHuge() documentation for more details. | 
| __basepage_index |  | 
| dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. Return values: -EBUSY: failed to dissolve the free hugepage, or the hugepage is in use. | 
| migrate_page_add |  | 
| alloc_new_node_page | page allocation callback for NUMA node migration | 
| new_page | Allocate a new page for page migration based on vma policy | 
| cmp_and_merge_page | cmp_and_merge_page - first see if the page can be merged into the stable tree; if not, compare its checksum to the previous one and, if it's the same, see if the page can be inserted into the unstable tree, or merged with a page already there. | 
| add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist. | 
| get_deferred_split_queue |  | 
| __split_huge_page |  | 
| page_trans_huge_mapcount | Calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount(), which isn't fully accurate). | 
| split_huge_page_to_list | Splits a huge page into normal pages. @page can point to any subpage of the huge page to split; the split doesn't change the position of @page. Only the caller must hold a pin on @page, otherwise the split fails with -EBUSY. The huge page must be locked. | 
| deferred_split_huge_page |  | 
| deferred_split_scan |  | 
| mem_cgroup_try_charge | mem_cgroup_try_charge - try charging a page. @page: page to charge; @mm: mm context of the victim; @gfp_mask: reclaim mode; @memcgp: charged memcg return; @compound: charge the page as compound or small page. Try to charge @page to the memcg that @mm belongs to. | 
| add_to_kill | Schedule a process for a later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. | 
| me_huge_page | Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit), so to narrow the kill region down to one page, we need to break up the pmd. | 
| get_hwpoison_page | get_hwpoison_page() - get a refcount for memory error handling. @page: raw error page (hit by a memory error). Return: 0 if it failed to grab the refcount, otherwise true (some non-zero value). | 
| memory_failure_hugetlb |  | 
| memory_failure | memory_failure - handle the memory failure of a page. @pfn: page number of the corrupted page; @flags: fine-tune the action taken. This function is called by the low-level machine check code of an architecture when it detects hardware memory corruption of a page. | 
| unpoison_memory | unpoison_memory - unpoison a previously poisoned page. @pfn: page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier. | 
| soft_offline_huge_page |  | 
| soft_offline_in_use_page |  | 
| hwpoison_inject |  | 
| check_heap_object |  | 
| PageLocked |  | 
| __SetPageLocked |  | 
| __ClearPageLocked |  | 
| PageReferenced |  | 
| SetPageReferenced |  | 
| ClearPageReferenced |  | 
| TestClearPageReferenced |  | 
| __SetPageReferenced |  | 
| PageDirty |  | 
| SetPageDirty |  | 
| ClearPageDirty |  | 
| TestSetPageDirty |  | 
| TestClearPageDirty |  | 
| __ClearPageDirty |  | 
| PageLRU |  | 
| SetPageLRU |  | 
| ClearPageLRU |  | 
| __ClearPageLRU |  | 
| PageActive |  | 
| SetPageActive |  | 
| ClearPageActive |  | 
| __ClearPageActive |  | 
| TestClearPageActive |  | 
| PageWorkingset |  | 
| SetPageWorkingset |  | 
| ClearPageWorkingset |  | 
| TestClearPageWorkingset |  | 
| PageSlab |  | 
| __SetPageSlab |  | 
| __ClearPageSlab |  | 
| PageSlobFree |  | 
| __SetPageSlobFree |  | 
| __ClearPageSlobFree |  | 
| PageSwapBacked |  | 
| SetPageSwapBacked |  | 
| ClearPageSwapBacked |  | 
| __ClearPageSwapBacked |  | 
| __SetPageSwapBacked |  | 
| PageWriteback | Only test-and-set operations exist for PG_writeback. The unconditional operators are risky: they bypass page accounting. | 
| TestSetPageWriteback |  | 
| TestClearPageWriteback |  | 
| PageMappedToDisk |  | 
| SetPageMappedToDisk |  | 
| ClearPageMappedToDisk |  | 
| PageReclaim | PG_readahead is only used for reads; PG_reclaim is only for writes | 
| SetPageReclaim | PG_readahead is only used for reads; PG_reclaim is only for writes | 
| ClearPageReclaim | PG_readahead is only used for reads; PG_reclaim is only for writes | 
| TestClearPageReclaim |  | 
| PageUnevictable |  | 
| SetPageUnevictable |  | 
| ClearPageUnevictable |  | 
| __ClearPageUnevictable |  | 
| TestClearPageUnevictable |  | 
| PageMlocked |  | 
| SetPageMlocked |  | 
| ClearPageMlocked |  | 
| __ClearPageMlocked |  | 
| TestSetPageMlocked |  | 
| TestClearPageMlocked |  | 
| PageAnon |  | 
| PageKsm | A KSM page is one of those write-protected "shared pages" or "merged pages" which KSM maps into multiple mms, wherever identical anonymous page content is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any anon_vma. | 
| PageUptodate |  | 
| ClearPageUptodate |  | 
| PageTransCompoundMap | PageTransCompoundMap is the same as PageTransCompound, but it also guarantees the primary MMU has the entire compound page mapped through pmd_trans_huge, which in turn guarantees the secondary MMUs can also map the entire compound page. | 
| page_count |  | 
| compound_mapcount |  | 
| virt_to_head_page |  | 
| get_page |  | 
| try_get_page |  | 
| put_page |  | 
| page_to_index | Get the index of the page within the radix tree. (TODO: remove once hugetlb pages have ->index in PAGE_SIZE units.) | 
| trylock_page | Return true if the page was successfully locked | 
| wait_on_page_locked | Wait for a page to be unlocked. This must be called with the caller "holding" the page, i.e. with an increased page->count, so that the page won't go away during the wait. | 
| wait_on_page_locked_killable |  | 
| __skb_fill_page_desc | __skb_fill_page_desc - initialise a paged fragment in an skb. @skb: buffer containing the fragment to be initialised; @i: paged fragment index to initialise; @page: the page to use for this fragment; @off: the offset to the data within @page; @size: the length of the data. | 
| new_page_nodemask |  | 
| make_migration_entry |  | 
| migration_entry_to_page |  |