Caller | Description |
--- | --- |
unaccount_page_cache_page | |
lru_deactivate_file_fn | If the page cannot be invalidated, it is moved to the inactive list to speed up its reclaim. It is moved to the head of the list, rather than the tail, to give the flusher threads some time to write it out, as this is much more effective than the single-page writeout from reclaim. |
truncate_cleanup_page | If truncate cannot remove the fs-private metadata from the page, the page becomes orphaned. |
invalidate_inode_page | Safely invalidate one page from its pagecache mapping. It only drops clean, unused pages. The page must be locked. Returns 1 if the page is successfully invalidated, otherwise 0. |
invalidate_inode_pages2_range | invalidate_inode_pages2_range - remove a range of pages from an address_space. @mapping: the address_space. @start: the page offset 'from' which to invalidate. @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped into page tables are unmapped prior to invalidation. |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
__isolate_lru_page | Attempt to remove the specified page from its LRU. Only take this page if it is of the appropriate PageActive status. Pages which are being freed elsewhere are also ignored. Returns 0 on success, -ve errno on failure. |
page_get_anon_vma | Getting a lock on a stable anon_vma from a page off the LRU is tricky! Since there is no serialization whatsoever against page_remove_rmap(), the best this function can do is return a locked anon_vma that might have been relevant to this page. |
page_lock_anon_vma_read | Similar to page_get_anon_vma() except it locks the anon_vma. It's a little more complex as it tries to keep the fast path to a single atomic op -- the trylock. If we fail the trylock, we fall back to getting a reference like with page_get_anon_vma() and then block on the mutex. |
page_mkclean | |
page_not_mapped | |
free_swap_cache | If we are the only user, then try to free up the swap cache. It's OK to check for PageSwapCache without the page lock here because we are going to recheck again inside try_to_free_swap() _with_ the lock. - Marcelo |
__try_to_reclaim_swap | Returns 1 if the swap entry is freed. |
replace_page | replace_page - replace a page in a vma by a new ksm page. @vma: vma that holds the pte pointing to page. @page: the page we are replacing by kpage. @kpage: the ksm page we replace page by. @orig_pte: the original value of the pte. |
__unmap_and_move | |
unmap_and_move_huge_page | Counterpart of unmap_and_move_page() for hugepage migration |
mc_handle_present_pte | |
mem_cgroup_move_account | mem_cgroup_move_account - move account of the page. @page: the page. @compound: charge the page as compound or small page. @from: mem_cgroup which the page is moved from. @to: mem_cgroup which the page is moved to. @from != @to. |
hwpoison_user_mappings | Do all that is necessary to remove user space mappings. Unmap the pages and send SIGBUS to the processes if the data was dirty. |
unpoison_memory | unpoison_memory - Unpoison a previously poisoned page. @pfn: page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier. |
page_idle_clear_pte_refs | |
__replace_page | __replace_page - replace page in vma by new page |
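
Several of the entries above (invalidate_inode_page, shrink_page_list, __try_to_reclaim_swap) describe the same underlying rule: a page cache page is only dropped when it is clean, not under writeback, and no longer mapped into any page tables. The sketch below condenses that decision roughly along the lines of invalidate_inode_page() in mm/truncate.c; it is a simplified illustration, not the verbatim kernel code, and the function name invalidate_inode_page_sketch is hypothetical.

```c
/*
 * Condensed sketch (not verbatim kernel code) of the check pattern used by
 * invalidate_inode_page() and similar callers: only a clean, unused,
 * unmapped page cache page is a candidate for invalidation.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/page-flags.h>

static int invalidate_inode_page_sketch(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (!mapping)				/* not a page cache page */
		return 0;
	if (PageDirty(page) || PageWriteback(page))
		return 0;			/* data still has to reach disk */
	if (page_mapped(page))
		return 0;			/* still referenced by page tables */
	/*
	 * The real invalidate_inode_page() hands the page to a helper that is
	 * static to mm/truncate.c at this point; the sketch only reports that
	 * the page passed all of the checks.
	 */
	return 1;
}
```

The ordering matters: dirtiness and writeback are checked before the mapping count, so a page that still has to be written back is never considered, no matter how its page table references evolve.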