Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/page-flags.h  Create Date: 2022-07-27 06:40:02
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Function name: compound_head

Prototype: static inline struct page *compound_head(struct page *page)

Return type: struct page *

Parameters:

Type | Parameter name
struct page * | page
174  head = READ_ONCE(page->compound_head) (bit zero is set if this is a tail page)
176  if unlikely(head & 1) (a hint to the compiler that this branch is rarely taken), return (struct page *)(head - 1)
178  return page
Callers

Name - Description
page_copy_sane
get_futex_key() - Get parameters which are the keys for a futex. @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where result is stored
__replace_page - replace page in vma by new page
put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked. @page: the page to wait for
unlock_page - unlock a locked page. @page: the page. Unlocks the page and wakes up sleepers in ___wait_on_page_locked(). Also wakes sleepers in wait_on_page_writeback() because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared.
__lock_page - get a lock on the page, assuming we need to sleep to get it. @__page: the page to lock
__lock_page_killable
pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault
set_page_dirty - dirty a page
activate_page
mark_page_accessed - Mark a page as having seen activity: inactive,unreferenced -> inactive,referenced; inactive,referenced -> active,unreferenced; active,unreferenced -> active,referenced. When a newly allocated page is not yet visible, it is safe for non-atomic ops
release_pages - batched put_page(). @pages: array of pages to release; @nr: number of pages. Decrement the reference count on all the pages in @pages. If it fell to zero, remove the page from the LRU and free it.
page_rmapping - neutral page->mapping pointer to address_space or anon_vma or other
page_mapped - return true if this page is mapped into pagetables. For a compound page it returns true if any subpage of the compound page is mapped.
page_anon_vma
page_mapping
__page_mapcount - slow path of page_mapcount() for compound pages
put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages. @pages: array of pages to be maybe marked dirty, and definitely released
check_and_migrate_cma_pages
page_move_anon_rmap - move a page to our anon_vma. @page: the page to move to our anon_vma; @vma: the vma the page belongs to. When a page belongs exclusively to one process after a COW event, that page can be moved into the anon_vma that belongs to just that process
page_add_file_rmap - add pte mapping to a file page. @page: the page to add the mapping to; @compound: charge the page as compound or small page. The caller needs to hold the pte lock.
page_remove_rmap - take down pte mapping from a page. @page: page to remove mapping from; @compound: uncharge the page as compound or small page. The caller needs to hold the pte lock.
free_tail_pages_check
has_unmovable_pages - checks whether the pageblock includes unmovable pages or not. If @count is not zero, it is okay to include fewer than @count unmovable pages. A PageLRU check without isolation or lru_lock could race
madvise_inject_error - error injection support for memory error handling
page_swapped
page_trans_huge_map_swapcount
reuse_swap_page - we can write to an anon page without COW if there are no other references to it. And as a side effect, free up its swap: because the old content on disk will never be read, and seeking back there to write new content
try_to_free_swap - if swap is getting full, or if there are no more mappings of this page, then try_to_free_swap is called to free its swap space.
PageHuge - PageHuge() only returns true for hugetlbfs pages, but not for normal or transparent huge pages. See the PageTransHuge() documentation for more details.
__basepage_index
dissolve_free_huge_page - dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. Return values: -EBUSY: failed to dissolve free hugepages or the hugepage is in use
migrate_page_add
alloc_new_node_page - page allocation callback for NUMA node migration
new_page - allocate a new page for page migration based on vma policy
cmp_and_merge_page - first see if the page can be merged into the stable tree; if not, compare the checksum to the previous one and if it's the same, see if the page can be inserted into the unstable tree, or merged with a page already there
add_page_for_migration - resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist
get_deferred_split_queue
__split_huge_page
page_trans_huge_mapcount - calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount(), which isn't fully accurate)
split_huge_page_to_list - splits a huge page into normal pages. @page can point to any subpage of the huge page to split; the split doesn't change the position of @page. The caller must hold a pin on @page, otherwise the split fails with -EBUSY. The huge page must be locked.
deferred_split_huge_page
deferred_split_scan
mem_cgroup_try_charge - try charging a page. @page: page to charge; @mm: mm context of the victim; @gfp_mask: reclaim mode; @memcgp: charged memcg return; @compound: charge the page as compound or small page. Try to charge @page to the memcg that @mm belongs to
add_to_kill - schedule a process for a later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
me_huge_page - huge pages; needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit); to narrow the kill region down to one page, we need to break up the pmd.
get_hwpoison_page() - Get refcount for memory error handling. @page: raw error page (hit by memory error). Return: 0 if it failed to grab the refcount, otherwise true (some non-zero value)
memory_failure_hugetlb
memory_failure - handle memory failure of a page. @pfn: page number of the corrupted page; @flags: fine tune action taken. This function is called by the low level machine check code of an architecture when it detects hardware memory corruption of a page
unpoison_memory - unpoison a previously poisoned page. @pfn: page number of the to-be-unpoisoned page. Software-unpoison a page that has been poisoned by memory_failure() earlier
soft_offline_huge_page
soft_offline_in_use_page
hwpoison_inject
check_heap_object
PageLocked
__SetPageLocked
__ClearPageLocked
PageReferenced
SetPageReferenced
ClearPageReferenced
TestClearPageReferenced
__SetPageReferenced
PageDirty
SetPageDirty
ClearPageDirty
TestSetPageDirty
TestClearPageDirty
__ClearPageDirty
PageLRU
SetPageLRU
ClearPageLRU
__ClearPageLRU
PageActive
SetPageActive
ClearPageActive
__ClearPageActive
TestClearPageActive
PageWorkingset
SetPageWorkingset
ClearPageWorkingset
TestClearPageWorkingset
PageSlab
__SetPageSlab
__ClearPageSlab
PageSlobFree
__SetPageSlobFree
__ClearPageSlobFree
PageSwapBacked
SetPageSwapBacked
ClearPageSwapBacked
__ClearPageSwapBacked
__SetPageSwapBacked
PageWriteback - only test-and-set exists for PG_writeback. The unconditional operators are risky: they bypass page accounting.
TestSetPageWriteback
TestClearPageWriteback
PageMappedToDisk
SetPageMappedToDisk
ClearPageMappedToDisk
PageReclaim - PG_readahead is only used for reads; PG_reclaim is only for writes
SetPageReclaim - PG_readahead is only used for reads; PG_reclaim is only for writes
ClearPageReclaim - PG_readahead is only used for reads; PG_reclaim is only for writes
TestClearPageReclaim
PageUnevictable
SetPageUnevictable
ClearPageUnevictable
__ClearPageUnevictable
TestClearPageUnevictable
PageMlocked
SetPageMlocked
ClearPageMlocked
__ClearPageMlocked
TestSetPageMlocked
TestClearPageMlocked
PageAnon
PageKsm - a KSM page is one of those write-protected "shared pages" or "merged pages" which KSM maps into multiple mms, wherever identical anonymous page content is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any anon_vma
PageUptodate
ClearPageUptodate
PageTransCompoundMap - the same as PageTransCompound, but it also guarantees the primary MMU has the entire compound page mapped through pmd_trans_huge, which in turn guarantees the secondary MMUs can also map the entire compound page
page_count
compound_mapcount
virt_to_head_page
get_page
try_get_page
put_page
page_to_index - get the index of the page within the radix-tree (TODO: remove once hugetlb pages have ->index in PAGE_SIZE)
trylock_page - return true if the page was successfully locked
wait_on_page_locked - wait for a page to be unlocked. This must be called with the caller "holding" the page, i.e. with an increased page->count so that the page won't go away during the wait
wait_on_page_locked_killable
__skb_fill_page_desc - initialise a paged fragment in an skb. @skb: buffer containing fragment to be initialised; @i: paged fragment index to initialise; @page: the page to use for this fragment; @off: the offset to the data within @page; @size: the length of
new_page_nodemask
make_migration_entry
migration_entry_to_page