Function report
Source Code: include/linux/mm.h
Create Date: 2022-07-28 05:43:31
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: page_zone
Proto: static inline struct zone *page_zone(const struct page *page)
Type: struct zone *
Parameter:
Type | Name
---|---
const struct page * | page
Line | Description
---|---
1229 | Return node_zones[page_zonenum(page)]
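The return line above is the whole body of the function. Below is a minimal sketch of that lookup, assuming the usual include/linux/mm.h definition; page_zone_sketch is a renamed illustration, not the upstream symbol. The NUMA node id and the zone index are decoded from page->flags and used to index the node's node_zones[] array.

```c
#include <linux/mm.h>      /* page_to_nid(), page_zonenum() */
#include <linux/mmzone.h>  /* NODE_DATA(), struct zone */

/* Sketch of the lookup summarized in the line-1229 entry above. */
static inline struct zone *page_zone_sketch(const struct page *page)
{
	int nid = page_to_nid(page);             /* node the page belongs to */
	enum zone_type idx = page_zonenum(page); /* zone index within that node */

	return &NODE_DATA(nid)->node_zones[idx];
}
```

On common configurations both helpers only decode bit fields of page->flags, so the lookup touches no memory beyond the struct page and the per-node pglist_data.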
Name | Description
---|---
account_kernel_stack | |
saveable_page | saveable_page - Check if the given page is saveable |
lru_cache_add_active_or_unevictable | Place @page (the page to be added to the LRU) on the active or unevictable LRU list, depending on its evictability; @vma is the vma in which the page is mapped, used for determining reclaimability
pagetypeinfo_showblockcount_print | |
clear_page_mlock | LRU accounting for clear_page_mlock() |
mlock_vma_page | Mark page as mlocked if not already. If the page is on the LRU, isolate it and put it back so that it moves to the unevictable list.
munlock_vma_page | Munlock a vma page; @page is the page to be unlocked, either a normal page or a THP page head. Returns the size of the page as a page mask (0 for a normal page, HPAGE_PMD_NR - 1 for a THP head page)
__munlock_pagevec_fill | Fill up a pagevec for __munlock_pagevec using a pte walk. The function expects that the struct page corresponding to the @start address is a non-THP page already pinned and in @pvec, and that it belongs to @zone
munlock_vma_pages_range | Munlock all pages in the vma range; @vma is the vma containing the range to be munlock()ed, @start is the start address of the range in @vma, @end is the end of the range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED.
get_pageblock_bitmap | Return a pointer to the bitmap storing bits affecting a block of pages |
pfn_to_bitidx | |
set_pfnblock_flags_mask | Set the requested group of flags for a pageblock_nr_pages block of pages; @page is the page within the block of interest, @flags are the flags to set, @pfn is the target page frame number, @end_bitidx is the last bit of interest, @mask is the mask to apply
__free_pages_ok | |
__free_pages_core | |
__pageblock_pfn_to_page | Check that the whole (or a subset of) a pageblock given by the interval [start_pfn, end_pfn) is valid and within the same zone, before scanning it with the migration or free compaction scanner
move_freepages | Move the free pages in a range to the free lists of the requested type. Note that start_page and end_page are not aligned on a pageblock boundary. If alignment is required, use move_freepages_block()
free_unref_page_commit | |
__isolate_free_page | |
adjust_managed_page_count | |
is_free_buddy_page | |
set_hwpoison_free_buddy_page | Set the PG_hwpoison flag if a given page is confirmed to be a free page. This test is performed under the zone lock to prevent a race against page allocation (a usage sketch follows this table).
alloc_page_interleave | Allocate a page in interleaved policy. Own path because it needs to do special accounting.
migrate_page_move_mapping | Replace the page in the mapping. The number of remaining references must be: 1 for anonymous pages without a mapping, 2 for pages with a mapping, 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
shake_page | When an unknown page type is encountered, drain as many buffers as possible in the hope of turning the page into an LRU or free page, which we can handle.
pagetypeinfo_showmixedcount_print | |
init_pages_in_zone | |
set_migratetype_isolate | |
unset_migratetype_isolate | |
test_pages_isolated | Caller should ensure that requested range is in a single zone |
cma_activate_area | |
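Several of the callers listed above (for example is_free_buddy_page and set_hwpoison_free_buddy_page) share a pattern: resolve the page to its zone with page_zone() and then operate under that zone's lock. The sketch below only illustrates this pattern; zone_lock_example() is an invented name and its body is a placeholder, not code taken from any caller in the table.

```c
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

/* Invented illustration: map a page to its zone, then inspect zone state
 * under zone->lock, as the buddy-page checks above do. */
static bool zone_lock_example(struct page *page)
{
	struct zone *zone = page_zone(page);
	unsigned long flags;
	bool has_managed;

	spin_lock_irqsave(&zone->lock, flags);
	/* Real callers walk the zone's free lists or pageblocks here. */
	has_managed = zone_managed_pages(zone) > 0;
	spin_unlock_irqrestore(&zone->lock, flags);

	return has_managed;
}
```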