Function Logic Report |
Source Code: include/linux/mm.h |
Create Date: 2022-07-27 06:44:41 |
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick |
Function Name: get_page
Prototype: static inline void get_page(struct page *page)
Return Type: void
Parameters:
Type | Name |
---|---|
struct page * | page |
1003 | page = compound_head(page) |
1009 | page_ref_inc(page) |
Name | Description |
---|---|
copy_page_to_iter_pipe | |
__pipe_get_pages | |
iov_iter_get_pages | |
iov_iter_get_pages_alloc | |
relay_buf_fault | fault() vm_op implementation for relay file mapping. |
perf_mmap_fault | |
__replace_page | __replace_page - replace page in vma by new page |
replace_page_cache_page | replace_page_cache_page - replace a pagecache page with a new one. @old: page to be replaced; @new: page to replace with; @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one |
__add_to_page_cache_locked | |
write_one_page | write_one_page - write out a single page and wait on I/O. @page: the page to write. The page must be locked by the caller and will be unlocked upon return |
get_kernel_pages | get_kernel_pages() - pin kernel pages in memory. @kiov: an array of struct kvec structures; @nr_segs: number of segments to pin; @write: pinning for read/write, currently ignored; @pages: array that receives pointers to the pages pinned. |
rotate_reclaimable_page | Writeback is about to end against a page which has been marked for immediate reclaim. If it still appears to be reclaimable, move it to the tail of the inactive list. |
__lru_cache_add | |
deactivate_page | deactivate_page - deactivate a page. @page: page to deactivate. deactivate_page() moves @page to the inactive list if @page was on the active list and was not an unevictable page. This is done to accelerate the reclaim of @page. |
mark_page_lazyfree | mark_page_lazyfree - make an anon page lazyfree. @page: page to deactivate. mark_page_lazyfree() moves @page to the inactive file list. This is done to accelerate the reclaim of @page. |
lru_add_page_tail | Used by __split_huge_page_refcount() |
invalidate_mapping_pages | invalidate_mapping_pages - invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate; @start: the offset 'from' which to invalidate; @end: the offset 'to' which to invalidate (inclusive). This function only |
isolate_lru_page | isolate_lru_page - tries to isolate a page from its LRU list. @page: page to isolate from its LRU list. Isolates a @page from an LRU list, clears PageLRU and adjusts the vmstat statistic corresponding to whatever LRU list the page was on. |
follow_page_pte | |
copy_one_pte | Copy one vm_area from one task to the other. Assumes the page tables already present in the new task to be cleared in the whole range covered by this vma. |
insert_page | This is the old fallback for page remapping. For historical reasons, it only allows reserved pages. Only old drivers should use this, and they needed to mark their pages reserved for the old functions anyway. |
wp_page_shared | |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been |
numa_migrate_prep | |
__munlock_isolate_lru_page | Isolate a page from LRU with optional get_page() pin. Assumes lru_lock already held and page already pinned. |
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split into two main phases |
__munlock_pagevec_fill | Fill up pagevec for __munlock_pagevec using pte walk. The function expects that the struct page corresponding to @start address is a non-THP page already pinned and in the @pvec, and that it belongs to @zone |
special_mapping_fault | |
madvise_cold_or_pageout_pte_range | |
madvise_free_pte_range | |
unuse_pte | No need to decide whether this PTE shares the swap entry with others, just let do_wp_page work it out if a write is requested later - to force COW, vm_page_prot omits write permission from any private vma. |
copy_hugetlb_page_range | |
hugetlb_cow | hugetlb_cow() should be called with page lock of the original hugepage held. Called with hugetlb_instantiation_mutex held and pte_page locked so we cannot race with other handlers or page migration. |
hugetlb_fault | |
follow_hugetlb_page | |
follow_huge_pmd | |
replace_page | replace_page - replace page in vma by new ksm page. @vma: vma that holds the pte pointing to page; @page: the page we are replacing by kpage; @kpage: the ksm page we replace page by; @orig_pte: the original value of the pte |
stable_tree_search | stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now |
remove_migration_pte | Restore a potential migration pte to a working pte entry |
migrate_huge_page_move_mapping | The expected number of remaining references is the same as that of migrate_page_move_mapping(). |
__buffer_migrate_page | |
follow_devmap_pmd | |
copy_huge_pmd | |
do_huge_pmd_wp_page | |
follow_trans_huge_pmd | |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
madvise_free_huge_pmd | Return true if we do MADV_FREE successfully on entire pmd page. Otherwise, return false. |
get_mctgt_type_thp | We don't consider PMD mapped swapping or file mapped pages because THP does not support them for now. Caller should make sure that pmd_trans_huge(pmd) is true. |
z3fold_page_migrate | |
sel_mmap_policy_fault | |
dio_refill_pages | Go grab and pin some userspace pages. Typically we'll get 64 at a time. |
dio_bio_add_page | Attempt to put the current chunk of 'cur_page' into the current BIO. If that was successful then update final_block_in_bio and take a ref against the just-added page. Return zero on success. Non-zero means the caller needs to start a new BIO. |
submit_page_section | An autonomous function to put a chunk of a page under deferred IO. The caller doesn't actually know (or care) whether this piece of page is in a BIO, or is under IO or whatever. We just take care of all possible situations here |
aio_migratepage | |
iomap_page_create | |
iomap_migrate_page | |
iomap_dio_zero | |
__skb_frag_ref | Take an additional reference on a paged fragment |
attach_page_buffers | Inline definitions |
sk_msg_page_add |