Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/pagemap.h    Create Date: 2022-07-27 06:45:58
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: lock_page

Description: lock_page may only be called if we have the page's inode pinned.

Function prototype: static inline void lock_page(struct page *page)

Return type: void

Parameters:

Type            Parameter Name
struct page *   page
478  might_sleep()
479  If trylock_page() fails, call __lock_page(). trylock_page() returns true if the page was successfully locked; __lock_page() gets a lock on the page, assuming we need to sleep to get it (@__page: the page to lock).
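
Putting the two annotated lines together, the body of lock_page() in this kernel version reads as follows (a minimal reconstruction from the annotations above):

static inline void lock_page(struct page *page)
{
	might_sleep();			/* acquiring the lock may block, so flag it */
	if (!trylock_page(page))	/* fast path: try to take PG_locked without sleeping */
		__lock_page(page);	/* slow path: sleep until the page lock is acquired */
}

In other words, lock_page() is a thin wrapper: the common uncontended case is handled inline by trylock_page(), and only contended callers pay the cost of the out-of-line, sleeping __lock_page().
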
Callers

Name - Description
get_futex_key - Get parameters which are the keys for a futex. @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where result is stored.
__replace_page - replace page in vma by new page
find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search; @offset: the page cache index. Looks up the page cache slot at @mapping & @offset; if there is a page cache page, it is returned locked and with an increased refcount.
pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
filemap_page_mkwrite
do_read_cache_page
write_cache_pages - walk the list of dirty pages of the given address space and write all of them
set_page_dirty_lock - set_page_dirty() is racy if the caller has no reference against page->mapping->host and the page is unlocked: another CPU could truncate the page off the mapping and then free the mapping. Usually, the page _is_ locked, or the caller is a user-space process holding a reference on the inode.
truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate; @lstart: offset from which to truncate; @lend: offset to which to truncate (inclusive). Truncates the page cache, removing the pages that fall within that range.
invalidate_inode_pages2_range - remove range of pages from an address_space. @mapping: the address_space; @start: the page offset 'from' which to invalidate; @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped into pagetables are unmapped prior to invalidation.
handle_write_error - We detected a synchronous write error writing a page out, probably -ENOSPC. We need to propagate that into the address_space for a subsequent fsync(), msync() or close(). The tricky part is that after writepage we cannot touch the mapping: nothing prevents it from being freed up.
follow_page_pte
follow_pmd_mask
do_page_mkwrite - Notify the address space that the page is about to become writable so that it can prohibit this or wait for the page to get into an appropriate state. We do this without the lock held, so that it can sleep if it needs to.
wp_page_copy - Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High-level logic flow: allocate a page, copy the content of the old page to the new one.
wp_page_shared
do_wp_page - This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been done by the caller.
__do_fault - The mmap_sem must have been held on entry, and may have been released depending on flags and the vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_or_retry().
__munlock_pagevec - Munlock a batch of pages from the same zone. The work is split into two main phases.
munlock_vma_pages_range - munlock all pages in the vma range. @vma: vma containing range to be munlock()ed; @start: start address in @vma of the range; @end: end of range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED.
madvise_cold_or_pageout_pte_range
unuse_pte_range
try_to_unuse - If the boolean frontswap is true, only unuse pages_to_unuse pages; pages_to_unuse == 0 means all pages; ignored if frontswap is false.
hugetlb_no_page
get_ksm_page - checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In that case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped, it removes the stale node from the stable tree and returns NULL.
try_to_merge_one_page - take two pages and merge them into one. @vma: the vma that holds the pte pointing to page; @page: the PageAnon page that we want to replace with kpage; @kpage: the PageKsm page that we want to map instead of page.
cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous, and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and both transferred to the stable tree.
putback_movable_pages - Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon or hugetlbfs pages.
writeout - Writeback a page to clean the dirty state
__unmap_and_move
unmap_and_move - Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage.
unmap_and_move_huge_page - Counterpart of unmap_and_move_page() for hugepage migration
migrate_pages - migrate the pages specified in a list to the free pages supplied as the target for the page migration. @from: the list of pages to be migrated; @get_new_page: the function used to allocate free pages to be used as the target of the page migration.
do_huge_pmd_wp_page
me_huge_page - Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit); to narrow down the kill region to one page, we need to break up the pmd.
memory_failure_hugetlb
memory_failure - Handle memory failure of a page. @pfn: Page Number of the corrupted page; @flags: fine tune action taken. This function is called by the low-level machine check code of an architecture when it detects hardware memory corruption of a page.
unpoison_memory - Unpoison a previously poisoned page. @pfn: Page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier.
soft_offline_huge_page
__soft_offline_page
soft_offline_in_use_page
free_z3fold_page - Resets the struct page fields and frees the page
z3fold_alloc - allocates a region of a given size. @pool: z3fold pool from which to allocate; @size: size in bytes of the desired allocation; @gfp: gfp flags used if the pool needs to grow; @handle: handle of the new allocation. This function will attempt to find a free region in the pool large enough to satisfy the allocation request.
vfs_lock_two_pages - Lock two pages, ensuring that we lock in offset order if the pages are from the same file.
generic_pipe_buf_steal - attempt to take ownership of a &pipe_buffer. @pipe: the pipe that the buffer belongs to; @buf: the buffer to attempt to steal. Description: this function attempts to steal the &struct page attached to @buf.
page_cache_pipe_buf_steal - Attempt to steal a page from a pipe buffer. This should perhaps go into a vm helper function; it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse this page for another destination.
page_cache_pipe_buf_confirm - Check whether the contents of buf are OK to access. Since the content is a page cache page, IO may be in flight.
clean_bdev_aliases - clean a range of buffers in block device. @bdev: Block device to clean buffers in; @block: Start of a range of blocks to clean; @len: Number of blocks to clean. We are taking a range of blocks for data, and we don't want writeback of any buffer-cache aliases in that range.
block_page_mkwrite - Not allowed to change the file size, as it gets called from a page fault handler when a page is first dirtied.
nobh_truncate_page
iomap_page_mkwrite
page_seek_hole_data - Seek for SEEK_DATA / SEEK_HOLE within @page, starting at @lastoff. Returns true if found, and updates @lastoff to the offset in the file.
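
For reference, callers such as those above follow a common pattern: pin the page (so its mapping cannot be freed), lock it, recheck its state, then unlock and unpin. A minimal sketch of that pattern using the standard page cache API (the helper do_something_locked() is hypothetical):

/* Illustrative pattern only, not code from any caller listed above. */
struct page *page = find_get_page(mapping, index);	/* pins the page */
if (page) {
	lock_page(page);			/* may sleep */
	if (page->mapping == mapping)		/* recheck: page may have been truncated */
		do_something_locked(page);	/* hypothetical work done under PG_locked */
	unlock_page(page);
	put_page(page);				/* drop the pinned reference */
}

The "inode pinned" precondition in the description above is what makes this safe: holding a reference across lock_page() guarantees the page cannot be freed while the caller sleeps waiting for the lock.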