Function report

Source Code: include/linux/pagemap.h
Create Date: 2022-07-28 05:45:06
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: lock_page
Description: lock_page may only be called if we have the page's inode pinned.
Proto: static inline void lock_page(struct page *page)
Return Type: void
Parameter:

Type | Name |
---|---|
struct page * | page |
478 | might_sleep() |
479 | If trylock_page() does not return true (it returns true if the page was successfully locked), call __lock_page(), which gets a lock on the page, assuming we need to sleep to get it (@__page: the page to lock). |
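Put together, report lines 478-479 correspond to the following function body. This is a reconstruction from the two lines above, matching the upstream definition in include/linux/pagemap.h for this kernel era; the comments are added here for clarity:

```c
/*
 * lock_page may only be called if we have the page's inode pinned.
 */
static inline void lock_page(struct page *page)
{
	might_sleep();                /* acquiring the lock can block, so warn if called in atomic context */
	if (!trylock_page(page))      /* fast path: trylock_page() returns true if the lock was taken */
		__lock_page(page);    /* slow path: sleep until the page lock is acquired */
}
```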
Caller Name | Description |
---|---|
get_futex_key | get_futex_key() - Get parameters which are the keys for a futex. @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where result is stored
__replace_page | __replace_page - replace page in vma by new page |
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search; @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased refcount.
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
filemap_page_mkwrite | |
do_read_cache_page | |
write_cache_pages | write_cache_pages - walk the list of dirty pages of the given address space and write all of them |
set_page_dirty_lock | set_page_dirty() is racy if the caller has no reference against page->mapping->host, and if the page is unlocked. This is because another CPU could truncate the page off the mapping and then free the mapping. Usually, the page _is_ locked, or the caller is a user-space process which holds a reference on the inode by having an open file.
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate; @lstart: offset from which to truncate; @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that are outside the specified range.
invalidate_inode_pages2_range | invalidate_inode_pages2_range - remove range of pages from an address_space. @mapping: the address_space; @start: the page offset 'from' which to invalidate; @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped into pagetables are unmapped prior to invalidation.
handle_write_error | We detected a synchronous write error writing a page out. Probably -ENOSPC. We need to propagate that into the address_space for a subsequent fsync(), msync() or close(). The tricky part is that after writepage we cannot touch the mapping: nothing prevents it from being freed up. But we have a ref on the page and once that page is locked, the mapping is pinned.
follow_page_pte | |
follow_pmd_mask | |
do_page_mkwrite | Notify the address space that the page is about to become writable so that it can prohibit this or wait for the page to get into an appropriate state. We do this without the lock held, so that it can sleep if it needs to.
wp_page_copy | Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High level logic flow: allocate a page, copy the content of the old page to the new one.
wp_page_shared | |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been done by the caller.
__do_fault | The mmap_sem must have been held on entry, and may have been released depending on flags and vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_or_retry().
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split into two main phases.
munlock_vma_pages_range | munlock_vma_pages_range() - munlock all pages in the vma range. @vma: vma containing range to be munlock()ed; @start: start address in @vma of the range; @end: end of range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED.
madvise_cold_or_pageout_pte_range | |
unuse_pte_range | |
try_to_unuse | If the boolean frontswap is true, only unuse pages_to_unuse pages; pages_to_unuse==0 means all pages; ignored if frontswap is false.
hugetlb_no_page | |
get_ksm_page | get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In which case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped, remove the stale node from the stable tree and return NULL.
try_to_merge_one_page | try_to_merge_one_page - take two pages and merge them into one. @vma: the vma that holds the pte pointing to page; @page: the PageAnon page that we want to replace with kpage; @kpage: the PageKsm page that we want to map instead of page.
cmp_and_merge_page | cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and both transferred to the stable tree.
putback_movable_pages | Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon, hugetlbfs page.
writeout | Writeback a page to clean the dirty state |
__unmap_and_move | |
unmap_and_move | Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage.
unmap_and_move_huge_page | Counterpart of unmap_and_move_page() for hugepage migration |
migrate_pages | migrate_pages - migrate the pages specified in a list, to the free pages supplied as the target for the page migration. @from: the list of pages to be migrated; @get_new_page: the function used to allocate free pages to be used as the target of the page migration.
do_huge_pmd_wp_page | |
me_huge_page | Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit); to narrow down the kill region to one page, we need to break up the pmd.
memory_failure_hugetlb | |
memory_failure | memory_failure - Handle memory failure of a page. @pfn: page number of the corrupted page; @flags: fine tune action taken. This function is called by the low level machine check code of an architecture when it detects hardware memory corruption of a page.
unpoison_memory | unpoison_memory - Unpoison a previously poisoned page. @pfn: page number of the to-be-unpoisoned page. Software-unpoison a page that has been poisoned by memory_failure() earlier.
soft_offline_huge_page | |
__soft_offline_page | |
soft_offline_in_use_page | |
free_z3fold_page | Resets the struct page fields and frees the page |
z3fold_alloc | z3fold_alloc() - allocates a region of a given size. @pool: z3fold pool from which to allocate; @size: size in bytes of the desired allocation; @gfp: gfp flags used if the pool needs to grow; @handle: handle of the new allocation. This function will attempt to find a free region in the pool large enough to satisfy the allocation request.
vfs_lock_two_pages | Lock two pages, ensuring that we lock in offset order if the pages are from the same file.
generic_pipe_buf_steal | generic_pipe_buf_steal - attempt to take ownership of a &pipe_buffer. @pipe: the pipe that the buffer belongs to; @buf: the buffer to attempt to steal. Description: this function attempts to steal the &struct page attached to @buf.
page_cache_pipe_buf_steal | Attempt to steal a page from a pipe buffer. This should perhaps go into a vm helper function; it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse this page for another destination.
page_cache_pipe_buf_confirm | Check whether the contents of buf is OK to access. Since the content is a page cache page, IO may be in flight.
clean_bdev_aliases | clean_bdev_aliases: clean a range of buffers in block device. @bdev: block device to clean buffers in; @block: start of a range of blocks to clean; @len: number of blocks to clean. We are taking a range of blocks for data and we don't want writeback of any buffer-cache aliases starting from return end of this function.
block_page_mkwrite | block_page_mkwrite() is not allowed to change the file size as it gets called from a page fault handler when a page is first dirtied.
nobh_truncate_page | |
iomap_page_mkwrite | |
page_seek_hole_data | Seek for SEEK_DATA / SEEK_HOLE within @page, starting at @lastoff. Returns true if found and updates @lastoff to the offset in file.
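Most of the callers above follow the same discipline: take a reference on the page, lock it before touching its state, then unlock and drop the reference. The sketch below illustrates that pattern; it is not a copy of any caller in the table, and the helper name touch_locked_page() and the find_get_page()-based lookup are illustrative assumptions:

```c
#include <linux/errno.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch only, not taken from any caller above. Assumes
 * @mapping and @index name a page cache page whose inode is pinned,
 * as the lock_page() rule at the top of this report requires.
 */
static int touch_locked_page(struct address_space *mapping, pgoff_t index)
{
	struct page *page = find_get_page(mapping, index); /* takes a page reference */

	if (!page)
		return -ENOENT;

	lock_page(page);    /* may sleep; must not be called from atomic context */
	/* ... inspect or modify page state under the page lock ... */
	unlock_page(page);  /* wakes any sleepers in __lock_page() */

	put_page(page);     /* drop the reference taken by find_get_page() */
	return 0;
}
```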