Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/pagemap.h    Create Date: 2022-07-28 05:45:06
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: linear_page_index

Proto: static inline unsigned long linear_page_index(struct vm_area_struct *vma, unsigned long address)

Type: unsigned long

Parameter:

Type                     Name
struct vm_area_struct *  vma
unsigned long            address
451  If unlikely(is_vm_hugetlb_page(vma)) — the hugetlb case is marked as the unlikely branch at compile time — then return linear_hugepage_index(vma, address).
453  pgoff = (address - vma->vm_start) >> PAGE_SHIFT — the page offset of address within the VMA, where vm_start is our start address within vm_mm and PAGE_SHIFT determines the page size.
454  pgoff += vma->vm_pgoff — add the VMA's offset (within vm_file) in PAGE_SIZE units.
455  Return pgoff.
Caller
Name — Description
print_bad_pte — This function is called to print an error when a bad pte is found. For example, we might have a PFN-mapped pte in a region that doesn't allow it. The calling function must still handle the error.
__handle_mm_fault — By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().
__mincore_unmapped_range
__page_set_anon_rmap — Set up new anonymous rmap. @page: Page or Hugepage to add to rmap. @vma: VM area to add page to. @address: User virtual address of the mapping. @exclusive: the page is exclusively owned by the current process.
__page_check_anon_rmap — Sanity check anonymous rmap addition. @page: the page to add the mapping to. @vma: the vm area in which the mapping is added. @address: the user virtual address mapped.
ksm_might_need_to_copy
reuse_ksm_page
remove_migration_pte — Restore a potential migration pte to a working pte entry.
__collapse_huge_page_swapin — Bring missing pages in from swap, to complete THP collapse. Only done if khugepaged_scan_pmd believes it is worthwhile. Called and returns without pte mapped or spinlocks held, but with mmap_sem held to protect against vma changes.
khugepaged_scan_mm_slot
mc_handle_file_pte
mcopy_atomic_pte
mfill_zeropage_pte
__mcopy_atomic_hugetlb — __mcopy_atomic processing for HUGETLB vmas. Note that this routine is called with mmap_sem held; it will release mmap_sem before returning.
dax_associate_entry — TODO: for reflink+dax we need a way to associate a single page with multiple address_space instances at different linear_page_index() offsets.