Function report

Linux Kernel

v5.5.9


Source Code: mm/hugetlb.c    Create Date: 2022-07-28 15:28:31
Last Modify: 2020-03-12 14:18:49

Name: This is called when the original mapper is failing to COW a MAP_PRIVATE mapping it owns the reserve page for. The intention is to unmap the page from other VMAs and let the children be SIGKILLed if they are faulting the same region.

Proto:static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, struct page *page, unsigned long address)

Type:void

Parameter:

Type                      Name
struct mm_struct *        mm
struct vm_area_struct *   vma
struct page *             page
unsigned long             address
3555  h = hstate_vma(vma)
3564  address = address & huge_page_mask(h)
3565  pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff (vm_pgoff is kept in PAGE_SIZE units, hence the PAGE_SHIFT-based calculation)
3567  mapping = vma->vm_file->f_mapping
3574  i_mmap_lock_write(mapping)
3577  If iter_vma == vma Then Continue (iter_vma comes from the vma_interval_tree_foreach walk over mapping->i_mmap at pgoff; never unmap the current VMA)
3585  If iter_vma->vm_flags & VM_MAYSHARE Then Continue (shared VMAs have their own reserves and do not affect MAP_PRIVATE accounting)
3595  If Not is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER) Then unmap_hugepage_range(iter_vma, address, address + huge_page_size(h), page) (unmap the page from VMAs without their own reserve; they are marked to be SIGKILLed if they fault in this range later)
3599  i_mmap_unlock_write(mapping)
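
Putting the annotated lines together, the body of unmap_ref_private() reads roughly as follows. This is a sketch reconstructed from the annotations above and the v5.5 source; the vma_interval_tree_foreach() walk over mapping->i_mmap, which the annotations only imply between lines 3574 and 3577, is filled in, and the comments summarize the intent of each step.

static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
			      struct page *page, unsigned long address)
{
	struct hstate *h = hstate_vma(vma);
	struct vm_area_struct *iter_vma;
	struct address_space *mapping;
	pgoff_t pgoff;

	/*
	 * vm_pgoff is kept in PAGE_SIZE units, so the huge-page-aligned
	 * address is converted to a PAGE_SIZE-based file offset for the
	 * interval-tree lookup.
	 */
	address = address & huge_page_mask(h);
	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
	mapping = vma->vm_file->f_mapping;

	/* Hold the mapping lock for the duration of the interval-tree walk. */
	i_mmap_lock_write(mapping);
	vma_interval_tree_foreach(iter_vma, &mapping->i_mmap, pgoff, pgoff) {
		/* Do not unmap the current VMA. */
		if (iter_vma == vma)
			continue;

		/*
		 * Shared VMAs have their own reserves and do not affect
		 * MAP_PRIVATE accounting, but they may be using the same
		 * page, so skip them.
		 */
		if (iter_vma->vm_flags & VM_MAYSHARE)
			continue;

		/*
		 * Unmap the page from VMAs without their own reserve; they
		 * are left to be SIGKILLed if they fault in this range
		 * later, rather than silently seeing a zeroed page.
		 */
		if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
			unmap_hugepage_range(iter_vma, address,
					     address + huge_page_size(h), page);
	}
	i_mmap_unlock_write(mapping);
}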
Caller
Name          Description
hugetlb_cow   Hugetlb_cow() should be called with the page lock of the original hugepage held. Called with hugetlb_instantiation_mutex held and pte_page locked so we cannot race with other handlers or page migration.
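
For context, the caller reaches this function on the COW failure path: when hugetlb_cow() owns the reserve for the faulting address (outside_reserve) but cannot allocate a replacement huge page, it unmaps the page from the other private mappings and retries. The snippet below is a simplified sketch of that path, with locking, pte rechecks and the non-reserve error handling trimmed; variable names follow the v5.5 source.

	new_page = alloc_huge_page(vma, haddr, outside_reserve);
	if (IS_ERR(new_page) && outside_reserve) {
		/*
		 * The copy failed because child mappings still hold
		 * references to the page and the huge page pool is
		 * exhausted.  Drop the extra reference, unmap the page
		 * from the children (they may be SIGKILLed if they fault
		 * it again), and retry the COW for the reserve owner.
		 */
		put_page(old_page);
		unmap_ref_private(mm, vma, old_page, haddr);
		goto retry_avoidcopy;
	}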