Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c    Create Date: 2022-07-28 16:01:02
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:do_huge_pmd_anonymous_page

Proto:vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)

Type:vm_fault_t

Parameter:

Type                 Parameter Name
struct vm_fault *    vmf
709  vma = vmf->vma (the target VMA)
712  haddr = vmf->address & HPAGE_PMD_MASK (faulting address rounded down to a PMD-sized boundary)
714  If Not transhuge_vma_suitable(vma, haddr) Then Return VM_FAULT_FALLBACK (the VMA cannot hold an aligned huge page at this address)
716  If unlikely(anon_vma_prepare(vma)) Then Return VM_FAULT_OOM
718  If unlikely(khugepaged_enter(vma, vma->vm_flags)) Then Return VM_FAULT_OOM
720  If Not (vmf->flags & FAULT_FLAG_WRITE) (the fault was a read) && Not mm_forbids_zeropage(vma->vm_mm) (the architecture permits zero-page mappings; s390 may forbid them to prevent multiplexing of hardware bits) && transparent_hugepage_use_zero_page() Then
727  pgtable = pte_alloc_one(vma->vm_mm) (preallocate a PTE page table to deposit later)
728  If unlikely(!pgtable) Then Return VM_FAULT_OOM
730  zero_page = mm_get_huge_zero_page(vma->vm_mm)
734  If !zero_page Then free the pgtable, count a THP_FAULT_FALLBACK event and Return VM_FAULT_FALLBACK
736  vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd) (take the page table lock protecting the PMD entry)
737  ret = 0
738  set = false
741  If pmd_none(*vmf->pmd) Then ret = check_stable_address_space(vma->vm_mm); If ret is nonzero Then just spin_unlock(vmf->ptl)
743  Else if userfaultfd_missing(vma) Then spin_unlock(vmf->ptl) and ret = handle_userfault(vmf, VM_UFFD_MISSING)
747  Else set_huge_zero_page(pgtable, vma->vm_mm, vma, haddr, vmf->pmd, zero_page), spin_unlock(vmf->ptl), set = true
753  Else (the PMD was already populated by someone else) spin_unlock(vmf->ptl)
755  If Not set Then pte_free(vma->vm_mm, pgtable) (the preallocated page table was never deposited)
757  Return ret
759  gfp = alloc_hugepage_direct_gfpmask(vma) (GFP mask chosen per the THP defrag setting: always = directly stall for all THP allocations; defer = wake kswapd and fail if not immediately available; defer+madvise = wake kswapd, but directly stall for MADV_HUGEPAGE regions; madvise = directly stall only for MADV_HUGEPAGE regions, otherwise fail if not immediately available)
760  page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER)
761  If unlikely(!page) Then
762  count_vm_event(THP_FAULT_FALLBACK) (account the failed THP fault)
763  Return VM_FAULT_FALLBACK
765  prep_transhuge_page(page) (initialize the compound page for THP use)
766  Return __do_huge_pmd_anonymous_page(vmf, page, gfp)