Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c  Create Date: 2022-07-28 16:00:52
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: __do_huge_pmd_anonymous_page

Proto: static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page, gfp_t gfp)

Type: vm_fault_t

Parameter:

Type               Name
struct vm_fault *  vmf
struct page *      page
gfp_t              gfp
578  vma = vmf->vma (the target VMA)
581  haddr = vmf->address & HPAGE_PMD_MASK (the faulting address rounded down to a huge-page boundary)
582  ret = 0
584  VM_BUG_ON_PAGE(!PageCompound(page), page)
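The alignment at line 581 is plain mask arithmetic: clearing the low HPAGE_PMD_SHIFT bits rounds the faulting address down to the start of its huge-page-sized region. A minimal user-space sketch of that computation, assuming the x86-64 default of 2 MiB PMD-sized huge pages (the HPAGE_PMD_* names below are stand-ins for the kernel's constants):

    #include <stdio.h>

    /* Stand-ins for the kernel's HPAGE_PMD_* constants, assuming
     * 2 MiB PMD-sized huge pages (the x86-64 default). */
    #define HPAGE_PMD_SHIFT 21
    #define HPAGE_PMD_SIZE  (1UL << HPAGE_PMD_SHIFT)
    #define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))

    int main(void)
    {
        unsigned long address = 0x7f3a12345678UL;       /* arbitrary faulting address */
        unsigned long haddr = address & HPAGE_PMD_MASK; /* round down to 2 MiB boundary */

        printf("address = %#lx\n", address);
        printf("haddr   = %#lx\n", haddr);
        return 0;
    }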
586  If mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true) fails (returns nonzero) Then
587  put_page(page): drop the reference, freeing the page (and any swap cache attached to it) if this was the last user
588  count_vm_event(THP_FAULT_FALLBACK)
589  Return VM_FAULT_FALLBACK
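Lines 586 to 589 implement a charge-or-fall-back pattern: if the memory cgroup refuses to account the huge page, the page is released and the fault is retried with ordinary 4 KiB pages. A sketch of that shape, following the v5.5 source:

    /* Sketch of the charge step, per the v5.5 source. */
    if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
        put_page(page);                     /* drop our reference to the huge page */
        count_vm_event(THP_FAULT_FALLBACK); /* shows up as thp_fault_fallback in vmstat */
        return VM_FAULT_FALLBACK;           /* caller retries with small pages */
    }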
592  pgtable = pte_alloc_one(vma->vm_mm)
593  If unlikely(!pgtable) Then
594  ret = VM_FAULT_OOM
595  Go to release
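A PTE-level page table is allocated up front even though the mapping will be PMD-sized: it is deposited at line 634 so that a later split of the huge page cannot fail for lack of memory. A sketch of the allocation step, per v5.5:

    pgtable = pte_alloc_one(vma->vm_mm); /* reserve a PTE page for a possible future split */
    if (unlikely(!pgtable)) {
        ret = VM_FAULT_OOM;
        goto release;                    /* undo the memcg charge and drop the page */
    }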
598  clear_huge_page(page, vmf->address, HPAGE_PMD_NR)
604  __SetPageUptodate(page): its internal memory barrier makes the clear_huge_page() writes visible before the later set_pmd_at() write
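Line 604 is not mere bookkeeping: __SetPageUptodate() contains a write barrier, so the zero-fill done by clear_huge_page() becomes visible to other CPUs before the PMD itself is published by set_pmd_at(). The v5.5 source carries this comment explicitly:

    clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
    /*
     * The memory barrier inside __SetPageUptodate makes sure that
     * clear_huge_page writes become visible before the set_pmd_at()
     * write.
     */
    __SetPageUptodate(page);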
606  vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd): take the page-table lock protecting the PMD
607  If unlikely(!pmd_none(*vmf->pmd)) Then
608  Go to unlock_release
609  Else
612  ret = check_stable_address_space(vma->vm_mm): check whether a page fault on this mm is still reliable
613  If ret Then Go to unlock_release
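Lines 606 to 613 take the PMD-level lock and re-check the slot: another thread may have populated the PMD while this fault was being prepared, and check_stable_address_space() refuses to install new pages into an mm that is being torn down by the OOM reaper. A condensed sketch of this guard sequence, per v5.5:

    vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); /* lock the PMD we are about to fill */
    if (unlikely(!pmd_none(*vmf->pmd)))
        goto unlock_release;                   /* raced: another thread populated it */

    ret = check_stable_address_space(vma->vm_mm);
    if (ret)                                   /* mm unstable (OOM reaper): bail out */
        goto unlock_release;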
617  If userfaultfd_missing(vma) Then
626  Return ret2, the result of delivering the fault to user space via handle_userfault(vmf, VM_UFFD_MISSING) after unwinding the work done so far
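For a VMA registered with userfaultfd in missing mode, the fault must be handed to user space rather than filled by the kernel, and everything prepared so far is unwound first. A sketch of the lines the report elides (618 to 625), following the v5.5 source:

    if (userfaultfd_missing(vma)) {
        vm_fault_t ret2;

        spin_unlock(vmf->ptl);                       /* drop the PMD lock first */
        mem_cgroup_cancel_charge(page, memcg, true); /* undo the memcg charge */
        put_page(page);                              /* drop the huge page */
        pte_free(vma->vm_mm, pgtable);               /* free the deposited-to-be table */
        ret2 = handle_userfault(vmf, VM_UFFD_MISSING);
        VM_BUG_ON(ret2 & VM_FAULT_FALLBACK);
        return ret2;
    }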
629  entry = mk_huge_pmd(page, vma->vm_page_prot)
630  entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma)
631  page_add_new_anon_rmap(page, vma, haddr, true): add the reverse mapping for the new anonymous page, charged as a compound page
632  mem_cgroup_commit_charge(page, memcg, false, true)
633  lru_cache_add_active_or_unevictable(page, vma): place the page on the active or unevictable LRU list, depending on its evictability
634  pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable)
635  set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry)
636  add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR)
637  mm_inc_nr_ptes(vma->vm_mm)
638  spin_unlock(vmf->ptl)
639  count_vm_event(THP_FAULT_ALLOC)
640  count_memcg_events(memcg, THP_FAULT_ALLOC, 1)
643  Return 0
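Lines 629 to 640 form the commit phase: build the PMD entry, register reverse-mapping, memcg, and LRU state, deposit the preallocated PTE table, and only then publish the entry, all while still holding the PMD lock. A condensed sketch, per v5.5:

    pmd_t entry;

    entry = mk_huge_pmd(page, vma->vm_page_prot);       /* build the huge-PMD entry */
    entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); /* writable if the VMA allows it */
    page_add_new_anon_rmap(page, vma, haddr, true);     /* compound rmap */
    mem_cgroup_commit_charge(page, memcg, false, true); /* finalize the charge */
    lru_cache_add_active_or_unevictable(page, vma);
    pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
    set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);     /* publish the mapping */
    add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
    mm_inc_nr_ptes(vma->vm_mm);
    spin_unlock(vmf->ptl);
    count_vm_event(THP_FAULT_ALLOC);
    count_memcg_events(memcg, THP_FAULT_ALLOC, 1);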
644  unlock_release :
645  spin_unlock(vmf->ptl)
646  release :
647  If pgtable Then pte_free(vma->vm_mm, pgtable): free the preallocated PTE-level page table page
649  mem_cgroup_cancel_charge(page, memcg, true)
650  put_page(page)
651  Return ret
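The two labels form a staged unwind, a common kernel C idiom: unlock_release additionally drops the PMD lock and falls through into release, which frees the preallocated page table (if any), cancels the memcg charge, and drops the page reference. Sketch, per v5.5:

    unlock_release:
        spin_unlock(vmf->ptl);
    release:
        if (pgtable)                       /* NULL if pte_alloc_one() failed */
            pte_free(vma->vm_mm, pgtable);
        mem_cgroup_cancel_charge(page, memcg, true);
        put_page(page);
        return ret;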
Caller
Name                        Describe
do_huge_pmd_anonymous_page
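For context, the sole caller (in the same file) allocates and prepares the compound page before calling in here. A condensed sketch of that call site, following the v5.5 source; the fallback on allocation failure mirrors the charge-failure path above:

    gfp = alloc_hugepage_direct_gfpmask(vma);             /* GFP flags per THP policy */
    page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
    if (unlikely(!page)) {
        count_vm_event(THP_FAULT_FALLBACK);
        return VM_FAULT_FALLBACK;                         /* retry with small pages */
    }
    prep_transhuge_page(page);                            /* mark as transparent huge page */
    return __do_huge_pmd_anonymous_page(vmf, page, gfp);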