Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c Create Date: 2022-07-28 16:02:01
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: do_huge_pmd_numa_page - NUMA hinting page fault entry point for trans huge pmds

Proto:vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)

Type:vm_fault_t

Parameter:

Type               Name
struct vm_fault *  vmf
pmd_t              pmd
1537  vma = vmf->vma (the target VMA)
1538  struct anon_vma *anon_vma = NULL
1540  haddr = vmf->address & HPAGE_PMD_MASK (huge-page-aligned faulting address)
1541  page_nid = NUMA_NO_NODE, this_nid = numa_node_id() (number of the current node)
1542  last_cpupid = -1
1544  bool migrated = false
1546  flags = 0
1548  vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd) (take the page table lock that protects the pmd)
1549  If unlikely(!pmd_same(pmd, *vmf->pmd)) Then Go to out_unlock
1557  If unlikely(pmd_trans_migrating(*vmf->pmd)) Then (a THP migration is already in flight; wait for it to complete and retry)
1558  page = pmd_page(*vmf->pmd)
1559  If Not get_page_unless_zero(page) Then Go to out_unlock
1561  spin_unlock(vmf->ptl)
1562  put_and_wait_on_page_locked(page) (drop the reference and wait for the page to be unlocked)
1563  Go to out
1566  page = pmd_page(pmd)
1567  BUG_ON(is_huge_zero_page(page))
1568  page_nid = page_to_nid(page)
1569  last_cpupid = page_cpupid_last(page)
1570  count_vm_numa_event(NUMA_HINT_FAULTS)
1571  If page_nid == this_nid Then
1572  count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL)
1573  flags |= TNF_FAULT_LOCAL
1577  If Not pmd_savedwrite(pmd) Then flags |= TNF_NO_GROUP
1584  page_locked = trylock_page(page) (lock the page to serialise THP migrations while avoiding dropping the page table lock if at all possible)
1585  target_nid = mpol_misplaced(page, vma, haddr) (check whether the page's current node is valid under the NUMA policy)
1586  If target_nid == NUMA_NO_NODE Then
1588  If page_locked Then Go to clear_pmdnuma (the page was locked, so there are no parallel migrations)
1593  If Not page_locked Then (a migration could have started since the pmd_trans_migrating check)
1594  page_nid = NUMA_NO_NODE
1595  If Not get_page_unless_zero(page) Then Go to out_unlock
1597  spin_unlock(vmf->ptl)
1598  put_and_wait_on_page_locked(page) (drop the reference and wait for the page to be unlocked)
1599  Go to out
1606  get_page(page) (the page is misplaced; the page lock serialises migrations)
1607  spin_unlock(vmf->ptl)
1608  anon_vma = page_lock_anon_vma_read(page) (take the anon_vma read lock to serialise against THP splits)
1611  spin_lock(vmf->ptl) (confirm the pmd did not change while the page table lock was released)
1612  If unlikely(!pmd_same(pmd, *vmf->pmd)) Then
1613  unlock_page(page)
1614  put_page(page)
1615  page_nid = NUMA_NO_NODE
1616  Go to out_unlock
1620  If unlikely(!anon_vma) Then (we failed to protect against THP splits; bail)
1621  put_page(page)
1622  page_nid = NUMA_NO_NODE
1623  Go to clear_pmdnuma
1637  If mm_tlb_flush_pending(vma->vm_mm) Then (make sure all other CPUs observe the cleared access bit before migrating)
1638  flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE)
1648  mmu_notifier_invalidate_range(vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE) (notify secondary MMUs sharing the pagetables)
1656  spin_unlock(vmf->ptl)
1658  migrated = migrate_misplaced_transhuge_page(vma->vm_mm, vma, vmf->pmd, pmd, vmf->address, page, target_nid) (migrate the THP to the target node; returns with the page unlocked)
1660  If migrated Then
1661  flags |= TNF_MIGRATED
1662  page_nid = target_nid
1663  Else flags |= TNF_MIGRATE_FAIL
1666  Go to out
1667  clear_pmdnuma :
1668  BUG_ON(!PageLocked(page))
1669  was_writable = pmd_savedwrite(pmd)
1670  pmd = pmd_modify(pmd, vma->vm_page_prot) (restore the VMA's access permissions)
1671  pmd = pmd_mkyoung(pmd)
1672  If was_writable Then pmd = pmd_mkwrite(pmd)
1674  set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd)
1675  update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1676  unlock_page(page)
1677  out_unlock :
1678  spin_unlock(vmf->ptl)
1680  out :
1681  If anon_vma Then page_unlock_anon_vma_read(anon_vma)
1684  If page_nid != NUMA_NO_NODE Then task_numa_fault(last_cpupid, page_nid, HPAGE_PMD_NR, flags)
1688  Return 0
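Pieced together from the annotations above, the overall control flow can be summarised as follows (a pseudocode sketch, not the verbatim source):

```
take page table lock; if the pmd changed                -> out_unlock
if a THP migration is in flight: wait for it            -> out
account the hinting fault (local or remote)
trylock the page; ask mpol_misplaced() for a target node
  no target and page locked: just clear the NUMA bits   -> clear_pmdnuma
  page not locked: a migration raced us; wait for it    -> out
take the anon_vma lock (serialises splits); recheck pmd
flush TLBs if a flush is pending, then migrate the THP;
  set TNF_MIGRATED or TNF_MIGRATE_FAIL                  -> out
clear_pmdnuma: restore permissions, young and write bits
out: report the fault via task_numa_fault()
```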
Caller
Name               Description
__handle_mm_fault  By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().