Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c    Create Date: 2022-07-28 16:01:46
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: do_huge_pmd_wp_page

Proto: vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)

Type: vm_fault_t

Parameter:

Type                Parameter Name
struct vm_fault *   vmf
pmd_t               orig_pmd
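
do_huge_pmd_wp_page reads only a handful of struct vm_fault members. The following is a condensed view of those members, paraphrased from include/linux/mm.h as of v5.5 (all other fields omitted):

    /* Condensed: only the vm_fault fields this handler uses (v5.5). */
    struct vm_fault {
        struct vm_area_struct *vma;   /* target VMA */
        unsigned long address;        /* faulting virtual address */
        pmd_t *pmd;                   /* pointer to the pmd entry matching 'address' */
        spinlock_t *ptl;              /* page-table lock: protects the pte page table
                                       * if 'pte' is not NULL, otherwise the pmd */
        /* ... */
    };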
1317  vma = vmf->vma, the target VMA.
1318  page = NULL
1320  haddr = vmf->address & HPAGE_PMD_MASK: the faulting address rounded down to a huge-page boundary.
1323  ret = 0
1325  vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd): record which page-table lock protects this pmd.
1326  VM_BUG_ON_VMA(!vma->anon_vma, vma): the VMA must already have an anon_vma (serialized by page_table_lock).
1327  If is_huge_zero_pmd(orig_pmd), go to alloc: a write to the huge zero page always needs a freshly allocated page.
1329  spin_lock(vmf->ptl)
1330  If unlikely(!pmd_same(*vmf->pmd, orig_pmd)), go to out_unlock: the pmd changed under us and the fault is stale.
1333  page = pmd_page(orig_pmd)
1334  VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page)
1339  If !trylock_page(page): someone else holds the page lock, so take it the slow way.
1340  get_page(page)
1341  spin_unlock(vmf->ptl)
1342  lock_page(page): sleep until the page lock can be acquired (requires a reference on the page).
1343  spin_lock(vmf->ptl)
1347  If the pmd changed while the locks were dropped, unlock and release the page, then go to out_unlock.
1349  put_page(page): drop the extra reference taken before sleeping.
1351  If reuse_swap_page(page, NULL): nobody else maps or references the page, so it can be written in place without copying (any stale swap copy is freed as a side effect).
1353  entry = pmd_mkyoung(orig_pmd)
1354  entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma)
1355  If pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1), then update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1357  ret |= VM_FAULT_WRITE
1358  unlock_page(page)
1359  Go to out_unlock
1361  unlock_page(page)
1362  get_page(page): keep the old page alive across the allocation.
1363  spin_unlock(vmf->ptl)
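
Source lines 1339-1363 form the in-place reuse fast path: lock the huge page, and if nobody else references it, simply make the existing mapping writable. A condensed sketch of that path, paraphrased from the v5.5 source (the pmd_same re-check inside the trylock branch is elided):

    /* Only reuse the huge page if nobody else maps it or any part of it. */
    if (!trylock_page(page)) {
        get_page(page);
        spin_unlock(vmf->ptl);
        lock_page(page);                /* may sleep */
        spin_lock(vmf->ptl);
        /* ... bail out to out_unlock if the pmd changed meanwhile ... */
        put_page(page);
    }
    if (reuse_swap_page(page, NULL)) {
        pmd_t entry;

        /* Sole user: mark the existing huge mapping young, dirty and writable. */
        entry = pmd_mkyoung(orig_pmd);
        entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
        if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
            update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
        ret |= VM_FAULT_WRITE;
        unlock_page(page);
        goto out_unlock;
    }
    unlock_page(page);
    get_page(page);                     /* keep the old page alive for the copy */
    spin_unlock(vmf->ptl);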
1364  alloc:
1365  If __transparent_hugepage_enabled(vma) && !transparent_hugepage_debug_cow():
1367  huge_gfp = alloc_hugepage_direct_gfpmask(vma): choose the GFP mask according to the THP defrag policy (always / defer / defer+madvise / madvise / never).
1368  new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER)
1369  Else new_page = NULL
1372  If likely(new_page):
1373  prep_transhuge_page(new_page)
1374  Else
1375  If !page (the original mapping was the huge zero page): split the pmd and set VM_FAULT_FALLBACK.
1378  Else: fall back to do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page).
1380  If ret & VM_FAULT_OOM: split the pmd and set VM_FAULT_FALLBACK; the old page is then released.
1386  count_vm_event(THP_FAULT_FALLBACK)
1387  Go to out
1390  If unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm, huge_gfp, &memcg, true)): the memcg charge for the new page failed.
1392  put_page(new_page)
1393  split_huge_pmd(vma, vmf->pmd, vmf->address)
1394  If page, put_page(page)
1396  ret |= VM_FAULT_FALLBACK
1397  count_vm_event(THP_FAULT_FALLBACK)
1398  Go to out
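
Source lines 1372-1398 are the fallback paths: if a fresh huge page cannot be allocated, or cannot be charged to the memory cgroup, the pmd is split so the write fault can be retried and handled at normal-page granularity. Condensed from the v5.5 source:

    if (likely(new_page)) {
        prep_transhuge_page(new_page);
    } else {
        if (!page) {
            /* Write to the huge zero page and no memory: just split. */
            split_huge_pmd(vma, vmf->pmd, vmf->address);
            ret |= VM_FAULT_FALLBACK;
        } else {
            /* Copy the old THP into an array of small pages instead. */
            ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page);
            if (ret & VM_FAULT_OOM) {
                split_huge_pmd(vma, vmf->pmd, vmf->address);
                ret |= VM_FAULT_FALLBACK;
            }
            put_page(page);
        }
        count_vm_event(THP_FAULT_FALLBACK);
        goto out;
    }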
1401  count_vm_event(THP_FAULT_ALLOC)
1402  count_memcg_events(memcg, THP_FAULT_ALLOC, 1)
1404  If !page, clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR): the old mapping was the zero page, so zero-fill the new one.
1406  Else copy_user_huge_page(new_page, page, vmf->address, vma, HPAGE_PMD_NR): copy the old THP's contents.
1409  __SetPageUptodate(new_page)
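
The new page is fully populated (zero-filled or copied from the old THP) before __SetPageUptodate() marks it up to date. The ordering matters: __SetPageUptodate() issues a write barrier so the page contents become visible before the PG_uptodate flag that advertises them. Its definition, paraphrased from include/linux/page-flags.h as of v5.5:

    static __always_inline void __SetPageUptodate(struct page *page)
    {
        VM_BUG_ON_PAGE(PageTail(page), page);
        smp_wmb();                            /* order contents before the flag */
        __set_bit(PG_uptodate, &page->flags);
    }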
1411  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE)
1413  mmu_notifier_invalidate_range_start(&range)
1415  spin_lock(vmf->ptl)
1416  If page, put_page(page): drop the reference taken at line 1362.
1418  If unlikely(!pmd_same(*vmf->pmd, orig_pmd)): the pmd changed while the lock was dropped.
1419  spin_unlock(vmf->ptl)
1420  mem_cgroup_cancel_charge(new_page, memcg, true)
1421  put_page(new_page)
1422  Go to out_mn
1423  Else
1425  entry = mk_huge_pmd(new_page, vma->vm_page_prot)
1426  entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma)
1427  pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd)
1428  page_add_new_anon_rmap(new_page, vma, haddr, true): add the anon rmap for the new compound page.
1429  mem_cgroup_commit_charge(new_page, memcg, false, true)
1430  lru_cache_add_active_or_unevictable(new_page, vma): place the new page on the active or unevictable LRU list, depending on the VMA.
1431  set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry)
1432  update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1433  If !page: account HPAGE_PMD_NR new anonymous pages with add_mm_counter().
1435  Else: page_remove_rmap() the old page and put_page() it.
1440  ret |= VM_FAULT_WRITE
1442  spin_unlock(vmf->ptl)
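
Source lines 1415-1442 install the new huge page under the page-table lock, bracketed by the MMU-notifier range set up at lines 1411-1413, and re-validate the pmd one last time before committing. A condensed sketch, paraphrased from the v5.5 source:

    spin_lock(vmf->ptl);
    if (page)
        put_page(page);             /* reference taken at line 1362 */
    if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
        /* Lost the race: undo the charge and free the new page. */
        spin_unlock(vmf->ptl);
        mem_cgroup_cancel_charge(new_page, memcg, true);
        put_page(new_page);
        goto out_mn;
    } else {
        pmd_t entry;

        entry = mk_huge_pmd(new_page, vma->vm_page_prot);
        entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
        pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
        page_add_new_anon_rmap(new_page, vma, haddr, true);
        mem_cgroup_commit_charge(new_page, memcg, false, true);
        lru_cache_add_active_or_unevictable(new_page, vma);
        set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
        update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
        if (!page) {
            add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
        } else {
            VM_BUG_ON_PAGE(!PageHead(page), page);
            page_remove_rmap(page, true);
            put_page(page);
        }
        ret |= VM_FAULT_WRITE;
    }
    spin_unlock(vmf->ptl);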
1443  out_mn:
1448  mmu_notifier_invalidate_range_only_end(&range): no need to call ->invalidate_range() again, since pmdp_huge_clear_flush_notify() above already did.
1449  out:
1450  Return ret
1451  out_unlock:
1452  spin_unlock(vmf->ptl)
1453  Return ret
Caller
Name           Description
wp_huge_pmd    `inline' is required to avoid gcc 4.1.2 build error
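
For reference, the sole caller lives in mm/memory.c: wp_huge_pmd() routes a write fault on a present huge pmd to do_huge_pmd_wp_page() for anonymous VMAs, and otherwise to the VMA's huge_fault handler or a pmd split. Roughly, as of v5.5 (paraphrased; details may differ slightly from the exact release):

    /* `inline' is required to avoid gcc 4.1.2 build error */
    static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
    {
        if (vma_is_anonymous(vmf->vma))
            return do_huge_pmd_wp_page(vmf, orig_pmd);
        if (vmf->vma->vm_ops->huge_fault)
            return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);

        /* COW handled on pte level: split the pmd. */
        VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
        __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);

        return VM_FAULT_FALLBACK;
    }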