Function report

Linux Kernel

v5.5.9


Source Code: mm/huge_memory.c    Create Date: 2022-07-28 16:01:33
Last Modify: 2020-03-12 14:18:49

Name:do_huge_pmd_wp_page_fallback

Proto:static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd, struct page *page)

Type:vm_fault_t

Parameter:

Type                 Parameter Name
struct vm_fault *    vmf
pmd_t                orig_pmd
struct page *        page
1201  vma = vmf->vma, the target VMA
1202  haddr = vmf->address & HPAGE_PMD_MASK, the faulting address rounded down to the huge-page boundary
1207  ret = 0
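Lines 1201-1207 correspond to the function's local setup. A rough reconstruction of those declarations in v5.5 (a sketch based on the walkthrough above, not a verbatim copy of the tree):

    struct vm_area_struct *vma = vmf->vma;
    unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
    struct mem_cgroup *memcg;
    pgtable_t pgtable;
    pmd_t _pmd;
    int i;
    vm_fault_t ret = 0;
    struct page **pages;
    struct mmu_notifier_range range;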
1211  pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *), GFP_KERNEL) - allocate an array holding one page pointer per subpage of the huge page
1213  If unlikely(!pages) Then
1214  ret |= VM_FAULT_OOM
1215  Go to out
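In source form, the allocation of the pointer array and its OOM fallback look approximately like this (reconstruction):

    pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *), GFP_KERNEL);
    if (unlikely(!pages)) {
        ret |= VM_FAULT_OOM;
        goto out;
    }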
1218  Loop for each i from 0 to HPAGE_PMD_NR - 1:
1219  pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma, vmf->address, page_to_nid(page))
1226  On allocation or charge failure, loop while --i >= 0 to unwind the pages already set up
1233  kfree(pages)
1234  ret |= VM_FAULT_OOM
1235  Go to out
1237  set_page_private(pages[i], (unsigned long)memcg)
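The allocation loop, including the memcg charge and the unwind path that lines 1226-1235 describe, reads roughly as follows; the mem_cgroup_try_charge_delay() call is the failure check the report skips between lines 1219 and 1226 (a reconstruction assuming the v5.5 memcg charge API):

    for (i = 0; i < HPAGE_PMD_NR; i++) {
        /* Allocate one small page per subpage, on the same node as the THP. */
        pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma,
                                       vmf->address, page_to_nid(page));
        if (unlikely(!pages[i] ||
                     mem_cgroup_try_charge_delay(pages[i], vma->vm_mm,
                                                 GFP_KERNEL, &memcg, false))) {
            if (pages[i])
                put_page(pages[i]);
            /* Unwind everything allocated and charged so far. */
            while (--i >= 0) {
                memcg = (void *)page_private(pages[i]);
                set_page_private(pages[i], 0);
                mem_cgroup_cancel_charge(pages[i], memcg, false);
                put_page(pages[i]);
            }
            kfree(pages);
            ret |= VM_FAULT_OOM;
            goto out;
        }
        /* Stash the charged memcg in page_private() until commit time. */
        set_page_private(pages[i], (unsigned long)memcg);
    }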
1240  Loop for each i from 0 to HPAGE_PMD_NR - 1:
1241  copy_user_highpage(pages[i], page + i, haddr + PAGE_SIZE * i, vma)
1243  __SetPageUptodate(pages[i])
1244  cond_resched()
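The copy loop at lines 1240-1244, approximately (reconstruction):

    for (i = 0; i < HPAGE_PMD_NR; i++) {
        /* Copy each small subpage of the huge page into its replacement. */
        copy_user_highpage(pages[i], page + i, haddr + PAGE_SIZE * i, vma);
        __SetPageUptodate(pages[i]);
        cond_resched();
    }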
1247  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE)
1249  mmu_notifier_invalidate_range_start(&range)
1251  vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd) - take the page table lock protecting the pmd
1252  If unlikely(!pmd_same(*vmf->pmd, orig_pmd)) Then Go to out_free_pages
1254  VM_BUG_ON_PAGE(!PageHead(page), page)
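Lines 1247-1254 set up the MMU notifier range, take the pmd lock, and recheck that the pmd has not changed underneath us; roughly (reconstruction):

    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
                            haddr, haddr + HPAGE_PMD_SIZE);
    mmu_notifier_invalidate_range_start(&range);

    vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
    if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
        goto out_free_pages;    /* someone else changed the pmd meanwhile */
    VM_BUG_ON_PAGE(!PageHead(page), page);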
1264  pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd)
1266  pgtable = pgtable_trans_huge_withdraw(vma->vm_mm, vmf->pmd) - withdraw the page table deposited when the huge page was mapped
1267  pmd_populate(vma->vm_mm, &_pmd, pgtable)
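The huge pmd is cleared (with notification) and the deposited page table is withdrawn and attached to a temporary pmd value; in source form, approximately (reconstruction):

    /* Leave the pmd empty until the ptes are filled; notify here so a
     * device cannot see writes to the new pages reordered against the
     * teardown of the old mapping (see Documentation/vm/mmu_notifier.rst). */
    pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);

    pgtable = pgtable_trans_huge_withdraw(vma->vm_mm, vmf->pmd);
    pmd_populate(vma->vm_mm, &_pmd, pgtable);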
1269  Loop for each i from 0 to HPAGE_PMD_NR - 1, advancing haddr by PAGE_SIZE each iteration:
1271  entry = mk_pte(pages[i], vma->vm_page_prot)
1272  entry = maybe_mkwrite(pte_mkdirty(entry), vma) - mark the pte dirty and, if the VMA allows writes, writable
1273  memcg = page_private(pages[i])
1274  set_page_private(pages[i], 0)
1275  page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false) - add a pte-level rmap for the new anonymous page
1276  mem_cgroup_commit_charge(pages[i], memcg, false, false)
1277  lru_cache_add_active_or_unevictable(pages[i], vma) - place the page on the active or unevictable LRU list, depending on its evictability
1278  vmf->pte = pte_offset_map(&_pmd, haddr)
1279  VM_BUG_ON(!pte_none(*vmf->pte))
1280  set_pte_at(vma->vm_mm, haddr, vmf->pte, entry)
1281  pte_unmap(vmf->pte)
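The pte installation loop at lines 1269-1281 then maps each replacement page with a regular pte; a reconstruction:

    for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
        pte_t entry;
        entry = mk_pte(pages[i], vma->vm_page_prot);
        entry = maybe_mkwrite(pte_mkdirty(entry), vma);
        /* Retrieve the memcg stashed at allocation time and commit the charge. */
        memcg = (void *)page_private(pages[i]);
        set_page_private(pages[i], 0);
        page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
        mem_cgroup_commit_charge(pages[i], memcg, false, false);
        lru_cache_add_active_or_unevictable(pages[i], vma);
        vmf->pte = pte_offset_map(&_pmd, haddr);
        VM_BUG_ON(!pte_none(*vmf->pte));
        set_pte_at(vma->vm_mm, haddr, vmf->pte, entry);
        pte_unmap(vmf->pte);
    }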
1283  kfree(pages)
1285  smp_wmb()
1286  pmd_populate(vma->vm_mm, vmf->pmd, pgtable)
1287  page_remove_rmap(page, true) - take down the pmd-level rmap of the original compound page (caller holds the page table lock)
1288  spin_unlock(vmf->ptl)
1294  mmu_notifier_invalidate_range_only_end( & range)
1296  ret |= VM_FAULT_WRITE
1297  put_page(page) - drop the reference to the original huge page, freeing it (and any associated swap cache) if this was the last user
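The success path finishes by publishing the new page table and dropping the old mappings, roughly (reconstruction):

    smp_wmb();      /* make the ptes visible before the pmd */
    pmd_populate(vma->vm_mm, vmf->pmd, pgtable);
    page_remove_rmap(page, true);
    spin_unlock(vmf->ptl);

    /* pmdp_huge_clear_flush_notify() already called ->invalidate_range(),
     * so only the range-end notification is needed here. */
    mmu_notifier_invalidate_range_only_end(&range);

    ret |= VM_FAULT_WRITE;
    put_page(page);     /* drop the original huge page */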
1299  out:
1300  Return ret
1302  out_free_pages:
1303  spin_unlock(vmf->ptl)
1304  mmu_notifier_invalidate_range_end(&range)
1305  Loop for each i from 0 to HPAGE_PMD_NR - 1:
1306  memcg = page_private(pages[i])
1307  set_page_private(pages[i], 0)
1308  mem_cgroup_cancel_charge(pages[i], memcg, false)
1309  put_page(pages[i]) - drop the reference to each fallback page
1311  kfree(pages)
1312  Go to out
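The out_free_pages error path (lines 1302-1312) undoes the charges and frees the fallback pages before returning; approximately (reconstruction):

    out_free_pages:
        spin_unlock(vmf->ptl);
        mmu_notifier_invalidate_range_end(&range);
        for (i = 0; i < HPAGE_PMD_NR; i++) {
            memcg = (void *)page_private(pages[i]);
            set_page_private(pages[i], 0);
            mem_cgroup_cancel_charge(pages[i], memcg, false);
            put_page(pages[i]);
        }
        kfree(pages);
        goto out;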
Caller
Name                 Description
do_huge_pmd_wp_page