Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:42:05
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:wp_page_copy - Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High level logic flow: allocate a page, copy the content of the old page to the new one, then install the copy under the page table lock (see the line-by-line walk and the condensed C sketch below).

Proto:static vm_fault_t wp_page_copy(struct vm_fault *vmf)

Type:vm_fault_t

Parameter:

Type                Parameter
struct vm_fault *   vmf
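Before the line-by-line walk of the kernel source, a small userspace program (not part of the kernel tree; written here purely for illustration) shows when this path fires: after fork(), parent and child share their anonymous pages read-only, and the first write from either side takes a write-protect fault that ends up in wp_page_copy(), which hands the writer a private copy of the page.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        /* One anonymous, writable, private page - shared copy-on-write across fork(). */
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
                return 1;
        strcpy(page, "parent data");

        pid_t pid = fork();
        if (pid == 0) {
                /*
                 * This store hits a write-protected pte: the kernel's write
                 * fault path copies the page (wp_page_copy()) so the child
                 * writes into its own private copy, not the shared one.
                 */
                strcpy(page, "child data");
                printf("child  sees: %s\n", page);
                _exit(0);
        }
        wait(NULL);
        printf("parent sees: %s\n", page);  /* still "parent data" */
        munmap(page, 4096);
        return 0;
}

Each process prints its own string because the child's write was redirected to a freshly allocated page; the walk below is the kernel side of exactly that operation.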
2462  vma = vmf->vma (the target VMA)
2463  mm = vma->vm_mm (the address space this VMA belongs to)
2464  old_page = vmf->page (the page a ->fault handler returns here, unless VM_FAULT_NOPAGE is set, which is also implied by VM_FAULT_ERROR)
2465  struct page * new_page = NULL
2467  page_copied = 0
2471  If unlikely(anon_vma_prepare(vma)) Then Go to oom
2474  If is_zero_pfn(pte_pfn(vmf->orig_pte)) Then (vmf->orig_pte holds the value of the PTE at the time of fault)
2475  new_page = alloc_zeroed_user_highpage_movable(vma, vmf->address): allocate a zeroed HIGHMEM page for the faulting virtual address in a VMA the caller knows can move
2477  If Not new_page Then Go to oom
2479  Else
2480  new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address)
2482  If Not new_page Then Go to oom
2485  If Not cow_user_page(new_page, old_page, vmf) Then
2492  put_page(new_page)
2493  If old_page Then put_page(old_page)
2495  Return 0
2499  If mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false) Then Go to oom_free_new
2502  __SetPageUptodate(new_page)
2504  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, vmf->address & PAGE_MASK, (vmf->address & PAGE_MASK) + PAGE_SIZE)
2507  mmu_notifier_invalidate_range_start(&range)
2512  vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl): re-take the pte and the page table lock, since the lock was not held while copying
2513  If likely(pte_same(*vmf->pte, vmf->orig_pte)) Then
2514  If old_page Then
2515  If Not PageAnon(old_page) Then
2520  Else
2523  flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte))
2524  entry = mk_pte(new_page, vma->vm_page_prot): build a page table entry from the new page and the VMA's access permissions
2525  entry = maybe_mkwrite(pte_mkdirty(entry), vma): do pte_mkwrite, but only if the VMA says VM_WRITE (we are servicing a write fault here)
2532  ptep_clear_flush_notify(vma, vmf->address, vmf->pte)
2533  page_add_new_anon_rmap(new_page, vma, vmf->address, false)
2534  mem_cgroup_commit_charge(new_page, memcg, false, false)
2535  lru_cache_add_active_or_unevictable(new_page, vma): place the new page on the active or unevictable LRU list, depending on its evictability
2541  set_pte_at_notify(mm, vmf->address, vmf->pte, entry): set the pte only after running the notifier, which is safe because the primary MMU's pte was already invalidated by ptep_clear_flush_notify() above
2542  update_mmu_cache(vma, vmf->address, vmf->pte) (a no-op on x86, whose kernel page tables already contain all the necessary MMU information)
2543  If old_page Then
2566  page_remove_rmap(old_page, false)
2570  new_page = old_page
2571  page_copied = 1
2572  Else
2573  mem_cgroup_cancel_charge(new_page, memcg, false)
2576  If new_page Then put_page(new_page)
2579  pte_unmap_unlock(vmf->pte, vmf->ptl)
2584  mmu_notifier_invalidate_range_only_end(&range)
2585  If old_page Then
2596  put_page(old_page)
2598  Return If page_copied Then VM_FAULT_WRITE Else 0
2599  oom_free_new:
2600  put_page(new_page)
2601  oom:
2602  If old_page Then put_page(old_page)
2604  Return VM_FAULT_OOM
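Putting the walk back into C: the following is a condensed sketch of the function in the context of mm/memory.c (v5.5). The memory-cgroup charge/commit, the mmu-notifier bracketing, the RSS counter updates and the mlock cleanup that appear in the walk above are trimmed here, so read it as an illustration of the control flow rather than the verbatim source.

/*
 * Condensed sketch of wp_page_copy() (mm/memory.c, v5.5).
 * memcg charging, mmu-notifier calls, RSS counters and mlock
 * cleanup are omitted for brevity - see the full source above.
 */
static vm_fault_t wp_page_copy_sketch(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        struct mm_struct *mm = vma->vm_mm;
        struct page *old_page = vmf->page;
        struct page *new_page = NULL;
        int page_copied = 0;
        pte_t entry;

        if (unlikely(anon_vma_prepare(vma)))
                goto oom;

        /* 1. Allocate the private copy and fill it from the old page. */
        if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
                new_page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
                if (!new_page)
                        goto oom;
        } else {
                new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
                if (!new_page)
                        goto oom;
                if (!cow_user_page(new_page, old_page, vmf)) {
                        /* Source pte went away under us; let the fault be retried. */
                        put_page(new_page);
                        if (old_page)
                                put_page(old_page);
                        return 0;
                }
        }
        __SetPageUptodate(new_page);

        /* 2. Retake the page-table lock and make sure the pte did not change. */
        vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
        if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
                /* 3. Publish the copy: clear+flush the old pte, install the new one. */
                flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
                entry = mk_pte(new_page, vma->vm_page_prot);
                entry = maybe_mkwrite(pte_mkdirty(entry), vma);
                ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
                page_add_new_anon_rmap(new_page, vma, vmf->address, false);
                lru_cache_add_active_or_unevictable(new_page, vma);
                set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
                update_mmu_cache(vma, vmf->address, vmf->pte);
                if (old_page)
                        page_remove_rmap(old_page, false);
                /* Reuse new_page to drop the reference the page table held on old_page. */
                new_page = old_page;
                page_copied = 1;
        }
        /* Raced: throw the unused copy away; succeeded: drop old_page's table ref. */
        if (new_page)
                put_page(new_page);
        pte_unmap_unlock(vmf->pte, vmf->ptl);
        if (old_page)
                put_page(old_page);     /* reference taken by do_wp_page() */
        return page_copied ? VM_FAULT_WRITE : 0;
oom:
        if (old_page)
                put_page(old_page);
        return VM_FAULT_OOM;
}

The pte_same() re-check at step 2 is what makes dropping the page table lock during the copy safe: if another thread changed the pte in the meantime, the freshly copied page is simply discarded and the fault is retried.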
Caller
Name          Description
do_wp_page    This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been done by the caller.
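For completeness, this is roughly how do_wp_page() decides to fall back to wp_page_copy() in this kernel version. It is a simplified rendering, not the verbatim source (the KSM check and the page-lock handling on the anonymous-reuse path are omitted): exclusive anonymous pages and shared writable mappings are serviced in place by wp_page_reuse(), wp_page_shared() or wp_pfn_shared(), and only the true copy-on-write case reaches the function documented above.

/* Simplified dispatcher for write faults on present, write-protected ptes. */
static vm_fault_t do_wp_page_sketch(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;

        vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
        if (!vmf->page) {
                /* Raw PFN mapping (VM_PFNMAP/VM_MIXEDMAP): shared mappings are
                 * handled in place, private ones fall through to the copy path. */
                if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
                    (VM_WRITE | VM_SHARED))
                        return wp_pfn_shared(vmf);
                pte_unmap_unlock(vmf->pte, vmf->ptl);
                return wp_page_copy(vmf);
        }

        /* Anonymous page with no other users: just make the pte writable. */
        if (PageAnon(vmf->page) && reuse_swap_page(vmf->page, NULL)) {
                wp_page_reuse(vmf);
                return VM_FAULT_WRITE;
        }

        /* Shared writable file mapping: keep the page, notify the filesystem. */
        if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == (VM_WRITE | VM_SHARED))
                return wp_page_shared(vmf);

        /* True copy-on-write: pin the old page, drop the ptl, and copy. */
        get_page(vmf->page);
        pte_unmap_unlock(vmf->pte, vmf->ptl);
        return wp_page_copy(vmf);
}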