Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c Create Date: 2022-07-28 14:42:56
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name:alloc_set_pte - set up a new PTE entry for the given page and add the reverse page mapping

Proto:vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, struct page *page)

Type:vm_fault_t

Parameter:

Type                 Parameter Name
struct vm_fault *    vmf
struct mem_cgroup *  memcg
struct page *        page
3437  vma = vmf->vma (the target VMA)
3438  write = vmf->flags & FAULT_FLAG_WRITE (fault was a write access)
3442  If pmd_none(*vmf->pmd) && PageTransCompound(page) && IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) Then (PageTransCompound is true for both transparent huge pages and hugetlbfs pages; IS_ENABLED evaluates to 1 if the config option is 'y' or 'm')
3445  VM_BUG_ON_PAGE(memcg, page)
3447  ret = do_set_pmd(vmf, page)
3448  If ret != VM_FAULT_FALLBACK Then Return ret
3452  If Not vmf->pte (pte entry matching 'address'; NULL if the page table hasn't been allocated) Then
3453  ret = pte_alloc_one_map(vmf)
3454  If ret Then Return ret
3459  If unlikely(!pte_none(*vmf->pte)) Then Return VM_FAULT_NOPAGE
3462  flush_icache_page(vma, page)
3463  entry = mk_pte(page, vma->vm_page_prot) (convert the page and the VMA's access permissions to a page table entry)
3464  If write Then entry = maybe_mkwrite(pte_mkdirty(entry), vma) (do pte_mkwrite, but only if the VMA says VM_WRITE)
3467  If write && Not (vma->vm_flags & VM_SHARED) Then
3468  inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES)
3469  page_add_new_anon_rmap(page, vma, vmf->address, false)
3470  mem_cgroup_commit_charge(page, memcg, false, false)
3471  lru_cache_add_active_or_unevictable(page, vma) (place the page on the active or unevictable LRU list, depending on its evictability)
3472  Else
3473  inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page)) (optimized variant when the page is already known not to be PageAnon)
3474  page_add_file_rmap(page, false) (add a pte mapping to a file page; the caller must hold the pte lock)
3476  set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry)
3479  update_mmu_cache(vma, vmf->address, vmf->pte) (a no-op on x86: the kernel page tables contain all the necessary MMU information)
3481  Return 0
Caller
Name          Describe
finish_fault  finish page fault once we have prepared the page to fault; this function handles all that is needed to finish a page fault once the page to fault in is prepared