Function report

Linux Kernel v5.5.9

Source Code: mm/memory.c

Name: do_anonymous_page

Description: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked.
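The non-exclusive mmap_sem mentioned here is taken by the architecture's page-fault handler before it walks the VMA tree and enters the generic fault path. As a rough sketch of that calling pattern (simplified from the shape of the v5.5 arch fault handlers, not a verbatim excerpt):

	down_read(&mm->mmap_sem);	/* non-exclusive: concurrent faults allowed */
	vma = find_vma(mm, address);
	...
	fault = handle_mm_fault(vma, address, flags);	/* eventually reaches do_anonymous_page() */
	...
	up_read(&mm->mmap_sem);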

Proto: static vm_fault_t do_anonymous_page(struct vm_fault *vmf)

Type: vm_fault_t

Parameter:

Type               Name
struct vm_fault *  vmf
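The annotations in the walkthrough below are the kernel-doc comments of the struct vm_fault members this function touches. For reference, here are those fields, abridged from include/linux/mm.h in v5.5 (kernel comments kept as-is, other members omitted):

	struct vm_fault {
		struct vm_area_struct *vma;	/* Target VMA */
		unsigned int flags;		/* FAULT_FLAG_xxx flags */
		unsigned long address;		/* Faulting virtual address */
		pmd_t *pmd;			/* Pointer to pmd entry matching the 'address' */
		pte_t *pte;			/* Pointer to pte entry matching the 'address'.
						 * NULL if the page table hasn't been allocated. */
		spinlock_t *ptl;		/* Page table lock. Protects pte page table
						 * if 'pte' is not NULL, otherwise pmd. */
		/* ... */
	};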
3119  vma = vmf->vma (the target VMA)
3122  ret = 0
3126  If vma->vm_flags & VM_SHARED Then Return VM_FAULT_SIGBUS (a file mapping without ->vm_ops should never get here)
3139  If pte_alloc(vma->vm_mm, vmf->pmd) Then Return VM_FAULT_OOM (pte_alloc() rather than pte_alloc_map(): only down_read(mmap_sem) is held, so a huge pmd could be created concurrently)
3143  If unlikely(pmd_trans_unstable(vmf->pmd)) Then Return 0 (a no-op unless Transparent Hugepage support is built in)
3147  If Not (vmf->flags & FAULT_FLAG_WRITE) && Not mm_forbids_zeropage(vma->vm_mm) Then (use the shared zero page for read faults; s390 defines mm_forbids_zeropage() to prevent multiplexing of hardware bits)
3149  entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), vma->vm_page_prot))
3151  vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl)
3153  If Not pte_none(*vmf->pte) Then Go to unlock (someone else already installed a pte)
3155  ret = check_stable_address_space(vma->vm_mm) (checks whether a page fault on this mm is still reliable)
3156  If ret Then Go to unlock
3159  If userfaultfd_missing(vma) Then unlock the pte and Return handle_userfault(vmf, VM_UFFD_MISSING) (deliver the fault to userland; checked inside the PT lock)
3163  Go to setpte
3167  If unlikely(anon_vma_prepare(vma)) Then Go to oom
3169  page = alloc_zeroed_user_highpage_movable(vma, vmf->address) (allocate a zeroed HIGHMEM page for a VMA the caller knows can move)
3170  If Not page Then Go to oom
3173  If mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, false) Then Go to oom_free_page
3182  __SetPageUptodate(page) (its memory barrier makes the preceding stores to the page contents visible before the set_pte_at() write)
3184  entry = mk_pte(page, vma->vm_page_prot)
3185  If vma->vm_flags & VM_WRITE Then entry = pte_mkwrite(pte_mkdirty(entry))
3188  vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl)
3190  If Not pte_none(*vmf->pte) Then Go to release
3193  ret = check_stable_address_space(vma->vm_mm)
3194  If ret Then Go to release
3198  If userfaultfd_missing(vma) Then (deliver the fault to userland; checked inside the PT lock)
3199  pte_unmap_unlock(vmf->pte, vmf->ptl)
3200  mem_cgroup_cancel_charge(page, memcg, false)
3201  put_page(page)
3202  Return handle_userfault(vmf, VM_UFFD_MISSING)
3205  inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES)
3206  page_add_new_anon_rmap(page, vma, vmf->address, false)
3207  mem_cgroup_commit_charge(page, memcg, false, false)
3208  lru_cache_add_active_or_unevictable(page, vma) (place the page on the active or unevictable LRU list, depending on its evictability)
3209  setpte:
3210  set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry)
3213  update_mmu_cache(vma, vmf->address, vmf->pte) (no invalidation needed, the pte was non-present; a no-op on x86, whose kernel page tables carry all the necessary MMU info)
3214  unlock:
3215  pte_unmap_unlock(vmf->pte, vmf->ptl)
3216  Return ret
3217  release:
3218  mem_cgroup_cancel_charge(page, memcg, false)
3219  put_page(page)
3220  Go to unlock
3221  oom_free_page:
3222  put_page(page)
3223  oom:
3224  Return VM_FAULT_OOM
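Read top to bottom, the walkthrough reassembles into the following condensed reconstruction of do_anonymous_page() (abridged from mm/memory.c in v5.5, with kernel comments shortened; consult the source line numbers above for the exact listing):

	static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		struct mem_cgroup *memcg;
		struct page *page;
		vm_fault_t ret = 0;
		pte_t entry;

		if (vma->vm_flags & VM_SHARED)	/* file mapping without ->vm_ops? */
			return VM_FAULT_SIGBUS;
		if (pte_alloc(vma->vm_mm, vmf->pmd))
			return VM_FAULT_OOM;
		if (unlikely(pmd_trans_unstable(vmf->pmd)))
			return 0;

		/* Use the zero page for read faults. */
		if (!(vmf->flags & FAULT_FLAG_WRITE) &&
				!mm_forbids_zeropage(vma->vm_mm)) {
			entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
						      vma->vm_page_prot));
			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
						       vmf->address, &vmf->ptl);
			if (!pte_none(*vmf->pte))
				goto unlock;
			ret = check_stable_address_space(vma->vm_mm);
			if (ret)
				goto unlock;
			if (userfaultfd_missing(vma)) {
				pte_unmap_unlock(vmf->pte, vmf->ptl);
				return handle_userfault(vmf, VM_UFFD_MISSING);
			}
			goto setpte;
		}

		/* Allocate our own private, zeroed page. */
		if (unlikely(anon_vma_prepare(vma)))
			goto oom;
		page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
		if (!page)
			goto oom;
		if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL,
						&memcg, false))
			goto oom_free_page;
		__SetPageUptodate(page);

		entry = mk_pte(page, vma->vm_page_prot);
		if (vma->vm_flags & VM_WRITE)
			entry = pte_mkwrite(pte_mkdirty(entry));

		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		if (!pte_none(*vmf->pte))
			goto release;
		ret = check_stable_address_space(vma->vm_mm);
		if (ret)
			goto release;
		if (userfaultfd_missing(vma)) {
			pte_unmap_unlock(vmf->pte, vmf->ptl);
			mem_cgroup_cancel_charge(page, memcg, false);
			put_page(page);
			return handle_userfault(vmf, VM_UFFD_MISSING);
		}

		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
		page_add_new_anon_rmap(page, vma, vmf->address, false);
		mem_cgroup_commit_charge(page, memcg, false, false);
		lru_cache_add_active_or_unevictable(page, vma);
	setpte:
		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
		update_mmu_cache(vma, vmf->address, vmf->pte);
	unlock:
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return ret;
	release:
		mem_cgroup_cancel_charge(page, memcg, false);
		put_page(page);
		goto unlock;
	oom_free_page:
		put_page(page);
	oom:
		return VM_FAULT_OOM;
	}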
Caller
Name              Description
handle_pte_fault  These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures)
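For context, handle_pte_fault() routes a fault here only when no pte exists yet and the VMA is anonymous; in v5.5 the dispatch reads roughly:

	if (!vmf->pte) {
		if (vma_is_anonymous(vmf->vma))
			return do_anonymous_page(vmf);
		else
			return do_fault(vmf);
	}

From userspace, this path is exercised by the first touch of a fresh anonymous mapping. A minimal illustration (plain POSIX test program, not kernel code): the first read of a page takes the zero-page branch above, while the first write takes the private-allocation branch.

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		char *p = mmap(NULL, 2 * 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		char c = p[0];	/* read fault: zero page mapped via pte_mkspecial() */
		p[4096] = 1;	/* write fault: a private zeroed page is allocated */
		printf("%d %d\n", c, p[4096]);
		munmap(p, 2 * 4096);
		return 0;
	}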