Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c  Create Date: 2022-07-27 16:09:58
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Function name: do_anonymous_page

Comment: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked.

Function prototype: static vm_fault_t do_anonymous_page(struct vm_fault *vmf)

Return type: vm_fault_t

Parameters:

Type                 Name
struct vm_fault *    vmf
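Every description that appears in place of an identifier in the report below is the kerneldoc comment on the corresponding field, mostly from struct vm_fault. For reference, the relevant fields, abridged from include/linux/mm.h in v5.5 (unrelated fields omitted):

struct vm_fault {
        struct vm_area_struct *vma;     /* Target VMA */
        unsigned int flags;             /* FAULT_FLAG_xxx flags */
        unsigned long address;          /* Faulting virtual address */
        pmd_t *pmd;                     /* Pointer to pmd entry matching the 'address' */
        pte_t *pte;                     /* Pointer to pte entry matching the 'address'.
                                         * NULL if the page table hasn't been allocated. */
        spinlock_t *ptl;                /* Page table lock. Protects pte page table
                                         * if 'pte' is not NULL, otherwise pmd. */
};

Likewise, "The address space we belong to." is vma->vm_mm, "Flags, see mm.h." is vma->vm_flags, and "Access permissions of this VMA." is vma->vm_page_prot, all from struct vm_area_struct.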
3119  vma = vmf->vma (the target VMA)
3122  ret = 0
3126  if (vma->vm_flags & VM_SHARED) return VM_FAULT_SIGBUS (a file mapping without ->vm_ops should never reach the anonymous path)
3139  if (pte_alloc(vma->vm_mm, vmf->pmd)) return VM_FAULT_OOM (allocate the pte page table; pte_alloc() rather than pte_alloc_map(), because only down_read(mmap_sem) is held here)
3143  if (unlikely(pmd_trans_unstable(vmf->pmd))) return 0 (a huge pmd may have materialized from under us; this check is a no-op if Transparent Hugepage Support is not built into the kernel)
3147  if (!(vmf->flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(vma->vm_mm)) then use the zero page for the read fault (s390 forbids this, to prevent multiplexing of hardware bits; a userspace demonstration follows this block):
3149  entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), vma->vm_page_prot))
3151  vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl)
3153  if (!pte_none(*vmf->pte)) goto unlock (another thread already installed a pte)
3155  ret = check_stable_address_space(vma->vm_mm) (checks whether a page fault on the given mm is still reliable)
3156  if (ret) goto unlock
3159  if (userfaultfd_missing(vma)) then deliver the page fault to userland instead (checked inside the PT lock): pte_unmap_unlock() and return handle_userfault(vmf, VM_UFFD_MISSING)
3163  goto setpte
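The zero-page branch above means a read fault on untouched anonymous memory installs a mapping of the shared zero page, and, because only the write path at line 3205 bumps the MM_ANONPAGES counter, it is not accounted as resident anonymous memory. A minimal userspace sketch that makes this visible; the rss_kb() helper is ad hoc, error handling is trimmed, and 4 KiB pages plus default overcommit settings are assumed:

/* Observe the zero-page read path vs. the write path from userspace. */
#include <stdio.h>
#include <sys/mman.h>

static long rss_kb(void)                /* VmRSS from /proc/self/status */
{
        char line[256];
        long kb = -1;
        FILE *f = fopen("/proc/self/status", "r");

        while (f && fgets(line, sizeof(line), f))
                if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
                        break;
        if (f)
                fclose(f);
        return kb;
}

int main(void)
{
        size_t len = 64 << 20;          /* 64 MiB, private anonymous */
        volatile char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        long sink = 0;

        if (p == MAP_FAILED)
                return 1;
        for (size_t i = 0; i < len; i += 4096)
                sink += p[i];           /* read faults map the zero page */
        printf("after reads:  VmRSS = %ld kB (sum %ld)\n", rss_kb(), sink);

        for (size_t i = 0; i < len; i += 4096)
                p[i] = 1;               /* write faults allocate real pages */
        printf("after writes: VmRSS = %ld kB\n", rss_kb());

        munmap((void *)p, len);
        return 0;
}

On a typical system the first VmRSS figure barely moves, while the second grows by roughly the 64 MiB that do_anonymous_page() allocated page by page.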
3167  if (unlikely(anon_vma_prepare(vma))) goto oom (write fault: allocate our own private page, ensuring the anon_vma exists first)
3169  page = alloc_zeroed_user_highpage_movable(vma, vmf->address) (allocate a zeroed HIGHMEM page, placed so that it can move, for this VMA and faulting address)
3170  if (!page) goto oom
3173  if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, false)) goto oom_free_page
3182  __SetPageUptodate(page) (its memory barrier makes the preceding stores to the page contents visible before the set_pte_at() write)
3184  entry = mk_pte(page, vma->vm_page_prot) (convert the page and protection bits to a page table entry)
3185  if (vma->vm_flags & VM_WRITE) then entry = pte_mkwrite(pte_mkdirty(entry))
3188  vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl)
3190  if (!pte_none(*vmf->pte)) goto release (a concurrent fault won the race; the allocate-then-recheck pattern is sketched after this step)
3193  ret = check_stable_address_space(vma->vm_mm) (checks whether a page fault on the given mm is still reliable)
3194  if (ret) goto release
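Lines 3188-3194 repeat the lock-and-recheck of the zero-page branch: the page was allocated and charged without the page table lock held, so once the lock is taken the code must confirm the PTE is still empty and back everything out via release: if a concurrent fault got there first. The same allocate-then-recheck-under-lock pattern, reduced to a pthreads sketch with hypothetical names (slot, make_obj, install_once):

/* Hypothetical pthreads reduction of the allocate/recheck/release dance. */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static void *slot;                      /* plays the role of the PTE */

static void *make_obj(void)             /* expensive work, done unlocked */
{
        return calloc(1, 4096);
}

void *install_once(void)
{
        void *obj = make_obj();         /* like the page allocation above */

        if (!obj)
                return NULL;            /* the 'oom' path */

        pthread_mutex_lock(&lock);
        if (slot) {                     /* like !pte_none(*vmf->pte) */
                pthread_mutex_unlock(&lock);
                free(obj);              /* the 'release' path */
                return slot;
        }
        slot = obj;                     /* like set_pte_at() */
        pthread_mutex_unlock(&lock);
        return obj;
}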
3198  if (userfaultfd_missing(vma)) then deliver the page fault to userland (checked inside the PT lock):
3199  pte_unmap_unlock(vmf->pte, vmf->ptl)
3200  mem_cgroup_cancel_charge(page, memcg, false)
3201  put_page(page)
3202  return handle_userfault(vmf, VM_UFFD_MISSING) (missing pages tracking; a userspace counterpart is sketched below)
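handle_userfault() parks the faulting task and hands the event to whichever process registered this range with userfaultfd. A minimal userspace counterpart, using only the documented userfaultfd(2) protocol: register a page for missing-page tracking, catch the fault the kernel delivers at the two checks above, and resolve it with UFFDIO_COPY. Error handling is trimmed; compile with -pthread; unprivileged use may depend on the vm.unprivileged_userfaultfd sysctl; a page size of at most 64 KiB is assumed:

/* Minimal userfaultfd(2) missing-page monitor. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static long page_size;
static char src[65536];                 /* source data for UFFDIO_COPY */

static void *monitor(void *arg)
{
        int uffd = (int)(long)arg;
        struct uffd_msg msg;

        /* Blocks until handle_userfault() queues the VM_UFFD_MISSING event. */
        read(uffd, &msg, sizeof(msg));
        if (msg.event == UFFD_EVENT_PAGEFAULT) {
                struct uffdio_copy copy = {
                        .dst = msg.arg.pagefault.address &
                               ~((unsigned long)page_size - 1),
                        .src = (unsigned long)src,
                        .len = page_size,
                };
                memset(src, 'A', page_size);
                ioctl(uffd, UFFDIO_COPY, &copy); /* fills the page, wakes the faulter */
        }
        return NULL;
}

int main(void)
{
        struct uffdio_api api = { .api = UFFD_API };
        pthread_t th;
        char *area;
        int uffd;

        page_size = sysconf(_SC_PAGESIZE);
        uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
        ioctl(uffd, UFFDIO_API, &api);

        area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)area, .len = page_size },
                .mode = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_create(&th, NULL, monitor, (void *)(long)uffd);
        printf("first byte: %c\n", area[0]);    /* faults into the monitor */
        pthread_join(th, NULL);
        return 0;
}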
3205  inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES) (unlike the zero-page path, this path is accounted as resident anonymous memory)
3206  page_add_new_anon_rmap(page, vma, vmf->address, false)
3207  mem_cgroup_commit_charge(page, memcg, false, false)
3208  lru_cache_add_active_or_unevictable(page, vma) (place the page on the active or unevictable LRU list, depending on its evictability)
3209  setpte:
3210  set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry)
3213  update_mmu_cache(vma, vmf->address, vmf->pte) (no need to invalidate: the pte was non-present before; a no-op on x86, whose kernel page tables contain all the necessary MMU info)
3214  unlock:
3215  pte_unmap_unlock(vmf->pte, vmf->ptl)
3216  return ret
3217  release:
3218  mem_cgroup_cancel_charge(page, memcg, false)
3219  put_page(page)
3220  goto unlock
3221  oom_free_page:
3222  put_page(page)
3223  oom:
3224  return VM_FAULT_OOM
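With the identifiers restored, the flow annotated above reads as follows. This is a reconstruction of do_anonymous_page() from the v5.5 mm/memory.c with most comments abridged; the source file is authoritative:

static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        struct mem_cgroup *memcg;
        struct page *page;
        vm_fault_t ret = 0;
        pte_t entry;

        /* File mapping without ->vm_ops ? */
        if (vma->vm_flags & VM_SHARED)
                return VM_FAULT_SIGBUS;

        if (pte_alloc(vma->vm_mm, vmf->pmd))
                return VM_FAULT_OOM;
        if (unlikely(pmd_trans_unstable(vmf->pmd)))
                return 0;

        /* Use the zero-page for reads */
        if (!(vmf->flags & FAULT_FLAG_WRITE) &&
                        !mm_forbids_zeropage(vma->vm_mm)) {
                entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
                                                vma->vm_page_prot));
                vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                vmf->address, &vmf->ptl);
                if (!pte_none(*vmf->pte))
                        goto unlock;
                ret = check_stable_address_space(vma->vm_mm);
                if (ret)
                        goto unlock;
                if (userfaultfd_missing(vma)) {
                        pte_unmap_unlock(vmf->pte, vmf->ptl);
                        return handle_userfault(vmf, VM_UFFD_MISSING);
                }
                goto setpte;
        }

        /* Allocate our own private page. */
        if (unlikely(anon_vma_prepare(vma)))
                goto oom;
        page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
        if (!page)
                goto oom;
        if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL,
                                        &memcg, false))
                goto oom_free_page;

        __SetPageUptodate(page);

        entry = mk_pte(page, vma->vm_page_prot);
        if (vma->vm_flags & VM_WRITE)
                entry = pte_mkwrite(pte_mkdirty(entry));

        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                        &vmf->ptl);
        if (!pte_none(*vmf->pte))
                goto release;
        ret = check_stable_address_space(vma->vm_mm);
        if (ret)
                goto release;
        if (userfaultfd_missing(vma)) {
                pte_unmap_unlock(vmf->pte, vmf->ptl);
                mem_cgroup_cancel_charge(page, memcg, false);
                put_page(page);
                return handle_userfault(vmf, VM_UFFD_MISSING);
        }

        inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
        page_add_new_anon_rmap(page, vma, vmf->address, false);
        mem_cgroup_commit_charge(page, memcg, false, false);
        lru_cache_add_active_or_unevictable(page, vma);
setpte:
        set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
        update_mmu_cache(vma, vmf->address, vmf->pte);
unlock:
        pte_unmap_unlock(vmf->pte, vmf->ptl);
        return ret;
release:
        mem_cgroup_cancel_charge(page, memcg, false);
        put_page(page);
        goto unlock;
oom_free_page:
        put_page(page);
oom:
        return VM_FAULT_OOM;
}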
Callers

Name              Description
handle_pte_fault  These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures).
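For context, handle_pte_fault() takes this path only when no PTE exists yet and the VMA is anonymous; abridged from the v5.5 source:

        /* Abridged from handle_pte_fault() in mm/memory.c (v5.5). */
        if (!vmf->pte) {
                if (vma_is_anonymous(vmf->vma))
                        return do_anonymous_page(vmf);
                else
                        return do_fault(vmf);
        }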