Function report
Source Code: mm/memory.c
Create Date: 2022-07-28 14:42:46
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked.
Proto: static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
Type: vm_fault_t
Parameter:
Type | Name
---|---
struct vm_fault * | vmf
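The row annotations below substitute kerneldoc comments for field names (e.g. "Faulting virtual address" stands for vmf->address). For orientation, here is an abridged sketch of struct vm_fault as defined in include/linux/mm.h in kernels of this vintage (v5.x; the exact field set varies by version), showing only the fields this function touches:

```c
struct vm_fault {
	struct vm_area_struct *vma;	/* Target VMA */
	unsigned int flags;		/* FAULT_FLAG_xxx flags */
	unsigned long address;		/* Faulting virtual address */
	pmd_t *pmd;			/* Pointer to pmd entry matching
					 * the 'address' */
	pte_t *pte;			/* Pointer to pte entry matching
					 * the 'address'. NULL if the page
					 * table hasn't been allocated. */
	spinlock_t *ptl;		/* Page table lock.
					 * Protects pte page table if 'pte'
					 * is not NULL, otherwise pmd. */
	/* other fields omitted */
};
```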
3119 | vma = vmf->vma |
3122 | ret = 0 |
3126 | If vma->vm_flags & VM_SHARED Then Return VM_FAULT_SIGBUS |
3139 | If pte_alloc(vma->vm_mm, vmf->pmd) Then Return VM_FAULT_OOM |
3143 | If unlikely(pmd_trans_unstable(vmf->pmd)) Then Return 0 |
3149 | entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), vma->vm_page_prot)) |
3153 | If Not pte_none(*vmf->pte) Then Go to unlock |
3159 | If userfaultfd_missing(vma) Then |
3160 | pte_unmap_unlock(vmf->pte, vmf->ptl) |
3161 | Return handle_userfault(vmf, VM_UFFD_MISSING) |
3163 | Go to setpte |
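Rows 3149-3163 are the read-fault fast path: rather than allocating memory, the kernel maps the shared zero page with a special read-only PTE, and only delivers the fault to userspace (userfaultfd) after re-checking the PTE under the page-table lock. A hedged reconstruction of this branch from mm/memory.c of the same era (abridged; guards such as check_stable_address_space() vary by version):

```c
	/* Use the zero-page for reads */
	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
			!mm_forbids_zeropage(vma->vm_mm)) {
		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
						vma->vm_page_prot));
		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
				vmf->address, &vmf->ptl);
		if (!pte_none(*vmf->pte))
			goto unlock;	/* raced: PTE already populated */
		/* Deliver the page fault to userland, check inside PT lock */
		if (userfaultfd_missing(vma)) {
			pte_unmap_unlock(vmf->pte, vmf->ptl);
			return handle_userfault(vmf, VM_UFFD_MISSING);
		}
		goto setpte;
	}
```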
3167 | If unlikely(anon_vma_prepare(vma)) Then Go to oom |
3173 | If mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, false) Then Go to oom_free_page |
3182 | __SetPageUptodate(page) |
3185 | If vma->vm_flags & VM_WRITE Then entry = pte_mkwrite(pte_mkdirty(entry)) |
3190 | If Not pte_none(*vmf->pte) Then Go to release |
3198 | If userfaultfd_missing(vma) Then |
3199 | pte_unmap_unlock(vmf->pte, vmf->ptl) |
3200 | mem_cgroup_cancel_charge(page, memcg, false) |
3202 | Return handle_userfault(vmf, VM_UFFD_MISSING) |
3206 | page_add_new_anon_rmap(page, vma, vmf->address, false) |
3207 | mem_cgroup_commit_charge(page, memcg, false, false) |
3209 | setpte : |
3210 | set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry) |
3214 | unlock : |
3215 | pte_unmap_unlock(vmf->pte, vmf->ptl) |
3216 | Return ret |
3217 | release : |
3218 | mem_cgroup_cancel_charge(page, memcg, false) |
3220 | Go to unlock |
3221 | oom_free_page : |
3223 | oom : |
3224 | Return VM_FAULT_OOM |
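Rows 3167-3224 form the write-fault path and the shared exit labels: allocate a zeroed page, charge it to the memcg, build a writable dirty PTE, and install it under the page-table lock; each failure label unwinds exactly what was set up before it. A hedged, abridged reconstruction of the same span (the mem_cgroup_* charging calls shown belong to this era's API and were later reworked):

```c
	/* Allocate our own private page. */
	if (unlikely(anon_vma_prepare(vma)))
		goto oom;
	page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
	if (!page)
		goto oom;
	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL,
					&memcg, false))
		goto oom_free_page;

	/*
	 * The barrier inside __SetPageUptodate makes sure the page
	 * contents are visible before the PTE that exposes them.
	 */
	__SetPageUptodate(page);

	entry = mk_pte(page, vma->vm_page_prot);
	if (vma->vm_flags & VM_WRITE)
		entry = pte_mkwrite(pte_mkdirty(entry));

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
			&vmf->ptl);
	if (!pte_none(*vmf->pte))
		goto release;	/* raced: drop our page and charge */
	/* Deliver the page fault to userland, check inside PT lock */
	if (userfaultfd_missing(vma)) {
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		mem_cgroup_cancel_charge(page, memcg, false);
		put_page(page);
		return handle_userfault(vmf, VM_UFFD_MISSING);
	}

	page_add_new_anon_rmap(page, vma, vmf->address, false);
	mem_cgroup_commit_charge(page, memcg, false, false);
setpte:
	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
unlock:
	pte_unmap_unlock(vmf->pte, vmf->ptl);
	return ret;
release:
	mem_cgroup_cancel_charge(page, memcg, false);
	put_page(page);
	goto unlock;
oom_free_page:
	put_page(page);
oom:
	return VM_FAULT_OOM;
}
```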
Caller:
Name | Describe
---|---
handle_pte_fault | These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures)
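For a sense of when handle_pte_fault routes a fault here: the first touch of any page in a private anonymous mapping lands in this function, a read taking the zero-page branch and a write the allocation branch. A minimal userspace demonstration using only standard POSIX mmap (the 4 KiB page size is an assumption for brevity):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	const size_t page = 4096;	/* assumed page size */
	char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* First touch of page 0 is a read: do_anonymous_page()
	 * maps the shared zero page read-only. */
	char c = p[0];

	/* First touch of page 1 is a write: do_anonymous_page()
	 * allocates a fresh zeroed page, mapped writable. */
	p[page] = 'x';

	printf("read %d, wrote '%c'\n", c, p[page]);
	munmap(p, 2 * page);
	return 0;
}
```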