Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c  Create Date: 2022-07-28 14:43:37
Last Modified: 2020-03-12 14:18:49  Copyright © Brick

Name: By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().

Proto:static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags)

Type:vm_fault_t

Parameter:

Type                     Name
struct vm_area_struct *  vma
unsigned long            address
unsigned int             flags
4063  struct vm_fault vmf = {.vma = vma (target VMA), .address = address & PAGE_MASK (faulting virtual address), .flags = flags (FAULT_FLAG_xxx flags), .pgoff = linear_page_index(vma, address) (logical page offset based on vma), .gfp_mask = __get_fault_gfp_mask(vma) (gfp mask to be used for allocations)}
4070  dirty = flags & FAULT_FLAG_WRITE (fault was a write access)
4071  mm = vma->vm_mm (the address space we belong to)
4076  pgd = pgd_offset(mm, address) (shortcut to get the pgd_t for an address in a given mm)
4077  p4d = p4d_alloc(mm, pgd, address)
4078  If Not p4d Then Return VM_FAULT_OOM
4081  vmf.pud = pud_alloc(mm, p4d, address) (pointer to the pud entry matching 'address')
4082  If Not vmf.pud Then Return VM_FAULT_OOM
4084  retry_pud:
4085  If pud_none( * vmf.pud) && __transparent_hugepage_enabled(vma) (may only be used on vmas known to support THP) Then
4086  ret = create_huge_pud( & vmf)
4087  If Not (ret & VM_FAULT_FALLBACK) Then Return ret
4089  Else
4090  orig_pud = *vmf.pud
4092  barrier() (the "volatile" is due to gcc bugs)
4093  If pud_trans_huge(orig_pud) || pud_devmap(orig_pud) Then
4097  If dirty && Not pud_write(orig_pud) Then
4098  ret = wp_huge_pud( & vmf, orig_pud)
4099  If Not (ret & VM_FAULT_FALLBACK) Then Return ret
4101  Else
4102  huge_pud_set_accessed( & vmf, orig_pud)
4103  Return 0
4108  vmf.pmd = pmd_alloc(mm, vmf.pud, address) (pointer to the pmd entry matching 'address')
4109  If Not vmf.pmd Then Return VM_FAULT_OOM
4113  If pud_trans_unstable(vmf.pud) (a huge pud page fault raced with pmd_alloc; see pmd_trans_unstable for discussion) Then Go to retry_pud
4116  If pmd_none( * vmf.pmd) && __transparent_hugepage_enabled(vma) (may only be used on vmas known to support THP) Then
4117  ret = create_huge_pmd( & vmf)
4118  If Not (ret & VM_FAULT_FALLBACK) Then Return ret
4120  Else
4121  orig_pmd = *vmf.pmd
4123  barrier() (the "volatile" is due to gcc bugs)
4124  If is_swap_pmd(orig_pmd) Then
4128  pmd_migration_entry_wait(mm, vmf.pmd) (if orig_pmd is a pmd migration entry)
4129  Return 0
4135  If dirty && Not pmd_write(orig_pmd) Then
4136  ret = wp_huge_pmd( & vmf, orig_pmd)
4137  If Not (ret & VM_FAULT_FALLBACK) Then Return ret
4139  Else
4140  huge_pmd_set_accessed( & vmf, orig_pmd)
4141  Return 0
4146  Return handle_pte_fault( & vmf) (these routines also handle marking pages dirty and/or accessed for architectures that don't do it in hardware, i.e. most RISC architectures)
Caller
Name             Description
handle_mm_fault  By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().