Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:43:31
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: handle_pte_fault. These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures).

Proto:static vm_fault_t handle_pte_fault(struct vm_fault *vmf)

Type:vm_fault_t

Parameter:

Type                Name
struct vm_fault *   vmf
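The walkthrough below refers to several fields of the vmf argument. As a rough orientation, here is a minimal user-space sketch of the handful of fields this report uses, plus the initial "snapshot the PTE" step; the struct and function names are hypothetical stand-ins, the real struct vm_fault lives in include/linux/mm.h and has more members.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef uint64_t model_pte_t;          /* stand-in for pte_t */

/* Hypothetical, simplified model of the struct vm_fault fields used here. */
struct vm_fault_model {
    unsigned long address;   /* faulting virtual address */
    unsigned int flags;      /* FAULT_FLAG_xxx flags */
    model_pte_t *pte;        /* pte pointer; NULL until the table exists */
    model_pte_t orig_pte;    /* value of the PTE at the time of fault */
};

static model_pte_t page_table[1];      /* toy one-entry page table */

/* Mirrors the first phase of handle_pte_fault(): with no page table
 * allocated (pmd_none()), vmf->pte stays NULL; otherwise the PTE is
 * mapped (pte_offset_map()) and its value is copied into orig_pte. */
static void snapshot_pte(struct vm_fault_model *vmf, int pmd_present)
{
    if (!pmd_present) {                /* models pmd_none(*vmf->pmd) */
        vmf->pte = NULL;
        return;
    }
    vmf->pte = &page_table[0];         /* models pte_offset_map() */
    vmf->orig_pte = *vmf->pte;         /* snapshot taken at fault time */
}
```

The key design point the snapshot captures: all the later checks test orig_pte, a stable copy, not the live PTE, which can change concurrently until the page table lock is taken.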
3976  If unlikely(pmd_none(*vmf->pmd)), i.e. the page table covering the faulting address has not been allocated yet, Then
3983  vmf->pte = NULL
3984  Else
3986  If pmd_devmap_trans_unstable(vmf->pmd) Then Return 0 (the ordering of these checks is important for pmds with _PAGE_DEVMAP set: checking pmd_trans_unstable() first would trip the bad_pmd() check inside pmd_none_or_trans_huge_or_clear_bad())
3994  vmf->pte = pte_offset_map(vmf->pmd, vmf->address)
3995  vmf->orig_pte = *vmf->pte (the value of the PTE at the time of fault)
4005  barrier(): some architectures have ptes wider than the word size, so a compiler barrier (whose "volatile" is due to gcc bugs) ensures a consistent snapshot of orig_pte; if the snapshot is pte_none(), the pte is unmapped and vmf->pte is reset to NULL
4012  If !vmf->pte (no page table entry exists yet) Then
4013  If vma_is_anonymous(vmf->vma) Then Return do_anonymous_page(vmf), which handles a first touch of anonymous memory; we enter with non-exclusive mmap_sem held (to exclude vma changes but allow concurrent faults) and return with mmap_sem still held
4015  Else Return do_fault(vmf), which handles a file-backed (non-anonymous) fault under the same mmap_sem conditions
4019  If !pte_present(vmf->orig_pte) Then Return do_swap_page(vmf), which brings a swapped-out page back in; we return with the mmap_sem locked or unlocked in the same cases as filemap_fault()
4022  If pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma) Then Return do_numa_page(vmf) (technically a PTE can be PROT_NONE even without NUMA balancing, but the only case the kernel cares about is NUMA balancing, and that is only ever set when the VMA is accessible)
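The checks up to this point form a dispatch ladder: missing PTE goes to the anonymous or file fault path, non-present PTE to the swap path, PROT_NONE on an accessible VMA to the NUMA path, and anything else falls through to the access-bit fixup. A user-space sketch of just that ordering (the enum names are made up; the real handlers are do_anonymous_page(), do_fault(), do_swap_page(), and do_numa_page() returning vm_fault_t):

```c
#include <assert.h>

/* Hypothetical labels for the handlers handle_pte_fault() dispatches to. */
enum handler { ANON_PAGE, FILE_FAULT, SWAP_PAGE, NUMA_PAGE, FIXUP };

static enum handler dispatch(int pte_mapped, int anonymous_vma,
                             int pte_present, int pte_protnone,
                             int vma_accessible)
{
    if (!pte_mapped)                     /* no PTE yet: first touch */
        return anonymous_vma ? ANON_PAGE : FILE_FAULT;
    if (!pte_present)                    /* swapped out (or migration entry) */
        return SWAP_PAGE;
    if (pte_protnone && vma_accessible)  /* NUMA-balancing PROT_NONE fault */
        return NUMA_PAGE;
    return FIXUP;                        /* present PTE: fix access bits */
}
```

Note that the order matters: pte_present() is only meaningful once we know a PTE exists, and the PROT_NONE test is only reached for present PTEs, matching the source order at lines 4012 to 4022.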
4025  vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd), the page table lock that protects the pte page table
4026  spin_lock(vmf->ptl)
4027  entry = vmf->orig_pte
4028  If unlikely(!pte_same(*vmf->pte, entry)), i.e. the PTE changed while we were not holding the lock, Then Go to unlock
4030  If vmf->flags & FAULT_FLAG_WRITE (the fault was a write access) Then
4031  If !pte_write(entry) Then Return do_wp_page(vmf), the copy-on-write path: the page is copied to a new address and the shared-page counter for the old page is decremented (this routine assumes the protection checks have already been done)
4033  entry = pte_mkdirty(entry)
4035  entry = pte_mkyoung(entry)
4036  If ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry, vmf->flags & FAULT_FLAG_WRITE) Then
4038  update_mmu_cache(vmf->vma, vmf->address, vmf->pte); on x86 this is a no-op, since the kernel page tables already contain all the necessary MMU information
4039  Else
4046  If vmf->flags & FAULT_FLAG_WRITE Then flush_tlb_fix_spurious_fault(vmf->vma, vmf->address), since a spurious write-protection fault means a stale TLB entry may still hold the old read-only PTE
4049  unlock:
4050  pte_unmap_unlock(vmf->pte, vmf->ptl)
4051  Return 0
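The tail of the function is the software dirty/accessed tracking the Name comment describes: on a write fault the dirty bit is set, and on every fault the accessed (young) bit is set, before ptep_set_access_flags() writes the updated entry back. A toy model of that bit manipulation (the bit positions are invented; real ones are architecture-specific):

```c
#include <assert.h>
#include <stdint.h>

/* Made-up bit positions; real PTE layouts are per-architecture. */
#define MODEL_PTE_DIRTY  (1u << 1)
#define MODEL_PTE_YOUNG  (1u << 2)

/* Models the pte_mkdirty()/pte_mkyoung() step of handle_pte_fault():
 * a write fault marks the entry dirty, and any fault marks it young. */
static uint32_t fixup_access_bits(uint32_t entry, int write_fault)
{
    if (write_fault)
        entry |= MODEL_PTE_DIRTY;   /* pte_mkdirty() */
    entry |= MODEL_PTE_YOUNG;       /* pte_mkyoung() */
    return entry;
}
```

This is exactly the work that x86-class hardware does on its own, which is why the walkthrough notes that update_mmu_cache() is a no-op there, while most RISC architectures rely on this software path.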
Caller
Name                Describe
__handle_mm_fault   By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().