Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c  Create Date: 2022-07-28 14:43:03
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: do_fault_around() tries to map a few pages around the fault address. The hope is that the pages will be needed soon, lowering the number of faults to handle. It uses vm_ops->map_pages() to map the pages, which skips a page if it is already mapped.

Proto:static vm_fault_t do_fault_around(struct vm_fault *vmf)

Type:vm_fault_t

Parameter:

Type | Name
struct vm_fault * | vmf
3587  address = vmf->address (faulting virtual address)
3588  start_pgoff = vmf->pgoff (logical page offset based on vma)
3591  ret = 0
3593  nr_pages = READ_ONCE(fault_around_bytes) >> PAGE_SHIFT
3594  mask = ~(nr_pages * PAGE_SIZE - 1) & PAGE_MASK
3596  vmf->address = max(address & mask, vmf->vma->vm_start)
3597  off = ((address - vmf->address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
3598  start_pgoff -= off
3604  end_pgoff = start_pgoff - ((vmf->address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) + PTRS_PER_PTE - 1 (end at the page-table boundary, the end of the vma, or nr_pages from start_pgoff, whichever is nearest)
3607  end_pgoff = min3(end_pgoff, vma_pages(vmf->vma) + vmf->vma->vm_pgoff - 1, start_pgoff + nr_pages - 1)
3610  If pmd_none(*vmf->pmd) Then
3611  vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm) (pre-allocate the pte page table, because vm_ops->map_pages() calls alloc_set_pte() from atomic context)
3612  If Not vmf->prealloc_pte Then Go to out
3614  smp_wmb()
3617  map_pages(vmf, start_pgoff, end_pgoff)
3620  If pmd_trans_huge(*vmf->pmd) Then (a huge page is mapped: the fault is solved)
3621  ret = VM_FAULT_NOPAGE
3622  Go to out
3626  If Not vmf->pte Then Go to out (map_pages() did nothing useful; cold page cache?)
3630  vmf->pte -= (vmf->address >> PAGE_SHIFT) - (address >> PAGE_SHIFT) (point pte back at the original fault address to check whether the fault is solved)
3631  If Not pte_none(*vmf->pte) Then ret = VM_FAULT_NOPAGE
3633  pte_unmap_unlock(vmf->pte, vmf->ptl)
3634  out :
3635  vmf->address = address
3636  vmf->pte = NULL
3637  Return ret
Caller
Name | Describe
do_read_fault