Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:43:21
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:do_numa_page

Proto:static vm_fault_t do_numa_page(struct vm_fault *vmf)

Type:vm_fault_t

Parameter:

Type                Parameter Name
struct vm_fault *   vmf
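The parameter descriptions quoted throughout the trace below ("Target VMA", "Faulting virtual address", "Value of PTE at the time of fault", and so on) are the comments attached to the fields of struct vm_fault. A trimmed sketch of the fields this function actually uses, following the mainline v5.5 definition in include/linux/mm.h (unrelated fields omitted):

    struct vm_fault {
            struct vm_area_struct *vma;   /* Target VMA */
            unsigned long address;        /* Faulting virtual address */
            pmd_t *pmd;                   /* Pointer to pmd entry matching 'address' */
            pte_t orig_pte;               /* Value of PTE at the time of fault */
            pte_t *pte;                   /* Pointer to pte entry matching 'address';
                                             NULL if the page table hasn't been allocated */
            spinlock_t *ptl;              /* Page table lock: protects the pte page table
                                             if 'pte' is not NULL, otherwise the pmd */
            /* ... other fields omitted ... */
    };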
3816  vma = vmf->vma, the target VMA
3817  struct page * page = NULL
3818  page_nid = NUMA_NO_NODE
3821  bool migrated = false
3823  was_writable = pte_savedwrite(vmf->orig_pte), the value of the PTE at the time of the fault
3824  flags = 0
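Read back as C, the declarations traced above (lines 3816-3824) correspond roughly to the following sketch; the locals not shown in the trace (last_cpupid, target_nid, pte, old_pte) are filled in from the mainline v5.5 source:

    struct vm_area_struct *vma = vmf->vma;               /* 3816 */
    struct page *page = NULL;                            /* 3817 */
    int page_nid = NUMA_NO_NODE;                         /* 3818 */
    int last_cpupid;
    int target_nid;
    bool migrated = false;                                /* 3821 */
    pte_t pte, old_pte;
    bool was_writable = pte_savedwrite(vmf->orig_pte);    /* 3823 */
    int flags = 0;                                        /* 3824 */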
3831  vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd): the page table lock that protects the pte page table if 'pte' is not NULL, otherwise the pmd
3832  spin_lock(vmf->ptl)
3833  If unlikely(!pte_same(*vmf->pte, vmf->orig_pte)) Then (the PTE changed while the lock was not held)
3834  pte_unmap_unlock(vmf->pte, vmf->ptl)
3835  Go to out
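Lines 3831-3835 are the usual lock-and-revalidate step: the pte was sampled without the page table lock, so it must be re-checked against vmf->orig_pte once the lock is held. A sketch of the corresponding C, using the v5.5 field names:

    /* Take the pte lock and make sure the entry we faulted on is still
     * the one we saw; if not, someone else already handled it. */
    vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
    spin_lock(vmf->ptl);
    if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
            pte_unmap_unlock(vmf->pte, vmf->ptl);
            goto out;
    }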
3842  old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte): start a pte protection read-modify-write transaction, which protects against asynchronous hardware modifications to the pte
3843  pte = pte_modify(old_pte, vma->vm_page_prot): apply the access permissions of this VMA
3844  pte = pte_mkyoung(pte)
3845  If was_writable Then pte = pte_mkwrite(pte)
3847  ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte): commit the update to the pte, leaving any hardware-controlled bits in the PTE unmodified
3848  update_mmu_cache(vma, vmf->address, vmf->pte): a no-op on x86, which has no external MMU info because the kernel page tables already contain all the necessary information
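Lines 3842-3848 turn the NUMA-hinting (PROT_NONE) pte back into a normally accessible one inside a protection read-modify-write transaction. A sketch, assuming the v5.5 signatures of ptep_modify_prot_start()/ptep_modify_prot_commit():

    /* Make the pte present again with the VMA's normal protections,
     * marking it accessed and restoring any saved write permission. */
    old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
    pte = pte_modify(old_pte, vma->vm_page_prot);
    pte = pte_mkyoung(pte);
    if (was_writable)
            pte = pte_mkwrite(pte);
    ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
    update_mmu_cache(vma, vmf->address, vmf->pte);  /* no-op on x86 */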
3850  page = vm_normal_page(vma, vmf->address, pte): get the "struct page" associated with the pte; "special" mappings do not wish to be associated with a struct page (either it doesn't exist, or it exists but they don't want to touch it)
3851  If Not page Then
3852  pte_unmap_unlock(vmf->pte, vmf->ptl)
3853  Return 0
3857  If PageCompound(page) Then (PTE-mapped THP is not handled here)
3858  pte_unmap_unlock(vmf->pte, vmf->ptl)
3859  Return 0
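Lines 3850-3859 look up the struct page behind the pte and bail out for the cases this handler does not deal with: special mappings without a struct page, and PTE-mapped compound pages. Sketch:

    page = vm_normal_page(vma, vmf->address, pte);
    if (!page) {
            /* Special mapping: nothing to account or migrate. */
            pte_unmap_unlock(vmf->pte, vmf->ptl);
            return 0;
    }

    /* PTE-mapped THP is left alone here. */
    if (PageCompound(page)) {
            pte_unmap_unlock(vmf->pte, vmf->ptl);
            return 0;
    }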
3870  If Not pte_write(pte) Then flags |= TNF_NO_GROUP, avoiding NUMA task grouping on read-only pages
3877  If page_mapcount(page) > 1 && vma->vm_flags & VM_SHARED Then flags |= TNF_SHARED, flagging a page shared between multiple address spaces
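Lines 3870-3877 classify the fault for the NUMA balancing heuristics: read-only pages are kept out of task grouping, and pages mapped into several address spaces are marked shared. Sketch:

    /* Avoid grouping on read-only pages; they can be kept in shared
     * cache state across nodes anyway. */
    if (!pte_write(pte))
            flags |= TNF_NO_GROUP;

    /* Flag pages shared between multiple address spaces; used later
     * when deciding whether to group tasks together. */
    if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
            flags |= TNF_SHARED;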
3880  last_cpupid = page_cpupid_last(page)
3881  page_nid = page_to_nid(page)
3882  target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid, &flags)
3884  pte_unmap_unlock(vmf->pte, vmf->ptl)
3885  If target_nid == NUMA_NO_NODE Then
3886  put_page(page)
3887  Go to out
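Lines 3880-3887 record where the page currently lives, ask numa_migrate_prep() for a preferred node, and drop the page table lock before any migration attempt. numa_migrate_prep() takes an extra reference on the page (and may set TNF_FAULT_LOCAL), which is why the no-migration path ends with put_page(). Sketch:

    last_cpupid = page_cpupid_last(page);
    page_nid = page_to_nid(page);
    target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
                                   &flags);
    pte_unmap_unlock(vmf->pte, vmf->ptl);
    if (target_nid == NUMA_NO_NODE) {
            /* No better node: drop the extra reference and just
             * account the fault where the page already is. */
            put_page(page);
            goto out;
    }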
3891  migrated = migrate_misplaced_page(page, vma, target_nid)
3892  If migrated Then
3893  page_nid = target_nid
3894  flags |= TNF_MIGRATED
3895  Else flags |= TNF_MIGRATE_FAIL
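Lines 3891-3895 attempt the actual migration to the chosen node and record the outcome in the fault flags. Sketch:

    /* Try to move the page to the node that took the fault. */
    migrated = migrate_misplaced_page(page, vma, target_nid);
    if (migrated) {
            page_nid = target_nid;
            flags |= TNF_MIGRATED;
    } else
            flags |= TNF_MIGRATE_FAIL;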
3898  out:
3899  If page_nid != NUMA_NO_NODE Then task_numa_fault(last_cpupid, page_nid, 1, flags)
3901  Return 0
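Lines 3898-3901 account the hinting fault against the current task; the single base page is charged to whichever node it ended up on. Sketch:

    out:
        if (page_nid != NUMA_NO_NODE)
                task_numa_fault(last_cpupid, page_nid, 1, flags);  /* 1 page */
        return 0;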
Caller
Name                Description
handle_pte_fault    These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures)
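For context, handle_pte_fault() hands a fault over to do_numa_page() when the saved pte is a NUMA-hinting (PROT_NONE) entry. A simplified sketch of that dispatch; the real condition also requires the VMA to be accessible (readable, writable, or executable), which is spelled differently across kernel versions and is elided here:

    /* Inside handle_pte_fault(), simplified: a protnone pte on an
     * accessible VMA is treated as a NUMA hinting fault. */
    if (pte_protnone(vmf->orig_pte))
            return do_numa_page(vmf);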