Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:40:35
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: zap_pte_range

Proto: static unsigned long zap_pte_range(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr, unsigned long end, struct zap_details *details)

Type: unsigned long

Parameter:

Type                      Parameter
struct mmu_gather *       tlb
struct vm_area_struct *   vma
pmd_t *                   pmd
unsigned long             addr
unsigned long             end
struct zap_details *      details
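The walkthrough below follows the function body. Its core is a do-while walk over the addresses covered by one PTE page, batching RSS counter updates in a local vector (init_rss_vec) and applying them once at the end (add_mm_rss_vec) instead of touching the mm counters per page. A minimal userspace sketch of that pattern; PAGE_SIZE, NR_MM_COUNTERS, and the counter index used here are stand-ins for the kernel's definitions:

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define NR_MM_COUNTERS 4   /* stand-in for the kernel's counter count */

/* Walk [addr, end) one page at a time, batching per-counter deltas in
 * a local rss[] vector the way zap_pte_range does, then apply them in
 * one pass.  Returns the number of pages visited; the real function
 * inspects and clears a PTE at each step. */
static unsigned long walk_range(unsigned long addr, unsigned long end,
                                long *total)
{
    long rss[NR_MM_COUNTERS] = { 0 };   /* init_rss_vec(rss) */
    unsigned long pages = 0;

    do {
        rss[0]--;       /* e.g. rss[mm_counter(page)]-- for one page */
        pages++;
    } while (addr += PAGE_SIZE, addr != end);

    /* add_mm_rss_vec(mm, rss): apply the batched deltas once */
    for (int i = 0; i < NR_MM_COUNTERS; i++)
        *total += rss[i];
    return pages;
}
```

The real loop also advances pte in lockstep with addr and can break out early (need_resched(), or a full TLB gather batch), which is why the function can return with addr != end.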
1027  mm = tlb->mm
1028  force_flush = 0
1035  tlb_change_page_size(tlb, PAGE_SIZE)
1036  again:
1037  init_rss_vec(rss)
1038  start_pte = pte_offset_map_lock(mm, pmd, addr, & ptl)
1039  pte = start_pte
1040  flush_tlb_batched_pending(mm)
1041  arch_enter_lazy_mmu_mode() (a facility to provide lazy MMU batching)
1042  Do
1043  ptent = pte
1044  If pte_none(ptent) Then Continue
1047  If need_resched() Then Break
1050  If pte_present(ptent) Then
1070  If Not PageAnon(page) Then
1079  rss[mm_counter(page)]--
1080  page_remove_rmap(page, false)
1084  force_flush = 1
1085  addr += PAGE_SIZE
1086  Break
1088  Continue
1091  entry = pte_to_swp_entry(ptent) (convert the arch-dependent pte representation into an arch-independent swp_entry_t)
1107  rss[mm_counter(page)]--
1108  page_remove_rmap(page, false)
1109  put_page(page)
1110  Continue
1114  If unlikely(details) Then Continue
1117  If Not non_swap_entry(entry) Then rss[MM_SWAPENTS]--
1119  Else if is_migration_entry(entry) Then
1123  rss[mm_counter(page)]--
1125  If unlikely(!free_swap_and_cache(entry)) Then print_bad_pte(vma, addr, ptent, NULL) (prints an error when a bad pte is found, for example a PFN-mapped pte in a region that does not allow it; the caller must still handle the error)
1127  pte_clear_not_present_full(mm, addr, pte, tlb->fullmm) (some architectures can avoid expensive synchronization primitives when modifying PTEs that are already not present, or during address space destruction)
1128  While (pte++, addr += PAGE_SIZE, addr != end) repeat (closes the Do loop opened at 1042)
1130  add_mm_rss_vec(mm, rss)
1131  arch_leave_lazy_mmu_mode()
1134  If force_flush Then tlb_flush_mmu_tlbonly(tlb)
1136  pte_unmap_unlock(start_pte, ptl)
1144  If force_flush Then
1145  force_flush = 0
1146  tlb_flush_mmu(tlb)
1149  If addr != end Then
1150  cond_resched()
1151  Go to again
1154  Return addr
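Lines 1144 through 1151 form a restart pattern: when the TLB gather batch filled up mid-walk (force_flush), the batch is flushed with tlb_flush_mmu(), and if the range is unfinished the code reschedules and jumps back to again. A hedged userspace analogue of that goto-based shape; BATCH_MAX and the abstract "items" are invented for illustration:

```c
#include <assert.h>

#define BATCH_MAX 4   /* hypothetical batch capacity */

/* Process n items in batches of at most BATCH_MAX, flushing a full
 * batch and restarting the loop, mirroring the force_flush / goto
 * again shape of zap_pte_range.  Returns the number of flushes. */
static int process_all(int n)
{
    int done = 0, flushes = 0;
    int force_flush = 0, batched = 0;

again:
    while (done < n) {
        batched++;                /* one "page" handled */
        done++;
        if (batched == BATCH_MAX) {
            force_flush = 1;      /* batch full: stop and flush */
            break;
        }
    }

    if (force_flush) {            /* tlb_flush_mmu(tlb) */
        force_flush = 0;
        batched = 0;
        flushes++;
    }
    if (done < n)                 /* addr != end */
        goto again;               /* cond_resched(); goto again */

    return flushes;
}
```

Breaking out to flush keeps the batch bounded while still making forward progress, and the need_resched() break at line 1047 reuses the same exit path so a long run of PTEs cannot stall the CPU.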
Caller
Name            Description
zap_pmd_range