Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/userfaultfd.c  Create Date: 2022-07-28 16:33:37
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name:mcopy_atomic_pte

Proto:static int mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd, struct vm_area_struct *dst_vma, unsigned long dst_addr, unsigned long src_addr, struct page **pagep)

Type:int

Parameter:

Type                      Name
struct mm_struct *        dst_mm
pmd_t *                   dst_pmd
struct vm_area_struct *   dst_vma
unsigned long             dst_addr
unsigned long             src_addr
struct page **            pagep
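
For reference, the walkthrough below also uses a number of locals declared at the top of the function. A sketch of the declarations as they appear in this kernel series (reconstructed; the exact wording and ordering in v5.5.9 may differ slightly):

    static int mcopy_atomic_pte(struct mm_struct *dst_mm,
                                pmd_t *dst_pmd,
                                struct vm_area_struct *dst_vma,
                                unsigned long dst_addr,
                                unsigned long src_addr,
                                struct page **pagep)
    {
            struct mem_cgroup *memcg;    /* charge target filled by mem_cgroup_try_charge() */
            pte_t _dst_pte, *dst_pte;    /* new PTE value / mapped PTE slot */
            spinlock_t *ptl;             /* page table lock from pte_offset_map_lock() */
            void *page_kaddr;            /* kmap_atomic() address of the new page */
            struct page *page;           /* freshly allocated (or caller-provided) page */
            pgoff_t offset, max_off;     /* i_size bounds check for the shmem case */
            struct inode *inode;
            int ret;
            ...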
67  If Not * pagep Then (the caller did not pass in a pre-filled page)
68  ret = -ENOMEM
69  page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr)
70  If Not page Then Go to out
73  page_kaddr = kmap_atomic(page)
74  ret = copy_from_user(page_kaddr, (const void __user *)src_addr, PAGE_SIZE)
77  kunmap_atomic(page_kaddr)
81  ret = -ENOENT (reached only when copy_from_user above returned non-zero: the source page could not be copied with page faults disabled under kmap_atomic)
82  * pagep = page (keep the allocated page and hand it back to the caller; don't free it)
84  Go to out
86  Else
87  page = * pagep (reuse the page the caller already filled outside mmap_sem)
88  * pagep = NULL
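
Lines 67-88 together pick the page that will be installed: either allocate a fresh highmem-movable page and copy the source page into it under kmap_atomic(), or reuse the page the caller already filled on a previous attempt. A sketch of that branch, reconstructed from the steps above (details may differ slightly from the exact v5.5.9 source):

    if (!*pagep) {
            ret = -ENOMEM;
            page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
            if (!page)
                    goto out;

            /* Copy the source page while the new page is atomically mapped. */
            page_kaddr = kmap_atomic(page);
            ret = copy_from_user(page_kaddr,
                                 (const void __user *) src_addr,
                                 PAGE_SIZE);
            kunmap_atomic(page_kaddr);

            /* Fall back to copy_from_user() outside mmap_sem. */
            if (unlikely(ret)) {
                    ret = -ENOENT;
                    *pagep = page;  /* keep the page for the retry */
                    goto out;
            }
    } else {
            /* The caller already copied the data on a previous pass. */
            page = *pagep;
            *pagep = NULL;
    }

Because kmap_atomic() runs with page faults disabled, copy_from_user() here can fail if the source page is not resident; the -ENOENT return tells the caller to redo the copy with faults enabled and retry (see the sketch after the walkthrough).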
96  __SetPageUptodate(page) (the memory barrier inside makes the page contents written above visible before the set_pte_at below)
98  ret = -ENOMEM
99  If mem_cgroup_try_charge(page, dst_mm, GFP_KERNEL, &memcg, false) fails Then Go to out_release (try to charge the new page to the memcg that dst_mm belongs to)
102  _dst_pte = mk_pte(page, dst_vma->vm_page_prot) (build a page table entry from the page and the VMA's access permissions)
103  If dst_vma->vm_flags & VM_WRITE Then _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte))
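
A short sketch of this PTE construction (lines 102-104): the entry takes the VMA's protection bits, and is pre-dirtied and made writable only when the VMA permits writes, so a later store through the mapping does not need an extra write-protect fault:

    _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
    if (dst_vma->vm_flags & VM_WRITE)
            _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));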
106  dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, & ptl)
107  If dst_vma->vm_file is not NULL Then (the shmem MAP_PRIVATE case requires an i_size check)
109  inode = dst_vma->vm_file->f_inode
110  offset = linear_page_index(dst_vma, dst_addr)
111  max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE)
112  ret = -EFAULT
113  If unlikely(offset >= max_off) Then Go to out_release_uncharge_unlock
116  ret = -EEXIST
117  If Not pte_none(*dst_pte) Then Go to out_release_uncharge_unlock
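
Lines 106-117 take the page table lock and verify that installing the page is still valid: for a file-backed (shmem) destination the target offset must lie below i_size, and the destination PTE must still be empty. A sketch of the checks, following the walkthrough (reconstructed; wording of the source may differ):

    dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
    if (dst_vma->vm_file) {
            /* The shmem MAP_PRIVATE case requires checking i_size. */
            inode = dst_vma->vm_file->f_inode;
            offset = linear_page_index(dst_vma, dst_addr);
            max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
            ret = -EFAULT;
            if (unlikely(offset >= max_off))
                    goto out_release_uncharge_unlock;
    }
    ret = -EEXIST;
    if (!pte_none(*dst_pte))
            goto out_release_uncharge_unlock;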
120  inc_mm_counter(dst_mm, MM_ANONPAGES)
121  page_add_new_anon_rmap(page, dst_vma, dst_addr, false) (add the pte mapping to the new anonymous page)
122  mem_cgroup_commit_charge(page, memcg, false, false) (finalize the charge started by mem_cgroup_try_charge)
123  lru_cache_add_active_or_unevictable(page, dst_vma) (place the page on the active or unevictable LRU list, depending on its evictability)
125  set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte)
128  update_mmu_cache(dst_vma, dst_addr, dst_pte) (no TLB invalidation is needed since the PTE was non-present before; on x86 this is a no-op because the page tables already contain all the necessary MMU info)
130  pte_unmap_unlock(dst_pte, ptl)
131  ret = 0
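
Once the checks pass, lines 120-131 publish the page: bump the anonymous page counter, add the reverse mapping, commit the memcg charge, put the page on the LRU, install the PTE and drop the lock. A sketch of the success path (reconstructed from the steps above):

    inc_mm_counter(dst_mm, MM_ANONPAGES);
    page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
    mem_cgroup_commit_charge(page, memcg, false, false);
    lru_cache_add_active_or_unevictable(page, dst_vma);

    set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);

    /* No TLB invalidation needed: the PTE was not present before. */
    update_mmu_cache(dst_vma, dst_addr, dst_pte);

    pte_unmap_unlock(dst_pte, ptl);
    ret = 0;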
132  out :
133  Return ret
134  out_release_uncharge_unlock :
135  pte_unmap_unlock(dst_pte, ptl)
136  mem_cgroup_cancel_charge(page, memcg, false) (cancel the charge started by mem_cgroup_try_charge)
137  out_release :
138  put_page(page) (drop the reference; frees the page, and any swap cache attached to it, if this was the last user)
139  Go to out
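
The -ENOENT return from line 81 is not reported to user space. In the same file, __mcopy_atomic() (which reaches this function through mfill_atomic_pte(), listed under Caller below) catches it, drops mmap_sem, copies the source page with faults enabled, and then retries the operation with the pre-filled page passed back in through *pagep. A sketch of that retry handling, reconstructed for illustration (the exact v5.5.9 code may differ slightly):

    /* In __mcopy_atomic(), after mfill_atomic_pte() returns: */
    if (unlikely(err == -ENOENT)) {
            void *page_kaddr;

            up_read(&dst_mm->mmap_sem);     /* let the copy take page faults */
            BUG_ON(!page);

            page_kaddr = kmap(page);
            err = copy_from_user(page_kaddr,
                                 (const void __user *) src_addr,
                                 PAGE_SIZE);
            kunmap(page);
            if (unlikely(err)) {
                    err = -EFAULT;
                    goto out;
            }
            goto retry;     /* re-take mmap_sem and call mfill_atomic_pte() again */
    }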
Caller
Name                    Describe
mfill_atomic_pte
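
From user space, this path is exercised by the UFFDIO_COPY ioctl on a userfaultfd: for an anonymous VMA registered for missing-page handling, each UFFDIO_COPY reaches mcopy_atomic_pte() via __mcopy_atomic() and mfill_atomic_pte(). A minimal illustrative sketch (error handling omitted; assumes a kernel built with CONFIG_USERFAULTFD and permission to create a userfaultfd):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            long page_size = sysconf(_SC_PAGESIZE);

            /* Create the userfaultfd and negotiate the API. */
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
            struct uffdio_api api = { .api = UFFD_API };
            ioctl(uffd, UFFDIO_API, &api);

            /* Anonymous mapping whose missing pages userfaultfd will manage. */
            char *area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            struct uffdio_register reg = {
                    .range = { .start = (unsigned long) area, .len = page_size },
                    .mode  = UFFDIO_REGISTER_MODE_MISSING,
            };
            ioctl(uffd, UFFDIO_REGISTER, &reg);

            /* Source buffer in this process: becomes src_addr in the kernel. */
            char *src = malloc(page_size);
            memset(src, 'x', page_size);

            /*
             * UFFDIO_COPY atomically allocates, fills and maps the missing
             * destination page; in the kernel it ends up in mcopy_atomic_pte().
             */
            struct uffdio_copy copy = {
                    .dst = (unsigned long) area,
                    .src = (unsigned long) src,
                    .len = page_size,
                    .mode = 0,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);

            printf("byte after UFFDIO_COPY: %c\n", area[0]);   /* prints 'x' */
            return 0;
    }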