Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/hugetlb.c  Create Date: 2022-07-28 15:29:12
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte, with modifications for huge pages.

Proto: int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte, struct vm_area_struct *dst_vma, unsigned long dst_addr, unsigned long src_addr, struct page **pagep)

Type: int

Parameter:

Type                     Name
struct mm_struct *       dst_mm
pte_t *                  dst_pte
struct vm_area_struct *  dst_vma
unsigned long            dst_addr
unsigned long            src_addr
struct page **           pagep
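For context, hugetlb_mcopy_atomic_pte() is the kernel-side worker behind the UFFDIO_COPY ioctl when the destination VMA is hugetlb-backed. A minimal userspace sketch of the path that reaches it follows (error handling elided; the 2 MiB huge-page size and the anonymous MAP_HUGETLB mapping are illustrative assumptions, not part of this report). The annotated walkthrough after it follows the v5.5.9 source line numbers.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define HPAGE_SZ (2UL << 20)   /* assumed 2 MiB huge pages (x86-64 default) */

    int main(void)
    {
        /* Open a userfaultfd and negotiate the API. */
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        /* A hugetlb destination range, registered for missing-page faults. */
        char *dst = mmap(NULL, HPAGE_SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)dst, .len = HPAGE_SZ },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* The source buffer is ordinary memory; it need not be huge-page backed. */
        char *src = mmap(NULL, HPAGE_SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(src, 0xab, HPAGE_SZ);

        /* dst and len must be huge-page aligned; the kernel resolves this
         * request through hugetlb_mcopy_atomic_pte() for each huge page. */
        struct uffdio_copy copy = {
            .dst = (unsigned long)dst,
            .src = (unsigned long)src,
            .len = HPAGE_SZ,
        };
        ioctl(uffd, UFFDIO_COPY, &copy);
        return 0;
    }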
4149  vm_shared = dst_vma->vm_flags & VM_SHARED
4150  h = hstate_vma(dst_vma)
4156  If Not *pagep Then
4157  ret = -ENOMEM
4158  page = alloc_huge_page(dst_vma, dst_addr, 0)
4159  If IS_ERR(page) Then Go to out
4162  ret = copy_huge_page_from_user(page, (const void __user *)src_addr, pages_per_huge_page(h), false)
4168  ret = -ENOENT (the copy faulted; fall back to copying outside mmap_sem)
4169  *pagep = page (hand the allocated page back to the caller; don't free it)
4171  Go to out
4173  Else
4174  page = *pagep
4175  * pagep = NULL
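The -ENOENT path above (4168-4171) implements a fallback: copy_huge_page_from_user() is first attempted with page faults disabled, because mmap_sem is held; if the source page is not resident, the freshly allocated huge page is handed back through *pagep so the caller can redo the copy with faults allowed. A condensed sketch of the caller side, paraphrasing __mcopy_atomic_hugetlb() in this kernel version:

    err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
                                   dst_addr, src_addr, &page);
    if (err == -ENOENT) {
        /* Copy faulted under mmap_sem: drop the lock, redo the copy
         * with page faults allowed, then retake the lock and retry. */
        up_read(&dst_mm->mmap_sem);
        err = copy_huge_page_from_user(page,
                                       (const void __user *)src_addr,
                                       pages_per_huge_page(h), true);
        down_read(&dst_mm->mmap_sem);
        /* ... revalidate dst_vma, then retry the atomic insert ... */
    }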
4183  __SetPageUptodate(page)
4185  mapping = dst_vma->vm_file->f_mapping
4186  idx = vma_hugecache_offset(h, dst_vma, dst_addr) (the address within this VMA converted to a page offset within the mapping, in huge-page units)
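For reference, vma_hugecache_offset() computes this index as follows (paraphrasing the v5.5 source; vm_pgoff is kept in base-page units, hence the extra shift by the huge-page order):

    idx = ((dst_addr - dst_vma->vm_start) >> huge_page_shift(h)) +
          (dst_vma->vm_pgoff >> huge_page_order(h));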
4191  If vm_shared Then
4192  size = i_size_read(mapping->host) >> huge_page_shift(h)
4193  ret = -EFAULT
4194  If idx >= size Then Go to out_release_nounlock
4203  ret = huge_add_to_page_cache(page, mapping, idx)
4204  If ret Then Go to out_release_nounlock
4208  ptl = huge_pte_lockptr(h, dst_mm, dst_pte)
4209  spin_lock(ptl)
4220  size = i_size_read(mapping->host) >> huge_page_shift(h) (rechecked under the PTE lock)
4221  ret = -EFAULT
4222  If idx >= size Then Go to out_release_unlock
4225  ret = -EEXIST
4226  If Not huge_pte_none(huge_ptep_get(dst_pte)) Then Go to out_release_unlock
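Rendered as source code, the recheck under the PTE lock (4208-4226) is approximately the following; both checks must be repeated because the unlocked lookup at 4186-4204 may have raced with truncation or with another thread populating the PTE:

    ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
    spin_lock(ptl);

    /* Truncate may have shrunk the file since the unlocked check. */
    size = i_size_read(mapping->host) >> huge_page_shift(h);
    ret = -EFAULT;
    if (idx >= size)
        goto out_release_unlock;

    /* Another thread may have installed this PTE in the meantime. */
    ret = -EEXIST;
    if (!huge_pte_none(huge_ptep_get(dst_pte)))
        goto out_release_unlock;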
4229  If vm_shared Then
4230  page_dup_rmap(page, true)
4231  Else
4232  ClearPagePrivate(page)
4233  hugepage_add_new_anon_rmap(page, dst_vma, dst_addr)
4236  _dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE)
4237  If dst_vma->vm_flags & VM_WRITE Then _dst_pte = huge_pte_mkdirty(_dst_pte)
4239  _dst_pte = pte_mkyoung(_dst_pte)
4241  set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte)
4243  huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte, dst_vma->vm_flags & VM_WRITE)
4245  hugetlb_count_add(pages_per_huge_page(h), dst_mm)
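The PTE construction at 4236-4245, rendered as source code matching the walkthrough above:

    /* Writable iff the VMA is; a UFFDIO_COPY'd page is born dirty and young. */
    _dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE);
    if (dst_vma->vm_flags & VM_WRITE)
        _dst_pte = huge_pte_mkdirty(_dst_pte);
    _dst_pte = pte_mkyoung(_dst_pte);

    set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);

    (void)huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte,
                                     dst_vma->vm_flags & VM_WRITE);

    /* Account one huge page (in base-page units) to the destination mm. */
    hugetlb_count_add(pages_per_huge_page(h), dst_mm);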
4248  update_mmu_cache(dst_vma, dst_addr, dst_pte) (a no-op on x86, whose page tables carry all the MMU information)
4250  spin_unlock(ptl)
4251  set_page_huge_active(page) (never called for a tail page)
4252  If vm_shared Then unlock_page(page)
4254  ret = 0
4255  out :
4256  Return ret
4257  out_release_unlock :
4258  spin_unlock(ptl)
4259  If vm_shared Then unlock_page(page)
4261  out_release_nounlock :
4262  put_page(page) (frees the huge page if we were its last user)
4263  Go to out
Caller
Name                    Describe
__mcopy_atomic_hugetlb  __mcopy_atomic processing for HUGETLB vmas. Note that this routine is called with mmap_sem held; it will release mmap_sem before returning.