Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/hugetlb.c  Create Date: 2022-07-28 15:26:54
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: alloc_huge_page

Proto: struct page *alloc_huge_page(struct vm_area_struct *vma, unsigned long addr, int avoid_reserve)

Type: struct page *

Parameter:

Type                       Name
struct vm_area_struct *    vma
unsigned long              addr
int                        avoid_reserve
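Callers never receive a NULL return: on failure the function hands back an ERR_PTR-encoded error (-ENOMEM when the reserve map lookup fails, -ENOSPC when the subpool or a later step runs out of pages; see lines 2013, 2027 and 2095 below), so the result must be tested with IS_ERR(). A minimal sketch of the calling pattern, loosely modeled on hugetlb_no_page() from the caller list at the end of this report (haddr, ret and the out label are illustrative, not verbatim source):

        struct page *page;

        /* haddr: faulting address aligned to the huge page boundary (illustrative) */
        page = alloc_huge_page(vma, haddr, 0);
        if (IS_ERR(page)) {
                /* fault handlers convert the errno into a VM_FAULT_* code */
                ret = vmf_error(PTR_ERR(page));
                goto out;
        }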
1998  spool = subpool_vma(vma)
1999  h = hstate_vma(vma)
2006  idx = hstate_index(h)
2012  map_chg = gbl_chg = vma_needs_reservation(h, vma, addr)
2013  If map_chg < 0 Then Return ERR_PTR(-ENOMEM)
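In the source, map_chg and gbl_chg are declared as long so the negative error return of vma_needs_reservation() can be detected. Lines 2012-2013 read roughly as follows (a reconstruction; the descriptive comment is paraphrased rather than quoted):

        long map_chg, map_commit;
        long gbl_chg;

        /*
         * Ask the region/reserve map whether this process already holds a
         * reservation for the page about to be allocated.  Zero means a
         * reservation exists, a positive value means a page must still be
         * charged, and a negative value is an allocation error.
         */
        map_chg = gbl_chg = vma_needs_reservation(h, vma, addr);
        if (map_chg < 0)
                return ERR_PTR(-ENOMEM);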
2023  If map_chg || avoid_reserve Then
2024  gbl_chg = hugepage_subpool_get_pages(spool, 1) (subpool accounting for allocating and reserving pages)
2025  If gbl_chg < 0 Then
2027  Return ERR_PTR(-ENOSPC)
2038  If avoid_reserve Then gbl_chg = 1
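When the VMA has no reservation (map_chg != 0), or reserves are deliberately bypassed (avoid_reserve), one page is charged against the subpool limit before anything else is attempted. A reconstruction of the block at lines 2023-2038 (comments paraphrased; the early-exit at source line 2026, which releases the pending reserve-map entry, is elided in the listing above but shown here):

        if (map_chg || avoid_reserve) {
                gbl_chg = hugepage_subpool_get_pages(spool, 1);
                if (gbl_chg < 0) {
                        vma_end_reservation(h, vma, addr);
                        return ERR_PTR(-ENOSPC);
                }

                /*
                 * gbl_chg == 0 would mean the subpool itself already holds a
                 * reserve for this page; when avoid_reserve is set, force a
                 * global charge anyway so the allocation is treated as
                 * unreserved further down.
                 */
                if (avoid_reserve)
                        gbl_chg = 1;
        }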
2042  ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), & h_cg)
2043  If ret Then Go to out_subpool_put
2046  spin_lock(&hugetlb_lock) (protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages, free_huge_pages, and surplus_huge_pages)
2052  page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg)
2053  If Not page Then
2054  spin_unlock(&hugetlb_lock)
2055  page = alloc_buddy_huge_page_with_mpol(h, vma, addr) (use the VMA's mempolicy to allocate a huge page from the buddy allocator)
2056  If Not page Then Go to out_uncharge_cgroup
2060  h->resv_huge_pages--
2062  spin_lock(&hugetlb_lock)
2063  list_move(&page->lru, &h->hugepage_activelist) (move the page onto the hstate's active list)
2066  hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page) (must be called with hugetlb_lock held)
2067  spin_unlock(&hugetlb_lock)
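Lines 2046-2067 are the core of the allocation: under hugetlb_lock, first try to dequeue an already-free huge page for this VMA; only if that fails, drop the lock, allocate a fresh huge page from the buddy allocator using the VMA's mempolicy, and put it on the active list. A reconstruction of the block; the guard around the resv_huge_pages decrement (source lines 2057-2059) is elided in the listing above and included here, so treat this as a sketch rather than a verbatim excerpt:

        spin_lock(&hugetlb_lock);
        /* gbl_chg == 0 tells dequeue_huge_page_vma() a reservation exists. */
        page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
        if (!page) {
                spin_unlock(&hugetlb_lock);
                page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
                if (!page)
                        goto out_uncharge_cgroup;
                if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
                        /* The freshly allocated page consumes a reservation. */
                        SetPagePrivate(page);
                        h->resv_huge_pages--;
                }
                spin_lock(&hugetlb_lock);
                list_move(&page->lru, &h->hugepage_activelist);
                /* Fall through */
        }
        hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
        spin_unlock(&hugetlb_lock);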
2069  set_page_private(page, (unsigned long)spool)
2071  map_commit = vma_commit_reservation(h, vma, addr)
2072  If unlikely(map_chg > map_commit) Then
2084  rsv_adjust = hugepage_subpool_put_pages(spool, 1) (subpool accounting for freeing and unreserving pages; returns the number of global page reservations that must be dropped, which may differ from the passed delta when a subpool minimum size must be maintained)
2085  hugetlb_acct_memory(h, -rsv_adjust)
2087  Return page
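Once a page is in hand, its owning subpool is recorded in page->private and the reservation is committed to the reserve map. If the map changed between vma_needs_reservation() at line 2012 and vma_commit_reservation() at line 2071, this allocation raced with hugetlb_reserve_pages(), and the duplicate subpool and global charges are given back. A sketch of lines 2069-2087 (comments paraphrased):

        set_page_private(page, (unsigned long)spool);

        map_commit = vma_commit_reservation(h, vma, addr);
        if (unlikely(map_chg > map_commit)) {
                /*
                 * A reservation for this page was added to the map
                 * concurrently: undo the extra subpool charge and drop the
                 * corresponding number of global reservations.
                 */
                long rsv_adjust;

                rsv_adjust = hugepage_subpool_put_pages(spool, 1);
                hugetlb_acct_memory(h, -rsv_adjust);
        }
        return page;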
2089  out_uncharge_cgroup:
2090  hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg)
2091  out_subpool_put:
2092  If map_chg || avoid_reserve Then hugepage_subpool_put_pages(spool, 1) (give the subpool page back)
2094  vma_end_reservation(h, vma, addr)
2095  Return ERR_PTR(-ENOSPC)
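The two error labels unwind in reverse order of acquisition: the cgroup charge from line 2042 first, then the subpool page taken at line 2024, and finally the pending reserve-map entry. Reconstructed from the listing above as a sketch:

out_uncharge_cgroup:
        hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
out_subpool_put:
        if (map_chg || avoid_reserve)
                hugepage_subpool_put_pages(spool, 1);
        vma_end_reservation(h, vma, addr);
        return ERR_PTR(-ENOSPC);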
Caller
Name                        Description
hugetlb_cow                 hugetlb_cow() should be called with the page lock of the original hugepage held. Called with hugetlb_instantiation_mutex held and pte_page locked, so we cannot race with other handlers or page migration.
hugetlb_no_page
hugetlb_mcopy_atomic_pte    Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte with modifications for huge pages.