Function report

Linux Kernel

v5.5.9


Source Code: mm/huge_memory.c

Name:follow_trans_huge_pmd

Proto:struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd, unsigned int flags)

Type:struct page *

Parameter:

Type                     Name
struct vm_area_struct *  vma
unsigned long            addr
pmd_t *                  pmd
unsigned int             flags
1471  mm = vma->vm_mm (the address space we belong to)
1472  struct page *page = NULL
1474  assert_spin_locked(pmd_lockptr(mm, pmd)): the caller must already hold the PMD spinlock
1476  If flags contains FOLL_WRITE (check pte is writable) but can_follow_write_pmd(*pmd, flags) is false, go to out. FOLL_FORCE can write to even unwritable pmd's, but only after we've gone through a COW cycle and they are dirty.
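For reference, the helper behind this check sits just above this function in mm/huge_memory.c; a sketch of its v5.5 form (reconstructed, not quoted verbatim):

    /*
     * FOLL_FORCE can write to even unwritable pmd's, but only
     * after we've gone through a COW cycle and they are dirty.
     */
    static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
    {
            return pmd_write(pmd) ||
                   ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
    }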
1480  If flags contains FOLL_DUMP (give error on hole if it would be zero) and is_huge_zero_pmd(*pmd), return ERR_PTR(-EFAULT): the huge zero page is never dumped
1484  If flags contains FOLL_NUMA (force NUMA hinting page fault) and pmd_protnone(*pmd), go to out, so NUMA hinting faults serialise against migration in the fault paths
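Taken together, source lines 1476-1485 amount to three early exits, roughly:

    if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
            goto out;

    /* Avoid dumping huge zero page */
    if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
            return ERR_PTR(-EFAULT);

    /* Full NUMA hinting faults to serialise migration in fault paths */
    if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
            goto out;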
1487  page = pmd_page(*pmd): the head page mapped by this PMD (pmd_page is currently stuck as a macro due to an indirect forward reference to linux/mmzone.h's __section_mem_map_addr() definition)
1488  VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page)
1489  If flags contains FOLL_TOUCH (mark page accessed), call touch_pmd(vma, addr, pmd, flags)
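touch_pmd (same file) updates the access and dirty bits on the PMD itself; a sketch of its v5.5 shape, assuming the annotated source matches the stock kernel:

    static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
                          pmd_t *pmd, int flags)
    {
            pmd_t _pmd;

            _pmd = pmd_mkyoung(*pmd);
            if (flags & FOLL_WRITE)
                    _pmd = pmd_mkdirty(_pmd);
            if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
                                      pmd, _pmd, flags & FOLL_WRITE))
                    update_mmu_cache_pmd(vma, addr, pmd);
    }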
1491  If flags contains FOLL_MLOCK (lock present pages) and the VMA's vm_flags (see mm.h) contain VM_LOCKED, try to mlock the page:
1513  If PageAnon(page) and compound_mapcount(page) != 1, go to skip_mlock: a shared anonymous THP (e.g. a read-only mapping shared over fork()) is never mlocked
1515  If PageDoubleMap(page) (the compound page is mapped with PTEs as well as PMDs, tracked so THP can postpone per-small-page mapcount accounting) or page->mapping is NULL, go to skip_mlock
1517  If trylock_page(page) fails to take the page lock, go to skip_mlock
1519  lru_add_drain()
1520  If page->mapping is still set and the page is not PageDoubleMap, call mlock_vma_page(page): mark the page as mlocked if not already; if it is on the LRU, isolate it and put it back so it moves to the unevictable list
1522  unlock_page(page): drop the page lock taken at line 1517 (this also wakes sleepers in wait_on_page_locked() and wait_on_page_writeback(), which share a wakeup mechanism)
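Putting source lines 1491-1522 together, the FOLL_MLOCK branch reads approximately:

    if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
            /* Don't mlock shared anon THPs or pte-mapped (DoubleMap) THPs. */
            if (PageAnon(page) && compound_mapcount(page) != 1)
                    goto skip_mlock;
            if (PageDoubleMap(page) || !page->mapping)
                    goto skip_mlock;
            if (!trylock_page(page))
                    goto skip_mlock;
            lru_add_drain();
            /* Re-check mapping and DoubleMap now that the page is locked. */
            if (page->mapping && !PageDoubleMap(page))
                    mlock_vma_page(page);
            unlock_page(page);
    }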
1524  skip_mlock:
1525  page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT: step from the head page to the subpage that addr falls in (PAGE_SHIFT determines the base page size)
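A worked example of that pointer arithmetic, assuming 4 KiB base pages (PAGE_SHIFT = 12) and 2 MiB transparent huge pages as on x86-64:

    /* addr = 0x7f0000205123, huge page starts at 0x7f0000200000      */
    /* addr & ~HPAGE_PMD_MASK  = 0x5123  (byte offset into huge page) */
    /* 0x5123 >> PAGE_SHIFT    = 5       (subpage index)              */
    /* page += 5: page now refers to the sixth struct page of the     */
    /* compound page, i.e. the 4 KiB subpage that covers addr.        */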
1526  VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page)
1527  If flags contains FOLL_GET (do get_page on page), call get_page(page) to take a reference on the returned page
1530  out:
1531  Return page
Caller
Name             Describe
follow_pmd_mask
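For context, the lone caller follow_pmd_mask (mm/gup.c) invokes this function with the PMD lock held; a simplified sketch of the v5.5 call site (abridged, not verbatim: the split-THP handling between the two steps is omitted):

    ptl = pmd_lock(mm, pmd);
    if (unlikely(!pmd_trans_huge(*pmd))) {
            /* Raced with a split: fall back to the PTE path. */
            spin_unlock(ptl);
            return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
    }
    page = follow_trans_huge_pmd(vma, address, pmd, flags);
    spin_unlock(ptl);
    ctx->page_mask = HPAGE_PMD_NR - 1;
    return page;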