Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/swapfile.c Create Date: 2022-07-28 15:18:13
Last Modify: 2020-03-17 22:19:49 Copyright © Brick

Name: unuse_pte. No need to decide whether this PTE shares the swap entry with others; just let do_wp_page() work it out if a write is requested later. To force COW, vm_page_prot omits write permission from any private VMA.

Proto:static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr, swp_entry_t entry, struct page *page)

Type:int

Parameter:

Type                       Parameter Name
struct vm_area_struct *    vma
pmd_t *                    pmd
unsigned long              addr
swp_entry_t                entry
struct page *              page
1860  ret = 1
1862  swapcache = page
1863  page = ksm_might_need_to_copy(page, vma, addr)
1864  If unlikely(!page) Then Return -ENOMEM
1867  If mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg, false) Then
1869  ret = -ENOMEM
1870  Go to out_nolock
1873  pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl)
1874  If unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry))) Then (the PTE no longer holds this swap entry: someone else faulted it in first; swp_entry_to_pte converts the arch-independent swp_entry_t into the arch-dependent pte representation)
1875  mem_cgroup_cancel_charge(page, memcg, false)
1876  ret = 0
1877  Go to out
1880  dec_mm_counter(vma->vm_mm, MM_SWAPENTS)
1881  inc_mm_counter(vma->vm_mm, MM_ANONPAGES)
1882  get_page(page)
1883  set_pte_at(vma->vm_mm, addr, pte, pte_mkold(mk_pte(page, vma->vm_page_prot))) (mk_pte converts the page and the VMA's access permissions into a page-table entry; pte_mkold clears its accessed bit)
1885  If page == swapcache Then
1886  page_add_anon_rmap(page, vma, addr, false) (add the pte mapping to the existing anonymous page)
1887  mem_cgroup_commit_charge(page, memcg, true, false)
1888  Else
1889  page_add_new_anon_rmap(page, vma, addr, false) (ksm created a completely new copy, so add it as a new anonymous page)
1890  mem_cgroup_commit_charge(page, memcg, false, false)
1891  lru_cache_add_active_or_unevictable(page, vma) (place the page on the active or unevictable LRU list, depending on its evictability)
1893  swap_free(entry) (the caller has made sure the swap device corresponding to entry is still around and has not been recycled)
1898  activate_page(page)
1899  out :
1900  pte_unmap_unlock(pte, ptl)
1901  out_nolock :
1902  If page != swapcache Then
1903  unlock_page(page) (unlock the page and wake up any sleepers waiting on it)
1904  put_page(page) (drop the reference; this frees the page, and any swap cache associated with it, if we were the last user)
1906  Return ret
Caller
Name               Describe
unuse_pte_range