Function report

Source Code: mm/page_vma_mapped.c
Create Date: 2022-07-28 14:54:23
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at @pvmw->address. @pvmw: pointer to struct page_vma_mapped_walk; page, vma, address and flags must be set, while pmd, pte and ptl must be NULL. Returns true if the page is mapped in the vma.
Proto:bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
Type:bool
Parameter:
| Type | Name |
|---|---|
| struct page_vma_mapped_walk * | pvmw |
| 156 | pte = huge_pte_offset(mm, address, page_size(page)) (page_size returns the number of bytes in this potentially compound page) |
| 157 | If Not pte Then Return false |
| 160 | ptl = huge_pte_lockptr(page_hstate(page), mm, pte) |
| 162 | If Not check_pte(pvmw) (check if @pvmw->page is mapped at @pvmw->pte) Then Return not_found(pvmw) |
| 164 | Return true |
| 166 | restart : |
| 168 | If Not pgd_present( * pgd) Then Return false |
| 170 | p4d = p4d_offset(pgd, address) |
| 171 | If Not p4d_present( * p4d) Then Return false |
| 173 | pud = pud_offset(p4d, address) |
| 174 | If Not pud_present( * pud) Then Return false |
| 176 | pmd = pmd_offset(pud, address) |
| 183 | If pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) Then |
| 185 | If likely(pmd_trans_huge( * pmd)) Then |
| 186 | If flags & PVMW_MIGRATION (look for migration entries rather than present PTEs) Then Return not_found(pvmw) |
| 188 | If pmd_page( * pmd) != page Then Return not_found(pvmw) |
| 190 | Return true |
| 191 | Else if Not pmd_present( * pmd) Then |
| 192 | If thp_migration_supported() Then |
| 193 | If Not (flags & PVMW_MIGRATION) (look for migration entries rather than present PTEs) Then Return not_found(pvmw) |
| 195 | If is_migration_entry(pmd_to_swp_entry( * pmd)) Then |
| 196 | entry = pmd_to_swp_entry( * pmd) |
| 198 | If migration_entry_to_page(entry) != page Then Return not_found(pvmw) |
| 200 | Return true |
| 204 | Else |
| 206 | spin_unlock(ptl) |
| 207 | ptl = NULL |
| 209 | Else if Not pmd_present(pmde) Then |
| 210 | Return false |
| 214 | While 1 |
| 215 | If check_pte(pvmw) (check if @pvmw->page is mapped at @pvmw->pte) Then Return true |
| 217 | next_pte : |
| 221 | Do |
| 223 | If address >= vma->vm_end (the first byte after our end address within vm_mm) || address >= vma_address(page, vma) (the user virtual address at which page is expected in @vma) + hpage_nr_pages(page) * PAGE_SIZE Then Return not_found(pvmw) |
| 236 | Else |
| 237 | pte++ |
| 241 | If Not ptl Then |
| Name | Describe |
|---|---|
| page_mapped_in_vma | page_mapped_in_vma - check whether a page is really mapped in a VMA. @page: the page to test; @vma: the VMA to test. Returns 1 if the page is mapped into the page tables of the VMA, 0 if it is not. Only valid for normal file or anonymous VMAs. |
| page_referenced_one | |
| page_mkclean_one | |
| try_to_unmap_one | @arg: enum ttu_flags will be passed to this argument |
| write_protect_page | |
| remove_migration_pte | Restore a potential migration pte to a working pte entry |
| page_idle_clear_pte_refs_one | |
| __replace_page | __replace_page - replace page in vma by new page |