Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/events/uprobes.c    Create Date: 2022-07-27 15:05:53
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function Name: __replace_page - replace page in vma by new page

Prototype: static int __replace_page(struct vm_area_struct *vma, unsigned long addr, struct page *old_page, struct page *new_page)

Return Type: int

Parameters:

Type                       Name
struct vm_area_struct *    vma
unsigned long              addr
struct page *              old_page
struct page *              new_page
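
The walkthrough below (keyed by source line numbers in uprobes.c) follows the standard pattern for replacing a mapped page under the MMU-notifier protocol: declare an invalidation range, locate the PTE with page_vma_mapped_walk(), tear down the old mapping, install the new one, then close the range. The sketch here is only for orientation and is not the v5.5.9 source: the helper name replace_skeleton and the precomputed newpte parameter are illustrative, and the page locking, memcg charging and rmap/counter work done by the real __replace_page() are omitted.

    #include <linux/mm.h>
    #include <linux/rmap.h>
    #include <linux/highmem.h>
    #include <linux/mmu_notifier.h>

    /*
     * Illustrative skeleton only (hypothetical helper): the caller is assumed
     * to hold mmap_sem and the lock on old_page.
     */
    static int replace_skeleton(struct vm_area_struct *vma, unsigned long addr,
                                struct page *old_page, pte_t newpte)
    {
            struct mm_struct *mm = vma->vm_mm;
            struct page_vma_mapped_walk pvmw = {
                    .page = compound_head(old_page),
                    .vma = vma,
                    .address = addr,
            };
            struct mmu_notifier_range range;

            /* Tell secondary MMUs (KVM, etc.) which range is about to change. */
            mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
                                    addr, addr + PAGE_SIZE);
            mmu_notifier_invalidate_range_start(&range);

            /* Find the PTE mapping old_page at addr; bail out if it is gone. */
            if (!page_vma_mapped_walk(&pvmw)) {
                    mmu_notifier_invalidate_range_end(&range);
                    return -EAGAIN;
            }

            flush_cache_page(vma, addr, pte_pfn(*pvmw.pte));
            ptep_clear_flush_notify(vma, addr, pvmw.pte);   /* drop the old PTE */
            set_pte_at_notify(mm, addr, pvmw.pte, newpte);  /* install the new one */
            page_vma_mapped_walk_done(&pvmw);               /* releases pvmw.ptl */

            mmu_notifier_invalidate_range_end(&range);
            return 0;
    }

The ordering matters: because ptep_clear_flush_notify() has already invalidated the primary PTE, set_pte_at_notify() can notify secondary MMUs before writing the new PTE without anyone seeing a stale translation. This is the same sequence as source lines 203-205 below.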
157  mm = vma->vm_mm (the address space this VMA belongs to)
158  struct page_vma_mapped_walk pvmw = { .page = compound_head(old_page), .vma = vma, .address = addr, }
167  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr, addr + PAGE_SIZE)
170  if (new_page)
171  err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, &memcg, false)
173  if (err) return err
178  lock_page(old_page) (needed for try_to_free_swap() and munlock_vma_page() below)
180  mmu_notifier_invalidate_range_start(&range)
181  err = -EAGAIN
182  if (!page_vma_mapped_walk(&pvmw)) — the walk returns true only if old_page is still mapped in vma at addr
183  if (new_page) mem_cgroup_cancel_charge(new_page, memcg, false)
185  goto unlock
187  VM_BUG_ON_PAGE(addr != pvmw.address, old_page)
189  if (new_page)
190  get_page(new_page)
191  page_add_new_anon_rmap(new_page, vma, addr, false)
192  mem_cgroup_commit_charge(new_page, memcg, false, false)
193  lru_cache_add_active_or_unevictable(new_page, vma)
194  else dec_mm_counter(mm, MM_ANONPAGES)
198  if (!PageAnon(old_page))
199  dec_mm_counter(mm, mm_counter_file(old_page))
200  inc_mm_counter(mm, MM_ANONPAGES)
203  flush_cache_page(vma, addr, pte_pfn(*pvmw.pte))
204  ptep_clear_flush_notify(vma, addr, pvmw.pte)
205  if (new_page) set_pte_at_notify(mm, addr, pvmw.pte, mk_pte(new_page, vma->vm_page_prot))
209  page_remove_rmap(old_page, false)
210  if (!page_mapped(old_page)) try_to_free_swap(old_page)
212  page_vma_mapped_walk_done(&pvmw)
214  if (vma->vm_flags & VM_LOCKED) munlock_vma_page(old_page)
216  put_page(old_page)
218  err = 0
219  unlock:
220  mmu_notifier_invalidate_range_end(&range)
221  unlock_page(old_page)
222  return err
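
The charging calls at lines 171, 183 and 192 follow the three-step memory-cgroup protocol of this kernel generation: reserve the charge with mem_cgroup_try_charge() before the page becomes visible, then either commit it once the page has been mapped or cancel it on failure. Below is a minimal sketch of that protocol in the same order as the walkthrough; the helper name charge_new_anon_page and the mapped flag (standing in for the page_vma_mapped_walk() result checked at line 182) are hypothetical.

    #include <linux/mm.h>
    #include <linux/memcontrol.h>
    #include <linux/rmap.h>
    #include <linux/swap.h>

    /*
     * Hypothetical helper sketching the try/commit/cancel charge protocol as
     * used by __replace_page() in v5.5.
     */
    static int charge_new_anon_page(struct page *new_page,
                                    struct vm_area_struct *vma,
                                    unsigned long addr, bool mapped)
    {
            struct mem_cgroup *memcg;
            int err;

            /* Reserve the charge; the page is not yet visible to reclaim. */
            err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL,
                                        &memcg, false);
            if (err)
                    return err;

            if (!mapped) {
                    /* Could not map the page: give the reserved charge back. */
                    mem_cgroup_cancel_charge(new_page, memcg, false);
                    return -EAGAIN;
            }

            /* Same order as lines 190-193: rmap, commit charge, LRU. */
            get_page(new_page);
            page_add_new_anon_rmap(new_page, vma, addr, false);
            mem_cgroup_commit_charge(new_page, memcg, false, false);
            lru_cache_add_active_or_unevictable(new_page, vma);
            return 0;
    }

Later kernels merged these steps into a single mem_cgroup_charge() call, so the separate cancel/commit paths shown here apply only to the version documented in this report.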
Callers

Name                   Description
uprobe_write_opcode    NOTE: Expect the breakpoint instruction to be the smallest size instruction for the architecture
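
For context, the lone caller builds new_page itself: it pins the page currently mapped at the probed address, copies it, patches the breakpoint opcode into the copy, and hands both pages to __replace_page(). The sketch below shows that caller-side pattern in simplified form; the name write_opcode_sketch is illustrative, it assumes it lives in uprobes.c next to the static __replace_page() and that mm->mmap_sem is held, and it omits the opcode verification, reference-counter page handling and full error paths of the real uprobe_write_opcode().

    #include <linux/mm.h>
    #include <linux/gfp.h>
    #include <linux/rmap.h>
    #include <linux/string.h>
    #include <linux/highmem.h>
    #include <linux/uprobes.h>

    /*
     * Simplified, hypothetical caller-side sketch of the uprobe_write_opcode()
     * pattern: copy the mapped page, patch the opcode, let __replace_page()
     * swap the copy in.
     */
    static int write_opcode_sketch(struct mm_struct *mm, unsigned long vaddr,
                                   uprobe_opcode_t opcode)
    {
            struct page *old_page, *new_page;
            struct vm_area_struct *vma;
            void *kaddr;
            int ret;

            /* Pin the page that currently backs vaddr (v5.5 GUP signature). */
            ret = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE,
                                        &old_page, &vma, NULL);
            if (ret <= 0)
                    return ret;

            ret = anon_vma_prepare(vma);
            if (ret)
                    goto put_old;

            ret = -ENOMEM;
            new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
            if (!new_page)
                    goto put_old;

            /* Copy the original page and patch the breakpoint opcode in. */
            __SetPageUptodate(new_page);
            copy_highpage(new_page, old_page);
            kaddr = kmap_atomic(new_page);
            memcpy(kaddr + (vaddr & ~PAGE_MASK), &opcode, UPROBE_SWBP_INSN_SIZE);
            kunmap_atomic(kaddr);

            /* Swap the patched copy into the VMA in place of old_page. */
            ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
            put_page(new_page);
    put_old:
            put_page(old_page);
            return ret;
    }

When __replace_page() returns -EAGAIN (old_page was unmapped between the GUP and the page-table walk), the real caller simply retries the whole sequence.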