Function report

Linux Kernel

v5.5.9

Source Code: mm/mprotect.c

Name: mprotect_fixup

Proto: int mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev, unsigned long start, unsigned long end, unsigned long newflags)

Type: int

Parameter:

Type                      | Parameter Name
struct vm_area_struct *   | vma
struct vm_area_struct **  | pprev
unsigned long             | start
unsigned long             | end
unsigned long             | newflags
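For orientation, here is a hedged userspace example (not from the kernel tree) that exercises this path: changing a private anonymous page from read-write to read-only lands in do_mprotect_pkey(), which calls mprotect_fixup() for the covering VMA.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)sysconf(_SC_PAGESIZE);
        char *p;

        /* One anonymous, writable page: a single VMA in the kernel. */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        strcpy(p, "hello");

        /*
         * Dropping PROT_WRITE reaches do_mprotect_pkey(), which calls
         * mprotect_fixup() for the VMA covering [p, p + len).
         */
        if (mprotect(p, len, PROT_READ) != 0) {
            perror("mprotect");
            return 1;
        }
        printf("'%s' is now read-only\n", p);

        munmap(p, len);
        return 0;
    }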
376  mm = vma->vm_mm (the address space we belong to)
377  oldflags = vma->vm_flags (flags, see mm.h)
378  nrpages = (end - start) >> PAGE_SHIFT
379  charged = 0
382  dirty_accountable = 0
384  If newflags == oldflags Then
385  *pprev = vma
386  Return 0
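A rough sketch of the setup and the no-op fast path, paraphrased from the v5.5 source of mm/mprotect.c (comments added for this report):

    struct mm_struct *mm = vma->vm_mm;
    unsigned long oldflags = vma->vm_flags;
    long nrpages = (end - start) >> PAGE_SHIFT;
    unsigned long charged = 0;
    int dirty_accountable = 0;

    /* Nothing would change: hand the VMA back untouched. */
    if (newflags == oldflags) {
        *pprev = vma;
        return 0;
    }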
394  If arch_has_pfn_modify_check() && vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && (newflags & (VM_READ | VM_WRITE | VM_EXEC)) == 0 Then (VM_PFNMAP: page ranges managed without struct page, just pure PFN; VM_MIXEDMAP: may contain both struct page and pure PFN pages)
397  new_pgprot = vm_get_page_prot(newflags)
399  error = walk_page_range(current->mm, start, end, &prot_none_walk_ops, &new_pgprot)
401  If error Then Return error
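Sketch of this PFN-map permission check, paraphrased from the v5.5 source; walk_page_range() runs the prot_none_walk_ops callbacks over the range to verify every PFN tolerates the new (PROT_NONE-like) protection:

    /*
     * PROT_NONE permission checks for PFN mappings are done up front,
     * while it is still cheap to bail out without undoing state.
     */
    if (arch_has_pfn_modify_check() &&
        (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) &&
        (newflags & (VM_READ | VM_WRITE | VM_EXEC)) == 0) {
        pgprot_t new_pgprot = vm_get_page_prot(newflags);

        error = walk_page_range(current->mm, start, end,
                                &prot_none_walk_ops, &new_pgprot);
        if (error)
            return error;
    }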
411  If newflags & VM_WRITE Then
413  If Not may_expand_vm(mm, newflags, nrpages) && may_expand_vm(mm, oldflags, nrpages) Then Return -ENOMEM
418  charged = nrpages (taken only when the old mapping was not already accounted)
419  If security_vm_enough_memory_mm(mm, charged) Then Return -ENOMEM
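Making a private mapping writable increases the commit charge. A paraphrased sketch of the whole accounting block from the v5.5 source, including the guard and the VM_ACCOUNT tagging that the line-by-line walkthrough above skips over:

    if (newflags & VM_WRITE) {
        /* Check space limits when the area turns into data. */
        if (!may_expand_vm(mm, newflags, nrpages) &&
            may_expand_vm(mm, oldflags, nrpages))
            return -ENOMEM;
        /* Charge only mappings not already accounted (or never charged). */
        if (!(oldflags & (VM_ACCOUNT | VM_WRITE | VM_HUGETLB |
                          VM_SHARED | VM_NORESERVE))) {
            charged = nrpages;
            if (security_vm_enough_memory_mm(mm, charged))
                return -ENOMEM;
            newflags |= VM_ACCOUNT;
        }
    }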
428  pgoff = vma->vm_pgoff (offset within vm_file, in PAGE_SIZE units) + ((start - vma->vm_start) >> PAGE_SHIFT)
429  *pprev = vma_merge(mm, *pprev, start, end, newflags, vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma), vma->vm_userfaultfd_ctx) (try to merge the updated range with its predecessor and/or successor)
432  If *pprev Then
433  vma = *pprev
434  VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY)
435  Go to success
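Sketch of the merge attempt, paraphrased from the v5.5 source; if the merge succeeds, the surviving VMA already carries the new flags and the function can jump straight to the success label:

    /* First try to merge the updated range with prev and/or next. */
    pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
    *pprev = vma_merge(mm, *pprev, start, end, newflags,
                       vma->anon_vma, vma->vm_file, pgoff,
                       vma_policy(vma), vma->vm_userfaultfd_ctx);
    if (*pprev) {
        vma = *pprev;
        /* After a successful merge, only VM_SOFTDIRTY may differ. */
        VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
        goto success;
    }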
438  *pprev = vma
440  If start != vma->vm_start Then
441  error = split_vma(mm, vma, start, 1) (split the VMA at start; the new VMA takes the first part)
442  If error Then Go to fail
446  If end != vma->vm_end Then
447  error = split_vma(mm, vma, end, 0) (split the VMA at end; the new VMA takes the tail)
448  If error Then Go to fail
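Sketch of the split fallback, paraphrased from the v5.5 source; the new_below argument of split_vma() selects whether the newly allocated VMA takes the part below or above the split address, so that [start, end) ends up alone in one VMA:

    *pprev = vma;

    /* No merge possible: carve the VMA so [start, end) stands alone. */
    if (start != vma->vm_start) {
        error = split_vma(mm, vma, start, 1);  /* new VMA gets [vm_start, start) */
        if (error)
            goto fail;
    }
    if (end != vma->vm_end) {
        error = split_vma(mm, vma, end, 0);    /* new VMA gets [end, vm_end) */
        if (error)
            goto fail;
    }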
452  success:
457  vma->vm_flags = newflags
458  dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot) (some shared mappings want their pages kept read-only to track write events; if so, vm_page_prot is downgraded to the private version)
459  vma_set_page_prot(vma) (update vma->vm_page_prot to reflect vma->vm_flags)
461  change_protection(vma, start, end, vma->vm_page_prot, dirty_accountable, 0)
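Sketch of the success path, paraphrased from the v5.5 source; vm_flags and vm_page_prot may be written here because the caller holds mmap_sem in write mode, and change_protection() then rewrites the page-table entries for the range:

    success:
        /* vm_flags and vm_page_prot are protected by mmap_sem (write). */
        vma->vm_flags = newflags;
        dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
        vma_set_page_prot(vma);

        change_protection(vma, start, end, vma->vm_page_prot,
                          dirty_accountable, 0);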
468  If (oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) == VM_LOCKED && newflags & VM_WRITE Then
470  populate_vma_page_range(vma, start, end, NULL) (populate the range now so the mlocked pages are faulted in)
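Sketch of the mlock fixup, paraphrased from the v5.5 source: a private, locked mapping turning writable is populated immediately, triggering copy-on-write now rather than taking major faults on later access:

    /*
     * Private VM_LOCKED VMA becoming writable: trigger COW now
     * to avoid a major fault on access.
     */
    if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) == VM_LOCKED &&
        (newflags & VM_WRITE))
        populate_vma_page_range(vma, start, end, NULL);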
473  vm_stat_account(mm, oldflags, -nrpages)
474  vm_stat_account(mm, newflags, nrpages)
475  perf_event_mmap(vma)
476  Return 0
478  fail:
479  vm_unacct_memory(charged)
480  Return error
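Sketch of the epilogue and the error path, paraphrased from the v5.5 source; the per-flavour page statistics move from the old flags to the new, and a failed split returns any commit charge taken earlier:

    vm_stat_account(mm, oldflags, -nrpages);  /* old flavour loses the pages */
    vm_stat_account(mm, newflags, nrpages);   /* new flavour gains them */
    perf_event_mmap(vma);
    return 0;

    fail:
        vm_unacct_memory(charged);  /* roll back the commit charge */
        return error;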
Caller

Name             | Description
do_mprotect_pkey | pkey == -1 when doing a legacy mprotect()
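For context, a condensed sketch of the caller's loop in do_mprotect_pkey(), paraphrased from the v5.5 source with pkey overrides, flag validation, and security hooks elided; it shows how mprotect_fixup() is applied VMA by VMA across [start, end):

    for (nstart = start; ; ) {
        unsigned long tmp, newflags;

        /* New flags: the requested prot bits plus the retained old bits. */
        newflags = calc_vm_prot_bits(prot, new_vma_pkey);
        newflags |= (vma->vm_flags & ~mask_off_old_flags);

        /* Clamp this pass to the current VMA. */
        tmp = vma->vm_end;
        if (tmp > end)
            tmp = end;

        error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
        if (error)
            goto out;

        nstart = tmp;
        if (nstart >= end)
            goto out;           /* whole range done */
        vma = prev->vm_next;    /* move on to the next VMA */
    }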