Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/vmstat.h    Create Date: 2022-07-27 06:44:44
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: count_vm_event (described in the source as "Disable counters"; that comment belongs to the empty stub compiled when CONFIG_VM_EVENT_COUNTERS is not set)

Function prototype: static inline void count_vm_event(enum vm_event_item item)

Return type: void

Parameters:

Type                Name
enum vm_event_item  item

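For context, a minimal sketch of the surrounding definitions in include/linux/vmstat.h (as found in v5.x kernels, lightly trimmed): with CONFIG_VM_EVENT_COUNTERS enabled, count_vm_event() increments this CPU's counter slot for the given event item; without it, the function compiles to an empty stub, which is what the "Disable counters" comment refers to.

#ifdef CONFIG_VM_EVENT_COUNTERS

/* Lightweight per-CPU event counters; values are allowed to be racy. */
struct vm_event_state {
        unsigned long event[NR_VM_EVENT_ITEMS];
};

DECLARE_PER_CPU(struct vm_event_state, vm_event_states);

static inline void count_vm_event(enum vm_event_item item)
{
        this_cpu_inc(vm_event_states.event[item]);
}

#else

/* Disable counters */
static inline void count_vm_event(enum vm_event_item item)
{
}

#endif /* CONFIG_VM_EVENT_COUNTERS */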

Callers
Name and description (a usage sketch follows the list):
filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault.
__oom_kill_process
lru_cache_add_active_or_unevictable - place @page on the active or unevictable LRU list, depending on its evictability. @page: the page to be added to the LRU. @vma: vma in which the page is mapped, for determining reclaimability.
__pagevec_lru_add_fn
shrink_page_list - returns the number of reclaimed pages.
throttle_direct_reclaim - throttle direct reclaimers if backing storage is backed by the network and the PFMEMALLOC reserve for the preferred node is getting dangerously depleted. kswapd will continue to make progress and wake the processes when the low watermark is reached.
balance_pgdat - for kswapd; reclaims pages across a node from zones that are eligible for use by the caller until at least one zone is balanced. Returns the order kswapd finished reclaiming at.
kswapd_try_to_sleep
node_reclaim
do_swap_page - we enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked, and with the mmap_sem locked or unlocked in the same cases as does filemap_fault().
handle_mm_fault - by the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().
clear_page_mlock - LRU accounting for clear_page_mlock().
mlock_vma_page - mark page as mlocked if not already. If the page is on the LRU, isolate it and put it back to move it to the unevictable list.
__munlock_isolated_page - finish munlock after successful page isolation. The page must be locked. This is a wrapper for try_to_munlock() and putback_lru_page() with munlock accounting.
count_swpout_vm_event
__swap_writepage
swap_readpage
lookup_swap_cache - look up a swap entry in the swap cache. A found page will be returned unlocked and with its refcount incremented; we rely on the kernel lock getting page table operations atomic even if we drop the page lock before returning.
swap_cluster_readahead - swap in pages in the hope we need them soon. @entry: swap entry of this memory. @gfp_mask: memory allocation flags. @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin.
swap_vma_readahead - swap in pages in the hope we need them soon. @entry: swap entry of this memory. @gfp_mask: memory allocation flags. @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin. Primitive swap readahead code.
get_huge_zero_page
__do_huge_pmd_anonymous_page
do_huge_pmd_anonymous_page
do_huge_pmd_wp_page
__split_huge_pmd_locked
split_huge_page_to_list - splits a huge page into normal pages. @page can point to any subpage of the huge page to split; the split does not change the position of @page. The caller must hold a pin on @page, otherwise the split fails with -EBUSY. The huge page must be locked.
deferred_split_huge_page
khugepaged_alloc_page
dax_iomap_pte_fault
drop_caches_sysctl_handler
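To show the call pattern the callers above share, here is an illustrative sketch (not verbatim kernel code). It assumes the PGFAULT item from enum vm_event_item, which page-fault paths such as handle_mm_fault() account and which is reported through /proc/vmstat as "pgfault".

/*
 * Hypothetical helper, for illustration only; assumes #include <linux/vmstat.h>.
 * PGFAULT is defined in include/linux/vm_event_item.h. Per-CPU counts are
 * summed when /proc/vmstat is read, so callers need no locking here.
 */
static void example_account_page_fault(void)
{
        count_vm_event(PGFAULT);        /* bump this CPU's page-fault counter */
}

Because the counters are per-CPU and deliberately tolerant of races, hot paths such as the fault and reclaim functions listed above can account events without taking any lock.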