Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/highmem.h    Create Date: 2022-07-27 06:45:50
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: kmap_atomic

Function prototype: static inline void *kmap_atomic(struct page *page)

Return type: void *

Parameters:

Type             Parameter Name
struct page *    page
93  Disable preemption: preempt_disable()
94  Disable page faults: pagefault_disable(). These routines enable/disable the pagefault handler. If disabled, it will not take any locks and go straight to the fixup table. User access methods will not sleep when called from a pagefault_disabled() environment.
95  Return page_address(page)
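The three numbered lines above are the whole body of the function when CONFIG_HIGHMEM is not set: lowmem pages are permanently mapped, so kmap_atomic() only has to pin the current context before handing back the direct-mapped address. A minimal sketch of that body, assuming this is the !CONFIG_HIGHMEM variant of include/linux/highmem.h in v5.5:

    static inline void *kmap_atomic(struct page *page)
    {
            preempt_disable();          /* 93: disable preemption */
            pagefault_disable();        /* 94: disable the pagefault handler */
            return page_address(page);  /* 95: lowmem pages are always mapped */
    }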
Callers (see the usage sketch after this list)
Name    Description
sg_miter_next - proceed mapping iterator to the next mapping. @miter: sg mapping iter to proceed. Description: Proceeds @miter to the next mapping. @miter should have been started using sg_miter_start(). On successful return, @miter->page, ...
snapshot_read_next - Get the address to read the next image page from. @handle: Snapshot handle to be used for the reading. On the first call, @handle should point to a zeroed snapshot_handle structure
memcmp_pages
bio_copy_data_iter
tomoyo_dump_page - Dump a page to buffer. @bprm: Pointer to "struct linux_binprm". @pos: Location to dump. @dump: Pointer to "struct tomoyo_page_dump". Returns true on success, false otherwise.
vfs_dedupe_file_range_compare - Compare extents of two files to see if they are the same. Caller must have locked both inodes to prevent write races.
copy_page_to_iter_iovec
copy_page_from_iter_iovec
memcpy_from_page
memcpy_to_page
memzero_page
csum_and_copy_to_pipe_iter
copy_page_to_iter
copy_page_from_iter
iov_iter_copy_from_user_atomic
csum_and_copy_from_iter
csum_and_copy_from_iter_full
csum_and_copy_to_iter
swiotlb_bounce - Bounce: copy the swiotlb buffer from or back to the original dma location
kdb_getphys - Read data from a physical address
stack_map_get_build_id - Parse build ID of ELF file mapped to vma
copy_from_page
copy_to_page
__update_ref_ctr
cow_user_page
copy_huge_page_from_user
aligned_vread - small helper routine, copy contents to buf from addr. If the page is not present, fill zero.
aligned_vwrite
clear_user_highpage - when CONFIG_HIGHMEM is not set these will be plain clear/copy_page
clear_highpage
zero_user_segments
copy_user_highpage
copy_highpage
scatterwalk_map
swp_swapcount - How many references to @entry are currently swapped out? This considers COUNT_CONTINUED so it returns the exact answer.
add_swap_count_continuation - called when a swap count is duplicated beyond SWAP_MAP_MAX; it allocates a new page and links that to the entry's page of the original vmalloc'ed swap_map, to hold the continuation count
swap_count_continued - when the original swap_map count is incremented from SWAP_MAP_MAX, check if there is already a continuation page to carry into, carry if so, or else fail until a new continuation page is allocated; when the original swap_map ...
zswap_writeback_entry
zswap_frontswap_store - attempts to compress and store a single page
zswap_frontswap_load - returns 0 if the page was successfully decompressed; returns -1 on entry not found or error
calc_checksum
poison_page
unpoison_page
init_zspage - Initialize a newly allocated zspage
__zs_map_object
__zs_unmap_object
zs_map_object - get address of allocated object from handle. @pool: pool from which the object was allocated. @handle: handle returned from zs_malloc. @mm: mapping mode to use. Before using an object allocated from zs_malloc, it must be mapped using ...
obj_malloc
obj_free
zs_object_copy
find_alloced_obj - Find alloced object in zspage from index object and return handle.
mcopy_atomic_pte
__blk_queue_bounce
bio_integrity_process - Process integrity metadata for a bio. @bio: bio to generate/verify integrity metadata for. @proc_iter: iterator to process. @proc_fn: Pointer to the relevant processing function
t10_pi_type1_prepare - prepare PI prior to submitting request to device. @rq: request with PI that should be prepared. For Type 1/Type 2, the virtual start sector is the one that was originally submitted by the block layer for the ref_tag usage. Due to ...
t10_pi_type1_complete - prepare PI prior to returning request to the blk layer. @rq: request with PI that should be prepared. @nr_bytes: total bytes to prepare. For Type 1/Type 2, the virtual start sector is the one that was ...
remove_arg_zero - Arguments are '\0' separated strings found at the location bprm->p points to; chop off the first by relocating bprm->p to right after the first '\0' encountered.
aio_setup_ring
ioctx_add_table
user_refill_reqs_available - Called to refill reqs_available when aio_get_req() encounters an out-of-space condition in the completion ring.
aio_complete - Called when the io request on the given iocb is complete.
aio_read_events_ring - Pull an event off of the ioctx's event ring. Returns the number of events fetched
copy_user_dax
extract_hash - Extract a hash from a hash page
iomap_read_inline_data
iomap_write_end_inline
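Nearly every caller listed above follows the same short-lived mapping pattern: map the page, operate on the kernel virtual address without sleeping, and release it with kunmap_atomic(). A minimal sketch of that pattern (the helper name is illustrative, modeled on memcpy_from_page-style callers rather than copied from any one of them):

    /* Illustrative only: copy @len bytes out of @page starting at @offset. */
    static void copy_from_page_sketch(char *to, struct page *page,
                                      size_t offset, size_t len)
    {
            char *from = kmap_atomic(page);  /* preemption and pagefaults now disabled */
            memcpy(to, from + offset, len);  /* must not sleep between map and unmap */
            kunmap_atomic(from);             /* undoes pagefault/preemption disable */
    }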