Function report

Linux Kernel

v5.5.9


Source Code: mm/page_io.c

Name: swap_readpage

Proto: int swap_readpage(struct page *page, bool synchronous)

Type: int

Parameter:

Type             Parameter Name
struct page *    page
bool             synchronous
354  ret = 0
355  sis = page_swap_info(page)
360  VM_BUG_ON_PAGE(!PageSwapCache(page) && !synchronous, page)
361  VM_BUG_ON_PAGE(!PageLocked(page), page)
362  VM_BUG_ON_PAGE(PageUptodate(page), page)
369  psi_memstall_enter(&pflags) - mark the beginning of a memory stall section: the calling task is marked as stalled due to a lack of memory (e.g. waiting for a refault or performing reclaim), so I/O submission time is accounted as a memory stall
371  If frontswap_load(page) == 0 Then
372  SetPageUptodate(page)
373  unlock_page(page) - unlock the locked page and wake up any sleepers waiting on it
374  Go to out
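
The frontswap check above (lines 371-374) is the fast path: if a frontswap backend such as zswap already holds the page's contents, the page is filled from memory, marked up to date, unlocked, and the function jumps to out without issuing any block I/O. A minimal sketch of this branch, following the v5.5 source layout described above:

    if (frontswap_load(page) == 0) {
        SetPageUptodate(page);
        unlock_page(page);      /* data is already in the page, no I/O needed */
        goto out;
    }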
377  If sis->flags & SWP_FS Then
378  swap_file = sis->swap_file
379  mapping = swap_file->f_mapping
381  ret = mapping->a_ops->readpage(swap_file, page)
382  If Not ret Then count_vm_event(PSWPIN)
384  Go to out
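
When the swap area must go through a filesystem (the SWP_FS case, e.g. swap over NFS), lines 377-384 delegate the read to the backing file's address_space readpage operation instead of issuing raw block I/O. Roughly, with the local names used in the report:

    if (sis->flags & SWP_FS) {
        struct file *swap_file = sis->swap_file;
        struct address_space *mapping = swap_file->f_mapping;

        ret = mapping->a_ops->readpage(swap_file, page);
        if (!ret)
            count_vm_event(PSWPIN);   /* account one swap-in */
        goto out;
    }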
387  ret = bdev_read_page(sis->bdev, swap_page_sector(page), page)
388  If Not ret Then
394  count_vm_event(PSWPIN)
395  Go to out
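
For ordinary block-backed swap, line 387 first tries bdev_read_page(), which lets drivers that provide an rw_page method (zram, brd, pmem and the like) satisfy the read without allocating a bio; only when it returns an error does the function fall through to the bio path below. A sketch of this branch (the elided lines 389-393 notify the swap device via swap_slot_free_notify() when the page can be re-locked):

    ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);
    if (!ret) {
        /* lines 389-393: swap_slot_free_notify() / unlock_page() when possible */
        count_vm_event(PSWPIN);
        goto out;
    }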
398  ret = 0
399  bio = get_swap_bio(GFP_KERNEL, page, end_swap_bio_read)
400  If (bio == NULL) Then
401  unlock_page(page) - unlock the locked page and wake up any sleepers waiting on it
402  ret = -ENOMEM
403  Go to out
405  disk = bio->bi_disk
410  bio_set_op_attrs(bio, REQ_OP_READ, 0) - set the bio's operation to read (this helper is marked obsolete; don't use it in new code)
411  If synchronous Then
412  bio->bi_opf |= REQ_HIPRI - mark the bio as high-priority so its completion can be polled
413  get_task_struct(current)
414  bio->bi_private = current
416  count_vm_event(PSWPIN)
417  bio_get(bio) - take a reference to the bio so it cannot complete and be freed before the polling loop below is done with it
418  qc = submit_bio(bio)
419  While synchronous
420  set_current_state(TASK_UNINTERRUPTIBLE)
421  If Not READ_ONCE(bio->bi_private) Then Break
424  If Not blk_poll(disk->queue, qc, true) Then io_schedule()
427  __set_current_state(TASK_RUNNING)
428  bio_put(bio)
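
Lines 410-428 are what make swap_readpage(page, true) a blocking, polled read: the bio is marked REQ_HIPRI so its completion can be polled, bi_private is pointed at the current task so end_swap_bio_read() can clear it and wake the submitter, and the task then alternates between blk_poll() and io_schedule() until the completion handler has run. A condensed sketch of that sequence, assuming the v5.5 structure walked through above:

    bio_set_op_attrs(bio, REQ_OP_READ, 0);
    if (synchronous) {
        bio->bi_opf |= REQ_HIPRI;        /* completion can be polled */
        get_task_struct(current);        /* end_swap_bio_read() drops this ref */
        bio->bi_private = current;
    }
    count_vm_event(PSWPIN);
    bio_get(bio);                        /* keep the bio alive across the poll loop */
    qc = submit_bio(bio);
    while (synchronous) {
        set_current_state(TASK_UNINTERRUPTIBLE);
        if (!READ_ONCE(bio->bi_private))
            break;                       /* end_swap_bio_read() has completed */
        if (!blk_poll(disk->queue, qc, true))
            io_schedule();               /* nothing to poll, sleep until woken */
    }
    __set_current_state(TASK_RUNNING);
    bio_put(bio);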
430  out :
431  psi_memstall_leave(&pflags) - mark the end of the memory stall section: the calling task is no longer stalled due to lack of memory
432  Return ret
Caller
Name: read_swap_cache_async
Describe: Locate a page of swap in physical memory, reserving swap cache space and reading the disk if it is not already cached. A failure return means that either the page allocation failed or that the swap entry is no longer in use.

Name: swap_cluster_readahead
Describe: swap in pages in hope we need them soon. @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin.

Name: swap_vma_readahead
Describe: swap in pages in hope we need them soon; primitive swap readahead code. @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin.

Name: do_swap_page
Describe: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases.
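
For context on how the synchronous argument is chosen by these callers: read_swap_cache_async() in mm/swap_state.c only issues the read when it had to allocate a fresh swap-cache page, forwarding its do_poll flag as synchronous, while do_swap_page() calls swap_readpage(page, true) directly for SWP_SYNCHRONOUS_IO devices that bypass the swap cache. A rough sketch of the read_swap_cache_async() call site, using the names from the v5.5 source:

    struct page *retpage = __read_swap_cache_async(entry, gfp_mask, vma, addr,
                                                   &page_was_allocated);
    if (page_was_allocated)
        swap_readpage(retpage, do_poll);   /* do_poll becomes 'synchronous' */
    return retpage;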