Function report

Linux Kernel

v5.5.9

Source Code: mm/readahead.c    Create Date: 2022-07-28 14:11:59
Last Modify: 2020-03-17 21:13:07

Name: __do_page_cache_readahead() actually reads a chunk of disk. It allocates the pages first, then submits them for I/O. This avoids the very bad behaviour which would occur if page allocations are causing VM writeback. (A condensed C sketch of the full flow follows the line-by-line walk-through below.)

Proto: unsigned int __do_page_cache_readahead(struct address_space *mapping, struct file *filp, unsigned long offset, unsigned long nr_to_read, unsigned long lookahead_size)

Type: unsigned int

Parameter:

Type                      Name
struct address_space *    mapping
struct file *             filp
unsigned long             offset
unsigned long             nr_to_read
unsigned long             lookahead_size
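
To illustrate how the parameters relate, here is a hypothetical call (mapping, filp and the numeric values are illustrative placeholders, not taken from the source): it asks for a 32-page window starting at page index 100 and places the readahead marker 8 pages before the end of the window.

/*
 * Hypothetical example: read pages 100..131 of the file into the page
 * cache and mark the page at index 124 (nr_to_read - lookahead_size
 * pages into the window) with PG_readahead, so a later read there can
 * trigger the next asynchronous readahead. The return value is the
 * number of pages actually submitted for I/O.
 */
unsigned int nr = __do_page_cache_readahead(mapping, filp,
					    100,  /* offset         */
					    32,   /* nr_to_read     */
					    8);   /* lookahead_size */
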
159  inode = mapping->host
162  LIST_HEAD(page_pool)
164  nr_pages = 0
165  isize = i_size_read(inode) (NOTE from i_size_read(): on a 32-bit arch with a preemptible kernel and a UP build, i_size_read/write must be atomic with respect to the local CPU, unlike with preempt disabled, but they need not be atomic with respect to other CPUs as in true SMP)
166  gfp_mask = readahead_gfp_mask(mapping)
168  If isize == 0 Then Go to out
171  end_index = (isize - 1) >> PAGE_SHIFT (the last page index of the file; PAGE_SHIFT determines the page size)
176  Loop while page_idx < nr_to_read:
177  page_offset = offset + page_idx
179  If page_offset > end_index Then Break
182  page = xa_load() - load the entry at this index from the page cache XArray (any context; takes and releases the RCU lock)
192  nr_pages = 0 (the page is already present in the cache; any pages batched so far are submitted and the counter is reset)
193  Continue with the next page index
196  page = __page_cache_alloc(gfp_mask)
197  If Not page Then Break
199  page->index = page_offset (the page's offset within the mapping)
200  list_add() - add the newly allocated page at the head of the page_pool list
201  If page_idx == nr_to_read - lookahead_size Then SetPageReadahead(page) (mark the page that should trigger the next asynchronous readahead)
203  nr_pages++
211  If nr_pages Then read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask)
213  BUG_ON(!list_empty(&page_pool)) - the page pool must be empty once the I/O has been submitted
214  out:
215  Return nr_pages
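
Putting the annotated lines together, the following is a condensed C sketch of the walk-through above. It is reconstructed from the annotations plus the v5.5 readahead logic; the shadow-entry check (xa_is_value()) and the exact xa_load()/list_add() arguments come from the v5.5 source rather than the annotations, so treat this as a paraphrase, not a verbatim copy of mm/readahead.c.

unsigned int __do_page_cache_readahead(struct address_space *mapping,
		struct file *filp, unsigned long offset,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct inode *inode = mapping->host;
	struct page *page;
	unsigned long end_index;	/* the last page we want to read */
	LIST_HEAD(page_pool);		/* pages allocated but not yet submitted */
	unsigned long page_idx;
	unsigned int nr_pages = 0;
	loff_t isize = i_size_read(inode);
	gfp_t gfp_mask = readahead_gfp_mask(mapping);

	if (isize == 0)
		goto out;

	end_index = ((isize - 1) >> PAGE_SHIFT);

	/* Preallocate as many pages as we will need. */
	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
		pgoff_t page_offset = offset + page_idx;

		if (page_offset > end_index)
			break;

		page = xa_load(&mapping->i_pages, page_offset);
		if (page && !xa_is_value(page)) {
			/*
			 * Page already present: kick off the current batch
			 * of contiguous pages before continuing.
			 */
			if (nr_pages)
				read_pages(mapping, filp, &page_pool,
					   nr_pages, gfp_mask);
			nr_pages = 0;
			continue;
		}

		page = __page_cache_alloc(gfp_mask);
		if (!page)
			break;
		page->index = page_offset;
		list_add(&page->lru, &page_pool);
		if (page_idx == nr_to_read - lookahead_size)
			SetPageReadahead(page);
		nr_pages++;
	}

	/* Now start the I/O for any remaining batched pages. */
	if (nr_pages)
		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
	BUG_ON(!list_empty(&page_pool));
out:
	return nr_pages;
}
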
Caller
Name                          Description
force_page_cache_readahead    Chunk the readahead into 2 megabyte units, so that we don't pin too much memory at once.
ondemand_readahead            A minimal readahead algorithm for trivial sequential/random reads.
ra_submit                     Submit IO for the read-ahead request in file_ra_state.
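
For context, ra_submit() is the thinnest of the three callers: it maps the current readahead window tracked in struct file_ra_state directly onto this function's parameters. A minimal sketch, paraphrased from the v5.5 helper (the field names start, size and async_size follow the v5.5 struct file_ra_state):

/*
 * Sketch of ra_submit(): the readahead window recorded in file_ra_state
 * (start, size, async_size) maps one-to-one onto the offset, nr_to_read
 * and lookahead_size parameters of __do_page_cache_readahead().
 */
static inline unsigned long ra_submit(struct file_ra_state *ra,
		struct address_space *mapping, struct file *filp)
{
	return __do_page_cache_readahead(mapping, filp,
					 ra->start, ra->size, ra->async_size);
}
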