Function report
Source Code: mm/readahead.c
Create Date: 2022-07-28 14:11:59
Last Modify: 2020-03-17 21:13:07
Copyright © Brick
Name: __do_page_cache_readahead() actually reads a chunk of disk. It allocates the pages first, then submits them for I/O. This avoids the very bad behaviour which would occur if page allocations are causing VM writeback.
Proto: unsigned int __do_page_cache_readahead(struct address_space *mapping, struct file *filp, unsigned long offset, unsigned long nr_to_read, unsigned long lookahead_size)
Type: unsigned int
Parameter:
Type | Parameter | Description |
---|---|---|
struct address_space * | mapping | address_space which holds the pagecache and I/O vectors |
struct file * | filp | file to read |
unsigned long | offset | start offset into @mapping, in pagecache page-sized units |
unsigned long | nr_to_read | number of pages to read |
unsigned long | lookahead_size | distance from the end of the request at which to set the readahead marker (PG_readahead) |
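For orientation, a hypothetical call might look as follows. This is a minimal sketch assuming a struct file *file already open on a regular file; the concrete numbers (32 pages, 8-page lookahead) are illustrative only and not taken from the report.

```c
/* Hypothetical example: request 32 pages starting at the beginning of the
 * file. The readahead marker lands on the page at index
 * nr_to_read - lookahead_size = 24. */
unsigned int nr_started;

nr_started = __do_page_cache_readahead(file->f_mapping, file,
				       0,	/* offset: first page	*/
				       32,	/* nr_to_read		*/
				       8);	/* lookahead_size	*/
```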
164 | nr_pages = 0 |
166 | gfp_mask = readahead_gfp_mask(mapping) |
171 | end_index = (isize - 1) >> PAGE_SHIFT, the index of the last page of the file |
176 | Loop while page_idx < nr_to_read |
177 | page_offset = offset + page_idx |
179 | If page_offset > end_index Then Break |
182 | page = xa_load() - Load an entry from an XArray. @xa: XArray. @index: index into array. Context: Any context. Takes and releases the RCU lock. Return: The entry at @index in @xa. Here it checks whether the page is already in the page cache. |
192 | nr_pages = 0 (the page is already cached, so the current batch ends here) |
193 | Continue |
196 | page = __page_cache_alloc(gfp_mask) |
197 | If Not page Then Break |
201 | If page_idx == nr_to_read - lookahead_size Then SetPageReadahead(page) |
203 | nr_pages++ |
214 | out: |
215 | Return nr_pages |
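Taken together, the steps above describe an allocate-then-submit loop. The following is a minimal sketch of that loop reconstructed from the annotations, in the shape the function has in the kernel version this report covers; details such as the page_pool list and the read_pages() batching helper come from mm/readahead.c itself and are not shown in the line-by-line table above.

```c
unsigned int __do_page_cache_readahead(struct address_space *mapping,
		struct file *filp, unsigned long offset,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct inode *inode = mapping->host;
	struct page *page;
	unsigned long end_index;	/* the last page we want to read */
	LIST_HEAD(page_pool);
	unsigned long page_idx;
	unsigned int nr_pages = 0;
	loff_t isize = i_size_read(inode);
	gfp_t gfp_mask = readahead_gfp_mask(mapping);

	if (isize == 0)
		goto out;

	/* Index of the last page of the file. */
	end_index = ((isize - 1) >> PAGE_SHIFT);

	/* Preallocate as many pages as we will need. */
	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
		unsigned long page_offset = offset + page_idx;

		if (page_offset > end_index)
			break;

		page = xa_load(&mapping->i_pages, page_offset);
		if (page && !xa_is_value(page)) {
			/*
			 * The page is already cached: submit the batch of
			 * contiguous pages collected so far and start a
			 * new one.
			 */
			if (nr_pages)
				read_pages(mapping, filp, &page_pool,
					   nr_pages, gfp_mask);
			nr_pages = 0;
			continue;
		}

		page = __page_cache_alloc(gfp_mask);
		if (!page)
			break;
		page->index = page_offset;
		list_add(&page->lru, &page_pool);
		/* Mark the page at which to trigger the next readahead. */
		if (page_idx == nr_to_read - lookahead_size)
			SetPageReadahead(page);
		nr_pages++;
	}

	/* Now start the I/O for whatever remains in the batch. */
	if (nr_pages)
		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
out:
	return nr_pages;
}
```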
Name | Description |
---|---|
force_page_cache_readahead | Chunk the readahead into 2 megabyte units, so that we don't pin too much memory at once. |
ondemand_readahead | A minimal readahead algorithm for trivial sequential/random reads. |
ra_submit | Submit IO for the read-ahead request in file_ra_state. |
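Of these callers, ra_submit is the thinnest wrapper: in the kernel era this report covers, it simply forwards the current file_ra_state window, roughly as sketched below.

```c
/* Submit IO for the read-ahead request in file_ra_state: a sketch of the
 * wrapper from mm/readahead.c, forwarding the window fields directly. */
unsigned long ra_submit(struct file_ra_state *ra,
			struct address_space *mapping, struct file *filp)
{
	return __do_page_cache_readahead(mapping, filp,
					 ra->start, ra->size, ra->async_size);
}
```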