Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmalloc.c    Create Date: 2022-07-28 14:58:59
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:alloc_vmap_area - Allocate a region of KVA of the specified size and alignment, within the vstart and vend.

Proto:static struct vmap_area *alloc_vmap_area(unsigned long size, unsigned long align, unsigned long vstart, unsigned long vend, int node, gfp_t gfp_mask)

Type:struct vmap_area *

Parameter:

Type             Parameter Name
unsigned long    size
unsigned long    align
unsigned long    vstart
unsigned long    vend
int              node
gfp_t            gfp_mask
1095  purged = 0
1098  BUG_ON(!size)
1099  BUG_ON(offset_in_page(size))
1100  BUG_ON(!is_power_of_2(align)) (is_power_of_2(): check whether a value is a power of two; zero is not considered one)
1102  If unlikely(!vmap_initialized) Then Return ERR_PTR(-EBUSY)
1105  might_sleep()
1106  gfp_mask = gfp_mask & GFP_RECLAIM_MASK (keep only the flags that affect watermark checking and reclaim behaviour, ignoring placement hints such as HIGHMEM usage)
1108  va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node) (vmap_area objects come from their own kmem_cache rather than generic slab, which also speeds up the "no edge" split of a free block)
1109  If unlikely(!va) Then Return ERR_PTR(-ENOMEM)
1116  kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask) (only scan the parts of the vmap_area that contain pointers to other objects)
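Taken together, lines 1095-1116 form the prologue of alloc_vmap_area(): sanity checks, narrowing of the caller's GFP mask, and allocation of the vmap_area object. A minimal C sketch of how this prologue reads, assuming the identifiers (is_power_of_2(), GFP_RECLAIM_MASK, vmap_initialized, vmap_area_cachep) that the inlined comments above refer to:

    struct vmap_area *va, *pva;
    unsigned long addr;
    int purged = 0;
    int ret;

    BUG_ON(!size);                     /* a zero-sized request is a caller bug */
    BUG_ON(offset_in_page(size));      /* size must be a multiple of PAGE_SIZE */
    BUG_ON(!is_power_of_2(align));     /* alignment must be a power of two */

    if (unlikely(!vmap_initialized))   /* vmalloc_init() has not run yet */
        return ERR_PTR(-EBUSY);

    might_sleep();
    /* keep only the reclaim/watermark related bits of the caller's mask */
    gfp_mask = gfp_mask & GFP_RECLAIM_MASK;

    /* vmap_area objects come from a dedicated slab cache */
    va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
    if (unlikely(!va))
        return ERR_PTR(-ENOMEM);

    /* only scan the parts holding pointers, to avoid kmemleak false negatives */
    kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);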
1118  retry:
1134  pva = NULL
1136  If Not this_cpu_read(ne_fit_preload_node) Then pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node) (preload this CPU with one spare vmap_area object for the "no edge" split case, so that split does not have to allocate from atomic context and can use a more permissive mask)
1144  spin_lock( & free_vmap_area_lock)
1146  If pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva) Then kmem_cache_free(vmap_area_cachep, pva) (another task already preloaded this CPU, so drop the spare object)
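As a C sketch, the preload dance on lines 1118-1146 looks roughly like this, assuming the per-CPU variable ne_fit_preload_node named in the inlined comments above:

    retry:
        /*
         * Preload this CPU with one spare vmap_area so a possible
         * "no edge" split under free_vmap_area_lock does not need to
         * allocate in atomic context.
         */
        pva = NULL;
        if (!this_cpu_read(ne_fit_preload_node))
            pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);

        spin_lock(&free_vmap_area_lock);

        /* someone else preloaded this CPU first: drop our spare object */
        if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
            kmem_cache_free(vmap_area_cachep, pva);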
1153  addr = __alloc_vmap_area(size, align, vstart, vend) (returns the start address of the newly allocated area on success, otherwise vend to indicate failure)
1154  spin_unlock( & free_vmap_area_lock)
1156  If unlikely(addr == vend) Then Go to overflow
1159  va->va_start = addr
1160  va->va_end = addr + size
1161  va->vm = NULL
1164  spin_lock( & vmap_area_lock)
1165  insert_vmap_area(va, & vmap_area_root, & vmap_area_list) (vmap_area_list is exported for kexec only)
1166  spin_unlock( & vmap_area_lock)
1168  BUG_ON(!IS_ALIGNED(va->va_start, align))
1169  BUG_ON(va->va_start < vstart)
1170  BUG_ON(va->va_end > vend)
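Lines 1153-1170 pick the address, fill in the new vmap_area and publish it in the busy tree; a sketch, assuming __alloc_vmap_area(), vmap_area_root and vmap_area_list as reconstructed above:

    /* vend returned here means the free tree had no suitable hole */
    addr = __alloc_vmap_area(size, align, vstart, vend);
    spin_unlock(&free_vmap_area_lock);

    if (unlikely(addr == vend))
        goto overflow;

    va->va_start = addr;
    va->va_end = addr + size;
    va->vm = NULL;                     /* not attached to a vm_struct yet */

    /* publish the area in the "busy" tree and the exported list */
    spin_lock(&vmap_area_lock);
    insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
    spin_unlock(&vmap_area_lock);

    BUG_ON(!IS_ALIGNED(va->va_start, align));
    BUG_ON(va->va_start < vstart);
    BUG_ON(va->va_end > vend);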
1172  ret = kasan_populate_vmalloc(addr, size)
1173  If ret Then
1174  free_vmap_area(va) (free the region of KVA just allocated by alloc_vmap_area)
1175  Return ERR_PTR(ret)
1178  Return va
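Lines 1172-1178 back the new range with KASAN shadow memory before returning; a sketch of this final success step under the same assumptions:

    /* map real shadow memory for the new range; undo the allocation on failure */
    ret = kasan_populate_vmalloc(addr, size);
    if (ret) {
        free_vmap_area(va);
        return ERR_PTR(ret);
    }

    return va;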
1180  overflow:
1181  If Not purged Then
1182  purge_vmap_area_lazy()
1183  purged = 1
1184  Go to retry
1187  If gfpflags_allow_blocking(gfp_mask) Then
1188  freed = 0
1189  blocking_notifier_call_chain( & vmap_notify_list, 0, & freed)
1190  If freed > 0 Then
1191  purged = 0
1192  Go to retry
1196  If Not (gfp_mask & __GFP_NOWARN) && printk_ratelimit() Then pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n", size) (__GFP_NOWARN suppresses allocation failure reports)
1200  kmem_cache_free(vmap_area_cachep, va)
1201  Return ERR_PTR(-EBUSY)
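The overflow path on lines 1180-1201 first retries after purging lazily freed areas, then asks vmap purge notifier users to release space, and only then gives up; a sketch, assuming vmap_notify_list as the blocking notifier chain referenced on line 1189:

    overflow:
        if (!purged) {
            /* flush lazily freed areas back to the free tree and retry once */
            purge_vmap_area_lazy();
            purged = 1;
            goto retry;
        }

        if (gfpflags_allow_blocking(gfp_mask)) {
            unsigned long freed = 0;

            /* ask registered users to release vmalloc space, then retry */
            blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
            if (freed > 0) {
                purged = 0;
                goto retry;
            }
        }

        if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
            pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
                size);

        kmem_cache_free(vmap_area_cachep, va);
        return ERR_PTR(-EBUSY);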
Caller
Name                  Describe
new_vmap_block        allocates a new vmap_block and occupies 2^order pages in this block
vm_map_ram            map pages linearly into kernel virtual address (vmalloc space)
__get_vm_area_node
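For context, a simplified sketch of how a caller such as __get_vm_area_node() consumes the return value; the surrounding setup (vm_struct allocation, guard page handling) is abbreviated here and the exact shape in mm/vmalloc.c may differ slightly:

    struct vmap_area *va;

    /* reserve [start, end) kernel virtual address space for the request */
    va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
    if (IS_ERR(va)) {
        kfree(area);
        return NULL;
    }

    /* bind the vmap_area to the vm_struct (va->vm = area, area->addr = va->va_start, ...) */
    setup_vmalloc_vm(area, va, flags, caller);

    return area;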