Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code:mm/vmalloc.c Create Date:2022-07-28 14:59:18
Last Modify:2020-03-12 14:18:49 Copyright © Brick

Name:new_vmap_block - allocates a new vmap_block and occupies 2^order pages in this block

Proto:static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)

Type:void *

Parameter:

Type          Name
unsigned int  order
gfp_t         gfp_mask
1499  node = numa_node_id(), the ID of the current NUMA node.
1501  vb = kmalloc_node(sizeof(struct vmap_block), gfp_mask & GFP_RECLAIM_MASK, node) - allocate the block descriptor on the local node. GFP_RECLAIM_MASK keeps only the caller's flags that affect watermark checking and reclaim behaviour (IO, FS) and drops placement hints such as HIGHMEM.
1503  If unlikely(!vb) Then Return ERR_PTR(-ENOMEM)
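Put together, lines 1499-1503 read roughly as the sketch below. It is reconstructed from the annotations above; the local-variable declarations are paraphrased rather than copied from v5.5.9.

        int node = numa_node_id();      /* allocate on the current NUMA node */
        struct vmap_block *vb;

        /* keep only the reclaim/watermark-related GFP flags from the caller */
        vb = kmalloc_node(sizeof(struct vmap_block),
                          gfp_mask & GFP_RECLAIM_MASK, node);
        if (unlikely(!vb))
                return ERR_PTR(-ENOMEM);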
1506  va = alloc_vmap_area(...) - allocate a region of kernel virtual address space (KVA) of the specified size and alignment within the given vstart..vend range; here one vmap block inside the vmalloc area.
1509  If IS_ERR(va) Then
1510  kfree(vb)
1511  Return ERR_CAST(va) - explicitly cast the error-valued vmap_area pointer to the function's void * return type.
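Sketch of lines 1506-1511. The argument list of alloc_vmap_area() (one VMAP_BLOCK_SIZE-sized, VMAP_BLOCK_SIZE-aligned region between VMALLOC_START and VMALLOC_END) is taken from the v5.5 source rather than spelled out by the annotation above.

        va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
                             VMALLOC_START, VMALLOC_END,
                             node, gfp_mask);
        if (IS_ERR(va)) {
                kfree(vb);              /* undo the kmalloc_node() above */
                return ERR_CAST(va);    /* propagate the error pointer */
        }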
1514  err = radix_tree_preload(gfp_mask) - load this CPU's radix_tree_node buffer with enough nodes so that the single insertion below cannot fail.
1515  If unlikely(err) Then
1516  kfree(vb)
1517  free_vmap_area(va) - free the region of KVA allocated by alloc_vmap_area.
1518  Return ERR_PTR(err)
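Sketch of the preload step and its error path (lines 1514-1518), assuming the helpers named above:

        err = radix_tree_preload(gfp_mask);     /* make the later insert unable to fail */
        if (unlikely(err)) {
                kfree(vb);
                free_vmap_area(va);             /* give the KVA region back */
                return ERR_PTR(err);
        }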
1521  vaddr = vmap_block_vaddr(va->va_start, 0) - the block's base virtual address (page offset 0).
1522  spin_lock_init(&vb->lock)
1523  vb->va = va
1525  BUG_ON(VMAP_BBMAP_BITS <= (1UL << order)) - at least something should be left free in the block.
1526  vb->free = VMAP_BBMAP_BITS - (1UL << order) - the 2^order pages handed to the caller are already in use.
1527  vb->dirty = 0
1528  vb->dirty_min = VMAP_BBMAP_BITS
1529  vb->dirty_max = 0 - together with dirty_min this marks the dirty range as empty.
1530  INIT_LIST_HEAD(&vb->free_list)
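Sketch of the initialization block (lines 1521-1530); the field names free, dirty, dirty_min, dirty_max and free_list follow struct vmap_block in v5.5.

        vaddr = vmap_block_vaddr(va->va_start, 0);
        spin_lock_init(&vb->lock);
        vb->va = va;
        /* at least something should be left free */
        BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
        vb->free = VMAP_BBMAP_BITS - (1UL << order);
        vb->dirty = 0;
        vb->dirty_min = VMAP_BBMAP_BITS;        /* empty dirty range: min > max */
        vb->dirty_max = 0;
        INIT_LIST_HEAD(&vb->free_list);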
1532  vb_idx = addr_to_vb_idx(va->va_start) - the block's index in the vmap block radix tree.
1533  spin_lock(&vmap_block_tree_lock) - vmap_block_tree is a radix tree of vmap blocks, indexed by address, used to quickly find a vmap block in the free path (the comment notes this could be avoided if the alloc API returned a "cookie" to pass to free).
1534  err = radix_tree_insert(&vmap_block_tree, vb_idx, vb)
1535  spin_unlock(&vmap_block_tree_lock)
1536  BUG_ON(err) - the preload above guarantees the insertion cannot fail.
1537  radix_tree_preload_end()
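Sketch of the radix-tree insertion (lines 1532-1537); vmap_block_tree and vmap_block_tree_lock are the globals described above.

        vb_idx = addr_to_vb_idx(va->va_start);
        spin_lock(&vmap_block_tree_lock);
        err = radix_tree_insert(&vmap_block_tree, vb_idx, vb);
        spin_unlock(&vmap_block_tree_lock);
        BUG_ON(err);                    /* cannot fail after radix_tree_preload() */
        radix_tree_preload_end();       /* drop the preload, re-enabling preemption */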
1539  vbq = &get_cpu_var(vmap_block_queue) - the per-CPU queue of free and dirty vmap blocks, used for allocation and flushing; get_cpu_var() also disables preemption.
1540  spin_lock(&vbq->lock)
1541  list_add_tail_rcu(&vb->free_list, &vbq->free) - add the new block to the tail of this CPU's RCU-protected free list.
1542  spin_unlock(&vbq->lock)
1543  put_cpu_var(vmap_block_queue) - counterpart of get_cpu_var(); re-enables preemption.
1545  Return vaddr
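Sketch of the final publication step and return (lines 1539-1545), assuming the per-CPU vmap_block_queue variable used by v5.5:

        vbq = &get_cpu_var(vmap_block_queue);   /* per-CPU queue, preemption disabled */
        spin_lock(&vbq->lock);
        list_add_tail_rcu(&vb->free_list, &vbq->free);
        spin_unlock(&vbq->lock);
        put_cpu_var(vmap_block_queue);          /* re-enable preemption */

        return vaddr;                           /* base virtual address of the new block */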
Caller
Name      Describe
vb_alloc