Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/slub.c  Create Date: 2022-07-28 15:48:18
Last Modify: 2020-03-12 14:18:49

Name: ___slab_alloc

Slow path. The lockless freelist is empty or we need to perform debugging duties. Processing is still very fast if new objects have been freed to the regular freelist; in that case we simply take over the regular freelist as the lockless freelist and zap the regular freelist.

Proto:static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, unsigned long addr, struct kmem_cache_cpu *c)

Type: void *

Parameter:

Type                     Name
struct kmem_cache *      s
gfp_t                    gfpflags
int                      node
unsigned long            addr
struct kmem_cache_cpu *  c
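
Before the line-by-line walkthrough, it helps to see the overall shape of the function. The following is a condensed, non-authoritative sketch of ___slab_alloc() based on the v5.5 source, with statistics counters and the debug-only checks elided; it is an orientation aid, not the verbatim code.

	static void *___slab_alloc_sketch(struct kmem_cache *s, gfp_t gfpflags,
					  int node, unsigned long addr,
					  struct kmem_cache_cpu *c)
	{
		void *freelist;
		struct page *page = c->page;	/* the slab we are allocating from */

		if (!page)
			goto new_slab;
	redo:
		/* Step 1: is the current cpu slab usable (right node, pfmemalloc)? */
		if (!node_match(page, node) || !pfmemalloc_match(page, gfpflags)) {
			deactivate_slab(s, page, c->freelist, c);
			goto new_slab;
		}

		/* Step 2: retry the lockless freelist, then the page freelist. */
		freelist = c->freelist;
		if (!freelist)
			freelist = get_freelist(s, page);
		if (!freelist) {
			c->page = NULL;
			goto new_slab;
		}

	load_freelist:
		/* Step 3: hand out the first object, keep the rest per cpu. */
		c->freelist = get_freepointer(s, freelist);
		c->tid = next_tid(c->tid);
		return freelist;

	new_slab:
		/* Step 4: try a per-cpu partial slab, then a brand-new slab. */
		if (slub_percpu_partial(c)) {
			page = c->page = slub_percpu_partial(c);
			slub_set_percpu_partial(c, page);
			goto redo;
		}
		freelist = new_slab_objects(s, gfpflags, node, &c);
		if (!freelist)
			return NULL;	/* out of memory */
		goto load_freelist;	/* debug-only checks elided here */
	}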
2546  page = c->page, the slab page we are currently allocating from
2547  If there is no such page, go to new_slab
2549  redo:
2551  If unlikely(!node_match(page, node)), i.e. the objects in the per-cpu structure do not fit the NUMA locality expectations, then
2552  searchnode = node
2554  If node != NUMA_NO_NODE && !node_present_pages(node), then searchnode = node_to_mem_node(node)
2560  If the page does not match searchnode either, deactivate the cpu slab and go to new_slab
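
node_match(), referenced at line 2551, is a short helper defined earlier in mm/slub.c; in v5.5 it reads essentially as follows (comment added):

	static inline int node_match(struct page *page, int node)
	{
	#ifdef CONFIG_NUMA
		/* A specific node was requested and the page lives elsewhere. */
		if (node != NUMA_NO_NODE && page_to_nid(page) != node)
			return 0;
	#endif
		return 1;
	}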
2569  If unlikely(!pfmemalloc_match(page, gfpflags)) then
2570  deactivate_slab(): remove the cpu slab
2571  Go to new_slab
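
pfmemalloc_match() decides whether a slab page taken from the memory reserves may satisfy this allocation; in v5.5 it is essentially (comment added):

	static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
	{
		/*
		 * Pages from the pfmemalloc reserves may only serve callers
		 * that are themselves allowed to dip into the reserves.
		 */
		if (unlikely(PageSlabPfmemalloc(page)))
			return gfp_pfmemalloc_allowed(gfpflags);

		return true;
	}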
2575  freelist = c->freelist, the pointer to the next available object (re-checked here in case of cpu migration or interrupt)
2576  If freelist is set, go to load_freelist
2579  freelist = get_freelist(s, page): check page->freelist and either transfer it to the per-cpu freelist or deactivate the page. The page is still frozen if the return value is non-NULL; if NULL is returned, the page has been unfrozen.
2581  If there is still no freelist then
2582  c->page = NULL
2583  stat(s, DEACTIVATE_BYPASS): implicit deactivation
2584  Go to new_slab
2587  stat(s, ALLOC_REFILL): refill the cpu slab from the slab freelist
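
get_freelist(), called at line 2579, atomically takes over page->freelist with a double-word cmpxchg; in v5.5 it reads roughly as follows (comment added):

	static inline void *get_freelist(struct kmem_cache *s, struct page *page)
	{
		struct page new;
		unsigned long counters;
		void *freelist;

		do {
			freelist = page->freelist;
			counters = page->counters;

			new.counters = counters;
			VM_BUG_ON(!new.frozen);

			/* Claim every object; stay frozen only if some were left. */
			new.inuse = page->objects;
			new.frozen = freelist != NULL;

		} while (!__cmpxchg_double_slab(s, page,
			freelist, counters,
			NULL, new.counters,
			"get_freelist"));

		return freelist;
	}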
2589  load_freelist:
2595  VM_BUG_ON(!c->page->frozen): the page must be frozen for per-cpu allocations to work
2596  c->freelist = get_freepointer(s, freelist), the pointer to the next available object
2597  c->tid = next_tid(c->tid), advancing the globally unique transaction id
2598  Return freelist
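
The two helpers used here are small. get_freepointer() follows the free pointer embedded in each object at s->offset, and next_tid() advances the per-cpu transaction id that the lockless fast path uses to detect cpu migration; in v5.5 they are essentially (comments added):

	static inline void *get_freepointer(struct kmem_cache *s, void *object)
	{
		/* Each free object stores the address of the next free object. */
		return freelist_dereference(s, object + s->offset);
	}

	static inline unsigned long next_tid(unsigned long tid)
	{
		/*
		 * With CONFIG_PREEMPT, TID_STEP is a power of two no smaller
		 * than NR_CPUS, so tids never collide across cpus.
		 */
		return tid + TID_STEP;
	}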
2600  new_slab:
2602  If slub_percpu_partial(c) is set then
2603  page = c->page = slub_percpu_partial(c)
2605  stat(s, CPU_PARTIAL_ALLOC): used a cpu partial slab on allocation
2606  Go to redo
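
slub_percpu_partial() and its setter are simple accessors for the per-cpu partial list (include/linux/slub_def.h, under CONFIG_SLUB_CPU_PARTIAL); the code at line 2603 also pops the taken page off that list via slub_set_percpu_partial(), which this report elides:

	#define slub_percpu_partial(c)		((c)->partial)

	/* Advance the per-cpu partial list past the page just taken. */
	#define slub_set_percpu_partial(c, p)		\
	({						\
		slub_percpu_partial(c) = (p)->next;	\
	})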
2609  freelist = new_slab_objects(s, gfpflags, node, &c)
2611  If unlikely there is still no freelist then
2612  slab_out_of_memory(s, gfpflags, node)
2613  Return NULL
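
new_slab_objects(), called at line 2609, first tries the node partial lists and only then asks the page allocator for a fresh slab. A simplified sketch of the v5.5 code, with the debug warning dropped:

	static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
					     int node, struct kmem_cache_cpu **pc)
	{
		void *freelist;
		struct kmem_cache_cpu *c = *pc;
		struct page *page;

		/* Cheapest option first: steal from a node partial list. */
		freelist = get_partial(s, flags, node, c);
		if (freelist)
			return freelist;

		/* Slowest path: allocate and set up a brand-new slab page. */
		page = new_slab(s, flags, node);
		if (!page)
			return NULL;

		c = raw_cpu_ptr(s->cpu_slab);
		if (c->page)
			flush_slab(s, c);

		/* No other reference to the page yet, so no cmpxchg needed. */
		freelist = page->freelist;
		page->freelist = NULL;
		c->page = page;
		*pc = c;
		return freelist;
	}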
2616  page = c->page
2617  If likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)), go to load_freelist
2621  If kmem_cache_debug(s) && !alloc_debug_processing(s, page, freelist, addr), go to new_slab (the slab failed its checks; recover by ending the current slab)
2625  deactivate_slab(): remove the cpu slab
2626  Return freelist
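
For context, the first caller below, __slab_alloc(), is a thin wrapper that disables interrupts around ___slab_alloc() and refetches the per-cpu pointer in case the task was preempted and migrated; in v5.5 it reads:

	static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
				  unsigned long addr, struct kmem_cache_cpu *c)
	{
		void *p;
		unsigned long flags;

		local_irq_save(flags);
	#ifdef CONFIG_PREEMPT
		/*
		 * We may have been preempted and rescheduled on a different
		 * cpu before disabling interrupts. Need to reload cpu area
		 * pointer.
		 */
		c = this_cpu_ptr(s->cpu_slab);
	#endif

		p = ___slab_alloc(s, gfpflags, node, addr, c);
		local_irq_restore(flags);
		return p;
	}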
Caller
Name                   Description
__slab_alloc           A variant that disables interrupts and compensates for possible cpu changes by refetching the per-cpu area pointer.
kmem_cache_alloc_bulk  Note that interrupts must be enabled when calling this function.
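
To illustrate the second caller, here is a hypothetical kernel-module snippet using the bulk API; the cache name and struct foo are invented for the example. kmem_cache_alloc_bulk() reaches ___slab_alloc() whenever a per-cpu freelist runs dry during the bulk fill.

	#include <linux/slab.h>

	struct foo { int a, b; };	/* hypothetical object type */

	static int foo_demo(void)
	{
		struct kmem_cache *cache;
		void *objs[16];
		int n;

		cache = kmem_cache_create("foo_cache", sizeof(struct foo),
					  0, 0, NULL);
		if (!cache)
			return -ENOMEM;

		/* Interrupts must be enabled here; GFP_KERNEL may sleep. */
		n = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
		if (n)
			kmem_cache_free_bulk(cache, n, objs);

		kmem_cache_destroy(cache);
		return n ? 0 : -ENOMEM;
	}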