Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/slub.c    Create Date: 2022-07-28 15:48:02
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: deactivate_slab - Remove the cpu slab

Proto: static void deactivate_slab(struct kmem_cache *s, struct page *page, void *freelist, struct kmem_cache_cpu *c)

Type: void

Parameter:

Type                      Parameter
struct kmem_cache *       s
struct page *             page
void *                    freelist
struct kmem_cache_cpu *   c
2042  enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE }
2043  n = get_node(s, page_to_nid(page))
2044  lock = 0
2045  l = M_NONE, m = M_NONE
2047  tail = DEACTIVATE_TO_HEAD (by default the cpu slab is moved to the head of the partial list)
2051  If page->freelist (the page still holds a first free object, i.e. objects were freed to it by other CPUs) Then
2052  stat(s, DEACTIVATE_REMOTE_FREES) (slab contained remotely freed objects)
2053  tail = DEACTIVATE_TO_TAIL (cpu slab is moved to the tail of the partial list)
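Reconstructed as C, the head/tail decision above looks roughly like the excerpt below. It is a simplified sketch of this part of deactivate_slab() based on the annotated lines, not a verbatim copy of the v5.5.9 source, and the explanatory comment is a gloss rather than the kernel's own:

        int tail = DEACTIVATE_TO_HEAD;  /* by default requeue at the head of the partial list */

        if (page->freelist) {
                /*
                 * Other CPUs freed objects into this page while it was the
                 * cpu slab; note that and requeue the page at the tail so
                 * those (likely cache-cold) objects are handed out last.
                 */
                stat(s, DEACTIVATE_REMOTE_FREES);
                tail = DEACTIVATE_TO_TAIL;
        }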
2064  While freelist && (nextfree = get_freepointer(s, freelist)) loop
2068  Do
2070  counters = page->counters
2072  new.counters = counters
2073  new.inuse--
2074  VM_BUG_ON(!new.frozen)
2076  Repeat (from Do) until __cmpxchg_double_slab(s, page, prior, counters, freelist, new.counters, "drain percpu freelist") succeeds (interrupts must be disabled for its fallback code to work right)
2081  freelist = nextfree
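Lines 2064–2081 are stage one of the deactivation: every per-cpu free object except the last is pushed back onto the page's freelist with a lock-free double cmpxchg while the page is still frozen, so the node's list_lock is not needed yet. A sketch of the loop follows; the locals prior and counters and the old/new shadow copies of struct page are declared earlier in the real function even though the annotation does not list those lines:

        while (freelist && (nextfree = get_freepointer(s, freelist))) {
                void *prior;
                unsigned long counters;

                do {
                        /* Snapshot the page freelist and counters, then chain
                         * the current per-cpu object in front of the freelist. */
                        prior = page->freelist;
                        counters = page->counters;
                        set_freepointer(s, freelist, prior);
                        new.counters = counters;
                        new.inuse--;
                        VM_BUG_ON(!new.frozen);
                } while (!__cmpxchg_double_slab(s, page,
                        prior, counters,
                        freelist, new.counters,
                        "drain percpu freelist"));

                freelist = nextfree;
        }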
2098  redo:
2100  old.freelist = page->freelist (first free object)
2101  old.counters = page->counters
2102  VM_BUG_ON(!old.frozen)
2105  new.counters = old.counters
2106  If freelist Then
2107  new.inuse--
2108  set_freepointer(s, freelist, old.freelist)
2109  new.freelist = freelist
2110  Else new.freelist = old.freelist
2113  new.frozen = 0
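Stage two starts at the redo label: the current page state is captured in old, and the target state is built in new. The last remaining per-cpu object, if any, is chained in front of the page freelist, and the frozen flag is cleared. A reconstructed sketch of lines 2098–2113:

redo:
        /* Capture the current state; the page must still be frozen here. */
        old.freelist = page->freelist;
        old.counters = page->counters;
        VM_BUG_ON(!old.frozen);

        /* Determine the target state of the slab. */
        new.counters = old.counters;
        if (freelist) {
                /* Put the last per-cpu object back in front of the page freelist. */
                new.inuse--;
                set_freepointer(s, freelist, old.freelist);
                new.freelist = freelist;
        } else
                new.freelist = old.freelist;

        new.frozen = 0;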
2115  If Not new.inuse && n->nr_partial >= s->min_partial Then m = M_FREE
2117  Else if new.freelist Then
2118  m = M_PARTIAL
2119  If Not lock Then
2120  lock = 1
2126  spin_lock(&n->list_lock) (taking the spinlock removes the possibility that acquire_slab() sees a frozen slab page)
2128  Else
2129  m = M_FULL
2131  lock = 1 (reached only for debug caches, when kmem_cache_debug(s) && !lock)
2137  spin_lock(&n->list_lock) (also keeps diagnostic scans of full slabs from seeing frozen slabs)
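From the new state the function then picks the list the page should end up on: an empty page on a node that already holds at least min_partial partial slabs is freed (M_FREE), a page with remaining free objects goes to the partial list (M_PARTIAL), and a fully allocated page goes to the full list (M_FULL, which is only tracked for debug caches). The list_lock is taken before any list is touched. Reconstructed sketch; the kmem_cache_debug(s) test appears in the real function even though the annotation does not list that line:

        if (!new.inuse && n->nr_partial >= s->min_partial)
                m = M_FREE;
        else if (new.freelist) {
                m = M_PARTIAL;
                if (!lock) {
                        lock = 1;
                        /*
                         * Taking the spinlock removes the possibility that
                         * acquire_slab() sees a slab page that is frozen.
                         */
                        spin_lock(&n->list_lock);
                }
        } else {
                m = M_FULL;
                if (kmem_cache_debug(s) && !lock) {
                        lock = 1;
                        /*
                         * Also keeps diagnostic scans of full slabs from
                         * seeing any frozen slabs.
                         */
                        spin_lock(&n->list_lock);
                }
        }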
2141  If l != m Then
2142  If l == M_PARTIAL Then remove_partial(n, page)
2144  Else if l == M_FULL Then remove_full(s, n, page)
2147  If m == M_PARTIAL Then add_partial(n, page, tail)
2149  Else if m == M_FULL Then add_full(s, n, page)
2153  l = m
2154  If Not __cmpxchg_double_slab(s, page, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab") Then Go to redo
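If the mode chosen on a previous pass (l) differs from the new mode (m), the page is moved between the node's lists, and the unfreeze is then attempted with another double cmpxchg. When that cmpxchg fails the page changed underneath us, so the function jumps back to redo and recomputes everything, possibly moving the page between lists again. Reconstructed sketch of lines 2141–2154:

        if (l != m) {
                /* Take the page off the list it is currently on ... */
                if (l == M_PARTIAL)
                        remove_partial(n, page);
                else if (l == M_FULL)
                        remove_full(s, n, page);

                /* ... and put it on the list matching the target state. */
                if (m == M_PARTIAL)
                        add_partial(n, page, tail);
                else if (m == M_FULL)
                        add_full(s, n, page);
        }

        l = m;
        if (!__cmpxchg_double_slab(s, page,
                                old.freelist, old.counters,
                                new.freelist, new.counters,
                                "unfreezing slab"))
                goto redo;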
2160  If lock Then spin_unlock(&n->list_lock)
2163  If m == M_PARTIAL Then stat(s, tail)
2165  Else if m == M_FULL Then stat(s, DEACTIVATE_FULL) (cpu slab was full when deactivated)
2167  Else if m == M_FREE Then
2168  stat(s, DEACTIVATE_EMPTY) (cpu slab was empty when deactivated)
2169  discard_slab(s, page)
2170  stat(s, FREE_SLAB) (slab freed back to the page allocator)
2173  c->page = NULL (the slab from which we were allocating)
2174  c->freelist = NULL (pointer to the next available object)
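Once the cmpxchg has succeeded the lock is dropped, statistics are updated according to the final mode, an empty slab is handed back to the page allocator, and the kmem_cache_cpu is cleared so it no longer refers to the deactivated slab. Reconstructed sketch of the tail of the function:

        if (lock)
                spin_unlock(&n->list_lock);

        if (m == M_PARTIAL)
                stat(s, tail);          /* DEACTIVATE_TO_HEAD or DEACTIVATE_TO_TAIL */
        else if (m == M_FULL)
                stat(s, DEACTIVATE_FULL);
        else if (m == M_FREE) {
                stat(s, DEACTIVATE_EMPTY);
                discard_slab(s, page);  /* return the empty slab to the page allocator */
                stat(s, FREE_SLAB);
        }

        /* This cpu no longer owns a slab or a lockless freelist. */
        c->page = NULL;
        c->freelist = NULL;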
Caller
Name            Describe
flush_slab
___slab_alloc   Slow path. The lockless freelist is empty or we need to perform debugging duties. Processing is still very fast if new objects have been freed to the regular freelist; in that case we simply take over the regular freelist.
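For context, both callers pass the current cpu slab and its lockless freelist and then stop using them. The sketch below shows flush_slab() roughly as it appears around v5.5 (it may differ in detail from the exact v5.5.9 text); ___slab_alloc() invokes deactivate_slab(s, page, c->freelist, c) in the same way when the cpu slab cannot satisfy the allocation, for example on a NUMA node mismatch or before switching to a new slab:

static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
{
        stat(s, CPUSLAB_FLUSH);
        deactivate_slab(s, c->page, c->freelist, c);    /* give up the cpu slab */

        c->tid = next_tid(c->tid);      /* invalidate pending lockless fast-path transactions */
}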