Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/percpu.c  Create Date: 2022-07-28 14:27:29
Last Modify: 2022-05-23 13:52:24  Copyright © Brick

Name: pcpu_balance_workfn - Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one empty chunk.

Proto:static void pcpu_balance_workfn(struct work_struct *work)

Type:void

Parameter:

Type                   Name
struct work_struct *   work
1835  gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN (__GFP_NOWARN suppresses allocation failure reports)
1836  LIST_HEAD(to_free)
1837  free_head = &pcpu_slot[pcpu_nr_slots - 1]
1845  mutex_lock(&pcpu_alloc_mutex)
1846  spin_lock_irq(&pcpu_lock)
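The locking discipline at 1845-1846 nests the sleeping pcpu_alloc_mutex outside the pcpu_lock spinlock, and the spinlock is dropped before any work that may sleep. A minimal userspace sketch of that ordering, modeling both locks with pthread mutexes (the names are illustrative stand-ins, not the kernel API):

    #include <pthread.h>
    #include <stdio.h>

    /* Userspace model: alloc_mutex stands in for pcpu_alloc_mutex, and a
     * second pthread mutex stands in for the pcpu_lock spinlock. */
    static pthread_mutex_t alloc_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t pcpu_lock = PTHREAD_MUTEX_INITIALIZER;

    int main(void)
    {
        pthread_mutex_lock(&alloc_mutex); /* serializes the whole balance pass */
        pthread_mutex_lock(&pcpu_lock);   /* protects chunk lists and counters */
        /* ... select chunks to free while both locks are held ... */
        pthread_mutex_unlock(&pcpu_lock); /* dropped before work that can sleep */
        /* ... depopulate/destroy chunks outside the inner lock ... */
        pthread_mutex_unlock(&alloc_mutex);
        puts("balance pass done");
        return 0;
    }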
1849  WARN_ON(chunk->immutable) (immutable chunks permit no [de]population)
1852  If chunk == list_first_entry(free_head, struct pcpu_chunk, list) Then Continue (spare the first free chunk; list_first_entry gets the first element from a non-empty list)
1855  list_move(&chunk->list, &to_free) (delete the chunk from the free list and add it at the head of to_free)
1858  spin_unlock_irq(&pcpu_lock)
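Lines 1849-1858 walk the last (fully free) slot, spare its first chunk, and move every other free chunk onto a private to_free list with list_move. Below is a self-contained userspace sketch of that pattern, with a minimal re-implementation of the kernel's intrusive list primitives; all names are illustrative stand-ins:

    #include <stddef.h>
    #include <stdio.h>

    struct list_head { struct list_head *prev, *next; };

    static void list_init(struct list_head *h) { h->prev = h->next = h; }

    static void list_del_entry(struct list_head *e)
    {
        e->prev->next = e->next;
        e->next->prev = e->prev;
    }

    static void list_add_head(struct list_head *e, struct list_head *h)
    {
        e->next = h->next;
        e->prev = h;
        h->next->prev = e;
        h->next = e;
    }

    /* list_move: delete from one list and re-add at another list's head. */
    static void list_move(struct list_head *e, struct list_head *h)
    {
        list_del_entry(e);
        list_add_head(e, h);
    }

    struct chunk { int id; struct list_head list; };

    int main(void)
    {
        struct list_head free_head, to_free;
        struct chunk c[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
        struct list_head *pos;

        list_init(&free_head);
        list_init(&to_free);
        for (int i = 2; i >= 0; i--)              /* c[0] ends up first */
            list_add_head(&c[i].list, &free_head);

        /* Spare the first free chunk; move every later one to to_free,
         * as lines 1849-1858 do under pcpu_lock. */
        pos = free_head.next->next;
        while (pos != &free_head) {
            struct list_head *next = pos->next;
            list_move(pos, &to_free);
            pos = next;
        }

        for (pos = to_free.next; pos != &to_free; pos = pos->next) {
            struct chunk *ck = (struct chunk *)
                ((char *)pos - offsetof(struct chunk, list));
            printf("chunk %d queued for destruction\n", ck->id);
        }
        return 0;
    }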
1865  pcpu_depopulate_chunk(chunk, rs, re) (for each cpu, depopulate and unmap pages [rs, re) from the chunk)
1866  spin_lock_irq(&pcpu_lock)
1867  pcpu_chunk_depopulated(chunk, rs, re) (post-depopulation bookkeeping for pages [rs, re))
1868  spin_unlock_irq(&pcpu_lock)
1870  pcpu_destroy_chunk(chunk)
1871  cond_resched()
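Lines 1865-1871 depopulate each chunk queued on to_free one populated page region at a time before destroying it, with a cond_resched() between chunks. A small sketch of walking the set-bit regions of a page bitmap, assuming a plain int array in place of the kernel bitmap:

    #include <stdio.h>

    #define NR_PAGES 16

    int main(void)
    {
        /* 1 = populated page; two populated regions: [2,5) and [8,11). */
        int populated[NR_PAGES] = { 0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,0 };

        for (int rs = 0; rs < NR_PAGES; ) {
            while (rs < NR_PAGES && !populated[rs])  /* find region start */
                rs++;
            int re = rs;
            while (re < NR_PAGES && populated[re])   /* find region end */
                re++;
            if (rs < NR_PAGES)
                printf("depopulate pages [%d, %d)\n", rs, re);
            rs = re;                                 /* resume after region */
        }
        return 0;
    }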
1884  retry_pop:
1885  If pcpu_atomic_alloc_failed Then
1886  nr_to_pop = PCPU_EMPTY_POP_PAGES_HIGH
1888  pcpu_atomic_alloc_failed = false
1889  Else
1890  nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH - pcpu_nr_empty_pop_pages, 0, PCPU_EMPTY_POP_PAGES_HIGH) (pcpu_nr_empty_pop_pages is the number of empty populated pages, protected by pcpu_lock; the reserved chunk doesn't contribute to the count)
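Lines 1885-1890 pick the population target: after an atomic allocation failure the target jumps straight to PCPU_EMPTY_POP_PAGES_HIGH, otherwise it is the shortfall below the high watermark, clamped to [0, HIGH]. A sketch of that computation, assuming the v5.5 watermark value (illustrative; the real constant lives in mm/percpu.c):

    #include <stdio.h>

    /* Illustrative watermark; v5.5 defines PCPU_EMPTY_POP_PAGES_HIGH as 4. */
    #define PCPU_EMPTY_POP_PAGES_HIGH 4

    static int clamp_int(int val, int lo, int hi)
    {
        return val < lo ? lo : (val > hi ? hi : val);
    }

    int main(void)
    {
        int pcpu_nr_empty_pop_pages = 1;  /* empty populated pages on hand */
        int pcpu_atomic_alloc_failed = 0; /* set when an atomic alloc failed */
        int nr_to_pop;

        if (pcpu_atomic_alloc_failed)
            nr_to_pop = PCPU_EMPTY_POP_PAGES_HIGH;   /* refill all the way */
        else
            nr_to_pop = clamp_int(PCPU_EMPTY_POP_PAGES_HIGH -
                                  pcpu_nr_empty_pop_pages,
                                  0, PCPU_EMPTY_POP_PAGES_HIGH);

        printf("populate %d more empty pages\n", nr_to_pop); /* prints 3 */
        return 0;
    }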
1895  Loop over slots While slot < pcpu_nr_slots
1896  nr_unpop = 0
1898  If Not nr_to_pop Then Break
1901  spin_lock_irq(&pcpu_lock)
1903  nr_unpop = chunk->nr_pages - chunk->nr_populated (# of pages served by this chunk minus # of populated pages)
1904  If nr_unpop Then Break
1907  spin_unlock_irq(&pcpu_lock)
1909  If Not nr_unpop Then Continue
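Lines 1895-1909 scan the chunk slots, filling from the most packed slots first so that populating pages does not worsen fragmentation, and pick the first chunk whose nr_pages exceeds its nr_populated count. A simplified model assuming one chunk per slot (names illustrative):

    #include <stdio.h>

    /* Illustrative model: one chunk per slot, most packed slots first. */
    struct chunk { int nr_pages; int nr_populated; };

    int main(void)
    {
        struct chunk slots[3] = { { 16, 16 }, { 16, 12 }, { 16, 16 } };

        for (int slot = 0; slot < 3; slot++) {
            int nr_unpop = slots[slot].nr_pages - slots[slot].nr_populated;
            if (!nr_unpop)
                continue;          /* fully populated: try the next slot */
            printf("slot %d has %d unpopulated pages\n", slot, nr_unpop);
        }
        return 0;
    }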
1915  nr = min(re - rs, nr_to_pop)
1917  ret = pcpu_populate_chunk(chunk, rs, rs + nr, gfp) (for each cpu, populate and map pages [rs, rs + nr), passing gfp to the underlying memory allocator)
1918  If Not ret Then
1919  nr_to_pop -= nr
1920  spin_lock_irq(&pcpu_lock)
1921  pcpu_chunk_populated(chunk, rs, rs + nr) (post-population bookkeeping: update counters for the newly populated pages)
1922  spin_unlock_irq(&pcpu_lock)
1923  Else
1924  nr_to_pop = 0
1927  If Not nr_to_pop Then Break
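Lines 1915-1927 walk the unpopulated (clear-bit) regions of the chosen chunk and populate at most min(re - rs, nr_to_pop) pages per region, stopping once the budget is spent or an allocation fails. A userspace sketch of that budgeted walk, again modeling the bitmap as an int array:

    #include <stdio.h>

    #define NR_PAGES 16

    static int min_int(int a, int b) { return a < b ? a : b; }

    int main(void)
    {
        /* 1 = populated page; clear regions [0,2), [5,8), [11,16). */
        int populated[NR_PAGES] = { 0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,0 };
        int nr_to_pop = 4;   /* remaining population budget */

        for (int rs = 0; rs < NR_PAGES && nr_to_pop; ) {
            while (rs < NR_PAGES && populated[rs])   /* find region start */
                rs++;
            int re = rs;
            while (re < NR_PAGES && !populated[re])  /* find region end */
                re++;
            if (rs < NR_PAGES) {
                int nr = min_int(re - rs, nr_to_pop);   /* see 1915 */
                printf("populate pages [%d, %d)\n", rs, rs + nr);
                nr_to_pop -= nr;
            }
            rs = re;
        }
        return 0;
    }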
1932  If nr_to_pop Then
1934  chunk = pcpu_create_chunk(gfp)
1935  If chunk Then
1939  Go to retry_pop
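Lines 1932-1939 handle the case where existing chunks could not satisfy the target: a fresh empty chunk is created and control jumps back to retry_pop for another pass. A sketch of that retry shape; create_chunk() and populate_pass() are hypothetical stand-ins, not the kernel API:

    #include <stdio.h>

    static int chunks_left = 1;  /* model: one new chunk can still be made */

    /* Hypothetical stand-in for pcpu_create_chunk(). */
    static int create_chunk(void)
    {
        if (!chunks_left)
            return 0;
        chunks_left--;
        return 1;
    }

    /* Model one population pass: pretend each pass populates up to 2 pages. */
    static int populate_pass(int nr_to_pop)
    {
        return nr_to_pop > 2 ? nr_to_pop - 2 : 0;
    }

    int main(void)
    {
        int nr_to_pop = 3;

    retry_pop:
        nr_to_pop = populate_pass(nr_to_pop);
        if (nr_to_pop) {
            if (create_chunk())
                goto retry_pop;  /* fresh empty chunk: run another pass */
        }
        printf("unmet population target: %d\n", nr_to_pop);
        return 0;
    }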
1943  mutex_unlock(&pcpu_alloc_mutex)