Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/trace/ring_buffer.c  Create Date: 2022-07-28 11:53:36
Last Modify: 2020-03-17 19:30:04  Copyright © Brick

Name:rb_allocate_cpu_buffer

Proto:static struct ring_buffer_per_cpu *rb_allocate_cpu_buffer(struct ring_buffer *buffer, long nr_pages, int cpu)

Type: struct ring_buffer_per_cpu *

Parameter:

Type                    Name
struct ring_buffer *    buffer
long                    nr_pages
int                     cpu
1293  cpu_buffer = kzalloc_node(): allocate a zeroed, cache-line-aligned ring_buffer_per_cpu structure from the memory node of the given CPU
1295  If Not cpu_buffer Then Return NULL
1298  cpu_buffer->cpu = cpu
1299  cpu_buffer->buffer = buffer
1300  raw_spin_lock_init( & cpu_buffer->reader_lock): the lock that serializes readers
1301  lockdep_set_class( & cpu_buffer->reader_lock, buffer->reader_lock_key)
1302  cpu_buffer->lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED
1303  INIT_WORK( & cpu_buffer->update_pages_work, update_pages_handler)
1304  init_completion( & cpu_buffer->update_done)
1305  init_irq_work( & cpu_buffer->irq_work.work, rb_wake_up_waiters): schedules deferred work to wake up any task blocked on the ring buffer waiters queue
1306  init_waitqueue_head( & cpu_buffer->irq_work.waiters)
1307  init_waitqueue_head( & cpu_buffer->irq_work.full_waiters)
1309  bpage = kzalloc_node(): allocate a zeroed buffer_page descriptor from the memory node of the given CPU
1311  If Not bpage Then Go to fail_free_buffer
1314  rb_check_bpage(cpu_buffer, bpage)
1316  cpu_buffer->reader_page = bpage
1317  page = alloc_pages_node(): allocate the actual data page, preferring the node of the given CPU (the current CPU's closest node when nid == NUMA_NO_NODE)
1318  If Not page Then Go to fail_free_reader
1320  bpage->page = page_address(page): the actual data page
1321  rb_init_page(bpage->page)
1323  INIT_LIST_HEAD( & cpu_buffer->reader_page->list)
1324  INIT_LIST_HEAD( & cpu_buffer->new_pages)
1326  ret = rb_allocate_pages(cpu_buffer, nr_pages)
1327  If ret < 0 Then Go to fail_free_reader
1330  cpu_buffer->head_page (read from head) = list_entry(cpu_buffer->pages, struct buffer_page, list): get the buffer_page that embeds the first list_head on the pages list
1332  cpu_buffer->tail_page (write to tail) = cpu_buffer->commit_page (committed pages) = cpu_buffer->head_page
1334  rb_head_page_activate(cpu_buffer): sets up the head page
1336  Return cpu_buffer
1338  fail_free_reader :
1339  free_buffer_page(cpu_buffer->reader_page)
1341  fail_free_buffer :
1342  kfree(cpu_buffer)
1343  Return NULL
Caller

Name                    Description
__ring_buffer_alloc     Allocate a new ring_buffer. @size: the size in bytes needed per CPU; @flags: attributes to set for the ring buffer (currently the only available flag is RB_FL_OVERWRITE).
trace_rb_cpu_prepare    CPU hotplug callback. Only allocates new buffers; buffers are never freed when a CPU goes down, since freeing one would lose any trace it still holds.