Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/trace/trace.c  Create Date: 2022-07-28 11:58:51
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: trace_buffered_event_enable - enable buffering events. When events are being filtered, it is quicker to write the event data into a temporary buffer if there is a likely chance that it will not be committed.

Proto:void trace_buffered_event_enable(void)

Type:void

Parameter:Nothing

2479  WARN_ON_ONCE(!mutex_is_locked(&event_mutex))
2481  If trace_buffered_event_ref++ Then Return (already enabled; just count the new user)
2484  for_each_tracing_cpu(cpu)
2485  page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY, 0), allocating the page on the node closest to this CPU
2487  If Not page Then Go to failed
2490  event = page_address(page)
2491  memset(event, 0, sizeof(*event))
2493  per_cpu(trace_buffered_event, cpu) = event
2495  preempt_disable(), which also acts as a barrier around the per-CPU sanity check below
2496  If cpu == smp_processor_id() && this_cpu_read(trace_buffered_event) != per_cpu(trace_buffered_event, cpu) Then WARN_ON_ONCE(1)
2500  preempt_enable()
2503  Return
2504  failed:
2505  trace_buffered_event_disable(), freeing the temporary buffers already allocated
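The control flow above can be modeled in plain userspace C: a reference count guards allocation so only the first enable does work, each CPU gets one zeroed page-sized buffer, and the failure label reuses the disable routine to release partial state. This is a hedged sketch; `buffered_event_enable`, `alloc_buffer`, the `fail_at` hook, and the fixed `NCPUS` array are illustrative stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stdlib.h>

#define NCPUS 4

static void *buffered_event[NCPUS];     /* stand-in for the per-CPU pointer */
static int buffered_event_ref;
static int fail_at = -1;                /* test hook: CPU whose allocation fails */

static void *alloc_buffer(int cpu)
{
    if (cpu == fail_at)
        return NULL;                    /* simulate an allocation failure */
    return calloc(1, 4096);             /* one zeroed page-sized buffer */
}

static void buffered_event_disable(void)
{
    if (--buffered_event_ref)
        return;                         /* still has users; keep buffers */
    for (int cpu = 0; cpu < NCPUS; cpu++) {
        free(buffered_event[cpu]);      /* free(NULL) is a no-op */
        buffered_event[cpu] = NULL;
    }
}

static int buffered_event_enable(void)
{
    if (buffered_event_ref++)
        return 0;                       /* already enabled: just count the user */

    for (int cpu = 0; cpu < NCPUS; cpu++) {
        buffered_event[cpu] = alloc_buffer(cpu);
        if (!buffered_event[cpu])
            goto failed;
    }
    return 0;

failed:
    buffered_event_disable();           /* drops the ref and frees partial state */
    return -1;
}
```

Reusing the disable routine on the failure path is the same design choice the kernel function makes: the half-built state after a mid-loop allocation failure is exactly the state disable already knows how to tear down.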
Caller
Name                              Describe
__ftrace_event_enable_disable
event_set_filtered_flag