Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/events/core.c  Create Date: 2022-07-28 13:35:54
Last Modify: 2022-05-20 07:50:19  Copyright © Brick

Name: A buffer can be mmap()ed multiple times; either directly through the same event, or through other events by use of perf_event_set_output(). In order to undo the VM accounting done by perf_mmap() we need to destroy the rb and all attached events.

Proto:static void perf_mmap_close(struct vm_area_struct *vma)

Type:void

Parameter:

Type                     Name
struct vm_area_struct *  vma
5674  event = vma->vm_file->private_data
5676  rb = ring_buffer_get(event)
5677  mmap_user = rb->mmap_user
5678  mmap_locked = rb->mmap_locked
5679  size = perf_data_size(rb)
5681  If event->pmu->event_unmapped Then event->pmu->event_unmapped(event, vma->vm_mm)
5689  If rb_has_aux(rb) && vma->vm_pgoff == rb->aux_pgoff && atomic_dec_and_mutex_lock( & rb->aux_mmap_count, & event->mmap_mutex) Then
5697  perf_pmu_output_stop(event)
5700  atomic_long_sub(rb->aux_nr_pages - rb->aux_mmap_locked, & mmap_user->locked_vm)
5701  atomic64_sub(rb->aux_mmap_locked, & vma->vm_mm->pinned_vm)
5704  rb_free_aux(rb)
5705  WARN_ON_ONCE(refcount_read( & rb->aux_refcount))
5707  mutex_unlock( & event->mmap_mutex)
5710  atomic_dec( & rb->mmap_count)
5712  If Not atomic_dec_and_mutex_lock( & event->mmap_count, & event->mmap_mutex) Then Go to out_put
5715  ring_buffer_attach(event, NULL)
5716  mutex_unlock( & event->mmap_mutex)
5719  If atomic_read( & rb->mmap_count) Then Go to out_put
5727  again:
5728  rcu_read_lock()
5729  For each event on rb->event_list , under RCU:
5730  If Not atomic_long_inc_not_zero( & event->refcount) Then
5735  Continue
5737  rcu_read_unlock()
5739  mutex_lock( & event->mmap_mutex)
5750  If event->rb == rb Then ring_buffer_attach(event, NULL)
5753  mutex_unlock( & event->mmap_mutex)
5754  put_event(event)
5760  Go to again
5762  rcu_read_unlock()
5773  atomic_long_sub((size >> PAGE_SHIFT) + 1 - mmap_locked, & mmap_user->locked_vm)
5775  atomic64_sub(mmap_locked, & vma->vm_mm->pinned_vm)
5776  free_uid(mmap_user)
5778  out_put:
5779  ring_buffer_put(rb)