Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/locking/qspinlock.c  Create Date: 2022-07-28 09:51:42
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: queued_spin_lock_slowpath - acquire the queued spinlock
@lock: Pointer to queued spinlock structure
@val: Current value of the queued spinlock 32-bit word (queue tail, pending bit, lock value)
The source comment also carries an ASCII state diagram of the fast, slow and unlock paths, e.g. the uncontended transition (0,0,0) -> (0,0,1).

Proto:void queued_spin_lock_slowpath(struct qspinlock *lock, unsigned int val)

Type:void

Parameter:

Type                  Parameter Name
struct qspinlock *    lock
unsigned int          val
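As background for the @val parameter, the following is a hypothetical user-space decoder of the 32-bit lock word, following the NR_CPUS < 16K layout documented in include/asm-generic/qspinlock_types.h. The mask names and the helper itself are illustrative only, not kernel code (the kernel's own names are _Q_LOCKED_MASK, _Q_PENDING_MASK and _Q_TAIL_MASK):

    #include <stdint.h>
    #include <stdio.h>

    #define Q_LOCKED_MASK   0x000000ffu   /* bits  0- 7: locked byte            */
    #define Q_PENDING_MASK  0x0000ff00u   /* bits  8-15: pending byte           */
    #define Q_TAIL_IDX_MASK 0x00030000u   /* bits 16-17: tail index (node 0..3) */
    #define Q_TAIL_CPU_MASK 0xfffc0000u   /* bits 18-31: tail cpu + 1           */

    static void dump_qspinlock_val(uint32_t val)
    {
            unsigned int tail_cpu_plus1 = (val & Q_TAIL_CPU_MASK) >> 18;

            printf("locked=%u pending=%u tail_idx=%u tail_cpu=%d\n",
                   (unsigned int)(val & Q_LOCKED_MASK),
                   (unsigned int)!!(val & Q_PENDING_MASK),
                   (unsigned int)((val & Q_TAIL_IDX_MASK) >> 16),
                   (int)tail_cpu_plus1 - 1);   /* -1 means: no tail, queue empty */
    }

    int main(void)
    {
            dump_qspinlock_val(0x00000001u);   /* uncontended: only the locked byte set     */
            dump_qspinlock_val(0x000C0101u);   /* locked + pending + tail at cpu 2, index 0 */
            return 0;
    }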
320  BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)) - break the compile if NR_CPUS cannot be encoded in the tail CPU bits (FIXME: this should be fixed in the arch's Kconfig)
322  If pv_enabled() Then Go to pv_queue
325  If virt_spin_lock(lock) Then Return
334  If val == _Q_PENDING_VAL Then
335  cnt = _Q_PENDING_LOOPS
336  val = atomic_cond_read_relaxed(&lock->val, (VAL != _Q_PENDING_VAL) || !cnt--)
343  If val & ~_Q_LOCKED_MASK Then Go to queue
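The bounded pending-bit wait above (source lines 334-343) corresponds roughly to the following condensed excerpt of the v5.5 code; lock-event counters and most comments are dropped, so treat it as a sketch rather than an exact quote:

            /*
             * Wait for an in-progress pending->locked hand-over (0,1,0 -> 0,0,1),
             * but only for a bounded number of loops so forward progress is kept.
             */
            if (val == _Q_PENDING_VAL) {
                    int cnt = _Q_PENDING_LOOPS;
                    val = atomic_cond_read_relaxed(&lock->val,
                                                   (VAL != _Q_PENDING_VAL) || !cnt--);
            }

            /* Any contention beyond the locked byte: go queue. */
            if (val & ~_Q_LOCKED_MASK)
                    goto queue;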
351  val = queued_fetch_set_pending_acquire(lock)
360  If unlikely(val & ~_Q_LOCKED_MASK) Then
363  If Not (val & _Q_PENDING_MASK) Then clear_pending(lock) - clear the pending bit (*,1,* -> *,0,*); only undo PENDING if we set it
366  Go to queue
380  If val & _Q_LOCKED_MASK Then atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK))
388  clear_pending_set_locked(lock) - take ownership and clear the pending bit (*,1,0 -> *,0,1); lock stealing is not allowed when this function is used
390  Return
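The whole pending-bit fast path (source lines 351-390) reads roughly as follows in the v5.5 code, again condensed:

            /* 0,0,* -> 0,1,* : try to become the single "pending" waiter. */
            val = queued_fetch_set_pending_acquire(lock);

            /* Someone else already owns pending or the tail: undo and queue. */
            if (unlikely(val & ~_Q_LOCKED_MASK)) {
                    if (!(val & _Q_PENDING_MASK))
                            clear_pending(lock);    /* only undo PENDING if we set it */
                    goto queue;
            }

            /* 0,1,1 -> 0,1,0 : wait for the current owner to release the lock. */
            if (val & _Q_LOCKED_MASK)
                    atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));

            /* 0,1,0 -> 0,0,1 : take ownership and clear pending. */
            clear_pending_set_locked(lock);
            return;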
396  queue :
398  pv_queue :
399  node = this_cpu_ptr(&qnodes[0].mcs) - this CPU's first MCS queue node
400  idx = node->count++ - nesting count (task, softirq, hardirq, NMI), see the comment in qspinlock.c
401  tail = encode_tail(smp_processor_id(), idx) - we must be able to distinguish no-tail from a tail at cpu 0, index 0, therefore the cpu number is incremented by one
412  If unlikely(idx >= MAX_NODES) Then
414  When Not queued_spin_trylock(lock) (try to acquire the queued spinlock; returns 1 if acquired, 0 if not) cycle
415  cpu_relax()
416  Go to release
419  node = grab_mcs_node(node, idx)
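The MCS node selection and the too-deeply-nested fallback (source lines 399-419) look roughly like this in the v5.5 code, condensed:

    queue:
    pv_queue:
            node = this_cpu_ptr(&qnodes[0].mcs);
            idx  = node->count++;           /* nesting level: task/softirq/hardirq/NMI */
            tail = encode_tail(smp_processor_id(), idx);

            /*
             * Only 4 per-CPU nodes exist; if we are nested deeper than that
             * (e.g. nested NMIs), give up on MCS queueing and spin on the
             * lock word directly.
             */
            if (unlikely(idx >= MAX_NODES)) {
                    while (!queued_spin_trylock(lock))
                            cpu_relax();
                    goto release;
            }

            node = grab_mcs_node(node, idx);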
431  barrier() - compiler barrier: ensure the node->count increment above is ordered before the node initialisation below (the "volatile" in barrier() is due to gcc bugs)
433  node->locked = 0
434  node->next = NULL
435  pv_init_node(node)
442  If queued_spin_trylock(lock) Then Go to release - we touched a (possibly) cold per-CPU cacheline; attempt the trylock once more in the hope someone let go while we weren't watching
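Node initialisation and the opportunistic retry (source lines 431-442), condensed from the same source:

            barrier();              /* order the count++ above before the stores below */

            node->locked = 0;       /* will be set to 1 by our predecessor on hand-over */
            node->next   = NULL;
            pv_init_node(node);

            /*
             * We already touched a (possibly cold) per-CPU cacheline; retry the
             * trylock once in case the lock was released while we set up.
             */
            if (queued_spin_trylock(lock))
                    goto release;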
450  smp_wmb()
459  old = xchg_tail(lock, tail) - put in the new queue tail code word and retrieve the previous one (p,*,* -> n,*,*); the xchg() heads an address dependency
460  next = NULL
466  If old & _Q_TAIL_MASK Then
467  prev = decode_tail(old)
470  WRITE_ONCE(prev->next, node) - link @node into the waitqueue
472  pv_wait_node(node, prev)
473  arch_mcs_spin_lock_contended(&node->locked) - spin until the predecessor hands over queue headship; smp_cond_load_acquire() provides the acquire semantics required so that subsequent operations happen after the hand-over (architectures such as ARM64 may use a wait instruction instead of pure spinning)
481  next = READ_ONCE(node->next)
482  If next Then prefetchw(next) - prefetch the successor's node for writing (3dnow prefetch on x86) to get an exclusive cache line and avoid one state transition in the cache-coherency protocol during the upcoming MCS unlock
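Publishing the new tail and linking behind a predecessor (source lines 450-482), condensed from the same source:

            smp_wmb();                      /* node init must be visible before the tail is published */

            old  = xchg_tail(lock, tail);   /* p,*,* -> n,*,* */
            next = NULL;

            if (old & _Q_TAIL_MASK) {       /* there was a previous tail: we have a predecessor */
                    prev = decode_tail(old);

                    WRITE_ONCE(prev->next, node);   /* link ourselves into the waitqueue */

                    pv_wait_node(node, prev);
                    arch_mcs_spin_lock_contended(&node->locked);    /* wait to become queue head */

                    /* Opportunistically prefetch the successor for the later hand-over. */
                    next = READ_ONCE(node->next);
                    if (next)
                            prefetchw(next);
            }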
507  If val = pv_wait_head_or_lock(lock, node) Then Go to locked
510  val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK))
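The head-of-queue wait (source lines 507-510), roughly as in the v5.5 code:

            /*
             * We are at the head of the waitqueue: wait for both the owner and a
             * possible pending waiter to go away. In the paravirt case,
             * pv_wait_head_or_lock() may already return with the lock held.
             */
            if ((val = pv_wait_head_or_lock(lock, node)))
                    goto locked;

            val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));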
512  locked :
534  If (val & _Q_TAIL_MASK) == tail Then
535  If atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL) Then Go to release
544  set_locked(lock) - set the lock bit and own the lock (*,*,0 -> *,0,1)
549  If Not next Then next = smp_cond_load_relaxed(&node->next, (VAL)) - spin until the successor has linked itself in; no ordering guarantees are needed here
552  arch_mcs_spin_unlock_contended(&next->locked) - hand queue headship to the successor; smp_store_release() provides a memory barrier to ensure all operations in the critical section have completed before unlocking
553  pv_kick_node(lock, next)
555  release :
559  __this_cpu_dec(qnodes[0].mcs.count) - release this CPU's MCS node
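The final lock claim, MCS hand-over and node release (source lines 512-559), condensed from the v5.5 code:

    locked:
            /*
             * n,0,0 -> 0,0,1 : if we are still the tail and nobody is pending,
             * clear the tail and grab the lock in a single cmpxchg (uncontended).
             */
            if ((val & _Q_TAIL_MASK) == tail) {
                    if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
                            goto release;
            }

            /* Somebody queued behind us: just set the locked byte... */
            set_locked(lock);

            /* ...wait until the successor has linked itself, then hand over. */
            if (!next)
                    next = smp_cond_load_relaxed(&node->next, (VAL));

            arch_mcs_spin_unlock_contended(&next->locked);
            pv_kick_node(lock, next);

    release:
            __this_cpu_dec(qnodes[0].mcs.count);    /* free our per-CPU MCS node */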
Caller
Name                Description
queued_spin_lock    queued_spin_lock - acquire a queued spinlock; @lock: Pointer to queued spinlock structure