Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/locking/rwsem.c Create Date: 2022-07-28 09:48:15
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: Wait for the read lock to be granted

Proto:static struct rw_semaphore __sched *rwsem_down_read_slowpath(struct rw_semaphore *sem, int state)

Type: struct rw_semaphore *

Parameter:

Type                     Name
struct rw_semaphore *    sem
int                      state
997  adjustment = -RWSEM_READER_BIAS
1000  bool wake = false
1006  last_rowner = atomic_long_read(&sem->owner) (save the current owner value: the write owner or one of the read owners, plus flags describing the rwsem state; usable as a speculative check of whether the write owner is running on a CPU)
1007  If Not (last_rowner & RWSEM_READER_OWNED) Then last_rowner &= RWSEM_RD_NONSPINNABLE (bit 0 of the owner value, RWSEM_READER_OWNED, means the rwsem is owned by readers; bit 1, RWSEM_RD_NONSPINNABLE, means readers cannot spin on this lock)
1010  If Not rwsem_can_spin_on_owner(sem, RWSEM_RD_NONSPINNABLE) Then Go to queue
1016  atomic_long_add(-RWSEM_READER_BIAS, &sem->count)
1017  adjustment = 0
1018  If rwsem_optimistic_spin(sem, false) Then
1030  wake_up_q(&wake_q)
1032  Return sem
1033  Else if rwsem_reader_phase_trylock(sem, last_rowner) Then
1035  Return sem
1038  queue:
1039  waiter.task = current
1040  waiter.type = RWSEM_WAITING_FOR_READ
1041  waiter.timeout = jiffies + RWSEM_WAIT_TIMEOUT (the typical HZ value is either 250 or 1000, so the minimum waiting time in the wait queue before initiating the handoff protocol is at least 4 ms, or 1 jiffy if that is higher)
1043  raw_spin_lock_irq(&wait_lock)
1044  If list_empty(&sem->wait_list) Then
1058  Return sem
1060  adjustment += RWSEM_FLAG_WAITERS
1062  list_add_tail(&waiter.list, &sem->wait_list) (add the new waiter at the tail of the wait list)
1065  If adjustment Then count = atomic_long_add_return(adjustment, &sem->count)
1067  Else count = atomic_long_read(&sem->count)
1076  If Not (count & RWSEM_LOCK_MASK) Then
1077  clear_wr_nonspinnable(sem)
1078  wake = true
1080  If wake || (Not (count & RWSEM_WRITER_MASK) && (adjustment & RWSEM_FLAG_WAITERS)) Then rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q) (wake the processes blocked on the lock that can now run)
1084  raw_spin_unlock_irq(&wait_lock)
1085  wake_up_q(&wake_q)
1088  Loop (wait to be granted the lock):
1089  set_current_state(state)
1090  If Not smp_load_acquire(&waiter.task) Then (a cleared waiter.task means the lock was granted; this load pairs with the smp_store_release() in rwsem_mark_wake())
1092  Break
1096  If a signal is pending and waiter.task is still set Then Go to out_nolock
1100  Break
1102  schedule()
1106  __set_current_state(TASK_RUNNING)
1108  Return sem
1110  out_nolock:
1111  list_del(&waiter.list) (delete the waiter from the wait list)
1112  If list_empty(&sem->wait_list) Then
1113  atomic_long_andnot(RWSEM_FLAG_WAITERS | RWSEM_FLAG_HANDOFF, &sem->count)
1116  raw_spin_unlock_irq(&wait_lock)
1117  __set_current_state(TASK_RUNNING)
1119  Return ERR_PTR(-EINTR)
Caller

Name                     Describe
__down_read              lock for reading
__down_read_killable