Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/locking/rwsem.c  Create Date: 2022-07-28 09:48:21
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: rwsem_down_write_slowpath - wait until we successfully acquire the write lock

Proto:static struct rw_semaphore *rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)

Type: struct rw_semaphore *

Parameter:

Type                   Parameter
struct rw_semaphore *  sem
int                    state
1145  ret = sem
1149  If rwsem_can_spin_on_owner(sem, RWSEM_WR_NONSPINNABLE) && rwsem_optimistic_spin(sem, true) Then
1152  Return sem
1160  disable_rspin = atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE
1166  waiter.task = current
1167  waiter.type = RWSEM_WAITING_FOR_WRITE
1168  waiter.timeout = jiffies + RWSEM_WAIT_TIMEOUT (at least 4 ms, or 1 jiffy if that is longer)
1170  raw_spin_lock_irq(&sem->wait_lock)
1173  wstate = If list_empty(&sem->wait_list) Then WRITER_FIRST Else WRITER_NOT_FIRST
1175  list_add_tail(&waiter.list, &sem->wait_list)
1178  If wstate == WRITER_NOT_FIRST Then
1179  count = atomic_long_read(&sem->count)
1189  If count & RWSEM_WRITER_MASK Then Go to wait
1192  rwsem_mark_wake(sem, count & RWSEM_READER_MASK ? RWSEM_WAKE_READERS : RWSEM_WAKE_ANY, &wake_q)
1196  If Not wake_q_empty(&wake_q) Then
1202  wake_up_q(&wake_q)
1203  wake_q_init(&wake_q)
1206  Else
1207  atomic_long_or(RWSEM_FLAG_WAITERS, &sem->count)
1210  wait:
1212  set_current_state(state)
1213  Loop
1216  If rwsem_try_write_lock(sem, wstate) Then Break
1219  raw_spin_unlock_irq(&sem->wait_lock)
1229  If wstate == WRITER_HANDOFF && rwsem_spin_on_owner(sem, RWSEM_NONSPINNABLE) == OWNER_NULL Then Go to trylock_again
1234  Loop (block until there are no active lockers)
      If signal_pending_state(state, current) Then Go to out_nolock
1238  schedule()
1245  If wstate == WRITER_HANDOFF Then Break
1253  If Not (count & RWSEM_LOCK_MASK) Then Break
1264  If wstate == WRITER_FIRST && (rt_task(current) || time_after(jiffies, waiter.timeout)) Then wstate = WRITER_HANDOFF, Break
1267  trylock_again:
1268  raw_spin_lock_irq(&sem->wait_lock)
1270  __set_current_state(TASK_RUNNING)
1271  list_del(&waiter.list)
1272  rwsem_disable_reader_optspin(sem, disable_rspin)
1273  raw_spin_unlock_irq(&sem->wait_lock)
1276  Return ret
1278  out_nolock:
1279  __set_current_state(TASK_RUNNING)
1280  raw_spin_lock_irq(&sem->wait_lock)
1281  list_del(&waiter.list)
1283  If unlikely(wstate == WRITER_HANDOFF) Then atomic_long_add(-RWSEM_FLAG_HANDOFF, &sem->count)
1286  If list_empty(&sem->wait_list) Then atomic_long_andnot(RWSEM_FLAG_WAITERS, &sem->count)
1288  Else rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q)
1290  raw_spin_unlock_irq(&sem->wait_lock)
1291  wake_up_q(&wake_q)
1294  Return ERR_PTR(-EINTR)
Caller
Name                     Describe
__down_write             lock for writing
__down_write_killable