Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: block/blk-mq.c  Create Date: 2022-07-28 17:12:24
Last Modify: 2020-03-17 23:18:05  Copyright © Brick

Name: blk_poll - poll for IO completions
@q: the queue
@cookie: cookie passed back at IO submission time
@spin: whether to spin for completions
Description: Poll for completions on the passed-in queue. Returns the number of completed entries found.

Proto:int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)

Type:int

Parameter:

Type                    Parameter Name
struct request_queue *  q
blk_qc_t                cookie
bool                    spin
3471  If Not blk_qc_t_valid(cookie) || Not test_bit(QUEUE_FLAG_POLL, &q->queue_flags) (IO polling enabled if the queue flag is set) Then Return 0
3475  If current->plug Then blk_flush_plug_list(current->plug, false)
3478  hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)] (the hardware dispatch queue this cookie maps to)
3487  If blk_mq_poll_hybrid(q, hctx, cookie) Then Return 1
3490  hctx->poll_considered++ (@poll_considered: counts how many times blk_poll() was called)
3492  state = current->state
3493  Do
3496  hctx->poll_invoked++ (@poll_invoked: counts how many requests blk_poll() polled)
3498  ret = q->mq_ops->poll(hctx)
3499  If ret > 0 Then
3502  Return ret
3505  If signal_pending_state(state, current) Then __set_current_state(TASK_RUNNING) (set_current_state() includes a barrier so the write of current->state is correctly serialised against the caller's subsequent test of whether to actually sleep)
3508  If current->state == TASK_RUNNING Then Return 1
3510  If ret < 0 || Not spin Then Break
3512  cpu_relax()
3513  Repeat the loop While Not need_resched()
3515  __set_current_state(TASK_RUNNING)
3516  Return 0
Caller
Name            Description
blkdev_iopoll
__blkdev_direct_IO
dio_await_one — Wait for the next BIO to complete. Remove it and return it. NULL is returned once all BIOs have been completed. This must only be called once all bios have been issued so that dio->refcount can only decrease.
iomap_dio_iopoll
iomap_dio_rw — iomap_dio_rw() always completes O_[D]SYNC writes regardless of whether the IO is being issued as AIO or not. This allows us to optimise pure data writes to use REQ_FUA rather than requiring generic_write_sync() to issue a REQ_FLUSH post write.