Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: block/blk-mq.c    Create Date: 2022-07-27 18:47:45
Last Modify: 2020-03-17 23:18:05    Copyright©Brick

Function name: blk_poll - poll for IO completions
@q: the queue
@cookie: cookie passed back at IO submission time
@spin: whether to spin for completions
Description: Poll for completions on the passed in queue. Returns number of completed entries found.

Prototype: int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)

Return type: int

Parameters:

Type                      Name
struct request_queue *    q
blk_qc_t                  cookie
bool                      spin
3471  If !blk_qc_t_valid(cookie), or QUEUE_FLAG_POLL ("IO polling enabled if set", one of the QUEUE_* queue flags) is not set on q (test_bit), return 0.
3475  If current->plug is set, flush the plugged requests out to the queue first.
3478  hctx = the hw dispatch queue q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)] encoded in the cookie.
3487  If blk_mq_poll_hybrid(q, hctx, cookie) succeeds, return 1.
3490  Increment hctx->poll_considered (@poll_considered: count of times blk_poll() was called).
3492  state = current->state
3493  Loop:
3496  Increment hctx->poll_invoked (@poll_invoked: count of how many requests blk_poll() polled).
3498  ret = q->mq_ops->poll(hctx)
3499  If ret > 0:
3502  Return ret.
3505  If signal_pending_state(state, current), __set_current_state(TASK_RUNNING).
3508  If state == TASK_RUNNING, return 1.
3510  If ret < 0 or !spin, break out of the loop.
3512  cpu_relax()
3513  Loop while !need_resched().
3515  __set_current_state(TASK_RUNNING)
3516  Return 0.
Callers:
Name / Description
blkdev_iopoll
__blkdev_direct_IO
dio_await_one — Wait for the next BIO to complete. Remove it and return it. NULL is returned once all BIOs have been completed. This must only be called once all bios have been issued so that dio->refcount can only decrease.
iomap_dio_iopoll
iomap_dio_rw — iomap_dio_rw() always completes O_[D]SYNC writes regardless of whether the IO is being issued as AIO or not. This allows us to optimise pure data writes to use REQ_FUA rather than requiring generic_write_sync() to issue a REQ_FLUSH post write.