Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: block/blk-iocost.c  Create Date: 2022-07-28 17:48:49
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: ioc_rqos_throttle

Proto: static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)

Type: void

Parameter:

Type              Name
struct rq_qos *   rqos
struct bio *      bio
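
Before the line-by-line walkthrough, a condensed view of the control flow may help. This is a paraphrase, not the verbatim kernel code:

    /*
     * ioc_rqos_throttle() in outline:
     *
     *   controller disabled or root cgroup  -> issue as-is
     *   activate iocg, compute abs_cost     -> zero-cost bios issue as-is
     *   cost = abs_cost scaled by 1/hw_inuse (cost in device vtime units)
     *   no waiters, no debt, within budget  -> commit and issue (fast path)
     *   must-issue bio or fatal signal      -> record debt, maybe delay, issue
     *   otherwise                           -> queue on iocg->waitq and sleep
     *                                          until iocg_wake_fn() commits us
     */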
1681  blkg = bio->bi_blkg (the blkg representing the association of the bio's css and request_queue; a bio going direct to device has no request_queue and hence no blkg)
1682  ioc = rqos_to_ioc(rqos)
1683  iocg = blkg_to_iocg(blkg)
1690  If Not ioc->enabled || Not iocg->level Then Return (bypass IOs when the controller is disabled or for the root cgroup, whose level is 0)
1694  If Not iocg_activate(iocg, &now) Then Return (always activate, so that even zero-cost IOs get some level of protection)
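
For context: iocg_activate() puts the iocg on ioc->active_list (arming the period timer if needed) and, when the group has been idle for many periods, pulls its stale vtime forward so accumulated budget cannot be spent as one large burst. A minimal self-contained sketch of that clamp, with hypothetical names, is:

    #include <stdint.h>

    /* Hypothetical illustration of the activation clamp: an iocg idle for
     * many periods has its vtime pulled up to (vnow - vmargin) so stale
     * budget cannot burst. */
    static uint64_t activate_clamp_vtime(uint64_t vtime, uint64_t vnow,
                                         uint64_t vmargin)
    {
            uint64_t vmin = vnow - vmargin;

            /* time_before64(vtime, vmin) in kernel terms */
            return (int64_t)(vtime - vmin) < 0 ? vmin : vtime;
    }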
1698  abs_cost = calc_vtime_cost(bio, iocg, false)
1699  If Not abs_cost Then Return
1702  iocg->cursor = bio_end_sector(bio) (the cursor records the last end sector and is used to detect random IO by seek distance)
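
calc_vtime_cost() implements a linear cost model: one per-operation coefficient, chosen by whether the seek distance from the previous IO's end (iocg->cursor) classifies the bio as sequential or random, plus a per-page coefficient. The real coefficients live in ioc->params.lcoefs[] and depend on the device's QoS parameters; the sketch below uses made-up values and page-unit positions for brevity:

    #include <stdint.h>

    /* Made-up coefficients; the real ones come from ioc->params.lcoefs[]. */
    #define COEF_SEQIO    100ULL   /* per sequential IO */
    #define COEF_RANDIO   400ULL   /* per random IO */
    #define COEF_PAGE      10ULL   /* per page transferred */
    #define RANDIO_PAGES 4096ULL   /* seek distance treated as random */

    static uint64_t vtime_cost(uint64_t start_page, uint64_t pages,
                               uint64_t cursor_page)
    {
            uint64_t seek = start_page > cursor_page ? start_page - cursor_page
                                                     : cursor_page - start_page;
            uint64_t cost = seek > RANDIO_PAGES ? COEF_RANDIO : COEF_SEQIO;

            return cost + pages * COEF_PAGE;  /* abs_cost, before hweight scaling */
    }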
1704  vtime = atomic64_read(&iocg->vtime) (vtime progresses as IOs are issued: if it lags the device vtime, the delta is the currently available IO budget; if it runs ahead, the overage)
1705  current_hweight(iocg, &hw_active, &hw_inuse)
1707  If hw_inuse < hw_active && time_after_eq64(vtime + ioc->inuse_margin_vtime, now.vnow) Then (part of the weight was donated via inuse but the iocg is running close to its budget, so restore its full weight)
1709  TRACE_IOCG_PATH(inuse_reset, iocg, &now, iocg->inuse, iocg->weight, hw_inuse, hw_active)
1711  spin_lock_irq(&ioc->lock)
1712  propagate_active_weight(iocg, iocg->weight, iocg->weight)
1713  spin_unlock_irq(&ioc->lock)
1714  current_hweight(iocg, &hw_active, &hw_inuse)
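
current_hweight() computes the iocg's hierarchical fractions: at each ancestor level the group's active (respectively inuse) weight is divided by the sum of the active weights at that level, and the per-level shares multiply down the tree, scaled to the fixed-point HWEIGHT_WHOLE (1 << 16 in this version). Illustrative arithmetic only, not the kernel's cached implementation:

    #include <stdint.h>

    #define HWEIGHT_WHOLE (1 << 16)  /* 100% in fixed point */

    /* my_weight[i] / level_sum[i] is this group's share at ancestor level
     * i; the hierarchical weight is the product of the shares. */
    static uint32_t hweight(const uint32_t *my_weight,
                            const uint32_t *level_sum, int levels)
    {
            uint64_t hw = HWEIGHT_WHOLE;

            for (int i = 0; i < levels; i++)
                    hw = hw * my_weight[i] / level_sum[i];
            return (uint32_t)hw;
    }

For example, a group with weight 100 under level sums of 200 at two levels gets HWEIGHT_WHOLE / 4, i.e. 25% of the device.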
1717  cost = abs_cost_to_cost(abs_cost, hw_inuse) (scale abs_cost to the inverse of hw_inuse: the lower the hierarchical weight, the more expensive each IO; must round up)
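
abs_cost_to_cost() is the scaling named on line 1717. In this version it is a round-up division by hw_inuse, shown here in plain C (the kernel uses DIV64_U64_ROUND_UP; HWEIGHT_WHOLE as defined above):

    /* Scale @abs_cost to the inverse of @hw_inuse: the lower the
     * hierarchical inuse weight, the more expensive each IO.  Round up
     * so budgets are never undercharged. */
    static uint64_t abs_cost_to_cost(uint64_t abs_cost, uint32_t hw_inuse)
    {
            return (abs_cost * HWEIGHT_WHOLE + hw_inuse - 1) / hw_inuse;
    }

With abs_cost = 1000 and hw_inuse = HWEIGHT_WHOLE / 4, cost = 4000: a group owning a quarter of the device pays four times the vtime per IO.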
1724  If Not waitqueue_active(&iocg->waitq) && Not atomic64_read(&iocg->abs_vdebt) && time_before_eq64(vtime + cost, now.vnow) Then (no one is waiting, there is no debt and we are within budget: issue right away; the lockless tests are racy, but the races are not systemic)
1727  iocg_commit_bio(iocg, bio, cost)
1728  Return
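
iocg_commit_bio() charges the bio: it stamps the scaled cost on the bio and advances the iocg's vtime cursor; the cost is added to done_vtime when the bio completes. Paraphrased from this version's source:

    static void iocg_commit_bio(struct ioc_gq *iocg, struct bio *bio, u64 cost)
    {
            bio->bi_iocost_cost = cost;       /* consumed on completion */
            atomic64_add(cost, &iocg->vtime);
    }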
1741  If bio_issue_as_root_blkg(bio) || fatal_signal_pending(current) Then (the bio must be issued regardless: remember abs_cost as debt instead of advancing vtime, so that iocg_kick_waitq() pays it off with the budget actually available each period)
1742  atomic64_add(abs_cost, &iocg->abs_vdebt)
1743  If iocg_kick_delay(iocg, &now, cost) Then blkcg_schedule_throttle(rqos->q, (bio->bi_opf & REQ_SWAP) == REQ_SWAP) (tell the task to check for accumulated throttling delay, charging swap IO to memory delay for PSI)
1746  Return
1762  spin_lock_irq(&ioc->lock)
1770  If unlikely(list_empty(&iocg->active_list)) Then (we activated above without synchronization; in the unlikely case the iocg has been deactivated since, just commit and issue)
1771  spin_unlock_irq(&ioc->lock)
1772  iocg_commit_bio(iocg, bio, cost)
1773  Return
1776  init_waitqueue_func_entry(&wait.wait, iocg_wake_fn)
1777  wait.wait.private = current
1778  wait.bio = bio
1779  wait.abs_cost = abs_cost
1780  wait.committed = false (will be set to true by the waker)
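
Lines 1776-1780 fill in a custom wait entry. Recording abs_cost rather than the scaled cost lets the waker re-scale it with the hweight current at wake-up time. The structure as it appears in this version:

    struct iocg_wait {
            struct wait_queue_entry wait;      /* func = iocg_wake_fn */
            struct bio              *bio;      /* the throttled bio */
            u64                     abs_cost;  /* cost before hweight scaling */
            bool                    committed; /* set true by the waker */
    };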
1782  __add_wait_queue_entry_tail(&iocg->waitq, &wait.wait)
1783  iocg_kick_waitq(iocg, &now) (schedules the wakeup timer if we are the first waiter)
1785  spin_unlock_irq(&ioc->lock)
1787  While true Loop
1788  set_current_state(TASK_UNINTERRUPTIBLE)
1789  If wait.committed Then Break
1791  io_schedule()
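
The loop above sleeps in TASK_UNINTERRUPTIBLE until the waker flips wait.committed. That waker is iocg_wake_fn(), which iocg_kick_waitq() invokes for each waiter as budget becomes available; sketched from this version, details paraphrased:

    static int iocg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
                            int flags, void *key)
    {
            struct iocg_wait *wait = container_of(wq_entry, struct iocg_wait, wait);
            struct iocg_wake_ctx *ctx = key;
            u64 cost = abs_cost_to_cost(wait->abs_cost, ctx->hw_inuse);

            ctx->vbudget -= cost;
            if (ctx->vbudget < 0)
                    return -1;                /* budget exhausted, stop waking */

            iocg_commit_bio(ctx->iocg, wait->bio, cost);

            /* always dequeue and commit before waking the task */
            list_del_init(&wq_entry->entry);
            wait->committed = true;

            default_wake_function(wq_entry, mode, flags, key);
            return 0;
    }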
1795  finish_wait(&iocg->waitq, &wait.wait) (clean up after waiting: set the current thread back to running and remove the wait entry from the waitqueue if still queued)