Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/rcu/tree_exp.h  Create Date: 2022-07-28 10:28:29
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: sync_rcu_exp_select_node_cpus. Select the CPUs within the specified rcu_node that the upcoming expedited grace period needs to wait for.

Proto: static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)

Type: void

Parameter:

Type                  Parameter Name
struct work_struct *  wp
339  rewp = container_of(wp, struct rcu_exp_work, rew_work) (container_of: cast a member of a structure out to the containing structure)
341  rnp = container_of(rewp, struct rcu_node, rew)
343  raw_spin_lock_irqsave_rcu_node(rnp, flags)
346  mask_ofl_test = 0
(loop: for each CPU whose bit is set in rnp->expmask)
348  mask = leaf_node_cpu_bit(rnp, cpu) (bitmasks in an rcu_node cover the interval [grplo, grphi] of CPU IDs and are indexed relative to that interval rather than the global CPU ID space; this yields the bit for this CPU in node-local masks)
349  rdp = per_cpu_ptr(&rcu_data, cpu)
352  If raw_smp_processor_id() == cpu || !(rnp->qsmaskinitnext & mask) Then (this is the current CPU, or the CPU is offline)
354  mask_ofl_test |= mask
355  Else
356  snap = rcu_dynticks_snap(rdp) (snapshot the ->dynticks counter with full ordering so as to allow stable comparison with past and future snapshots)
357  If rcu_dynticks_in_eqs(snap) Then mask_ofl_test |= mask (the snapshot indicates RCU is in an extended quiescent state)
359  Else rdp->exp_dynticks_snap = snap (record the snapshot to double-check the need for an IPI later)
363  mask_ofl_ipi = rnp->expmask & ~mask_ofl_test (CPUs still needing to check in, minus those already known quiescent)
370  If rcu_preempt_has_tasks(rnp) Then rnp->exp_tasks = rnp->blkd_tasks.next (also wait for tasks blocked within preempted RCU read-side critical sections)
372  raw_spin_unlock_irqrestore_rcu_node(rnp, flags)
(loop: for each CPU whose bit is set in mask_ofl_ipi, IPI it to request an expedited quiescent state)
376  mask = leaf_node_cpu_bit(rnp, cpu)
377  rdp = per_cpu_ptr(&rcu_data, cpu)
379  retry_ipi :
380  If rcu_dynticks_in_eqs_since(rdp, rdp->exp_dynticks_snap) Then (the CPU has spent some time in an extended quiescent state since the snapshot was taken, so no IPI is needed)
381  mask_ofl_test |= mask
382  Continue
384  If get_cpu() == cpu Then (never IPI ourself; preemption is disabled until put_cpu())
385  put_cpu()
386  Continue
388  ret = smp_call_function_single(cpu, rcu_exp_handler, NULL, 0)
389  put_cpu()
390  If Not ret Then (IPI delivered; the target CPU will report its own quiescent state)
391  mask_ofl_ipi &= ~mask
392  Continue
395  raw_spin_lock_irqsave_rcu_node(rnp, flags) (IPI failed, likely a race with a CPU-hotplug operation)
396  If rnp->qsmaskinitnext & mask && rnp->expmask & mask Then (CPU is still online and still being waited on, so delay briefly and retry)
399  raw_spin_unlock_irqrestore_rcu_node(rnp, flags)
400  trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("selectofl")) (rcu_exp_gp_seq_endval: the value the expedited-grace-period counter will have at the end of the current grace period; TPS exports the string for tracing tools such as perf and trace-cmd)
401  schedule_timeout_uninterruptible(1)
402  Go to retry_ipi
405  If Not (rnp->expmask & mask) Then mask_ofl_ipi &= ~mask (CPU really is offline and its bit is already clear, so ignore it)
407  raw_spin_unlock_irqrestore_rcu_node(rnp, flags)
410  mask_ofl_test |= mask_ofl_ipi (fold in CPUs that went offline, so their quiescent states get reported)
411  If mask_ofl_test Then rcu_report_exp_cpu_mult(rnp, mask_ofl_test, false) (report the expedited quiescent state for multiple CPUs, all covered by this leaf rcu_node structure)