Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/cpumask.h  Create Date: 2022-07-27 06:38:52
Last Modified: 2020-03-12 14:18:49  Copyright © Brick

Function name: cpumask_set_cpu (set a CPU's bit in a cpumask)

Prototype: static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)

Return type: void

Parameters:

Type              Name
unsigned int      cpu
struct cpumask *  dstp
327  set_bit - Atomically set a bit in memory. @nr: the bit to set. @addr: the address to start counting from. This is a relaxed atomic operation (no implied memory barriers). Note that @nr may be almost arbitrarily large; this function is not restricted to acting on a single-word quantity.
Callers

Name                Description
init_test_configurtion
cpu_rmap_update - update CPU rmap following a change of object affinity. @rmap: CPU rmap to update. @index: Index of object whose affinity changed. @affinity: New CPU affinity of object
__cache_amd_cpumap_setup
__cache_cpumap_setup
mce_device_create - Per CPU device init. All of the CPUs still share the same bank device
domain_add_cpu - Add a cpu to a resource's domain list. If an existing domain in the resource r's domain list matches the cpu's resource id, add the cpu in the domain. Otherwise, a new domain is allocated and inserted into the right position
resctrl_online_cpu
set_cache_qos_cfg
reset_all_ctrls
update_domains
smp_callin - Report back to the Boot Processor during boot time or to the caller processor during CPU online.
set_cpu_sibling_map
do_boot_cpu - NOTE: on most systems this is a PHYSICAL apic ID, but on multiquad (i.e. clustered apic addressing mode), this is a LOGICAL apic ID. Returns zero if CPU booted OK, else error code from ->wakeup_secondary_cpu.
disable_smp - Fall back to non-SMP mode after errors. RED-PEN audit/test this more. I bet there is more state messed up here.
native_smp_prepare_boot_cpu - Early setup to make printk work.
init_x2apic_ldr
local_ipi
wq_numa_init
cpupri_set - update the CPU priority setting. @cp: The cpupri context. @cpu: The target CPU. @newpri: The priority (INVALID-RT99) to assign to this CPU. Note: Assumes cpu_rq(cpu)->lock is locked. Returns: (void)
cpudl_find - find the best (later-dl) CPU in the system. @cp: the cpudl max-heap context. @p: the task. @later_mask: a mask to fill in with the selected CPUs (or NULL). Returns: int - CPUs were found
cpudl_clear - remove a CPU from the cpudl max-heap. @cp: the cpudl max-heap context. @cpu: the target CPU. Notes: assumes cpu_rq(cpu)->lock is locked. Returns: (void)
cpudl_set_freecpu - Set the cpudl.free_cpus. @cp: the cpudl max-heap context. @cpu: rd attached CPU
rq_attach_root
build_balance_mask - Build the balance mask; it contains only those CPUs that can arrive at this group and should be considered to continue balancing
get_group - Package topology (also see the load-balance blurb in fair
sched_domains_numa_masks_set
irq_percpu_enable
irq_spread_init_one
build_node_to_cpumask
tick_device_uses_broadcast - Check if the device is dysfunctional and a placeholder, which needs to be handled by the broadcast device.
tick_broadcast_control - Enable/disable or force broadcast mode. @mode: The selected broadcast mode. Called when the system enters a state where affected tick devices might stop. Note: TICK_BROADCAST_FORCE cannot be undone.
hardlockup_detector_perf_disable - Disable the local event
__ring_buffer_alloc - allocate a new ring_buffer. @size: the size in bytes per cpu that is needed. @flags: attributes to set for the ring buffer. Currently the only flag that is available is the RB_FL_OVERWRITE flag
trace_rb_cpu_prepare - We only allocate new buffers, never free them if the CPU goes down. If we were to free the buffer, then the user would lose any trace that was in the buffer.
test_cpu_buff_start
move_to_next_cpu
start_kthread - Kick off the hardware latency sampling/detector kthread. This starts the kernel thread that will sit and sample the CPU timestamp counter (TSC or similar) and look for potential hardware latencies.
perf_event_init_cpu
drain_all_pages - Spill all the per-cpu pages from all CPUs back into the buddy allocator. When the zone parameter is non-NULL, spill just the single zone's pages. Note that this can be extremely slow as the draining happens in a workqueue.
blk_mq_map_swqueue
set_cpu_possible
set_cpu_present
set_cpu_active