Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: arch/x86/kernel/espfix_64.c    Create Date: 2022-07-28 07:42:27
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:init_espfix_ap

Proto:void init_espfix_ap(int cpu)

Type:void

Parameter:

Type    Parameter Name
int     cpu
143  If likely(per_cpu(espfix_stack, cpu)) Then Return — espfix_stack holds the *bottom* address of this CPU's espfix stack; a nonzero value means it is already initialized.
146  addr = espfix_base_addr(cpu) — returns the bottom address of the espfix stack for this specific CPU. The math allows for a non-power-of-two ESPFIX_STACK_SIZE, in which case some padding at the end of each page must be accounted for.
147  page = cpu / ESPFIX_STACKS_PER_PAGE
150  stack_page = READ_ONCE(espfix_pages[page])
151  If likely(stack_page) Then Go to done — another CPU has already set up this shared page.
154  mutex_lock(&espfix_init_mutex) — the initialization mutex; a comment in the source asks whether it should be a spinlock.
157  stack_page = READ_ONCE(espfix_pages[page])
158  If stack_page Then Go to unlock_done — we raced on the lock and another CPU finished the setup first.
161  node = cpu_to_node(cpu)
162  ptemask = __supported_pte_mask
164  pud_p = &espfix_pud_page[pud_index(addr)]
165  pud = *pud_p
166  If Not pud_present(pud) Then
167  page = alloc_pages_node(node, ..., 0) — allocates a page, preferring the node given as nid; when nid == NUMA_NO_NODE the current CPU's closest node is preferred, otherwise the node must be valid and online.
169  pmd_p = page_address(page)
170  pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask))
171  paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT) — the shift by PAGE_SHIFT converts the physical address to a page frame number.
172  For each n < ESPFIX_PUD_CLONES: set_pud(&pud_p[n], pud) — clone the new entry into every alias of the espfix region.
176  pmd_p = pmd_offset(&pud, addr)
177  pmd = *pmd_p
178  If Not pmd_present(pmd) Then
179  page = alloc_pages_node(node, ..., 0) — as on line 167, allocates a page preferring this CPU's node.
181  pte_p = page_address(page)
182  pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask))
183  paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT) — again, the shift converts the physical address to a page frame number.
184  For each n < ESPFIX_PMD_CLONES: set_pmd(&pmd_p[n], pmd)
188  pte_p = pte_offset_kernel( & pmd, addr)
189  stack_page = page_address(alloc_pages_node(node, ..., 0)) — allocates the actual espfix stack page, again preferring this CPU's node.
194  pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask))
195  For each n < ESPFIX_PTE_CLONES: set_pte(&pte_p[n * PTE_STRIDE], pte)
199  WRITE_ONCE(espfix_pages[page], stack_page)
201  unlock_done :
202  mutex_unlock(&espfix_init_mutex)
203  done :
204  per_cpu(espfix_stack, cpu) = addr — record the bottom address of this CPU's espfix stack.
205  per_cpu(espfix_waddr, cpu) = stack_page + (addr & ~PAGE_MASK)
Caller
Name    Description
init_espfix_bsp
do_boot_cpu — NOTE: on most systems this takes a PHYSICAL APIC ID, but on multiquad systems (i.e., clustered APIC addressing mode) it is a LOGICAL APIC ID. Returns zero if the CPU booted OK, else an error code from ->wakeup_secondary_cpu.