Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code:kernel/dma/swiotlb.c Create Date:2022-07-28 10:36:06
Last Modify:2020-03-12 14:18:49 Copyright © Brick

Name:swiotlb_exit

Proto:void __init swiotlb_exit(void)

Type:void

Parameter:Nothing

384  If Not io_tlb_orig_addr Then Return
387  If late_alloc Then
388  free_pages((unsigned long)io_tlb_orig_addr, get_order(io_tlb_nslabs * sizeof(phys_addr_t)))
390  free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * sizeof(int)))
392  free_pages((unsigned long)phys_to_virt(io_tlb_start), get_order(io_tlb_nslabs << IO_TLB_SHIFT))
394  Else
395  memblock_free_late(__pa(io_tlb_orig_addr), PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)))
397  memblock_free_late(__pa(io_tlb_list), PAGE_ALIGN(io_tlb_nslabs * sizeof(int)))
399  memblock_free_late(io_tlb_start, PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT))
402  swiotlb_cleanup()
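
swiotlb_exit() tears down the software IO TLB (bounce buffer) pool. The two branches exist because the pool is created in one of two ways: reserved from memblock at early boot, or obtained from the page allocator when swiotlb is initialised late; each path must be released with the matching free routine. The identifiers referenced above are: io_tlb_orig_addr (the saved original addresses for each bounce slot), io_tlb_list (a free list describing the number of free entries available from each index), io_tlb_start (the physical start of the pool, used for the quick range check in swiotlb_tbl_unmap_single and swiotlb_tbl_sync_single_*), io_tlb_nslabs (the number of IO TLB blocks, in groups of 64, between io_tlb_start and io_tlb_end, command-line adjustable via setup_io_tlb_npages) and IO_TLB_SHIFT (the log of the size of each IO TLB slab).

As a readability aid, the following is a minimal C sketch of how the function body above reads in this version's kernel/dma/swiotlb.c, with explanatory comments added; treat it as an illustrative reconstruction rather than a verbatim copy of the source.

    void __init swiotlb_exit(void)
    {
            /* Nothing to tear down if the bounce-buffer pool was never set up. */
            if (!io_tlb_orig_addr)
                    return;

            if (late_alloc) {
                    /* Pool came from the page allocator (late init path),
                     * so release it with free_pages(). */
                    free_pages((unsigned long)io_tlb_orig_addr,
                               get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
                    free_pages((unsigned long)io_tlb_list,
                               get_order(io_tlb_nslabs * sizeof(int)));
                    free_pages((unsigned long)phys_to_virt(io_tlb_start),
                               get_order(io_tlb_nslabs << IO_TLB_SHIFT));
            } else {
                    /* Pool was reserved from memblock at early boot,
                     * so hand it back with memblock_free_late(). */
                    memblock_free_late(__pa(io_tlb_orig_addr),
                                       PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
                    memblock_free_late(__pa(io_tlb_list),
                                       PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
                    memblock_free_late(io_tlb_start,
                                       PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
            }
            /* Reset the io_tlb_start/io_tlb_end/io_tlb_nslabs bookkeeping. */
            swiotlb_cleanup();
    }

Note that io_tlb_start is already a physical address, so the last memblock_free_late() call passes it directly, while the other two take __pa() of kernel virtual pointers.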