Function report
Source Code: lib/dump_stack.c
Create Date: 2022-07-28 06:15:39
Last Modify: 2020-03-12 14:18:49 | Copyright©Brick
Name: dump_stack
Proto: asmlinkage __visible void dump_stack(void)
Type: void
Parameter: Nothing

Line | Call
---|---
128 | __dump_stack()
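dump_stack() takes no arguments and returns nothing; it hands off to __dump_stack(), which prints the task banner and the backtrace of the current call chain to the kernel log. The fragment below is a simplified sketch of that wrapper shape, not the exact body of lib/dump_stack.c; the real function also serialises dumps from multiple CPUs so their output does not interleave.

```c
#include <linux/kernel.h>	/* asmlinkage, __visible */
#include <linux/irqflags.h>	/* local_irq_save()/local_irq_restore() */

static void __dump_stack(void);	/* static helper defined earlier in lib/dump_stack.c */

/*
 * Simplified sketch only: the upstream dump_stack() additionally
 * serialises against dumps running on other CPUs.
 */
asmlinkage __visible void dump_stack(void)
{
	unsigned long flags;

	local_irq_save(flags);		/* keep this CPU's dump contiguous */
	__dump_stack();			/* the call recorded at line 128 above */
	local_irq_restore(flags);
}
```

No setup is needed on the caller's side: any context that can printk() can call dump_stack(), which is how the callers listed below use it.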
Caller | Description |
---|---|
kobject_init | kobject_init() - Initialize a kobject structure |
kobject_add | kobject_add() - The main kobject add function. @kobj: the kobject to add; @parent: pointer to the parent of the kobject; @fmt: format to name the kobject with. The kobject name is set and added to the kobject hierarchy in this function. |
check_preemption_disabled | |
fail_dump | |
ubsan_epilogue | |
dotest | |
reserve_region_with_split | |
process_one_work | process_one_work - process single work. @worker: self; @work: work to process. |
__schedule_bug | Print scheduling while atomic bug: |
dequeue_task_idle | It is not legal to sleep in the idle task - print a warning message if some code attempts to do it. |
look_up_lock_class | |
assign_lock_key | Static locks do not have their class-keys yet - for them the key is the lock object itself. If the lock is in the per cpu area, the canonical address of the lock (per cpu offset removed) is used. |
register_lock_class | Register a lock's class in the hash-table, if the class is not present yet. Otherwise we look it up. We cache the result in the lock object itself, so actual lookup of the hash should be once per lock object. |
print_lock_nested_lock_not_held | |
__lock_acquire | This gets called for every mutex_lock*()/spin_lock*() operation |
print_unlock_imbalance_bug | |
print_freed_lock_bug | |
print_held_locks_bug | |
lockdep_rcu_suspicious | |
debug_rt_mutex_print_deadlock | |
spin_dump | |
rwlock_bug | |
__report_bad_irq | If 99,900 of the previous 100,000 interrupts have not been handled then assume that the IRQ is stuck in some manner |
schedule_timeout | schedule_timeout - sleep until timeout. @timeout: timeout value in jiffies. Make the current task sleep until @timeout jiffies have elapsed. |
do_init_module | This is where the real work happens. Keep it uninlined to provide a reliable breakpoint target, e.g. for the gdb helper command 'lx-symbols'. |
backtrace_test_normal | |
backtrace_test_irq_callback | |
kdb_dump_stack_on_cpu | |
kgdb_reenter_check | |
kgdb_cpu_enter | |
getthread | |
watchdog_overflow_callback | Callback function for perf event subsystem |
dump_header | |
pcpu_alloc | pcpu_alloc - the percpu allocator. @size: size of area to allocate in bytes; @align: alignment of area (max PAGE_SIZE); @reserved: allocate from the reserved chunk if available; @gfp: allocation flags. Allocate percpu area of @size bytes aligned at @align. |
kmem_cache_create_usercopy | kmem_cache_create_usercopy - Create a cache with a region suitable for copying to userspace. @name: A string which is used in /proc/slabinfo to identify this cache; @size: The size of objects to be created in this cache. |
kmem_cache_destroy | |
print_bad_pte | This function is called to print an error when a bad pte is found. For example, we might have a PFN-mapped pte in a region that doesn't allow it. The calling function must still handle the error. |
bad_page | |
warn_alloc | |
memblock_alloc_range_nid | memblock_alloc_range_nid - allocate boot memory block. @size: size of memory block to be allocated in bytes; @align: alignment of the region and block's size; @start: the lower bound of the memory region to allocate (phys address); @end: the upper bound of the memory region to allocate (phys address). |
check_poison_mem | |
cache_grow_begin | Grow (by 1) the number of slabs within a cache. This is called by kmem_cache_alloc() when there are no active objs left in a cache. |
new_slab | |
print_address_description | |
__kasan_report |
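A common thread in the callers above (spin_dump, __schedule_bug, __report_bad_irq, bad_page, and so on) is that they print a diagnostic message and then call dump_stack() so the log also records how the bad state was reached. Below is a minimal sketch of that pattern; the helper name report_bad_state() is hypothetical and only illustrates the shape of such a caller.

```c
#include <linux/printk.h>	/* pr_err(), dump_stack() */
#include <linux/sched.h>	/* current, task_pid_nr() */

/* Hypothetical diagnostic helper in the style of the callers listed above. */
static void report_bad_state(const char *what)
{
	pr_err("BUG: %s (comm: %s, pid: %d)\n",
	       what, current->comm, task_pid_nr(current));
	dump_stack();	/* append the call trace to the report */
}
```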