Function report
Source Code: include/linux/rcupdate.h
Create Date: 2022-07-28 05:35:31
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: rcu_read_unlock() - marks the end of an RCU read-side critical section. In most situations, rcu_read_unlock() is immune from deadlock. However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock() is responsible for deboosting, which it does via rt_mutex_unlock().
Proto: static inline void rcu_read_unlock(void)
Type: void
Parameters: none
667 | RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_unlock() used illegally while idle") |
670 | __rcu_read_unlock() |
671 | rcu_lock_release(&rcu_lock_map) |
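Lines 667-671 show the whole unlock path: a lockdep assertion that RCU is watching this CPU (rcu_is_watching() returns true when the running CPU may safely sit in an RCU read-side critical section, so unlocking while idle is flagged as illegal), the actual unlock in __rcu_read_unlock(), and the release of the rcu_lock_map lockdep annotation. For orientation, here is a minimal sketch of the reader pattern this function closes; struct foo, gp, and read_a() are hypothetical names, and only the rcu_read_lock()/rcu_dereference()/rcu_read_unlock() sequence is the real API.

```c
#include <linux/rcupdate.h>

struct foo {
	int a;
};

static struct foo __rcu *gp;	/* hypothetical RCU-protected pointer */

static int read_a(void)
{
	struct foo *p;
	int val = -1;

	rcu_read_lock();		/* begin read-side critical section */
	p = rcu_dereference(gp);	/* p stays valid until the unlock */
	if (p)
		val = p->a;
	rcu_read_unlock();		/* runs the path above; may deboost
					 * under CONFIG_RCU_BOOST */
	return val;
}
```

After rcu_read_unlock() returns, p must not be dereferenced again: a grace period may now elapse and the object may be freed. The table below lists the functions that call rcu_read_unlock().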
Name | Description
---|---
dentry_name | |
xa_load | xa_load() - Load an entry from an XArray.*@xa: XArray.*@index: index into array.* Context: Any context. Takes and releases the RCU lock.* Return: The entry at @index in @xa. |
xa_get_mark | xa_get_mark() - Inquire whether this mark is set on this entry.*@xa: XArray.*@index: Index of entry.*@mark: Mark number.* This function uses the RCU read lock, so the result may be out of date* by the time it returns |
xa_find | xa_find() - Search the XArray for an entry.*@xa: XArray.*@indexp: Pointer to an index.*@max: Maximum index to search to.*@filter: Selection criterion.* Finds the entry in @xa which matches the @filter, and has the lowest |
xa_find_after | xa_find_after() - Search the XArray for a present entry.*@xa: XArray.*@indexp: Pointer to an index.*@max: Maximum index to search to.*@filter: Selection criterion.* Finds the entry in @xa which matches the @filter and has the lowest |
xas_extract_present | |
xas_extract_marked | |
current_is_single_threaded | Returns true if the task does not share ->mm with another thread/process. |
find_io_range_by_fwnode | find_io_range_by_fwnode - find logical PIO range for given FW node*@fwnode: FW node handle associated with logical PIO range* Returns pointer to node on success, NULL otherwise.* Traverse the io_range_list to find the registered node for @fwnode.
find_io_range | Return a registered range given an input PIO token |
logic_pio_trans_cpuaddr | |
rhashtable_insert_slow | |
rhashtable_walk_stop | rhashtable_walk_stop - Finish a hash table walk*@iter: Hash table iterator* Finish a hash table walk. Does not reset the iterator to the start of the* hash table.
test_rhashtable | |
test_rhltable | |
check_xas_retry | |
check_xa_mark_1 | |
check_xa_mark_2 | |
check_xa_shrink | |
check_multi_store | |
check_multi_find_2 | |
check_find_3 | |
xa_find_entry | See find_swap_entry() in mm/shmem.c |
check_move_tiny | |
check_move_max | |
check_move_small | |
check_move | |
check_account | Check that the pointer / value / sibling entries are accounted the* way we expect them to be. |
do_kmem_cache_size | Test kmem_cache with given parameters: |
next_prime_number | next_prime_number - return the next prime number*@x: the starting point for searching to test* A prime number is an integer greater than 1 that is only divisible by* itself and 1
is_prime_number | is_prime_number - test whether the given number is prime*@x: the number to test* A prime number is an integer greater than 1 that is only divisible by* itself and 1
dump_primes | |
crc_t10dif_update | |
gen_pool_virt_to_phys | gen_pool_virt_to_phys - return the physical address of memory*@pool: pool to allocate from*@addr: starting address of memory* Returns the physical address on success, or -1 on error. |
gen_pool_alloc_algo_owner | gen_pool_alloc_algo_owner - allocate special memory from the pool*@pool: pool to allocate from*@size: number of bytes to allocate from the pool*@algo: algorithm passed from caller*@data: data passed to algorithm*@owner: optionally retrieve the chunk owner |
gen_pool_free_owner | gen_pool_free_owner - free allocated special memory back to the pool*@pool: pool to free to*@addr: starting address of memory to free back to pool*@size: size in bytes of memory to free*@owner: private data stashed at gen_pool_add() time* Free previously |
gen_pool_for_each_chunk | gen_pool_for_each_chunk - call func for every chunk of generic memory pool*@pool: the generic memory pool*@func: func to call*@data: additional data used by @func* Call @func for every chunk of generic memory pool. The @func is |
gen_pool_has_addr | gen_pool_has_addr - checks if an address falls within the range of a pool*@pool: the generic memory pool*@start: start address*@size: size of the region* Check if the range of addresses falls within the specified pool. Returns |
gen_pool_avail | gen_pool_avail - get available free space of the pool*@pool: pool to get available free space* Return available free space of the specified pool. |
gen_pool_size | gen_pool_size - get size in bytes of memory managed by the pool*@pool: pool to get size* Return size in bytes of memory managed by the pool. |
gen_pool_set_algo | gen_pool_set_algo - set the allocation algorithm*@pool: pool to change allocation algorithm*@algo: custom algorithm function*@data: additional data used by @algo* Call @algo for each memory allocation in the pool |
lookup_ts_algo | |
nmi_handle | |
hw_breakpoint_handler | Handle debug exception notifications.* Return value is either NOTIFY_STOP or NOTIFY_DONE as explained below.* NOTIFY_DONE returned if one of the following conditions is true.* i) When the causative address is from user-space and the exception |
rdtgroup_tasks_assigned | rdtgroup_tasks_assigned - Test if tasks have been assigned to resource group*@r: Resource group* Return: 1 if tasks have been assigned to @r, 0 otherwise
rdtgroup_move_task | |
show_rdt_tasks | |
cpu_crash_vmclear_loaded_vmcss | |
get_mm_exe_file | get_mm_exe_file - acquire a reference to the mm's executable file* Returns %NULL if mm has no associated executable file.* User must release file via fput(). |
pidfd_poll | Poll support for process exit notification. |
release_task | |
rcuwait_wake_up | |
ptracer_capable | ptracer_capable - Determine if the ptracer holds CAP_SYS_PTRACE in the namespace*@tsk: The task that may be ptraced*@ns: The user namespace to search for CAP_SYS_PTRACE in* Return true if the task that is ptracing the current task had CAP_SYS_PTRACE |
__ptrace_may_access | Returns 0 on success, -errno on denial. |
__sigqueue_alloc | allocate a new signal queue record* - this may be called without locks if and only if t == current, otherwise an* appropriate lock must be held to stop the target task from exiting |
__send_signal | |
send_signal | |
__lock_task_sighand | |
group_send_sig_info | send signal info to all the members of a group |
kill_pid_info | |
kill_proc_info | |
kill_pid_usb_asyncio | The usb asyncio usage of siginfo is wrong |
kill_something_info | kill_something_info() interprets pid in interesting ways just like kill(2).* POSIX specifies that kill(-1,sig) is unspecified, but what we have* is probably wrong. Should make it like BSD or SYSV. |
send_sigqueue | |
do_notify_parent | Let a parent know about the death of a child.* For a stopped/continued status change, use do_notify_parent_cldstop instead.* Returns true if our parent ignored us and so we've switched to* self-reaping. |
do_notify_parent_cldstop | do_notify_parent_cldstop - notify parent of stopped/continued state change*@tsk: task reporting the state change*@for_ptracer: the notification is for ptracer*@why: CLD_{CONTINUED|STOPPED|TRAPPED} to report |
ptrace_signal | |
do_send_specific | |
SYSCALL_DEFINE3 | |
SYSCALL_DEFINE2 | Ugh. To avoid negative return values, "getpriority()" will* not return the normal nice-value, but a negated value that* has been offset by 20 (ie it returns 40..1 instead of -20..19)* to stay compatible. |
sys_getppid | |
SYSCALL_DEFINE2 | This needs some heavy checking |
do_getpgid | |
SYSCALL_DEFINE1 | |
SYSCALL_DEFINE4 | |
try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq*@work: work item to steal*@is_dwork: @work is a delayed_work*@flags: place to store irq state* Try to grab PENDING bit of @work. This function can handle @work in any
__queue_work | |
start_flush_work | |
workqueue_congested | workqueue_congested - test whether a workqueue is congested*@cpu: CPU in question*@wq: target workqueue* Test whether @wq's cpu workqueue for @cpu is congested. There is* no synchronization around this function and the test result is |
work_busy | work_busy - test whether a work is currently pending or running*@work: the work to be tested* Test whether @work is currently pending or running. There is no* synchronization around this function and the test result is |
show_workqueue_state | show_workqueue_state - dump workqueue state* Called from a sysrq handler or try_to_freeze_tasks() and prints out* all busy workqueues and pools. |
find_get_task_by_vpid | |
get_task_pid | |
get_pid_task | |
find_get_pid | |
__task_pid_nr_ns | |
__atomic_notifier_call_chain | __atomic_notifier_call_chain - Call functions in an atomic notifier chain*@nh: Pointer to head of the atomic notifier chain*@val: Value passed unmodified to notifier function*@v: Pointer passed unmodified to notifier function |
get_task_cred | get_task_cred - Get another task's objective credentials*@task: The task to query* Get the objective credentials of a task, pinning them so that they can't go* away |
check_same_owner | Check the target process has a UID that matches the current process's: |
do_sched_setscheduler | |
SYSCALL_DEFINE3 | sys_sched_setattr - same as above, but with extended sched_attr*@pid: the pid in question.*@uattr: structure containing the extended parameters.*@flags: for future extension. |
SYSCALL_DEFINE2 | sys_sched_getparam - get the RT priority of a thread*@pid: the pid in question.*@param: structure containing the RT priority.* Return: On success, 0 and the RT priority is in @param. Otherwise, an error* code. |
SYSCALL_DEFINE4 | sys_sched_getattr - similar to sched_getparam, but with sched_attr*@pid: the pid in question.*@uattr: structure containing the extended parameters.*@usize: sizeof(attr) for fwd/bwd comp.*@flags: for future extension. |
sched_setaffinity | |
sched_getaffinity | |
sched_rr_get_interval | |
sched_show_task | |
show_state_filter | |
init_idle | init_idle - set up an idle thread for a given CPU*@idle: task in question*@cpu: CPU the idle task belongs to* NOTE: this function does not set the idle thread's NEED_RESCHED* flag, to make booting more robust.
thread_group_cputime | Accumulate raw cputime values of dead tasks (sig->[us]time) and live* tasks (sum on group iteration) belonging to @tsk's group. |
build_sched_domains | Build sched domains for a given set of CPUs and attach the sched domains* to the individual CPUs |
detach_destroy_domains | Detach sched domains from a group of CPUs specified in cpu_map* These CPUs will now be attached to the NULL domain |
print_rq | |
cpuacct_charge | Charge this task's execution time to its accounting group.* called with rq->lock held.
cpuacct_account_field | Add user/system time to cpuacct.* Note: it's the caller that updates the account of the root cgroup. |
membarrier_global_expedited | |
membarrier_private_expedited | |
sync_runqueues_membarrier_state | |
psi_schedule_poll_work | Schedule polling if it's not already scheduled. It's safe to call even from* hotpath because even though kthread_queue_delayed_work takes worker->lock* spinlock that spinlock is never contended due to poll_scheduled atomic* preventing such competition. |
psi_trigger_poll | |
is_dynamic_key | Check whether a key has been registered as a dynamic key. |
debug_show_all_locks | |
debug_rt_mutex_print_deadlock | |
kmsg_dump | kmsg_dump - dump kernel log to kernel message dumpers.*@reason: the reason (oops, panic etc) for dumping* Call each of the registered dumper's dump() callback, which can* retrieve the kmsg records with kmsg_dump_get_line() or* kmsg_dump_get_buffer(). |
kstat_irqs_usr | kstat_irqs_usr - Get the statistics for an interrupt*@irq: The interrupt number* Returns the sum of interrupt counts on all cpus since boot for @irq |
irq_find_mapping | irq_find_mapping() - Find a linux irq from a hw irq number.*@domain: domain owning this hardware interrupt*@hwirq: hardware irq number in that domain space
rcu_torture_read_unlock | |
rcu_torture_stall | CPU-stall kthread. It waits as specified by stall_cpu_holdoff, then* induces a CPU stall for the time specified by stall_cpu. |
rcu_perf_read_unlock | |
klp_shadow_get | klp_shadow_get() - retrieve a shadow variable data pointer*@obj: pointer to parent object*@id: data identifier* Return: the shadow variable data element, NULL on failure. |
get_file_raw_ptr | The caller must have pinned the task |
SYSCALL_DEFINE5 | |
do_timer_create | Create a POSIX.1b interval timer. |
__lock_timer | CLOCKs: The POSIX standard calls for a couple of clocks and allows us* to implement others |
timer_wait_running | On PREEMPT_RT this prevents priority inversion against softirq kthread in* case it gets preempted while executing a timer callback. See comments in* hrtimer_cancel_wait_running. For PREEMPT_RT=n this just results in a* cpu_relax().
__get_task_for_clock | |
get_futex_key | get_futex_key() - Get parameters which are the keys for a futex*@uaddr: virtual address of the futex*@fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED*@key: address where result is stored |
SYSCALL_DEFINE3 | sys_get_robust_list() - Get the robust-futex list head of a task*@pid: pid of the process [zero for current task]*@head_ptr: pointer to a list-head pointer, the kernel fills it in*@len_ptr: pointer to a length field, the kernel fills in the header size |
COMPAT_SYSCALL_DEFINE3 | |
acct_get | |
cgroup_tryget_css | cgroup_tryget_css - try to get a cgroup's css for the specified subsystem*@cgrp: the cgroup of interest*@ss: the subsystem of interest* Find and get @cgrp's css associated with @ss. If the css doesn't exist* or is offline, %NULL is returned.
cgroup_get_e_css | cgroup_get_e_css - get a cgroup's effective css for the specified subsystem*@cgrp: the cgroup of interest*@ss: the subsystem of interest* Find and get the effective css of @cgrp for @ss
current_cgns_cgroup_from_root | look up cgroup associated with current task's cgroup namespace on the* specified hierarchy |
cgroup_migrate | cgroup_migrate - migrate a process or task to a cgroup*@leader: the leader of the process or the task to migrate*@threadgroup: whether @leader points to the whole process or a single task*@mgctx: migration context
cgroup_attach_task | cgroup_attach_task - attach a task or a whole threadgroup to a cgroup*@dst_cgrp: the cgroup to attach to*@leader: the task or the leader of the threadgroup to be attached*@threadgroup: attach the whole threadgroup?
cgroup_procs_write_start | |
cgroup_file_write | |
css_has_online_children | css_has_online_children - does a css have online children*@css: the target css* Returns %true if @css has any online children; otherwise, %false. This* function can be called from any context but the caller is responsible
css_tryget_online_from_dir | css_tryget_online_from_dir - get corresponding css from a cgroup dentry*@dentry: directory dentry of interest*@ss: subsystem of interest* If @dentry is a directory for a cgroup which has @ss enabled on it, try* to get the corresponding css and return it
cgroup_rstat_flush_locked | see cgroup_rstat_flush() |
cgroupstats_build | cgroupstats_build - build and fill cgroupstats*@stats: cgroupstats to fill information into*@dentry: A dentry entry belonging to the cgroup for which stats have* been requested.* Build and fill cgroupstats so that taskstats can export it to user* space.
cgroup_freezing | |
freezer_fork | freezer_fork - cgroup post fork callback*@task: a task which has just been forked*@task has just been created and should conform to the current state of* the cgroup_freezer it belongs to. This function may race against* freezer_attach()
update_if_frozen | update_if_frozen - update whether a cgroup finished freezing*@css: css of interest* Once FREEZING is initiated, transition to FROZEN is lazily updated by* calling this function
freezer_read | |
freezer_change_state | freezer_change_state - change the freezing state of a cgroup_freezer*@freezer: freezer of interest*@freeze: whether to freeze or thaw* Freeze or thaw @freezer according to @freeze. The operations are* recursive - all descendants of @freezer will be affected.
validate_change | validate_change() - Used to validate that any proposed cpuset change* follows the structural rules for cpusets |
update_cpumasks_hier | update_cpumasks_hier - Update effective cpumasks and tasks in the subtree*@cs: the cpuset to consider*@tmp: temp variables for calculating effective_cpus & partition setup* When configured cpumask is changed, the effective cpumasks of this cpuset
update_sibling_cpumasks | update_sibling_cpumasks - Update siblings cpumasks*@parent: Parent cpuset*@cs: Current cpuset*@tmp: Temp variables
update_nodemasks_hier | update_nodemasks_hier - Update effective nodemasks and tasks in the subtree*@cs: the cpuset to consider*@new_mems: a temp variable for calculating new effective_mems* When configured nodemask is changed, the effective nodemasks of this cpuset
current_cpuset_is_being_rebound | |
cpuset_css_online | |
cpuset_hotplug_workfn | CPU / memory hotplug is handled asynchronously. |
cpuset_cpus_allowed | cpuset_cpus_allowed - return cpus_allowed mask from a task's cpuset
cpuset_cpus_allowed_fallback | cpuset_cpus_allowed_fallback - final fallback before complete catastrophe
cpuset_mems_allowed | cpuset_mems_allowed - return mems_allowed mask from a task's cpuset.*@tsk: pointer to task_struct from which to obtain cpuset->mems_allowed.* Description: Returns the nodemask_t mems_allowed of the cpuset* attached to the specified @tsk
__cpuset_node_allowed | cpuset_node_allowed - Can we allocate on a memory node?*@node: is this an allowed node?*@gfp_mask: memory allocation flags* If we're in interrupt, yes, we can always allocate. If @node is set in* current's mems_allowed, yes
cpuset_print_current_mems_allowed | cpuset_print_current_mems_allowed - prints current's cpuset and mems_allowed* Description: Prints current's name, cpuset name, and cached copy of its* mems_allowed to the kernel log.
__cpuset_memory_pressure_bump | cpuset_memory_pressure_bump - keep stats of per-cpuset reclaims
current_css_set_read | |
current_css_set_refcount_read | |
current_css_set_cg_links_read | |
userns_get | |
zap_pid_ns_processes | |
pidns_get | |
auditd_test_task | auditd_test_task - Check to see if a given task is an audit daemon*@task: the task to check* Description:* Return 1 if the task is a registered audit daemon, 0 otherwise. |
auditd_pid_vnr | auditd_pid_vnr - Return the auditd PID relative to the namespace* Description:* Returns the PID in relation to the namespace, 0 on failure. |
auditd_send_unicast_skb | auditd_send_unicast_skb - Send a record via unicast to auditd*@skb: audit record* Description:* Send a skb to the audit daemon, returns positive/zero values on success and* negative values on failure; in all cases the skb will be consumed by this |
kauditd_thread | kauditd_thread - Worker thread to send audit records to userspace*@dummy: unused |
audit_filter | |
audit_filter_task | At process creation time, we can determine if system-call auditing is* completely disabled for this task. Since we only have the task* structure at this point, we can only check uid and gid. |
audit_filter_syscall | At syscall entry and exit time, this filter is called if the* audit_state is not low enough that auditing cannot take place, but is* also not high enough that we already know we have to write an audit* record (i |
audit_filter_inodes | At syscall exit time, this filter is called if any audit_names have been* collected during syscall processing. We only check rules in sublists at hash* buckets applicable to the inode numbers in audit_names. |
handle_one | |
handle_path | |
__audit_inode | __audit_inode - store the inode and device from a lookup*@name: name being audited*@dentry: dentry being audited*@flags: attributes for this particular entry |
__audit_inode_child | __audit_inode_child - collect inode info for created/removed objects*@parent: inode of dentry parent*@dentry: dentry being audited*@type: AUDIT_TYPE_* value that we're looking for* For syscalls that create or remove filesystem objects, audit_inode |
__get_insn_slot | __get_insn_slot() - Find a slot on an executable page for an instruction.* We allocate an executable page if there's no room on existing ones. |
__free_insn_slot | |
__is_insn_slot_addr | Check given address is on the page of kprobe instruction slots.* This will be used for checking whether the address on a stack* is on a text area or not. |
rcu_lock_break | To avoid extending the RCU grace period for an unbounded amount of time,* periodically exit the critical section and enter a new one.* For preemptible RCU it is sufficient to call rcu_read_unlock in order* to exit the grace period |
check_hung_uninterruptible_tasks | Check whether a TASK_UNINTERRUPTIBLE does not get woken up for* a really long time (120 seconds). If that happens, print out* a warning. |
fill_stats_for_tgid | |
bacct_add_tsk | fill in basic accounting fields
trace_user_stack_print | TRACE_USER_STACK |
__bpf_trace_run | |
uprobe_trace_func | uprobe handler
uretprobe_trace_func | |
map_lookup_elem | |
map_update_elem | |
map_delete_elem | |
map_get_next_key | |
htab_map_seq_show_elem | |
bpf_percpu_hash_copy | |
bpf_percpu_hash_update | |
htab_percpu_map_seq_show_elem | |
bpf_fd_htab_map_lookup_elem | only called from syscall
bpf_percpu_array_copy | |
bpf_percpu_array_update | |
array_map_seq_show_elem | |
percpu_array_map_seq_show_elem | |
bpf_fd_array_map_lookup_elem | only called from syscall
prog_array_map_seq_show_elem | |
perf_event_fd_array_release | |
bpf_percpu_cgroup_storage_copy | |
bpf_percpu_cgroup_storage_update | |
cgroup_storage_seq_show_elem | |
__bpf_prog_exit | |
btf_get_fd_by_id | |
__dev_map_flush | __dev_map_flush is called from xdp_do_flush_map() which _must_ be signaled* from the driver before returning from its napi->poll() routine. The poll()* routine is called either from busy_poll context or net_rx_action signaled* from NET_RX_SOFTIRQ |
dev_map_flush_old | |
dev_map_notification | |
cpu_map_update_elem | |
__cgroup_bpf_check_dev_permission | |
__cgroup_bpf_run_filter_sysctl | __cgroup_bpf_run_filter_sysctl - Run a program on sysctl*@head: sysctl table header*@table: sysctl table*@write: sysctl is being read (= 0) or written (= 1)*@buf: pointer to buffer passed by user space*@pcount: value-result argument: value is size of |
__cgroup_bpf_prog_array_is_empty | |
reuseport_array_free | |
bpf_fd_reuseport_array_lookup_elem | |
perf_event_ctx_lock_nested | Because of perf_event::ctx migration in sys_perf_event_open::move_group and* perf_pmu_migrate_context() we need some magic.* Those places that change perf_event::ctx will hold both* perf_event_ctx::mutex of the 'old' and 'new' ctx value. |
perf_lock_task_context | Get the perf_event_context for a task and lock it.* This has to cope with the fact that until it is locked,* the context could get moved to another task.
perf_event_context_sched_out | |
find_lively_task_by_vpid | |
perf_remove_from_owner | Remove user event from the owner task. |
_perf_ioctl | |
perf_event_init_userpage | |
perf_event_update_userpage | Callers need to ensure there can be no nesting of this function, otherwise* the seqlock logic goes bad. We can not serialize this because the arch* code calls this from NMI context. |
perf_mmap_fault | |
ring_buffer_wakeup | |
ring_buffer_get | |
perf_mmap_close | A buffer can be mmap()ed multiple times; either directly through the same* event, or through other events by use of perf_event_set_output().* In order to undo the VM accounting done by perf_mmap() we need to destroy |
__perf_event_output | |
perf_iterate_sb | Iterate all events that need to receive side-band events.* For new callers; ensure that account_pmu_sb_event() includes* your event, otherwise it might not get delivered. |
perf_event_exec | |
__perf_pmu_output_stop | |
perf_pmu_output_stop | |
perf_addr_filters_adjust | Adjust all task's events' filters to the new vma |
do_perf_sw_event | |
perf_init_event | |
__perf_event_ctx_lock_double | Variation on perf_event_ctx_lock_nested(), except we take two context* mutexes. |
__perf_output_begin | |
perf_output_end | |
rest_init | |
filemap_range_has_page | filemap_range_has_page - check if a page exists in range
find_get_entry | find_get_entry - find and get a page cache entry*@mapping: the address_space to search*@offset: the page cache index* Looks up the page cache slot at @mapping & @offset
find_get_entries | find_get_entries - gang pagecache lookup*@mapping: The address_space to search*@start: The starting page cache index*@nr_entries: The maximum number of entries*@entries: Where the resulting entries are placed*@indices: The cache indices corresponding to the
find_get_pages_range | find_get_pages_range - gang pagecache lookup*@mapping: The address_space to search*@start: The starting page index*@end: The final page index (inclusive)*@nr_pages: The maximum number of pages*@pages: Where the resulting pages are placed*
find_get_pages_contig | find_get_pages_contig - gang contiguous pagecache lookup*@mapping: The address_space to search*@index: The starting page index*@nr_pages: The maximum number of pages*@pages: Where the resulting pages are placed* find_get_pages_contig() works exactly like
find_get_pages_range_tag | find_get_pages_range_tag - find and return pages in given range matching @tag*@mapping: the address_space to search*@index: the starting page index*@end: The final page index (inclusive)*@tag: the tag index*@nr_pages: the maximum number of pages*@pages:
filemap_map_pages | |
oom_cpuset_eligible | oom_cpuset_eligible() - check task eligibility for kill*@start: task struct of which task to consider*@oc: pointer to struct oom_control* Task eligibility is determined by whether or not a candidate task, @tsk,* shares the same mempolicy nodes as current if
find_lock_task_mm | The process p may have detached its own ->mm while exiting or through* use_mm(), but one or more of its subthreads may still have a valid* pointer. Return p, or any of its subthreads with a valid ->mm, with* task_lock() held. |
select_bad_process | Simple selection loop. We choose the process with the highest number of* 'points'. In case scan was aborted, oc->chosen is set to -1. |
dump_tasks | dump_tasks - dump current memory state of all system tasks*@oc: pointer to struct oom_control* Dumps the current memory state of all eligible tasks. Tasks not in the same* memcg, not in the same cpuset, or bound to a disjoint set of mempolicy nodes |
task_will_free_mem | Checks whether the given task is dying or exiting and likely to* release its address space. This means that all threads and processes* sharing the same mm have to be killed or exiting.* Caller has to make sure that task->mm is stable (hold task_lock or |
__oom_kill_process | |
laptop_sync_completion | We're in laptop mode and we've just synced. The sync's writes will have* caused another writeback to be scheduled by laptop_io_completion.* Nothing needs to be written back anymore, so we unschedule the writeback. |
count_history_pages | Count contiguously cached pages from @offset-1 to @offset-@max,* this count is a conservative estimation of* - length of the sequential read sequence, or* - thrashing threshold in memory tight systems |
ondemand_readahead | A minimal readahead algorithm for trivial sequential/random reads. |
page_evictable | page_evictable - test whether a page is evictable*@page: the page to test* Test whether page is evictable--i |
list_lru_count_one | |
workingset_refault | workingset_refault - evaluate the refault of a previously evicted page*@page: the freshly allocated replacement page*@shadow: shadow entry of the evicted page* Calculates and evaluates the refault distance of the previously* evicted page in the context of |
workingset_activation | workingset_activation - note a page activation*@page: page that is being activated |
page_get_anon_vma | Getting a lock on a stable anon_vma from a page off the LRU is tricky!* Since there is no serialization what so ever against page_remove_rmap()* the best this function can do is return a locked anon_vma that might* have been relevant to this page |
page_lock_anon_vma_read | Similar to page_get_anon_vma() except it locks the anon_vma.* It's a little more complex as it tries to keep the fast path to a single* atomic op -- the trylock. If we fail the trylock, we fall back to getting a
purge_fragmented_blocks | |
vb_alloc | |
vb_free | |
_vm_unmap_aliases | |
get_swap_device | Check whether swap entry is valid in the swap device |
zswap_update_total_size | |
zswap_pool_current_get | |
zswap_pool_last_get | |
kernel_migrate_pages | |
kernel_move_pages | Move a list of pages in the address space of the currently executing* process. |
memcg_set_shrinker_bit | |
page_cgroup_ino | page_cgroup_ino - return inode number of the memcg a page is charged to*@page: the page* Look up the closest online ancestor of the memory cgroup @page is charged to* and return its inode number or 0 if @page is not charged to any cgroup. It |
__mod_lruvec_slab_state | |
get_mem_cgroup_from_mm | get_mem_cgroup_from_mm: Obtain a reference on given mm_struct's memcg.*@mm: mm from which memcg should be extracted. It can be NULL.* Obtain a reference on mm->memcg and returns it if successful. Otherwise* root_mem_cgroup is returned |
get_mem_cgroup_from_page | get_mem_cgroup_from_page: Obtain a reference on given page's memcg.*@page: page from which memcg should be extracted.* Obtain a reference on page->memcg and returns it if successful. Otherwise* root_mem_cgroup is returned. |
get_mem_cgroup_from_current | If current->active_memcg is non-NULL, do not fallback to current->mm->memcg. |
mem_cgroup_iter | mem_cgroup_iter - iterate over memory cgroup hierarchy*@root: hierarchy root*@prev: previously returned memcg, NULL on first invocation*@reclaim: cookie for shared reclaim walks, NULL for full walks* Returns references to children of the hierarchy below |
mem_cgroup_print_oom_context | mem_cgroup_print_oom_context: Print OOM information relevant to* memory controller.*@memcg: The memory cgroup that went over limit*@p: Task that is going to be killed* NOTE: @memcg and @p's mem_cgroup can be different when hierarchy is* enabled |
mem_cgroup_get_oom_group | mem_cgroup_get_oom_group - get a memory cgroup to clean up after OOM*@victim: task to be killed by the OOM killer*@oom_domain: memcg in case of memcg OOM, NULL in case of system-wide OOM* Returns a pointer to a memory cgroup, which has to be cleaned up |
__unlock_page_memcg | __unlock_page_memcg - unlock and unpin a memcg*@memcg: the memcg* Unlock and unpin a memcg returned by lock_page_memcg(). |
drain_all_stock | Drains all per-CPU charge caches for given root_memcg resp. subtree* of the hierarchy under it. |
memcg_has_children | Test whether @memcg has children, dead or alive. Note that this* function doesn't care whether @memcg has use_hierarchy enabled and* returns %true if there are child csses according to the cgroup* hierarchy |
__mem_cgroup_threshold | |
mem_cgroup_try_charge | mem_cgroup_try_charge - try charging a page*@page: page to charge*@mm: mm context of the victim*@gfp_mask: reclaim mode*@memcgp: charged memcg return*@compound: charge the page as compound or small page* Try to charge @page to the memcg that @mm belongs |
mem_cgroup_sk_alloc | |
mem_cgroup_uncharge_swap | mem_cgroup_uncharge_swap - uncharge swap space*@entry: swap entry to uncharge*@nr_pages: the amount of swap space to uncharge |
hugetlb_cgroup_charge_cgroup | |
find_and_get_object | Look up an object in the object search tree and increase its use_count. |
kmemleak_scan | Scan data sections and all the referenced memory blocks allocated via the* kernel's standard allocators. This function must be called with the* scan_mutex held. |
kmemleak_seq_stop | Decrement the use_count of the last object required, if any. |
kmemleak_clear | We use grey instead of black to ensure we can do future scans on the same* objects. If we did not do future scans these black objects could* potentially contain references to newly allocated objects in the future and* we'd end up with false positives. |
ipc_addid | ipc_addid - add an ipc identifier*@ids: ipc identifier set*@new: new ipc permission set*@limit: limit for the number of used ids* Add an entry 'new' to the ipc ids idr
newque | newque - Create a new msg queue*@ns: namespace*@params: ptr to the structure that contains the key and msgflg* Called with msg_ids.rwsem held (writer)
freeque | freeque() wakes up waiters on the sender and receiver waiting queue,* removes the message queue from message queue ID IDR, and cleans up all the* messages associated with this queue.* msg_ids.rwsem (writer) and the spinlock for this message queue are held
msgctl_down | This function handles some msgctl commands which require the rwsem* to be held in write mode.* NOTE: no locks must be held, the rwsem is taken inside this function. |
msgctl_stat | |
do_msgsnd | |
do_msgrcv | |
newary | newary - Create a new semaphore set*@ns: namespace*@params: ptr to the structure that contains key, semflg and nsems* Called with sem_ids.rwsem held (as a writer)
freeary | Free a semaphore set. freeary() is called with sem_ids.rwsem locked* as a writer and the spinlock for this semaphore set held. sem_ids.rwsem* remains locked on exit.
semctl_stat | |
semctl_setval | |
semctl_main | |
semctl_down | This function handles some semctl commands which require the rwsem* to be held in write mode.* NOTE: no locks must be held, the rwsem is taken inside this function. |
find_alloc_undo | find_alloc_undo - lookup (and if not present create) undo array*@ns: namespace*@semid: semaphore array id* The function looks up (and if not present creates) the undo structure.* The size of the undo structure depends on the size of the semaphore
do_semtimedop | |
exit_sem | add semadj values to semaphores, free undo structures |
shm_lock | shm_lock_(check_) routines are called in the paths where the rwsem* is not necessarily held. |
newseg | newseg - Create a new shared memory segment*@ns: namespace*@params: ptr to the structure that contains key, size and shmflg* Called with shm_ids.rwsem held as a writer.
shmctl_down | This function handles some shmctl commands which require the rwsem* to be held in write mode.* NOTE: no locks must be held, the rwsem is taken inside this function. |
shmctl_stat | |
shmctl_do_lock | |
do_shmat | Fix shmaddr, allocate descriptor, map shm, add attach descriptor to lists.* NOTE! Despite the name, this is NOT a direct system call entrypoint. The* "raddr" thing points to kernel space, and there has to be a wrapper around* this. |
__do_notify | The next function is only to split too long sys_mq_timedsend |
bio_associate_blkg_from_css | bio_associate_blkg_from_css - associate a bio with a specified css*@bio: target bio*@css: target css* Associate @bio with the blkg found by combining the css's blkg and the* request_queue of the @bio. This falls back to the queue's root_blkg if
bio_associate_blkg_from_page | bio_associate_blkg_from_page - associate a bio with the page's blkg*@bio: target bio*@page: the page to lookup the blkcg from* Associate @bio with the blkg from @page's owning memcg and the respective* request_queue
bio_associate_blkg | bio_associate_blkg - associate a bio with a blkg*@bio: target bio* Associate @bio with the blkg found from the bio's css and request_queue
bio_clone_blkg_association | bio_clone_blkg_association - clone blkg association from src to dst bio*@dst: destination bio*@src: source bio
blk_queue_enter | blk_queue_enter() - try to increase q->q_usage_counter*@q: request queue pointer*@flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
blk_partition_remap | Remap block n of partition p to block n+start(p) of the disk. |
ioc_lookup_icq | ioc_lookup_icq - lookup io_cq from ioc*@ioc: the associated io_context*@q: the associated request_queue* Look up io_cq associated with @ioc - @q pair from @ioc. Must be called* with @q->queue_lock held.
hctx_unlock | |
blk_stat_add | |
disk_get_part | disk_get_part - get partition*@disk: disk to look partition from*@partno: partition number* Look for partition @partno from @disk. If found, increment* reference count and return it.* CONTEXT:* Don't care.* RETURNS: |
disk_part_iter_init | disk_part_iter_init - initialize partition iterator*@piter: iterator to initialize*@disk: disk to iterate over*@flags: DISK_PITER_* flags* Initialize @piter so that it iterates over partitions of @disk.* CONTEXT:* Don't care. |
disk_part_iter_next | disk_part_iter_next - proceed iterator to the next partition and return it*@piter: iterator of interest* Proceed @piter to the next partition and return it.* CONTEXT:* Don't care. |
set_task_ioprio | |
SYSCALL_DEFINE3 | |
SYSCALL_DEFINE2 | |
blkcg_print_blkgs | blkcg_print_blkgs - helper for printing per-blkg data*@sf: seq_file to print to*@blkcg: blkcg of interest*@prfill: fill function to print out a blkg*@pol: policy in question*@data: data to be passed to @prfill*@show_total: to print out sum of prfill return
blkg_conf_prep | blkg_conf_prep - parse and prepare for per-blkg config update*@blkcg: target block cgroup*@pol: target policy*@input: input string*@ctx: blkg_conf_ctx to be filled* Parse per-blkg config update from @input and initialize @ctx with the* result
blkg_conf_finish | blkg_conf_finish - finish up per-blkg config update*@ctx: blkg_conf_ctx initialized by blkg_conf_prep()* Finish up after per-blkg config update. This function must be paired* with blkg_conf_prep().
blkcg_print_stat | |
blkcg_init_queue | blkcg_init_queue - initialize blkcg part of request queue*@q: request_queue to initialize* Called from blk_alloc_queue_node(). Responsible for initializing blkcg* part of new request_queue @q.* RETURNS:* 0 on success, -errno on failure.
blkcg_rstat_flush | |
blkcg_maybe_throttle_current | blkcg_maybe_throttle_current - throttle the current task if it has been marked* This is only called if we've been marked with set_notify_resume()
blkg_rwstat_recursive_sum | blkg_rwstat_recursive_sum - collect hierarchical blkg_rwstat*@blkg: blkg of interest*@pol: blkcg_policy which contains the blkg_rwstat*@off: offset to the blkg_rwstat in blkg_policy_data or @blkg*@sum: blkg_rwstat_sample structure containing the results*
blk_throtl_update_limit_valid | |
throtl_can_upgrade | |
throtl_upgrade_state | |
blk_throtl_drain | blk_throtl_drain - drain throttled bios*@q: request_queue to drain throttled bios for* Dispatch all currently throttled bios on @q through ->make_request_fn().
blkiolatency_timer_fn | |
bfq_get_queue | |
keyring_search | keyring_search - Search the supplied keyring tree for a matching key*@keyring: The root of the keyring tree to be searched |
keyring_detect_cycle | See if a cycle will be created by inserting acyclic tree B in acyclic* tree A at the topmost level (ie: as a direct child of A).* Since we are adding B to A at the top level, checking for cycles should just
keyring_gc | Garbage collect pointers from a keyring.* Not called with any locks held. The keyring's key struct will not be* deallocated under us as only our caller may deallocate it. |
keyctl_session_to_parent | Attempt to install the calling process's session keyring on the process's* parent process.* The keyring must exist and must grant the caller LINK permission, and the* parent process must be single-threaded and must have the same effective |
lookup_user_key | Look up a key ID given us by userspace with a given permissions mask to get* the key it refers to.* Flags can be passed to request that special keyrings be created if referred* to directly, to permit partially constructed keys to be found and to skip |
construct_alloc_key | Allocate a new key in under-construction state and attempt to link it in to* the requested keyring.* May return a key that's already under construction instead if there was a* race between two threads calling request_key().
request_key_and_link | request_key_and_link - Request a key and cache it in a keyring.*@type: The type of key we want.*@description: The searchable description of the key.*@domain_tag: The domain in which the key operates.
key_get_instantiation_authkey | Search the current process's keyrings for the authorisation key for* instantiation of a key. |
proc_keys_show | |
cap_ptrace_access_check | cap_ptrace_access_check - Determine whether the current process may access* another*@child: The process to be accessed*@mode: The mode of attachment.* If we are in the same or an ancestor user_ns and have all the target
cap_ptrace_traceme | cap_ptrace_traceme - Determine whether another process may trace the current*@parent: The task proposed to be the tracer* If parent is in the same or an ancestor user_ns and has all current's* capabilities, then ptrace access is allowed
cap_capget | cap_capget - Retrieve a task's capability sets*@target: The task from which to retrieve the capability sets*@effective: The place to record the effective set*@inheritable: The place to record the inheritable set*@permitted: The place to record the
cap_safe_nice | Rationale: code calling task_setscheduler, task_setioprio, and* task_setnice, assumes that* |
avc_get_hash_stats | |
avc_reclaim_node | |
avc_flush | avc_flush - Flush the cache |
avc_compute_av | Slow-path helper function for avc_has_perm_noaudit,* when the avc_node lookup fails |
avc_has_extended_perms | The avc extended permissions logic adds an additional 256 bits of* permissions to an avc node when extended permissions for that node are* specified in the avtab |
avc_has_perm_noaudit | avc_has_perm_noaudit - Check permissions but perform no auditing |
task_sid | get the objective security ID of a task |
ptrace_parent_sid | binprm security operations
selinux_getprocattr | |
sel_netif_sid | sel_netif_sid - Lookup the SID of a network interface*@ns: the network namespace*@ifindex: the network interface*@sid: interface SID* Description:* This function determines the SID of a network interface using the fastest* method possible |
sel_netif_kill | sel_netif_kill - Remove an entry from the network interface table*@ns: the network namespace*@ifindex: the network interface* Description:* This function removes the entry matching @ifindex from the network interface* table if it exists. |
sel_netnode_sid | sel_netnode_sid - Lookup the SID of a network address*@addr: the IP address*@family: the address family*@sid: node SID* Description:* This function determines the SID of a network address using the fastest* method possible |
sel_netport_sid | sel_netport_sid - Lookup the SID of a network port*@protocol: protocol*@pnum: port*@sid: port SID* Description:* This function determines the SID of a network port using the fastest method* possible |
sel_ib_pkey_sid | sel_ib_pkey_sid - Lookup the SID of a PKEY*@subnet_prefix: subnet_prefix*@pkey_num: pkey number*@sid: pkey SID* Description:* This function determines the SID of a PKEY using the fastest method* possible |
smk_ptrace_rule_check | smk_ptrace_rule_check - helper for ptrace access*@tracer: tracer process*@tracee_known: label entry of the process that's about to be traced*@mode: ptrace attachment mode (PTRACE_MODE_*)*@func: name of the function that called us, used for audit* Returns |
smack_bprm_set_creds | smack_bprm_set_creds - set creds for exec*@bprm: the exec information* Returns 0 if it gets a blob, -EPERM if exec forbidden and -ENOMEM otherwise |
smack_inode_init_security | smack_inode_init_security - copy out the smack from an inode*@inode: the newly created inode*@dir: containing directory object*@qstr: unused*@name: where to put the attribute name*@value: where to put the attribute value*@len: where to put the length of |
smack_mmap_file | smack_mmap_file :* Check permissions for a mmap operation. The @file may be NULL, e.g.* if mapping anonymous memory.*@file contains the file structure for file to map (may be NULL).*@reqprot contains the protection requested by the application. |
smack_file_send_sigiotask | smack_file_send_sigiotask - Smack on sigio*@tsk: The target task*@fown: the object the signal come from*@signum: unused* Allow a privileged task to get signals even if it shouldn't* Returns 0 if a subject with the object's smack could |
smack_cred_getsecid | smack_cred_getsecid - get the secid corresponding to a creds structure*@cred: the object creds*@secid: where to put the result* Sets the secid to contain a u32 version of the smack label. |
smack_sk_free_security | smack_sk_free_security - Free a socket blob*@sk: the socket* Clears the blob pointer |
smack_netlabel_send | smack_netlabel_send - Set the secattr on a socket and perform access checks*@sk: the socket*@sap: the destination address* Set the correct secattr for the given socket based on the destination* address and perform any outbound access checks needed.
smk_ipv6_port_label | smk_ipv6_port_label - Smack port access table management*@sock: socket*@address: address* Create or update the port list entry |
smk_ipv6_port_check | smk_ipv6_port_check - check Smack port access*@sk: socket*@address: address*@act: the action being taken* Create or update the port list entry |
smack_from_secattr | smack_from_secattr - Convert a netlabel attr.mls.lvl/attr.mls.cat pair to smack*@sap: netlabel secattr*@ssp: socket security information* Returns a pointer to a Smack label entry found on the label list. |
smack_inet_conn_request | smack_inet_conn_request - Smack access check on connect*@sk: socket involved*@skb: packet*@req: unused* Returns 0 if a task with the packet label could write to* the socket, otherwise an error code |
smack_dentry_create_files_as | |
smk_access | smk_access - determine if a subject has a specific access to an object*@subject: a pointer to the subject's Smack label entry*@object: a pointer to the object's Smack label entry*@request: the access requested, in "MAY" format*@a : a pointer to the audit |
smack_from_secid | smack_from_secid - find the Smack label associated with a secid*@secid: an integer that might be associated with a Smack label* Returns a pointer to the appropriate Smack label entry if there is one,* otherwise a pointer to the invalid Smack label. |
smack_privileged_cred | smack_privileged_cred - are all privilege requirements met by cred*@cap: The requested capability*@cred: the credential to use* Is the task privileged and allowed to be privileged* by the onlycap rule. |
smk_seq_stop | |
tomoyo_select_domain | tomoyo_select_domain - Parse select command.*@head: Pointer to "struct tomoyo_io_buffer".*@data: String to parse.* Returns true on success, false otherwise.* Caller holds tomoyo_read_lock().
tomoyo_read_pid | tomoyo_read_pid - Get domainname of the specified PID
profile_depth | |
aa_get_task_label | aa_get_task_label - Get another task's label*@task: task to query (NOT NULL)* Returns: counted reference to @task's label |
may_change_ptraced_domain | may_change_ptraced_domain - check if can change profile on ptraced task*@to_label: profile to change to (NOT NULL)*@info: message if there is an error* Check if current is ptraced and if so if the tracing task is allowed* to trace the new domain* Returns: |
find_attach | find_attach - do attachment search for unconfined processes*@bprm - binprm structure of transitioning task*@ns: the current namespace (NOT NULL)*@head - profile list to walk (NOT NULL)*@name - to match against (NOT NULL)*@info - info message if there was an
aa_find_child | aa_find_child - find a profile by @name in @parent*@parent: profile to search (NOT NULL)*@name: profile name to search for (NOT NULL)* Returns: a refcounted profile or NULL if not found |
aa_lookupn_profile | aa_lookupn_profile - find a profile by its full or partial name*@ns: the namespace to start from (NOT NULL)*@hname: name to do lookup on. Does not contain namespace prefix (NOT NULL)*@n: size of @hname* Returns: refcounted profile or NULL if not found
apparmor_capget | Derived from security/commoncap.c:cap_capget |
aa_task_setrlimit | aa_task_setrlimit - test permission to set an rlimit*@label - label confining the task (NOT NULL)*@task - task the resource is being set on*@resource - the resource being set*@new_rlim - the new resource limit (NOT NULL) |
aa_secid_to_label | see label for inverse aa_label_to_secid |
aa_file_perm | aa_file_perm - do permission revalidation check & audit for @file*@op: operation being checked*@label: label being enforced (NOT NULL)*@file: file to revalidate access permissions on (NOT NULL)*@request: requested permissions*@in_atomic: whether |
aa_findn_ns | aa_findn_ns - look up a profile namespace on the namespace list*@root: namespace to search in (NOT NULL)*@name: name of namespace to find (NOT NULL)*@n: length of @name* Returns: a refcounted namespace on the list, or NULL if no namespace |
aa_lookupn_ns | aa_lookupn_ns - look up a policy namespace relative to @view*@view: namespace to search in (NOT NULL)*@name: name of namespace to find (NOT NULL)*@n: length of @name* Returns: a refcounted namespace on the list, or NULL if no namespace |
yama_relation_cleanup | |
yama_ptracer_add | yama_ptracer_add - add/replace an exception for this tracer/tracee pair*@tracer: the task_struct of the process doing the ptrace*@tracee: the task_struct of the process to be ptraced* Each tracee can have, at most, one tracer registered. Each time this |
yama_ptracer_del | yama_ptracer_del - remove exceptions related to the given tasks*@tracer: remove any relation where tracer task matches*@tracee: remove any relation where tracee task matches |
yama_task_prctl | yama_task_prctl - check for Yama-specific prctl operations*@option: operation*@arg2: argument*@arg3: argument*@arg4: argument*@arg5: argument* Return 0 on success, -ve on error. -ENOSYS is returned when Yama* does not handle the given option. |
task_is_descendant | task_is_descendant - walk up a process family tree looking for a match*@parent: the process to compare against while walking up from child*@child: the process to start from while looking upwards for parent
ptracer_exception_found | ptracer_exception_found - tracer registered as exception for this tracee*@tracer: the task_struct of the process attempting ptrace*@tracee: the task_struct of the process to be ptraced* Returns 1 if tracer has a ptracer exception ancestor for tracee. |
yama_ptrace_access_check | yama_ptrace_access_check - validate PTRACE_ATTACH calls*@child: task that current task is attempting to ptrace*@mode: ptrace attach mode* Returns 0 if following the ptrace is allowed, -ve on error. |
setuid_policy_lookup | Compute a decision for a transition from @src to @dst under the active* policy. |
devcgroup_seq_show | |
propagate_exception | propagate_exception - propagates a new exception to the children*@devcg_root: device cgroup that added a new exception*@ex: new exception to be propagated* returns: 0 in case of success, != 0 in case of error |
__devcgroup_check_permission | __devcgroup_check_permission - checks if an inode operation is permitted*@dev_cgroup: the dev cgroup to be tested against*@type: device type*@major: device major number*@minor: device minor number*@access: combination of DEVCG_ACC_WRITE, DEVCG_ACC_READ |
ima_measurements_start | returns pointer to hlist_node
ima_measurements_next | |
ima_lookup_digest_entry | look up the digest value in the hash table, and return the entry
ima_match_policy | ima_match_policy - decision based on LSM and other conditions*@inode: pointer to an inode for which the policy decision is being made*@cred: pointer to a credentials structure for which the policy decision is* being made*@secid: LSM secid of the task to be
lookup_template_desc | |
check_unsafe_exec | determine how safe it is to execute the proposed program* - the caller must hold ->cred_guard_mutex to protect against* PTRACE_ATTACH or seccomp thread-sync |
exec_binprm | |
terminate_walk | |
unlazy_walk | unlazy_walk - try to switch to ref-walk mode
unlazy_child | unlazy_child - try to switch to ref-walk mode
pick_link | |
f_setown | |
f_setown_ex | |
sigio_perm | |
send_sigio | |
send_sigurg | |
kill_fasync | |
do_select | |
core_sys_select | We can actually return ERESTARTSYS instead of EINTR, but I'd* like to be certain this leads to no problems. So I return* EINTR just for safety.* Update: ERESTARTSYS breaks at least the xview clock binary, so |
compat_core_sys_select | We can actually return ERESTARTSYS instead of EINTR, but I'd* like to be certain this leads to no problems. So I return* EINTR just for safety.* Update: ERESTARTSYS breaks at least the xview clock binary, so |
__lock_parent | |
dput | dput - release a dentry*@dentry: dentry to release * Release a dentry. This will drop the usage count and if appropriate* call the dentry unlink method as well as removing it from the queues and* releasing its resources |
dput_to_list | |
dget_parent | |
shrink_dentry_list | |
d_walk | d_walk - walk the dentry tree*@parent: start of walk*@data: data passed to @enter() and @finish()*@enter: callback when first entering the dentry* The @enter() callbacks are called with d_lock held. |
shrink_dcache_parent | prune dcache |
__d_lookup | __d_lookup - search for a dentry (racy)*@parent: parent dentry*@name: qstr of name we wish to find* Returns: dentry, or NULL* __d_lookup is like d_lookup, however it may (rarely) return a* false-negative result due to unrelated rename activity |
d_alloc_parallel | |
is_subdir | is new dentry a subdirectory of old_dentry |
__fget | |
get_close_on_exec | |
SYSCALL_DEFINE2 | |
legitimize_mnt | call under rcu_read_lock
lookup_mnt | lookup_mnt - Return the first child mount mounted at path* "First" means first mounted chronologically |
mntput_no_expire | |
path_is_mountpoint | path_is_mountpoint() - Check if path is a mount in the current* namespace |
wakeup_flusher_threads_bdi | |
wakeup_flusher_threads | Wakeup the flusher threads to start writeback of all currently dirty pages |
wakeup_dirtytime_writeback | Wake up bdi's periodically to make sure dirtytime inodes gets* written back periodically. We deliberately do *not* check the* b_dirtytime list in wb_has_dirty_io(), since this would cause the* kernel to be constantly waking up once there are any dirtytime |
wait_sb_inodes | The @s_sync_lock is used to serialise concurrent sync operations* to avoid lock contention problems with concurrent wait_sb_inodes() calls.* Concurrent callers will block on the s_sync_lock rather than doing contending* walks |
prepend_path | prepend_path - Prepend path string to a buffer*@path: the dentry/vfsmount to report*@root: root vfsmnt/dentry*@buffer: pointer to the end of the buffer*@buflen: pointer to buffer length* The function will first try to write out the pathname without taking |
d_path | d_path - return the path of a dentry*@path: path to report*@buf: buffer to return value in*@buflen: buffer length* Convert a dentry into an ASCII path name |
__dentry_path | Write full pathname from the root of the filesystem into the buffer. |
SYSCALL_DEFINE2 | NOTE! The user-level library version returns a* character pointer |
pin_kill | |
mnt_pin_kill | |
group_pin_kill | |
__ns_get_path | |
guard_bio_eod | This allows us to do IO even on the odd last sectors* of a device, even if the block size is some multiple* of the physical sector size |
fcntl_dirnotify | When a process calls fcntl to attach a dnotify watch to a directory it ends* up here. Allocate both a mark for fsnotify to add and a dnotify_struct to be* attached to the fsnotify_mark. |
ep_remove_wait_queue | |
ep_pm_stay_awake_rcu | call when ep->mtx cannot be held (ep_poll_callback)
reverse_path_check_proc | |
timerfd_clock_was_set | Called when the clock was set to cancel the timers in the cancel* list. This will wake up processes waiting on these timers. The* wake-up requires ctx->ticks to be non zero, therefore we increment* it before calling wake_up_locked(). |
aio_ring_mremap | |
lookup_ioctx | |
io_grab_files | |
io_wqe_wake_worker | We need a worker. If we find a free one, we're good. If not, and we're* below the max number of workers, wake up the manager to create one. |
io_wq_can_queue | |
io_wq_cancel_all | |
io_wqe_cancel_cb_work | |
io_wqe_cancel_work | |
io_wq_destroy | |
dax_lock_page | dax_lock_mapping_entry - Lock the DAX entry corresponding to a page*@page: The page whose entry we want to lock* Context: Process context.* Return: A cookie to pass to dax_unlock_page() or 0 if the entry could* not be locked. |
locks_translate_pid | locks_translate_pid - translate a file_lock's fl_pid number into a namespace*@fl: The file_lock whose fl_pid should be translated*@ns: The namespace into which the pid should be translated* Used to translate a fl_pid into a namespace virtual pid number
get_cached_acl | |
zap_threads | |
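Most of the callers in the table above share a lookup-then-pin shape: find an object under RCU protection, take a counted reference on it, and only then call rcu_read_unlock(), so the object outlives the critical section. Below is a hedged sketch modeled on find_get_task_by_vpid(); the name get_task_by_vpid_sketch() and the header locations are assumptions, while find_task_by_vpid(), get_task_struct(), and put_task_struct() are the real kernel API.

```c
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/task.h>

/* Sketch: look up a task by virtual PID and pin it before unlocking. */
static struct task_struct *get_task_by_vpid_sketch(pid_t vpid)
{
	struct task_struct *task;

	rcu_read_lock();
	task = find_task_by_vpid(vpid);	/* RCU-protected pid-hash lookup */
	if (task)
		get_task_struct(task);	/* pin before a grace period can end */
	rcu_read_unlock();

	return task;			/* caller releases with put_task_struct() */
}
```

If the reference were taken after rcu_read_unlock(), the task could be freed in the window between the two calls; pinning inside the critical section is what makes the pattern safe.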