Function report

Linux Kernel

v5.5.9


Source Code: fs/aio.c

Name: ioctx_alloc - Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.

Proto: static struct kioctx *ioctx_alloc(unsigned nr_events)

Type: struct kioctx *

Parameter:

Type        Parameter Name
unsigned    nr_events
703  mm = current->mm
705  err = -ENOMEM
711  max_reqs = nr_events
722  nr_events = max(nr_events, num_possible_cpus() * 4)
723  nr_events *= 2
726  If nr_events > 0x10000000U / sizeof(struct io_event) Then
727  pr_debug("ENOMEM: nr_events too high\n")
728  Return ERR_PTR(-EINVAL)
731  If Not nr_events || max_reqs > aio_max_nr (the system wide maximum number of aio requests) Then Return ERR_PTR(-EAGAIN)
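The sizing arithmetic above is easiest to see with concrete numbers. A rough userspace sketch (stand-in CPU count and request size, not kernel code) of how a requested nr_events is clamped and doubled before the overflow and quota checks:

    /* Sketch of ioctx_alloc()'s sizing arithmetic; ncpus stands in
     * for num_possible_cpus(), 128 is an arbitrary example request. */
    #include <stdio.h>

    int main(void)
    {
        unsigned nr_events = 128;      /* what userspace passed to io_setup() */
        unsigned ncpus = 8;            /* stand-in for num_possible_cpus()    */
        unsigned max_reqs = nr_events; /* saved for the aio_max_nr quota      */

        if (nr_events < ncpus * 4)     /* nr_events = max(nr_events, ncpus*4) */
            nr_events = ncpus * 4;
        nr_events *= 2;                /* doubled: up to half the slots can
                                        * sit on other CPUs' percpu counters  */

        printf("max_reqs=%u, ring sized for nr_events=%u\n",
               max_reqs, nr_events);
        return 0;
    }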
734  ctx = kmem_cache_zalloc(kioctx_cachep, GFP_KERNEL)
735  If Not ctx Then Return ERR_PTR(-ENOMEM)
738  ctx->max_reqs = max_reqs (this is what userspace passed to io_setup(); it is not used for anything but counting against the global max_reqs quota; the real limit is nr_events - 1, which will be larger, see aio_setup_ring())
740  spin_lock_init(&ctx->ctx_lock)
741  spin_lock_init(&ctx->completion_lock)
742  mutex_init(&ctx->ring_lock)
745  mutex_lock(&ctx->ring_lock) (held until setup is complete, to protect against page migration throughout kioctx setup)
746  init_waitqueue_head(&ctx->wait)
748  INIT_LIST_HEAD(&ctx->active_reqs)
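The ring_lock taken at line 745 is released only at line 788 on success or at line 802 on failure, so the entire setup runs under the mutex. A minimal sketch of this hold-the-lock-across-setup pattern, with hypothetical names (obj and obj_setup are illustrative, not from aio.c):

    /* Hypothetical sketch: hold a mutex across multi-step setup so
     * observers (here: page migration) see either nothing or a fully
     * initialized object. */
    #include <linux/mutex.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    struct obj {
        struct mutex lock;
    };

    static int obj_setup(struct obj *o, bool step_ok)
    {
        mutex_init(&o->lock);
        mutex_lock(&o->lock);      /* exclude observers during setup   */

        if (!step_ok)
            goto err;              /* unwind path still owns the mutex */

        mutex_unlock(&o->lock);    /* setup complete                   */
        return 0;
    err:
        mutex_unlock(&o->lock);    /* error path must release it too   */
        return -ENOMEM;
    }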
750  If percpu_ref_init(&ctx->users, free_ioctx_users, 0, GFP_KERNEL) Then Go to err
753  If percpu_ref_init(&ctx->reqs, free_ioctx_reqs, 0, GFP_KERNEL) Then Go to err
756  ctx->cpu = alloc_percpu(struct kioctx_cpu)
757  If Not ctx->cpu Then Go to err
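percpu_ref_init(&ref, release, flags, gfp) starts the refcount at 1 and arranges for release() to run once the count drops to zero; in aio.c the release functions are free_ioctx_users() and free_ioctx_reqs(). A minimal sketch of the pattern with a hypothetical my_ctx type:

    /* Hypothetical sketch of the percpu_ref pattern from lines 750/753. */
    #include <linux/kernel.h>
    #include <linux/percpu-refcount.h>
    #include <linux/slab.h>

    struct my_ctx {
        struct percpu_ref users;
    };

    static void my_ctx_free(struct percpu_ref *ref)
    {
        /* runs once the refcount hits zero */
        struct my_ctx *c = container_of(ref, struct my_ctx, users);

        kfree(c);
    }

    static struct my_ctx *my_ctx_alloc(void)
    {
        struct my_ctx *c = kzalloc(sizeof(*c), GFP_KERNEL);

        if (!c)
            return NULL;
        if (percpu_ref_init(&c->users, my_ctx_free, 0, GFP_KERNEL)) {
            kfree(c);
            return NULL;
        }
        return c;  /* later: percpu_ref_kill(&c->users) starts teardown */
    }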
760  err = aio_setup_ring(ctx, nr_events)
761  If err < 0 Then Go to err
764  atomic_set(&ctx->reqs_available, ctx->nr_events - 1) (reqs_available counts the number of available slots in the ringbuffer, so we avoid overflowing it: it is decremented (if positive) when allocating a kiocb and incremented when the resulting io_event is pulled off the ringbuffer; accesses to it are batched)
765  ctx->req_batch = (ctx->nr_events - 1) / (num_possible_cpus() * 4) (for percpu reqs_available: the number of slots moved to/from the global counter at a time)
766  If ctx->req_batch < 1 Then ctx->req_batch = 1
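Numerically (example values only): with a ring of 256 events on 8 possible CPUs, req_batch = (256 - 1) / (8 * 4) = 7, so each CPU moves 7 slots at a time between its percpu counter and the global reqs_available:

    /* Stand-in arithmetic for req_batch (example values, not kernel code). */
    #include <stdio.h>

    int main(void)
    {
        unsigned nr_events = 256;  /* example ctx->nr_events after ring setup */
        unsigned ncpus = 8;        /* stand-in for num_possible_cpus()        */
        unsigned req_batch = (nr_events - 1) / (ncpus * 4);  /* 255/32 = 7    */

        if (req_batch < 1)
            req_batch = 1;
        printf("reqs_available=%u, req_batch=%u\n", nr_events - 1, req_batch);
        return 0;
    }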
770  spin_lock(&aio_nr_lock) (limit the number of system wide aios; aio_nr and aio_max_nr are the aio sysctl variables)
771  If aio_nr + ctx->max_reqs > aio_max_nr || aio_nr + ctx->max_reqs < aio_nr Then (aio_nr is the current system wide number of aio requests, aio_max_nr the system wide maximum)
773  spin_unlock(&aio_nr_lock)
774  err = -EAGAIN
775  Go to err_ctx
777  aio_nr += ctx->max_reqs
778  spin_unlock(&aio_nr_lock)
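The second comparison at line 771 is a wraparound guard: if the unsigned sum aio_nr + ctx->max_reqs overflows, it becomes smaller than aio_nr. A plain C sketch of the idiom (quota_exceeded is a hypothetical helper):

    /* Sketch of the wraparound-safe quota check from line 771. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool quota_exceeded(unsigned long cur, unsigned long add,
                               unsigned long limit)
    {
        /* "cur + add < cur" holds only if the unsigned sum wrapped */
        return cur + add > limit || cur + add < cur;
    }

    int main(void)
    {
        printf("%d\n", quota_exceeded(100, 50, 120));       /* 1: over limit */
        printf("%d\n", quota_exceeded(~0UL - 1, 5, ~0UL));  /* 1: wrapped    */
        printf("%d\n", quota_exceeded(100, 10, 120));       /* 0: fits       */
        return 0;
    }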
780  percpu_ref_get(&ctx->users) (io_setup() will drop this ref)
781  percpu_ref_get(&ctx->reqs) (free_ioctx_users() will drop this)
783  err = ioctx_add_table(ctx, mm)
784  If err Then Go to err_cleanup
788  mutex_unlock(&ctx->ring_lock) (release the ring_lock mutex now that all of the kioctx setup is complete)
790  pr_debug("allocated ioctx %p[%ld]: mm=%p mask=0x%x\n", ctx, ctx->user_id, mm, ctx->nr_events)
792  Return ctx
794  err_cleanup:
795  aio_nr_sub(ctx->max_reqs)
796  err_ctx:
797  atomic_set(&ctx->dead, 1)
798  If ctx->mmap_size Then vm_munmap(ctx->mmap_base, ctx->mmap_size)
800  aio_free_ring(ctx)
801  err:
802  mutex_unlock(&ctx->ring_lock)
803  free_percpu(ctx->cpu)
804  percpu_ref_exit(&ctx->reqs)
805  percpu_ref_exit(&ctx->users)
806  kmem_cache_free(kioctx_cachep, ctx)
807  pr_debug("error allocating ioctx %d\n", err)
808  Return ERR_PTR(err)
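The three labels unwind in layers: err_cleanup undoes the aio_nr accounting, err_ctx additionally kills the context and frees the ring, and err releases everything allocated before the ring. A generic plain C sketch of this layered goto-unwind idiom (the resources are hypothetical):

    /* Hypothetical sketch of the layered goto-unwind idiom used above. */
    #include <stdlib.h>

    static int setup_pair(void **pa, void **pb)
    {
        void *a, *b;
        int err = -1;

        a = malloc(64);
        if (!a)
            goto err;        /* nothing to undo yet      */
        b = malloc(64);
        if (!b)
            goto err_a;      /* undo only what succeeded */

        *pa = a;
        *pb = b;
        return 0;

    err_a:
        free(a);             /* falls through to err     */
    err:
        return err;
    }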
Caller
Name                    Describe
SYSCALL_DEFINE2         sys_io_setup: Create an aio_context capable of receiving at least nr_events
COMPAT_SYSCALL_DEFINE2  compat_sys_io_setup, the 32-bit compat variant of io_setup
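For reference, a minimal userspace program that reaches ioctx_alloc() through the raw io_setup(2) syscall; the nr_events argument (128 here) is exactly the value ioctx_alloc() receives:

    /* Minimal userspace sketch: create and destroy an aio context. */
    #include <linux/aio_abi.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        aio_context_t ctx = 0;  /* must be zeroed before io_setup */

        if (syscall(SYS_io_setup, 128, &ctx) < 0) {
            perror("io_setup");
            return 1;
        }
        printf("aio context created\n");
        syscall(SYS_io_destroy, ctx);
        return 0;
    }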