Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/rhashtable.h  Create Date: 2022-07-28 06:07:13
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name:__rhashtable_insert_fast - Internal function, please use rhashtable_insert_fast() instead. This function returns the existing element already in the hash table if there is a clash, otherwise it returns an error via ERR_PTR().

Proto:static inline void *__rhashtable_insert_fast(struct rhashtable *ht, const void *key, struct rhash_head *obj, const struct rhashtable_params params, bool rhlist)

Type:void *

Parameter:

Type                              Parameter Name
struct rhashtable *               ht
const void *                      key
struct rhash_head *               obj
const struct rhashtable_params    params
bool                              rhlist
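The key is used only for clash detection and may be NULL, and rhlist selects the rhltable (duplicate-list) variant. Callers are expected to go through thin public wrappers; below is a minimal sketch, modelled on the rhashtable_insert_fast() wrapper in this header, of how the void * return value described above is turned into an errno (an ERR_PTR() becomes a negative error code, NULL means success, any other pointer is the clashing element):

/* Sketch of the public wrapper: plain tables pass key = NULL, rhlist = false. */
static inline int rhashtable_insert_fast(
	struct rhashtable *ht, struct rhash_head *obj,
	const struct rhashtable_params params)
{
	void *ret;

	ret = __rhashtable_insert_fast(ht, NULL, obj, params, false);
	if (IS_ERR(ret))			/* e.g. -ENOMEM or -E2BIG */
		return PTR_ERR(ret);

	return ret == NULL ? 0 : -EEXIST;	/* non-NULL: clashing element */
}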
708  struct rhashtable_compare_arg arg = { .ht = ht, .key = key }
713  __rcu * pprev
720  rcu_read_lock() (marks the beginning of an RCU read-side critical section; a synchronize_rcu() on another CPU is guaranteed to block until all such sections have completed)
722  tbl = rht_dereference_rcu(tbl, ht)
723  hash = rht_head_hashfn(ht, tbl, obj, params)
724  elasticity = RHT_ELASTICITY (maximum chain length before rehash; the maximum, not average, chain length grows with the size of the hash table at a rate of (log N)/(log log N))
725  bkt = rht_bucket_insert(ht, tbl, hash)
726  data = ERR_PTR(-ENOMEM)
727  If Not bkt Then Go to out
729  pprev = NULL
730  rht_lock(tbl, bkt) (a bucket is locked by setting BIT(0) in its pointer, which is always zero in a real pointer; the NULLS mark is never stored in the bucket, rather NULL is stored when the bucket is empty)
732  If unlikely(rcu_access_pointer(future_tbl)) Then (rcu_access_pointer() fetches an RCU pointer without dereferencing it, omitting the lockdep checks for being in an RCU read-side critical section)
733  slow_path :
734  rht_unlock(tbl, bkt)
735  rcu_read_unlock() (marks the end of the RCU read-side critical section)
736  Return rhashtable_insert_slow(ht, key, obj)
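The lock taken at line 730 and dropped at line 734 before falling back to the slow path is not a separate spinlock: bit 0 of the bucket pointer itself is used as a bit spin lock, as the comment above describes. A rough sketch of how rht_lock()/rht_unlock() can be built on bit_spin_lock(), assuming the struct rhash_lock_head bucket type used elsewhere in this header (the lockdep dep_map hooks are left out):

static inline void rht_lock(struct bucket_table *tbl,
			    struct rhash_lock_head **bkt)
{
	local_bh_disable();
	/* BIT(0) is always zero in a real pointer, so it is free to use. */
	bit_spin_lock(0, (unsigned long *)bkt);
}

static inline void rht_unlock(struct bucket_table *tbl,
			      struct rhash_lock_head **bkt)
{
	bit_spin_unlock(0, (unsigned long *)bkt);
	local_bh_enable();
}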
743  elasticity--
744  If Not key || (obj_cmpfn ? obj_cmpfn(&arg, rht_obj(ht, head)) : rhashtable_compare(&arg, rht_obj(ht, head))) Then
748  pprev = next
749  Continue
752  data = rht_obj(ht, head)
754  If Not rhlist Then Go to out_unlock
758  list = container_of(obj, struct rhlist_head, rhead) (container_of() casts a member of a structure out to the containing structure)
759  plist = container_of(head, struct rhlist_head, rhead)
761  RCU_INIT_POINTER(next, plist) (initialize an RCU-protected pointer in the special cases where readers need no ordering)
762  head = rht_dereference_bucket(next, tbl, hash)
763  RCU_INIT_POINTER(next, head)
764  If pprev Then
766  rht_unlock(tbl, bkt)
767  Else rht_assign_unlock(tbl, bkt, obj)
769  data = NULL
770  Go to out
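container_of() is the standard way back from the embedded rhash_head/rhlist_head to the object that contains it. A small illustration with a hypothetical object type (not part of this header):

/* Hypothetical object embedding the hash linkage, for illustration only. */
struct demo_obj {
	int			value;
	struct rhash_head	node;
};

/* Given the rhash_head the table links through, recover the full object. */
static inline struct demo_obj *node_to_demo_obj(struct rhash_head *head)
{
	return container_of(head, struct demo_obj, node);
}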
773  If elasticity <= 0 Then Go to slow_path
776  data = ERR_PTR(-E2BIG)
777  If unlikely(rht_grow_above_max(ht, tbl)) Then Go to out_unlock (rht_grow_above_max() returns true if the table is above its maximum size)
780  If unlikely(rht_grow_above_100(ht, tbl)) Then Go to slow_path (rht_grow_above_100() returns true if nelems > table size)
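These checks, together with the 75% check after the element is linked in (line 797), compare the element count against the current table size. A sketch of the kind of tests the helpers perform, roughly following their definitions elsewhere in this header (field names assume the kernel's struct rhashtable and struct bucket_table):

static inline bool rht_grow_above_75(const struct rhashtable *ht,
				     const struct bucket_table *tbl)
{
	/* Expand once the table exceeds 75% load and may still grow. */
	return atomic_read(&ht->nelems) > (tbl->size / 4 * 3) &&
	       (!ht->p.max_size || tbl->size < ht->p.max_size);
}

static inline bool rht_grow_above_100(const struct rhashtable *ht,
				      const struct bucket_table *tbl)
{
	/* More elements than buckets: take the slow, rehashing path. */
	return atomic_read(&ht->nelems) > tbl->size &&
	       (!ht->p.max_size || tbl->size < ht->p.max_size);
}

static inline bool rht_grow_above_max(const struct rhashtable *ht,
				      const struct bucket_table *tbl)
{
	/* Hard cap: insertion fails with -E2BIG beyond this point. */
	return atomic_read(&ht->nelems) >= ht->max_elems;
}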
784  head = rht_ptr(bkt, tbl, hash)
786  RCU_INIT_POINTER(next, head)
787  If rhlist Then
790  list = container_of(obj, struct rhlist_head, rhead)
791  RCU_INIT_POINTER(next, NULL)
794  atomic_inc(&nelems)
795  rht_assign_unlock(tbl, bkt, obj)
797  If rht_grow_above_75(ht, tbl) (returns true if nelems > 0.75 * table size) Then schedule_work(&run_work) (queue the deferred rehash work on the kernel-global workqueue)
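The fast path never resizes the table itself; it only queues the deferred rehash worker and returns. A generic sketch of that defer-to-workqueue pattern, with an illustrative table type and handler (the real worker is rht_deferred_worker() in lib/rhashtable.c):

#include <linux/workqueue.h>

struct demo_table {
	struct work_struct	run_work;
	/* ... table state ... */
};

static void demo_deferred_worker(struct work_struct *work)
{
	struct demo_table *t = container_of(work, struct demo_table, run_work);

	/* Grow, shrink or rehash t here, outside the insert fast path. */
}

static void demo_table_init(struct demo_table *t)
{
	INIT_WORK(&t->run_work, demo_deferred_worker);
	/* Later, from a fast path: schedule_work(&t->run_work); */
}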
800  data = NULL
801  out :
802  rcu_read_unlock() (marks the end of the RCU read-side critical section)
804  Return data
806  out_unlock :
807  rht_unlock(tbl, bkt)
808  Go to out
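For completeness, a minimal, self-contained sketch of a caller driving the public fast-path API that ends up in this function; the object layout, key and names are illustrative and not taken from this header:

#include <linux/rhashtable.h>
#include <linux/slab.h>

struct demo_entry {
	u32			key;
	struct rhash_head	node;
};

static const struct rhashtable_params demo_params = {
	.key_len	= sizeof(u32),
	.key_offset	= offsetof(struct demo_entry, key),
	.head_offset	= offsetof(struct demo_entry, node),
	.automatic_shrinking = true,
};

static int demo(void)
{
	struct rhashtable ht;
	struct demo_entry *e, *found;
	u32 key = 42;
	int err;

	err = rhashtable_init(&ht, &demo_params);
	if (err)
		return err;

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e) {
		err = -ENOMEM;
		goto out;
	}
	e->key = key;

	/* Ends up in __rhashtable_insert_fast(); -EEXIST on a key clash. */
	err = rhashtable_insert_fast(&ht, &e->node, demo_params);
	if (err) {
		kfree(e);
		goto out;
	}

	/* Lookups run under the RCU read lock. */
	rcu_read_lock();
	found = rhashtable_lookup(&ht, &key, demo_params);
	rcu_read_unlock();

	if (found)
		rhashtable_remove_fast(&ht, &found->node, demo_params);
	/* Freeing immediately is safe only because nothing else can see e
	 * here; real users wait an RCU grace period (e.g. kfree_rcu()). */
	kfree(e);
out:
	rhashtable_destroy(&ht);
	return err;
}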