Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: fs/dax.c Create Date: 2022-07-28 20:23:17
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name:dax_iomap_pte_fault

Proto:static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp, int *iomap_errp, const struct iomap_ops *ops)

Type:vm_fault_t

Parameter:

Type    Parameter name
struct vm_fault *vmf
pfn_t *pfnp
int *iomap_errp
const struct iomap_ops *ops
1248  vma = vmf->vma
1249  mapping = vma->vm_file->f_mapping
1250  XA_STATE(xas, &mapping->i_pages, vmf->pgoff) - declare and initialise an XArray operation state on the stack
1251  inode = mapping->host
1252  vaddr = vmf->address
1253  pos = vmf->pgoff << PAGE_SHIFT
1254  struct iomap iomap = { .type = IOMAP_HOLE (no blocks allocated, need allocation) }
1255  struct iomap srcmap = { .type = IOMAP_HOLE (no blocks allocated, need allocation) }
1256  flags = IOMAP_FAULT (mapping for page fault)
1257  major = 0
1258  write = vmf->flags & FAULT_FLAG_WRITE (fault was a write access)
1260  ret = 0
1264  trace_dax_pte_fault(inode, vmf, ret)
1270  If pos >= i_size_read(inode) (fault offset is at or beyond end of file; the caller holds locks serializing us against truncate/punch hole, so this test is reliable) Then
1271  ret = VM_FAULT_SIGBUS
1272  Go to out
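The end-of-file guard above can be sketched as a userspace analogue. The helper name and a 4 KiB page (shift of 12) are assumptions, not kernel identifiers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace sketch of the check at line 1270: the byte offset of the
 * faulting page is pgoff << PAGE_SHIFT, and a fault starting at or past
 * i_size must fail with SIGBUS. A 4 KiB page (shift 12) is assumed. */
#define SKETCH_PAGE_SHIFT 12

/* Hypothetical helper: true when the faulting page starts at or past EOF. */
static bool fault_past_eof(uint64_t pgoff, uint64_t i_size)
{
	uint64_t pos = pgoff << SKETCH_PAGE_SHIFT;
	return pos >= i_size;
}
```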
1275  If write && !vmf->cow_page Then flags |= IOMAP_WRITE (writing, must allocate blocks)
1278  entry = grab_mapping_entry(&xas, mapping, 0) - find the page cache entry at the given index and return it locked; if the page cache has no entry there, add a locked empty entry
1279  If xa_is_internal(entry) (the lookup returned an internal entry, i.e. an error encoded in the XArray) Then
1280  ret = xa_to_internal(entry) - extract the value stored in the internal entry
1281  Go to out
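Lines 1279-1280 rely on the XArray internal-entry encoding: an internal entry carries a value shifted left by two with the low bits set to 0b10, which can never collide with a 4-byte-aligned pointer. A userspace sketch mirroring `xa_mk_internal()`, `xa_is_internal()` and `xa_to_internal()` from `<linux/xarray.h>` (the `sketch_` names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Encode a small value as an internal entry: value << 2, low bits 0b10. */
static void *sketch_mk_internal(unsigned long v)
{
	return (void *)((v << 2) | 2);
}

/* An entry is internal when its low two bits are 0b10. */
static bool sketch_is_internal(const void *entry)
{
	return ((unsigned long)entry & 3) == 2;
}

/* Recover the value stored in an internal entry. */
static unsigned long sketch_to_internal(const void *entry)
{
	return (unsigned long)entry >> 2;
}
```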
1290  If pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd) (we raced with a PMD fault overlapping the PTE we need to set up; just return and the fault will be retried) Then
1291  ret = VM_FAULT_NOPAGE
1292  Go to unlock_entry
1300  error = ops->iomap_begin(inode, pos, PAGE_SIZE, flags, &iomap, &srcmap)
1301  If iomap_errp Then *iomap_errp = error
1303  If error Then
1304  ret = dax_fault_return(error)
1305  Go to unlock_entry
1307  If WARN_ON_ONCE(iomap.offset + iomap.length < pos + PAGE_SIZE) (the returned mapping does not cover the whole faulting page - possible fs corruption) Then
1308  error = -EIO
1309  Go to error_finish_iomap
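The sanity check at line 1307 verifies that the extent returned by the filesystem covers at least the faulting page. A sketch with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the check at line 1307: iomap_begin() was asked to map
 * PAGE_SIZE bytes at pos, so the extent it returns must reach at least
 * pos + page_size; anything shorter indicates fs corruption. */
static bool iomap_covers_fault(uint64_t map_offset, uint64_t map_length,
			       uint64_t pos, uint64_t page_size)
{
	return map_offset + map_length >= pos + page_size;
}
```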
1312  If cow_page Then
1313  sector = dax_iomap_sector( & iomap, pos)
1316  Case iomap.type == IOMAP_HOLE or IOMAP_UNWRITTEN: zero-fill the CoW page
1319  Break
1323  Break (ends the IOMAP_MAPPED arm, which copies the existing block from the DAX device into the CoW page)
1324  Default
1325  WARN_ON_ONCE(1)
1326  error = -EIO
1327  Break
1330  If error Then Go to error_finish_iomap
1333  __SetPageUptodate(cow_page)
1334  ret = finish_fault(vmf) - finish the page fault once the page to fault in is prepared
1335  If Not ret Then ret = VM_FAULT_DONE_COW
1337  Go to finish_iomap
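The copy-on-write switch above picks the source for the private page: zero-fill for holes and unwritten extents, a copy from the DAX device for mapped extents. A userspace sketch of that decision; the enum and helper are hypothetical stand-ins for the iomap types:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-ins for IOMAP_HOLE / IOMAP_UNWRITTEN / IOMAP_MAPPED. */
enum sketch_iomap_type { SK_HOLE, SK_UNWRITTEN, SK_MAPPED };

/* Sketch of lines 1316-1327: returns 0 and says whether the CoW page
 * must be filled from the device, or -EIO for an unexpected type. */
static int cow_source(enum sketch_iomap_type type, bool *copy_from_device)
{
	switch (type) {
	case SK_HOLE:
	case SK_UNWRITTEN:
		*copy_from_device = false;	/* zero-fill (clear_user_highpage) */
		return 0;
	case SK_MAPPED:
		*copy_from_device = true;	/* copy the block (copy_user_dax) */
		return 0;
	default:
		return -EIO;			/* filesystem gave us nonsense */
	}
}
```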
1340  sync = dax_fault_is_synchronous(flags, vma, &iomap) - MAP_SYNC on a DAX mapping guarantees dirty metadata is flushed on write faults (non-CoW), but not on read faults
1343  Case iomap.type == IOMAP_MAPPED (blocks allocated at @addr)
1349  error = dax_iomap_pfn( & iomap, pos, PAGE_SIZE, & pfn)
1350  If error < 0 Then Go to error_finish_iomap
1353  entry = dax_insert_entry(&xas, mapping, vmf, entry, pfn, 0, write && !sync) - by this point grab_mapping_entry() has ensured a locked entry of the appropriate size, so there is no need to worry about downgrading PMDs to PTEs
1362  If sync Then
1363  If WARN_ON_ONCE(!pfnp) Then
1364  error = -EIO
1365  Go to error_finish_iomap
1367  *pfnp = pfn (hand the pfn back so the caller can insert the PTE after fsync is done)
1369  Go to finish_iomap
1371  trace_dax_insert_mapping(inode, vmf, entry)
1372  If write Then ret = vmf_insert_mixed_mkwrite(vma, vaddr, pfn) - if the PTE insertion fails because someone else already added a different entry in the meantime, treat it as success, assuming the same entry was actually inserted
1374  Else ret = vmf_insert_mixed(vma, vaddr, pfn)
1377  Go to finish_iomap
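Lines 1362-1369 defer PTE insertion for synchronous (MAP_SYNC) write faults: the PTE cannot be installed until the caller has flushed the metadata, so the pfn is stored through `pfnp` and a "needs sync" result is returned. A sketch of that decision; the constant and names are hypothetical stand-ins for `VM_FAULT_NEEDDSYNC` and `-EIO` handling:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for VM_FAULT_NEEDDSYNC. */
#define SK_FAULT_NEEDDSYNC 0x2000

/* Sketch of lines 1362-1369: for a synchronous fault, hand the pfn back
 * through deferred_pfn and ask the caller to insert the PTE after fsync;
 * a missing slot is a bug (-EIO). Otherwise insert immediately (0). */
static int sync_fault_result(bool sync, uint64_t pfn, uint64_t *deferred_pfn)
{
	if (sync) {
		if (!deferred_pfn)
			return -5;		/* -EIO: caller gave us no slot */
		*deferred_pfn = pfn;
		return SK_FAULT_NEEDDSYNC;	/* insert the PTE after fsync */
	}
	return 0;				/* insert the PTE now */
}
```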
1378  Case iomap.type == IOMAP_UNWRITTEN (blocks allocated at @addr in unwritten state)
1379  Case iomap.type == IOMAP_HOLE (no blocks allocated, need allocation)
1380  If Not write Then
1382  Go to finish_iomap
1385  Default
1386  WARN_ON_ONCE(1)
1387  error = -EIO
1388  Break
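In the hole/unwritten arm above, a read fault is served from the shared zero page (`dax_load_hole()`), while a write fault should already have had blocks allocated by `iomap_begin()`; falling through to the default arm on a write therefore means the filesystem misbehaved and the fault fails with -EIO. A minimal sketch of that decision, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical outcomes for a fault over a hole/unwritten extent. */
enum hole_action { HOLE_LOAD_ZERO_PAGE, HOLE_FS_BUG };

/* Sketch of lines 1378-1388: reads map the zero page; a write reaching
 * this point is a filesystem bug and is turned into -EIO by the caller. */
static enum hole_action hole_fault_action(bool write)
{
	if (!write)
		return HOLE_LOAD_ZERO_PAGE;
	return HOLE_FS_BUG;
}
```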
1391  error_finish_iomap :
1392  ret = dax_fault_return(error)
1393  finish_iomap :
1394  If ops->iomap_end Then
1395  copied = PAGE_SIZE
1397  If ret & VM_FAULT_ERROR Then copied = 0
1405  ops->iomap_end(inode, pos, PAGE_SIZE, copied, flags, &iomap) - the fault is done by now and there is no way back, so any error from iomap_end is ignored
1407  unlock_entry :
1408  dax_unlock_entry(&xas, entry) - the xa_state was used to get the entry, but the entry was then locked and the xa_lock dropped, so the xa_state is stale and must be reset before use
1409  out :
1410  trace_dax_pte_fault_done(inode, vmf, ret)
1411  Return ret | major
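The error ladder above funnels every failure through `dax_fault_return()` (lines 1304 and 1392), which converts a negative errno into a `vm_fault_t` result. A userspace sketch of that conversion; the `SK_FAULT_*` values are hypothetical stand-ins for the kernel's `VM_FAULT_*` flags:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for VM_FAULT_NOPAGE / VM_FAULT_SIGBUS / VM_FAULT_OOM. */
#define SK_FAULT_NOPAGE 0x0100
#define SK_FAULT_SIGBUS 0x0002
#define SK_FAULT_OOM    0x0001

/* Sketch of dax_fault_return(): no error means the fault is resolved
 * without installing a new page; -ENOMEM becomes an OOM result; any
 * other error is reported to the faulting task as SIGBUS. */
static unsigned int sketch_fault_return(int error)
{
	if (error == 0)
		return SK_FAULT_NOPAGE;
	if (error == -ENOMEM)
		return SK_FAULT_OOM;
	return SK_FAULT_SIGBUS;
}
```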
Caller
Name    Describe
dax_iomap_fault    dax_iomap_fault - handle a page fault on a DAX file. @vmf: the description of the fault; @pe_size: size of the page to fault in; @pfnp: PFN to insert for synchronous faults if fsync is required; @iomap_errp: storage for the detailed error code in case of error; @ops: iomap ops passed from the file system