Function report
Source Code: mm/huge_memory.c
Create Date: 2022-07-28 16:01:46
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: do_huge_pmd_wp_page
Proto: vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
Return Type: vm_fault_t
Parameters:

Type | Parameter
---|---
struct vm_fault * | vmf
pmd_t | orig_pmd
1317 | vma = vmf->vma (the target VMA)
1320 | haddr = vmf->address & HPAGE_PMD_MASK (huge-page-aligned faulting address)
1323 | ret = 0
1325 | vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd) (the page-table lock that protects this pmd entry)
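For orientation, here is the entry of the function as it reads in mainline mm/huge_memory.c around v5.5/v5.6, the vintage these line numbers and the 2020-03-12 modify date suggest; treat it as a reconstructed sketch rather than a verbatim excerpt:

```c
vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
	struct vm_area_struct *vma = vmf->vma;
	struct page *page = NULL, *new_page;
	struct mem_cgroup *memcg;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	struct mmu_notifier_range range;
	gfp_t huge_gfp;		/* for allocation and charge */
	vm_fault_t ret = 0;

	/* Locate, but do not yet take, the lock covering this pmd. */
	vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
	VM_BUG_ON_VMA(!vma->anon_vma, vma);
	...
```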
1327 | If is_huge_zero_pmd(orig_pmd) (write fault on the read-only huge zero page) Then Go to alloc
1330 | If unlikely(!pmd_same(*vmf->pmd, orig_pmd)) (the pmd changed before the lock was taken) Then Go to out_unlock
1333 | page = pmd_page(orig_pmd)
1334 | VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page) |
1339 | If Not trylock_page(page) Then (page lock contended: take a reference, drop vmf->ptl, sleep in lock_page(), then relock and revalidate the pmd)
1347 | Go to out_unlock (the pmd changed while the lock was dropped)
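Lines 1339–1350 implement the usual lock-inversion dance: lock_page() can sleep, which is not allowed under the pmd spinlock. A sketch of the pattern, reconstructed from the same mainline vintage:

```c
	if (!trylock_page(page)) {
		get_page(page);			/* keep page alive across unlock */
		spin_unlock(vmf->ptl);
		lock_page(page);		/* may sleep */
		spin_lock(vmf->ptl);
		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
			/* The pmd changed while we slept: give up. */
			unlock_page(page);
			put_page(page);
			goto out_unlock;
		}
		put_page(page);			/* drop the temporary reference */
	}
```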
1353 | entry = pmd_mkyoung(orig_pmd) (reuse path: nobody else maps the huge page, so make the existing pmd young, dirty and writable in place)
1354 | entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma) |
1357 | ret |= VM_FAULT_WRITE |
1359 | Go to out_unlock |
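Lines 1353–1359 belong to the in-place reuse fast path: if reuse_swap_page() confirms we are the sole mapper, no copy is needed. Reconstructed sketch:

```c
	if (reuse_swap_page(page, NULL)) {	/* sole mapper: reuse in place */
		pmd_t entry;
		entry = pmd_mkyoung(orig_pmd);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		ret |= VM_FAULT_WRITE;		/* report that we reused the page */
		unlock_page(page);
		goto out_unlock;
	}
	unlock_page(page);
	get_page(page);				/* hold the page for the copy below */
	spin_unlock(vmf->ptl);
```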
1364 | alloc : |
1368 | new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER) |
1369 | Else new_page = NULL |
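The allocation decision at the alloc label (lines 1364–1369), sketched from the same vintage; new_page stays NULL when THP is not enabled for this VMA, which forces the fallback path below:

```c
alloc:
	if (__transparent_hugepage_enabled(vma) &&
	    !transparent_hugepage_debug_cow()) {
		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr,
					      HPAGE_PMD_ORDER);
	} else
		new_page = NULL;
```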
1372 | If likely(new_page) Then
1373 | prep_transhuge_page(new_page) |
1374 | Else |
1375 | If Not page Then (zero-page case: split the huge pmd and fall back to small pages)
1377 | ret |= VM_FAULT_FALLBACK |
1378 | Else |
1379 | ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page) |
1380 | If ret & VM_FAULT_OOM Then |
1382 | ret |= VM_FAULT_FALLBACK |
1386 | count_vm_event(THP_FAULT_FALLBACK)
1387 | Go to out |
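Lines 1372–1387 in full: when no huge page could be allocated, the pmd is split, either directly (zero-page case) or via do_huge_pmd_wp_page_fallback(), which copies the old huge page piecemeal into small pages. Reconstructed sketch:

```c
	if (likely(new_page)) {
		prep_transhuge_page(new_page);
	} else {
		if (!page) {
			/* Zero-page case: split and retry as small pages. */
			split_huge_pmd(vma, vmf->pmd, vmf->address);
			ret |= VM_FAULT_FALLBACK;
		} else {
			ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page);
			if (ret & VM_FAULT_OOM) {
				split_huge_pmd(vma, vmf->pmd, vmf->address);
				ret |= VM_FAULT_FALLBACK;
			}
			put_page(page);
		}
		count_vm_event(THP_FAULT_FALLBACK);
		goto out;
	}
```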
1394 | If page Then put_page(page) (drop the old page; this also frees its swap cache if we were the last user)
1396 | ret |= VM_FAULT_FALLBACK |
1397 | count_vm_event(THP_FAULT_FALLBACK)
1398 | Go to out |
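Between lines 1387 and 1394 the report skips the memcg charge attempt: if the new huge page cannot be charged to the cgroup, it is released and the fault falls back to small pages. Reconstructed sketch:

```c
	if (unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm,
						 huge_gfp, &memcg, true))) {
		put_page(new_page);
		split_huge_pmd(vma, vmf->pmd, vmf->address);
		if (page)
			put_page(page);
		ret |= VM_FAULT_FALLBACK;
		count_vm_event(THP_FAULT_FALLBACK);
		goto out;
	}
```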
1401 | count_vm_event(THP_FAULT_ALLOC)
1402 | count_memcg_events(memcg, THP_FAULT_ALLOC, 1) |
1404 | If Not page Then clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR)
1406 | Else copy_user_huge_page(new_page, page, vmf->address, vma, HPAGE_PMD_NR)
1409 | __SetPageUptodate(new_page) |
1411 | mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE)
1413 | mmu_notifier_invalidate_range_start( & range) |
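Lines 1411–1413 open an MMU notifier range that is closed at out_mn: secondary MMUs (KVM, IOMMU, etc.) must see the old translation invalidated before the new page becomes visible. Sketch of the bracket:

```c
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				haddr, haddr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);
	/* ... install the new pmd under vmf->ptl ... */
out_mn:
	/*
	 * pmdp_huge_clear_flush_notify() already called ->invalidate_range(),
	 * so only the range end is signalled here.
	 */
	mmu_notifier_invalidate_range_only_end(&range);
```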
1416 | If page Then put_page(page)
1420 | mem_cgroup_cancel_charge(new_page, memcg, true) (the pmd changed while we were unlocked: undo the memcg charge and abandon new_page)
1422 | Go to out_mn
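Around lines 1414–1422 the pmd is revalidated once more after the lock is retaken; on a mismatch the already-charged new_page is abandoned. Reconstructed sketch:

```c
	spin_lock(vmf->ptl);
	if (page)
		put_page(page);			/* reference taken before alloc */
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
		spin_unlock(vmf->ptl);
		mem_cgroup_cancel_charge(new_page, memcg, true);
		put_page(new_page);
		goto out_mn;
	}
```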
1423 | Else |
1426 | entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma) |
1429 | mem_cgroup_commit_charge(new_page, memcg, false, true)
1431 | set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry)
1432 | update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1433 | If Not page Then (account the fresh huge page: HPAGE_PMD_NR anonymous pages)
1435 | Else |
1436 | VM_BUG_ON_PAGE(!PageHead(page), page) |
1440 | ret |= VM_FAULT_WRITE |
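Lines 1423–1440 form the commit branch: build the new writable pmd, flush the old one with notification, wire up rmap, memcg and LRU state, then retire the old page. Reconstructed sketch:

```c
	} else {
		pmd_t entry;
		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
		page_add_new_anon_rmap(new_page, vma, haddr, true);
		mem_cgroup_commit_charge(new_page, memcg, false, true);
		lru_cache_add_active_or_unevictable(new_page, vma);
		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		if (!page) {
			add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
		} else {
			VM_BUG_ON_PAGE(!PageHead(page), page);
			page_remove_rmap(page, true);	/* retire the old page */
			put_page(page);
		}
		ret |= VM_FAULT_WRITE;
	}
```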
1443 | out_mn : |
1449 | out : |
1450 | Return ret |
1451 | out_unlock : |
1453 | Return ret |
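The three exits (lines 1443–1453), sketched: out_mn ends the notifier range, out returns directly, and out_unlock additionally drops the pmd lock:

```c
	spin_unlock(vmf->ptl);
out_mn:
	mmu_notifier_invalidate_range_only_end(&range);
out:
	return ret;
out_unlock:
	spin_unlock(vmf->ptl);
	return ret;
```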
Name | Description
---|---
wp_huge_pmd | Caller (mm/memory.c): dispatches write-protect faults on huge pmds, calling do_huge_pmd_wp_page() for anonymous VMAs