Function report |
Source Code: mm/huge_memory.c |
Create Date:2022-07-28 16:01:33 |
Last Modify:2020-03-12 14:18:49 | Copyright©Brick |
Name:do_huge_pmd_wp_page_fallback
Proto:static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd, struct page *page)
Type:vm_fault_t
Parameter:
Type | Parameter |
---|---|
struct vm_fault * | vmf |
pmd_t | orig_pmd |
struct page * | page |
1201 | vma = vmf->vma (the target VMA) |
1202 | haddr = vmf->address & HPAGE_PMD_MASK (faulting address rounded down to the huge-page boundary) |
1207 | ret = 0 |
1211 | pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *), GFP_KERNEL), one pointer per 4 KB subpage |
1213 | If unlikely(!pages) Then |
1214 | ret |= VM_FAULT_OOM |
1215 | goto out |
1218 | For each i < HPAGE_PMD_NR: |
1219 | pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma, vmf->address, page_to_nid(page)) |
1221 | If unlikely(!pages[i] || mem_cgroup_try_charge_delay(pages[i], vma->vm_mm, GFP_KERNEL, &memcg, false)) Then |
1224 | If pages[i] Then put_page(pages[i]), dropping the reference and freeing the page (and any associated swap cache) if this was the last user |
1227 | memcg = (struct mem_cgroup *)page_private(pages[i]) (unwind loop: walk back over the pages already allocated and charged) |
1228 | set_page_private(pages[i], 0) |
1229 | mem_cgroup_cancel_charge(pages[i], memcg, false) |
1234 | ret |= VM_FAULT_OOM |
1235 | goto out |
1237 | set_page_private(pages[i], (unsigned long)memcg) |
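Lines 1218 to 1237 implement an all-or-nothing allocation: if any subpage allocation or memcg charge fails, every page obtained so far is uncharged and freed before bailing out with VM_FAULT_OOM. A hedged user-space analogue of that unwind pattern, with plain malloc/free standing in for page allocation plus charging, and NPAGES mirroring HPAGE_PMD_NR:

```c
#include <stdlib.h>

#define NPAGES 512   /* stand-in for HPAGE_PMD_NR on x86-64 (2 MB / 4 KB) */

/* Allocate n buffers; on any failure, unwind every earlier allocation
 * and return NULL, mirroring the kernel loop's backward cleanup walk. */
static void **alloc_all_or_nothing(size_t n, size_t size)
{
    void **pages = calloc(n, sizeof(*pages));
    if (!pages)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        pages[i] = malloc(size);
        if (!pages[i]) {
            while (i--)          /* walk back over what we already got */
                free(pages[i]);
            free(pages);
            return NULL;         /* caller reports the analogue of VM_FAULT_OOM */
        }
    }
    return pages;
}
```

The backward `while (i--)` walk is the essential shape: only indices below the failing one are valid, so the unwind must stop exactly where the forward loop stopped.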
1240 | For each i < HPAGE_PMD_NR: copy the corresponding 4 KB subpage of the old huge page into pages[i] |
1243 | __SetPageUptodate(pages[i]) |
1244 | cond_resched() |
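The loop at lines 1240 to 1244 copies each 4 KB subpage of the old huge page into its replacement, marks it up to date, and yields with cond_resched() between iterations. A simplified user-space sketch of the subpage copy (sizes are the conventional x86-64 values; the kernel operates on struct page via copy_user_highpage() rather than memcpy):

```c
#include <string.h>

#define PAGE_SIZE    4096
#define HPAGE_PMD_NR 512           /* 2 MB / 4 KB subpages per huge page */

/* Copy subpage i of a (simulated) huge page buffer into its replacement. */
static void copy_subpage(unsigned char *dst, const unsigned char *huge, size_t i)
{
    memcpy(dst, huge + i * PAGE_SIZE, PAGE_SIZE);
}

/* Copy every subpage, as the kernel loop does one iteration at a time. */
static void copy_all_subpages(unsigned char **dst_pages, const unsigned char *huge)
{
    for (size_t i = 0; i < HPAGE_PMD_NR; i++)
        copy_subpage(dst_pages[i], huge, i);
}
```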
1247 | mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE) |
1249 | mmu_notifier_invalidate_range_start(&range) |
1251 | vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd), taking the page-table lock that protects the PMD entry |
1252 | If unlikely(!pmd_same(*vmf->pmd, orig_pmd)) Then goto out_free_pages |
1254 | VM_BUG_ON_PAGE(!PageHead(page), page) |
1267 | pmd_populate(vma->vm_mm, &_pmd, pgtable) |
1269 | For each i < HPAGE_PMD_NR: |
1273 | memcg = (struct mem_cgroup *)page_private(pages[i]) |
1274 | set_page_private(pages[i], 0) |
1276 | mem_cgroup_commit_charge(pages[i], memcg, false, false) |
1278 | vmf->pte = pte_offset_map(&_pmd, haddr), mapping the PTE slot for this subpage |
1279 | VM_BUG_ON(!pte_none(*vmf->pte)) |
1285 | smp_wmb(), so the filled PTEs are visible before the PMD entry that points at them |
1286 | pmd_populate(vma->vm_mm, vmf->pmd, pgtable) |
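The smp_wmb() at line 1285 orders the PTE stores before the pmd_populate() store that publishes the page table: a CPU walking the tables must never see the new PMD entry while the PTEs behind it are still uninitialized. A user-space sketch of the same publish-after-init idiom using C11 release/acquire atomics (table contents and sizes are illustrative, not kernel structures):

```c
#include <stdatomic.h>
#include <stddef.h>

#define NENTRIES 8

static int table[NENTRIES];              /* stands in for the pte page   */
static _Atomic(int *) published = NULL;  /* stands in for the pmd entry  */

/* Writer: fill every entry first, then publish with release semantics,
 * the analogue of "fill ptes; smp_wmb(); pmd_populate()". */
static void fill_and_publish(void)
{
    for (int i = 0; i < NENTRIES; i++)
        table[i] = i * i;
    atomic_store_explicit(&published, table, memory_order_release);
}

/* Reader: an acquire load pairs with the release store, so once the
 * pointer is observed, the entries behind it are observed initialized. */
static const int *lookup(void)
{
    return atomic_load_explicit(&published, memory_order_acquire);
}
```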
1296 | ret |= VM_FAULT_WRITE |
1299 | out: |
1300 | Return ret |
1302 | out_free_pages: |
1304 | mmu_notifier_invalidate_range_end(&range) |
1305 | For each i < HPAGE_PMD_NR: |
1306 | memcg = (struct mem_cgroup *)page_private(pages[i]) |
1307 | set_page_private(pages[i], 0) |
1308 | mem_cgroup_cancel_charge(pages[i], memcg, false) |
1312 | goto out |
Caller:
Name | Describe |
---|---|
do_huge_pmd_wp_page | |