
Commit 2e83ee1

xzpeter authored and torvalds committed
mm: thp: fix flags for pmd migration when split
When splitting a huge migrating PMD, we'll transfer all the existing PMD bits and apply them again onto the small PTEs. However, we are fetching the bits unconditionally via pmd_soft_dirty(), pmd_write() or pmd_young(), while they actually make no sense at all when it's a migration entry. Fix them up. While at it, drop the ifdef as well, since it is not needed.

Note that if my understanding of the problem is correct, then without the patch there is a chance of losing some of the dirty bits in the migrating PMD pages (on x86_64 we're fetching bit 11, which is part of the swap offset, instead of bit 2), and it could potentially corrupt the memory of a userspace program which depends on the dirty bit.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Konstantin Khlebnikov <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Jiang <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: Souptick Joarder <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: <[email protected]> [4.14+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
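The bit-overlap the message describes can be illustrated with a small toy model (plain Python, not kernel code). The bit positions below follow the commit message for x86_64 (present-entry soft-dirty at bit 11, swap-entry soft-dirty at bit 2); the swap-offset shift and helper names are hypothetical simplifications of this model, not the real pgtable layout:

```python
# Toy model of the bug: on a migration (swap) PMD entry, the bit that
# pmd_soft_dirty() reads (bit 11 on x86_64, per the commit message) is
# part of the swap offset, not a soft-dirty flag. The swap-entry
# soft-dirty flag lives at a different bit (bit 2 per the message).

SOFT_DIRTY_BIT = 11      # soft-dirty bit of a *present* entry (x86_64)
SWP_SOFT_DIRTY_BIT = 2   # soft-dirty bit of a *swap* entry (x86_64)
SWP_OFFSET_SHIFT = 9     # assumed offset position in this toy model only

def pmd_soft_dirty(pmd):
    """Valid only for present entries; garbage on swap entries."""
    return bool(pmd & (1 << SOFT_DIRTY_BIT))

def pmd_swp_soft_dirty(pmd):
    """Correct accessor for migration/swap entries (what the fix uses)."""
    return bool(pmd & (1 << SWP_SOFT_DIRTY_BIT))

def make_migration_pmd(swp_offset, soft_dirty):
    """Build a toy swap-format PMD value."""
    pmd = swp_offset << SWP_OFFSET_SHIFT
    if soft_dirty:
        pmd |= 1 << SWP_SOFT_DIRTY_BIT
    return pmd

# A migration entry whose page IS soft-dirty, but whose swap offset
# happens to leave bit 11 clear: the pre-fix read loses the flag.
pmd = make_migration_pmd(swp_offset=0b001, soft_dirty=True)
assert pmd_soft_dirty(pmd) is False      # buggy read: flag lost
assert pmd_swp_soft_dirty(pmd) is True   # fixed read: flag preserved
```

Depending on which swap offset the entry happens to encode, the buggy read can just as easily report a spurious soft-dirty bit as lose a real one, which is why the patch switches the migration branch to the swap-aware accessors.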
1 parent 2830bf6 commit 2e83ee1

File tree

1 file changed: +11, -9 lines

mm/huge_memory.c

Lines changed: 11 additions & 9 deletions
```diff
@@ -2144,23 +2144,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	old_pmd = pmdp_invalidate(vma, haddr, pmd);
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	pmd_migration = is_pmd_migration_entry(old_pmd);
-	if (pmd_migration) {
+	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_to_page(swp_offset(entry));
-	} else
-#endif
+		write = is_write_migration_entry(entry);
+		young = false;
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+	} else {
 		page = pmd_page(old_pmd);
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.
```