Commit 602f985
mm/huge_memory: fix dereferencing invalid pmd migration entry
jira LE-3666
cve CVE-2025-37958
Rebuild_History Non-Buildable kernel-5.14.0-570.30.1.el9_6
commit-author Gavin Guo <[email protected]>
commit be6e843
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.30.1.el9_6/be6e843f.failed

When migrating a THP, concurrent access to the PMD migration entry during
a deferred split scan can lead to an invalid address access, as
illustrated below. To prevent this invalid access, it is necessary to
check the PMD migration entry and return early. In this context, there is
no need to use pmd_to_swp_entry and pfn_swap_entry_to_page to verify the
equality of the target folio. Since the PMD migration entry is locked, it
cannot be served as the target.

Mailing list discussion and explanation from Hugh Dickins: "An anon_vma
lookup points to a location which may contain the folio of interest, but
might instead contain another folio: and weeding out those other folios is
precisely what the "folio != pmd_folio((*pmd)" check (and the "risk of
replacing the wrong folio" comment a few lines above it) is for."

BUG: unable to handle page fault for address: ffffea60001db008
CPU: 0 UID: 0 PID: 2199114 Comm: tee Not tainted 6.14.0+ #4 NONE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
RIP: 0010:split_huge_pmd_locked+0x3b5/0x2b60
Call Trace:
 <TASK>
 try_to_migrate_one+0x28c/0x3730
 rmap_walk_anon+0x4f6/0x770
 unmap_folio+0x196/0x1f0
 split_huge_page_to_list_to_order+0x9f6/0x1560
 deferred_split_scan+0xac5/0x12a0
 shrinker_debugfs_scan_write+0x376/0x470
 full_proxy_write+0x15c/0x220
 vfs_write+0x2fc/0xcb0
 ksys_write+0x146/0x250
 do_syscall_64+0x6a/0x120
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

The bug is found by syzkaller on an internal kernel, then confirmed on
upstream.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lore.kernel.org/all/[email protected]/
Link: https://lore.kernel.org/all/[email protected]/
Fixes: 84c3fc4 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Gavin Guo <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Acked-by: Zi Yan <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Cc: Florent Revest <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
(cherry picked from commit be6e843)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	mm/huge_memory.c
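The failure mode the commit message describes can be illustrated with a small user-space model. Everything below is a hypothetical sketch: the bit layout, `PMD_PRESENT`, `SWP_PFN_SHIFT`, and the helper names are illustrative stand-ins, not the kernel's actual `pmd_t` encoding. The point it demonstrates is only that a migration entry's bits encode a swap entry, not a page mapping, so code must test for a migration entry before interpreting the value as one:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed, simplified encoding for this sketch (NOT the real layout):
 * a present PMD maps a huge page; a migration entry is a non-present
 * value whose bits encode a pfn-bearing swap entry.  Interpreting the
 * swap bits as a mapping and dereferencing the result is the crash
 * mode reported above. */
#define PMD_PRESENT   0x1ULL  /* assumed "present" bit */
#define SWP_PFN_SHIFT 8       /* assumed pfn position inside an entry */

typedef uint64_t pmd_val;

static pmd_val make_present_pmd(uint64_t pfn)
{
	return (pfn << SWP_PFN_SHIFT) | PMD_PRESENT;
}

static pmd_val make_migration_entry(uint64_t pfn)
{
	/* Migration entries are deliberately non-present. */
	return pfn << SWP_PFN_SHIFT;
}

static bool is_pmd_migration_entry(pmd_val v)
{
	return (v & PMD_PRESENT) == 0;
}

/* Valid only for present PMDs; callers must check
 * is_pmd_migration_entry() first -- the early return this
 * commit adds. */
static uint64_t pmd_pfn(pmd_val v)
{
	return v >> SWP_PFN_SHIFT;
}
```

In the real kernel the split path did make such a check for the trans-huge case but could still reach folio comparison code with a migration entry; the fix (in the diff below on this page) bails out before the entry is ever treated as a folio pointer.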
1 parent cb0d8ab commit 602f985

File tree

1 file changed: +106 -0
Lines changed: 106 additions & 0 deletions

@@ -0,0 +1,106 @@
mm/huge_memory: fix dereferencing invalid pmd migration entry

jira LE-3666
cve CVE-2025-37958
Rebuild_History Non-Buildable kernel-5.14.0-570.30.1.el9_6
commit-author Gavin Guo <[email protected]>
commit be6e843fc51a584672dfd9c4a6a24c8cb81d5fb7
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.30.1.el9_6/be6e843f.failed

When migrating a THP, concurrent access to the PMD migration entry during
a deferred split scan can lead to an invalid address access, as
illustrated below. To prevent this invalid access, it is necessary to
check the PMD migration entry and return early. In this context, there is
no need to use pmd_to_swp_entry and pfn_swap_entry_to_page to verify the
equality of the target folio. Since the PMD migration entry is locked, it
cannot be served as the target.

Mailing list discussion and explanation from Hugh Dickins: "An anon_vma
lookup points to a location which may contain the folio of interest, but
might instead contain another folio: and weeding out those other folios is
precisely what the "folio != pmd_folio((*pmd)" check (and the "risk of
replacing the wrong folio" comment a few lines above it) is for."

BUG: unable to handle page fault for address: ffffea60001db008
CPU: 0 UID: 0 PID: 2199114 Comm: tee Not tainted 6.14.0+ #4 NONE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
RIP: 0010:split_huge_pmd_locked+0x3b5/0x2b60
Call Trace:
 <TASK>
 try_to_migrate_one+0x28c/0x3730
 rmap_walk_anon+0x4f6/0x770
 unmap_folio+0x196/0x1f0
 split_huge_page_to_list_to_order+0x9f6/0x1560
 deferred_split_scan+0xac5/0x12a0
 shrinker_debugfs_scan_write+0x376/0x470
 full_proxy_write+0x15c/0x220
 vfs_write+0x2fc/0xcb0
 ksys_write+0x146/0x250
 do_syscall_64+0x6a/0x120
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

The bug is found by syzkaller on an internal kernel, then confirmed on
upstream.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lore.kernel.org/all/[email protected]/
Link: https://lore.kernel.org/all/[email protected]/
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Gavin Guo <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Acked-by: Zi Yan <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Cc: Florent Revest <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Miaohe Lin <[email protected]>

Signed-off-by: Andrew Morton <[email protected]>
(cherry picked from commit be6e843fc51a584672dfd9c4a6a24c8cb81d5fb7)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	mm/huge_memory.c
diff --cc mm/huge_memory.c
index c1cdbd21ddde,47d76d03ce30..000000000000
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@@ -2279,6 -3072,32 +2279,35 @@@ static void __split_huge_pmd_locked(str
  	pmd_populate(mm, pmd, pgtable);
  }

++<<<<<<< HEAD
++=======
+ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+ 			   pmd_t *pmd, bool freeze, struct folio *folio)
+ {
+ 	bool pmd_migration = is_pmd_migration_entry(*pmd);
+
+ 	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+ 	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+ 	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+ 	VM_BUG_ON(freeze && !folio);
+
+ 	/*
+ 	 * When the caller requests to set up a migration entry, we
+ 	 * require a folio to check the PMD against. Otherwise, there
+ 	 * is a risk of replacing the wrong folio.
+ 	 */
+ 	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || pmd_migration) {
+ 		/*
+ 		 * Do not apply pmd_folio() to a migration entry; and folio lock
+ 		 * guarantees that it must be of the wrong folio anyway.
+ 		 */
+ 		if (folio && (pmd_migration || folio != pmd_folio(*pmd)))
+ 			return;
+ 		__split_huge_pmd_locked(vma, pmd, address, freeze);
+ 	}
+ }
+
++>>>>>>> be6e843fc51a (mm/huge_memory: fix dereferencing invalid pmd migration entry)
  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
  		unsigned long address, bool freeze, struct folio *folio)
  {
* Unmerged path mm/huge_memory.c
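The guard the backported hunk adds can be sketched as a self-contained model. `struct pmd_model`, `may_split`, and the field layout below are illustrative assumptions, not kernel types; only the boolean logic mirrors the check in `split_huge_pmd_locked()` (when the caller supplied a folio to compare against, a migration entry can never be the target, so the function returns before ever applying `pmd_folio()` to the entry):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for kernel types (assumptions, not real APIs). */
typedef enum { PMD_TRANS_HUGE, PMD_MIGRATION_ENTRY } pmd_kind;

struct folio { int id; };

struct pmd_model {
	pmd_kind kind;
	struct folio *folio;    /* meaningful only when PMD_TRANS_HUGE */
	unsigned long swap_pfn; /* meaningful only when PMD_MIGRATION_ENTRY */
};

static bool is_pmd_migration_entry(const struct pmd_model *pmd)
{
	return pmd->kind == PMD_MIGRATION_ENTRY;
}

/* Mirrors the fixed guard: if the caller passed a folio to check
 * against, bail out when the PMD holds a migration entry (the entry
 * is locked during migration, so it cannot be the target) or when it
 * maps a different folio.  Returns whether the split may proceed. */
static bool may_split(const struct pmd_model *pmd, const struct folio *folio)
{
	bool pmd_migration = is_pmd_migration_entry(pmd);

	if (folio && (pmd_migration || folio != pmd->folio))
		return false;
	return true;
}
```

Note the ordering matters: the migration-entry test comes before the `folio != pmd->folio` comparison, so the swap-entry bits are never read as a folio pointer, which is the dereference the original code performed.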
