
Commit 1118234

davidhildenbrand authored and akpm00 committed
mm/rmap: fix dereferencing invalid subpage pointer in try_to_migrate_one()
The subpage we calculate is an invalid pointer for device private pages,
because device private pages are mapped via non-present device private
entries, not ordinary present PTEs.

Let's just not compute broken pointers and fix them up later: move the
proper assignment of the correct subpage to the beginning of the function
and assert that we really only have a single page in our folio.

This currently results in a BUG when trying to compute anon_exclusive,
because:

[ 528.727237] BUG: unable to handle page fault for address: ffffea1fffffffc0
[ 528.739585] #PF: supervisor read access in kernel mode
[ 528.745324] #PF: error_code(0x0000) - not-present page
[ 528.751062] PGD 44eaf2067 P4D 44eaf2067 PUD 0
[ 528.756026] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 528.760890] CPU: 120 PID: 18275 Comm: hmm-tests Not tainted 5.19.0-rc3-kfd-alex #257
[ 528.769542] Hardware name: AMD Corporation BardPeak/BardPeak, BIOS RTY1002BDS 09/17/2021
[ 528.778579] RIP: 0010:try_to_migrate_one+0x21a/0x1000
[ 528.784225] Code: f6 48 89 c8 48 2b 05 45 d1 6a 01 48 c1 f8 06 48 29 c3 48 8b 45 a8 48 c1 e3 06 48 01 cb f6 41 18 01 48 89 85 50 ff ff ff 74 0b <4c> 8b 33 49 c1 ee 11 41 83 e6 01 48 8b bd 48 ff ff ff e8 3f 99 02
[ 528.805194] RSP: 0000:ffffc90003cdfaa0 EFLAGS: 00010202
[ 528.811027] RAX: 00007ffff7ff4000 RBX: ffffea1fffffffc0 RCX: ffffeaffffffffc0
[ 528.818995] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffc90003cdfaf8
[ 528.826962] RBP: ffffc90003cdfb70 R08: 0000000000000000 R09: 0000000000000000
[ 528.834930] R10: ffffc90003cdf910 R11: 0000000000000002 R12: ffff888194450540
[ 528.842899] R13: ffff888160d057c0 R14: 0000000000000000 R15: 03ffffffffffffff
[ 528.850865] FS:  00007ffff7fdb740(0000) GS:ffff8883b0600000(0000) knlGS:0000000000000000
[ 528.859891] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 528.866308] CR2: ffffea1fffffffc0 CR3: 00000001562b4003 CR4: 0000000000770ee0
[ 528.874275] PKRU: 55555554
[ 528.877286] Call Trace:
[ 528.880016]  <TASK>
[ 528.882356]  ? lock_is_held_type+0xdf/0x130
[ 528.887033]  rmap_walk_anon+0x167/0x410
[ 528.891316]  try_to_migrate+0x90/0xd0
[ 528.895405]  ? try_to_unmap_one+0xe10/0xe10
[ 528.900074]  ? anon_vma_ctor+0x50/0x50
[ 528.904260]  ? put_anon_vma+0x10/0x10
[ 528.908347]  ? invalid_mkclean_vma+0x20/0x20
[ 528.913114]  migrate_vma_setup+0x5f4/0x750
[ 528.917691]  dmirror_devmem_fault+0x8c/0x250 [test_hmm]
[ 528.923532]  do_swap_page+0xac0/0xe50
[ 528.927623]  ? __lock_acquire+0x4b2/0x1ac0
[ 528.932199]  __handle_mm_fault+0x949/0x1440
[ 528.936876]  handle_mm_fault+0x13f/0x3e0
[ 528.941256]  do_user_addr_fault+0x215/0x740
[ 528.945928]  exc_page_fault+0x75/0x280
[ 528.950115]  asm_exc_page_fault+0x27/0x30
[ 528.954593] RIP: 0033:0x40366b
...

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 6c28760 ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive")
Signed-off-by: David Hildenbrand <[email protected]>
Reported-by: "Sierra Guiza, Alejandro (Alex)" <[email protected]>
Reviewed-by: Alistair Popple <[email protected]>
Tested-by: Alistair Popple <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: "Matthew Wilcox (Oracle)" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 34c0a5b commit 1118234

File tree

1 file changed: +17 -10 lines changed

mm/rmap.c

Lines changed: 17 additions & 10 deletions
@@ -1899,8 +1899,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		/* Unexpected PMD-mapped THP? */
 		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
 
-		subpage = folio_page(folio,
-				pte_pfn(*pvmw.pte) - folio_pfn(folio));
+		if (folio_is_zone_device(folio)) {
+			/*
+			 * Our PTE is a non-present device exclusive entry and
+			 * calculating the subpage as for the common case would
+			 * result in an invalid pointer.
+			 *
+			 * Since only PAGE_SIZE pages can currently be
+			 * migrated, just set it to page. This will need to be
+			 * changed when hugepage migrations to device private
+			 * memory are supported.
+			 */
+			VM_BUG_ON_FOLIO(folio_nr_pages(folio) > 1, folio);
+			subpage = &folio->page;
+		} else {
+			subpage = folio_page(folio,
+					pte_pfn(*pvmw.pte) - folio_pfn(folio));
+		}
 		address = pvmw.address;
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
@@ -1993,15 +2008,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			/*
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
-			 *
-			 * The assignment to subpage above was computed from a
-			 * swap PTE which results in an invalid pointer.
-			 * Since only PAGE_SIZE pages can currently be
-			 * migrated, just set it to page. This will need to be
-			 * changed when hugepage migrations to device private
-			 * memory are supported.
 			 */
-			subpage = &folio->page;
 		} else if (PageHWPoison(subpage)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
