Commit 880a99b

aagitakpm00 authored and committed
mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()
Patch series "userfaultfd move option", v6.

This patch series introduces the UFFDIO_MOVE feature to userfaultfd. It has long been implemented and maintained by Andrea in his local tree [1], but was not upstreamed due to lack of use cases where this approach would be better than allocating a new page and copying the contents. Previous upstreaming attempts can be found at [6] and [7].

UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application needs pages to be allocated [2]. However, with UFFDIO_MOVE, if pages are available (in userspace) for recycling, as is usually the case in heap compaction algorithms, then we can avoid the page allocation and memcpy (done by UFFDIO_COPY). Also, since the pages are recycled in userspace, we avoid the need to release (via madvise) the pages back to the kernel [3].

We see over 40% reduction (on a Google Pixel 6 device) in the compacting thread's completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was measured using a benchmark that emulates a heap compaction implementation using userfaultfd (to allow concurrent accesses by application threads). More details of the use case are explained in [3].

Furthermore, UFFDIO_MOVE enables moving swapped-out pages without touching them within the same vma. Today, this can only be done by mremap, which forces splitting the vma.

TODOs for follow-up improvements:

- cross-mm support. Known differences from single-mm and missing pieces:
  - memcg recharging (might need to isolate pages in the process)
  - mm counters
  - cross-mm deposit table moves
  - cross-mm test
  - document the address space where src and dest reside in struct uffdio_move
- TLB flush batching. Will require extensive changes to PTL locking in move_pages_pte(). OTOH that might let us reuse parts of mremap code.

This patch (of 5):

For now, folio_move_anon_rmap() was only used to move a folio to a different anon_vma after fork(), whereby the root anon_vma stayed unchanged.
For that, it was sufficient to hold the folio lock when calling folio_move_anon_rmap().

However, we want to make use of folio_move_anon_rmap() to move folios between VMAs that have a different root anon_vma. As folio_referenced() performs an RMAP walk without holding the folio lock but only holding the anon_vma in read mode, holding the folio lock is insufficient. When moving to an anon_vma with a different root anon_vma, we'll have to hold both the folio lock and the anon_vma lock in write mode.

Consequently, whenever we succeed in folio_lock_anon_vma_read() in read-locking the anon_vma, we have to re-check whether the mapping was changed in the meantime. If it was, we have to retry.

Note that folio_move_anon_rmap() must only be called if the anon page is exclusive to a process, and must not be called on KSM folios.

This is a preparation for UFFDIO_MOVE, which will hold the folio lock, the anon_vma lock in write mode, and the mmap_lock in read mode.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
Acked-by: Peter Xu <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: [email protected]
Cc: Liam R. Howlett <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Nicolas Geoffray <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: ZhangPeng <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent fa399c3 commit 880a99b

1 file changed (+24, -0):

mm/rmap.c

Lines changed: 24 additions & 0 deletions
@@ -542,6 +542,7 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+retry:
 	rcu_read_lock();
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -552,6 +553,17 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	root_anon_vma = READ_ONCE(anon_vma->root);
 	if (down_read_trylock(&root_anon_vma->rwsem)) {
+		/*
+		 * folio_move_anon_rmap() might have changed the anon_vma as we
+		 * might not hold the folio lock here.
+		 */
+		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+			     anon_mapping)) {
+			up_read(&root_anon_vma->rwsem);
+			rcu_read_unlock();
+			goto retry;
+		}
+
 		/*
 		 * If the folio is still mapped, then this anon_vma is still
 		 * its anon_vma, and holding the mutex ensures that it will
@@ -586,6 +598,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/*
+	 * folio_move_anon_rmap() might have changed the anon_vma as we might
+	 * not hold the folio lock here.
+	 */
+	if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto retry;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock
