Commit adef440

aagit authored and akpm00 committed
userfaultfd: UFFDIO_MOVE uABI
Implement the uABI of the UFFDIO_MOVE ioctl.

UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
available (in userspace) for recycling, as is usually the case in heap
compaction algorithms, then we can avoid the page allocation and memcpy
(done by UFFDIO_COPY). Also, since the pages are recycled in userspace,
we avoid the need to release (via madvise) the pages back to the kernel
[2].

We see over 40% reduction (on a Google Pixel 6 device) in the compacting
thread's completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
measured using a benchmark that emulates a heap compaction implementation
using userfaultfd (to allow concurrent accesses by application threads).
More details of the usecase are explained in [2].

Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
touching them within the same vma. Today this can only be done by mremap,
which forces splitting the vma.

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/

Update for the ioctl_userfaultfd(2) manpage:

   UFFDIO_MOVE
       (Since Linux xxx)  Move a continuous memory chunk into the
       userfault registered range and optionally wake up the blocked
       thread.

       The source and destination addresses and the number of bytes to
       move are specified by the src, dst, and len fields of the
       uffdio_move structure pointed to by argp:

           struct uffdio_move {
               __u64 dst;    /* Destination of move */
               __u64 src;    /* Source of move */
               __u64 len;    /* Number of bytes to move */
               __u64 mode;   /* Flags controlling behavior of move */
               __s64 move;   /* Number of bytes moved, or negated error */
           };

       The following value may be bitwise ORed in mode to change the
       behavior of the UFFDIO_MOVE operation:

       UFFDIO_MOVE_MODE_DONTWAKE
              Do not wake up the thread that waits for page-fault
              resolution.

       UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
              Allow holes in the source virtual range that is being
              moved. When not specified, the holes will result in an
              ENOENT error. When specified, the holes will be accounted
              as successfully moved memory. This is mostly useful for
              moving hugepage-aligned virtual regions without knowing
              whether there are transparent hugepages in the regions or
              not, while avoiding the risk of having to split a hugepage
              during the operation.

       The move field is used by the kernel to return the number of
       bytes that was actually moved, or an error (a negated errno-style
       value). If the value returned in move doesn't match the value
       that was specified in len, the operation fails with the error
       EAGAIN. The move field is output-only; it is not read by the
       UFFDIO_MOVE operation.

       The operation may fail for various reasons. Usually, remapping of
       pages that are not exclusive to the given process fails; once KSM
       might deduplicate pages or fork() COW-shares pages with child
       processes, they are no longer exclusive. Further, the kernel
       might only perform lightweight checks for detecting whether the
       pages are exclusive, and return -EBUSY in case that check fails.
       To make the operation more likely to succeed, KSM should be
       disabled, fork() should be avoided or MADV_DONTFORK should be
       configured for the source VMA before fork().

       This ioctl(2) operation returns 0 on success. In this case, the
       entire area was moved. On error, -1 is returned and errno is set
       to indicate the error. Possible errors include:

       EAGAIN The number of bytes moved (i.e., the value returned in the
              move field) does not equal the value that was specified in
              the len field.

       EINVAL Either dst or len was not a multiple of the system page
              size, or the range specified by src and len or dst and
              len was invalid.

       EINVAL An invalid bit was specified in the mode field.

       ENOENT The source virtual memory range has unmapped holes and
              UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.

       EEXIST The destination virtual memory range is fully or partially
              mapped.

       EBUSY  The pages in the source virtual memory range are either
              pinned or not exclusive to the process. The kernel might
              only perform lightweight checks for detecting whether the
              pages are exclusive. To make the operation more likely to
              succeed, KSM should be disabled, fork() should be avoided
              or MADV_DONTFORK should be configured for the source
              virtual memory area before fork().

       ENOMEM Allocating memory needed for the operation failed.

       ESRCH  The target process has exited at the time of a UFFDIO_MOVE
              operation.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Liam R. Howlett <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Nicolas Geoffray <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: ZhangPeng <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 880a99b commit adef440

9 files changed: +864 -1 lines changed

Documentation/admin-guide/mm/userfaultfd.rst

Lines changed: 3 additions & 0 deletions
@@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
   areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
   support for shmem virtual memory areas.
 
+- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an
+  existing page contents from userspace.
+
 The userland application should set the feature flags it intends to use
 when invoking the ``UFFDIO_API`` ioctl, to request that those features be
 enabled if supported.

fs/userfaultfd.c

Lines changed: 72 additions & 0 deletions
@@ -2005,6 +2005,75 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
 	return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
 }
 
+static int userfaultfd_move(struct userfaultfd_ctx *ctx,
+			    unsigned long arg)
+{
+	__s64 ret;
+	struct uffdio_move uffdio_move;
+	struct uffdio_move __user *user_uffdio_move;
+	struct userfaultfd_wake_range range;
+	struct mm_struct *mm = ctx->mm;
+
+	user_uffdio_move = (struct uffdio_move __user *) arg;
+
+	if (atomic_read(&ctx->mmap_changing))
+		return -EAGAIN;
+
+	if (copy_from_user(&uffdio_move, user_uffdio_move,
+			   /* don't copy "move" last field */
+			   sizeof(uffdio_move)-sizeof(__s64)))
+		return -EFAULT;
+
+	/* Do not allow cross-mm moves. */
+	if (mm != current->mm)
+		return -EINVAL;
+
+	ret = validate_range(mm, uffdio_move.dst, uffdio_move.len);
+	if (ret)
+		return ret;
+
+	ret = validate_range(mm, uffdio_move.src, uffdio_move.len);
+	if (ret)
+		return ret;
+
+	if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
+				 UFFDIO_MOVE_MODE_DONTWAKE))
+		return -EINVAL;
+
+	if (mmget_not_zero(mm)) {
+		mmap_read_lock(mm);
+
+		/* Re-check after taking mmap_lock */
+		if (likely(!atomic_read(&ctx->mmap_changing)))
+			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
+					 uffdio_move.len, uffdio_move.mode);
+		else
+			ret = -EINVAL;
+
+		mmap_read_unlock(mm);
+		mmput(mm);
+	} else {
+		return -ESRCH;
+	}
+
+	if (unlikely(put_user(ret, &user_uffdio_move->move)))
+		return -EFAULT;
+	if (ret < 0)
+		goto out;
+
+	/* len == 0 would wake all */
+	VM_WARN_ON(!ret);
+	range.len = ret;
+	if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
+		range.start = uffdio_move.dst;
+		wake_userfault(ctx, &range);
+	}
+	ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
+
+out:
+	return ret;
+}
+
 /*
  * userland asks for a certain API version and we return which bits
  * and ioctl commands are implemented in this kernel for such API

@@ -2097,6 +2166,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
 	case UFFDIO_ZEROPAGE:
 		ret = userfaultfd_zeropage(ctx, arg);
 		break;
+	case UFFDIO_MOVE:
+		ret = userfaultfd_move(ctx, arg);
+		break;
 	case UFFDIO_WRITEPROTECT:
 		ret = userfaultfd_writeprotect(ctx, arg);
 		break;

include/linux/rmap.h

Lines changed: 5 additions & 0 deletions
@@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
 	down_write(&anon_vma->root->rwsem);
 }
 
+static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
+{
+	return down_write_trylock(&anon_vma->root->rwsem);
+}
+
 static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
 {
 	up_write(&anon_vma->root->rwsem);

include/linux/userfaultfd_k.h

Lines changed: 11 additions & 0 deletions
@@ -93,6 +93,17 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
+/* move_pages */
+void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
+void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
+ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
+		   unsigned long dst_start, unsigned long src_start,
+		   unsigned long len, __u64 flags);
+int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+			struct vm_area_struct *dst_vma,
+			struct vm_area_struct *src_vma,
+			unsigned long dst_addr, unsigned long src_addr);
+
 /* mm helpers */
 static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
 						   struct vm_userfaultfd_ctx vm_ctx)

include/uapi/linux/userfaultfd.h

Lines changed: 28 additions & 1 deletion
@@ -41,7 +41,8 @@
 			   UFFD_FEATURE_WP_HUGETLBFS_SHMEM |	\
 			   UFFD_FEATURE_WP_UNPOPULATED |	\
 			   UFFD_FEATURE_POISON |		\
-			   UFFD_FEATURE_WP_ASYNC)
+			   UFFD_FEATURE_WP_ASYNC |		\
+			   UFFD_FEATURE_MOVE)
 #define UFFD_API_IOCTLS				\
 	((__u64)1 << _UFFDIO_REGISTER |		\
 	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -50,6 +51,7 @@
 	((__u64)1 << _UFFDIO_WAKE |		\
 	 (__u64)1 << _UFFDIO_COPY |		\
 	 (__u64)1 << _UFFDIO_ZEROPAGE |		\
+	 (__u64)1 << _UFFDIO_MOVE |		\
 	 (__u64)1 << _UFFDIO_WRITEPROTECT |	\
 	 (__u64)1 << _UFFDIO_CONTINUE |		\
 	 (__u64)1 << _UFFDIO_POISON)
@@ -73,6 +75,7 @@
 #define _UFFDIO_WAKE			(0x02)
 #define _UFFDIO_COPY			(0x03)
 #define _UFFDIO_ZEROPAGE		(0x04)
+#define _UFFDIO_MOVE			(0x05)
 #define _UFFDIO_WRITEPROTECT		(0x06)
 #define _UFFDIO_CONTINUE		(0x07)
 #define _UFFDIO_POISON			(0x08)
@@ -92,6 +95,8 @@
 				      struct uffdio_copy)
 #define UFFDIO_ZEROPAGE		_IOWR(UFFDIO, _UFFDIO_ZEROPAGE,	\
 				      struct uffdio_zeropage)
+#define UFFDIO_MOVE		_IOWR(UFFDIO, _UFFDIO_MOVE,	\
+				      struct uffdio_move)
 #define UFFDIO_WRITEPROTECT	_IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
 				      struct uffdio_writeprotect)
 #define UFFDIO_CONTINUE		_IOWR(UFFDIO, _UFFDIO_CONTINUE,	\
@@ -222,6 +227,9 @@ struct uffdio_api {
 	 * asynchronous mode is supported in which the write fault is
 	 * automatically resolved and write-protection is un-set.
 	 * It implies UFFD_FEATURE_WP_UNPOPULATED.
+	 *
+	 * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
+	 * existing page contents from userspace.
 	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK			(1<<1)
@@ -239,6 +247,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_WP_UNPOPULATED		(1<<13)
 #define UFFD_FEATURE_POISON			(1<<14)
 #define UFFD_FEATURE_WP_ASYNC			(1<<15)
+#define UFFD_FEATURE_MOVE			(1<<16)
 	__u64 features;
 
 	__u64 ioctls;
@@ -347,6 +356,24 @@ struct uffdio_poison {
 	__s64 updated;
 };
 
+struct uffdio_move {
+	__u64 dst;
+	__u64 src;
+	__u64 len;
+	/*
+	 * Especially if used to atomically remove memory from the
+	 * address space the wake on the dst range is not needed.
+	 */
+#define UFFDIO_MOVE_MODE_DONTWAKE		((__u64)1<<0)
+#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES	((__u64)1<<1)
+	__u64 mode;
+	/*
+	 * "move" is written by the ioctl and must be at the end: the
+	 * copy_from_user will not read the last 8 bytes.
+	 */
+	__s64 move;
+};
+
 /*
  * Flags for the userfaultfd(2) system call itself.
  */

mm/huge_memory.c

Lines changed: 122 additions & 0 deletions
@@ -2141,6 +2141,128 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	return ret;
 }
 
+#ifdef CONFIG_USERFAULTFD
+/*
+ * The PT lock for src_pmd and the mmap_lock for reading are held by
+ * the caller, but it must return after releasing the page_table_lock.
+ * Just move the page from src_pmd to dst_pmd if possible.
+ * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
+ * repeated by the caller, or other errors in case of failure.
+ */
+int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+			struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+			unsigned long dst_addr, unsigned long src_addr)
+{
+	pmd_t _dst_pmd, src_pmdval;
+	struct page *src_page;
+	struct folio *src_folio;
+	struct anon_vma *src_anon_vma;
+	spinlock_t *src_ptl, *dst_ptl;
+	pgtable_t src_pgtable;
+	struct mmu_notifier_range range;
+	int err = 0;
+
+	src_pmdval = *src_pmd;
+	src_ptl = pmd_lockptr(mm, src_pmd);
+
+	lockdep_assert_held(src_ptl);
+	mmap_assert_locked(mm);
+
+	/* Sanity checks before the operation */
+	if (WARN_ON_ONCE(!pmd_none(dst_pmdval)) || WARN_ON_ONCE(src_addr & ~HPAGE_PMD_MASK) ||
+	    WARN_ON_ONCE(dst_addr & ~HPAGE_PMD_MASK)) {
+		spin_unlock(src_ptl);
+		return -EINVAL;
+	}
+
+	if (!pmd_trans_huge(src_pmdval)) {
+		spin_unlock(src_ptl);
+		if (is_pmd_migration_entry(src_pmdval)) {
+			pmd_migration_entry_wait(mm, &src_pmdval);
+			return -EAGAIN;
+		}
+		return -ENOENT;
+	}
+
+	src_page = pmd_page(src_pmdval);
+	if (unlikely(!PageAnonExclusive(src_page))) {
+		spin_unlock(src_ptl);
+		return -EBUSY;
+	}
+
+	src_folio = page_folio(src_page);
+	folio_get(src_folio);
+	spin_unlock(src_ptl);
+
+	flush_cache_range(src_vma, src_addr, src_addr + HPAGE_PMD_SIZE);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, src_addr,
+				src_addr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
+
+	folio_lock(src_folio);
+
+	/*
+	 * split_huge_page walks the anon_vma chain without the page
+	 * lock. Serialize against it with the anon_vma lock, the page
+	 * lock is not enough.
+	 */
+	src_anon_vma = folio_get_anon_vma(src_folio);
+	if (!src_anon_vma) {
+		err = -EAGAIN;
+		goto unlock_folio;
+	}
+	anon_vma_lock_write(src_anon_vma);
+
+	dst_ptl = pmd_lockptr(mm, dst_pmd);
+	double_pt_lock(src_ptl, dst_ptl);
+	if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
+		     !pmd_same(*dst_pmd, dst_pmdval))) {
+		err = -EAGAIN;
+		goto unlock_ptls;
+	}
+	if (folio_maybe_dma_pinned(src_folio) ||
+	    !PageAnonExclusive(&src_folio->page)) {
+		err = -EBUSY;
+		goto unlock_ptls;
+	}
+
+	if (WARN_ON_ONCE(!folio_test_head(src_folio)) ||
+	    WARN_ON_ONCE(!folio_test_anon(src_folio))) {
+		err = -EBUSY;
+		goto unlock_ptls;
+	}
+
+	folio_move_anon_rmap(src_folio, dst_vma);
+	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
+	src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
+	/* Folio got pinned from under us. Put it back and fail the move. */
+	if (folio_maybe_dma_pinned(src_folio)) {
+		set_pmd_at(mm, src_addr, src_pmd, src_pmdval);
+		err = -EBUSY;
+		goto unlock_ptls;
+	}
+
+	_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+	/* Follow mremap() behavior and treat the entry dirty after the move */
+	_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
+	set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);
+
+	src_pgtable = pgtable_trans_huge_withdraw(mm, src_pmd);
+	pgtable_trans_huge_deposit(mm, dst_pmd, src_pgtable);
+unlock_ptls:
+	double_pt_unlock(src_ptl, dst_ptl);
+	anon_vma_unlock_write(src_anon_vma);
+	put_anon_vma(src_anon_vma);
+unlock_folio:
+	/* unblock rmap walks */
+	folio_unlock(src_folio);
+	mmu_notifier_invalidate_range_end(&range);
+	folio_put(src_folio);
+	return err;
+}
+#endif /* CONFIG_USERFAULTFD */
+
 /*
  * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
 *

mm/khugepaged.c

Lines changed: 3 additions & 0 deletions
@@ -1140,6 +1140,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * Prevent all access to pagetables with the exception of
 	 * gup_fast later handled by the ptep_clear_flush and the VM
 	 * handled by the anon_vma lock + PG_lock.
+	 *
+	 * UFFDIO_MOVE is prevented to race as well thanks to the
+	 * mmap_lock.
 	 */
 	mmap_write_lock(mm);
 	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);

mm/rmap.c

Lines changed: 6 additions & 0 deletions
@@ -490,6 +490,12 @@ void __init anon_vma_init(void)
  * page_remove_rmap() that the anon_vma pointer from page->mapping is valid
  * if there is a mapcount, we can dereference the anon_vma after observing
  * those.
+ *
+ * NOTE: the caller should normally hold folio lock when calling this. If
+ * not, the caller needs to double check the anon_vma didn't change after
+ * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
+ * concurrently without folio lock protection). See folio_lock_anon_vma_read()
+ * which has already covered that, and comment above remap_pages().
  */
 struct anon_vma *folio_get_anon_vma(struct folio *folio)
 {
