
Commit a1b92a3

musamaanjumakpm00 authored and committed
mm/userfaultfd: support WP on multiple VMAs
mwriteprotect_range() errors out if [start, end) doesn't fall in one VMA. We are facing a use case where multiple VMAs are present in one range of interest. For example, the following pseudocode reproduces the error which we are trying to fix:

- Allocate memory of size 16 pages with PROT_NONE with mmap
- Register userfaultfd
- Change protection of the first half (1 to 8 pages) of memory to PROT_READ | PROT_WRITE. This breaks the memory area into two VMAs.
- Now UFFDIO_WRITEPROTECT_MODE_WP on the whole memory of 16 pages errors out.

This is a simple use case where the user may or may not know if the memory area has been divided into multiple VMAs. We need an implementation which doesn't disrupt the already present users. So, keeping things simple, stop going over all the VMAs if any one of them hasn't been registered in WP mode. While at it, remove the unneeded error check as well.

[[email protected]: s/VM_WARN_ON_ONCE/VM_WARN_ONCE/ to fix build]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muhammad Usama Anjum <[email protected]>
Acked-by: Peter Xu <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Reported-by: Paul Gofman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 700d2e9 commit a1b92a3

File tree

1 file changed (+24, -17 lines)

mm/userfaultfd.c

Lines changed: 24 additions & 17 deletions
@@ -717,6 +717,8 @@ long uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
 	struct mmu_gather tlb;
 	long ret;
 
+	VM_WARN_ONCE(start < dst_vma->vm_start || start + len > dst_vma->vm_end,
+			"The address range exceeds VMA boundary.\n");
 	if (enable_wp)
 		mm_cp_flags = MM_CP_UFFD_WP;
 	else
@@ -741,9 +743,12 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			unsigned long len, bool enable_wp,
 			atomic_t *mmap_changing)
 {
+	unsigned long end = start + len;
+	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
 	unsigned long page_mask;
 	long err;
+	VMA_ITERATOR(vmi, dst_mm, start);
 
 	/*
 	 * Sanitize the command parameters:
@@ -766,28 +771,30 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		goto out_unlock;
 
 	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, start, len);
+	for_each_vma_range(vmi, dst_vma, end) {
 
-	if (!dst_vma)
-		goto out_unlock;
-	if (!userfaultfd_wp(dst_vma))
-		goto out_unlock;
-	if (!vma_can_userfault(dst_vma, dst_vma->vm_flags))
-		goto out_unlock;
+		if (!userfaultfd_wp(dst_vma)) {
+			err = -ENOENT;
+			break;
+		}
 
-	if (is_vm_hugetlb_page(dst_vma)) {
-		err = -EINVAL;
-		page_mask = vma_kernel_pagesize(dst_vma) - 1;
-		if ((start & page_mask) || (len & page_mask))
-			goto out_unlock;
-	}
+		if (is_vm_hugetlb_page(dst_vma)) {
+			err = -EINVAL;
+			page_mask = vma_kernel_pagesize(dst_vma) - 1;
+			if ((start & page_mask) || (len & page_mask))
+				break;
+		}
 
-	err = uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+		_start = max(dst_vma->vm_start, start);
+		_end = min(dst_vma->vm_end, end);
 
-	/* Return 0 on success, <0 on failures */
-	if (err > 0)
-		err = 0;
+		err = uffd_wp_range(dst_mm, dst_vma, _start, _end - _start, enable_wp);
 
+		/* Return 0 on success, <0 on failures */
+		if (err < 0)
+			break;
+		err = 0;
+	}
 out_unlock:
 	mmap_read_unlock(dst_mm);
 	return err;
