Commit f4e177d

wildea01 authored and torvalds committed
mm/migrate.c: stabilise page count when migrating transparent hugepages
When migrating a transparent hugepage, migrate_misplaced_transhuge_page guards itself against a concurrent fastgup of the page by checking that the page count is equal to 2 before and after installing the new pmd.

If the page count changes, the pmd is reverted to the original entry; however, there is a small window where the new (possibly writable) pmd is installed and the underlying page could be written by userspace. Restoring the old pmd could therefore result in loss of data.

This patch fixes the problem by freezing the page count whilst updating the page tables, which protects against a concurrent fastgup without the need to restore the old pmd in the failure case (since the page count can no longer change under our feet).

Link: http://lkml.kernel.org/r/1497349722-6731-4-git-send-email-will.deacon@arm.com
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
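For illustration, here is a minimal sketch of the freeze pattern the message describes, condensed from the patched migrate_misplaced_transhuge_page(); the "fail" label and the surrounding setup are illustrative stand-ins, not the kernel's exact code:

	/*
	 * Sketch of the refcount-freeze pattern. page_ref_freeze()
	 * atomically drops the count from the expected value to zero;
	 * a concurrent fastgup takes references with
	 * get_page_unless_zero()-style helpers, which fail on a zero
	 * count, so the count cannot change while the page tables are
	 * updated and no revert of the pmd is ever needed.
	 */
	ptl = pmd_lock(mm, pmd);
	if (unlikely(!pmd_same(*pmd, entry) || !page_ref_freeze(page, 2))) {
		spin_unlock(ptl);
		goto fail;			/* illustrative error path */
	}

	set_pmd_at(mm, mmun_start, pmd, entry);	/* install the new pmd */
	update_mmu_cache_pmd(vma, address, &entry);

	page_ref_unfreeze(page, 2);	/* page is referencable again */
	spin_unlock(ptl);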
1 parent 108a7ac · commit f4e177d

File tree

1 file changed: +2 −13 lines changed


mm/migrate.c

Lines changed: 2 additions & 13 deletions
@@ -1916,7 +1916,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	int page_lru = page_is_file_cache(page);
 	unsigned long mmun_start = address & HPAGE_PMD_MASK;
 	unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
-	pmd_t orig_entry;
 
 	/*
 	 * Rate-limit the amount of data that is being migrated to a node.
@@ -1959,8 +1958,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	/* Recheck the target PMD */
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
-fail_putback:
+	if (unlikely(!pmd_same(*pmd, entry) || !page_ref_freeze(page, 2))) {
 		spin_unlock(ptl);
 		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 
@@ -1982,7 +1980,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 		goto out_unlock;
 	}
 
-	orig_entry = *pmd;
 	entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -1999,15 +1996,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	set_pmd_at(mm, mmun_start, pmd, entry);
 	update_mmu_cache_pmd(vma, address, &entry);
 
-	if (page_count(page) != 2) {
-		set_pmd_at(mm, mmun_start, pmd, orig_entry);
-		flush_pmd_tlb_range(vma, mmun_start, mmun_end);
-		mmu_notifier_invalidate_range(mm, mmun_start, mmun_end);
-		update_mmu_cache_pmd(vma, address, &entry);
-		page_remove_rmap(new_page, true);
-		goto fail_putback;
-	}
-
+	page_ref_unfreeze(page, 2);
 	mlock_migrate_page(new_page, page);
 	page_remove_rmap(page, true);
 	set_page_owner_migrate_reason(new_page, MR_NUMA_MISPLACED);
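For reference, the freeze/unfreeze helpers used above behave roughly as follows (paraphrased from include/linux/page_ref.h of this era; an illustration of their semantics rather than the verbatim kernel source):

	static inline int page_ref_freeze(struct page *page, int count)
	{
		/* Succeed only if exactly 'count' references are held,
		 * atomically replacing them with a count of zero. */
		return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
	}

	static inline void page_ref_unfreeze(struct page *page, int count)
	{
		/* Restore the expected count, making the page
		 * referencable (and fastgup-able) again. */
		atomic_set(&page->_refcount, count);
	}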
