Commit 357b927

Kiryl Shutsemau authored and akpm00 committed
mm/filemap: map entire large folio faultaround
Currently, the kernel only maps the part of a large folio that fits into the start_pgoff/end_pgoff range. Map the entire folio where possible instead. This matches the finish_fault() behaviour that userspace hits on a cold page cache.

Mapping large folios at once will allow the rmap code to mlock them on add, as it will recognize that the folio is fully mapped and that mlocking is safe.

Link: https://lkml.kernel.org/r/20250923110711.690639-6-kirill@shutemov.name
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
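The key invariant behind "must not cross a page table boundary" is that one PMD entry maps one last-level page table, so if the folio's first and last byte land in the same PMD-sized region, every PTE for the folio lives in the page table that vmf->pte already points into. Below is a minimal userspace-style sketch of that test; the helper name folio_fits_one_pmd() and the 4 KiB page / 2 MiB PMD constants are illustrative assumptions (x86-64 defaults), not kernel API.

	#include <stdbool.h>
	#include <stdio.h>

	/* Userspace stand-ins for kernel constants; x86-64 defaults assumed. */
	#define PAGE_SIZE	4096UL
	#define PMD_SIZE	(512 * PAGE_SIZE)	/* one PMD maps 2 MiB */
	#define PMD_MASK	(~(PMD_SIZE - 1))

	/*
	 * Hypothetical helper mirroring the patch's condition: true if a folio
	 * of 'size' bytes whose first subpage sits at virtual address 'addr0'
	 * is covered by a single PMD, i.e. a single page table.
	 */
	static bool folio_fits_one_pmd(unsigned long addr0, unsigned long size)
	{
		return (addr0 & PMD_MASK) == ((addr0 + size - 1) & PMD_MASK);
	}

	int main(void)
	{
		/* 64 KiB folio starting exactly at a 2 MiB boundary: fits. */
		printf("%d\n", folio_fits_one_pmd(0x200000UL, 0x10000UL)); /* 1 */
		/* The same folio shifted back one page straddles the boundary. */
		printf("%d\n", folio_fits_one_pmd(0x1ff000UL, 0x10000UL)); /* 0 */
		return 0;
	}

In the patch, the same comparison runs on addr0 = addr - start * PAGE_SIZE (the virtual address where the folio's first subpage would map) and is combined with folio_within_vma(), so the widened mapping also stays inside the VMA.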
1 parent 19773df commit 357b927


mm/filemap.c

Lines changed: 15 additions & 0 deletions
@@ -3670,6 +3670,21 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct page *page = folio_page(folio, start);
 	unsigned int count = 0;
 	pte_t *old_ptep = vmf->pte;
+	unsigned long addr0;
+
+	/*
+	 * Map the large folio fully where possible.
+	 *
+	 * The folio must not cross VMA or page table boundary.
+	 */
+	addr0 = addr - start * PAGE_SIZE;
+	if (folio_within_vma(folio, vmf->vma) &&
+	    (addr0 & PMD_MASK) == ((addr0 + folio_size(folio) - 1) & PMD_MASK)) {
+		vmf->pte -= start;
+		page -= start;
+		addr = addr0;
+		nr_pages = folio_nr_pages(folio);
+	}
 
 	do {
 		if (PageHWPoison(page + count))
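When the check passes, the hunk rewinds the PTE pointer, the struct page pointer, and the address from the faulting subpage back to the folio head, and widens nr_pages so the existing mapping loop installs PTEs for every subpage. A hedged walk-through with made-up values (4 KiB pages, a 16-page folio, fault on subpage 3), not code from the patch:

	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	int main(void)
	{
		unsigned long addr = 0x203000UL; /* faulting address (subpage 3) */
		unsigned long start = 3;         /* subpage index within folio   */
		unsigned long nr_pages;

		/* Back up from the faulting subpage to the folio's first subpage. */
		unsigned long addr0 = addr - start * PAGE_SIZE; /* 0x200000 */

		/* Mirrors: vmf->pte -= start; page -= start; addr = addr0; */
		addr = addr0;
		nr_pages = 16; /* mirrors nr_pages = folio_nr_pages(folio) */

		printf("map %lu pages starting at 0x%lx\n", nr_pages, addr);
		/* -> map 16 pages starting at 0x200000 */
		return 0;
	}

Without this change the loop would have started at the faulting subpage and mapped only the pages that fit the start_pgoff/end_pgoff window; with it, the whole folio is installed in one pass, which is what lets rmap see a fully mapped folio.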
