
Commit 8cf8864

Matthew Wilcox (Oracle) authored and torvalds committed
proc: optimise smaps for shmem entries
Avoid bumping the refcount on pages when we're only interested in the swap entries.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Link: https://lkml.kernel.org/r/20200910183318.20139-5-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent e6e8871 commit 8cf8864

File tree: 1 file changed (+1, -7 lines)


fs/proc/task_mmu.c

Lines changed: 1 addition & 7 deletions
@@ -520,16 +520,10 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			page = device_private_entry_to_page(swpent);
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
-		page = find_get_entry(vma->vm_file->f_mapping,
+		page = xa_load(&vma->vm_file->f_mapping->i_pages,
 						linear_page_index(vma, addr));
-		if (!page)
-			return;
-
 		if (xa_is_value(page))
 			mss->swap += PAGE_SIZE;
-		else
-			put_page(page);
-
 		return;
 	}
 
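For context: find_get_entry() pins any real page it returns, which is why the old code needed the put_page() on the non-value branch, whereas xa_load() hands back the raw XArray entry with no reference taken. Swap entries sit in the page cache as tagged "value entries" (odd pointers), so xa_is_value() can classify the entry without dereferencing it. Below is a minimal userspace sketch, not kernel code, that mirrors the xa_mk_value()/xa_is_value()/xa_to_value() tagging convention from include/linux/xarray.h; the helper names and the main() demo are illustrative only.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative re-implementations of the XArray value-entry helpers. */
static void *mk_value(unsigned long v)
{
	return (void *)((v << 1) | 1);	/* value entries are odd pointers */
}

static bool is_value(const void *entry)
{
	return (unsigned long)entry & 1;
}

static unsigned long to_value(const void *entry)
{
	return (unsigned long)entry >> 1;
}

int main(void)
{
	int backing;				/* stands in for a struct page */
	void *real_page = &backing;		/* aligned pointer: even, not a value entry */
	void *swap_entry = mk_value(0x1234);	/* tagged payload: odd pointer */

	printf("real page  is value? %d\n", is_value(real_page));	/* 0 */
	printf("swap entry is value? %d\n", is_value(swap_entry));	/* 1 */
	printf("swap entry payload:  0x%lx\n", to_value(swap_entry));	/* 0x1234 */
	return 0;
}

Because the tag lives in the low bit of the entry itself, the smaps path can account the swap entry and return without ever touching a struct page or its refcount.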
