readahead: trigger mmap sequential readahead on PG_readahead
Previously, mmap sequential readahead was triggered by updating
ra->prev_pos on each page fault and comparing it with the current page
offset.

This dirties a cache line on each _minor_ page fault.  So remove the
ra->prev_pos recording and instead tag PG_readahead to trigger the
possible sequential readahead.  This is not only simpler, but also works
more reliably and reduces cache line bouncing on concurrent page faults
on a shared struct file.
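
As a rough sketch of the fault-side logic after this change, assuming
the shape of do_async_mmap_readahead() in kernels of this era (the
mmap_miss bookkeeping is elided, so treat this as illustrative rather
than the exact patched code):

static void do_async_mmap_readahead(struct vm_area_struct *vma,
				    struct file_ra_state *ra,
				    struct file *file,
				    struct page *page,
				    pgoff_t offset)
{
	struct address_space *mapping = file->f_mapping;

	/* If we don't want any readahead, don't bother. */
	if (VM_RandomReadHint(vma))
		return;
	/*
	 * No ra->prev_pos comparison here: only the PG_readahead tag
	 * left behind by a previous readahead pass triggers more I/O.
	 */
	if (PageReadahead(page))
		page_cache_async_readahead(mapping, ra, file,
					   page, offset, ra->ra_pages);
}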

In the mosbench exim benchmark, which performs multi-threaded page
faults on a shared struct file, the ra->mmap_miss and ra->prev_pos
updates were found to cause excessive cache line bouncing on tmpfs, a
filesystem on which readahead is disabled entirely to begin with
(shmem_backing_dev_info.ra_pages == 0).

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Tested-by: Tim Chen <tim.c.chen@intel.com>
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Wu Fengguang authored and torvalds committed May 25, 2011
1 parent 207d04b commit 2cbea1d
6 changes: 2 additions & 4 deletions mm/filemap.c
@@ -1559,8 +1559,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	if (!ra->ra_pages)
 		return;
 
-	if (VM_SequentialReadHint(vma) ||
-			offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
+	if (VM_SequentialReadHint(vma)) {
 		page_cache_sync_readahead(mapping, ra, file, offset,
 					  ra->ra_pages);
 		return;
@@ -1583,7 +1582,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	ra_pages = max_sane_readahead(ra->ra_pages);
 	ra->start = max_t(long, 0, offset - ra_pages / 2);
 	ra->size = ra_pages;
-	ra->async_size = 0;
+	ra->async_size = ra_pages / 4;
 	ra_submit(ra, mapping, file);
 }
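
For context, and not part of this patch: ra_submit() passes
ra->async_size down as the lookahead size, and __do_page_cache_readahead()
plants the PG_readahead tag that many pages before the end of the window.
With async_size = ra_pages / 4 instead of 0, a sequential reader now
faults on the marker a quarter-window before the window is exhausted and
keeps the pipeline going.  A simplified sketch of the era's
mm/readahead.c allocation loop (the page-cache-hit checks are elided):

	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
		pgoff_t page_offset = offset + page_idx;

		page = page_cache_alloc_cold(mapping);
		if (!page)
			break;
		page->index = page_offset;
		list_add(&page->lru, &page_pool);
		if (page_idx == nr_to_read - lookahead_size)
			SetPageReadahead(page);	/* the fault-path trigger */
	}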

@@ -1689,7 +1688,6 @@ int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		return VM_FAULT_SIGBUS;
 	}
 
-	ra->prev_pos = (loff_t)offset << PAGE_CACHE_SHIFT;
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;
