mm, memcontrol: move swap charge handling into get_swap_page()

Patch series "mm, memcontrol: Implement memory.swap.events", v2.

This patchset implements memory.swap.events, which contains max and fail
events, so that userland can monitor and respond to swap running out.

This patch (of 2):

get_swap_page() is always followed by mem_cgroup_try_charge_swap().
This patch moves mem_cgroup_try_charge_swap() into get_swap_page() and
makes get_swap_page() call the function even after swap allocation
failure.

This simplifies the callers, consolidates the memcg-related logic, and
will make it easier to add swap-related memcg events.

Link: http://lkml.kernel.org/r/20180416230934.GH1911913@devbig577.frc2.facebook.com
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
htejun authored and torvalds committed Jun 8, 2018
1 parent 88aa7cc commit bb98f2c
Showing 4 changed files with 10 additions and 10 deletions.
mm/memcontrol.c: 3 additions, 0 deletions

@@ -6012,6 +6012,9 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
         if (!memcg)
                 return 0;
 
+        if (!entry.val)
+                return 0;
+
         memcg = mem_cgroup_id_get_online(memcg);
 
         if (!mem_cgroup_is_root(memcg) &&
mm/shmem.c: 0 additions, 4 deletions

@@ -1322,9 +1322,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
         if (!swap.val)
                 goto redirty;
 
-        if (mem_cgroup_try_charge_swap(page, swap))
-                goto free_swap;
-
         /*
          * Add inode to shmem_unuse()'s list of swapped-out inodes,
          * if it's not already there. Do it now before the page is
@@ -1353,7 +1350,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
         }
 
         mutex_unlock(&shmem_swaplist_mutex);
-free_swap:
         put_swap_page(page, swap);
 redirty:
         set_page_dirty(page);
mm/swap_slots.c: 7 additions, 3 deletions

@@ -317,7 +317,7 @@ swp_entry_t get_swap_page(struct page *page)
         if (PageTransHuge(page)) {
                 if (IS_ENABLED(CONFIG_THP_SWAP))
                         get_swap_pages(1, true, &entry);
-                return entry;
+                goto out;
         }
 
         /*
@@ -347,10 +347,14 @@ swp_entry_t get_swap_page(struct page *page)
                 }
                 mutex_unlock(&cache->alloc_lock);
                 if (entry.val)
-                        return entry;
+                        goto out;
         }
 
         get_swap_pages(1, false, &entry);
-
+out:
+        if (mem_cgroup_try_charge_swap(page, entry)) {
+                put_swap_page(page, entry);
+                entry.val = 0;
+        }
         return entry;
 }
mm/swap_state.c: 0 additions, 3 deletions

@@ -216,9 +216,6 @@ int add_to_swap(struct page *page)
         if (!entry.val)
                 return 0;
 
-        if (mem_cgroup_try_charge_swap(page, entry))
-                goto fail;
-
         /*
          * Radix-tree node allocations from PF_MEMALLOC contexts could
          * completely exhaust the page allocator. __GFP_NOMEMALLOC
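
As an aside, the pattern is easy to model outside the kernel. Below is a minimal user-space C sketch (all names here are hypothetical, not kernel APIs) in which the allocation routine also performs the charge and rolls the allocation back when charging fails, so every caller is left with a single zero-value check, mirroring what get_swap_page() now does with mem_cgroup_try_charge_swap().

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel pieces; all names are hypothetical. */
static unsigned long next_slot = 1;
static long charge_budget = 2;             /* pretend per-cgroup swap limit */

static unsigned long alloc_slot(void)      /* ~ raw swap slot allocation */
{
        return next_slot++;
}

static void free_slot(unsigned long slot)  /* ~ put_swap_page(): undo allocation */
{
        printf("rolled back slot %lu\n", slot);
}

static bool try_charge(unsigned long slot) /* ~ mem_cgroup_try_charge_swap() */
{
        if (!slot)
                return true;               /* nothing allocated: trivially succeeds */
        if (charge_budget <= 0)
                return false;              /* over the limit: charge fails */
        charge_budget--;
        return true;
}

/*
 * ~ get_swap_page() after the patch: allocate, then charge inside the same
 * helper; on charge failure roll back and report failure as a zero value.
 */
static unsigned long get_slot(void)
{
        unsigned long slot = alloc_slot();

        if (!try_charge(slot)) {
                free_slot(slot);
                slot = 0;
        }
        return slot;
}

int main(void)
{
        for (int i = 0; i < 4; i++) {
                unsigned long slot = get_slot();

                /* Callers are left with the single "did I get one?" check. */
                if (!slot)
                        printf("attempt %d: failed (allocation or charge)\n", i);
                else
                        printf("attempt %d: got slot %lu\n", i, slot);
        }
        return 0;
}

The design choice in the sketch is the same one the patch makes: because the charge and its rollback live inside the allocator, the existing !entry.val checks in shmem_writepage() and add_to_swap() cover both allocation failure and charge failure, which is what allows their explicit mem_cgroup_try_charge_swap() calls to be deleted.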
