
Commit 7d7ef0a

yosrym93 authored and akpm00 committed
mm: memcg: restore subtree stats flushing
Stats flushing for memcg currently follows these rules:
- Always flush the entire memcg hierarchy (i.e. flush the root).
- Only one flusher is allowed at a time. If someone else tries to flush concurrently, they skip and return immediately.
- A periodic flusher flushes all the stats every 2 seconds.

This approach is followed because all flushes are serialized by a global rstat spinlock. On the memcg side, flushing is invoked from userspace reads as well as in-kernel flushers (e.g. reclaim, refault, etc). The approach aims to avoid serializing all flushers on the global lock, which can cause a significant performance hit under high concurrency.

This approach has the following problems:
- Occasionally a userspace read of the stats of a non-root cgroup will be too expensive, as it has to flush the entire hierarchy [1].
- Sometimes stats accuracy is compromised if there is an ongoing flush and we skip and return before the subtree of interest is actually flushed, yielding stale stats (by up to 2s due to periodic flushing). This is more visible when reading stats from userspace, but can also affect in-kernel flushers.

The latter problem is particularly a concern when userspace reads stats after an event occurs, but gets stats from before the event. Examples:
- When memory usage / pressure spikes, a userspace OOM handler may look at the stats of different memcgs to select a victim based on various heuristics (e.g. how much private memory will be freed by killing this memcg). Reading stale stats from before the usage spike in this case may cause a wrongful OOM kill.
- A proactive reclaimer may read the stats after writing to memory.reclaim to measure the success of the reclaim operation. Stale stats from before reclaim may give a false negative.
- Reading the stats of a parent and a child memcg may be inconsistent (child larger than parent) if the flush doesn't happen when the parent is read, but does happen when the child is read.

As for in-kernel flushers, they will occasionally get stale stats. No regressions are currently known from this, but if any arise, they would be very difficult to debug and link to the source of the problem.

This patch aims to fix these problems by restoring subtree flushing and removing the unified/coalesced flushing logic that skips flushing if there is an ongoing flush. This change would introduce a significant regression with global stats flushing thresholds. With per-memcg stats flushing thresholds, it performs well. The thresholds protect the underlying lock from unnecessary contention.

This patch was tested in two ways to ensure the latency of flushing is up to par, on a machine with 384 CPUs:
- A synthetic test with 5000 concurrent workers in 500 cgroups doing allocations and reclaim, as well as 1000 readers for memory.stat (variation of [2]). No regressions were noticed in the total runtime. Note that significant regressions in this test are observed with global stats thresholds, but not with per-memcg thresholds.
- A synthetic stress test for concurrently reading memcg stats while memory allocation/freeing workers are running in the background, provided by Wei Xu [3]. With 250k threads reading the stats every 100ms in 50k cgroups, 99.9% of reads take <= 50us. Less than 0.01% of reads take more than 1ms, and no reads take more than 100ms.
[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/

[akpm@linux-foundation.org: fix mm/zswap.c]
[yosryahmed@google.com: remove stats flushing mutex]
Link: https://lkml.kernel.org/r/CAJD7tkZgP3m-VVPn+fF_YuvXeQYK=tZZjJHj=dzD=CcSSpp2qg@mail.gmail.com
Link: https://lkml.kernel.org/r/20231129032154.3710765-6-yosryahmed@google.com
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Tested-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Acked-by: Shakeel Butt <shakeelb@google.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Greg Thelen <gthelen@google.com>
Cc: Ivan Babrou <ivan@cloudflare.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Koutny <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Wei Xu <weixugc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
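Note on the threshold mentioned in the message: the per-memcg flushing threshold is introduced earlier in this series, and this commit only calls memcg_should_flush_stats(). A rough sketch of the idea is shown below; the stats_updates field and the MEMCG_CHARGE_BATCH scaling are assumptions based on the surrounding series, not part of this diff.

/*
 * Sketch only: skip the flush unless the subtree has accumulated enough
 * pending stat updates. Field and constant names are assumed here, not
 * taken from this commit.
 */
static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
{
	return atomic64_read(&memcg->vmstats->stats_updates) >
	       MEMCG_CHARGE_BATCH * num_online_cpus();
}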
1 parent: b006847

5 files changed: +52 -38 lines

include/linux/memcontrol.h

Lines changed: 4 additions & 4 deletions

@@ -1051,8 +1051,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return x;
 }
 
-void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1563,11 +1563,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return node_page_state(lruvec_pgdat(lruvec), idx);
 }
 
-static inline void mem_cgroup_flush_stats(void)
+static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
 }
 
-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 }
 
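With the new declarations above, callers pass the memcg whose subtree they want flushed. A minimal usage sketch follows (hypothetical helper name; it mirrors the zswap_current_read() change in mm/memcontrol.c below, and per that hunk a NULL memcg means "flush from the root"):

/* Hypothetical reader: flush only the subtree of interest, then read a counter. */
static unsigned long read_fresh_zswap_bytes(struct mem_cgroup *memcg)
{
	mem_cgroup_flush_stats(memcg);	/* NULL would fall back to the root memcg */
	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
}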

mm/memcontrol.c

Lines changed: 39 additions & 29 deletions

@@ -670,7 +670,6 @@ struct memcg_vmstats {
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
@@ -731,35 +730,40 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void do_flush_stats(void)
+static void do_flush_stats(struct mem_cgroup *memcg)
 {
-	/*
-	 * We always flush the entire tree, so concurrent flushers can just
-	 * skip. This avoids a thundering herd problem on the rstat global lock
-	 * from memcg flushers (e.g. reclaim, refault, etc).
-	 */
-	if (atomic_read(&stats_flush_ongoing) ||
-	    atomic_xchg(&stats_flush_ongoing, 1))
-		return;
-
-	WRITE_ONCE(flush_last_time, jiffies_64);
-
-	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+	if (mem_cgroup_is_root(memcg))
+		WRITE_ONCE(flush_last_time, jiffies_64);
 
-	atomic_set(&stats_flush_ongoing, 0);
+	cgroup_rstat_flush(memcg->css.cgroup);
 }
 
-void mem_cgroup_flush_stats(void)
+/*
+ * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree
+ * @memcg: root of the subtree to flush
+ *
+ * Flushing is serialized by the underlying global rstat lock. There is also a
+ * minimum amount of work to be done even if there are no stat updates to flush.
+ * Hence, we only flush the stats if the updates delta exceeds a threshold. This
+ * avoids unnecessary work and contention on the underlying lock.
+ */
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
-	if (memcg_should_flush_stats(root_mem_cgroup))
-		do_flush_stats();
+	if (mem_cgroup_disabled())
+		return;
+
+	if (!memcg)
+		memcg = root_mem_cgroup;
+
+	if (memcg_should_flush_stats(memcg))
+		do_flush_stats(memcg);
 }
 
-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 	/* Only flush if the periodic flusher is one full cycle late */
 	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats(memcg);
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -768,7 +772,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
 	 * in latency-sensitive paths is as cheap as possible.
 	 */
-	do_flush_stats();
+	do_flush_stats(root_mem_cgroup);
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -1643,7 +1647,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4193,7 +4197,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4274,7 +4278,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -4770,7 +4774,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
@@ -6865,7 +6869,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;
@@ -8096,7 +8100,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 			break;
 		}
 
-		cgroup_rstat_flush(memcg->css.cgroup);
+		/*
+		 * mem_cgroup_flush_stats() ignores small changes. Use
+		 * do_flush_stats() directly to get accurate stats for charging.
+		 */
+		do_flush_stats(memcg);
 		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
 		if (pages < max)
 			continue;
@@ -8161,8 +8169,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
-	cgroup_rstat_flush(css->cgroup);
-	return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	mem_cgroup_flush_stats(memcg);
+	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
 }
 
 static int zswap_max_show(struct seq_file *m, void *v)

mm/vmscan.c

Lines changed: 1 addition & 1 deletion

@@ -2226,7 +2226,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(sc->target_mem_cgroup);
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.

mm/workingset.c

Lines changed: 7 additions & 3 deletions

@@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 
 	rcu_read_unlock();
 
-	/* Flush stats (and potentially sleep) outside the RCU read section */
-	mem_cgroup_flush_stats_ratelimited();
+	/*
+	 * Flush stats (and potentially sleep) outside the RCU read section.
+	 * XXX: With per-memcg flushing and thresholding, is ratelimiting
+	 * still needed here?
+	 */
+	mem_cgroup_flush_stats_ratelimited(eviction_memcg);
 
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	struct lruvec *lruvec;
 	int i;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(sc->memcg);
 	lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 	for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 		pages += lruvec_page_state_local(lruvec,

mm/zswap.c

Lines changed: 1 addition & 1 deletion

@@ -641,7 +641,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 		return 0;
 
 #ifdef CONFIG_MEMCG_KMEM
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
 	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
 #else
