Commit a6eaf38
Merge tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fixes from Ingo Molnar:

 - Fix a small inconsistency (bug) in load tracking, caught by a new
   warning that several people reported.

 - Flip CONFIG_SCHED_CORE to default-disabled, and update the Kconfig
   help text.

* tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/core: Disable CONFIG_SCHED_CORE by default
  sched/fair: Ensure _sum and _avg values stay consistent
2 parents: f4cc74c + a22a5cb

2 files changed: 6 additions & 6 deletions

kernel/Kconfig.preempt

Lines changed: 3 additions & 3 deletions
@@ -102,7 +102,6 @@ config PREEMPT_DYNAMIC
 
 config SCHED_CORE
 	bool "Core Scheduling for SMT"
-	default y
 	depends on SCHED_SMT
 	help
 	  This option permits Core Scheduling, a means of coordinated task
@@ -115,7 +114,8 @@ config SCHED_CORE
 	  - mitigation of some (not all) SMT side channels;
 	  - limiting SMT interference to improve determinism and/or performance.
 
-	  SCHED_CORE is default enabled when SCHED_SMT is enabled -- when
-	  unused there should be no impact on performance.
+	  SCHED_CORE is default disabled. When it is enabled and unused,
+	  which is the likely usage by Linux distributions, there should
+	  be no measurable impact on performance.
 
 
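
Since the default is now off, anyone who wants core scheduling has to opt in explicitly. A minimal .config sketch of that opt-in, assuming an SMT-capable configuration (only the two option names come from the diff above):

    # Opt in to core scheduling now that it is no longer default-enabled.
    # SCHED_CORE depends on SCHED_SMT, so both options must be set.
    CONFIG_SCHED_SMT=y
    CONFIG_SCHED_CORE=y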

kernel/sched/fair.c

Lines changed: 3 additions & 3 deletions
@@ -3685,15 +3685,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 
 		r = removed_load;
 		sub_positive(&sa->load_avg, r);
-		sub_positive(&sa->load_sum, r * divider);
+		sa->load_sum = sa->load_avg * divider;
 
 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sub_positive(&sa->util_sum, r * divider);
+		sa->util_sum = sa->util_avg * divider;
 
 		r = removed_runnable;
 		sub_positive(&sa->runnable_avg, r);
-		sub_positive(&sa->runnable_sum, r * divider);
+		sa->runnable_sum = sa->runnable_avg * divider;
 
 		/*
 		 * removed_runnable is the unweighted version of removed_load so we
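
For context on this change: the old code subtracted r from the _avg fields (clamped by sub_positive()) and r * divider from the _sum fields independently, so rounding in earlier decay steps could leave _sum out of sync with _avg * divider, which is what the new load-tracking warning caught. Deriving _sum from _avg makes the invariant hold by construction. A standalone C sketch of the drift, with illustrative values (the divider of 1000 and all the numbers are hypothetical stand-ins, not kernel-accurate PELT values):

    #include <stdio.h>

    /* Clamp-at-zero subtraction, mirroring the kernel's sub_positive(). */
    static void sub_positive(unsigned long *p, unsigned long v)
    {
            *p = (*p > v) ? *p - v : 0;
    }

    int main(void)
    {
            unsigned long divider  = 1000;                 /* illustrative only */
            unsigned long load_avg = 512;
            unsigned long load_sum = 512 * divider - 100;  /* earlier rounding left _sum short */
            unsigned long r = 500;                         /* removed load */

            /* Old scheme: update _avg and _sum independently. */
            sub_positive(&load_avg, r);            /* 12 */
            sub_positive(&load_sum, r * divider);  /* 11900, but 12 * 1000 = 12000 */
            printf("independent: avg=%lu sum=%lu avg*divider=%lu\n",
                   load_avg, load_sum, load_avg * divider);

            /* New scheme: recompute _sum from _avg, as the diff above does,
             * so _sum == _avg * divider holds after every removal. */
            load_sum = load_avg * divider;
            printf("derived:     avg=%lu sum=%lu\n", load_avg, load_sum);
            return 0;
    }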
