sched: fix sync wakeups
Pawel Dziekonski reported that the openssl benchmark and his
quantum chemistry application both show slowdowns due to the
scheduler under-parallelizing execution.

The reason is that pipe wakeups are still doing 'sync' wakeups, which
override the normal buddy wakeup logic even when waker and wakee are
only loosely coupled.

Fix an inversion of logic in the buddy wakeup code.

Reported-by: Pawel Dziekonski <dzieko@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra authored and Ingo Molnar committed Feb 1, 2009
1 parent f90d411 commit d942fb6
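In short, the patch moves the "is this really a synchronous wakeup?" decision into try_to_wake_up() itself: a wakeup is now promoted to sync when both the waker's and the wakee's se.avg_overlap are below sysctl_sched_migration_cost, i.e. when the two tasks behave like a tightly coupled producer/consumer pair, and the downstream buddy/preemption code then simply honours the sync flag instead of second-guessing it. Below is a minimal sketch of that promoted check, assuming kernel-internal definitions (struct task_struct, sysctl_sched_migration_cost) and using a hypothetical helper name purely for illustration:

	/*
	 * Sketch only: wakeup_is_sync() is a hypothetical helper; in the
	 * patch the same condition sits inline in try_to_wake_up().
	 */
	static int wakeup_is_sync(struct task_struct *waker,
				  struct task_struct *wakee, int sync_hint)
	{
		if (sync_hint)
			return 1;	/* caller already asked for a sync wakeup */

		/* Both tasks barely overlap with their wakers: treat as sync. */
		if (waker->se.avg_overlap < sysctl_sched_migration_cost &&
		    wakee->se.avg_overlap < sysctl_sched_migration_cost)
			return 1;

		return 0;
	}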
Showing 2 changed files with 6 additions and 9 deletions.
4 changes: 4 additions & 0 deletions kernel/sched.c
@@ -2266,6 +2266,10 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
 	if (!sched_feat(SYNC_WAKEUPS))
 		sync = 0;
 
+	if (!sync && (current->se.avg_overlap < sysctl_sched_migration_cost &&
+		      p->se.avg_overlap < sysctl_sched_migration_cost))
+		sync = 1;
+
 #ifdef CONFIG_SMP
 	if (sched_feat(LB_WAKEUP_UPDATE)) {
 		struct sched_domain *sd;
11 changes: 2 additions & 9 deletions kernel/sched_fair.c
@@ -1179,20 +1179,15 @@ wake_affine(struct sched_domain *this_sd, struct rq *this_rq,
 	    int idx, unsigned long load, unsigned long this_load,
 	    unsigned int imbalance)
 {
-	struct task_struct *curr = this_rq->curr;
-	struct task_group *tg;
 	unsigned long tl = this_load;
 	unsigned long tl_per_task;
+	struct task_group *tg;
 	unsigned long weight;
 	int balanced;
 
 	if (!(this_sd->flags & SD_WAKE_AFFINE) || !sched_feat(AFFINE_WAKEUPS))
 		return 0;
 
-	if (sync && (curr->se.avg_overlap > sysctl_sched_migration_cost ||
-			p->se.avg_overlap > sysctl_sched_migration_cost))
-		sync = 0;
-
 	/*
 	 * If sync wakeup then subtract the (maximum possible)
 	 * effect of the currently running task from the load
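The comment above refers to the long-standing affine-wakeup heuristic: on a sync wakeup the waker is expected to sleep almost immediately, so its load is discounted before deciding whether waking the task on this CPU keeps the load balanced. A simplified sketch of that idea follows (the actual kernel code is more involved and also accounts for group scheduling; tl is the local load variable seen in the hunk above):

	/* Sketch: discount the soon-to-sleep waker from this CPU's load. */
	if (sync)
		tl -= current->se.load.weight;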
@@ -1419,9 +1414,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int sync)
 	if (!sched_feat(WAKEUP_PREEMPT))
 		return;
 
-	if (sched_feat(WAKEUP_OVERLAP) && (sync ||
-			(se->avg_overlap < sysctl_sched_migration_cost &&
-			 pse->avg_overlap < sysctl_sched_migration_cost))) {
+	if (sched_feat(WAKEUP_OVERLAP) && sync) {
 		resched_task(curr);
 		return;
 	}