sched/fair: Have task_move_group_fair() also detach entity load from the old runqueue

Since we attach the entity load to the new runqueue, we should also
detach the entity load from the old runqueue, otherwise load can
accumulate there.
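
For reference, the detach operation mirrors the attach one: once the
entity's own average is up to date, its contribution is folded out of
the old runqueue's running sums. A simplified sketch of what
detach_entity_load_avg() does (assuming the sched_avg fields of this
era, and omitting the preceding __update_load_avg() call that freshens
se->avg first; not a verbatim copy of the helper):

  static void
  detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
  {
  	/*
  	 * Subtract the entity's contribution, clamping at zero so a
  	 * stale entity average cannot drive the runqueue sums negative.
  	 */
  	cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
  	cfs_rq->avg.load_sum = max_t(s64,  cfs_rq->avg.load_sum - se->avg.load_sum, 0);
  	cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
  	cfs_rq->avg.util_sum = max_t(s32,  cfs_rq->avg.util_sum - se->avg.util_sum, 0);
  }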

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1440069720-27038-4-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Author: Byungchul Park, 2015-08-20 20:21:58 +09:00; committed by Ingo Molnar
commit 1746babbb1
parent 50a2a3b246
1 changed file with 5 additions and 1 deletion

@@ -8037,8 +8037,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 	if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
 		queued = 1;
 
+	cfs_rq = cfs_rq_of(se);
 	if (!queued)
-		se->vruntime -= cfs_rq_of(se)->min_vruntime;
+		se->vruntime -= cfs_rq->min_vruntime;
+
+	/* Synchronize task with its prev cfs_rq */
+	detach_entity_load_avg(cfs_rq, se);
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
 	cfs_rq = cfs_rq_of(se);
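
With this change the detach and attach now pair up around the
re-parenting: detach_entity_load_avg() drops the entity's contribution
from the old cfs_rq before set_task_rq(), and (per the parent commit
50a2a3b246) attach_entity_load_avg() adds it to the new cfs_rq just
past the end of this hunk. Condensed, the resulting flow looks roughly
like this (the tail below is reconstructed from the surrounding
function as of the parent commit, not shown in the hunk above):

  	cfs_rq = cfs_rq_of(se);			/* old runqueue */
  	if (!queued)
  		se->vruntime -= cfs_rq->min_vruntime;

  	/* Synchronize task with its prev cfs_rq */
  	detach_entity_load_avg(cfs_rq, se);

  	set_task_rq(p, task_cpu(p));		/* re-parent the entity */
  	se->depth = se->parent ? se->parent->depth + 1 : 0;

  	cfs_rq = cfs_rq_of(se);			/* new runqueue */
  	if (!queued)
  		se->vruntime += cfs_rq->min_vruntime;

  	attach_entity_load_avg(cfs_rq, se);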