[PATCH] sched: remove staggering of load balancing

Timer interrupts are already staggered across CPUs.  We do not need an
additional layer of time staggering for short load-balancing actions that
take only a small portion of the time slice.
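
For reference, the staggering being removed simply offset each CPU's
balancing clock by a slice of a second, as in the helper deleted in the
diff below.  With, say, HZ=1000 and NR_CPUS=32 (example values only),
neighbouring CPUs end up roughly 31 jiffies apart:

	/* Deleted by this patch: spread balancing start times across CPUs. */
	static inline unsigned long cpu_offset(int cpu)
	{
		return jiffies + cpu * HZ / NR_CPUS;	/* cpu=1, HZ=1000, NR_CPUS=32: +31 jiffies */
	}

Since the per-CPU timer interrupts that invoke rebalance_tick() are
themselves already offset from one another, this extra skew buys nothing.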

For load balancing on large sched_domains we will later add serialization
that avoids concurrent load-balance operations and thus has the same effect
as the staggering removed here.
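
As a rough sketch only (SD_SERIALIZE, the global "balancing" lock and the
helper below are assumptions for illustration, not part of this patch),
such serialization could let exactly one CPU run an expensive balance pass
over a large domain while everyone else backs off:

	/* Sketch: serialize balancing of large domains with a trylock. */
	static DEFINE_SPINLOCK(balancing);

	static void balance_one_domain(int this_cpu, struct rq *this_rq,
				       struct sched_domain *sd, enum idle_type idle)
	{
		int serialize = sd->flags & SD_SERIALIZE;	/* assumed flag */

		if (serialize && !spin_trylock(&balancing))
			return;		/* another CPU is already balancing */

		load_balance(this_cpu, this_rq, sd, idle);

		if (serialize)
			spin_unlock(&balancing);
	}

Contended CPUs skip the pass instead of waiting, so large-domain balancing
never piles up, which is the effect the removed time staggering aimed for.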

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Peter Williams <pwil3058@bigpond.net.au>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

 1 file changed, 2 insertions(+), 8 deletions(-)

@@ -2841,16 +2841,10 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu)
  * Balancing parameters are set up in arch_init_sched_domains.
  */
 
-/* Don't have all balancing operations going off at once: */
-static inline unsigned long cpu_offset(int cpu)
-{
-	return jiffies + cpu * HZ / NR_CPUS;
-}
-
 static void
 rebalance_tick(int this_cpu, struct rq *this_rq, enum idle_type idle)
 {
-	unsigned long this_load, interval, j = cpu_offset(this_cpu);
+	unsigned long this_load, interval;
 	struct sched_domain *sd;
 	int i, scale;
 
@@ -2885,7 +2879,7 @@ rebalance_tick(int this_cpu, struct rq *this_rq, enum idle_type idle)
 		if (unlikely(!interval))
 			interval = 1;
 
-		if (j - sd->last_balance >= interval) {
+		if (jiffies - sd->last_balance >= interval) {
 			if (load_balance(this_cpu, this_rq, sd, idle)) {
 				/*
 				 * We've pulled tasks over so either we're no