commit 8236632fb3
For N CPUs, with full-throttle traffic on all N CPUs, funneling traffic to the same ethernet device, the device's queue lock is contended by all N CPUs constantly. The TX lock, by contrast, is contended by at most two CPUs. In the current mode of operation, after doing all the work of entering the dequeue region, we may end up aborting the path if we are unable to get the TX lock, and go back to contend for the queue lock. As N goes up, this gets worse.

The changes in this patch result in a small increase in performance on a 4-CPU (2x dual-core) system with no IRQ binding. Both e1000 and tg3 showed similar behavior.

Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
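To make the locking change concrete, below is a minimal before/after sketch of the dequeue path. It is illustrative only, not the exact kernel code: the field and helper names (tx_lock, dev_dequeue_skb, dev_requeue_skb, dev_hard_start_xmit) stand in for the real ones, and details such as LLTX drivers and return-code handling are omitted.

    /* Illustrative sketch of the locking change, assuming the caller
     * (the qdisc run loop) enters with dev->queue_lock held. */

    /* Before: trylock the driver TX lock; on contention, abort and
     * requeue. The requeued packet sends this CPU back through the
     * queue lock, which all N CPUs are fighting over. */
    static int qdisc_restart_old(struct net_device *dev)
    {
            struct sk_buff *skb = dev_dequeue_skb(dev); /* queue_lock held */

            if (!skb)
                    return 0;

            if (!spin_trylock(&dev->tx_lock)) {
                    /* Another CPU holds the TX lock: undo the dequeue
                     * work and re-contend for dev->queue_lock. */
                    return dev_requeue_skb(skb, dev);
            }

            spin_unlock(&dev->queue_lock);
            dev_hard_start_xmit(skb, dev);
            spin_unlock(&dev->tx_lock);
            spin_lock(&dev->queue_lock);
            return 1;
    }

    /* After: release the queue lock first, then take the TX lock
     * unconditionally instead of trying and bailing out. */
    static int qdisc_restart_new(struct net_device *dev)
    {
            struct sk_buff *skb = dev_dequeue_skb(dev); /* queue_lock held */

            if (!skb)
                    return 0;

            spin_unlock(&dev->queue_lock);

            spin_lock(&dev->tx_lock); /* contended by at most 2 CPUs */
            dev_hard_start_xmit(skb, dev);
            spin_unlock(&dev->tx_lock);

            spin_lock(&dev->queue_lock);
            return 1;
    }

The trade-off this expresses: spinning briefly on a lock contended by at most two CPUs is cheaper than aborting the dequeue and rejoining the N-way contention on the queue lock, which is why the benefit grows with N.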