perf, powerpc: Fix hw breakpoints returning -ENOSPC

I've been trying to get hardware breakpoints with perf to work
on POWER7 but I'm getting the following:

  % perf record -e mem:0x10000000 true

    Error: sys_perf_event_open() syscall returned with 28 (No space left on device).  /bin/dmesg may provide additional information.

    Fatal: No CONFIG_PERF_EVENTS=y kernel support configured?

  true: Terminated

(FWIW, adding -a makes it work fine.)

Debugging this, it seems that __reserve_bp_slot() is returning
-ENOSPC because it thinks there are no free breakpoint slots on
this CPU.

I have 2 CPUs, so perf userspace does two perf_event_open
syscalls to add a counter to each CPU [1].  The first syscall
succeeds but the second fails.
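
Each of those syscalls is roughly equivalent to the sketch below:
a per-task PERF_TYPE_BREAKPOINT counter bound to a single CPU.
This is not perf's actual code; the helper name, address and
length are placeholders taken from the mem:0x10000000 event spec
above, just to make the per-CPU attach explicit:

  /*
   * Rough sketch (not perf's code) of the per-CPU perf_event_open()
   * calls made for "perf record -e mem:0x10000000 true": one
   * breakpoint counter per online CPU, all attached to the same
   * child pid.
   */
  #include <linux/hw_breakpoint.h>
  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <unistd.h>

  static int open_mem_bp(pid_t pid, int cpu)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.type    = PERF_TYPE_BREAKPOINT;
          attr.size    = sizeof(attr);
          attr.bp_type = HW_BREAKPOINT_R | HW_BREAKPOINT_W;
          attr.bp_addr = 0x10000000;          /* address from the event spec */
          attr.bp_len  = HW_BREAKPOINT_LEN_4; /* placeholder length */

          /* pid > 0, cpu >= 0: count for this task, only while on this CPU */
          return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
  }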

On this second syscall, fetch_bp_busy_slots() sets slots.pinned
to 1, despite there being no breakpoint on this CPU.  This is
because the call to task_bp_pinned() checks all CPUs, rather
than just the current CPU.  POWER7 only has one hardware
breakpoint per CPU (ie. HBP_NUM=1), so we return -ENOSPC.
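
For reference, the check that actually rejects the request is the
pinned-slot accounting in __reserve_bp_slot().  In simplified form
(a sketch of the logic, not the exact kernel code) it amounts to:

  /*
   * Simplified sketch of the slot check (not the exact kernel code):
   * fetch_bp_busy_slots() reports how many pinned breakpoint slots
   * are already claimed for this CPU, and the new breakpoint is
   * refused if it does not fit into the HBP_NUM slots the
   * architecture provides.
   */
  struct bp_busy_slots slots;

  fetch_bp_busy_slots(&slots, bp, type);

  /* With HBP_NUM == 1 on POWER7, a stale pinned count of 1 already fails */
  if (slots.pinned + hw_breakpoint_weight(bp) > HBP_NUM)
          return -ENOSPC;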

The following patch fixes this by checking the associated CPU
for each breakpoint in task_bp_pinned().  I'm not familiar with
this code, so it's provided as a reference for the above issue.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Jovi Zhang <bookjovi@gmail.com>
Cc: K Prasad <prasad@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1351268936-2956-1-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 0d855354ea
parent 35fd3dc58d
Author:    Michael Neuling <mikey@neuling.org>
Date:      2012-10-26 18:28:56 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
1 file changed, 7 insertions(+), 5 deletions(-)

@@ -111,14 +111,16 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type)
  * Count the number of breakpoints of the same type and same task.
  * The given event must be not on the list.
  */
-static int task_bp_pinned(struct perf_event *bp, enum bp_type_idx type)
+static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
 {
         struct task_struct *tsk = bp->hw.bp_target;
         struct perf_event *iter;
         int count = 0;
 
         list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
-                if (iter->hw.bp_target == tsk && find_slot_idx(iter) == type)
+                if (iter->hw.bp_target == tsk &&
+                    find_slot_idx(iter) == type &&
+                    cpu == iter->cpu)
                         count += hw_breakpoint_weight(iter);
         }
 
@@ -141,7 +143,7 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
                 if (!tsk)
                         slots->pinned += max_task_bp_pinned(cpu, type);
                 else
-                        slots->pinned += task_bp_pinned(bp, type);
+                        slots->pinned += task_bp_pinned(cpu, bp, type);
                 slots->flexible = per_cpu(nr_bp_flexible[type], cpu);
 
                 return;
@@ -154,7 +156,7 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
                 if (!tsk)
                         nr += max_task_bp_pinned(cpu, type);
                 else
-                        nr += task_bp_pinned(bp, type);
+                        nr += task_bp_pinned(cpu, bp, type);
 
                 if (nr > slots->pinned)
                         slots->pinned = nr;
@@ -188,7 +190,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu, bool enable,
         int old_idx = 0;
         int idx = 0;
 
-        old_count = task_bp_pinned(bp, type);
+        old_count = task_bp_pinned(cpu, bp, type);
         old_idx = old_count - 1;
         idx = old_idx + weight;