87773dd56d
In debugging an application that receives -ENOMEM from ib_reg_mr(), I
found that ib_umem_get() can fail because the pinned_vm count has
wrapped, causing it to always be larger than the lock limit even with
RLIMIT_MEMLOCK set to RLIM_INFINITY.

The wrapping of pinned_vm occurs because the process that calls
ib_reg_mr() has its mm->pinned_vm count incremented, but later a
different process, with a different mm_struct than the one that
allocated the ib_umem struct, ends up releasing it.  This decrements
the new process's mm->pinned_vm count past zero and wraps.

I'm not entirely sure what circumstances cause a different process to
release the ib_umem than the one that allocated it, but the kernel
stack trace of the freeing process from my situation looks like the
following:

    Call Trace:
     [<ffffffff814d64b1>] dump_stack+0x19/0x1b
     [<ffffffffa0b522a5>] ib_umem_release+0x1f5/0x200 [ib_core]
     [<ffffffffa0b90681>] mlx4_ib_destroy_qp+0x241/0x440 [mlx4_ib]
     [<ffffffffa0b4d93c>] ib_destroy_qp+0x12c/0x170 [ib_core]
     [<ffffffffa0cc7129>] ib_uverbs_close+0x259/0x4e0 [ib_uverbs]
     [<ffffffff81141cba>] __fput+0xba/0x240
     [<ffffffff81141e4e>] ____fput+0xe/0x10
     [<ffffffff81060894>] task_work_run+0xc4/0xe0
     [<ffffffff810029e5>] do_notify_resume+0x95/0xa0
     [<ffffffff814e3dd0>] int_signal+0x12/0x17

The following patch fixes the issue by storing the pid struct of the
process that calls ib_umem_get() so that ib_umem_release() and/or
ib_umem_account() can properly decrement the pinned_vm count of the
correct mm_struct.

Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Reviewed-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
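
For reference, here is a minimal sketch of the approach described above,
not the patch itself: take a reference on the caller's pid at
ib_umem_get() time, then resolve that pid back to a task and mm at
release time so the pinned_vm accounting is undone on the mm that was
charged.  The struct and helper names (ib_umem_sketch,
umem_get_record_owner, umem_release_unaccount, diff) are illustrative
only, and the sketch assumes the pre-5.8 mm->mmap_sem and
unsigned long pinned_vm layout of kernels from this era.

    #include <linux/mm.h>
    #include <linux/pid.h>
    #include <linux/sched.h>

    /*
     * Sketch only: the real struct ib_umem of this era carries more
     * fields (context, length, offset, page list, ...); the point here
     * is the added "pid" reference.
     */
    struct ib_umem_sketch {
            struct mm_struct *mm;
            struct pid       *pid;   /* new: owner of the pinned pages */
            unsigned long     diff;  /* pages accounted in pinned_vm */
    };

    /* At registration time, remember which process pinned the memory. */
    static void umem_get_record_owner(struct ib_umem_sketch *umem)
    {
            umem->pid = get_task_pid(current, PIDTYPE_PID);
    }

    /*
     * At release time, look up the owning task from the saved pid and
     * decrement *its* mm->pinned_vm, not the releasing process's.
     */
    static void umem_release_unaccount(struct ib_umem_sketch *umem)
    {
            struct task_struct *task;
            struct mm_struct *mm;

            task = get_pid_task(umem->pid, PIDTYPE_PID);
            put_pid(umem->pid);
            if (!task)
                    return;

            mm = get_task_mm(task);
            put_task_struct(task);
            if (!mm)
                    return;

            down_write(&mm->mmap_sem);      /* mmap_sem in 3.x kernels */
            mm->pinned_vm -= umem->diff;
            up_write(&mm->mmap_sem);
            mmput(mm);
    }

The actual fix folds this logic directly into ib_umem_get() and
ib_umem_release() rather than introducing separate helpers; the sketch
only separates them to highlight the pid handoff.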