xen/gntdev: Fix circular locking dependency

apply_to_page_range() acquires the PTE lock while priv->lock is held,
and mn_invl_range_start() tries to acquire priv->lock with the PTE
lock already held.  Fix this by not holding priv->lock for the entire
map operation.
This is safe because map->vma is set non-NULL while the lock is held,
which causes subsequent maps to fail and causes the unmap ioctl (and
other callers of gntdev_del_map()) to return -EBUSY until the area is
unmapped. It is likewise impossible for gntdev_vma_close() to be
called while the vma is still being created.
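
For illustration, here is a schematic of the two paths whose lock order
conflicts (kernel-style C fragments; only the function and lock names
come from this commit, the bodies are not the driver source):

	/* Path A: gntdev_mmap(), before this patch */
	spin_lock(&priv->lock);			/* 1st: priv->lock */
	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
			vma->vm_end - vma->vm_start,
			find_grant_ptes, map);	/* 2nd: acquires the PTE lock */
	spin_unlock(&priv->lock);

	/* Path B: mn_invl_range_start(), an MMU-notifier callback,
	 * runs with the PTE lock already held (1st: PTE lock) */
	spin_lock(&priv->lock);			/* 2nd: priv->lock -- AB/BA inversion, deadlock */
	spin_unlock(&priv->lock);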

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Authored by Daniel De Graaf on 2011-01-07 11:51:47 +00:00; committed by Konrad Rzeszutek Wilk
parent ba5d101229
commit f0a70c882e
1 changed file with 7 additions and 2 deletions

--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -575,21 +575,26 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_WRITE))
 		map->flags |= GNTMAP_readonly;
 
+	spin_unlock(&priv->lock);
+
 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
 			vma->vm_end - vma->vm_start,
 			find_grant_ptes, map);
 	if (err) {
 		printk(KERN_WARNING "find_grant_ptes() failure.\n");
-		goto unlock_out;
+		return err;
 	}
 
 	err = map_grant_pages(map);
 	if (err) {
 		printk(KERN_WARNING "map_grant_pages() failure.\n");
-		goto unlock_out;
+		return err;
 	}
 
 	map->is_mapped = 1;
+
+	return 0;
+
 unlock_out:
 	spin_unlock(&priv->lock);
 	return err;
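
The safety argument above hinges on the map->vma check: while the vma
pointer is set, gntdev_del_map() refuses to tear the mapping down. A
minimal sketch of that guard (illustrative, not the exact driver source;
the grant_map fields and list handling here are assumptions):

	/* Sketch: deletion is refused while map->vma is set, so the unmap
	 * ioctl keeps returning -EBUSY until the area is unmapped. */
	static int gntdev_del_map(struct grant_map *map)
	{
		if (map->vma)
			return -EBUSY;	/* area mapped, or still being mapped */

		list_del(&map->next);	/* assumed list linkage; safe, no live vma */
		return 0;
	}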