linux/fs/ceph
Sage Weil db3540522e ceph: fix cap flush race reentrancy
In e9964c10 we changed cap flushing to do a delicate dance because some
inodes on the cap_dirty list could be in a migrating state (EXPORT
received but not IMPORT) in which we couldn't actually flush and move
from dirty->flushing, breaking the simple while (!empty) { process first }
loop structure.  It worked for a single sync thread but was not reentrant
and triggered infinite loops when multiple syncers came along.

Instead, move inodes with dirty caps to a separate cap_dirty_migrating
list when they are in the limbo export-but-no-import state, allowing us
to go back to the simple (and reentrant) loop structure.  This is cleaner
and more robust.

Audited the cap_dirty users and this looks fine:
list_empty(&ci->i_dirty_item) is still a reliable indicator of whether we
have dirty caps (which list we're on is irrelevant), and list_del_init()
calls still do the right thing.

Signed-off-by: Sage Weil <sage@newdream.net>
2011-05-24 11:52:12 -07:00
addr.c ceph: check return value for start_request in writepages 2011-05-19 11:25:05 -07:00
caps.c ceph: fix cap flush race reentrancy 2011-05-24 11:52:12 -07:00
ceph_frag.c
debugfs.c
dir.c ceph: fix broken comparison in readdir loop 2011-05-19 11:25:04 -07:00
export.c ceph: avoid inode lookup on nfs fh reconnect 2011-05-24 11:52:06 -07:00
file.c ceph: do not call __mark_dirty_inode under i_lock 2011-05-04 12:56:45 -07:00
inode.c ceph: do not use i_wrbuffer_ref as refcount for Fb cap 2011-05-11 10:44:48 -07:00
ioctl.c
ioctl.h
Kconfig
locks.c
Makefile
mds_client.c ceph: fix cap flush race reentrancy 2011-05-24 11:52:12 -07:00
mds_client.h ceph: fix cap flush race reentrancy 2011-05-24 11:52:12 -07:00
mdsmap.c
snap.c ceph: fix list_add in ceph_put_snap_realm 2011-05-11 10:44:36 -07:00
strings.c
super.c
super.h ceph: do not use i_wrbuffer_ref as refcount for Fb cap 2011-05-11 10:44:48 -07:00
xattr.c ceph: do not call __mark_dirty_inode under i_lock 2011-05-04 12:56:45 -07:00