qcow2: Limit COW to where it's needed

This fixes a regression introduced in commit 250196f1. The bug leads to
data corruption, found during an Autotest run with a Fedora 8 guest.

Consider a write request whose first part is covered by an already
allocated cluster, but additional clusters need to be newly allocated.
When counting the number of clusters to allocate, the qcow2 code would
decide to do COW for all remaining clusters of the write request, even
if some of them are already allocated.
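
To make the miscounting concrete, here is a minimal, self-contained C
sketch of the counting principle. This is not QEMU's count_cow_clusters();
needs_cow() and the example L2 entries are simplified assumptions (though
QCOW_OFLAG_COPIED really is bit 63 of an L2 entry). The point is that the
scan must stop at the first cluster that is already allocated and writable
in place, instead of claiming every remaining cluster of the request:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified qcow2 L2 entry layout (mask value is an assumption). */
    #define QCOW_OFLAG_COPIED  (1ULL << 63)
    #define L2E_OFFSET_MASK    0x00fffffffffffe00ULL

    /* A cluster needs COW/allocation if it is unallocated or shared
     * (COPIED clear); an allocated cluster with COPIED set is writable
     * in place and must terminate the count. */
    static int needs_cow(uint64_t l2_entry)
    {
        return (l2_entry & L2E_OFFSET_MASK) == 0 ||
               !(l2_entry & QCOW_OFLAG_COPIED);
    }

    static int count_cow_clusters_sketch(int nb_clusters,
                                         const uint64_t *l2_table,
                                         int l2_index)
    {
        int i;
        for (i = 0; i < nb_clusters; i++) {
            if (!needs_cow(l2_table[l2_index + i])) {
                break;  /* already allocated: no COW needed here */
            }
        }
        return i;
    }

    int main(void)
    {
        /* Cluster 0 is allocated and writable in place, so it must
         * contribute zero COW clusters, even though clusters 1 and 2
         * would still need allocation. */
        uint64_t l2[] = { 0x10000ULL | QCOW_OFLAG_COPIED, 0, 0x20000ULL };
        printf("%d\n", count_cow_clusters_sketch(3, l2, 0)); /* prints 0 */
        return 0;
    }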

If during this COW operation another write request is issued that touches
the same cluster, it will still refer to the old cluster. When the COW
completes, the first request will update the L2 table and the second
write request will be lost. Note that the requests need not overlap; it's
enough for them to touch the same cluster.
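
"Touch the same cluster" is easy to make concrete: with qcow2's default
64 KiB clusters, two byte ranges that never overlap can still map to the
same cluster index. A trivial illustrative helper, not taken from QEMU:

    #include <stdint.h>
    #include <stdio.h>

    #define CLUSTER_BITS 16   /* qcow2 default: 64 KiB clusters */

    static uint64_t cluster_index(uint64_t byte_offset)
    {
        return byte_offset >> CLUSTER_BITS;
    }

    int main(void)
    {
        /* Request A writes bytes [0, 4096), request B writes
         * [8192, 12288): no byte overlap, yet both hit cluster 0. */
        printf("A: %llu  B: %llu\n",
               (unsigned long long)cluster_index(0),
               (unsigned long long)cluster_index(8192));
        return 0;
    }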

This patch ensures that only clusters that really require COW are
considered for allocation. In this case, any other request writing to the
same cluster will be an allocating write and will therefore be serialised.
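
QEMU tracks in-flight cluster allocations and makes a later allocating
write wait when its cluster range overlaps one already in flight. The
struct and function below are a hedged sketch of that overlap test at
cluster granularity, not QEMU's actual data structures:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical record of one in-flight allocating write. */
    struct in_flight_alloc {
        uint64_t cluster_start;
        uint64_t nb_clusters;
    };

    /* True if a new allocation [start, start + n) overlaps an in-flight
     * one at cluster granularity; the caller would then wait for the
     * older allocation to complete before proceeding. */
    static bool must_wait(const struct in_flight_alloc *old,
                          uint64_t start, uint64_t n)
    {
        return start < old->cluster_start + old->nb_clusters &&
               old->cluster_start < start + n;
    }

    int main(void)
    {
        /* Both requests from the scenario above touch cluster 0. */
        struct in_flight_alloc a = { .cluster_start = 0, .nb_clusters = 1 };
        printf("%d\n", must_wait(&a, 0, 1));  /* 1: must serialise */
        return 0;
    }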

Reported-by: Marcelo Tosatti <mtosatti@redhat.com>
Tested-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf 2012-04-26 19:41:22 +02:00
parent 115c2b5a68
commit 54e6814360
1 changed file with 11 additions and 7 deletions

@@ -883,17 +883,21 @@ again:
         assert(keep_clusters <= nb_clusters);
         nb_clusters -= keep_clusters;
     } else {
-        /* For the moment, overwrite compressed clusters one by one */
-        if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
-            nb_clusters = 1;
-        } else {
-            nb_clusters = count_cow_clusters(s, nb_clusters, l2_table, l2_index);
-        }
-
         keep_clusters = 0;
         cluster_offset = 0;
     }
 
+    if (nb_clusters > 0) {
+        /* For the moment, overwrite compressed clusters one by one */
+        uint64_t entry = be64_to_cpu(l2_table[l2_index + keep_clusters]);
+        if (entry & QCOW_OFLAG_COMPRESSED) {
+            nb_clusters = 1;
+        } else {
+            nb_clusters = count_cow_clusters(s, nb_clusters, l2_table,
+                l2_index + keep_clusters);
+        }
+    }
+
     cluster_offset &= L2E_OFFSET_MASK;
 
     /*