The recently added NBD context qemu:allocation-depth is able to
distinguish between locally-present data (even when that data is
sparse) [shown as depth 1 over NBD], and data that could not be found
anywhere in the backing chain [shown as depth 0]; and the libnbd
project was recently patched to give the human-readable name "absent"
to an allocation-depth of 0. But qemu-img map --output=json predates
that addition, and has the unfortunate behavior that all portions of
the backing chain that resolve without finding a hit in any backing
layer report the same depth as the final backing layer. This makes it
harder to reconstruct a qcow2 backing chain using just 'qemu-img map'
output, especially when using "backing":null to artificially limit a
backing chain, because it is impossible to distinguish between a
QCOW2_CLUSTER_UNALLOCATED (which defers to a [missing] backing file)
and a QCOW2_CLUSTER_ZERO_PLAIN cluster (which would override any
backing file), since both types of clusters otherwise show as
"data":false,"zero":true" (but note that we can distinguish a
QCOW2_CLUSTER_ZERO_ALLOCATED, which would also have an "offset":
listing).
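
For illustration, here is a minimal sketch of the ambiguity (it assumes
qemu-img and qemu-io are on PATH; the file names are invented). It
creates one cluster of each type and maps the image with its backing
chain cut off:

  # Sketch: a zero-plain cluster and an unallocated cluster become
  # indistinguishable once the backing chain is suppressed.
  import json
  import subprocess

  def run(*args):
      return subprocess.run(args, check=True, capture_output=True,
                            text=True).stdout

  run("qemu-img", "create", "-f", "qcow2", "base.qcow2", "128k")
  run("qemu-img", "create", "-f", "qcow2", "-b", "base.qcow2",
      "-F", "qcow2", "top.qcow2", "128k")
  # Cluster 0: explicit zeroes (QCOW2_CLUSTER_ZERO_PLAIN, overrides
  # any backing file)
  run("qemu-io", "-f", "qcow2", "-c", "write -z 0 64k", "top.qcow2")
  # Cluster 1: left untouched (QCOW2_CLUSTER_UNALLOCATED, defers to
  # the backing file)

  # Map with the backing chain artificially limited via "backing": null
  spec = json.dumps({"driver": "qcow2",
                     "file": {"driver": "file", "filename": "top.qcow2"},
                     "backing": None})
  extents = json.loads(run("qemu-img", "map", "--output=json",
                           "json:" + spec))
  for extent in extents:
      print(extent)
  # Without this patch, both extents print as "data": false,
  # "zero": true at the same depth and cannot be told apart.
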
The task of reconstructing a qcow2 chain was made harder in commit
0da9856851 (nbd: server: Report holes for raw images), because prior
to that point, it was possible to abuse NBD's block status command to
see which portions of a qcow2 file resulted in BDRV_BLOCK_ALLOCATED
(showing up as NBD_STATE_ZERO in isolation) vs. missing from the chain
(showing up as NBD_STATE_ZERO|NBD_STATE_HOLE); but now qemu reports
more accurate sparseness information over NBD.
An obvious solution is to make 'qemu-img map --output=json' add an
additional "present":false designation to any cluster lacking an
allocation anywhere in the chain, without any change to the "depth"
parameter to avoid breaking existing clients. The iotests have
several examples where this distinction demonstrates the additional
accuracy.
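
A client reconstructing a chain could then key off the new field; a
minimal sketch, reusing the extents list from the example above:

  # Sketch: classify extents using the new "present" field.
  for extent in extents:
      if not extent.get("present", True):
          kind = "absent from the entire chain"
      elif extent["zero"] and not extent["data"]:
          kind = "explicit zeroes (e.g. QCOW2_CLUSTER_ZERO_PLAIN)"
      else:
          kind = "data"
      print(extent["start"], extent["length"], kind)
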
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210701190655.2131223-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: fix more iotest fallout]
Signed-off-by: Eric Blake <eblake@redhat.com>

Three test cases:
(1) Adding a qcow2 (metadata) file to an existing data file, see whether
we can read the existing data through the qcow2 image (a sketch of this
case follows the list).
(2) Append data to the data file, grow the qcow2 image accordingly, see
whether we can read the new data through the qcow2 image.
(3) At runtime, add a backing image to a freshly created qcow2 image
with an external data file (with data-file-raw). Reading data from
the qcow2 image must return the same result as reading data from the
data file, so everything in the backing image must be ignored.
(This was previously not the case: without the L2 tables
preallocated, all clusters would appear as unallocated, and so the
qcow2 driver would fall through to the backing file.)
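
A minimal sketch of case (1), assuming qemu-img and qemu-io on PATH
(file names invented):

  # Sketch of case (1): wrap an existing raw data file in a fresh qcow2
  # metadata file and read the pre-existing data back through qcow2.
  import subprocess

  def run(*args):
      return subprocess.run(args, check=True, capture_output=True,
                            text=True).stdout

  # A pre-existing raw data file with a known pattern
  with open("data.raw", "wb") as f:
      f.write(b"\xab" * (64 * 1024))

  # Create qcow2 metadata around it; data_file_raw=on requires the raw
  # file to remain valid on its own, i.e. a 1:1 mapping
  run("qemu-img", "create", "-f", "qcow2",
      "-o", "data_file=data.raw,data_file_raw=on", "wrap.qcow2", "64k")
  # Reading through the qcow2 image must return the existing data
  print(run("qemu-io", "-f", "qcow2", "-c", "read -P 0xab 0 64k",
            "wrap.qcow2"))
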
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210326145509.163455-3-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

Setting the qcow2 data-file-raw bit means that you can ignore the
qcow2 metadata when reading from the external data file. It does not
mean that you have to ignore it, though. Therefore, the data read must
be the same regardless of whether you interpret the metadata or whether
you ignore it, and thus the L1/L2 tables must all be present and give a
1:1 mapping.
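
For instance (a sketch with invented file names), qemu-img compare
between the qcow2 image and its raw data file must find them identical:

  # Sketch: interpreting the metadata (qcow2) and ignoring it (raw)
  # must yield the same data, so qemu-img compare has to succeed.
  import subprocess

  def run(*args):
      subprocess.run(args, check=True)

  run("qemu-img", "create", "-f", "qcow2",
      "-o", "data_file=ext.raw,data_file_raw=on", "img.qcow2", "1M")
  run("qemu-io", "-f", "qcow2", "-c", "write -P 0x11 128k 64k",
      "img.qcow2")
  # Exits successfully only if both views agree byte for byte
  run("qemu-img", "compare", "-f", "qcow2", "-F", "raw",
      "img.qcow2", "ext.raw")
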
This patch changes 244's output: First, the qcow2 file is larger right
after creation, because of metadata preallocation. Second, the qemu-img
map output changes: Everything that was not explicitly discarded or
zeroed is now a data area.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210326145509.163455-2-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

Flushing a qcow2 node must lead to the data-file node being flushed as
well.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>

Test 244 checks the expected behavior of qcow2 external data files
with respect to zero and discarded clusters. Filesystems, however,
are free to ignore discard requests, and this seems to be the
case for overlayfs. Relax the tests to skip checks on the
external data file for discarded areas, which implies not using
qemu-img compare in the data_file_raw=on case.
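
The probing this implies looks roughly like the following sketch
(invented file name; st_blocks is POSIX-specific):

  # Sketch: discard is advisory, so check whether the filesystem really
  # punched a hole before asserting anything about the data file.
  import os
  import subprocess

  def run(*args):
      subprocess.run(args, check=True)

  run("qemu-img", "create", "-f", "raw", "probe.raw", "1M")
  run("qemu-io", "-f", "raw", "-c", "write -P 0x22 0 1M", "probe.raw")
  before = os.stat("probe.raw").st_blocks
  run("qemu-io", "-f", "raw", "-c", "discard 0 1M", "probe.raw")
  if os.stat("probe.raw").st_blocks >= before:
      print("filesystem ignored the discard; skipping data file checks")
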
This fixes docker tests on RHEL8.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-block@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200409191006.24429-1-pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Various qemu-img commands are inconsistent on whether they report
status/errors in terms of bytes or sector offsets. The latter is
confusing (especially as more places move to 4k block sizes), so let's
switch everything to just use bytes everywhere. One iotest is
impacted.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200402135717.476398-1-eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

This adds a test for 'qemu-img convert' with copy offloading where the
target image has an external data file. If the test host supports it,
it tests both the case where copy offloading is supported and the case
where it isn't (otherwise we just test unsupported twice).
More specifically, the case with unsupported copy offloading tests
qcow2_alloc_cluster_abort() with external data files.
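
A rough sketch of such a conversion (invented file names; -C asks
qemu-img to attempt copy offloading):

  # Sketch: convert into a pre-created qcow2 target that uses an
  # external data file, requesting copy offloading where available.
  import subprocess

  def run(*args):
      subprocess.run(args, check=True)

  run("qemu-img", "create", "-f", "raw", "src.raw", "1M")
  run("qemu-io", "-f", "raw", "-c", "write -P 0x5a 0 1M", "src.raw")
  run("qemu-img", "create", "-f", "qcow2",
      "-o", "data_file=dst.raw", "dst.qcow2", "1M")
  # -n reuses the pre-created target; -C tries copy offloading (e.g.
  # copy_file_range()) and qemu falls back to a plain copy if the
  # host does not support it
  run("qemu-img", "convert", "-n", "-C", "-f", "raw", "-O", "qcow2",
      "src.raw", "dst.qcow2")
  run("qemu-img", "compare", "-f", "raw", "-F", "qcow2",
      "src.raw", "dst.qcow2")
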
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200211094900.17315-4-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

One of the recent commits changed the way qemu-io prints out its
errors and warnings - they are now prefixed with the program name.
We've got to adapt the iotests accordingly to keep them from
failing.
Fixes: 99e98d7c9f ("qemu-io: Use error_[gs]et_progname()")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>