iotests: Test large write request to qcow2 file

Without HEAD^, the following happens when you attempt a large write
request to a qcow2 file such that the number of bytes covered by all
clusters involved in a single allocation exceeds INT_MAX:

(A) handle_alloc_space() decides to fill the whole area with zeroes
    and fails, because bdrv_co_pwrite_zeroes() fails (the request is
    too large).

(B) If handle_alloc_space() does not do anything, but merge_cow()
    decides that the requests can be merged, it creates an IOV that is
    too long to be written out later.

(C) Otherwise, all parts are written separately, so those requests
    work.

In case (B) or (C), though, qcow2_alloc_cluster_link_l2() will
overflow: we use an int (i) to iterate over nb_clusters, and then
calculate the L2 entry based on "i << s->cluster_bits" -- which
overflows once the range covers more than INT_MAX bytes. This then
leads to image corruption, because the wrong L2 entry will be
recognized as a compressed cluster (see the sketch below).
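
For illustration, the wrap-around can be emulated in shell arithmetic
(a hypothetical sketch, not part of the test below; bash computes in
64 bits, so the truncation to a signed 32-bit int is done by hand):

    # i = 1024 and cluster_bits = 21 (the 2 MB clusters the test uses);
    # 1024 * 2 MB is exactly 2^31, i.e. one byte past INT_MAX
    v=$(( (1024 << 21) & 0xffffffff ))                # truncate to 32 bits
    (( v & 0x80000000 )) && v=$(( v - 0x100000000 ))  # sign-extend
    echo "$v"   # prints -2147483648 instead of 2147483648

Sign-extending that negative value into the 64-bit L2 entry sets the
entry's high bits -- among them the compressed-cluster flag, hence the
misinterpretation.
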
Even if that were not the case, the .cow_end area would be empty
(because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX,
so their difference -- which is the .cow_end size -- will be 0).

So this test checks that on such large requests, the image will not be
corrupted. Unfortunately, we cannot check whether COW will be handled
correctly, because that data is discarded when it is written to
null-co (but we have to use null-co, because writing 2 GB of data in a
test is not quite reasonable).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
#!/usr/bin/env bash
# group: rw backing quick
#
# Test large write to a qcow2 image
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
seq=$(basename "$0")
echo "QA output created by $seq"
status=1 # failure is the default!
_cleanup()
{
    _cleanup_test_img
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
# This is a qcow2 regression test
_supported_fmt qcow2
_supported_proto file
_supported_os Linux
# We use our own external data file and our own cluster size, and we
# require v3 images
_unsupported_imgopts data_file cluster_size 'compat=0.10'
# We need a backing file so that handle_alloc_space() will not do
# anything. (If it were to do anything, it would simply fail its
# write-zeroes request because the request range is too large.)
TEST_IMG="$TEST_IMG.base" _make_test_img 4G
$QEMU_IO -c 'write 0 512' "$TEST_IMG.base" | _filter_qemu_io
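# (The 512 data bytes at offset 0 keep the head of the COW area from
#  reading as all zeroes -- presumably the condition under which
#  handle_alloc_space() would otherwise still take its write-zeroes
#  shortcut.)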
# (Use .orig because _cleanup_test_img will remove that file)
# We need a large cluster size, see below for why (above the $QEMU_IO
# invocation)
_make_test_img -o cluster_size=2M,data_file="$TEST_IMG.orig" \
-b "$TEST_IMG.base" -F $IMGFMT 4G
# We want a null-co as the data file, because it allows us to quickly
# "write" 2G of data without using any space.
# (qemu-img create does not like it, though, because null-co does not
# support image creation.)
$QEMU_IMG amend -o data_file="json:{'driver':'null-co',,'size':'4294967296'}" \
"$TEST_IMG"
# This gives us a range of:
#   2^31 - 512 + 768 - 1 = 2^31 + 255 > 2^31
# until the beginning of the end COW block. (The total allocation
# size depends on the cluster size, but all that is important is that
# it exceeds INT_MAX.)
#
# 2^31 - 512 is the maximum request size. We want this to result in a
# single allocation, and because the qcow2 driver splits allocations
# on L2 boundaries, we need large L2 tables; hence the cluster size of
# 2 MB. (Anything from 256 kB should work, though, because then one L2
# table covers 8 GB.)
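#
# Worked out for the 2 MB clusters used here (illustrative arithmetic
# mirroring the comment above):
#   last byte written:  768 + (2^31 - 512) - 1 = 2^31 + 255
#   clusters involved:  indices 0 through 1024, i.e. 1025 clusters
#   bytes covered:      1025 * 2 MB = 2149580800 > INT_MAX (2147483647)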
$QEMU_IO -c "write 768 $((2 ** 31 - 512))" "$TEST_IMG" | _filter_qemu_io
_check_test_img
# success, all done
echo "*** done"
rm -f $seq.full
status=0