qemu-e2k/tests/qemu-iotests/270.out

iotests: Test large write request to qcow2 file

Without HEAD^, the following happens when you attempt a large write
request to a qcow2 file such that the number of bytes covered by all
clusters involved in a single allocation will exceed INT_MAX:

(A) handle_alloc_space() decides to fill the whole area with zeroes
    and fails because bdrv_co_pwrite_zeroes() fails (the request is
    too large).

(B) If handle_alloc_space() does not do anything, but merge_cow()
    decides that the requests can be merged, it will create a too
    long IOV that later cannot be written.

(C) Otherwise, all parts will be written separately, so those
    requests will work.

In either B or C, though, qcow2_alloc_cluster_link_l2() will have an
overflow: We use an int (i) to iterate over nb_clusters, and then
calculate the L2 entry based on "i << s->cluster_bits" -- which will
overflow if the range covers more than INT_MAX bytes.  This then
leads to image corruption because the L2 entry will be wrong (it will
be recognized as a compressed cluster).

Even if that were not the case, the .cow_end area would be empty
(because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX,
so their difference, which is the .cow_end size, will be 0).

So this test checks that on such large requests, the image will not
be corrupted.  Unfortunately, we cannot check whether COW will be
handled correctly, because that data is discarded when it is written
to null-co (but we have to use null-co, because writing 2 GB of data
in a test is not quite reasonable).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-10-10 12:08:58 +02:00
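
To make the overflow above concrete, here is a minimal standalone C
sketch of the pattern the message describes (the cluster_bits and
index values are illustrative assumptions; this is not QEMU's actual
code):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int cluster_bits = 16;  /* e.g. 64 KiB clusters (assumed) */
        int i = 40000;          /* cluster index in one allocation */

        /* Buggy pattern: the shift is evaluated in plain int
         * arithmetic, so once the result exceeds INT_MAX it
         * overflows (undefined behaviour; it typically wraps to a
         * negative value). */
        int64_t bad = i << cluster_bits;

        /* Safe pattern: widen the operand before shifting. */
        int64_t good = (uint64_t)i << cluster_bits;

        printf("bad  = %" PRId64 "\n", bad);
        printf("good = %" PRId64 "\n", good);  /* 2621440000 */
        return 0;
    }

With 64 KiB clusters, any index of 32768 or more pushes the shifted
value past INT_MAX -- which is exactly what a single allocation
covering more than INT_MAX bytes produces.
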
QA output created by 270
Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=4294967296
wrote 512/512 bytes at offset 0
512 bytes, X ops; XX:XX:XX.X (XXX YiB/sec and XXX ops/sec)
iotests: Specify explicit backing format where sensible

There are many existing qcow2 images that specify a backing file but
no format.  This has been the source of CVEs in the past, but has
become more prominent of a problem now that libvirt has switched to
-blockdev.  With older -drive, at least the probing was always done
by qemu (so the only risk of a changed format between successive
boots of a guest was if qemu was upgraded and probed differently).
But with newer -blockdev, libvirt must specify a format; if libvirt
guesses raw where the image was formatted, this results in data
corruption visible to the guest; conversely, if libvirt guesses qcow2
where qemu was using raw, this can result in potential security
holes, so modern libvirt instead refuses to use images without
explicit backing format.

The change in libvirt to reject images without explicit backing
format has pointed out that a number of tools have been far too
reliant on probing in the past.  It's time to set a better example in
our own iotests of properly setting this parameter.

iotest calls to create, rebase, and convert are all impacted to some
degree.  It's a bit annoying that we are inconsistent on the command
line - while all of those accept -o backing_file=...,backing_fmt=...,
the shortcuts are different: create and rebase have -b and -F, while
convert has -B but no -F.  (amend has no shortcuts, but the previous
patch just deprecated the use of amend to change backing chains.)

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200706203954.341758-9-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-07-06 22:39:52 +02:00
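
For illustration, the shortcut inconsistency described above looks
like this on the command line (a sketch with invented image names;
only the flags themselves come from the message):

    # create and rebase take -b plus -F for the backing format:
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
    qemu-img rebase -b new-base.qcow2 -F qcow2 overlay.qcow2

    # convert has -B but no -F, so spell the format out via -o:
    qemu-img convert -O qcow2 \
        -o backing_file=base.qcow2,backing_fmt=qcow2 \
        input.raw output.qcow2
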
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=4294967296 backing_file=TEST_DIR/t.IMGFMT.base backing_fmt=IMGFMT data_file=TEST_DIR/t.IMGFMT.orig
wrote 2147483136/2147483136 bytes at offset 768
2 GiB, X ops; XX:XX:XX.X (XXX YiB/sec and XXX ops/sec)
No errors were found on the image.
*** done