Commit d759c951f3 ("replay: push
replay_mutex_lock up the call tree") removed the !timeout lock
optimization in the main loop.
The idea of the optimization was to avoid ping-pongs between threads by
keeping the Big QEMU Lock held across non-blocking (!timeout) main loop
iterations.
A warning is printed when the main loop spins without releasing BQL for
long periods of time. These warnings were supposed to aid debugging but
in practice they just alarm users. They are considered noise because
the cause of spinning is not shown and is hard to find.
Now that the lock optimization has been removed, there is no danger of
hogging the BQL. Drop the spin counter and the infamous warning.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
We're ready to declare the blockdev-create job stable. This renames the
corresponding QMP command from x-blockdev-create to blockdev-create.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This rewrites the test case 213 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 212 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 211 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 210 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This rewrites the test case 207 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
Most of the test cases stay the same as before (the exception being some
improved 'size' options that allow distinguishing which command created
the image), but in order to be able to implement proper job handling,
the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 206 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds two helper functions that are useful for test cases that make
use of a non-file protocol (specifically ssh).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Add an iotests.py function that runs a job and only returns when it is
destroyed. An error is logged if the job fails, and job-finalize and
job-dismiss commands are issued if necessary.
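The core of such a helper might look roughly like this (a simplified
sketch based on the description above; helper and event names such as
event_wait(), JOB_STATUS_CHANGE and log() are assumptions here, not the
literal patch):

    def run_job(vm, job_id):
        # Follow the job's status transitions until it is destroyed
        # ('null'), logging errors and issuing job-finalize/job-dismiss
        # where the job waits for them.
        error = None
        while True:
            ev = vm.event_wait('JOB_STATUS_CHANGE')
            status = ev['data']['status']
            if status == 'aborting':
                for j in vm.qmp('query-jobs')['return']:
                    if j['id'] == job_id:
                        error = j.get('error')
                        log('Job failed: %s' % error)
            elif status == 'pending':
                vm.qmp('job-finalize', id=job_id)
            elif status == 'concluded':
                vm.qmp('job-dismiss', id=job_id)
            elif status == 'null':
                return error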
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This adds a filter function to postprocess 'qemu-img info' output
(similar to what _img_info does), and an img_info_log() function that
calls 'qemu-img info' and logs the filtered output.
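A rough sketch of the shape of this helper (names and signature assumed
from the description, not taken verbatim from the patch):

    def img_info_log(filename):
        # Run 'qemu-img info', filter volatile details out of the output
        # (much like _img_info does for the bash tests), then log it.
        output = qemu_img_pipe('info', '-f', imgfmt, filename)
        log(filter_img_info(output, filename))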
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This adds a helper function that logs both the QMP request and the
received response before returning it.
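A minimal sketch of such a helper (the exact name and signature are
assumptions based on the description):

    def qmp_log(vm, cmd, **kwargs):
        # Log the request, send it, log the reply, and still hand the
        # reply back so the test can inspect it.
        log({'execute': cmd, 'arguments': kwargs})
        result = vm.qmp(cmd, **kwargs)
        log(result)
        return result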
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds a helper function that returns a list of QMP events that are
already filtered through filter_qmp_event().
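Roughly (a sketch; the wait parameter and exact names are assumptions):

    def get_qmp_events_filtered(vm, wait=True):
        # Drain the pending QMP events and return them already passed
        # through filter_qmp_event() (e.g. with timestamps normalised).
        return [filter_qmp_event(ev) for ev in vm.get_qmp_events(wait=wait)]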
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This changes the x-blockdev-create QMP command so that it doesn't block
the monitor and the main loop any more, but starts a background job that
performs the image creation.
The basic job as implemented here is all that is necessary to make image
creation asynchronous and to provide a QMP interface that can be marked
stable, but it still lacks a few features that jobs usually provide: The
job will ignore pause commands and it doesn't publish more than very
basic progress yet (total-progress is 1 and current-progress advances
from 0 to 1 when the driver callback returns). These features can be
added later without breaking compatibility.
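In terms of the QMP flow, a test can now drive image creation roughly
like this (a sketch; the qcow2 option names, the 'proto-node' node name
and the run_job() helper from the iotests patches in this series are
assumptions here):

    # Start the creation job instead of a synchronous command ...
    vm.qmp('x-blockdev-create', **{
        'job-id': 'create0',
        'options': {'driver': 'qcow2', 'file': 'proto-node',
                    'size': 64 * 1024 * 1024}})
    # ... then follow JOB_STATUS_CHANGE events until the job is
    # dismissed, as a run_job()-style helper would do.
    run_job(vm, 'create0')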
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
qmp_to_opts() used to be a method of QMPTestCase, but recently we
started to add more Python test cases that don't make use of
QMPTestCase. In order to make the method usable there, move it to VM.
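For illustration, the kind of conversion qmp_to_opts() does (a sketch;
the exact flattening rules are an assumption based on how the helper is
used to build command-line option strings):

    # A QMP-style dict such as
    #   {'driver': 'qcow2', 'file': {'driver': 'file', 'filename': 'img'}}
    # becomes a comma-separated option string with dotted keys, e.g.
    #   driver=qcow2,file.driver=file,file.filename=img
    opts = vm.qmp_to_opts({'driver': 'qcow2',
                           'file': {'driver': 'file', 'filename': 'img'}})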
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds a QMP event that is emitted whenever a job transitions from
one status to another.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
185 and 191 define a MIG_SOCKET even though they don't do anything with
migration. Remove the useless variable.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Grepping for "migrate" turns up a few test cases which use migration
but haven't been in the "migration" group so far. Add them to the group.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
The reference output file only works for the 'file' protocol. 'qemu-img
convert -p' makes a lot more progress updates for NFS than for file, so
disable the test for NFS.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
NFS paths were only partially filtered in _filter_img_create, _img_info
and _filter_img_info, resulting in "nfs://127.0.0.1TEST_DIR/t.IMGFMT".
This adds another replacement to the sed calls that matches the test
directory not as a host path, but as an NFS URL (the prefix as used for
$TEST_IMG).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Test cases were trying to use nfs:// URLs as local filenames, which made
every test fail for NFS. With TEST_IMG and TEST_IMG_FILE set like for
the other protocols, NFS tests can pass again.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Currently the timer is cancelled and the block job is entered by
block_job_resume(). This behavior causes drain to run extra blockjob
iterations when the job is sleeping due to the ratelimit.
This patch leaves the job asleep when block_job_resume() is called.
Jobs can still be forcibly woken up using block_job_enter(), which is
used to cancel jobs.
After this patch drain no longer runs extra blockjob iterations. This
is the expected behavior that qemu-iotests 185 used to rely on. We
temporarily changed the 185 test output to make it pass for the QEMU
2.12 release but now it's time to address this issue.
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Message-id: 20180508135436.30140-3-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit 8565c3ab53 ("qemu-iotests: fix
185") identified a race condition in a sub-test.
Similar issues also affect the other sub-tests. If disk I/O completes
quickly, it races with the QMP 'quit' command. This causes spurious
test failures because QMP events are emitted in an unpredictable order.
This test relies on QEMU internals and there is no QMP API for getting
deterministic behavior needed to make this test 100% reliable. At the
same time, the test is useful and it would be a shame to remove it.
Add sleep 0.5 to reduce the chance of races. This is not a real fix but
appears to reduce spurious failures in practice.
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180508135436.30140-2-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
COR across nodes (that is, you have some filter node between the
actual COR target and the node that performs the COR) cannot reliably
work together with the permission system when there is no explicit COR
node that can request the WRITE_UNCHANGED permission for its child.
This is because COR (currently) sneaks its requests by the usual
permission checks, so it can work without a WRITE* permission; but if
there is a filter node in between, that will re-issue the request, which
then passes through the usual check -- and if nobody has requested a
WRITE_UNCHANGED permission, that check will fail.
There is no real direct fix apart from hoping that there is someone who
has requested that permission; in case of just the qemu-io HMP command
(and no guest device), however, that is not the case. The real real fix
is to implement the copy-on-read flag through an implicitly added COR
node. Such a node can request the necessary permissions as shown in
this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-10-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
iotest 197 tests copy-on-read using the (now old) copy-on-read flag.
Copy it to 215 and modify it to use the COR filter driver instead.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-9-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180421132929.21610-8-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
userfaultfd support depends on the host kernel, so it may not be
available. If so, 181 and 201 should be skipped.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, common.qemu only allows matching for results indicating
success. The only way to fail is by provoking a timeout. However,
sometimes we do have a defined failure output and can match for that,
which saves us from having to wait for the timeout in case of failure.
Because failure can sometimes just result in a _notrun in the test, it
is actually important to care about being able to fail quickly.
Also, sometimes we simply do not get any specific output in case of
success. The only way to handle this currently would be to define an
error message as the string to look for, which means that actual success
results in a timeout. This is really bad because it unnecessarily slows
down a succeeding test.
Therefore, this patch adds a new parameter $success_or_failure to
_timed_wait_for and _send_qemu_cmd. Setting this to a non-empty string
makes both commands expect two match parameters: If the first matches,
the function succeeds. If the second matches, the function fails.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The L2 and refcount caches have default sizes that can be overridden
using the l2-cache-size and refcount-cache-size options (an additional
parameter named cache-size sets the combined size of both caches).
Unless forced by one of the aforementioned parameters, QEMU will set
the unspecified sizes so that the L2 cache is 4 times larger than the
refcount cache.
This is based on the premise that the refcount metadata needs to be
only a fourth of the L2 metadata to cover the same amount of disk
space. This is incorrect for two reasons:
a) The amount of disk covered by an L2 table depends solely on the
cluster size, but in the case of a refcount block it depends on
the cluster size *and* the width of each refcount entry.
The 4/1 ratio is only valid with 16-bit entries (the default).
b) When we talk about disk space and L2 tables we are talking about
guest space (L2 tables map guest clusters to host clusters),
whereas refcount blocks are used for host clusters (including
L1/L2 tables and the refcount blocks themselves). On a fully
populated (and uncompressed) qcow2 file, image size > virtual size
so there are more refcount entries than L2 entries.
Problem (a) could be fixed by adjusting the algorithm to take into
account the refcount entry width. Problem (b) could be fixed by
increasing the refcount cache size a bit to account for the clusters
used for qcow2 metadata.
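To put concrete numbers on point (a) (back-of-the-envelope arithmetic,
assuming 64 KB clusters; this is not code from the patch):

    cluster_size = 64 * 1024
    # Guest data mapped by one L2 table (8-byte entries, one per cluster)
    l2_coverage = (cluster_size // 8) * cluster_size            # 512 MB
    for refcount_bits in (16, 64):
        # Host data covered by one refcount block
        refblock_coverage = (cluster_size * 8 // refcount_bits) * cluster_size
        print(refcount_bits, refblock_coverage // l2_coverage)  # 4, then 1

With 16-bit refcount entries a refcount block covers four times as much
as an L2 table (hence the 4/1 ratio); with 64-bit entries the ratio is
already 1/1.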
However, this patch takes a completely different approach: instead of
keeping a fixed ratio between the two cache sizes, it assigns as much
as possible to the L2 cache and the remainder to the refcount cache.
The reason is that L2 tables are used for every single I/O request
from the guest and the effect of increasing the cache is significant
and clearly measurable. Refcount blocks are however only used for
cluster allocation and internal snapshots and in practice are accessed
sequentially in most cases, so the effect of increasing the cache is
negligible (even when doing random writes from the guest).
So, make the refcount cache as small as possible unless the user
explicitly asks for a larger one.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 9695182c2eb11b77cb319689a1ebaa4e7c9d6591.1523968389.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit abd3622cc0 added a case to 122
regarding how the qcow2 driver handles an incorrect compressed data
length value. This does not really fit into 122, as that file is
supposed to contain qemu-img convert test cases, which this case is not.
So this patch splits it off into its own file; maybe we will even get
more qcow2-only compression tests in the future.
Also, that test case does not work with refcount_bits=1, so mark that
option as unsupported.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406164108.26118-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We already have an extensive mirror test (041) which does cover
cancelling a mirror job, especially after it has emitted the READY
event. However, it does not check what exact events are emitted after
block-job-cancel is executed. More importantly, it does not use
throttling to ensure that it covers the case of block-job-cancel before
READY.
It would be possible to add this case to 041, but considering it is
already our largest test file, it makes sense to create a new file for
these cases.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501220509.14152-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit b76e4458b1 ("block/mirror: change
the semantic of 'force' of block-job-cancel") accidentally removed the
ratelimit in the mirror job.
Reintroduce the ratelimit but keep the block-job-cancel force=true
behavior that was added in commit
b76e4458b1.
Note that block_job_sleep_ns() returns immediately when the job is
cancelled. Therefore it's safe to unconditionally call
block_job_sleep_ns() - a cancelled job does not sleep.
This commit fixes the non-deterministic qemu-iotests 185 output. The
test relies on the ratelimit to make the job sleep until the 'quit'
command is processed. Previously the job could complete before the
'quit' command was received since there was no ratelimit.
Cc: Liang Li <liliang.opensource@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180424123527.19168-1-stefanha@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Improve and fix 169:
- use MIGRATION events instead of RESUME
- add a TODO: enable the dirty-bitmaps capability for the offline case
- recreate vm_b without -incoming near the end of the test
This (likely) fixes racy failures of at least the following types:
- timeout while waiting for the RESUME event
- sha256 mismatch on line 136 (142 after this patch)
- failure of self.vm_b.launch() on line 135 (141 after this patch)
It also definitely fixes the stray 'cat' processes left behind after
the test finishes.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-3-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 4486e89c21 ("vl: introduce
vm_shutdown()") added a bdrv_drain_all() call. As a side-effect of the
drain operation the block job iterates one more time than before. The
185 output no longer matches and the test is failing now.
It may be possible to avoid the superfluous block job iteration, but
that type of patch is not suitable late in the QEMU 2.12 release cycle.
This patch simply updates the 185 output file. The new behavior is
correct, just not optimal, so make the test pass again.
Fixes: 4486e89c21 ("vl: introduce vm_shutdown()")
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu-iotests doesn't support dmg, and the dmg block driver doesn't
support image creation. Two test cases declare dmg as supported, but
that's obviously wrong for both reasons. Remove the declaration.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Blacklist these formats, as they don't support image creation, as
they themselves report:
> ./qemu-img create -f bochs x 1m
qemu-img: x: Format driver 'bochs' does not support image creation
> ./qemu-img create -f cloop x 1m
qemu-img: x: Format driver 'cloop' does not support image creation
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Support "generic" formats like in bash tests with their
_supported_fmt generic
The test, supporting "generic" formats will run if IMGFMT_GENERIC =
true, which is default, except for bochs and cloop. However, you can
use verify_image_format(['generic', 'bochs']), which will run for all
except cloop (for this moment).
Also, add an assert (we don't want set both arguments) and remove
duplication.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If there is more than one event, wait_until_completed() might return
the 2nd event even if the 1st event is JOB_COMPLETED, since the for
loop will continue to run even if completed is set to True.
It never happened before, but it can be triggered when OOB is enabled
due to the RESUME startup message. Fix that up.
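In a simplified sketch, the fixed control flow looks like this (not the
literal code; the event name is used as in the text above):

    def wait_until_completed(vm):
        # Return the completion event itself as soon as it is seen, so a
        # later event (e.g. the RESUME startup message mentioned above)
        # cannot end up being the one that is returned.
        while True:
            for event in vm.get_qmp_events(wait=True):
                if event['event'] == 'JOB_COMPLETED':
                    return event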
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180408030542.17855-1-peterx@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Merge remote-tracking branch 'mreitz/tags/pull-block-2018-04-03' into queue-block
A fix for preallocated truncation, a new iotest, and a fix to make the iotests work more comfortably on ppc64
# gpg: Signature made Tue Apr 3 17:40:57 2018 CEST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* mreitz/tags/pull-block-2018-04-03:
iotests: Test abnormally large size in compressed cluster descriptor
qemu-iotests: Use ppc64 qemu_arch on ppc64le host
iotests: Test preallocated truncate of 2G image
block/file-posix: Fix fully preallocated truncate
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
L2 entries for compressed clusters have a field that indicates the
number of sectors used to store the data in the image.
That's however not the size of the compressed data itself, just the
number of sectors where that data is located. The actual data size is
usually not a multiple of the sector size, and therefore cannot be
represented with this field.
The way it works is that QEMU reads all the specified sectors and
starts decompressing the data until there's enough to recover the
original uncompressed cluster. If there are any bytes left that
haven't been decompressed they are simply ignored.
One consequence of this is that even if the size field is larger than
it needs to be, QEMU can handle it just fine: it will read more data
from disk but it will ignore the extra bytes.
This test creates an image with two compressed clusters that use 5
sectors (2.5 KB) each, increases the size field to the maximum (8192
sectors, or 4 MB) and verifies that the data can be read without
problems.
This test is important because while the decompressed data takes
exactly one cluster, the maximum value allowed in the compressed size
field is twice the cluster size. So although QEMU won't produce images
with such large values we need to make sure that it can handle them.
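The '8192 sectors, or 4 MB' figure follows from the layout of the
compressed cluster descriptor (a back-of-the-envelope sketch; the 2 MB
cluster size is an assumption implied by these numbers):

    # The sector-count field of a compressed cluster descriptor has
    # (cluster_bits - 8) bits, so the compressed data can span roughly
    # 2^(cluster_bits - 8) sectors of 512 bytes = 2 * cluster_size.
    cluster_bits = 21                      # 2 MB clusters
    max_sectors = 1 << (cluster_bits - 8)
    print(max_sectors, max_sectors * 512)  # -> 8192 sectors, 4 MB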
Another effect of increasing the size field is that it can make
it include data from the following host cluster(s). In this case
'qemu-img check' will detect that the refcounts are not correct, and
we'll need to rebuild them.
Additionally, this patch also tests that decreasing the size corrupts
the image since the original data can no longer be recovered. In this
case QEMU returns an error when trying to read the compressed data,
but 'qemu-img check' doesn't see anything wrong if the refcounts are
consistent.
One possible task for the future is to make 'qemu-img check' verify
the sizes of the compressed clusters, by trying to decompress the data
and checking that the size stored in the L2 entry is correct.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180329120745.11154-1-berto@igalia.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The qemu target does not always correspond to the host machine type.
For example, the qemu target for a ppc64le host is ppc64. Let's
introduce a "qemu_arch" variable to store the qemu architecture that
matches the current host architecture and use it when auto-detecting
the default qemu binary.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Message-id: 20180329112053.5399-2-ldoktor@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180228131315.30194-3-mreitz@redhat.com
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Support luks image creation like in 205
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit ac64273c66 modified the output of iotest 186, changing
the QOM path of floppy drives from /machine/unattached/device[17] to
/machine/unattached/device[13].
Instead of updating the test output to reflect this change, this patch
adds a new filter that hides all QOM paths from the 'Attached to:'
line of the 'info block' command.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
SCSI controllers are no longer created automatically for
-drive if=scsi, so this patch updates the tests that relied
on that.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2018-03-26' into staging
A fix for dirty bitmap migration through shared storage, and a VMDK
patch keeping us from creating too large extents.
# gpg: Signature made Mon 26 Mar 2018 21:17:05 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2018-03-26:
vmdk: return ERROR when cluster sector is larger than vmdk limitation
iotests: enable shared migration cases in 169
qcow2: fix bitmaps loading when bitmaps already exist
qcow2-bitmap: add qcow2_reopen_bitmaps_rw_hint()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>