Seems that it comes up enough.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-17-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
The Valgrind tool reports an uninitialised buffer 'buf' allocated on the
stack of the function guess_disk_lchs().
Pass 'read-zeroes=on' to the null block driver to make it deterministic.
The output of tests 051, 186 and 227 now includes the 'read-zeroes'
parameter, so the reference output files are updated accordingly.
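As a hedged illustration (not taken from the tests themselves, and assuming a qemu-io binary is on PATH), the deterministic behaviour of 'read-zeroes=on' can be seen directly:

    # Hedged sketch: with read-zeroes=on every read from the null driver
    # returns zeroes, so the pattern check below ('-P 0') succeeds instead
    # of depending on uninitialised memory.
    import subprocess

    subprocess.run(["qemu-io", "--image-opts",
                    "-c", "read -P 0 0 4k",
                    "driver=null-co,size=64M,read-zeroes=on"],
                   check=True)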
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In Python 3, / always performs floating-point division. We usually do not
want this, and since Python 2.7 understands // as well, change all integer
divisions to use that operator.
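A short illustration of the difference, as seen under Python 3:

    # '/' in Python 3 is true (floating-point) division; '//' floors and
    # gives the integer result on both Python 2.7 and Python 3.
    assert 7 / 2 == 3.5     # Python 3: float result even for int operands
    assert 7 // 2 == 3      # integer (floor) division on 2.7 and 3 alike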
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20181022135307.14398-5-mreitz@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The previous patch fixes a problem in which draining a block device
with more than one throttled request can make it wait first for the
completion of requests in other members of the same group.
This patch updates test_remove_group_member() in iotest 093 to
reproduce that scenario. This updated test would hang QEMU without the
fix from the previous patch.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A throttle group can have several members, and each one of them can
have several pending requests in the queue.
The requests are processed in a round-robin fashion: the algorithm decides
which drive is going to run the next request and sets a timer on it. Once
the timer fires and the throttled request has run, the next drive from the
group is selected and a new timer is set.
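A minimal sketch of that round-robin selection, purely illustrative and not QEMU's actual implementation:

    from collections import deque

    class Drive:
        def __init__(self, name):
            self.name = name
        def arm_timer(self):
            # Stand-in for arming the real throttle timer on this drive.
            print("timer armed on", self.name)

    class ThrottleGroup:
        def __init__(self, members):
            self.members = deque(members)
        def schedule_next(self):
            # Advance to the next member in round-robin order, set its timer.
            self.members.rotate(-1)
            drive = self.members[0]
            drive.arm_timer()
            return drive

    group = ThrottleGroup([Drive("vda"), Drive("vdb"), Drive("vdc")])
    group.schedule_next()   # vdb's timer is armed
    group.schedule_next()   # then vdc's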
If the user removed a drive from a group while that drive had a timer set,
the code did not set up a new timer in one of the remaining members of the
group, freezing their I/O.
This problem was fixed in 6fccbb475b,
and this patch adds a new test case that reproduces this exact
scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that iotest 093 proves that the throttling configuration
survives a blockdev-remove-medium/blockdev-insert-medium pair, the
original reason for declaring these commands experimental is gone
(see commit 6e0abc251d).
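As a hedged, illustrative fragment (an iotests-style test where 'vm' is a running VM and 'drive0'/'new-medium' were set up beforehand), the medium swap boils down to:

    # Hypothetical snippet; all names are made up.
    vm.qmp('blockdev-remove-medium', id='drive0')
    vm.qmp('blockdev-insert-medium', id='drive0', node_name='new-medium')
    # The throttle limits previously set on 'drive0' should still be in place.
    result = vm.qmp('query-block')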
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110224302.14424-5-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This test hotplugs a CD drive to a VM and checks that I/O limits can
be set only when the drive has media inserted and that they are kept
when the media is replaced.
This also tests the removal of a device with valid I/O limits set but
no media inserted. This involves deleting and disabling the limits
of a BlockBackend without a BlockDriverState, a scenario that used to
crash until the fixes from the last couple of patches.
[Python PEP8 fixup: "Don't use spaces around the = sign when used to
indicate a keyword argument or a default parameter value"
--Stefan]
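For reference, a small example of that rule (names made up):

    def set_limits(iops=0, bps=0):          # defaults: no spaces around '='
        return {'iops': iops, 'bps': bps}

    set_limits(iops=100, bps=1024)          # keyword arguments: same rule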
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 071eb397118ed207c5a7f01d58766e415ee18d6a.1510339534.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The 093 throttling test submits twice as many requests as the throttle
limit in order to ensure that we reach the limit. The remaining
requests are left in-flight at the end of each test iteration.
Commit 452589b6b4 ("vl.c/exit: pause cpus
before closing block devices") exposed a hang in 093. This happens
because requests are still in flight when QEMU terminates but
QEMU_CLOCK_VIRTUAL time is frozen. bdrv_drain_all() hangs forever since
throttled requests cannot complete.
Step the clock at the end of each test iteration so that in-flight
requests actually finish. This solves the hang and is cleaner than
leaving requests in flight.
Note that this could also be "fixed" by disabling throttling when drives
are closed in QEMU. That approach has two issues:
1. We must drain requests before disabling throttling, so the hang
cannot be easily avoided!
2. Any time QEMU disables throttling internally there is a chance that
malicious users can abuse the code path to bypass throttling limits.
Therefore it makes more sense to fix the test case than to modify QEMU.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170815130502.8736-1-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
iotest 093 contains a test that creates a throttling group with
several drives and performs I/O in all of them. This patch adds a new
test that creates a similar setup but performs I/O in only one of the
drives at a time.
This is useful to check that the round-robin algorithm behaves properly
in these scenarios, and it was written specifically with the regression
introduced in 27ccdd5259 as an example.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.
This patch adds a new test to check the naming of throttling groups.
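A hedged illustration of the two naming cases (drive names are made up, 'vm' stands in for a running iotests VM):

    # Explicit group name: drives configured this way share the same bucket.
    vm.qmp('block_set_io_throttle', device='drive0',
           bps=0, bps_rd=0, bps_wr=0,
           iops=100, iops_rd=0, iops_wr=0,
           group='limits0')
    # No 'group' argument: the group is simply named after the drive.
    vm.qmp('block_set_io_throttle', device='drive1',
           bps=0, bps_rd=0, bps_wr=0,
           iops=100, iops_rd=0, iops_wr=0)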
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: d87d02823a6b91609509d8bb18e2f5dbd9a6102c.1467986342.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds a new test that checks that the burst settings
('iops_max', 'iops_max_length', etc.) of the throttling code work as
expected.
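A hedged sketch of what setting such burst values looks like over QMP (device name made up, values illustrative):

    # Sustained limit of 100 iops, with bursts of up to 2000 iops allowed
    # for at most 5 seconds ('iops_max' / 'iops_max_length').
    vm.qmp('block_set_io_throttle', device='drive0',
           bps=0, bps_rd=0, bps_wr=0,
           iops=100, iops_rd=0, iops_wr=0,
           iops_max=2000, iops_max_length=5)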
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch improves the test by attaching a varying number of drives
to the VM and putting them in the same throttling group. The test
verifies that the I/O is evenly distributed among all members of the
group, and that the limits are enforced.
By default the test is repeated 3 times with 1, 2 and 3 drives, but
the maximum number of simultaneous drives is configurable.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 513df1da5c658878191b579ebcddd985adcd4122.1433779731.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This case utilizes the qemu-io command "aio_{read,write} -q" to verify the
effectiveness of I/O throttling options.
It is implemented by driving the VM timer from the qtest protocol, so the
throttling timers fire after a deterministic amount of time. Then we
verify that the completed I/O requests are within 10% of the bps and iops
limits.
The "null" protocol is used as the disk backend so that no actual disk I/O
is performed on the host; this makes the blockstats much more
deterministic. Both "null-aio" and "null-co" are covered, which also
serves as a simple cross-validation test for the driver code.
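Put together, a hedged sketch of one iteration of such a check ('vm', 'drive0' and the numbers are illustrative, assuming the iotests harness):

    # Queue a throttled request against a null backend, advance the virtual
    # clock so the throttle timers fire, then read back the statistics.
    vm.hmp_qemu_io('drive0', 'aio_write -q 0 512')
    vm.qtest('clock_step %d' % (1000 * 1000 * 1000))   # step one second
    stats = vm.qmp('query-blockstats')                 # compare against limits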
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1422586186-9925-6-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>