These cases require a bit more thought to review; in each case, the
code was appending to a list, but not with a FOOList **tail variable.
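For illustration, a conversion of this kind looks roughly as follows (a
sketch, not the literal hunks of this patch; FooList stands in for the
concrete QAPI type):

    /* Before: open-coded append without a tail pointer */
    FooList *list = NULL, *last = NULL;
    FooList *entry = g_new0(FooList, 1);
    entry->value = foo;
    if (last) {
        last->next = entry;
    } else {
        list = entry;
    }
    last = entry;

    /* After: introduce a FooList **tail and let the macro do the work */
    FooList *list = NULL, **tail = &list;
    QAPI_LIST_APPEND(tail, foo);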
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210113221013.390592-6-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Flawed change to qmp_guest_network_get_interfaces() dropped]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The easiest spots to use QAPI_LIST_APPEND are where we already have an
obvious pointer to the tail of a list. While at it, consistently use
the variable name 'tail' for that purpose.
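The typical hunk then looks roughly like this (sketch only):

    /* Before: the tail pointer is already there, advanced by hand */
    *tail = g_new0(FooList, 1);
    (*tail)->value = foo;
    tail = &(*tail)->next;

    /* After */
    QAPI_LIST_APPEND(tail, foo);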
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210113221013.390592-5-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Similar to the existing QAPI_LIST_PREPEND, but designed for use where
we want to preserve insertion order. Callers will be added in
upcoming patches. Note the difference in signature: PREPEND takes
List*, APPEND takes List**.
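For reference, the macro is expected to look roughly like this (a
sketch, placed alongside QAPI_LIST_PREPEND in include/qapi/util.h; the
exact definition may differ slightly):

    /* Append 'element' at '*tail', then advance 'tail' to the new end
     * of the list.  Callers start with
     * 'FooList *list = NULL, **tail = &list;' so that the list head is
     * filled in on the first append. */
    #define QAPI_LIST_APPEND(tail, element) do { \
        *(tail) = g_malloc0(sizeof(**(tail))); \
        (*(tail))->value = (element); \
        (tail) = &(*(tail))->next; \
    } while (0)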
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210113221013.390592-4-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Commit 54aa3de72e switched multiple sites to use QAPI_LIST_PREPEND
instead of open-coding, but missed a couple of spots.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210113221013.390592-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
At first glance, the loop in qmp_query_rx_filter() has early return
paths that could leak any allocation of filter_list from a previous
iteration. On closer inspection, however, it is obvious that all of the
early exits are guarded by has_name, and that the bulk of the loop body
can be executed at most once when the user is filtering by name; thus,
any early exit coincides with an empty list. Add asserts to make this
obvious.
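A sketch of the resulting loop shape (helper names such as
next_client() and client_has_no_rx_filter() are illustrative, not the
real identifiers):

    RxFilterInfoList *filter_list = NULL;

    while ((nc = next_client()) != NULL) {
        if (has_name && strcmp(nc->name, name) != 0) {
            continue;
        }
        if (client_has_no_rx_filter(nc)) {
            /* Early exit: only reachable when filtering by name,
             * before anything was appended, so the list is empty. */
            assert(!filter_list);
            error_setg(errp, "net client %s has no rx-filter state",
                       nc->name);
            return NULL;
        }
        /* ... append this client's info to filter_list ... */
        if (has_name) {
            break;   /* at most one iteration does real work */
        }
    }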
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210113221013.390592-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Using the cfg.use_non_secure bitfield and the MMU access type, we can
determine whether the access should be secure or not.
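Conceptually, the check amounts to something like this (a hypothetical
sketch; the constant and helper names are assumptions, not the actual
identifiers in target/microblaze):

    /* One bit of cfg.use_non_secure per interface; a set bit makes the
     * corresponding accesses non-secure.  (Cached accesses would
     * consult the M_AXI_DC/M_AXI_IC bits instead.) */
    static bool mb_access_is_secure(CPUMBState *env,
                                    MMUAccessType access_type)
    {
        if (access_type == MMU_INST_FETCH) {
            return !(env->cfg.use_non_secure & USE_NON_SECURE_M_AXI_IP);
        }
        return !(env->cfg.use_non_secure & USE_NON_SECURE_M_AXI_DP);
    }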
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-4-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Using MMUAccessType makes the variable's purpose clearer.
No functional change.
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-3-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
This property is used to control the security of the following interfaces
on MicroBlaze:
M_AXI_DP - data interface
M_AXI_IP - instruction interface
M_AXI_DC - dcache interface
M_AXI_IC - icache interface
It works by enabling or disabling the use of the non_secure[3:0] signals.
Interfaces and their corresponding values are taken from:
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug984-vivado-microblaze-ref.pdf
page 153.
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-2-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
- Make backup block jobs use asynchronous requests with the block-copy
module
- Use COR filter node for stream block jobs
- Make coroutine-sigaltstack’s qemu_coroutine_new() function thread-safe
- Report error string when file locking fails with an unexpected errno
- iotest fixes, additions, and some refactoring
-----BEGIN PGP SIGNATURE-----
iQFGBAABCAAwFiEEkb62CjDbPohX0Rgp9AfbAGHVz0AFAmAQJFsSHG1yZWl0ekBy
ZWRoYXQuY29tAAoJEPQH2wBh1c9AWZEIALwZwQU82RE+gezeDK7XPqBYuGMOVzbJ
wdvQLnAx+vEo2idLlWm4RE7GhibhTB5iRaiIGmpZhvja/+2BrZHve4GHD5+GBolF
QvLoIdfkPqhORhuAMc1+8rkP4yqv1bLBuOB6TLh15Bw6B6Q7clQbDGqubK2HIwgb
h/RcmMj/93ZN27WM589Zdk5i/1kE11sOyCKNP+UQ3hnKz7ecqtY7pHYvFS3Q9e6l
xpc/QyFhTw/OHfaUViL6W0MWhdqhf/mBlXqzvFoqVWEfdJs+56E1BeUsHRUSxTjs
2v6y7AC3s9gZC/9Icpui3mXvxAQQPTsaF0UfjEPiW+q+NiC07FtdqHo=
=S1Dc
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2021-01-26' into staging
Block patches:
- Make backup block jobs use asynchronous requests with the block-copy
module
- Use COR filter node for stream block jobs
- Make coroutine-sigaltstack’s qemu_coroutine_new() function thread-safe
- Report error string when file locking fails with an unexpected errno
- iotest fixes, additions, and some refactoring
# gpg: Signature made Tue 26 Jan 2021 14:16:59 GMT
# gpg: using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
# gpg: issuer "mreitz@redhat.com"
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2021-01-26: (53 commits)
iotests/178: Pass value to invalid option
iotests/118: Drop 'change' test
iotests: Add test for the regression fixed in c8bf9a9169
block: report errno when flock fcntl fails
simplebench: add bench-backup.py
simplebench: bench_block_job: add cmd_options argument
simplebench/bench_block_job: use correct shebang line with python3
block/block-copy: drop unused argument of block_copy()
block/block-copy: drop unused block_copy_set_progress_callback()
qapi: backup: disable copy_range by default
backup: move to block-copy
block/backup: drop extra gotos from backup_run()
block/block-copy: make progress_bytes_callback optional
iotests: 257: prepare for backup over block-copy
iotests: 219: prepare for backup over block-copy
iotests: 185: prepare for backup over block-copy
iotests/129: Limit backup's max-chunk/max-workers
iotests: 56: prepare for backup over block-copy
qapi: backup: add max-chunk and max-workers to x-perf struct
job: call job_enter from job_pause
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ccd3b3b811 has deprecated short-form boolean options, i.e. options that
are given as a bare name without a value. All options without values
are interpreted as boolean options, so this includes the invalid option
"snapshot.foo" used in iotest 178.
So after ccd3b3b811, 178 fails with:
+qemu-img: warning: short-form boolean option 'snapshot.foo' deprecated
+Please use snapshot.foo=on instead
Suppress that deprecation warning by passing some value to it (it does
not matter which, because the option is invalid anyway).
Fixes: ccd3b3b811
("qemu-option: warn for short-form boolean options")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126123834.115915-1-mreitz@redhat.com>
Commit 0afec75734 removed the 'change' QMP command, so we can no
longer test it in 118.
Fixes: 0afec75734
('qmp: remove deprecated "change" command')
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126104833.57026-1-mreitz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When a call to fcntl(2) for the purpose of adding file locks fails
with an error other than EAGAIN or EACCES, report the error returned
by fcntl.
EAGAIN or EACCES are elided as they are considered to be common
failures, indicating that a conflicting lock is held by another
process.
No errors are elided when removing file locks.
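The effect is roughly the following (sketch; the function name and
exact messages are illustrative):

    /* Report unexpected errnos when adding a lock; stay terse for the
     * common "somebody else holds a conflicting lock" cases. */
    static int apply_lock(int fd, struct flock *fl, bool adding,
                          Error **errp)
    {
        int ret;

        do {
            ret = fcntl(fd, F_SETLK, fl);
        } while (ret == -1 && errno == EINTR);

        if (ret == -1) {
            int err = errno;

            if (adding && (err == EAGAIN || err == EACCES)) {
                error_setg(errp, "Failed to get lock on file");
            } else {
                error_setg_errno(errp, err, "Failed to %s lock on file",
                                 adding ? "get" : "release");
            }
            return -err;
        }
        return 0;
    }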
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210113164447.2545785-1-david.edmondson@oracle.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-22-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-21-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Drop unused code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-20-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
A further commit will add a benchmark
(scripts/simplebench/bench-backup.py), which will show that backup
works better with async parallel requests (previous commit) and with
copy_range disabled. So, let's disable copy_range by default.
Note: the option was added several commits ago with a default of true,
to follow the old behavior (the feature was enabled unconditionally);
only now are we changing the default behavior.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-19-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This brings async request handling and block-status driven chunk sizes
to backup out of the box, which improves backup performance.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-18-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-17-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We are going to stop using this callback in the following commit.
Still, the callback handling code will only be dropped in a separate
commit. So, for now, let's make it optional.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-16-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Iotest 257 dumps a lot of in-progress information about the backup
job, such as the offset and bitmap dirtiness. An upcoming commit will
move backup to a single block-copy call, which will introduce async
parallel requests instead of plain cluster-by-cluster copying. To keep
things deterministic, allow only one worker (only one copy request at a
time) for this test.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-15-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The upcoming change of moving backup to a single block-copy call will
make the copy chunk size and the cluster size two separate things. So,
even with a qcow2 image using 64k clusters, the default chunk would be
1M.
Test 219 depends on the chunk size used. Update it to specify the chunk
size explicitly for backup, as it already does for mirror.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-14-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The upcoming change of moving backup to a single block-copy call will
make the copy chunk size and the cluster size two separate things. So,
even with a qcow2 image using 64k clusters, the default chunk would be
1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. That will change: the first iteration
would result in offset=1M regardless of the speed.
So, let's specify explicitly what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we do not need to limit
max-workers, as the block-copy rate limiter will handle the situation
and will not start new workers once the speed limit is reached.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-13-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Right now, this does not change anything, because backup ignores
max-chunk and max-workers. However, as soon as backup is switched over
to block-copy for the background copying process, we will need it to
keep 129 passing.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210120102043.28346-1-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, we'll have to wait for the paused
status, as we need to wait for several parallel requests. So, let's
gently wait instead of just asserting that the job is already paused.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-12-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add new parameters to configure future backup features. The patch
introduces neither aio backup requests (so we actually have only one
worker) nor requests larger than one cluster. Still, we formally
satisfy these maximums anyway, so add the parameters now to facilitate
a further patch that will really change the backup job behavior.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-11-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
If the main job coroutine has called job_yield() (while some
background process is in progress), we should give it a chance to call
job_pause_point(). This will be used in backup once it is moved onto
async block-copy.
Note that job_user_pause() is not enough: we want to handle
child_job_drained_begin() as well, which calls job_pause().
Still, if the job is already in job_do_yield() within
job_pause_point(), we should not enter it.
iotest 109's output is modified: on stop we do bdrv_drain_all(), which
now triggers a job pause immediately (and the pause after ready is a
standby pause).
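The change boils down to something like this (sketch; the real code
may guard the job_enter() call slightly differently):

    void job_pause(Job *job)
    {
        job->pause_count++;
        if (!job->paused) {
            /* Kick a yielded job so that it reaches job_pause_point()
             * now, instead of pausing only at its next natural
             * wakeup. */
            job_enter(job);
        }
    }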
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210116214705.822267-10-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We are going to use an async block-copy call in backup, so we'll need
to pass the backup speed setting through to the block-copy call.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-9-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Add a function to cancel a running async block-copy call. It will be
used in backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We are going to use one async block-copy operation directly for the
backup job, so we need a rate limiter.
We want to maintain the current backup behavior: only background
copying is limited, and copy-before-write operations only participate
in the limit calculation. Therefore, we need one rate limiter for the
block-copy state and a boolean flag in the block-copy call state that
says whether that call is actually limited.
Note that we can't just account each chunk in the limiter after it has
been copied successfully: that would not save us from starting a lot of
async sub-requests that exceed the limit by far too much. Instead,
let's use the following scheme on sub-request creation:
1. If the limit is not exceeded at the moment, create the request and
account it immediately.
2. If the limit is already exceeded, do not create the sub-request and
handle the limit instead (by sleeping).
With this approach we'll never exceed the limit by more than one
sub-request (which pretty much matches the current backup behavior).
Note also that if there is an in-flight async block-copy call,
block_copy_kick() should be used after set-speed to apply the new
setting faster. For that, block_copy_kick() is made public in this
patch.
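In pseudo-C, the per-sub-request logic inside the loop that spawns
sub-requests looks roughly like this (illustrative identifiers;
ratelimit_calculate_delay() is the existing rate-limit helper):

    if (call_state->limited) {   /* only background copying is limited */
        if (ratelimit_calculate_delay(&s->rate_limit, 0)) {
            /* 2. limit already exceeded: no new sub-request, just
             *    sleep (block_copy_kick() can interrupt this sleep
             *    after a set-speed change) */
            sleep_until_kicked(call_state);
            continue;
        }
        /* 1. limit not exceeded: account the whole chunk up front... */
        ratelimit_calculate_delay(&s->rate_limit, task->bytes);
    }
    /* ...and start the async sub-request */
    start_async_subrequest(task);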
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-7-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
It simplifies debugging.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
They will be used for backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-5-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We'll need an async block-copy invocation to use directly in backup.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Refactor the common path to take a BlockCopyCallState pointer as a
parameter, to prepare it for use in asynchronous block-copy (at the
very least, we'll need to run block-copy in a coroutine, passing all
the parameters as one pointer).
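At this point the struct is essentially a bag holding one call's
parameters and results, along the lines of (field names beyond
offset/bytes are illustrative):

    /* All state of one block_copy() invocation in a single object, so
     * that a later patch can hand just this pointer to a coroutine. */
    typedef struct BlockCopyCallState {
        /* parameters */
        int64_t offset;
        int64_t bytes;

        /* result */
        bool error_is_read;
        int ret;
    } BlockCopyCallState;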
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Experiments show that copy_range does not always make things faster.
So, to make experimentation simpler, let's add a parameter. Some more
performance parameters will be added soon, so here is a new struct.
For now, add the new backup QMP parameter with an x- prefix, for the
following reasons:
- We are going to add more performance parameters; some will be
related to the whole block-copy process, some only to background
copying in backup (ignored for copy-before-write operations).
- On the other hand, we are going to use the block-copy interface in
other block jobs, which will need performance options as well. Those
should use the same structure, or at least something related.
So, there are too many unclear things about the interface for now, and
we need the new options mostly for testing. Let's keep them
experimental for a while.
In do_backup_common(), the new x-perf parameter is handled in a way
that makes adding further options simpler.
We add use-copy-range with a default of true, and we'll change the
default in a further patch, after moving backup to use block-copy.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-2-vsementsov@virtuozzo.com>
[mreitz: s/5\.2/6.0/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Disposition (action) for any given signal is global for the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: One of them may revert
the SIGUSR2 handler to SIG_DFL, between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2. That
SIGUSR2 will then terminate the QEMU process abnormally.
We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler. To do so, wrap the whole section where
that is done in a mutex.
Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new(). Laszlo has posted a patch to do so here:
https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05962.html
However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with it. The mutex proposed
here may negatively impact performance, but the change is much simpler.
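The change amounts to something like the following in
qemu_coroutine_new() (sketch; variable names may differ):

    static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;
    ...
    /* Serialize all modification of the process-global SIGUSR2
     * handler. */
    pthread_mutex_lock(&sigusr2_mutex);

    sa.sa_handler = coroutine_trampoline;
    sigfillset(&sa.sa_mask);
    sa.sa_flags = SA_ONSTACK;
    if (sigaction(SIGUSR2, &sa, &osa) != 0) {
        abort();
    }

    /* ... raise(SIGUSR2) and wait for the trampoline to set up the
     * coroutine stack ... */

    /* Restore the old handler while still holding the lock. */
    sigaction(SIGUSR2, &osa, NULL);

    pthread_mutex_unlock(&sigusr2_mutex);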
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210125120305.19520-1-mreitz@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
And consequentially drop it from 297's skip list.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-11-mreitz@redhat.com>
And consequentially drop it from 297's skip list.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210118105720.14824-10-mreitz@redhat.com>
Issuing 'stop' on the VM drains all nodes. If the mirror job has many
large requests in flight, this may lead to significant I/O, which can
look a bit as if 'stop' made the job try to complete (which is what 129
is supposed to verify does not happen).
We can limit the I/O in flight by limiting the buffer size, so mirror
will make very little progress during the 'stop' drain.
(We do not need to do anything about commit, which has a buffer size of
512 kB by default; or backup, which goes cluster by cluster. Once we
have asynchronous requests for backup, that will change, but then we can
fine-tune the backup job to only perform a single request on a very
small chunk, too.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-9-mreitz@redhat.com>
Before this patch, test_block_commit() performs an active commit, which
under the hood is a mirror job. If we want to test various different
block jobs, we should perhaps run an actual commit job instead.
Doing so requires adding an overlay above the source node before the
commit is done (and then specifying the source node as the top node for
the commit job).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-8-mreitz@redhat.com>
Throttling on the BB has not affected block jobs in a while, so it is
possible that one of the jobs in 129 finishes before the VM is stopped.
We can fix that by running the job from a throttle node.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-7-mreitz@redhat.com>
@busy is false when the job is paused, which happens all the time
because that is how jobs yield (e.g. for mirror at least since commit
565ac01f8d).
Back when 129 was added (2015), perhaps there was no better way of
checking whether the job was still actually running. Now we have the
@status field (as of 58b295ba52, i.e. 2018), which can give us exactly
that information.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-6-mreitz@redhat.com>
Instead of checking iotests.py only, check all Python files in the
qemu-iotests/ directory. Of course, most of them do not pass, so there
is an extensive skip list for now. (The only files that do pass are
209, 254, 283, and iotests.py.)
(Alternatively, we could have the opposite, i.e. an explicit list of
files that we do want to check, but I think it is better to check files
by default.)
Unless started in debug mode (./check -d), the output has no information
on which files are tested, so we will not have a problem e.g. with
backports, where some files may be missing when compared to upstream.
Besides the technical rewrite, some more things are changed:
- For the pylint invocation, PYTHONPATH is adjusted. This mirrors
setting MYPYPATH for mypy.
- Also, MYPYPATH is now derived from PYTHONPATH, so that we include
paths set by the environment. Maybe at some point we want to let the
check script add '../../python/' to PYTHONPATH so that iotests.py does
not need to do that.
- Passing --notes=FIXME,XXX to pylint suppresses warnings for TODO
comments. TODO is fine, we do not need 297 to complain about such
comments.
- The "Success" line from mypy's output is suppressed, because (A) it
does not add useful information, and (B) it would leak information
about the files having been tested to the reference output, which we
decidedly do not want.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210118105720.14824-3-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
There are a couple of environment variables that we fetch with
os.environ.get() without supplying a default. Clearly they are required
and expected to be set by the ./check script (as evidenced by
execute_setup_common(), which checks for test_dir and
qemu_default_machine to be set, and aborts if they are not).
Using .get() this way has the disadvantage of returning an Optional[str]
type, which mypy will complain about when tests just assume these values
to be str.
Use [] instead, which raises a KeyError for environment variables that
are not set. When this exception is raised, catch it and move the abort
code from execute_setup_common() there.
Drop the 'assert iotests.sock_dir is not None' from iotest 300, because
that sort of thing is precisely what this patch wants to prevent.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-2-mreitz@redhat.com>
This patch completes the series with the COR-filter applied to
block-stream operations.
Adding the filter makes it possible to later implement discarding
copied regions in backing files during the block-stream job, to reduce
disk over-use (we need control of the permissions for that).
Also, the filter is now smart enough to do copy-on-read with a
specified base, so guest reads benefit even when block-streaming only
part of the backing chain.
Several iotests are slightly modified due to the filter insertion.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-14-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>