Building the debian-debootstrap image will usually fail if EXECUTABLE
isn't set (when using the Makefile). Warn the user in this case so
they know why it's failing.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-6-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
The debian-bootstrap image doesn't choose a default architecture and
distribution version; instead, the user has to set both DEB_ARCH and
DEB_TYPE in the environment. Print a reasonably helpful message if
either of them isn't set instead of complaining about "qemu-" being
missing or erroring out because we cannot cd to the mirror URL.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-5-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Send error messages where they belong so they're seen even if stdout
is redirected to /dev/null.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-4-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
The 'realpath' executable is shipped in a separate package that isn't
installed by default on some distros.
We already use 'readlink -e' (provided by GNU coreutils) in some other
part of the code, so let's settle for that instead.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-3-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Unlike Popen.communicate(), subprocess.call() doesn't read from the
stdout file descriptor. If the child process produces more output than
fits into the pipe buffer, it will block indefinitely.
If we don't intend to consume the output, just send it straight to
/dev/null to avoid this issue.
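The same deadlock can be reproduced outside Python. A minimal C sketch of
the "discard instead of piping" idea (the child command here is purely
illustrative, not taken from the patch):

    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            int devnull = open("/dev/null", O_WRONLY);

            if (devnull < 0) {
                _exit(1);
            }
            dup2(devnull, STDOUT_FILENO);  /* discard output we never consume */
            close(devnull);
            /* a chatty child; left on an unread pipe it could block forever */
            execlp("seq", "seq", "1", "1000000", (char *)NULL);
            _exit(127);
        }
        waitpid(pid, NULL, 0);             /* completes: nothing is left unread */
        return 0;
    }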
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Message-Id: <1473192351-601-2-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
It's a variation of our existing centos6 image, plus two more lines to
downgrade glib2 to version 2.22, which we download from vault.centos.org.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1470708908-12885-1-git-send-email-famz@redhat.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches
# gpg: Signature made Tue 06 Sep 2016 11:38:01 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (36 commits)
block: Allow node name for 'qemu-io' HMP command
qemu-iotests: Log QMP traffic in debug mode
block jobs: Improve error message for missing job ID
coroutine: Assert that no locks are held on termination
coroutine: Let CoMutex remember who holds it
qcow2: fix iovec size at qcow2_co_pwritev_compressed
test-coroutine: Fix coroutine pool corruption
qemu-iotests: add vmdk for test backup compression in 055
qemu-iotests: test backup compression in 055
blockdev-backup: added support for data compression
drive-backup: added support for data compression
block: simplify blockdev-backup
block: simplify drive-backup
block/io: turn on dirty_bitmaps for the compressed writes
block: remove BlockDriver.bdrv_write_compressed
qcow: cleanup qcow_co_pwritev_compressed to avoid the recursion
qcow: add qcow_co_pwritev_compressed
vmdk: add vmdk_co_pwritev_compressed
qcow2: cleanup qcow2_co_pwritev_compressed to avoid the recursion
qcow2: add qcow2_co_pwritev_compressed
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
vhost-user-test relies on iPXE just to initialize the virtio-net
device, and doesn't do any actual packet tx/rx testing.
In addition to that, the test relies on TCG, which is
incompatible with vhost. The test only worked by accident: a bug in
the memory backend initialization made memory regions not have
the DIRTY_MEMORY_CODE bit set in dirty_log_mask.
This changes vhost-user-test to initialize the virtio-net device
using libqos, and not use TCG nor pxe-virtio.rom.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Python tests are already annoying enough to debug. With QMP traffic
available it's a little bit easier at least.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The test case overwrites the Coroutine object with 0xff as a way to
assert that the coroutine isn't used any more. However, this means that
the coroutine pool now contains a corrupted object and later test cases
may get this corrupted object and crash.
This patch saves the real content of the object and restores it after
completing the test. The only use of the coroutine pool between those
two points is the deletion of co2. As this only means an insertion at
the head of an SLIST (release_pool or alloc_pool), it doesn't access the
invalid list pointers that co1 has during this period.
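A minimal sketch of the save/poison/restore pattern, using a stand-in
struct rather than the real Coroutine object from test-coroutine.c:

    #include <string.h>

    struct object {
        int state;
        void *next;                           /* stand-in for the pool's list pointer */
    };

    static void check_object(struct object *obj)
    {
        struct object backup;

        memcpy(&backup, obj, sizeof(backup)); /* save the real contents */
        memset(obj, 0xff, sizeof(*obj));      /* poison to catch accidental reuse */
        /* ... assertions run while the object is poisoned ... */
        memcpy(obj, &backup, sizeof(backup)); /* restore before returning it to the pool */
    }

    int main(void)
    {
        struct object obj = { 0 };

        check_object(&obj);
        return 0;
    }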
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The vmdk format supports compression, so it makes sense to add it to the
backup compression test.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Jeff Cody <jcody@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: John Snow <jsnow@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add cases to check backup compression out of qcow2 and raw images into
qcow2, for both drive-backup and blockdev-backup.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Jeff Cody <jcody@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: John Snow <jsnow@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
drive-mirror to accept a node-name without lifting the restriction that
we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
drive-backup and the corresponding transaction action to accept a
node-name without lifting the restriction that we're operating at a root
node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-snapshot-internal-sync to accept a node-name without lifting
the restriction that we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-snapshot-delete-internal-sync to accept a node-name without
lifting the restriction that we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
block-stream to accept a node-name without lifting the restriction that
we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Since f6880b7f [qemu-log: support simple pid substitution for logs],
test-logging creates files with hard-coded names in /tmp. In the best
case, this prevents multiple developers from running "make check" on
the same machine. In the worst case, it allows for symlink attacks,
enabling an attacker to overwrite files that are writable to the
developer running "make check".
Instead of hard-coding the paths, create a temporary directory using
g_dir_make_tmp() and clean it up afterwards.
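A minimal sketch of the g_dir_make_tmp() pattern, with an illustrative
template name rather than the one actually used by test-logging:

    #include <glib.h>
    #include <glib/gstdio.h>
    #include <stdio.h>

    int main(void)
    {
        GError *err = NULL;
        /* the template must end in XXXXXX; glib substitutes a unique suffix */
        gchar *dir = g_dir_make_tmp("qemu-test-logging.XXXXXX", &err);

        if (!dir) {
            fprintf(stderr, "failed to create temporary dir: %s\n", err->message);
            g_error_free(err);
            return 1;
        }
        /* ... create log files under 'dir' and run the checks ... */
        g_rmdir(dir);                        /* clean up afterwards (dir is empty again) */
        g_free(dir);
        return 0;
    }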
Fixes: f6880b7f ("qemu-log: support simple pid substitution for logs")
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1471545963-11720-3-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(commit 80dcfb8532)
Upon migration, the code uses a vm_clock timer set to fire 1ns in the
future from post_load in order to send the event in case host_connected
differs between migration source and target.
However, it's not guaranteed that the APIC is ready to inject IRQs into
the guest at that point, and the IRQ line remains high, so any future
interrupts go unnoticed by the guest as well.
That's because 1) the migration coroutine is not blocked when it gets
EAGAIN while reading the QEMUFile, and 2) vm_clock is currently enabled by
default and doesn't depend on vm_start() being called, which means
vm_clock timers can run before the VCPUs are running.
So let's disable vm_clock by default, keeping the original design
intention for vm_clock timers.
Meanwhile, change the test-aio use case to use QEMU_CLOCK_REALTIME instead
of QEMU_CLOCK_VIRTUAL, as the block code does.
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1470728955-90600-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
iotest 109 is broken for raw after 0965a41e99
[mirror: double performance of the bulk stage if the disc is full]
The problem is with finishing a block job with an error: before that
patch, mirror was not very asynchronous and created one big request at
the start of the disk; this request finished with an error and qemu
produced BLOCK_JOB_COMPLETED with zero progress.
After 0965a41, mirror starts several smaller requests in parallel, so when
BLOCK_JOB_COMPLETED is emitted we have some successful, non-zero progress.
This patch solves the issue by filtering out the progress from the 109
test output.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since 7f0317cfc8 we have an API to specify the ID of block jobs, and we
also guarantee that they are well-formed and unique.
This patch adds tests to check some common scenarios.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We have three qtest tests which have test names ending with "error".
This is awkward because the output of verbose test runs looks like
/crypto/task/error: OK
/crypto/task/thread_error: OK
which gives false positives if you are grepping build logs for
errors by looking for "error:". Since there are only three tests
with this problem, just rename them all to 'failure' instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1470307178-22848-1-git-send-email-peter.maydell@linaro.org
This checks that making FOO_max lower than FOO is not allowed.
We could also forbid having FOO_max == FOO, but that doesn't have
any odd side effects and it would require us to update several other
tests, so let's keep it simple.
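A hedged sketch of the relationship being checked (illustrative names, not
the actual throttle validation code):

    #include <assert.h>
    #include <stdbool.h>

    /* FOO_max lower than FOO is rejected; FOO_max == FOO stays allowed */
    static bool limits_are_valid(double value, double value_max)
    {
        if (value_max != 0 && value_max < value) {
            return false;
        }
        return true;
    }

    int main(void)
    {
        assert(!limits_are_valid(100.0, 50.0));   /* lower burst limit: invalid */
        assert(limits_are_valid(100.0, 100.0));   /* equal limits: still accepted */
        return 0;
    }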
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 2f90f9ee58aa14b7bd985f67c5996b06e0ab6c19.1469693110.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
By not using "--format" with the 'docker images' command.
The option is not available in the docker command shipped with RHEL 7. Use
an awk matching command instead.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1470202928-3392-1-git-send-email-famz@redhat.com>
Printf'ing a NULL string is undefined behaviour. Avoid it.
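For illustration, the usual defensive pattern (not necessarily the exact
fix applied here):

    #include <stdio.h>

    int main(void)
    {
        const char *histogram = NULL;             /* e.g. a string that was never set */

        /* passing NULL for %s is undefined behaviour, so substitute a marker */
        printf("Histogram: %s\n", histogram ? histogram : "(none)");
        return 0;
    }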
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1469459025-23606-4-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
So far, QHT functions assume that the passed qht has previously been
initialized--otherwise they segfault.
This patch makes an exception for qht_statistics_init, with the goal
of simplifying calling code. For instance, qht_statistics_init is
called from the 'info jit' dump, and given that under KVM the TB qht
is never initialized, we get a segfault. Thus, instead of complicating
the 'info jit' code with additional checks, let's allow passing an
uninitialized qht to qht_statistics_init.
While at it, add a test for this to test-qht.
Before the patch (for $ qemu -enable-kvm [...]):
(qemu) info jit
[...]
direct jump count 0 (0%) (2 jumps=0 0%)
Program received signal SIGSEGV, Segmentation fault.
After the patch, the "TB hash buckets", "TB hash occupancy"
and "TB hash avg chain" lines report empty statistics instead of crashing:
(qemu) info jit
[...]
direct jump count 0 (0%) (2 jumps=0 0%)
TB hash buckets 0/0 (-nan% head buckets used)
TB hash occupancy nan% avg chain occ. Histogram: (null)
TB hash avg chain nan buckets. Histogram: (null)
[...]
Reported-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1469205390-14369-1-git-send-email-cota@braap.org>
[Extract printing statistics to an entirely separate function. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.7-20160729' into staging
ppc patch queue 2016-07-29
Here are the current pending ppc and spapr related patches for
qemu-2.7. Given the freeze status, these are all bugfixes, with two
exceptions:
* There's some final rework of the vcpu hotplug model. Specifically
we add spapr specific code on the generic basis Igor established
to make cpu_index stable for pseries-2.7 and later machine types.
- This allows us to remove the limitation that cpu cores had to
be inserted in linear order, and removed in LIFO order.
- This is worth merging this late in 2.7 because it will avoid
considerable future grief with management layers needing to
discover whether out-of-order hotplug is possible, amongst
other things.
- For now we do add a constraint that the initial cpu cannot be
unplugged.
* We add two extra testcases to make check, for postcopy and
drive_del on ppc64.
- Not strictly bugfixes, but safe, because they don't affect the
actual code, and increase test coverage.
# gpg: Signature made Fri 29 Jul 2016 05:50:02 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160729:
tests: add drive_del-test to ppc/ppc64
spapr: Prevent boot CPU core removal
ppc: Fix fault PC reporting for lve*/stve* VMX instructions
test: port postcopy test to ppc64
Revert "spapr: Ensure CPU cores are added contiguously and removed in LIFO order"
spapr: init CPUState->cpu_index with index relative to core-id
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, virtio: cleanups, fixes
a bunch of bugfixes and a couple of cleanups
making these easier and/or making debugging easier
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 29 Jul 2016 04:11:01 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (41 commits)
mptsas: Fix a migration compatible issue
vhost: do not update last avail idx on get_vring_base() failure
vhost: add vhost_net_set_backend()
vhost-user: add error report in vhost_user_write()
tests: fix vhost-user-test leak
tests: plug some leaks in virtio-net-test
vhost-user: wait until backend init is completed
char: add and use tcp_chr_wait_connected
char: add chr_wait_connected callback
vhost: add assert() to check runtime behaviour
vhost-net: vhost_migration_done is vhost-user specific
Revert "vhost-net: do not crash if backend is not present"
vhost-user: add get_vhost_net() assertions
vhost-user: keep vhost_net after a disconnection
vhost-user: check vhost_user_{read,write}() return value
vhost-user: check qemu_chr_fe_set_msgfds() return value
vhost-user: call set_msgfds unconditionally
qemu-char: fix qemu_chr_fe_set_msgfds() crash when disconnected
vhost: use error_report() instead of fprintf(stderr,...)
vhost: add missing VHOST_OPS_DEBUG
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As the userfaultfd syscall is available on powerpc, migration
postcopy can be used.
This patch adds the support needed to test this on powerpc:
instead of using a bootsector to run code that modifies memory,
we use a FORTH script in the "boot-command" property.
As the spapr machine doesn't support the "-prom-env" argument
(the nvram is initialized by SLOF and not by QEMU),
"boot-command" is provided to SLOF via a file-mapped nvram
(with "-drive file=...,if=pflash").
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Spotted by valgrind.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Found thanks to valgrind.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The previous commit refactoring iotests.py:
commit 6661397446
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Wed Jul 20 14:23:10 2016 +0100
scripts: refactor the VM class in iotests for reuse
was not properly tested and included a number of broken
bits.
- The 'event_match' method was not moved into qemu.py
- The 'self._args' list parameter in QEMUMachine needs
to be copied otherwise modifications will affect the
global 'qemu_opts' variable in iotests.py
- The QEMUQtestMachine class methods had inverted
parameter order for the super() calls
- The QEMUQtestMachine class forgot to add
'-machine accel=qtest'
- The QEMUQtestMachine class constructor needs to set
a default 'name' value before using it as it may
be None
- The QEMUQtestMachine class constructor needs to use
named parameters when calling the super constructor
as it is leaving out some positional parameters.
- The 'qemu_prog' variable should be a string not a
list in iotests.py
- The VM class constructor needs to use named
parameters when calling the super constructor
as it is leaving out some positional parameters.
- The path to the socket-scm-helper needs to be
passed into the QEMUMachine class
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1469549767-27249-1-git-send-email-berrange@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Do not create a leaking temporary file, but use a static file instead.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
This introduces a moderately general purpose framework for
testing performance of migration.
The initial guest workload is provided by the included 'stress'
program, which is configured to spawn one thread per guest CPU
and run a maximally memory intensive workload. It will loop
over GB of memory, xor'ing each byte with data from a 4k array
of random bytes. This ensures heavy read and write load across
all of guest memory to stress the migration performance. While
running, the 'stress' program will record how long it takes to
xor each GB of memory and print this data for later reporting.
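A single-threaded sketch of that xor workload (sizes and the missing
per-GB timing are simplifications; the real 'stress' helper runs one such
loop per guest CPU):

    #include <stdlib.h>

    #define RANDOM_BYTES 4096

    static void xor_pass(unsigned char *mem, size_t len, const unsigned char *rnd)
    {
        for (size_t i = 0; i < len; i++) {
            mem[i] ^= rnd[i % RANDOM_BYTES];     /* read and write every byte */
        }
    }

    int main(void)
    {
        size_t len = 256 * 1024 * 1024;          /* modest size for the sketch */
        unsigned char *mem = malloc(len);
        unsigned char rnd[RANDOM_BYTES];

        if (!mem) {
            return 1;
        }
        for (size_t i = 0; i < RANDOM_BYTES; i++) {
            rnd[i] = rand() & 0xff;              /* 4k array of pseudo-random bytes */
        }
        for (;;) {                               /* keep dirtying memory, like 'stress' */
            xor_pass(mem, len, rnd);
        }
    }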
The test engine will spawn a pair of QEMU processes, either on
the same host, or with the target on a remote host via ssh,
using the host kernel and a custom initrd built with 'stress'
as the /init binary. Kernel command line args are set to ensure
a fast kernel boot time (< 1 second) between launching QEMU and
the stress program starting execution.
None the less, the test engine will initially wait N seconds for
the guest workload to stabilize, before starting the migration
operation. When migration is running, the engine will use pause,
post-copy, autoconverge, xbzrle compression and multithread
compression features, as well as downtime & bandwidth tuning
to encourage completion. If migration completes, the test engine
will wait N seconds again for the guest workload to stabilize on
the target host. If migration does not complete after a preset
number of iterations, it will be aborted.
While the QEMU process is running on the source host, the test
engine will sample the host CPU usage of QEMU as a whole, and
each vCPU thread. While migration is running, it will record
all the stats reported by 'query-migration'. Finally, it will
capture the output of the stress program running in the guest.
All the data produced from a single test execution is recorded
in a structured JSON file. A separate program is then able to
create interactive charts using the "plotly" python + javascript
libraries, showing the characteristics of the migration.
The data output provides visualization of the effect on guest
vCPU workloads from the migration process, the corresponding
vCPU utilization on the host, and the overall CPU hit from
QEMU on the host. This is correlated from statistics from the
migration process, such as downtime, vCPU throttling and iteration
number.
While the tests can be run individually with arbitrary parameters,
there is also a facility for producing batch reports for a number
of pre-defined scenarios / comparisons, in order to be able to
get standardized results across different hardware configurations
(eg TCP vs RDMA, or comparing different VCPU counts / memory
sizes, etc).
To use this, first you must build the initrd image
$ make tests/migration/initrd-stress.img
To run a one-shot test with all default parameters
$ ./tests/migration/guestperf.py > result.json
This has many command line args for varying its behaviour.
For example, to increase the RAM size and CPU count and
bind it to specific host NUMA nodes
$ ./tests/migration/guestperf.py \
--mem 4 --cpus 2 \
--src-mem-bind 0 --src-cpu-bind 0,1 \
--dst-mem-bind 1 --dst-cpu-bind 2,3 \
> result.json
Using mem + cpu binding is strongly recommended on NUMA
machines, otherwise the guest performance results will
vary wildly between runs of the test due to lucky/unlucky
NUMA placement, making sensible data analysis impossible.
To make it run across separate hosts:
$ ./tests/migration/guestperf.py \
--dst-host somehostname > result.json
To request that post-copy is enabled, with switchover
after 5 iterations
$ ./tests/migration/guestperf.py \
--post-copy --post-copy-iters 5 > result.json
Once a result.json file is created, a graph of the data
can be generated, showing guest workload performance per
thread and the migration iteration points:
$ ./tests/migration/guestperf-plot.py --output result.html \
--migration-iters --split-guest-cpu result.json
To further include host vCPU utilization and overall QEMU
utilization
$ ./tests/migration/guestperf-plot.py --output result.html \
--migration-iters --split-guest-cpu \
--qemu-cpu --vcpu-cpu result.json
NB, the 'guestperf-plot.py' command requires that you have
the plotly python library installed, e.g. you must do
$ pip install --user plotly
Viewing the result.html file requires that you have the
plotly.min.js file in the same directory as the HTML
output. This js file is installed as part of the plotly
python library, so can be found in
$HOME/.local/lib/python2.7/site-packages/plotly/offline/plotly.min.js
The guestperf-plot.py program can accept multiple json files
to plot, enabling results from different configurations to
be compared.
Finally, to run the entire standardized set of comparisons
$ ./tests/migration/guestperf-batch.py \
--dst-host somehost \
--mem 4 --cpus 2 \
--src-mem-bind 0 --src-cpu-bind 0,1 \
--dst-mem-bind 1 --dst-cpu-bind 2,3 \
--output tcp-somehost-4gb-2cpu
will store JSON files from all scenarios in the directory
named tcp-somehost-4gb-2cpu
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1469020993-29426-7-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
The iotests module has a python class for controlling QEMU
processes. Pull the generic functionality out of this file
and create a scripts/qemu.py module containing a QEMUMachine
class. Put the QTest integration support into a subclass
QEMUQtestMachine.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1469020993-29426-4-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, virtio: new features, cleanups, fixes
- interrupt remapping for intel iommus
- a bunch of virtio cleanups
- fixes all over the place
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 21 Jul 2016 18:49:30 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (57 commits)
intel_iommu: avoid unnamed fields
virtio: Update migration docs
virtio-gpu: Wrap in vmstate
virtio-gpu: Use migrate_add_blocker for virgl migration blocking
virtio-input: Wrap in vmstate
9pfs: Wrap in vmstate
virtio-serial: Wrap in vmstate
virtio-net: Wrap in vmstate
virtio-balloon: Wrap in vmstate
virtio-rng: Wrap in vmstate
virtio-blk: Wrap in vmstate
virtio-scsi: Wrap in vmstate
virtio: Migration helper function and macro
virtio-serial: Remove old migration version support
virtio-net: Remove old migration version support
virtio-scsi: Replace HandleOutput typedef
Revert "mirror: Workaround for unexpected iohandler events during completion"
virtio-scsi: Call virtio_add_queue_aio
virtio-blk: Call virtio_add_queue_aio
virtio: Introduce virtio_add_queue_aio
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/famz/tags/docker-pull-request' into staging
# gpg: Signature made Wed 20 Jul 2016 12:19:56 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/docker-pull-request:
docker: pass EXECUTABLE to build script
docker: Don't start a container that doesn't exist
docker: Add "images" subcommand to docker.py
docker: Fix exit code if $CMD failed
docker: More sensible run script
tests/docker/docker.py: add update operation
tests/docker/dockerfiles: new debian-bootstrap.docker
tests/docker/docker.py: check and run .pre script
tests/docker/docker.py: support --include-executable
tests/docker/docker.py: docker_dir outside build
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On a slower machine the test can take more than 30 seconds.
Increase the timeout to 100 seconds.
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2016-07-19' into staging
QAPI patches for 2016-07-19
# gpg: Signature made Tue 19 Jul 2016 19:35:27 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2016-07-19:
net: Use correct type for bool flag
qapi: Change Netdev into a flat union
block: Simplify drive-mirror
block: Simplify block_set_io_throttle
qapi: Implement boxed types for commands/events
qapi: Plumb in 'boxed' to qapi generator lower levels
qapi-event: Simplify visit of non-implicit data
qapi: Drop useless gen_err_check()
qapi: Add type.is_empty() helper
qapi: Hide tag_name data member of variants
qapi: Special case c_name() for empty type
qapi: Require all branches of flat union enum to be covered
net: use Netdev instead of NetClientOptions in client init
qapi: change QmpInputVisitor to QSLIST
qapi: change QmpOutputVisitor to QSLIST
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To build a docker image which needs qemu linux-user emulation, we
need to pass --include-executable to the build script. Using the same
mechanism as for other container controls, we enable the option if
EXECUTABLE is set on the make command line, e.g.:
make docker-image-debian-bootstrap V=1 J=9 DEB_ARCH=armhf \
DEB_TYPE=stable EXECUTABLE=./arm-linux-user/qemu-arm
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-11-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
Image building targets are dependencies of test running targets, so when
a docker image doesn't exist, it means the image build was skipped (due to
dependency checks in the .pre script). Therefore, skip the test too.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-10-git-send-email-famz@redhat.com
This is a wrapper for the 'docker images' command.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-9-git-send-email-famz@redhat.com
It is very easy to figure out the current directory and the bash option
from within the script, so do less on the Makefile invocation command
line and determine both options in the script.
This makes the next patch easier.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1468934445-32183-7-git-send-email-famz@redhat.com
This adds a new operation to the docker script to allow updating of
binaries in an existing container. This is because it would be
inefficient to re-build the whole container just for an update to the
QEMU binary.
To update the executable run:
./tests/docker/docker.py update \
debian:armhf ./arm-linux-user/qemu-arm
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-6-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>
Together with the debian-bootstrap.pre script, this new dockerfile can
build an arbitrary architecture of Debian using debootstrap. The .pre
script allows debootstrap to set up its first stage before the container
is built.
To build a container you need a command line like:
DEB_ARCH=armhf DEB_TYPE=testing \
./tests/docker/docker.py build \
--include-executable=arm-linux-user/qemu-arm debian:armhf \
./tests/docker/dockerfiles/debian-bootstrap.docker
Although a number of non-Debian systems package the debootstrap script,
it is fairly portable in itself. Assuming we have some sort of fakeroot
implementation, we can just clone the upstream repository and use the
script from there.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1468934445-32183-5-git-send-email-famz@redhat.com
Signed-off-by: Fam Zheng <famz@redhat.com>