Background: As of cURL 7.59.0, cURL verifies that several functions are
not called from within a callback. Among these functions is
curl_multi_add_handle().
curl_read_cb() is a callback from cURL and not a coroutine. Waking up
acb->co will lead to entering it then and there, which means the current
request will settle and the caller (if it runs in the same coroutine)
may then issue the next request. In such a case, we will enter
curl_setup_preadv() effectively from within curl_read_cb().
Calling curl_multi_add_handle() will then fail and the new request will
not be processed.
Fix this by not letting curl_read_cb() wake up acb->co. Instead, leave
the whole business of settling the AIOCB objects to
curl_multi_check_completion() (which is called from our timer callback
and our FD handler, so not from any cURL callbacks).
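The resulting division of work can be sketched roughly as follows; the
struct and field names approximate block/curl.c rather than copying it,
and error handling is elided:

    static void curl_multi_check_completion(BDRVCURLState *s)
    {
        CURLMsg *msg;
        int msgs_left;

        /* Called only from the timer callback and the FD handler, never
         * from a cURL callback.  Settling AIOCBs here is therefore safe
         * even if a woken coroutine immediately issues the next request
         * and thereby calls curl_multi_add_handle() again. */
        while ((msg = curl_multi_info_read(s->multi, &msgs_left))) {
            CURLState *state = NULL;

            if (msg->msg != CURLMSG_DONE) {
                continue;
            }
            curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
                              (char **)&state);

            for (int i = 0; i < CURL_NUM_ACB; i++) {
                CURLAIOCB *acb = state->acb[i];

                if (acb) {
                    state->acb[i] = NULL;
                    acb->ret = 0;            /* error handling elided */
                    aio_co_wake(acb->co);    /* settle the request here... */
                }
            }
        }
    }

    /* ...while curl_read_cb() now only copies data into the AIOCB buffers
     * and no longer wakes acb->co itself. */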
Reported-by: Natalie Gavrielov <ngavrilo@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1740193
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-7-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Instead of reporting all sockets to cURL, only report the one that has
caused curl_multi_do_locked() to be called. This lets us get rid of the
QLIST_FOREACH_SAFE() loop, which was actually wrong: SAFE foreaches are
only safe when the current element is removed in each iteration. If it is
possible for the list to be concurrently modified, we cannot guarantee
that only the current element will be removed. Therefore, we must not
use QLIST_FOREACH_SAFE() here.
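For reference, the hazard is visible from what the macro (roughly)
expands to in qemu/queue.h:

    /* QLIST_FOREACH_SAFE(var, head, field, next_var) expands to roughly: */
    for ((var) = (head)->lh_first;
         (var) && ((next_var) = (var)->field.le_next, 1);
         (var) = (next_var))

    /* next_var is fetched before the loop body runs.  If the body (for
     * instance a cURL callback invoked from inside it) removes some *other*
     * element, next_var may already point to a freed entry by the time the
     * next iteration starts. */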
Fixes: ff5ca1664a
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-6-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
curl_multi_do_locked() currently marks all sockets as ready. That is
not only inefficient, but in fact unsafe (its QLIST_FOREACH_SAFE() loop
is). A follow-up
patch will change that, but to do so, curl_multi_do_locked() needs to
know exactly which socket is ready; and that is accomplished by this
patch here.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-5-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
While it is more likely that transfers complete after some file
descriptor has data ready to read, we probably should not rely on it.
Better be safe than sorry and call curl_multi_check_completion() in
curl_multi_do(), too, just like it is done in curl_multi_read().
With this change, curl_multi_do() and curl_multi_read() are actually the
same, so drop curl_multi_read() and use curl_multi_do() as the sole FD
handler.
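With that, the sole FD handler ends up looking roughly like the sketch
below once the rest of the series is applied (simplified; exact names
and locking in block/curl.c may differ):

    static void curl_multi_do(void *opaque)
    {
        CURLSocket *socket = opaque;
        BDRVCURLState *s = socket->state->s;

        qemu_mutex_lock(&s->mutex);
        curl_multi_do_locked(socket);      /* report this socket to cURL */
        curl_multi_check_completion(s);    /* settle finished transfers  */
        qemu_mutex_unlock(&s->mutex);
    }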
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-4-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This does not really change anything, but it makes the code a bit easier
to follow once we use @socket as the opaque pointer for
aio_set_fd_handler().
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-3-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
A follow-up patch will make curl_multi_do() and curl_multi_read() take a
CURLSocket instead of the CURLState. They still need the latter,
though, so add a pointer to it to the former.
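The resulting structure can be sketched as follows (field names are
approximations of block/curl.c, not the literal definition):

    typedef struct CURLSocket {
        int fd;
        struct CURLState *state;           /* back-pointer added here */
        QLIST_ENTRY(CURLSocket) next;
    } CURLSocket;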
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190910124136.10565-2-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
qemu-io now prefixes its errors and warnings with "qemu-io:".
Commit 36b9986b08 fixed a lot of iotests output but forgot about
026.out.nocache. Fix it too.
Fixes: 99e98d7c9f ("qemu-io: Use error_[gs]et_progname()")
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190816153015.447957-2-vsementsov@virtuozzo.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The check script is already printing out which iotest is currently
running, so printing out the name of the check-block.sh shell script
looks superfluous here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 20190906113534.10907-1-thuth@redhat.com
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When running "make check -j8" or something similar, the iotests are
running in parallel with the other tests. So when they are printing
out "Passed all xx tests" or a similar status message at the end,
it might not be quite clear that this message belongs to the iotests,
since the output might be mixed with the other tests. Thus change the
word "tests" here to "iotests" instead to avoid confusion.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 20190906113920.11271-1-thuth@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Replace the confusing usage:
~BDRV_SECTOR_MASK
with the clearer:
(BDRV_SECTOR_SIZE - 1)
Remove BDRV_SECTOR_MASK and the unused BDRV_BLOCK_OFFSET_MASK, which was
its last user.
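For illustration, both spellings compute the offset within a sector
(BDRV_SECTOR_MASK was defined as ~(BDRV_SECTOR_SIZE - 1)):

    uint64_t offset = 3 * BDRV_SECTOR_SIZE + 100;
    uint64_t in_sector_old = offset & ~BDRV_SECTOR_MASK;        /* 100 */
    uint64_t in_sector_new = offset & (BDRV_SECTOR_SIZE - 1);   /* 100 */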
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827185913.27427-3-nsoffer@redhat.com
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Replace instances of:
(n & (BDRV_SECTOR_SIZE - 1)) == 0
And:
(n & ~BDRV_SECTOR_MASK) == 0
With:
QEMU_IS_ALIGNED(n, BDRV_SECTOR_SIZE)
Which reveals the intent of the code better, and makes it easier to
locate the code checking alignment.
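A small before/after illustration, given some offset and bytes
(QEMU_IS_ALIGNED(n, m) boils down to ((n) % (m)) == 0, so the result is
identical for these power-of-two sizes):

    /* before */
    bool ok_old = (offset & (BDRV_SECTOR_SIZE - 1)) == 0 &&
                  (bytes & ~BDRV_SECTOR_MASK) == 0;
    /* after */
    bool ok_new = QEMU_IS_ALIGNED(offset, BDRV_SECTOR_SIZE) &&
                  QEMU_IS_ALIGNED(bytes, BDRV_SECTOR_SIZE);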
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827185913.27427-2-nsoffer@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The qemu-ga documentation is currently in qemu-ga.texi in
Texinfo format, which we present to the user as:
* a qemu-ga manpage
* a section of the main qemu-doc HTML documentation
Convert the documentation to rST format, and present it to
the user as:
* a qemu-ga manpage
* part of the interop/ Sphinx manual
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-id: 20190905131040.8350-1-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The APB frequency can be calculated directly when needed from the
HPLL_PARAM and CLK_SEL register values. This removes useless state in
the model.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-11-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
and use a class AspeedSCUClass to define each SoC's characteristics.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-10-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds the missing checksum calculation on normal DMA transfers.
According to the datasheet this is how the SMC should behave.
Verified on an AST1250 that the hardware matches this behaviour.
Signed-off-by: Christian Svensson <bluecmd@google.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-9-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Emulate read errors in the DMA Checksum Register for high frequencies
and optimistic settings of the Read Timing Compensation Register. This
will help in tuning the SPI timing calibration algorithm. Errors are
only injected when the property "inject_failure" is set to true as
suggested by Philippe.
The values below are those to expect from the first flash device of
the FMC controller of a palmetto-bmc machine.
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20190904070506.1052-8-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When doing calibration, the SPI clock rate in the CE0 Control Register
and the read delay cycles in the Read Timing Compensation Register are
set using bit[11:4] of the DMA Control Register.
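In the model this amounts to extracting one 8-bit field from the DMA
Control Register, roughly (using QEMU's extract32() helper; the register
index name is approximate):

    /* bits [11:4] of the DMA Control Register hold the calibration setting */
    uint32_t setting = extract32(s->regs[R_DMA_CTRL], 4, 8);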
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190904070506.1052-7-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The FMC controller on the Aspeed SoCs supports DMA to access the flash
modules. It can operate in a normal mode, to copy to or from the flash
module mapping window, or in a checksum calculation mode, to evaluate
the best clock settings for reads.
The model introduces two custom address spaces for DMAs: one for the
AHB window of the FMC flash devices and one for the DRAM. The latter
is populated using a "dram" link set from the machine with the RAM
container region.
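A minimal sketch of that setup, assuming field names close to (but not
necessarily identical with) hw/ssi/aspeed_smc.c:

    /* one address space over the FMC flash mapping window, one over the
     * DRAM region handed in through the "dram" link property */
    address_space_init(&s->flash_as, &s->mmio_flash, "aspeed.smc-flash");
    address_space_init(&s->dram_as, s->dram_mr, "aspeed.smc-dram");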
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Joel Stanley <joel@jms.id.au>
Message-id: 20190904070506.1052-6-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Improve the naming of the different controller models to ease their
generation when initializing the SoC. The rename of the SMC types breaks
migration compatibility.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-5-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are no QEMU Aspeed machines using the SoCs "ast2400-a0" or
"ast2400".
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-4-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
GPIO pins are arranged in groups of 8 pins labeled A,B,..,Y,Z,AA,AB,AC.
(Note that the ast2400 controller only goes up to group AB).
A set has four groups (except set AC, which only has one) and is
referred to by the groups it is composed of (e.g. ABCD, EFGH, ..., YZAAAB).
Each set is accessed and controlled by a bank of 14 registers.
These registers operate on a per-pin level where each bit in the register
corresponds to a pin, except for the command source registers. The command
source registers operate on a per-group level where bits 24, 16, 8 and 0
correspond to each group in the set.
E.g. the registers for set ABCD:
|D7...D0|C7...C0|B7...B0|A7...A0| <- GPIOs
|31...24|23...16|15....8|7.....0| <- bit position
Note that there are a couple of groups that only have 4 pins.
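The mapping above boils down to simple bit arithmetic; as a
self-contained illustration (not the model's actual code):

    /* group_idx: 0..3 for groups A..D within a set, pin: 0..7 in the group */
    static inline int gpio_pin_bit(int group_idx, int pin)
    {
        return group_idx * 8 + pin;    /* per-pin registers */
    }

    static inline int gpio_cmd_src_bit(int group_idx)
    {
        return group_idx * 8;          /* command source: bits 0, 8, 16, 24 */
    }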
There are two ways that this model deviates from the behaviour of the
actual controller:
(1) The only control source driving the GPIO pins in the model is the ARM
model (as there currently aren't models for the LPC or Coprocessor).
(2) None of the registers in the model are reset tolerant (needs
integration with the watchdog).
Signed-off-by: Rashmica Gupta <rashmica.g@gmail.com>
Tested-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20190904070506.1052-2-clg@kaod.org
[clg: fixed missing header files,
      made use of HWADDR_PRIx to fix compilation on Windows]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20190912a' into staging
Migration pull 2019-09-12
New feature:
UUID validation check from Yury Kotov
plus a bunch of fixes.
# gpg: Signature made Thu 12 Sep 2019 14:48:28 BST
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20190912a:
migration: fix one typo in comment of function migration_total_bytes()
migration/qemu-file: fix potential buf waste for extra buf_index adjustment
migration/qemu-file: remove check on writev_buffer in qemu_put_compression_data
migration: Fix postcopy bw for recovery
tests/migration: Add a test for validate-uuid capability
tests/libqtest: Allow setting expected exit status
migration: Add validate-uuid capability
qemu-file: Rework old qemu_fflush comment
migration: register_savevm_live doesn't need dev
hw/net/vmxnet3: Fix leftover unregister_savevm
migration: cleanup check on ops in savevm.handlers iterations
migration: multifd_send_thread always post p->sem_sync when error happen
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- qcow2: Allow overwriting multiple compressed clusters at once for
better performance
- nfs: add support for nfs_umount
- file-posix: write_zeroes fixes
- qemu-io, blockdev-create, pr-manager: Fix crashes and memory leaks
- qcow2: Fix the calculation of the maximum L2 cache size
- vpc: Fix return code for vpc_co_create()
- blockjob: Code cleanup
- iotests improvements (e.g. for use with valgrind)
# gpg: Signature made Fri 13 Sep 2019 11:19:19 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (23 commits)
qcow2: Stop overwriting compressed clusters one by one
block/create: Do not abort if a block driver is not available
qemu-io: Don't leak pattern file in error path
iotests: extend sleeping time under Valgrind
iotests: extended timeout under Valgrind
iotests: Valgrind fails with nonexistent directory
iotests: Add casenotrun report to bash tests
iotests: exclude killed processes from running under Valgrind
iotests: allow Valgrind checking all QEMU processes
block/nfs: add support for nfs_umount
block/nfs: tear down aio before nfs_close
iotests: skip 232 when run tests as root
iotests: Test blockdev-create for vpc
iotests: Restrict nbd Python tests to nbd
iotests: Restrict file Python tests to file
iotests: Add supported protocols to execute_test()
vpc: Return 0 from vpc_co_create() on success
file-posix: Fix has_write_zeroes after NO_FALLBACK
pr-manager: Fix invalid g_free() crash bug
iotests: Test reverse sub-cluster qcow2 writes
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
handle_alloc() tries to find as many contiguous clusters that need
copy-on-write as possible in order to allocate all of them at the same
time.
However, compressed clusters are only overwritten one by one, so let's
say that we have an image with 1024 consecutive compressed clusters:
qemu-img create -f qcow2 hd.qcow2 64M
for f in `seq 0 64 65472`; do
qemu-io -c "write -c ${f}k 64k" hd.qcow2
done
In this case trying to overwrite the whole image with one large write
request results in 1024 separate allocations:
qemu-io -c "write 0 64M" hd.qcow2
This restriction comes from commit 095a9c58ce from 2008.
Nowadays QEMU can overwrite multiple compressed clusters just fine,
and in fact it already does: as long as the first cluster that
handle_alloc() finds is not compressed, all other compressed clusters
in the same batch will be overwritten in one go:
qemu-img create -f qcow2 hd.qcow2 64M
qemu-io -c "write -z 0 64k" hd.qcow2
for f in `seq 64 64 65472`; do
qemu-io -c "write -c ${f}k 64k" hd.qcow2
done
Compared to the previous one, overwriting this image on my computer
goes from 8.35s down to 230ms.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'blockdev-create' QMP command was introduced as an experimental
feature in commit b0292b851b, using the assert() debug call.
It got promoted to a 'stable' command in commit 3fb588a0f2, but the
assert call was not removed.
Some block drivers are optional, and bdrv_find_format() might
return a NULL value, triggering the assertion.
Stable code is not expected to abort, so return an error instead.
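The shape of the fix is roughly the following (a sketch with placeholder
variable names, not the exact block/create.c hunk):

    BlockDriver *drv = bdrv_find_format(fmt);
    if (!drv) {
        /* previously: assert(drv); which aborts for optional drivers */
        error_setg(errp, "Block driver '%s' not found or not supported", fmt);
        return;
    }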
This is easily reproducible when libnfs is not installed:
./configure
[...]
module support no
Block whitelist (rw)
Block whitelist (ro)
libiscsi support yes
libnfs support no
[...]
Start QEMU:
$ qemu-system-x86_64 -S -qmp unix:/tmp/qemu.qmp,server,nowait
Send the 'blockdev-create' with the 'nfs' driver:
$ ( cat << 'EOF'
{'execute': 'qmp_capabilities'}
{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
EOF
) | socat STDIO UNIX:/tmp/qemu.qmp
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 1, "major": 4}, "package": "v4.1.0-733-g89ea03a7dc"}, "capabilities": ["oob"]}}
{"return": {}}
QEMU crashes:
$ gdb qemu-system-x86_64 core
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
#0 0x00007ffff510957f in raise () at /lib64/libc.so.6
#1 0x00007ffff50f3895 in abort () at /lib64/libc.so.6
#2 0x00007ffff50f3769 in _nl_load_domain.cold.0 () at /lib64/libc.so.6
#3 0x00007ffff5101a26 in .annobin_assert.c_end () at /lib64/libc.so.6
#4 0x0000555555d7e1f1 in qmp_blockdev_create (job_id=0x555556baee40 "x", options=0x555557666610, errp=0x7fffffffc770) at block/create.c:69
#5 0x0000555555c96b52 in qmp_marshal_blockdev_create (args=0x7fffdc003830, ret=0x7fffffffc7f8, errp=0x7fffffffc7f0) at qapi/qapi-commands-block-core.c:1314
#6 0x0000555555deb0a0 in do_qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false, errp=0x7fffffffc898) at qapi/qmp-dispatch.c:131
#7 0x0000555555deb2a1 in qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false) at qapi/qmp-dispatch.c:174
With this patch applied, QEMU returns a QMP error:
{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
{"id": "x", "error": {"class": "GenericError", "desc": "Block driver 'nfs' not found or not supported"}}
Cc: qemu-stable@nongnu.org
Reported-by: Xu Tian <xutian@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu_io_alloc_from_file() needs to close the pattern file even if some
error occurred.
Setting f = NULL in the success path and checking it for NULL in the
error path isn't strictly necessary at this point, but let's do it
anyway in case someone later adds a 'goto error' after closing the file.
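The resulting pattern looks roughly like this sketch (read_pattern_file
is a made-up stand-in, not the actual qemu-io function):

    static char *read_pattern_file(BlockBackend *blk, FILE *f, size_t len)
    {
        char *buf = blk_blockalign(blk, len);

        if (fread(buf, 1, len, f) != len) {
            goto error;
        }

        fclose(f);
        f = NULL;    /* so a later 'goto error' cannot close it twice */
        return buf;

    error:
        if (f) {
            fclose(f);
        }
        qemu_vfree(buf);
        return NULL;
    }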
Coverity: CID 1405303
Fixes: 4d731510d3
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Since QEMU runs longer under Valgrind, increase the sleep time in test
247 so that the test stays synchronized with it.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
As the iotests run longer under Valgrind, increase QEMU_COMM_TIMEOUT in
the test cases 028, 183 and 192 when they are run under Valgrind.
Suggested-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Valgrind uses the exported variable TMPDIR and fails if the directory
does not exist. Exclude such a test case from being run under Valgrind
and notify the user about it.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The new function _casenotrun() is to be invoked if a test case cannot
be run for some reason. The user is notified by a message passed to the
function. It remains the caller's responsibility to actually skip the
test.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The Valgrind tool fails to manage its termination in multi-threaded
processes when they raise the signal SIGKILL. The bug has been reported
to the Valgrind maintainers and is registered as bug #409141:
https://bugs.kde.org/show_bug.cgi?id=409141
Let's exclude such test cases from running under Valgrind until a
version with the bug fix is released, since checking for memory issues
is covered by other test cases.
Suggested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
With the '-valgrind' option, all the QEMU processes are run under the
Valgrind tool. Valgrind's own parameters may be set with its
environment variable VALGRIND_OPTS, e.g.
$ VALGRIND_OPTS="--leak-check=yes" ./check -valgrind <test#>
or they may be listed in the files Valgrind reads, ./.valgrindrc or
~/.valgrindrc, like
--memcheck:leak-check=no
--memcheck:track-origins=yes
To exclude a specific process from running under Valgrind, set the
corresponding environment variable VALGRIND_QEMU_<name> to the empty
string:
$ VALGRIND_QEMU_IO= ./check -valgrind <test#>
When the QEMU-IO process is killed, the shell report refers to the text
of the command in _qemu_io_wrapper(), which was modified by this patch.
So the expected output for tests 039, 061 and 137 has to be updated as
well.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
libnfs recently added support for unmounting. Add support
in QEMU too.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
nfs_close is a sync call from libnfs and has its own event
handler polling on the nfs FD. Avoid having both QEMU and libnfs
interfering here.
CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Multiple reports were received from users regarding failures of
'g' packet communication with gdb for some MIPS configurations.
Bisecting identified 8e0b373 as the problematic commit.
Revert that commit until a better solution is developed.
Suggested-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Libo Zhou <zhlb29@foxmail.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1568207966-25202-1-git-send-email-aleksandar.markovic@rt-rk.com>
Now that the MIPS CPU implementation uses the new
do_transaction_failed hook, we can remove the old code that handled
the do_unassigned_access hook.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Message-Id: <20190802160458.25681-4-peter.maydell@linaro.org>
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.
Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.
The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Message-Id: <20190802160458.25681-3-peter.maydell@linaro.org>
The MIPS Jazz ('magnum' and 'pica61') boards have some code which
overrides the CPU's do_unassigned_access hook, so they can intercept
it and not raise exceptions on data accesses to invalid addresses,
only for instruction fetches.
We want to switch MIPS over to using the do_transaction_failed
hook instead, so add an intercept for that as well, and make
the board code install whichever hook the CPU is actually using.
Once we've changed the CPU implementation we can remove the
redundant code for the old hook.
Note: I am suspicious that the behaviour as implemented here may not
be what the hardware really does. It was added in commit
54e755588c to restore the behaviour that was broken by
commit c658b94f6e. But prior to commit c658b94f6e
every MIPS board generated exceptions for instruction access to
invalid addresses but not for data accesses; and other boards,
notably Malta, were fixed by making all invalid accesses behave as
reads-as-zero (see the call to empty_slot_init() in
mips_malta_init()). Hardware that raises exceptions for instruction
access and not data access seems to me to be an unlikely design, and
it's possible that the right way to emulate this is to make the Jazz
boards do what we did with Malta (or some variation of that).
Nonetheless, since I don't have access to real hardware to test
against I have taken the approach of "make QEMU continue to behave
the same way it did before this commit". I have updated the comment
to correct the parts that are no longer accurate and note that
the hardware might behave differently.
The test case for the need for the hook-hijacking is in
https://bugs.launchpad.net/qemu/+bug/1245924 That BIOS will boot OK
either with this overriding of both hooks, or with a simple "global
memory region to ignore bad accesses of all types", so it doesn't
provide evidence either way, unfortunately.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Message-Id: <20190802160458.25681-2-peter.maydell@linaro.org>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190912024957.11780-1-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
In add_to_iovec(), qemu_fflush() will be called if the iovec is full.
If this happens, buf_index is reset. Currently, this is not checked and
buf_index is always adjusted by the buf size.
This is not harmful, but it wastes some space in the file buffer.
This patch makes add_to_iovec() return 1 when it has flushed the file.
The caller can then check the return value to see whether it is still
necessary to adjust buf_index.
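A simplified sketch of the caller side (the helper signature and the
may_free argument are approximations of migration/qemu-file.c, not the
literal code):

    if (add_to_iovec(f, f->buf + f->buf_index, len, false) != 1) {
        /* nothing was flushed, the data really sits behind buf_index */
        f->buf_index += len;
        if (f->buf_index == IO_BUF_SIZE) {
            qemu_fflush(f);
        }
    }
    /* if add_to_iovec() returned 1, it already flushed and reset buf_index,
     * so advancing buf_index here would only waste buffer space */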
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190911132839.23336-3-richard.weiyang@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The check of writev_buffer is already done in qemu_fflush(), which
means it is not harmful if it is NULL here.
Removing it also makes the code consistent, since all other callers of
add_to_iovec() do not perform the check.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190911132839.23336-2-richard.weiyang@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We've got the max-postcopy-bandwidth parameter, but it's not applied
correctly after a postcopy recovery, so the recovered migration stream
will still eat the whole network bandwidth. Fix that up.
Reported-by: Xiaohui Li <xiaohli@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190906130103.20961-1-peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190903162246.18524-4-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>