Commit caa1ee43 "vhost-user-blk: add discard/write zeroes features
support" added fields to struct virtio_blk_config. This changes
the size of the config space and breaks migration from QEMU 3.1
and older:
qemu-system-ppc64: get_pci_config_device: Bad config data: i=0x10 read: 41 device: 1 cmask: ff wmask: 80 w1cmask:0
qemu-system-ppc64: Failed to load PCIDevice:config
qemu-system-ppc64: Failed to load virtio-blk:virtio
qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:01.0/virtio-blk'
qemu-system-ppc64: load of migration failed: Invalid argument
Since virtio-blk doesn't support the "discard" and "write zeroes"
features, it shouldn't even expose the associated fields in its config
space. Just include all fields up to num_queues to match QEMU 3.1 and
older.
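A minimal sketch of the size calculation, using an illustrative layout
rather than the real struct virtio_blk_config:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative layout only -- the real struct virtio_blk_config has
     * more fields; the point is that the exposed size stops right after
     * num_queues, before the discard/write-zeroes additions. */
    struct blk_config_sketch {
        uint64_t capacity;
        uint32_t size_max;
        uint32_t seg_max;
        /* ... geometry, blk_size, topology, writeback ... */
        uint16_t num_queues;
        uint32_t max_discard_sectors;   /* added by caa1ee43, not exposed */
    };

    static const size_t legacy_config_size =
        offsetof(struct blk_config_sketch, num_queues) + sizeof(uint16_t);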
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1550022537-27565-1-git-send-email-changpeng.liu@intel.com
Message-Id: <1550022537-27565-1-git-send-email-changpeng.liu@intel.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
QEMU wraps the socket functions in os-win32.h, but in commit
a9d8b3ec43, the header inclusion was dropped,
breaking libslirp on Windows.
Wrap the missing functions.
Rename the wrapped functions with a "slirp_" prefix and a "_wrap"
suffix, for consistency and to avoid a clash with existing functions
(such as "slirp_socket").
Fixes: a9d8b3ec ("slirp: replace remaining qemu headers dependency")
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190212160953.29051-3-marcandre.lureau@redhat.com>
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Tested-by: Howard Spoelstra
QEMU wraps the socket functions in os-win32.h, but in commit
a9d8b3ec43, the header inclusion was dropped,
breaking libslirp on Windows.
There are already a few socket functions wrapped in libslirp with a
"slirp_" prefix, but many of them are missing, and we are going to wrap
the missing functions in a second patch.
Using the "slirp_" prefix avoids the conflict with the socket function
#define wrappers in QEMU's os-win32.h, but the renames are quite
intrusive. In the end, the functions should behave the same as the
originals, but with errno being set. To avoid the churn, and potential
confusion, remove the "slirp_" prefix. A series of #undef is necessary
until libslirp is made standalone, to prevent the #define conflict with
QEMU.
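A hedged sketch of that #undef dance (the macro names are examples of
QEMU's os-win32.h wrappers; treat the exact list as an assumption):

    /* Make sure the plain names refer to the real socket functions,
     * not QEMU's macro wrappers, until libslirp no longer includes
     * QEMU headers. */
    #ifdef _WIN32
    #undef socket
    #undef recv
    #undef send
    #undef connect
    /* ...one #undef per wrapped socket function... */
    #endif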
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190212160953.29051-2-marcandre.lureau@redhat.com>
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
HP-UX 10.20 uses busmaster writes to the CPU EIR to signal interrupts
from the SCSI controller. (This is similar to what is known as MSI on
x86.)
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211192039.5457-1-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It looks like the operands were exchanged. HP bootrom tests the
following sequence:
0x00000000f0004064: ldil L%-66666800,r7
0x00000000f0004068: addi 19f,r7,r7
0x00000000f000406c: addi -1,r0,rp
0x00000000f0004070: addi f,r0,r4
0x00000000f0004074: addi 1,r4,r5
0x00000000f0004078: dcor rp,r6
0x00000000f000407c: cmpb,<>,n r6,r7,0xf000411
This returned 0x66666661 instead of the expected 0x9999999f in QEMU.
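A hedged C reference model of the intended DCOR behaviour (this is not
QEMU's TCG code; the carry layout is an assumption based on the
sequence above):

    #include <stdint.h>

    /* Subtract 6 from every BCD digit whose carry bit is clear.
     * 'carries' holds one carry bit per nibble in bits 0, 4, ..., 28,
     * as produced by the preceding add. */
    static uint32_t dcor_ref(uint32_t r, uint32_t carries)
    {
        uint32_t no_carry = ~carries & 0x11111111u;
        /* with the operands exchanged this becomes no_carry * 6 - r,
         * giving 0x66666661 for the bootrom test above instead of the
         * expected 0x9999999f */
        return r - no_carry * 6;
    }

    /* dcor_ref(0xffffffff, 0x00000001) == 0x9999999f */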
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These conditions include the signed overflow bit. See page 5-3
of the PA-RISC 1.1 Architecture Reference Manual for details.
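For reference, a hedged sketch of the signed conditions from that table
(names are illustrative, not QEMU's helpers):

    #include <stdbool.h>

    /* N = sign of the result, Z = result is zero, V = signed overflow. */
    static bool signed_cond(unsigned c, bool N, bool Z, bool V)
    {
        switch (c) {
        case 1: return Z;             /* =              */
        case 2: return N ^ V;         /* <   (signed)   */
        case 3: return (N ^ V) | Z;   /* <=  (signed)   */
        default: return false;        /* other conditions omitted */
        }
    }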
Signed-off-by: Sven Schnelle <svens@stackframe.org>
[rth: More changes for c == 3, to compute (N^V)|Z properly.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We will be fixing do_cond vs signed overflow, which requires
that do_log_cond not rely on do_cond.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When QEMU is compiled with -O0, these functions are inlined, which
will cause a wrong restart address to be generated for the TB.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that the implementation is entirely within the generated
decode function, eliminate the wrapper.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With decodetree.py, the specializations would conflict so we
must have a single entry point for all variants of OR.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert the BREAK instruction first, as a starting point.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of returning DisasJumpType, immediately store it.
Return true in preparation for conversion to the decodetree script.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
valgrind on the test-char.c code reports that 'struct termios' contains
uninitialized memory.
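A minimal sketch of the kind of fix this implies, assuming the struct
is simply zeroed before use:

    #include <string.h>
    #include <termios.h>

    /* Zero the whole struct first so no field (or padding) reaches
     * tcsetattr() uninitialized. */
    static void init_raw_termios(struct termios *tty)
    {
        memset(tty, 0, sizeof(*tty));
        cfmakeraw(tty);
    }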
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-17-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The current socket chardev tests try to exercise the chardev socket
driver in both server and client mode at the same time. The chardev API
is not very well designed to handle both ends of the connection being in
the same process so this approach makes the test case quite unpleasant
to deal with.
This splits the tests into distinct cases, one to test server socket
chardevs and one to test client socket chardevs. In each case the peer
is run in a background thread using the simpler QIOChannelSocket APIs.
The main test case code can now be written in a way that mirrors the
typical usage from within QEMU.
In doing this refactoring it is possible to greatly expand the test
coverage for the socket chardevs to test all combinations except for a
server operating in blocking wait mode.
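A hedged sketch of the peer pattern described above (not the test's
exact code; the thread body and names are assumptions):

    #include "io/channel-socket.h"
    #include "qapi/error.h"

    /* Client peer running in a background thread while the main thread
     * drives the server chardev under test. */
    static gpointer char_socket_client_peer(gpointer opaque)
    {
        SocketAddress *addr = opaque;   /* address the server listens on */
        QIOChannelSocket *ioc = qio_channel_socket_new();

        qio_channel_socket_connect_sync(ioc, addr, &error_abort);
        /* ... exchange test data over QIO_CHANNEL(ioc) ... */
        object_unref(OBJECT(ioc));
        return NULL;
    }

    /* started from the test with something like:
     *   g_thread_new("client-peer", char_socket_client_peer, addr); */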
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-16-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
When the 'reconnect' option is given for a client connection, the
qmp_chardev_open_socket_client method will run an asynchronous
connection attempt. The QIOChannel socket executes this in a single-use
background thread, so the connection will succeed immediately (assuming
the server is listening). The chardev, however, won't get the result
from this background thread until the main loop starts running and
processes idle callbacks.
Thus when tcp_chr_wait_connected is run s->ioc will be NULL, but the
state will be TCP_CHARDEV_STATE_CONNECTING, and there may already
be an established connection that will be associated with the chardev
by the pending idle callback. tcp_chr_wait_connected doesn't check the
state, only s->ioc, so it attempts to establish another connection
synchronously.
If the server allows multiple connections this is unhelpful but not a
fatal problem as the duplicate connection will get ignored by the
tcp_chr_new_client method when it sees the state is already connected.
If the server only supports a single connection, however, the
tcp_chr_wait_connected method will hang forever because the server will
not accept its synchronous connection attempt until the first connection
is closed.
To deal with this tcp_chr_wait_connected needs to synchronize with the
completion of the background connection task. To do this it needs to
create the QIOTask directly and use the qio_task_wait_thread method.
This will cancel the pending idle callback and directly dispatch the
task completion callback, allowing the connection to be associated
with the chardev. If the background connection failed, it can still
attempt a new synchronous connection.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-15-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In the previous commit
commit 1dc8a6695c
Author: Marc-André Lureau <marcandre.lureau@redhat.com>
Date: Tue Aug 16 12:33:32 2016 +0400
char: fix waiting for TLS and telnet connection
the tcp_chr_wait_connected() method was changed to check for a non-NULL
's->ioc', rather than the "connected" flag, as a sign that a connection
is already present. This was supposed to fix the handling of TLS/telnet
connections.
The original code would repeatedly call tcp_chr_wait_connected,
creating many connections, as 'connected' would never become true. The
changed code would still busy-wait, repeatedly calling
tcp_chr_wait_connected, because s->ioc is set but the chardev will
never see CHR_EVENT_OPENED.
IOW, the code is still broken with TLS/telnet, but in a different way.
Checking for a non-NULL 's->ioc' does not mean that a CHR_EVENT_OPENED
will be ready for a TLS/telnet connection. These protocols (and the
websocket protocol) all require the main loop to be running in order
to complete the protocol handshake before emitting CHR_EVENT_OPENED.
The tcp_chr_wait_connected() method is only used during early startup
before a main loop is running, so TLS/telnet/websock connections can
never complete initialization.
Making this work would require changing tcp_chr_wait_connected to run
a main loop. This is quite complex since we must not allow GSources
that other parts of QEMU have registered to run yet. The current callers
of tcp_chr_wait_connected do not require use of the TLS/telnet/websocket
protocols, so the simplest option is to just forbid this combination
completely for now.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-14-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
If establishing a client connection fails, the tcp_chr_wait_connected
method should sleep for the reconnect timeout and then retry the
attempt. This ensures the callers don't immediately abort with an
error when the initial connection fails.
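A hedged sketch of the resulting behaviour (the helper name and the
fields used here are assumptions, not the function's real code):

    /* Keep retrying until a connection is established, sleeping for
     * the configured reconnect interval between attempts. */
    static int client_wait_connected(SocketChardev *s, Error **errp)
    {
        for (;;) {
            Error *err = NULL;
            if (try_connect_once(s, &err) == 0) {   /* assumed helper */
                return 0;
            }
            if (!s->reconnect_time) {
                error_propagate(errp, err);
                return -1;
            }
            error_free(err);
            g_usleep(s->reconnect_time * G_USEC_PER_SEC);
        }
    }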
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-13-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The socket connection state is indicated via the 'bool connected' field
in the SocketChardev struct. This variable is somewhat misleading
though, as it is only set to true once the connection has completed all
required handshakes (eg for TLS, telnet or websockets). IOW there is a
period of time in which the socket is connected, but the "connected"
flag is still false.
The socket chardev really has three states that it can be in,
disconnected, connecting and connected and those should be tracked
explicitly.
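A sketch of such explicit states (the CONNECTING name already appears
elsewhere in this series; the rest is assumed):

    typedef enum {
        TCP_CHARDEV_STATE_DISCONNECTED, /* no socket at all                */
        TCP_CHARDEV_STATE_CONNECTING,   /* socket open, handshakes pending */
        TCP_CHARDEV_STATE_CONNECTED,    /* handshakes done, OPENED emitted */
    } TCPChardevState;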
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-12-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In qmp_chardev_open_socket the code for connecting client chardevs is
split across two conditionals far apart with some server chardev code in
the middle. Split up the method so that code for client connection setup
is separate from code for server connection setup.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-11-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The tcp_chr_wait_connected method can deal with either server or client
chardevs, but some callers only care about one of these possibilities.
The tcp_chr_wait_connected method will also need some refactoring to
reliably deal with its primary goal of allowing a device frontend to
wait for an established connection, which will interfere with other
callers.
Split it into two methods, one responsible for server initiated
connections, the other responsible for client initiated connections.
In doing this split, the tcp_chr_connect_async() method is renamed
to be consistent with the naming of the new methods.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-10-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The 'sioc' variable in qmp_chardev_open_socket has been unused since
commit 3e7d4d20d3
Author: Peter Xu <peterx@redhat.com>
Date: Tue Mar 6 13:33:17 2018 +0800
chardev: use chardev's gcontext for async connect
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-9-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
If no valid char driver was identified, the qemu_chr_parse_compat
method was silent, leaving callers with no clue as to what failed.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-8-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Now that all validation is separated off into a separate method,
we can directly populate the ChardevSocket struct from the
QemuOpts values, avoiding many local variables.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-7-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The 'wait'/'nowait' parameter is used to tell server sockets whether to
block until a client is accepted during initialization. Client chardevs
have always silently ignored this option. Various tests were mistakenly
passing this option for their client chardevs.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-6-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The 'reconnect' option is used to give the sleep time, in seconds,
before a client socket attempts to re-establish a connection to the
server. It does not make sense to set this for server sockets, as they
will always accept a new client connection immediately after the
previous one went away.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-5-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The TLS creds option is not valid with certain address types. The user
config was only checked for errors when parsing legacy QemuOpts, thus
the user could pass unsupported values via QMP.
Pull all code for validating options out into a new method
qmp_chardev_validate_socket, which is called from the main
qmp_chardev_open_socket method. This adds a missing check for rejecting
TLS creds with the vsock address type.
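A hedged sketch of the validation hook (field and constant names follow
the QAPI schema of the time; treat the details as assumptions):

    static bool qmp_chardev_validate_socket(ChardevSocket *sock,
                                            SocketAddress *addr,
                                            Error **errp)
    {
        switch (addr->type) {
        case SOCKET_ADDRESS_TYPE_VSOCK:
            if (sock->has_tls_creds) {
                error_setg(errp, "TLS credentials are not supported "
                           "with vsock sockets");
                return false;
            }
            break;
        default:
            break;
        }
        return true;
    }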
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-4-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Add the ability for a caller to wait for completion of the
background thread to synchronously dispatch its result, without
needing to wait for the main loop to run the idle callback.
This method needs very careful usage to avoid a dangerous
race condition with the free'ing of the task. The completion
callback is normally invoked from an idle callback registered
with the main loop context. The qio_task_wait_thread method
must only be called if the completion callback has not yet
run. The only safe way to achieve this is to run the
qio_task_wait_thread method from the thread that executes
the main loop.
It is generally a bad idea to use this method since it will block
execution of the main loop; however, the design of the character
devices and their usage from vhost-user already requires blocking
execution.
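A hedged usage sketch (the surrounding helpers are assumptions;
qio_task_new(), qio_task_run_in_thread() and the new
qio_task_wait_thread() are the intended entry points):

    #include "io/task.h"

    static void connect_done(QIOTask *task, gpointer opaque)
    {
        /* runs either from the main loop's idle callback, or directly
         * from qio_task_wait_thread() below */
    }

    static void connect_worker(QIOTask *task, gpointer opaque)
    {
        /* background connection attempt; on failure call
         * qio_task_set_error(task, ...) */
    }

    /* Must be called from the thread running the main loop, and only
     * while the completion callback cannot have run yet. */
    static void connect_and_wait(Object *source, gpointer opaque)
    {
        QIOTask *task = qio_task_new(source, connect_done, opaque, NULL);

        qio_task_run_in_thread(task, connect_worker, opaque, NULL, NULL);
        /* joins the worker, cancels the pending idle callback and
         * invokes connect_done() right here */
        qio_task_wait_thread(task);
    }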
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-3-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Currently the struct QIOTaskThreadData is only needed by the worker
thread, but a subsequent patch will need to access it from another
context.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190211182442.8542-2-berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Validate that frontend callbacks for CHR_EVENT_OPENED/CHR_EVENT_CLOSED
events are issued when expected and in strictly paired order.
Signed-off-by: Artem Pisarenko <artem.k.pisarenko@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <ac67ff2d27dd51a0075d5d634355c9e4f7bb53de.1541507990.git.artem.k.pisarenko@gmail.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
When a chardev is multiplexed (mux=on) there are a lot of cases where
the CHR_EVENT_OPENED/CHR_EVENT_CLOSED event pairing (expected by the
frontend side) is broken: either multiple repeated or extra
CHR_EVENT_OPENED events are generated, or CHR_EVENT_CLOSED just isn't
generated at all.
This is mostly because the 'qemu_chr_fe_set_handlers()' function makes
its own (and often wrong) implicit decision about the updated frontend
state and invokes the 'fd_event' callback with 'CHR_EVENT_OPENED'. Even
worse, it doesn't perform the symmetric action in the opposite
direction, as one might expect (i.e. it doesn't invoke the previously
set 'fd_event' with 'CHR_EVENT_CLOSED'). The muxed chardev relies on
the trick of calling this function again to replace the callback
handlers with its own, but it doesn't account for this side effect.
Fix that by using an extended version of the function with an added
argument that disables the side effect, and keep the original function
for compatibility with the many frontends that already use this
interface and are "tolerant" of its side effects.
One more source of event duplication is a single line of code in
char-mux.c, which does far more than the comment above it says
(obvious fix).
Signed-off-by: Artem Pisarenko <artem.k.pisarenko@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <7dde6abbd21682857f8294644013173c0b9949b3.1541507990.git.artem.k.pisarenko@gmail.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In several parts we are still using req->dev or VIRTIO_DEVICE(req->dev)
when we have already defined the s and vdev pointers:
VirtIOBlock *s = req->dev;
VirtIODevice *vdev = VIRTIO_DEVICE(s);
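For illustration, the cleanup then looks roughly like this (an assumed
completion path, not an exact hunk from the patch):

    static void example_complete(VirtIOBlockReq *req)
    {
        VirtIOBlock *s = req->dev;
        VirtIODevice *vdev = VIRTIO_DEVICE(s);

        /* before: block_acct_done(blk_get_stats(req->dev->blk), &req->acct);
         * before: virtio_notify(VIRTIO_DEVICE(req->dev), req->vq);     */
        block_acct_done(blk_get_stats(s->blk), &req->acct);
        virtio_notify(vdev, req->vq);
    }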
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Message-id: 20190208142347.214815-1-sgarzare@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The 'qemu coroutine' gdb command results in the following error output:
Python Exception <class 'gdb.error'> 'arch_prctl' has unknown return
type; cast the call to its declared return type: Error occurred in
Python command: 'arch_prctl' has unknown return type; cast the call to
its declared return type
Fix it by giving it what it wants: the arch_prctl return type.
Information on the topic:
https://sourceware.org/gdb/onlinedocs/gdb/Calling.html
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190206151425.105871-1-vsementsov@virtuozzo.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Lukáš reported a hard-to-reproduce QMP iothread hang on s390, where
QEMU might hang at pthread_join() of the QMP monitor iothread before
quitting:
Thread 1
#0 0x000003ffad10932c in pthread_join
#1 0x0000000109e95750 in qemu_thread_join
at /home/thuth/devel/qemu/util/qemu-thread-posix.c:570
#2 0x0000000109c95a1c in iothread_stop
#3 0x0000000109bb0874 in monitor_cleanup
#4 0x0000000109b55042 in main
While the iothread is still in the main loop:
Thread 4
#0 0x000003ffad0010e4 in ??
#1 0x000003ffad553958 in g_main_context_iterate.isra.19
#2 0x000003ffad553d90 in g_main_loop_run
#3 0x0000000109c9585a in iothread_run
at /home/thuth/devel/qemu/iothread.c:74
#4 0x0000000109e94752 in qemu_thread_start
at /home/thuth/devel/qemu/util/qemu-thread-posix.c:502
#5 0x000003ffad10825a in start_thread
#6 0x000003ffad00dcf2 in thread_start
IMHO it's because there's a race between the main thread and the
iothread when stopping the thread, in the following sequence:
    main thread                       iothread
    ===========                       ==============
                                      aio_poll()
    iothread_get_g_main_context
      set iothread->worker_context
    iothread_stop
      schedule iothread_stop_bh
                                      execute iothread_stop_bh [1]
                                        set iothread->running=false
                                        (since main_loop==NULL so
                                         skip to quit main loop.
                                         Note: although main_loop is
                                         NULL but worker_context is
                                         not!)
                                      atomic_read(&iothread->worker_context) [2]
                                      create main_loop object
                                      g_main_loop_run() [3]
    pthread_join() [4]
We can see that when iothread_stop_bh() is executed at [1] it's
possible that main_loop is still NULL, because it's only created at the
first check of worker_context later at [2]. The iothread will then hang
in the main loop [3] and starve the main thread too [4].
The simple solution is to check the "running" variable again before
checking worker_context.
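A hedged paraphrase of the fix in iothread_run() (names follow the
backtrace above; the exact body is an assumption):

    while (iothread->running) {
        aio_poll(iothread->ctx, true);

        /* Re-check 'running': iothread_stop_bh() may have cleared it
         * during the aio_poll() above, before worker_context is
         * examined here. */
        if (iothread->running && atomic_read(&iothread->worker_context)) {
            GMainLoop *loop = g_main_loop_new(iothread->worker_context, TRUE);

            iothread->main_loop = loop;
            g_main_loop_run(loop);
            iothread->main_loop = NULL;
            g_main_loop_unref(loop);
        }
    }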
CC: Thomas Huth <thuth@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Lukáš Doktor <ldoktor@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20190129051432.22023-1-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>