Implement zero copy send in nocomp_send_write() by making use of the
QIOChannel writev + flags & flush interface.
Change multifd_send_sync_main() so flush_zero_copy() can be called
after each iteration in order to make sure all dirty pages are sent before
a new iteration is started. It will also flush at the beginning and at the
end of migration.
Also make it return -1 if flush_zero_copy() fails, in order to cancel
the migration process and avoid resuming the guest on the target host
before all of the current RAM has been received.
This works fine for RAM migration because RAM pages are not usually freed,
and changing a page's contents between writev_zero_copy() and the actual
sending of the buffer is not a problem: the change dirties the page and
causes it to be re-sent in a later iteration anyway.
A lot of locked memory may be needed in order to use multifd migration
with zero-copy enabled, so it must remain possible to disable the feature
for low-privileged users trying to perform multifd migrations.
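Roughly, the sync path then looks like this (a sketch only, not the verbatim
patch; helper names are the ones introduced earlier in this series):

    int multifd_send_sync_main(QEMUFile *f)
    {
        int i;

        /* ... existing per-channel packet and semaphore handling elided ... */

        for (i = 0; i < migrate_multifd_channels(); i++) {
            MultiFDSendParams *p = &multifd_send_state->params[i];

            if (migrate_use_zero_copy_send()) {
                Error *local_err = NULL;

                /* Make sure every MSG_ZEROCOPY write of this iteration has
                 * really left the host before a new iteration starts. */
                if (qio_channel_flush(p->c, &local_err) < 0) {
                    error_report_err(local_err);
                    return -1;
                }
            }
        }
        return 0;
    }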
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220513062836.965425-9-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Since d48c3a0445 ("multifd: Use a single writev on the send side"),
sending the header packet and the memory pages happens in the same
writev, which can potentially make the migration faster.
Taking channel-socket as an example, this works well with the default
copying mechanism of sendmsg(), but with zero-copy-send=true it often
causes the migration to break.
This happens because the header packet buffer gets reused quite often,
and there is a high chance that by the time the MSG_ZEROCOPY mechanism gets
to send the buffer, it has already changed, sending the wrong data and
causing the migration to abort.
It means that, as it is, the buffer for the header packet is not suitable
for sending with MSG_ZEROCOPY.
In order to enable zero copy for multifd, send the header packet in an
individual write(), without any flags, and the remaining pages with a
writev(), as was happening before. This only changes how a migration
with zero-copy-send=true works, without changing any current behavior for
migrations with zero-copy-send=false.
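A sketch of the resulting send path (field names approximate):

    if (use_zero_copy_send) {
        /* Send the frequently reused header on its own, with a plain
         * copying write, so later reuse cannot corrupt queued data. */
        ret = qio_channel_write_all(p->c, (void *)p->packet,
                                    p->packet_len, &local_err);
        if (ret != 0) {
            break;
        }
    } else {
        /* Keep the old behavior: header and pages in the same writev. */
        p->iov[0].iov_base = p->packet;
        p->iov[0].iov_len = p->packet_len;
    }

    ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num, NULL, 0,
                                      p->write_flags, &local_err);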
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220513062836.965425-8-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Even though multifd_send_sync_main() currently emits error_reports, its
callers don't really check its result before continuing.
Change multifd_send_sync_main() to return -1 on error and 0 on success.
Also change all its callers to make use of this change and possibly fail
earlier.
(This change is important to the next patch in the multifd zero copy
implementation, to make sure an error in the zero-copy flush does not go
unnoticed.)
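The caller-side pattern then becomes, roughly (rs->f standing in for the
migration QEMUFile):

    ret = multifd_send_sync_main(rs->f);
    if (ret < 0) {
        /* propagate the error instead of silently continuing */
        return ret;
    }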
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220513062836.965425-7-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
A lot of places check parameters.tls_creds in order to evaluate if TLS is
in use, and sometimes call migrate_get_current() just for that test.
Add a new helper function migrate_use_tls() in order to simplify testing
for TLS usage.
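A sketch of the helper (the exact implementation may differ):

    int migrate_use_tls(void)
    {
        MigrationState *s = migrate_get_current();

        return s->parameters.tls_creds && *s->parameters.tls_creds;
    }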
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220513062836.965425-6-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add a property that allows zero-copy migration of memory pages
on the sending side, and also include a helper function
migrate_use_zero_copy_send() to check if it's enabled.
No code is introduced to actually do the migration, but it allows
future implementations to enable/disable this feature.
On non-Linux builds this parameter is compiled out.
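A sketch of the Linux side of the helper (the exact implementation may
differ):

    #ifdef CONFIG_LINUX
    bool migrate_use_zero_copy_send(void)
    {
        MigrationState *s = migrate_get_current();

        return s->parameters.zero_copy_send;
    }
    #endif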
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220513062836.965425-5-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
For CONFIG_LINUX, implement the new zero copy flag and the optional io_flush
callback on QIOChannelSocket, but enable it only when the MSG_ZEROCOPY
feature is available in the host kernel, which is checked in
qio_channel_socket_connect_sync().
qio_channel_socket_flush() was implemented by counting how many times
sendmsg(...,MSG_ZEROCOPY) was successfully called, and then reading the
socket's error queue, in order to find how many of them finished sending.
Flush will loop until those counters are the same, or until some error occurs.
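A condensed sketch of that loop, with the two per-channel counters named
zero_copy_queued/zero_copy_sent and most error handling simplified (field
names may differ slightly from the final code):

    int qio_channel_socket_flush(QIOChannel *ioc, Error **errp)
    {
        QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
        char control[CMSG_SPACE(sizeof(struct sock_extended_err))];
        struct msghdr msg = { .msg_control = control,
                              .msg_controllen = sizeof(control) };
        struct cmsghdr *cm;
        struct sock_extended_err *serr;

        while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
            if (recvmsg(sioc->fd, &msg, MSG_ERRQUEUE) < 0) {
                if (errno == EAGAIN) {
                    continue;   /* nothing completed yet, keep waiting */
                }
                error_setg_errno(errp, errno, "Unable to read errqueue");
                return -1;
            }

            cm = CMSG_FIRSTHDR(&msg);
            serr = (struct sock_extended_err *)CMSG_DATA(cm);
            /* ee_info..ee_data is an inclusive range of completed writes */
            sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
        }
        return 0;
    }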
Notes on using writev() with QIO_CHANNEL_WRITE_FLAG_ZERO_COPY:
1: Buffer
- As MSG_ZEROCOPY tells the kernel to use the same user buffer to avoid copying,
some caution is necessary to avoid overwriting any buffer before it's sent.
If something like this happens, a newer version of the buffer may be sent instead.
- If this is a problem, it's recommended to call qio_channel_flush() before freeing
or re-using the buffer.
2: Locked memory
- When using MSG_ZEROCOPY, the buffer memory will be locked after it is queued,
and unlocked after it's sent.
- Depending on the size of each buffer and how often it's sent, this may require
a larger amount of locked memory than is usually available to non-root users.
- If the required amount of locked memory is not available, writev_zero_copy
will return an error, which can abort an operation like migration.
- Because of this, when user code wants to add zero copy as a feature, it
requires a mechanism to disable it, so the feature can still be accessible to
less privileged users.
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20220513062836.965425-4-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add flags to io_writev and introduce io_flush as an optional callback to
QIOChannelClass, allowing the implementation of zero copy writes by
subclasses.
How to use them:
- Write data using qio_channel_writev*(...,QIO_CHANNEL_WRITE_FLAG_ZERO_COPY),
- Wait write completion with qio_channel_flush().
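A short sketch of that calling pattern (assuming a QIOChannel *ioc and an
iovec already set up; this is illustrative, not taken verbatim from any
caller):

    Error *err = NULL;

    if (qio_channel_writev_full_all(ioc, iov, niov, NULL, 0,
                                    QIO_CHANNEL_WRITE_FLAG_ZERO_COPY,
                                    &err) < 0) {
        /* handle the write error */
    }

    /* ... later, before the iovec buffers are freed or rewritten ... */
    if (qio_channel_flush(ioc, &err) < 0) {
        /* handle the flush error, e.g. not enough locked memory */
    }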
Notes:
As some zero copy write implementations work asynchronously, it's
recommended to keep the write buffer untouched until the return of
qio_channel_flush(), to avoid the risk of sending an updated buffer
instead of the buffer state during write.
As the io_flush callback is optional, if a subclass does not implement it, then:
- io_flush will return 0 without changing anything.
Also, some functions like qio_channel_writev_full_all() were adapted to
receive a flags parameter. That allows code to be shared between zero copy and
non-zero copy writev, and also makes it easier to implement new flags.
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20220513062836.965425-3-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
A build error happens in the alpine CI when linux/errqueue.h is included
in io/channel-socket.c, due to a redefinition of 'struct __kernel_timespec':
===
ninja: job failed: [...]
In file included from /usr/include/linux/errqueue.h:6,
from ../io/channel-socket.c:29:
/usr/include/linux/time_types.h:7:8: error: redefinition of 'struct __kernel_timespec'
7 | struct __kernel_timespec {
| ^~~~~~~~~~~~~~~~~
In file included from /usr/include/liburing.h:19,
from /builds/user/qemu/include/block/aio.h:18,
from /builds/user/qemu/include/io/channel.h:26,
from /builds/user/qemu/include/io/channel-socket.h:24,
from ../io/channel-socket.c:24:
/usr/include/liburing/compat.h:9:8: note: originally defined here
9 | struct __kernel_timespec {
| ^~~~~~~~~~~~~~~~~
ninja: subcommand failed
===
As the above error message suggests, 'struct __kernel_timespec' was already
defined by liburing/compat.h.
Fix the alpine CI by adding a test that disables liburing in the configure
step if a redefinition happens between linux/errqueue.h and liburing/compat.h.
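A minimal illustration of the kind of probe the configure/meson step can try
to compile; if it fails to build, liburing gets disabled (a sketch of the
idea, not the exact test added):

    #include <liburing.h>
    #include <linux/errqueue.h>

    int main(void)
    {
        return 0;
    }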
[dgilbert: This has been fixed in Alpine issue 13813 and liburing]
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Message-Id: <20220513062836.965425-2-leobras@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Various methods in the migration test call 'query_migrate' to fetch the
current status and then access a particular field. Almost all of these
cases expect the migration to be in a non-failed state. In the case of
'wait_for_migration_pass' in particular, if the status is 'failed' then
it will get into an infinite loop. By validating that the status is
not 'failed' the test suite will assert rather than hang when getting
into an unexpected state.
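As an illustration, the guard amounts to something like the following
(hypothetical helper; the real code checks the status fetched via
query_migrate before using any other field):

    static void ensure_not_failed(const char *status)
    {
        /* assert instead of looping forever on a failed migration */
        g_assert_cmpstr(status, !=, "failed");
    }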
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-10-berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This validates that we correctly handle multifd migration success
and failure scenarios when using TLS with x509 certificates. There
are quite a few different scenarios that matter in relation to
hostname validation, but we skip a couple as we can assume that
the non-multifd coverage applies to some extent.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-9-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This validates that we correctly handle multifd migration success
and failure scenarios when using TLS with pre shared keys.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-8-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Most of the multifd migration test logic is common with the rest of the
precopy tests, so it can use the helper without difficulty. The only
exception is the multifd cancellation test, which tries to run multiple
migrations in a row.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-7-berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Most of the XBZRLE migration test logic is common with the rest of the
precopy tests, so it can use the helper with just one small tweak.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-6-berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This validates that we correctly handle migration success and failure
scenarios when using TLS with x509 certificates. There are quite a few
different scenarios that matter in relation to hostname validation.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-5-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Manual merge due to ifdef change in 3
This validates that we correctly handle migration success and failure
scenarios when using TLS with pre shared keys.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-4-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
These macros are more suited to the general consumers of certs in the
test suite, where we don't need to exercise every single possible
permutation.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-3-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We need to encode just the address bytes, not the whole struct sockaddr
data. Add a test case to validate that we're matching on SAN IP
addresses correctly.
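For illustration, the corrected pattern looks roughly like this, where crt
stands for the gnutls_x509_crt_t being built by the test helpers:

    struct in_addr ipv4;

    inet_pton(AF_INET, "192.168.0.1", &ipv4);
    /* hand GnuTLS only the raw address bytes, not a whole struct sockaddr */
    gnutls_x509_crt_set_subject_alt_name(crt, GNUTLS_SAN_IPADDRESS,
                                         &ipv4, sizeof(ipv4),
                                         GNUTLS_FSAN_APPEND);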
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160048.812266-2-berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merge tag 'or1k-pull-request-20220515' of https://github.com/stffrdhrn/qemu into staging
OpenRISC Fixes for 7.0
- A few or1ksim fixes and enhancements
- A fix for OpenRISC tcg backend around delay slot handling
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE2cRzVK74bBA6Je/xw7McLV5mJ+QFAmKAWKsACgkQw7McLV5m
# J+SoFg/8Dlrc2BqjjXw9gpaQ18+3BRI6dMVPqHA22VJks88gykH7UWLUbrCxtKnS
# SBcIcpzu17nKDdfwYWndqCr0UBM/zM3JzFrTv4QEhTEg6Np7lSM2KVNEhjBPVGoW
# A7QOjPFrwItWOfAx6hrcczpj+L50iKuWeMW0XnEfqSeDYisxZcSp2yMoe5h3y7bF
# tlpo+ha/ir/fd2kMlFrQlPWYiWkWM05RLJJOlXhdRMF7hrW5qlHqEB/SVykUTf7V
# 6fqOFvY6r3vE5OFm0Scgf/k2AJIxwV8qXkBJ5/egv+ZqUidZBQ9nXtOw++vF2AWp
# eKoU2/c2XIxiF1Xdpgdi6a/CxlLqrr9jraQROB3GpaL9zGQvd//wUCg0F+QLicLv
# avq4lvNmnat89aXj1DQ+DWpLy0zaZFGmxsPR+KeBJ2wkuEJ3Vd4+uiuAyXnm9M8D
# wEE8mgFQYsTL1WlgHF4uNTDIx8OLS+4gYlBE3tffRksxyLLwzKHHgAfLdNZvhfx8
# QZBuPy+yyO8zjr3RUVUArBs/ukZHP1QwDE6uxmPKV34tvVEbFVeSFY3a1LmYV3w5
# mZNALNqf+h5Dq5Qo7f7cGNMrzhL53GTWPNX0MK5+SBDZF3/fpPZyvCr4Zd69Z5tD
# +YClfWBv8HPjdUf+IFHqyE8rURw/sgNvgB76GpalwcUYXRr7zTM=
# =tmP4
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 14 May 2022 06:34:35 PM PDT
# gpg: using RSA key D9C47354AEF86C103A25EFF1C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* tag 'or1k-pull-request-20220515' of https://github.com/stffrdhrn/qemu:
target/openrisc: Do not reset delay slot flag on early tb exit
hw/openrisc: use right OMPIC size variable
hw/openrisc: support 4 serial ports in or1ksim
hw/openrisc: page-align FDT address
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This was found when running the Linux crypto algorithm selftests used by
wireguard. We found that the tests would randomly fail. Investigation
showed that the issue was caused by a combination of a tick timer
interrupt raised while executing a delay slot instruction at a page
boundary.
This was caused when handling the TB_EXIT_REQUESTED case in cpu_tb_exec.
On OpenRISC, which doesn't implement synchronize_from_tb, set_pc was
being used as a fallback. The OpenRISC set_pc implementation clears
dflag, which caused the exception handling logic to not account for the
delay slot. This was the bug, because it meant that when execution resumed
after the interrupt was handled, it resumed in the wrong place.
Fix this by implementing synchronize_from_tb, which simply updates pc
and does not clear the delay slot flag.
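A sketch of the new hook (modulo the exact accessor names in that QEMU
version):

    static void openrisc_cpu_synchronize_from_tb(CPUState *cs,
                                                 const TranslationBlock *tb)
    {
        OpenRISCCPU *cpu = OPENRISC_CPU(cs);

        /* Restore only the PC; unlike set_pc, leave dflag untouched so the
         * pending delay slot is still accounted for on the next exit. */
        cpu->env.pc = tb->pc;
    }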
Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This appears to be a copy and paste error. The UART size was used
instead of the much smaller OMPIC size. But actually that smaller OMPIC
size is wrong too and doesn't allow the IPI to work in Linux. So set it
to the old value.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[smh:Updated OR1KSIM_OMPIC size to use OR1KSIM_CPUS_MAX]
Signed-off-by: Stafford Horne <shorne@gmail.com>
The 8250 serial controller supports 4 serial ports, so wire them all up,
so that we can have more than one basic I/O channel.
Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[smh:Fixup indentation and lines over 80 chars]
Signed-off-by: Stafford Horne <shorne@gmail.com>
Update to c5eb0a61238d ("Linux 5.18-rc6"). Mechanical search and
replace of vfio defines with white space massaging.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
common.rc has some complicated logic to find the common.config that
dates back to xfstests and is completely unnecessary now. Just include
the contents of the file.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220505094723.732116-1-pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
error_report should generally be followed by a failure; if we can proceed
anyway, that is just a warning and should be communicated properly to
the user with warn_report.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220511175043.27327-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to the NBD spec, a server that advertises
NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
not see any cache inconsistencies: when properly separated by a single
flush, actions performed by one client will be visible to another
client, regardless of which client did the flush.
We always satisfy these conditions in qemu - even when we support
multiple clients, ALL clients go through a single point of reference
into the block layer, with no local caching. The effect of one client
is instantly visible to the next client. Even if our backend were a
network device, we argue that any multi-path caching effects that
would cause inconsistencies in back-to-back actions not seeing the
effect of previous actions would be a bug in that backend, and not the
fault of caching in qemu. As such, it is safe to unconditionally
advertise CAN_MULTI_CONN for any qemu NBD server situation that
supports parallel clients.
Note, however, that we don't want to advertise CAN_MULTI_CONN when we
know that a second client cannot connect (for historical reasons,
qemu-nbd defaults to a single connection while nbd-server-add and QMP
commands default to unlimited connections; but we already have
existing means to let either style of NBD server creation alter those
defaults). This is visible by no longer advertising MULTI_CONN for
'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation.
The harder part of this patch is setting up an iotest to demonstrate
behavior of multiple NBD clients to a single server. It might be
possible with parallel qemu-io processes, but I found it easier to do
in python with the help of libnbd, and help from Nir and Vladimir in
writing the test.
Signed-off-by: Eric Blake <eblake@redhat.com>
Suggested-by: Nir Soffer <nsoffer@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
Message-Id: <20220512004924.417153-3-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The next patch wants to adjust whether the NBD server code advertises
MULTI_CONN based on whether it is known if the server limits to
exactly one client. For a server started by QMP, this information is
obtained through nbd_server_start (which can support more than one
export); but for qemu-nbd (which supports exactly one export), it is
controlled only by the command-line option -e/--shared. Since we
already have a hook function used by qemu-nbd, it's easiest to just
alter its signature to fit our needs.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220512004924.417153-2-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add the reproducer from https://gitlab.com/qemu-project/qemu/-/issues/339
Without the previous commit, when running 'make check-qtest-i386'
with QEMU configured with '--enable-sanitizers' we get:
==4028352==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000062a00 at pc 0x5626d03c491a bp 0x7ffdb4199410 sp 0x7ffdb4198bc0
READ of size 786432 at 0x619000062a00 thread T0
#0 0x5626d03c4919 in __asan_memcpy (qemu-system-i386+0x1e65919)
#1 0x5626d1c023cc in flatview_write_continue softmmu/physmem.c:2787:13
#2 0x5626d1bf0c0f in flatview_write softmmu/physmem.c:2822:14
#3 0x5626d1bf0798 in address_space_write softmmu/physmem.c:2914:18
#4 0x5626d1bf0f37 in address_space_rw softmmu/physmem.c:2924:16
#5 0x5626d1bf14c8 in cpu_physical_memory_rw softmmu/physmem.c:2933:5
#6 0x5626d0bd5649 in cpu_physical_memory_write include/exec/cpu-common.h:82:5
#7 0x5626d0bd0a07 in i8257_dma_write_memory hw/dma/i8257.c:452:9
#8 0x5626d09f825d in fdctrl_transfer_handler hw/block/fdc.c:1616:13
#9 0x5626d0a048b4 in fdctrl_start_transfer hw/block/fdc.c:1539:13
#10 0x5626d09f4c3e in fdctrl_write_data hw/block/fdc.c:2266:13
#11 0x5626d09f22f7 in fdctrl_write hw/block/fdc.c:829:9
#12 0x5626d1c20bc5 in portio_write softmmu/ioport.c:207:17
0x619000062a00 is located 0 bytes to the right of 512-byte region [0x619000062800,0x619000062a00)
allocated by thread T0 here:
#0 0x5626d03c66ec in posix_memalign (qemu-system-i386+0x1e676ec)
#1 0x5626d2b988d4 in qemu_try_memalign util/oslib-posix.c:210:11
#2 0x5626d2b98b0c in qemu_memalign util/oslib-posix.c:226:27
#3 0x5626d09fbaf0 in fdctrl_realize_common hw/block/fdc.c:2341:20
#4 0x5626d0a150ed in isabus_fdc_realize hw/block/fdc-isa.c:113:5
#5 0x5626d2367935 in device_set_realized hw/core/qdev.c:531:13
SUMMARY: AddressSanitizer: heap-buffer-overflow (qemu-system-i386+0x1e65919) in __asan_memcpy
Shadow bytes around the buggy address:
0x0c32800044f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c3280004510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c3280004520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c3280004530: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c3280004540:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004550: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004560: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004570: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004580: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c3280004590: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Heap left redzone: fa
Freed heap region: fd
==4028352==ABORTING
[ kwolf: Added snapshot=on to prevent write file lock failure ]
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Per the 82078 datasheet, if the end-of-track (EOT byte in
the FIFO) is more than the number of sectors per side, the
command is terminated unsuccessfully:
* 5.2.5 DATA TRANSFER TERMINATION
The 82078 supports terminal count explicitly through the TC pin and
implicitly through the underrun/overrun and end-of-track (EOT) functions.
For full sector transfers, the EOT parameter can define the last sector
to be transferred in a single or multisector transfer. If the last sector
to be transferred is a partial sector, the host can stop transferring the
data in mid-sector, and the 82078 will continue to complete the sector as
if a hardware TC was received. The only difference between these implicit
functions and TC is that they return "abnormal termination" result status.
Such status indications can be ignored if they were expected.
* 6.1.3 READ TRACK
This command terminates when the EOT specified number of sectors have
been read. If the 82078 does not find an ID Address Mark on the diskette
after the second occurrence of a pulse on the INDX# pin, then it sets the
IC code in Status Register 0 to "01" (Abnormal termination), sets the MA
bit in Status Register 1 to "1", and terminates the command.
* 6.1.6 VERIFY
Refer to Table 6-6 and Table 6-7 for information concerning the values of
MT and EC versus SC and EOT value.
* Table 6-6. Result Phase Table
* Table 6-7. Verify Command Result Phase Table
Fix by aborting the transfer when EOT > # Sectors Per Side.
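The guard amounts to something like the following sketch (field names
approximate; fifo[6] is where the EOT byte of the command ends up):

    if (fdctrl->fifo[6] > cur_drv->last_sect) {
        /* EOT is beyond the number of sectors per side: abnormal end */
        fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM, 0x00, 0x00);
        return;
    }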
Cc: qemu-stable@nongnu.org
Cc: Hervé Poussineau <hpoussin@reactos.org>
Fixes: baca51faff ("floppy driver: disk geometry auto detect")
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/339
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211118115733.4038610-2-philmd@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently png support depends on vnc for linking the object file to
libpng. This commit makes the parameter independent of vnc, as the
current setup breaks the system emulator with --disable-vnc unless
--disable-png is also added.
Fixes: 9a0a119a38 ("Added parameter to take screenshot with screendump as PNG", 2022-04-27)
Signed-off-by: Kshitij Suri <kshitij.suri@nutanix.com>
Message-Id: <20220510161932.228481-1-kshitij.suri@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The vsock callbacks .vhost_vsock_set_guest_cid and
.vhost_vsock_set_running are the only ones to be conditional
on #ifdef CONFIG_VHOST_VSOCK. This is different from any other
device-dependent callbacks like .vhost_scsi_set_endpoint, and it
also broke when CONFIG_VHOST_VSOCK was changed to a per-target
symbol.
It would be possible to also use the CONFIG_DEVICES include, but
really there is no reason for most virtio files to be per-target
so just remove the #ifdef to fix the issue.
Reported-by: Dov Murik <dovmurik@linux.ibm.com>
Fixes: 9972ae314f ("build: move vhost-vsock configuration to Kconfig")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
qemu_co_queue_restart_all is basically the same as qemu_co_enter_all
but without a QemuLockable argument. That's perfectly fine, but only as
long as the function is marked coroutine_fn. If used outside coroutine
context, qemu_co_queue_wait will attempt to take the lock and that
is just broken: if you are calling qemu_co_queue_restart_all outside
coroutine context, the lock is going to be a QemuMutex which cannot be
taken twice by the same thread.
The patch adds the marker to qemu_co_queue_restart_all and to its sole
non-coroutine_fn caller; it then reimplements the function in terms of
qemu_co_enter_all_impl, to remove duplicated code and to clarify that the
latter also works in coroutine context.
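The reimplementation is then essentially (sketch):

    void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue)
    {
        /* no QemuLockable to drop: callers are in coroutine context */
        qemu_co_enter_all_impl(queue, NULL);
    }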
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-4-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because qemu_co_queue_restart_all does not release the lock, it should
be used only in coroutine context. Introduce a new function that,
like qemu_co_enter_next, does release the lock, and use it whenever
qemu_co_queue_restart_all was used outside coroutine context.
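A sketch of what such a helper can look like, built on the
qemu_co_enter_next_impl primitive mentioned in the neighbouring patches:

    void qemu_co_enter_all_impl(CoQueue *queue, QemuLockable *lock)
    {
        /* wake everything; 'lock' is dropped and re-taken as each
         * coroutine is entered */
        while (qemu_co_enter_next_impl(queue, lock)) {
            /* keep going until the queue is empty */
        }
    }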
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-3-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
qemu_co_queue_next is basically the same as qemu_co_enter_next but
without a QemuLockable argument. That's perfectly fine, but only
as long as the function is marked coroutine_fn. If used outside
coroutine context, qemu_co_queue_wait will attempt to take the lock
and that is just broken: if you are calling qemu_co_queue_next outside
coroutine context, the lock is going to be a QemuMutex which cannot be
taken twice by the same thread.
The patch adds the marker and reimplements qemu_co_queue_next in terms of
qemu_co_enter_next_impl, to remove duplicated code and to clarify that the
latter also works in coroutine context.
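The resulting function is then essentially (sketch):

    bool coroutine_fn qemu_co_queue_next(CoQueue *queue)
    {
        /* no QemuLockable here: the caller is already in coroutine context */
        return qemu_co_enter_next_impl(queue, NULL);
    }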
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-2-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
slirp 4.7 introduces a new CFI-friendly timer callback that does
not pass function pointers within libslirp as callbacks for timers.
Check the version number and, if it is new enough, allow using CFI
even with a system libslirp.
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Marc-André Lureau <malureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
libslirp 4.7 introduces a CFI-friendly version of the .timer_new callback.
The new callback replaces the function pointer with an enum; invoking the
callback is done with a new function slirp_handle_timer.
Support the new API so that CFI can be made compatible with using a system
libslirp.
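A sketch of how that can look on the QEMU side, with a hypothetical wrapper
struct standing in for the real one in net/slirp.c:

    struct SlirpTimer {            /* hypothetical name */
        Slirp *slirp;
        SlirpTimerId id;           /* enum from libslirp 4.7 */
        void *cb_opaque;
        QEMUTimer timer;
    };

    static void net_slirp_timer_cb(void *opaque)
    {
        struct SlirpTimer *t = opaque;

        /* no libslirp function pointer is stored; replay the id instead */
        slirp_handle_timer(t->slirp, t->id, t->cb_opaque);
    }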
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Marc-André Lureau <malureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace slirp_init with slirp_new, so that a more recent cfg.version
can be specified. The function appeared in version 4.1.0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This struct will be extended in the next few patches to support the
new slirp_handle_timer() call. For that we need to store an additional
"int" for each SLIRP timer, in addition to the cb_opaque.
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Marc-André Lureau <malureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Version 4.7 of slirp provides a new timer API that works better with CFI,
together with several other improvements:
* Allow disabling the internal DHCP server !22
* Support Unix sockets in hostfwd !103
* IPv6 DNS proxying support !110
* bootp: add support for UEFI HTTP boot !111
and bugfixes.
The submodule update also includes 2 commits to fix warnings in the
Win32 build.
Reviewed-by: Marc-André Lureau <malureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This allows setting memory properties without going through
vl.c, and having them validated just the same.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220414165300.555321-6-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Handle HostMemoryBackend creation and setting of ms->ram entirely in
machine_run_board_init.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220414165300.555321-5-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make -m syntactic sugar for a compound property "-machine
mem.{size,max-size,slots}". The new property does not have
the magic conversion of unsuffixed arguments to megabytes,
and also does not understand that "0" means the default size
(you have to leave it out to get the default). This means
that we need to convert the QemuOpts to a QDict by hand.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220414165300.555321-4-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make -boot syntactic sugar for a compound property "-machine boot.{order,menu,...}".
machine_boot_parse is replaced by the setter for the property.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220414165300.555321-3-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As part of converting -boot to a property with a QAPI type, define
the struct and use it throughout QEMU to access boot configuration.
machine_boot_parse takes care of doing the QemuOpts->QAPI conversion by
hand, for now.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220414165300.555321-2-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When running 'make check' we only get a summary of progress on the
console. Fortunately meson/ninja have saved the raw test output to a
logfile. Exposing this log will make it easier to debug failures that
happen in CI.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220509124134.867431-3-berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When running I/O tests using TAP output mode, we get a single TAP test
with a sub-test reported for each I/O test that is run. The output looks
something like this:
1..123
ok qcow2 011
ok qcow2 012
ok qcow2 013
ok qcow2 217
...
If everything runs or fails normally this is fine, but periodically we
have been seeing the test harness abort early before all 123 tests have
been run, leaving just a fairly useless message like:
TAP parsing error: Too few tests run (expected 123, got 107)
which gives us no idea which tests were running at the time the test
harness abruptly exited. This change causes us to print a message about our
intent to run each test, so we have a record of what is active at the
time the harness exits abnormally.
1..123
# running qcow2 011
ok qcow2 011
# running qcow2 012
ok qcow2 012
# running qcow2 013
ok qcow2 013
# running qcow2 217
ok qcow2 217
...
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220509124134.867431-2-berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When stdout is not a terminal, the buffer may not be flushed at each end
of line, so we should flush after each test is done. This is especially
apparent when run by check-block, in two ways:
First, when running make check-block -jX with X > 1, progress indication
was missing, even though testrunner.py does theoretically print each
test's status once it has been run, even in multi-processing mode.
Flushing after each test restores this progress indication.
Second, sometimes make check-block failed altogether, with an error
message that "too few tests [were] run". I presume that's because one
worker process in the job pool did not get to flush its stdout before
the main process exited, and so meson did not get to see that worker's
test results. In any case, by flushing at the end of run_test(), the
problem has disappeared for me.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220506134215.10086-1-hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>