Using the previous patches, we're now able to timestamp the SETUP
state. Once we have this time, expose it to the user through the
schema.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
As described in the previous patch, until now, the MIG_STATE_SETUP
state was not really a 'formal' state. It has been used as a 'zero' state
(what we're calling 'NONE' here) and QEMU has been unconditionally transitioning
into this state when the QMP migration command was called. Instead we want to
introduce MIG_STATE_NONE, which is our starting state in the state machine, and
then immediately transition into the MIG_STATE_SETUP state when the QMP migrate
command is issued.
In order to do this, we must delay the transition into MIG_STATE_ACTIVE until
later in migration_thread(). This is done so that we can measure the amount of
time spent in the SETUP state and report it accurately to the user during
an RDMA migration.
Furthermore, until now the management software has never been aware of the
existence of the SETUP state at all. This must change: timing this
state implies that the state actually exists.
These two patches cannot be separated because the 'query_migrate' QMP
switch statement needs to know how to handle this new state transition.
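As a rough illustration of the shape of that change (the field and helper
names here are assumptions for the sketch, not the exact QEMU identifiers),
the query-migrate handler gains a case along these lines:

    static void fill_migration_status(MigrationInfo *info, MigrationState *s)
    {
        switch (s->state) {
        case MIG_STATE_SETUP:
            /* new case: report the state and the time spent in it */
            info->has_status = true;
            info->status = g_strdup("setup");
            info->has_setup_time = true;
            info->setup_time = s->setup_time;   /* milliseconds in SETUP */
            break;
        case MIG_STATE_ACTIVE:
            info->has_status = true;
            info->status = g_strdup("active");
            break;
        default:
            break;
        }
    }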
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch is in preparation for the next ones: until now the MIG_STATE_SETUP
state was not really a 'formal' state. It has been used as a 'zero' state,
and QEMU has been unconditionally transitioning into it when
the QMP migrate command was called. In preparation for timing this state,
we have to make it a 'real' state which is actually transitioned
out of later in migration_thread(), from SETUP => ACTIVE, rather than just
automatically dropping into it at the beginning of the migration.
This means that the state transition function (migration_finish_set_state())
needs to be capable of transitioning from valid states _other_ than just
MIG_STATE_ACTIVE.
The function is in fact already capable of doing that, but it did not accept
the old state as an input parameter.
This patch fixes that: the transition is now made only if the current state
matches the old state that the caller intended to transition from.
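A minimal sketch of the resulting helper (the compare-and-swap detail is an
assumption of the sketch, not necessarily the literal implementation):

    static void migration_finish_set_state(MigrationState *s,
                                           int old_state, int new_state)
    {
        /* only transition if nobody (e.g. a concurrent cancel) changed
         * the state away from what the caller expected */
        if (__sync_bool_compare_and_swap(&s->state, old_state, new_state)) {
            /* transition performed: old_state -> new_state */
        }
    }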
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Code that needs to be visible is kept well contained inside this file,
which is the only new file added by the entire patch.
This file includes the entire protocol and the interfaces
required to perform RDMA migration.
The configure and Makefile modifications needed to link
this file are also included.
Full documentation is in docs/rdma.txt
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch adds an efficient encoding for zero blocks by
adding a new flag indicating that a block is completely zero.
Additionally, bdrv_write_zeros() is used at the destination
to write these zeroes efficiently. Depending on the implementation,
this avoids fully provisioning the destination target.
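A hedged sketch of the sender-side encoding (the flag value and some
identifiers are assumptions, not the literal block-migration.c hunk):

    #define BLK_MIG_FLAG_ZERO_BLOCK 0x04        /* hypothetical flag bit */

    static void blk_send_sketch(QEMUFile *f, BlkMigBlock *blk)
    {
        uint64_t flags = BLK_MIG_FLAG_DEVICE_BLOCK;

        if (buffer_is_zero(blk->buf, BLOCK_SIZE)) {
            flags |= BLK_MIG_FLAG_ZERO_BLOCK;   /* no payload follows */
        }
        qemu_put_be64(f, (blk->sector << BDRV_SECTOR_BITS) | flags);
        if (!(flags & BLK_MIG_FLAG_ZERO_BLOCK)) {
            qemu_put_buffer(f, blk->buf, BLOCK_SIZE);
        }
        /* the destination sees the flag and calls bdrv_write_zeros()
         * instead of writing a buffer full of zeroes */
    }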
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
# By Chegu Vinod
# Via Juan Quintela
* quintela/migration.next:
Force auto-convergence of live migration
Add 'auto-converge' migration capability
Introduce async_run_on_cpu()
Message-id: 1373664508-5404-1-git-send-email-quintela@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
If bdrv_flush_all() returns an error, there is an inconsistency in the
view of an image file between the source and the destination host.
Completing the migration would lead to corruption. Better abort
migration in this case.
To reproduce this case, try the following (ensures that there is
something to flush, and then fails that flush):
$ qemu-img create -f qcow2 test.qcow2 1G
$ cat blkdebug.cfg
[inject-error]
event = "flush_to_os"
errno = "5"
$ qemu-system-x86_64 -hda blkdebug:blkdebug.cfg:test.qcow2 -monitor stdio
(qemu) qemu-io ide0-hd0 "write 0 4k"
(qemu) migrate ...
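The shape of the fix, roughly (a sketch of the completion path; the exact
placement is an assumption, not the literal diff):

    ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
    if (ret >= 0) {
        qemu_savevm_state_complete(s->file);
    }
    /* if the flush inside the VM stop failed, the migration is marked as
     * failed instead of being completed with inconsistent disk state */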
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The auto-converge migration capability allows the user to specify whether the
live migration sequence should automatically detect and force convergence.
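For illustration, the capability is expected to be consulted through the usual
accessor pattern (a sketch; the generated MIGRATION_CAPABILITY_AUTO_CONVERGE
constant is assumed from the capability name):

    static bool migrate_auto_converge(void)
    {
        MigrationState *s = migrate_get_current();

        return s->enabled_capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
    }

When this returns true, the migration code can start throttling the guest to
force convergence.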
Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We're already using them in several places, but __sync builtins are just
too ugly to type, and do not provide seqcst load/store operations.
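A sketch of the direction this takes (illustrative macros, not the actual
contents of the new header):

    #define atomic_fetch_inc(ptr)  __sync_fetch_and_add(ptr, 1)
    #define atomic_fetch_dec(ptr)  __sync_fetch_and_sub(ptr, 1)

    /* seq-cst load/store, which the bare __sync builtins do not provide */
    #define atomic_mb_read(ptr)                     \
        ({ __typeof__(*(ptr)) _val = *(ptr);        \
           __sync_synchronize();                    \
           _val; })
    #define atomic_mb_set(ptr, val)                 \
        do { __sync_synchronize();                  \
             *(ptr) = (val);                        \
             __sync_synchronize();                  \
        } while (0)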
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This capability allows you to disable dynamic chunk registration
for better throughput on high-performance links.
For example, with an 8GB RAM virtual machine that has all 8GB of memory in
active use while the VM itself is completely idle, over a 40 Gbps InfiniBand link:
1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
These numbers would of course scale up to whatever size virtual machine
you have to migrate using RDMA.
Enabling this feature does *not* have any measurable effect on
migration *downtime*. This is because, without this feature, all of the
memory will already have been registered in advance during
the bulk round and does not need to be re-registered during the successive
iteration rounds.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This exposes throughput (in megabits/sec) through QMP.
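The underlying arithmetic is simply bytes over wall-clock time, converted to
megabits (a sketch; the variable names are assumptions):

    double transferred = qemu_ftell(s->file) - initial_bytes;   /* bytes */
    double time_spent  = current_time - initial_time;           /* milliseconds */

    s->mbps = (transferred * 8.0) / (time_spent * 1000.0);      /* Mbit/s */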
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
bandwidth_limit is set twice in migrate_init(); remove one of the assignments.
Signed-off-by: Lei Li <lilei@linux.vnet.ibm.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This reverts commit 7161082c8d.
Reverting this patch fixes a divide-by-zero error in qemu that can be
fairly reliably triggered by doing block migration. In this case, the
configuration/error was:
source: temp/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -L temp-bios
-M pc-i440fx-1.4 -m 512M -kernel boot/vmlinuz-x86_64 -initrd
boot/test-initramfs-x86_64.img.gz -vga std -append seed=1234 -drive
file=disk1.img,if=virtio -drive file=disk2.img,if=virtio -device
virtio-net-pci,netdev=net0 -netdev user,id=net0 -monitor
unix:/tmp/vm-hmp.sock,server,nowait -qmp
unix:/tmp/vm-qmp.sock,server,nowait -vnc :100
16837 Floating point exception(core dumped)
target: temp/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -L temp-bios
-M pc-i440fx-1.4 -m 512M -kernel boot/vmlinuz-x86_64 -initrd
boot/test-initramfs-x86_64.img.gz -vga std -append seed=1234 -drive
file=target_disk1.img,if=virtio -drive file=target_disk2.img,if=virtio
-device virtio-net-pci,netdev=net0 -netdev user,id=net0 -incoming
unix:/tmp/migrate.sock -monitor
unix:/tmp/vm-hmp-incoming.sock,server,nowait -qmp
unix:/tmp/vm-qmp-incoming.sock,server,nowait -vnc :101
Receiving block device images
20 %
21 %
load of migration failed
This revert potentially re-introduces a bug that was present in 1.4,
but fixes a prevalent issue with block migration so we should revert
it for now and take an updated patch later.
Conflicts:
migration.c
* fixed up to remove logic introduced in 7161082c while leaving
changes in HEAD intact
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-id: 1368739544-31021-1-git-send-email-mdroth@linux.vnet.ibm.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Management apps like libvirt don't know to pay attention to
stderr unless there is a non-zero exit status.
* migration.c (process_incoming_migration_co): Exit with non-zero
status on failure.
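In other words, the failure path in process_incoming_migration_co() ends up
looking roughly like this (a sketch, not the literal hunk):

    if (ret < 0) {
        fprintf(stderr, "load of migration failed\n");
        exit(EXIT_FAILURE);     /* previously the exit status was zero */
    }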
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1366149041-626-1-git-send-email-eblake@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The fcntl(fd, F_SETFL, O_NONBLOCK) flag is not specific to sockets.
Rename to qemu_set_nonblock() just like qemu_set_cloexec().
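For reference, the renamed helper keeps the same fcntl-based behaviour (a
sketch mirroring the old socket nonblocking setter, not necessarily the
literal code):

    void qemu_set_nonblock(int fd)
    {
        int f = fcntl(fd, F_GETFL);
        fcntl(fd, F_SETFL, f | O_NONBLOCK);
    }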
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
During the bulk stage of RAM migration, if a page is a
zero page, do not send it at all.
The memory at the destination reads as zero anyway.
Even if there is an madvise with QEMU_MADV_DONTNEED
at the target upon receipt of a zero page, I have observed
that the target starts swapping if the memory is overcommitted.
It seems that the pages are dropped asynchronously.
This patch also updates QMP to return the number of
skipped pages in MigrationStats.
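A hedged sketch of the bulk-stage shortcut (the helper names approximate the
arch_init.c RAM save path; ram_save_page here is a hypothetical placeholder):

    if (ram_bulk_stage && is_zero_page(p)) {
        acct_info.skipped_pages++;   /* exposed as "skipped" in MigrationStats */
        /* nothing is sent: destination memory already reads as zero */
    } else {
        bytes_sent = ram_save_page(f, block, offset);   /* hypothetical helper */
    }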
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The indirection is useless now. Backends can open s->file directly.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
With this patch, the migration_file is not needed anymore.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Rate limiting is now simply a byte counter; clients call
qemu_file_rate_limit() manually to determine if they have to exit.
So it is possible and simple to move the functionality to QEMUFile.
This makes the remaining functionality of s->file redundant;
in the next patch we can remove it and write directly to s->migration_file.
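A minimal sketch of the byte-counter check as it ends up in QEMUFile (the
field names are assumptions for the sketch):

    int qemu_file_rate_limit(QEMUFile *f)
    {
        if (qemu_file_get_error(f)) {
            return 1;               /* stop sending after an error */
        }
        if (f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit) {
            return 1;               /* limit reached for this time slice */
        }
        return 0;
    }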
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch extracts a few small changes from the next patch, which
are unrelated to adding generic rate-limiting functionality to
QEMUFile. Make migration_set_rate_limit a simple accessor, and
use qemu_file_set_rate_limit consistently. Also fix a typo where
INT_MAX should have been SIZE_MAX.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Prepare for when s->bytes_xfer will be removed.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Second, drop the file descriptor indirection, and write directly to the
QEMUFile.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
As a start, use QEMUFile to store the destination and close it.
qemu_get_fd gets a file descriptor that will be used by the write
callbacks.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
migration_put_buffer is never called if there has been an error.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We will go around the loop exactly once after setting last_round.
Eliminate the variable altogether.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This is consistent now that we have moved everything to migration.c.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Buffering was needed because blocking writes could take a long time
and starve other threads seeking to grab the big QEMU mutex.
Now that all writes (except within _complete callbacks) are done
outside the big QEMU mutex, we do not need buffering at all.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Only the migration_bitmap_sync() call needs the iothread lock.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This makes it possible to do blocking writes directly to the socket,
with no buffer in the middle. For RAM, only the migration_bitmap_sync()
call needs the iothread lock. For block migration, it is needed by
the block layer (including bdrv_drain_all and dirty bitmap access),
but because some code is shared between iterate and complete, all of
mig_save_device_dirty is run with the lock taken.
In the savevm case, the iterate callback runs within the big lock.
This is annoying because it complicates the rules. Luckily we do not
need to do anything about it: the RAM iterate callback does not need
the iothread lock, and block migration never runs during savevm.
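The resulting locking pattern in the RAM iterate path is roughly (a sketch,
not the literal diff):

    /* only the dirty bitmap sync needs the big QEMU lock */
    qemu_mutex_lock_iothread();
    migration_bitmap_sync();
    qemu_mutex_unlock_iothread();

    /* pages are then sent to the socket without holding the iothread lock,
     * so the blocking writes no longer starve other threads */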
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Perform final cleanup in a bottom half, and add joining the thread to
the series of cleanup actions.
migrate_fd_error remains for connection errors, but it doesn't need
to clean up anything anymore.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Accessing s->state outside the big QEMU lock will simplify the
locking/unlocking of the iothread lock a bit.
We need to keep the lock in migrate_fd_error and migrate_fd_completed,
however, because they call migrate_fd_cleanup.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Completion of migration is currently done with a "nested" loop that
invokes buffered_flush: migrate_fd_completed is called by
buffered_file_thread, which calls migrate_fd_cleanup, which calls
buffered_close (via qemu_fclose), which flushes the buffer.
Simplify this, by reusing the buffered_flush call of buffered_file_thread.
Then if qemu_savevm_state_complete was called, and the buffer is empty
(including the QEMUFile buffer, for which we need the previous patch), we
are done.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Always use qemu_file_get_error to detect errors, since that is how
QEMUFile itself drops I/O after an error occurs. There is no need
to propagate and check return values all the time.
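So the call sites reduce to a pattern like the following (sketch only;
migrate_fd_error as the failure action is an assumption here):

    if (qemu_file_get_error(s->file)) {
        migrate_fd_error(s);        /* mark the migration as failed */
        break;
    }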
Also remove the "complete" member, since we know that it is set (via
migrate_fd_cleanup) only when the state changes.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Remove the return value of buffered_flush, pass it via the error code
of s->file. Once this is done, the error can be retrieved simply
via migrate_fd_close's call to qemu_fclose.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Including data that resided in the QEMUFile's own buffer.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The next patch will add more cases where qemu_savevm_state_cancel
needs to be called; prepare for that now, since the function can be
called twice with no ill effect.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
vm_stop_force_state does:

    if (runstate_is_running()) {
        vm_stop(state);
    } else {
        runstate_set(state);
    }

migration.c does:

    if (runstate_is_running()) {
        vm_stop(state);
    } else {
        vm_stop_force_state(state);
    }
The code run is the same even if we always use vm_stop_force_state in
migration.c.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Unify the goto around the loop, with the exit condition at the end of it.
Both can be expressed as "while (ret >= 0)".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We removed the calculation in commit e4ed1541ac.
Now we add it back. We need to create dirty_bytes_rate because we
can't include cpu-all.h from migration.c, and there is no other way to
obtain TARGET_PAGE_SIZE.
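The restored calculation is essentially one line, kept where TARGET_PAGE_SIZE
is visible (a sketch of the idea, not the exact hunk):

    s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;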
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
While we are sleeping we are not sending, so we should not use that
time to estimate our bandwidth.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
0 is a very bad initial value; what we are trying to estimate is
max_downtime, so that is a much better starting value.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
The incoming migration is processed in a coroutine and uses an fd read
handler to enter the yielded coroutine when data becomes available.
The read handler was set too broadly, so that spurious coroutine entries
were triggered if other coroutine users yielded (like the block
layer's bdrv_write() function).
Install the fd read handler only when yielding for more data to become
available. This prevents spurious coroutine entries, which break code
that assumes only a specific set of places can re-enter the coroutine.
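A hedged sketch of the narrowed-scope pattern (the helper name and the
qemu_coroutine_enter() signature here are assumptions for the sketch):

    static void fd_coroutine_enter(void *opaque)
    {
        qemu_coroutine_enter(opaque, NULL);
    }

    static void yield_until_fd_readable(int fd)
    {
        assert(qemu_in_coroutine());
        /* handler installed only while we actually wait for data ... */
        qemu_set_fd_handler(fd, fd_coroutine_enter, NULL, qemu_coroutine_self());
        qemu_coroutine_yield();
        /* ... and removed immediately on wakeup, so unrelated yields
         * cannot re-enter this coroutine spuriously */
        qemu_set_fd_handler(fd, NULL, NULL, NULL);
    }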
This patch fixes crashes in block/raw-posix.c that are triggered with
"migrate -b" when qiov becomes a dangling pointer due to a spurious
coroutine entry that frees qiov early.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1360598505-5512-1-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>