migration: Increase default max_downtime from 30ms to 300ms
The existing timeout is 30ms, which at 100MB/s (1Gbit) gives us a maximum rate of 3MB/s. If we put some load on the guest, it is easy for the page dirtying rate to become too high, so live migration will never complete. In the case of libvirt that means the guest will be stopped anyway after the timeout specified in the "virsh migrate" command, and this normally causes an even bigger delay.

This changes max_downtime to 300ms, which seems to be a more reasonable value.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
parent c6f6646c60
commit f7cd55a023
@@ -133,7 +133,7 @@ void process_incoming_migration(QEMUFile *f)
 /* the choice of nanoseconds is because it is the maximum resolution that
  * get_clock() can achieve. It is an internal measure. All user-visible
  * units must be in seconds */
-static uint64_t max_downtime = 30000000;
+static uint64_t max_downtime = 300000000;
 
 uint64_t migrate_max_downtime(void)
 {