When it was based on debian8, which uses python-minimal, it needed this.
It no longer does.
Goodbye, python2.7.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20190918222546.11696-1-jsnow@redhat.com>
[AJB: fixed up commit message]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Finger trouble in a previous clean-up inadvertently set
DEBIAN_PARTIAL_IMAGES instead of DOCKER_PARTIAL_IMAGES. Also correct the
image name typo to debian-9-mxe.
Fixes: 44d5a8bf5d
Signed-off-by: John Snow <jsnow@redhat.com>
[AJB: merged fix from Message-Id: <20190917185537.25417-1-jsnow@redhat.com>]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Fedora23 is but a distant twinkle. The sanitizer works again, and even
if not, we have --enable-sanitizers now.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20190912014442.5757-1-jsnow@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
We were incorrectly using the 64-bit AIX ABI instead of the 32-bit
SYSV ABI for setting NIP for the signal handler.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Remove a redundant masking of ignore. Once that's gone it is
obvious that the system-mode inner test is redundant with the
outer test. Move the fpcr_exc_enable masking up and tidy.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-8-richard.henderson@linaro.org>
The kernel masks the integer overflow exception with the
software invalid exception mask. Include IOV in the set
of exception bits masked by fpcr_exc_enable.
Fixes the new float_convs test.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-7-richard.henderson@linaro.org>
Tidy the computation of the value; no functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-6-richard.henderson@linaro.org>
Since we're converting the swcr to fpcr format for exceptions,
it's trivial to add FPCR_DNZ to the set of fpcr bits overridden.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-5-richard.henderson@linaro.org>
The CONFIG_USER_ONLY adjustment blindly mashed the swcr
exception enable bits into the fpcr exception disable bits.
However, fpcr_exc_enable has already converted the exception
disable bits into the exception status bits in order to make
it easier to mask status bits at runtime.
Instead, merge the swcr enable bits with the fpcr before we
convert to status bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-4-richard.henderson@linaro.org>
We were setting the wrong bit. The fp_status.flush_to_zero
setting is overwritten by either the constant 1 or the value
of fpcr_flush_to_zero depending on bits within an fp insn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-3-richard.henderson@linaro.org>
This is a bit more straightforward than using a switch statement.
No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-2-richard.henderson@linaro.org>
Check that keyslots don't overlap with the data,
and check that keyslots don't overlap with each other.
(This is done using naive O(n^2) nested loops, but since
there are just 8 keyslots, this doesn't really matter.)
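For illustration, a minimal standalone sketch of such an O(n^2) check;
the SectorRange type, the constant and the helper names below are made
up for the example and are not the actual QEMU LUKS structures:

    #include <stdbool.h>
    #include <stdint.h>

    #define KEYSLOT_COUNT 8   /* LUKS1 headers always carry 8 keyslots */

    /* Illustrative only: a half-open range of sectors [start, start + len). */
    typedef struct SectorRange {
        uint64_t start;
        uint64_t len;
    } SectorRange;

    static bool ranges_overlap(SectorRange a, SectorRange b)
    {
        return a.start < b.start + b.len && b.start < a.start + a.len;
    }

    /* Check every keyslot against the payload and against every other
     * keyslot.  Naive O(n^2), but n is only 8 so this doesn't matter. */
    bool keyslots_are_valid(const SectorRange slots[KEYSLOT_COUNT],
                            SectorRange payload)
    {
        for (int i = 0; i < KEYSLOT_COUNT; i++) {
            if (ranges_overlap(slots[i], payload)) {
                return false;
            }
            for (int j = i + 1; j < KEYSLOT_COUNT; j++) {
                if (ranges_overlap(slots[i], slots[j])) {
                    return false;
                }
            }
        }
        return true;
    }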
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This function will be used later to store
new keys in the LUKS metadata.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This is just to make qcrypto_block_luks_open more
reasonable in size.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
These values are not used by generic crypto code anyway
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Prior to this patch, the parsed encryption settings
were already stored in the QCryptoBlockLUKS, but not
used anywhere except in qcrypto_block_luks_get_info.
Using them simplifies the code.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Another minor refactoring
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Let the caller allocate the master key, and always
use the master key length from the header.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This way we can store the header we loaded, which
will be used by the key management code.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
* key_bytes -> master_key_len
* payload_offset -> payload_offset_sector (to emphasise that this isn't a byte offset)
* key_offset -> key_offset_sector (same as above, for the LUKS keyslots)
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20190925a' into staging
Migration pull 2019-09-25
me: test fixes (should stop hangs in postcopy tests)
me: an RDMA cleanup hang fix
Wei: tidy-ups around postcopy
Marc-André: mem leak fix
# gpg: Signature made Wed 25 Sep 2019 15:59:41 BST
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20190925a:
migration/postcopy: Recognise the recovery states as 'in_postcopy'
tests/migration/postcopy: trim migration bandwidth
tests/migration: Fail on unexpected migration states
migration/rdma.c: Swap synchronize_rcu for call_rcu
migration/rdma: Don't moan about disconnects at the end
migration: remove sent parameter in get_queued_page_not_dirty
migration/postcopy: unsentmap is not necessary for postcopy
migration/postcopy: not necessary to do discard when canonicalizing bitmap
migration: fix vmdesc leak on vmstate_save() error
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-09-24-v2' into staging
nbd patches for 2019-09-24
- Improved error message for plaintext client of encrypted server
- Fix various assertions when -object iothread is in use
- Silence a Coverity error for use-after-free on error path
# gpg: Signature made Wed 25 Sep 2019 14:35:52 BST
# gpg: using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2019-09-24-v2:
util/qemu-sockets: fix keep_alive handling in inet_connect_saddr
tests: Use iothreads during iotest 223
nbd: Grab aio context lock in more places
nbd/server: attach client channel to the export's AioContext
nbd/client: Add hint when TLS is missing
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fixes the previous TLB_WATCHPOINT patches because we are currently
failing to set cpu->mem_io_pc with the call to cpu_check_watchpoint.
Pass down the retaddr directly because it's readily available.
Fixes: 50b107c5d6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Rather than rely on cpu->mem_io_pc, pass retaddr down directly.
Within tb_invalidate_phys_page_range__locked, the is_cpu_write_access
parameter is non-zero exactly when retaddr would be non-zero, so that
is a simple replacement.
Recognize that current_tb_not_found is true only when mem_io_pc
(and now retaddr) are also non-zero, so remove a redundant test.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
All callers pass false to this argument. Remove it and pass the
constant on to tb_invalidate_phys_page_range__locked.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With the merge of notdirty handling into store_helper,
the last user of cpu->mem_io_vaddr was removed.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can use notdirty_write for the write and return a valid host
pointer for this case.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since 9458a9a1df, all readers of the dirty bitmaps wait
for the rcu lock, which means that they wait until the end
of any executing TranslationBlock.
As a consequence, there is no need for the actual access
to happen in between the _prepare and _complete. Therefore,
we can improve things by merging the two functions into
notdirty_write and dropping the NotDirtyInfo structure.
In addition, the only users of notdirty_write are in cputlb.c,
so move the merged function there. Pass in the CPUIOTLBEntry
from which the ram_addr_t may be computed.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is only one caller, tlb_set_page_with_attrs. We cannot
inline the entire function because the AddressSpaceDispatch
structure is private to exec.c, and cannot easily be moved to
include/exec/memory-internal.h.
Compute is_ram and is_romd once within tlb_set_page_with_attrs.
Fold the number of tests against these predicates. Compute
cpu_physical_memory_is_clean outside of the tlb lock region.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pages that we want to track for NOTDIRTY are RAM. We do not
really need to go through the I/O path to handle them.
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It does not require going through the whole I/O path
in order to discard a write.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The memory_region_tb_read tracepoint is unreachable, since notdirty
is supposed to apply only to writes. The memory_region_tb_write
tracepoint is mis-named, because notdirty is not only used for TB
invalidation. It is also used for e.g. VGA RAM updates and migration.
Replace memory_region_tb_write with memory_notdirty_write_access,
and place it in memory_notdirty_write_prepare where it can catch
all of the instances. Add memory_notdirty_set_dirty to log when
we no longer intercept writes to a page.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Handle bswap on ram directly in load/store_helper. This fixes a
bug with the previous implementation in that one cannot use the
I/O path for RAM.
Fixes: a26fc6f515
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We will shortly be using these more than once.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Increase the current runtime assert to a compile-time assert.
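For reference, the general pattern in standard C11 looks like this;
QEMU wraps the same idea in its QEMU_BUILD_BUG_ON() macro, and the
expression below is only a placeholder, not the one touched by this
patch:

    #include <assert.h>
    #include <stdint.h>

    void check_sizes(void)
    {
        /* Runtime assert: only fires if this code is actually executed. */
        assert(sizeof(uint64_t) == 8);

        /* Compile-time assert: the build fails immediately if the
         * condition is false, whether or not the code ever runs. */
        _Static_assert(sizeof(uint64_t) == 8, "uint64_t must be 8 bytes");
    }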
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use this as a compile-time assert that a particular
code path is not reachable.
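A standalone sketch of the underlying trick, assuming the usual
"declared but never defined" approach; the function name is made up and
the example relies on building with optimization so the dead call can
be removed:

    #include <stdlib.h>

    /* Never defined anywhere: if the compiler cannot prove a call to
     * this function unreachable and optimize it away, the build fails
     * at link time instead of silently keeping a dead path. */
    extern void example_build_not_reached(void);

    static int parity_code(unsigned op)
    {
        switch (op & 1) {
        case 0:
            return 10;
        case 1:
            return 20;
        }
        /* (op & 1) can only be 0 or 1, so with -O2 the compiler drops
         * this call; if the switch ever stopped covering all values,
         * linking would break and flag the problem at build time. */
        example_build_not_reached();
        return 0;
    }

    int main(void)
    {
        return parity_code(3) == 20 ? EXIT_SUCCESS : EXIT_FAILURE;
    }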
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This forced inlining can result in missing symbols,
which makes a debugging build harder to follow.
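For context, forced inlining in GCC/Clang is typically spelled as
below; this only illustrates the attribute being discussed, not the
actual helpers in cputlb.c:

    #include <stdint.h>
    #include <string.h>

    /* With always_inline the body is folded into every caller even at
     * -O0, so no standalone symbol exists for a debugger to break on or
     * to show in a backtrace.  Dropping the attribute (plain 'inline')
     * lets unoptimized builds keep an out-of-line copy. */
    static inline __attribute__((always_inline))
    uint32_t load_u32(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    uint32_t first_word(const uint8_t *buf)
    {
        return load_u32(buf);
    }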
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These bits do not need to vary with the actual page size
used by the guest.
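A standalone illustration of the idea; the names and the minimum page
size are made up and do not reproduce QEMU's actual macros:

    /* Flag bits stored in the low bits of a TLB entry only need to sit
     * below the smallest page size the target can ever use, so they can
     * be defined against that fixed minimum rather than the page size
     * of the current guest configuration. */
    #define PAGE_BITS_MIN      10   /* smallest supported page: 1 KiB */

    #define TLB_FLAG_INVALID   (1u << (PAGE_BITS_MIN - 1))
    #define TLB_FLAG_NOTDIRTY  (1u << (PAGE_BITS_MIN - 2))
    #define TLB_FLAG_MMIO      (1u << (PAGE_BITS_MIN - 3))

    /* Everything at or above the minimum page bit is address bits;
     * everything below is available for flags. */
    #define TLB_FLAGS_MASK \
        (TLB_FLAG_INVALID | TLB_FLAG_NOTDIRTY | TLB_FLAG_MMIO)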
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Various parts of the migration code do different things when they're
in postcopy mode; prior to this patch this has been 'postcopy-active'.
This patch extends 'in_postcopy' to include 'postcopy-paused' and
'postcopy-recover'.
In particular, setting the max-postcopy-bandwidth parameter only
affects the current migration fd if we're 'in_postcopy'. This leads to
a race in the postcopy recovery test, where it increases the speed
from 4k/sec to unlimited, but that increase can be ignored if the
change is made between the point at which the reconnection happens
and the transition back to active.
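A standalone model of the behaviour described above; the enum mirrors
the QEMU state names but is redefined here so the sketch compiles on
its own, and the exact QEMU symbols may differ:

    #include <stdbool.h>

    typedef enum {
        MIG_STATUS_ACTIVE,
        MIG_STATUS_POSTCOPY_ACTIVE,
        MIG_STATUS_POSTCOPY_PAUSED,
        MIG_STATUS_POSTCOPY_RECOVER,
        MIG_STATUS_COMPLETED,
    } MigStatus;

    /* Previously only the active postcopy state counted as "in
     * postcopy"; the paused and recovering states are now recognised
     * too, so e.g. a bandwidth change made during recovery still
     * reaches the migration fd. */
    bool migration_in_postcopy(MigStatus state)
    {
        switch (state) {
        case MIG_STATUS_POSTCOPY_ACTIVE:
        case MIG_STATUS_POSTCOPY_PAUSED:
        case MIG_STATUS_POSTCOPY_RECOVER:
            return true;
        default:
            return false;
        }
    }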
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190923174942.12182-1-dgilbert@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
On slow hosts with tcg we were sometimes finding that the migration
would complete during precopy and never get into the postcopy test.
Trim back the bandwidth a bit to make that much less likely.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190923131022.15498-3-dgilbert@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Cleber Rosa <crosa@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We've got various places where we wait for a migration to enter a
given state, but if we enter an unexpected state we tend to fail in
odd ways. Add a mechanism for explicitly testing for any state we
shouldn't be in.
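A self-contained sketch of such a mechanism; the status strings and
the helpers are hypothetical stand-ins for the real qtest/QMP plumbing:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for querying the migration status; in the real tests
     * this would come from QMP's query-migrate.  Here it walks a canned
     * sequence so the sketch is self-contained. */
    static const char *get_status(void)
    {
        static const char *seq[] = { "setup", "active", "postcopy-active" };
        static size_t i;
        return seq[i < 2 ? i++ : 2];
    }

    /* Wait until the goal state is reached, but abort immediately if
     * any explicitly disallowed state shows up on the way. */
    static void wait_for_state(const char *goal, const char *const *ungoals)
    {
        for (;;) {
            const char *state = get_status();
            if (!strcmp(state, goal)) {
                return;
            }
            for (const char *const *u = ungoals; *u; u++) {
                if (!strcmp(state, *u)) {
                    fprintf(stderr, "unexpected state '%s'\n", state);
                    abort();
                }
            }
        }
    }

    int main(void)
    {
        static const char *const bad[] = { "failed", "completed", NULL };
        wait_for_state("postcopy-active", bad);
        printf("reached postcopy-active\n");
        return 0;
    }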
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190923131022.15498-2-dgilbert@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Tested-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This fixes a deadlock that can occur on the migration source after
a failed RDMA migration: as the source tries to clean up, it clears a
pair of pointers and uses synchronize_rcu to wait; this happens on the
main thread. With the CPUs running, a CPU thread can be an rcu reader
and attempt to grab the main lock:
(kvm_handle_io->address_space_write->flatview_write->flatview_write_continue->
prepare_mmio_access->qemu_mutex_lock_iothread_impl)
Replace the synchronize_rcu with a call_rcu to postpone the freeing.
Fixes: 74637e6f08 ("migration: implement bi-directional RDMA QIOChannel")
( https://bugzilla.redhat.com/show_bug.cgi?id=1746787 )
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190913163507.1403-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
If we've already finished the migration or something has
already gone wrong, don't moan about the migration stream disconnecting.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190913163507.1403-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This is a cleanup after the previous removal of unsentmap.
The sent parameter is no longer necessary.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-4-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit f3f491fcd6 ('Postcopy: Maintain unsentmap') introduced
unsentmap to track not-yet-sent pages.
This is not necessary since:
* unsentmap is a subset of bmap before postcopy starts
* unsentmap becomes the union (bitwise OR) of bmap and unsentmap after
  canonicalizing, so it ends up equal to bmap
This patch just removes it.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
All pages, either partially sent or partially dirty, will be discarded in
postcopy_send_discard_bm_ram(), since we update the unsentmap to be
unsentmap = unsentmap | dirty in ram_postcopy_send_discard_bitmap().
It is therefore not necessary to do the discard when canonicalizing the
bitmap. By dropping it there, we separate the page discard into two
individual steps:
* canonicalize bitmap
* discard page
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190912122514.22504-2-marcandre.lureau@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>