commit be8b02edae
As Peter pointed out:

| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
|   per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
|   it's per-host-page granularity
|
| An example is that when we migrate a 2M huge page in the guest, we
| will only increase the RAMState.iterations by 1 (since
| ram_find_and_save_block() will be called once), but we might increase
| xbzrle_counters.cache_miss for 2M/4K=512 times (we'll call
| save_xbzrle_page() that many times) if all the pages got cache miss.
| Then IMHO the cache miss rate will be 512/1=51200% (while it should
| actually be just 100% cache miss).

He also suggested that, since xbzrle_counters.cache_miss_rate is the
only user of rs->iterations, we can adapt it to count target guest
page numbers.

After that, rename 'iterations' to 'target_page_count' to better
reflect its meaning.

Suggested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180903092644.25812-3-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
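The arithmetic behind the fix can be sketched as follows. This is a minimal, self-contained illustration of the divisor mismatch described above, not the actual ram.c code; the variable names host_page_iterations and target_page_count are illustrative stand-ins for the old and new counters.

```c
/*
 * Illustrative sketch (not the exact QEMU code): why dividing a
 * per-target-page miss counter by a per-host-page iteration counter
 * inflates the cache miss rate.
 *
 * Example from the commit message: one 2M huge page on a guest with
 * 4K target pages. ram_find_and_save_block() runs once, but
 * save_xbzrle_page() runs 2M/4K = 512 times, so cache_miss can grow
 * by 512 while the old iteration counter grows by only 1.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cache_miss = 512;         /* per-target-page misses this period   */
    uint64_t host_page_iterations = 1; /* old divisor: per-host-page counter   */
    uint64_t target_page_count = 512;  /* new divisor: per-target-page counter */

    /* Old calculation: 512 / 1 = 512.00, i.e. a reported 51200% miss rate. */
    double old_rate = (double)cache_miss / host_page_iterations;

    /* Fixed calculation: 512 / 512 = 1.00, i.e. the expected 100% miss rate. */
    double new_rate = (double)cache_miss / target_page_count;

    printf("old rate: %.2f, fixed rate: %.2f\n", old_rate, new_rate);
    return 0;
}
```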
block-dirty-bitmap.c
block.c
block.h
channel.c
channel.h
colo-comm.c
colo-failover.c
colo.c
exec.c
exec.h
fd.c
fd.h
global_state.c
Makefile.objs
migration.c
migration.h
page_cache.c
page_cache.h
postcopy-ram.c
postcopy-ram.h
qemu-file-channel.c
qemu-file-channel.h
qemu-file.c
qemu-file.h
qjson.c
qjson.h
ram.c
ram.h
rdma.c
rdma.h
savevm.c
savevm.h
socket.c
socket.h
tls.c
tls.h
trace-events
vmstate-types.c
vmstate.c
xbzrle.c
xbzrle.h