virtio-balloon: Safely handle BALLOON_PAGE_SIZE < host page size

The virtio-balloon always works in units of 4kiB (BALLOON_PAGE_SIZE), but
we can only actually discard memory in units of the host page size.

Currently, we handle this very badly: we silently ignore balloon requests
that aren't host page aligned, and for requests that are host page aligned
we discard the entire host page.  The latter can corrupt guest memory if
the guest's page size is smaller than the host's.

The obvious choice would be to disable the balloon if the host page size is
not 4kiB.  However, that would break the special case where host and guest
have the same page size, but that's larger than 4kiB.  That case currently
works by accident[1] - and is used in practice on many production POWER
systems where 64kiB has long been the Linux default page size on both host
and guest.

To make the balloon safe, without breaking that useful special case, we
need to accumulate 4kiB balloon requests until we have a whole contiguous
host page to discard.

We could in principle do that across all guest memory, but it would require
a large bitmap to track.  This patch represents a compromise: we track
ballooned subpages for a single contiguous host page at a time.  This means
that if the guest discards all 4kiB chunks of a host page in succession,
we will discard it.  This is the expected behaviour in the (host page) ==
(guest page) != 4kiB case we want to support.

If the guest scatters 4kiB requests across different host pages, we don't
discard anything, and issue a warning.  Not ideal, but at least we don't
corrupt guest memory as the previous version could.
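
To illustrate the scheme, here is a standalone sketch (not the code in
this patch - every name in it is hypothetical): one pending host page is
tracked by its base address plus a bitmap with one bit per 4kiB subpage
the guest has ballooned so far.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define BALLOON_PAGE_SIZE 4096u

    /* One pending host page: base address and one bit per 4kiB
     * subpage ballooned so far.  Illustrative only. */
    struct pending_page {
        uint64_t base;
        uint64_t *bitmap;
    };

    /* Record one 4kiB balloon request.  Returns true once every
     * subpage of the host page has been seen, i.e. the caller may
     * now discard [base, base + host_page_size). */
    static bool track_request(struct pending_page *pp, uint64_t addr,
                              uint64_t host_page_size)
    {
        uint64_t base = addr & ~(host_page_size - 1);
        unsigned subpages = host_page_size / BALLOON_PAGE_SIZE;
        unsigned i;

        if (pp->bitmap && pp->base != base) {
            /* Request hit a different host page - give up on the
             * old partial page, exactly as described above */
            free(pp->bitmap);
            pp->bitmap = NULL;
        }
        if (!pp->bitmap) {
            /* Starting on a new host page */
            pp->bitmap = calloc((subpages + 63) / 64, sizeof(uint64_t));
            pp->base = base;
        }

        i = (addr - base) / BALLOON_PAGE_SIZE;
        pp->bitmap[i / 64] |= UINT64_C(1) << (i % 64);

        for (i = 0; i < subpages; i++) {
            if (!(pp->bitmap[i / 64] & (UINT64_C(1) << (i % 64)))) {
                return false;    /* host page not yet complete */
            }
        }
        free(pp->bitmap);        /* full page accumulated - reset */
        pp->bitmap = NULL;
        return true;
    }

The patch below keeps equivalent state in the VirtIOBalloon device as a
PartiallyBalloonedPage (also keyed on the RAMBlock) and uses QEMU's
bitmap_set()/bitmap_full() helpers rather than the open-coded bit
operations above.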

Warning reporting is something of a compromise here.  Determining whether
we're in a problematic state at realize() time is tricky, because we'd have
to look at the host page sizes of all memory backends, and even then we
can't really know whether some of those backends are for special purpose
memory that's not subject to ballooning.

Reporting only when the guest tries to balloon a partial page also isn't
great, because if the guest page size happens to line up it won't indicate
that we're in a non-ideal situation.  It could also cause alarming repeated
warnings whenever a migration is attempted.

So, what we do is warn the first time the guest attempts to balloon a
partial host page, whether or not it will end up ballooning the rest of the
page immediately afterwards.

[1] Because when the guest attempts to balloon a page, it will submit
    requests for each 4kiB subpage.  Most will be ignored, but the one
    which happens to be host page aligned will discard the whole lot.
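
    To make [1] concrete, assume a 64kiB page on both host and guest.
    The following is a hypothetical standalone illustration (not QEMU
    code): the guest submits sixteen 4kiB requests; fifteen are
    silently ignored as unaligned, and the one at offset 0 discards
    the whole 64kiB host page - which is what the guest wanted anyway.

        #include <stdio.h>

        #define BALLOON_PAGE_SIZE 4096u
        #define HOST_PAGE_SIZE    65536u   /* assumed 64kiB host page */

        int main(void)
        {
            unsigned off;

            /* The guest balloons one 64kiB page as sixteen 4kiB
             * requests, one per subpage */
            for (off = 0; off < HOST_PAGE_SIZE; off += BALLOON_PAGE_SIZE) {
                if (off & (HOST_PAGE_SIZE - 1)) {
                    printf("0x%05x: silently ignored (unaligned)\n", off);
                } else {
                    printf("0x%05x: discards the whole host page\n", off);
                }
            }
            return 0;
        }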

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190214043916.22128-6-david@gibson.dropbear.id.au>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

hw/virtio/virtio-balloon.c

@@ -33,33 +33,82 @@
 
 #define BALLOON_PAGE_SIZE  (1 << VIRTIO_BALLOON_PFN_SHIFT)
 
+struct PartiallyBalloonedPage {
+    RAMBlock *rb;
+    ram_addr_t base;
+    unsigned long bitmap[];
+};
+
 static void balloon_inflate_page(VirtIOBalloon *balloon,
                                  MemoryRegion *mr, hwaddr offset)
 {
     void *addr = memory_region_get_ram_ptr(mr) + offset;
     RAMBlock *rb;
     size_t rb_page_size;
-    ram_addr_t ram_offset;
+    int subpages;
+    ram_addr_t ram_offset, host_page_base;
 
     /* XXX is there a better way to get to the RAMBlock than via a
      * host address? */
     rb = qemu_ram_block_from_host(addr, false, &ram_offset);
     rb_page_size = qemu_ram_pagesize(rb);
+    host_page_base = ram_offset & ~(rb_page_size - 1);
 
-    /* Silently ignore hugepage RAM blocks */
-    if (rb_page_size != getpagesize()) {
-        return;
-    }
+    if (rb_page_size == BALLOON_PAGE_SIZE) {
+        /* Easy case */
 
-    /* Silently ignore unaligned requests */
-    if (ram_offset & (rb_page_size - 1)) {
+        ram_block_discard_range(rb, ram_offset, rb_page_size);
+        /* We ignore errors from ram_block_discard_range(), because it
+         * has already reported them, and failing to discard a balloon
+         * page is not fatal */
         return;
     }
 
-    ram_block_discard_range(rb, ram_offset, rb_page_size);
-    /* We ignore errors from ram_block_discard_range(), because it has
-     * already reported them, and failing to discard a balloon page is
-     * not fatal */
+    /* Hard case
+     *
+     * We've put a piece of a larger host page into the balloon - we
+     * need to keep track until we have a whole host page to
+     * discard
+     */
+    warn_report_once(
+"Balloon used with backing page size > 4kiB, this may not be reliable");
+
+    subpages = rb_page_size / BALLOON_PAGE_SIZE;
+
+    if (balloon->pbp
+        && (rb != balloon->pbp->rb
+            || host_page_base != balloon->pbp->base)) {
+        /* We've partially ballooned part of a host page, but now
+         * we're trying to balloon part of a different one.  Too hard,
+         * give up on the old partial page */
+        g_free(balloon->pbp);
+        balloon->pbp = NULL;
+    }
+
+    if (!balloon->pbp) {
+        /* Starting on a new host page */
+        size_t bitlen = BITS_TO_LONGS(subpages) * sizeof(unsigned long);
+        balloon->pbp = g_malloc0(sizeof(PartiallyBalloonedPage) + bitlen);
+        balloon->pbp->rb = rb;
+        balloon->pbp->base = host_page_base;
+    }
+
+    bitmap_set(balloon->pbp->bitmap,
+               (ram_offset - balloon->pbp->base) / BALLOON_PAGE_SIZE,
+               subpages);
+
+    if (bitmap_full(balloon->pbp->bitmap, subpages)) {
+        /* We've accumulated a full host page, we can actually discard
+         * it now */
+
+        ram_block_discard_range(rb, balloon->pbp->base, rb_page_size);
+        /* We ignore errors from ram_block_discard_range(), because it
+         * has already reported them, and failing to discard a balloon
+         * page is not fatal */
+
+        g_free(balloon->pbp);
+        balloon->pbp = NULL;
+    }
 }
 
 static const char *balloon_stat_names[] = {

include/hw/virtio/virtio-balloon.h

@@ -30,6 +30,8 @@ typedef struct virtio_balloon_stat_modern {
     uint64_t val;
 } VirtIOBalloonStatModern;
 
+typedef struct PartiallyBalloonedPage PartiallyBalloonedPage;
+
 typedef struct VirtIOBalloon {
     VirtIODevice parent_obj;
     VirtQueue *ivq, *dvq, *svq;
@@ -42,6 +44,7 @@ typedef struct VirtIOBalloon {
     int64_t stats_last_update;
     int64_t stats_poll_interval;
     uint32_t host_features;
+    PartiallyBalloonedPage *pbp;
 } VirtIOBalloon;
 
 #endif