curl: Handle failure for potentially large allocations

Some code in the block layer makes potentially huge allocations. Failure
is not completely unexpected there, so avoid aborting qemu and handle
out-of-memory situations gracefully.

This patch addresses the allocations in the curl block driver.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Author: Kevin Wolf <kwolf@redhat.com>
Date:   2014-05-20 13:26:40 +02:00
parent 4ae7a52e43
commit 8dc7a7725b

1 changed file with 7 additions and 1 deletion

block/curl.c

@@ -640,7 +640,13 @@ static void curl_readv_bh_cb(void *p)
     state->buf_start = start;
     state->buf_len = acb->end + s->readahead_size;
     end = MIN(start + state->buf_len, s->len) - 1;
-    state->orig_buf = g_malloc(state->buf_len);
+    state->orig_buf = g_try_malloc(state->buf_len);
+    if (state->buf_len && state->orig_buf == NULL) {
+        curl_clean_state(state);
+        acb->common.cb(acb->common.opaque, -ENOMEM);
+        qemu_aio_release(acb);
+        return;
+    }
     state->acb[0] = acb;
     snprintf(state->range, 127, "%zd-%zd", start, end);
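For context, g_malloc() aborts the whole process when an allocation fails, while g_try_malloc() returns NULL and lets the caller recover. Below is a minimal, self-contained sketch of the same pattern outside the curl driver; alloc_request_buffer() and its parameters are hypothetical names used only to illustrate the technique and are not part of QEMU.

#include <glib.h>
#include <errno.h>
#include <stdio.h>

/* Hypothetical helper: read_len stands in for a potentially huge,
 * guest-controlled request size. */
static int alloc_request_buffer(size_t read_len, void **buf_out)
{
    /* g_try_malloc() returns NULL on failure instead of aborting
     * the process the way g_malloc() does. */
    void *buf = g_try_malloc(read_len);

    /* g_try_malloc(0) also returns NULL, so only treat NULL as an
     * error when a non-zero size was requested -- the same reason
     * the patch checks state->buf_len first. */
    if (read_len && buf == NULL) {
        return -ENOMEM;   /* report the failure instead of killing the VM */
    }

    *buf_out = buf;
    return 0;
}

int main(void)
{
    void *buf = NULL;
    int ret = alloc_request_buffer(64 * 1024, &buf);

    if (ret < 0) {
        fprintf(stderr, "allocation failed: %d\n", ret);
        return 1;
    }
    g_free(buf);
    return 0;
}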