There is no need to keep the export name around, and it seems a better
fit as an argument in the init() call.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The caller might handle non-blocking operation using a coroutine. Leave the
choice of a blocking or non-blocking negotiation to the caller.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
This converts the read, write and flush functions from AIO to coroutines,
eliminating almost 200 lines of code.
The requirement for libiscsi is bumped to version 1.4.0, which was
released in May 2012.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch aims to set bdi->cluster_size to the internal page size
of the iSCSI target so that callers which make use of this hint can
align requests properly.
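As a rough illustration of what a caller could do with this hint (the names
here are made up, not actual QEMU code):
    /* Minimal sketch: expand a request to cluster boundaries, assuming the
     * reported cluster_size is a power of two. */
    #include <stdint.h>

    static void align_to_cluster(uint64_t offset, uint64_t bytes,
                                 uint64_t cluster_size,
                                 uint64_t *al_offset, uint64_t *al_bytes)
    {
        uint64_t start = offset & ~(cluster_size - 1);            /* round down */
        uint64_t end = (offset + bytes + cluster_size - 1)
                       & ~(cluster_size - 1);                     /* round up */

        *al_offset = start;
        *al_bytes = end - start;
    }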
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now it is possible to directly export an internal snapshot, which
can be used to probe the snapshot's contents without qemu-img
convert.
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This function will be used later, so improve it now. Its only caller at the
moment is qemu-img, which is not affected: the new function
bdrv_snapshot_load_tmp_by_id_or_name() calls bdrv_snapshot_load_tmp()
twice to keep the old search logic. bdrv_snapshot_load_tmp_by_id_or_name()
returns an int so that the caller knows the errno, and that errno will be
used later.
Also fix a typo in the comments of bdrv_snapshot_delete().
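A self-contained sketch of the id-then-name search order described above; the
real functions operate on a BlockDriverState and have different prototypes:
    #include <errno.h>
    #include <string.h>

    struct snapshot { const char *id; const char *name; };

    /* toy lookup: match by id or by name, return index or -errno */
    static int find_snapshot(struct snapshot *list, int n,
                             const char *id, const char *name)
    {
        for (int i = 0; i < n; i++) {
            if ((id && !strcmp(list[i].id, id)) ||
                (name && !strcmp(list[i].name, name))) {
                return i;
            }
        }
        return -ENOENT;
    }

    /* try the string as an id first, then as a name (the old search logic) */
    static int find_by_id_or_name(struct snapshot *list, int n, const char *str)
    {
        int ret = find_snapshot(list, n, str, NULL);
        if (ret == -ENOENT) {
            ret = find_snapshot(list, n, NULL, str);
        }
        return ret;
    }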
Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Strictly speaking, this is only required for has_zero_init() == false,
but it's easy enough to just do a cluster-aligned write that is padded
with zeros after the header.
This fixes the problem that, after 'qemu-img create', the code attempted to
parse header extensions that were really just random leftover data.
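The idea, as a rough sketch using plain POSIX I/O (the real patch goes through
the qcow2/block layer code):
    /* Write the header as one cluster-sized, zero-padded chunk so that no
     * leftover data after it can later be misread as header extensions. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int write_header_padded(int fd, const void *header,
                                   size_t header_len, size_t cluster_size)
    {
        ssize_t ret;
        char *buf;

        if (header_len > cluster_size) {
            return -1;
        }
        buf = calloc(1, cluster_size);      /* zero-filled cluster */
        if (!buf) {
            return -1;
        }
        memcpy(buf, header, header_len);    /* header first, zeros after it */
        ret = pwrite(fd, buf, cluster_size, 0);
        free(buf);
        return ret == (ssize_t)cluster_size ? 0 : -1;
    }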
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The code is similar to the implementation of discard and write_zeroes
with UNMAP. However, failure must be propagated up to block.c.
The stale page cache problem can be reproduced as follows:
# modprobe scsi-debug lbpws=1 lbprz=1
# ./qemu-io /dev/sdXX
qemu-io> write -P 0xcc 0 2M
qemu-io> write -z 0 1M
qemu-io> read -P 0x00 0 512
Pattern verification failed at offset 0, 512 bytes
qemu-io> read -v 0 512
00000000: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc ................
...
# ./qemu-io --cache=none /dev/sdXX
qemu-io> write -P 0xcc 0 2M
qemu-io> write -z 0 1M
qemu-io> read -P 0x00 0 512
qemu-io> read -v 0 512
00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
...
And similarly with discard instead of "write -z".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
See the next commit for the description of the Linux kernel problem
that is worked around in raw_open_common.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Writing zeroes to a file can be done by punching a hole if
MAY_UNMAP is set.
Note that in this case ENOTSUP is not ignored, but makes
the block layer fall back to the generic implementation.
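A rough, Linux-specific sketch of the approach (the real driver has more
cases and error handling):
    /* Punch a hole to write zeroes when unmapping is allowed.  Returning
     * -ENOTSUP lets the generic block layer fall back to writing explicit
     * zeroes instead. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <linux/falloc.h>

    static int write_zeroes_punch_hole(int fd, off_t offset, off_t len,
                                       int may_unmap)
    {
        if (!may_unmap) {
            return -ENOTSUP;   /* caller falls back to a regular zero write */
        }
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      offset, len) == 0) {
            return 0;
        }
        return -errno;         /* e.g. -ENOTSUP if the fs cannot punch holes */
    }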
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The current check is right for MAY_UNMAP=1. For MAY_UNMAP=0, just
try and fall back to regular writes as soon as a WRITE SAME command
fails.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Added myself to reflect recent work on the iSCSI block driver.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Since commit 3ac21627, the default value has changed to 0.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This will let misaligned but large requests use zero clusters. This
is important because the cluster size is not guest visible.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Similar to write_zeroes, let the generic code receive an ENOTSUP for
discard operations. Since bdrv_discard has advisory semantics,
we can just swallow the error.
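In outline (a sketch with a generic callback type, not the actual QEMU
prototypes):
    #include <errno.h>

    typedef int (*discard_fn)(void *opaque, long sector, int nb_sectors);

    /* Discard is advisory, so -ENOTSUP from the driver callback is swallowed
     * instead of being reported to the caller. */
    static int generic_discard(discard_fn drv_discard, void *opaque,
                               long sector, int nb_sectors)
    {
        int ret = drv_discard(opaque, sector, nb_sectors);

        return ret == -ENOTSUP ? 0 : ret;
    }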
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The buffer for the description file was 4096 bytes, which only covers a few
hundred extents. This changes the buffer to be dynamically allocated with
g_strdup_printf in order to support bigger cases.
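A sketch of the approach with GLib; the format string is illustrative, not
the exact VMDK descriptor layout:
    /* Build the description text with g_strdup_printf() instead of a fixed
     * 4096-byte buffer, so any number of extent lines fits. */
    #include <glib.h>

    static gchar *make_description(const gchar *create_type,
                                   const gchar *extent_lines)
    {
        gchar *desc = g_strdup_printf("# Disk DescriptorFile\n"
                                      "createType=\"%s\"\n"
                                      "%s",
                                      create_type, extent_lines);
        return desc;    /* caller frees it with g_free() */
    }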
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
If you open an image temporarily just because you want to check its size
or get it flushed, there's no real reason to open the whole backing file
chain.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
This adds "remove_break" command which is the reverse of blkdebug
command "break": it removes all breakpoints with given tag and resumes
all the requests.
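A self-contained sketch of the removal logic; the real blkdebug code uses
QEMU's list macros and re-enters the suspended request coroutines:
    #include <stdlib.h>
    #include <string.h>

    struct suspended_req { struct suspended_req *next; /* ... */ };

    struct breakpoint {
        char tag[32];
        struct suspended_req *reqs;
        struct breakpoint *next;
    };

    static void resume_all(struct suspended_req *r)
    {
        while (r) {
            struct suspended_req *next = r->next;
            /* the real code re-enters the request coroutine here */
            free(r);
            r = next;
        }
    }

    static void remove_break(struct breakpoint **head, const char *tag)
    {
        struct breakpoint **p = head;

        while (*p) {
            if (strcmp((*p)->tag, tag) == 0) {
                struct breakpoint *bp = *p;
                *p = bp->next;          /* unlink the matching breakpoint */
                resume_all(bp->reqs);   /* wake everything that hit this tag */
                free(bp);
            } else {
                p = &(*p)->next;
            }
        }
    }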
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Sheepdog supports two kinds of redundancy: full replication and erasure coding.
# create a fully replicated vdi with x copies
-o redundancy=x (1 <= x <= SD_MAX_COPIES)
# create an erasure coded vdi with x data strips and y parity strips
-o redundancy=x:y (x must be one of {2,4,8,16} and 1 <= y < SD_EC_MAX_STRIP)
E.g., to convert an image into the sheepdog vdi 'test' with an 8:3 erasure
coding scheme:
$ qemu-img convert -o redundancy=8:3 linux-0.2.img sheepdog:test
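A sketch of how such an option string might be parsed (the constants' values
below are placeholders, not the real sheepdog limits):
    #include <errno.h>
    #include <stdio.h>

    #define SD_MAX_COPIES_TOY    31   /* placeholder value */
    #define SD_EC_MAX_STRIP_TOY  16   /* placeholder value */

    /* Parse "x" (full replication) or "x:y" (erasure coding). */
    static int parse_redundancy(const char *opt, int *copies, int *parity)
    {
        int x, y;

        if (sscanf(opt, "%d:%d", &x, &y) == 2) {
            /* erasure coded: x data strips, y parity strips */
            if ((x != 2 && x != 4 && x != 8 && x != 16) ||
                y < 1 || y >= SD_EC_MAX_STRIP_TOY) {
                return -EINVAL;
            }
            *copies = x;
            *parity = y;
            return 0;
        }
        if (sscanf(opt, "%d", &x) == 1) {
            /* fully replicated: x copies */
            if (x < 1 || x > SD_MAX_COPIES_TOY) {
                return -EINVAL;
            }
            *copies = x;
            *parity = 0;
            return 0;
        }
        return -EINVAL;
    }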
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <namei.unix@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We can actually use BDRVSheepdogState *s to pass most of the parameters.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <namei.unix@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
cow_co_is_allocated() only checks one sector's worth of allocated bits
before returning. This is allowed but (slightly) inefficient, so extend
it to check all of the file's metadata sectors.
Signed-off-by: Charlie Shepherd <charlie@ctshepherd.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: silenced compiler warning (-Wmaybe-uninitialized for changed)]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Process a whole sector's worth of COW bits by reading a sector, setting
the bits after skipping any already set bits, then writing it out again.
Make sure we only flush once before writing metadata, and only if we
need to write metadata.
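The core of it, as a self-contained sketch (bit layout simplified):
    #include <stdbool.h>
    #include <stdint.h>

    /* Set a run of COW bits in one metadata sector.  Returns whether anything
     * changed, so the caller can skip the flush and write-back when every bit
     * was already set. */
    static bool set_cow_bits(uint8_t *sector_buf, int first_bit, int nb_bits)
    {
        bool changed = false;

        for (int i = first_bit; i < first_bit + nb_bits; i++) {
            if (!(sector_buf[i / 8] & (1 << (i % 8)))) {
                sector_buf[i / 8] |= 1 << (i % 8);
                changed = true;
            }
        }
        return changed;   /* flush and write the sector only if true */
    }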
Signed-off-by: Charlie Shepherd <charlie@ctshepherd.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We have multiple dirty bitmaps in BDS now; switch QAPI to allow querying
them (BlockInfo.dirty_bitmaps), and also drop the old BlockInfo.dirty.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Previously a BlockDriverState had only one dirty bitmap, so only one
caller (e.g. a block job) could keep track of writing. This changes the
dirty bitmap to a list and creates a BdrvDirtyBitmap for each caller; the
lifecycle is managed with these new functions:
bdrv_create_dirty_bitmap
bdrv_release_dirty_bitmap
BdrvDirtyBitmap is a linked-list wrapper structure around HBitmap.
In place of bdrv_set_dirty_tracking, a BdrvDirtyBitmap pointer argument
is added to these functions, since each caller has its own dirty bitmap:
bdrv_get_dirty
bdrv_dirty_iter_init
bdrv_get_dirty_count
bdrv_set_dirty and bdrv_reset_dirty prototypes are unchanged but will
internally walk the list of all dirty bitmaps and set them one by one.
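A condensed sketch of the shape of this; the list handling and the bitmap
setter are simplified stand-ins for the real QLIST/HBitmap APIs:
    #include <stdint.h>

    typedef struct HBitmap HBitmap;            /* opaque here */

    typedef struct BdrvDirtyBitmap {
        HBitmap *bitmap;
        struct BdrvDirtyBitmap *next;          /* a QLIST entry in the real code */
    } BdrvDirtyBitmap;

    /* stand-in for the real HBitmap setter */
    void toy_hbitmap_set(HBitmap *hb, int64_t start, int64_t count);

    /* bdrv_set_dirty keeps its prototype but now updates every bitmap. */
    static void set_dirty_all(BdrvDirtyBitmap *head,
                              int64_t sector, int64_t nb_sectors)
    {
        BdrvDirtyBitmap *bm;

        for (bm = head; bm; bm = bm->next) {
            toy_hbitmap_set(bm->bitmap, sector, nb_sectors);
        }
    }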
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If a block device is unbacked, a streaming blockjob should finish
immediately instead of starting to stream, noticing that the backing
file does not contain even the first sector (since it does not exist),
and only then finishing normally.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
With this patch, qemu-img info sheepdog:image will show disk size for sheepdog
images.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Liu Yuan <namei.unix@gmail.com>
Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that bdrv_co_discard can handle limits, we do not need
the request split logic here anymore.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
If multiple sectors spanning multiple clusters are read, the function
count_contiguous_clusters should ensure that the cluster type does not
change between the clusters.
In particular, the for-loop should break when we have one or more normal
clusters followed by a compressed cluster.
Unfortunately the wrong macro was used in the mask to
compare the flags.
This was discovered while debugging a data corruption
issue when converting a compressed qcow2 image to raw.
qemu-img reads 2MB chunks which span multiple clusters.
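The essence of the check, in self-contained form (the entries and mask stand
in for qcow2 L2 entries and their flag bits):
    #include <stdint.h>

    /* Count how many leading entries share the same masked flags; assumes
     * n >= 1.  The mask must cover the bit that marks compressed clusters,
     * otherwise a compressed cluster following normal ones does not stop
     * the loop. */
    static int count_same_type(const uint64_t *l2_entries, int n, uint64_t mask)
    {
        uint64_t expected = l2_entries[0] & mask;
        int i;

        for (i = 1; i < n; i++) {
            if ((l2_entries[i] & mask) != expected) {
                break;   /* cluster type changed */
            }
        }
        return i;        /* number of leading clusters of the same type */
    }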
CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If the backing file doesn't exist, the error message is confusing and
misleading:
$ qemu /tmp/a.qcow2
qemu: could not open disk image /tmp/a.qcow2: Could not open file: No
such file or directory
But...
$ ls /tmp/a.qcow2
/tmp/a.qcow2
$ qemu-img info /tmp/a.qcow2
image: /tmp/a.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/b.qcow2
Because...
$ ls /tmp/b.qcow2
ls: cannot access /tmp/b.qcow2: No such file or directory
This is not intuitive. It's better to have the missing file's name in
the error message. With this patch:
$ qemu-io -c 'read 0 512' /tmp/a.qcow2
qemu-io: can't open device /tmp/a.qcow2: Could not open backing
file: Could not open '/stor/vm/arch.raw': No such file or directory
no file open, try 'help open'
Which is a little bit better.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds support for VHDX image creation, for images of type "Fixed"
and "Dynamic". "Differencing" types (i.e., VHDX images with backing
files) are currently not supported.
Options for image creation include:
* log size:
The size of the journaling log for VHDX. Minimum is 1MB,
and it must be a multiple of 1MB. Invalid log sizes will be
silently fixed by rounding up to the nearest MB.
Default is 1MB.
* block size:
This is the size of a payload block. The range is 1MB to 256MB,
inclusive, and must be a multiple of 1MB as well. Invalid sizes
and multiples will be silently fixed. If '0' is passed, then
a sane size is chosen (depending on virtual image size).
Default is 0 (Auto-select).
* subformat:
- "dynamic"
An image without data pre-allocated.
- "fixed"
An image with data pre-allocated.
Default is "dynamic"
When creating the image file, the lettered sections are created:
.-----------------------------------------------------------------.
| (A) | (B) | (C) | (D) | (E)
| File ID | Header1 | Header 2 | Region Tbl 1 | Region Tbl 2
| | | | |
.-----------------------------------------------------------------.
0 64KB 128KB 192KB 256KB 320KB
.---- ~ ----------- ~ ------------ ~ ---------------- ~ -----------.
| (F) | (G) | (H) |
| Journal Log | BAT / Bitmap | Metadata | .... data ......
| | | |
.---- ~ ----------- ~ ------------ ~ ---------------- ~ -----------.
1MB (var.) (var.) (var.)
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
VHDXPage83Data and VHDXParentLocatorHeader both incorrectly had their
MSGUID fields set as arrays of 16. This is incorrect (it stems from
an early version where those fields were uint8_t arrays). Those fields
were, up to this patch, unused.
Also, there were a couple of typos and incorrect wording in comments,
and those have been fixed up as well.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is preparation for vhdx_create(). The ability to write headers,
and calculate the number of BAT entries will be needed within the
create() functions, so move this relevant code into helper functions.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In preparation for vhdx_create(), move more endian translation
functions out to vhdx-endian.c.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Bit shifting can be fun, but in this case it was unnecessary. The
upper 44 bits of the 64-bit BAT entry specify the File Offset, so we
shifted the bits to get access to the value.
However, per the spec the value is in MB, so we dutifully shifted back
to the left by 20 bits to convert to a true uint64_t file offset.
This replaces those steps with just a bit mask that clears the lower
20 bits instead.
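A tiny self-contained illustration of why the mask is equivalent (the mask
name is made up):
    #include <assert.h>
    #include <stdint.h>

    #define FILE_OFFSET_MASK_TOY  0xFFFFFFFFFFF00000ULL   /* clear bits 0..19 */

    int main(void)
    {
        uint64_t bat_entry = 0x0000000012345678ULL;       /* example entry */

        /* old: extract MB value, then multiply by 1MB again */
        uint64_t via_shift = (bat_entry >> 20) << 20;
        /* new: a single mask gives the same byte offset */
        uint64_t via_mask  = bat_entry & FILE_OFFSET_MASK_TOY;

        assert(via_shift == via_mask);
        return 0;
    }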
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This adds support for writing to VHDX image files, using coroutines.
Writes to the BAT go through the VHDX log. Currently, BAT writes occur
when expanding a dynamic VHDX file and allocating a new BAT entry.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This adds support for writing to the VHDX log.
For spec details, see VHDX Specification Format v1.00:
https://www.microsoft.com/en-us/download/details.aspx?id=34750
There are a few limitations to this log support:
1.) There is no caching yet
2.) The log is flushed after each entry
The primary write interface, vhdx_log_write_and_flush(), performs a log
write followed by an immediate flush of the log.
As each log entry sector is a minimum of 4KB, partial sector writes are
filled in with data from the disk write destination.
If the current file log GUID is 0, a new GUID is generated and updated
in the header.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Regions in the image file cannot overlap: the log, region tables,
and metadata must all be unique and non-overlapping.
This adds region checking by means of a QLIST; there can be a variable
number of regions and metadata (there may be metadata or region tables
that we do not recognize / know about, but are not required).
This adds the capability to register a region for later checking, and
to check against registered regions for any overlap.
Also, if neither the BAT nor the Metadata region table is found, return
an error.
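A self-contained sketch of the overlap test behind this (the real code keeps
the registered regions in a QLIST on the driver state):
    #include <stdbool.h>
    #include <stdint.h>

    struct region {
        uint64_t start;
        uint64_t length;
        struct region *next;
    };

    static bool ranges_overlap(uint64_t a_start, uint64_t a_len,
                               uint64_t b_start, uint64_t b_len)
    {
        return a_start < b_start + b_len && b_start < a_start + a_len;
    }

    /* Returns true if [start, start+length) collides with a known region. */
    static bool region_check_collision(const struct region *head,
                                       uint64_t start, uint64_t length)
    {
        for (const struct region *r = head; r; r = r->next) {
            if (ranges_overlap(start, length, r->start, r->length)) {
                return true;
            }
        }
        return false;
    }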
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>