More DRM_IOCTL_I915 patches will be sent next.
Signed-off-by: Chen Gang <chengang@emindsoft.com.cn>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200802133938.12055-1-chengang@emindsoft.com.cn>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The implementation of syscall 'clock_nanosleep()' in 'syscall.c' uses
the functions 'target_to_host_timespec()' and 'host_to_target_timespec()'
to transfer the value of 'struct timespec' between target and host.
However, the implementation doesn't check whether these conversions
succeed and can thus return an inappropriate error instead of the
expected 'EFAULT'. This was confirmed with a modified LTP test suite
to which test cases with a bad 'struct timespec' address for
'clock_nanosleep()' were added. The modified LTP suite can be found at:
https://github.com/bozutaf/ltp
(A patch with this new test case will be sent to the LTP mailing list soon.)
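As a simplified illustration of the fix (a sketch only; the dispatch
context and error handling in 'syscall.c' are more involved than this),
the idea is to bail out with -TARGET_EFAULT whenever either conversion
fails:

    /* Inside the TARGET_NR_clock_nanosleep case (sketch). */
    struct timespec ts;

    if (target_to_host_timespec(&ts, arg3)) {
        return -TARGET_EFAULT;   /* bad 'struct timespec' address */
    }
    ret = get_errno(safe_clock_nanosleep(arg1, arg2, &ts,
                                         arg4 ? &ts : NULL));
    if (ret == -TARGET_EINTR && arg4 &&
        host_to_target_timespec(arg4, &ts)) {
        return -TARGET_EFAULT;   /* couldn't write back the remainder */
    }
    return ret;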
Signed-off-by: Filip Bozuta <Filip.Bozuta@syrmia.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200727201326.401519-1-Filip.Bozuta@syrmia.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The implementations of syscalls 'semop()' and 'semtimedop()' in
file 'syscall.c' use the function 'target_to_host_sembuf()' to convert
values of 'struct sembuf' from target to host. However, before this
conversion it should be checked whether the number of semaphore
operations 'nsops' exceeds the maximum allowed number of semaphore
operations per syscall, 'SEMOPM'. In that case, errno 'E2BIG' ("Arg list
too long") should be set. But the implementation sets errno 'EFAULT'
("Bad address") instead, because the conversion from target to host fails
in this case. This was confirmed with the LTP test for 'semop()'
('ipc/semop/semop02') in the test case where 'nsops' is greater than
SEMOPM, which reported the inappropriate errno EFAULT:
semop02.c:130: FAIL: semop failed unexpectedly; expected: E2BIG: EFAULT (14)
This patch adds a check whether 'nsops' is bigger than 'SEMOPM' before
the conversion function 'target_to_host_sembuf()' is called. After this
change, the test passes along with the other LTP test cases for 'semop()':
semop02.c:126: PASS: semop failed as expected: E2BIG (7)
Implementation notes:
A target value ('TARGET_SEMOPM') was added for 'SEMOPM' to make sure
the value is defined even for targets where it is not otherwise available.
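A sketch of the added check (simplified; the real helper in 'syscall.c'
differs in detail):

    /* Sketch: reject overly long operation lists before conversion so
     * the guest sees E2BIG rather than EFAULT. TARGET_SEMOPM is the
     * target value introduced by this patch. */
    struct sembuf *sops;

    if (nsops > TARGET_SEMOPM) {
        return -TARGET_E2BIG;
    }
    sops = g_new(struct sembuf, nsops);
    if (target_to_host_sembuf(sops, ptr, nsops)) {
        g_free(sops);
        return -TARGET_EFAULT;   /* genuinely bad address */
    }
    /* ... issue the host semop()/semtimedop() and free 'sops' ... */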
Signed-off-by: Filip Bozuta <Filip.Bozuta@syrmia.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200818180722.45089-1-Filip.Bozuta@syrmia.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Merge remote-tracking branch 'remotes/hdeller/tags/target-hppa-v3-pull-request' into staging
artist out of bounds fixes
# gpg: Signature made Wed 26 Aug 2020 22:09:55 BST
# gpg: using EDDSA key BCE9123E1AD29F07C049BBDEF712B510A23A0F5F
# gpg: Good signature from "Helge Deller <deller@gmx.de>" [unknown]
# gpg: aka "Helge Deller <deller@kernel.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4544 8228 2CD9 10DB EF3D 25F8 3E5F 3D04 A7A2 4603
# Subkey fingerprint: BCE9 123E 1AD2 9F07 C049 BBDE F712 B510 A23A 0F5F
* remotes/hdeller/tags/target-hppa-v3-pull-request:
hw/display/artist: Fix invalidation of lines near screen border
hw/display/artist: Fix invalidation of lines in artist_draw_line()
hw/display/artist: Unbreak size mismatch memory accesses
hw/display/artist: Prevent out of VRAM buffer accesses
Revert "hw/display/artist: Avoid drawing line when nothing to display"
hw/display/artist: Refactor artist_rop8() to avoid buffer over-run
hw/display/artist: Check offset in draw_line to avoid buffer over-run
hw/hppa/lasi: Don't abort on invalid IMR value
hw/display/artist.c: fix out of bounds check
hw/hppa: Implement proper SeaBIOS version check
seabios-hppa: Update to SeaBIOS hppa version 1
hw/hppa: Sync hppa_hardware.h file with SeaBIOS sources
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If parts of the invalidated screen lines are outside of the VRAM buffer,
the code skips the whole invalidation. This is incorrect when only parts
of the buffer are invisible - which is the case when the mouse cursor is
located near the screen border.
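A sketch of the clamping approach (field and helper names are
illustrative of hw/display/artist.c, not the exact patch):

    /* Clamp the dirtied region to the VRAM buffer instead of skipping
     * the invalidation entirely when it partially overflows. */
    static void invalidate_lines_sketch(struct vram_buffer *buf,
                                        int starty, int height)
    {
        int start = starty * buf->width;
        int size  = height * buf->width;

        if (start >= buf->size) {
            return;                       /* nothing visible at all */
        }
        if (start + size > buf->size) {
            size = buf->size - start;     /* keep only the visible part */
        }
        memory_region_set_dirty(&buf->mr, start, size);
    }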
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Helge Deller <deller@gmx.de>
The old code didn't invalidate correctly when vertical lines were drawn.
Fix this and move the invalidation out of the loop.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Commit 5d971f9e67 ("memory: Revert "memory: accept mismatching sizes
in memory_region_access_valid") broke the artist driver so that the
dtwm window manager on HP-UX rendered incorrectly.
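One conventional way to cope with that revert (shown only as a hedged
sketch; the actual artist fix may have restored mixed-size handling
differently) is to declare the access sizes the device implements, so
the memory core splits or widens mismatching guest accesses itself:

    static const MemoryRegionOps artist_reg_ops_sketch = {
        .read = artist_reg_read,          /* existing handlers */
        .write = artist_reg_write,
        .impl = {
            .min_access_size = 1,         /* core splits/widens accesses */
            .max_access_size = 4,
        },
        .valid = {
            .min_access_size = 1,
            .max_access_size = 4,
        },
        /* .endianness etc. as in the existing ops */
    };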
Fixes: 5d971f9e67 ("memory: Revert "memory: accept mismatching sizes in memory_region_access_valid")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Invalid I/O writes can craft an offset out of the vram_buffer range.
Instead of passing an unsafe pointer to artist_rop8(), pass the vram_buffer and
the offset. We can now check if the offset is in range before accessing it.
We avoid:
Program terminated with signal SIGSEGV, Segmentation fault.
284 *dst &= ~plane_mask;
(gdb) bt
#0 0x000056367b2085c0 in artist_rop8 (s=0x56367d38b510, dst=0x7f9f972fffff <error: Cannot access memory at address 0x7f9f972fffff>, val=0 '\000') at hw/display/artist.c:284
#1 0x000056367b209325 in draw_line (s=0x56367d38b510, x1=-20480, y1=-1, x2=0, y2=17920, update_start=true, skip_pix=-1, max_pix=-1) at hw/display/artist.c:646
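A sketch of the refactored helper (parameter and field names follow the
description above, not necessarily the final code):

    /* Take the buffer and an offset instead of a raw pointer, so the
     * bounds can be checked before any access. */
    static void artist_rop8_sketch(ARTISTState *s, struct vram_buffer *buf,
                                   unsigned int offset, uint8_t val)
    {
        if (offset >= buf->size) {
            qemu_log_mask(LOG_GUEST_ERROR,
                          "%s: offset %u exceeds buffer size %u\n",
                          __func__, offset, buf->size);
            return;
        }
        /* ... original artist_rop8() body, operating on buf->data[offset] ... */
    }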
Reported-by: LLVM libFuzzer
Buglink: https://bugs.launchpad.net/qemu/+bug/1880326
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Invalid I/O writes can craft an offset out of the vram_buffer range.
We avoid:
Program terminated with signal SIGSEGV, Segmentation fault.
284 *dst &= ~plane_mask;
(gdb) bt
#0 0x000055d5dccdc5c0 in artist_rop8 (s=0x55d5defee510, dst=0x7f8e84ed8216 <error: Cannot access memory at address 0x7f8e84ed8216>, val=0 '\000') at hw/display/artist.c:284
#1 0x000055d5dccdcf83 in fill_window (s=0x55d5defee510, startx=22, starty=5674, width=65, height=5697) at hw/display/artist.c:551
#2 0x000055d5dccddfb9 in artist_reg_write (opaque=0x55d5defee510, addr=1051140, val=4265537, size=4) at hw/display/artist.c:902
#3 0x000055d5dcb42a7c in memory_region_write_accessor (mr=0x55d5defeea10, addr=1051140, value=0x7ffe57db08c8, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:483
Reported-by: LLVM libFuzzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Helge Deller <deller@gmx.de>
NetBSD initializes the LASI IMR value with 0xffffffff to disable all LASI
interrupts. This triggered an assert() and stopped the emulation. By replacing
the check with a warning in the guest log we now allow NetBSD to boot again.
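A sketch of the replacement (the register case label and the valid-bit
mask 'LASI_IMR_VALID_BITS' are illustrative, not the exact names in
hw/hppa/lasi.c):

    case LASI_IMR:
        /* Previously an out-of-range value hit an assert(); NetBSD's
         * 0xffffffff (mask everything) must be accepted. */
        if ((val & ~LASI_IMR_VALID_BITS) && val != 0xffffffff) {
            qemu_log_mask(LOG_GUEST_ERROR,
                          "lasi: invalid IMR value 0x%08" PRIx64 "\n", val);
        }
        s->imr = val;
        break;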
Signed-off-by: Helge Deller <deller@gmx.de>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2020-08-26' into staging
Block patches:
- qcow2 subclusters (extended L2 entries)
# gpg: Signature made Wed 26 Aug 2020 08:37:04 BST
# gpg: using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
# gpg: issuer "mreitz@redhat.com"
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2020-08-26: (34 commits)
iotests: Add tests for qcow2 images with extended L2 entries
qcow2: Assert that expand_zero_clusters_in_l1() does not support subclusters
qcow2: Allow preallocation and backing files if extended_l2 is set
qcow2: Add the 'extended_l2' option and the QCOW2_INCOMPAT_EXTL2 bit
qcow2: Add prealloc field to QCowL2Meta
qcow2: Add subcluster support to qcow2_measure()
qcow2: Add subcluster support to qcow2_co_pwrite_zeroes()
qcow2: Add subcluster support to handle_alloc_space()
qcow2: Clear the L2 bitmap when allocating a compressed cluster
qcow2: Update L2 bitmap in qcow2_alloc_cluster_link_l2()
qcow2: Add subcluster support to check_refcounts_l2()
qcow2: Add subcluster support to discard_in_l2_slice()
qcow2: Add subcluster support to zero_in_l2_slice()
qcow2: Add subcluster support to qcow2_get_host_offset()
qcow2: Add subcluster support to calculate_l2_meta()
qcow2: Handle QCOW2_SUBCLUSTER_UNALLOCATED_ALLOC
qcow2: Replace QCOW2_CLUSTER_* with QCOW2_SUBCLUSTER_*
qcow2: Add cluster type parameter to qcow2_get_host_offset()
qcow2: Add qcow2_cluster_is_allocated()
qcow2: Add qcow2_get_subcluster_range_type()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <e6dd0429cafe84ca603179c298a8703bddca2904.1594396418.git.berto@igalia.com>
[mreitz: Use env in shebang line]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20200825' into staging
This pull request first adds support for multi-socket NUMA RISC-V
machines. The Spike and Virt machines both support NUMA sockets.
This PR also updates the current experimental Hypervisor support to the
v0.6.1 spec.
# gpg: Signature made Tue 25 Aug 2020 19:47:41 BST
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-to-apply-20200825:
target/riscv: Support the Virtual Instruction fault
target/riscv: Return the exception from invalid CSR accesses
target/riscv: Support the v0.6 Hypervisor extension CRSs
target/riscv: Only support little endian guests
target/riscv: Only support a single VSXL length
target/riscv: Update the CSRs to the v0.6 Hyp extension
target/riscv: Update the Hypervisor trap return/entry
target/riscv: Fix the interrupt cause code
target/riscv: Convert MSTATUS MTL to GVA
target/riscv: Don't allow guest to write to htinst
target/riscv: Do two-stage lookups on hlv/hlvx/hsv instructions
target/riscv: Allow generating hlv/hlvx/hsv instructions
target/riscv: Allow setting a two-stage lookup in the virt status
hw/riscv: virt: Allow creating multiple NUMA sockets
hw/riscv: spike: Allow creating multiple NUMA sockets
hw/riscv: Add helpers for RISC-V multi-socket NUMA machines
hw/riscv: Allow creating multiple instances of PLIC
hw/riscv: Allow creating multiple instances of CLINT
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When performing a CSR access, let's return a negative exception value
on error instead of -1. This will allow us to specify the exception in
future patches.
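A rough sketch of the pattern (the predicate helper here is hypothetical;
QEMU's CSR op table works differently in detail):

    /* Return 0 on success or a negative RISCVException on failure,
     * instead of a bare -1 that loses the exception cause. */
    static int csr_access_check_sketch(CPURISCVState *env, int csrno)
    {
        if (!csr_is_implemented(csrno)) {      /* hypothetical predicate */
            return -RISCV_EXCP_ILLEGAL_INST;
        }
        return 0;
    }

    /* Caller side: raise exactly the exception that was reported. */
    int ret = csr_access_check_sketch(env, csrno);
    if (ret < 0) {
        riscv_raise_exception(env, -ret, GETPC());
    }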
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: a487dad60c9b8fe7a2b992c5e0dcc2504a9000a7.1597259519.git.alistair.francis@wdc.com
Message-Id: <a487dad60c9b8fe7a2b992c5e0dcc2504a9000a7.1597259519.git.alistair.francis@wdc.com>
We extend the RISC-V virt machine to allow creating a multi-socket
machine. Each RISC-V virt machine socket is a NUMA node having
a set of HARTs, a memory instance, a CLINT instance, and a PLIC
instance. Other devices are shared between all sockets. We also
update the generated device tree accordingly.
By default, NUMA multi-socket support is disabled for the RISC-V virt
machine. To enable it, users can use the "-numa" command-line options
of QEMU.
Example 1: For two NUMA nodes with 2 CPUs each, append the following
to the command-line options: "-smp 4 -numa node -numa node"
Example 2: For two NUMA nodes with 1 and 3 CPUs, append the following
to the command-line options:
"-smp 4 -numa node -numa node -numa cpu,node-id=0,core-id=0 \
-numa cpu,node-id=1,core-id=1 -numa cpu,node-id=1,core-id=2 \
-numa cpu,node-id=1,core-id=3"
The maximum number of sockets in a RISC-V virt machine is 8,
but this limit can be changed in the future.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Message-Id: <20200616032229.766089-6-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We extend the RISC-V spike machine to allow creating a multi-socket
machine. Each RISC-V spike machine socket is a NUMA node having
a set of HARTs, a memory instance, and a CLINT instance. Other
devices are shared between all sockets. We also update the
generated device tree accordingly.
By default, NUMA multi-socket support is disabled for the RISC-V spike
machine. To enable it, users can use the "-numa" command-line options
of QEMU.
Example 1: For two NUMA nodes with 2 CPUs each, append the following
to the command-line options: "-smp 4 -numa node -numa node"
Example 2: For two NUMA nodes with 1 and 3 CPUs, append the following
to the command-line options:
"-smp 4 -numa node -numa node -numa cpu,node-id=0,core-id=0 \
-numa cpu,node-id=1,core-id=1 -numa cpu,node-id=1,core-id=2 \
-numa cpu,node-id=1,core-id=3"
The maximum number of sockets in a RISC-V spike machine is 8,
but this limit can be changed in the future.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Message-Id: <20200616032229.766089-5-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We add common helper routines which can be shared by RISC-V
multi-socket NUMA machines.
We have two types of helpers:
1. riscv_socket_xyz() - These helpers assist in managing multiple
sockets irrespective of whether QEMU NUMA is enabled or disabled
(see the sketch after this list)
2. riscv_numa_xyz() - These helpers assist in providing the
necessary QEMU machine callbacks for QEMU NUMA emulation
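As a rough illustration only (the helper names below follow the two
prefixes described above, but their exact signatures in hw/riscv are an
assumption on my part):

    /* Hypothetical usage in a machine's board init code. */
    int socket_count = riscv_socket_count(machine);
    for (int i = 0; i < socket_count; i++) {
        int base_hartid = riscv_socket_first_hartid(machine, i);
        int hart_count  = riscv_socket_hart_count(machine, i);
        /* ... instantiate one CLINT/PLIC pair and one memory region
         * per socket using base_hartid and hart_count ... */
    }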
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Message-Id: <20200616032229.766089-4-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We extend the PLIC emulation to allow multiple instances of the PLIC in
a QEMU RISC-V machine. To achieve this, we remove the assumption that
the first HART id is zero from the PLIC emulation.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20200616032229.766089-3-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We extend the CLINT emulation to allow multiple instances of the CLINT in
a QEMU RISC-V machine. To achieve this, we remove the assumption that
the first HART id is zero from the CLINT emulation.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Message-Id: <20200616032229.766089-2-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
After building qemu with '-fsanitize=address' in extra-cflags,
'make check' shows the following leak:
=================================================================
==44580==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 2500 byte(s) in 1 object(s) allocated from:
#0 0x7f1b5a8b8d28 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded28)
#1 0x7f1b5a514b10 in g_malloc0 (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x51b10)
#2 0xd79ea4e4c0ad31c3 (<unknown module>)
SUMMARY: AddressSanitizer: 2500 byte(s) leaked in 1 allocation(s).
Call 'g_rand_free()' at the end of the function to avoid this.
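A sketch of the fix (the surrounding test code is abbreviated and the
function name here is illustrative):

    #include <glib.h>

    static void test_random_name_sketch(void)
    {
        GRand *r = g_rand_new();

        /* ... use g_rand_int_range(r, ...) to build random socket names ... */

        g_rand_free(r);   /* release the generator so ASan sees no leak */
    }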
Fixes: 4d3a329af59 ("tests/util-sockets: add abstract unix socket cases")
Signed-off-by: Li Qiang <liq3ea@163.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: xiaoqiang zhao <zxq_yx_007@163.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This function is only used by qcow2_expand_zero_clusters() to
downgrade a qcow2 image to a previous version. This would require
transforming all extended L2 entries into normal L2 entries but this
is not a simple task and there are no plans to implement this at the
moment.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <15e65112b4144381b4d8c0bdf8fb76b0d813e3d1.1594396418.git.berto@igalia.com>
[mreitz: Fixed comment style]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Traditional qcow2 images don't allow preallocation if a backing file
is set. This is because once a cluster is allocated there is no way to
tell that its data should be read from the backing file.
Extended L2 entries have individual allocation bits for each
subcluster, and therefore it is perfectly possible to have an
allocated cluster with all its subclusters unallocated.
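For illustration (a standalone sketch of the on-disk semantics described
here, not QEMU's internal macros): each extended L2 entry carries a
64-bit subcluster bitmap whose low half marks subclusters as allocated
and whose high half marks them as reading as zeroes, so a preallocated
cluster on top of a backing file can keep every 'allocated' bit clear:

    #include <stdbool.h>
    #include <stdint.h>

    /* Layout of the 64-bit subcluster bitmap in an extended L2 entry:
     * bits 0..31  - subcluster is allocated
     * bits 32..63 - subcluster reads as zeroes
     */
    static bool subcluster_is_allocated(uint64_t bitmap, unsigned sc)
    {
        return bitmap & (1ULL << sc);
    }

    static bool subcluster_reads_zero(uint64_t bitmap, unsigned sc)
    {
        return bitmap & (1ULL << (32 + sc));
    }

    /* A preallocated cluster with a backing file: the host cluster
     * exists, but all subclusters stay unallocated, so reads still
     * fall through to the backing file. */
    static const uint64_t preallocated_bitmap = 0;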
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <6d5b0f38e7dc5f2f31d8cab1cb92044e9909aece.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Now that the implementation of subclusters is complete we can finally
add the necessary options to create and read images with this feature,
which we call "extended L2 entries".
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <6476caaa73216bd05b7bb2d504a20415e1665176.1594396418.git.berto@igalia.com>
[mreitz: %s/5\.1/5.2/; fixed 302's and 303's reference output]
Signed-off-by: Max Reitz <mreitz@redhat.com>
This field allows us to indicate that the L2 metadata update does not
come from a write request with actual data but from a preallocation
request.
For traditional images this does not make any difference, but for
images with extended L2 entries this means that the clusters are
allocated normally in the L2 table but individual subclusters are
marked as unallocated.
This will allow preallocating images that have a backing file.
There is one special case: when we resize an existing image we can
also request that the new clusters are preallocated. If the image
already had a backing file then we have to hide any possible stale
data and zero out the new clusters (see commit 955c7d6687 for more
details).
In this case the subclusters cannot be left as unallocated so the L2
bitmap must be updated.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <960d4c444a4f5a870e2b47e5da322a73cd9a2f5a.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Extended L2 entries are bigger than normal L2 entries so this has an
impact on the amount of metadata needed for a qcow2 file.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <7efae2efd5e36b42d2570743a12576d68ce53685.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This works now at the subcluster level and pwrite_zeroes_alignment is
updated accordingly.
qcow2_cluster_zeroize() is turned into qcow2_subcluster_zeroize() with
the following changes:
- The request can now be subcluster-aligned.
- The cluster-aligned body of the request is still zeroized using
zero_in_l2_slice() as before.
- The subcluster-aligned head and tail of the request are zeroized
with the new zero_l2_subclusters() function.
There is just one thing to take into account for a possible future
improvement: compressed clusters cannot be partially zeroized so
zero_l2_subclusters() on the head or the tail can return -ENOTSUP.
This makes the caller repeat the *complete* request and write actual
zeroes to disk. This is sub-optimal because
1) if the head area was compressed we would still be able to use
the fast path for the body and possibly the tail.
2) if the tail area was compressed we are writing zeroes to the
head and the body areas, which are already zeroized.
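As a worked illustration (assuming 64 KiB clusters, i.e. 32 subclusters
of 2 KiB each): a zeroize request covering bytes 6 KiB through 194 KiB
of a cluster-aligned region splits into a subcluster-aligned head
(6 KiB-64 KiB, handled by zero_l2_subclusters()), a cluster-aligned body
(64 KiB-192 KiB, handled by zero_in_l2_slice()), and a subcluster-aligned
tail (192 KiB-194 KiB, again handled by zero_l2_subclusters()).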
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <17e05e2ee7e12f10dcf012da81e83ebe27eb3bef.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The bdrv_co_pwrite_zeroes() call here fills complete clusters with
zeroes, but it can happen that some subclusters are not part of the
write request or the copy-on-write. This patch makes sure that only
the affected subclusters are overwritten.
A potential improvement would be to also fill with zeroes the other
subclusters if we can guarantee that we are not overwriting existing
data. However this would waste more disk space, so we should first
evaluate if it's really worth doing.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <b3dc97e8e2240ddb5191a4f930e8fc9653f94621.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Compressed clusters always have the bitmap part of the extended L2
entry set to 0.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <04455b3de5dfeb9d1cfe1fc7b02d7060a6e09710.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The L2 bitmap needs to be updated after each write to indicate what
new subclusters are now allocated. This needs to happen even if the
cluster was already allocated and the L2 entry was otherwise valid.
In some cases, however, a write operation doesn't need to change the L2
bitmap (because all affected subclusters were already allocated). This
is detected in calculate_l2_meta(), and qcow2_alloc_cluster_link_l2()
is never called in those cases.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <0875620d49f44320334b6a91c73b3f301f975f38.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The offset field of an uncompressed cluster's L2 entry must be aligned
to the cluster size, otherwise it is invalid. If the cluster has no
data then it means that the offset points to a preallocation, so we
can clear the offset field without affecting the guest-visible data.
This is what 'qemu-img check' does when run in repair mode.
On traditional qcow2 images this can only happen when QCOW_OFLAG_ZERO
is set, and repairing such entries turns the clusters from ZERO_ALLOC
into ZERO_PLAIN.
Extended L2 entries have no ZERO_ALLOC clusters and no QCOW_OFLAG_ZERO
but the idea is the same: if none of the subclusters are allocated
then we can clear the offset field and leave the bitmap untouched.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <9f4ed1d0a34b0a545b032c31ecd8c14734065342.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Two things need to be taken into account here:
1) With full_discard == true the L2 entry must be cleared completely.
This also includes the L2 bitmap if the image has extended L2
entries.
2) With full_discard == false we have to make the discarded cluster
read back as zeroes. With normal L2 entries this is done with the
QCOW_OFLAG_ZERO bit, whereas with extended L2 entries this is done
with the individual 'all zeroes' bits for each subcluster.
Note however that QCOW_OFLAG_ZERO is not supported in v2 qcow2
images so, if there is a backing file, discard cannot guarantee
that the image will read back as zeroes. If this is important for
the caller it should forbid it as qcow2_co_pdiscard() does (see
80f5c01183 for more details).
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <5ef8274e628aa3ab559bfac467abf488534f2b76.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The QCOW_OFLAG_ZERO bit that indicates that a cluster reads as
zeroes is only used in standard L2 entries. Extended L2 entries use
individual 'all zeroes' bits for each subcluster.
This must be taken into account when updating the L2 entry and also
when deciding that an existing entry does not need to be updated.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <b61d61606d8c9b367bd641ab37351ddb9172799a.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The logic of this function remains pretty much the same, except that
it uses count_contiguous_subclusters(), which combines the logic of
count_contiguous_clusters() / count_contiguous_clusters_unallocated()
and checks individual subclusters.
qcow2_cluster_to_subcluster_type() is not necessary as a separate
function anymore so it's inlined into its caller.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <d2193fd48653a350d80f0eca1c67b1d9053fb2f3.1594396418.git.berto@igalia.com>
[mreitz: Initialize expected_type to anything]
Signed-off-by: Max Reitz <mreitz@redhat.com>
If an image has subclusters then there are more copy-on-write
scenarios that we need to consider. Let's say we have a write request
from the middle of subcluster #3 until the end of the cluster:
1) If we are writing to a newly allocated cluster then we need
copy-on-write. The previous contents of subclusters #0 to #3 must
be copied to the new cluster. We can optimize this process by
skipping all leading unallocated or zero subclusters (the status of
those skipped subclusters will be reflected in the new L2 bitmap).
2) If we are overwriting an existing cluster:
2.1) If subcluster #3 is unallocated or has the all-zeroes bit set
then we need copy-on-write (on subcluster #3 only).
2.2) If subcluster #3 was already allocated then there is no need
for any copy-on-write. However we still need to update the L2
bitmap to reflect possible changes in the allocation status of
subclusters #4 to #31. Because of this, this function checks
if all the overwritten subclusters are already allocated and
in this case it returns without creating a new QCowL2Meta
structure.
After all these changes l2meta_cow_start() and l2meta_cow_end()
are not necessarily cluster-aligned anymore. We need to update the
calculation of old_start and old_end in handle_dependencies() to
guarantee that no two requests try to write on the same cluster.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <4292dd56e4446d386a2fe307311737a711c00708.1594396418.git.berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>