Move fp_status and fpscr closer to the other floating point and vector
related members in the CPU env definition so that they are in one group.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <5b50e9e7eec2c383ae878b397d0b2927efc9ea43.1581888834.git.balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
"Deferred" was misspelled as "differed" in some comments, correct this
typo,
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20200214155748.0896B745953@zero.eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows moving the kernel within guest memory. The option is useful
for step debugging (as Linux is linked at 0x0); it also allows loading
grub, which is normally linked to run at 0x20000.
The existing kernel address is used by default.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20200203032943.121178-6-aik@ozlabs.ru>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We obviously don't want to print out an error message if addr points to
a valid register.
Reported-by: Coverity CID 1419391 Missing break in switch
Fixes: 9ae1329ee2 "ppc/pnv: Add models for POWER8 PHB3 PCIe Host bridge"
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <158153365202.3229002.11521084761048102466.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Obviously, we want to pass &local_err so that we can check it on the
line below, not errp.
Reported-by: Coverity CID 1419395 'Constant' variable guards dead code
Fixes: 4f9924c4d4 "ppc/pnv: Add models for POWER9 PHB4 PCIe Host bridge"
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <158153364605.3229002.2796177658957390343.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
As reported by Coverity defect CID 1419397, the 'j' variable goes up to
63 and shouldn't be used to left-shift a 32-bit integer.
The result of the operation goes into a 64-bit integer: use a 64-bit
constant.
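A standalone illustration of the fix (not the PHB3 code itself); shifting a plain int constant by up to 63 is undefined behaviour, while a 64-bit constant keeps the shift well defined:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t mask;
    int j = 40;                      /* 'j' can go up to 63 */

    /* mask = 1 << j;                   undefined: 1 is a 32-bit int */
    mask = UINT64_C(1) << j;         /* 64-bit constant: well defined */
    printf("0x%016" PRIx64 "\n", mask);
    return 0;
}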
Reported-by: Coverity CID 1419397 Bad bit shift operation
Fixes: 9ae1329ee2 "ppc/pnv: Add models for POWER8 PHB3 PCIe Host bridge"
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <158153364010.3229002.8004283672455615950.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 74433bf083 added some includes but added them twice. Since
these are guarded against multiple inclusion, including them once is
enough.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20200212223207.5A37574637F@zero.eik.bme.hu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch implements a few of the necessary hcalls for nvdimm support.
PAPR semantics are such that each NVDIMM device comprises multiple
SCM (Storage Class Memory) blocks. The guest requests the hypervisor to
bind each of the SCM blocks of the NVDIMM device using hcalls. There can
be SCM block unbind requests in case of driver errors or unplug (not
supported now) use cases. The NVDIMM label reads/writes are done through
hcalls.
Since each virtual NVDIMM device is divided into multiple SCM blocks,
the bind, unbind, and queries using hcalls on those blocks can come
independently. This doesn't fit well into the QEMU device semantics,
where map/unmap is done at whole-device/object granularity.
The patch doesn't actually bind/unbind on hcalls but lets that happen at
the device_add/del phase itself instead.
The guest kernel makes bind/unbind requests for the virtual NVDIMM device
at region-level granularity. Without interleaving, each virtual NVDIMM
device is presented as a separate guest physical address range. So there
is no way a partial bind/unbind request can come for the vNVDIMM in an
hcall for a subset of SCM blocks of a virtual NVDIMM. Hence it is safe to
do the bind/unbind of everything during device_add/del.
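For reference, a hedged in-tree sketch (usual spapr includes omitted) of how such hcalls are wired up via spapr_register_hypercall(); the H_SCM_* opcode and handler names follow the description above but are assumptions of this sketch, not the verbatim patch:

/* Handler follows the usual spapr hcall signature. */
static target_ulong h_scm_bind_mem(PowerPCCPU *cpu, SpaprMachineState *spapr,
                                   target_ulong opcode, target_ulong *args)
{
    /* The SCM blocks were already mapped at device_add time, so the
     * handler only validates the request and reports success. */
    return H_SUCCESS;
}

static void spapr_scm_register_hcalls(void)
{
    spapr_register_hypercall(H_SCM_BIND_MEM, h_scm_bind_mem);
    /* likewise for unbind and the label (metadata) read/write hcalls */
}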
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Message-Id: <158131059899.2897.11515211602702956854.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add support for NVDIMM devices for sPAPR. Piggyback on the existing nvdimm
device interface in QEMU to support virtual NVDIMM devices for Power.
Create the required DT entries for the device (some entries have
dummy values right now).
The patch creates the required DT node and sends a hotplug
interrupt to the guest. The guest is expected to undertake the normal
DR resource add path in response and start issuing PAPR SCM hcalls.
Unlike on x86, the device support is verified based on the machine version.
This is how it can be used, for example:
For coldplug, the device is added on the QEMU command line as shown below:
-object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
-device nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0
For hotplug, the device is added from the monitor as below:
object_add memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
device_add nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
[Early implementation]
Message-Id: <158131058078.2897.12767731856697459923.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For ppc64, PAPR requires the nvdimm device to have a UUID property
set in the device tree. Add an option to get it from the user.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <158131056931.2897.14057087440721445976.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
nvdimm_device_list is required for parsing the list of devices
in subsequent patches. Move it to the common utility area.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <158131055857.2897.15658377276504711773.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are going to add more init for the latest machine, so move the setup
into a function so that we don't have to change the DEFINE_SPAPR_MACHINE
macro each time.
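Roughly, the idea is a single helper applied only to the latest machine; the helper name and body below are illustrative assumptions, not the exact patch:

/* Everything specific to the "latest" spapr machine lives here, so new
 * tweaks no longer require touching DEFINE_SPAPR_MACHINE itself. */
static void spapr_machine_latest_class_options(MachineClass *mc)
{
    mc->alias = "pseries";
    mc->is_default = 1;
}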
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20200207064628.1196095-1-mst@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When the PHB4 bridge was added, a dependency on PCIE_PORT was
added to XIVE_SPAPR and indirectly to PSERIES.
The build of the PowerNV machine is fine as long as we also build the
PSERIES machine.
If we disable the PSERIES machine, the PowerNV build fails because the
PCI Express files are not built:
/usr/bin/ld: hw/ppc/pnv.o: in function `pnv_chip_power8_pic_print_info':
.../hw/ppc/pnv.c:623: undefined reference to `pnv_phb3_msi_pic_print_info'
/usr/bin/ld: hw/ppc/pnv.o: in function `pnv_chip_power9_pic_print_info':
.../hw/ppc/pnv.c:639: undefined reference to `pnv_phb4_pic_print_info'
/usr/bin/ld: ../hw/usb/hcd-ehci-pci.o: in function `usb_ehci_pci_write_config':
.../hw/usb/hcd-ehci-pci.c:129: undefined reference to `pci_default_write_config'
/usr/bin/ld: ../hw/usb/hcd-ehci-pci.o: in function `usb_ehci_pci_realize':
.../hw/usb/hcd-ehci-pci.c:68: undefined reference to `pci_allocate_irq'
/usr/bin/ld: .../hw/usb/hcd-ehci-pci.c:72: undefined reference to `pci_register_bar'
/usr/bin/ld: ../hw/usb/hcd-ehci-pci.o:(.data.rel+0x50): undefined reference to `vmstate_pci_device'
This patch fixes the problem by adding the needed dependencies to POWERNV.
Fixes: 4f9924c4d4 ("ppc/pnv: Add models for POWER9 PHB4 PCIe Host bridge")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20200205232016.588202-3-lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
qtest "rtas" command is only available with pseries not all ppc64 targets,
so if I try to compile only powernv machine, the build fails with:
/usr/bin/ld: qtest.o: in function `qtest_process_command':
.../qtest.c:645: undefined reference to `qtest_rtas_call'
We fix this by enabling the rtas command only with the pseries machine.
Fixes: eeddd59f59 ("tests: add RTAS command in the protocol")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20200205232016.588202-2-lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "ibm,os-term" RTAS call has a single parameter which is a pointer to
a message from the guest kernel about the termination cause; this prints
it.
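A hedged sketch of the handler shape, modelled on the existing spapr RTAS helpers (includes omitted); the buffer size and the log call are illustrative choices rather than the verbatim patch:

static void rtas_ibm_os_term(PowerPCCPU *cpu, SpaprMachineState *spapr,
                             uint32_t token, uint32_t nargs, target_ulong args,
                             uint32_t nret, target_ulong rets)
{
    target_ulong msgaddr = rtas_ld(args, 0);    /* the single parameter */
    char msg[512];

    /* Copy up to 511 bytes of the message out of guest memory, make sure
     * it is NUL-terminated, and log it. */
    cpu_physical_memory_read(msgaddr, msg, sizeof(msg) - 1);
    msg[sizeof(msg) - 1] = '\0';
    qemu_log("OS terminated: %s\n", msg);

    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
}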
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20200203032044.118585-1-aik@ozlabs.ru>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging
Implement membarrier, SO_RCVTIMEO and SO_SNDTIMEO
Disable by default build of fdt, slirp and tools with linux-user
Improve strace and use qemu_log to send trace to a file
Add partial ALSA ioctl supports
# gpg: Signature made Thu 20 Feb 2020 09:20:20 GMT
# gpg: using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg: issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C
* remotes/vivier2/tags/linux-user-for-5.0-pull-request:
linux-user: Add support for selected alsa timer instructions using ioctls
linux-user: Add support for getting/setting selected alsa timer parameters using ioctls
linux-user: Add support for selecting alsa timer using ioctl
linux-user: Add support for getting/setting specified alsa timer parameters using ioctls
linux-user: Add support for getting alsa timer version and id
linux-user: remove gemu_log from the linux-user tree
linux-user: Use `qemu_log' for strace
linux-user: Use `qemu_log' for non-strace logging
configure: Avoid compiling system tools on user build by default
linux-user/strace: Improve output of various syscalls
configure: linux-user doesn't need neither fdt nor slirp
linux-user: implement getsockopt SO_RCVTIMEO and SO_SNDTIMEO
linux-user: Implement membarrier syscall
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a test that all fields in the output of "qemu-img snapshot -l" are
separated by spaces.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200117105859.241818-3-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[mreitz: Renamed test from 284 to 286]
Signed-off-by: Max Reitz <mreitz@redhat.com>
When printing the snapshot list (e.g. with qemu-img snapshot -l), the VM
size field is only seven characters wide. As of de38b5005e, this is
not necessarily sufficient: We generally print three digits, and this
may require a decimal point. Also, the unit field grew from something
as plain as "M" to " MiB". This means that number and unit may take up
eight characters in total; but we also want spaces in front.
Considering that previously the maximum width was four characters and the
field width was chosen to be three characters wider, let us adjust the
field width to eleven now.
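A standalone illustration of the arithmetic (not the qemu-img code itself): with a minimum field width of 7, an eight-character size such as "1.02 GiB" leaves no separating space, while a width of 11 still leaves three spaces in front:

#include <stdio.h>

int main(void)
{
    /* old width: the value fills the field completely, so no space is
     * left between this column and the previous one */
    printf("[%7s]\n", "1.02 GiB");     /* prints "[1.02 GiB]" */

    /* new width: 8 characters of value plus 3 leading spaces still fit */
    printf("[%11s]\n", "1.02 GiB");    /* prints "[   1.02 GiB]" */
    return 0;
}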
Fixes: de38b5005e
Buglink: https://bugs.launchpad.net/qemu/+bug/1859989
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200117105859.241818-2-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This must not crash.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200121155915.98232-3-mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
s.target_has_backing does not reflect whether the target BDS has a
backing file; it only tells whether we should use a backing file during
conversion (specified by -B).
As such, if you use convert -n, the target does not necessarily actually
have a backing file, and then dereferencing out_bs->backing fails here.
When converting to an existing file, we should set
target_backing_sectors to a negative value, because first, as the
comment explains, this value is only used for optimization, so it is
always fine to do that.
Second, we use this value to determine where the target must be
initialized to zeroes (overlays are initialized to zero after the end of
their backing file). When converting to an existing file, we cannot
assume that to be true.
Cc: qemu-stable@nongnu.org
Fixes: 351c8efff9 ("qemu-img: Special post-backing convert handling")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200121155915.98232-2-mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200122164532.178040-6-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
[mreitz: Added a note that NBD does not support resizing, which is why
the second case is expected to fail]
Signed-off-by: Max Reitz <mreitz@redhat.com>
The generic fallback implementation effectively does the same.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200122164532.178040-5-mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The generic fallback implementation effectively does the same.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200122164532.178040-4-mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
If a protocol driver does not support image creation, we can check
whether the file exists already. If so, just truncating it will be
sufficient.
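A POSIX-level sketch of the fallback idea only; qemu-img itself goes through the block layer rather than these syscalls, so treat this as an illustration of the flow, not the implementation:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* If "creating" the image is not supported by the protocol, reuse an
 * already existing file and simply set it to the requested length. */
static int create_by_truncating(const char *filename, off_t size)
{
    int fd = open(filename, O_WRONLY);   /* fails if the file is missing */

    if (fd < 0) {
        return -errno;
    }
    if (ftruncate(fd, size) < 0) {
        int ret = -errno;
        close(fd);
        return ret;
    }
    return close(fd) < 0 ? -errno : 0;
}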
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200122164532.178040-3-mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When nbd_close() is called from a coroutine, the connection_co never
gets to run, and thus nbd_teardown_connection() hangs.
This is because aio_co_enter() only puts the connection_co into the main
coroutine's wake-up queue, so this main coroutine needs to yield and
wait for connection_co to terminate.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200122164532.178040-2-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
First, driver=qcow2 will not work so well with non-qcow2 formats (and
this test claims to support qcow, qed, and vmdk).
Second, vmdk will always report the backing file format to be vmdk.
Filter that out so the output looks the same as for all other formats.
Third, the flat vmdk subformats do not support backing files, so they
will not work with this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20191219144243.1763246-1-mreitz@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
backup-top "supports" write-unchanged, by skipping CBW operation in
backup_top_co_pwritev. But it forgets to do the same in
backup_top_co_pwrite_zeroes, as well as declare support for
BDRV_REQ_WRITE_UNCHANGED.
Fix this, and, while being here, declare also support for flags
supported by source child.
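A hedged sketch of the zero-write path (includes omitted); backup_top_cbw() stands in for the driver's internal copy-before-write helper, and its name and signature are assumptions of this sketch:

static int coroutine_fn backup_top_co_pwrite_zeroes(BlockDriverState *bs,
                                                    int64_t offset, int bytes,
                                                    BdrvRequestFlags flags)
{
    int ret = 0;

    if (!(flags & BDRV_REQ_WRITE_UNCHANGED)) {
        /* Only copy out the old data when the write may change it. */
        ret = backup_top_cbw(bs, offset, bytes);
    }
    if (ret < 0) {
        return ret;
    }
    return bdrv_co_pwrite_zeroes(bs->backing, offset, bytes, flags);
}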
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200207161231.32707-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When initializing the LUKS header, its size with the default encryption
parameters will currently be 2068480 bytes. This is rounded up to
a multiple of the cluster size, 2081792, with 64k sectors. If the
end of the header is not the same as the end of the cluster, we fill
the extra space with zeros. This overlooks, however, that not even the
space allocated for the header will be fully initialized, as we
only write key material for the first key slot. The space left
for the other 7 slots is never written to.
An optimization to the ref count checking code:
commit a5fff8d4b4
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: Wed Feb 27 16:14:30 2019 +0300
qcow2-refcount: avoid eating RAM
made the assumption that every cluster which was allocated would
have at least some data written to it. This was violated by the way
the LUKS header is only partially written, with much space simply
reserved for future use.
Depending on the cluster size this problem was masked by the
logic which wrote zeros between the end of the LUKS header and
the end of the cluster.
$ qemu-img create --object secret,id=cluster_encrypt0,data=123456 \
-f qcow2 -o cluster_size=2k,encrypt.iter-time=1,\
encrypt.format=luks,encrypt.key-secret=cluster_encrypt0 \
cluster_size_check.qcow2 100M
Formatting 'cluster_size_check.qcow2', fmt=qcow2 size=104857600
encrypt.format=luks encrypt.key-secret=cluster_encrypt0
encrypt.iter-time=1 cluster_size=2048 lazy_refcounts=off refcount_bits=16
$ qemu-img check --object secret,id=cluster_encrypt0,data=redhat \
'json:{"driver": "qcow2", "encrypt.format": "luks", \
"encrypt.key-secret": "cluster_encrypt0", \
"file.driver": "file", "file.filename": "cluster_size_check.qcow2"}'
ERROR: counting reference for region exceeding the end of the file by one cluster or more: offset 0x2000 size 0x1f9000
Leaked cluster 4 refcount=1 reference=0
...snip...
Leaked cluster 130 refcount=1 reference=0
1 errors were found on the image.
Data may be corrupted, or further writes to the image may corrupt it.
127 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.
Image end offset: 268288
The problem only exists when the disk image is entirely empty. Writing
data to the disk image payload will solve the problem by causing the
end of the file to be extended further.
The change fixes it by ensuring that the entire allocated LUKS header
region is fully initialized with zeros. The qemu-img check will still
fail for any pre-existing disk images created prior to this change,
unless at least 1 byte of the payload is written to.
Fully writing zeros to the entire LUKS header is a good idea regardless
as it ensures that space has been allocated on the host filesystem (or
whatever block storage backend is used).
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200207135520.2669430-1-berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In many cases the target of a convert operation is a newly provisioned
device that the user knows is blank (reads as zero). In this situation
there is no requirement for qemu-img to wastefully zero out the entire
device.
Add a new option, --target-is-zero, allowing the user to indicate that
an existing target device will return zeros for all reads.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20200205110248.2009589-2-david.edmondson@oracle.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When a management application manages node names there's no reason to
recurse into backing images in the output of query-named-block-nodes.
Add a parameter to the command which will return just the top level
structs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Message-Id: <4470f8c779abc404dcf65e375db195cd91a80651.1579509782.git.pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[mreitz: Fixed coding style]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 8dff69b941 added an aio parameter to the drive parameter but forgot
to add a comma before it, thus breaking the test. Fix it again.
Fixes: 8dff69b941
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200206130812.612960-1-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit d9df28e7b0 ("iotests: check whitelisted formats") added the
modern @iotests.skip_if_unsupported() to the functions in this test,
so we don't need the old explicit test here anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200129141751.32652-1-thuth@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Tested-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The patch adds a new, additional field to the qcow2 header: compression_type,
which specifies the compression type. If the field is absent or zero, the
default compression type is used: ZLIB, which corresponds to the current
behavior.
A new compression type (ZSTD) is to be added in a further commit.
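A hedged, standalone sketch of how a reader could interpret the new field; the enum names and values here are placeholders for illustration, not the final QEMU constants:

#include <stdbool.h>
#include <stdint.h>

enum {
    QCOW2_COMPRESSION_ZLIB = 0,   /* "absent or zero" keeps today's behavior */
    QCOW2_COMPRESSION_ZSTD,       /* to be defined by a later commit */
};

static int qcow2_pick_compression(bool field_present, uint8_t compression_type)
{
    if (!field_present || compression_type == 0) {
        return QCOW2_COMPRESSION_ZLIB;
    }
    return compression_type;
}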
Suggested-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200131142219.3264-3-vsementsov@virtuozzo.com>
[mreitz: s/Bits 3-63: Reserved/Bits 4-63: Reserved/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Make it more obvious how to add new fields to the version 3 header and
how to interpret them.
The specification is adjusted so that for newly defined optional fields:
1. Software may support some of these optional fields and ignore the
others, which means that features may be backported to downstream
QEMU independently.
2. If we want to add an incompatible field (or a field for which some of
its values would be incompatible), it must be accompanied by an
incompatible feature bit.
Also, the concept of "default is zero" is clarified, as it is strange to
say that the value of the field is assumed to be zero for a software
version which doesn't know about the field at all and doesn't know how to
treat it, be it zero or not.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200131142219.3264-2-vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
[mreitz: s/some its/some of its/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Considering that the legacy "mem" option is deprecated, use memdev
in tests and add an additional test for the legacy "mem" option
on an old machine type, to make sure it won't regress in the future.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200219160953.13771-80-imammedo@redhat.com>
Use a GString to pass arguments to make_cli() so that it is easy
to dynamically change test case arguments from main(). The follow-up
patch will use it to change RAM size options depending on the target.
While at it, clean up 'cli' freeing by using the g_autofree annotation.
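A minimal GLib sketch of the pattern; make_cli() and its arguments here are assumptions standing in for the test's own helper:

#include <glib.h>

static char *make_cli(const GString *generic, const char *test_args)
{
    return g_strdup_printf("%s %s", generic->str, test_args);
}

int main(void)
{
    g_autoptr(GString) args = g_string_new("-machine none");

    /* main() can now adjust the arguments per target before the final
     * command line is built */
    g_string_append_printf(args, " -m %dM", 128);

    g_autofree char *cli = make_cli(args, "-nodefaults");
    g_print("%s\n", cli);
    return 0;
}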
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200219160953.13771-79-imammedo@redhat.com>
When the option -mem-prealloc is used with one or more memory-backend
objects, the created backends may not obey the configured bind policy, or
creation may fail after the kernel attempts to move pages according
to the bind policy.
The reason is in file_ram_alloc(), which will pre-allocate
any descriptor-based RAM if the global mem_prealloc != 0, and that
happens way before the bind policy is applied to the memory range.
One way to fix it would be to extend the memory_region_foo() API
and add more invariants that could be broken later due to implicit
dependencies that are hard to track.
Another approach is to drop the ad hoc main RAM allocation and
consolidate it around memory-backend. That allows having a
single place that allocates guest RAM (main and memdev)
in the same way, and then the global mem_prealloc can be
replaced by backend properties that will affect the created
memory-backend objects, but only in the correct order this time.
With main RAM now converted to hostmem backends, there is no
point in keeping the global mem_prealloc around, so alias
-mem-prealloc to the "memory-backend.prealloc=on"
machine compat[*] property and make mem_prealloc a local
variable that only steers registration of the compat property.
*) Currently the user-accessible -global works only with DEVICE-based
objects, and extra work is needed to make it work
with hostmem backends. But that is a convenience option
and out of scope of this already huge refactoring.
Hence machine compat properties were used.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200219160953.13771-78-imammedo@redhat.com>
The property will allow the user to specify the number of threads to use
in the pre-allocation stage. It will also allow reducing the implicit
hostmem dependency on current_machine.
On object creation it will default to 1, but via a machine
compat property it will be updated to MachineState::smp::cpus
to keep the current behavior for hostmem and main RAM (which is
now also hostmem based).
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200219160953.13771-77-imammedo@redhat.com>
It's no longer used anywhere besides main(),
so make it a local variable that is used for CLI compat
purposes to keep the -mem-path option working.
Under the hood QEMU will use it to create a
memory-backend-file,mem-path=...
backend and use its MemoryRegion as main RAM.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-76-imammedo@redhat.com>
The function will report an error that mentions the global mem_path,
which was only valid if the legacy -mem-path was used, and
only in the case of main RAM.
However, it doesn't work with hostmem backends
(for example:
"
qemu: -object memory-backend-file,id=ram0,size=128M,mem-path=foo:
backing store (null) size 0x200000 does not match 'size' option 0x8000000
")
and couldn't possibly work in the general FD case the function
is supposed to handle.
Taking into account that main RAM was converted into a
memory-backend-foo object, there is no point in printing the
file name (from an inappropriate place), as the failing path is
part of the backend's error message.
Hence drop the bogus mem_path usage from qemu_ram_alloc_from_fd();
it's the job of its user to add the file name to the error message if applicable.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-75-imammedo@redhat.com>
Since all RAM is backed by hostmem backends, drop the
global -mem-path invariant and simplify the code.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-74-imammedo@redhat.com>
All boards were switched to using a memdev backend for main RAM,
so we can drop the no-longer-used memory_region_allocate_system_memory().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200219160953.13771-73-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
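As a hedged sketch, the conversion pattern looks roughly like this; "board.ram" and the base address are illustrative placeholders, not a particular machine:

#include "qemu/osdep.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"

static void board_init(MachineState *machine)
{
    /* machine->ram has already been allocated by generic code from the
     * (default or user-specified) memdev backend; just map it. */
    memory_region_add_subregion(get_system_memory(), 0x0, machine->ram);
}

static void board_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);

    mc->init = board_init;
    /* Opting in to the memdev scheme: the name used for the implicitly
     * created memory backend. */
    mc->default_ram_id = "board.ram";
}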
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-72-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
The patch moves the RAM size check into sun4m_hw_init() and drops
ram_init(), moving the remainder to sun4m_hw_init() as well,
as it was the only place that called it.
It also rewrites the implementation of RamDevice a little bit, which
could serve as an example of a frontend device for a RAM backend.
(Caveats are:
1. it doesn't check for the memdev backend being mapped,
since that has been usurped by the generic machine code to handle
the majority of machines, which don't have a RAM frontend device
2. it still lacks an 'addr' property and still has a hardcoded
sysbus_mmio_map() in board init. If done right, the board should
set the 'addr' property and the bus/machine plug handler should map
it at device realize time.
)
Further improvements were left as an exercise for the future,
since frontends are out of scope for the RAM conversion to memdev.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200219160953.13771-71-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-70-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200219160953.13771-69-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200219160953.13771-68-imammedo@redhat.com>
The memory_region_allocate_system_memory() API is going away, so
replace it with a memdev-allocated MemoryRegion. The latter is
initialized by generic code, so the board only needs to opt in
to the memdev scheme by providing
MachineClass::default_ram_id
and using MachineState::ram instead of manually initializing
the RAM memory region.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20200219160953.13771-67-imammedo@redhat.com>