Merge tag 'pull-request-2023-10-12' of https://gitlab.com/thuth/qemu into staging
* Fix CVE-2023-1544
* Deprecate the rdma code
* Fix flaky npcm7xx_timer test
* i2c-echo license statement and Kconfig switch
* Disable the failing riscv64-debian-cross CI job by default
* tag 'pull-request-2023-10-12' of https://gitlab.com/thuth/qemu:
gitlab-ci: Disable the riscv64-debian-cross-container by default
MAINTAINERS: Add include/sysemu/qtest.h to the qtest section
hw/misc/Kconfig: add switch for i2c-echo
hw/misc/i2c-echo: add copyright/license note
tests/qtest: Fix npcm7xx_timer-test.c flaky test
hw/rdma: Deprecate the pvrdma device and the rdma subsystem
hw/pvrdma: Protect against buggy or malicious guest driver
Conflicts:
docs/about/deprecated.rst
Context conflict between RISC-V and RDMA deprecation.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
Block layer patches
- Clean up coroutine versions of bdrv_{is_allocated,block_status}*
- Graph locking part 5 (protect children/parent links)
# gpg: Signature made Thu 12 Oct 2023 12:20:15 EDT
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (26 commits)
block: Add assertion for bdrv_graph_wrlock()
block: Protect bs->children with graph_lock
block: Protect bs->parents with graph_lock
block: Mark bdrv_get_specific_info() and callers GRAPH_RDLOCK
block: Mark bdrv_apply_auto_read_only() and callers GRAPH_RDLOCK
block: Mark bdrv_op_is_blocked() and callers GRAPH_RDLOCK
qcow2: Mark check_constraints_on_bitmap() GRAPH_RDLOCK
qcow2: Mark qcow2_inactivate() and callers GRAPH_RDLOCK
qcow2: Mark qcow2_signal_corruption() and callers GRAPH_RDLOCK
block: Mark bdrv_amend_options() and callers GRAPH_RDLOCK
block: Mark bdrv_get_parent_name() and callers GRAPH_RDLOCK
block: Mark bdrv_primary_child() and callers GRAPH_RDLOCK
block: Mark bdrv_refresh_filename() and callers GRAPH_RDLOCK
block: Mark bdrv_get_xdbg_block_graph() and callers GRAPH_RDLOCK
block: Take graph rdlock in parts of reopen
block: Mark bdrv_snapshot_fallback() and callers GRAPH_RDLOCK
block: Mark bdrv_parent_cb_resize() and callers GRAPH_RDLOCK
block: Mark drain related functions GRAPH_RDLOCK
block: Mark bdrv_first_blk() and bdrv_is_root_node() GRAPH_RDLOCK
block: Take graph rdlock in bdrv_inactivate_all()
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
"Host Memory Backends" and "Memory devices" queue ("mem"):
- Support memory devices with multiple memslots
- Support memory devices that dynamically consume memslots
- Support memory devices that can automatically decide on the number of
memslots to use
- virtio-mem support for exposing memory dynamically via multiple
memslots
- Some required cleanups/refactorings
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAmUn+XMRHGRhdmlkQHJl
ZGhhdC5jb20ACgkQTd4Q9wD/g1qDHA//T01suTa+uzrcoJHoMWN11S47WnAmbuTo
vVakucLBPMJAa9xZeCy3OavXaVGpHkw+t6g3OFknof0LfQ5/j9iE3Q1PxURN7g5j
SJ2WJXCoceM6T4TMhPvVvgEaYjFmESqZB5FZgedMT0QRyhAxMuF9pCkWhk1O3OAV
JqQKqLFiGcv60AEuBYGZGzgiOUv8EJ5gKwRF4VOdyHIxqZDw1aZXzlcd4TzFZBQ7
rwW/3ef+sFmUJdmfrSrqcIlQSRrqZ2w95xATDzLTIEEUT3SWqh/E95EZWIz1M0oQ
NgWgFiLCR1KOj7bWFhLXT7IfyLh0mEysD+P/hY6QwQ4RewWG7EW5UK+JFswssdcZ
rEj5XpHZzev/wx7hM4bWsoQ+VIvrH7j3uYGyWkcgYRbdDEkWDv2rsT23lwGYNhht
oBsrdEBELRw6v4C8doq/+sCmHmuxUMqTGwbArCQVnB1XnLxOEkuqlnfq5MORkzNF
fxbIRx+LRluOllC0HVaDQd8qxRq1+UC5WIpAcDcrouy4HGgi1onWKrXpgjIAbVyH
M6cENkK7rnRk96gpeXdmrf0h9HqRciAOY8oUsFsvLyKBOCPBWDrLyOQEY5UoSdtD
m4QpEVgywCy2z1uU/UObeT/UxJy/9EL/Zb+DHoEK06iEhwONoUJjEBYMJD38RMkk
mwPTB4UAk9g=
=s69t
-----END PGP SIGNATURE-----
Merge tag 'mem-2023-10-12' of https://github.com/davidhildenbrand/qemu into staging
Hi,
"Host Memory Backends" and "Memory devices" queue ("mem"):
- Support memory devices with multiple memslots
- Support memory devices that dynamically consume memslots
- Support memory devices that can automatically decide on the number of
memslots to use
- virtio-mem support for exposing memory dynamically via multiple
memslots
- Some required cleanups/refactorings
# gpg: Signature made Thu 12 Oct 2023 09:49:39 EDT
# gpg: using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
# gpg: issuer "david@redhat.com"
# gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
# gpg: aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
# gpg: aka "David Hildenbrand <hildenbr@in.tum.de>" [unknown]
# gpg: WARNING: The key's User ID is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D FCCA 4DDE 10F7 00FF 835A
* tag 'mem-2023-10-12' of https://github.com/davidhildenbrand/qemu:
virtio-mem: Mark memslot alias memory regions unmergeable
memory,vhost: Allow for marking memory device memory regions unmergeable
virtio-mem: Expose device memory dynamically via multiple memslots if enabled
virtio-mem: Update state to match bitmap as soon as it's been migrated
virtio-mem: Pass non-const VirtIOMEM via virtio_mem_range_cb
memory: Clarify mapping requirements for RamDiscardManager
memory-device,vhost: Support automatic decision on the number of memslots
vhost: Add vhost_get_max_memslots()
kvm: Add stub for kvm_get_max_memslots()
memory-device,vhost: Support memory devices that dynamically consume memslots
memory-device: Track required and actually used memslots in DeviceMemoryState
stubs: Rename qmp_memory_device.c to memory_device.c
memory-device: Support memory devices with multiple memslots
vhost: Return number of free memslots
kvm: Return number of free memslots
softmmu/physmem: Fixup qemu_ram_block_from_host() documentation
vhost: Remove vhost_backend_can_merge() callback
vhost: Rework memslot filtering and fix "used_memslot" tracking
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch fixes invalid UFS register fields. It resolves an issue
reported by Bin Meng that caused UFS to fail on RISC-V.
Fixes: bc4e68d362 ("hw/ufs: Initial commit for emulated Universal-Flash-Storage")
Signed-off-by: Jeuk Kim <jeuk20.kim@samsung.com>
Reported-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Tested-by: Bin Meng <bmeng@tinylab.org>
Fixed four UFS-related Coverity issues. The issues and their fixes are as
follows:
1. CID 1519042: Security issue with the rand() function
   Changed the test to use a fixed value (0xab) instead of rand()
2. CID 1519043: Dereference after null check
   Removed useless (redundant) null checks
3. CID 1519050: Out-of-bounds access issue
   Fixed by passing an array-type variable declared with DECLARE_BITMAP()
   to find_first_bit and find_next_bit (see the sketch below)
4. CID 1519051: Out-of-bounds read issue
   Fixed an incorrect range check for the lun
Fix coverity CID: 1519042 1519043 1519050 1519051
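Item 3 is the classic fix of handing find_first_bit() a properly sized
bitmap instead of a pointer to something smaller. A standalone sketch of
the DECLARE_BITMAP() pattern (names like 'doorbell' are illustrative, not
taken from the UFS test code):

    /* bitmap_toy.c -- compile with: gcc bitmap_toy.c */
    #include <limits.h>
    #include <stdio.h>

    #define BITS_PER_LONG    (CHAR_BIT * sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
    #define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

    /* Toy find_first_bit with the same contract as QEMU's:
     * returns 'size' when no bit is set. */
    static unsigned long find_first_bit(const unsigned long *map,
                                        unsigned long size)
    {
        for (unsigned long i = 0; i < size; i++) {
            if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))) {
                return i;
            }
        }
        return size;
    }

    int main(void)
    {
        DECLARE_BITMAP(doorbell, 32) = { 0 }; /* correctly sized long array */
        doorbell[0] |= 1UL << 5;              /* mark slot 5 pending */
        printf("first pending slot: %lu\n", find_first_bit(doorbell, 32));
        return 0;
    }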
Signed-off-by: Jeuk Kim <jeuk20.kim@samsung.com>
Commit a908985971 ("target/i386/seg_helper: introduce tss_set_busy",
2023-09-26) failed to use the tss_selector argument of the new function,
which was therefore unused.
This shows up as a #GP fault when booting old versions of 32-bit
Linux.
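Reduced to a standalone toy, the bug class looks like this (illustrative
names; not the actual seg_helper code):

    /* The refactored helper takes the selector as an argument but keeps
     * using stale CPU state instead of it. */
    #include <stdio.h>

    struct cpu_state { int tr_selector; };

    static int desc_index_buggy(const struct cpu_state *env, int tss_selector)
    {
        (void)tss_selector;            /* BUG: the argument is never used */
        return env->tr_selector >> 3;  /* stale selector */
    }

    static int desc_index_fixed(const struct cpu_state *env, int tss_selector)
    {
        (void)env;
        return tss_selector >> 3;      /* use the caller-provided selector */
    }

    int main(void)
    {
        struct cpu_state env = { .tr_selector = 0x28 };
        printf("buggy=%d fixed=%d\n",
               desc_index_buggy(&env, 0x30), desc_index_fixed(&env, 0x30));
        return 0;
    }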
Fixes: a908985971 ("target/i386/seg_helper: introduce tss_set_busy", 2023-09-26)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20231011135350.438492-1-pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Rename some variables to avoid compiler warnings when compiling
with -Wshadow=local.
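A minimal standalone illustration of the warning these renames silence
(variable names are made up, not from the patch):

    /* shadow.c -- compile with: gcc -Wshadow=local -c shadow.c */
    #include <stdio.h>

    int main(void)
    {
        int ret = 0;
        for (int i = 0; i < 3; i++) {
            int ret = i * 2; /* warning: declaration of 'ret' shadows a
                              * previous local */
            printf("%d\n", ret);
        }
        /* The fix pattern: rename the inner variable (e.g. 'ret2') so both
         * stay distinct and the warning goes away. */
        return ret;
    }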
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231009083726.30301-1-thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
bdrv_graph_wrlock() can't run in a coroutine (because it polls) and
requires holding the BQL. We already have GLOBAL_STATE_CODE() to assert
the latter. Assert the former as well and add a no_coroutine_fn marker.
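A sketch of the resulting shape (parameter list elided; QEMU's actual
implementation differs in detail):

    void no_coroutine_fn bdrv_graph_wrlock(/* ... */)
    {
        GLOBAL_STATE_CODE();           /* asserts the BQL is held */
        assert(!qemu_in_coroutine());  /* polls, so must not run in a
                                        * coroutine */
        /* ... acquire the writer lock, polling until readers drain ... */
    }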
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-23-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Almost all functions that access the child links already take the graph
lock now. Add locking to the remaining users and finally annotate the
struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-22-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Almost all functions that access the parent link already take the graph
lock now. Add locking to the remaining user in a test case and finally
annotate the struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-21-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_get_specific_info() need to hold a reader lock for the graph.
This removes an assume_graph_lock() call in vmdk's implementation.
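These annotations build on clang's thread safety analysis
(-Wthread-safety). A self-contained toy showing the mechanism -- this
mirrors the idea of GRAPH_RDLOCK, it is not QEMU's graph-lock.h:

    /* toy_tsa.c -- compile with: clang -Wthread-safety -c toy_tsa.c */
    struct __attribute__((capability("graph"))) ToyGraphLock { int unused; };
    static struct ToyGraphLock toy_lock;

    static void toy_rdlock(void)
        __attribute__((acquire_shared_capability(toy_lock)));
    static void toy_rdunlock(void)
        __attribute__((release_shared_capability(toy_lock)));
    static int count_children(void)
        __attribute__((requires_shared_capability(toy_lock)));

    static void toy_rdlock(void) {}    /* bodies are no-ops: the analysis */
    static void toy_rdunlock(void) {}  /* follows only the annotations    */
    static int count_children(void) { return 0; }

    int caller(void)
    {
        toy_rdlock();
        int n = count_children();  /* OK: shared capability held */
        toy_rdunlock();
        return n;
        /* Calling count_children() without the lock would be a compile-time
         * -Wthread-safety warning -- which is what GRAPH_RDLOCK buys QEMU. */
    }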
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-20-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_apply_auto_read_only() need to hold a reader lock for the graph
because it calls bdrv_can_set_read_only(), which indirectly accesses the
parents list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-19-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_op_is_blocked() need to hold a reader lock for the graph
because it calls bdrv_get_device_or_node_name(), which accesses the
parents list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-18-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It still has an assume_graph_lock() call, but all of its callers are now
properly annotated to hold the graph lock. Update the function to be
GRAPH_RDLOCK as well and remove the assume_graph_lock().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-17-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
qcow2_inactivate() need to hold a reader lock for the graph because it
calls bdrv_get_device_or_node_name(), which accesses the parents list of
a node.
qcow2_do_close() is a bit strange because it is called from different
contexts. If close_data_file = true, we know that we were called from
non-coroutine main loop context (more specifically, we're coming from
qcow2_close()) and can safely drop the reader lock temporarily with
bdrv_graph_rdunlock_main_loop() and acquire the writer lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-16-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
qcow2_signal_corruption() need to hold a reader lock for the graph
because it calls bdrv_get_node_name(), which accesses the parents list
of a node.
For some places, we know that they will hold the lock, but we don't have
the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock()
with a FIXME comment. These places will be removed once everything is
properly annotated.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-15-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_amend_options() need to hold a reader lock for the graph. This
removes an assume_graph_lock() call in crypto's implementation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-14-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_get_parent_name() need to hold a reader lock for the graph
because it accesses the parents list of a node.
For some places, we know that they will hold the lock, but we don't have
the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock()
with a FIXME comment. These places will be removed once everything is
properly annotated.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-13-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_primary_child() need to hold a reader lock for the graph
because it accesses the children list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-12-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_refresh_filename() need to hold a reader lock for the graph
because it accesses the children list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-11-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_get_xdbg_block_graph() need to hold a reader lock for the graph
because it accesses the children list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-10-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reopen isn't easy with respect to locking because many of its functions
need to iterate the graph, some change it, and there are drains in the
middle where you can't hold any locks.
Therefore, just document most of the functions as running unlocked, and
take locks internally before accessing the graph.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-9-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_snapshot_fallback() need to hold a reader lock for the graph
because it accesses the children list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-8-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_parent_cb_resize() need to hold a reader lock for the graph.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-7-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Draining recursively traverses the graph, so we need to make sure that
such accesses to the graph are also protected by the graph rdlock.
There are 3 different drain callers to consider:
1. drain in the main loop: no issue at all, rdlock is a nop.
2. drain in an iothread: rdlock only works in the main loop or in
   coroutines, so disallow it.
3. drain in a coroutine (regardless of AioContext): the drain mechanism
   takes care of scheduling a BH in the bs->aio_context that will then
   perform the actual draining. This is wrong because, as pointed out
   in (2), if bs->aio_context is an iothread then rdlock won't work.
   Therefore, change bdrv_co_yield_to_drain to schedule the BH in the
   main loop.
Caller (2) also implies that we need to modify test-bdrv-drain.c to
disallow draining in the iothreads.
For some places, we know that they will hold the lock, but we don't have
the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock()
with a FIXME comment. These places will be removed once everything is
properly annotated.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds GRAPH_RDLOCK annotations to declare that callers of
bdrv_first_blk() and bdrv_is_root_node() need to hold a reader lock
for the graph. These functions are the only functions in block-backend.c
that access the parent list of a node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-5-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The function reads the parents list, so it needs to hold the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-4-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a new wrapper type for GRAPH_RDLOCK functions that should be called
from coroutine context.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
AIO callbacks are effectively coroutine_mixed_fn. If AIO requests don't
return immediately, their callback is called from the request coroutine.
This means that in AIO callbacks, we can't call no_coroutine_fns such as
bdrv_graph_wrlock(). Unfortunately test-bdrv-drain does so.
Change the test to use a BH to drop out of coroutine context, and add
coroutine_mixed_fn and no_coroutine_fn markers to clarify the context
each function runs in.
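The "use a BH to drop out of the current context" pattern, reduced to a
standalone GLib analogue (QEMU's bottom halves play the role of the idle
source here; this is an illustration, not the test code):

    /* bh_toy.c -- compile with:
     * gcc bh_toy.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>
    #include <stdio.h>

    static GMainLoop *loop;

    static gboolean deferred_work(gpointer data)
    {
        /* Runs from the main loop, outside the original calling context. */
        printf("running %s from the main loop\n", (const char *)data);
        g_main_loop_quit(loop);
        return G_SOURCE_REMOVE;  /* one-shot, like a QEMU BH */
    }

    int main(void)
    {
        loop = g_main_loop_new(NULL, FALSE);
        g_idle_add(deferred_work, "request-1");  /* schedule, don't call */
        g_main_loop_run(loop);
        g_main_loop_unref(loop);
        return 0;
    }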
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-2-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-5-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-4-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Include both coroutine and non-coroutine versions, the latter being
co_wrapper_mixed_bdrv_rdlock of the former.
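The resulting header declarations look roughly like this (from my reading
of include/block/block-io.h; treat the exact prototypes as approximate):

    int coroutine_fn GRAPH_RDLOCK
    bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes,
                         int64_t *pnum);

    /* Generated wrapper: callable from coroutine and non-coroutine context,
     * taking the graph rdlock itself. */
    int co_wrapper_mixed_bdrv_rdlock
    bdrv_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes,
                      int64_t *pnum);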
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-3-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_block_status exists as a wrapper for bdrv_block_status_above, but
the name of the (hypothetical) coroutine version, bdrv_co_block_status,
is squatted by a random static function. Rename it to
bdrv_co_do_block_status.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-2-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge tag 'pull-riscv-to-apply-20231012-1' of https://github.com/alistair23/qemu into staging
Second RISC-V PR for 8.2
* Add support for the max CPU
* Detect user choice in TCG
* Clear CSR values at reset and sync MPSTATE with host
* Fix the typo of inverted order of pmpaddr13 and pmpaddr14
* Split TCG/KVM accelerators from cpu.c
* Add extension properties for all cpus
* Replace GDB exit calls with proper shutdown
* Support KVM_GET_REG_LIST
* Remove RVG warning
* Use env_archcpu for better performance
* Deprecate capital 'Z' CPU properties
* Fix vfwmaccbf16.vf
# gpg: Signature made Thu 12 Oct 2023 00:09:36 EDT
# gpg: using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65 9296 AF7C 9513 0C53 8013
* tag 'pull-riscv-to-apply-20231012-1' of https://github.com/alistair23/qemu: (54 commits)
target/riscv: Fix vfwmaccbf16.vf
target/riscv: deprecate capital 'Z' CPU properties
target/riscv: Use env_archcpu for better performance
target/riscv/tcg: remove RVG warning
target/riscv/kvm: support KVM_GET_REG_LIST
target/riscv/kvm: improve 'init_multiext_cfg' error msg
gdbstub: replace exit calls with proper shutdown for softmmu
hw/char: riscv_htif: replace exit calls with proper shutdown
hw/misc/sifive_test.c: replace exit calls with proper shutdown
softmmu: pass the main loop status to gdb "Wxx" packet
softmmu: add means to pass an exit code when requesting a shutdown
target/riscv/tcg-cpu.c: add extension properties for all cpus
target/riscv: add riscv_cpu_get_name()
target/riscv/cpu: move priv spec functions to tcg-cpu.c
target/riscv/cpu.c: export isa_edata_arr[]
target/riscv/tcg: move riscv_cpu_add_misa_properties() to tcg-cpu.c
target/riscv/cpu.c: make misa_ext_cfgs[] 'const'
target/riscv/tcg: introduce tcg_cpu_instance_init()
target/riscv/cpu.c: export set_misa()
target/riscv/kvm: do not use riscv_cpu_add_misa_properties()
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This job has been failing for weeks. Let's mark it as manual until
it gets fixed.
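A hedged sketch of the shape of such a change (the real .gitlab-ci.d file
and job template differ):

    riscv64-debian-cross-container:
      extends: .container_job_template
      when: manual           # run only when triggered explicitly
      allow_failure: true
      variables:
        NAME: debian-riscv64-cross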
Message-Id: <82aa015a-ca94-49ce-beec-679cc175b726@redhat.com>
Acked-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Let's mark the memslot alias memory regions as unmergeable, such that
flatview and vhost won't merge adjacent memory region aliases and we can
atomically map/unmap individual aliases without affecting adjacent
alias memory regions.
This handles vhost and vfio in multiple-memslot mode correctly (which do
not support atomic memslot updates) and avoids the temporary removal of
large memslots, which can be an expensive operation. For example, vfio
might have to unpin + repin a lot of memory, which is undesired.
Message-ID: <20230926185738.277351-19-david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's allow for marking memory regions unmergeable, to teach
flatview code and vhost to not merge adjacent aliases to the same memory
region into a larger memory section; instead, we want separate aliases to
stay separate such that we can atomically map/unmap aliases without
affecting other aliases.
This is desired for virtio-mem mapping device memory located on a RAM
memory region via multiple aliases into a memory region container,
resulting in separate memslots that can get (un)mapped atomically.
As an example with virtio-mem, the layout would look something like this:
[...]
0000000240000000-00000020bfffffff (prio 0, i/o): device-memory
  0000000240000000-000000043fffffff (prio 0, i/o): virtio-mem
    0000000240000000-000000027fffffff (prio 0, ram): alias memslot-0 @mem2 0000000000000000-000000003fffffff
    0000000280000000-00000002bfffffff (prio 0, ram): alias memslot-1 @mem2 0000000040000000-000000007fffffff
    00000002c0000000-00000002ffffffff (prio 0, ram): alias memslot-2 @mem2 0000000080000000-00000000bfffffff
[...]
Without unmergeable memory regions, all three memslots would get merged into
a single memory section. For example, when mapping another alias (e.g.,
virtio-mem-memslot-3) or when unmapping any of the mapped aliases,
memory listeners will first get notified about the removal of the big
memory section to then get notified about re-adding of the new
(differently merged) memory section(s).
In an ideal world, memory listeners would be able to deal with that
atomically, like KVM nowadays does. However, (a) supporting this for other
memory listeners (vhost-user, vfio) is fairly hard: temporary removal
can result in all kinds of issues on concurrent access to guest memory;
and (b) this handling is undesired, because temporarily removing+readding
can consume quite some time on bigger memslots and is not efficient
(e.g., vfio unpinning and repinning pages ...).
Let's allow for marking a memory region unmergeable, such that we
can atomically (un)map aliases to the same memory region, similar to
(un)mapping individual DIMMs.
Similarly, teach the vhost code not to redo what the flatview core stopped
doing: don't merge such sections. Merging in vhost code is really only
relevant for handling random holes in boot memory; without this merging,
the vhost-user backend wouldn't be able to mmap() some boot memory
backed by hugetlb.
We'll use this for virtio-mem next.
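A sketch of how a device could use the new flag (call names and exact
signatures per my reading of this series -- approximate, not verbatim):

    /* Map one alias per memslot into the device's container and keep each
     * alias unmergeable so its flatview section -- and thus its memslot --
     * stays separate. Variable names are illustrative. */
    memory_region_init_alias(&alias[i], OBJECT(dev), "memslot-i",
                             backing_ram, i * slot_size, slot_size);
    memory_region_set_unmergeable(&alias[i], true);
    memory_region_add_subregion(container, i * slot_size, &alias[i]);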
Message-ID: <20230926185738.277351-18-david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Having large virtio-mem devices that only expose little memory to a VM
is currently a problem: we map the whole sparse memory region into the
guest using a single memslot, resulting in one gigantic memslot in KVM.
KVM allocates metadata for the whole memslot, which can result in quite
some memory waste.
Assuming we have a 1 TiB virtio-mem device and only expose little (e.g.,
1 GiB) memory, we would create a single 1 TiB memslot and KVM has to
allocate metadata for that 1 TiB memslot: on x86, this implies allocating
a significant amount of memory for metadata:
(1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB
-> For 1 TiB: 2147483648 + 4194304 + 8192 = ~ 2 GiB (0.2 %)
With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets
allocated lazily when required for nested VMs
(2) gfn_track: 2 bytes per 4 KiB
-> For 1 TiB: 536870912 = ~512 MiB (0.05 %)
(3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB
-> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)
(4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page
-> For 1 TiB: 536870912 = 64 MiB (0.006 %)
So we primarily care about (1) and (2). The bad thing is that the
memory consumption *doubles* once SMM is enabled, because we create the
memslot once for !SMM and once for SMM.
Having a 1 TiB memslot without the TDP MMU consumes around:
* With SMM: 5 GiB
* Without SMM: 2.5 GiB
Having a 1 TiB memslot with the TDP MMU consumes around:
* With SMM: 1 GiB
* Without SMM: 512 MiB
... and that's really something we want to optimize, to be able to just
start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device
that can grow very large (e.g., 1 TiB).
Consequently, using multiple memslots and only mapping the memslots we
really need can significantly reduce memory waste and speed up
memslot-related operations. Let's expose the sparse RAM memory region using
multiple memslots, mapping only the memslots we currently need into our
device memory region container.
The feature can be enabled using "dynamic-memslots=on" and requires
"unplugged-inaccessible=on", which is nowadays the default.
Once enabled, we'll auto-detect the number of memslots to use based on the
memslot limit provided by the core. We'll use at most 1 memslot per
gigabyte. Note that our global limit of memslots across all memory devices
is currently set to 256: even with multiple large virtio-mem devices,
we'd still have a sane limit on the number of memslots used.
The default is to not dynamically map memslots for now
("dynamic-memslots=off"). The optimization must be enabled manually,
because some vhost setups (e.g., hotplug of vhost-user devices) might be
problematic until we support more memslots, especially in vhost-user
backends.
Note that "dynamic-memslots=on" is just a hint that multiple memslots
*may* be used for internal optimizations, not that multiple memslots
*must* be used. The actual number of memslots that are used is an
internal detail: for example, once memslot metadata is no longer an
issue, we could simply stop optimizing for that. Migration source and
destination can differ on the setting of "dynamic-memslots".
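A hedged example invocation (sizes and IDs illustrative;
dynamic-memslots=on is the knob introduced here):

    qemu-system-x86_64 -m 4G,maxmem=1028G \
        -object memory-backend-ram,id=mem0,size=1T,reserve=off \
        -device virtio-mem-pci,id=vmem0,memdev=mem0,dynamic-memslots=on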
Message-ID: <20230926185738.277351-17-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
It's cleaner and future-proof to have any other state that depends on the
bitmap state be updated as soon as possible when restoring the bitmap.
So factor out informing the RamDiscardListeners into a function and call it
in the case of early migration, right after we've restored the bitmap.
Message-ID: <20230926185738.277351-16-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's prepare for a user that has to modify the VirtIOMEM device state.
Message-ID: <20230926185738.277351-15-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
We really only care about the RAM memory region not being mapped into
an address space yet as long as we're still setting up the
RamDiscardManager. Once mapped into an address space, memory notifiers
would get notified about such a region and any attempts to modify the
RamDiscardManager would be wrong.
While "mapped into an address space" is easy to check for RAM regions that
are mapped directly (following the ->container links), it's harder to
check when such regions are mapped indirectly via aliases. For now, we can
only detect that a region is mapped through an alias (->mapped_via_alias),
but we don't have a handle on these aliases to follow all their ->container
links to test if they are eventually mapped into an address space.
So relax the assertion in memory_region_set_ram_discard_manager(),
remove the check in memory_region_get_ram_discard_manager() and clarify
the doc.
Message-ID: <20230926185738.277351-14-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
We want to support memory devices that can automatically decide how many
memslots they will use. In the worst case, they have to use a single
memslot.
The target use cases are virtio-mem and the hyper-v balloon.
Let's calculate a reasonable limit such a memory device may use, and
instruct the device to make a decision based on that limit. Use a simple
heuristic that considers:
* A memslot soft limit of 256 across all memory devices, so as not to
  consume too many memslots -- which could harm performance.
* The memslots that are actually still free and unreserved
* The percentage of the remaining device memory region that the memory
  device will occupy.
Further, while we properly check before plugging a memory device whether
there are still free memslots, we have other memslot consumers (such as
boot memory, PCI BARs) that don't perform any checks and might dynamically
boot memory, PCI BARs) that don't perform any checks and might dynamically
consume memslots without any prior reservation. So we might succeed in
plugging a memory device, but once we dynamically map a PCI BAR we would
be in trouble. Doing accounting / reservation / checks for all such
users is problematic (e.g., sometimes we might temporarily split boot
memory into two memslots, triggered by the BIOS).
We use the historic magic memslot number of 509 as orientation: supporting
256 memory-device memslots (leaving 253 for boot memory and other devices)
has been proven to work reliably. We'll fall back to suggesting a single
memslot if we don't have at least 509 total memslots.
Plugging vhost devices with fewer than 509 memslots available while we
have memory devices plugged that consume multiple memslots due to
automatic decisions can be problematic. Most configurations might just fail
due to "limit < used + reserved"; however, it can also happen that these
memory devices would suddenly consume memslots that would actually be
required by other memslot consumers (boot, PCI BARs) later. Note that this
has always been sketchy with vhost devices that support only a small number
of memslots; but we don't want to make it any worse. So let's keep it simple
and simply reject plugging such vhost devices in such a configuration.
Eventually, all vhost devices that want to be fully compatible with such
memory devices should support a decent number of memslots (>= 509).
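A standalone sketch of the heuristic as described above (the real QEMU
code differs in detail and naming; assumes remaining_region_size >=
device_size > 0):

    #include <stdint.h>

    enum {
        MEMSLOT_SOFT_LIMIT  = 256, /* soft limit across all memory devices */
        MEMSLOTS_SAFE_TOTAL = 509, /* historic magic number, see above */
    };

    static unsigned int suggest_memslots(unsigned int total_memslots,
                                         unsigned int free_memslots,
                                         uint64_t device_size,
                                         uint64_t remaining_region_size)
    {
        if (total_memslots < MEMSLOTS_SAFE_TOTAL) {
            return 1;  /* fall back to a single memslot */
        }
        unsigned int limit = MEMSLOT_SOFT_LIMIT;
        if (limit > free_memslots) {
            limit = free_memslots;
        }
        /* Grant a share proportional to how much of the remaining device
         * memory region this device occupies. */
        uint64_t share = ((uint64_t)limit * device_size) /
                         remaining_region_size;
        return share ? (unsigned int)share : 1;
    }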
Message-ID: <20230926185738.277351-13-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's add vhost_get_max_memslots().
Message-ID: <20230926185738.277351-12-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
We'll need the stub soon from memory device context.
While at it, use "unsigned int" as return value and place the
declaration next to kvm_get_free_memslots().
Message-ID: <20230926185738.277351-11-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
We want to support memory devices that have a dynamically managed memory
region container as device memory region. This device memory region maps
multiple RAM memory subregions (e.g., aliases to the same RAM memory
region), whereby these subregions can be (un)mapped on demand.
Each RAM subregion will consume a memslot in KVM and vhost, resulting in
such a new device consuming memslots dynamically, and initially usually
0. We already track the number of used vs. required memslots for all
memory devices. From that, we can derive the number of reserved memslots that
must not be used otherwise.
The target use case is virtio-mem and the hyper-v balloon, which will
dynamically map aliases to RAM memory region into their device memory
region container.
Properly document what's supported and what's not and extend the vhost
memslot check accordingly.
Message-ID: <20230926185738.277351-10-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's track how many memslots are required by plugged memory devices and
how many are currently actually getting used by plugged memory
devices.
"required - used" is the number of reserved memslots. For now, the number
of used and required memslots is always equal, and there are no
reservations. This is a preparation for memory devices that want to
dynamically consume memslots after initially specifying how many they
require -- where we'll end up with reserved memslots.
To track the number of used memslots, create a new address space for
our device memory and register a memory listener (add/remove) for that
address space.
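A hedged sketch of the bookkeeping this describes (the struct name is from
the commit subject; field and helper names are illustrative):

    typedef struct DeviceMemoryState {
        /* ... */
        unsigned int required_memslots; /* promised when the device was
                                         * plugged */
        unsigned int used_memslots;     /* counted by the new memory
                                         * listener */
    } DeviceMemoryState;

    /* Reserved memslots must stay available for future use by the
     * already-plugged memory devices. */
    static unsigned int reserved_memslots(const DeviceMemoryState *dms)
    {
        return dms->required_memslots - dms->used_memslots;
    }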
Message-ID: <20230926185738.277351-9-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>