2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('osdep.c', 'cutils.c', 'unicode.c', 'qemu-timer-common.c'))
|
util: Add write-only "node-affinity" property for ThreadContext
Let's make it easier to pin threads created via a ThreadContext to
all host CPUs currently belonging to a given set of host NUMA nodes --
which is the common case.
"node-affinity" is simply a shortcut for setting "cpu-affinity" manually
to the list of host CPUs belonging to the set of host nodes. This property
can only be written.
A simple QEMU example to set the CPU affinity to host node 1 on a system
with two nodes, 24 CPUs each, whereby odd-numbered host CPUs belong to
host node 1:
qemu-system-x86_64 -S \
-object thread-context,id=tc1,node-affinity=1
And we can query the cpu-affinity via HMP/QMP:
(qemu) qom-get tc1 cpu-affinity
[
1,
3,
5,
7,
9,
11,
13,
15,
17,
19,
21,
23,
25,
27,
29,
31,
33,
35,
37,
39,
41,
43,
45,
47
]
We cannot query the node-affinity:
(qemu) qom-get tc1 node-affinity
Error: Insufficient permission to perform this operation
But note that due to dynamic library loading this example will not work
before we actually make use of thread_context_create_thread() in QEMU
code, because the type will otherwise not get registered. We'll wire
this up next to make it work.
Note that if the host CPUs for a host node change due to CPU hot(un)plug or
CPU onlining/offlining (i.e., the lscpu output changes) after the ThreadContext
was started, the CPU affinity will not get updated.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221014134720.168738-5-david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2022-10-14 15:47:17 +02:00
|
|
|
util_ss.add(files('thread-context.c'), numa)
|
2021-10-07 15:08:25 +02:00
|
|
|
if not config_host_data.get('CONFIG_ATOMIC64')
|
|
|
|
util_ss.add(files('atomic64.c'))
|
|
|
|
endif
|
os-posix: asynchronous teardown for shutdown on Linux
This patch adds support for asynchronously tearing down a VM on Linux.
When qemu terminates, either naturally or because of a fatal signal,
the VM is torn down. If the VM is huge, it can take a considerable
amount of time for it to be cleaned up. In case of a protected VM, it
might take even longer than a non-protected VM (this is the case on
s390x, for example).
Some users might want to shut down a VM and restart it immediately,
without having to wait. This is especially true if management
infrastructure like libvirt is used.
This patch implements a simple trick on Linux to allow qemu to return
immediately, with the teardown of the VM being performed
asynchronously.
If the new command-line option -async-teardown is used, a new process is
spawned from qemu at startup, using the clone syscall, in such a way that
it will share its address space with qemu. The new process will have the
name "cleanup/<QEMU_PID>". It will wait until qemu terminates
completely, and then it will exit itself.
This allows qemu to terminate quickly, without having to wait for the
whole address space to be torn down. The cleanup process will exit
after qemu, so it will be the last user of the address space, and
therefore it will take care of the actual teardown. The cleanup
process will share the same cgroups as qemu, so both memory usage and
cpu time will be accounted properly.
If possible, close_range will be used in the cleanup process to close
all open file descriptors. If it is not available or if it fails, /proc
will be used to determine which file descriptors to close.
If the cleanup process is forcefully killed with SIGKILL before the
main qemu process has terminated completely, the mechanism is defeated
and the teardown will not be asynchronous.
This feature can already be used with libvirt by adding the following
to the XML domain definition to pass the parameter to qemu directly:
<commandline xmlns="http://libvirt.org/schemas/domain/qemu/1.0">
<arg value='-async-teardown'/>
</commandline>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Tested-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Message-Id: <20220812133453.82671-1-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-08-12 15:34:53 +02:00
|
|
|
util_ss.add(when: 'CONFIG_LINUX', if_true: files('async-teardown.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('aio-posix.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('fdmon-poll.c'))
|
2021-06-03 12:10:05 +02:00
|
|
|
if config_host_data.get('CONFIG_EPOLL_CREATE1')
|
|
|
|
util_ss.add(files('fdmon-epoll.c'))
|
|
|
|
endif
|
meson: fix missing preprocessor symbols
While most libraries do not need a CONFIG_* symbol because the
"when:" clauses are enough, some do. Add them back or stop
using them if possible.
In the case of libpmem, the statement to add the CONFIG_* symbol
was still in configure, but could not be triggered because it
checked for "no" instead of "disabled" (and it would be wrong anyway
since the test for the library has not been done yet).
Reported-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Fixes: 587d59d6cc ("configure, meson: convert virgl detection to meson", 2021-07-06)
Fixes: 83ef16821a ("configure, meson: convert libdaxctl detection to meson", 2021-07-06)
Fixes: e36e8c70f6 ("configure, meson: convert libpmem detection to meson", 2021-07-06)
Fixes: 53c22b68e3 ("configure, meson: convert liburing detection to meson", 2021-07-06)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-08 13:50:06 +02:00
|
|
|
util_ss.add(when: linux_io_uring, if_true: files('fdmon-io_uring.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('compatfd.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('event_notifier-posix.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('mmap-alloc.c'))
|
2022-03-23 16:57:13 +01:00
|
|
|
freebsd_dep = []
|
|
|
|
if targetos == 'freebsd'
|
|
|
|
freebsd_dep = util
|
|
|
|
endif
|
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: [files('oslib-posix.c'), freebsd_dep])
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('qemu-thread-posix.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('memfd.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: files('aio-win32.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: files('event_notifier-win32.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: files('oslib-win32.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: files('qemu-thread-win32.c'))
|
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: winmm)
|
2022-06-24 16:50:37 +02:00
|
|
|
util_ss.add(when: 'CONFIG_WIN32', if_true: pathcch)
|
2022-10-12 11:31:32 +02:00
|
|
|
if glib_has_gslice
|
|
|
|
util_ss.add(files('qtree.c'))
|
|
|
|
endif
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('envlist.c', 'path.c', 'module.c'))
|
|
|
|
util_ss.add(files('host-utils.c'))
|
|
|
|
util_ss.add(files('bitmap.c', 'bitops.c'))
|
|
|
|
util_ss.add(files('fifo8.c'))
|
2022-06-21 03:48:35 +02:00
|
|
|
util_ss.add(files('cacheflush.c'))
|
2022-03-23 16:57:25 +01:00
|
|
|
util_ss.add(files('error.c', 'error-report.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('qemu-print.c'))
|
|
|
|
util_ss.add(files('id.c'))
|
|
|
|
util_ss.add(files('qemu-config.c', 'notify.c'))
|
|
|
|
util_ss.add(files('qemu-option.c', 'qemu-progress.c'))
|
|
|
|
util_ss.add(files('keyval.c'))
|
|
|
|
util_ss.add(files('crc32c.c'))
|
|
|
|
util_ss.add(files('uuid.c'))
|
|
|
|
util_ss.add(files('getauxval.c'))
|
|
|
|
util_ss.add(files('rcu.c'))
|
2021-11-08 13:52:11 +01:00
|
|
|
if have_membarrier
|
|
|
|
util_ss.add(files('sys_membarrier.c'))
|
|
|
|
endif
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('log.c'))
|
|
|
|
util_ss.add(files('qdist.c'))
|
|
|
|
util_ss.add(files('qht.c'))
|
|
|
|
util_ss.add(files('qsp.c'))
|
|
|
|
util_ss.add(files('range.c'))
|
|
|
|
util_ss.add(files('stats64.c'))
|
|
|
|
util_ss.add(files('systemd.c'))
|
2021-04-28 17:17:36 +02:00
|
|
|
util_ss.add(files('transactions.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(when: 'CONFIG_POSIX', if_true: files('drm.c'))
|
|
|
|
util_ss.add(files('guest-random.c'))
|
2021-03-23 18:52:46 +01:00
|
|
|
util_ss.add(files('yank.c'))
|
2022-01-06 22:00:53 +01:00
|
|
|
util_ss.add(files('int128.c'))
|
2022-02-26 19:07:17 +01:00
|
|
|
util_ss.add(files('memalign.c'))
|
2022-09-17 14:05:54 +02:00
|
|
|
util_ss.add(files('interval-tree.c'))
|
2022-11-11 16:47:56 +01:00
|
|
|
util_ss.add(files('lockcnt.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
|
|
|
|
if have_user
|
|
|
|
util_ss.add(files('selfmap.c'))
|
|
|
|
endif
|
|
|
|
|
|
|
|
if have_system
|
2021-01-23 11:39:57 +01:00
|
|
|
util_ss.add(files('crc-ccitt.c'))
|
2022-04-20 17:33:44 +02:00
|
|
|
util_ss.add(when: gio, if_true: files('dbus.c'))
|
2021-01-29 11:14:04 +01:00
|
|
|
util_ss.add(when: 'CONFIG_LINUX', if_true: files('userfaultfd.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
endif
|
|
|
|
|
2022-11-10 09:36:26 +01:00
|
|
|
if have_block or have_ga
|
|
|
|
util_ss.add(files('aiocb.c', 'async.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('base64.c'))
|
2022-11-10 09:36:26 +01:00
|
|
|
util_ss.add(files('main-loop.c'))
|
|
|
|
util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
|
2022-10-12 13:19:35 +02:00
|
|
|
util_ss.add(files(f'coroutine-@coroutine_backend@.c'))
|
2022-11-10 09:36:26 +01:00
|
|
|
util_ss.add(files('thread-pool.c', 'qemu-timer.c'))
|
|
|
|
util_ss.add(files('qemu-sockets.c'))
|
|
|
|
endif
|
|
|
|
if have_block
|
|
|
|
util_ss.add(files('aio-wait.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('buffer.c'))
|
|
|
|
util_ss.add(files('bufferiszero.c'))
|
|
|
|
util_ss.add(files('hbitmap.c'))
|
|
|
|
util_ss.add(files('hexdump.c'))
|
|
|
|
util_ss.add(files('iova-tree.c'))
|
2022-11-10 09:36:26 +01:00
|
|
|
util_ss.add(files('iov.c', 'uri.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('nvdimm-utils.c'))
|
2020-10-27 18:35:18 +01:00
|
|
|
util_ss.add(when: 'CONFIG_LINUX', if_true: [
|
2020-09-24 17:15:49 +02:00
|
|
|
files('vhost-user-server.c'), vhost_user
|
|
|
|
])
|
2020-09-18 10:09:09 +02:00
|
|
|
util_ss.add(files('block-helpers.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('qemu-coroutine-sleep.c'))
|
|
|
|
util_ss.add(files('qemu-co-shared-resource.c'))
|
2022-04-07 15:27:23 +02:00
|
|
|
util_ss.add(files('qemu-co-timeout.c'))
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(files('readline.c'))
|
|
|
|
util_ss.add(files('throttle.c'))
|
|
|
|
util_ss.add(files('timed-average.c'))
|
2022-01-07 14:35:14 +01:00
|
|
|
if config_host_data.get('CONFIG_INOTIFY1')
|
|
|
|
util_ss.add(files('filemonitor-inotify.c'))
|
|
|
|
else
|
|
|
|
util_ss.add(files('filemonitor-stub.c'))
|
|
|
|
endif
|
2020-08-19 14:44:56 +02:00
|
|
|
util_ss.add(when: 'CONFIG_LINUX', if_true: files('vfio-helpers.c'))
|
|
|
|
endif
|
2023-05-18 06:12:08 +02:00
|
|
|
|
2023-05-18 05:50:45 +02:00
|
|
|
if cpu == 'aarch64'
|
|
|
|
util_ss.add(files('cpuinfo-aarch64.c'))
|
|
|
|
elif cpu in ['x86', 'x86_64']
|
2023-05-18 06:12:08 +02:00
|
|
|
util_ss.add(files('cpuinfo-i386.c'))
|
|
|
|
endif
|