In preference to vendoring meson source, vendor a built distributable
("bdist" in python parlance). This has some benefits:
(1) We can get rid of a git submodule,
(2) Installing a built meson into a venv doesn't require any extra
dependencies (chiefly, the python "wheel" package).
(3) We don't treat meson any differently than we would any other python
package (we install it, end of story, done.)
(4) All future tarball *and* developer checkouts will function offline;
no git or PyPI connection is needed to fetch meson.
Note that because mkvenv prefers vendored packages to PyPI, as it is
currently written we will never consult PyPI for meson. (Do keep in mind
that your distribution's meson will be preferred above the vendored
version, though.)
```
jsnow@scv ~/s/q/python (python-configure-venv)> python3 scripts/vendor.py
pip download --dest /home/jsnow/src/qemu/python/wheels --require-hashes -r /tmp/tmpvo5qav7i
Collecting meson==0.61.5
Using cached meson-0.61.5-py3-none-any.whl (862 kB)
Saved ./wheels/meson-0.61.5-py3-none-any.whl
Successfully downloaded meson
```
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-17-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch changes the configure script so that it unconditionally
creates and uses a python virtual environment.
Meson bootstrapping is temporarily altered to force the use of meson
from git or vendored source (as packaged in our source tarballs). A
subsequent commit restores the use of distribution-vendored Meson.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a teeny-tiny script that just downloads any packages we want to
vendor from PyPI and stores them in qemu.git/python/wheels/. If I'm hit
by a meteor, it'll be easy to replicate what I have done in order to
update the vendored source.
We don't really care which python runs it; it exists as a meta-utility
with no external dependencies and we won't package or install it. It
will be monitored by the linters/type checkers, though; so it's
guaranteed safe on python 3.6+.
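For reference, a minimal sketch of what such a downloader might look like (the
package pin and destination path here are illustrative, not necessarily what
vendor.py actually does):
```
#!/usr/bin/env python3
"""Download the wheels we vendor into qemu.git/python/wheels/."""

import subprocess
import sys
from pathlib import Path

# Illustrative pin; the real script carries its own list (with hashes).
PACKAGES = ["meson==0.61.5"]


def main() -> int:
    dest = Path(__file__).parent / "wheels"
    dest.mkdir(exist_ok=True)
    # "pip download" with --only-binary fetches built wheels (bdists),
    # never source distributions.
    cmd = [sys.executable, "-m", "pip", "download",
           "--only-binary", ":all:",
           "--dest", str(dest),
           *PACKAGES]
    return subprocess.run(cmd, check=False).returncode


if __name__ == "__main__":
    sys.exit(main())
```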
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-15-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
NetBSD cannot successfully run "ensurepip" without access to the pyexpat
module, which NetBSD debundles. As with the Debian patch, it would be
strictly faster in the long term to install pip/setuptools, and I recommend
developers at their workstations take that approach instead.
For the purposes of a throwaway VM, there isn't really a speed
difference based on who is responsible for installing pip: us (needs
py310-pip) or Python itself (needs py310-expat).
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230511035435.734312-14-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
NetBSD removes some packages from the Python stdlib, but only
re-packages them for Python 3.10. Switch to using Python 3.10.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230511035435.734312-13-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Several debian-based tests need the python3-venv dependency as a
consequence of Debian debundling the "ensurepip" module normally
included with Python.
As mkvenv.py stands as of this commit, Debian requires EITHER:
(A) setuptools and pip, or
(B) ensurepip
mkvenv is a few seconds faster if you have setuptools and pip, so
developers should prefer the first option. For the purposes of CI, the
time saved is a wash; it's only a matter of who is responsible for
installing pip, and when; the timing is about the same.
I arbitrarily chose to add ensurepip to the test configuration because
it is normally part of the Python stdlib, and always having it gives us
a more consistent cross-platform environment.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230511035435.734312-12-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a workaround intended for Debian 10, where the debian-patched
pip does not function correctly if accessed from within a virtual
environment.
We don't support Debian 10 as a build platform any longer, though we do
still utilize it for our build-tricore-softmmu CI test. It's also
possible that this bug might appear on other derivative platforms and
this workaround may prove useful.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-11-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
distlib is usually not installed on Linux distributions, but it is vendored
into pip. Because the virtual environment has pip via ensurepip, we
can piggy-back on pip's vendored version. This could break if they move
our cheese in the future, but the fix would be simply to require distlib.
If it is debundled, as it is on msys, it is simply available directly.
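In code, that piggy-backing amounts to an import fallback along these lines (a
sketch, assuming only that pip is importable; the real lookup may differ):
```
import sys

try:
    # If distlib is debundled (e.g. on msys), use it directly.
    import distlib.scripts
except ImportError:
    # Otherwise borrow the copy that pip vendors for its own use, and
    # alias it so later "import distlib.scripts" statements find it too.
    from pip._vendor import distlib
    import pip._vendor.distlib.scripts
    sys.modules["distlib"] = distlib
    sys.modules["distlib.scripts"] = pip._vendor.distlib.scripts
```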
Signed-off-by: John Snow <jsnow@redhat.com>
[Move to toplevel. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When creating a virtual environment that inherits system packages,
script entry points (like "meson", "sphinx-build", etc) are not
re-generated with the correct shebang. When you are *inside* of the
venv, this is not a problem, but if you are *outside* of it, you will
not have a script that engages the virtual environment appropriately.
Add a mechanism that generates new entry points for pre-existing
packages so that we can use these scripts to run "meson",
"sphinx-build", "pip", unambiguously inside the venv.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-9-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a routine designed to print some usable info for human
beings back to the terminal if/when "mkvenv ensure" fails to locate
or install a package at configure time, such as meson or sphinx.
Since we are requiring that "meson" and "sphinx" are installed to the
same Python environment as QEMU is configured to build with, this can
produce some surprising failures when things are mismatched. This method
is here to try and ease that sting by offering some actionable
diagnosis.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-8-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This command is to be used to add various packages (or ensure they're
already present) into the configure-provided venv in a modular fashion.
Examples:
mkvenv ensure --online --dir "${source_dir}/python/wheels/" "meson>=0.61.5"
mkvenv ensure --online "sphinx>=1.6.0"
mkvenv ensure "qemu.qmp==0.0.2"
It's designed to look for packages in three places, in order:
(1) In system packages, if the version installed is already good
enough. This way your distribution-provided meson, sphinx, etc are
always used as first preference.
(2) In a vendored packages directory. Here I am suggesting
qemu.git/python/wheels/ as that directory. This is intended to serve as
a replacement for vendoring the meson source for QEMU tarballs. It is
also highly likely to be extremely useful for packaging the "qemu.qmp"
package in source distributions for platforms that do not yet package
qemu.qmp separately.
(3) Online, via PyPI, ***only when "--online" is passed***. This is only
ever used as a fallback if the first two sources do not have an
appropriate package that meets the requirement. The ability to build
QEMU and run tests *completely offline* is not compromised; a rough sketch
of this lookup order follows below.
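A rough sketch of that lookup order (the helper names and the much-simplified
version check are illustrative only, not mkvenv's actual code):
```
import subprocess
import sys
from importlib.metadata import PackageNotFoundError, version
from typing import Optional


def _pip_install(*args: str) -> bool:
    cmd = [sys.executable, "-m", "pip", "install",
           "--disable-pip-version-check", *args]
    return subprocess.run(cmd, check=False).returncode == 0


def ensure(spec: str, wheels_dir: Optional[str] = None,
           online: bool = False) -> None:
    # Crude name extraction; the real code honours the full version constraint.
    name = spec.partition(">=")[0].partition("==")[0]
    try:
        version(name)
        return            # (1) distro/system package is already good enough
    except PackageNotFoundError:
        pass
    if wheels_dir and _pip_install("--no-index", "--find-links",
                                   wheels_dir, spec):
        return            # (2) vendored wheel, fully offline
    if online and _pip_install(spec):
        return            # (3) PyPI, only when --online was passed
    raise SystemExit(f"could not satisfy '{spec}'")
```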
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-7-jsnow@redhat.com>
[Use distlib to lookup distributions. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Python virtual environments do not typically nest; they may inherit from
the top-level system packages or not at all.
For our purposes, it would be convenient to emulate "nested" virtual
environments to allow callers of the configure script to install
specific versions of python utilities in order to test build system
features, utility version compatibility, etc.
While it is possible to install packages into the system environment
(say, by using the --user flag), it's nicer to install test packages
into a totally isolated environment instead.
As detailed in https://www.qemu.org/2023/03/24/python/, emulate a nested
venv environment by using .pth files installed into the site-packages
folder that point to the parent environment when appropriate.
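The trick boils down to something like this (a POSIX-only sketch; the .pth
file name is arbitrary and the real mkvenv handles more corner cases):
```
import sysconfig
import venv
from pathlib import Path


def create_nested_venv(target: str) -> None:
    """Create a venv that can also see the current environment's packages."""
    venv.EnvBuilder(with_pip=False).create(target)
    # site-packages of the environment we are running in (the "parent").
    parent_site = sysconfig.get_paths()["purelib"]
    # site-packages of the freshly created venv (POSIX layout assumed).
    child_site = next(Path(target).glob("lib/python3.*/site-packages"))
    # Every directory listed in a .pth file is appended to sys.path at
    # interpreter startup, so the child venv "inherits" the parent's
    # packages without being a true nested virtual environment.
    (child_site / "parent-packages.pth").write_text(parent_site + "\n")
```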
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-6-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Debian debundles ensurepip for python; NetBSD debundles pyexpat but
ensurepip needs pyexpat. Try our best to offer a helpful error message
instead of just failing catastrophically.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-5-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This script will be responsible for building a lightweight Python
virtual environment at configure time. It works with Python 3.6 or
newer.
It has been designed to:
- work *offline*, no PyPI required.
- work *quickly*; the fast path is only ~65ms on my machine.
- work *robustly*, with multiple fallbacks to keep things working.
- work *cooperatively*, using system packages where possible.
(You can use your distro's meson, no problem.)
Due to its unique position in the build chain, it exists outside of the
installable python packages in-tree and *must* be runnable without any
third party dependencies.
Under normal circumstances, the only dependency required to execute this
script is Python 3.6+ itself. The script is *faster* by several seconds
when setuptools and pip are installed in the host environment, which is
probably the case for a typical multi-purpose developer workstation.
In the event that pip/setuptools are missing or not usable, additional
dependencies may be required on some distributions which remove certain
Python stdlib modules to package them separately:
- Debian may require python3-venv to provide "ensurepip"
- NetBSD may require py310-expat to provide "pyexpat" *
(* Or whichever version is current for NetBSD.)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-4-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Pylint 2.17.x decided that SocketAddrT was a bad name for a Type Alias for some
reason. Sure, fine, whatever.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230511035435.734312-3-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
"make check-minreqs" runs pip without the --disable-pip-version-check
option, which causes the obnoxious "A new release of pip available"
message.
Recent versions of pip also complain that some of the dependencies in
our virtual environment rely on "setup.py install" instead of providing
a pyproject.toml file; apparently it is deprecated to install them
directly from pip instead of letting the "wheel" package take care
of them. So, install "wheel" in the virtual environment.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-2-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Run 'make distclean' in a tree, and GNUmakefile is removed.
But, GNUmakefile is where we change directory to build.
Run 'make distclean' or 'make clean' again, and Makefile applies the
clean actions at the top level of the tree; for example, it removes the
.d source files in 'meson/test cases/d/*/*.d' via:
find . \( -name '*.so' -o -name '*.dll' -o \
-name '*.[oda]' -o -name '*.gcno' \) -type f \
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-aarch64.a \
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
-exec rm {} +
To fix, remove clean and distclean from UNCHECKED_GOALS, so those targets
are "checked", meaning that configure must be run before make. However,
the check action does not trigger, because clean does not depend on
config-host.mak, so change the action to simply throw an error.
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Message-Id: <1681909700-94095-1-git-send-email-steven.sistare@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using a linux 6.x guest, at boot time an inquiry on a scsi-generic
device makes qemu crash. This is caused by a buffer overflow when
scsi-generic patches the block limits VPD page.
Do the operations on a temporary on-stack buffer that is guaranteed
to be large enough.
Reported-by: Théo Maillart <tmaillart@freebox.fr>
Analyzed-by: Théo Maillart <tmaillart@freebox.fr>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arm64 has a different capability from x86 to enable the dirty ring, which
is KVM_CAP_DIRTY_LOG_RING_ACQ_REL. Besides, arm64 also needs the backup
bitmap extension (KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) when the 'kvm-arm-gicv3'
or 'arm-its-kvm' device is enabled. Here the extension is always enabled,
which introduces unnecessary overhead in the last stage of dirty log
synchronization when those two devices aren't used, but the overhead should
be very small and acceptable. The benefit is that it covers future cases where
those two devices are used without modifying the code.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20230509022122.20888-5-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are multiple capabilities associated with the dirty ring on different
architectures: KVM_CAP_DIRTY_{LOG_RING, LOG_RING_ACQ_REL} for x86 and
arm64 respectively, and there will be more to be done in order to support
the dirty ring for arm64.
Let's add a helper, kvm_dirty_ring_init(), to enable the dirty ring. With
this, the code looks a bit cleaner.
No functional change intended.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-4-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In the last stage of live migration or memory slot removal, the
backup bitmap needs to be synchronized when it has been enabled.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-3-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The global dirty log synchronization is used when KVM and dirty ring
are enabled. There is a particularity for ARM64 where the backup
bitmap is used to track dirty pages in non-running-vcpu situations.
It means the dirty ring works with the combination of ring buffer
and backup bitmap. The dirty bits in the backup bitmap need to be
collected in the last stage of live migration.
In order to identify the last stage of live migration and pass it
down, an extra parameter is added to the relevant functions and
callbacks. This last stage indicator isn't used until the dirty
ring is enabled in the subsequent patches.
No functional change intended.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-Id: <20230509022122.20888-2-gshan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Save a bit of build time by passing the number of jobs option to
sphinx.
We cannot use the -j option from make because meson does not support
setting build-time parameters for custom targets. Use nproc instead, or
the equivalent sphinx option "-j auto" if that is available (version
>= 1.7.0).
Also make sure our plugins support parallelism and report it properly
to sphinx. In particular, implement the merge_domaindata method in
DBusDomain, which is used to merge in data from other subprocesses.
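For reference, the Sphinx hook in question has roughly this shape; after a
parallel read, each worker's domain data is merged back in the main process
(a simplified sketch, not the actual DBusDomain implementation):
```
from typing import Any, Dict, List

from sphinx.domains import Domain


class DBusDomain(Domain):
    name = "dbus"
    label = "D-Bus"
    # Each worker subprocess fills in its own copy of this data.
    initial_data: Dict[str, Any] = {"objects": {}}  # fullname -> (docname, node_id)

    def merge_domaindata(self, docnames: List[str],
                         otherdata: Dict[str, Any]) -> None:
        # Called in the main process once per worker; keep only the entries
        # belonging to documents that worker actually read.
        for fullname, (docname, node_id) in otherdata["objects"].items():
            if docname in docnames:
                self.data["objects"][fullname] = (docname, node_id)
```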
Tested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20230503203947.3417-2-farosas@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Before this change, MOVNTPS and MOVNTPD were labeled as Exception Class
4 (only requiring alignment for legacy SSE instructions). This changes
them to Exception Class 1 (always requiring memory alignment), as
documented in the Intel manual.
Message-Id: <20230501111428.95998-3-ricky@rzhou.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix the exception classes for some SSE/AVX instructions to match what is
documented in the Intel manual.
These changes are expected to have no functional effect on the behavior
that qemu implements (primarily >= 16-byte memory alignment checks). For
instance, since qemu does not implement the AC flag, there is no
difference in behavior between Exception Classes 4 and 5 for
instructions where the SSE version only takes <16 byte memory operands.
Message-Id: <20230501111428.95998-2-ricky@rzhou.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add some comments describing which instructions correspond to decoding
table entries, and fix some existing comments that named the wrong
instruction.
Message-Id: <20230501111428.95998-1-ricky@rzhou.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vzeroall: xmm_regs should be used instead of xmm_t0
vpermdq: bit 3 and 7 of imm should be considered
Signed-off-by: Xinyu Li <lixinyu20s@ict.ac.cn>
Message-Id: <20230510145222.586487-1-lixinyu20s@ict.ac.cn>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Compared to other SSE instructions, VUCOMISx and VCOMISx are different:
the single and double precision versions are distinguished through a
prefix; however, they use no prefix and 0x66 for SS and SD respectively,
whereas scalar values are usually associated with 0xF2 and 0xF3.
Because of this, they incorrectly perform a 128-bit memory load instead
of a 32- or 64-bit load. Fix this by writing a custom decoding function.
I tested that the reproducer is fixed and the test-avx output does not
change.
Reported-by: Gabriele Svelto <gsvelto@mozilla.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1637
Fixes: f8d19eec0d ("target/i386: reimplement 0x0f 0x28-0x2f, add AVX", 2022-10-18)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As reported by Intel's doc:
"FB_CLEAR: The processor will overwrite fill buffer values as part of
MD_CLEAR operations with the VERW instruction.
On these processors, L1D_FLUSH does not overwrite fill buffer values."
If this cpu feature is present in host, allow QEMU to choose whether to
show it to the guest too.
One disadvantage of not exposing it is that the guest will report
a non-existent vulnerability in
/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
because the mitigation is present only when the cpu has
(FLUSH_L1D and MD_CLEAR) or FB_CLEAR
features enabled.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201135759.555607-3-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As reported by Intel's doc:
"L1D_FLUSH: Writeback and invalidate the L1 data cache"
If this cpu feature is present in host, allow QEMU to choose whether to
show it to the guest too.
One disadvantage of not exposing it is that the guest will report
a non-existent vulnerability in
/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
because the mitigation is present only when the cpu has
(FLUSH_L1D and MD_CLEAR) or FB_CLEAR
features enabled.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201135759.555607-2-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Two type hints fail centos-stream-8-x86_64 CI. They are actually
broken. Changing them to Optional[re.Match[str]] fixes them locally
for me, but then CI fails differently. Drop them for now.
Fixes: 3e32dca3f0 (qapi: Rewrite parsing of doc comment section symbols and tags)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20230517061600.1782455-1-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
linux-user getgroups(), setgroups(), getgroups32() and setgroups32()
used alloca() to allocate grouplist arrays, with unchecked gidsetsize
coming from the "guest". With NGROUPS_MAX being 65536 (linux, and it
is common for an application to allocate NGROUPS_MAX for getgroups()),
this means a typical allocation is half a megabyte on the stack, which
just overflows the stack and leads to an immediate SIGSEGV in the actual
system getgroups() implementation.
An example of such an issue is aptitude, e.g.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=811087#72
Cap gidsetsize to NGROUPS_MAX (return EINVAL if it is larger than that),
and use heap allocation for grouplist instead of alloca(). While at it,
fix coding style and make all 4 implementations identical.
Try not to impose random limits - for example, allow gidsetsize to be
negative for getgroups() - just do not allocate a negative-sized grouplist
in this case, but still do the actual getgroups() call. But do not allow
a negative gidsetsize for setgroups(), since its argument is unsigned.
Capping at NGROUPS_MAX seems a bit arbitrary - we could allow more, since
it is not an error if the set size is NGROUPS_MAX+1. But we should not allow
integer overflow for the array being allocated. Maybe it is enough to
just call g_try_new() and return ENOMEM if it fails.
Maybe there's also no need to convert setgroups(), since this one is
usually smaller and known beforehand (KERN_NGROUPS_MAX is actually 63 -
this is apparently a kernel-imposed limit for the runtime group set).
The patch fixes aptitude segfault mentioned above.
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Message-Id: <20230409105327.1273372-1-mjt@msgid.tls.msk.ru>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
If a program requires fr1, we should set the FR bit of CP0 control status
register and add the F64 hardware flag. The corresponding `else if` branch
statement is copied from the linux kernel sources (see the `arch_check_elf`
function in linux/arch/mips/kernel/elf.c).
Signed-off-by: Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20230404052153.16617-1-dkovalev@compiler-toolchain-for.me>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The kernel does not require PROT_READ for addresses passed to mincore.
For example the fincore(1) tool from util-linux uses PROT_NONE and
currently does not work under qemu-user.
Example (with fincore(1) from util-linux 2.38):
$ fincore /proc/self/exe
RES PAGES SIZE FILE
24K 6 22.1K /proc/self/exe
$ qemu-x86_64 /usr/bin/fincore /proc/self/exe
fincore: failed to do mincore: /proc/self/exe: Cannot allocate memory
With this patch:
$ ./build/qemu-x86_64 /usr/bin/fincore /proc/self/exe
RES PAGES SIZE FILE
24K 6 22.1K /proc/self/exe
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20230422100314.1650-3-thomas@t-8ch.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This can be used to validate that an address range is mapped but without
being readable or writable.
It will be used by an updated implementation of mincore().
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20230422100314.1650-2-thomas@t-8ch.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This way we can get rid of the if'deffery and the XXX comment
here (it's repeated in the list_cpus() function anyway).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230424122126.236586-1-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20230424153429.276788-2-thomas@t-8ch.de>
[lv: move declaration at the beginning of the block,
define syscall]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The correct error number for unknown ioctls is ENOTTY.
ENOSYS would mean that the ioctl() syscall itself is not implemented,
which is very improbable and unexpected for userspace.
ENOTTY means "Inappropriate ioctl for device". This is what the kernel
returns on unknown ioctls, what qemu is trying to express and what
userspace is prepared to handle.
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230426070659.80649-1-thomas@t-8ch.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
RISC-V does not expose all extensions via hwcaps, thus some userspace
applications may want to query these via /proc/cpuinfo.
Currently, when querying this file, the host's file is shown instead,
which is slightly confusing. Emulate a basic /proc/cpuinfo file
with mmu info and an ISA string.
Signed-off-by: Afonso Bordado <afonsobordado@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <167873059442.9885.15152085316575248452-0@git.sr.ht>
[lv: removed the test that fails in CI for unknown reason]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
TCG will need this declaration, without all of the other
bits that come with cpu-all.h.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Disconnect guest tlb parameters from TCG compilation.
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Disconnect guest page size from TCG compilation.
While this could be done via exec/target_page.h, we want to cache
the value across multiple memory access operations, so we might
as well initialize this early.
The changes within tcg/ are entirely mechanical:
sed -i s/TARGET_PAGE_BITS/s->page_bits/g
sed -i s/TARGET_PAGE_MASK/s->page_mask/g
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
All uses replaced with TCGContext.addr_type.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>