Correct typos automatically found with the `typos` tool
<https://crates.io/crates/typos>
Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
AF_XDP is a network socket family that allows communication directly
with the network device driver in the kernel, bypassing most or all
of the kernel networking stack. In essence, the technology is
similar to netmap. But, unlike netmap, AF_XDP is Linux-native
and works with any network interface without driver modifications.
Unlike vhost-based backends (kernel, user, vdpa), AF_XDP doesn't
require access to character devices or unix sockets. Only access to
the network interface itself is necessary.
This patch implements a network backend that communicates with the
kernel by creating an AF_XDP socket. A chunk of userspace memory
is shared between QEMU and the host kernel. 4 ring buffers (Tx, Rx,
Fill and Completion) are placed in that memory along with a pool of
memory buffers for the packet data. Data transmission is done by
allocating one of the buffers, copying packet data into it and
placing the pointer into the Tx ring. After transmission, the device
returns the buffer via the Completion ring. On Rx, the device takes
a buffer from the pre-populated Fill ring, writes the packet data into
it and places the buffer into the Rx ring.
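A rough sketch of the Tx path described above, assuming the UMEM, socket
and rings were already created with the libbpf/libxdp xsk helpers
(xsk.h); variable names are illustrative, and free_buf() stands in for
returning a buffer to the free pool -- this is not the code in the patch:

__u32 idx;
if (xsk_ring_prod__reserve(&tx_ring, 1, &idx) == 1) {
    struct xdp_desc *desc = xsk_ring_prod__tx_desc(&tx_ring, idx);
    desc->addr = buf_addr;  /* offset of the chosen buffer in the umem */
    desc->len = pkt_len;
    xsk_ring_prod__submit(&tx_ring, 1);
    /* Kick the kernel; in copy mode the actual send happens here. */
    sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
}
/* Later, reclaim the buffer from the Completion ring. */
if (xsk_ring_cons__peek(&comp_ring, 1, &idx) == 1) {
    /* Return the completed buffer to the (hypothetical) free pool. */
    free_buf(*xsk_ring_cons__comp_addr(&comp_ring, idx));
    xsk_ring_cons__release(&comp_ring, 1);
}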
The AF_XDP network backend handles the communication with the host
kernel and the network interface, and forwards packets to/from the
peer device in QEMU.
Usage example:
-device virtio-net-pci,netdev=guest1,mac=00:16:35:AF:AA:5C
-netdev af-xdp,ifname=ens6f1np1,id=guest1,mode=native,queues=1
An XDP program bridges the socket with the network interface. It can be
attached to the interface in 2 different modes:
1. skb - this mode should work for any interface and doesn't require
driver support, with the caveat of lower performance.
2. native - this requires support from the driver and allows bypassing
skb allocation in the kernel and potentially using zero-copy
while getting packets in/out of userspace.
By default, QEMU will try to use native mode and fall back to skb.
The mode can be forced via the 'mode' option. To force copying even in
native mode, use the 'force-copy=on' option. This might be useful if
there is some issue with the driver.
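For example, to force skb mode on the interface from the example above:
-netdev af-xdp,ifname=ens6f1np1,id=guest1,mode=skb,queues=1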
The 'queues=N' option specifies how many device queues should
be opened. Note that all the queues that are not open are still
functional and can receive traffic, but it will not be delivered to
QEMU. So, the number of device queues should generally match the
QEMU configuration, unless the device is shared with something
else and the traffic redirection to the appropriate queues is correctly
configured at the device level (e.g. with ethtool -N).
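For example (interface name, port and queue number are illustrative),
traffic for a particular flow can be redirected to a queue that QEMU
has open with:
ethtool -N ens6f1np1 flow-type tcp4 dst-port 5201 action 0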
The 'start-queue=M' option can be used to specify the queue id from which
QEMU should start configuring the 'N' queues. It might also be necessary
to use this option with certain NICs, e.g. MLX5 NICs. See the docs
for examples.
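For example (queue ids are illustrative), to make QEMU use device
queues 2 and 3:
-netdev af-xdp,ifname=ens6f1np1,id=guest1,queues=2,start-queue=2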
In the general case, QEMU will need the CAP_NET_ADMIN and CAP_SYS_ADMIN
or CAP_BPF capabilities in order to load the default XSK/XDP programs
onto the network interface and configure BPF maps. It is possible,
however, to run with no capabilities. For that to work, an external
process with enough capabilities will need to pre-load the default XSK
program, create the AF_XDP sockets and pass their file descriptors to
the QEMU process on startup via the 'sock-fds' option. The network
backend will then need to be configured with 'inhibit=on' to avoid
loading the program.
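A hypothetical invocation for that case (the file descriptor number is
whatever the privileged helper passed to QEMU on startup):
-netdev af-xdp,ifname=ens6f1np1,id=guest1,queues=1,inhibit=on,sock-fds=15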
QEMU will need 32 MB of locked memory (RLIMIT_MEMLOCK) per queue
or CAP_IPC_LOCK.
There are a few performance challenges with the current network backends.
The first is that they do not support IO threads. This means that the
data path is handled by the main thread in QEMU and may slow down other
work or be slowed down by other work. This also means that
taking advantage of multi-queue is generally not possible today.
Another is that the data path goes through the device emulation
code, which is not really optimized for performance. The fastest
"frontend" device is virtio-net. But it's not optimized for heavy
traffic either, because it expects such use-cases to be handled via
some implementation of vhost (user, kernel, vdpa). In practice, we
have virtio notifications and rcu lock/unlock on a per-packet basis
and not very efficient accesses to the guest memory. Communication
channels between backend and frontend devices also do not allow passing
more than one packet at a time.
Some of these challenges can be avoided in the future by adding better
batching into device emulation or by implementing vhost-af-xdp variant.
There are also a few kernel limitations. AF_XDP sockets do not
support any kind of checksum or segmentation offloading. Buffers
are limited to a page size (4K), i.e. the MTU is limited. Multi-buffer
support implementation for AF_XDP is in progress, but not ready yet.
Also, transmission in all non-zero-copy modes is synchronous, i.e.
done in a syscall. That doesn't allow high packet rates on virtual
interfaces.
However, keeping in mind all of these challenges, the current
implementation of the AF_XDP backend shows decent performance while
running on top of a physical NIC with zero-copy support.
Test setup:
2 VMs running on 2 physical hosts connected via ConnectX6-Dx card.
Network backend is configured to open the NIC directly in native mode.
The driver supports zero-copy. NIC is configured to use 1 queue.
Inside a VM - iperf3 for basic TCP performance testing and dpdk-testpmd
for PPS testing.
iperf3 result:
TCP stream : 19.1 Gbps
dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
Tx only : 3.4 Mpps
Rx only : 2.0 Mpps
L2 FWD Loopback : 1.5 Mpps
In skb mode the same setup shows much lower performance, similar to
a setup where the pair of physical NICs is replaced with a veth pair:
iperf3 result:
TCP stream : 9 Gbps
dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
Tx only : 1.2 Mpps
Rx only : 1.0 Mpps
L2 FWD Loopback : 0.7 Mpps
Results in skb mode or over veth are close to those of a tap
backend with vhost=on and segmentation offloading disabled, bridged
with a NIC.
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> (docker/lcitool)
Signed-off-by: Jason Wang <jasowang@redhat.com>
HAX is deprecated since commits 73741fda6c ("MAINTAINERS: Abort
HAXM maintenance") and 90c167a1da ("docs/about/deprecated: Mark
HAXM in QEMU as deprecated"), released in v8.0.0.
Per the latest HAXM release (v7.8 [*]), the latest QEMU supported
is v7.2:
Note: Up to this release, HAXM supports QEMU from 2.9.0 to 7.2.0.
The next commit (https://github.com/intel/haxm/commit/da1b8ec072)
added:
HAXM v7.8.0 is our last release and we will not accept
pull requests or respond to issues after this.
It became very hard to build and test HAXM. Its previous
maintainers made it clear they won't help. It doesn't seem to be
a very good use of QEMU maintainers' time to spend it on a dead
project. Save our time by removing this orphan zombie code.
[*] https://github.com/intel/haxm/releases/tag/v7.8.0
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230831082016.60885-1-philmd@linaro.org>
This reverts commit e8e4298fea.
ensuregroup allows specifying both the acceptable versions of avocado
and a locked version to be used when avocado is not installed as a system
package. This lets us install avocado in pyvenv/ using "mkvenv.py" and
reuse the distro package on Fedora and CentOS Stream (the only distros
where it's available).
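A hypothetical entry (field names and versions are illustrative) could
look like:
[avocado]
avocado-framework = { accepted = ">=88.1, <=92.0", installed = "92.0", canary = "avocado" }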
ensuregroup's usage of "(>=..., <=...)" constraints when evaluating
the distro package, and "==" constraints when installing it from PyPI,
makes it possible to avoid conflicts between the known-good version and
the packaged plugins included in the distro.
This is because the packaged plugins have "==" constraints on the version
that is included in the distro, and using "pip install avocado==88.1"
in a venv that includes system packages will result in an error:
avocado-framework-plugin-varianter-yaml-to-mux 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
avocado-framework-plugin-result-html 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
But at the same time, if the venv does not include a system distribution
of avocado then we can install a known-good version and stick to LTS
releases.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1663
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reuse --enable/--disable-download to control git submodules as well.
Adjust the error messages of git-submodule.sh to refer to the new
option.
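For example:
./configure --enable-download     # fetch missing submodules and Python packages as needed
./configure --disable-download    # never download; fail if something is missing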
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The scenario for which --with-git= was introduced was to use a SOCKS proxy
such as tsocks. However, this was back in 2017 when QEMU's submodules
used the git:// protocol, and it is not as important when using the
"smart HTTP" backend; for example, neither "meson subprojects download"
nor scripts/checkpatch.pl obey the GIT environment variable.
So remove the knob, but test for the presence of git in the configure and
git-submodule.sh scripts, and suggest using --with-git-submodules=validate
+ a manual invocation of git-submodule.sh when git does not work. Hopefully
in the future the GIT environment variable will be supported by Meson.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commits eea2d14117 ("Makefile: remove $(TESTS_PYTHON)",
2023-05-26) and 9c6692db55 ("tests: Use configure-provided pyvenv for
tests", 2023-05-18).
Right now, there is a conflict between wanting a ">=" constraint when
using a distro-provided package and wanting a "==" constraint when
installing Avocado from PyPI; this would provide the best of both worlds
in terms of resiliency for both distros that have required packages and
distros that don't.
The conflict is visible also for meson, where we would like to install
the latest 0.63.x version but also accept a distro 1.1.x version.
But it is worse for avocado, for two reasons:
1) we cannot use an "==" constraint to install avocado if the venv
includes a system avocado. The distro will package plugins that have
"==" constraints on the version that is included in the distro, and, using
"pip install avocado==88.1" on a venv that includes system packages will
result in this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
avocado-framework-plugin-varianter-yaml-to-mux 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
avocado-framework-plugin-result-html 98.0 requires avocado-framework==98.0, but you have avocado-framework 88.1 which is incompatible.
make[1]: Leaving directory '/home/berrange/src/virt/qemu/build'
2) we cannot use ">=" either if the venv does _not_ include a system
avocado, because that would result in the installation of v101.0 which
is the one we've just reverted.
So the idea is to encode the dependencies as an (acceptable, locked)
tuple, like this hypothetical TOML that would be committed inside
python/ and used by mkvenv.py:
[meson]
meson = { minimum = "0.63.0", install = "0.63.3", canary = "meson" }
[docs]
# 6.0 drops support for Python 3.7
sphinx = { minimum = "1.6", install = "<6.0", canary = "sphinx-build" }
sphinx_rtd_theme = { minimum = "0.5" }
[avocado]
avocado-framework = { minimum = "88.1", install = "88.1", canary = "avocado" }
Once this is implemented, it would also be possible to install avocado in
pyvenv/ using "mkvenv.py ensure", thus using the distro package on Fedora
and CentOS Stream (the only distros where it's available). But until
this is implemented, keep avocado in a separate venv. There is still the
benefit of using a single python for meson custom_targets and for sphinx.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Custom values for the gitlab-runner Helm chart.
See https://wiki.qemu.org/Testing/CI/KubernetesRunners.
Signed-off-by: Camilla Conte <cconte@redhat.com>
Message-Id: <20230522174153.46801-6-cconte@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This follows the corresponding change for e1000e. This fixes:
tests/avocado/netdev-ethtool.py:NetDevEthtool.test_igb
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Fixes: 9f95111474 ("tests/avocado: re-factor igb test to avoid timeouts")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This patch changes how the avocado tests are provided, ever so
slightly. Instead of creating a new testing venv, use the
configure-provided 'pyvenv' and install optional packages into
that.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20230511035435.734312-20-jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We have a bunch of references to 20.04 (which s390x is still on)
although we are basically building on 22.04 now. Clean up the textual
references and use lcitool to generate the full package list to be
consistent.
We can drop "Install packages to build QEMU on Ubuntu on non-s390x" as
when we upgrade the s390x builder to 22.04 it won't need this
workaround.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230503091244.1450613-19-alex.bennee@linaro.org>
One of the main reasons to have custom runners is so we can run KVM
tests. Enable the additional "kvm" group so we can access the feature
in the kernel.
Message-Id: <20230503091244.1450613-5-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
This was broken when we moved to using the pre-built packages as we
didn't take care to ensure we used RPMs where required.
NB: I could never get this to complete on my test setup but I suspect
this was down to network connectivity and timeouts while downloading.
Fixes: 69c4befba1 (scripts/ci: update gitlab-runner playbook to use latest runner)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230315174331.2959-5-alex.bennee@linaro.org>
Without libslirp enabled we won't have user networking, which means the
KVM tests won't run.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230315174331.2959-4-alex.bennee@linaro.org>
This automates ethtool tests for igb registers, interrupts, etc.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Remove all the virtiofsd build and docs infrastructure.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Remove the avocado test for virtiofsd, since we're about to remove
the C implementation.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
scripts/ci/org.centos/stream/8/build-environment.yml has a slightly different
list of packages compared to scripts/ci/setup/build-environment.yaml. Make
them the same.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Update the CI playbook so that it is able to prepare a system with a
fresh CentOS Stream 8 install, rather than just support RHEL.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We were using quite an old runner on our machines and running into
issues with stalling jobs. GitLab in the meantime now reliably provides
the latest packaged versions of the runner under a stable URL. This
update:
- creates a per-arch subdir for builds
- switches from binary tarballs to deb packages
- re-uses the same binary for the secondary runner
- updates the distro check for the secondary runner to 22.04
Note this script isn't fully idempotent as we end up accumulating
runners especially during testing. However we also want to be able to
run twice with different GitLab keys (e.g. project and personal) so I
think we just have to be mindful of that during testing.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230124180127.1881110-2-alex.bennee@linaro.org>
Change build-environment.yml to only install spice-server on x86_64 and
aarch64, as this package is only available on those architectures.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220922135516.33627-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220929114231.583801-4-alex.bennee@linaro.org>
The Xen hypervisor is only available on Arm and x86, but the YAML only
checked whether the architecture is different from s390x; change it to
a more accurate test.
Tested this change on an Ubuntu 20.04 ppc64le system.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220922135516.33627-3-lucas.araujo@eldorado.org.br>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220929114231.583801-3-alex.bennee@linaro.org>
ninja-build is missing from the RHEL environment, so a system prepared
with that script would still fail to compile QEMU.
Tested on Fedora 36.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20220922135516.33627-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220929114231.583801-2-alex.bennee@linaro.org>
According to our "Supported build platforms" policy, we no longer support
Ubuntu 18.04. Remove the related container files and entries from
our CI.
Message-Id: <20220516115912.120951-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
vhost-scsi and vhost-user-scsi are two devices of their own; it should
be possible to enable/disable them with --without-default-devices, not
--without-default-features. Compute their default value in Kconfig to
obtain the more intuitive behavior.
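A sketch of the idea (the exact symbols and dependencies in the patch
may differ):
config VHOST_SCSI
    bool
    default y
    depends on VIRTIO && VHOST_KERNEL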
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vhost-vsock and vhost-user-vsock are two devices of their own; it should
be possible to enable/disable them with --without-default-devices, not
--without-default-features. Compute their default value in Kconfig to
obtain the more intuitive behavior.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, libpng is only detected if VNC is enabled. This patch adds a
generalised png option in the meson build, aimed at replacing the use of
CONFIG_VNC_PNG with CONFIG_PNG.
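A sketch of the new knob, following the usual meson conventions (the
exact hunks in this patch may differ):
option('png', type: 'feature', value: 'auto',
       description: 'PNG support with libpng')
png = dependency('libpng', required: get_option('png'),
                 method: 'pkg-config')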
Signed-off-by: Kshitij Suri <kshitij.suri@nutanix.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220408071336.99839-2-kshitij.suri@nutanix.com>
[ kraxel: add meson-buildoptions.sh updates ]
[ kraxel: fix centos8 testcase ]
[ kraxel: update --enable-vnc-png too ]
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
--enable-vnc-png fixup
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Some HW can run multiple architecture profiles, so we can install a
secondary runner to build and run tests for those profiles. This
allows setting up a secondary service.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225172021.3493923-9-alex.bennee@linaro.org>
At least the current crop of AArch64 HW can support running 32-bit EL0
code. Before we can build and test we need a minimal set of packages
installed. We can't use "apt build-dep" because it currently gets
confused trying to keep two sets of build-deps installed at once.
Instead we install a minimal set of libraries that will allow us to
continue.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225172021.3493923-8-alex.bennee@linaro.org>
For a long time, we assumed that libxml2 is necessary for parallels
block format support (block/parallels*). However, this format actually
does not use libxml [*]. Since this is the only user of libxml2 in the
whole QEMU tree, we can drop all libxml2 checks and dependencies too.
What is more, the --enable-parallels configure option was the only
option which was silently ignored when its (fake) dependency
(libxml2) wasn't installed.
Drop all mentions of libxml2.
[*] Actually the basis for libxml use was introduced in commit
ed279a06c5 ("configure: add dependency") but the implementation
was never merged:
https://lore.kernel.org/qemu-devel/70227bbd-a517-70e9-714f-e6e0ec431be9@openvz.org/
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220119090423.149315-1-mjt@msgid.tls.msk.ru>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: Updated description and adapted to use lcitool]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220121154134.315047-5-f4bug@amsat.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220204204335.1689602-9-alex.bennee@linaro.org>
The handling for the XFS_IOC_DIOINFO ioctl is currently quite excessive:
This is not a "real" feature like the other features that we provide with
the "--enable-xxx" and "--disable-xxx" switches for the configure script,
since this does not influence much code (it's only about one call to
xfsctl() in file-posix.c), so people don't gain much from the ability to
disable this with "--disable-xfsctl".
It's also unfortunate that the ioctl will be disabled on Linux in case
the user did not install the right xfsprogs-devel package before running
configure. Thus let's simplify this by providing the ioctl definition
on our own, so we can completely get rid of the header dependency and
thus the related code in the configure script.
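For reference, a sketch of such a standalone definition (mirroring the
xfsprogs header, needing only <linux/types.h> and <sys/ioctl.h>):
struct dioattr {
    __u32 d_mem;      /* data buffer memory alignment */
    __u32 d_miniosz;  /* min xfer size */
    __u32 d_maxiosz;  /* max xfer size */
};
#define XFS_IOC_DIOINFO _IOR('X', 30, struct dioattr)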
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20211215125824.250091-1-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This introduces three different parts of a job designed to run
on a custom runner managed by Red Hat. The goals include:
a) propose a model for other organizations that want to onboard
their own runners, with their specific platforms, build
configuration and tests.
b) bring awareness to the differences between upstream QEMU and the
version available under CentOS Stream, which is "A preview of
upcoming Red Hat Enterprise Linux minor and major releases".
c) because of b), it should be easier to identify and reduce the gap
between Red Hat's downstream and upstream QEMU.
The components of this custom job are:
I) OS build environment setup code:
- additions to the existing "build-environment.yml" playbook
that can be used to set up CentOS/EL 8 systems.
- a CentOS Stream 8 specific "build-environment.yml" playbook
that adds to the generic one.
II) QEMU build configuration: a script that will produce binaries with
features as similar as possible to the ones built and packaged on
CentOS Stream 8.
III) Scripts that define the minimum amount of testing that the
binaries built with the given configuration (point II) under the
given OS build environment (point I) should be subjected to.
IV) Job definition: GitLab CI jobs that will dispatch the build/test
jobs (see points #II and #III) to the machine specifically
configured according to #I.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Tested-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20211111160501.862396-2-crosa@redhat.com>
Message-Id: <20211115142915.3797652-6-alex.bennee@linaro.org>
To have the jobs dispatched to custom runners, gitlab-runner must
be installed, active as a service and properly configured. The
variables file and playbook introduced here should help with those
steps.
The playbook introduced here covers Linux distributions and
has been primarily tested on the OS/machines that the QEMU project
has available to act as runners, namely:
* Ubuntu 20.04 on aarch64
* Ubuntu 18.04 on s390x
But it should work on other Linux distributions too. Earlier
versions were tested on FreeBSD as well, so chances of success are
high.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Willian Rampazzo <willianr@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210630012619.115262-4-crosa@redhat.com>
Message-Id: <20210709143005.1554-4-alex.bennee@linaro.org>
To run basic jobs on custom runners, the environment needs to be
properly set up. The most common requirement is having the right
packages installed.
The playbook introduced here covers the QEMU project's s390x and
aarch64 machines. At the time this is being proposed, those machines
have already had this playbook applied to them.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210630012619.115262-3-crosa@redhat.com>
Message-Id: <20210709143005.1554-3-alex.bennee@linaro.org>
This includes both input parameters (project id and commit) in the
message so as to make it easier to debug returned API calls.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20210222193240.921250-4-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When an HTTP GET request fails, it's useful to go beyond the "not
successful" message, and show the code returned by the server.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20210222193240.921250-3-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This simply splits out the code that does an HTTP GET so that it
can be used for other API requests.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20210222193240.921250-2-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Similarly to commit 8cdb2cef3f, move the gprof/gcov test to GitLab.
The coverage-summary.sh script is not Travis-CI specific; make it
generic.
[thuth: Add gcovr and bsdmainutils which are required for the
coverage-summary.sh script to the ubuntu docker file,
and use 'check' as test target]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20201108204535.2319870-10-philmd@redhat.com>
Message-Id: <20210211045455.456371-2-thuth@redhat.com>
Message-Id: <20210211122750.22645-2-alex.bennee@linaro.org>
This allows us to do:
./scripts/ci/gitlab-pipeline-status -w -b HEAD -p 2961854
to check our own pipeline status for a recently pushed branch.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-Id: <20201117173635.29101-2-alex.bennee@linaro.org>
When called in wait mode, this script will also wait for the pipeline
to get to a "running" state, because many more statuses may be seen
until a pipeline gets to "running", and those need to be handled too.
Reference: https://docs.gitlab.com/ee/api/pipelines.html#list-project-pipelines
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200904164258.240278-8-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
For two very different error conditions.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200904164258.240278-7-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
So that exits based on user requests are handled more gracefully.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200904164258.240278-6-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Out of the main function.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200904164258.240278-5-crosa@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>