The tests are running in containers here, so it should be OK to
run with AVOCADO_ALLOW_UNTRUSTED_CODE enabled in this case.
Message-Id: <20201023073351.251332-4-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
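Concretely, enabling it boils down to exporting the variable for the
acceptance-test jobs, roughly like this (job and template names are
illustrative, not the exact committed config):

  acceptance-system-fedora:
    extends: .acceptance_template      # hypothetical shared template
    variables:
      # tests run inside throw-away containers, so untrusted code is fine
      AVOCADO_ALLOW_UNTRUSTED_CODE: 1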
* qtest improvements (test for crash found with the fuzzer, increase
downtime in migration test, less verbose output when running w/o KVM)
* Improve handling of acceptance tests in the Gitlab-CI
* Run checkpatch.pl in the Gitlab-CI
* Improve the gitlab-pipeline-status script
* Misc patches (mark 'moxie' as deprecated, remove stale .gitignore files, ...)
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAl+FhiIRHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbXfcBAAkMc4eUbZ0Wkm7M7TdIRkn1vQEstgvyJN
6t02MuqY0R01rdbIBAnCLSw9okxfCTf7Q33VmC7snLtPo6WmvYIPAXZAnUiz13K1
hGhMJfEY0JSyPEXlENMC/SWcRfNuHud6OPp6KePvn6EQsVZ5CR9SeO5zMsCVj2SP
bMaBYIAJsVCEHkR2lq9UXbjckjyO0GQnQ/oR3mNiqDLYBmrXUOxIFMBctgfbuUtm
uPuvvknHVQa8foD18qVJ8QYZrpwrqN4edFjcoW3yvwfX6OOhTnx+pY43BG/of9YB
OoRY7V4VN8aYmVR08sqyn6PRNpXW9WcSUn8D3JNeiAhLzO/8H197JhHwFVvbZc7t
puLECIINy91wH2i3Onx7HWhss3XLUK3HsvWNLrvLui6vdbFHEtiW2/0GbwJzrcA0
a9inH7bvI7BlPiIau/J7goaDv0fzZ7xVXlQcrM8hC9oCWH5gvmvcgTBWJn/5OxUZ
fov3iFxcRWslFSQe+D66gBceIl/fScF+TUmPoWyeSlD/f1OR2WW+q8N1FvnbLflz
oPutIoja8b6CobzAzp8Igc6/9uQvzCAFB92Y8q1Og7eguQybw7dDtbArjBmjUBVi
slFWoY8/ri2+uyiPsyU13Yfu9N5myqdwIQeM7H8sQ7qS40QHp0z2tj18o951xH2w
WJv3PlGcez4=
=lCRK
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2020-10-13' into staging
* qtest improvements (test for crash found with the fuzzer, increase
downtime in migration test, less verbose output when running w/o KVM)
* Improve handling of acceptance tests in the Gitlab-CI
* Run checkpatch.pl in the Gitlab-CI
* Improve the gitlab-pipeline-status script
* Misc patches (mark 'moxie' as deprecated, remove stale .gitignore files, ...)
# gpg: Signature made Tue 13 Oct 2020 11:49:06 BST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2020-10-13: (23 commits)
scripts/ci/gitlab-pipeline-status: wait for pipeline creation
scripts/ci/gitlab-pipeline-status: use more descriptive exceptions
scripts/ci/gitlab-pipeline-status: handle keyboard interrupts
scripts/ci/gitlab-pipeline-status: refactor parser creation
scripts/ci/gitlab-pipeline-status: give early feedback on running pipelines
scripts/ci/gitlab-pipeline-status: improve message regarding timeout
scripts/ci/gitlab-pipeline-status: make branch name configurable
gitlab: assign python helper files to GitLab maintainers section
gitlab: add a CI job to validate the DCO sign off
gitlab: add a CI job for running checkpatch.pl
configure: fixes indent of $meson setup
docs/system/deprecated: Mark the 'moxie' CPU as deprecated
Remove superfluous .gitignore files
MAINTAINERS: Ignore bios-tables-test in the qtest section
Add a comment in bios-tables-test.c to clarify the reason behind approach
softmmu/vl: Be less verbose about missing KVM when running the qtests
tests/migration: Allow longer timeouts
qtest: add fuzz test case
Acceptance tests: show test report on GitLab CI
Acceptance tests: do not show canceled test logs on GitLab CI
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While checkpatch.pl can validate the DCO sign off, that job must always
be advisory only, since it is expected that certain patches will fail some
code style rules.
We require the DCO sign off to be mandatory for all commits though, so
it benefits from being validated in a standalone job.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200918132903.1848939-3-berrange@redhat.com>
[thuth: Use "stage: build" to let it run earlier]
Signed-off-by: Thomas Huth <thuth@redhat.com>
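A minimal sketch of what such a standalone job can look like, assuming a
small helper script at .gitlab-ci.d/check-dco.py (the path and script are
illustrative):

  check-dco:
    stage: build
    image: python:3
    script:
      # fetch enough history to walk all commits on the branch
      - git fetch --unshallow || true
      - python3 .gitlab-ci.d/check-dco.py
    # the DCO sign off is mandatory, so no allow_failure here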
This job is advisory since it is expected that certain patches will fail
the style checks and checkpatch.pl provides no way to mark exceptions to
the rules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200918132903.1848939-2-berrange@redhat.com>
[thuth: Use "stage: build" to let it run earlier]
Signed-off-by: Thomas Huth <thuth@redhat.com>
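In GitLab terms, "advisory" maps to allow_failure; a rough sketch (the
exact checkpatch.pl invocation here is illustrative):

  check-patch:
    stage: build
    script:
      # check every commit on this branch against master
      - git format-patch --stdout origin/master.. | scripts/checkpatch.pl -
    allow_failure: true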
Avocado will, by default, produce JUnit files. Let's ask GitLab
to present those in the web UI.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20201009205513.751968-4-crosa@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
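GitLab picks up JUnit files declared under artifacts:reports:junit;
roughly (the results path is an assumption about where Avocado writes
its output):

  acceptance-system-build:      # name illustrative
    artifacts:
      reports:
        junit: build/tests/results/latest/results.xml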
Tests resulting in "CANCEL" in Avocado are usually canceled on
purpose, and are almost identical to "SKIP". The logs for canceled
tests are adding a lot of noise to the logs being shown on GitLab CI,
and causing distraction from real failures.
As a side note, this "after script" is scheduled for removal once the
feature is implemented within Avocado itself.
Reference: https://github.com/avocado-framework/avocado/issues/4266
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20201009205513.751968-3-crosa@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
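A hedged sketch of such an after_script step, assuming Avocado's
results.json carries a status and logfile entry per test (the field
names are assumptions):

  .acceptance_post:
    after_script:
      - cd build/tests/results/latest
      - |
        # dump logs only for tests that were not canceled or skipped
        python3 - <<'EOF'
        import json
        for test in json.load(open("results.json"))["tests"]:
            if test["status"] not in ("CANCEL", "SKIP"):
                print(open(test["logfile"]).read())
        EOF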
With 1000 runs, there is a non-negligible chance that the fuzzer can
trigger a crash. With this CI job, we care about catching build/runtime
issues in the core fuzzing code. Actual device fuzzing takes place on
oss-fuzz. For these purposes, only running one input should be
sufficient.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Suggested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201002143524.56930-1-alxndr@bu.edu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
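So instead of a long fuzzing run, the CI step can simply give each fuzzer
a single libFuzzer run, along these lines (paths and names are
illustrative):

  script:
    - |
      # one run per fuzzer is enough to catch build/runtime breakage
      for fuzzer in $(find ./DEST_DIR -type f -executable -name "*fuzz*"); do
          "$fuzzer" -runs=1 -seed=1 || exit 1
      done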
Test 067 from qemu-iotests executes QMP commands to hotplug
and hot-unplug disks, devices and blockdevs. Because the power
of the text-based test harness is limited, the test restricts the
checks it performs, for example by skipping DEVICE_DELETED
events.
tests/qtest already has a similar test, drive_del-test.c.
We can merge them, and even reuse some of the existing code in
drive_del-test.c. This will improve the quality of the test by
covering DEVICE_DELETED events and testing multiple architectures
(therefore covering multiple PCI hotplug mechanisms as well as s390x
virtio-ccw).
The only difference is that the new test will always use null-co:// for
the medium rather than qcow2 or raw, but this should be irrelevant for
what the test is covering. For example there are no "qemu-img check"
runs in 067 that would check that the file is properly closed.
The new test requires PCI hot-plug support, so drive_del-test
is moved from qemu-system-ppc to qemu-system-ppc64.
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While the job is pretty fast, as it covers only a few targets, we still
want to catch breakage of the build. By splitting out the test step we
can use allow_failure for it while still ensuring we don't miss the
build breaking.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201002091538.3017-1-alex.bennee@linaro.org>
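A sketch of the resulting split, with artifacts handing the build tree
over to an advisory test job (job names are illustrative):

  build-tricore-softmmu:
    stage: build
    script:
      - mkdir build && cd build
      - ../configure --target-list=tricore-softmmu
      - make -j"$(nproc)"
    artifacts:
      paths:
        - build

  check-tricore-softmmu:
    stage: test
    needs: [build-tricore-softmmu]
    script:
      - cd build && make check
    # test breakage is advisory; build breakage still fails the pipeline
    allow_failure: true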
Even with the recent split moving beefier plugins into contrib and
dropping them from the check-tcg tests, we are still hitting time
limits. This possibly points to a slowdown of --debug-tcg, but seeing
as we are migrating things to GitLab anyway, we might as well move the
job there and bump the timeout.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201002103223.24022-1-alex.bennee@linaro.org>
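On GitLab a job can raise its own limit with the timeout keyword,
roughly (name and value illustrative):

  check-tcg-plugins:
    timeout: 90m
    script:
      - make -j"$(nproc)" check-tcg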
According to our support policy, Debian 9 is not supported by the
QEMU project anymore. Since we have now switched the MinGW cross-compiler
builds to Fedora, we do not need these Debian 9 based containers
in the gitlab-CI anymore, and can also get rid of the "layer3"
container build stage this way.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200921174320.46062-3-thuth@redhat.com>
Message-Id: <20200925154027.12672-11-alex.bennee@linaro.org>
While we are at it, move the few places where they are built into the
deprecation build bucket.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200915134317.11110-9-alex.bennee@linaro.org>
These targets might be deprecated, but we should keep them building
before the final axe comes down. Let's keep them all in one place and
not hold up the CI if they do fail. They are either poorly tested or
already flaky anyway.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200915134317.11110-8-alex.bennee@linaro.org>
Most jobs test the latest nettle library. This adds explicit coverage
for latest gcrypt using Fedora, and old gcrypt and nettle using
CentOS-7. The latter does a minimal tools-only build, as we only need to
validate that the crypto code builds and unit tests pass. Finally a job
disabling both nettle and gcrypt is provided to validate that gnutls
still works.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200901133050.381844-3-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
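For instance, the nettle/gcrypt-free case reduces to a configure
variation like this (job name illustrative; the switches are regular
QEMU configure options):

  build-crypto-only-gnutls:
    script:
      - mkdir build && cd build
      - ../configure --disable-nettle --disable-gcrypt --enable-gnutls
      - make -j"$(nproc)"
      - make check-unit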
Now that we can use all our QEMU test containers in the gitlab-CI, we can
easily add some jobs that test cross-compilation for various architectures.
There is just one small ugliness: since the shared runners on gitlab.com
are single-threaded, we have to split each compilation job into two parts
(--disable-user and --disable-system), and exclude some additional targets,
to keep the jobs from running too long and hitting the one-hour timeout.
Message-Id: <20200823111757.72002-8-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
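Each architecture therefore gets a job pair along these lines (the cross
prefix is illustrative):

  cross-armhf-system:
    script:
      - mkdir build && cd build
      - ../configure --cross-prefix=arm-linux-gnueabihf- --disable-user
      - make -j"$(nproc)"

  cross-armhf-user:
    script:
      - mkdir build && cd build
      - ../configure --cross-prefix=arm-linux-gnueabihf- --disable-system
      - make -j"$(nproc)"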
The default expiration time for artifacts seems to be very high (30 days?).
Since we only need the artifacts to pass the binaries from one stage to
the next one, we can decrease the expiration time to avoid spamming the
file server too much. Two days should be enough in case someone still wants
to have a look after the pipeline has finished.
Message-Id: <20200806161546.15325-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
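The change itself is roughly a one-liner per job template:

  artifacts:
    expire_in: 2 days
    paths:
      - build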
The fuzzer job finishes quite early, so we can run the unit tests and
qtests with -fsanitize=address here without extending the total test time.
Message-Id: <20200831153228.229185-1-thuth@redhat.com>
Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Let's focus on the gitlab-ci when testing the compilation with disabled
features, thus add more switches there (and while we're at it, also sort
them alphabetically). This should now cover the corresponding test from
Travis CI, too, so that we can remove the now-redundant job there.
Message-Id: <20200806155306.13717-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The binaries move to the root directory, e.g. qemu-system-i386 or
qemu-arm. This requires changes to qtests, CI, etc.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In commit 6957fd98dc ("gitlab: add avocado asset caching") we
tried to save the Avocado cache (as in commit c1073e44b4 with
Travis-CI) however it doesn't work as expected. For some reason
Avocado uses /root/avocado_cache/ which we can not select later.
Manually generate a Avocado config to force the use of the
current job's directory.
This patch is based on an earlier version from Philippe Mathieu-Daudé.
Message-Id: <20200730141326.8260-5-thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
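A sketch of generating such a config before the tests run (the keys
follow Avocado's datadir.paths section; the exact cache location is an
assumption):

  before_script:
    - mkdir -p ~/.config/avocado
    - |
      cat > ~/.config/avocado/avocado.conf <<EOF
      [datadir.paths]
      cache_dirs = ['${CI_PROJECT_DIR}/avocado-cache']
      EOF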
We were missing the two new targets avr-softmmu and rx-softmmu in the
gitlab-CI so far, and did not add some of the "other endianness" targets
like sh4eb-softmmu yet.
Since the current build-system-* jobs already run for a very long time,
let's not add these missing targets there, but instead introduce two new
build jobs, one running with Debian and one running with
CentOS, and add the new targets there. Also move some targets from
the old build-system-* jobs to these new jobs, to distribute the
load and reduce the runtime of the CI.
Message-Id: <20200730141326.8260-4-thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This tries to build and run the fuzzers with the same build script used
by oss-fuzz. This doesn't guarantee that the builds on oss-fuzz will
also succeed, since oss-fuzz provides its own compiler and fuzzer vars,
but it can catch changes that are not compatible with the
./scripts/oss-fuzz/build.sh script.
The strange way of finding fuzzer binaries stems from the method used by
oss-fuzz:
https://github.com/google/oss-fuzz/blob/master/infra/base-images/base-runner/targets_list
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20200720073223.22945-1-thuth@redhat.com>
[thuth: Tweak the "script" to make it work, exclude slirp test, etc.]
Signed-off-by: Thomas Huth <thuth@redhat.com>
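That targets_list heuristic identifies fuzzers by grepping executables
for the libFuzzer entry point; mirrored in shell it looks roughly like
this (environment variables and paths are illustrative):

  script:
    - CC=clang CXX=clang++ ./scripts/oss-fuzz/build.sh
    - |
      for target in $(find ./DEST_DIR -type f -executable); do
          # a binary is a fuzz target if it links in LLVMFuzzerTestOneInput
          grep -q "LLVMFuzzerTestOneInput" "$target" && echo "fuzzer: $target"
      done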
So far we have neither compile-tested nor run any of the new fuzzers in
our CI, which led to some build failures of the fuzzer code in the past weeks.
To avoid this problem, add a job to compile the fuzzer code and run some
loops (which likely don't find any new bugs via fuzzing, but at least we
know that the code can still be run).
A nice side-effect of this test is that the leak tests are enabled here,
so we should now notice some of the memory leaks in our code base earlier.
Message-Id: <20200716100950.27396-1-thuth@redhat.com>
Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Thomas Huth <thuth@redhat.com>
If we want to continue to split the build and check phases, it seems like
a good idea to allow building the tests during our multi-threaded
build phase.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200701135652.1366-40-alex.bennee@linaro.org>
These can be quite big, so let's cache them. I couldn't find any notes on
ccache in the GitLab docs, so I've just ignored it for now.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200701135652.1366-36-alex.bennee@linaro.org>
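In GitLab that amounts to pointing ccache at a directory inside the
project tree and caching it per job, roughly (template name illustrative):

  .native_build_job:
    variables:
      CCACHE_DIR: $CI_PROJECT_DIR/ccache
      CCACHE_MAXSIZE: 500M
    cache:
      key: "$CI_JOB_NAME"
      paths:
        - ccache/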
Switch to building in the new debian-all-test-cross image which has
most of the cross compilers inline.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200701135652.1366-35-alex.bennee@linaro.org>
As part of migrating things from Travis to GitLab add the acceptance
tests. To do this:
- rename system1 to system-ubuntu-main
- rename system2 to system-fedora-misc
- split into build/check/acceptance
- remove -j from check stages
- use artifacts to save build stage
- add post acceptance template and use
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200701135652.1366-31-alex.bennee@linaro.org>
Now that we're building standard container images from
dockerfiles in tests/docker/dockerfiles, we can convert
the build jobs to use them. The key benefit of this is
that a contributor can now more easily replicate the CI
environment on their local machine. The container images
are cached too, so we are not spending time waiting for
the apt-get/dnf package installs to complete.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200622153318.751107-4-berrange@redhat.com>
[AJB: tweak naming convention]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200701135652.1366-23-alex.bennee@linaro.org>
We have a number of container images in tests/docker/dockerfiles
that are intended to provide well defined environments for doing
test builds. We want our CI system to use these containers too.
This introduces builds of all of them as the first stage in the
CI, so that the built containers are available for later build
jobs. The containers are setup to use the GitLab container
registry as the cache, so we only pay the penalty of the full
build when the dockerfiles change. The main qemu-project/qemu
repo is used as a second cache, so that users forking QEMU will
see a fast turnaround time on their CI jobs.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200622153318.751107-3-berrange@redhat.com>
[AJB: tweak the tag format]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200701135652.1366-22-alex.bennee@linaro.org>
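A sketch of one such container job, with the fork's registry as primary
cache and the upstream qemu-project registry as fallback (job and
variable names are illustrative):

  amd64-fedora-container:
    stage: containers
    script:
      - export TAG="$CI_REGISTRY_IMAGE/qemu/fedora:latest"
      - export COMMON_TAG="registry.gitlab.com/qemu-project/qemu/fedora:latest"
      - docker pull "$TAG" || docker pull "$COMMON_TAG" || true
      - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f tests/docker/dockerfiles/fedora.docker tests/docker
      - docker push "$TAG"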
If no stage is listed, jobs get put in an implicit "test" stage.
Some jobs which create container images to be used by later stages
are currently listed in a "build" stage.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200622153318.751107-2-berrange@redhat.com>
Message-Id: <20200701135652.1366-21-alex.bennee@linaro.org>
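Declaring the stages explicitly makes the ordering visible at the top of
the file, roughly:

  stages:
    - containers      # build/push the docker images first
    - build           # compile QEMU inside those images
    - test            # run tests against the build artifacts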
Some people might want to run the gitlab CI pipelines in an environment
where multiple CPUs are available to the runners, so let's get the
number for "-j" from the "nproc" program (increased by 1 to compensate
for jobs that wait for I/O) instead of hard-coding it.
Message-Id: <20200525131823.715-7-thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
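In the shared shell snippet this looks something like:

  before_script:
    # one extra job to compensate for processes blocked on I/O
    - JOBS=$(expr $(nproc) + 1)
  script:
    - make -j"$JOBS" all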
Currently all pipelines of the gitlab CI are failing, except for the
"build-user" pipeline. There is an issue with the default container
image (likely Debian stable) where they imported something bad in one
of the system headers:
/usr/include/linux/swab.h: In function '__swab':
/builds/huth/qemu/include/qemu/bitops.h:20:34: error: "sizeof" is not
defined, evaluates to 0 [-Werror=undef]
#define BITS_PER_LONG (sizeof (unsigned long) * BITS_PER_BYTE)
We could maybe work around this issue or wait for the default containers
to get fixed, but considering that we already use Ubuntu (and thus
Debian-style) CI in Travis to a very large extent, we should consider
using some RPM-based distros in our gitlab CI instead. Thus let's change the
failing pipelines to use Fedora and CentOS (and also one Ubuntu 19.10,
since 20.04 is broken, too) now.
Message-Id: <20200525131823.715-6-thuth@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We have a dedicated folder for the gitlab-ci - so there is no need
to clutter the top directory with these .yml files.
Message-Id: <20200525131823.715-5-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
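The top-level .gitlab-ci.yml can then just pull the pieces in via
include (file names illustrative):

  include:
    - local: '/.gitlab-ci.d/edk2.yml'
    - local: '/.gitlab-ci.d/opensbi.yml'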
At this point it seems that all jobs depend on those steps, with
maybe the EDK2 jobs as exceptions.
The jobs that will be added later will not want those scripts to be
run, so let's move these steps to the appropriate jobs, while
still trying to avoid repetition.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200525131823.715-4-thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[thuth: Rebased to current master branch, use separate template]
Signed-off-by: Thomas Huth <thuth@redhat.com>
QEMU does not use flex/bison packages.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200515163029.12917-4-philmd@redhat.com>
Message-Id: <20200525131823.715-3-thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Add two GitLab jobs to build the OpenSBI firmware binaries.
The first job builds a Docker image with the packages required
to build OpenSBI, and stores this image in the GitLab registry.
The second job pulls the image from the registry and builds the
OpenSBI firmware binaries.
The docker image is only rebuilt if the GitLab YAML or the
Dockerfile is updated. The second job is only built when the
roms/opensbi/ submodule is updated, when a git-ref starts with
'opensbi' or when the last commit contains 'OpenSBI'. The files
generated are archived in the artifacts.zip file.
With OpenSBI v0.6, it took 2 minutes 56 seconds to build
the docker image, and 1 minute 24 seconds to generate the
artifacts.zip with the firmware binaries (filesize: 111KiB).
See: https://gitlab.com/lbmeng/qemu/pipelines/120520138
Suggested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
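Those trigger conditions translate into GitLab rules along these lines
(a sketch, not the exact committed config):

  build-opensbi:
    rules:
      - changes:
          - roms/opensbi        # the submodule pointer
      - if: '$CI_COMMIT_REF_NAME =~ /^opensbi/'
      - if: '$CI_COMMIT_MESSAGE =~ /OpenSBI/'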
Add it to several build systems so that it gets properly tested.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
iotests 147 and 205 have recently been marked as "NBD-only", so they
are currently simply skipped and thus can be removed.
iotest 129 occasionally fails in the gitlab-CI, and according to Max,
there are some known issues with this test (see for example this URL:
https://lists.nongnu.org/archive/html/qemu-block/2019-06/msg00499.html ),
so for the time being, let's disable it until the problems are fixed.
The iotests 040, 127, 203 and 256 are scheduled to become part of "make
check-block", so we also do not have to test them separately here anymore.
On the other hand, new iotests have been added to the QEMU repository
in the past months, so we can now add some new tests > 256 instead.
Message-Id: <20200121131936.8214-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Add two GitLab jobs to build the EDK2 firmware binaries.
The first job builds a Docker image with the packages required
to build EDK2, and stores this image in the GitLab registry.
The second job pulls the image from the registry and builds the
EDK2 firmware binaries.
The docker image is only rebuilt if the GitLab YAML or the
Dockerfile is updated.
The second job is only built when the roms/edk2/ submodule is
updated, when a git-ref starts with 'edk2' or when the last
commit contains 'EDK2'. The files generated are archived in
the artifacts.zip file.
With edk2-stable201905, it took 2 minutes 52 seconds to build
the docker image, and 36 minutes 28 seconds to generate the
artifacts.zip with the firmware binaries (filesize: 10MiB).
See: https://gitlab.com/philmd/qemu/pipelines/107553178
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Since commit 2f160e0f97 ("tci: Add
implementation for INDEX_op_ld16u_i64") has been included now, we
can also run the TCG tests with tci, so let's enable them in our
Gitlab CI now.
Message-Id: <20191127155105.3784-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
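A sketch of the corresponding job (target list illustrative;
--enable-tcg-interpreter is the regular configure switch for TCI):

  build-tci:
    script:
      - mkdir build && cd build
      - ../configure --enable-tcg-interpreter --target-list=x86_64-softmmu
      - make -j"$(nproc)"
      - make check-tcg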
The tests directory itself is pretty overcrowded, and it's hard to
see which test belongs to which test subsystem (unit, qtest, ...).
Let's move the qtests to a separate folder for more clarity.
Message-Id: <20191218103059.11729-6-thuth@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Most developers are using out-of-tree builds and it was discussed in the past
to only allow those. To prepare for the transition, use out-of-tree builds
in all continuous integration jobs.
Based on a patch by Marc-André Lureau.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Li-Wen Hsu <lwhsu@freebsd.org>
Message-Id: <1576074829-56711-1-git-send-email-pbonzini@redhat.com>
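The pattern is the same in every job: configure and build from a
separate build directory, roughly:

  script:
    - mkdir build
    - cd build
    - ../configure --enable-werror
    - make -j"$(nproc)"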
Since the bluetooth code has been removed, we don't need to test
with this library anymore.
Message-Id: <20191120091014.16883-5-thuth@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We currently enable libcap-dev in build-clang to pick up the 9p proxy
helper. Paolo's patch (commit 7e46261368) changes that to use
libcap-ng, so switch to using it. This also means we'll be testing the
scsi pr manager and the bridge helper.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[groug, mention SHA1 that dropped libcap]
Signed-off-by: Greg Kurz <groug@kaod.org>
The libvdeplug-dev package is required to compile-test net/vde.c.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20191016131002.29663-1-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
So far the gitlab-ci has not been testing virtio-9p, since we did not
install libattr-devel and libcap-devel in any of the pipelines. Do
it now to get some more test coverage.
Message-Id: <20190905111729.1197-1-thuth@redhat.com>
Acked-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Since most iotests are now run during "make check", we do not
need to test them explicitly from the gitlab-ci.yml script anymore.
And while we're at it, add some of the new non-auto tests >= 246 instead.
Message-Id: <20190717111947.30356-5-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
So far we do not have any test coverage for TCI (the TCG interpreter).
Thus let's add a CI pipeline that runs at least some basic TCG tests with
a TCI build, to make sure that there are no further regressions.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190410123550.2362-1-thuth@redhat.com>
This is very convenient for people like me who store their QEMU git trees
on gitlab.com: automatic CI pipelines are now run for each branch that is
pushed to the server - useful for some extra testing before sending PULL
requests, for example. Since the runtime of the jobs is limited to 1 h, the
jobs are distributed into multiple pipelines - this way everything finishes
fine within the limit (ca. 30 minutes currently).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1550058881-16351-1-git-send-email-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>