* Various updates for the documentation

-----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmEmGw8RHHRodXRoQHJl
 ZGhhdC5jb20ACgkQLtnXdP5wLbX3sA//dRQQdBdc+ZAPTdtG3I9qzOT2sgWe9Se/
 dlrw9cvUkHzHxXr/U2/hf1TOo3UfwYhGBVy9E003CBYSCyi7Xv0bIpqaOqcU6yg5
 9gtwDNOENMX5qC/EOua/8secbXTXCWs1d0hrzjeqNIL1/9ZeC6IUPkP23U9yIg2M
 khEbdClX+/dim6h/+hrFv4ONVG0IVpYh3uip3sSNtZXCg1nl4QPZdbkCQax/Bqih
 MM8Q9z0BuIngp7d9NE949pAIZ6P/QuZGsAcDTZa1utNZk3wGSV4aB85dC6wM/kGp
 N2h4xapCl5aK8brw1Q2dNiPsAjYRQn0w8N1qbMRqXYLxq9ULxdpDHQdenhdlLP8+
 TIiaDkIZld+MZW5RdHnGlo0MYd6YSg40Zx5E31nsx3HO9uv251/n5BGcRlA29+Rr
 smAb7GOtqJnVzCeaSmjltZJmxksp+Q0YuwXuiqj6j5ZawMhU6T5/+cSY26V1cdsh
 FYwrAM0Q8ohwiIzeFcaCJBEC7/SqU9bydT7D8ys68sQ2zyRVKFnvHIZrQeUdDLl2
 zvyO4TGLde3iSdnzEVltToQCQET34QjVI5YyGlvAW6D4ygA/T8SZ6JOTUOLtUl8S
 z6DUEwRBRJ4xIa3FJrsezjcbVSOTu6QBsSCRs73x22Z7OCeemy7ErrFyFQhUQMjb
 0DuG0KBbiSo=
 =P0bZ
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/thuth-gitlab/tags/pull-request-2021-08-25' into staging

* Various updates for the documentation

# gpg: Signature made Wed 25 Aug 2021 11:27:27 BST
# gpg:                using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg:                issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg:                 aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg:                 aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg:                 aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3  EAB9 2ED9 D774 FE70 2DB5

* remotes/thuth-gitlab/tags/pull-request-2021-08-25:
  docs: make sphinx-build be quiet by default
  docs: split the CI docs into two files
  docs/about/removed-features: Move some CLI options to the right location
  docs/about: Add the missing release record in the subject
  docs/about: Unify the subject format
  docs/about: Remove the duplicated doc

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell 2021-08-25 18:50:31 +01:00
commit 810e0cd1a2
6 changed files with 259 additions and 272 deletions


@ -107,8 +107,8 @@ the process listing. This is replaced by the new ``password-secret``
option which lets the password be securely provided on the command
line using a ``secret`` object instance.
``opened`` property of ``rng-*`` objects (since 6.0.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''
``opened`` property of ``rng-*`` objects (since 6.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''
The only effect of specifying ``opened=on`` in the command line or QMP
``object-add`` is that the device is opened immediately, possibly before all
@ -116,8 +116,8 @@ other options have been processed. This will either have no effect (if
``opened`` was the last option) or cause errors. The property is therefore
useless and should not be specified.
``loaded`` property of ``secret`` and ``secret_keyring`` objects (since 6.0.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``loaded`` property of ``secret`` and ``secret_keyring`` objects (since 6.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The only effect of specifying ``loaded=on`` in the command line or QMP
``object-add`` is that the secret is loaded immediately, possibly before all
@ -142,33 +142,33 @@ should be used instead.
QEMU Machine Protocol (QMP) commands
------------------------------------
``blockdev-open-tray``, ``blockdev-close-tray`` argument ``device`` (since 2.8.0)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``blockdev-open-tray``, ``blockdev-close-tray`` argument ``device`` (since 2.8)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use argument ``id`` instead.
``eject`` argument ``device`` (since 2.8.0)
'''''''''''''''''''''''''''''''''''''''''''
``eject`` argument ``device`` (since 2.8)
'''''''''''''''''''''''''''''''''''''''''
Use argument ``id`` instead.
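For instance, a hedged QMP sketch of ``eject`` using the ``id`` argument (the id ``cd0`` is illustrative)::

    { "execute": "eject", "arguments": { "id": "cd0" } }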
``blockdev-change-medium`` argument ``device`` (since 2.8.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``blockdev-change-medium`` argument ``device`` (since 2.8)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use argument ``id`` instead.
``block_set_io_throttle`` argument ``device`` (since 2.8.0)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``block_set_io_throttle`` argument ``device`` (since 2.8)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use argument ``id`` instead.
``blockdev-add`` empty string argument ``backing`` (since 2.10.0)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``blockdev-add`` empty string argument ``backing`` (since 2.10)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use argument value ``null`` instead.
``block-commit`` arguments ``base`` and ``top`` (since 3.1.0)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``block-commit`` arguments ``base`` and ``top`` (since 3.1)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use arguments ``base-node`` and ``top-node`` instead.
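A hedged QMP sketch using the replacement arguments (the device and node names are illustrative)::

    { "execute": "block-commit",
      "arguments": { "device": "drive0",
                     "base-node": "base-img", "top-node": "mid-img" } }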
@ -191,8 +191,8 @@ from Linux upstream kernel, declare it deprecated.
System emulator CPUs
--------------------
``Icelake-Client`` CPU Model (since 5.2.0)
''''''''''''''''''''''''''''''''''''''''''
``Icelake-Client`` CPU Model (since 5.2)
''''''''''''''''''''''''''''''''''''''''
``Icelake-Client`` CPU Models are deprecated. Use ``Icelake-Server`` CPU
Models instead.
@ -245,8 +245,8 @@ Device options
Emulated device options
'''''''''''''''''''''''
``-device virtio-blk,scsi=on|off`` (since 5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``-device virtio-blk,scsi=on|off`` (since 5.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The virtio-blk SCSI passthrough feature is a legacy VIRTIO feature. VIRTIO 1.0
and later do not support it because the virtio-scsi device was introduced for
@ -258,14 +258,14 @@ alias.
Block device options
''''''''''''''''''''
``"backing": ""`` (since 2.12.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``"backing": ""`` (since 2.12)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to prevent QEMU from automatically opening an image's backing
chain, use ``"backing": null`` instead.
``rbd`` keyvalue pair encoded filenames: ``""`` (since 3.1.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``rbd`` keyvalue pair encoded filenames: ``""`` (since 3.1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Options for ``rbd`` should be specified according to its runtime options,
like other block drivers. Legacy parsing of keyvalue pair encoded
@ -283,8 +283,8 @@ The above, converted to the current supported format::
linux-user mode CPUs
--------------------
``ppc64abi32`` CPUs (since 5.2.0)
'''''''''''''''''''''''''''''''''
``ppc64abi32`` CPUs (since 5.2)
'''''''''''''''''''''''''''''''
The ``ppc64abi32`` architecture has a number of issues which regularly
trip up our CI testing and is suspected to be quite broken. For that
@ -303,8 +303,8 @@ Related binaries
Backwards compatibility
-----------------------
Runnability guarantee of CPU models (since 4.1.0)
'''''''''''''''''''''''''''''''''''''''''''''''''
Runnability guarantee of CPU models (since 4.1)
'''''''''''''''''''''''''''''''''''''''''''''''
Previous versions of QEMU never changed existing CPU models in
ways that introduced additional host software or hardware


@ -140,18 +140,79 @@ Use ``-rtc driftfix=slew`` instead.
Replaced by ``-rtc base=date``.
``-vnc ...,tls=...``, ``-vnc ...,x509=...`` & ``-vnc ...,x509verify=...``
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``-vnc ...,tls=...``, ``-vnc ...,x509=...`` & ``-vnc ...,x509verify=...`` (removed in 3.1)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The "tls-creds" option should be used instead to point to a "tls-creds-x509"
object created using "-object".
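A hedged sketch of the replacement (the certificate directory and the id ``tls0`` are illustrative)::

    -object tls-creds-x509,id=tls0,dir=/etc/pki/qemu,endpoint=server \
    -vnc :0,tls-creds=tls0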
``-mem-path`` fallback to RAM (removed in 5.0)
''''''''''''''''''''''''''''''''''''''''''''''
If guest RAM allocation from the file pointed to by ``mem-path`` failed,
QEMU fell back to allocating from RAM, which might have resulted
in unpredictable behavior since the backing file specified by the user
was ignored. Currently, users are responsible for making sure the backing storage
specified with ``-mem-path`` can actually provide the guest RAM configured with
``-m``, and QEMU fails to start up if RAM allocation is unsuccessful.
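For instance, with file-backed guest RAM the backing storage must now be able to hold the configured guest memory (the 4G size and hugepage path are illustrative)::

    qemu-system-x86_64 -m 4G -mem-path /dev/hugepages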
``-net ...,name=...`` (removed in 5.1)
''''''''''''''''''''''''''''''''''''''
The ``name`` parameter of the ``-net`` option was a synonym
for the ``id`` parameter, which should now be used instead.
``-numa node,mem=...`` (removed in 5.1)
'''''''''''''''''''''''''''''''''''''''
The parameter ``mem`` of ``-numa node`` was used to assign a part of guest RAM
to a NUMA node. But when using it, it's impossible to manage the specified RAM
chunk on the host side (e.g. bind it to a host node, set a bind policy, ...),
so the guest ends up with a fake NUMA configuration with suboptimal
performance.
However, since 2014 there has been an alternative way to assign RAM to a NUMA node
using the parameter ``memdev``, which does the same as ``mem`` and adds the
means to actually manage node RAM on the host side. Use the parameter ``memdev``
with a *memory-backend-ram* backend as a replacement for the parameter ``mem``
to achieve the same fake NUMA effect, or a properly configured
*memory-backend-file* backend to actually benefit from the NUMA configuration.
New machine versions (since 5.1) will not accept the option, but it will still
work with old machine types. Users can check the QAPI schema to see if the legacy
option is supported by looking at the MachineInfo::numa-mem-supported property.
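As a hedged sketch of the replacement (the object id ``ram0`` and the 4G size are illustrative)::

    qemu-system-x86_64 -m 4G \
        -object memory-backend-ram,id=ram0,size=4G \
        -numa node,nodeid=0,memdev=ram0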
``-numa`` node (without memory specified) (removed in 5.2)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Splitting RAM by default between NUMA nodes had the same issues as the ``mem``
parameter, with the difference that QEMU plays the role of the user by applying
an implicit generic or board-specific splitting rule.
Use ``memdev`` with a *memory-backend-ram* backend, or ``mem`` (if
it's supported by the machine type in use), to define the mapping explicitly instead.
Users of existing VMs who wish to preserve the same RAM distribution should
configure it explicitly using ``-numa node,memdev`` options. The current RAM
distribution can be retrieved with the HMP command ``info numa``; if separate
memory devices (pc-dimm or nvdimm) are present, use ``info memory-device`` and
subtract the device memory from the ``info numa`` output.
``-smp`` (invalid topologies) (removed in 5.2)
''''''''''''''''''''''''''''''''''''''''''''''
CPU topology properties should describe the whole machine topology, including
possible CPUs.
However, historically it was possible to start QEMU with an incorrect topology
where *n* <= *sockets* * *cores* * *threads* < *maxcpus*,
which could lead to an incorrect topology enumeration by the guest.
Support for invalid topologies has been removed; the user must ensure that
topologies described with ``-smp`` include all possible CPUs, i.e.
*sockets* * *cores* * *threads* = *maxcpus*.
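A worked example of a fully-specified, valid topology (the values are chosen for illustration)::

    -smp 8,sockets=2,cores=2,threads=2,maxcpus=8

Here ``2 * 2 * 2 = 8``, matching ``maxcpus``.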
``-machine enforce-config-section=on|off`` (removed in 5.2)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The ``enforce-config-section`` property was replaced by the
``-global migration.send-configuration={on|off}`` option.
``-no-kvm`` (removed in 5.2)
''''''''''''''''''''''''''''
@ -194,8 +255,8 @@ by the ``tls-authz`` and ``sasl-authz`` options.
The ``pretty=on|off`` switch has no effect for HMP monitors and
its use is rejected.
``-drive file=json:{...{'driver':'file'}}`` (removed 6.0)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``-drive file=json:{...{'driver':'file'}}`` (removed in 6.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The 'file' driver for drives is no longer appropriate for character or host
devices and will only accept regular files (S_IFREG). The correct driver
@ -272,8 +333,8 @@ for the RISC-V ``virt`` machine and ``sifive_u`` machine.
QEMU Machine Protocol (QMP) commands
------------------------------------
``block-dirty-bitmap-add`` "autoload" parameter (removed in 4.2.0)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
``block-dirty-bitmap-add`` "autoload" parameter (removed in 4.2)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The "autoload" parameter has been ignored since 2.12.0. All bitmaps
are automatically loaded from qcow2 images.
@ -456,15 +517,15 @@ Nobody was using this CPU emulation in QEMU, and there were no test images
available to make sure that the code is still working, so it has been removed
without replacement.
``lm32`` CPUs (removed in 6.1.0)
''''''''''''''''''''''''''''''''
``lm32`` CPUs (removed in 6.1)
''''''''''''''''''''''''''''''
The only public user of this architecture was the milkymist project,
which has been dead for years; there was never an upstream Linux
port. Removed without replacement.
``unicore32`` CPUs (since 6.1.0)
''''''''''''''''''''''''''''''''
``unicore32`` CPUs (removed in 6.1)
'''''''''''''''''''''''''''''''''''
Support for this CPU was removed from the upstream Linux kernel, and
there is no available upstream toolchain to build binaries for it.
@ -590,82 +651,6 @@ enforce that any failure to open the backing image (including if the
backing file is missing or an incorrect format was specified) is an
error when ``-u`` is not used.
Command line options
--------------------
``-smp`` (invalid topologies) (removed 5.2)
'''''''''''''''''''''''''''''''''''''''''''
CPU topology properties should describe whole machine topology including
possible CPUs.
However, historically it was possible to start QEMU with an incorrect topology
where *n* <= *sockets* * *cores* * *threads* < *maxcpus*,
which could lead to an incorrect topology enumeration by the guest.
Support for invalid topologies is removed, the user must ensure
topologies described with -smp include all possible cpus, i.e.
*sockets* * *cores* * *threads* = *maxcpus*.
``-numa`` node (without memory specified) (removed 5.2)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''
Splitting RAM by default between NUMA nodes had the same issues as ``mem``
parameter, with the difference that QEMU plays the role of the user by applying
an implicit generic or board-specific splitting rule.
Use ``memdev`` with *memory-backend-ram* backend or ``mem`` (if
it's supported by used machine type) to define mapping explicitly instead.
Users of existing VMs, wishing to preserve the same RAM distribution, should
configure it explicitly using ``-numa node,memdev`` options. Current RAM
distribution can be retrieved using HMP command ``info numa`` and if separate
memory devices (pc|nv-dimm) are present use ``info memory-device`` and subtract
device memory from output of ``info numa``.
``-numa node,mem=``\ *size* (removed in 5.1)
''''''''''''''''''''''''''''''''''''''''''''
The parameter ``mem`` of ``-numa node`` was used to assign a part of
guest RAM to a NUMA node. But when using it, it's impossible to manage a specified
RAM chunk on the host side (like bind it to a host node, setting bind policy, ...),
so the guest ends up with a fake NUMA configuration with suboptimal performance.
However since 2014 there is an alternative way to assign RAM to a NUMA node
using parameter ``memdev``, which does the same as ``mem`` and adds
means to actually manage node RAM on the host side. Use parameter ``memdev``
with *memory-backend-ram* backend as replacement for parameter ``mem``
to achieve the same fake NUMA effect or a properly configured
*memory-backend-file* backend to actually benefit from NUMA configuration.
New machine versions (since 5.1) will not accept the option but it will still
work with old machine types. User can check the QAPI schema to see if the legacy
option is supported by looking at MachineInfo::numa-mem-supported property.
``-mem-path`` fallback to RAM (removed in 5.0)
''''''''''''''''''''''''''''''''''''''''''''''
If guest RAM allocation from file pointed by ``mem-path`` failed,
QEMU was falling back to allocating from RAM, which might have resulted
in unpredictable behavior since the backing file specified by the user
was ignored. Currently, users are responsible for making sure the backing storage
specified with ``-mem-path`` can actually provide the guest RAM configured with
``-m`` and QEMU fails to start up if RAM allocation is unsuccessful.
``-smp`` (invalid topologies) (removed 5.2)
'''''''''''''''''''''''''''''''''''''''''''
CPU topology properties should describe whole machine topology including
possible CPUs.
However, historically it was possible to start QEMU with an incorrect topology
where *n* <= *sockets* * *cores* * *threads* < *maxcpus*,
which could lead to an incorrect topology enumeration by the guest.
Support for invalid topologies is removed, the user must ensure
topologies described with -smp include all possible cpus, i.e.
*sockets* * *cores* * *threads* = *maxcpus*.
``-machine enforce-config-section=on|off`` (removed 5.2)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''
The ``enforce-config-section`` property was replaced by the
``-global migration.send-configuration={on|off}`` option.
qemu-img amend to adjust backing file (removed in 6.1)
''''''''''''''''''''''''''''''''''''''''''''''''''''''

docs/devel/ci-jobs.rst (new file, 40 lines)

@ -0,0 +1,40 @@
Custom CI/CD variables
======================
QEMU CI pipelines can be tuned by setting some CI environment variables.
Set variable globally in the user's CI namespace
------------------------------------------------
Variables can be set globally in the user's CI namespace setting.
For further information about how to set these variables, please refer to::
https://docs.gitlab.com/ee/ci/variables/#add-a-cicd-variable-to-a-project
Set variable manually when pushing a branch or tag to the user's repository
---------------------------------------------------------------------------
Variables can be set manually when pushing a branch or tag, using
git-push command line arguments.
Example setting the QEMU_CI_EXAMPLE_VAR variable:
.. code::
git push -o ci.variable="QEMU_CI_EXAMPLE_VAR=value" myrepo mybranch
For further information about how to set these variables, please refer to::
https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd
Here is a list of the most used variables:
QEMU_CI_AVOCADO_TESTING
~~~~~~~~~~~~~~~~~~~~~~~
By default, tests using the Avocado framework are not run automatically in
the pipelines (because multiple artifacts have to be downloaded, and if
these artifacts are not already cached, downloading them makes the jobs
reach the timeout limit). Set this variable to have the tests using the
Avocado framework run automatically.
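Building on the push-option syntax shown above, the variable could be set for a single push like this (the value ``1`` is an illustrative non-empty value)::

    git push -o ci.variable="QEMU_CI_AVOCADO_TESTING=1" myrepo mybranch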

docs/devel/ci-runners.rst (new file, 117 lines)

@ -0,0 +1,117 @@
Jobs on Custom Runners
======================
Besides the jobs run under the various CI systems listed before, there
are a number of additional jobs that will run before an actual merge.
These use the same GitLab CI service/framework already used for all
other GitLab-based CI jobs, but rely on additional systems, not the
ones provided by GitLab as "shared runners".
The architecture of GitLab's CI service allows different machines to
be set up with GitLab's "agent", called gitlab-runner, which will take
care of running jobs created by events such as a push to a branch.
Here, the combination of a machine, properly configured with GitLab's
gitlab-runner, is called a "custom runner".
The GitLab CI job definitions for the custom runners are located under::
.gitlab-ci.d/custom-runners.yml
Custom runners entail custom machines. To see a list of the machines
currently deployed in the QEMU GitLab CI and their maintainers, please
refer to the QEMU `wiki <https://wiki.qemu.org/AdminContacts>`__.
Machine Setup Howto
-------------------
For all Linux based systems, the setup can be mostly automated by the
execution of two Ansible playbooks. Create an ``inventory`` file
under ``scripts/ci/setup``, such as this::
fully.qualified.domain
other.machine.hostname
You may need to set some variables in the inventory file itself. One
very common need is to tell Ansible to use a Python 3 interpreter on
those hosts. This would look like::
fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
other.machine.hostname ansible_python_interpreter=/usr/bin/python3
Build environment
~~~~~~~~~~~~~~~~~
The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
set up machines with the environment needed to perform builds and run
QEMU tests. This playbook consists of the installation of various
required packages (and a general package update while at it). It
currently covers a number of different Linux distributions, but it can
be expanded to cover other systems.
The minimum required version of Ansible successfully tested in this
playbook is 2.8.0 (a version check is embedded within the playbook
itself). To run the playbook, execute::
cd scripts/ci/setup
ansible-playbook -i inventory build-environment.yml
Please note that most of the tasks in the playbook require superuser
privileges, such as those from the ``root`` account or those obtained
by ``sudo``. If necessary, please refer to ``ansible-playbook``
options such as ``--become``, ``--become-method``, ``--become-user``
and ``--ask-become-pass``.
gitlab-runner setup and registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The gitlab-runner agent needs to be installed on each machine that
will run jobs. The association between a machine and a GitLab project
happens with a registration token. To find the registration token for
your repository/project, navigate on GitLab's web UI to:
* Settings (the gears-like icon at the bottom of the left hand side
vertical toolbar), then
* CI/CD, then
* Runners, and click on the "Expand" button, then
* Under "Set up a specific Runner manually", look for the value under
"And this registration token:"
Copy the ``scripts/ci/setup/vars.yml.template`` file to
``scripts/ci/setup/vars.yml``. Then, set the
``gitlab_runner_registration_token`` variable to the value obtained
earlier.
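As a sketch, the relevant entry in ``scripts/ci/setup/vars.yml`` would look along these lines (the token value is a placeholder)::

    gitlab_runner_registration_token: REGISTRATION_TOKEN_FROM_GITLAB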
To run the playbook, execute::
cd scripts/ci/setup
ansible-playbook -i inventory gitlab-runner.yml
Following the registration, it's necessary to configure the runner tags,
and optionally other configurations on the GitLab UI. Navigate to:
* Settings (the gears like icon), then
* CI/CD, then
* Runners, and click on the "Expand" button, then
* "Runners activated for this project", then
* Click on the "Edit" icon (next to the "Lock" Icon)
Tags are very important as they are used to route specific jobs to
specific types of runners, so it's a good idea to double check that
the automatically created tags are consistent with the OS and
architecture. For instance, an Ubuntu 20.04 aarch64 system should
have tags set as::
ubuntu_20.04,aarch64
Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
would contain::
ubuntu-20.04-aarch64-all:
tags:
- ubuntu_20.04
- aarch64
It's also recommended to:
* increase the "Maximum job timeout" to something like ``2h``
* give it a better Description


@ -8,160 +8,5 @@ found at::
https://wiki.qemu.org/Testing/CI
Custom CI/CD variables
======================
QEMU CI pipelines can be tuned by setting some CI environment variables.
Set variable globally in the user's CI namespace
------------------------------------------------
Variables can be set globally in the user's CI namespace setting.
For further information about how to set these variables, please refer to::
https://docs.gitlab.com/ee/ci/variables/#add-a-cicd-variable-to-a-project
Set variable manually when pushing a branch or tag to the user's repository
---------------------------------------------------------------------------
Variables can be set manually when pushing a branch or tag, using
git-push command line arguments.
Example setting the QEMU_CI_EXAMPLE_VAR variable:
.. code::
git push -o ci.variable="QEMU_CI_EXAMPLE_VAR=value" myrepo mybranch
For further information about how to set these variables, please refer to::
https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd
Here is a list of the most used variables:
QEMU_CI_AVOCADO_TESTING
~~~~~~~~~~~~~~~~~~~~~~~
By default, tests using the Avocado framework are not run automatically in
the pipelines (because multiple artifacts have to be downloaded, and if
these artifacts are not already cached, downloading them makes the jobs
reach the timeout limit). Set this variable to have the tests using the
Avocado framework run automatically.
Jobs on Custom Runners
======================
Besides the jobs run under the various CI systems listed before, there
are a number of additional jobs that will run before an actual merge.
These use the same GitLab CI service/framework already used for all
other GitLab-based CI jobs, but rely on additional systems, not the
ones provided by GitLab as "shared runners".
The architecture of GitLab's CI service allows different machines to
be set up with GitLab's "agent", called gitlab-runner, which will take
care of running jobs created by events such as a push to a branch.
Here, the combination of a machine, properly configured with GitLab's
gitlab-runner, is called a "custom runner".
The GitLab CI job definitions for the custom runners are located under::
.gitlab-ci.d/custom-runners.yml
Custom runners entail custom machines. To see a list of the machines
currently deployed in the QEMU GitLab CI and their maintainers, please
refer to the QEMU `wiki <https://wiki.qemu.org/AdminContacts>`__.
Machine Setup Howto
-------------------
For all Linux based systems, the setup can be mostly automated by the
execution of two Ansible playbooks. Create an ``inventory`` file
under ``scripts/ci/setup``, such as this::
fully.qualified.domain
other.machine.hostname
You may need to set some variables in the inventory file itself. One
very common need is to tell Ansible to use a Python 3 interpreter on
those hosts. This would look like::
fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
other.machine.hostname ansible_python_interpreter=/usr/bin/python3
Build environment
~~~~~~~~~~~~~~~~~
The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
set up machines with the environment needed to perform builds and run
QEMU tests. This playbook consists of the installation of various
required packages (and a general package update while at it). It
currently covers a number of different Linux distributions, but it can
be expanded to cover other systems.
The minimum required version of Ansible successfully tested in this
playbook is 2.8.0 (a version check is embedded within the playbook
itself). To run the playbook, execute::
cd scripts/ci/setup
ansible-playbook -i inventory build-environment.yml
Please note that most of the tasks in the playbook require superuser
privileges, such as those from the ``root`` account or those obtained
by ``sudo``. If necessary, please refer to ``ansible-playbook``
options such as ``--become``, ``--become-method``, ``--become-user``
and ``--ask-become-pass``.
gitlab-runner setup and registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The gitlab-runner agent needs to be installed on each machine that
will run jobs. The association between a machine and a GitLab project
happens with a registration token. To find the registration token for
your repository/project, navigate on GitLab's web UI to:
* Settings (the gears-like icon at the bottom of the left hand side
vertical toolbar), then
* CI/CD, then
* Runners, and click on the "Expand" button, then
* Under "Set up a specific Runner manually", look for the value under
"And this registration token:"
Copy the ``scripts/ci/setup/vars.yml.template`` file to
``scripts/ci/setup/vars.yml``. Then, set the
``gitlab_runner_registration_token`` variable to the value obtained
earlier.
To run the playbook, execute::
cd scripts/ci/setup
ansible-playbook -i inventory gitlab-runner.yml
Following the registration, it's necessary to configure the runner tags,
and optionally other configurations on the GitLab UI. Navigate to:
* Settings (the gears like icon), then
* CI/CD, then
* Runners, and click on the "Expand" button, then
* "Runners activated for this project", then
* Click on the "Edit" icon (next to the "Lock" Icon)
Tags are very important as they are used to route specific jobs to
specific types of runners, so it's a good idea to double check that
the automatically created tags are consistent with the OS and
architecture. For instance, an Ubuntu 20.04 aarch64 system should
have tags set as::
ubuntu_20.04,aarch64
Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
would contain::
ubuntu-20.04-aarch64-all:
tags:
- ubuntu_20.04
- aarch64
It's also recommended to:
* increase the "Maximum job timeout" to something like ``2h``
* give it a better Description
.. include:: ci-jobs.rst
.. include:: ci-runners.rst


@ -9,7 +9,7 @@ endif
# Check if tools are available to build documentation.
build_docs = false
if sphinx_build.found()
SPHINX_ARGS = ['env', 'CONFDIR=' + qemu_confdir, sphinx_build]
SPHINX_ARGS = ['env', 'CONFDIR=' + qemu_confdir, sphinx_build, '-q']
# If we're making warnings fatal, apply this to Sphinx runs as well
if get_option('werror')
SPHINX_ARGS += [ '-W' ]