
The goal of the libc crate is to have CI running everywhere so that we have the strongest possible guarantees about the definitions this library contains; as a result, the CI is pretty complicated and also pretty large! Hopefully this can serve as a guide through the sea of scripts in this directory and elsewhere in this project.

Note that this documentation is quite outdated. See the CI config and the scripts in the ci directory for how we run CI now.

Files

First up, let's talk about the files in this directory:

  • run-docker.sh - a shell script run by most builders; it executes run.sh inside a Docker container configured for the target (see the sketch after this list).

  • run.sh - the actual script which runs tests for a particular architecture.

  • dox.sh - build the documentation of the crate and publish it to gh-pages.
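
As a rough, hypothetical example of how these scripts fit together (this assumes the target triple is passed as the first argument; check the scripts themselves for the exact interface CI uses):

      # Illustrative only: from the repository root, run the full test cycle
      # for one target inside its Docker container. run-docker.sh prepares the
      # image for the target and then invokes run.sh inside it.
      sh ci/run-docker.sh aarch64-unknown-linux-gnu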

CI Systems

Currently this repository leverages a combination of Azure Pipelines and Cirrus CI for running tests. You can find the tested triples in the Azure Pipelines config or the Cirrus config.

The Windows triples are all pretty standard: they just set up their environment and then run tests, with no need to download any extra target libs (we just download the right installer). The Intel Linux/OSX builds are similar in that we just download the right target libs and run tests. Note that the Intel Linux/OSX builds run on stable, beta, and nightly; they are the only ones that do.

The remaining architectures look like:

  • Android runs in a docker image with an emulator, the NDK, and the SDK already set up. The entire build happens within the docker image.
  • The MIPS, ARM, and AArch64 builds all use the QEMU userspace emulator to run the generated binary to actually verify the tests pass.
  • The MUSL build just has to download a MUSL compiler and target libraries, and then otherwise runs tests normally (a rough sketch of that download step appears after this list).
  • iOS builds currently need an extra linker flag, but beyond that they're built the same way as everything else.
  • The rumprun target builds an entire kernel from the test suite and then runs it inside QEMU using the serial console to test whether it succeeded or failed.
  • The BSD builds, currently OpenBSD and FreeBSD, use QEMU to boot up a system and compile/run tests. More information on that below.
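
As a hypothetical sketch of the MUSL step above (the URL, version, and install prefix here are illustrative; install-musl.sh is the authoritative version):

      # Illustrative only: fetch and build musl, then install it into a local
      # prefix that the musl target's builds are pointed at.
      curl -LO https://musl.libc.org/releases/musl-1.1.24.tar.gz
      tar xzf musl-1.1.24.tar.gz
      cd musl-1.1.24
      ./configure --prefix=/musl-x86_64 --disable-shared
      make -j"$(nproc)" && make install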

QEMU

Lots of the architectures tested here use QEMU in the tests, so it's worth going over all the crazy capabilities QEMU has and the various flavors in which we use it!

First up, QEMU has userspace emulation, where it doesn't boot a full kernel but just runs a binary from another architecture (using the qemu-<arch> wrappers). We provide it with the runtime path of the dynamically linked system libraries for the target. This strategy is used for all Linux architectures that aren't Intel. Note that one downside of this QEMU mode is that threads are barely implemented, so we're careful to not spawn many threads.
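
A minimal sketch of that setup, assuming an AArch64 target and a cross toolchain sysroot under /usr/aarch64-linux-gnu (both illustrative; run.sh configures the real values):

      # Illustrative only: have Cargo wrap test binaries for this target in the
      # QEMU userspace emulator, with -L pointing at the sysroot so the dynamic
      # loader and shared libraries can be found.
      export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUNNER="qemu-aarch64 -L /usr/aarch64-linux-gnu"
      cargo test --target aarch64-unknown-linux-gnu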

For the rumprun target the only output is a kernel image, so we just use that plus the rumpbake command to create a full kernel image which is then run from within QEMU.
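
A heavily simplified sketch of that flow (the tool name varies between rumpbake and rumprun-bake depending on the toolchain version, and the bake configuration and QEMU flags here are only assumptions):

      # Illustrative only: bake the compiled test program into a bootable
      # rumprun unikernel image, then boot it under QEMU with the serial
      # console on stdout.
      rumprun-bake hw_virtio libc-test.img libc-test
      qemu-system-x86_64 -nographic -vga none -m 64 -kernel libc-test.img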

Finally, the fun part, the BSDs. Quite a few hoops are jumped through to get CI working for these platforms, but the gist of it looks like:

  • Cross compiling from Linux to any of the BSDs seems to be quite non-standard. We may be able to get it working but it might be difficult at that point to ensure that the libc definitions align with what you'd get on the BSD itself. As a result, we try to do compiles within the BSD distro.
  • We resort to userspace emulation (QEMU).

With all that in mind, the way BSD is tested looks like:

  1. Download a pre-prepared image for the OS being tested.
  2. Generate the tests for the OS being tested. This involves running the ctest library over libc to generate a Rust file and a C file which will then be compiled into the final test.
  3. Generate a disk image which will later be mounted by the OS being tested. This image is mostly just the libc directory, but some modifications are made to compile the generated files from step 2 (a rough sketch of steps 3-5 appears after this list).
  4. The kernel is booted in QEMU, and it is configured to detect the libc-test image being available, run the test script, and then shut down afterwards.
  5. Look for whether the tests passed in the serial console output of the kernel.
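
A rough sketch of steps 3-5, with illustrative tool choices and file names (the real scripts may use different images, sizes, and success markers):

      # Illustrative only: pack the pre-generated tests into an ext2 image,
      # attach it as a second virtio disk next to the prepared OS image, and
      # scan the serial console output for a success marker.
      genext2fs -d libc-test -b 65536 libc-test.img
      qemu-system-x86_64 -nographic -m 1024 \
        -drive if=virtio,file=FreeBSD-11.1-RELEASE-amd64.qcow2 \
        -drive if=virtio,file=libc-test.img | tee qemu-console.log
      grep -q PASSED qemu-console.log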

There are some pretty specific instructions for setting up each image (detailed below), but the main gist is that we must avoid a vanilla cargo run inside the libc-test directory (which is what it's intended for) because that would compile syntex_syntax, a large library, with userspace emulation. This invariably times out on CI, so we can't do that.

Once all those hoops are jumped through, however, we can be happy that we're testing almost everything!

Below are some details of how to set up the initial OS images which are downloaded. Each image must be set up to have input/output over the serial console, log in automatically at the serial console, detect whether a second drive is available in QEMU and, if so, mount it, run a script (specifically run-qemu.sh in this folder, which is copied into the generated image talked about above), and then shut down.

QEMU Setup - FreeBSD

  1. Download the latest stable amd64-bootonly release ISO. E.g. FreeBSD-11.1-RELEASE-amd64-bootonly.iso
  2. Create the disk image: qemu-img create -f qcow2 FreeBSD-11.1-RELEASE-amd64.qcow2 2G
  3. Boot the machine: qemu-system-x86_64 -cdrom FreeBSD-11.1-RELEASE-amd64-bootonly.iso -drive if=virtio,file=FreeBSD-11.1-RELEASE-amd64.qcow2 -net nic,model=virtio -net user
  4. Run the installer, and install FreeBSD:
    1. Install

    2. Continue with default keymap

    3. Set Hostname: freebsd-ci

    4. Distribution Select:

      1. Uncheck lib32
      2. Uncheck ports
    5. Network Configuration: vtnet0

    6. Configure IPv4? Yes

    7. DHCP? Yes

    8. Configure IPv6? No

    9. Resolver Configuration: Ok

    10. Mirror Selection: Main Site

    11. Partitioning: Auto (UFS)

    12. Partition: Entire Disk

    13. Partition Scheme: MBR

    14. App Partition: Ok

    15. Partition Editor: Finish

    16. Confirmation: Commit

    17. Wait for sets to install

    18. Set the root password to nothing (press enter twice)

    19. Set time zone to UTC

    20. Set Date: Skip

    21. Set Time: Skip

    22. System Configuration:

      1. Disable sshd
      2. Disable dumpdev
    23. System Hardening

      1. Disable Sendmail service
    24. Add User Accounts: No

    25. Final Configuration: Exit

    26. Manual Configuration: Yes

    27. echo 'console="comconsole"' >> /boot/loader.conf

    28. echo 'autoboot_delay="0"' >> /boot/loader.conf

    29. echo 'ext2fs_load="YES"' >> /boot/loader.conf

    30. Look at /etc/ttys and see what the getty argument is for ttyu0 (e.g. 3wire)

    31. Edit /etc/gettytab (with vi, for example), find the entry for the argument from the previous step, and prepend :al=root to the line beneath it to have the machine auto-login as root. E.g.

      3wire:\
               :np:nc:sp#0:
      

      becomes:

      3wire:\
               :al=root:np:nc:sp#0:
      
    32. Edit /root/.login and put this in it:

      [ -e /dev/vtbd1 ] || exit 0
      mount -t ext2fs /dev/vtbd1 /mnt
      sh /mnt/run.sh /mnt
      poweroff
      
    33. Exit the post install shell: exit

    34. Back in the installer, choose Reboot

    35. If all went well the machine should reboot and show a login prompt. If you switch to the serial console by choosing View > serial0 in the qemu menu, you should be logged in as root.

    36. Shut down the machine: shutdown -p now

QEMU setup - OpenBSD

  1. Download CD installer
  2. qemu-img create -f qcow2 foo.qcow2 2G
  3. qemu -cdrom foo.iso -drive if=virtio,file=foo.qcow2 -net nic,model=virtio -net user
  4. run installer
  5. echo 'set tty com0' >> /etc/boot.conf
  6. echo 'boot' >> /etc/boot.conf
  7. Modify /etc/ttys, change the tty00 at the end from 'unknown off' to 'vt220 on secure'
  8. Modify the same line in /etc/ttys to have "/root/foo.sh" as the shell
  9. Add this script to /root/foo.sh
      #!/bin/sh
      exec 1>/dev/tty00
      exec 2>&1

      if mount -t ext2fs /dev/sd1c /mnt; then
        sh /mnt/run.sh /mnt
        shutdown -ph now
      fi

      # limited shell...
      exec /bin/sh < /dev/tty00
  10. chmod +x /root/foo.sh

Questions?

Hopefully that's at least somewhat of an introduction to everything going on here, and feel free to ping @alexcrichton with questions!