Fixes #43881. Reduces AppVeyor test time back to ~2 hours on
average.
The i586 libstd was never tested before Aug 13th, so this PR
brings the situation back to the previous status quo.
Now that the final bug fixes have been merged into sccache we can start
leveraging sccache on the MSVC builders on AppVeyor instead of relying on the
ad-hoc caching strategy of trigger files and whatnot.
This commit sort of brings back #40777 by upgrading back to 6.3.0. While
investigating #40546 it was discovered that 6.3.0 appears not to fail
spuriously in the way that 6.2.0 (which we're currently using) does. The
workaround for #40184 contained in #40777 did not work, so this commit also
contains a different workaround for the gdb issue: we now download the
6.2.0 version of gdb and use that instead of the default version that comes
with 6.3.0.
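Concretely, the workaround amounts to an install step along these lines (the URL and install path here are placeholders for illustration, not the actual mirror):

```sh
# Fetch the gdb that shipped with the 6.2.0 MinGW toolchain and put it
# ahead of 6.3.0's bundled gdb on PATH. Placeholder URL; the real
# download location differs.
mkdir -p /c/gdb-6.2.0
curl -fL https://example.com/mingw-w64-gdb-6.2.0.tar.gz | tar xz -C /c/gdb-6.2.0
export PATH="/c/gdb-6.2.0/bin:$PATH"
```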
I'm going to optimistically say...
Closes #40546
LLVM 4.0 Upgrade
Since nobody has done this yet, I decided to get things started:
**Todo:**
* [x] push the relevant commits to `rust-lang/llvm` and `rust-lang/compiler-rt`
* [x] cleanup `.gitmodules`
* [x] Verify if there are any other commits from `rust-lang/llvm` which need backporting
* [x] Investigate / fix debuginfo ("`<optimized out>`") failures
* [x] Use correct emscripten version in docker image
---
Closes #37609.
---
**Test results:**
Everything is green 🎉
Re-enable appveyor cache
After breaking the queue last time, I'm cautiously back with a PR to re-enable caching on appveyor.
If you look at https://ci.appveyor.com/project/rust-lang/rust/build/1.0.2623/job/46o90by4ari6gege (one of the multiple runs that failed), there are actually two errors: one for restoring the cache, and one right at the bottom for creating a directory. I only noticed the restore error at the time, as I was a bit rushed to revert and didn't stop to wonder why the build continued; it turns out AppVeyor [does not abort on cache restore failure](https://github.com/appveyor/ci/issues/723).
It turns out the cause of the build failures was the cache directory already existing: I had assumed that because `mkdir` on Windows is [recursive by default](http://stackoverflow.com/a/905239/2352259), it would also ignore the error when the directory already exists. Apparently this is not true, so the script now checks whether the directory exists before attempting to create it.
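The guarded pattern, sketched in shell (the actual AppVeyor step is the cmd-style equivalent, and `$CACHE_DIR` is a hypothetical name):

```sh
# Only create the cache directory if it is not already there; unlike
# `mkdir -p`, a plain mkdir fails when the directory exists.
CACHE_DIR="build/cache"   # hypothetical path
if [ ! -d "$CACHE_DIR" ]; then
  mkdir "$CACHE_DIR"
fi
```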
In addition, I've added some more paranoia to double check everything is sane.
I've tracked down what I believe is the last spurious sccache failure on #40240
to behavior in mio (carllerche/mio#583), and this commit updates the binaries to
a version which has that fix incorporated.
Attempt to cache git modules
Partial resolution of #40772, appveyor remains to be done once travis looks like it's working ok.
The approach in this PR is based on the `--reference` flag to `git-clone`/`git-submodule --update` and is a compromise based on the current limitations of the tools we're using.
The ideal would be:
1. have a cached pristine copy of rust-lang/rust master in `$HOME/rustsrc` with all submodules initialised
2. clone the PR branch with `git clone --recurse-submodules --reference $HOME/rustsrc git@github.com:rust-lang/rust.git`
This would (in the nonexistent ideal world) use the pristine copy as an object cache for the top level repo and all submodules, transferring over the network only the changes on the branch. Unfortunately, a) there is no way to manually control the initial clone with travis and b) even if there was, cloned submodules don't use the submodules of the reference as an object cache. So the steps we end up with are:
1. have a cached pristine copy of rust-lang/rust master in `$HOME/rustsrc` with all submodules initialised
2. have a cloned PR branch
3. extract the path of each submodule, and explicitly `git submodule update --init --reference $HOME/rustsrc/$module $module` (i.e. point directly to the location of the pristine submodule repo) for each one
I've also taken some care to make this forward compatible, both for adding and removing submodules.
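A minimal sketch of step 3, assuming the pristine cache under `$HOME/rustsrc` mirrors the submodule layout and that the module paths can be read out of `.gitmodules`:

```sh
# For each submodule path listed in .gitmodules, point --reference at the
# corresponding pristine repository so objects are fetched locally rather
# than over the network.
for module in $(git config --file .gitmodules --get-regexp path | awk '{print $2}'); do
  git submodule update --init --reference "$HOME/rustsrc/$module" "$module"
done
```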
r? @alexcrichton
It looks like the 6.3.0 MinGW comes with a gdb which has issues (#40184) that an
attempted workaround (#40777) does not actually fix (#40835). The original
motivation for upgrading MinGW was to fix build flakiness (#40546), since newer
builds did not exhibit the same bug, so let's hope that 6.2.0 isn't too far back
in time and still contains the fix we need.
Closes #40835
appveyor: Upgrade MinGW toolchains we use
In debugging #40546 I was finally able to reproduce the failure locally using
the exact toolchain that the bots were using; the error reproduced in maybe 4
out of 10 builds. I also have the 6.3.0 toolchain installed through `pacman`,
which has yet to produce a failed build.
When attempting to reproduce the bug with the toolchain that this commit
switches to I was unable to reproduce anything after a few builds. I have no
idea what the original problem was, but I'm hoping that it was just some random
bug fixed somewhere along the way.
I don't currently know of a technical reason to stick to the 4.9.2 toolchains we
were previously using. Historical 5.3.* toolchains would cause LLVM to segfault
(maybe a miscompile?), but this seems to have been fixed recently. To me, if it
passes CI then I think we're good.
Closes #40546
This was recently implemented (appveyor/ci#1387) in response to one of our
feature requests, so let's take advantage of it! I'm going to optimistically
say...
Closes #39074
I have a suspicion that MinGW's make is the cause of #40546 rather than anything
else, but that's purely a suspicion without any facts to back it up. In any case
we'll eventually be moving the MSVC build over to Ninja in order to leverage
sccache regardless, so this commit simply jumpstarts that process by downloading
Ninja for use by MinGW anyway.
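The install step is roughly the following (the release version is pinned here just for illustration; the actual download location may differ):

```sh
# Fetch a prebuilt Ninja binary and put it on PATH so the MinGW build can
# drive the LLVM build with Ninja instead of mingw32-make.
curl -fLO https://github.com/ninja-build/ninja/releases/download/v1.7.2/ninja-win.zip
unzip -o ninja-win.zip -d "$PWD/ninja"
export PATH="$PWD/ninja:$PATH"
```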
I'm not sure if this closes #40546 for real, but this is my current best shot at
closing it out, so...
Closes #40546
Now that mozilla/sccache#43 is fixed the caching works for MinGW on Windows. We
still can't use it for MSVC just yet, but I'll try to revive that branch at some
point.
This commit makes more of our network operations retryable. For example, git
submodule updates, snapshot downloads, etc. are now all wrapped in retryable
steps.
Hopefully this commit can cut down on the number of network failures we've been
seeing!
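The wrapper is essentially a small retry loop; a sketch of the idea (the attempt count and delay are illustrative, not the exact values used):

```sh
# Run a command up to three times, pausing between attempts, and fail only
# if every attempt fails.
retry() {
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "command failed (attempt $attempt), retrying..." >&2
    sleep 5
  done
  return 1
}

retry git submodule update --init
retry curl -fO https://example.com/snapshot.tar.gz   # placeholder URL
```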
Currently CI builds can fail spuriously during the LLVM build (#39003). I
believe this is due to sccache, and I believe that in turn was due to the fact
that the sccache server used to just be a raw mio server. Historically raw mio
servers are quite complicated to get right, but this is why we built Tokio! The
sccache server has been migrated to Tokio which I suspect would fix any latent
issues.
I have no confirmation of this (never been able to reproduce the deadlock
locally), but my hunch is that updating sccache to the master branch will fix
the timeouts during the LLVM build.
The binaries previously came from Gecko's infrastructure, but I've built new
ones by hand for Win/Mac/Linux and uploaded them to our CI bucket.
In the long run we want to separate out the dist builders from the test
builders. This provides us leeway to expand the dist builders with more tools
(e.g. Cargo and the RLS) without impacting cycle times.
Currently the Travis dist builders double-up the platforms they provide builds
for, so I figured we could try that out for MSVC as well. This commit adds a new
AppVeyor builder which runs a dist for all the MSVC targets:
* x86_64-pc-windows-msvc
* i686-pc-windows-msvc
* i586-pc-windows-msvc
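In rustbuild terms the builder boils down to one dist invocation over all three triples; a sketch, with the exact flags assumed rather than taken from the real appveyor.yml (which drives this through the src/ci scripts):

```sh
# One builder producing dist artifacts for every MSVC target; i586 is a
# target-only triple, so it only appears in --target.
python x.py dist \
  --host x86_64-pc-windows-msvc,i686-pc-windows-msvc \
  --target x86_64-pc-windows-msvc,i686-pc-windows-msvc,i586-pc-windows-msvc
```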
If this takes too long and/or times out we'll need to split this up. In any case
we're going to need more capacity from AppVeyor no matter what, because the two
pc-windows-gnu targets can't cross-compile, so we need at least two more
builders.
travis: Add builders without assertions
This commit adds three new builders, one OSX, one Linux, and one MSVC, which
will produce "nightlies" with LLVM assertions disabled. Currently all nightly
releases have LLVM assertions enabled to catch bugs before they reach the
beta/stable channels. The beta/stable channels, however, do not have LLVM
assertions enabled.
Unfortunately, projects like Servo are stuck on nightlies for at least the near
future and are also suffering from very long compile times. The purpose of
this commit is to provide artifacts to these projects which are not distributed
through the normal channels (e.g. rustup) but are available for developers to
use locally if need be.
Logistically these builds will all be uploaded to `rustc-builds-alt` instead of
the `rustc-builds` folder of the `rust-lang-ci` bucket. These builds will stay
there forever (until cleaned out if necessary) and there are no plans to
integrate this with rustup and/or the official release process.
Previously we only uploaded tarballs, but this modifies Travis/AppVeyor to
upload everything. We shouldn't have anything else in there to worry about, and
otherwise we would need to be careful to pick up the pkg/msi/exe installers
individually.
This commit adds a new flag to the configure script,
`--enable-extended`, which is intended for specifying a desire to
compile the full suite of Rust tools such as Cargo, the RLS, etc. This
is also an indication that the build system should create combined
installers such as the pkg/exe/msi artifacts.
Currently the `--enable-extended` flag just indicates that combined
installers should be built, and Cargo is itself not compiled just yet
but rather only downloaded from its location. The intention here is to
quickly get to feature parity with the current release process and then
we can start improving it afterwards.
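In terms of usage, something like the following should produce the combined installers (a sketch; the exact invocation may vary):

```sh
# Opt in to the full tool suite and combined installers, then build and
# package. Cargo is currently downloaded rather than built from source.
./configure --enable-extended
make && make dist
```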
All new files in this PR inside `src/etc/installer` are copied from the
rust-packaging repository.
travis: Stop uploading sha256 files
We'll generate these later in the build process and otherwise they could just
cause spurious failures with files overwriting one another.
cc #38531