I don't believe these test cases have served any purpose in years.
The shootout benchmarks are now upstreamed. A new benchmark suite
should instead be maintained out of tree.
The cross prefix likely did not point at the compiler that actually needed to be
used; the standard `arm-linux-gnueabihf-gcc` compiler can simply be used with
`-march=armv7`.
Unfortunately, older clang compilers don't support this argument, so the
bootstrap will fail. We don't actually need to optimize the C code we
compile, however, as currently we're just compiling jemalloc and not much else.
This adds support for the armv7 crosstool-ng toolchain for the Raspberry Pi 2.
Getting the toolchain ready:
Check out crosstool-ng from https://github.com/crosstool-ng/crosstool-ng
Build crosstool-ng
Configure the rpi2 target with `ct-ng armv7-rpi2-linux-gnueabihf`
Build the toolchain with `ct-build` and add the path to $toolchain_install_dir/bin to your $PATH
Then, on the rust side:
configure --target=armv7-rpi2-linux-gnueabihf && make && make install
To cross compile for the rpi2,
add $rust_install_path/lib to your $LD_LIBRARY_PATH, then use
rustc --target=armv7-rpi2-linux-gnueabihf -C linker=armv7-rpi2-linux-gnueabihf-g++ hello.rs
The purpose of the translation item collector is to find all monomorphic instances of functions, methods and statics that need to be translated into LLVM IR in order to compile the current crate.
So far these instances have been discovered lazily during the trans path. For incremental compilation we want to know the set of these instances in advance, and that is what the trans::collect module provides.
In the future, incremental and regular translation will be driven by the collector implemented here.
r? @nikomatsakis
cc @rust-lang/compiler
Translation Item Collection
===========================
This module is responsible for discovering all items that will contribute to
code generation of the crate. The important part here is that it not only
needs to find syntax-level items (functions, structs, etc) but also all
their monomorphized instantiations. Every non-generic, non-const function
maps to one LLVM artifact. Every generic function can produce
from zero to N artifacts, depending on the sets of type arguments it
is instantiated with.
This also applies to generic items from other crates: A generic definition
in crate X might produce monomorphizations that are compiled into crate Y.
We also have to collect these here.
The following kinds of "translation items" are handled here:
- Functions
- Methods
- Closures
- Statics
- Drop glue
The following things also result in LLVM artifacts, but are not collected
here, since we instantiate them locally on demand when needed in a given
codegen unit:
- Constants
- Vtables
- Object Shims
General Algorithm
-----------------
Let's define some terms first:
- A "translation item" is something that results in a function or global in
the LLVM IR of a codegen unit. Translation items do not stand on their
own, they can reference other translation items. For example, if function
`foo()` calls function `bar()` then the translation item for `foo()`
references the translation item for function `bar()`. In general, the
definition for translation item A referencing a translation item B is that
the LLVM artifact produced for A references the LLVM artifact produced
for B.
- Translation items and the references between them form a directed graph,
where the translation items are the nodes and references form the edges.
Let's call this graph the "translation item graph".
- The translation item graph for a program contains all translation items
that are needed in order to produce the complete LLVM IR of the program.
The purpose of the algorithm implemented in this module is to build the
translation item graph for the current crate. It runs in two phases:
1. Discover the roots of the graph by traversing the HIR of the crate.
2. Starting from the roots, find neighboring nodes by inspecting the MIR
representation of the item corresponding to a given node, until no more
new nodes are found.
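A minimal sketch of this worklist traversal, using placeholder types rather than
the compiler's actual `TransItem` representation, looks like this:

```rust
use std::collections::HashSet;
use std::hash::Hash;

/// Generic worklist traversal: starting from `roots`, keep asking
/// `neighbors_of` for the items an item references until nothing new
/// turns up. The real collector uses monomorphic translation items as
/// `I` and derives the neighbors from each item's MIR.
fn collect_items<I, F>(roots: Vec<I>, neighbors_of: F) -> HashSet<I>
    where I: Clone + Eq + Hash,
          F: Fn(&I) -> Vec<I>
{
    let mut visited = HashSet::new();
    let mut worklist = roots;

    while let Some(item) = worklist.pop() {
        if !visited.insert(item.clone()) {
            continue; // already collected
        }
        for neighbor in neighbors_of(&item) {
            if !visited.contains(&neighbor) {
                worklist.push(neighbor);
            }
        }
    }
    visited
}

fn main() {
    // Toy graph: item 0 is the only root; every item n references n + 1 up to 3.
    let items = collect_items(vec![0u32], |&n| if n < 3 { vec![n + 1] } else { vec![] });
    assert_eq!(items.len(), 4);
}
```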
The roots of the translation item graph correspond to the non-generic
syntactic items in the source code. We find them by walking the HIR of the
crate, and whenever we hit upon a function, method, or static item, we
create a translation item consisting of the item's DefId and, since we only
consider non-generic items, an empty type-substitution set.
Given a translation item node, we can discover neighbors by inspecting its
MIR. We walk the MIR and any time we hit upon something that signifies a
reference to another translation item, we have found a neighbor. Since the
translation item we are currently at is always monomorphic, we also know the
concrete type arguments of its neighbors, and so all neighbors again will be
monomorphic. The specific forms a reference to a neighboring node can take
in MIR are quite diverse. Here is an overview:
The most obvious form of one translation item referencing another is a
function or method call (represented by a CALL terminator in MIR). But
calls are not the only thing that might introduce a reference between two
function translation items. As we will see below, they are just a special
case of the form described next, and consequently don't get any
special treatment in the algorithm.
A function does not need to actually be called in order to be a neighbor of
another function. It suffices to just take a reference in order to introduce
an edge. Consider the following example:
```rust
use std::fmt::Display;

fn print_val<T: Display>(x: T) {
    println!("{}", x);
}

fn call_fn(f: &Fn(i32), x: i32) {
    f(x);
}

fn main() {
    let print_i32 = print_val::<i32>;
    call_fn(&print_i32, 0);
}
```
The MIR of none of these functions will contain an explicit call to
`print_val::<i32>`. Nonetheless, in order to translate this program, we need
an instance of this function. Thus, whenever we encounter a function or
method in operand position, we treat it as a neighbor of the current
translation item. Calls are just a special case of that.
In a way, closures are a simple case. Since every closure object needs to be
constructed somewhere, we can reliably discover them by observing
`RValue::Aggregate` expressions with `AggregateKind::Closure`. This is also
true for closures inlined from other crates.
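As a small source-level illustration (not taken from the collector itself):

```rust
fn main() {
    let offset = 10;
    // Constructing this closure shows up in `main`'s MIR as an
    // `Rvalue::Aggregate` with `AggregateKind::Closure`, which is how the
    // collector learns that a translation item for the closure is needed.
    let shifted: Vec<i32> = (0..5).map(|x| x + offset).collect();
    println!("{:?}", shifted);
}
```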
Drop glue translation items are introduced by MIR drop-statements. The
generated translation item will again have drop-glue item neighbors if the
type to be dropped contains nested values that also need to be dropped. It
might also have a function item neighbor for the explicit `Drop::drop`
implementation of its type.
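A source-level illustration of how drop glue fans out into neighboring items:

```rust
struct Logger {
    name: String,
}

impl Drop for Logger {
    fn drop(&mut self) {
        println!("dropping {}", self.name);
    }
}

fn main() {
    // When `logger` goes out of scope, MIR contains a drop statement for it.
    // The drop-glue translation item for `Logger` references the explicit
    // `<Logger as Drop>::drop` implementation as a neighbor, as well as the
    // drop glue for the nested `String` field.
    let logger = Logger { name: String::from("example") };
    println!("created {}", logger.name);
}
```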
A subtle way of introducing neighbor edges is by casting to a trait object.
Since the resulting fat-pointer contains a reference to a vtable, we need to
instantiate all object-safe methods of the trait, as we need to store
pointers to these functions even if they never get called anywhere. This can
be seen as a special case of taking a function reference.
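Again as a source-level illustration: the casts below force the `Display`
implementations of `i32` and `f64` to be instantiated, because pointers to their
`fmt` methods must be stored in the respective vtables:

```rust
use std::fmt::Display;

fn describe(x: &Display) {
    // `x` is a fat pointer; its vtable has to contain a pointer to the
    // concrete `fmt` method even if that method were never called here.
    println!("{}", x);
}

fn main() {
    describe(&1i32);   // casting &i32 to &Display instantiates <i32 as Display>
    describe(&2.5f64); // likewise for f64
}
```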
Since `Box` expressions have special compiler support, no explicit calls to
`exchange_malloc()` and `exchange_free()` may show up in MIR, even if the
compiler will generate them. We have to observe `Rvalue::Box` expressions
and Box-typed drop-statements for that purpose.
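A source-level illustration of the `Box` case:

```rust
fn main() {
    // Neither `exchange_malloc` nor `exchange_free` appears in the source,
    // yet allocating this Box and dropping it at the end of the scope
    // requires both, so the collector has to watch for the box allocation
    // and the Box-typed drop rather than for explicit calls.
    let boxed = Box::new([0u8; 64]);
    println!("first byte: {}", boxed[0]);
}
```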
Interaction with Cross-Crate Inlining
-------------------------------------
The binary of a crate will not only contain machine code for the items
defined in the source code of that crate. It will also contain monomorphic
instantiations of any extern generic functions and of functions marked with
`#[inline]`.
The collection algorithm handles this more or less transparently. When
constructing a neighbor node for an item, the algorithm will always call
`inline::get_local_instance()` before proceeding. If no local instance can
be acquired (e.g. for a function that is just linked to) no node is created;
which is exactly what we want, since no machine code should be generated in
the current crate for such an item. On the other hand, if we can
successfully inline the function, we subsequently can just treat it like a
local item, walking its MIR et cetera.
Eager and Lazy Collection Mode
------------------------------
Translation item collection can be performed in one of two modes:
- Lazy mode means that items will only be instantiated when actually
referenced. The goal is to produce the least amount of machine code
possible.
- Eager mode is meant to be used in conjunction with incremental compilation
where a stable set of translation items is more important than a minimal
one. Thus, eager mode will instantiate drop-glue for every drop-able type
in the crate, even if no drop call for that type exists (yet). It will
also instantiate default implementations of trait methods, something that
otherwise is only done on demand.
Open Issues
-----------
Some things are not yet fully implemented in the current version of this
module.
Since no MIR is constructed yet for initializer expressions of constants and
statics we cannot inspect these properly.
Ideally, no translation item should be generated for const fns unless there
is a call to them that cannot be evaluated at compile time. At the moment
this is not implemented however: a translation item will be produced
regardless of whether it is actually needed or not.
Unfortunately, older clang compilers don't support this argument, so the
bootstrap will fail. We don't actually need to optimize the C code we
compile, however, as currently we're just compiling jemalloc and not much else.
The purpose of the translation item collector is to find all monomorphic instances of functions, methods and statics that need to be translated into LLVM IR in order to compile the current crate.
So far these instances have been discovered lazily during the trans path. For incremental compilation we want to know the set of these instances in advance, and that is what the trans::collect module provides.
In the future, incremental and regular translation will be driven by the collector implemented here.
This commit removes the `-D warnings` flag being passed through the makefiles to
all crates to instead be a crate attribute. We want these attributes always
applied for all our standard builds, and this is more amenable to Cargo-based
builds as well.
Note that all `deny(warnings)` attributes are gated with a `cfg(stage0)`
attribute currently to match the same semantics we have today
this makes sure the checks run before typeck (which might use the constant or const
function to calculate an array length) and gives prettier error messages in case of for
loops and such (since they aren't expanded yet).
fixes #30887
r? @pnkfelix
This aligns with Unicode recommendations and should be stable for all future
Unicode releases. See http://unicode.org/reports/tr31/#R3.
This renames `libsyntax::lexer::is_whitespace` to `is_pattern_whitespace`
so potentially breaks users of libsyntax.
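For reference, a small sketch of the Pattern_White_Space set that UAX #31 (rule R3)
specifies; the character list below is taken from the Unicode report, and the
function is only an illustration, not the libsyntax implementation:

```rust
/// True for characters with the Unicode `Pattern_White_Space` property
/// (UAX #31, R3); this set is intended to be stable across Unicode versions.
fn is_pattern_whitespace(c: char) -> bool {
    match c {
        '\u{0009}' | '\u{000A}' | '\u{000B}' | '\u{000C}' | '\u{000D}' | // tab, LF, VT, FF, CR
        '\u{0020}' |              // space
        '\u{0085}' |              // next line (NEL)
        '\u{200E}' | '\u{200F}' | // left-to-right / right-to-left marks
        '\u{2028}' | '\u{2029}'   // line / paragraph separator
            => true,
        _ => false,
    }
}

fn main() {
    assert!(is_pattern_whitespace(' '));
    assert!(!is_pattern_whitespace('\u{00A0}')); // no-break space is excluded
}
```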
this makes sure the checks run before typeck (which might use the constant or const
function to calculate an array length) and gives prettier error messages in case of for
loops and such (since they aren't expanded yet).
Use arena allocation instead of reference counting for `Module`s to fix memory leaks from `Rc` cycles.
A module references its module children and its import resolutions, and an import resolution references the module defining the imported name, so there is a cycle whenever a module imports something from an ancestor module.
For example,
```rust
mod foo { // `foo` references `bar`.
    fn baz() {}
    mod bar { // `bar` references the import.
        use foo::baz; // The import references `foo`.
    }
}
```
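A minimal sketch of the arena-based alternative (using the `typed-arena` crate here
purely for illustration; the actual change uses the compiler's own arenas): modules
borrow from an arena that outlives them all, so back-references need no reference
counting and cannot leak.

```rust
// Illustrative only: uses the `typed-arena` crate as a stand-in for the
// compiler's own arenas.
extern crate typed_arena;

use std::cell::RefCell;
use typed_arena::Arena;

struct Module<'a> {
    name: String,
    // Plain borrows instead of `Rc<Module>`: cycles between modules are
    // harmless because nothing is reference counted -- every module simply
    // lives as long as the arena.
    children: RefCell<Vec<&'a Module<'a>>>,
    imports_from: RefCell<Vec<&'a Module<'a>>>,
}

fn main() {
    let arena = Arena::new();
    let foo: &Module = arena.alloc(Module {
        name: "foo".to_string(),
        children: RefCell::new(Vec::new()),
        imports_from: RefCell::new(Vec::new()),
    });
    let bar: &Module = arena.alloc(Module {
        name: "bar".to_string(),
        children: RefCell::new(Vec::new()),
        imports_from: RefCell::new(Vec::new()),
    });
    foo.children.borrow_mut().push(bar);
    bar.imports_from.borrow_mut().push(foo); // the cycle that leaked with `Rc`
    println!("{} -> {}", foo.name, foo.children.borrow()[0].name);
}
```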
I also re-enabled the use of `#[thread_local]` on AArch64. It was originally disabled in the PR that introduced AArch64 (#19790), but the reasons for this were not explained. `#[thread_local]` seems to work fine in my tests on AArch64, so I don't think this should be an issue.
cc @alexcrichton @akiss77
This mixes additional information into the hash that is
passed to -C extra-filename. It can be used to further distinguish
the standard libraries if they must be installed next to each
other.
Closes #29559
Frankly, I'm not sure if this solves a real problem. It's meant to help with side-by-side and overlapping installations where there are two sets of libs in /usr, but there are other potential issues there as well, including that some of our artifacts don't use this extra-filename munging, and it's not something our installers can support at all.
cc @jauhien Do you still think this helps the Gentoo case?
Since `darwin` is really `apple-darwin`, the valgrind-rpass tests were not actually being run with valgrind on mac before. Also, the `HOST` check was completely wrong.
r? @alexcrichton
This mixes additional information into the hash that is
passed to -C extra-filename. It can be used to further distinguish
the standard libraries if they must be installed next to each
other.
Closes #29559
It's been a while since we last updated jemalloc, and there are likely some bugs
that have been fixed since the last version we're using, so let's try to update
again.
this PR reverts previous ones that tried to make `cc` find `estdc++` in `/usr/local/lib`. They caused more trouble than they resolved: rustc becomes unbuildable if another version already exists in `/usr/local` (for example, `libstd-xxxx.so` is found both in `/usr/local/lib` and in the builddir).
so this PR tries another way to achieve the build: use the right linker. By default, rustc uses `cc` for linking. But under OpenBSD, `cc` is gcc 4.2.1 from base, whereas we build with gcc 4.9 from ports. By linking using the compiler found at compile time, we ensure that the compiler will find its own stdc++ library without trouble.
r? @alexcrichton
By default, rustc uses `cc` as the linker. Under OpenBSD, `cc` is gcc version 4.2.1.
So use the compiler found at configure time for linking: it will be gcc 4.9.
This resolves the problem of finding -lestdc++ or -lgcc: for the base gcc (4.2) they are in a non-standard path, whereas for the ports gcc (4.9) they are in a standard path.
It looks like #27937 accidentally switched the llvmdeps file from the target to
the host, so be sure to use the right llvmdeps file (the one built for the
target) when building rustc_llvm.
This handles cases where the LLVM used isn't configured with the 'usual' targets. Also, cases where LLVM is shared are handled (i.e. with `LD_LIBRARY_PATH` etc).
With this commit, metadata encoding and decoding can make use of thread-local encoding and decoding contexts. These allow implementers of `serialize::Encodable` and `Decodable` to access information and
data structures that would otherwise not be available to them. For example, we can automatically translate def-id and span information during decoding because the decoding context knows which crate the data is decoded from. It also allows `ty::Ty` to be made decodable because the context has access to the `ty::ctxt` that is needed for creating `ty::Ty` instances.
Some notes:
- `tls::with_encoding_context()` and `tls::with_decoding_context()` (as opposed to their unsafe versions) try to prevent the TLS data getting out-of-sync by making sure that the encoder/decoder passed in is actually the same as the one stored in the context. This should prevent accidentally reading from the wrong decoder.
- There are no real tests in this PR. I had a unit test for some of the core aspects of the TLS implementation, but it was kind of brittle, required a lot of code for mocking `ty::ctxt`, `crate_metadata`, etc., and didn't actually test much. The code will soon be tested by the first incremental compilation auto-tests that rely on MIR being properly serialized. However, if people think that some tests should be added before this can land, I'll try to provide some that make sense.
r? @nikomatsakis
With this commit, metadata encoding and decoding can make use of
thread-local encoding and decoding contexts. These allow implementers
of serialize::Encodable and Decodable to access information and
data structures that would otherwise not be available to them. For
example, we can automatically translate def-id and span information
during decoding because the decoding context knows which crate the
data is decoded from. It also allows ty::Ty to be made decodable because
the context has access to the ty::ctxt that is needed for creating
ty::Ty instances.
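The underlying mechanism is ordinary scoped thread-local state. A minimal
standalone sketch of the pattern (an illustration of the idea, not the rustc API):

```rust
use std::cell::RefCell;

// A stand-in for the real decoding context, which in rustc knows things like
// which crate the data comes from and has access to the ty::ctxt.
struct DecodingContext {
    crate_name: String,
}

thread_local! {
    static CONTEXT: RefCell<Option<DecodingContext>> = RefCell::new(None);
}

// Run `f` with a decoding context installed; anything that decodes inside the
// closure can reach the context without it being threaded through every
// Decodable impl's signature. (A real implementation would restore the
// previous value even on panic.)
fn with_decoding_context<R, F: FnOnce() -> R>(cx: DecodingContext, f: F) -> R {
    CONTEXT.with(|slot| *slot.borrow_mut() = Some(cx));
    let result = f();
    CONTEXT.with(|slot| *slot.borrow_mut() = None);
    result
}

// A decoder can now translate crate-local data (here just a toy string) by
// consulting the installed context.
fn decode_def_path(raw: &str) -> String {
    CONTEXT.with(|slot| {
        let slot = slot.borrow();
        let cx = slot.as_ref().expect("no decoding context installed");
        format!("{}::{}", cx.crate_name, raw)
    })
}

fn main() {
    let path = with_decoding_context(
        DecodingContext { crate_name: "other_crate".to_string() },
        || decode_def_path("foo::bar"),
    );
    assert_eq!(path, "other_crate::foo::bar");
}
```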
The `rsbegin.o` and `rsend.o` build products should not be generated
on non WinGnu platforms.
This is another path to resolving #30063 for non win-gnu targets.
(And it won't require a snapshot, unlike PR #30208.)
r? @alexcrichton
In #29932, I moved the location of TRPL, but I missed making the changes
in mk/tests.mk. This led to #30088 landing with a broken example.
As such, #30113 will need to land before this.
Previously the file was not regenerated upon modification of `src/rustllvm` or others.
Now it will be rebuilt if `src/llvm` or `src/rustllvm` is touched.
Also added a *.rs rule to the 'clean' rule so that the generated file is removed upon
'make clean'.
Fixes #28614.
Debian wants to build all binaries with particular hardening flags.
The Rust makefiles are inconsistent in which architectures they
correctly include CFLAGS/etc from the environment (see mk/cfg/*).
This patch adds LDFLAGS, and then unconditionally prepends
CFLAGS/LDFLAGS/etc to the build commands.
The book was located under 'src/doc/trpl' because originally, it was
going to be hosted under that URL. Late in the game, before 1.0, we
decided that /book was a better one, so we changed the output, but
not the input. This causes confusion for no good reason. So we'll change
the source directory to look like the output directory, like for every
other thing in src/doc.
Rather than modifying the installer to disable directory rewriting,
this patch modifies the directory structure passed to the installer so
that the rewriting gives the correct results. This means that if a
non-standard --libdir is passed to configure then the same --libdir
option (relative to the --prefix) must be passed to the install
script. In the `make install` case this is handled automatically.
Binary distributions are generally generated using the default
--libdir and then have paths optionally rewritten by the installer,
which should continue to work.
This has the advantage of not complicating the installer interface
intended for end-user use.
Fixes #29561
On distros that use i486 or i586 in their CHOST, Rust will fail to build
because it is not handling i486 or i586 like i686 is handled. This
changes the match to work for all instances of i?86 instead of just
i686. The Yocto Project still uses i586 as a target.
Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
under openbsd, the library path of libstdc++ needs to be explicit (due
to the fact the default linker `cc` is gcc-4.2, and not gcc-4.9).
but when a recent LLVM is installed, rustc compilation picks the wrong
LLVM version (which lives in /usr/local/lib, the same directory as
libestdc++.so for gcc-4.9).
this patch moves the libstdc++ path from RUST_FLAGS_<target> to a special
variable, and uses it *after* LLVM_LIBDIR_RUSTFLAGS_<target> in the
arguments.
r? @alexcrichton
Rather than modifying the installer to disable directory rewriting,
this patch modifies the directory structure passed to the installer so
that the rewriting gives the correct results. This means that if a
non-standard --libdir is passed to configure then the same --libdir
option (relative to the --prefix) must be passed to the install
script. In the `make install` case this is handled automatically.
Binary distributions are generally generated using the default
--libdir and then have paths optionally rewritten by the installer,
which should continue to work.
This has the advantage of not complicating the installer interface
intended for end-user use.
Fixes #29561
On distros that use i486 or i586 in their CHOST, Rust will fail to build
because it is not handling i486 or i586 like i686 is handled. This
changes the match to work for all instances of i?86 instead of just
i686. The Yocto Project still uses i586 as a target.
Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
I noticed the nomicon was not listed!
I also removed links to racer and rustfmt since they were not *doc-specific* links, just links to tools, and pointed the cargo link directly at the docs.
Removed all the community stuff. There are lots of other places to find this now, including the website.
With pending website changes this page will continue to be pared back, reflecting only what's in-tree, not general Rust docs.
r? @steveklabnik
This should get `--libdir` working as well as it was a couple of weeks ago. (That is, it still rewrites paths incorrectly but it no longer fails during `make install`.)
Fixes gentoo/gentoo-rust#28 and gentoo/gentoo-rust#29.
This is to handle the case where CFG_LIBDIR is not a direct child of
CFG_PREFIX (in other words, where CFG_LIBDIR_RELATIVE has more than
one component).
Emacs warns that makefile lines that start with spaces followed by
tabs are "suspicious". These were harmless since they were
continuation lines, but getting rid of the warning is nice and this
version looks better.
The important one is $(MAKE). make handles recipes containing the
literal string "$(MAKE)" specially, so it is important to make sure it
isn't evaluated until recipe invocation time.
under openbsd, the library path of libstdc++ needs to be explicit (due
to the fact the default linker `cc` is gcc-4.2, and not gcc-4.9).
but when a recent LLVM is installed, rustc compilation picks the wrong
LLVM version (which lives in /usr/local/lib, the same directory as
libestdc++.so for gcc-4.9).
this patch moves the libstdc++ path from RUST_FLAGS_<target> to a special
variable, and uses it *after* LLVM_LIBDIR_RUSTFLAGS_<target> in the
arguments.
Quite a bit of cruft in the valgrind suppressions. I started from a clean slate and found a few unique failures; this commit also moves the tests "fixed" by these suppressions into run-pass-valgrind.
* Delete `sys::unix::{c, sync}` as these are now all folded into libc itself
* Update all references to use `libc` as a result.
* Update all references to the new flat namespace.
* Move all windows bindings into sys::c
According to a recent [discussion on IRC](https://botbot.me/mozilla/rust-tools/2015-10-27/?msg=52887517&page=2), there's no good reason for Windows builds to store target libraries under `bin`, when on every other platform they are under `lib`.
This might be a [breaking-change] for some users. I am pretty sure VisualRust has that path hard-coded somewhere.
r? @brson
Note: for now, this change only affects `-windows-gnu` builds.
So why was this `libgcc` dylib dependency needed in the first place?
The stack unwinder needs to know the locations of the unwind tables of all the modules loaded in the current process. The easiest portable way of achieving this is to have each module register itself with the unwinder when loaded into the process. All modules compiled by GCC do this by calling __register_frame_info() in their startup code (that's `crtbegin.o` and `crtend.o`, which are automatically linked into any gcc output).
Another important piece is that there should be only one copy of the unwinder (and thus unwind tables registry) in the process. This pretty much means that the unwinder must be in a shared library (unless everything is statically linked).
Now, Rust compiler tries very hard to make sure that any given Rust crate appears in the final output just once. So if we link the unwinder statically to one of Rust's crates, everything should be fine.
Unfortunately, GCC startup objects are built under assumption that `libgcc` is the one true place for the unwind info registry, so I couldn't find any better way than to replace them. So out go `crtbegin`/`crtend`, in come `rsbegin`/`rsend`!
A side benefit of this change is that rustc is now more in control of the command line that goes to the linker, so we could stop using `gcc` as the linker driver and just invoke `ld` directly.
Previously the file was not regenerated upon modification of src/rustllvm or others.
Now it will be rebuilt if `src/llvm` or `src/rustllvm` is touched.
Also added *.rs rule to 'clean' rule so that it is removed upon 'make
clean'.
It looks like the target libs aren't actually the same across hosts, so instead
of always packaging the target libs from CFG_BUILD, take the target libs from the
host if we have them, and only failing that take them from CFG_BUILD.
Closes #29228
This commit splits out the standard library from the current 'rustc' package
into a new 'rust-std' package. This is the basis for the work on easily
packaging compilers that can cross-compile to new targets.
Since it isn't possible to disable linkage of just the GCC startup objects, we now need logic for finding the libc installation directory and copying the required startup files (e.g. crt2.o) to the rustlib directory.
Bonus change: use the `-nodefaultlibs` flag on Windows, thus paving the way to direct linker invocation.
* Don't pass `-mno-compact-eh`, apparently not all compilers have this?
* Don't pass `+o32`, apparently LLVM doesn't recognize this
* Use `mipsel-linux-gnu` as a prefix instead of `mipsel-unknown-linux-gnu`; this
matches the Ubuntu package at least!
This commit splits out the standard library from the current 'rustc' package
into a new 'rust-std' package. This is the basis for the work on easily
packaging compilers that can cross-compile to new targets.
For the most part, rumprun currently looks like NetBSD, as they share the same
libc and drivers. However, being a unikernel, rumprun does not support
process management, signals or virtual memory, so related functions
might fail at runtime. Stack guards are disabled for exactly this reason.
Code for rumprun is always cross-compiled; it always uses static
linking and needs a custom linker.
We don't actually probe for javac in all circumstances, so if you have
javac installed, but don't have antlr4 installed, and you're on Mac OS
X, then you'll get a message that javac is missing, even though that's
wrong.
To fix this, let's just be a bit more generic in the message, so that
it's the same no matter what part of the lexer tests you're missing.
cc https://www.reddit.com/r/rust/comments/3m199d/running_make_check_on_the_source_code_says_javac/
These changes introduce the ability to cross-compile working binaries for NetBSD/amd64. Previous support added in PR #26682 shared all its code with the OpenBSD implementation, and was therefore never functional (e.g. linking against non-existing symbols and using wrong type definitions). Nonetheless, the previous patches were a great starting point and made my work significantly easier. 😃
Because there are no stage0 snapshots for NetBSD (yet), I used a cross-compiler for NetBSD 7.0 RC3 and only tested some toy programs (threading and channels, stack guards, a small TCP/IP echo server and some other platform dependent bits). If someone could point me to documentation on how to generate a stage0 snapshot from a cross-compiler I'm happy to run the full test suite.
A few other notes regarding Rust on NetBSD/amd64:
- To preserve binary compatibility, NetBSD introduces new symbols for system call wrappers on breaking ABI changes and keeps the old (legacy) symbols around, see [this documentation](https://www.netbsd.org/docs/internals/en/chap-processes.html#syscalls_master) for some details. I went ahead and modified the `libc` and `std` crate to use the current (renamed) symbols instead of the legacy ones where I found them, but I might have missed some. Notably using the `sigaction` symbol (deprecated in 1998) instead of `__sigaction14` even triggers SIGSYS (bad syscall) on my amd64 setup. I also changed the type definitions to use the most recent version.
- NetBSD's gdb doesn't really support position independent executables, so you might want to turn that off for debugging, see [NetBSD Problem Report #48250](https://gnats.netbsd.org/48250).
- For binaries invoked using a relative path, NetBSD supports `$ORIGIN` only for short `rpath`s (~64 chars or so, I'm told). If running an executable fails with `execname not specified in AUX vector: No such file or directory`, consider invoking the binary using its full absolute path.
By default, the linker in use under OpenBSD is the linker from base, which
doesn't include /usr/local/lib, where the libstdc++ of gcc-4.9 lives. We need
to add this directory to the linker search path (using -L).
Search for libstdc++.a, which has a known name (libstdc++.so has an
SO_VERSION suffix), in the same directory.
it makes rustc compatible with gcc installations that use the
`--program-transform-name` configure flag (on OpenBSD for example).
- detect at configure time the name of the stdc++ library on the system
- use the detected name in the llvm makefile (with enable-static-stdcpp),
and pass it to mklldeps.py
- generate mklldeps.rs using this detected name
note that CFG_STDCPP_NAME is about the stdc++ name, not about libc++. If
using libc++, the default name will be `stdc++`, but it won't be used
when linking.
Fix formatting
Remove unused imports
Refactor
Fix msvc build
Fix line lengths
Formatting
Enable backtrace tests
Fix using directive on mac
pwd info
Work-around buildbot PWD bug, and fix libbacktrace configuration
Use alternative to `env -u` which is not supported on bitrig
Disable tests on 32-bit windows gnu
Because 'doc' is a directory, when running `make doc`, you'll see
this:
make: Nothing to be done for `doc'.
By adding a target for `doc` to build `docs`, both work.
Fixes#14705
This fixes the case where we try to re-build & re-install rust to the
same prefix (without uninstalling) while using an llvm-root that is the
same as the prefix.
Without this, builds like that fail with:
'error: multiple dylib candidates for `std` found'
See https://github.com/jmesmon/meta-rust/issues/6 for some details.
May also be related to #20342.
This commit is an implementation of [RFC 1183][rfc] which allows swapping out
the default allocator on nightly Rust. No new stable surface area should be
added as a part of this commit.
[rfc]: https://github.com/rust-lang/rfcs/pull/1183
Two new attributes have been added to the compiler:
* `#![needs_allocator]` - this is used by liballoc (and likely only liballoc) to
indicate that it requires an allocator crate to be in scope.
* `#![allocator]` - this is an indicator that the crate is an allocator which can
satisfy the `needs_allocator` attribute above.
The ABI of the allocator crate is defined to be a set of symbols that implement
the standard Rust allocation/deallocation functions. The symbols are not
currently checked for exhaustiveness or typechecked. There are also a number of
restrictions on these crates:
* An allocator crate cannot transitively depend on a crate that is flagged as
needing an allocator (e.g. allocator crates can't depend on liballoc).
* There can only be one explicitly linked allocator in a final image.
* If no allocator is explicitly requested one will be injected on behalf of the
compiler. Binaries and Rust dylibs will use jemalloc by default where
available and staticlibs/other dylibs will use the system allocator by
default.
Two allocators are provided by the distribution by default, `alloc_system` and
`alloc_jemalloc`, which operate as advertised.
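As a rough sketch of what such an allocator crate looks like on a nightly of that
era (the symbol set follows RFC 1183; the exact attribute, feature, and symbol
names here should be treated as illustrative rather than normative):

```rust
#![feature(allocator)] // nightly-only feature gate, assumed name
#![allocator]
#![no_std]
#![crate_type = "rlib"]

// An allocator crate must not depend on liballoc, so it is written against
// libcore only and forwards to an external allocator (libc malloc/free here).
extern crate libc;

// Symbols named after RFC 1183 (__rust_allocate and friends); the compiler
// matches them by name and does not typecheck them.
#[no_mangle]
pub extern fn __rust_allocate(size: usize, _align: usize) -> *mut u8 {
    unsafe { libc::malloc(size as libc::size_t) as *mut u8 }
}

#[no_mangle]
pub extern fn __rust_deallocate(ptr: *mut u8, _old_size: usize, _align: usize) {
    unsafe { libc::free(ptr as *mut libc::c_void) }
}

#[no_mangle]
pub extern fn __rust_reallocate(ptr: *mut u8, _old_size: usize, size: usize,
                                _align: usize) -> *mut u8 {
    unsafe { libc::realloc(ptr as *mut libc::c_void, size as libc::size_t) as *mut u8 }
}

#[no_mangle]
pub extern fn __rust_usable_size(size: usize, _align: usize) -> usize {
    size
}
```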
Closes #27389
New enough find on Linux doesn't support "-perm +..." and suggests
using "-perm /..." instead, but that doesn't work on Windows.
Hopefully all platforms are happy with this expanded version.
I don't have access to a Windows development system to test this, so someone needs to verify that this actually works there before merging.
Closes #19981.
Per @steveklabnik's comment [here](https://github.com/rust-lang/cargo/issues/739#issuecomment-130085860), the Pandoc components of the Makefile are no longer used, and as such the corresponding components of the documentation are out of date.
- I've removed the Pandoc (and therefore also LaTeX) elements of the makefile and confirmed that the build proceeds correctly.
- I updated the documentation to reference `rustdoc` instead of Pandoc.
r? @steveklabnik
Pretty-printed files sometimes start with #![some_feature], which
looks like a shebang line and confuses Windows builds into thinking
they are executables.
This commit leverages the runtime support for DWARF exception info added
in #27210 to enable unwinding by default on 64-bit MSVC. This also additionally
adds a few minor fixes here and there in the test harness and such to get
`make check` entirely passing on 64-bit MSVC:
* The invocation of `maketest.py` now works with spaces/quotes in CC
* debuginfo tests are disabled on MSVC
* A link error for librustc was hacked around (see #27438)
I have no idea how bors keeps working without this - I can only assume it's some peculiarity of how windows searches for DLLs.
Without this change, running `make check` on windows will not correctly set PATH to include eg. `x86_64-pc-windows-gnu\stage1\bin\rustlib\x86_64-pc-windows-gnu\lib`, and when it tries to run eg. `stage1/test/stdtest-x86_64-pc-windows-gnu.exe`, it will fail because windows can't find the DLLs on which it relies.
It seems to be just a mistake: when the equivalent was added for the branch that deals with unix-like platforms, the windows branch was left unchanged.
This means that we no longer need to ship the `llvm-ar.exe` binary in the MSVC
distribution, and after a snapshot we can remove a good bit of logic from the
makefiles!
Rust's current compilation model makes it impossible on Windows to generate one
object file with a complete and final set of dllexport annotations. This is
because when an object is generated the compiler doesn't actually know if it
will later be included in a dynamic library or not. The compiler works around
this today by flagging *everything* as dllexport, but this has the drawback of
exposing too much.
Thankfully there are alternate methods of specifying the exported surface area
of a dll on Windows, one of which is passing a `*.def` file to the linker which
lists all public symbols of the dynamic library. This commit removes all
locations that add `dllexport` to LLVM variables and instead dynamically
generates a `*.def` file which is passed to the linker. This file will include
all the public symbols of the current object file as well as all upstream
libraries, and the crucial aspect is that it's only used when generating a
dynamic library. When generating an executable this file isn't generated, so all
the symbols aren't exported from an executable.
To ensure that statically included native libraries are reexported correctly,
the previously added support for the `#[linked_from]` attribute is used to
determine the set of FFI symbols that are exported from a dynamic library, and
this is required to get the compiler to link correctly.
This means that we no longer need to ship the `llvm-ar.exe` binary in the MSVC
distribution, and after a snapshot we can remove a good bit of logic from the
makefiles!
This commit removes all morestack support from the compiler which entails:
* Segmented stacks are no longer emitted in codegen.
* We no longer build or distribute libmorestack.a
* The `stack_exhausted` lang item is no longer required
The only current use of the segmented stack support in LLVM is to detect stack
overflow. This is no longer really required, however, because we already have
guard pages for all threads and registered signal handlers watching for a
segfault on those pages (to print out a stack overflow message). Additionally,
major platforms (aka Windows) already don't use morestack.
This means that Rust is by default less likely to catch stack overflows because
if a function takes up more than one page of stack space it won't hit the guard
page. This is what the purpose of morestack was (to catch this case), but it's
better served with stack probes which have more cross platform support and no
runtime support necessary. Until LLVM supports this for all platforms it looks
like morestack isn't really buying us much.
cc #16012 (still need stack probes)
Closes #26458 (a drive-by fix to help diagnostics on stack overflow)
New enough find on Linux doesn't support "-perm +..." and suggests
using "-perm /..." instead, but that doesn't work on Windows.
Hopefully all platforms are happy with this expanded version.
As there’s no C++ runtime any more there’s really no point in having anything but Rust tags being made.
I’ve also taken the liberty of excluding the compiler parts of this in the `librust%,,` pattern substitution. Whether or not this is “correct” will depend on whether you want tags for the compiler or for general use. For myself, I want it for general use.
I’m not sure how much people use the tags files anyway. I definitely do, but with Racer existing the tags files aren’t quite so necessary.
I've been baking this out of tree for long enough. This is currently about ~2/5ths the size of TRPL. Time to get it in tree so it can be more widely maintained and scrutinized. I've preserved the whole gruesome history including various rewrites. I can definitely squash these a fair amount if desired. Some random people submitted minor fixes though, so they're mixed in.
Edit: forgot to link to rendered http://cglab.ca/~abeinges/blah/turpl/_book/
Edit2:
To streamline the review process, I'm going to break this into sections that need official "domain expert" approval:
# Summary
* [ ] references.md -- very important, needs work
* [x] Meet Safe and Unsafe: reviewed by @aturon
* [x] Data Layout: reviewed by @arielb1
* [x] Ownership: reviewed by @aturon ( and sorta @nikomatsakis ) -- significantly updated, may need re-r
* [x] Conversions: reviewed by @nrc
* [x] Uninitialized Memory: reviewed by @pnkfelix
* [x] Ownership-Oriented Resource Management: reviewed by @aturon
* [x] Unwinding: reviewed by @alexcrichton
* [x] Concurrency: reviewed by @aturon
* [x] Implementing Vec: r? @huonw
As there’s no C++ runtime any more there’s really no point in having
anything but Rust tags being made.
I’ve also taken the liberty of excluding the compiler parts of this in
the `librust%,,` pattern substitution. Whether or not this is “correct”
will depend on whether you want tags for the compiler or for general
use. For myself, I want it for general use.
I’m not sure how much people use the tags files anyway. I definitely do,
but with Racer existing the tags files aren’t quite so necessary.
This commit moves the IR files in the distribution, rust_try.ll,
rust_try_msvc_64.ll, and rust_try_msvc_32.ll into the compiler from the main
distribution. There's a few reasons for this change:
* LLVM changes its IR syntax from time to time, so it's very difficult to
have these files build across many LLVM versions simultaneously. We'll likely
want to retain this ability for quite some time into the future.
* The implementation of these files is closely tied to the compiler and runtime
itself, so it makes sense to fold it into a location which can do more
platform-specific checks for various implementation details (such as MSVC 32
vs 64-bit).
* This removes LLVM as a build-time dependency of the standard library. This may
end up becoming very useful if we move towards building the standard library
with Cargo.
In the immediate future, however, this commit should restore compatibility with
LLVM 3.5 and 3.6.
We have previously always relied upon an external tool, `ar`, to modify archives
that the compiler produces (staticlibs, rlibs, etc). This approach, however, has
a number of downsides:
* Spawning a process is relatively expensive for small compilations
* Encoding arguments across process boundaries often incurs unnecessary overhead
or lossiness. For example `ar` has a tough time dealing with files that have
the same name in archives, and the compiler copies many files around to ensure
they can be passed to `ar` in a reasonable fashion.
* Most `ar` programs found do **not** have the ability to target arbitrary
platforms, so this is an extra tool which needs to be found/specified when
cross compiling.
The LLVM project has had a tool called `llvm-ar` for quite some time now, but it
wasn't available in the standard LLVM libraries (it was just a standalone
program). Recently, however, in LLVM 3.7, this functionality has been moved to a
library and is now accessible by consumers of LLVM via the `writeArchive`
function.
This commit migrates our archive bindings to no longer invoke `ar` by default
but instead make a library call to LLVM to do various operations. This solves
all of the downsides listed above:
* Archive management is now much faster, for example creating a "hello world"
staticlib is now 6x faster (50ms => 8ms). Linking dynamic libraries also
recently started requiring modification of rlibs, and linking a hello world
dynamic library is now 2x faster.
* The compiler is now one step closer to "hassle free" cross compilation because
no external tool is needed for managing archives, LLVM does the right thing!
This commit does not remove support for calling a system `ar` utility currently.
We will continue to maintain compatibility with LLVM 3.5 and 3.6 looking forward
(so the system LLVM can be used wherever possible), and in these cases we must
shell out to a system utility. All nightly builds of Rust, however, will stop
needing a system `ar`.
This PR was originally going to be a "let's start running tests on MSVC" PR, but it didn't quite get to that point. It instead gets us ~80% of the way there! The steps taken in this PR are:
* Landing pads are turned on by default for 64-bit MSVC. The LLVM support is "good enough" with the caveat that destructor glue is now marked noinline. This was recommended [on the associated bug](https://llvm.org/bugs/show_bug.cgi?id=23884) as a stopgap until LLVM has a better representation for exception handling in MSVC. The consequence of this is that MSVC will have a bit of a perf hit, but there are possible routes we can take if this workaround sticks around for too long.
* The linker (`link.exe`) is now looked up in the Windows Registry if it's not otherwise available in the environment. This improves using the compiler outside of a VS shell (e.g. in a MSYS shell or in a vanilla cmd.exe shell). This also makes cross compiles via Cargo "just work" when crossing between 32 and 64 bit!
* TLS destructors were fixed to start running on MSVC (they previously weren't running at all)
* A few assorted `run-pass` tests were fixed.
* The dependency on the `rust_builtin` library was removed entirely for MSVC to try to prevent any `cl.exe` compiled objects get into the standard library. This should help us later remove any dependence on the CRT by the standard library.
* I re-added `rust_try_msvc_32.ll` for 32-bit MSVC and ensured that landing pads were turned off by default there as well.
Despite landing pads being enabled, there are still *many* failing tests on MSVC. The two major classes I've identified so far are:
* Spurious aborts. It appears that when optimizations are enabled that landing pads aren't always lined up properly, and sometimes an exception being thrown can't find the catch block down the stack, causing the program to abort. I've been working to reduce this test case but haven't been met with great success just yet.
* Parallel codegen does not work on MSVC. Our current strategy is to take the N object files emitted by the N codegen threads and use `ld -r` to assemble them into *one* object file. The MSVC linker, however, does not have this ability, and this will need to be rearchitected to work on MSVC.
I will fix parallel codegen in a future PR, and I'll also be watching LLVM closely to see if the aborts... disappear!
This commit alters the compiler to no longer "just run link.exe" but instead
probe the system's registry to find where the linker is located. The default
library search path (normally found through LIB) is also found through the
registry. This also brings us in line with the default behavior of Clang, and
much of the logic of where to look for information is copied over from Clang as
well. Finally, this commit removes the makefile logic for updating the
environment variables for the compiler, except for stage0 where it's still
necessary.
The motivation for this change is rooted in two positions:
* Not having to set up these environment variables is much less hassle both for
the bootstrap and for running the compiler itself. This means that the
compiler can be run outside of VS shells and be run inside of cmd.exe or a
MSYS shell.
* When dealing with cross compilation, there's not actually a set of environment
variables that can be set for the compiler. This means, for example, if a
Cargo compilation is targeting 32-bit from 64-bit you can't actually set up
one set of environment variables. Having the compiler deal with the logic
instead is generally much more convenient!
This commit modifies the configure script and our makefiles to support building
32-bit MSVC targets. The MSVC toolchain is now parameterized over whether it can
produce a 32-bit or 64-bit binary. The configure script was updated to export
more variables at configure time, and the makefiles were rejiggered to
selectively reexport the relevant environment variables for the applicable
targets they're going to run for.
Now that LLVM has been updated, the only remaining roadblock to implementing
unwinding for MSVC is to fill out the runtime support in `std::rt::unwind::seh`.
This commit does precisely that, fixing up some other bits and pieces along the
way:
* The `seh` unwinding module now uses `RaiseException` to initiate a panic.
* The `rust_try.ll` file was rewritten for MSVC (as it's quite different) and is
located at `rust_try_msvc_64.ll`, only included on MSVC builds for now.
* The personality function for all landing pads generated by LLVM is hard-wired
to `__C_specific_handler` instead of the standard `rust_eh_personality` lang
item. This is required to get LLVM to emit SEH unwinding information instead
of DWARF unwinding information. This also means that on MSVC the
`rust_eh_personality` function is entirely unused (but is defined as it's a
lang item).
More details about how panicking works on SEH can be found in the
`rust_try_msvc_64.ll` or `seh.rs` files, but I'm always open to adding more
comments!
A key aspect of this PR is missing, however, which is that **unwinding is still
turned off by default for MSVC**. There is a [bug in llvm][llvm-bug] which
causes optimizations to inline enough landing pads that LLVM chokes. If the
compiler is optimized at `-O1` (where inlining isn't enabled) then it can
bootstrap with unwinding enabled, but when optimized at `-O2` (inlining is
enabled) then it hits a fatal LLVM error.
[llvm-bug]: https://llvm.org/bugs/show_bug.cgi?id=23884
In #26252 support was added to have prettier paths printed out on failure by not
passing the full path to the source file to the compiler, but instead just a
small relative path. To preserve this relative path across configurations, the
`SREL` variable was used for reconfiguring, but if `SREL` is empty then it will
attempt to run the command `configure` which is distinct from running
`./configure` (e.g. doesn't run the local script).
This commit modifies the `SREL` value to re-run the configure script by setting
it to `./` in the case where `SREL` is empty.
musl only creates rlib files for stdlib linking so we need to ignore the `CFG_LIB_GLOB_` setting, otherwise we get an error:
```
$ make --debug VERBOSE=1 dist-tar-bins
[...]
Successfully remade target file `prepare-target-x86_64-unknown-linux-gnu-host-x86_64-unknown-linux-gnu-2-dir-x86_64-unknown-linux-gnu'.
File `prepare-target-x86_64-unknown-linux-musl-host-x86_64-unknown-linux-gnu-2-dir-x86_64-unknown-linux-gnu' does not exist.
Must remake target `prepare-target-x86_64-unknown-linux-musl-host-x86_64-unknown-linux-gnu-2-dir-x86_64-unknown-linux-gnu'.
umask 022 && mkdir -p tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknown-linux-musl/lib
umask 022 && mkdir -p tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknown-linux-gnu/bin
LIB_NAME="liblibc-d8ace771.rlib"; MATCHES=""; if [ -n "$MATCHES" ]; then echo "warning: one or libraries matching Rust library 'liblibc-*.rlib'" && echo " (other than '$LIB_NAME' itself) alre
ady present" && echo " at destination tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknown-linux-musl/lib:" && echo $MATCHES ; fi
install -m644 `ls -drt1 x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-musl/lib/liblibc-*.rlib` tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unk
nown-linux-musl/lib/
LIB_NAME=""; MATCHES=""; if [ -n "$MATCHES" ]; then echo "warning: one or libraries matching Rust library 'libstd-*.so'" && echo " (other than '$LIB_NAME' itself) already present" && echo
" at destination tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknown-linux-musl/lib:" && echo $MATCHES ; fi
install -m644 `ls -drt1 x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-musl/lib/libstd-*.so` tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknow
n-linux-musl/lib/
ls: cannot access x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-musl/lib/libstd-*.so: No such file or directory
install: missing destination file operand after ‘tmp/dist/rustc-1.2.0-dev-x86_64-unknown-linux-gnu-image/lib/rustlib/x86_64-unknown-linux-musl/lib/’
Try 'install --help' for more information.
make: *** [prepare-target-x86_64-unknown-linux-musl-host-x86_64-unknown-linux-gnu-2-dir-x86_64-unknown-linux-gnu] Error 1
```
`CFG_INSTALL_ONLY_RLIB_` is provided for this reason and fixes `make install` and `make dist`.
mk: Build crates with relative source file paths
The path we pass to rustc will be visible in panic messages and
backtraces: they will be user visible!
Avoid junk in these paths by passing relative paths to rustc.
For most advanced users, `libcore` or `libstd` in the path will be
a clue to the location -- inside our code, not theirs.
Store both the relative path to the source as well as the absolute.
Use the relative path where it matters, compiling the main crates,
instead of changing all of the build process to cope with relative
paths.
Example output after this patch:
```
$ ./testunwrap
thread '<main>' panicked at 'called `Option::unwrap()` on a `None` value', ../src/libcore/option.rs:362
$ RUST_BACKTRACE=1 ./testunwrap
thread '<main>' panicked at 'called `Option::unwrap()` on a `None` value', ../src/libcore/option.rs:362
stack backtrace:
1: 0x7ff59c1e9956 - sys::backtrace::write::h67a542fd2b201576des
at ../src/libstd/sys/unix/backtrace.rs:158
2: 0x7ff59c1ed5b6 - panicking::on_panic::h3d21c41cdd5c12d41Xw
at ../src/libstd/panicking.rs:58
3: 0x7ff59c1e7b6e - rt::unwind::begin_unwind_inner::h9f3a5440cebb8baeLDw
at ../src/libstd/rt/unwind/mod.rs:273
4: 0x7ff59c1e7f84 - rt::unwind::begin_unwind_fmt::h4fe8a903e0c296b0RCw
at ../src/libstd/rt/unwind/mod.rs:212
5: 0x7ff59c1eced7 - rust_begin_unwind
6: 0x7ff59c22c11a - panicking::panic_fmt::h00b0cd49c98a9220i5B
at ../src/libcore/panicking.rs:64
7: 0x7ff59c22b9e0 - panicking::panic::hf549420c0ee03339P3B
at ../src/libcore/panicking.rs:45
8: 0x7ff59c1e621d - option::Option<T>::unwrap::h501963526474862829
9: 0x7ff59c1e61b1 - main::hb5c91ce92347d1e6eaa
10: 0x7ff59c1f1c18 - rust_try_inner
11: 0x7ff59c1f1c05 - rust_try
12: 0x7ff59c1ef374 - rt::lang_start::h7e51e19c6677cffe5Sw
at ../src/libstd/rt/unwind/mod.rs:147
at ../src/libstd/rt/unwind/mod.rs:130
at ../src/libstd/rt/mod.rs:128
13: 0x7ff59c1e628e - main
14: 0x7ff59b3f6b44 - __libc_start_main
15: 0x7ff59c1e6078 - <unknown>
16: 0x0 - <unknown>
```
Right now the distribution tarball for MSVC only includes the *.dll files for
the supporting libraries, but not the corresponding *.lib files which allow
actually linking to the dll. This means that the current MSVC nightlies cannot
produce dynamically linked binaries as the *.lib files are not available to link
against.
This commit modifies the `LIB_GLOB` used to copy the files around to include the
`lib` variant of the `dll`.
On MSVC there are two ways that the CRT can be linked, either statically or
dynamically. Each object file produced by the compiler is compiled against
msvcrt (a dll) or libcmt (a static library). When the linker is dealing with
more than one object file, it requires that all object files link to the same
CRT, or else the linker will spit out some errors.
For now, compile code with `-MD` as it seems to appear more often in C libraries
so we'll stick with the same trend.
GDB and LLDB pretty printers have some common functionality and also access some common information, such as the layout of standard library types. So far, this information has been duplicated in the two pretty printing python modules. This PR introduces a common module used by both debuggers.
This PR also implements proper rendering of `String` and `&str` values in LLDB.
Now that MSVC support has landed in the most recent nightlies, we can have
MSVC bootstrap itself without going through a GNU compiler first. Unfortunately,
the bootstrap currently fails because the compiler cannot find the llvm-ar.exe
tool during the stage0 libcore compile: it is looking inside a directory that
does not exist:
$SYSROOT/rustlib/x86_64-pc-windows-gnu/bin
The `gnu` on this triple is because the bootstrap compiler's host architecture
is GNU. The build system, however, only arranges for the llvm-ar.exe tool to be
available in this location:
$SYSROOT/rustlib/x86_64-pc-windows-msvc/bin
To resolve this discrepancy, the build system has been modified to understand
triples that are bootstrapped from another triple, and in this case copy the
native tools to the right location.
This commit adds a ./configure option called `--disable-elf-tls` which disables
ELF-based TLS (the form that is communicated to LLVM) on platforms which already
support it. OSX 10.6 does not support this form of TLS, and some users of Rust
need to target 10.6 but are unable to do so due to the usage of TLS. The
standard library will continue to use ELF-based TLS on OSX by default (as the
officially supported platform is 10.7+), but this adds an option to compile the
standard library in a way that is compatible with 10.6.
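As a rough sketch of how such an option can be threaded through to the library
(the `no_elf_tls` cfg name and the functions below are illustrative, not the
actual std internals):
```
// Hypothetical: configuring with --disable-elf-tls would pass an extra
// `--cfg no_elf_tls` while building the standard library, selecting the
// portable fallback implementation.

#[cfg(not(no_elf_tls))]
mod tls_imp {
    // Fast path: #[thread_local] statics are lowered by LLVM to ELF-style
    // TLS, which OSX only supports starting with 10.7.
    pub fn describe() -> &'static str { "ELF TLS" }
}

#[cfg(no_elf_tls)]
mod tls_imp {
    // Fallback: OS-provided keys (pthread_key_create and friends), which
    // also work on OSX 10.6.
    pub fn describe() -> &'static str { "pthread-key TLS" }
}

fn main() {
    println!("thread-local storage strategy: {}", tls_imp::describe());
}
```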
Closes #25342
GDB and LLDB pretty printers have some common functionality
and also access some common information, such as the layout of
standard library types. So far, this information has been
duplicated in the two pretty printing python modules. This
commit introduces a common module used by both debuggers.
The changes scaled back in 4cc025d8 were a little too aggressive and broke a
bunch of cross compilations by not defining the `LINK_$(1)` variable for all
targets. This commit ensures that the variable is defined for all targets by
defaulting it to the normal compiler if it's not already defined (it's only
defined specially for MSVC).
Closes #25723
Closes #25802
The current codegen tests only compare IR line counts between similar
Rust and C programs, the latter compiled with clang. That looked
like a good idea back then, but things like lifetime intrinsics
mean that less IR isn't always better, so the metric isn't really
helpful.
Instead, we can start doing tests that check specific aspects of the
generated IR, like attributes or metadata. To do that, we can use LLVM's
FileCheck tool which has a number of useful features for such tests.
To start off, I created some tests for a few things that were recently
added and/or broken.
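To give a flavor of the new style, here is a sketch of such a test (the
function and the CHECK lines are made up for illustration, not one of the tests
added here):
```
// A FileCheck-style codegen test: rustc emits LLVM IR for this file and
// LLVM's FileCheck verifies the CHECK comments against that IR.

#![crate_type = "lib"]

// CHECK-LABEL: define{{.*}}@no_mangle_keeps_name
#[no_mangle]
pub fn no_mangle_keeps_name(x: &u64) -> u64 {
    // CHECK: load
    *x
}
```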
The changes scaled back in 4cc025d8 were a little too aggressive and broke a
bunch of cross compilations by not defining the `LINK_$(1)` variable for all
targets. This commit ensures that the variable is defined for all targets by
defaulting it to the normal compiler if it's not already defined (it's only
defined specially for MSVC).
Closes #25723
The recent MSVC patch made the build system pass explicit linkers to
rustc, but did not set that up for anything other than MSVC.
This is blocking nightlies.
The install target depends on compiler-docs but 'all' does not.
This means that running 'make && make install' will run additional
doc builds and tests during installation, which hides bugs in
the build.
For now this just unconditionally stops building compiler docs.
Special thanks to @retep998 for the [excellent writeup](https://github.com/rust-lang/rfcs/issues/1061) of tasks to be done and @ricky26 for initially blazing the trail here!
# MSVC Support
The goal of this series of commits is to add MSVC support to the Rust compiler
and build system, allowing it to interoperate more easily with Visual Studio
installations and native libraries compiled outside of MinGW.
The tl;dr of this change is that there is a new target for the compiler,
`x86_64-pc-windows-msvc`, which will not interact with the MinGW toolchain at
all and will instead use `link.exe` to assemble output artifacts.
## Why try to use MSVC?
With today's Rust distribution, when you install a compiler on Windows you also
install `gcc.exe` and a number of supporting libraries by default (this can be
opted out of). This allows installations to remain independent of MinGW
installations, but it still generally requires native code to be linked with
MinGW instead of MSVC. Some more background can also be found in #1768 about the
incompatibilities between MinGW and MSVC.
Overall the current installation strategy is quite nice so long as you don't
interact with native code, but once you do the usage of a MinGW-based `gcc.exe`
starts to get quite painful.
Relying on a nonstandard Windows toolchain has also been a long-standing "code
smell" of Rust and has been slated for remedy for quite some time now. Using a
standard toolchain is a great motivational factor for improving the
interoperability of Rust code with the native system.
## What does it mean to use MSVC?
"Using MSVC" can be a bit of a nebulous concept, but this PR defines it as:
* The build system for Rust will build as much code as possible with the MSVC
compiler, `cl.exe`.
* The build system will use native MSVC tools for managing archives.
* The compiler will link all output with `link.exe` instead of `gcc.exe`.
None of these are implemented today, but all are required for the
compiler to fluently interoperate with MSVC.
## How does this all work?
At the highest level, this PR adds a new target triple to the Rust compiler:
x86_64-pc-windows-msvc
All logic for using MSVC or not is scoped within this triple and code can
conditionally build for MSVC or MinGW via:
#[cfg(target_env = "msvc")]
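For example, an illustrative snippet (not code from this PR) that picks a value
per environment:
```
// Pick a linker name based on the environment being targeted.
#[cfg(target_env = "msvc")]
const LINKER: &'static str = "link.exe";

#[cfg(all(windows, not(target_env = "msvc")))]
const LINKER: &'static str = "gcc.exe";

#[cfg(not(windows))]
const LINKER: &'static str = "cc";

fn main() {
    println!("this target links with: {}", LINKER);
}
```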
It is expected that auto builders will be set up for MSVC-based compiles in
addition to the existing MinGW-based compiles, and we will likely soon start
shipping MSVC nightlies where `x86_64-pc-windows-msvc` is the host target triple
of the compiler.
# Summary of changes
Here I'll explain at a high level what many of the changes were targeted
at, but many more details can be found in the commits themselves. Many thanks to
@retep998 for the excellent writeup in rust-lang/rfcs#1061 and @ricky26 for a lot
of the initial proof-of-concept work!
## Build system changes
As is probably expected, a large chunk of this PR is changes to Rust's build
system to build with MSVC. At a high level **it is an explicit non-goal** to
enable building outside of a MinGW shell; instead, all Makefile infrastructure we
have today is retrofitted with support to use MSVC instead of the standard MinGW
toolchain. Some of the high-level changes are:
* The configure script now detects when MSVC is being targeted and adds a number
of additional requirements about the build environment:
* The `--msvc-root` option must be specified or `cl.exe` must be in PATH to
discover where MSVC is installed. The compiler in use is also required to
target x86_64.
* Once the MSVC root is known, the INCLUDE/LIB environment variables are
scraped so they can be reexported by the build system.
* CMake is required to build LLVM with MSVC (and LLVM is also configured with
CMake instead of the normal configure script).
* jemalloc is currently unconditionally disabled for MSVC targets as jemalloc
isn't a hard requirement and I don't know how to build it with MSVC.
* Invocations of a C and/or C++ compiler are now abstracted behind macros to
appropriately call the underlying compiler with the correct format of
arguments; for example, there is now a macro for "assemble an archive from
objects" instead of hard-coded invocations of `$(AR) crus liboutput.a ...`
* The output filenames for standard libraries such as morestack/compiler-rt are
now "more correct" on Windows as they are shipped as `foo.lib` instead of
`libfoo.a`.
* Rust targets can now depend on native tools provided by LLVM, and as you'll
see in the commits the entire MSVC target depends on `llvm-ar.exe`.
* Support for custom arbitrary makefile dependencies of Rust targets has been
added. The MSVC target for `rustc_llvm` currently requires a custom `.DEF`
file to be passed to the linker to get further linkages to complete.
## Compiler changes
The modifications made to the compiler have so far largely been minor tweaks
here and there, mostly just adding a layer of abstraction over whether MSVC or a
GNU-like linker is being used. At a high-level these changes are:
* The section name for metadata storage in dynamic libraries is called `.rustc`
for MSVC-based platforms as section names cannot contain more than 8
characters.
* The implementation of `rustc_back::Archive` was refactored, but the
functionality has remained the same.
* Targets can now specify the default `ar` utility to use, and for MSVC this
defaults to `llvm-ar.exe`.
* The building of the linker command in `rustc_trans::back::link` has been
abstracted behind a trait (sketched below) so that the same code path can be
used for both GNU and MSVC linkers.
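A simplified sketch of what that trait-based split can look like (the method
names and details here are illustrative, not the exact compiler internals):
```
use std::process::Command;

// One trait describes "add this to the link command"; each toolchain
// translates the request into its own flag syntax.
trait Linker {
    fn add_object(&mut self, path: &str);
    fn link_dylib(&mut self, lib: &str);
    fn output_filename(&mut self, path: &str);
}

struct GnuLinker { cmd: Command }
struct MsvcLinker { cmd: Command }

impl Linker for GnuLinker {
    fn add_object(&mut self, path: &str) { self.cmd.arg(path); }
    fn link_dylib(&mut self, lib: &str) { self.cmd.arg(format!("-l{}", lib)); }
    fn output_filename(&mut self, path: &str) { self.cmd.arg("-o").arg(path); }
}

impl Linker for MsvcLinker {
    fn add_object(&mut self, path: &str) { self.cmd.arg(path); }
    fn link_dylib(&mut self, lib: &str) { self.cmd.arg(format!("{}.lib", lib)); }
    fn output_filename(&mut self, path: &str) { self.cmd.arg(format!("/OUT:{}", path)); }
}

// The rest of the linking code only ever talks to the trait, so it no longer
// needs to know which toolchain is underneath.
fn drive<L: Linker>(linker: &mut L) {
    linker.add_object("foo.o");
    linker.link_dylib("advapi32");
    linker.output_filename("foo.dll");
}

fn main() {
    drive(&mut GnuLinker { cmd: Command::new("cc") });
    drive(&mut MsvcLinker { cmd: Command::new("link.exe") });
}
```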
## Standard library changes
Only a few small changes were required to the standard library itself, and only
for minor differences between the C runtime of msvcrt.dll and MinGW's libc.a
(a small illustrative sketch follows this list):
* Some function names for floating point functions have leading underscores, and
some are not present at all.
* Linkage to the `advapi32` library for crypto-related functions is now
explicit.
* Some small bits of C code here and there were fixed for compatibility with
MSVC's cl.exe compiler.
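Both kinds of tweak can be pictured roughly like this (the specific function
and symbol names are illustrative, not copied from the real shims):
```
extern {
    // msvcrt exposes some floating point helpers only under an
    // underscore-prefixed name, so bind a different symbol there.
    #[cfg_attr(target_env = "msvc", link_name = "_hypot")]
    fn hypot(x: f64, y: f64) -> f64;
}

// Crypto-related OS functions live in advapi32, which now has to be named
// explicitly when link.exe drives the link.
#[cfg(target_env = "msvc")]
#[link(name = "advapi32")]
extern {}

fn main() {
    println!("{}", unsafe { hypot(3.0, 4.0) });
}
```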
# Future Work
This commit is not yet a 100% complete port to using MSVC as there are still
some key components missing as well as some unimplemented optimizations. This PR
is already getting large enough that I wanted to draw the line here, but here's
a list of what is not implemented in this PR, on purpose:
## Unwinding
The revision of our LLVM submodule [does not seem to support][llvm] lowering
SEH exception handling on the Windows MSVC targets, so unwinding support is not
currently implemented for the standard library (a panic is lowered to an
abort).
[llvm]: https://github.com/rust-lang/llvm/blob/rust-llvm-2015-02-19/lib/CodeGen/Passes.cpp#L454-L461
It looks, however, like upstream LLVM has quite a bit more support for SEH
unwinding and landing pads than the revision we currently have, so adding support
will likely just involve updating LLVM and then adding some shims of our own
here and there.
## dllimport and dllexport
An interesting part of Windows which MSVC forces our hand on (and apparently
MinGW didn't) is the usage of `dllimport` and `dllexport` attributes in LLVM IR
as well as on native dependencies (in C these correspond to
`__declspec(dllimport)` and `__declspec(dllexport)`).
Whenever a dynamic library is built by MSVC it must have its public interface
specified by functions tagged with `dllexport`, otherwise they're not
available to be linked against. This poses a few problems for the compiler, some
of which are somewhat fundamental, but this commit alters the compiler to attach
the `dllexport` attribute to all LLVM functions that are reachable (i.e. those
already tagged with external linkage). This is suboptimal for a few reasons:
* If an object file will never be included in a dynamic library, there's no need
to attach the dllexport attribute. Most object files in Rust are not destined
to become part of a dll as binaries are statically linked by default.
* If the compiler is emitting both an rlib and a dylib, the same source object
file is currently used, but with MSVC this may be less feasible. The compiler
may be able to get around this, but doing so may involve some invasive
changes.
The flipside of this situation is that whenever you link to a dll and you import
a function from it, the import should be tagged with `dllimport`. At this time,
however, the compiler does not emit `dllimport` for any declarations other than
constants (where it is required), which is again suboptimal for even more
reasons!
* Calling a function imported from another dll without using `dllimport` causes
the linker/compiler to have extra overhead (one `jmp` instruction on x86) when
calling the function.
* The same object file may be used in different circumstances, so a function may
be imported from a dll if the object is linked into a dll, but it may be
just linked against if linked into an rlib.
* The compiler has no knowledge about whether native functions should be tagged
dllimport or not.
For now the compiler takes the perf hit (I do not have any numbers to this
effect) by marking very little as `dllimport` and praying the linker will take
care of everything. Fixing this problem will likely require adding a few
attributes to Rust itself (feature gated at the start) and then strongly
recommending static linkage on Windows! This may also involve shipping a
statically linked compiler on Windows instead of a dynamically linked compiler,
but these sorts of changes are pretty invasive and aren't part of this PR.
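To make the dllexport half concrete: in a crate like the one below, built as a
dylib for an MSVC target, the reachable public function would be tagged
`dllexport` under the scheme described above, while the private helper would
not (the crate contents are made up for illustration):
```
#![crate_type = "dylib"]

// Reachable with external linkage, so under the scheme above it is tagged
// dllexport and the resulting DLL actually exports it.
pub fn exported_entry(n: u32) -> u32 {
    n + internal_helper()
}

// Internal linkage only; it never becomes part of the DLL's public
// interface, so dllexport would be pure noise here.
fn internal_helper() -> u32 {
    1
}
```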
## CI integration
Thankfully we don't need to set up a new snapshot bot for the changes made here: our snapshots are already freestanding, so we should be able to use the same snapshot to bootstrap both MinGW and MSVC compilers (once a new snapshot is made from these changes).
I plan on setting up a new suite of auto bots that test MSVC configurations. For now they'll just be bootstrapping and not running tests, but once unwinding is implemented they'll start running all tests, and we'll eventually start gating on them as well.
---
I'd love as many eyes on this as we've got as this was one of my first interactions with MSVC and Visual Studio, so there may be glaring holes that I'm missing here and there!
cc @retep998, @ricky26, @vadimcn, @klutzy
r? @brson
This commit updates the `dist` target for MSVC to not build the mingw components
and to also ensure that the `llvm-ar.exe` binary is ferried along into the right
location for installs.
Windows needs explicit exports of functions from DLLs but LLVM does not mark
any of its symbols as exportable from a DLL. The compiler, however,
relies on being able to use LLVM symbols across DLL boundaries so we need to
force many of LLVM's symbols to be exported from `rustc_llvm.dll`. This commit
adds support for generation of a `rustc_llvm.def` file which is passed along to
the linker when generating `rustc_llvm.dll` which should keep all these symbols
exportable and usable.
The compiler will require that `llvm-ar.exe` be available for MSVC-targeting
builds (more comments on this soon), so this commit adds support for targets to
depend on LLVM tools. The `core` library for MSVC depends on `llvm-ar.exe` which
will be copied into place for the target before the compiler starts to run.
Note that these targets all depend on `llvm-config.exe` to ensure that they're
built before they're attempted to be copied.
This commit updates the rustllvm.mk file with the necessary flags and such to
build rustllvm.lib with cl.exe instead of gcc. Some comments can be found in the
commit itself.
It looks like compiler-rt has a cmake build system inside its source, but I have
been unable to figure out how to use it and actually build the right library.
For now this commit hard-wires MSVC-targeting builds of libcompiler-rt to
continue using `make` as the primary build system, but some frobbing of the
flags is necessary to ensure that the right compiler is used.
Currently the MSVC compilers don't have any cross prefixes and we're only able
to make an MSVC compiler with a cross compile, so just avoid this logic on MSVC
for now.
We have a number of supporting C/C++ files in Rust that we link into the standard
library and various other locations, and these all need to be built with cl.exe
instead of gcc.exe when targeting MSVC. This commit adds helper macros for this
functionality to use different sets of programs/flags/invocations on MSVC than
on GNU-like platforms.
This commit modifies the makefiles to enable building LLVM with cmake and Visual
Studio to generate an LLVM that targets MSVC. Rust's configure script requires
cmake to be installed when targeting MSVC and will configure LLVM with cmake
instead of the normal `./configure` script LLVM provides. The build will then
run cmake to execute the build instead of the normal `make`.
Currently `make clean-llvm` isn't supported on MSVC as I can't figure out how to
run a "clean" target for the Visual Studio files.
This commit starts to add MSVC support to the ./configure script to enable the
build system to detect and build an MSVC target with the cl.exe compiler and
toolchain. The primary change here is a large sanity check when an MSVC target
is requested (and currently only `x86_64-pc-windows-msvc` is recognized).
When building an MSVC target, the configure script requires either the
`--msvc-root` argument or for `cl.exe` to be in `PATH`. It also requires that,
if found via `PATH`, `cl.exe` is the 64-bit version of the compiler.
Once detected the configure script will run the `vcvarsall.bat` script provided
by Visual Studio to learn about the `INCLUDE` and `LIB` variables needed by the
`cl.exe` compiler to run (the default include/lib paths for the
compiler/linker). These variables are then reexported when running `make` to
ensure that our own compiles are running the same toolchain.
The purpose of this detection and environment variable scraping is to avoid
requiring the build itself to be run inside of a `cmd.exe` shell but rather
allow it to run in the currently expected MinGW/MSYS shell.
We use a script called `mklldeps.py` to run `llvm-config` to generate a list
of LLVM libraries and native dependencies needed by LLVM, but all cross-compiled
LLVM builds were using the *host triple's* `llvm-config` instead of the
*target's* `llvm-config`. This commit alters the build so that the right
`llvmdeps.rs` is generated by running the correct `llvm-config`.
Previously libmorestack.a and libcompiler-rt.a were installed, but link.exe
looks for morestack.lib and compiler-rt.lib by default, so we need to install
these with the correct name.
The code takes a prefix of the MD5 hash of the version string.
Since the hash command differs across GNU and BSD platforms, we scan for
the right one in the configure script.
Closes #25007
Then, decouple the question of whether the compiler/stdlib carry
debuginfo (which is controlled via `--enable-debuginfo` and implied by
`--enable-debug`) from the question of whether the tests carry
debuginfo (which is now no longer implied by `--enable-debug` or
`--enable-debuginfo`, and is off by default).
These commits build on [some great work on reddit](http://www.reddit.com/r/rust/comments/33boew/weekend_experiment_link_rust_programs_against/) for adding MUSL support to the compiler. The goal of this PR is to enable a `--target x86_64-unknown-linux-musl` argument to the compiler to work A-OK. The outcome here is that there are 0 compile-time dependencies for a MUSL-targeting build *except for a linker*. Currently this also assumes that MUSL is being used for statically linked binaries, so there is no support for dynamically linked binaries with MUSL.
MUSL support largely just entailed munging around with the linker and where libs are located, and the major highlights are:
* The entirety of `libc.a` is included in `liblibc.rlib` (statically included as an archive).
* The entirety of `libunwind.a` is included in `libstd.rlib` (like with liblibc).
* The target specification for MUSL passes a number of ... flavorful options! Each option is documented in the relevant commit.
* The entire test suite currently passes with MUSL as a target, except for:
* Dynamic linking tests are all ignored as it's not supported with MUSL
* Stack overflow detection is not working with MUSL yet (I'm not sure why)
* There is a language change included in this PR to add a `target_env` `#[cfg]` directive. This is used to conditionally build code for only MUSL (or for Linux targets not using MUSL); see the short example after this list. I highly suspect that this will also be used by Windows to target MSVC instead of a MinGW-based toolchain.
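As a tiny, made-up illustration of the new directive:
```
// With `target_env`, code can special-case MUSL builds, e.g. to reflect
// that this configuration only produces statically linked binaries.
#[cfg(target_env = "musl")]
fn supports_dynamic_linking() -> bool { false }

#[cfg(not(target_env = "musl"))]
fn supports_dynamic_linking() -> bool { true }

fn main() {
    println!("dynamic linking available: {}", supports_dynamic_linking());
}
```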
To build a compiler targeting MUSL you need to follow these steps:
1. Clone the current MUSL repo from `git://git.musl-libc.org/musl`. Build this as usual and install it.
2. Clone and build LLVM's [libcxxabi](http://libcxxabi.llvm.org/) library. Only the `libunwind.a` artifact is needed. I have tried using upstream libunwind's source repo but I have not gotten unwinding to work with it, unfortunately. Move `libunwind.a` adjacent to MUSL's `libc.a`.
3. Configure a Rust checkout with `--target=x86_64-unknown-linux-musl --musl-root=$MUSL_ROOT` where `MUSL_ROOT` is where you installed MUSL in step 1.
I hope to improve building a copy of libunwind as it's still a little sketchy and difficult to do today, but other than that everything should "just work"! This PR is not intended to include 100% comprehensive support for MUSL, as future modifications will probably be necessary.