Remove support for the PNaCl target (le32-unknown-nacl)
This removes support for the `le32-unknown-nacl` target which is currently supported by rustc on tier 3. Despite the "nacl" in the name, the target doesn't output native code (x86, ARM, MIPS), instead it outputs binaries in the PNaCl format.
There are two reasons for the removal:
* Google [has announced](https://blog.chromium.org/2017/05/goodbye-pnacl-hello-webassembly.html) the deprecation of the PNaCl format, suggesting migration to wasm. It so happens we already have a wasm backend!
* Our PNaCl LLVM backend is provided by the fastcomp patch set that the LLVM fork used by rustc contains in addition to vanilla LLVM (`src/llvm/lib/Target/JSBackend/NaCl`). Upstream LLVM doesn't have PNaCl support. Removing PNaCl support will enable us to move away from fastcomp (#44006) and have a lighter set of patches on top of upstream LLVM inside our LLVM fork. This will help distribution packagers of Rust.
Fixes #42420
This commit is an implementation of LLVM's ThinLTO for consumption in rustc
itself. Today LTO works by merging all relevant LLVM modules into one
and then running optimization passes. "Thin" LTO operates differently by having
more sharded work and allowing parallelism opportunities between optimizing
codegen units. Further down the road Thin LTO also allows *incremental* LTO
which should enable even faster release builds without compromising on the
performance we have today.
This commit uses a `-Z thinlto` flag to gate whether ThinLTO is enabled. It then
also implements two forms of ThinLTO:
* In one mode we'll *only* perform ThinLTO over the codegen units produced in a
single compilation. That is, we won't load upstream rlibs, but we'll instead
just perform ThinLTO amongst all codegen units produced by the compiler for
the local crate. This is intended to emulate a desired end point where we have
codegen units turned on by default for all crates and ThinLTO allows us to do
this without performance loss.
* In another mode, like full LTO today, we'll optimize all upstream dependencies
in "thin" mode. Unlike today, however, this LTO step is fully parallelized so
should finish much more quickly.
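For illustration, the two modes above might be invoked roughly like so (a sketch based on the flag described in this commit; exact flag combinations may differ):
```
# ThinLTO only across the local crate's codegen units (sketch)
rustc -O -C codegen-units=8 -Z thinlto foo.rs

# "Thin" LTO across upstream dependencies as well, in place of today's full LTO (sketch)
rustc -O -C lto -C codegen-units=8 -Z thinlto foo.rs
```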
There's a good bit of comments about what the implementation is doing and where
it came from, but the tl;dr is that currently most of the support here is
copied from upstream LLVM. This code duplication is done for a number of
reasons:
* Controlling parallelism means we can use the existing jobserver support to
avoid overloading machines.
* We will likely want a slightly different form of incremental caching which
integrates with our own incremental strategy, but this is yet to be
determined.
* This buys us some flexibility about when/where we run ThinLTO, as well as
having it tailored to fit our needs for the time being.
* Finally this allows us to reuse some artifacts such as our `TargetMachine`
creation, where not all of the options we use today are necessarily supported by
upstream LLVM yet.
My hope is that we can get some experience with this copy/paste in tree and then
eventually upstream some work to LLVM itself to avoid the duplication while
still ensuring our needs are met. Otherwise I fear that maintaining these
bindings may be quite costly over the years with LLVM updates!
This commit is a refactoring of the LTO backend in Rust to support compilations
with multiple codegen units. The immediate result of this PR is to remove the
artificial error emitted by rustc about `-C lto -C codegen-units=8`, but longer
term this is intended to lay the groundwork for LTO with incremental compilation
and ultimately be the underpinning of ThinLTO support.
The problem here that needed solving is that when rustc is producing multiple
codegen units in one compilation LTO needs to merge them all together.
Previously only upstream dependencies were merged, and it was implicitly assumed
that there was only one local codegen unit. Supporting this involved
refactoring the optimization backend architecture for rustc, namely splitting
the `optimize_and_codegen` function into `optimize` and `codegen`. After an LLVM
module has been optimized it may be blocked and queued up for LTO, and only
after LTO are modules code generated.
Non-LTO compilations should look the same as they do today backend-wise: we'll
spin up a thread for each codegen unit and optimize/codegen in that thread. LTO
compilations will, however, send the LLVM module back to the coordinator thread
once optimizations have finished. When all LLVM modules have finished optimizing
the coordinator will invoke the LTO backend, producing a further list of LLVM
modules. Currently this is always a list of one LLVM module. The coordinator
then spawns further work to run LTO and code generation passes over each module.
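As a rough mental model of the new split (types and names here are purely illustrative, not the actual ones in the backend):
``` rust
// Illustrative sketch only: optimize each codegen unit, optionally funnel the
// optimized modules through LTO, and only then run the codegen passes.
struct Module(String);          // stand-in for an optimized LLVM module
struct CompiledModule(String);  // stand-in for an emitted object file

fn optimize(cgu: &str) -> Module { Module(cgu.to_string()) }
fn run_lto(modules: Vec<Module>) -> Vec<Module> { modules } // merge + LTO passes
fn codegen(m: Module) -> CompiledModule { CompiledModule(m.0) }

fn coordinator(cgus: &[&str], lto: bool) -> Vec<CompiledModule> {
    // `optimize` runs on worker threads in the real backend; LTO (if enabled)
    // happens once all optimized modules have been sent back to the coordinator.
    let optimized: Vec<Module> = cgus.iter().map(|c| optimize(c)).collect();
    let ready = if lto { run_lto(optimized) } else { optimized };
    ready.into_iter().map(codegen).collect()
}

fn main() {
    let objects = coordinator(&["cgu-0", "cgu-1"], true);
    assert_eq!(objects.len(), 2);
}
```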
In the course of this refactoring a number of other pieces were refactored:
* Management of the bytecode encoding in rlibs was centralized into one module
instead of being scattered across LTO and linking.
* Some internal refactorings on the link stage of the compiler were done to work
directly from `CompiledModule` structures instead of lists of paths.
* The trans time-graph output was tweaked a little to include a name on each
bar and to inflate the size of the bars slightly.
APFloat: Rewrite It In Rust and use it for deterministic floating-point CTFE.
As part of the CTFE initiative, we're forced to find a solution for floating-point operations.
By design, IEEE-754 does not explicitly define everything in a deterministic manner, and there is some variability between platforms, at the very least (e.g. NaN payloads).
If constant expressions in types are to involve type (or, in the future, const) generics, their evaluation needs to be *fully deterministic*, even across `rustc` host platforms.
That is, if `[T; T::X]` was used in a cross-compiled library, and the evaluation of `T::X` executed a floating-point operation, that operation has to be reproducible on *any other host*, only knowing `T` and the definition of the `X` associated const (as either AST or HIR).
Failure to uphold those rules allows an associated type (e.g. `<Foo as Iterator>::Item`) to be seen as two (or more) different types, depending on the current host, and such type safety violations typically allow writing a `transmute` in safe code, given enough generics.
The options considered by @rust-lang/compiler were:
1. Ban floating-point operations in generic const-evaluation contexts
2. Emulate floating-point operations in a uniformly deterministic fashion
The former option may seem appealing at first, but floating-point operations *are allowed today*, so they can't be banned wholesale; a distinction has to be made between the code that already works and future generic contexts. *Moreover*, every computation that succeeded *has to be cached*, otherwise the generic case can be reproduced without any generics. IMO there are too many ways this can go wrong, and a single violation can be enough for an unsoundness hole.
Not to mention we may end up really wanting floating-point operations *anyway*, in CTFE.
I went with the latter option, and seeing how LLVM *already* has a library for this exact purpose (as it needs to perform optimizations independently of host floating-point capabilities), i.e. `APFloat`, that was what I ended up basing this PR on.
But having been burned by the low reusability of bindings that link to LLVM, and because I would *rather* the floating-point operations to be wrong than not deterministic or not memory-safe (`APFloat` does far more pointer juggling than I'm comfortable with), I decided to RIIR.
This way, we have a guarantee of *no* `unsafe` code, a bit more control over where native floating-point might accidentally be involved, and non-LLVM backends can share it.
I've also ported all the testcases over, *before* any functionality, to catch any mistakes.
Currently the PR routes all CTFE floating-point operations through `apfloat::ieee::{Single,Double}`, keeping only the bits of the `f32` / `f64` memory representation in between operations.
Converting from a string also double-checks that `core::num` and `apfloat` agree on the interpretation of a floating-point number literal, in case either of them has any bugs left around.
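Conceptually, the round-trip for a single CTFE operation looks something like this (a sketch using the crate as published today under the name `rustc_apfloat`; the exact in-tree paths in this PR may differ):
``` rust
// Sketch: parse the operand bit patterns into the soft-float type, perform the
// operation entirely in software, then keep only the resulting bit pattern.
use rustc_apfloat::ieee::Double;
use rustc_apfloat::Float;

fn ctfe_f64_add(a_bits: u64, b_bits: u64) -> u64 {
    let a = Double::from_bits(a_bits as u128);
    let b = Double::from_bits(b_bits as u128);
    // No host FPU involvement, so the result is bit-identical on every
    // `rustc` host platform (NaN payloads included).
    let sum = (a + b).value;
    sum.to_bits() as u64
}
```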
r? @nikomatsakis
f? @nagisa @est31
<hr/>
Huge thanks to @edef1c for first demoing usable `APFloat` bindings and to @chandlerc for fielding my questions on IRC about `APFloat` peculiarities (also upstreaming some bugfixes).
Thread through the original error when opening archives
This updates the management of opening archives to thread through the original
piece of error information from LLVM over to the end consumer, trans.
This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.
[RFC 1974]: https://github.com/rust-lang/rfcs/pull/1974
The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.
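For reference, using the new attribute looks like this (written with today's stable paths; at the time of this PR the types and feature gates involved were still unstable):
``` rust
use std::alloc::System;

// Picks the allocator for the entire program, replacing what the old
// crate-level `#![allocator]` attribute used to express.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // All heap allocations in the program, including this one, now go
    // through `GLOBAL`.
    let v = vec![1, 2, 3];
    assert_eq!(v.len(), 3);
}
```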
cc #27389
When writing LLVM IR output demangled fn name in comments
`--emit=llvm-ir` looks like this now:
```
; <alloc::vec::Vec<T> as core::ops::index::IndexMut<core::ops::range::RangeFull>>::index_mut
; Function Attrs: inlinehint uwtable
define internal { i8*, i64 } @"_ZN106_$LT$alloc..vec..Vec$LT$T$GT$$u20$as$u20$core..ops..index..IndexMut$LT$core..ops..range..RangeFull$GT$$GT$9index_mut17h7f7b576609f30262E"(%"alloc::vec::Vec<u8>"* dereferenceable(24)) unnamed_addr #0 {
start:
...
```
cc https://github.com/integer32llc/rust-playground/issues/15
Enable wasm LLVM backend
Enables compilation to WebAssembly with the LLVM backend using the target triple "wasm32-unknown-unknown". This is the beginning of my work on #38804.
**edit:** The new target is now wasm32-experimental-emscripten instead of wasm32-unknown-unknown.
The new target is wasm32-experimental-emscripten. Adds a new
configuration option to opt in to building experimental LLVM backends
such as the WebAssembly backend. The target name was chosen to be
similar to the existing wasm32-unknown-emscripten target so that the
build and tests would work with minimal other code changes. When/if the
new target replaces the old target, simply renaming it should just work.
Build instruction profiler runtime as part of compiler-rt
r? @alexcrichton
This is #38608 with some fixes.
Still missing:
- [x] testing with profiler enabled on some builders (on which ones? Should I add the option to some of the already existing configurations, or create a new configuration?);
- [x] enabling distribution (on which builders?);
- [x] documentation.
This support is needed for bindgen to work well on 32-bit Windows, and
also enables people to begin experimenting with C++ FFI support on that
platform.
Fixes #42044.
Add RWPI/ROPI relocation model support
This PR adds support for using LLVM 4's ROPI and RWPI relocation models for ARM.
ROPI (Read-Only Position Independence) and RWPI (Read-Write Position Independence) are two new relocation models in LLVM for the ARM backend ([LLVM changeset](https://reviews.llvm.org/rL278015)). The motivation is that these are the specific strategies we use in userspace [Tock](https://www.tockos.org) apps, so supporting this is an important step (perhaps the final step, but can't confirm yet) in enabling userspace Rust processes.
## Explanation
ROPI makes all code and immutable data accesses PC-relative, with nothing assumed to be overridden at runtime (so, for example, jumps are always relative).
RWPI uses a base register (`r9`) that stores the address of the GOT in memory, so the runtime (e.g. a kernel) only needs to adjust `r9` to tell running code where the GOT is.
## Complications adding support in Rust
While this landed in LLVM master back in August, the header files in `llvm-c` have not been updated yet to reflect it. Rust replicates that header file's version of the `LLVMRelocMode` enum as the Rust enum `llvm::RelocMode` and uses an implicit cast in the ffi to translate from Rust's notion of the relocation model to the LLVM library's notion.
My workaround for this currently is to replace the `LLVMRelocMode` argument to `LLVMTargetMachineRef` with an int and using the hardcoded int representation of the `RelocMode` enum. This is A Bad Idea(tm), but I think very nearly the right thing.
Would a better alternative be to patch rust-llvm to support these enum variants (also a fairly trivial change)?
When -Z profile is passed, the GCDAProfiling LLVM pass is added
to the pipeline, which uses debug information to instrument the IR.
After compiling with -Z profile, the $(OUT_DIR)/$(CRATE_NAME).gcno
file is created, containing initial profiling information.
After running the program built, the $(OUT_DIR)/$(CRATE_NAME).gcda
file is created, containing branch counters.
The created *.gcno and *.gcda files can be processed using
the "llvm-cov gcov" and "lcov" tools. The profiling data LLVM
generates does not faithfully follow GCC's format for *.gcno
and *.gcda files, and so it will probably not work with other tools
(such as gcov itself) that consume these files.
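A typical workflow might look roughly like this (a sketch for a single-file crate; file names follow the `$(OUT_DIR)/$(CRATE_NAME)` pattern described above):
```
$ rustc -g -Z profile main.rs     # produces main.gcno with the initial data
$ ./main                          # running the binary produces main.gcda
$ llvm-cov gcov main.gcda         # or feed both files to lcov
```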
Replaces the llvm-c exposed LLVMRelocMode, which does not include all
relocation model variants, with an LLVMRustRelocMode modeled after
LLVMRustCodeModel.
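On the Rust side this ends up as a plain `#[repr(C)]` enum kept in sync with the C++ shim by hand, roughly along these lines (a sketch; variant names per the relocation models discussed above):
``` rust
// Illustrative sketch of the Rust-side enum that mirrors LLVMRustRelocMode.
// The authoritative list lives in the C++ shim and must be kept in sync.
#[derive(Copy, Clone, PartialEq)]
#[repr(C)]
pub enum RelocMode {
    Default,
    Static,
    PIC,
    DynamicNoPic,
    ROPI,
    RWPI,
    ROPIRWPI,
}
```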
LLVM 4.0 Upgrade
Since nobody has done this yet, I decided to get things started:
**Todo:**
* [x] push the relevant commits to `rust-lang/llvm` and `rust-lang/compiler-rt`
* [x] cleanup `.gitmodules`
* [x] Verify if there are any other commits from `rust-lang/llvm` which need backporting
* [x] Investigate / fix debuginfo ("`<optimized out>`") failures
* [x] Use correct emscripten version in docker image
---
Closes #37609.
---
**Test results:**
Everything is green 🎉
Tracking issue: https://github.com/rust-lang/rust/issues/40180
This calling convention can be used for defining interrupt handlers on
32-bit and 64-bit x86 targets. The compiler then uses `iret` instead of
`ret` for returning and ensures that all registers are restored to their
original values.
Usage:
```
extern "x86-interrupt" fn handler(stack_frame: &ExceptionStackFrame) {…}
```
for interrupts and exceptions without error code and
```
extern "x86-interrupt" fn page_fault_handler(stack_frame: &ExceptionStackFrame,
error_code: u64) {…}
```
for exceptions that push an error code (e.g., page faults or general
protection faults). The programmer must ensure that the correct version
is used for each interrupt.
For more details see the [LLVM PR][1] and the corresponding [proposal][2].
[1]: https://reviews.llvm.org/D15567
[2]: http://lists.llvm.org/pipermail/cfe-dev/2015-September/045171.html
[MIR] SwitchInt Everywhere
Something I've been meaning to do for a very long while. This PR essentially gets rid of 3 kinds of conditional branching and only keeps the most general one - `SwitchInt`. The primary benefit is that dealing with MIR no longer involves dealing with 3 different ways to do conditional control flow. On the other hand, constructing a `SwitchInt` currently requires more code than was previously necessary to build an equivalent `If` terminator. Something trivially "fixable" with some constructor methods somewhere (MIR needs stuff like that badly in general).
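For orientation, the shape of the one remaining conditional terminator is roughly the following (a simplified sketch with stand-in types, not the real MIR definitions):
``` rust
// Simplified sketch: `If`, `Switch`, and `SwitchInt` all collapse into one
// terminator that compares an operand against a list of constant values.
struct Operand;           // stand-in for a MIR operand
struct BasicBlock(usize); // stand-in for a basic-block index

struct SwitchInt {
    discr: Operand,           // the value being switched on
    values: Vec<u128>,        // the constants to compare against...
    targets: Vec<BasicBlock>, // ...their target blocks, plus one trailing
                              // "otherwise" block
}
```
A plain boolean `if`, for example, becomes a switch against a single constant, with the trailing "otherwise" edge serving as the other arm.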
Some timings (tl;dr: slightly faster^1 (unexpected), but also uses slightly more memory at peak (expected)):
^1: Not sure if the speed benefits are because of LLVM liking the generated code better or the compiler itself getting compiled better. Either way, it's a net benefit. The CORE and SYNTAX timings were done for compilation without optimisation.
```
AFTER:
Building stage1 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Finished release [optimized] target(s) in 31.50 secs
Finished release [optimized] target(s) in 31.42 secs
Building stage1 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Finished release [optimized] target(s) in 439.56 secs
Finished release [optimized] target(s) in 435.15 secs
CORE: 99% (24.81 real, 0.13 kernel, 24.57 user); 358536k resident
CORE: 99% (24.56 real, 0.15 kernel, 24.36 user); 359168k resident
SYNTAX: 99% (49.98 real, 0.48 kernel, 49.42 user); 653416k resident
SYNTAX: 99% (50.07 real, 0.58 kernel, 49.43 user); 653604k resident
BEFORE:
Building stage1 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Finished release [optimized] target(s) in 31.84 secs
Building stage1 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Finished release [optimized] target(s) in 451.17 secs
CORE: 99% (24.66 real, 0.20 kernel, 24.38 user); 351096k resident
CORE: 99% (24.36 real, 0.17 kernel, 24.18 user); 352284k resident
SYNTAX: 99% (52.24 real, 0.56 kernel, 51.66 user); 645544k resident
SYNTAX: 99% (51.55 real, 0.48 kernel, 50.99 user); 646428k resident
```
cc @nikomatsakis @eddyb
Emit DW_AT_main_subprogram
This changes rustc to emit DW_AT_main_subprogram on the "main" program.
This lets gdb suitably stop at the user's main in response to
"start" (rather than the library's main, which is what happens
currently).
Fixes #32620
r? michaelwoerister
Instead of directly creating a 'DIGlobalVariable', we now have to create
a 'DIGlobalVariableExpression' which itself contains a reference to a
'DIGlobalVariable'.
This is a straightforward change.
In the future, we should rename 'DIGlobalVariable' in the FFI
bindings, assuming we will only refer to 'DIGlobalVariableExpression'
and not 'DIGlobalVariable'.
LLVM Core C bindings provide this function for all the versions back to what we support (3.7), and
using it helps avoid the unnecessary builder->function transition every time. Also a negative diff.
Remove not(stage0) from deny(warnings)
Historically this was done to accommodate bugs in lints, but there hasn't been
a bug in a lint that the warnings affected since this feature was added. Let's
completely purge warnings from all our stages by denying warnings in all stages.
This will also assist in tracking down `stage0` code to be removed whenever
we're updating the bootstrap compiler.
Improve naming style in rustllvm.
As per the LLVM style guide, use CamelCase for all locals and classes,
and camelCase for all non-FFI functions.
Also, make names of variables of commonly used types more consistent.
Fixes #38688.
r? @rkruppe
Since discriminants do not support i128 yet, let's just calculate the boundaries within the 64 bits
that are supported. This also avoids an issue with bootstrapping on 32 bit systems due to #38727.
Fixes rebase fallout, makes code correct in presence of 128-bit constants.
This commit includes manual merge conflict resolution changes from a rebase by @est31.
This commit introduces 128-bit integers. Stage 2 builds and produces a working compiler which
understands and supports 128-bit integers throughout.
The general strategy used is to have a rustc_i128 module which provides aliases for i/u128, equal to
i/u64 in the early bootstrap stages and i/u128 later. Since nowhere in rustc do we rely on large numbers being supported,
this strategy is good enough to get past the first bootstrap stages and end up with a fully working
128-bit-capable compiler.
In order for this strategy to work, a number of locations had to be changed to use the associated
max_value/min_value functions instead of the MAX/MIN constants. In addition, min_value (or was it max_value?)
had to be changed to use xor instead of a shift so that both 64-bit and 128-bit based consteval works
(the former not necessarily producing the right results in stage 1).
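The xor trick mentioned here boils down to something like this (an illustrative sketch, not the exact code from the commit):
``` rust
// In two's complement MIN is the bitwise complement of MAX (equivalently
// MAX ^ -1), so it can be derived without a width-dependent `1 << 63`-style
// shift, and the same const-eval path works at both 64 and 128 bits.
const I64_MAX: i64 = 9_223_372_036_854_775_807;
const I64_MIN: i64 = !I64_MAX;

fn main() {
    assert_eq!(I64_MIN, i64::min_value());
}
```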
This commit includes manual merge conflict resolution changes from a rebase by @est31.
PTX support, take 2
- You can generate PTX using `--emit=asm` and the right (custom) target, which
you can then run on an NVIDIA GPU.
- You can compile `core` to PTX. [Xargo] also works and it can compile some
other crates like `collections` (but I doubt all of those make sense on a GPU)
[Xargo]: https://github.com/japaric/xargo
- You can create "global" functions, which can be "called" by the host, using
the `"ptx-kernel"` ABI, e.g. `extern "ptx-kernel" fn kernel() { .. }`. Every
other function is a "device" function and can only be called by the GPU.
- Intrinsics like `__syncthreads()` and `blockIdx.x` are available as
`"platform-intrinsics"`. These intrinsics are *not* in the `core` crate but
any Rust user can create "bindings" to them using an `extern
"platform-intrinsics"` block. See example at the end.
- Trying to emit PTX with `-g` (debuginfo) results in an LLVM error. But I don't
think PTX can contain debuginfo anyway, so `-g` should be ignored and a warning
should be printed ("`-g` doesn't work with this target" or something).
- "Single source" support. You *can't* write a single source file that contains
both host and device code. I think it should be possible to implement that
outside the compiler using compiler plugins / build scripts.
- The equivalent of CUDA `__shared__`, which is used to declare memory that's
shared between the threads of the same block. This could be implemented using
attributes: `#[shared] static mut SCRATCH_MEMORY: [f32; 64]` but hasn't been
implemented yet.
- Built-in targets. This PR doesn't add targets to the compiler just yet but one
can create custom targets to be able to emit PTX code (see the example at the
end). The idea is to have people experiment with this feature before
committing to it (built-in targets are "insta-stable")
- All functions must be "inlined". IOW, the `.rlib` must always contain the LLVM
bitcode of all the functions of the crate it was produced from. Otherwise, you
end with "undefined references" in the final PTX code but you won't get *any*
linker error because no linker is involved. IOW, you'll hit a runtime error
when loading the PTX into the GPU. The workaround is to use `#[inline]` on
non-generic functions and to never use `#[inline(never)]` but this may not
always be possible because e.g. you could be relying on third party code.
- Should `--emit=asm` generate a `.ptx` file instead of a `.s` file?
TL;DR Use Xargo to turn a crate into a PTX module (a `.s` file). Then pass that
PTX module, as a string, to the GPU and run it.
The full code is in [this repository]. This section gives an overview of how to
run Rust code on a NVIDIA GPU.
[this repository]: https://github.com/japaric/cuda
- Create a custom target. Here's the 64-bit NVPTX target (NOTE: the comments
are not valid because this is supposed to be a JSON file; remove them before
you use this file):
``` js
// nvptx64-nvidia-cuda.json
{
"arch": "nvptx64", // matches LLVM
"cpu": "sm_20", // "oldest" compute capability supported by LLVM
"data-layout": "e-i64:64-v16:16-v32:32-n16:32:64",
"llvm-target": "nvptx64-nvidia-cuda",
"max-atomic-width": 0, // LLVM errors with any other value :-(
"os": "cuda", // matches LLVM
"panic-strategy": "abort",
"target-endian": "little",
"target-pointer-width": "64",
"target-vendor": "nvidia", // matches LLVM -- not required
}
```
(There's a 32-bit target specification in the linked repository)
- Write a kernel
``` rust
extern "platform-intrinsic" {
fn nvptx_block_dim_x() -> i32;
fn nvptx_block_idx_x() -> i32;
fn nvptx_thread_idx_x() -> i32;
}
/// Copies an array of `n` floating point numbers from `src` to `dst`
pub unsafe extern "ptx-kernel" fn memcpy(dst: *mut f32,
src: *const f32,
n: usize) {
let i = (nvptx_block_dim_x() as isize)
.wrapping_mul(nvptx_block_idx_x() as isize)
.wrapping_add(nvptx_thread_idx_x() as isize);
if (i as usize) < n {
*dst.offset(i) = *src.offset(i);
}
}
```
- Emit PTX code
```
$ xargo rustc --target nvptx64-nvidia-cuda --release -- --emit=asm
Compiling core v0.0.0 (file://..)
(..)
Compiling nvptx-builtins v0.1.0 (https://github.com/japaric/nvptx-builtins)
Compiling kernel v0.1.0
$ cat target/nvptx64-nvidia-cuda/release/deps/kernel-*.s
//
// Generated by LLVM NVPTX Back-End
//
.version 3.2
.target sm_20
.address_size 64
// .globl memcpy
.visible .entry memcpy(
.param .u64 memcpy_param_0,
.param .u64 memcpy_param_1,
.param .u64 memcpy_param_2
)
{
.reg .pred %p<2>;
.reg .s32 %r<5>;
.reg .s64 %rd<12>;
ld.param.u64 %rd7, [memcpy_param_2];
mov.u32 %r1, %ntid.x;
mov.u32 %r2, %ctaid.x;
mul.wide.s32 %rd8, %r2, %r1;
mov.u32 %r3, %tid.x;
cvt.s64.s32 %rd9, %r3;
add.s64 %rd10, %rd9, %rd8;
setp.ge.u64 %p1, %rd10, %rd7;
@%p1 bra LBB0_2;
ld.param.u64 %rd3, [memcpy_param_0];
ld.param.u64 %rd4, [memcpy_param_1];
cvta.to.global.u64 %rd5, %rd4;
cvta.to.global.u64 %rd6, %rd3;
shl.b64 %rd11, %rd10, 2;
add.s64 %rd1, %rd6, %rd11;
add.s64 %rd2, %rd5, %rd11;
ld.global.u32 %r4, [%rd2];
st.global.u32 [%rd1], %r4;
LBB0_2:
ret;
}
```
- Run it on the GPU
``` rust
// `kernel.ptx` is the `*.s` file we got in the previous step
const KERNEL: &'static str = include_str!("kernel.ptx");
driver::initialize()?;
let device = Device(0)?;
let ctx = device.create_context()?;
let module = ctx.load_module(KERNEL)?;
let kernel = module.function("memcpy")?;
let h_a: Vec<f32> = /* create some random data */;
let h_b = vec![0.; N];
let d_a = driver::allocate(bytes)?;
let d_b = driver::allocate(bytes)?;
// Copy from host to GPU
driver::copy(h_a, d_a)?;
// Run `memcpy` on the GPU
kernel.launch(d_b, d_a, N)?;
// Copy from GPU to host
driver::copy(d_b, h_b)?;
// Verify
assert_eq!(h_a, h_b);
// `d_a`, `d_b`, `h_a`, `h_b` are dropped/freed here
```
---
cc @alexcrichton @brson @rkruppe
> What has changed since #34195?
- `core` can now be compiled into PTX, which makes it very easy to turn `no_std`
crates into "kernels" with the help of Xargo.
- There's now a way, the `"ptx-kernel"` ABI, to generate "global" functions. The
old PR required a manual step (it was a hack) to "convert" "device" functions
into "global" functions. (Only "global" functions can be launched by the host)
- Everything is unstable. There are no "insta-stable" built-in targets this
time (\*). Users have to use a custom target to experiment with this
feature. Also, PTX intrinsics, like `__syncthreads` and `blockIdx.x`, are now
implemented as `"platform-intrinsics"` so they no longer live in the `core`
crate.
(\*) I'd actually like to have in-tree targets because that makes this target
more discoverable, removes the need to lug around .json files, etc.
However, bundling a target with the compiler immediately puts it in the path
towards stabilization. Which gives us just two cycles to find and fix any
problem with the target specification. Afterwards, it becomes hard to tweak
the specification because that could be a breaking change.
A possible solution could be "unstable built-in targets". Basically, to use an
unstable target, you'll have to also pass `-Z unstable-options` to the compiler.
And unstable targets, being unstable, wouldn't be available on stable.
> Why should this be merged?
- To let people experiment with the feature out of tree. Having easy access to
the feature (in every nightly) allows this. I also think that, as it is, it
should be possible to start prototyping type-safe single source support using
build scripts, macros and/or plugins.
- It's a straightforward implementation. No different than adding support for
any other architecture.
- `--emit=asm --target=nvptx64-nvidia-cuda` can be used to turn a crate
into a PTX module (a `.s` file).
- intrinsics like `__syncthreads` and `blockIdx.x` are exposed as
`"platform-intrinsics"`.
- "cabi" has been implemented for the nvptx and nvptx64 architectures.
i.e. `extern "C"` works.
- a new ABI, `"ptx-kernel"`. That can be used to generate "global"
functions. Example: `extern "ptx-kernel" fn kernel() { .. }`. All
other functions are "device" functions.
initial SPARC support
### UPDATE
Can now compile `no_std` executables with:
```
$ cargo new --bin app && cd $_
$ edit Cargo.toml && tail -n2 $_
[dependencies]
core = { path = "/path/to/rust/src/libcore" }
$ edit src/main.rs && cat $_
#![feature(lang_items)]
#![no_std]
#![no_main]

#[no_mangle]
pub fn _start() -> ! {
    loop {}
}

#[lang = "panic_fmt"]
fn panic_fmt() -> ! {
    loop {}
}
$ edit sparc-none-elf.json && cat $_
{
    "arch": "sparc",
    "data-layout": "E-m:e-p:32:32-i64:64-f128:64-n32-S64",
    "executables": true,
    "llvm-target": "sparc",
    "os": "none",
    "panic-strategy": "abort",
    "target-endian": "big",
    "target-pointer-width": "32"
}
$ cargo rustc --target sparc-none-elf -- -C linker=sparc-unknown-elf-gcc -C link-args=-nostartfiles
$ file target/sparc-none-elf/debug/app
app: ELF 32-bit MSB executable, SPARC, version 1 (SYSV), statically linked, not stripped
$ sparc-unknown-elf-readelf -h target/sparc-none-elf/debug/app
ELF Header:
Magic: 7f 45 4c 46 01 02 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, big endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Sparc
Version: 0x1
Entry point address: 0x10074
Start of program headers: 52 (bytes into file)
Start of section headers: 1188 (bytes into file)
Flags: 0x0
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 2
Size of section headers: 40 (bytes)
Number of section headers: 14
Section header string table index: 11
$ sparc-unknown-elf-objdump -Cd target/sparc-none-elf/debug/app
target/sparc-none-elf/debug/app: file format elf32-sparc
Disassembly of section .text:
00010074 <_start>:
10074: 9d e3 bf 98 save %sp, -104, %sp
10078: 10 80 00 02 b 10080 <_start+0xc>
1007c: 01 00 00 00 nop
10080: 10 80 00 02 b 10088 <_start+0x14>
10084: 01 00 00 00 nop
10088: 10 80 00 00 b 10088 <_start+0x14>
1008c: 01 00 00 00 nop
```
---
Someone wants to attempt launching some Rust [into space](https://www.reddit.com/r/rust/comments/5h76oa/c_interop/) but their platform is based on the SPARCv8 architecture. Let's not block them; this PR enables LLVM's SPARC backend.
Something very important that they'll also need is the "cabi" stuff as they'll be embedding some Rust code into a bigger C application (i.e. heavy use of `extern "C"`). The question there is what name(s) should we use for "target_arch" as the "cabi" implementation [varies according to that parameter](https://github.com/rust-lang/rust/blob/1.13.0/src/librustc_trans/abi.rs#L498-L523).
AFAICT, SPARCv8 is a 32-bit architecture and SPARCv9 is a 64-bit architecture. And, LLVM uses `sparc`, `sparcv9` and `sparcel` for [the architecture triple](ac1c94226e/include/llvm/ADT/Triple.h (L67-L69)) so perhaps we should use `target_arch = "sparc"` (32-bit) and `target_arch = "sparcv9"` (64-bit) as well.
r? @alexcrichton
This PR only enables this LLVM backend when rustbuild is used. Do I also need to implement this for the old Makefile-based build system? Or are all our nightlies now being generated using rustbuild?
cc @brson
stdc++ is from base, and is an old library (GCC 4.2)
estdc++ is from ports, and is a recent library (GCC 4.9 currently)
As LLVM requires the newer version, use it on OpenBSD.
Alignment was removed from createBasicType and moved to
- createGlobalVariable
- createAutoVariable
- createStaticMemberType (unused in Rust)
- createTempGlobalVariableFwdDecl (unused in Rust)
e69c459a6e
Add new #[target_feature = "..."] attribute.
This commit adds a new attribute that instructs the compiler to emit
target specific code for a single function. For example, the following
function is permitted to use instructions that are part of SSE 4.2:
#[target_feature = "+sse4.2"]
fn foo() { ... }
In particular, use of this attribute does not require setting the
-C target-feature or -C target-cpu options on rustc.
This attribute does not have any protections built into it. For example,
nothing stops one from calling the above `foo` function on hosts without
SSE 4.2 support. Doing so may result in a SIGILL.
I've also expanded the x86 target feature whitelist.
In LLVM 4.0, this enum becomes an actual type-safe enum, which breaks
all of the interfaces. Introduce our own copy of the bitflags that we
can then safely convert to the LLVM one.
[LLVM 4.0] Don't assume llvm::StringRef is null terminated
StringRefs have a length and their contents are not usually null-terminated. The solution is to either copy the string data (in `rustc_llvm::diagnostic`) or take the size into account (in LLVMRustPrintPasses).
I couldn't trigger a bug caused by this (apparently all the strings returned in practice are actually null-terminated) but this is more correct and more future-proof.
cc #37609
This commit adds a new attribute that instructs the compiler to emit
target specific code for a single function. For example, the following
function is permitted to use instructions that are part of SSE 4.2:
#[target_feature = "+sse4.2"]
fn foo() { ... }
In particular, use of this attribute does not require setting the
-C target-feature or -C target-cpu options on rustc.
This attribute does not have any protections built into it. For example,
nothing stops one from calling the above `foo` function on hosts without
SSE 4.2 support. Doing so may result in a SIGILL.
This commit also expands the target feature whitelist to include lzcnt,
popcnt and sse4a. Namely, lzcnt and popcnt have their own CPUID bits,
but were introduced with SSE4.
[LLVM 4.0] Use llvm::Attribute APIs instead of "raw value" APIs
The latter will be removed in LLVM 4.0 (see 4a6fc8bacf).
The librustc_llvm API remains mostly unchanged, except that llvm::Attribute is no longer a bitflag but represents only a *single* attribute.
The ability to store many attributes in a small number of bits and modify them without interacting with LLVM is only used in rustc_trans::abi and closely related modules, and only attributes for function arguments are considered there.
Thus rustc_trans::abi now has its own bit-packed representation of argument attributes, which are translated to rustc_llvm::Attribute when applying the attributes.
cc #37609
rustbuild: allow dynamically linking LLVM
The makefiles and `mklldeps.py` called `llvm-config --shared-mode` to
find out if LLVM defaulted to shared or static libraries, and just went
with that. But under rustbuild, `librustc_llvm/build.rs` was assuming
that LLVM should be static, and even forcing `--link-static` for 3.9+.
Now that build script also uses `--shared-mode` to learn the default,
which should work better for pre-3.9 configured for dynamic linking, as
it wasn't possible back then to choose differently via `llvm-config`.
Further, the configure script now has a new `--enable-llvm-link-shared`
option, which allows one to manually override `--link-shared` on 3.9+
instead of forcing static.
Update: There are now four static/shared scenarios that can happen
for the supported LLVM versions:
- 3.9+: By default use `llvm-config --link-static`
- 3.9+ and `--enable-llvm-link-shared`: Use `--link-shared` instead.
- 3.8: Use `llvm-config --shared-mode` and go with its answer.
- 3.7: Just assume static, maintaining the status quo.
to actually use the AAPCS calling convention
Closes #37810
This is technically a [breaking-change] because it changes the ABI of
`extern "aapcs"` functions that (a) involve `f32`/`f64` arguments/return
values and (b) are compiled for arm-eabihf targets from
"aapcs-vfp" (wrong) to "aapcs" (correct).
Appendix:
What do these ABIs mean?
- In the "aapcs-vfp" ABI or "hard float" calling convention: Floating
point values are passed/returned through FPU registers (s0, s1, d0, etc.)
- Whereas, in the "aapcs" ABI or "soft float" calling convention:
Floating point values are passed/returned through general purpose
registers (r0, r1, etc.)
Mixing these ABIs can cause problems if the caller assumes that the
routine is using one of these ABIs but it's actually using the other
one.
rustc: Flag all builtins functions as hidden
When compiling compiler-rt you typically compile with `-fvisibility=hidden`,
which ensures that all symbols are hidden in shared objects and don't show up
in symbol tables. This is important for these intrinsics being linked in every
crate to ensure that we're not unnecessarily bloating the public ABI of Rust
crates.
This should help allow the compiler-builtins project with Rust-defined builtins
to start landing in-tree as well.
to let people experiment with this target out of tree.
The MSP430 architecture is used in 16-bit microcontrollers commonly used
in Digital Signal Processing applications.
The `Linkage` enum in librustc_llvm got out of sync with the version in LLVM and it caused two variants of the #[linkage=""] attribute to break.
This adds the functions `LLVMRustGetLinkage` and `LLVMRustSetLinkage` which convert between the Rust Linkage enum and the LLVM one, which should stop this from breaking every time LLVM changes it.
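The new surface is essentially the two accessors named above plus a hand-maintained conversion enum, roughly (signatures sketched and the variant list abridged; `ValueRef` stands in for the opaque LLVM value handle):
``` rust
use std::os::raw::c_void;

pub type ValueRef = *mut c_void; // stand-in for the opaque LLVM value pointer

// Kept in sync with the C++ side by hand; the conversion to and from LLVM's
// own linkage enum happens inside the two LLVMRust* functions below.
#[repr(C)]
#[derive(Copy, Clone)]
pub enum Linkage {
    External,
    AvailableExternally,
    LinkOnceAny,
    LinkOnceODR,
    WeakAny,
    WeakODR,
    Internal,
    Private,
    // ... remaining variants elided
}

extern "C" {
    pub fn LLVMRustGetLinkage(value: ValueRef) -> Linkage;
    pub fn LLVMRustSetLinkage(value: ValueRef, linkage: Linkage);
}
```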
Fixes #33992
A new target, `s390x-unknown-linux-gnu`, has been added to the compiler
and can be used to build no_core/no_std Rust programs.
Known limitations:
- librustc_trans/cabi_s390x.rs is missing. This means no support for
`extern "C" fn`.
- No support for this arch in libc. This means std cannot be cross-compiled
for this target.
Macro expansions produce code tagged with debug locations that are completely different from the surrounding expressions. This wreaks havoc on the debugger's ability to step over source lines.
In order to have a good line stepping behavior in debugger, we overwrite debug locations of macro expansions with that of the outermost expansion site.
LLVM upgrade
As discussed in https://internals.rust-lang.org/t/need-help-with-emscripten-port/3154/46 I'm trying to update the used LLVM checkout in Rust.
I basically took @shepmaster's code and applied it on top (though I did the commits manually; the [original commits have better descriptions](https://github.com/rust-lang/rust/compare/master...avr-rust:avr-support)).
With these changes I was able to build rustc. `make check` throws one last error on `run-pass/issue-28950.rs`. Output: https://gist.github.com/badboy/bcdd3bbde260860b6159aa49070a9052
I took the metadata changes as is and they seem to work, though it now uses the module in another step. I'm not sure if this is the best and correct way.
Things to do:
* [x] ~~Make `run-pass/issue-28950.rs` pass~~ unrelated
* [x] Find out how the `PositionIndependentExecutable` setting is now used
* [x] Is the `llvm::legacy` still the right way to do these things?
cc @brson @alexcrichton
This is to pull in changes to support ARM MUSL targets.
This change also commits a couple of other cargo-generated changes
to other dependencies in the various Cargo.toml files.
This is a spiritual successor to #34268/8531d581, in which we replaced a
number of matches of None to the unit value with `if let` conditionals
where it was judged that this made for clearer/simpler code (as would be
recommended by Manishearth/rust-clippy's `single_match` lint). The same
rationale applies to matches of None to the empty block.
Compute `target_feature` from LLVM
This is a work-in-progress fix for #31662.
The logic that computes the target features from the command line has been replaced with queries to the `TargetMachine`.
When reusing a definition across codegen units, we obviously cannot use
internal linkage, but using external linkage means that we can end up
with multiple conflicting definitions of a single symbol across
multiple crates. Since the definitions should all be equal
semantically, we can use weak_odr linkage to resolve the situation.
Fixes #32518
We use a 64bit integer to pass the set of attributes that is to be
removed, but the called C function expects a 32bit integer. On most
platforms this doesn't cause any problems other than being unable to
unset some attributes, but on ARM even the lower 32bit aren't handled
correctly because the 64bit value is passed in different registers, so
the C function actually sees random garbage.
So we need to fix the relevant functions to use 32bit integers instead.
Additionally we need an implementation that actually accepts 64bit
integers because some attributes can only be unset that way.
Fixes #32360
`fast`, a.k.a. UnsafeAlgebra, is the flag for enabling all "unsafe"
(according to LLVM) float optimizations.
See LangRef for more information http://llvm.org/docs/LangRef.html#fast-math-flags
Providing these operations with less precise associativity rules (for
example) is useful to numerical applications.
For example, the summation loop:
    let mut sum = 0.;
    for element in data {
        sum += *element;
    }
Using the default floating point semantics, this loop expresses that the
floats must be added in sequence, one after another. This constraint
is usually completely unintended, and it means that no autovectorization
is possible.
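As a sketch of what the relaxed semantics enable (using the unstable `fadd_fast` intrinsic purely as an illustration of an operation carrying the `fast` flag):
``` rust
#![feature(core_intrinsics)]

use std::intrinsics::fadd_fast;

// Summing with an addition that carries the `fast` flag, so LLVM is allowed
// to reassociate and vectorize the reduction.
fn sum_fast(data: &[f32]) -> f32 {
    let mut sum = 0.0f32;
    for &x in data {
        // `fast` math assumes no NaNs/infinities and permits reordering;
        // the caller must accept the resulting imprecision.
        sum = unsafe { fadd_fast(sum, x) };
    }
    sum
}

fn main() {
    let data = [1.0f32, 2.0, 3.0, 4.0];
    assert_eq!(sum_fast(&data), 10.0);
}
```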