Add compiler support for LLVM's x86_64 ERMSB feature
This change is needed for compiler-builtins to check for this feature
when implementing memcpy/memset. See:
https://github.com/rust-lang/compiler-builtins/pull/365
Without this change, the following code compiles, but does nothing:
```rust
#[cfg(target_feature = "ermsb")]
pub unsafe fn ermsb_memcpy() { ... }
```
The change just does compile-time detection. I think that runtime
detection will have to come in a follow-up CL to std-detect.
Like all the CPU feature flags, this just references #44839.
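As a rough illustration of the compile-time side (the helper below is hypothetical and not part of compiler-builtins), code can now meaningfully gate on the feature when built with `-C target-feature=+ermsb` or a target CPU that enables it:
```rust
// Hypothetical helper, shown only to illustrate compile-time detection of the
// newly recognised feature; not the compiler-builtins implementation.
pub fn have_ermsb() -> bool {
    // `cfg!` is evaluated at compile time; it is true only if the crate was
    // compiled with the `ermsb` target feature enabled.
    cfg!(target_feature = "ermsb")
}
```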
Signed-off-by: Joe Richey <joerichey@google.com>
Use check-pass in single-use-lifetime ui tests
Rationale: the `single_use_lifetimes` lint is used during late name resolution, which is within the scope of `check-pass` and does not require codegen or linking.
Helps remove some FIXMEs associated with #62277. Additionally, this tidies up the touched test files.
Add [T]::as_chunks(_mut)
Allows getting the slices directly, rather than just through an iterator as in `array_chunks(_mut)`. The constructors for those iterators are then written in terms of these methods, so the iterator constructors no longer have any `unsafe` of their own.
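A rough sketch of the intended usage (nightly-only; the feature gate name below is an assumption and the API may change before stabilization):
```rust
// Requires a nightly compiler; the feature gate name here is an assumption
// based on the unstable API and may differ.
#![feature(slice_as_chunks)]

fn sum_pairs(data: &[u32]) -> (u32, &[u32]) {
    // Split into complete `[u32; 2]` chunks plus the (possibly empty) remainder.
    let (chunks, remainder) = data.as_chunks::<2>();
    (chunks.iter().map(|&[a, b]| a + b).sum(), remainder)
}

fn main() {
    let (sum, rest) = sum_pairs(&[1, 2, 3, 4, 5]);
    assert_eq!(sum, 10);
    assert_eq!(rest, &[5]);
}
```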
Unstable, of course. #74985
Remove unused set-discriminant statements and assignments regardless of rvalue
* Represent use counts with u32
* Unify use count visitors
* Change RemoveStatements visitor into a function
* Remove unused set-discriminant statements
* Use exhaustive match to clarify what is being optimized
* Remove unused assignments regardless of rvalue kind
rustc_mir: track inlined callees in SourceScopeData.
We now record which MIR scopes are the roots of *other* (inlined) functions' scope trees, which allows us to generate the correct debuginfo in codegen, similar to what LLVM inlining generates.
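Roughly, the idea is to give each scope an optional record of the inlined callee it originates from. The types below are illustrative stand-ins, not the actual rustc definitions:
```rust
// Illustrative sketch only; the real rustc types (`Instance`, `Span`,
// `SourceScope`) differ. This just shows the shape of the change.
type SourceScope = usize; // index of a scope within the MIR body (stand-in)

#[derive(Clone, Copy, Debug)]
struct Span; // stand-in for rustc's source span

struct SourceScopeData {
    parent_scope: Option<SourceScope>,
    /// `Some((callee, call_site))` if this scope is the root of a scope tree
    /// that was inlined from `callee` at `call_site`. Codegen can walk this to
    /// emit inlined-function debuginfo and to resolve `#[track_caller]`.
    inlined: Option<(String, Span)>, // `String` stands in for the callee's `Instance`
}
```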
This PR makes the `ui` test `backtrace-debuginfo` pass, if the MIR inliner is turned on by default.
Also, `#[track_caller]` is now correct in the face of MIR inlining (cc `@anp`).
Fixes #76997.
r? `@rust-lang/wg-mir-opt`
Add cg_clif as optional codegen backend
`rustc_codegen_cranelift` is an alternative codegen backend for rustc, based on Cranelift. It has the potential to improve compilation times in debug mode; in my experience, the compile-time improvement over debug-mode LLVM for a clean build is about 20-30% in most cases.
This PR adds cg_clif as an optional codegen backend. By default it is only enabled for `./x.py check`. It can be enabled for `./x.py build` too by adding `cranelift` to the `rust.codegen-backends` array in `config.toml`.
MCP: https://github.com/rust-lang/compiler-team/issues/270
r? `@Mark-Simulacrum`
Allow creating a list of files shipped in a release
This PR adds the `BUILD_MANIFEST_SHIPPED_FILES_PATH` environment variable to `build-manifest`; when it is set, the tool writes a list of all the files referenced in the manifest to the path given by the variable. This lets `promote-release` prune unused files before publishing a release.
This PR **does not implement any pruning**, it just adds support for it to be implemented in the future on `promote-release`'s side.
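A minimal sketch of the hook (illustrative only, not the actual `build-manifest` code), assuming the tool already has the list of referenced file names at hand:
```rust
// Illustrative sketch; the function name and surrounding structure are assumptions.
use std::{env, fs::File, io::{self, Write}};

fn maybe_write_shipped_files(shipped_files: &[String]) -> io::Result<()> {
    // Only write the list when the environment variable is set.
    if let Ok(path) = env::var("BUILD_MANIFEST_SHIPPED_FILES_PATH") {
        let mut out = File::create(path)?;
        for name in shipped_files {
            writeln!(out, "{}", name)?;
        }
    }
    Ok(())
}
```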
r? `@Mark-Simulacrum`
Update affected ui & incremental tests to use user-declared variable bindings instead of temporaries. The former are preserved because of debuginfo; the latter are not.
The simplify-locals implementation uses two different visitors to update the use counts of locals. The `DeclMarker` calculates the initial use counts, while the `StatementDeclMarker` updates the use counts as statements are being removed from the block.
Replace them with a single visitor that can operate in either mode, ensuring consistent behaviour.
Additionally use exhaustive match to clarify what is being optimized.
No functional changes intended.
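A simplified sketch of the unified visitor (illustrative only; the names and the real MIR visitor machinery in rustc differ):
```rust
// Illustrative sketch, not the actual rustc code: one visitor, two modes.
enum UpdateMode {
    /// Build the initial use counts for every local.
    Count,
    /// Decrement counts as statements are removed from a block.
    Remove,
}

struct UseCounter<'a> {
    mode: UpdateMode,
    use_counts: &'a mut Vec<u32>, // counts are stored as `u32`, as in this PR
}

impl UseCounter<'_> {
    /// Called for every use of `local` that the (real) MIR visitor encounters.
    fn visit_local(&mut self, local: usize) {
        match self.mode {
            UpdateMode::Count => self.use_counts[local] += 1,
            UpdateMode::Remove => self.use_counts[local] -= 1,
        }
    }
}
```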
Optimise align_offset for stride=1 further
The `stride == 1` case can be computed more efficiently as `-p (mod a)`. That then translates to a nice and short sequence of LLVM instructions:
```llvm
%address = ptrtoint i8* %p to i64
%negptr = sub i64 0, %address
%offset = and i64 %negptr, %a_minus_one
```
This produces pretty much ideal codegen when the function is used in isolation.
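In Rust terms, the `stride == 1` computation amounts to a negate-and-mask (the function below is an illustrative sketch, not the exact library code):
```rust
// Illustrative sketch of the `stride == 1` case, not the exact std code.
fn align_offset_stride1(p: *const u8, a: usize) -> usize {
    // Alignments are always powers of two, so `-p (mod a)` can be computed by
    // negating the address (mod 2^N) and masking with `a - 1`.
    debug_assert!(a.is_power_of_two());
    (p as usize).wrapping_neg() & (a - 1)
}
```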
Typical use of this function will, however, involve use of
the result to offset a pointer, i.e.
```llvm
%aligned = getelementptr inbounds i8, i8* %p, i64 %offset
```
This still looks very good, but LLVM does not really translate that to what would be considered ideal machine code (on any target). For example, this is the codegen we obtain for an unknown alignment:
```asm
; x86_64
dec rsi
mov rax, rdi
neg rax
and rax, rsi
add rax, rdi
```
In particular, negating a pointer is not something that CISC architectures like x86_64 are designed to optimise for; they are much better at offsetting pointers. So we'd love to utilize this ability and produce code that's more like this:
```asm
; x86_64
lea rax, [rsi + rdi - 1]
neg rsi
and rax, rsi
```
To achieve this we need to give LLVM an opportunity to apply the various peephole optimisations it performs during DAG selection. In particular, the `and` instruction appears to be a major inhibitor here.
We cannot, sadly, get rid of this load-bearing operation, but we can
reorder operations such that LLVM has more to work with around this
instruction.
One such ordering is proposed in #75579 and results in LLVM IR that
looks broadly like this:
```llvm
; using add enables `lea` and similar CISCisms
%offset_ptr = add i64 %address, %a_minus_one
%mask = sub i64 0, %a
%masked = and i64 %offset_ptr, %mask
; can be folded with `gepi` that may follow
%offset = sub i64 %masked, %address
```
…and generates the intended x86_64 machine code.
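Expressed in Rust, the reordered formulation rounds the address up to the next multiple of `a` and then subtracts the original address (again an illustrative sketch, not the library code):
```rust
// Illustrative sketch of the reordered formulation, not the exact std code.
fn align_offset_stride1_reordered(p: *const u8, a: usize) -> usize {
    debug_assert!(a.is_power_of_two());
    let address = p as usize;
    // `address + a - 1`: this `add` is what enables `lea`-style folding.
    let offset_ptr = address.wrapping_add(a - 1);
    // For a power-of-two `a`, `-a` is the mask `!(a - 1)`.
    let masked = offset_ptr & a.wrapping_neg();
    // The trailing `sub` can be folded into a following `getelementptr`.
    masked.wrapping_sub(address)
}
```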
One might also wonder how the increased amount of code would impact a
RISC target. Turns out not much:
```asm
; aarch64 previous        ; aarch64 new
sub x8, x1, #1            add x8, x1, x0
neg x9, x0                sub x8, x8, #1
and x8, x9, x8            neg x9, x1
add x0, x0, x8            and x0, x8, x9
```
(and similarly for ppc, sparc, mips, riscv, etc)
The only target that seems to do worse is… wasm32.
Onto actual measurements: the best way to evaluate snippets like these is to use llvm-mca. As the AArch64 assembly would suggest, there isn't any performance difference to be found there; both snippets execute in the same number of cycles for the CPUs I tried. On x86_64, however, we get a throughput improvement of >50%!
Fixes #75579
Rollup of 10 pull requests
Successful merges:
- #74477 (`#[deny(unsafe_op_in_unsafe_fn)]` in sys/wasm)
- #77836 (transmute_copy: explain that alignment is handled correctly)
- #78126 (Properly define va_arg and va_list for aarch64-apple-darwin)
- #78137 (Initialize tracing subscriber in compiletest tool)
- #78161 (Add issue template link to IRLO)
- #78214 (Tweak match arm semicolon removal suggestion to account for futures)
- #78247 (Fix #78192)
- #78252 (Add codegen test for #45964)
- #78268 (Do not try to report on closures to avoid ICE)
- #78295 (Add some regression tests)
Failed merges:
r? `@ghost`