Update the MIR inline costs
Account for the additional code generated when MIR is lowered to LLVM IR:
a landing pad generates 10 LLVM IR instructions,
and a resume generates 9.
r? @wesleywiser
Prefetch some queries used by the metadata encoder
This brings the time for `metadata encoding and writing` for `syntex_syntax` from 1.338s to 0.997s with 6 threads in non-incremental debug mode.
r? @Mark-Simulacrum
Rollup of 16 pull requests
Successful merges:
- #65097 (Make std::sync::Arc compatible with ThreadSanitizer)
- #69033 (Use generator resume arguments in the async/await lowering)
- #69997 (add `Option::{zip,zip_with}` methods under "option_zip" gate)
- #70038 (Remove the call that makes miri fail)
- #70058 (can_begin_literal_maybe_minus: `true` on `"-"? lit` NTs.)
- #70111 (BTreeMap: remove shared root)
- #70139 (add delay_span_bug to TransmuteSizeDiff, just to be sure)
- #70165 (Remove the erase regions MIR transform)
- #70166 (Derive PartialEq, Eq and Hash for RangeInclusive)
- #70176 (Add tests for #58319 and #65131)
- #70177 (Fix oudated comment for NamedRegionMap)
- #70184 (expand_include: set `.directory` to dir of included file.)
- #70187 (more clippy fixes)
- #70188 (Clean up E0439 explanation)
- #70189 (Abi::is_signed: assert that we are a Scalar)
- #70194 (#[must_use] on split_off())
Failed merges:
r? @ghost
Abi::is_signed: assert that we are a Scalar
A bit more sanity checking, suggested by @eddyb. This makes this method actually "safer" than `TyS::is_signed`, so I made sure Miri consistently uses the `Abi` version.
Though I am not sure whether this would have caught the mistake where the layout of a zero-sized enum was queried for its sign.
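For illustration, a minimal sketch of the shape of the check, using simplified stand-in types rather than the real `rustc_target::abi` definitions:
```rust
// Stand-ins for the rustc_target::abi types; this mirrors only the shape
// of the check, it is not the real rustc source.
enum Primitive {
    Int { signed: bool },
    Float,
}

enum Abi {
    Scalar(Primitive),
    Aggregate, // stand-in for every non-scalar variant
}

impl Abi {
    fn is_signed(&self) -> bool {
        match self {
            Abi::Scalar(Primitive::Int { signed }) => *signed,
            Abi::Scalar(_) => false,
            // The added assertion: asking a non-scalar layout for its sign
            // is a bug in the caller rather than a "false" answer.
            _ => panic!("`is_signed` called on non-scalar ABI"),
        }
    }
}

fn main() {
    assert!(Abi::Scalar(Primitive::Int { signed: true }).is_signed());
    assert!(!Abi::Scalar(Primitive::Float).is_signed());
}
```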
r? @eddyb
Derive PartialEq, Eq and Hash for RangeInclusive
The manual implementation of `PartialEq`, `Eq` and `Hash` for `RangeInclusive` was functionally equivalent to a derived implementation.
This change removes the manual implementation and adds the respective derives.
A side effect of this change is that the derives also add implementations for `StructuralPartialEq` and `StructuralEq`, which enables `RangeInclusive` to be used in const generics, closing #70155.
This change is enabled by #68835, which changed the field `is_empty: Option<bool>` to `exhausted: bool` removing the need for *semantic* equality instead of *structural* equality.
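To illustrate that last point, a hypothetical sketch of the pre-#68835 layout (the type and field names here are stand-ins): a derived, structural `PartialEq` would have compared the lazily filled cache field and judged semantically equal ranges unequal.
```rust
// Hypothetical stand-in for the old layout with a cached emptiness flag.
#[derive(PartialEq)]
struct OldRangeInclusive {
    start: i32,
    end: i32,
    is_empty: Option<bool>, // lazily computed cache
}

fn main() {
    let a = OldRangeInclusive { start: 1, end: 5, is_empty: None };        // cache not yet filled
    let b = OldRangeInclusive { start: 1, end: 5, is_empty: Some(false) }; // cache filled
    // Semantically the same range, but structural comparison sees the cache:
    assert!(a != b);
}
```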
## PartialEq
original [`PartialEq`](f4c675c476/src/libcore/ops/range.rs (L353-L359)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: PartialEq> PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        self.start == other.start && self.end == other.end && self.exhausted == other.exhausted
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralPartialEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::PartialEq> crate::cmp::PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) == (*__self_1_0) && (*__self_0_1) == (*__self_1_1) && (*__self_0_2) == (*__self_1_2)
                }
            },
        }
    }
    #[inline]
    fn ne(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) != (*__self_1_0) || (*__self_0_1) != (*__self_1_1) || (*__self_0_2) != (*__self_1_2)
                }
            },
        }
    }
}
```
These implementations both test for *structural* equality, with the same order of field comparisons, and the bound `Idx: PartialEq` is the same.
## Eq
original [`Eq`](f4c675c476/src/libcore/ops/range.rs (L361-L362)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Eq> Eq for RangeInclusive<Idx> {}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::Eq> crate::cmp::Eq for RangeInclusive<Idx> {
    #[inline]
    #[doc(hidden)]
    fn assert_receiver_is_total_eq(&self) -> () {
        {
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<bool>;
        }
    }
}
```
These implementations are equivalent since `Eq` is just a marker trait and the bound `Idx: Eq` is the same.
## Hash
original [`Hash`](f4c675c476/src/libcore/ops/range.rs (L364-L371)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Hash> Hash for RangeInclusive<Idx> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.start.hash(state);
        self.end.hash(state);
        self.exhausted.hash(state);
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::hash::Hash> crate::hash::Hash for RangeInclusive<Idx> {
    fn hash<__H: crate::hash::Hasher>(&self, state: &mut __H) -> () {
        match *self {
            RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                crate::hash::Hash::hash(&(*__self_0_0), state);
                crate::hash::Hash::hash(&(*__self_0_1), state);
                crate::hash::Hash::hash(&(*__self_0_2), state)
            }
        }
    }
}
```
These implementations are functionally equivalent, with the same order of field hashing, and the bound `Idx: Hash` is the same.
BTreeMap: remove shared root
This replaces the shared root with `Option`s in the BTreeMap code and then slightly cleans up the node manipulation code, taking advantage of the shared root's removal. I expect that further simplification is possible, but I wanted to get this posted for initial review.
Note that `BTreeMap::new()` continues to not allocate.
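For a sense of the shape of the change, a minimal sketch with hypothetical simplified types (not the actual std internals):
```rust
struct Node<K, V> {
    keys: Vec<K>,
    vals: Vec<V>,
}

struct Map<K, V> {
    // Previously a pointer that could alias a static "shared root";
    // now simply None until the first insertion.
    root: Option<Box<Node<K, V>>>,
    length: usize,
}

impl<K, V> Map<K, V> {
    fn new() -> Self {
        // Still allocation-free: no node is created up front.
        Map { root: None, length: 0 }
    }
}

fn main() {
    let m: Map<i32, i32> = Map::new();
    assert!(m.root.is_none());
    assert_eq!(m.length, 0);
}
```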
Benchmarks seem within the margin of error/unaffected, as expected for an entirely predictable branch.
```
 name                                  alloc-bench-a ns/iter  alloc-bench-b ns/iter  diff ns/iter  diff %  speedup
 btree::map::iter_mut_20                                  20                     21             1   5.00%   x 0.95
 btree::set::clone_100                                 1,360                  1,439            79   5.81%   x 0.95
 btree::set::clone_100_and_into_iter                   1,319                  1,434           115   8.72%   x 0.92
 btree::set::clone_10k                               143,515                150,991         7,476   5.21%   x 0.95
 btree::set::clone_10k_and_clear                     142,792                152,916        10,124   7.09%   x 0.93
 btree::set::clone_10k_and_into_iter                 146,019                154,561         8,542   5.85%   x 0.94
```
can_begin_literal_maybe_minus: `true` on `"-"? lit` NTs.
Make `can_begin_literal_or_bool` (renamed to `can_begin_literal_maybe_minus`) accept `NtLiteral(e) | NtExpr(e)` where `e` is either a literal or a negated literal.
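A sketch of the kind of input this affects (an assumed reproduction, not necessarily the exact case from the issue): a `literal` fragment capturing `-1` is forwarded into pattern position, where the parser has to recognize that the nonterminal can begin a literal.
```rust
macro_rules! matches_lit {
    ($x:expr, $lit:literal) => {
        match $x {
            // `$lit` arrives as an NtLiteral; with this change the parser
            // accepts a negated literal like `-1` in this position.
            $lit => true,
            _ => false,
        }
    };
}

fn main() {
    assert!(matches_lit!(-1, -1));
    assert!(!matches_lit!(0i32, -1));
}
```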
Fixes https://github.com/rust-lang/rust/issues/70050.
r? @petrochenkov
add `Option::{zip,zip_with}` methods under "option_zip" gate
This PR introduces two methods, `Option::zip` and `Option::zip_with`, with the
respective signatures:
- zip: `(Option<T>, Option<U>) -> Option<(T, U)>`
- zip_with: `(Option<T>, Option<U>, (T, U) -> R) -> Option<R>`
Both are under the feature gate "option_zip".
I'm not sure about the name "zip"; maybe we can find a better name for this.
(I would prefer `union`, for example, but that's a keyword :( )
--------------------------------------------------------------------------------
Recently, in a Russian Rust beginners Telegram chat, a newbie asked (translated):
> Are there any methods for these conversions:
>
> 1. `(Option<A>, Option<B>) -> Option<(A, B)>`
> 2. `Vec<Option<T>> -> Option<Vec<T>>`
>
> ?
While the second (2.) is clearly `vec.into_iter().collect::<Option<Vec<_>>>()`, the
first one isn't as obvious.
I couldn't find anything similar in `core`, so I came up with this solution:
```rust
let tuple: (Option<A>, Option<B>) = ...;
let res: Option<(A, B)> = tuple.0.and_then(|a| tuple.1.map(|b| (a, b)));
```
However, this solution isn't "nice" (the same goes for a plain `match`/`if let`), so I thought
that this functionality should be in `core`.
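A usage sketch on a nightly toolchain with the gate enabled:
```rust
#![feature(option_zip)]

fn main() {
    let a = Some(1);
    let b = Some("one");

    // zip: (Option<T>, Option<U>) -> Option<(T, U)>
    assert_eq!(a.zip(b), Some((1, "one")));
    assert_eq!(a.zip(None::<&str>), None);

    // zip_with: combine both inner values with a closure.
    assert_eq!(Some(2).zip_with(Some(3), |x, y| x * y), Some(6));
    assert_eq!(None::<i32>.zip_with(Some(3), |x, y| x * y), None);
}
```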
Use generator resume arguments in the async/await lowering
This removes the TLS requirement from async/await and enables it in `#![no_std]` crates.
Closes https://github.com/rust-lang/rust/issues/56974
I'm not confident the HIR lowering is completely correct; there seem to be quite a few undocumented invariants in there. The `async-std` and tokio test suites are passing with these changes, though.
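For context, a sketch of what generator resume arguments look like, using the nightly `generators`/`generator_trait` features as they existed around this change; the lowering threads the task context through `resume` in the same way instead of stashing it in TLS:
```rust
#![feature(generators, generator_trait)]

use std::ops::{Generator, GeneratorState};
use std::pin::Pin;

fn main() {
    let mut gen = |mut arg: u32| {
        // Each `yield` expression evaluates to the argument passed to the
        // next `resume` call.
        arg = yield arg + 1;
        arg + 10
    };
    assert_eq!(Pin::new(&mut gen).resume(2), GeneratorState::Yielded(3));
    assert_eq!(Pin::new(&mut gen).resume(5), GeneratorState::Complete(15));
}
```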
Make std::sync::Arc compatible with ThreadSanitizer
The memory fences previously used in the `Arc` implementation are not properly
understood by ThreadSanitizer as synchronization primitives. This had the
unfortunate effect that running any non-trivial program compiled with
`-Z sanitizer=thread` would result in numerous false positives.
Replace acquire fences with acquire loads to address the issue.
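A simplified sketch of the pattern being changed (illustrative only, not the actual `Arc` source):
```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Before: decrement with Release, then a standalone Acquire fence on the
// last reference. ThreadSanitizer does not model the fence, so it reports
// a race between the final decrement and the teardown that follows.
fn last_ref_fence(strong: &AtomicUsize) -> bool {
    if strong.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    fence(Ordering::Acquire);
    true // caller may now drop the shared data
}

// After: an Acquire load of the same counter provides the synchronization
// edge in a form ThreadSanitizer understands.
fn last_ref_load(strong: &AtomicUsize) -> bool {
    if strong.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    strong.load(Ordering::Acquire);
    true
}

fn main() {
    assert!(last_ref_fence(&AtomicUsize::new(1)));
    assert!(last_ref_load(&AtomicUsize::new(1)));
}
```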
Fixes #39608.
This makes ensure_root_is_owned return a reference to the (now guaranteed to
exist) root, allowing callers to operate on it without going through another
unwrap.
Unfortunately, this is only rarely useful, as both the length and the root
frequently need to be accessed together, and field-level borrows in methods
don't yet exist.
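A sketch of that signature, with a hypothetical simplified node type:
```rust
struct Node; // stand-in for the real root node type

struct Map {
    root: Option<Box<Node>>,
}

impl Map {
    // Returns a reference to the (now guaranteed to exist) root, so the
    // caller doesn't need to re-borrow and unwrap `self.root` afterwards.
    fn ensure_root_is_owned(&mut self) -> &mut Node {
        self.root.get_or_insert_with(|| Box::new(Node))
    }
}

fn main() {
    let mut map = Map { root: None };
    let _root: &mut Node = map.ensure_root_is_owned();
    assert!(map.root.is_some());
}
```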
Add regression test for TAIT lifetime inference (issue #55099)
Fixes #55099
The minimized reproducer in issue #55099 now compiles successfully.
This commit adds a regression test for it.
codegen/mir: support polymorphic `InstanceDef`s
cc #69925
This PR modifies the use of `subst_and_normalize_erasing_regions` on parts of the MIR bodies returned from `instance_mir`, so that `InstanceDef::CloneShim` and `InstanceDef::DropGlue` (where there is a type) do not perform substitutions. This avoids double substitutions and enables polymorphic `InstanceDef`s.
r? @eddyb
cc @nikomatsakis
Clarify the relationship between `forget()` and `ManuallyDrop`.
As discussed on reddit, this commit addresses two issues with the
documentation of `mem::forget()`:
* The documentation of `mem::forget()` can confuse the reader because of the
discrepancy between the usage examples, which show correct usage, and the
accompanying text, which speaks of the possibility of a double free. The
text that says "if the panic occurs before `mem::forget` was called"
refers to a variant of the second example that was never shown, modified
to use `mem::forget` instead of `ManuallyDrop`. Ideally the documentation
should show both variants, so it's clear what it's talking about.
Also, the double free could be fixed just by placing `mem::forget(v)`
before the construction of `s`. Since the lifetimes of `s` and `v`
wouldn't overlap, there would be no point where a panic could cause a double
free. This could be mentioned and contrasted against the more robust fix
of using `ManuallyDrop` (both variants are sketched after this list).
* This sentence seems unjustified: "For some types, operations such as
passing ownership (to a function like `mem::forget`) requires them to
actually be fully owned right now [...]". Unlike C++, Rust has no move
constructors; its moves are (possibly elided) bitwise copies. Even if you
pass an invalid object to `mem::forget`, no harm should come to pass,
because `mem::forget` consumes the object and exists solely to prevent
drop, so there is no one left to observe the invalid state.
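Both variants, sketched as illustrative code (not the actual documentation text):
```rust
use std::mem::{self, ManuallyDrop};

// Variant 1: call `mem::forget(v)` before `s` is constructed, so the two
// owners of the allocation never exist at the same time.
fn forget_before() {
    let mut v = vec![65u8, 122];
    let ptr = v.as_mut_ptr();
    let len = v.len();
    let cap = v.capacity();
    mem::forget(v);
    let s = unsafe { String::from_raw_parts(ptr, len, cap) };
    assert_eq!(s, "Az");
}

// Variant 2: the more robust fix; the destructor of `v` is suppressed up
// front, which stays correct even if code is later inserted in between.
fn with_manually_drop() {
    let mut v = ManuallyDrop::new(vec![65u8, 122]);
    let ptr = v.as_mut_ptr();
    let len = v.len();
    let cap = v.capacity();
    let s = unsafe { String::from_raw_parts(ptr, len, cap) };
    assert_eq!(s, "Az");
}

fn main() {
    forget_before();
    with_manually_drop();
}
```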
The memory fences previously used in the `Arc` implementation are not properly
understood by ThreadSanitizer as synchronization primitives. This had the
unfortunate effect that running any non-trivial program compiled with
`-Z sanitizer=thread` would result in numerous false positives.
Replace acquire fences with acquire loads when using ThreadSanitizer to
address the issue.