Looks like the packaging step for the standard library was happening
twice on CI, but it only needs to happen once! The `Analysis` packaging
step accidentally packaged `Std` itself instead of merely relying on `Std`
having already been compiled, which meant we ended up packaging it twice.
A more generic interface for dataflow analysis
#64470 requires a transfer function that is slightly more complex than the typical `gen`/`kill` one. Namely, it must copy state bits between locals when assignments occur (see #62547 for an attempt to make this fit into the existing framework). This PR contains a dataflow interface that allows for arbitrary transfer functions. The trade-off is efficiency: we can no longer coalesce transfer functions for blocks and must visit each statement individually while iterating to fixpoint.
Another issue is that poorly behaved transfer functions can result in an analysis that fails to converge. `gen`/`kill` sets do not have this problem. I believe that, to guarantee convergence, the transfer function must be monotone: flipping a bit from `false` to `true` in the entry set must never cause an output bit to go from `true` to `false` (negate all the preceding booleans when `true` is the bottom value). Perhaps someone with a more formal background can confirm, and we can add a section to the docs?
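For concreteness, here is a minimal sketch (with made-up types, not the interface added by this PR) of the kind of transfer function #64470 needs: an assignment copies the state bit of the source local into that of the destination, which cannot be expressed as a fixed `gen`/`kill` pair but is still monotone in the sense above.
```rust
// Made-up types for illustration; not the interface added by this PR.
type Local = usize;
type State = Vec<bool>; // one bit of analysis state per local

enum Statement {
    Assign { dest: Local, src: Local }, // `dest = src`
    Other,
}

// A statement-level transfer function: an assignment copies the state bit
// of `src` into `dest`. The effect depends on the incoming state, so it
// cannot be expressed as a constant gen/kill pair, but it is still
// monotone: raising an input bit from `false` to `true` can only raise
// output bits, never lower them.
fn apply_statement_effect(state: &mut State, stmt: &Statement) {
    match stmt {
        Statement::Assign { dest, src } => {
            let bit = state[*src];
            state[*dest] = bit;
        }
        Statement::Other => {}
    }
}
```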
This approach is not maximally generic: it still requires that the lattice used for the analysis is the powerset of values of `Analysis::Idx` for the `mir::Body` of interest. Lifting that restriction can be done at a later date. Also, this is the bare minimum needed to get #64470 working: I've not adapted the existing debug framework to work with the new analysis, so there are no `rustc_peek` tests either. I'm planning to do this after #64470 is merged.
Finally, my ultimate plan is to make the existing, `gen`/`kill`-based `BitDenotation` a special case of `generic::Analysis`. Currently they share a ton of code. I should be able to do this without changing any implementers of `BitDenotation`. Something like:
```rust
struct GenKillAnalysis<A: BitDenotation> {
    trans_for_block: IndexVec<BasicBlock, GenKillSet<A::Idx>>,
    analysis: A,
}

impl<A: BitDenotation> generic::Analysis for GenKillAnalysis<A> {
    // specializations of `apply_{partial,whole}_block_effect`...
}
```
r? @pnkfelix
Polonius: more `ui` test suite fixes
Since #62736, new tests have been added, and the `run-pass` suite was merged into the `ui` suite.
This PR adds the missing test expectations for Polonius, and updates the existing ones where the NLL output has changed in some manner (e.g. the ordering of notes).
Those are the trivial cases; a more detailed explanation of the remaining ones is available [in this write-up](https://hackmd.io/CjYB0fs4Q9CweyeTdKWyEg?both#26-async-awaitasync-borrowck-escaping-closure-errorrs-outputs-from-NLL-Polonius-diff), starting at test case 26: they differ only in diagnostics, or are instances of differences already seen in other existing test cases.
Only 3 of the 9020 tests are still "failing" at the moment (1 failure, 2 OOMs).
r? @nikomatsakis
The super-hot call site of `inlined_shallow_resolve()` basically does
`r.inlined_shallow_resolve(ty) != ty`. This commit introduces a
version of that function specialized for that particular call pattern,
`shallow_resolve_changed()`. Incredibly, this reduces the instruction
count for `keccak` by 5%.
The commit also renames `inlined_shallow_resolve()` as
`shallow_resolve()` and removes the `inline(always)` annotation, as it's
no longer nearly so hot.
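For illustration, here is a minimal sketch of the specialization described above, using hypothetical stand-in types rather than the actual inference-context API:
```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real inference types.
#[derive(Clone, PartialEq)]
enum Ty {
    Int,
    Infer(u32), // an unresolved inference variable
}

struct Resolver {
    // Inference variables that have been resolved to a concrete type.
    bindings: HashMap<u32, Ty>,
}

impl Resolver {
    // The hot call site did the equivalent of
    // `resolver.shallow_resolve(ty) != ty`, constructing a resolved type
    // just to compare it against the original.
    fn shallow_resolve(&self, ty: &Ty) -> Ty {
        match ty {
            Ty::Infer(v) => self.bindings.get(v).cloned().unwrap_or(Ty::Infer(*v)),
            other => other.clone(),
        }
    }

    // The specialized query answers "would resolution change this type?"
    // directly, without building and comparing a replacement type.
    fn shallow_resolve_changed(&self, ty: &Ty) -> bool {
        match ty {
            Ty::Infer(v) => self.bindings.contains_key(v),
            _ => false,
        }
    }
}
```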
Replace `state_for_location` with `DataflowResultsCursor`
These are two different ways of getting the same data from the result of a dataflow analysis. However, `state_for_location` goes quadratic if you try to call it for every statement in the body.
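A rough sketch of the difference, with made-up types rather than the actual dataflow API: re-deriving the state from the block entry for every statement repeats work, while a cursor carries the state forward one statement at a time.
```rust
// Made-up, simplified types for illustration.
type State = Vec<bool>;

struct BlockResults {
    entry_state: State,
    statements: Vec<fn(&mut State)>, // per-statement transfer functions
}

// `state_for_location`-style: recompute from the block entry every time.
// Calling this for every statement i = 0..n applies O(n^2) statement effects.
fn state_at(results: &BlockResults, loc: usize) -> State {
    let mut state = results.entry_state.clone();
    for stmt_effect in &results.statements[..loc] {
        stmt_effect(&mut state);
    }
    state
}

// Cursor-style: keep the current state and advance one statement at a time,
// so visiting every statement in order applies O(n) statement effects.
struct Cursor<'a> {
    results: &'a BlockResults,
    state: State,
    pos: usize,
}

impl<'a> Cursor<'a> {
    fn new(results: &'a BlockResults) -> Self {
        Cursor { state: results.entry_state.clone(), results, pos: 0 }
    }

    fn seek_forward(&mut self, loc: usize) {
        assert!(loc >= self.pos, "this simple cursor only moves forward");
        let effects = &self.results.statements[self.pos..loc];
        for stmt_effect in effects {
            stmt_effect(&mut self.state);
        }
        self.pos = loc;
    }
}
```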
PR: documentation spin loop hint
The documentation for 'spin loop hint' explains that yield is better if the lock holder is running on the same CPU. I suggest that 'CPU or core' would be clearer.
Make rustc_mir::dataflow module pub (for clippy)
I'm working on fixing false positives in a MIR-based clippy lint (https://github.com/rust-lang/rust-clippy/pull/4509), and I need access to the dataflow infrastructure.
Load proc macro metadata in the correct order.
Serialized proc macro metadata is assumed to have a one-to-one
correspondence with the entries in the static array generated by
`proc_macro_harness`. However, we were previously serializing proc macro
metadata in a different order than the one in which proc macros were laid
out in the static array. This led to us associating the wrong data with a
proc macro when generating documentation, causing Rustdoc to generate
incorrect docs for proc macros.
This commit keeps track of the order in which we insert proc macros into
the generated static array. We use this same order when serializing proc
macro metadata, ensuring that we can later associate the metadata for a
proc macro with its entry in the static array.
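A rough sketch of the invariant this restores, with hypothetical types rather than the actual encoder/harness code: the i-th serialized metadata record must describe the i-th entry of the generated static array.
```rust
// Hypothetical, simplified types; not the actual encoder/harness code.
struct ProcMacroEntry { name: &'static str }   // entry in the generated static array
struct ProcMacroMetadata { name: String }      // serialized metadata record

// Record the order in which proc macros are inserted into the static array...
fn build_static_array(names: &[&'static str]) -> (Vec<ProcMacroEntry>, Vec<&'static str>) {
    let array: Vec<ProcMacroEntry> = names.iter().map(|&name| ProcMacroEntry { name }).collect();
    let insertion_order = names.to_vec();
    (array, insertion_order)
}

// ...and serialize the metadata in that same order, so that `metadata[i]`
// always describes `static_array[i]`.
fn serialize_metadata(insertion_order: &[&'static str]) -> Vec<ProcMacroMetadata> {
    insertion_order
        .iter()
        .map(|&name| ProcMacroMetadata { name: name.to_string() })
        .collect()
}
```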
Fixes #64251
avoid duplicate issues for Miri build failures
Currently, when Miri regresses from test-pass to test-fail, we open an issue -- and then when it regresses further from test-fail to build-fail, we open a *second* issue.
This changes the logic to avoid the redundant second issue for Miri.
r? @pietroalbini @kennytm
improve Vec example soundness in mem::transmute docs
The previous version of the `Vec` example had a case of questionable soundness, because at one point `v_orig` was aliased.
r? @RalfJung
Shrink `SubregionOrigin`.
It's currently 120 bytes on x86-64, due to one oversized variant
(`Subtype`). This commit boxes `Subtype`'s contents, reducing the size
of `SubregionOrigin` to 32 bytes.
The change speeds things up by avoiding lots of `memcpy` calls, mostly
relating to `RegionConstraintData::constraints`, which is a `BTreeMap`
with `SubregionOrigin` values.
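As a general illustration of the technique (made-up types, not the real `SubregionOrigin` definition): an enum is as large as its largest variant, so boxing the payload of the oversized variant shrinks every value of the enum, at the cost of an allocation and a pointer chase when that variant is used.
```rust
// Made-up types; not the actual `SubregionOrigin` definition.
struct BigPayload {
    data: [u64; 14], // 112 bytes of variant payload
}

// An enum is as big as its largest variant, so this is ~120 bytes.
enum Unboxed {
    Subtype(BigPayload),
    Other(u32),
}

// Boxing the oversized variant's contents shrinks the enum to roughly
// pointer size plus discriminant, making moves and `memcpy`s much cheaper.
enum Boxed {
    Subtype(Box<BigPayload>),
    Other(u32),
}

fn main() {
    assert!(std::mem::size_of::<Boxed>() < std::mem::size_of::<Unboxed>());
}
```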