implement better availability probing for copy_file_range
Followup to https://github.com/rust-lang/rust/pull/75428#discussion_r469616547
Previously, syscall detection was overly pessimistic: any attempt to copy to an immutable file (EPERM) would disable copy_file_range support for the whole process.
This change probes by calling copy_file_range on invalid file descriptors, which can never run into the immutable-file case, so syscall availability can be distinguished unambiguously.
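A minimal sketch of the probing idea, assuming the `libc` crate on Linux (the cached-state encoding and the function name are illustrative, not the actual std internals):
```rust
use std::sync::atomic::{AtomicU8, Ordering};

// 0 = not probed yet, 1 = available, 2 = unavailable
static AVAILABILITY: AtomicU8 = AtomicU8::new(0);

fn copy_file_range_available() -> bool {
    match AVAILABILITY.load(Ordering::Relaxed) {
        1 => true,
        2 => false,
        _ => {
            // Probe with invalid fds: a kernel without the syscall fails with
            // ENOSYS, a kernel with it fails with EBADF. Neither path can hit
            // the immutable-file EPERM case that fooled the old detection.
            let ret = unsafe {
                libc::syscall(
                    libc::SYS_copy_file_range,
                    -1i32,
                    std::ptr::null_mut::<libc::loff_t>(),
                    -1i32,
                    std::ptr::null_mut::<libc::loff_t>(),
                    1usize,
                    0u32,
                )
            };
            debug_assert_eq!(ret, -1);
            let errno = std::io::Error::last_os_error().raw_os_error();
            let available = errno != Some(libc::ENOSYS);
            AVAILABILITY.store(if available { 1 } else { 2 }, Ordering::Relaxed);
            available
        }
    }
}
```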
Rollup of 12 pull requests
Successful merges:
- #79732 (minor stylistic clippy cleanups)
- #79750 (Fix trimming of lint docs)
- #79777 (Remove `first_merge` from liveness debug logs)
- #79795 (Privatize some of libcore unicode_internals)
- #79803 (Update xsv to prevent random CI failures)
- #79810 (Account for gaps in def path table during decoding)
- #79818 (Fixes to Rust coverage)
- #79824 (Strip prefix instead of replacing it with empty string)
- #79826 (Simplify visit_{foreign,trait}_item)
- #79844 (Move RWUTable to a separate module)
- #79861 (Update LLVM submodule)
- #79862 (Remove tab-lock and replace it with ctrl+up/down arrows to switch between search result tabs)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Remove tab-lock and replace it with ctrl+up/down arrows to switch between search result tabs
Fixes https://github.com/rust-lang/rust/issues/65212
Updating the help popup at the end took the longest.
r? `@Manishearth`
Simplify visit_{foreign,trait}_item
Using an `if` seems like a better semantic fit and saves a few lines.
Noticed while looking at https://github.com/rust-lang/rust/pull/79752, but that's already merged.
r? `@lcnr`, cc `@cjgillot`
`@rustbot` modify labels +C-cleanup +T-compiler
Strip prefix instead of replacing it with empty string
r? `@lcnr`, since you reviewed my other PR in the area.
`@rustbot` modify labels +C-cleanup +T-compiler
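For illustration, the difference between the two approaches on a hypothetical prefix (not the exact strings from the PR):
```rust
fn main() {
    let path = "core::option::Option";
    // before: replacing the prefix with "" rewrites *every* occurrence,
    // not just a leading one
    let stripped_old = path.replace("core::", "");
    // after: strip_prefix removes only a leading match and makes the
    // intent explicit; fall back to the original if the prefix is absent
    let stripped_new = path.strip_prefix("core::").unwrap_or(path);
    assert_eq!(stripped_new, "option::Option");
    assert_eq!(stripped_old, "option::Option"); // equal here, but not in general
}
```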
Fixes to Rust coverage
Fixes: #79725
Some macros can create a situation where `fn_sig_span` and `body_span`
map to different files.
New documentation on coverage tests incorrectly assumed multiple test
binaries could just be listed at the end of the `llvm-cov` command,
but it turns out each binary needs a `--object` prefix.
This PR fixes the bug and updates the documentation to correct that
issue. It also fixes a few other minor issues in internal implementation
comments, and adds documentation on getting coverage results for doc
tests.
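An illustrative invocation with two binaries (the profile and binary paths are hypothetical):
```
llvm-cov report \
    --instr-profile=coverage.profdata \
    --object target/debug/deps/test_bin_1 \
    --object target/debug/deps/test_bin_2
```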
Account for gaps in def path table during decoding
When encoding a proc-macro crate, there may be gaps in the table (since
we only encode the crate root and proc-macro items). Account for this by
checking if the entry is present, rather than using `unwrap()`.
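A minimal sketch of the pattern (the table layout here is hypothetical, not the actual decoder types):
```rust
// Entries for items that were never encoded (the gaps) are None,
// so lookups must be fallible instead of unwrap()-ing.
fn lookup<T: Clone>(table: &[Option<T>], index: usize) -> Option<T> {
    table.get(index).and_then(|entry| entry.clone())
}
```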
Privatize some of libcore unicode_internals
My understanding is that these APIs are perma-unstable, so it doesn't
make sense to pollute docs & IDE completion[1] with them.
[1]: https://github.com/rust-analyzer/rust-analyzer/issues/6738
minor stylistic clippy cleanups
- simplify `if let Some(_) = x` to `if x.is_some()` (clippy::redundant_pattern_matching)
- don't create owned values for comparison (clippy::cmp_owned)
- use `.contains()` or `.any()` instead of `find(x).is_some()` (clippy::search_is_some)
- don't wrap code blocks in `Ok()` (clippy::unit_arg)
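Illustrative before/after pairs for the lints above (hypothetical values, not the code touched by the PR):
```rust
fn examples() {
    let x: Option<u32> = Some(1);

    // redundant_pattern_matching:
    // before: if let Some(_) = x { ... }
    if x.is_some() { /* ... */ }

    // cmp_owned:
    let name = "foo";
    // before: name.to_string() == "foo"
    let _eq = name == "foo";

    // search_is_some:
    let nums = [1, 2, 3];
    // before: nums.iter().find(|&&n| n > 2).is_some()
    let _found = nums.iter().any(|&n| n > 2);
}

// unit_arg:
// before: Ok(do_work()) — wrapping a unit-returning call in Ok()
fn run() -> Result<(), ()> {
    do_work();
    Ok(())
}
fn do_work() {}
```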
ext/ucred: Support PID in peer creds on macOS
This is a follow-up to https://github.com/rust-lang/rust/pull/75148 (RFC: https://github.com/rust-lang/rust/issues/42839).
The original PR used `getpeereid` on macOS and the BSDs, since they don't (generally) support the `SO_PEERCRED` mechanism that Linux supplies.
This PR splits the macOS/iOS implementation of `peer_cred()` from that of the BSDs, since macOS supplies the `LOCAL_PEERPID` sockopt as a source of the missing PID. It also adds a `cfg`-gated test that ensures that platforms with support for PIDs in `UCred` have the expected data.
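A sketch of the macOS-specific PID retrieval via `getsockopt`, assuming the `libc` crate (simplified relative to the real `peer_cred()` code):
```rust
#[cfg(any(target_os = "macos", target_os = "ios"))]
fn peer_pid(socket_fd: std::os::unix::io::RawFd) -> std::io::Result<libc::pid_t> {
    let mut pid: libc::pid_t = 0;
    let mut len = std::mem::size_of::<libc::pid_t>() as libc::socklen_t;
    // LOCAL_PEERPID fills in the pid of the process on the other end
    // of the AF_UNIX socket.
    let ret = unsafe {
        libc::getsockopt(
            socket_fd,
            libc::SOL_LOCAL,
            libc::LOCAL_PEERPID,
            &mut pid as *mut _ as *mut libc::c_void,
            &mut len,
        )
    };
    if ret == 0 { Ok(pid) } else { Err(std::io::Error::last_os_error()) }
}
```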
Properly re-use def path hash in incremental mode
Fixes #79661
In incremental compilation mode, we update a `DefPathHash -> DefId`
mapping every time we create a `DepNode` for a foreign `DefId`.
This mapping is written out to the on-disk incremental cache, and is
read by the next compilation session to allow us to lazily decode
`DefId`s.
When we decode a `DepNode` from the current incremental cache, we need
to ensure that any previously-recorded `DefPathHash -> DefId` mapping
gets recorded in the new mapping that we write out. However, PR #74967
didn't do this in all cases, leading to us being unable to decode a
`DefPathHash` in certain circumstances.
This PR refactors some of the code around `DepNode` deserialization to
prevent this kind of mistake from happening again.
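A hypothetical sketch of the invariant described above (invented types, not rustc's actual data structures):
```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DefPathHash(u64);
#[derive(Clone, Copy)]
struct DefId(u32);

struct Maps {
    // mapping read from the previous session's on-disk cache
    previous: HashMap<DefPathHash, DefId>,
    // mapping that will be written out for the next session
    current: HashMap<DefPathHash, DefId>,
}

impl Maps {
    // When a DepNode referring to a foreign DefId is decoded from the old
    // cache, its mapping must be carried into the new table; otherwise the
    // *next* session has no way to turn the hash back into a DefId.
    fn decode_foreign(&mut self, hash: DefPathHash) -> Option<DefId> {
        let def_id = *self.previous.get(&hash)?;
        self.current.insert(hash, def_id);
        Some(def_id)
    }
}
```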
Also generate `StorageDead` in constants
r? `@eddyb`
None of this special-casing is actually necessary since we started promoting within constants and statics.
We may want to keep some of it around for perf reasons, but it's not required for user-visible behaviour.
somewhat related: #68622
remove this weird special case from promotion
Promotion has a special case to ignore interior mutability under some specific circumstances. The purpose of this PR is to figure out what changes if we remove that. Since `Cell::new` and friends only get promoted inside `const`/`static` initializers these days, it actually is not easy to exploit this case: you need something like
```rust
use std::cell::Cell;

const TEST_INTERIOR_MUT: () = {
    // The "0." case is already ruled out by not permitting any interior mutability in `const`.
    let _val: &'static _ = &(Cell::new(1), 2).1;
};
```
I assume something like `&Some(&(Cell::new(1), 2).1)` would hit the nested case inside `validate_rvalue`... though I am not sure why that would not just trigger nested promotion, first promoting the inner reference and then the outer one?
Fixes https://github.com/rust-lang/rust/issues/67534 (by simply rejecting that code^^)
r? `@oli-obk` (but for now this is not meant to be merged!)
Cc `@rust-lang/wg-const-eval`
Don't time `emit_ignored_resolution_errors`
This printed several hundred lines each time rustdoc was run, almost all
of which rounded to 0.000. Since this isn't useful info, don't print it
everywhere, so other perf info is easier to read.
r? `@Mark-Simulacrum`
Use is_write_vectored to optimize the write_vectored implementation for BufWriter
When the underlying writer does not have an efficient `write_vectored` implementation, the present implementation of
`write_vectored` for `BufWriter` may still forward vectored writes directly to the writer, depending on the total length of the data. This misses the advantage of buffering, as the slice actually written may be small.
Provide an alternative code path for the non-vectored case, where the slices passed to `BufWriter` are coalesced in the buffer before being flushed to the underlying writer with plain `write` calls. The buffer is only bypassed if an individual slice's length is at least as large as the buffer.
Remove a FIXME comment referring to #72919 as the issue has been closed with an explanation provided.
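A simplified sketch of the coalescing strategy (not the actual `BufWriter` code; error handling and partial-write bookkeeping are reduced):
```rust
use std::io::{self, IoSlice, Write};

/// Coalesce small vectored writes through `buf` (of capacity `capacity`),
/// bypassing the buffer only for slices at least as large as the buffer.
fn write_vectored_coalesced<W: Write>(
    inner: &mut W,
    buf: &mut Vec<u8>,
    capacity: usize,
    bufs: &[IoSlice<'_>],
) -> io::Result<usize> {
    let mut written = 0;
    for slice in bufs {
        if slice.len() >= capacity {
            // Large slice: flush whatever is buffered, then write directly.
            inner.write_all(buf)?;
            buf.clear();
            written += inner.write(slice)?;
        } else {
            // Small slice: make room if needed, then coalesce into the buffer.
            if buf.len() + slice.len() > capacity {
                inner.write_all(buf)?;
                buf.clear();
            }
            buf.extend_from_slice(slice);
            written += slice.len();
        }
    }
    Ok(written)
}
```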
Compress RWU from at least 32 bits to 4 bits
The liveness analysis uses a mixed representation of RWUs, based on the
observation that most of them have an invalid reader and an invalid
writer. The packed variant uses 32 bits; the unpacked one, 96 bits.
Unpacked data contains the reader live node and the writer live node.
Since live nodes are used only to determine their validity, RWUs can
always be stored in packed form with four bits each: a reader bit, a
writer bit, a used bit, and one extra padding bit to simplify packing
and unpacking operations.
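An illustrative packing of one RWU into four bits, following the layout described above (hypothetical constants, not the compiler's module):
```rust
// Four-bit layout per RWU: reader, writer, used, plus one padding bit so
// two RWUs pack evenly into a byte.
const READER: u8 = 0b0001;
const WRITER: u8 = 0b0010;
const USED: u8 = 0b0100;

fn pack(reader: bool, writer: bool, used: bool) -> u8 {
    (reader as u8) * READER | (writer as u8) * WRITER | (used as u8) * USED
}

fn unpack(bits: u8) -> (bool, bool, bool) {
    (bits & READER != 0, bits & WRITER != 0, bits & USED != 0)
}
```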