Simplify array::IntoIter
- Initialization can use `transmute_copy` to do the bitwise copy.
- `as_slice` can use `get_unchecked` and `MaybeUninit::slice_get_ref`,
and `as_mut_slice` can do similar.
- `next` and `next_back` can use the corresponding `Range` methods.
- `Clone` doesn't need any unsafety, and we can dynamically update the
new range to get partial drops if `T::clone` panics (see the sketch below).
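A minimal sketch of the `Clone` idea, under an assumed simplified layout (the field names `data` and `alive` are illustrative, not the actual std source): the clone's `alive` range starts empty and grows by one per successful `T::clone`, so a panic partway through drops only the prefix already written.

```rust
use core::mem::MaybeUninit;
use core::ops::Range;

struct IntoIter<T, const N: usize> {
    data: [MaybeUninit<T>; N],
    alive: Range<usize>, // indices of initialized, not-yet-yielded elements
}

impl<T, const N: usize> IntoIter<T, N> {
    fn as_slice(&self) -> &[T] {
        // SAFETY: `alive` only covers initialized elements.
        unsafe {
            let slice = self.data.get_unchecked(self.alive.clone());
            &*(slice as *const [MaybeUninit<T>] as *const [T])
        }
    }
}

impl<T: Clone, const N: usize> Clone for IntoIter<T, N> {
    fn clone(&self) -> Self {
        let mut new = Self {
            data: [(); N].map(|_| MaybeUninit::uninit()),
            alive: 0..0,
        };
        // Grow `alive` as each element is cloned; if `T::clone` panics,
        // `Drop` below frees only the prefix written so far.
        for (src, dst) in self.as_slice().iter().zip(new.data.iter_mut()) {
            dst.write(src.clone());
            new.alive.end += 1;
        }
        new
    }
}

impl<T, const N: usize> Drop for IntoIter<T, N> {
    fn drop(&mut self) {
        // Drop only the still-initialized elements.
        for elem in &mut self.data[self.alive.clone()] {
            // SAFETY: everything in `alive` is initialized.
            unsafe { elem.assume_init_drop() };
        }
    }
}
```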
r? @LukasKalbertodt
Implement `into_keys` and `into_values` for associative maps
This PR implements `into_keys` and `into_values` for the HashMap and BTreeMap types. They are implemented as unstable, under the `map_into_keys_values` feature.
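For illustration, a usage sketch of the new methods (while unstable, this additionally needs `#![feature(map_into_keys_values)]` on nightly):

```rust
use std::collections::HashMap;

fn main() {
    let map: HashMap<&str, i32> =
        [("a", 1), ("b", 2)].iter().copied().collect();

    // Consume a clone of the map into owned keys...
    let mut keys: Vec<&str> = map.clone().into_keys().collect();
    keys.sort();
    assert_eq!(keys, ["a", "b"]);

    // ...and consume the map itself into owned values.
    let mut values: Vec<i32> = map.into_values().collect();
    values.sort();
    assert_eq!(values, [1, 2]);
}
```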
Fixes #55214.
r? @dtolnay
Add safety docs to `NonNull::as_ref` and `NonNull::as_mut`
This basically adds the safety section of `*mut T::as_{ref,mut}` to the
same methods on `NonNull` with minor modifications to fit the
differences.
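For illustration, a call site that has to uphold exactly those documented requirements (my own example, not taken from the PR):

```rust
use std::ptr::NonNull;

fn main() {
    let mut x = 42_i32;
    let ptr = NonNull::new(&mut x as *mut i32).expect("non-null");

    // SAFETY: `ptr` points to a live, aligned, initialized `i32`, and no
    // other reference to `x` is used while `r` is alive.
    let r: &i32 = unsafe { ptr.as_ref() };
    assert_eq!(*r, 42);
}
```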
Part of #48929.
make MaybeUninit::as_(mut_)ptr const
I think it was just an oversight that they are not const yet.
I also changed their implementation as the old one created references to uninitialized memory.^^
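The idea, as a standalone sketch (a free function mirroring the approach, not the actual std source): go straight from `&MaybeUninit<T>` to a raw pointer, so no `&T` to uninitialized memory is ever created, and the cast chain works in a `const fn`.

```rust
use core::mem::MaybeUninit;

// Illustrative helper: a raw-pointer cast never materializes a `&T`,
// so this is sound even while `u` is still uninitialized.
const fn as_ptr_of<T>(u: &MaybeUninit<T>) -> *const T {
    u as *const MaybeUninit<T> as *const T
}

fn main() {
    let u = MaybeUninit::<u32>::uninit();
    let p = as_ptr_of(&u); // only a raw pointer; reading it would be UB
    assert!(!p.is_null());
}
```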
Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away
I've stumbled across some situations where there (unexpectedly) was no `__rust_begin_short_backtrace` frame on the stack during unwinding.
On closer examination, it appeared that the calls to that function had been tail-call optimised away.
This PR follows [@bjorn3's suggestion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Disabling.20tail.20call.20optimisation.3F/near/205699133), by adding calls to `black_box` that hint to rustc not to perform TCO.
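The shape of the fix, sketched as a standalone function (this mirrors the approach; the real change lives inside std's backtrace plumbing):

```rust
use std::hint::black_box;

fn __rust_begin_short_backtrace<F, T>(f: F) -> T
where
    F: FnOnce() -> T,
{
    let result = f();

    // The call above must not be in tail position, or this frame can be
    // tail-call optimised away; `black_box` keeps the frame on the stack
    // so the backtrace printer can find it.
    black_box(());

    result
}

fn main() {
    let n = __rust_begin_short_backtrace(|| 1 + 1);
    assert_eq!(n, 2);
}
```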
Fixes #47429
Rollup of 4 pull requests
Successful merges:
- #74774 (adds [*mut|*const] ptr::set_ptr_value)
- #75079 (Disallow linking to items with a mismatched disambiguator)
- #75203 (Make `IntoIterator` lifetime bounds of `&BTreeMap` match with `&HashMap`)
- #75227 (Fix ICE when using asm! on an unsupported architecture)
Failed merges:
r? @ghost
Make `IntoIterator` lifetime bounds of `&BTreeMap` match with `&HashMap`
This is a pretty small change to the lifetime bounds of the `IntoIterator` implementations of both `&BTreeMap` and `&mut BTreeMap`. It loosens the lifetime bounds, so more code will be accepted after this PR. The lifetime bounds will still be implicit, since we have `type Item = (&'a K, &'a V);` in the implementation. This change makes `BTreeMap` match the signature of `HashMap`, so the same function/trait can be used with both map types, as in the sketch below.
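For example, this kind of generic helper is what the matching signatures enable (my own sketch, not from the PR):

```rust
use std::collections::{BTreeMap, HashMap};

// One generic function that borrows either map type through the
// now-matching `IntoIterator` impls on `&HashMap` and `&BTreeMap`.
fn count_pairs<'a, M, K: 'a, V: 'a>(map: &'a M) -> usize
where
    &'a M: IntoIterator<Item = (&'a K, &'a V)>,
{
    map.into_iter().count()
}

fn main() {
    let mut h = HashMap::new();
    h.insert("a", 1);
    let mut b = BTreeMap::new();
    b.insert("a", 1);
    assert_eq!(count_pairs(&h), count_pairs(&b));
}
```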
Fixes #74034.
r? @dtolnay hey, I was touching this file on my previous PR and wanted to fix this on the way. Would you mind taking a look at this, or redirecting it if you are busy?
adds [*mut|*const] ptr::set_ptr_value
I propose the addition of these two functions to `*mut T` and `*const T`, respectively. The motivation for this is primarily byte-wise pointer arithmetic on (potentially) fat pointers, i.e. for types with a `T: ?Sized` bound. A concrete use-case has been discussed in [this](https://internals.rust-lang.org/t/byte-wise-fat-pointer-arithmetic/12739) thread.
TL;DR: Currently, byte-wise pointer arithmetic with potentially fat pointers is not possible in either stable or nightly Rust without making assumptions about the layout of fat pointers, which is still an implementation detail and not formally stabilized. This PR adds one function to `*mut T` and `*const T` each, making it possible to circumvent this restriction without exposing any internal implementation details.
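A sketch of the intended use, assuming the signature proposed here (`set_ptr_value` taking the new address as `*const u8`; nightly-only and subject to change):

```rust
#![feature(set_ptr_value)]

fn main() {
    let data: [u16; 4] = [1, 2, 3, 4];
    let fat: *const [u16] = &data[..];

    // Advance the address component by one `u16` (2 bytes)...
    let addr = (fat as *const u8).wrapping_add(2);

    // ...then graft the new address back onto the fat pointer. The
    // metadata (here the slice length, 4) is carried over unchanged.
    let shifted: *const [u16] = fat.set_ptr_value(addr);

    assert_eq!(shifted as *const u8, addr);
}
```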
One possible alternative would be to add specific byte-wise pointer arithmetic functions to the two pointer types in addition to the already existing count-wise functions. However, I feel this fairly niche use case does not warrant adding a whole set of new functions like `add_bytes`, `offset_bytes`, `wrapping_offset_bytes`, etc. (times two, one for each pointer type) to `libcore`.