Change HirVec<P<T>> to HirVec<T> in hir::Expr.
This PR changes data structures like this:
```
[ ExprArray | 8 | P ]
|
v
[ P | P | P | P | P | P | P | P ]
|
v
[ ExprTup | 2 | P ]
|
v
[ P | P ]
|
v
[ Expr ]
```
to this:
```
[ ExprArray | 8 | P ]
|
v
[ [ ExprTup | 2 | P ] | ... ]
|
v
[ Expr | Expr ]
```
I thought this would be a win for #36799, and on a cut-down version of that workload this reduces the peak heap size (as measured by Massif) from 885 MiB to 875 MiB. However, the peak RSS as measured by `-Ztime-passes` and by `/usr/bin/time` increases by about 30 MiB.
I'm not sure why. Just look at the picture above -- the second data structure clearly takes up less space than the first. My best idea relates to unused elements in the slices. `HirVec<Expr>` is a typedef for `P<[Expr]>`. If there were any unused excess elements then I could see that memory usage would increase, because those excess elements are larger in `HirVec<Expr>` than in `HirVec<P<Expr>>`. But AIUI there are no such excess elements, and Massif's measurements corroborate that.
However, the two main creation points for these data structures are these lines from `lower_expr`:
```rust
ExprKind::Vec(ref exprs) => {
    hir::ExprArray(exprs.iter().map(|x| self.lower_expr(x)).collect())
}
ExprKind::Tup(ref elts) => {
    hir::ExprTup(elts.iter().map(|x| self.lower_expr(x)).collect())
}
```
I suspect what is happening is that temporary excess elements are created within the `collect` calls. The workload from #36799 has many 2-tuples and 3-tuples, and when a `Vec` gets doubled it goes from a capacity of 1 to 4, which would lead to excess elements. Though, having said that, `Vec::with_capacity` doesn't create excess elements AFAICT. So I'm not sure; what goes on inside `collect` is complex.
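For illustration, here is a minimal, standalone example of the kind of temporary over-allocation I have in mind (this is just ordinary `Vec` growth behavior, not the compiler's code; in `lower_expr` the iterator is an exact-size map over a slice, so `collect` may well reserve the exact capacity there):
```rust
fn main() {
    // `filter` hides the exact length from `collect`, so the Vec has to grow
    // as it goes and can end up with more capacity than elements.
    let v: Vec<u64> = (0..3u64).filter(|_| true).collect();
    println!("len = {}, capacity = {}", v.len(), v.capacity());

    // Converting to a boxed slice (which is what P<[T]> stores) drops the
    // excess capacity, but the larger temporary allocation still existed.
    let boxed: Box<[u64]> = v.into_boxed_slice();
    println!("boxed len = {}", boxed.len());
}
```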
Anyway, in its current form this PR is a memory consumption regression and so not worth landing, but I figured I'd post it in case anyone has additional insight.
libstd: support creation of anonymous pipe on WinXP/2K3
The `PIPE_REJECT_REMOTE_CLIENTS` flag is not supported on Windows versions older than Vista, so every invocation of `anon_pipe`, including attempts to pipe `std::process::Child`'s stdio, fails.
This PR works around the issue by performing a runtime check of the Windows version and omitting the flag on "XP and friends".
Getting the version should probably be moved out of `anon_pipe` itself (the OS version does not often change during runtime :)), but:
- I didn't find any precedent for this, and I'm assuming there's not much overhead (I hope Windows does not run any heuristics to find out its own version and just fills in a couple of fields in a struct).
- the code path is not especially performance-sensitive anyway.
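For reference, a rough sketch of the conditional flag computation (the constant values are the usual Windows SDK ones; the actual libstd change and the exact version query may differ):
```rust
const PIPE_TYPE_BYTE: u32 = 0x0000_0000;
const PIPE_READMODE_BYTE: u32 = 0x0000_0000;
const PIPE_WAIT: u32 = 0x0000_0000;
const PIPE_REJECT_REMOTE_CLIENTS: u32 = 0x0000_0008;

/// Compute the pipe-mode flags, only including PIPE_REJECT_REMOTE_CLIENTS on
/// Vista (major version 6) and later, since older kernels reject the flag.
fn pipe_mode(os_major_version: u32) -> u32 {
    let mut mode = PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT;
    if os_major_version >= 6 {
        mode |= PIPE_REJECT_REMOTE_CLIENTS;
    }
    mode
}
```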
Clean up `ast::Attribute`, `ast::CrateConfig`, and string interning
This PR
- removes `ast::Attribute_` (changing `Attribute` from `Spanned<Attribute_>` to a struct),
- moves a `MetaItem`'s name from the `MetaItemKind` variants to a field of `MetaItem` (this and the previous bullet are sketched after this list),
- avoids needlessly wrapping `ast::MetaItem` with `P`,
- moves string interning into `syntax::symbol` (`ast::Name` is a reexport of `symbol::Symbol` for now),
- replaces `InternedString` with `Symbol` in the AST, HIR, and various other places, and
- refactors `ast::CrateConfig` from a `Vec` to a `HashSet`.
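A heavily simplified sketch of the first two bullets (the type and field names below are illustrative stand-ins, not the real libsyntax definitions): the span moves from the `Spanned` wrapper onto the attribute struct itself, and the name moves from the `MetaItemKind` variants onto `MetaItem`.
```rust
struct Span;
struct Name;

// Before (roughly): Attribute = Spanned<Attribute_>, with the name stored in
// each MetaItemKind variant.
struct Spanned<T> { node: T, span: Span }
enum OldMetaItemKind { Word(Name), List(Name, Vec<OldMetaItemKind>) }
struct Attribute_ { value: OldMetaItemKind }
type OldAttribute = Spanned<Attribute_>;

// After (roughly): a plain struct carrying its own span, with the name as an
// ordinary field of MetaItem.
enum MetaItemKind { Word, List(Vec<MetaItem>) }
struct MetaItem { name: Name, kind: MetaItemKind }
struct Attribute { value: MetaItem, span: Span }
```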
r? @eddyb
Improve .chars().count()
Use a simpler loop to count the `char`s of a string: count the number of non-continuation bytes. Use `count += <conditional>`, which the compiler understands well and can apply loop optimizations to.
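As a standalone sketch of the idea (not the exact library implementation): a byte is a UTF-8 continuation byte exactly when its top two bits are `10`, so counting the bytes that are *not* continuations gives the number of `char`s in a valid UTF-8 string.
```rust
fn char_count(s: &str) -> usize {
    let mut count = 0;
    for &byte in s.as_bytes() {
        // Read as i8, continuation bytes (0x80..=0xBF) are exactly the values
        // below -0x40, so this condition is true only for bytes that start a
        // new character. The `count += <conditional>` form maps to branchless
        // code that the loop optimizer handles well.
        count += ((byte as i8) >= -0x40) as usize;
    }
    count
}

fn main() {
    for s in ["hello", "здравствуйте", "こんにちは"] {
        assert_eq!(char_count(s), s.chars().count());
    }
}
```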
Benchmark descriptions and results for two configurations:
- ascii: ascii text
- cy: cyrillic text
- jp: japanese text
- words ascii: counting each split_whitespace item from the ascii text
- words jp: counting each split_whitespace item from the jp text
```
x86-64 rustc -Copt-level=3

name               orig_ ns/iter      cmov_ ns/iter      diff ns/iter   diff %
count_ascii        1,453 (1755 MB/s)  1,398 (1824 MB/s)           -55   -3.79%
count_cy           5,990 (856 MB/s)   2,545 (2016 MB/s)        -3,445  -57.51%
count_jp           3,075 (1169 MB/s)  1,772 (2029 MB/s)        -1,303  -42.37%
count_words_ascii  4,157 (521 MB/s)   1,797 (1205 MB/s)        -2,360  -56.77%
count_words_jp     3,337 (1071 MB/s)  1,772 (2018 MB/s)        -1,565  -46.90%

x86-64 rustc -Ctarget-feature=+avx -Copt-level=3

name               orig_ ns/iter      cmov_ ns/iter      diff ns/iter   diff %
count_ascii        1,444 (1766 MB/s)    763 (3343 MB/s)          -681  -47.16%
count_cy           5,871 (874 MB/s)   1,527 (3360 MB/s)        -4,344  -73.99%
count_jp           2,874 (1251 MB/s)  1,073 (3351 MB/s)        -1,801  -62.67%
count_words_ascii  4,131 (524 MB/s)   1,871 (1157 MB/s)        -2,260  -54.71%
count_words_jp     3,253 (1099 MB/s)  1,331 (2686 MB/s)        -1,922  -59.08%
```
I briefly explored a more involved blocked algorithm (looking at 8 or more bytes at a time), but the code in this PR always won on `count_words_ascii` in particular (counting many small strings); this solution is an improvement without tradeoffs.
[LLVM 4.0] Set EH personality when resuming stack unwinding
To resume stack unwinding, the LLVM `resume` instruction must be used.
In order to use this instruction, the calling function must have an
exception handling personality set.
LLVM 4.0 adds a new IR validation check to ensure a personality is
always set in these cases.
This was introduced in [r277360](https://reviews.llvm.org/rL277360).
rustdoc: Remove unnecessary stability versions
For some reason, the stability version is rendered after the summary only on enum and macro pages, unlike all other pages. As it is already displayed at the top of the page for all items, this removes it for consistency and to prevent it from overlapping the summary text.
Fixes #36093
Fix `fmt::Debug` for strings, e.g. for Chinese characters
The problem occurred due to lines like
```
3400;<CJK Ideograph Extension A, First>;Lo;0;L;;;;;N;;;;;
4DB5;<CJK Ideograph Extension A, Last>;Lo;0;L;;;;;N;;;;;
```
in `UnicodeData.txt`, which the script previously interpreted as two
individual characters, although together they represent the whole range.
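For illustration, a small sketch of how such a pair of lines needs to be read (the actual generator script may structure this differently): the `First`/`Last` entries together define one inclusive range of code points, not two standalone characters.
```rust
fn parse_range_pair(first_line: &str, last_line: &str) -> (u32, u32) {
    // The code point is the first semicolon-separated field, in hex.
    let code = |line: &str| u32::from_str_radix(line.split(';').next().unwrap(), 16).unwrap();
    (code(first_line), code(last_line)) // inclusive range [first, last]
}

fn main() {
    let (first, last) = parse_range_pair(
        "3400;<CJK Ideograph Extension A, First>;Lo;0;L;;;;;N;;;;;",
        "4DB5;<CJK Ideograph Extension A, Last>;Lo;0;L;;;;;N;;;;;",
    );
    assert_eq!((first, last), (0x3400, 0x4DB5));
}
```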
Fixes #34318.
Add tests for incremental reuse scenarios
These are microbenchmarks checking that we achieve the expected reuse in the scenarios covered by incremental beta.
r? @michaelwoerister
Add std::process::abort
This calls libc `abort` on Unix and fastfail on Windows, first running
cleanups to do things like flush stdout buffers. This matches libc
`abort`'s behavior, which flushes open files.
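A minimal usage sketch of the new function:
```rust
fn main() {
    eprintln!("unrecoverable state, aborting");
    // Terminates the process abnormally (libc abort on Unix, fastfail on
    // Windows, as described above).
    std::process::abort();
}
```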
r? @alexcrichton
[LLVM 4.0] Use llvm::Attribute APIs instead of "raw value" APIs
The latter will be removed in LLVM 4.0 (see 4a6fc8bacf).
The librustc_llvm API remains mostly unchanged, except that llvm::Attribute is no longer a bitflag but represents only a *single* attribute.
The ability to store many attributes in a small number of bits and modify them without interacting with LLVM is only used in rustc_trans::abi and closely related modules, and only attributes for function arguments are considered there.
Thus rustc_trans::abi now has its own bit-packed representation of argument attributes, which are translated to rustc_llvm::Attribute when applying the attributes.
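To illustrate the idea of that bit-packed representation, here is a hypothetical sketch (the names and the set of attributes are illustrative, not the actual rustc_trans::abi definitions):
```rust
/// A small set of per-argument attributes stored as bits, so it can be built
/// and modified without calling into LLVM. (The translation into real
/// llvm::Attribute values at apply time is not shown here.)
#[derive(Copy, Clone, Default)]
struct ArgAttributes(u16);

#[derive(Copy, Clone)]
enum ArgAttribute {
    NoAlias = 1 << 0,
    NoCapture = 1 << 1,
    NonNull = 1 << 2,
}

impl ArgAttributes {
    fn set(&mut self, attr: ArgAttribute) -> &mut ArgAttributes {
        self.0 |= attr as u16;
        self
    }
    fn contains(&self, attr: ArgAttribute) -> bool {
        self.0 & (attr as u16) != 0
    }
}

fn main() {
    let mut attrs = ArgAttributes::default();
    attrs.set(ArgAttribute::NoAlias).set(ArgAttribute::NonNull);
    assert!(attrs.contains(ArgAttribute::NonNull));
    assert!(!attrs.contains(ArgAttribute::NoCapture));
}
```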
cc #37609
Show a better error when using --test with #[proc_macro_derive]
Fixes https://github.com/rust-lang/rust/issues/37480
Currently, using `--test` with a crate that contains a `#[proc_macro_derive]` attribute causes an error. This PR doesn't attempt to fix the issue itself or to determine what testing of a proc_macro_derive crate should look like; it just provides a better error message.