Commit Graph

81 Commits

Author SHA1 Message Date
pierwill
30c9307bfc
docs: Edit rustc_ast::token::Token
Add missing punctuation.
2020-12-17 11:55:49 -08:00
Vadim Petrochenkov
31d72c2658 Accept arbitrary expressions in key-value attributes at parse time 2020-12-09 21:37:32 +03:00
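For illustration, a minimal sketch of the kind of key-value attribute value this enables (hypothetical macro; at the time this sat behind the `extended_key_value_attributes` feature gate, which was later stabilized):

```rust
// A minimal sketch: the value of a key-value attribute such as `#[doc = ...]`
// can be an interpolated expression (here produced by `concat!`) instead of a
// plain string literal.
macro_rules! doc_comment {
    ($text:expr) => {
        #[doc = $text]
        pub struct Documented;
    };
}

doc_comment!(concat!("Generated ", "documentation."));
```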
Guillaume Gomez
a2d1254e22 Add documentation for name_value_literal_span methods 2020-12-01 17:32:14 +01:00
Guillaume Gomez
63816da5ed Improve some attributes error spans 2020-12-01 16:26:51 +01:00
Guillaume Gomez
7df0052df8 Created NestedMetaItem::name_value_literal_span method 2020-12-01 16:26:51 +01:00
bors
4ae328bef4 Auto merge of #78296 - Aaron1011:fix/stmt-tokens, r=petrochenkov
Properly handle attributes on statements

We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.

`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.

Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.

Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
2020-11-28 07:48:56 +00:00
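For illustration, a minimal sketch of an attribute on an item statement (using `#[derive(Debug)]` in place of the hypothetical `#[bar]` from the message above):

```rust
// A minimal sketch: the attribute below sits on an item statement inside a
// function body and is treated as an attribute of the item `MyStruct`,
// not of the surrounding statement.
pub fn foo() {
    #[derive(Debug)]
    struct MyStruct;

    println!("{:?}", MyStruct);
}
```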
bors
cfed9184f4 Auto merge of #79266 - b-naber:gat_trait_path_parser, r=petrochenkov
Generic Associated Types in Trait Paths - Ast part

The Ast part of https://github.com/rust-lang/rust/pull/78978

r? `@petrochenkov`
2020-11-27 00:18:24 +00:00
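For context, a rough sketch of the trait-path syntax this AST work concerns (illustrative names; at the time this required the `generic_associated_types` nightly feature, which is now stable):

```rust
// A generic associated type used inside a trait path: the associated type
// binding `Item<'a> = &'a u8` itself takes a generic (lifetime) argument.
pub trait Container {
    type Item<'a>;
}

pub fn use_it<'a, C>(_c: C)
where
    C: Container<Item<'a> = &'a u8>, // GAT binding in a trait path
{
}
```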
Aaron Hill
de88bf148b
Properly handle attributes on statements
We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.

`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.

Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.

Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
2020-11-26 17:08:35 -05:00
Jonas Schievink
6fcd589025
Rollup merge of #79000 - sivadeilra:user/ardavis/lev_distance, r=wesleywiser
Move lev_distance to rustc_ast, make non-generic

rustc_ast currently has a few dependencies on rustc_lexer. Ideally, an AST
would not have any dependency on its lexer, to minimize
design-time dependencies. Breaking this dependency would also have practical
benefits, since modifying rustc_lexer would not trigger a rebuild of rustc_ast.

This commit does not remove the rustc_ast --> rustc_lexer dependency,
but it does remove one of the sources of this dependency, which is the
code that handles fuzzy matching between symbol names for making suggestions
in diagnostics. Since that code depends only on Symbol, it is easy to move
it to rustc_span. It might even be best to move it to a separate crate,
since other tools such as Cargo use the same algorithm, and currently simply
contain a duplicate of the code.

This changes the signature of find_best_match_for_name so that it is no
longer generic over its input. I checked the optimized binaries, and this
function was duplicated for nearly every call site, because most call sites
used short-lived iterator chains, generic over Map and such. But there's
no good reason for a function like this to be generic, since all it does
is immediately convert the generic input (the Iterator impl) to a concrete
Vec<Symbol>. This has all of the costs of generics (duplicated method bodies)
with no benefit.

Changing find_best_match_for_name to be non-generic removed about 10KB of
code from the optimized binary. I know it's a drop in the bucket, but we have
to start reducing binary size, and beginning to tame over-use of generics
is part of that.
2020-11-26 13:39:05 +01:00
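A rough sketch, with simplified stand-in types rather than rustc's actual code, of the signature change described above: a concrete slice parameter is compiled once, while a generic iterator parameter is monomorphized per call site.

```rust
// Hypothetical stand-ins for rustc's Symbol-based code. The generic version is
// instantiated separately for every distinct iterator type at its call sites;
// the non-generic version has a single body in the final binary.
pub fn best_match_generic<I: Iterator<Item = String>>(candidates: I, target: &str) -> Option<String> {
    candidates.min_by_key(|c| lev_distance(c.as_str(), target))
}

pub fn best_match_concrete(candidates: &[String], target: &str) -> Option<String> {
    candidates
        .iter()
        .min_by_key(|c| lev_distance(c.as_str(), target))
        .cloned()
}

// A plain Levenshtein ("lev") distance so the sketch is self-contained.
fn lev_distance(a: &str, b: &str) -> usize {
    let b: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.chars().enumerate() {
        let mut cur = vec![i + 1];
        for (j, &cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}
```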
b-naber
823dbb38e4 ast and parser 2020-11-25 19:55:41 +01:00
Aaron Hill
baefba80b7
Adjust pretty-print compat hack to work with item statements 2020-11-25 11:32:08 -05:00
Arlie Davis
5481c1bd6d Move lev_distance to rustc_ast, make non-generic
rustc_ast currently has a few dependencies on rustc_lexer. Ideally, an AST
would not have any dependency on its lexer, to minimize unnecessary
design-time dependencies. Breaking this dependency would also have practical
benefits, since modifying rustc_lexer would not trigger a rebuild of rustc_ast.

This commit does not remove the rustc_ast --> rustc_lexer dependency,
but it does remove one of the sources of this dependency, which is the
code that handles fuzzy matching between symbol names for making suggestions
in diagnostics. Since that code depends only on Symbol, it is easy to move
it to rustc_span. It might even be best to move it to a separate crate,
since other tools such as Cargo use the same algorithm, and currently simply
contain a duplicate of the code.

This changes the signature of find_best_match_for_name so that it is no
longer generic over its input. I checked the optimized binaries, and this
function was duplicated at nearly every call site, because most call sites
used short-lived iterator chains, generic over Map and such. But there's
no good reason for a function like this to be generic, since all it does
is immediately convert the generic input (the Iterator impl) to a concrete
Vec<Symbol>. This has all of the costs of generics (duplicated method bodies)
with no benefit.

Changing find_best_match_for_name to be non-generic removed about 10KB of
code from the optimized binary. I know it's a drop in the bucket, but we have
to start reducing binary size, and beginning to tame over-use of generics
is part of that.
2020-11-24 16:12:23 -08:00
Jonas Schievink
f66af28641
Rollup merge of #79016 - fanzier:underscore-expressions, r=petrochenkov
Make `_` an expression, to discard values in destructuring assignments

This is the third and final step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the third and final part of #71156, which was split up to allow for easier review.

With this PR, an underscore `_` is parsed as an expression but is allowed *only* on the left-hand side of a destructuring assignment. There it simply discards a value, similarly to the wildcard `_` in patterns. For instance,
```rust
(a, _) = (1, 2)
```
will simply assign 1 to `a` and discard the 2. Note that for consistency,
```
_ = foo
```
is also allowed and equivalent to just `foo`.

Thanks to `@varkor`, who helped with the implementation, particularly around pre-expansion gating.

r? `@petrochenkov`
2020-11-15 13:39:48 +01:00
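A self-contained sketch of the snippets above (with a hypothetical `returns_value` in place of `foo`; at the time this was behind the `destructuring_assignment` feature gate, stabilized in Rust 1.59):

```rust
// The `_` expression on the left-hand side of an assignment discards a value,
// like the wildcard in patterns.
fn main() {
    let mut a = 0;
    println!("before: {}", a);
    (a, _) = (1, 2); // assigns 1 to `a`, discards the 2
    _ = returns_value(); // equivalent to just `returns_value();`
    println!("after: {}", a);
}

fn returns_value() -> i32 {
    42
}
```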
Fabian Zaiser
8cf3564310 Add underscore expressions for destructuring assignments
Co-authored-by: varkor <github@varkor.com>
2020-11-14 13:53:12 +00:00
Dániel Buga
a7f2bb6343 Reserve space in advance 2020-11-13 11:19:25 +01:00
Mara Bos
755dd14e00
Rollup merge of #78836 - fanzier:struct-and-slice-destructuring, r=petrochenkov
Implement destructuring assignment for structs and slices

This is the second step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the second part of #71156, which was split up to allow for easier review.

Note that the first PR (#78748) is not merged yet, so it is included as the first commit in this one. I thought this would allow the review to start earlier because I have some time this weekend to respond to reviews. If `@petrochenkov` prefers to wait until the first PR is merged, I totally understand, of course.

This PR implements destructuring assignment for (tuple) structs and slices. In order to do this, the following *parser change* was necessary: struct expressions are not required to have a base expression, i.e. `Struct { a: 1, .. }` becomes legal (in order to act like a struct pattern).

Unfortunately, this PR slightly regresses the diagnostics implemented in #77283. However, it is only a missing help message in `src/test/ui/issues/issue-77218.rs`. Other instances of this diagnostic are not affected. Since I don't exactly understand how this help message works and how to fix it yet, I was hoping it's OK to regress this temporarily and fix it in a follow-up PR.

Thanks to `@varkor`, who helped with the implementation, particularly around the struct rest changes.

r? `@petrochenkov`
2020-11-12 19:46:09 +01:00
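A brief sketch, with an illustrative struct, of the destructuring assignments and the base-less `..` syntax described above (gated behind `destructuring_assignment` at the time; stable since Rust 1.59):

```rust
struct Point { x: i32, y: i32 }

fn main() {
    let (mut x, mut y) = (0, 0);
    let (mut first, mut last) = (0, 0);
    println!("start: {} {} {} {}", x, y, first, last);

    // Struct destructuring assignment; `Point { x, .. }` has no base
    // expression after `..` (the parser change mentioned above) and acts
    // like a struct pattern, discarding the remaining fields.
    Point { x, .. } = Point { x: 3, y: 4 };
    Point { y, .. } = Point { x: 5, y: 6 };

    // Slice destructuring assignment.
    [first, .., last] = [1, 2, 3, 4];

    println!("end: {} {} {} {}", x, y, first, last);
}
```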
bors
5a6a41e784 Auto merge of #78782 - petrochenkov:nodoctok, r=Aaron1011
Do not collect tokens for doc comments

A doc comment is a single token, and the AST has all the information to re-create it precisely.
Doc comments are also responsible for the majority of calls to `collect_tokens` (with `num_calls == 1` and `num_calls == 0`, cc https://github.com/rust-lang/rust/pull/78736).

(I also moved token collection into `fn parse_attribute` to deduplicate code a bit.)

r? `@Aaron1011`
2020-11-12 00:33:55 +00:00
Fabian Zaiser
de84ad95b4 Implement destructuring assignment for structs and slices
Co-authored-by: varkor <github@varkor.com>
2020-11-11 12:10:52 +00:00
Nicholas-Baron
261ca04c92 Changed unwrap_or to unwrap_or_else in some places.
The discussion seems to have concluded that this lint is a bit "noisy", in
that applying it in all places would reduce readability.

A few of the trivial functions (like `Path::new`) are fine to leave
outside of closures.

The general rule seems to be that anything that is obviously an
allocation (`Box`, `Vec`, `vec![]`) should be in a closure, even if it
is a 0-sized allocation.
2020-11-10 20:07:47 -08:00
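A small sketch of the rule described above, with illustrative values:

```rust
use std::path::Path;

// Trivial constructors like `Path::new` can stay eagerly evaluated inside
// `unwrap_or`, but anything that obviously allocates should be deferred with
// `unwrap_or_else` so the allocation only happens on the `None` path.
pub fn pick_path(p: Option<&Path>) -> &Path {
    p.unwrap_or(Path::new("src/lib.rs"))
}

pub fn pick_name(n: Option<String>) -> String {
    n.unwrap_or_else(|| String::from("unnamed"))
}
```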
Dylan DPC
8ebca242bc
Rollup merge of #78710 - petrochenkov:macvisit, r=davidtwco
rustc_ast: Do not panic by default when visiting macro calls

Panicking by default made sense when we didn't have HIR or MIR and everything worked on AST, but now all AST visitors run early and the majority of them have to deal with macro calls, often by ignoring them.

The second commit renames `visit_mac` to `visit_mac_call`, the corresponding structures were renamed earlier in https://github.com/rust-lang/rust/pull/69589.
2020-11-09 19:06:55 +01:00
Vadim Petrochenkov
12de1e8985 Do not collect tokens for doc comments 2020-11-09 01:47:11 +03:00
bors
1773f60ea5 Auto merge of #78712 - petrochenkov:visitok, r=Aaron1011
rustc_ast: Visit tokens stored in AST nodes in mutable visitor

After #77271 token visiting is enabled only for one visitor in `rustc_expand\src\mbe\transcribe.rs` which applies hygiene marks to tokens produced by declarative macros (`macro_rules` or `macro`), so this change doesn't affect anything else.

When a macro has some interpolated token from an outer macro in its output
```rust
macro inner() {
    $interpolated
}
```
we can use the usual interpretation of interpolated tokens in the token-based model - a None-delimited group - to write this macro in an equivalent form
```rust
macro inner() {
    ⟪ a b c d ⟫
}
```

When we are expanding the macro `inner` we need to apply hygiene marks to all tokens produced by it, including the tokens inside the group.

Before this PR we did this by visiting the AST piece inside the interpolated token and applying marks to all spans in it.
I'm not sure this is 100% correct (ideally we should apply the marks to tokens and then re-parse the AST from tokens), but it's a very good approximation at least.
We didn't however apply the marks to actual tokens stored in the nonterminal, so if we used the nonterminal as a token rather than as an AST piece (e.g. passed it to a proc macro), then we got hygiene bugs.
This PR applies the marks to tokens in addition to the AST pieces thus fixing the issue.

r? `@Aaron1011`
2020-11-08 20:00:51 +00:00
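For reference, a minimal `macro_rules!` sketch of the "interpolated token from an outer macro" situation described above (the hygiene fix itself is internal to the compiler):

```rust
// `outer!` defines `inner!` whose body contains the interpolated expression
// `$e`; when `inner!` is later expanded, that interpolated token is the kind
// of nonterminal the hygiene marks must also be applied to.
macro_rules! outer {
    ($e:expr) => {
        macro_rules! inner {
            () => { $e };
        }
    };
}

outer!(1 + 2);

fn main() {
    println!("{}", inner!());
}
```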
est31
dfa5e46fd5 The renumber pass is long gone
Originally, there was a dedicated pass for renumbering
AST NodeIds to have actual values. This pass was added by
commit a5ad4c3794.

Then, later, this step was moved to where it resides now,
macro expansion. See commit c86c8d41a2
or PR #36438.

The comment snippet, added by the original commit, has
survived unchanged ever since, becoming outdated
when the dedicated pass was removed.

Nowadays, grepping for the next_node_id function turns up
multiple places in the compiler that call it, but the main
rewriting that the comment talks about is still done in the
expansion step, inside an innocuous-looking visit_id function
that's called during macro invocation collection.
2020-11-06 03:18:01 +01:00
Vadim Petrochenkov
8def2fc122 rustc_ast: Never clone empty token streams in mutable visitor 2020-11-06 00:59:08 +03:00
Vadim Petrochenkov
1e15606547 rustc_ast: Visit tokens stored in AST nodes in mutable visitor 2020-11-06 00:30:52 +03:00
Matthias Krüger
bcd2f2df67 fix a couple of clippy warnings:
filter_next
manual_strip
redundant_static_lifetimes
single_char_pattern
unnecessary_cast
unused_unit
op_ref
redundant_closure
useless_conversion
2020-11-04 13:48:50 +01:00
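As one concrete illustration of the listed lints, a sketch of a `manual_strip` fix (hypothetical prefix):

```rust
// `manual_strip`: prefer `strip_prefix` over checking `starts_with` and then
// slicing by the prefix length manually.
pub fn without_crate_prefix(name: &str) -> &str {
    // Before: if name.starts_with("rustc_") { &name["rustc_".len()..] } else { name }
    name.strip_prefix("rustc_").unwrap_or(name)
}
```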
Vadim Petrochenkov
90fafc8c8f rustc_ast: visit_mac -> visit_mac_call 2020-11-03 23:39:51 +03:00
Vadim Petrochenkov
3237b3886c rustc_ast: Do not panic by default when visiting macro calls 2020-11-03 20:38:20 +03:00
Yuki Okushi
0716724a0b
Rollup merge of #78376 - Aaron1011:feature/consistent-empty-expr, r=petrochenkov
Treat trailing semicolon as a statement in macro call

See https://github.com/rust-lang/rust/issues/61733#issuecomment-716188981

We now preserve the trailing semicolon in a macro invocation, even if
the macro expands to nothing. As a result, the following code no longer
compiles:

```rust
macro_rules! empty {
    () => { }
}

fn foo() -> bool { //~ ERROR mismatched
    { true } //~ ERROR mismatched
    empty!();
}
```

Previously, `{ true }` would be considered the trailing expression, even
though there's a semicolon in `empty!();`

This makes macro expansion more token-based.
2020-11-03 15:27:03 +09:00
Vadim Petrochenkov
19dbb02a89 Expand NtExpr tokens only in key-value attributes 2020-11-03 00:53:43 +03:00
Aaron Hill
e78e9d4a06
Treat trailing semicolon as a statement in macro call
See https://github.com/rust-lang/rust/issues/61733#issuecomment-716188981

We now preserve the trailing semicolon in a macro invocation, even if
the macro expands to nothing. As a result, the following code no longer
compiles:

```rust
macro_rules! empty {
    () => { }
}

fn foo() -> bool { //~ ERROR mismatched
    { true } //~ ERROR mismatched
    empty!();
}
```

Previously, `{ true }` would be considered the trailing expression, even
though there's a semicolon in `empty!();`

This makes macro expansion more token-based.
2020-11-02 13:03:13 -05:00
Vadim Petrochenkov
6b63e9b990 Do not remove tokens before AST json serialization 2020-11-01 00:03:35 +03:00
Vadim Petrochenkov
d0c63bccc5 parser: Cleanup LazyTokenStream and avoid some clones
by using a named struct instead of a closure.
2020-10-31 01:56:34 +03:00
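A rough sketch, not rustc's actual types, of the refactoring pattern named in this commit: a named struct holding the captured state instead of a stored closure, so the state is explicit and cheap to clone.

```rust
// Stand-in types: the "lazy" token stream remembers only where capturing
// started and how many tokens were consumed, and rebuilds the stream on demand.
#[derive(Clone)]
struct LazyTokens {
    start_pos: usize,
    num_tokens: usize,
}

impl LazyTokens {
    fn create(&self) -> Vec<String> {
        (self.start_pos..self.start_pos + self.num_tokens)
            .map(|i| format!("tok{}", i))
            .collect()
    }
}

fn main() {
    let lazy = LazyTokens { start_pos: 3, num_tokens: 2 };
    assert_eq!(lazy.create(), vec!["tok3".to_string(), "tok4".to_string()]);
}
```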
bors
20b1e05a8d Auto merge of #77502 - varkor:const-generics-suggest-enclosing-braces, r=petrochenkov
Suggest that expressions that look like const generic arguments should be enclosed in brackets

I pulled out the changes for const expressions from https://github.com/rust-lang/rust/pull/71592 (without the trait object diagnostic changes) and made some small changes; the implementation is `@estebank`'s.

We're also going to want to make some changes separately to account for trait objects (they result in poor diagnostics, as is evident from one of the test cases here), such as an adaption of https://github.com/rust-lang/rust/pull/72273.

Fixes https://github.com/rust-lang/rust/issues/70753.

r? `@petrochenkov`
2020-10-27 09:25:54 +00:00
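For context, a short sketch (illustrative function) of the kind of const generic argument the suggestion targets:

```rust
// A const argument that is more than a literal or a single identifier must be
// enclosed in braces (`{ ... }`) to parse as an expression; the new diagnostic
// suggests exactly that.
fn zeros<const N: usize>() -> [u8; N] {
    [0; N]
}

fn main() {
    // `zeros::<2 + 2>()` is rejected by the parser; braces fix it:
    let arr = zeros::<{ 2 + 2 }>();
    println!("{}", arr.len());
}
```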
varkor
ac1454001c Suggest expressions that look like const generic arguments should be enclosed in brackets
Co-Authored-By: Esteban Kuber <github@kuber.com.ar>
2020-10-26 21:54:45 +00:00
Yuki Okushi
0a26e4ba7e
Rollup merge of #78326 - Aaron1011:fix/min-stmt-lints, r=petrochenkov
Split out statement attributes changes from #78306

This is the same as PR https://github.com/rust-lang/rust/pull/78306, but `unused_doc_comments` is modified to explicitly ignore statement items (which preserves the current behavior).

This shouldn't have any user-visible effects, so it can be landed without lang team discussion.

---------
When the 'early' and 'late' visitors visit an attribute target, they
activate any lint attributes (e.g. `#[allow]`) that apply to it.
This can affect warnings emitted on sibling attributes. For example,
the following code does not produce an `unused_attributes` warning for
`#[inline]`, since the sibling `#[allow(unused_attributes)]` suppressed
the warning.

```rust
trait Foo {
    #[allow(unused_attributes)] #[inline] fn first();
    #[inline] #[allow(unused_attributes)] fn second();
}
```

However, we do not do this for statements - instead, the lint attributes
only become active when we visit the struct nested inside `StmtKind`
(e.g. `Item`).

Currently, this is difficult to observe due to another issue - the
`HasAttrs` impl for `StmtKind` ignores attributes for `StmtKind::Item`.
As a result, the `unused_doc_comments` lint will never see attributes on
item statements.

This commit makes two interrelated fixes to the handling of inert
(non-proc-macro) attributes on statements:

* The `HasAttrs` impl for `StmtKind` now returns attributes for
  `StmtKind::Item`, treating it just like every other `StmtKind`
  variant. The only place relying on the old behavior was macro collection,
  which has been updated to explicitly ignore attributes on item
  statements. This allows the `unused_doc_comments` lint to fire for
  item statements.
* The `early` and `late` lint visitors now activate lint attributes when
  invoking the callback for `Stmt`. This ensures that a lint
  attribute (e.g. `#[allow(unused_doc_comments)]`) can be applied to
  sibling attributes on an item statement.

For now, the `unused_doc_comments` lint is explicitly disabled on item
statements, which preserves the current behavior. The exact locations
where this lint should fire are being discussed in PR #78306
2020-10-25 18:43:49 +09:00
Aaron Hill
ac384ac2db
Fix inconsistencies in handling of inert attributes on statements
When the 'early' and 'late' visitors visit an attribute target, they
activate any lint attributes (e.g. `#[allow]`) that apply to it.
This can affect warnings emitted on sibling attributes. For example,
the following code does not produce an `unused_attributes` warning for
`#[inline]`, since the sibling `#[allow(unused_attributes)]` suppressed
the warning.

```rust
trait Foo {
    #[allow(unused_attributes)] #[inline] fn first();
    #[inline] #[allow(unused_attributes)] fn second();
}
```

However, we do not do this for statements - instead, the lint attributes
only become active when we visit the struct nested inside `StmtKind`
(e.g. `Item`).

Currently, this is difficult to observe due to another issue - the
`HasAttrs` impl for `StmtKind` ignores attributes for `StmtKind::Item`.
As a result, the `unused_doc_comments` lint will never see attributes on
item statements.

This commit makes two interrelated fixes to the handling of inert
(non-proc-macro) attributes on statements:

* The `HasAttrs` impl for `StmtKind` now returns attributes for
  `StmtKind::Item`, treating it just like every other `StmtKind`
  variant. The only place relying on the old behavior was macro collection,
  which has been updated to explicitly ignore attributes on item
  statements. This allows the `unused_doc_comments` lint to fire for
  item statements.
* The `early` and `late` lint visitors now activate lint attributes when
  invoking the callback for `Stmt`. This ensures that a lint
  attribute (e.g. `#[allow(unused_doc_comments)]`) can be applied to
  sibling attributes on an item statement.

For now, the `unused_doc_comments` lint is explicitly disabled on item
statements, which preserves the current behavior. The exact locations
where this lint should fire are being discussed in PR #78306
2020-10-24 11:55:48 -04:00
Aaron Hill
b9b2546417
Unconditionally capture tokens for attributes.
This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.

We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.
2020-10-21 18:57:29 -04:00
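As a concrete illustration of the `cfg_attr` case described above:

```rust
// The user writes only `derive(Debug)` inside `cfg_attr`; the `#[` and `]`
// tokens of the expanded `#[derive(Debug)]` attribute are not present in the
// source and therefore have to be synthesized by the compiler.
#[cfg_attr(test, derive(Debug))]
struct Config {
    verbose: bool,
}

fn main() {
    let c = Config { verbose: true };
    println!("{}", c.verbose);
}
```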
bors
22e6b9c689 Auto merge of #77250 - Aaron1011:feature/flat-token-collection, r=petrochenkov
Rewrite `collect_tokens` implementations to use a flattened buffer

Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-21 15:03:14 +00:00
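A simplified sketch, with hypothetical types rather than rustc's, of the flatten-then-reconstruct idea described above:

```rust
// While capturing, open/close delimiters are recorded as ordinary entries in a
// flat buffer; the nested `Delimited` structure is only rebuilt afterwards,
// and only if the captured tokens are actually needed.
enum FlatToken {
    Token(String),
    OpenDelim,
    CloseDelim,
}

#[derive(Debug)]
enum TokenTree {
    Token(String),
    Delimited(Vec<TokenTree>),
}

fn rebuild(flat: &[FlatToken]) -> Vec<TokenTree> {
    let mut stack: Vec<Vec<TokenTree>> = vec![Vec::new()];
    for t in flat {
        match t {
            FlatToken::Token(s) => stack.last_mut().unwrap().push(TokenTree::Token(s.clone())),
            FlatToken::OpenDelim => stack.push(Vec::new()),
            FlatToken::CloseDelim => {
                let inner = stack.pop().unwrap();
                stack.last_mut().unwrap().push(TokenTree::Delimited(inner));
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // Roughly: `a ( b )`
    let flat = [
        FlatToken::Token("a".into()),
        FlatToken::OpenDelim,
        FlatToken::Token("b".into()),
        FlatToken::CloseDelim,
    ];
    println!("{:?}", rebuild(&flat));
}
```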
Aaron Hill
593fdd3d45
Rewrite collect_tokens implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-19 13:59:18 -04:00
Aaron Hill
f6aec82d4d
Avoid cloning the contents of a TokenStream in a few places 2020-10-19 12:30:41 -04:00
bors
834821e3b6 Auto merge of #78066 - bugadani:wat, r=jonas-schievink
Clean up small, surprising bits of code

This PR cleans up a small number of unrelated, small things I found while browsing the code base.
2020-10-18 13:50:31 +00:00
Dániel Buga
d708d7fb79 No need to map the max_distance 2020-10-18 11:01:08 +02:00
Santiago Pastorino
03defb627c
Add check_generic_arg early pass 2020-10-16 17:14:36 -03:00
Santiago Pastorino
c3e8d7965c
Parse inline const expressions 2020-10-16 15:15:30 -03:00
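For reference, a sketch of the inline-const syntax this commit adds parsing for (behind `#![feature(inline_const)]` on nightly at the time; inline const expressions were stabilized much later):

```rust
// An inline `const { ... }` block is evaluated at compile time and can be used
// in expression position.
fn main() {
    let x = const { 1 + 2 };
    println!("{}", x);
}
```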
Yuki Okushi
022d20759b
Rollup merge of #77739 - est31:remove_unused_code, r=petrochenkov,varkor
Remove unused code

Rustc has a builtin lint for detecting unused code inside a crate, but when an item is marked `pub`, the code, even if unused inside the entire workspace, is never marked as such. Therefore, I've built [warnalyzer](https://github.com/est31/warnalyzer) to detect unused items in a cross-crate setting.

Closes https://github.com/est31/warnalyzer/issues/2
2020-10-15 07:32:29 +09:00
est31
49d4a756f1 Remove unused code from rustc_ast 2020-10-14 04:14:32 +02:00
Aaron Hill
9a6ea38647
Add hack to keep actix-web and actori-web compiling
This extends the existing `ident_name_compatibility_hack` to handle the
`tuple_from_req` macro defined in `actix-web` (and its fork
`actori-web`).
2020-10-11 13:20:26 -04:00
Esteban Küber
4ae8f6ec7c address review comments 2020-10-09 22:00:48 -07:00
bors
91a79fb29a Auto merge of #76985 - hbina:clone_check, r=estebank
Prevent stack overflow in deeply nested types.

Related issue #75577 (?)

Unfortunately, I am unable to test whether this actually solves the problem because apparently, 12GB RAM + 2GB swap is not enough to compile the (admittedly toy) source file.
2020-10-07 21:51:12 +00:00