* gcc.target/cris/pr93372-2.c, gcc.target/cris/pr93372-5.c,
gcc.target/cris/pr93372-8.c: New tests.
These tests fail miserably both at being an example of cc0
eliminating compare instructions, and post-cc0-CRIS at showing a
significant improvement. They're here to track suboptimal
comparison code for CRIS.
This test was separated from the posted and approved patch named
"dbr: Filter-out TARGET_FLAGS_REGNUM from end_of_function_needs"
and applied: it doesn't fail yet. It differs from the posted
version in that function "g" is commented-out; see the added
comment.
* config/cris/cris.c (cris_reduce_compare): New function.
* config/cris/cris-protos.h (cris_reduce_compare): Add prototype.
* config/cris/cris.md ("cbranch<mode>4", "cbranchdi4", "cstoredi4")
(cstore<mode>4"): Apply cris_reduce_compare in expanders.
The decc0ration work of the CRIS port made me look closer at the
code for trivial comparisons, as in the condition for branches
and conditional-stores, like in:
void g(short int a, short int b)
{
  short int c = a + b;
  if (c >= 0)
    foo ();
}
At -O2, the cc0 version of the CRIS port has an explicit
*uneliminated* compare instruction ("cmp.w -1,$r10") instead of
an (eliminated) compare against 0 (which below I'll call a
zero-compare). This is with the CRIS-cc0 version, but I see the same
also for a much older gcc, 4.7. For the decc0rated port, the
compare *is* a test against 0, eventually eliminated. To wit,
for cc0 (mind the delay-slot):
_g:
        subq 4,$sp
        add.w $r11,$r10
        cmp.w -1,$r10
        ble .L9
        move $srp,[$sp]
        jsr _foo
.L9:
        jump [$sp+]
The compare instruction is expected to be eliminated, i.e. the
following diff to the above is desired, modulo the missing
sibling call, which corresponds to what I get from 4.7 and for
the decc0rated port:
!--- a Wed Feb 5 15:22:27 2020
!+++ b Wed Feb 5 15:22:51 2020
!@@ -1,8 +1,7 @@
! _g:
!       subq 4,$sp
!       add.w $r11,$r10
!-      cmp.w -1,$r10
!-      ble .L9
!+      bmi .L9
!       move $srp,[$sp]
!
!       jsr _foo
Tracking this difference, I see that for both cc0-CRIS and the
decc0rated CRIS, the comparison actually starts out as a compare
against -1 at "expand" time, but is transformed for decc0rated
CRIS to a zero-compare in "cse1".
For CRIS-cc0 "cse1" does try to replace the compare with a
zero-compare, but fails because at the same time it tries to
replace the c operand with (a + b). Or some such; it fails and
no other pass succeeds. I was not into fixing cc0-handling in
core gcc, so I didn't look closer.
BTW, at first, I was a bit surprised to see that for compares
against a constant, a zero-compare is not canonical RTX for
*all* conditions, and that instead only a subset of all RTX
conditions against a constant are canonical, transforming one
condition to the canonical one by adding 1 or -1 to the
constant. It does make sense on a closer look, but still not
so much when emitting RTL.
There are several places that mention in comments that emitting
RTX as zero-compare is preferable, but nothing is done about it.
Some generic code instead seems confused, thinking that the *target* is
helped by seeing canonical RTX; or perhaps it (its authors) is, like
me, confused about what a canonical comparison is. For example,
prepare_cmp_insn calls canonicalize_comparison last before
emitting the actual instructions. I see that most ports, for various
port-specific reasons, do their own massaging in their cbranch
and cstore expanders. Still, the suboptimal compares *should*
be fixed at expand time; better start out right than just
relying on later optimizations.
This kind of change is not acceptable in the current gcc
development stage, at least as a change in generic code.
However, it's problematic enough that I chose to fix this right
now in the CRIS port. For that, I claim a possibly
long-standing regression. After this, code before and after
decc0ration is similar enough that I can spot
compare-elimination-efforts and apply regression test-cases
without them drowning in cc0-specific xfailing.
I hope to eventually lift out cris_reduce_compare (renamed) into
say expmed.c, called in e.g. emit_store_flag_1 (replacing the
in-line code) and prepare_cmp_insn. Later.
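For reference, here is a minimal sketch of the kind of reduction meant
above; the function name, signature and the handled cases are illustrative
assumptions, not the actual cris_reduce_compare code:

/* Illustrative sketch only.  Assumes the usual GCC internal headers
   (config.h, system.h, coretypes.h, rtl.h).  A signed compare against the
   constant -1 is rewritten as the adjacent compare against 0, which later
   passes can then eliminate against the flags set by the preceding
   arithmetic insn.  */

static void
reduce_compare_sketch (enum rtx_code *cond, rtx *op1)
{
  if (CONST_INT_P (*op1) && INTVAL (*op1) == -1)
    switch (*cond)
      {
      case GT:   /* x > -1   <=>   x >= 0  */
        *cond = GE;
        *op1 = const0_rtx;
        break;
      case LE:   /* x <= -1  <=>   x < 0   */
        *cond = LT;
        *op1 = const0_rtx;
        break;
      default:
        break;
      }
}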
The Linux CET kernel places a restore token on the shadow stack for the
signal handler to enhance security. The restore token is 8 bytes and aligned
to 8 bytes. It is usually transparent to user programs since the kernel
will pop the restore token when the signal handler returns. But when an
exception is thrown from a signal handler, we now need to pop the
restore token from the shadow stack. For x86-64, we just need to treat
the signal frame as a normal frame. For i386, we need to search for
the restore token to check whether the original shadow stack is 8-byte
aligned. If the original shadow stack is 8-byte aligned, we just
need to pop 2 slots (one restore token) from the shadow stack. Otherwise,
we need to pop 3 slots (one restore token + 4 bytes of padding) from the
shadow stack.
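A minimal sketch of the i386 slot-count decision described above; the
helper name and parameter are made up for illustration, and the real logic
lives in the _Unwind_Frames_Increment hook in shadow-stack-unwind.h, which
also has to locate the restore token first:

/* Illustration only, not the shadow-stack-unwind.h code.  On i386 the
   shadow stack uses 4-byte slots, so the 8-byte restore token occupies
   two slots, plus one more slot when the kernel inserted 4 bytes of
   padding to align the token to 8 bytes.  */
static inline unsigned int
i386_signal_frame_ssp_slots_sketch (unsigned int original_ssp)
{
  /* Original shadow stack 8-byte aligned: restore token only (2 slots).
     Otherwise: restore token + 4 bytes of padding (3 slots).  */
  return (original_ssp % 8 == 0) ? 2 : 3;
}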
This patch also includes two tests: one has a restore token with 4-byte
padding and one without.
Tested on Linux/x86-64 CET machine with and without -m32.
libgcc/
PR libgcc/85334
* config/i386/shadow-stack-unwind.h (_Unwind_Frames_Increment):
New.
gcc/testsuite/
PR libgcc/85334
* g++.target/i386/pr85334-1.C: New test.
* g++.target/i386/pr85334-2.C: Likewise.
The peephole that detects a mov of one register to another followed by
a comparison of the original register against zero is only used in Arm
state; but the instruction that matches this is generic to all 32-bit
compilation states. That instruction lacks support for SP which is
permitted in Arm state, but has restrictions in Thumb2 code.
This patch fixes the problem by allowing SP when in ARM state for all
registers; in Thumb state it allows SP only as a source when the
register really is copied to another target.
* config/arm/arm.md (movsi_compare0): Allow SP as a source register
in Thumb state and also as a destination in Arm state. Add T16
variants.
The last argument to strncasecmp is incorrect, so it matched even when
"can%'" wasn't followed by "t". Also, the !ISALPHA (format_chars[1]) test
looks pointless: format_chars[1] must be "'" if strncasecmp succeeded and
so will never be ISALPHA.
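To illustrate the class of bug (the strings and lengths below are made up
for the example and are not the actual c-format.c values): a too-short last
argument makes strncasecmp match on a prefix alone.

#include <strings.h>
#include <stdio.h>

int
main (void)
{
  const char *text = "canvas";          /* does not contain "can't" */
  /* Comparing only 3 characters matches the bare "can" prefix ...  */
  if (strncasecmp (text, "can't", 3) == 0)
    printf ("too-short length: spurious match\n");
  /* ... while comparing the full pattern length does not.  */
  if (strncasecmp (text, "can't", 5) != 0)
    printf ("full length: no match\n");
  return 0;
}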
2020-02-10 Jakub Jelinek <jakub@redhat.com>
PR other/93641
* c-format.c (check_plain): Fix up last argument of strncasecmp.
Remove useless extra test.
* gcc.dg/format/gcc_diag-11.c (test_cdiag_bad_words): Add two further
tests.
Commit r10-6500-g811a475ea3fcc55ee4aea7c81171891ef19dfc25 broke the
GCC build for arm-none-uclinuxfdpiceabi, as it forgot to update some
uses of gnu_Unwind_Find_got.
2020-02-10 Christophe Lyon <christophe.lyon@linaro.org>
libgcc/
PR target/93615
* unwind-arm-common.inc: Replace uses of gnu_Unwind_Find_got with
_Unwind_gnu_Find_got.
* unwind-pe.h: Likewise.
I'm not aware of symbols starting with _ZG that don't start with the _ZGR
prefix, but perhaps in the future there might be some.
2020-02-10 Jakub Jelinek <jakub@redhat.com>
PR other/93641
* error.c (dump_decl_name): Fix up last argument to strncmp.
Clearly I can't count, so we would consider as SECTION_BSS even sections
like .lbssfoo or .gnu.linkonce.lbbar, even though the linker only treats
as special .lbss, .lbss.baz or .gnu.linkonce.lb.qux.
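For reference, a sketch of the intended matching; this is illustrative
only, not the exact x86_64_elf_section_type_flags logic:

#include <string.h>

/* Treat a section as the large-BSS kind only when its name is exactly
   ".lbss", or has the ".lbss." or ".gnu.linkonce.lb." prefix, without
   also matching unrelated names such as ".lbssfoo".  */
static int
is_large_bss_section_name (const char *name)
{
  return strcmp (name, ".lbss") == 0
         || strncmp (name, ".lbss.", 6) == 0
         || strncmp (name, ".gnu.linkonce.lb.", 17) == 0;
}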
2020-02-10 Jakub Jelinek <jakub@redhat.com>
PR target/58218
PR other/93641
* config/i386/i386.c (x86_64_elf_section_type_flags): Fix up last
arguments of strncmp.
We were already rejecting initialization of a flexible array member in a
constructor; we similarly shouldn't try to clean it up.
PR c++/93618
* tree.c (array_of_unknown_bound_p): New.
* init.c (perform_member_init): Do nothing for flexible arrays.
Add xfails for nvptx offloading because
"no GOMP_OFFLOAD_async_run implemented in plugin-nvptx.c"
(https://gcc.gnu.org/PR81688) and because
"omp target link not implemented for nvptx"
(https://gcc.gnu.org/PR81689).
libgomp/
* testsuite/libgomp.c/target-33.c: Add xfail for execution on
offload_target_nvptx, cf. https://gcc.gnu.org/PR81688.
* testsuite/libgomp.c/target-34.c: Likewise.
* testsuite/libgomp.c/target-link-1.c: Add xfail for
offload_target_nvptx, cf. https://gcc.gnu.org/PR81689.
Besides a simple pass-through (aggregate) jump function, an arithmetic
(aggregate) jump function could also bring the same (aggregate) value as the
parameter passed in for a self-feeding recursive call. For example,
f1 (int i) /* normal jump function */
{
  f1 (i & 1);
}
Suppose i is 0, recursive propagation via (i & 1) also gets 0, which
can be seen as a simple pass-through of i.
f2 (int *p) /* aggregate jump function */
{
  int t = *p & 1;
  f2 (&t);
}
Likewise, if *p is 0, (*p & 1) is also 0, and &t is an aggregate simple
pass-through of p.
2020-02-10 Feng Xue <fxue@os.amperecomputing.com>
PR ipa/93203
* ipa-cp.c (ipcp_lattice::add_value): Add source with same call edge
but different source value.
(adjust_callers_for_value_intersection): New function.
(gather_edges_for_value): Adjust order of callers to let a
non-self-recursive caller be the first element.
(self_recursive_pass_through_p): Add a new parameter "simple", and
check generalized self-recursive pass-through jump function.
(self_recursive_agg_pass_through_p): Likewise.
(find_more_scalar_values_for_callers_subset): Compute value from
pass-through jump function for self-recursive.
(intersect_with_plats): Cleanup previous implementation code for value
intersection with self-recursive call edge.
(intersect_with_agg_replacements): Likewise.
(intersect_aggregates_with_edge): Deduce value from pass-through jump
function for self-recursive call edge. Cleanup previous implementation
code for value intersection with self-recursive call edge.
(decide_whether_version_node): Remove dead callers and adjust order
to let a non-self-recursive caller be the first element.
PR ipa/93203
* g++.dg/ipa/pr93203.C: New test.
The names of split_before_sched2 ("split4") and split_before_regstack
("split3") do not reflect their insertion point in the sequence of passes,
where split_before_regstack follows split_before_sched2. Reorder the code
and rename the passes to reflect the reality.
The split_before_regstack pass does not need to run if the split_before_sched2
pass was already performed. Introduce the enable_split_before_sched2 function
to simplify the gating functions of these two passes.
There is no need for a separate rest_of_handle_split_before_sched2.
split_all_insns can be called unconditionally from
pass_split_before_sched2::execute, since the corresponding gating function
determines if the pass is executed or not.
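A structural sketch of the gating relationship described above; the
placeholder conditions stand in for the real tests in recog.c
(INSN_SCHEDULING, optimization level, HAVE_ATTR_length, STACK_REGS):

/* Sketch only, not the actual recog.c code.  */
static bool
enable_split_before_sched2_sketch (void)
{
  return true;  /* placeholder for "insns must be split before sched2" */
}

static bool
gate_split_before_sched2_sketch (void)
{
  return enable_split_before_sched2_sketch ();
}

static bool
gate_split_before_regstack_sketch (void)
{
  /* No need to split again here if the split-before-sched2 pass ran.  */
  return !enable_split_before_sched2_sketch ();
}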
* recog.c: Move pass_split_before_sched2 code in front of
pass_split_before_regstack.
(pass_data_split_before_sched2): Rename pass to split3 from split4.
(pass_data_split_before_regstack): Rename pass to split4 from split3.
(rest_of_handle_split_before_sched2): Remove.
(pass_split_before_sched2::execute): Unconditionally call
split_all_insns.
(enable_split_before_sched2): New function.
(pass_split_before_sched2::gate): Use enable_split_before_sched2.
(pass_split_before_regstack::gate): Ditto.
* config/nds32/nds32.c (nds32_split_double_word_load_store_p):
Update name check for renamed split4 pass.
* config/sh/sh.c (register_sh_passes): Update pass insertion
point for renamed split4 pass.
The helpers that implement BUILTIN-PTR-CMP do not currently check if the
arguments are actually comparable, so the concept is true when it
shouldn't be.
Since we're trying to test for an unambiguous conversion to pointers, we
can also require that it returns bool, because the built-in comparisons
for pointers do return bool.
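A simplified sketch of the shape of the constraint, for illustration; the
actual __eq_builtin_ptr_cmp in <bits/range_cmp.h> has further conditions,
and the names here are made up:

#include <concepts>
#include <utility>

template<typename T, typename U>
  concept eq_builtin_ptr_cmp_sketch
    = std::convertible_to<T, const volatile void*>
      && std::convertible_to<U, const volatile void*>
      && requires (T&& t, U&& u)
         {
           /* The comparison must itself be valid and yield exactly bool.  */
           { std::forward<T> (t) == std::forward<U> (u) }
             -> std::same_as<bool>;
         };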
* include/bits/range_cmp.h (__detail::__eq_builtin_ptr_cmp): Require
equality comparison to be valid and return bool.
(__detail::__less_builtin_ptr_cmp): Likewise for less-than comparison.
* testsuite/20_util/function_objects/range.cmp/equal_to.cc: Check
type with ambiguous conversion to fundamental types.
* testsuite/20_util/function_objects/range.cmp/less.cc: Likewise.
The first (valid) testcase ICEs because for
  A *a = new B ();
  a->foo (); // virtual method call
we actually see &heap, and the "heap " objects don't have the class (or
whatever other type was used in the new expression) as their type, but an
array type containing one (or more, for array new) of those, and so when
using TYPE_BINFO (objtype) on it we ICE.
This patch handles this special case, and otherwise punts (as shown e.g. in
the second testcase, where, because the heap object is already deleted, we
don't really want to allow it to be used).
2020-02-09 Jakub Jelinek <jakub@redhat.com>
PR c++/93633
* constexpr.c (cxx_eval_constant_expression): If obj is heap var with
ARRAY_TYPE, use the element type. Punt if objtype after that is not
a class type.
* g++.dg/cpp2a/constexpr-new11.C: New test.
* g++.dg/cpp2a/constexpr-new12.C: New test.
* g++.dg/cpp2a/constexpr-new13.C: New test.
DECL_IN_CONSTANT_POOL decls are shared and thus don't really get emitted in the
BLOCK where they are used, so for OpenMP target regions that have initializers
gimplified into copying from them, we actually map them at runtime from host to
offload devices. This patch instead marks them as "omp declare target", so
that they are on the target device from the beginning and don't need to be
copied there.
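As an illustration of the kind of code affected (this is not the new
target-38.c test, and whether a given initializer actually ends up in the
constant pool is target- and optimization-dependent):

/* The array initializer below may be emitted as a constant-pool entry that
   the target region copies into "d"; such entries are now marked
   "omp declare target" instead of being mapped at runtime.  */
double
sum_on_device (void)
{
  double r = 0.0;
  #pragma omp target map(tofrom: r)
  {
    double d[] = { 1.0, 2.5, 3.5, 4.0 };
    for (int i = 0; i < 4; i++)
      r += d[i];
  }
  return r;
}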
2020-02-09 Jakub Jelinek <jakub@redhat.com>
* gimplify.c (gimplify_adjust_omp_clauses_1): Promote
DECL_IN_CONSTANT_POOL variables into "omp declare target" to avoid
copying them around between host and target.
* testsuite/libgomp.c/target-38.c: New test.
Hi,
The problem here is that the vector mode version of movmisalign<mode>
was conditionalized only on whether SIMD was enabled, instead of being
conditionalized on STRICT_ALIGNMENT too.
Applied as pre-approved in the bug report by Richard Sandiford
after a bootstrap/test on aarch64-linux-gnu.
Thanks,
Andrew Pinski
ChangeLog:
PR target/91927
* config/aarch64/aarch64-simd.md (movmisalign<mode>): Check
STRICT_ALIGNMENT also.
testsuite/ChangeLog:
PR target/91927
* gcc.target/aarch64/pr91927.c: New testcase.
The fix for PR target/92923 exposed some test cases with fragile
scan-assembler-times counting. Split the test cases into smaller
functions, which reduces the chance of optimizations causing slight
changes in the instruction counts.
gcc/testsuite/
PR target/93136
* gcc.dg/vmx/ops.c: Add -flax-vector-conversions to dg-options.
* gcc.target/powerpc/vsx-vector-6.h: Split tests into smaller functions.
* gcc.target/powerpc/vsx-vector-6.p7.c: Adjust scan-assembler-times
regex directives. Adjust expected instruction counts.
* gcc.target/powerpc/vsx-vector-6.p8.c: Likewise.
* gcc.target/powerpc/vsx-vector-6.p9.c: Likewise.
Avoid paradoxical subregs in caller-save code. This reduces stack frame size
due to smaller loads and stores, and allows more frequent rematerialization.
PR target/93532
* config/riscv/riscv.h (HARD_REGNO_CALLER_SAVE_MODE): Define.
We would like to do constexpr evaluation to avoid false positives on
warnings, but constexpr evaluation can involve function body copying that
changes DECL_UID, which breaks -fcompare-debug. So let's remember
that we need to avoid that.
PR c++/90691
* expr.c (fold_for_warn): Call maybe_constant_value.
* constexpr.c (struct constexpr_ctx): Add uid_sensitive field.
(maybe_constant_value): Add uid_sensitive parm.
(get_fundef_copy): Don't copy if it's true.
(cxx_eval_call_expression): Don't instantiate if it's true.
(cxx_eval_outermost_constant_expr): Likewise.
If cxx_eval_outermost_constant_expr doesn't change the argument, we really
shouldn't unshare it when we try to fold it again.
PR c++/92852
* constexpr.c (maybe_constant_value): Don't unshare if the cached
value is the same as the argument.
My change
* typeck2.c (store_init_value): Don't call cp_fully_fold_init on
initializers of automatic non-constexpr variables in constexpr
functions.
-  value = cp_fully_fold_init (value);
+  /* Don't fold initializers of automatic variables in constexpr functions,
+     that might fold away something that needs to be diagnosed at constexpr
+     evaluation time.  */
+  if (!current_function_decl
+      || !DECL_DECLARED_CONSTEXPR_P (current_function_decl)
+      || TREE_STATIC (decl))
+    value = cp_fully_fold_init (value);
from the constexpr new change apparently broke the following testcase.
When handling COND_EXPR, we call build_vector_from_val; however, as the
argument we pass to it is not an INTEGER_CST/REAL_CST but one wrapped in a
NON_LVALUE_EXPR location wrapper, we end up with a CONSTRUCTOR, and as it is
the middle-end that builds it, it doesn't bother with indexes. The
cp_fully_fold_init call used to fold it into VECTOR_CST in the past, but as
we intentionally don't invoke it anymore as it might fold away something
that needs to be diagnosed during constexpr evaluation, we end up evaluating
ARRAY_REF on the index-less CONSTRUCTOR. The following patch fixes the
ICE by teaching find_array_ctor_elt to handle CONSTRUCTORs without indexes
(that itself could still be very efficient) and CONSTRUCTORs with some
indexes present and others missing (the rules are that if the index on the
first element is missing, then it is the array's lowest index (0 in C/C++),
and if other indexes are missing, they are the index of the previous element
+ 1).
Here is a new version, which assumes CONSTRUCTORs where either all or none
of the elts have indexes, and for CONSTRUCTORs without indexes performs the
flag_checking verification directly in find_array_ctor_elt. For CONSTRUCTORs with
indexes, it doesn't do the verification of all elts, because some CONSTRUCTORs
can be large, and it "verifies" only what it really needs - if all elts
touched during the binary search have indexes, that is actually all we care
about because we are sure we found the right elt. It is just if we see a
missing index we need assurance that all are missing to be able to directly
access it.
The assumption then simplifies the patch: for index-less CONSTRUCTORs we can
use direct access, like for CONSTRUCTORs where the last elt's index is equal to
the elt's position. If we append right after the last elt, we should just clear
the index so that we don't violate the assumption, and if we need a gap
between the elts and the elt to be added, we need to add indexes.
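For reference, the index rule relied on above, written out as a stand-alone
sketch; this is not the find_array_ctor_elt code, the helper name is made
up, and it assumes plain INTEGER_CST indexes and a lowest array index of 0
as in C/C++:

/* Sketch only.  Assumes the usual GCC internal headers (config.h, system.h,
   coretypes.h, tree.h).  */
static HOST_WIDE_INT
ctor_elt_effective_index_sketch (tree ctor, unsigned HOST_WIDE_INT i)
{
  HOST_WIDE_INT cur = -1;  /* "index of the previous elt" before elt 0.  */
  for (unsigned HOST_WIDE_INT j = 0; j <= i; j++)
    {
      tree index = CONSTRUCTOR_ELT (ctor, j)->index;
      /* A missing index means "previous index + 1" (0 for the first elt).  */
      cur = index ? tree_to_shwi (index) : cur + 1;
    }
  return cur;
}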
2020-02-08 Jakub Jelinek <jakub@redhat.com>
PR c++/93549
* constexpr.c (find_array_ctor_elt): If last element has no index,
for flag_checking verify all elts have no index. If i is within the
elts, return it directly, if it is right after the last elt, append
if NULL index, otherwise force indexes on all elts.
(cxx_eval_store_expression): Allow cep->index to be NULL.
* g++.dg/ext/constexpr-pr93549.C: New test.
On Tue, Feb 04, 2020 at 11:16:06AM +0100, Uros Bizjak wrote:
> I guess that Comment #9 patch from the PR should be trivially correct,
> but although it looks obvious, I don't want to propose the patch since
> I have no means of testing it.
I don't have means of testing it either.
https://docs.microsoft.com/en-us/cpp/build/x64-calling-convention?view=vs-2019
is quite explicit that [xyz]mm16-31 are call clobbered and only xmm6-15 (low
128-bits only) are call preserved.
We are talking e.g. about
/* { dg-options "-O2 -mabi=ms -mavx512vl" } */
typedef double V __attribute__((vector_size (16)));
void foo (void);
V bar (void);
void baz (V);
void
qux (void)
{
  V c;
  {
    register V a __asm ("xmm18");
    V b = bar ();
    asm ("" : "=x" (a) : "0" (b));
    c = a;
  }
  foo ();
  {
    register V d __asm ("xmm18");
    V e;
    d = c;
    asm ("" : "=x" (e) : "0" (d));
    baz (e);
  }
}
where according to the MSDN doc gcc incorrectly holds the c value
in xmm18 register across the foo call; if foo is compiled by some Microsoft
compiler (or LLVM), then it could clobber %xmm18.
If all xmm18 occurrences are changed to say xmm15, then it is valid to hold
the 128-bit value across the foo call (though, surprisingly, LLVM saves it
into stack anyway).
The other parts are I guess mainly about SEH. Consider e.g.
void
foo (void)
{
  register double x __asm ("xmm14");
  register double y __asm ("xmm18");
  asm ("" : "=x" (x));
  asm ("" : "=v" (y));
  x += y;
  y += x;
  asm ("" : : "x" (x));
  asm ("" : : "v" (y));
}
looking at cross-compiler output, with -O2 -mavx512f this emits
.file "abcdeq.c"
.text
.align 16
.globl foo
.def foo; .scl 2; .type 32; .endef
.seh_proc foo
foo:
subq $40, %rsp
.seh_stackalloc 40
vmovaps %xmm14, (%rsp)
.seh_savexmm %xmm14, 0
vmovaps %xmm18, 16(%rsp)
.seh_savexmm %xmm18, 16
.seh_endprologue
vaddsd %xmm18, %xmm14, %xmm14
vaddsd %xmm18, %xmm14, %xmm18
vmovaps (%rsp), %xmm14
vmovaps 16(%rsp), %xmm18
addq $40, %rsp
ret
.seh_endproc
.ident "GCC: (GNU) 10.0.1 20200207 (experimental)"
Does whatever assembler mingw64 uses even assemble this (I mean the
.seh_savexmm %xmm18, 16 could be problematic)?
I can find e.g.
https://stackoverflow.com/questions/43152633/invalid-register-for-seh-savexmm-in-cygwin/43210527
which then links to
https://gcc.gnu.org/PR65782
2020-02-08 Uroš Bizjak <ubizjak@gmail.com>
Jakub Jelinek <jakub@redhat.com>
PR target/65782
* config/i386/i386.h (CALL_USED_REGISTERS): Make
xmm16-xmm31 call-used even in 64-bit ms-abi.
* gcc.target/i386/pr65782.c: New test.
Co-authored-by: Uroš Bizjak <ubizjak@gmail.com>
When I implemented C++20 parenthesized initialization of aggregates
I introduced this bogus cp_unevaluated_operand check, thus disabling
this feature in unevaluated context. Oop.
Removing the check turned up another bug: I wasn't checking the
return value of digest_init. So when constructible_expr called
build_new_method_call_1 to see if we can construct one type from
another, it got back a bogus INIT_EXPR that looked something like
*(struct T &) 1 = <<< error >>>. But that isn't the error_mark_node,
so constructible_expr thought we had been successful in creating the
ctor call, and it gave the wrong answer. Covered by paren-init17.C.
PR c++/92947 - Paren init of aggregates in unevaluated context.
* call.c (build_new_method_call_1): Don't check
cp_unevaluated_operand. Check the return value of digest_init.
* g++.dg/cpp2a/paren-init21.C: New test.
extract_local_specs wasn't finding the mention of 'an' as a template
argument because we weren't walking into template arguments. So here I
changed cp_walk_subtrees to do so--only walking into template arguments in
the spelling of the type or expression, not any hidden behind typedefs. The
change to use typedef_variant_p avoids looking through typedefs spelled with
'typedef' as well as those spelled with 'using'. And then I removed some
now-redundant code for walking into template arguments in a couple of
walk_tree callbacks.
PR c++/92654
* tree.c (cp_walk_subtrees): Walk into type template arguments.
* cp-tree.h (TYPE_TEMPLATE_INFO_MAYBE_ALIAS): Use typedef_variant_p
instead of TYPE_ALIAS_P.
* pt.c (push_template_decl_real): Likewise.
(find_parameter_packs_r): Likewise. Remove dead code.
* error.c (find_typenames_r): Remove dead code.
* include/bits/iterator_concepts.h (iter_difference_t, iter_value_t):
Use remove_cvref_t.
(readable_traits): Rename to indirectly_readable_traits.
(readable): Rename to indirectly_readable.
(writable): Rename to indirectly_writable.
(__detail::__iter_exchange_move): Do not use remove_reference_t.
(indirectly_swappable): Adjust requires expression parameter types.
* include/bits/ranges_algo.h (ranges::transform, ranges::replace)
(ranges::replace_if, ranges::generate_n, ranges::generate)
(ranges::remove): Use new name for writable.
* include/bits/stl_iterator.h (__detail::__common_iter_has_arrow):
Use new name for readable.
* include/ext/pointer.h (readable_traits<_Pointer_adapter<P>>): Use
new name for readable_traits.
* testsuite/24_iterators/associated_types/readable.traits.cc: Likewise.
* testsuite/24_iterators/indirect_callable/projected.cc: Adjust for
new definition of indirectly_readable.
The wrong type was being used in the __common_iter_has_arrow constraint,
creating a circular dependency where the iterator_traits specialization
was needed before it was complete. The correct parameter for the
__common_iter_has_arrow concept is the first template argument of the
common_iterator, not the common_iterator itself.
* include/bits/stl_iterator.h (__detail::__common_iter_ptr): Change
to take parameters of common_iterator, instead of the common_iterator
type itself. Fix argument for __common_iter_has_arrow constraint.
(iterator_traits<common_iterator<I, S>>::pointer): Adjust.
This patch adds ranges::basic_istream_view and ranges::istream_view. This seems
to be the last missing part of the ranges header.
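A small usage sketch, assuming -std=c++20 (this is not the new test):

#include <ranges>
#include <sstream>
#include <iostream>

int
main ()
{
  std::istringstream input ("1 2 3 4 5");
  /* View successive extractions of int from the stream as a range.  */
  for (int i : std::ranges::istream_view<int> (input))
    std::cout << i * i << ' ';   // prints: 1 4 9 16 25
}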
libstdc++-v3/ChangeLog:
* include/std/ranges (ranges::__detail::__stream_extractable,
ranges::basic_istream_view, ranges::istream_view): Define.
* testsuite/std/ranges/istream_view: New test.
This patch implements [range.adaptors]. It also includes the changes from P3280
and P3278 and P3323, without which many standard examples won't work.
The implementation is mostly dictated by the spec and there was not much room
for implementation discretion. The most interesting part that was not specified
by the spec is the design of the range adaptors and range adaptor closures,
which I tried to design in a way that minimizes boilerplate and statefulness (so
that e.g. the composition of two stateless closures is stateless).
What is left unimplemented is caching of calls to begin() in filter_view,
drop_view and reverse_view, which is required to guarantee that begin() has
amortized constant time complexity. I can implement this in a subsequent patch.
"Interesting" parts of the patch are marked with XXX comments.
libstdc++-v3/ChangeLog:
Implement C++20 range adaptors
* include/std/ranges: Include <bits/refwrap.h> and <tuple>.
(subrange::_S_store_size): Mark as const instead of constexpr to
avoid what seems to be a bug in GCC.
(__detail::__box): Give it defaulted copy and move constructors.
(views::_Single::operator()): Mark constexpr.
(views::_Iota::operator()): Mark constexpr.
(__detail::Empty): Define.
(views::_RangeAdaptor, views::_RangeAdaptorClosure, ref_view, all_view,
views::all, filter_view, views::filter, transform_view,
views::transform, take_view, views::take, take_while_view,
views::take_while, drop_view, views::drop, join_view, views::join,
__detail::require_constant, __detail::tiny_range, split_view,
views::split, views::_Counted, views::counted, common_view,
views::common, reverse_view, views::reverse,
views::__detail::__is_reversible_subrange,
views::__detail::__is_reverse_view, reverse_view, views::reverse,
__detail::__has_tuple_element, elements_view, views::elements,
views::keys, views::values): Define.
* testsuite/std/ranges/adaptors/all.cc: New test.
* testsuite/std/ranges/adaptors/common.cc: Likewise.
* testsuite/std/ranges/adaptors/counted.cc: Likewise.
* testsuite/std/ranges/adaptors/drop.cc: Likewise.
* testsuite/std/ranges/adaptors/drop_while.cc: Likewise.
* testsuite/std/ranges/adaptors/elements.cc: Likewise.
* testsuite/std/ranges/adaptors/filter.cc: Likewise.
* testsuite/std/ranges/adaptors/join.cc: Likewise.
* testsuite/std/ranges/adaptors/reverse.cc: Likewise.
* testsuite/std/ranges/adaptors/split.cc: Likewise.
* testsuite/std/ranges/adaptors/take.cc: Likewise.
* testsuite/std/ranges/adaptors/take_while.cc: Likewise.
* testsuite/std/ranges/adaptors/transform.cc: Likewise.