With test-case gcc.c-torture/compile/pr92231.c, we run into:
...
nvptx-as: ptxas terminated with signal 11 [Segmentation fault], core dumped^M
compiler exited with status 1
FAIL: gcc.c-torture/compile/pr92231.c -O0 (test for excess errors)
...
due to using a function reference plus constant as operand:
...
mov.u64 %r24,bar+4096;
...
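For reference, a construct along these lines (my own sketch, not the
actual pr92231.c test case) is enough to make the compiler form a
function reference plus a constant offset as a single operand:
  extern void bar (void);

  unsigned long
  f (void)
  {
    /* The address of a function plus a constant offset can end up as a
       single mov operand on nvptx.  */
    return (unsigned long) &bar + 4096;
  }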
Fix this by splitting such an insn into:
...
mov.u64 %r24,bar;
add.u64 %r24,%r24,4096;
...
Tested on nvptx.
gcc/ChangeLog:
* config/nvptx/nvptx.md: Don't allow operand containing sum of
function ref and const.
This patch fixes the equivalent of arm bug PR85434/CVE-2018-12886
for aarch64: under high register pressure, the -fstack-protector
code might spill the address of the canary onto the stack and
reload it at the test site, giving an attacker the opportunity
to change the expected canary value.
This would happen in two cases:
- when generating PIC for -mstack-protector-guard=global
(tested by stack-protector-6.c). This is a direct analogue
of PR85434, which was also about PIC for the global case.
- when using -mstack-protector-guard=sysreg.
The two problems were really separate bugs and caused by separate code,
but it was more convenient to fix them together.
The post-patch code still spills _GLOBAL_OFFSET_TABLE_ for
stack-protector-6.c, which is a more general problem. However,
it no longer spills the canary address itself.
The patch also fixes an ICE when using -mstack-protector-guard=sysreg
with ILP32: even if the register read is SImode, the address
calculation itself should still be DImode.
gcc/
* config/aarch64/aarch64-protos.h (aarch64_salt_type): New enum.
(aarch64_stack_protect_canary_mem): Declare.
* config/aarch64/aarch64.md (UNSPEC_SALT_ADDR): New unspec.
(stack_protect_set): Forward to stack_protect_combined_set.
(stack_protect_combined_set): New pattern. Use
aarch64_stack_protect_canary_mem.
(reg_stack_protect_address_<mode>): Add a salt operand.
(stack_protect_test): Forward to stack_protect_combined_test.
(stack_protect_combined_test): New pattern. Use
aarch64_stack_protect_canary_mem.
* config/aarch64/aarch64.c (strip_salt): New function.
(strip_offset_and_salt): Likewise.
(tls_symbolic_operand_type): Use strip_offset_and_salt.
(aarch64_stack_protect_canary_mem): New function.
(aarch64_cannot_force_const_mem): Use strip_offset_and_salt.
(aarch64_classify_address): Likewise.
(aarch64_symbolic_address_p): Likewise.
(aarch64_print_operand): Likewise.
(aarch64_output_addr_const_extra): New function.
(aarch64_tls_symbol_p): Use strip_salt.
(aarch64_classify_symbol): Likewise.
(aarch64_legitimate_pic_operand_p): Use strip_offset_and_salt.
(aarch64_legitimate_constant_p): Likewise.
(aarch64_mov_operand_p): Use strip_salt.
(TARGET_ASM_OUTPUT_ADDR_CONST_EXTRA): Override.
gcc/testsuite/
* gcc.target/aarch64/stack-protector-5.c: New test.
* gcc.target/aarch64/stack-protector-6.c: Likewise.
* gcc.target/aarch64/stack-protector-7.c: Likewise.
These tests were inspired by corresponding arm ones. They already pass.
gcc/testsuite/
* gcc.target/aarch64/stack-protector-3.c: New test.
* gcc.target/aarch64/stack-protector-4.c: Likewise.
This patch implements the missing reinterprets to and from poly128_t and
float64x2_t.
I've plugged in the appropriate testing in the advsimd-intrinsics.exp
too.
Bootstrapped and tested on aarch64-none-linux-gnu.
Tested advsimd-intrinsics.exp on arm-none-eabi too to make sure arm
testing isn't affected.
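A quick usage sketch of the new intrinsics (my own example, not one of
the new tests); poly128_t may need a crypto/AES-enabled target, e.g.
-march=armv8-a+crypto:
  #include <arm_neon.h>

  /* Reinterpret the 128 bits of a float64x2_t as a poly128_t and back.
     (Assumes a crypto-enabled target for poly128_t.)  */
  poly128_t
  as_p128 (float64x2_t v)
  {
    return vreinterpretq_p128_f64 (v);
  }

  float64x2_t
  as_f64 (poly128_t p)
  {
    return vreinterpretq_f64_p128 (p);
  }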
gcc/
PR target/71233
* config/aarch64/arm_neon.h (vreinterpretq_f64_p128,
vreinterpretq_p128_f64): Define.
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/advsimd-intrinsics/arm-neon-ref.h
(clean_results): Add float64x2_t cleanup.
(DECL_VARIABLE_128BITS_VARIANTS): Add float64x2_t variable.
* gcc.target/aarch64/advsimd-intrinsics/vreinterpret_p128.c: Add
testing of vreinterpretq_f64_p128, vreinterpretq_p128_f64.
This is C++, we don't need 'typedef struct foo foo;'. Oh, and bool
bitfields are a thing.
gcc/cp/
* name-lookup.h (typedef cxx_binding): Delete tdef.
(typedef cp_binding_level): Likewise.
(struct cxx_binding): Flags are bools.
This adds support for Arm's Neoverse V1 CPU to the AArch32 backend.
---
gcc/ChangeLog:
* config/arm/arm-cpus.in (neoverse-v1): New.
* config/arm/arm-tables.opt: Regenerate.
* config/arm/arm-tune.md: Regenerate.
* doc/invoke.texi: Document support for Neoverse V1.
This adds support for Arm's Neoverse V1 CPU to the AArch64 backend.
---
gcc/ChangeLog:
* config/aarch64/aarch64-cores.def: Add Neoverse V1.
* config/aarch64/aarch64-tune.md: Regenerate.
* doc/invoke.texi: Document support for Neoverse V1.
I'd missed the piece of substitution for the uses of a local extern
decl. Just grab the local specialization. We need to do this
regardless of dependentness because we always cloned the local extern.
PR c++/97171
gcc/cp/
* pt.c (tsubst_copy) [FUNCTION_DECL,VAR_DECL]: Retrieve local
specialization for DECL_LOCAL_P decls.
gcc/testsuite/
* g++.dg/template/local10.C: New.
We crash here because since r11-3302 the C FE uses codes like SWITCH_STMT
in the else branches in the attached test, and inchash::add_expr in
do_warn_duplicated_branches doesn't handle these front-end codes. In
the C++ FE this works because by the time we get to do_warn_duplicated_branches
we've already cp_genericize'd the SWITCH_STMT tree into a SWITCH_EXPR.
The fix is to call do_warn_duplicated_branches_r only after loops and other
structured control constructs have been lowered.
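A sketch of the kind of input involved (my own reduction, not the actual
Wduplicated-branches-15.c test), compiled with -Wduplicated-branches:
both branches of the if contain a switch, which the C FE now still
represents as SWITCH_STMT at the point where the warning used to run,
so it should now be diagnosed instead of ICEing.
  void
  f (int x)
  {
    if (x)
      switch (x)
        {
        case 1:
          break;
        default:
          break;
        }
    else
      switch (x)
        {
        case 1:
          break;
        default:
          break;
        }
  }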
gcc/c-family/ChangeLog:
PR c/97125
* c-gimplify.c (c_genericize): Only call do_warn_duplicated_branches_r
after loops and other structured control constructs have been lowered.
gcc/testsuite/ChangeLog:
PR c/97125
* c-c++-common/Wduplicated-branches-15.c: New test.
This relaxes the condition under which we also try NE_EXPR, in addition
to LT_EXPR, for a fake generated compare. The verification ICEd when it
failed but was obviously only implemented for constants. Thus the patch
removes the verification and the restriction to constant operands.
2020-09-23 Richard Biener <rguenther@suse.de>
PR middle-end/96453
* gimple-isel.cc (gimple_expand_vec_cond_expr): Remove
LT_EXPR -> NE_EXPR verification and also apply it for
non-constant masks.
* gcc.dg/pr96453.c: New testcase.
This patch fixes a bug in tracking memory stats. I have also noticed
that, while the pass takes care to stop tracking things when they are
obviously out of hand, it still keeps summaries that have no useful info
for loads or stores, and many summaries just copy the const/pure
attributes. This patch thus also adds logic to detect whether a summary
is useful and drop it early otherwise. This reduces the number of
queries to the oracle and saves memory and LTO streaming.
For cc1plus LTO build (configured with --disable-plugin
--enable-checking=release --with-build-config=lto) I now get:
Alias oracle query stats:
refs_may_alias_p: 62488734 disambiguations, 72660949 queries
ref_maybe_used_by_call_p: 128863 disambiguations, 63393551 queries
call_may_clobber_ref_p: 16013 disambiguations, 21776 queries
nonoverlapping_component_refs_p: 0 disambiguations, 37628 queries
nonoverlapping_refs_since_match_p: 19397 disambiguations, 55370 must overlaps, 75516 queries
aliasing_component_refs_p: 54741 disambiguations, 752198 queries
TBAA oracle: 21632692 disambiguations 52565147 queries
15656420 are in alias set 0
10108172 queries asked about the same object
124 queries asked about the same alias set
0 access volatile
3640460 are dependent in the DAG
1527279 are aritificially in conflict with void *
Modref stats:
modref use: 5712 disambiguations, 31221 queries
modref clobber: 684316 disambiguations, 1010000 queries
1779717 tbaa queries (1.762096 per modref query)
PTA query stats:
pt_solution_includes: 947334 disambiguations, 13601373 queries
pt_solutions_intersect: 1011662 disambiguations, 13139565 queries
The number of queries should change, but the number of disambiguations
should not. However, comparing with the stats here
https://gcc.gnu.org/pipermail/gcc-patches/2020-September/554309.html
I see about a 50% drop in clobber disambiguations. There is, however,
the same drop in the other alias oracle stats. I suppose something
changed on mainline in the meantime, because I was basing those numbers
on an older tree. I tried to proofread the changes between mainline and
the branch and they all seem quite obvious.
This is consistent with what I get on tramp3d:
Alias oracle query stats:
refs_may_alias_p: 2051320 disambiguations, 2312132 queries
ref_maybe_used_by_call_p: 7058 disambiguations, 2088222 queries
call_may_clobber_ref_p: 232 disambiguations, 232 queries
nonoverlapping_component_refs_p: 0 disambiguations, 4339 queries
nonoverlapping_refs_since_match_p: 329 disambiguations, 10200 must overlaps, 10616 queries
aliasing_component_refs_p: 857 disambiguations, 34639 queries
TBAA oracle: 886768 disambiguations 1670635 queries
131572 are in alias set 0
461689 queries asked about the same object
0 queries asked about the same alias set
0 access volatile
190291 are dependent in the DAG
315 are aritificially in conflict with void *
Modref stats:
modref use: 430 disambiguations, 1885 queries
modref clobber: 9657 disambiguations, 16076 queries
19027 tbaa queries (1.183566 per modref query)
PTA query stats:
pt_solution_includes: 311756 disambiguations, 524179 queries
pt_solutions_intersect: 129689 disambiguations, 415878 queries
In both cases the number of disambiguations should be the same (the
queries are not comparable).
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
2020-09-23 Jan Hubicka <hubicka@ucw.cz>
* ipa-modref.c (modref_summary::lto_useful_p): New member function.
(modref_summary::useful_p): New member function.
(analyze_function): Drop useless summaries.
(modref_write): Skip useless summaries.
(pass_ipa_modref::execute): Drop useless summaries.
* ipa-modref.h (struct GTY): Declare useful_p and lto_useful_p.
* tree-ssa-alias.c (dump_alias_stats): Fix.
(modref_may_conflict): Fix stats.
We need to avoid forcing BLKmode for truth vectors; instead, do as other
code does and use VOIDmode so layout_type can pick a suitable and
consistent mode. RTL expansion of vect_cond_mask also needs to deal
with CONST_INT operands, which means passing the mode explicitly.
2020-09-23 Richard Biener <rguenther@suse.de>
PR middle-end/96466
* internal-fn.c (expand_vect_cond_mask_optab_fn): Use
appropriate mode for force_reg.
* tree.c (build_truth_vector_type_for): Pass VOIDmode to
make_vector_type.
* gcc.dg/pr96466.c: New testcase.
This patch fixes the fallout that Kewen reported on Power after
the recent change to avoid unnecessary use of partial vectors.
As Kewen said, the problem is that vect_analyze_loop_2 doesn't
know how many epilogue iterations there will be, and so it
cannot make a final decision about whether the number of
iterations forces an epilogue loop to use partial vectors.
This is similar to the current situation for peeling: we don't know
during initial analysis whether an epilogue loop will itself require
peeling. Instead we decide that during vect_do_peeling, where the
final number of epilogue loop iterations is known.
The patch takes a similar approach for the decision about whether
to use partial vectors. As the comments in the patch say, the
idea is that vect_analyze_loop_2 should make peeling and partial-
vector decisions based on the assumption that the loop_vinfo will
be used as the main loop, while vect_do_peeling should make them
in the knowledge that the loop_vinfo will be used as an epilogue loop.
This allows the same analysis to be used for both cases, which we
rely on for implementing VECT_COMPARE_COSTS; see the big comment
in vect_analyze_loop for details.
I hope the patch makes the (mostly preexisting) structure a bit
more obvious. It isn't what anyone would design from scratch,
but that's the nature of working with a mature vector framework.
Arranging things this way means that vect_verify_full_masking
and vect_verify_loop_lens now become part of the “can” rather
than “will” test for partial vectors.
Also, while splitting out the logic that handles epilogues with
constant iterations, I added a check to make sure that we don't
try to use partial vectors to vectorise a single-scalar loop.
This required some changes to the Power tests.
gcc/
* tree-vectorizer.h (determine_peel_for_niter): Delete in favor of...
(vect_determine_partial_vectors_and_peeling): ...this new function.
* tree-vect-loop-manip.c (vect_update_epilogue_niters): New function.
Reject using vector epilogue loops for single iterations. Install
the constant number of epilogue loop iterations in the associated
loop_vinfo. Rely on vect_determine_partial_vectors_and_peeling
to do the main part of the test.
(vect_do_peeling): Use vect_update_epilogue_niters to handle
epilogue loops with a known number of iterations. Skip recomputing
the number of iterations later in that case. Otherwise, use
vect_determine_partial_vectors_and_peeling to decide whether the
epilogue loop needs to use partial vectors or peeling.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Set the
default can_use_partial_vectors_p to false if partial-vector-usage=0.
(determine_peel_for_niter): Remove in favor of...
(vect_determine_partial_vectors_and_peeling): ...this new function,
split out from...
(vect_analyze_loop_2): ...here. Reflect the vect_verify_full_masking
and vect_verify_loop_lens results in CAN_USE_PARTIAL_VECTORS_P
rather than USING_PARTIAL_VECTORS_P.
gcc/testsuite/
* gcc.target/powerpc/p9-vec-length-epil-1.c: Do not expect the
single-iteration epilogues of the 64-bit loops to be vectorized.
* gcc.target/powerpc/p9-vec-length-epil-7.c: Likewise.
* gcc.target/powerpc/p9-vec-length-epil-8.c: Likewise.
This patch implements the missing vrndns_f32 intrinsic. This operates on a scalar float32_t value.
It can be mapped down to a __builtin_aarch64_frintnsf builtin.
This patch does that.
Bootstrapped and tested on aarch64-none-linux-gnu.
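Usage is straightforward (my own sketch, not the new test): the
intrinsic rounds a scalar float to the nearest integral value, ties to
even.
  #include <arm_neon.h>

  float32_t
  round_to_nearest_even (float32_t x)
  {
    /* Expected to map to a single FRINTN instruction.  */
    return vrndns_f32 (x);
  }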
gcc/
PR target/71233
* config/aarch64/aarch64-simd-builtins.def (frintn): Use BUILTIN_VHSDF_HSDF
for modes. Remove explicit hf instantiation.
* config/aarch64/arm_neon.h (vrndns_f32): Define.
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/simd/vrndns_f32_1.c: New test.
The condition we're expecting to eventually run into isn't fully
captured by checking for CTORs: we can also run into the CTOR element
conversion.
2020-09-23 Richard Biener <rguenther@suse.de>
PR tree-optimization/97173
* tree-vect-loop.c (vectorizable_live_operation): Extend
assert to also cover element conversions.
* gcc.dg/vect/pr97173.c: New testcase.
This patch implements some missing vector permute intrinsics operating on poly64x2_t types.
They are implemented identically to their uint64x2_t brethren.
Bootstrapped and tested on aarch64-none-linux-gnu.
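A usage sketch (mine, not the new test); as with the other poly64/poly128
intrinsics this may need a crypto-enabled target:
  #include <arm_neon.h>

  /* Interleave the low and high halves of two poly64x2_t vectors.  */
  poly64x2_t
  zip_lo (poly64x2_t a, poly64x2_t b)
  {
    return vzip1q_p64 (a, b);
  }

  poly64x2_t
  zip_hi (poly64x2_t a, poly64x2_t b)
  {
    return vzip2q_p64 (a, b);
  }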
gcc/
PR target/71233
* config/aarch64/arm_neon.h (vtrn1q_p64, vtrn2q_p64, vuzp1q_p64,
vuzp2q_p64, vzip1q_p64, vzip2q_p64): Define.
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/simd/trn_zip_p64_1.c: New test.
This patch implements the missing vldrq_p128 intrinsic that just loads from the appropriate pointer.
Bootstrapped and tested on aarch64-none-linux-gnu.
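Usage sketch (mine, not the new test):
  #include <arm_neon.h>

  /* Load a poly128_t value from memory.  */
  poly128_t
  load_p128 (const poly128_t *p)
  {
    return vldrq_p128 (p);
  }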
gcc/
PR target/71233
* config/aarch64/arm_neon.h (vldrq_p128): Define.
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/simd/vldrq_p128_1.c: New test.
This patch implements the missing vstrq_p128 intrinsic.
It just performs a store of the poly128_t argument to a memory location.
Bootstrapped and tested on aarch64-none-linux-gnu.
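Usage sketch (mine, not the new test):
  #include <arm_neon.h>

  /* Store a poly128_t value to memory.  */
  void
  store_p128 (poly128_t *p, poly128_t val)
  {
    vstrq_p128 (p, val);
  }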
gcc/
PR target/71233
* config/aarch64/arm_neon.h (vstrq_p128): Define.
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/simd/vstrq_p128_1.c: New test.
C++ operator delete, when DECL_IS_REPLACEABLE_OPERATOR_DELETE_P,
does not cause the deleted object to be escaped. It also has no
other interesting side-effects for PTA so skip it like we do
for BUILT_IN_FREE.
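As a rough illustration (my own example, not the adjusted new1.C test),
with this change points-to analysis treats the delete below much like a
call to free, so the pointed-to object is not considered escaped and the
load can be optimized accordingly:
  int
  f ()
  {
    int *p = new int (42);
    int v = *p;   /* with the fix, *p is not made to escape by the delete  */
    delete p;     /* replaceable ::operator delete: no PTA side effects  */
    return v;
  }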
2020-09-23 Richard Biener <rguenther@suse.de>
PR tree-optimization/97151
* tree-ssa-structalias.c (find_func_aliases_for_call):
DECL_IS_REPLACEABLE_OPERATOR_DELETE_P has no effect on
arguments.
* g++.dg/cpp1y/new1.C: Adjust for two more handled transforms.
This appropriately guards the check for a hard register in
compare_base_decls which otherwise ICEs when passed a CONST_DECL.
2020-09-23 Richard Biener <rguenther@suse.de>
PR middle-end/97162
* alias.c (compare_base_decls): Use DECL_HARD_REGISTER
and guard with VAR_P.
gcc/ChangeLog:
PR gcov-profile/97069
* profile.c (branch_prob): Line number must be at least 1.
gcc/testsuite/ChangeLog:
PR gcov-profile/97069
* g++.dg/gcov/pr97069.C: New test.
When compiling test-case gcc.dg/atomic/c11-atomic-exec-1.c, we run into
these ptxas errors:
...
line 100; error: Rounding modifier required for instruction 'cvt'
line 105; error: Rounding modifier required for instruction 'cvt'
...
The problem is that this move:
...
//(insn 13 11 14 2
// (set (reg:DF 28 [ _9 ])
// (subreg:DF (reg:TI 22 [ _1 ]) 0)) 9 {*movdf_insn}
// (nil))
cvt.f64.u64 %r28, %r22$0;
...
is emitted as cvt.f64.u64, while it should be a mov.b64 instead.
Fix this by handling this case in nvptx_output_mov_insn.
Tested on nvptx.
gcc/ChangeLog:
PR target/97158
* config/nvptx/nvptx.c (nvptx_output_mov_insn): Handle move from
DF subreg to DF reg.
This patch replaces a sequence of dyn_cast to different gimple stmt
types in exploded_node::on_stmt with a switch on the gimple_code. This
makes clearer which kinds of stmt are currently treated as no-ops, as a
precursor to handling them properly.
No functional change intended.
gcc/analyzer/ChangeLog:
* engine.cc (exploded_node::on_stmt): Replace sequence of dyn_cast
with switch.
In the testcase below, the dependent specializations iter_reference_t<F>
and iter_reference_t<Out> share the same tree due to specialization
caching. So when find_template_parameters walks through the
requires-expression (as part of normalization), it sees and includes the
out-of-scope template parameter F in the list of template parameters
it found within the requires-expression (along with Out and N).
From a correctness perspective this is harmless since the parameter mapping
routines only care about the level and index of each parameter, so F is
no different from Out in that sense. And it's also harmless that two
parameters in the parameter mapping have the same level and index.
But having both Out and F in the parameter mapping means extra work for
hash_atomic_constraint, tsubst_parameter_mapping and get_mapped_args; and
it also means we print this irrelevant template parameter in the
testcase's diagnostics (via pp_cxx_parameter_mapping):
in requirements with ‘Out o’ [with N = (const int&)&a; F = const int*; Out = const int*]
This patch makes keep_template_parm return only in-scope template
parameters by looking into ctx_parms for the corresponding in-scope
one, through a new helper function corresponding_template_parameter.
(That we sometimes print irrelevant template parameters in diagnostics
is also the subject of PR99 and PR66968, so the above diagnostic issue
could likely be fixed in a more general way, but this targeted fix to
keep_template_parm is perhaps worthwhile on its own.)
gcc/cp/ChangeLog:
PR c++/95310
* pt.c (corresponding_template_parameter): Define.
(keep_template_parm): Use it to adjust the given template
parameter to the corresponding in-scope one from ctx_parms.
gcc/testsuite/ChangeLog:
PR c++/95310
* g++.dg/concepts/diagnostic15.C: New test.
The remaining use of xref_tag_from_type was also suspicious. It turns
out to be an error path. At parse time we diagnose that a class
definition cannot appear, but we swallow the definition. This code
was attempting to push it into the global scope (or find a conflict).
This seems needless; just return error_mark_node. This was a simpler
fix than going through the parser and figuring out how to get it to put
in error_mark_node at the right point.
gcc/cp/
* cp-tree.h (xref_tag_from_type): Don't declare.
* decl.c (xref_tag_from_type): Delete.
* pt.c (lookup_template_class_1): Erroneously located class
definitions just give error_mark, don't try and inject it into the
namespace.
These two builtin calls are already added during parsing, before pointer
subtractions or comparisons. Normally they perform runtime verification
of whether the pointers point to the same object or to different
objects, but during constant expression evaluation we don't really need
those builtins for anything.
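A sketch of the scenario (my own reduction, not the actual pr97145.C
test), compiled with something like
-fsanitize=address,pointer-subtract,pointer-compare: the pointer
subtraction and comparison below get the sanitizer builtins inserted at
parse time, yet the function must still be usable in a constant
expression.
  constexpr int
  f ()
  {
    int a[2] = { 0, 0 };
    /* Both the subtraction and the comparison get the sanitizer
       instrumentation calls wrapped around them during parsing.  */
    return (&a[1] - &a[0]) + (&a[1] > &a[0]);
  }

  static_assert (f () == 2, "");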
2020-09-22 Jakub Jelinek <jakub@redhat.com>
PR c++/97145
* constexpr.c (cxx_eval_builtin_function_call): Return void_node for
calls to __sanitize_ptr_{sub,cmp} builtins.
* g++.dg/asan/pr97145.C: New test.
libstdc++-v3/ChangeLog:
PR libstdc++/97167
* src/c++17/fs_path.cc (path::_Parser::root_path()): Check
for empty string before inspecting the first character.
* testsuite/27_io/filesystem/path/append/source.cc: Append
empty string_view to path.
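Roughly the scenario being tested (my own sketch, not the actual
testsuite change):
  #include <filesystem>
  #include <string_view>

  int
  main ()
  {
    std::filesystem::path p ("/tmp");
    /* Appending an empty string_view must not inspect its (nonexistent)
       first character.  */
    p /= std::string_view ();
    return 0;
  }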
The 'mod' and 'div' operators in eBPF are unsigned, with no signed
counterpart. xBPF adds two new ALU operations, sdiv and smod, for
signed division and modulus, respectively. Update bpf.md with
'define_insn' blocks for signed div and mod to use them when targeting
xBPF, and add new tests to ensure they are used appropriately.
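For illustration (my own sketch, not one of the new tests), signed
division and modulus like the ones below can only use the new sdiv/smod
instructions when compiling for xBPF (e.g. with -mxbpf); the diag-*
tests cover the plain-eBPF case, which has no signed counterpart.
  int
  sdiv (int a, int b)
  {
    return a / b;   /* xBPF: sdiv; eBPF has only unsigned div  */
  }

  int
  smod (int a, int b)
  {
    return a % b;   /* xBPF: smod  */
  }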
2020-09-17 David Faust <david.faust@oracle.com>
gcc/
* config/bpf/bpf.md: Add defines for signed div and mod operators.
gcc/testsuite/
* gcc.target/bpf/diag-sdiv.c: New test.
* gcc.target/bpf/diag-smod.c: New test.
* gcc.target/bpf/xbpf-sdiv-1.c: New test.
* gcc.target/bpf/xbpf-smod-1.c: New test.
In working on fixing hiddenness, I discovered some suspicious code in
template instantiation. I suspect it dates from when we didn't do the
hidden friend injection thing at all. The xreftag finds the same
class, but makes it visible to name lookup. Which is wrong.
hurrah, fixing a bug by deleting code!
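For context, a minimal illustration of the invariant the adjusted test
now checks (my own example, not friend34.C itself): an injected friend
of a class template instantiation must stay invisible to ordinary
lookup and be found only by argument-dependent lookup.
  template<typename T>
  struct S
  {
    friend int hidden (S) { return 42; }   /* injected friend  */
  };

  int
  g ()
  {
    S<int> s;
    return hidden (s);   /* OK: found by ADL only  */
    /* A use of plain 'hidden' without an S argument, e.g. taking its
       address, must not compile: the friend is not visible here.  */
  }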
gcc/cp/
* pt.c (instantiate_class_template_1): Do not repush and unhide
injected friend.
gcc/testsuite/
* g++.old-deja/g++.pt/friend34.C: Check injected friend is still
invisible.
This introduces two new headers:
<bits/ranges_base.h> defines the minimal components needed
for using C++20 ranges (customization point objects such as
std::ranges::begin, concepts such as std::ranges::range, etc.)
<bits/ranges_util.h> includes <bits/ranges_base.h> and additionally
defines subrange, which is needed by <bits/ranges_algo.h>.
Most of the content of <bits/ranges_base.h> was previously defined in
<bits/range_access.h>, but a few pieces were only defined in <ranges>.
This meant the entire <ranges> header was needed in <algorithm> and
<memory>, even though they don't use all the range adaptors.
By moving the ranges components out of <bits/range_access.h> that file
is left defining just the contents of [iterator.range] i.e. std::begin,
std::end, std::size etc. and not C++20 ranges components.
For consistency with other C++20 ranges headers, <bits/range_cmp.h> is
renamed to <bits/ranges_cmp.h>.
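As a rough sketch of the split (my own example, compiled with
-std=c++20): the components now in <bits/ranges_base.h> are the range
access CPOs and core concepts, while subrange lives in
<bits/ranges_util.h>. User code keeps including the public headers as
before:
  #include <ranges>   // public header; the bits/ headers are internal
  #include <vector>

  static_assert (std::ranges::range<std::vector<int>>);

  int
  first (std::vector<int> &v)
  {
    // ranges::begin/end CPOs now defined in <bits/ranges_base.h>
    auto it = std::ranges::begin (v);
    return it == std::ranges::end (v) ? 0 : *it;
  }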
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new headers and adjust for renamed
header.
* include/Makefile.in: Regenerate.
* include/bits/iterator_concepts.h: Adjust for renamed header.
* include/bits/range_access.h (ranges::*): Move to new
<bits/ranges_base.h> header.
* include/bits/ranges_algobase.h: Include new <bits/ranges_base.h>
header instead of <ranges>.
* include/bits/ranges_algo.h: Include new <bits/ranges_util.h>
header.
* include/bits/range_cmp.h: Moved to...
* include/bits/ranges_cmp.h: ...here.
* include/bits/ranges_base.h: New header.
* include/bits/ranges_util.h: New header.
* include/experimental/string_view: Include new
<bits/ranges_base.h> header.
* include/std/functional: Adjust for renamed header.
* include/std/ranges (ranges::view_base, ranges::enable_view)
(ranges::dangling, ranges::borrowed_iterator_t): Move to new
<bits/ranges_base.h> header.
(ranges::view_interface, ranges::subrange)
(ranges::borrowed_subrange_t): Move to new <bits/ranges_util.h>
header.
* include/std/span: Include new <bits/ranges_base.h> header.
* include/std/string_view: Likewise.
* testsuite/24_iterators/back_insert_iterator/pr93884.cc: Add
missing <ranges> header.
* testsuite/24_iterators/front_insert_iterator/pr93884.cc:
Likewise.