When std::seed_seq is constructed from a pair of random access iterators,
the number of elements can be determined in O(1). Reserving capacity for
the internal vector in such cases can avoid multiple memory allocations.
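As an illustration only, here is a minimal sketch of the idea using a
simplified stand-in constructor (this is not the actual libstdc++ code):

  #include <iterator>
  #include <type_traits>
  #include <vector>

  // Reserve only when the iterator category makes last - first an O(1)
  // operation; for plain input iterators the vector grows as needed.
  template<typename InputIt>
  std::vector<unsigned> collect_seeds(InputIt first, InputIt last)
  {
    std::vector<unsigned> seeds;
    using cat = typename std::iterator_traits<InputIt>::iterator_category;
    if constexpr (std::is_base_of_v<std::random_access_iterator_tag, cat>)
      seeds.reserve(last - first);   // O(1) distance, single allocation
    for (; first != last; ++first)
      seeds.push_back(*first);
    return seeds;
  }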
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/random.tcc (seed_seq::seed_seq): Reserve capacity
if distance is O(1).
* testsuite/26_numerics/random/pr60037-neg.cc: Adjust dg-error
line number.
Co-authored-by: Jonathan Wakely <jwakely@redhat.com>
This patch improves the bit bounds for MINUS_EXPR during tree-ssa's
conditional constant propagation (CCP) pass (and as an added bonus
adds support for POINTER_DIFF_EXPR).
The pessimistic assumptions made by the current algorithm are
demonstrated by considering 1 - (x&1). Intuitively this should
have possible values 0 and 1, and therefore an unknown mask of 1.
Alas by treating subtraction as a negation followed by addition,
the second operand first becomes 0 or -1, with an unknown mask
of all ones, which results in the addition containing no known bits.
Improved bounds are achieved by using the same approach as for
PLUS_EXPR: determine the result with the minimum number of borrows and
the result with the maximum number of borrows, and examine the bits they
have in common. An additional benefit of this approach is that it is
applicable to POINTER_DIFF_EXPR, where negating a pointer doesn't make
sense.
A more convincing example, where a transformation missed by .032t.cpp
isn't caught a few passes later by .038t.evrp, is the expression
(7 - (x&5)) & 2, which (in the new test case) currently survives the
tree-level optimizers but with this patch is now simplified to the
constant value 2.
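For illustration, here is a rough sketch of this scheme on 32-bit values
with an unknown-bits mask (it mirrors the approach described above, not
the widest_int-based code in tree-ssa-ccp.c):

  #include <cstdint>
  #include <cstdio>

  struct bits { uint32_t val, mask; };   // mask = unknown bits

  static bits known_bits_sub (bits a, bits b)
  {
    // Smallest possible difference: minimal op1 minus maximal op2.
    uint32_t lo = (a.val & ~a.mask) - (b.val | b.mask);
    // Largest possible difference: maximal op1 minus minimal op2.
    uint32_t hi = (a.val | a.mask) - (b.val & ~b.mask);
    bits r;
    // A bit is unknown if it is unknown in either operand, or if the two
    // extreme results disagree there (the borrow-in is unknown).
    r.mask = a.mask | b.mask | (lo ^ hi);
    r.val = lo & ~r.mask;
    return r;
  }

  int main ()
  {
    bits one = { 1, 0 };        // the constant 1
    bits x_and_1 = { 0, 1 };    // x & 1: bit 0 unknown, others known zero
    bits r = known_bits_sub (one, x_and_1);
    std::printf ("val=%u mask=%u\n", r.val, r.mask);   // val=0 mask=1
    return 0;
  }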
2021-08-17 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* tree-ssa-ccp.c (bit_value_binop) [MINUS_EXPR]: Use same
algorithm as PLUS_EXPR to improve subtraction bit bounds.
[POINTER_DIFF_EXPR]: Treat as synonymous with MINUS_EXPR.
gcc/testsuite/ChangeLog
* gcc.dg/tree-ssa/ssa-ccp-40.c: New test case.
This patch allows GCC to constant fold (i | (i<<16)) | ((i<<24) | (i<<8)),
where i is an unsigned char, or the equivalent (i*65537) | (i*16777472), to
i*16843009. The trick is to teach tree_nonzero_bits which bits may be
set in the result of a multiplication by a constant given which bits are
potentially set in the operands. This allows the optimizations recently
added to match.pd to catch more cases.
The required mask/value pair for a multiplication can be calculated
using a classical shift-and-add algorithm, given that we already have
implementations for both addition and shift by a constant. To keep this
optimization "cheap", this functionality is only used if the constant
multiplier has only a few bits set (unless flag_expensive_optimizations),
and we provide a special-case fast-path implementation for the common
case where the (non-constant) operand has no bits that are guaranteed to
be set. I have no evidence that this functionality causes performance
issues; it's just that sparse multipliers provide the largest benefit to
CCP.
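A rough sketch of the shift-and-add idea on 32-bit mask/value pairs
(illustration only; the helper names here are made up and the actual
tree-ssa-ccp.c code works on widest_ints):

  #include <cstdint>

  struct bits { uint32_t val, mask; };   // mask = unknown bits

  static bits known_bits_add (bits a, bits b)
  {
    uint32_t lo = (a.val & ~a.mask) + (b.val & ~b.mask);
    uint32_t hi = (a.val | a.mask) + (b.val | b.mask);
    bits r;
    r.mask = a.mask | b.mask | (lo ^ hi);   // unknown where extremes differ
    r.val = lo & ~r.mask;
    return r;
  }

  // x * c for a constant c, as a sum of shifted copies of x.
  static bits known_bits_mult_const (bits x, uint32_t c)
  {
    bits sum = { 0, 0 };                   // known to be all zeros
    for (int i = 0; i < 32; i++)
      if (c & (1u << i))
        sum = known_bits_add (sum, bits{ x.val << i, x.mask << i });
    return sum;
  }

For an unsigned char i (val 0, mask 0xff), multiplying by 0x10001 gives
mask 0x00ff00ff, i.e. the same possibly-set bits as i | (i << 16).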
2021-08-17 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* tree-ssa-ccp.c (bit_value_mult_const): New helper function to
calculate the mask-value pair result of a multiplication by an
unsigned constant.
(bit_value_binop) [MULT_EXPR]: Call it from here for
multiplications by (sparse) non-negative constants.
gcc/testsuite/ChangeLog
* gcc.dg/fold-ior-5.c: New test case.
Fortran version of commit e45483c7c4, which implemented OpenMP's scope
construct for C and C++.
Most testcases are based on the C testcases; this also adds some
testcases that previously had no Fortran equivalent.
gcc/fortran/ChangeLog:
* dump-parse-tree.c (show_omp_node, show_code_node): Handle
EXEC_OMP_SCOPE.
* gfortran.h (enum gfc_statement): Add ST_OMP_(END_)SCOPE.
(enum gfc_exec_op): Add EXEC_OMP_SCOPE.
* match.h (gfc_match_omp_scope): New.
* openmp.c (OMP_SCOPE_CLAUSES): Define.
(gfc_match_omp_scope): New.
(gfc_match_omp_cancellation_point, gfc_match_omp_end_nowait):
Improve error diagnostic.
(omp_code_to_statement): Handle ST_OMP_SCOPE.
(gfc_resolve_omp_directive): Handle EXEC_OMP_SCOPE.
* parse.c (decode_omp_directive, next_statement,
gfc_ascii_statement, parse_omp_structured_block,
parse_executable): Handle OpenMP's scope construct.
* resolve.c (gfc_resolve_blocks): Likewise.
* st.c (gfc_free_statement): Likewise.
* trans-openmp.c (gfc_trans_omp_scope): New.
(gfc_trans_omp_directive): Call it.
* trans.c (trans_code): Handle EXEC_OMP_SCOPE.
libgomp/ChangeLog:
* testsuite/libgomp.fortran/scope-1.f90: New test.
* testsuite/libgomp.fortran/task-reduction-16.f90: New test.
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/scan-1.f90:
* gfortran.dg/gomp/cancel-1.f90: New test.
* gfortran.dg/gomp/cancel-4.f90: New test.
* gfortran.dg/gomp/loop-4.f90: New test.
* gfortran.dg/gomp/nesting-1.f90: New test.
* gfortran.dg/gomp/nesting-2.f90: New test.
* gfortran.dg/gomp/nesting-3.f90: New test.
* gfortran.dg/gomp/nowait-1.f90: New test.
* gfortran.dg/gomp/reduction-task-1.f90: New test.
* gfortran.dg/gomp/reduction-task-2.f90: New test.
* gfortran.dg/gomp/reduction-task-2a.f90: New test.
* gfortran.dg/gomp/reduction-task-3.f90: New test.
* gfortran.dg/gomp/scope-1.f90: New test.
* gfortran.dg/gomp/scope-2.f90: New test.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* testsuite/26_numerics/random/seed_seq/cons/range.cc: Check
construction from input iterators.
The std::error_category printer wasn't meant to be part of the commit
adding std::error_code and std::error_condition printers.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* python/libstdcxx/v6/printers.py (StdErrorCatPrinter): Remove.
PR 101923 points out that the unconditional swap in the std::function
move constructor makes it slower than copying an empty std::function.
The copy constructor has to check for the empty case before doing
anything, and that makes it very fast for the empty case.
Adding the same check to the move constructor avoids copying the
_Any_data POD when we don't need to. We can also inline the effects of
swap, by copying each member and then zeroing the pointer members.
This makes moving an empty object at least as fast as copying an empty
object.
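A minimal sketch of the approach with simplified stand-in members (not
the actual std::function/_Any_data implementation):

  #include <cstring>

  struct toy_function
  {
    void (*invoker)() = nullptr;   // null means "empty"
    void (*manager)() = nullptr;
    char  storage[16] = {};        // stand-in for the _Any_data POD

    toy_function() = default;

    toy_function(toy_function&& other) noexcept
    {
      if (other.invoker)           // moving an empty object does no work
        {
          invoker = other.invoker;
          manager = other.manager;
          std::memcpy(storage, other.storage, sizeof storage);
          other.invoker = nullptr; // inlined swap: copy the members,
          other.manager = nullptr; // then zero the source's pointers
        }
    }
  };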
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
PR libstdc++/101923
* include/bits/std_function.h (function(function&&)): Check for
non-empty parameter before doing any work.
The new contains member of the COW string is defined for non-strict
gnu++20 mode as well as for C++23 modes. I think that was left in the
committed patch unintentionally. It is inconsistent with the SSO string,
and doesn't actually compile, because it uses the
basic_string_view::contains member, which is only defined for C++23.
This makes it only defined for C++23.
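For illustration, a toy sketch of the kind of guard involved (the
version check here is an assumption, not the exact libstdc++
preprocessor condition):

  #include <string>
  #include <string_view>

  struct toy_string
  {
    std::string s;
  #if __cplusplus > 202002L   // C++23 only, like string_view::contains
    bool contains(std::string_view x) const noexcept
    { return s.find(x) != std::string::npos; }
  #endif
  };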
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/cow_string.h (basic_string::contains): Do not
define for -std=gnu++20.
This is done to match an editorial change in the working draft, to
rename the exposition-only not-same-as helper to different-from.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/ranges_util.h (__not_same_as): Rename to
__different_from.
* include/std/ranges (__not_same_as): Likewise.
This is not required by the standard, but seems useful.
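A sketch of the shape of the change, with simplified names (the exact
libstdc++ declaration may differ):

  #include <type_traits>
  #include <utility>

  template<typename T, typename U = T>
  constexpr T
  exchange_sketch(T& obj, U&& new_val)
    noexcept(std::is_nothrow_move_constructible<T>::value
             && std::is_nothrow_assignable<T&, U>::value)
  {
    T old_val = std::move(obj);
    obj = std::forward<U>(new_val);
    return old_val;
  }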
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/std/utility (exchange): Add noexcept-specifier.
* testsuite/20_util/exchange/noexcept.cc: New test.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* python/libstdcxx/v6/printers.py (StdErrorCodePrinter): Define.
(build_libstdcxx_dictionary): Register printer for
std::error_code and std::error_condition.
* testsuite/libstdc++-prettyprinters/cxx11.cc: Test it.
Commit r12-1328 enabled DT_INIT_ARRAY/DT_FINI_ARRAY for all Linux
targets, but this does not work for arm-none-uclinuxfdpiceabi: it
makes all the execution tests fail.
This patch restores the original behavior for uclinuxfdpiceabi.
2021-08-12 Christophe Lyon <christophe.lyon@foss.st.com>
gcc/
PR target/100896
* config.gcc (gcc_cv_initfini_array): Leave undefined for
uclinuxfdpiceabi targets.
We iterate over debug stmts from the last one in new_bb, and we insert
them before the first post-label stmt in each dest block, without
moving the insertion iterator, so they end up reversed. Moving the
insertion iterator fixes this.
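A generic illustration of the pattern, using a plain container rather
than GIMPLE statement iterators:

  #include <list>

  // Walking the source elements from last to first and inserting each
  // one before a fixed position reverses them; making the insertion
  // position track the newly inserted element keeps the original order.
  void move_all_before (std::list<int>& dest, std::list<int>::iterator pos,
                        std::list<int>& src)
  {
    for (auto it = src.rbegin (); it != src.rend (); ++it)
      pos = dest.insert (pos, *it);  // without the assignment, order flips
    src.clear ();
  }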
for gcc/ChangeLog
* tree-inline.c (maybe_move_debug_stmts_to_successors): Don't
reverse debug stmts.
dump_function_to_file takes the function to dump as a parameter, and
parts of it use the local fun variable where cfun would be used
elsewhere. Others use cfun, presumably in error. Fixed to use fun
uniformly. Added a few more tests for non-NULL fun before
dereferencing it.
for gcc/ChangeLog
* tree-cfg.c (dump_function_to_file): Use fun, not cfun.
With flag_wrapv, -TYPE_MIN_VALUE wraps around to TYPE_MIN_VALUE, because
its true negation is unrepresentable. We currently special case this in
the ABS folding routine, but are missing similar treatment in
operator_abs::op1_range.
Tested on x86-64 Linux.
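For illustration, a hypothetical example (not the new testcase), built
with -fwrapv:

  #include <climits>
  #include <cstdio>

  int main ()
  {
    int x = INT_MIN;
    // With -fwrapv the negation wraps, so both expressions below print
    // INT_MIN: abs(x) == INT_MIN is satisfiable by x == INT_MIN, which
    // is the case op1_range has to allow for.
    std::printf ("%d %d\n", -x, x < 0 ? -x : x);
    return 0;
  }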
PR tree-optimization/101938
gcc/ChangeLog:
* range-op.cc (operator_abs::op1_range): Special case
-TYPE_MIN_VALUE for flag_wrapv.
gcc/testsuite/ChangeLog:
* gcc.dg/pr101938.c: New test.
This adds the testcase from the fix for the PR.
2021-08-17 Richard Biener <rguenther@suse.de>
PR tree-optimization/101868
* gcc.dg/lto/pr101868_0.c: New testcase.
* gcc.dg/lto/pr101868_1.c: Likewise.
* gcc.dg/lto/pr101868_2.c: Likewise.
* gcc.dg/lto/pr101868_3.c: Likewise.
As Richi pointed out, for BB reductions we don't currently build an SLP
node with IFN_REDUC_* information; ideally we may end up with that
eventually. For now the reduction is costed as shuffles and reduc
operations, which misses the cost of extracting the final lane. This
patch adds one vec_to_scalar cost for the lane extraction, to make the
costing consistent and conservative for now.
gcc/ChangeLog:
* tree-vect-slp.c (vectorizable_bb_reduc_epilogue): Add the cost for
value extraction.
This patch implements the OpenMP 5.1 scope construct, which is similar
to worksharing constructs in many regards, but isn't one of them.
The body of the construct is encountered by all threads, though; it can
be nested in itself or intermixed with taskgroup, and worksharing etc.
constructs can appear inside of it (but it can't be nested in
worksharing etc. constructs). The main purpose of the construct is to
allow reductions (normal and task ones) without the need to close the
parallel and reopen another one.
If it doesn't have task reductions, it can be implemented without any
new library support: with nowait it just does the privatizations (if
any) at the start and the reductions before the end of the body, while
without nowait it also emits a normal GOMP_barrier{,_cancel} at the end.
For task reductions, we need to ensure only one thread initializes
the task reduction library data structures and other threads copy from that,
so a new GOMP_scope_start routine is added to the library for that.
It acts as if the start of the scope construct were a nowait worksharing
construct (which is OK, it can't be nested in other worksharing
constructs and all threads need to encounter the start in the same
order) that does the task reduction initialization; but as the body can
have other scope constructs and/or worksharing constructs, that is the
only place where we use this dummy worksharing construct. With task
reductions, the construct must not have nowait and ends with a
GOMP_barrier{,_cancel}, followed by the task reductions, followed by
GOMP_workshare_task_reduction_unregister.
Only C/C++ FE support is done.
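A hypothetical usage example (not one of the new testcases), compiled
with -fopenmp:

  #include <cstdio>

  int main ()
  {
    int sum = 0;
    #pragma omp parallel shared(sum)
    {
      /* ... work executed by all threads ... */
      #pragma omp scope reduction(+: sum)
      {
        sum++;     /* each thread adds to its private copy */
      }            /* no nowait clause, so an implicit barrier here */
    }
    std::printf ("%d\n", sum);   /* number of threads */
    return 0;
  }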
2021-08-17 Jakub Jelinek <jakub@redhat.com>
gcc/
* tree.def (OMP_SCOPE): New tree code.
* tree.h (OMP_SCOPE_BODY, OMP_SCOPE_CLAUSES): Define.
* tree-nested.c (convert_nonlocal_reference_stmt,
convert_local_reference_stmt, convert_gimple_call): Handle
GIMPLE_OMP_SCOPE.
* tree-pretty-print.c (dump_generic_node): Handle OMP_SCOPE.
* gimple.def (GIMPLE_OMP_SCOPE): New gimple code.
* gimple.c (gimple_build_omp_scope): New function.
(gimple_copy): Handle GIMPLE_OMP_SCOPE.
* gimple.h (gimple_build_omp_scope): Declare.
(gimple_has_substatements): Handle GIMPLE_OMP_SCOPE.
(gimple_omp_scope_clauses, gimple_omp_scope_clauses_ptr,
gimple_omp_scope_set_clauses): New inline functions.
(CASE_GIMPLE_OMP): Add GIMPLE_OMP_SCOPE.
* gimple-pretty-print.c (dump_gimple_omp_scope): New function.
(pp_gimple_stmt_1): Handle GIMPLE_OMP_SCOPE.
* gimple-walk.c (walk_gimple_stmt): Likewise.
* gimple-low.c (lower_stmt): Likewise.
* gimplify.c (is_gimple_stmt): Handle OMP_SCOPE.
(gimplify_scan_omp_clauses): For task reductions, handle OMP_SCOPE
like ORT_WORKSHARE constructs. Adjust diagnostics for %<scope%>
allowing task reductions. Reject inscan reductions on scope.
(omp_find_stores_stmt): Handle GIMPLE_OMP_SCOPE.
(gimplify_omp_workshare, gimplify_expr): Handle OMP_SCOPE.
* tree-inline.c (remap_gimple_stmt): Handle GIMPLE_OMP_SCOPE.
(estimate_num_insns): Likewise.
* omp-low.c (build_outer_var_ref): Look through GIMPLE_OMP_SCOPE
contexts if var isn't privatized there.
(check_omp_nesting_restrictions): Handle GIMPLE_OMP_SCOPE.
(scan_omp_1_stmt): Likewise.
(maybe_add_implicit_barrier_cancel): Look through outer
scope constructs.
(lower_omp_scope): New function.
(lower_omp_task_reductions): Handle OMP_SCOPE.
(lower_omp_1): Handle GIMPLE_OMP_SCOPE.
(diagnose_sb_1, diagnose_sb_2): Likewise.
* omp-expand.c (expand_omp_single): Support also GIMPLE_OMP_SCOPE.
(expand_omp): Handle GIMPLE_OMP_SCOPE.
(omp_make_gimple_edges): Likewise.
* omp-builtins.def (BUILT_IN_GOMP_SCOPE_START): New built-in.
gcc/c-family/
* c-pragma.h (enum pragma_kind): Add PRAGMA_OMP_SCOPE.
* c-pragma.c (omp_pragmas): Add scope construct.
* c-omp.c (omp_directives): Uncomment scope directive entry.
gcc/c/
* c-parser.c (OMP_SCOPE_CLAUSE_MASK): Define.
(c_parser_omp_scope): New function.
(c_parser_omp_construct): Handle PRAGMA_OMP_SCOPE.
gcc/cp/
* parser.c (OMP_SCOPE_CLAUSE_MASK): Define.
(cp_parser_omp_scope): New function.
(cp_parser_omp_construct, cp_parser_pragma): Handle PRAGMA_OMP_SCOPE.
* pt.c (tsubst_expr): Handle OMP_SCOPE.
gcc/testsuite/
* c-c++-common/gomp/nesting-2.c (foo): Add scope and masked
construct tests.
* c-c++-common/gomp/scan-1.c (f3): Add scope construct test.
* c-c++-common/gomp/cancel-1.c (f2): Add scope and masked
construct tests.
* c-c++-common/gomp/reduction-task-2.c (bar): Add scope construct
test. Adjust diagnostics for the addition of scope.
* c-c++-common/gomp/loop-1.c (f5): Add master, masked and scope
construct tests.
* c-c++-common/gomp/clause-dups-1.c (f1): Add scope construct test.
* gcc.dg/gomp/nesting-1.c (f1, f2, f3): Add scope construct tests.
* c-c++-common/gomp/scope-1.c: New test.
* c-c++-common/gomp/scope-2.c: New test.
* g++.dg/gomp/attrs-1.C (bar): Add scope construct tests.
* g++.dg/gomp/attrs-2.C (bar): Likewise.
* gfortran.dg/gomp/reduction4.f90: Adjust expected diagnostics.
* gfortran.dg/gomp/reduction7.f90: Likewise.
libgomp/
* Makefile.am (libgomp_la_SOURCES): Add scope.c.
* Makefile.in: Regenerated.
* libgomp_g.h (GOMP_scope_start): Declare.
* libgomp.map: Add GOMP_scope_start@@GOMP_5.1.
* scope.c: New file.
* testsuite/libgomp.c-c++-common/scope-1.c: New test.
* testsuite/libgomp.c-c++-common/task-reduction-16.c: New test.
The following patch implements C++20 # __VA_OPT__ (...) support.
Testcases cover what I came up with myself and what LLVM has for
#__VA_OPT__ in its testsuite, and the string literals are identical
between the two compilers on the va-opt-5.c testcase.
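A hypothetical illustration (not taken from the new testcases) of what
stringifying __VA_OPT__ does:

  #define SHOW(...) #__VA_OPT__(__VA_ARGS__)

  const char *a = SHOW(1, "x", int);   /* "1, \"x\", int" */
  const char *b = SHOW();              /* ""              */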
2021-08-17 Jakub Jelinek <jakub@redhat.com>
libcpp/
* macro.c (vaopt_state): Add m_stringify member.
(vaopt_state::vaopt_state): Initialize it.
(vaopt_state::update): Overwrite it.
(vaopt_state::stringify): New method.
(stringify_arg): Replace arg argument with first, count arguments
and add va_opt argument. Use first instead of arg->first and
count instead of arg->count, for va_opt add paste_tokens handling.
(paste_tokens): Fix up len calculation. Don't spell rhs twice,
instead use %.*s to supply lhs and rhs spelling lengths. Don't call
_cpp_backup_tokens here.
(paste_all_tokens): Call it here instead.
(replace_args): Adjust stringify_arg caller. For vaopt_state::END
if stringify is true handle __VA_OPT__ stringification.
(create_iso_definition): Handle # __VA_OPT__ similarly to # macro_arg.
gcc/testsuite/
* c-c++-common/cpp/va-opt-5.c: New test.
* c-c++-common/cpp/va-opt-6.c: New test.
This fixes value-numbering breaking reverse storage order accesses due
to a missed check. It adds a new overload of
reverse_storage_order_for_component_p and sets the reversed flag on the
VN IL ops for component and array accesses accordingly. It also
compares the reversed flag of the reference ops on reference lookup.
2021-08-16 Richard Biener <rguenther@suse.de>
PR tree-optimization/101925
* tree-ssa-sccvn.c (copy_reference_ops_from_ref): Set
reverse on COMPONENT_REF and ARRAY_REF according to
what reverse_storage_order_for_component_p does.
(vn_reference_eq): Compare reversed on reference ops.
(reverse_storage_order_for_component_p): New overload.
(vn_reference_lookup_3): Check reverse_storage_order_for_component_p
on the reference looked up.
* gcc.dg/sso-16.c: New testcase.
Similar to the H8/300H patch, this improves SImode shifts for the H8/S.
It's not as big a win on the H8/S since we can shift two positions at a
time. But that also means that we can handle more residuals with minimal
code growth after a special shift-by-16 or shift-by-24 sequence.
I think there's more to do here, but this seemed like as good a checkpoint
as any. Tested without regressions.
gcc/
* config/h8300/h8300.c (shift_alg_si): Avoid loops for most SImode
shifts on the H8/S.
(h8300_option_override): Use loops on H8/S more often when optimizing
for size.
(get_shift_alg): Handle new "special" cases on H8/S. Simplify
accordingly. Handle various arithmetic right shifts with special
sequences that we couldn't handle before.
This testcase is meant to detect reuse of the permutation mask in the
main loop; in the epilogue, vpermi2b can still be used, so add the
option --param=vect-epilogues-nomask=0.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr82460-2.c: Adjust testcase by adding
--param=vect-epilogues-nomask=0.
The expression ctx._M_indent is not a constant expression when ctx is a
reference parameter, even though _M_indent is an enumerator. Rename it
to _S_indent to be consistent with our conventions, and refer to it as
PrintContext::_S_indent to be valid C++ code (at least until P2280 is
accepted as a DR).
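A self-contained illustration of the language rule, using a hypothetical
type that mirrors PrintContext (the member is named s_indent here to
avoid a reserved identifier):

  struct PrintCtx { enum { s_indent = 4 }; };

  void print_word (const PrintCtx& ctx)
  {
    // constexpr int i = ctx.s_indent;     // rejected before P2280:
    //                                     // ctx is a reference parameter
    constexpr int i = PrintCtx::s_indent;  // OK: qualified-id, no reference
    (void) i;
    (void) ctx;
  }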
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
PR libstdc++/101937
* src/c++11/debug.cc (PrintContext::_M_indent): Replace with a
static data member.
(print_word): Use qualified-id to access it.
The additional libraries installed by --enable-libstdcxx-debug are built
without optimization to aid debugging, but the Python pretty printers
are not installed alongside them. This means that you can step through
the unoptimized library code, but at the expense of pretty printing the
library types.
This remedies the situation by installing another copy of the GDB hooks
alongside the debug version of libstdc++.so.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* python/Makefile.am [GLIBCXX_BUILD_DEBUG] (install-data-local):
Install another copy of the GDB hook.
* python/Makefile.in: Regenerate.
If -fprofile-update=atomic is used, then the target must provide atomic
operations for the counters of the type returned by get_gcov_type().
This is a 64-bit type for targets which have a 64-bit long long type.
On 32-bit targets this could be an issue since they may not provide
64-bit atomic operations. Allow targets to override the default type
size with the new TARGET_GCOV_TYPE_SIZE target hook.
If a 32-bit gcov type size is used, then there is currently a warning in
libgcov-driver.c in a dead code block due to
sizeof (counter) == sizeof (gcov_unsigned_t):
libgcc/libgcov-driver.c: In function 'dump_counter':
libgcc/libgcov-driver.c:401:46: warning: right shift count >= width of type [-Wshift-count-overflow]
401 | dump_unsigned ((gcov_unsigned_t)(counter >> 32), dump_fn, arg);
| ^~
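For reference, a rough sketch of how libgcc code could pick the counter
type from the new __LIBGCC_GCOV_TYPE_SIZE macro (the typedef shape is an
assumption, not the actual libgcov.h contents):

  #if __LIBGCC_GCOV_TYPE_SIZE > 32
  typedef long long gcov_type;
  typedef unsigned long long gcov_type_unsigned;
  #else
  typedef int gcov_type;
  typedef unsigned int gcov_type_unsigned;
  #endif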
gcc/c-family/
* c-cppbuiltin.c (c_cpp_builtins): Define
__LIBGCC_GCOV_TYPE_SIZE if flag_building_libgcc is true.
gcc/
* config/sparc/rtemself.h (SPARC_GCOV_TYPE_SIZE): Define.
* config/sparc/sparc.c (sparc_gcov_type_size): New.
(TARGET_GCOV_TYPE_SIZE): Redefine if SPARC_GCOV_TYPE_SIZE is defined.
* coverage.c (get_gcov_type): Use targetm.gcov_type_size().
* doc/tm.texi (TARGET_GCOV_TYPE_SIZE): Add hook under "Misc".
* doc/tm.texi.in: Regenerate.
* target.def (gcov_type_size): New target hook.
* targhooks.c (default_gcov_type_size): New.
* targhooks.h (default_gcov_type_size): Declare.
* tree-profile.c (gimple_gen_edge_profiler): Use precision of
gcov_type_node.
(gimple_gen_time_profiler): Likewise.
libgcc/
* libgcov.h (gcov_type): Define using __LIBGCC_GCOV_TYPE_SIZE.
(gcov_type_unsigned): Likewise.
add_scalar_info can directly generate a reference to an existing DIE for a
scalar attribute, e.g. the upper bound of a VLA, but it does so only if this
existing DIE has a location or is a constant:
if (get_AT (decl_die, DW_AT_location)
|| get_AT (decl_die, DW_AT_data_member_location)
|| get_AT (decl_die, DW_AT_const_value))
Now, in DWARF 5, members of a structure that are bitfields no longer have a
DW_AT_data_member_location but a DW_AT_data_bit_offset attribute instead, so
the condition is bypassed.
gcc/
* dwarf2out.c (add_scalar_info): Deal with DW_AT_data_bit_offset.
PR tree-optimization/100393
gcc/ChangeLog:
* tree-switch-conversion.c (group_cluster::dump): Use
get_comparison_count.
(jump_table_cluster::find_jump_tables): Pre-compute number of
comparisons and then decrement it. Cache also max_ratio.
(jump_table_cluster::can_be_handled): Change signature.
* tree-switch-conversion.h (get_comparison_count): New.
Given the latest work in the compiler and debugger, we no longer need to use
most GNAT-specific encodings in the debug info generated for an Ada program,
so the attached patch disables them, except with -fgnat-encodings=all.
gcc/
* dwarf2out.c (add_data_member_location_attribute): Use GNAT
encodings only when -fgnat-encodings=all is specified.
(add_bound_info): Likewise.
(add_byte_size_attribute): Likewise.
(gen_member_die): Likewise.
... and further simplify related things a bit.
Fix-up/clean-up for recent commit e2a58ed6dc
"openacc: Middle-end worker-partitioning support".
gcc/
* omp-oacc-neuter-broadcast.cc (field_map): Move variable into...
(execute_omp_oacc_neuter_broadcast): ... here.
(install_var_field, build_receiver_ref, build_sender_ref): Take
'field_map_t *' parameter. Adjust all users.
(worker_single_copy, neuter_worker_single): Take a
'record_field_map_t *' parameter. Adjust all users.
PR ipa/100600
gcc/ChangeLog:
* ipa-icf-gimple.c (func_checker::compare_ssa_name): Do not
consider equal SSA_NAMEs when one is a param.
gcc/testsuite/ChangeLog:
* gcc.dg/ipa/pr100600.c: New test.
--with-multilib-generator previously only supported combinations of
different ISAs and ABIs; however, the code model also affects code
generation significantly, so it should be handled by the multilib
mechanism as well.
This adds a `--cmodel=` option to `--with-multilib-generator` for
generating multilib combinations with different code models.
E.g.
--with-multilib-generator="rv64ima-lp64--;--cmodel=medlow,medany"
will generate 3 multilib variants:
1) rv64ima with lp64
2) rv64ima with lp64 and medlow code model
3) rv64ima with lp64 and medany code model
gcc/
* config/riscv/multilib-generator: Support code model option for
multi-lib.
* doc/install.texi: Document the new option for
--with-multilib-generator.