AIX protects the STDC format macros in a manner that can prevent the macros
from being defined, depending on the order of header inclusion.
The protection was referenced in C99, removed in C11, and
never specified in any C++ standard. Also, the macros are in the namespace
reserved to the implementation (compiler), so the compiler is permitted to
inject those names itself.
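A minimal illustration of the failure mode (hypothetical translation unit; the exact guard lives in the AIX <inttypes.h>):
/* In C++, if AIX's <inttypes.h> only defines PRId64 etc. when
   __STDC_FORMAT_MACROS is visible at its first inclusion, defining the
   macro later has no effect because of the header's include guard.  */
#include <inttypes.h>          /* first inclusion: format macros suppressed */
#define __STDC_FORMAT_MACROS   /* too late */
#include <inttypes.h>          /* include guard: no second chance */
#include <cstdio>

void
print (int64_t v)
{
  std::printf ("%" PRId64 "\n", v);   /* PRId64 may be undefined here */
}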
fixincludes/ChangeLog:
2020-09-17 David Edelsohn <dje.gcc@gmail.com>
PR target/97044
* inclhack.def (aix_inttypes): New fix.
* fixincl.x: Regenerate.
* tests/base/sys/inttypes.h: New file.
The code that collect2 generates, compiles and links into applications
and shared libraries to initialize constructors and register DWARF tables
is built with the compiler options used to invoke the linker. If the
compiler options change the visibility from default, the library
initialization routines will not be visible and this can prevent
initialization.
This patch checks whether the command line sets visibility, and if so adds
GCC pragmas to the initialization code generated by collect2 so that the
global, exported functions get default visibility.
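Roughly, the generated glue now brackets its global entry points with visibility pragmas; a sketch only, the function names below are illustrative, not the exact collect2 output:
#pragma GCC visibility push(default)
/* Constructor/destructor driver functions the runtime must be able to
   resolve even when the link was invoked with e.g. -fvisibility=hidden.  */
void _GLOBAL__FI_example (void) { /* run constructors, register DWARF tables */ }
void _GLOBAL__FD_example (void) { /* run destructors */ }
#pragma GCC visibility pop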
gcc/ChangeLog:
2020-09-26 David Edelsohn <dje.gcc@gmail.com>
Clement Chigot <clement.chigot@atos.com>
* collect2.c (visibility_flag): New.
(main): Detect -fvisibility.
(write_c_file_stat): Push and pop default visibility.
cc1plus stats are now:
Alias oracle query stats:
refs_may_alias_p: 62971744 disambiguations, 73160711 queries
ref_maybe_used_by_call_p: 141176 disambiguations, 63867883 queries
call_may_clobber_ref_p: 23573 disambiguations, 29322 queries
nonoverlapping_component_refs_p: 0 disambiguations, 37720 queries
nonoverlapping_refs_since_match_p: 19432 disambiguations, 55659 must overlaps, 75860 queries
aliasing_component_refs_p: 54724 disambiguations, 753570 queries
TBAA oracle: 24124230 disambiguations 56228428 queries
16058141 are in alias set 0
10338303 queries asked about the same object
125 queries asked about the same alias set
0 access volatile
3919230 are dependent in the DAG
1788399 are aritificially in conflict with void *
Modref stats:
modref use: 10408 disambiguations, 46993 queries
modref clobber: 1418549 disambiguations, 1951251 queries
4898707 tbaa queries (2.510547 per modref query)
396878 base compares (0.203397 per modref query)
PTA query stats:
pt_solution_includes: 975364 disambiguations, 13604284 queries
pt_solutions_intersect: 1026606 disambiguations, 13181198 queries
So compared to
https://gcc.gnu.org/pipermail/gcc-patches/2020-September/554692.html we get 25%
more use disambiguations and 91% more clobber disambiguations.
Tramp3d is
Alias oracle query stats:
refs_may_alias_p: 2056905 disambiguations, 2317461 queries
ref_maybe_used_by_call_p: 7137 disambiguations, 2093762 queries
call_may_clobber_ref_p: 234 disambiguations, 234 queries
nonoverlapping_component_refs_p: 0 disambiguations, 4313 queries
nonoverlapping_refs_since_match_p: 329 disambiguations, 10200 must overlaps, 10616 queries
aliasing_component_refs_p: 858 disambiguations, 34600 queries
TBAA oracle: 894996 disambiguations 1695991 queries
138346 are in alias set 0
470668 queries asked about the same object
0 queries asked about the same alias set
0 access volatile
191666 are dependent in the DAG
315 are aritificially in conflict with void *
Modref stats:
modref use: 842 disambiguations, 2265 queries
modref clobber: 14833 disambiguations, 28900 queries
34884 tbaa queries (1.207059 per modref query)
5041 base compares (0.174429 per modref query)
PTA query stats:
pt_solution_includes: 313372 disambiguations, 525724 queries
pt_solutions_intersect: 130374 disambiguations, 415138 queries
So about twice as many use disambiguations and 40% more clobber disambiguations.
Bootstrapped/regtested x86_64-linux, I plan to commit it later today after
more testing.
2020-09-26 Jan Hubicka <hubicka@ucw.cz>
* ipa-inline-transform.c: Include ipa-modref-tree.h and ipa-modref.h.
(inline_call): Call ipa_merge_modref_summary_after_inlining.
* ipa-inline.c (ipa_inline): Do not free summaries.
* ipa-modref.c (dump_records): Fix formatting.
(merge_call_side_effects): Break out from ...
(analyze_call): ... here; record recursive calls.
(analyze_stmt): Add new parameter RECURSIVE_CALLS.
(analyze_function): Do iterative dataflow on recursive calls.
(compute_parm_map): New function.
(ipa_merge_modref_summary_after_inlining): New function.
(collapse_loads): New function.
(modref_propagate_in_scc): Break out from ...
(pass_ipa_modref::execute): ... here; Do iterative dataflow.
* ipa-modref.h (ipa_merge_modref_summary_after_inlining): Declare.
As mentioned earlier, the vectorizer punts on vectorization of loops with non-constant
steps. As the language restrictions for OpenMP loops make it always possible to compute
the number of loop iterations before the loop, this change helps those cases
by computing it and by using an alternate IV that iterates from 0 to niterations - 1 with
a step of 1 next to the normal IV, which is then just linear in it.
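A rough sketch of the transformation on a single loop (an illustrative source-level rendition, not the actual expanded IL):
void
foo (int *a, int lb, int ub, int step)
{
  /* #pragma omp simd on:  for (int i = lb; i < ub; i += step) a[i]++;
     the iteration count can always be computed up front for an OpenMP loop.  */
  long long niters
    = (step > 0 && lb < ub) ? ((long long) ub - lb + step - 1) / step : 0;
  int i = lb;
  for (long long j = 0; j < niters; j++)   /* unit-step IV the vectorizer handles */
    {
      a[i]++;
      i += step;                           /* original IV stays linear in j */
    }
}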
List of functions in which, compared to current trunk, we now vectorize some loops that we
previously didn't (for c-c++-common only the C function names are listed; both C and C++
are affected though):
gcc/testsuite/gcc.dg/vect/vect-simd-17.c doit
gcc/testsuite/gcc.dg/vect/vect-simd-18.c foo
gcc/testsuite/gcc.dg/vect/vect-simd-19.c foo
gcc/testsuite/gcc.dg/vect/vect-simd-20.c foo
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_f_simd_auto
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_f_simd_guided32
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_f_simd_runtime
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_f_simd_static
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_f_simd_static32
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_pf_simd_auto._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_pf_simd_guided32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_pf_simd_runtime._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_pf_simd_static32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_pf_simd_static._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-2.c f3_simd_normal
libgomp/testsuite/libgomp.c-c++-common/for-2.c f5_simd_normal
libgomp/testsuite/libgomp.c-c++-common/for-2.c f6_simd_normal
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_auto._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_ds128_auto._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_ds128_guided32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_ds128_runtime._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_ds128_static32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_ds128_static._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_guided32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_runtime._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_static32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_dpfs_static._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_ds_ds128_normal
libgomp/testsuite/libgomp.c-c++-common/for-3.c f3_ds_normal
libgomp/testsuite/libgomp.c-c++-common/for-4.c f3_taskloop_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_tpf_simd_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_tpf_simd_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_tpf_simd_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_tpf_simd_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_tpf_simd_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_ds128_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_ds128_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_ds128_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_ds128_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_ds128_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttdpfs_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttds_ds128_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-5.c f3_ttds_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-5.c f5_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-5.c f6_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_ds128_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_ds128_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_ds128_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_ds128_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_ds128_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tdpfs_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tds_ds128_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-6.c f3_tds_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_auto._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_ds128_auto._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_ds128_guided32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_ds128_runtime._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_ds128_static32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_ds128_static._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_guided32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_runtime._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_static32._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_dpfs_static._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_ds_ds128_normal
libgomp/testsuite/libgomp.c-c++-common/for-14.c f3_ds_normal
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_ds128_auto._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_ds128_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_ds128_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_ds128_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_ds128_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_guided32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_runtime._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_static32._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tdpfs_static._omp_fn.1
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tds_ds128_normal._omp_fn.0
libgomp/testsuite/libgomp.c-c++-common/for-15.c f3_tds_normal._omp_fn.0
2020-09-26 Jakub Jelinek <jakub@redhat.com>
* omp-expand.c (expand_omp_simd): Help vectorizer for the collapse == 1
and non-composite collapse > 1 case with non-constant innermost loop
step by precomputing number of iterations before loop and using an
alternate IV from 0 to number of iterations - 1 with step of 1.
* gcc.dg/vect/vect-simd-17.c: Expect 11 or more vectorized loops.
* gcc.dg/vect/vect-simd-18.c: New test.
* gcc.dg/vect/vect-simd-19.c: New test.
* gcc.dg/vect/vect-simd-20.c: New test.
libcpp has two specialized altivec implementations of search_line_fast,
one for power8+ and the other one otherwise.
Both use __attribute__((altivec(vector))) and the GCC builtins rather than
altivec.h and the APIs from there, which is fine, but should be restricted
to when libcpp is built with GCC, so that it can be relied on.
The second elif also requires a minimum GCC_VERSION and thus isn't picked
e.g. when built with clang, but the first one lacked that requirement,
and so according to the bug reporter clang fails miserably on that.
The following patch fixes that by adding the same GCC_VERSION requirement
as the second version. I don't know where the 4.5 in there comes from and
the exact version doesn't matter that much, as long as it is above 4.2 that
clang pretends to be and smaller or equal to 4.8 as the oldest gcc we
support as bootstrap compiler ATM.
Furthermore, the patch fixes the comment: the version it is talking about is
not the pre-GCC 5 one, but actually the GCC 5+ one.
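The resulting guard shape is roughly the following (illustrative only; the exact conditions are in libcpp/lex.c):
#if GCC_VERSION >= 4005 && defined (__ALTIVEC__) && defined (_ARCH_PWR8)
/* power8+ Altivec implementation of search_line_fast, GCC-built only.  */
#elif GCC_VERSION >= 4005 && defined (__ALTIVEC__)
/* generic Altivec implementation, already restricted to GCC.  */
#else
/* portable fallbacks.  */
#endif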
2020-09-26 Jakub Jelinek <jakub@redhat.com>
PR bootstrap/97163
* lex.c (search_line_fast): Only use _ARCH_PWR8 Altivec version
for GCC >= 4.5.
This patch implements tracking of whether an argument points to read-only memory. This
is useful for ipa-modref as well as for the inline heuristics. It is desirable
to inline functions that dereference pointers to local variables in order
to enable SRA. We always did the opposite heuristics (guessing that the
dereferences will be optimized out with 50% probability), but here we can
increase the probability for cases where we can track that the argument is indeed
local memory (or read-only memory, which is also good).
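A small example of the pattern this targets (hypothetical code): after inlining, SRA can scalarize 'tmp', so biasing the heuristics toward inlining such calls pays off.
static int
consume (const int *p)
{
  return *p + 1;          /* dereferences its argument */
}

int
caller (void)
{
  int tmp = 42;           /* local memory */
  return consume (&tmp);  /* argument provably points to local memory */
}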
* ipa-fnsummary.c (dump_ipa_call_summary): Dump
points_to_local_or_readonly_memory flag.
(analyze_function_body): Compute points_to_local_or_readonly_memory
flag.
(remap_edge_change_prob): Rename to ...
(remap_edge_params): ... this one; update
points_to_local_or_readonly_memory.
(remap_edge_summaries): Update.
(read_ipa_call_summary): Stream the new flag.
(write_ipa_call_summary): Likewise.
* ipa-predicate.h (struct inline_param_summary): Add
points_to_local_or_readonly_memory.
(inline_param_summary::equal_to): Update.
(inline_param_summary::useless_p): Update.
Track if insert and merge operations changed anything in the summary.
gcc/ChangeLog:
2020-09-26 Jan Hubicka <hubicka@ucw.cz>
* ipa-modref-tree.h (modref_ref_node::insert_access): Track if something
changed.
(modref_base_node::insert_ref): Likewise (and add a new optional
argument).
(modref_tree::insert): Likewise.
(modref_tree::merge): Rewrite.
gcc/analyzer/ChangeLog:
PR analyzer/96646
PR analyzer/96841
* region-model.cc (region_model::get_representative_path_var):
When handling offset_region, wrap the MEM_REF's first argument in
an ADDR_EXPR of pointer type, rather than simply using the tree
for the parent region. Require the MEM_REF's second argument to
be an integer constant.
gcc/testsuite/ChangeLog:
PR analyzer/96646
PR analyzer/96841
* gcc.dg/analyzer/pr96646.c: New test.
* gcc.dg/analyzer/pr96841.c: New test.
The decl pushing APIs and duplicate_decls take an 'is_friend' parm,
when what they actually mean is 'hide this from name lookup'. That
conflation has gotten more anachronistic as time moved on. We now
have anticipated builtins, and I plan to have injected extern decls
soon. So this patch is mainly a renaming exercise. is_friend ->
hiding. duplicate_decls gets an additional 'was_hidden' parm. As
I've already said, hiddenness is a property of the symbol table, not
the decl. Builtins are now pushed requesting hiding, and pushdecl
asserts that we don't attempt to push a thing that should be hidden
without asking for it to be hidden.
This is the final piece of groundwork to get rid of a bunch of 'this
is hidden' markers on decls and move the hiding management entirely
into name lookup.
gcc/cp/
* cp-tree.h (duplicate_decls): Replace 'is_friend' with 'hiding'
and add 'was_hidden'.
* name-lookup.h (pushdecl_namespace_level): Replace 'is_friend'
with 'hiding'.
(pushdecl): Likewise.
(pushdecl_top_level): Drop is_friend parm.
* decl.c (check_no_redeclaration_friend_default_args): Rename parm
olddecl_hidden_p.
(duplicate_decls): Replace 'is_friend' with 'hiding'
and 'was_hidden'. Do minimal adjustments in body.
(cxx_builtin_function): Pass 'hiding' to pushdecl.
* friend.c (do_friend): Pass 'hiding' to pushdecl.
* name-lookup.c (supplement_binding_1): Drop defaulted arg to
duplicate_decls.
(update_binding): Replace 'is_friend' with 'hiding'. Drop
defaulted arg to duplicate_decls.
(do_pushdecl): Replace 'is_friend' with 'hiding'. Assert no
surprise hiding. Adjust duplicate_decls calls to inform of old
decl's hiddenness.
(pushdecl): Replace 'is_friend' with 'hiding'.
(set_identifier_type_value_with_scope): Adjust update_binding
call.
(do_pushdecl_with_scope): Replace 'is_friend' with 'hiding'.
(pushdecl_outermost_localscope): Drop default arg to
do_pushdecl_with_scope.
(pushdecl_namespace_level): Replace 'is_friend' with 'hiding'.
(pushdecl_top_level): Drop is_friend parm.
* pt.c (register_specialization): Comment duplicate_decls call
args.
(push_template_decl): Comment pushdecl_namespace_level.
(tsubst_friend_function, tsubst_friend_class): Likewise.
I always found tag_scope confusing, as it is not a scope, but a
direction of how to look up or insert an elaborated type tag. This
replaces it with an enum class TAG_how. I also add a new value,
HIDDEN_FRIEND, to distinguish the two cases of innermost-non-class
insertion that we currently conflate. Also renamed
'lookup_type_scope' to 'lookup_elaborated_type', because again, we're
not providing a scope to look up in.
gcc/cp/
* name-lookup.h (enum tag_scope): Replace with ...
(enum class TAG_how): ... this. Add HIDDEN_FRIEND value.
(lookup_type_scope): Replace with ...
(lookup_elaborated_type): ... this.
(pushtag): Use TAG_how, not tag_scope.
* cp-tree.h (xref_tag): Parameter is TAG_how, not tag_scope.
* decl.c (lookup_and_check_tag): Likewise. Adjust.
(xref_tag_1, xref_tag): Likewise. Adjust.
(start_enum): Adjust lookup_and_check_tag call.
* name-lookup.c (lookup_type_scope_1): Rename to ...
(lookup_elaborated_type_1): ... here. Use TAG_how, not tag_scope.
(lookup_type_scope): Rename to ...
(lookup_elaborated_type): ... here. Use TAG_how, not tag_scope.
(do_pushtag): Use TAG_how, not tag_scope. Adjust.
(pushtag): Likewise.
* parser.c (cp_parser_elaborated_type_specifier): Adjust.
(cp_parser_class_head): Likewise.
gcc/objcp/
* objcp-decl.c (objcp_start_struct): Use TAG_how not tag_scope.
(objcp_xref_tag): Likewise.
The Linux kernel has defined the cpuinfo string for the +rng feature, so
this patch adds that to GCC so that -march=native can pick it up.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/
* config/aarch64/aarch64-option-extensions.def (rng): Add
cpuinfo string.
We currently detect builtin decls via DECL_ARTIFICIAL &&
!DECL_HIDDEN_FUNCTION_P, which, besides being clunky, is a problem as
hiddenness is a property of the symbol table -- not the decl being
hidden. This adds DECL_BUILTIN_P, which just looks at the
SOURCE_LOCATION -- we have a magic one for builtins.
One of the consequential changes is to make function-scope omp udrs
have function context (needed because otherwise duplicate_decls thinks
the types don't match at the point we check). This is also morally
better, because that's what they are -- nested functions, stop lying.
(That's actually my plan for all DECL_LOCAL_DECL_P decls, as they are
distinct decls from the namespace-scope decl they alias.)
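The new check is essentially a source-location test; roughly (a sketch, see cp-tree.h for the authoritative definition):
/* A decl is a builtin iff it was created at the magic builtins location.  */
#define DECL_BUILTIN_P(NODE) \
  (DECL_SOURCE_LOCATION (NODE) == BUILTINS_LOCATION)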
gcc/cp/
* cp-tree.h (DECL_BUILTIN_P): New.
* decl.c (duplicate_decls): Use it. Do not treat omp-udr as a
builtin.
* name-lookup.c (anticipated_builtin): Use it.
(set_decl_context_in_fn): Function-scope OMP UDRs have function context.
(do_nonmember_using_decl): Use DECL_BUILTIN_P.
* parser.c (cp_parser_omp_declare_reduction): Function-scope OMP
UDRs have function context. Assert we never find a valid duplicate.
* pt.c (tsubst_expr): Function-scope OMP UDRs have function context.
libcc1/
* libcp1plugin.cc (supplement_binding): Use DECL_BUILTIN_P.
When compiling nvptx.c using -save-temps, I ran into Wimplicit-fallthrough
warnings.
The fallthrough locations have been marked with a fallthrough comment, but
that doesn't work with -save-temps, something that has been filed as
PR78497.
Work around this by using gcc_fallthrough () in addition to the comment.
Tested by building target nvptx, copying nvptx.c compile line and adding
-save-temps.
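The workaround pattern looks roughly like this (an illustrative snippet, not the exact nvptx.c hunk):
switch (size)
  {
  case 8:
    /* ... emit the high word ... */
    gcc_fallthrough ();   /* survives -save-temps, unlike the comment */
    /* FALLTHROUGH */
  case 4:
    /* ... emit the low word ... */
    break;
  }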
gcc/ChangeLog:
2020-09-25 Tom de Vries <tdevries@suse.de>
* config/nvptx/nvptx.c (nvptx_assemble_integer, nvptx_print_operand):
Use gcc_fallthrough ().
The RTL expansion code for CTORs doesn't handle VECTOR_BOOLEAN_TYPE_P
with bit-precision elements correctly as the testcase shows before
the PR97085 fix. The following makes it do the correct thing
(not 100% sure for CTOR of sub-vectors due to the lack of a testcase).
The alternative would be to assert such CTORs do not happen (and also
add IL verification for this).
The GIMPLE FE needs a way to declare the VECTOR_BOOLEAN_TYPE_P vectors
(thus the C FE needs that).
2020-09-25 Richard Biener <rguenther@suse.de>
PR middle-end/96814
* expr.c (store_constructor): Handle VECTOR_BOOLEAN_TYPE_P
CTORs correctly.
* gcc.target/i386/pr96814.c: New testcase.
This implements the missing move assignment to make std::swap work
on auto_vec<>.
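The added member looks roughly like this (a sketch; see vec.h for the real implementation and the surrounding class):
template<typename T>
class auto_vec<T, 0> : public vec<T, va_heap>
{
public:
  /* ...existing members...  */
  auto_vec &
  operator= (auto_vec &&other)
    {
      gcc_assert (this != &other);
      this->release ();            /* drop our own storage */
      this->m_vec = other.m_vec;   /* steal the other vector's storage */
      other.m_vec = NULL;
      return *this;
    }
};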
2020-09-25 Richard Biener <rguenther@suse.de>
PR middle-end/97207
* vec.h (auto_vec<T>::operator=(auto_vec<T>&&)): Implement.
Now that G++ defaults to gnu++17 we don't need special rules for
compiling the C++17 allocation and deallocation functions.
libstdc++-v3/ChangeLog:
* libsupc++/Makefile.am: Remove redundant -std=gnu++1z flags.
* libsupc++/Makefile.in: Regenerate.
This patch fixes ICEs in gcc.dg/torture/float16-basic.c for
-march=armv8.1-m.main+mve -mfloat-abi=hard. The problem was
that an fp16 argument was (rightly) being passed in FPRs,
but the fp16 move patterns only handled GPRs. LRA then cycled
trying to look for a way of handling the FPR.
It looks like there are three related problems here:
(1) We're using the wrong fp16 move pattern for base MVE.
*mov<mode>_vfp_<mode>16 (the pattern we use for +mve.fp)
works for base MVE too.
(2) The fp16 MVE load and store patterns are separate from the
main move patterns. The loads and stores should instead be
alternatives of the main move patterns, so that LRA knows
what to do with pseudo registers that become stack slots.
(3) The range restrictions for the loads and stores were wrong
for fp16: we were enforcing a multiple of 4 in [-255*4, 255*4]
instead of a multiple of 2 in [-255*2, 255*2].
(2) came from a patch to prevent writeback being used for MVE.
That patch also added a Uj constraint to enforce the correct
memory types for MVE. I think the simplest fix is therefore to merge
the loads and stores back into the main pattern and extend the Uj
constraint so that it acts like Um for non-MVE.
The testcase for that patch was mve-vldstr16-no-writeback.c, whose
main function is:
void
fn1 (__fp16 *pSrc)
{
__fp16 high;
__fp16 *pDst = 0;
unsigned i;
for (i = 0;; i++)
if (pSrc[i])
pDst[i] = high;
}
Fixing (2) causes the store part to fail, not because we're using
writeback, but because we decide to use GPRs to store high (which is
uninitialised, and so gets replaced with zero). This patch therefore
adds some scan-assembler-nots instead. (I wondered about changing the
testcase to initialise high, but that seemed like a bad idea for
a regression test.)
For (3): MVE seems to be the only thing to use arm_coproc_mem_operand_wb
(and its various interfaces) for 16-bit scalars: the Neon patterns only
use it for 32-bit scalars.
I've added new tests to try the various FPR alternatives of the
move patterns. The range of offsets that GCC uses for FPR loads
and stores is the intersection of the range allowed for GPRs and
FPRs, so the tests include GPR<->memory tests as well.
The fp32 and fp64 tests already pass; they're just there for
completeness.
gcc/
* config/arm/arm-protos.h (arm_mve_mode_and_operands_type_check):
Delete.
* config/arm/arm.c (arm_coproc_mem_operand_wb): Use a scale factor
of 2 rather than 4 for 16-bit modes.
(arm_mve_mode_and_operands_type_check): Delete.
* config/arm/constraints.md (Uj): Allow writeback for Neon,
but continue to disallow it for MVE.
* config/arm/arm.md (*arm32_mov<HFBF:mode>): Add !TARGET_HAVE_MVE.
* config/arm/vfp.md (*mov_load_vfp_hf16, *mov_store_vfp_hf16): Fold
back into...
(*mov<mode>_vfp_<mode>16): ...here but use Uj for the FPR memory
constraints. Use for base MVE too.
gcc/testsuite/
* gcc.target/arm/mve/intrinsics/mve-vldstr16-no-writeback.c: Allow
the store to use GPRs instead of FPRs. Add scan-assembler-nots
for writeback.
* gcc.target/arm/armv8_1m-fp16-move-1.c: New test.
* gcc.target/arm/armv8_1m-fp32-move-1.c: Likewise.
* gcc.target/arm/armv8_1m-fp64-move-1.c: Likewise.
This fixes a corner case with virtual operand update in if-conversion
by re-organizing the code to remove edges only after the last point
we need virtual PHI operands to be available.
2020-09-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/97199
* tree-if-conv.c (combine_blocks): Remove edges only
after looking at virtual PHI args.
Since r11-3402 (g:65c9878641cbe0ed898aa7047b7b994e9d4a5bb1), the
vtrn_half, vuzp_half and vzip_half tests started failing with
vtrn_half.c:76:17: error: redeclaration of 'vector_float64x2' with no linkage
vtrn_half.c:77:17: error: redeclaration of 'vector2_float64x2' with no linkage
vtrn_half.c:80:17: error: redeclaration of 'vector_res_float64x2' with no linkage
This is because r11-3402 now always declares float64x2 variables for
aarch64, leading to a duplicate declaration in these testcases.
The fix is simply to remove these now useless declarations.
These tests are skipped on arm*, so there is no impact on that target.
2020-09-25 Christophe Lyon <christophe.lyon@linaro.org>
gcc/testsuite/
PR target/71233
* gcc.target/aarch64/advsimd-intrinsics/vtrn_half.c: Remove
declarations of vector, vector2, vector_res for float64x2 type.
* gcc.target/aarch64/advsimd-intrinsics/vuzp_half.c: Likewise.
* gcc.target/aarch64/advsimd-intrinsics/vzip_half.c: Likewise.
This fixes the testcase writing to adjacent stack vars, exposed
by IPA modref.
2020-09-25 Richard Biener <rguenther@suse.de>
PR testsuite/97204
* gcc.target/i386/sse2-mmx-pinsrw.c: Fix.
The following change adds support for non-rectangular simd loops.
While working on that, I've noticed we actually don't vectorize collapsed
simd loops at all, because the code that I thought would be vectorizable
actually is not vectorized. While in theory, for constant lower/upper
bounds and a constant step of all but the outermost loop, we could
vectorize by computing the separate iterators using vectorized division
and modulo for each of them from the single iterator that increments
by 1 from 0 to the total iteration count of the loop nest, I think that would
be fairly expensive, and the chances of the loop body being vectorizable
would be low, e.g. because the array indices are unlikely to be linear and would
need scatters/gathers.
This patch changes the generated code to vectorize only the innermost
loop, which has a higher chance of being vectorized. Below is the list of
tests and function names in which the patch resulted in vectorizing something
that hasn't been vectorized before (ok, the first line is a new test).
I've also found that the vectorizer will not vectorize loops with non-constant
steps; I plan to do something about those incrementally on the omp-expand.c
side (basically, compute the number of iterations before the loop and use a 0 to
number_of_iterations step 1 IV as the main one).
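For reference, a non-rectangular simd loop is one where an inner loop's bounds depend on an outer IV; a hypothetical example:
#pragma omp simd collapse(2)
for (int i = 0; i < n; i++)
  for (int j = i; j < n; j++)   /* inner lower bound depends on i */
    a[i][j] += 1;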
I have a problem with the composite simd vectorization though.
The point is that each thread (or task etc.) is given only a range of
consecutive iterations, so somewhere earlier it computes the total number of iterations
and splits the work between the workers, and then the intent is to try to vectorize it.
So, each thread is then given a begin ... end-1 range that it would handle.
This means that from the single begin value I need to compute the individual iteration
vars I should start at and then goto into the loop nest to begin iterating there
(and actually compute how many iterations the innermost loop should do each time
so that it stops before end).
Very roughly the IL I emit is something like:
int t[100][100][100];
void
foo (int a, int b, int c, int d, int e, int f, int g, int h, int u, int v, int w, int x)
{
int i, j, k;
int cnt;
if (x)
{
i = u; j = v; k = w; goto doit;
}
for (i = a; i < b; i += c)
for (j = d; j < e; j += f)
{
k = g;
doit:
for (; k < h; k++)
t[i][j][k] += i + j + k;
}
}
Unfortunately, some pass then turns the innermost loop into one with more than 2 basic blocks
and it isn't vectorized because of that.
Also, I have disabled (for now) SIMTization of collapsed simd loops, because for SIMT
it would be using a single thread anyway and I didn't want to bother with checking
SIMT in all the places I've been changing. If SIMT support is added for some or all
collapsed loops, that omp-low.c change needs to be reverted.
Here is that list of what hasn't been vectorized before and is now:
gcc/testsuite/gcc.dg/vect/vect-simd-17.c doit
gcc/testsuite/gfortran.dg/gomp/openmp-simd-6.f90 bar
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-10.c f28_taskloop_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-10.c _Z24f28_taskloop_simd_normalv._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-11.c f25_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-11.c f26_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-11.c f27_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-11.c f28_tpf_simd_guided32._omp_fn.1
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-11.c f28_tpf_simd_runtime._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-11.c _Z17f25_t_simd_normaliiiiiii._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-11.c _Z17f26_t_simd_normaliiiixxi._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-11.c _Z17f27_t_simd_normalv._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-11.c _Z20f28_tpf_simd_runtimev._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-11.c _Z21f28_tpf_simd_guided32v._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-2.c f7_simd_normal
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-2.c f7_simd_normal
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-2.c f8_f_simd_guided32
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-2.c f8_f_simd_guided32
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-2.c f8_f_simd_runtime
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-2.c f8_f_simd_runtime
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-2.c f8_pf_simd_guided32._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-2.c f8_pf_simd_runtime._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-2.c _Z18f8_pf_simd_runtimev._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-2.c _Z19f8_pf_simd_guided32v._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-4.c f8_taskloop_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-4.c _Z23f8_taskloop_simd_normalv._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-5.c f7_t_simd_normal._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-5.c f8_tpf_simd_guided32._omp_fn.1
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-5.c f8_tpf_simd_runtime._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-5.c _Z16f7_t_simd_normalv._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-5.c _Z19f8_tpf_simd_runtimev._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-5.c _Z20f8_tpf_simd_guided32v._omp_fn.1
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c f25_simd_normal
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f25_simd_normal
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c f26_simd_normal
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f26_simd_normal
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c f27_simd_normal
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f27_simd_normal
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c f28_f_simd_guided32
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f28_f_simd_guided32
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c f28_f_simd_runtime
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f28_f_simd_runtime
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f28_pf_simd_guided32._omp_fn.0
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/for-8.c f28_pf_simd_runtime._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c _Z19f28_pf_simd_runtimev._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/for-8.c _Z20f28_pf_simd_guided32v._omp_fn.0
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/master-combined-1.c main._omp_fn.9
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/master-combined-1.c main._omp_fn.9
libgomp/testsuite/libgomp.c++/../libgomp.c-c++-common/simd-1.c f2
libgomp/testsuite/libgomp.c/../libgomp.c-c++-common/simd-1.c f2
libgomp/testsuite/libgomp.c/pr70680-2.c f1._omp_fn.0
libgomp/testsuite/libgomp.c/pr70680-2.c f2._omp_fn.0
libgomp/testsuite/libgomp.c/pr70680-2.c f3._omp_fn.0
libgomp/testsuite/libgomp.c/pr70680-2.c f4._omp_fn.0
libgomp/testsuite/libgomp.c/simd-8.c foo
libgomp/testsuite/libgomp.c/simd-9.c bar
libgomp/testsuite/libgomp.c/simd-9.c foo
2020-09-25 Jakub Jelinek <jakub@redhat.com>
gcc/
* omp-low.c (scan_omp_1_stmt): Don't call scan_omp_simd for
collapse > 1 loops as simt doesn't support collapsed loops yet.
* omp-expand.c (expand_omp_for_init_counts, expand_omp_for_init_vars):
Small tweaks to function comment.
(expand_omp_simd): Rewritten collapse > 1 support to only attempt
to vectorize the innermost loop and emit set of outer loops around it.
For non-composite simd with collapse > 1 without broken loop don't
even try to compute number of iterations first. Add support for
non-rectangular simd loops.
(expand_omp_for): Don't sorry_at on non-rectangular simd loops.
gcc/testsuite/
* gcc.dg/vect/vect-simd-17.c: New test.
libgomp/
* testsuite/libgomp.c/loop-25.c: New test.
On nvptx we run into:
...
FAIL: c-c++-common/ident-1b.c -Wc++-compat scan-assembler GCC:
FAIL: c-c++-common/ident-2b.c -Wc++-compat scan-assembler GCC:
...
Using a scan-assembler directive adds -fno-ident to the compile options.
The test c-c++-common/ident-1b.c adds dg-options "-fident", and intends to
check that the -fident overrides the -fno-ident, by means of the
scan-assembler. But for nvptx, there's no .ident directive, either with -fident
or with -fno-ident.
Fix this by adding an effective target ident_directive, and requiring
it in both test-cases.
Tested on nvptx and x86_64.
gcc/testsuite/ChangeLog:
2020-09-24 Tom de Vries <tdevries@suse.de>
* lib/target-supports.exp (check_effective_target_ident_directive): New proc.
* c-c++-common/ident-1b.c: Require effective target ident_directive.
* c-c++-common/ident-2b.c: Same.
This adds a get_DW_UT_name function to dwarfnames using dwarf2.def,
for use in binutils readelf to show the unit types in a DWARF5 header.
It also removes DW_CIE_VERSION, which was already removed in binutils/gdb
and is not used in gcc.
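The dwarfnames.c definition follows the usual X-macro pattern over dwarf2.def; roughly (a sketch, not the verbatim code):
/* The real file defines the macros for all the DW_* tables before the
   single include of dwarf2.def; only the DW_UT part is sketched here.  */
#define DW_UT_FIRST(name, value)        \
  const char *                          \
  get_DW_UT_name (unsigned int ut)      \
  {                                     \
    switch (ut)                         \
      {                                 \
      DW_UT (name, value)
#define DW_UT(name, value) case name: return # name ;
#define DW_UT_END                       \
      }                                 \
    return 0;                           \
  }

#include "dwarf2.def"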
include/ChangeLog:
* dwarf2.def: Add DWARF5 Unit type header encoding macros
DW_UT_FIRST, DW_UT and DW_UT_END.
* dwarf2.h (enum dwarf_unit_type): Removed and define using
DW_UT_FIRST, DW_UT and DW_UT_END macros.
(DW_CIE_VERSION): Removed.
(get_DW_UT_name): New function declaration.
libiberty/ChangeLog:
* dwarfnames.c (get_DW_UT_name): Define using DW_UT_FIRST, DW_UT
and DW_UT_END.
In cleaning up local decl handling, here's an initial patch that takes
advantage of C++'s default args for the is_friend parm of pushdecl,
duplicate_decls and push_template_decl_real and the scope & tpl_header
parms of xref_tag. Then many of the calls simply don't mention these.
I also rename push_template_decl_real to push_template_decl, deleting
the original forwarding function. This'll make my later patches
changing their types less intrusive. There are 2 functional changes:
1) push_template_decl requires is_friend to be correct, it doesn't go
checking for a friend function (an assert is added).
2) debug_overload prints out Hidden and Using markers for the overload set.
gcc/cp/
* cp-tree.h (duplicate_decls): Default is_friend to false.
(xref_tag): Default tag_scope & tpl_header_p to ts_current & false.
(push_template_decl_real): Default is_friend to false. Rename to
...
(push_template_decl): ... here. Delete original decl.
* name-lookup.h (pushdecl_namespace_level): Default is_friend to
false.
(pushtag): Default tag_scope to ts_current.
* coroutines.cc (morph_fn_to_coro): Drop default args to xref_tag.
* decl.c (start_decl): Drop default args to duplicate_decls.
(start_enum): Drop default arg to pushtag & xref_tag.
(start_preparsed_function): Pass DECL_FRIEND_P to
push_template_decl.
(grokmethod): Likewise.
* friend.c (do_friend): Rename push_template_decl_real calls.
* lambda.c (begin_lambda_type): Drop default args to xref_tag.
(vla_capture_type): Likewise.
* name-lookup.c (maybe_process_template_type_declaration): Rename
push_template_decl_real call.
(pushdecl_top_level_and_finish): Drop default arg to
pushdecl_namespace_level.
* pt.c (push_template_decl_real): Assert no surprising friend
functions. Rename to ...
(push_template_decl): ... here. Delete original function.
(lookup_template_class_1): Drop default args from pushtag.
(instantiate_class_template_1): Likewise.
* ptree.c (debug_overload): Print hidden and using markers.
* rtti.c (init_rtti_processing): Drop default args from xref_tag.
(build_dynamic_cast_1, tinfo_base_init): Likewise.
* semantics.c (begin_class_definition): Drop default args to
pushtag.
gcc/objcp/
* objcp-decl.c (objcp_start_struct): Drop default args to
xref_tag.
(objcp_xref_tag): Likewise.
libcc1/
* libcp1plugin.cc (supplement_binding): Drop default args to
duplicate_decls.
(safe_pushtag): Drop scope parm. Drop default args to pushtag.
(safe_pushdecl_maybe_friend): Rename to ...
(safe_pushdecl): ... here. Drop is_friend parm. Drop default args
to pushdecl.
(plugin_build_decl): Adjust safe_pushdecl & safe_pushtag calls.
(plugin_build_constant): Adjust safe_pushdecl call.
The AIX ptrace syscalls don't have the same semantics as the glibc one.
The syscall package already handles it correctly, so disable the new
__go_ptrace C function for AIX.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/256777
libstdc++-v3/ChangeLog:
PR libstdc++/71579
* include/std/type_traits (invoke_result, is_invocable)
(is_invocable_r, is_nothrow_invocable, is_nothrow_invocable_r):
Add static_asserts to make sure that the arguments of the type
traits are not misused with incomplete types.
* testsuite/20_util/invoke_result/incomplete_args_neg.cc: New test.
* testsuite/20_util/is_invocable/incomplete_args_neg.cc: New test.
* testsuite/20_util/is_invocable/incomplete_neg.cc: New test.
* testsuite/20_util/is_nothrow_invocable/incomplete_args_neg.cc:
New test.
* testsuite/20_util/is_nothrow_invocable/incomplete_neg.cc: Check
for error on incomplete type usage in trait.
The class template semiregular-box<T> defined in [range.semi.wrap] is
used by a number of views to accommodate non-semiregular subobjects
while ensuring that the overall view remains semiregular. It provides
a stand-in default constructor, copy assignment operator and move
assignment operator whenever the underlying type lacks them. The
wrapper derives from std::optional<T> to support default construction
when T is not default constructible.
It would be nice for this wrapper to essentially be a no-op when the
underlying type is already semiregular, but this is currently not the
case due to its use of std::optional<T>, which incurs space overhead
compared to storing just T.
To that end, this patch specializes the semiregular wrapper for
semiregular T. Compared to the primary template, this specialization
uses less space, and it allows [[no_unique_address]] to optimize away
wrapped data members whose underlying type is empty and semiregular
(e.g. a non-capturing lambda). This patch also applies
[[no_unique_address]] to the five data members that use the wrapper.
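A minimal sketch of the specialization's idea (illustrative, not the actual libstdc++ code): when T is already semiregular, the wrapper can store T directly instead of going through std::optional<T>, so there is no engaged flag and an empty T can be elided via [[no_unique_address]].
#include <concepts>

template<typename T>
  requires std::semiregular<T>
struct box   // illustrative stand-in for the __detail::__box partial specialization
{
  [[no_unique_address]] T value = T();

  constexpr T& operator*() noexcept { return value; }
  constexpr const T& operator*() const noexcept { return value; }
  constexpr bool has_value() const noexcept { return true; }  // always engaged
};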
libstdc++-v3/ChangeLog:
* include/std/ranges (__detail::__boxable): Split out the
associated constraints of __box into here.
(__detail::__box): Use the __boxable concept. Define a leaner
partial specialization for semiregular types.
(single_view::_M_value): Give it [[no_unique_address]].
(filter_view::_M_pred): Likewise.
(transform_view::_M_fun): Likewise.
(take_while_view::_M_pred): Likewise.
(drop_while_view::_M_pred):: Likewise.
* testsuite/std/ranges/adaptors/detail/semiregular_box.cc: New
test.
This adds support for Arm's Neoverse N2 CPU to the AArch32 backend.
Neoverse N2 builds AArch32 at EL0 and therefore needs support in AArch32
GCC.
gcc/ChangeLog:
* config/arm/arm-cpus.in (neoverse-n2): New.
* config/arm/arm-tables.opt: Regenerate.
* config/arm/arm-tune.md: Regenerate.
* doc/invoke.texi: Document support for Neoverse N2.
This patch adds support for Arm's Neoverse N2 CPU to the AArch64
backend.
gcc/ChangeLog:
* config/aarch64/aarch64-cores.def: Add Neoverse N2.
* config/aarch64/aarch64-tune.md: Regenerate.
* doc/invoke.texi: Document AArch64 support for Neoverse N2.
This adds a move CTOR to auto_vec<T, 0> and makes use of an
auto_vec<edge> return value for get_loop_exit_edges, denoting
that lifetime management of the vector is handed to the caller.
The move CTOR prompted the hash_table change because it apparently
makes the copy CTOR implicitly deleted (good) and hash-table
expansion of the odr_enum_map which is
hash_map <nofree_string_hash, odr_enum> where odr_enum has an
auto_vec<odr_enum_val, 0> member triggers this. Not sure if
there's a latent bug there before this (I think we're not
invoking DTORs, but we're invoking copy-CTORs).
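Caller-side, the change looks roughly like this (a sketch inside a pass, given struct loop *loop): the returned auto_vec owns the storage, so no explicit release is needed.
auto_vec<edge> exits = get_loop_exit_edges (loop);
edge e;
unsigned i;
FOR_EACH_VEC_ELT (exits, i, e)
  {
    /* ... inspect the exit edge ... */
  }
/* storage released when 'exits' goes out of scope */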
2020-08-06 Richard Biener <rguenther@suse.de>
* vec.h (auto_vec<T, 0>::auto_vec (auto_vec &&)): New move CTOR.
(auto_vec<T, 0>::operator=(auto_vec &&)): Delete.
* hash-table.h (hash_table::expand): Use std::move when expanding.
* cfgloop.h (get_loop_exit_edges): Return auto_vec<edge>.
* cfgloop.c (get_loop_exit_edges): Adjust.
* cfgloopmanip.c (fix_loop_placement): Likewise.
* ipa-fnsummary.c (analyze_function_body): Likewise.
* ira-build.c (create_loop_tree_nodes): Likewise.
(create_loop_tree_node_allocnos): Likewise.
(loop_with_complex_edge_p): Likewise.
* ira-color.c (ira_loop_edge_freq): Likewise.
* loop-unroll.c (analyze_insns_in_loop): Likewise.
* predict.c (predict_loops): Likewise.
* tree-predcom.c (last_always_executed_block): Likewise.
* tree-ssa-loop-ch.c (ch_base::copy_headers): Likewise.
* tree-ssa-loop-im.c (store_motion_loop): Likewise.
* tree-ssa-loop-ivcanon.c (loop_edge_to_cancel): Likewise.
(canonicalize_loop_induction_variables): Likewise.
* tree-ssa-loop-manip.c (get_loops_exits): Likewise.
* tree-ssa-loop-niter.c (find_loop_niter): Likewise.
(finite_loop_p): Likewise.
(find_loop_niter_by_eval): Likewise.
(estimate_numbers_of_iterations): Likewise.
* tree-ssa-loop-prefetch.c (emit_mfence_after_loop): Likewise.
(may_use_storent_in_loop_p): Likewise.
This fixes an ICE in noexcept instantiation. It was presuming
functions always have template_info, but that changed with my
DECL_LOCAL_DECL_P changes. Fortunately DECL_LOCAL_DECL_P fns are
never member fns, so we don't need to go fishing out a this pointer.
Also I realized I'd misnamed local10.C, so renaming it local-fn3.C,
and while there adding the effective-target lto that David E pointed
out was missing.
PR c++/97186
gcc/cp/
* pt.c (maybe_instantiate_noexcept): Local externs are never
member fns.
gcc/testsuite/
* g++.dg/template/local10.C: Rename ...
* g++.dg/template/local-fn3.C: ... here. Require lto.
* g++.dg/template/local-fn4.C: New.
Re-add tracking of accesses, which was unfinished in David's patch.
At the moment I only implemented tracking of the fact that an access is based on
a dereference of the parameter (so we track THIS pointers).
The patch does not implement IPA propagation since it needs a bit more work, which
I will post shortly: ipa-fnsummary needs to track when a parameter points to
local memory, summaries need to be merged when a function is inlined (because
jump functions are), and propagation needs to be turned into iterative dataflow
on SCC components.
The patch also adds documentation of -fipa-modref and params that was left uncommitted
in my branch :(.
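A tiny example of what the access tracking enables (hypothetical code): the summary for setx records that its only store is through a dereference of parameter 0 (the this pointer), which lets the alias oracle keep the store to the global out of its way.
struct S { int x; void setx (int v) { x = v; } };

int g;

int
use (S *s)
{
  g = 1;
  s->setx (2);   /* modref: writes only through *s (its this pointer) */
  return g;      /* the load of g can potentially be reused as 1 */
}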
Even without this change it does lead to a nice increase of disambiguations
for the cc1plus build.
Alias oracle query stats:
refs_may_alias_p: 62758323 disambiguations, 72935683 queries
ref_maybe_used_by_call_p: 139511 disambiguations, 63654045 queries
call_may_clobber_ref_p: 23502 disambiguations, 29242 queries
nonoverlapping_component_refs_p: 0 disambiguations, 37654 queries
nonoverlapping_refs_since_match_p: 19417 disambiguations, 55555 must overlaps, 75721 queries
aliasing_component_refs_p: 54665 disambiguations, 752449 queries
TBAA oracle: 21917926 disambiguations 53054678 queries
15763411 are in alias set 0
10162238 queries asked about the same object
124 queries asked about the same alias set
0 access volatile
3681593 are dependent in the DAG
1529386 are aritificially in conflict with void *
Modref stats:
modref use: 8311 disambiguations, 32527 queries
modref clobber: 742126 disambiguations, 1036986 queries
1987054 tbaa queries (1.916182 per modref query)
125479 base compares (0.121004 per modref query)
PTA query stats:
pt_solution_includes: 968314 disambiguations, 13609584 queries
pt_solutions_intersect: 1019136 disambiguations, 13147139 queries
So compared to
https://gcc.gnu.org/pipermail/gcc-patches/2020-September/554605.html
we get 41% more use disambiguations (with a similar number of queries) and 8% more
clobber disambiguations.
For tramp3d:
Alias oracle query stats:
refs_may_alias_p: 2052256 disambiguations, 2312703 queries
ref_maybe_used_by_call_p: 7122 disambiguations, 2089118 queries
call_may_clobber_ref_p: 234 disambiguations, 234 queries
nonoverlapping_component_refs_p: 0 disambiguations, 4299 queries
nonoverlapping_refs_since_match_p: 329 disambiguations, 10200 must overlaps, 10616 queries
aliasing_component_refs_p: 857 disambiguations, 34555 queries
TBAA oracle: 885546 disambiguations 1677080 queries
132105 are in alias set 0
469030 queries asked about the same object
0 queries asked about the same alias set
0 access volatile
190084 are dependent in the DAG
315 are aritificially in conflict with void *
Modref stats:
modref use: 426 disambiguations, 1881 queries
modref clobber: 10042 disambiguations, 16202 queries
19405 tbaa queries (1.197692 per modref query)
2775 base compares (0.171275 per modref query)
PTA query stats:
pt_solution_includes: 313908 disambiguations, 526183 queries
pt_solutions_intersect: 130510 disambiguations, 416084 queries
Here uses decrease by 4 disambiguations and clobbers improve by 3.5%. I think
the difference is caused by the fact that gcc has many more alias set 0 accesses
originating from gimple and tree unions, as I mentioned in the original mail.
After pushing out the IPA propagation I will re-add code to track offsets and
sizes, which further improves disambiguation. On tramp3d it enables a lot of DSE
for structure fields not accessed by the uninlined function.
gcc/
* doc/invoke.texi: Document -fipa-modref, ipa-modref-max-bases,
ipa-modref-max-refs, ipa-modref-max-accesses, ipa-modref-max-tests.
* ipa-modref-tree.c (test_insert_search_collapse): Update.
(test_merge): Update.
(gt_ggc_mx): New function.
* ipa-modref-tree.h (struct modref_access_node): New structure.
(struct modref_ref_node): Add every_access and accesses array.
(modref_ref_node::modref_ref_node): Update ctor.
(modref_ref_node::search): New member function.
(modref_ref_node::collapse): New member function.
(modref_ref_node::insert_access): New member function.
(modref_base_node::insert_ref): Do not collapse base if ref is 0.
(modref_base_node::collapse): Also collapse refs.
(modref_tree): Add accesses.
(modref_tree::modref_tree): Initialize max_accesses.
(modref_tree::insert): Add access parameter.
(modref_tree::cleanup): New member function.
(modref_tree::merge): Add parm_map; merge accesses.
(modref_tree::copy_from): New member function.
(modref_tree::create_ggc): Add max_accesses.
* ipa-modref.c (dump_access): New function.
(dump_records): Dump accesses.
(dump_lto_records): Dump accesses.
(get_access): New function.
(record_access): Record access.
(record_access_lto): Record access.
(analyze_call): Compute parm_map.
(analyze_function): Update construction of modref records.
(modref_summaries::duplicate): Likewise; use copy_from.
(write_modref_records): Stream accesses.
(read_modref_records): Stream accesses.
(pass_ipa_modref::execute): Update call of merge.
* params.opt (-param=modref-max-accesses): New.
* tree-ssa-alias.c (alias_stats): Add modref_baseptr_tests.
(dump_alias_stats): Update.
(base_may_alias_with_dereference_p): New function.
(modref_may_conflict): Check accesses.
(ref_maybe_used_by_call_p_1): Update call to modref_may_conflict.
(call_may_clobber_ref_p_1): Update call to modref_may_conflict.
With nvptx, we run into:
...
FAIL: gcc.dg/tls/thr-cse-1.c scan-assembler-not \
emutls_get_address.*emutls_get_address.*
...
because the nvptx assembly looks like:
...
call (%value_in), __emutls_get_address, (%out_arg1);
...
// BEGIN GLOBAL FUNCTION DECL: __emutls_get_address
.extern .func (.param.u64 %value_out) __emutls_get_address (.param.u64 %in_ar0);
...
Fix this by checking the slim final dump instead, where we have just:
...
12: r35:DI=call [`__emutls_get_address'] argc:0
...
gcc/testsuite/ChangeLog:
2020-09-24 Tom de Vries <tdevries@suse.de>
* gcc.dg/tls/thr-cse-1.c: Scan final dump instead of assembly for
nvptx.