* config/mips/mips.c (DIRECT_BUILTIN_PURE): New macro. Add a
pure qualifier to the built-in.
(MSA_BUILTIN_PURE): New macro. Add a pure qualifier to the MSA
built-ins.
(struct mips_builtin_description): Add is_pure flag.
(mips_init_builtins): Mark built-in as pure if the flag in the
corresponding mips_builtin_description struct is set.
* gcc.target/mips/mips-builtins-pure.c: New test.
From-SVN: r277534
* config/mips/mips-msa.md (msa_insert_<msafmt_f>): Add an
alternative that covers floating-point input values. Also
forbid splitting the insert.d pattern for floating-point values.
* gcc.target/mips/msa-insert-split.c: New test.
From-SVN: r277533
When using the -msave-restore flag we end up with calls to
_riscv_save_0 and _riscv_restore_0. These functions adjust the stack
and save or restore the return address. Because multiple save/restore
stub functions are grouped together, the save/restore 0 calls actually
save s0, s1, s2, and the return address, but only the return address
matters. Leaf functions don't call the save/restore stubs, so
whenever we do see a call to the stubs, the store of the return
address is required.
If we look at riscv_expand_prologue and riscv_expand_epilogue in
gcc/config/riscv/riscv.c, we can see that it would be reasonably easy
to adjust these functions to avoid the stub calls in the cases where
we are about to call _riscv_save_0 and _riscv_restore_0. However, the
actual code size saving this would give is debatable: with linker
relaxation the calls to save/restore are often just 4 bytes, and can
sometimes even be 2 bytes, while leaving the stack adjustment and
return address save inline is always going to be 4 bytes.
The interesting case is when we call _riscv_save_0 and
_riscv_restore_0 and also have a frame that (without save/restore)
would have resulted in a tail call. In this case, if we could remove
the save/restore calls and restore the tail call, we would get a real
size saving.
The problem is that the choice of whether to generate a tail call is
made during the gimple expand pass, at which point we don't yet know
how many registers we need to save (or restore).
The solution presented in this patch is a partial one. Using the
TARGET_MACHINE_DEPENDENT_REORG pass to implement some very limited
pattern matching, we identify functions that call _riscv_save_0 and
_riscv_restore_0 and that could be converted to use a tail call.
These functions are then converted to the non-save/restore tail call
form.
This should result in a code size reduction when compiling with -Os
and with the -msave-restore flag.
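As a hypothetical illustration (in the spirit of the new
save-restore-*.c tests below, though the actual tests may well
differ), this is the shape of function the pass targets:
```
/* Hypothetical example, not taken from the patch.  Compiled with
   -Os -msave-restore, this function needs no callee-saved registers,
   yet its body was previously bracketed by calls to _riscv_save_0 and
   _riscv_restore_0, since the tail-call decision at expand time could
   not anticipate that.  The new reorg pass pattern-matches this shape
   and rewrites it as a single tail call to other_func.  */
extern int other_func (int);

int
wrapper (int x)
{
  return other_func (x);
}
```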
gcc/ChangeLog:
* config.gcc: Add riscv-sr.o to extra_objs for riscv.
* config/riscv/riscv-sr.c: New file.
* config/riscv/riscv.c (riscv_reorg): New function.
(TARGET_MACHINE_DEPENDENT_REORG): Define.
* config/riscv/riscv.h (SIBCALL_REG_P): Define.
(riscv_remove_unneeded_save_restore_calls): Declare.
* config/riscv/t-riscv (riscv-sr.o): New build rule.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/save-restore-2.c: New file.
* gcc.target/riscv/save-restore-3.c: New file.
* gcc.target/riscv/save-restore-4.c: New file.
* gcc.target/riscv/save-restore-5.c: New file.
* gcc.target/riscv/save-restore-6.c: New file.
* gcc.target/riscv/save-restore-7.c: New file.
* gcc.target/riscv/save-restore-8.c: New file.
From-SVN: r277527
2019-10-28 Richard Biener <rguenther@suse.de>
PR tree-optimization/92241
* tree-vect-loop.c (vect_fixup_scalar_cycles_with_patterns): When
we failed to update the reduction index do not use the pattern
stmts for the reduction chain.
(vectorizable_reduction): When the reduction chain is corrupt,
fail.
* tree-vect-patterns.c (vect_mark_pattern_stmts): Stop when we
fail to update the reduction chain.
* gcc.dg/torture/pr92241.c: New testcase.
From-SVN: r277516
https://gcc.gnu.org/ml/gcc-patches/2019-10/msg01962.html
We use an eof_token global variable as a sentinel on a deferred parse
(such as in-class function definitions, or default args). This
complicates retrieving the next token in certain places. Because such
deferred parses always nest properly and complete before the outer
lexer resumes, we can simply morph the token after the deferred buffer
into a CPP_EOF token and restore it afterwards. I finally got around
to implementing that with this patch.
One complication is that we have to change the discriminator for when
the token's value is a tree. We can't look at the token's type
because it might have been overwritten. I add a bool flag to the
token (there are several spare bits) and use that. This simplifies
the discriminator, because we just check a single bit rather than a
set of token types.
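A minimal sketch of the overlay idea, using the saved_type and
saved_keyword fields named in the ChangeLog below; the helper names
and exact bookkeeping here are hypothetical, as the real patch folds
this into cp_lexer_new_from_tokens and cp_lexer_destroy:
```
/* Hypothetical helpers sketching the trick: clobber the token just
   past a deferred range so it reads as EOF, and put the real values
   back when the nested lexer goes away.  */
static void
overlay_eof_token (cp_lexer *lexer, cp_token *next)
{
  lexer->saved_type = next->type;	/* Remember what was there.  */
  lexer->saved_keyword = next->keyword;
  next->type = CPP_EOF;			/* The nested parse stops here.  */
  next->keyword = RID_MAX;
}

static void
remove_eof_overlay (cp_lexer *lexer, cp_token *next)
{
  next->type = lexer->saved_type;	/* Restore the real token.  */
  next->keyword = lexer->saved_keyword;
}
```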
* parser.h (struct cp_token): Drop {ENUM,BOOL}_BITFIELD C-ism.
Add tree_check_p flag, use as nested union discriminator.
(struct cp_lexer): Add saved_type & saved_keyword fields.
* parser.c (eof_token): Delete.
(cp_lexer_new_main): Always init last_token to last token of
buffer.
(cp_lexer_new_from_tokens): Overlay EOF token at end of range.
(cp_lexer_destroy): Restore token under the EOF.
(cp_lexer_previous_token_position): No check for eof_token here.
(cp_lexer_get_preprocessor_token): Clear tree_check_p.
(cp_lexer_peek_nth_token): Check CPP_EOF not eof_token.
(cp_lexer_consume_token): Assert not CPP_EOF, no check for
eof_token.
(cp_lexer_purge_token): Likewise.
(cp_lexer_purge_tokens_after): No check for EOF token.
(cp_parser_nested_name_specifier, cp_parser_decltype)
(cp_parser_template_id): Set tree_check_p.
From-SVN: r277514
2019-10-28 Richard Biener <rguenther@suse.de>
* tree-vect-loop.c (vect_create_epilog_for_reduction): Use
STMT_VINFO_REDUC_IDX from the actual stmt.
(vect_transform_reduction): Likewise.
(vectorizable_reduction): Compute the reduction chain length,
do not recompute the reduction operand index. Remove no longer
necessary restriction for condition reduction chains.
From-SVN: r277513
2019-10-28 Richard Biener <rguenther@suse.de>
PR c/92249
* gimple-parser.c (c_parser_parse_gimple_body): Make
current_bb the entry block initially, to make it easier to
recover from errors.
(c_parser_gimple_compound_statement): Adjust.
From-SVN: r277512
where LIM interacts with foo10. On 64-bit LIM doesn't make the
problematic change for whatever reason, but it seems better to disable
LIM altogether, which requires a minor change in the testcase.
From-SVN: r277508
r266734 introduced a new instance of the jump threading pass in order
to take advantage of opportunities that combine opens up. It was
perceived back then that it was beneficial to delay it until after
reload, since that might produce even more such opportunities.
Unfortunately jump threading interferes with hot/cold partitioning. In
the code from PR92007, it converts the following
+-------------------------- 2/HOT ------------------------+
| |
v v
3/HOT --> 5/HOT --> 8/HOT --> 11/COLD --> 6/HOT --EH--> 16/HOT
| ^
| |
+-------------------------------+
into the following:
+---------------------- 2/HOT ------------------+
| |
v v
3/HOT --> 8/HOT --> 11/COLD --> 6/COLD --EH--> 16/HOT
This makes hot bb 6 dominated by cold bb 11, and because of this
fixup_partitions makes bb 6 cold as well, which in turn makes EH edge
6->16 a crossing one. Not only can't we have crossing EH edges, we are
also not allowed to introduce new crossing edges after reload in
general, since it might require extra registers on some targets.
Therefore, move the jump threading pass between combine and hot/cold
partitioning. Building SPEC 2006 and SPEC 2017 with the old and the new
code indicates that:
* When doing jump threading right after reload, 3889 edges are threaded.
* When doing jump threading right after combine, 3918 edges are
threaded.
This means this change will not introduce performance regressions.
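A plausible shape for the new assertion described in the ChangeLog
below (a sketch assuming reload_completed and crtl->has_bb_partition
are the relevant flags; the patch's exact condition may differ):
```
/* Hypothetical helper; in the patch this is an assertion inside
   thread_jump in cfgcleanup.c.  Once the function has been split
   into hot and cold partitions, threading after reload could create
   new crossing edges, so it must not be called.  */
static void
assert_thread_jump_allowed (void)
{
  gcc_checking_assert (!(reload_completed && crtl->has_bb_partition));
}
```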
gcc/ChangeLog:
2019-10-28 Ilya Leoshkevich <iii@linux.ibm.com>
PR rtl-optimization/92007
* cfgcleanup.c (thread_jump): Add an assertion that we don't
call it after reload if hot/cold partitioning has been done.
(class pass_postreload_jump): Rename to
pass_jump_after_combine.
(make_pass_postreload_jump): Rename to
make_pass_jump_after_combine.
* passes.def (pass_postreload_jump): Move before reload, rename
to pass_jump_after_combine.
* tree-pass.h (make_pass_postreload_jump): Rename to
make_pass_jump_after_combine.
gcc/testsuite/ChangeLog:
2019-10-28 Ilya Leoshkevich <iii@linux.ibm.com>
PR rtl-optimization/92007
* g++.dg/opt/pr92007.C: New test (from Arseny Solokha).
From-SVN: r277507
In PR88760 there was some discussion about improving or tuning the
unroller for various targets. There was agreement to start by enabling
the unroller for small loops at -O2. We see a performance improvement
(~10%) for the code below:
```
subroutine foo (i, i1, block)
integer :: i, i1
integer :: block(9, 9, 9)
block(i:9,1,i1) = block(i:9,1,i1) - 10
end subroutine foo
```
This kind of code occurs a few times in the exchange2 benchmark.
Similar C code:
```
for (i = 0; i < n; i++)
arr[i] = arr[i] - 10;
```
On powerpcle at -O2, enabling -funroll-loops and limiting
PARAM_MAX_UNROLL_TIMES to 2 and PARAM_MAX_UNROLLED_INSNS to 20 gives a
>2% overall improvement on SPEC2017.
This patch enables this only for rs6000, where we see a visible
performance improvement.
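A plausible shape for the new entry in
rs6000_option_optimization_table (a sketch only; the real table has
many other rows and the exact OPT_LEVELS_* enumerator may differ):
```
/* Hypothetical excerpt: enable -funroll-loops at -O2 and above for
   rs6000.  The zero-filled row is the customary terminator.  */
static const struct default_options rs6000_option_optimization_table[] =
  {
    { OPT_LEVELS_2_PLUS, OPT_funroll_loops, NULL, 1 },
    { OPT_LEVELS_NONE, 0, NULL, 0 }
  };
```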
gcc/
2019-10-25 Jiufu Guo <guojiufu@linux.ibm.com>
PR tree-optimization/88760
* config/rs6000/rs6000-common.c (rs6000_option_optimization_table):
Enable -funroll-loops for -O2 and above.
* config/rs6000/rs6000.c (rs6000_option_override_internal): Set
PARAM_MAX_UNROLL_TIMES to 2 and PARAM_MAX_UNROLLED_INSNS to 20, and
do not turn on web and rnreg implicitly, if the unroller is not
explicitly enabled.
gcc/testsuite/
2019-10-25 Jiufu Guo <guojiufu@linux.ibm.com>
PR tree-optimization/88760
* gcc.target/powerpc/small-loop-unroll.c: New test.
* c-c++-common/tsan/thread_leak2.c: Update test.
* gcc.dg/pr59643.c: Update test.
* gcc.target/powerpc/loop_align.c: Update test.
* gcc.target/powerpc/ppc-fma-1.c: Update test.
* gcc.target/powerpc/ppc-fma-2.c: Update test.
* gcc.target/powerpc/ppc-fma-3.c: Update test.
* gcc.target/powerpc/ppc-fma-4.c: Update test.
* gcc.target/powerpc/pr78604.c: Update test.
From-SVN: r277501
2019-10-27 Paul Thomas <pault@gcc.gnu.org>
PR fortran/86248
* resolve.c (flag_fn_result_spec): Correct a typo before the
function declaration.
* trans-decl.c (gfc_sym_identifier): Boost the length of 'name'
to allow for all variants. Simplify the code by using a pointer
to the symbol's proc_name and taking the return out of each of
the conditional branches. Allow symbols with fn_result_spec set
that do not come from a procedure namespace and have a module
name to go through the non-fn_result_spec branch.
2019-10-27 Paul Thomas <pault@gcc.gnu.org>
PR fortran/86248
* gfortran.dg/char_result_19.f90: New test.
* gfortran.dg/char_result_mod_19.f90: Module for the new test.
From-SVN: r277487
This comment cut&pasto fix was split out of another patch I'm about to
contribute, as the current version of the patch no longer touches cgraph
data structures.
for gcc/ChangeLog
* cgraph.c (cgraph_node::rtl_info): Fix cut&pasto in comment.
* cgraph.h (cgraph_node::rtl_info): Likewise.
From-SVN: r277485
* ipa-cp.c (propagate_constants_across_call): If args are not available
just drop everything to varying.
(find_aggregate_values_for_callers_subset): Watch for missing
edge summary.
(find_more_scalar_values_for_callers_subset): Likewise.
* ipa-prop.c (ipa_compute_jump_functions_for_edge,
update_jump_functions_after_inlining, propagate_controlled_uses):
Watch for missing summaries.
(ipa_propagate_indirect_call_infos): Remove summary after propagation
is finished.
(ipa_write_node_info): Watch for missing summaries.
(ipa_read_edge_info): Create new ref.
(ipa_edge_args_sum_t): Add remove.
(IPA_EDGE_REF_GET_CREATE): New macro.
* ipa-fnsummary.c (evaluate_properties_for_edge): Watch for missing
edge summary.
(remap_edge_change_prob): Likewise.
From-SVN: r277484
When we have -fstack-limit-symbol with sysv we can end up with a
non-existent instruction (you cannot add an immediate to register 0).
Fix this by using register 11 instead. It might already be used for
something else, though, so save and restore its value around this. In
optimizing compiles these extra moves are usually removed again: the
restore by cprop_hardreg, and then the save by rtl_dce.
PR target/91289
* config/rs6000/rs6000-logue.c (rs6000_emit_allocate_stack): Don't add
an immediate to r0; use r11 instead. Save r11 to r0 and restore it
around this.
From-SVN: r277472
All of these special member functions do exactly what the compiler
would do anyway. By defining them as defaulted for C++11 and later we
prevent move constructors and move assignment operators from being
defined (which is consistent with the previous semantics).
Also move default init of the input_iterator_wrapper members from the
derived constructor to the protected base constructor.
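A minimal sketch of the pattern (a hypothetical simplified wrapper,
not the actual testsuite class): a user-declared copy constructor,
even a defaulted one, still suppresses the implicit move operations,
so defaulting preserves the old no-moves semantics while letting the
compiler generate the obvious definitions.
```
// Hypothetical simplified wrapper illustrating the change.
template<typename T>
struct iter_wrapper
{
  T* ptr;

#if __cplusplus >= 201103L
  // Defaulted: the compiler writes the member-wise copy, but the
  // user-declared copy operations still inhibit implicit moves.
  iter_wrapper(const iter_wrapper&) = default;
  iter_wrapper& operator=(const iter_wrapper&) = default;
#else
  // Pre-C++11 equivalent, doing what the compiler would do anyway.
  iter_wrapper(const iter_wrapper& w) : ptr(w.ptr) { }
  iter_wrapper& operator=(const iter_wrapper& w)
  { ptr = w.ptr; return *this; }
#endif
};
```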
* testsuite/util/testsuite_iterators.h (output_iterator_wrapper)
(input_iterator_wrapper, forward_iterator_wrapper)
(bidirectional_iterator_wrapper, random_access_iterator_wrapper): Remove
user-provided copy constructors and copy assignment operators so they
are defined implicitly.
(input_iterator_wrapper): Initialize members in default constructor.
(forward_iterator_wrapper): Remove assignments to members of base.
From-SVN: r277459
The new constexpr destructor on std::allocator breaks compilation with
Clang in C++2a mode. This change makes it constexpr only if the
compiler supports the P0784R7 features.
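A minimal standalone sketch of the guard (not the real
<bits/allocator.h> code, which integrates this with the existing
macros):
```
// Standalone illustration: the destructor is constexpr only when the
// compiler advertises P0784R7 (constexpr dynamic allocation).
struct allocator_like
{
#ifdef __cpp_constexpr_dynamic_alloc
  constexpr ~allocator_like() { }
#else
  ~allocator_like() { }
#endif
};
```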
* include/bits/allocator.h: Check __cpp_constexpr_dynamic_alloc
before making the std::allocator destructor constexpr.
* testsuite/20_util/allocator/requirements/constexpr.cc: New test.
From-SVN: r277458
2019-10-25 Cesar Philippidis <cesar@codesourcery.com>
Tobias Burnus <tobias@codesourcery.com>
gcc/fortran/
* openmp.c (gfc_match_omp_map_clause): Add and pass allow_commons
argument.
(gfc_match_omp_clauses): Update calls to permit common blocks for
OpenACC's copy/copyin/copyout, create/delete, host,
pcopy/pcopy_in/pcopy_out, present_or_copy, present_or_copy_in,
present_or_copy_out, present_or_create and self.
gcc/
* gimplify.c (oacc_default_clause): Privatize fortran common blocks.
(omp_notice_variable): Defer the expansion of DECL_VALUE_EXPR for
common block decls.
gcc/testsuite/
* gfortran.dg/goacc/common-block-1.f90: New test.
* gfortran.dg/goacc/common-block-2.f90: New test.
* gfortran.dg/goacc/common-block-3.f90: New test.
libgomp/
* testsuite/libgomp.oacc-fortran/common-block-1.f90: New test.
* testsuite/libgomp.oacc-fortran/common-block-2.f90: New test.
* testsuite/libgomp.oacc-fortran/common-block-3.f90: New test.
Reviewed-by: Thomas Schwinge <thomas@codesourcery.com>
Co-Authored-By: Tobias Burnus <tobias@codesourcery.com>
From-SVN: r277451
This fixes a regression when using Clang.
* include/bits/range_cmp.h: Check __cpp_lib_concepts before defining
concepts. Fix comment.
From-SVN: r277449
2019-10-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/92222
* tree-vect-slp.c (_slp_oprnd_info::first_pattern): Remove.
(_slp_oprnd_info::second_pattern): Likewise.
(_slp_oprnd_info::any_pattern): New.
(vect_create_oprnd_info): Adjust.
(vect_get_and_check_slp_defs): Compute whether any stmt is
in a pattern.
(vect_build_slp_tree_2): Avoid building up a node from scalars
if any of the operand defs, not just the first, is in a pattern.
* gcc.dg/torture/pr92222.c: New testcase.
From-SVN: r277448
2019-10-25 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (vect_get_and_check_slp_defs): Only fail
swapping if we actually have to modify the IL on a shared stmt.
(vect_build_slp_tree_2): Never fail swapping on shared stmts
because we no longer modify the IL.
From-SVN: r277446
Unwanted unrolling meant that we had more single-precision FADDAs
than expected.
2019-10-25 Richard Sandiford <richard.sandiford@arm.com>
gcc/testsuite/
* gcc.target/aarch64/sve/reduc_strict_3.c (double_reduc1): Prevent
the loop from being unrolled.
From-SVN: r277442
Recent target-independent patches mean that several SVE tests
now produce the code that we'd originally wanted them to produce.
Really nice to see :-)
This patch therefore updates the expected baseline, so that hopefully
we don't regress from this point in future.
2019-10-25 Richard Sandiford <richard.sandiford@arm.com>
gcc/testsuite/
* gcc.target/aarch64/sve/loop_add_5.c: Remove XFAILs for tests
that now pass.
* gcc.target/aarch64/sve/reduc_1.c: Likewise.
* gcc.target/aarch64/sve/reduc_2.c: Likewise.
* gcc.target/aarch64/sve/reduc_5.c: Likewise.
* gcc.target/aarch64/sve/reduc_8.c: Likewise.
* gcc.target/aarch64/sve/slp_13.c: Likewise.
* gcc.target/aarch64/sve/slp_5.c: Likewise. Update expected
WHILELO counts.
* gcc.target/aarch64/sve/slp_7.c: Likewise.
From-SVN: r277441
Now that vectorizable_operation vectorises most loop stmts involved
in a reduction, it needs to be aware of reductions in fully-masked loops.
The LOOP_VINFO_CAN_FULLY_MASK_P parts of vectorizable_reduction now only
apply to cases that use vect_transform_reduction.
This new way of doing things is definitely an improvement for SVE though,
since it means we can lift the old restriction of not using fully-masked
loops for reduction chains.
2019-10-25 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop.c (vectorizable_reduction): Restrict the
LOOP_VINFO_CAN_FULLY_MASK_P handling to cases that will be
handled by vect_transform_reduction. Allow fully-masked loops
to be used with reduction chains.
* tree-vect-stmts.c (vectorizable_operation): Handle reduction
operations in fully-masked loops.
(vectorizable_condition): Reject EXTRACT_LAST_REDUCTION
operations in fully-masked loops.
gcc/testsuite/
* gcc.dg/vect/pr65947-1.c: No longer expect doubled dump lines
for FOLD_EXTRACT_LAST reductions.
* gcc.dg/vect/pr65947-2.c: Likewise.
* gcc.dg/vect/pr65947-3.c: Likewise.
* gcc.dg/vect/pr65947-4.c: Likewise.
* gcc.dg/vect/pr65947-5.c: Likewise.
* gcc.dg/vect/pr65947-6.c: Likewise.
* gcc.dg/vect/pr65947-9.c: Likewise.
* gcc.dg/vect/pr65947-10.c: Likewise.
* gcc.dg/vect/pr65947-12.c: Likewise.
* gcc.dg/vect/pr65947-13.c: Likewise.
* gcc.dg/vect/pr65947-14.c: Likewise.
* gcc.dg/vect/pr80631-1.c: Likewise.
* gcc.dg/vect/pr80631-2.c: Likewise.
* gcc.dg/vect/vect-cond-reduc-3.c: Likewise.
* gcc.dg/vect/vect-cond-reduc-4.c: Likewise.
From-SVN: r277438
2019-10-25 Richard Biener <rguenther@suse.de>
* tree-vect-loop.c (vectorizable_reduction): Verify that
STMT_VINFO_REDUC_IDX on the stmts to be vectorized is set up
correctly.
* tree-vect-patterns.c (vect_mark_pattern_stmts): Transfer
STMT_VINFO_REDUC_IDX from the original stmts to the pattern
stmts.
From-SVN: r277437