Commit Graph

182858 Commits

Marek Polacek
814299a9d4 c++: -Wmissing-field-initializers in unevaluated ctx [PR98620]
This PR wants us not to warn about missing field initializers when
the code in question takes place in decltype and similar.  Fixed
thus.
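
For illustration only (this is a hypothetical reduction, not the committed
g++.dg/warn testcase), the change is about code of this shape:

struct S { int a, b; };

// The initializer inside decltype omits b, but it appears only in an
// unevaluated context, so -Wmissing-field-initializers no longer warns here.
using T = decltype(S{1});

// An evaluated initializer that omits b still warns as before.
S s = {1};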

gcc/cp/ChangeLog:

	PR c++/98620
	* typeck2.c (process_init_constructor_record): Don't emit
	-Wmissing-field-initializers warnings in unevaluated contexts.

gcc/testsuite/ChangeLog:

	PR c++/98620
	* g++.dg/warn/Wmissing-field-initializers-2.C: New test.
2021-01-11 22:31:39 -05:00
liuhongt
240f0a490d Delete dead code in ix86_expand_sse_comi.
d->flag is always 0 for builtins located in
BDESC_FIRST (comi,COMI,...)
...
BDESC_END (COMI, PCMPESTR)

gcc/ChangeLog:
	PR target/98612
	* config/i386/i386-builtins.h (BUILTIN_DESC_SWAP_OPERANDS):
	Deleted.
	* config/i386/i386-expand.c (ix86_expand_sse_comi): Delete
	dead code.
2021-01-12 11:17:29 +08:00
Alexandre Oliva
640296c367 make FOR_EACH_IMM_USE_STMT safe for early exits
Use a dtor to automatically remove ITER from IMM_USE list in
FOR_EACH_IMM_USE_STMT.
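
As a self-contained C++ analogue of the idea (this is not GCC code; the real
helper is auto_end_imm_use_stmt_traverse and operates on the immediate-use
list), a destructor-based guard is what makes early loop exits safe:

#include <cstdio>

// Stand-in for an iterator that links itself into a list and must be
// unlinked no matter how the traversal loop is left.
struct iter
{
  bool linked = false;
};

// Analogue of auto_end_imm_use_stmt_traverse: the destructor unlinks the
// iterator, so a plain "break" or "return" inside the loop needs no
// special macro any more.
struct auto_end_traverse
{
  iter &it;
  ~auto_end_traverse () { it.linked = false; }
};

int
main ()
{
  iter it;
  {
    it.linked = true;
    auto_end_traverse guard = { it };
    for (int i = 0; i < 10; i++)
      if (i == 3)
        break;  // previously this would have needed BREAK_FROM_IMM_USE_STMT
  }
  std::printf ("linked after loop: %d\n", (int) it.linked);
  return 0;
}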


for  gcc/ChangeLog

	* ssa-iterators.h (end_imm_use_stmt_traverse): Forward
	declare.
	(auto_end_imm_use_stmt_traverse): New struct.
	(FOR_EACH_IMM_USE_STMT): Use it.
	(BREAK_FROM_IMM_USE_STMT, RETURN_FROM_IMM_USE_STMT): Remove,
	along with uses...
	* gimple-ssa-strength-reduction.c: ... here, ...
	* graphite-scop-detection.c: ... here, ...
	* ipa-modref.c, ipa-pure-const.c, ipa-sra.c: ... here, ...
	* tree-predcom.c, tree-ssa-ccp.c: ... here, ...
	* tree-ssa-dce.c, tree-ssa-dse.c: ... here, ...
	* tree-ssa-loop-ivopts.c, tree-ssa-math-opts.c: ... here, ...
	* tree-ssa-phiprop.c, tree-ssa.c: ... here, ...
	* tree-vect-slp.c: ... and here, ...
	* doc/tree-ssa.texi: ... and the example here.
2021-01-11 23:37:59 -03:00
David Malcolm
ab88f36072 analyzer: fix ICE merging dereferencing unknown ptrs [PR98628]
gcc/analyzer/ChangeLog:
	PR analyzer/98628
	* store.cc (binding_cluster::make_unknown_relative_to): Don't mark
	dereferenced unknown pointers as having escaped.

gcc/testsuite/ChangeLog:
	PR analyzer/98628
	* gcc.dg/analyzer/pr98628.c: New test.
2021-01-11 20:23:41 -05:00
GCC Administrator
67fbb7f0fd Daily bump. 2021-01-12 00:16:22 +00:00
Richard Sandiford
a958b2fc6d aarch64: Add support for unpacked SVE ASRD
This patch adds support for both conditional and unconditional unpacked
ASRD.  This meant adding a new define_insn for the unconditional form,
instead of reusing the conditional instructions.  It also meant
extending the current conditional patterns to support merging with
any independent value, not just zero.
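
As a hedged illustration (actual code generation depends on the target flags
and the vectorizer; this is not one of the new testcases), ASRD implements
signed division by a power of two, so a loop of roughly this shape is what
the unpacked patterns can now cover:

// Signed division by 2^k; on SVE this can use ASRD, and with this patch
// also when the narrow elements live unpacked in wider containers.
void
sdiv_pow2 (short *x, int n)
{
  for (int i = 0; i < n; i++)
    x[i] /= 4;
}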

gcc/
	* config/aarch64/aarch64-sve.md (sdiv_pow2<mode>3): Extend from
	SVE_FULL_I to SVE_I.  Generate an UNSPEC_PRED_X.
	(*sdiv_pow2<mode>3): New pattern.
	(@cond_<sve_int_op><mode>): Extend from SVE_FULL_I to SVE_I.
	Wrap the ASRD in an UNSPEC_PRED_X.
	(*cond_<sve_int_op><mode>_2): Likewise.  Replace the UNSPEC_PRED_X
	predicate with a constant PTRUE, if it isn't already.
	(*cond_<sve_int_op><mode>_z): Replace with...
	(*cond_<sve_int_op><mode>_any): ...this new pattern.

gcc/testsuite/
	* gcc.target/aarch64/sve/asrdiv_4.c: New test.
	* gcc.target/aarch64/sve/cond_asrd_1.c: Likewise.
	* gcc.target/aarch64/sve/cond_asrd_1_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_asrd_2.c: Likewise.
	* gcc.target/aarch64/sve/cond_asrd_2_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_asrd_3.c: Likewise.
	* gcc.target/aarch64/sve/cond_asrd_3_run.c: Likewise.
2021-01-11 18:03:26 +00:00
Richard Sandiford
37426e0f06 aarch64: Add support for unpacked SVE conditional BIC
This patch adds support for unpacked conditional BIC.  The type suffix
could be taken from the element size or the container size, so the
patch continues to use the element size.  This is consistent with
the existing support for unconditional BIC.

gcc/
	* config/aarch64/aarch64-sve.md (*cond_bic<mode>_2): Extend from
	SVE_FULL_I to SVE_I.
	(*cond_bic<mode>_any): Likewise.

gcc/testsuite/
	* g++.target/aarch64/sve/cond_bic_1.C: New test.
	* g++.target/aarch64/sve/cond_bic_2.C: Likewise.
	* g++.target/aarch64/sve/cond_bic_3.C: Likewise.
	* g++.target/aarch64/sve/cond_bic_4.C: Likewise.
2021-01-11 18:03:25 +00:00
Richard Sandiford
7446de5a2a aarch64: Add support for unpacked SVE MULH
This patch extends the SMULH and UMULH support to unpacked vectors.
The type suffix must be taken from the element size rather than the
container size.

The main use of these patterns is to support division and modulus
by a constant.  The conditional forms would be hard to trigger from
non-ACLE code, and ACLE code needs fully-packed vectors only.
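
As a hedged illustration of that main use (not the committed
mul_highpart_3.c test), dividing by a non-power-of-two constant is lowered
to a highpart-multiply sequence:

// Division by a constant is implemented via a multiply-high sequence
// rather than a hardware divide, which is where SMULH/UMULH come in.
void
div7 (int *x, unsigned *y, int n)
{
  for (int i = 0; i < n; i++)
    {
      x[i] /= 7;  // signed highpart multiply
      y[i] /= 7;  // unsigned highpart multiply
    }
}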

gcc/
	* config/aarch64/aarch64-sve.md (<su>mul<mode>3_highpart)
	(@aarch64_pred_<MUL_HIGHPART:optab><mode>): Extend from SVE_FULL_I
	to SVE_I.

gcc/testsuite/
	* gcc.target/aarch64/sve/mul_highpart_3.c: New test.
2021-01-11 18:03:24 +00:00
Richard Sandiford
907ea37955 aarch64: Add support for unpacked SVE ABD
This patch adds support for unpacked SVE SABD and UABD.
It also rewrites the patterns so that they match as combine
patterns without the need for REG_EQUAL notes.  Finally,
there was no pattern for merging with the second input,
which can be handled by reversing the operands.

The type suffix needs to be taken from the element size rather
than the container size.

gcc/
	* config/aarch64/aarch64-sve.md (<su>abd<mode>_3): Extend from
	SVE_FULL_I to SVE_I.
	(*aarch64_cond_<su>abd<mode>_2): Likewise.
	(*aarch64_cond_<su>abd<mode>_any): Likewise.
	(@aarch64_pred_<su>abd<mode>): Likewise.  Use UNSPEC_PRED_X
	for the max and min but not for the minus.
	(*aarch64_cond_<su>abd<mode>_3): New pattern.

gcc/testsuite/
	* g++.target/aarch64/sve/abd_1.C: New test.
	* g++.target/aarch64/sve/cond_abd_1.C: Likewise.
	* g++.target/aarch64/sve/cond_abd_2.C: Likewise.
	* g++.target/aarch64/sve/cond_abd_3.C: Likewise.
	* g++.target/aarch64/sve/cond_abd_4.C: Likewise.
2021-01-11 18:03:23 +00:00
Richard Sandiford
3f8b0bba03 aarch64: Add support for unpacked SVE ADR
This patch extends the ADR patterns to handle unpacked vectors.
They would work with both elements and containers, but since
the instructions only support .s and .d, we get more coverage
by using containers.

gcc/
	* config/aarch64/iterators.md (SVE_24I): New iterator.
	* config/aarch64/aarch64-sve.md (*aarch64_adr<mode>_shift): Extend from
	SVE_FULL_SDI to SVE_24I.  Use containers rather than elements.

gcc/testsuite/
	* gcc.target/aarch64/sve/adr_6.c: New test.
2021-01-11 18:03:23 +00:00
Richard Sandiford
ab76e3db6b aarch64: Add general unpacked SVE conditional binary arithmetic
This patch adds support for conditional binary ADD, SUB, MUL, SMAX,
UMAX, SMIN, UMIN, LSL, LSR, ASR, AND, ORR and EOR.  It's not really
possible to split it up further given how the patterns are written.

Min, max and right-shift need the element size rather than the container
size.  The others would work with both, although MUL should be more
efficient when applied to elements instead of containers.

gcc/
	* config/aarch64/aarch64-sve.md (@cond_<SVE_INT_BINARY:optab><mode>)
	(*cond_<SVE_INT_BINARY:optab><mode>_2): Extend from SVE_FULL_I
	to SVE_I.
	(*cond_<SVE_INT_BINARY:optab><mode>_3): Likewise.
	(*cond_<SVE_INT_BINARY:optab><mode>_any): Likewise.
	(*cond_<SVE_INT_BINARY:optab><mode>_2_const): Likewise.
	(*cond_<SVE_INT_BINARY:optab><mode>_any_const): Likewise.

gcc/testsuite/
	* g++.target/aarch64/sve/cond_arith_1.C: New test.
	* g++.target/aarch64/sve/cond_arith_2.C: Likewise.
	* g++.target/aarch64/sve/cond_arith_3.C: Likewise.
	* g++.target/aarch64/sve/cond_arith_4.C: Likewise.
	* g++.target/aarch64/sve/cond_shift_1.C: New test.
	* g++.target/aarch64/sve/cond_shift_2.C: Likewise.
	* g++.target/aarch64/sve/cond_shift_3.C: Likewise.
	* g++.target/aarch64/sve/cond_shift_4.C: Likewise.
2021-01-11 18:03:22 +00:00
Richard Sandiford
48c7f5b881 aarch64: Add support for unpacked SVE mult, max and min
This patch makes the SVE_INT_BINARY_IMM patterns support
unpacked arithmetic, covering MUL, SMAX, SMIN, UMAX and UMIN.
For min and max, the type suffix must be taken from the element
size rather than the container size.

The XFAILs are due to PR98602.

gcc/
	* config/aarch64/aarch64-sve.md (<SVE_INT_BINARY_IMM:optab><mode>3)
	(@aarch64_pred_<SVE_INT_BINARY_IMM:optab><mode>)
	(*post_ra_<SVE_INT_BINARY_IMM:optab><mode>3): Extend from SVE_FULL_I
	to SVE_I.

gcc/testsuite/
	PR testsuite/98602
	* g++.target/aarch64/sve/max_1.C: New test.
	* g++.target/aarch64/sve/min_1.C: Likewise.
	* gcc.target/aarch64/sve/mul_2.c: Likewise.
2021-01-11 18:03:21 +00:00
Richard Sandiford
b81fbfe1eb aarch64: Add support for unpacked SVE shifts
This patch adds support for unpacked SVE LSL, ASR and LSR.
For right shifts, the type suffix needs to be taken from the
element size rather than the container size.

gcc/
	* config/aarch64/aarch64-sve.md (<ASHIFT:optab><mode>3)
	(v<ASHIFT:optab><mode>3, @aarch64_pred_<optab><mode>)
	(*post_ra_v<ASHIFT:optab><mode>3): Extend from SVE_FULL_I to SVE_I.

gcc/testsuite/
	* gcc.target/aarch64/sve/shift_2.c: New test.
2021-01-11 18:03:20 +00:00
Martin Liska
cbe9758ff4 Properly release symtab::m_clones.
gcc/ChangeLog:

	PR jit/98615
	* symtab-clones.h (clone_info::release): Release
	symtab::m_clones with ggc_delete as it's a GGC memory.
2021-01-11 18:15:06 +01:00
Jakub Jelinek
3dd0d3ee1d c++, abi: Fix abi_tag attribute handling [PR98481]
In GCC10 cp_walk_subtrees has been changed to walk template arguments.
As the following testcase shows, that changed the mangling of some functions.
I believe the previous behavior, where find_abi_tags_r doesn't recurse into
template args, was the correct one, but setting *walk_subtrees = 0
for the types and handling the types subtree walking manually in
find_abi_tags_r looks too hard: there are a lot of subtrees and details about
what should and shouldn't be walked, both in tree.c (walk_type_fields there,
which is static) and in cp_walk_subtrees itself.

The following patch abuses the fact that *walk_subtrees is an int to
tell cp_walk_subtrees it shouldn't walk the template args.
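
For context only (this is not the committed abi-tag24.C testcase, which
specifically exercises tags reached through template arguments), the
attribute works like this:

// Types can carry an ABI tag; functions whose signature involves such a
// type get the tag appended to their mangled name.
struct [[gnu::abi_tag ("v2")]] S { };

S get ();  // the mangled name of get() carries the "v2" tag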

Co-authored-by: Jason Merrill <jason@redhat.com>

gcc/cp/ChangeLog:

	PR c++/98481
	* class.c (find_abi_tags_r): Set *walk_subtrees to 2 instead of 1
	for types.
	(mark_abi_tags_r): Likewise.
	* decl2.c (min_vis_r): Likewise.
	* tree.c (cp_walk_subtrees): If *walk_subtrees_p is 2, look through
	typedefs.

gcc/testsuite/ChangeLog:

	PR c++/98481
	* g++.dg/abi/abi-tag24.C: New test.
2021-01-11 11:12:48 -05:00
Matthias Klose
8c09b788a9 Make the serialized link target more verbose
2020-12-07  Matthias Klose  <doko@ubuntu.com>

	* Makefile.in (LINK_PROGRESS): Show the link target.
2021-01-11 14:51:35 +00:00
Martin Liska
3b25e83536 Port update-copyright.py to Python3
contrib/ChangeLog:

	* update-copyright.py: Port to python3 by guessing encoding
	(first utf8, then iso8859). Add 2 more ignores: .png and .pyc.
2021-01-11 14:08:50 +01:00
Richard Biener
84684e0f78 tree-optimization/91403 - avoid excessive code-generation
The vectorizer, for large permuted grouped loads, generates
inefficient intermediate code (cleaned up only later) that runs
into complexity issues in SCEV analysis and elsewhere.  For the
non-single-element interleaving case we already put a hard limit
in place; this applies the same limit to the missing case.

2021-01-11  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/91403
	* tree-vect-data-refs.c (vect_analyze_group_access_1): Cap
	single-element interleaving group size at 4096 elements.

	* gcc.dg/vect/pr91403.c: New testcase.
2021-01-11 13:38:18 +01:00
Bernd Edlinger
6ebf79fcd4 testsuite: Fix test failures from outputs.exp [PR98225]
The .ld1_args file is not created when HAVE_GNU_LD is false.
The ltrans0.ltrans_arg file is not created when the make jobserver
is available, so remove the MAKEFLAGS variable.
Add an exception for *.gcc_args files similar to the
exception for *.cdtor.* files.
Limit both exceptions to targets that define EH_FRAME_THROUGH_COLLECT2.
That means that although the test case does not use C++ constructors
or destructors, it is still using dwarf2 frame info.

2021-01-11  Bernd Edlinger  <bernd.edlinger@hotmail.de>

	PR testsuite/98225
	* gcc.misc-tests/outputs.exp: Unset MAKEFLAGS.
	Expect .ld1_args only when GNU LD is used.
	Add an exception for *.gcc_args files.
2021-01-11 13:28:29 +01:00
Richard Biener
04bff1bbfc tree-optimization/98526 - fix vectorizer reduction cost
This fixes a double-counting in the reduction cost when vectorizing
the reduction through the regular vectorizable_* functions.

2021-01-11  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/98526
	* tree-vect-loop.c (vect_model_reduction_cost): Remove costing
	of the actual reduction op for the regular case.
	(vectorizable_reduction): Cost the stmts
	vect_transform_reduction produces here.
2021-01-11 12:50:59 +01:00
Iain Buclaw
928e96bbe9 d: Remove visibility and lookup deprecation
The deprecation phase for access checks is finished.

The `-ftransition=import` and `-ftransition=checkimports` switches no
longer have an effect and are now removed.  Symbols that are not visible
in a particular scope will no longer be found by the compiler.

Reviewed-on: https://github.com/dlang/dmd/pull/12124

gcc/d/ChangeLog:

	* dmd/MERGE: Merge upstream dmd 2d3d13748.
	* d-lang.cc (d_handle_option): Remove OPT_ftransition_checkimports and
	OPT_ftransition_import.
	* gdc.texi (Warnings): Remove documentation for -ftransition=import
	and -ftransition=checkimports.
	* lang.opt (ftransition=checkimports): Remove.
	(ftransition=import): Remove.
2021-01-11 12:21:03 +01:00
Andreas Krebbel
300a3ce5c5 tree-optimization/98221 - fix wrong unpack operation used for big-endian
The vec-abi-varargs-1.c testcase on IBM Z currently fails.

While adding an SI mode vector to a DI mode vector the first is unpacked using:

  _28 = BIT_INSERT_EXPR <{ 0, 0, 0, 0 }, _2, 0>;
  _34 = [vec_unpack_lo_expr] _28;

However, on big-endian targets lo refers to the right-hand side of the vector - in this case the zeroes.

2021-01-11  Andreas Krebbel  <krebbel@linux.ibm.com>

	* tree-ssa-forwprop.c (simplify_vector_constructor): For
	big-endian, use UNPACK[_FLOAT]_HI.
2021-01-11 11:46:31 +01:00
Tamar Christina
0c18faac3f slp: upgrade complex add to new format and fix memory leaks
This fixes a memory leak in complex_add_pattern because I was not calling
vect_free_slp_tree when dissolving one side of the TWO_OPERANDS nodes.

Secondly, it also upgrades the class to the new interface required by the other
patterns.

gcc/ChangeLog:

	* tree-vect-slp-patterns.c (class complex_pattern,
	class complex_add_pattern): Add parameters to matches.
	(complex_add_pattern::build): Free memory.
	(complex_add_pattern::matches): Move validation to end of match.
	(complex_add_pattern::recognize): Likewise.
2021-01-11 09:58:36 +00:00
Tamar Christina
bd4298e192 slp: handle externals correctly in linear_loads_p
This fixes a bug with externals and linear_loads_p where I forgot to save the
value before returning.

It also fixes handling of nodes with multiple children on a non-VEC_PERM node.
There the child iteration already resolves the kind, and the loads are all
expected to be the same if valid, so just return one.

gcc/ChangeLog:

	* tree-vect-slp-patterns.c (linear_loads_p): Fix externals.
2021-01-11 09:57:41 +00:00
Tamar Christina
39666d2b88 slp: fix is_linear_load_p to prevent multiple answers
This fixes an issue where is_linear_load_p could return the incorrect
permutation kind because it is single-pass.

This arranges the candidates in such a way that there won't be any ambiguity so
that the function can still be linear but give correct values.

gcc/ChangeLog:

	* tree-vect-slp-patterns.c (is_linear_load_p): Fix ambiguity.
2021-01-11 09:56:44 +00:00
Jakub Jelinek
9a6c37e6ae reassoc: Reassociate integral multiplies [PR95867]
For floating point multiply, we have nice code in reassoc to reassociate
multiplications to almost optimal sequence of as few multiplications as
possible (or library call), but for integral types we just give up
because there is no __builtin_powi* for those types.

As there is no library routine we could use, instead of adding new internal
call just to hold it temporarily and then lower to multiplications again,
this patch for the integral types calls into the sincos pass routine that
expands it into multiplications right away.
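
As a hedged sketch (not the committed pr95867.c testcase), a chain of
repeated multiplications of the same operand is now reassociated into a
logarithmic number of multiplies for integer types as well:

// x^8 written as seven multiplications can be computed with three:
// t = x*x; t = t*t; t = t*t.
int
pow8 (int x)
{
  return x * x * x * x * x * x * x * x;
}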

2021-01-11  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/95867
	* tree-ssa-math-opts.h: New header.
	* tree-ssa-math-opts.c: Include tree-ssa-math-opts.h.
	(powi_as_mults): No longer static.  Use build_one_cst instead of
	build_real.  Formatting fix.
	* tree-ssa-reassoc.c: Include tree-ssa-math-opts.h.
	(attempt_builtin_powi): Handle multiplication reassociation without
	powi_fndecl using powi_as_mults.
	(reassociate_bb): For integral types don't require
	-funsafe-math-optimizations to call attempt_builtin_powi.

	* gcc.dg/tree-ssa/pr95867.c: New test.
2021-01-11 10:36:24 +01:00
Jakub Jelinek
9febe9e4be widening_mul: Pattern recognize also signed multiplication with overflow check [PR95852]
On top of the previous widening_mul patch, this one recognizes also
(non-perfect) signed multiplication with overflow, like:
int
f5 (int x, int y, int *res)
{
  *res = (unsigned) x * y;
  return x && (*res / x) != y;
}
The problem with such checks is that they invoke UB if x is -1 and
y is INT_MIN during the division, but perhaps the code knows that
those values won't appear.  As that case is UB, we can do whatever we want
for it, and handling it as signed overflow is the best option.  If x is a
constant not equal to -1, then the checks are 100% correct though.
Haven't tried to pattern match bullet-proof checks, because I really don't
know if users would write it in real-world code like that,
perhaps
  *res = (unsigned) x * y;
  return x && (x == -1 ? (*res / y) != x : (*res / x) != y);
?

https://wiki.sei.cmu.edu/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow
suggests using twice as wide multiplication (perhaps we should handle that
too, for both signed and unsigned), or some very large code
with 4 different divisions nested in many conditionals; there is no way one
can match all the possible variants thereof.

2021-01-11  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/95852
	* tree-ssa-math-opts.c (maybe_optimize_guarding_check): Change
	mul_stmts parameter type to vec<gimple *> &.  Before cond_stmt
	allow in the bb any of the stmts in that vector, div_stmt and
	up to 3 cast stmts.
	(arith_cast_equal_p): New function.
	(arith_overflow_check_p): Add cast_stmt argument, handle signed
	multiply overflow checks.
	(match_arith_overflow): Adjust caller.  Handle signed multiply
	overflow checks.

	* gcc.target/i386/pr95852-3.c: New test.
	* gcc.target/i386/pr95852-4.c: New test.
2021-01-11 10:34:07 +01:00
Jakub Jelinek
a2106317cd widening_mul: Pattern recognize unsigned multiplication with overflow check [PR95852]
The following patch pattern recognizes some forms of multiplication followed
by overflow check through division by one of the operands compared to the
other one, with optional removal of guarding non-zero check for that operand
if possible.  The patterns are replaced with effectively
__builtin_mul_overflow or __builtin_mul_overflow_p.  The testcases cover 64
different forms of that.
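
As a hedged sketch of one of the recognized forms and of what it is
effectively turned into (simplified; the committed tests cover the full set
of variants):

unsigned
mul_ovf (unsigned x, unsigned y, unsigned *res)
{
  // Multiply, then check for overflow by dividing back, guarded by a
  // non-zero test on the divisor operand.
  *res = x * y;
  return x && *res / x != y;
}

// The above is effectively recognized as:
unsigned
mul_ovf_builtin (unsigned x, unsigned y, unsigned *res)
{
  return __builtin_mul_overflow (x, y, res);
}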

2021-01-11  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/95852
	* tree-ssa-math-opts.c (maybe_optimize_guarding_check): New function.
	(uaddsub_overflow_check_p): Renamed to ...
	(arith_overflow_check_p): ... this.  Handle also multiplication
	with overflow check.
	(match_uaddsub_overflow): Renamed to ...
	(match_arith_overflow): ... this.  Add cfg_changed argument.  Handle
	also multiplication with overflow check.  Adjust function comment.
	(math_opts_dom_walker::after_dom_children): Adjust callers.  Call
	match_arith_overflow also for MULT_EXPR.

	* gcc.target/i386/pr95852-1.c: New test.
	* gcc.target/i386/pr95852-2.c: New test.
2021-01-11 10:32:19 +01:00
Kyrylo Tkachov
64dc013853 aarch64: Reimplement vmovl*/vmovn* intrinsics using __builtin_convertvector
__builtin_convertvector seems well-suited to implementing the vmovl and
vmovn intrinsics that widen and narrow
the integer elements in a vector.

This removes some more inline assembly from the intrinsics.
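
As a hedged sketch of the mechanism using plain GCC vector extensions (the
actual change is inside arm_neon.h; the type and function names below are
made up for illustration):

typedef signed char v8qi __attribute__ ((vector_size (8)));
typedef short v8hi __attribute__ ((vector_size (16)));

// vmovl-style widening: sign-extend each element to the wider type.
v8hi
widen (v8qi x)
{
  return __builtin_convertvector (x, v8hi);
}

// vmovn-style narrowing: truncate each element to the narrower type.
v8qi
narrow (v8hi x)
{
  return __builtin_convertvector (x, v8qi);
}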

gcc/
	* config/aarch64/arm_neon.h (vmovl_s8): Reimplement using
	__builtin_convertvector.
	(vmovl_s16): Likewise.
	(vmovl_s32): Likewise.
	(vmovl_u8): Likewise.
	(vmovl_u16): Likewise.
	(vmovl_u32): Likewise.
	(vmovn_s16): Likewise.
	(vmovn_s32): Likewise.
	(vmovn_s64): Likewise.
	(vmovn_u16): Likewise.
	(vmovn_u32): Likewise.
	(vmovn_u64): Likewise.
2021-01-11 09:12:22 +00:00
Martin Liska
4e275dccfc Add pytest for a GCOV test-case
gcc/testsuite/ChangeLog:

	PR gcov-profile/98273
	* lib/gcov.exp: Add run-gcov-pytest function which runs pytest.
	* g++.dg/gcov/pr98273.C: New test.
	* g++.dg/gcov/gcov.py: New test.
	* g++.dg/gcov/test-pr98273.py: New test.
2021-01-11 09:18:53 +01:00
Martin Liska
fa4586d854 if-to-switch: remove memory leaks
gcc/ChangeLog:

	* gimple-if-to-switch.cc (struct condition_info): Use auto_var.
	(if_chain::is_beneficial): Delete clusters.
	(find_conditions): Make second argument of conditions_in_bbs a
	pointer so that we have control over its lifetime.
	(pass_if_to_switch::execute): Delete them.
2021-01-11 09:18:28 +01:00
Kewen Lin
bcb3065b2b ira: Skip some pseudos in move_unallocated_pseudos
This patch is to make move_unallocated_pseudos consistent
to what we have in function find_moveable_pseudos, where we
record the original pseudo into pseudo_replaced_reg only if
validate_change succeeds with newreg.  To ensure every
unallocated pseudo in move_unallocated_pseudos has expected
information, it's better to add a check and skip it if it's
unexpected.  This avoids possible ICEs in future.

gcc/ChangeLog:

	* ira.c (move_unallocated_pseudos): Check other_reg and skip if
	it isn't set.
2021-01-10 20:33:23 -06:00
GCC Administrator
366f86bd42 Daily bump. 2021-01-11 00:16:17 +00:00
David Edelsohn
4a1d7f7e20 libstdc++: Suppress more vstring testsuite warnings. [PR 98613]
PR c++/57111 - Generalize -Wfree-nonheap-object to delete

can create false positive warnings for vstring _S_empty_rep.

This patch prunes the excess false positive warnings from two more
testcases.

libstdc++-v3/ChangeLog:

	PR libstdc++/98613
	* testsuite/ext/vstring/cons/moveable.cc: Suppress false positive
	warning.
	* testsuite/ext/vstring/modifiers/assign/move_assign.cc: Same.
2021-01-10 18:22:51 -05:00
GCC Administrator
872373360d Daily bump. 2021-01-10 00:16:20 +00:00
Iain Buclaw
7da827c99c d: Synchronize testsuite with upstream dmd
Adds TEST_OUTPUT directives and reduces the verbosity of many tests.

Reviewed-on: https://github.com/dlang/dmd/pull/12112

gcc/d/ChangeLog:

	* dmd/MERGE: Merge upstream dmd cb1106ad5.
2021-01-09 23:59:30 +01:00
Iain Buclaw
7a103daef7 d: Support deprecated, @disable, and user-defined attributes on enum members
Reviewed-on: https://github.com/dlang/dmd/pull/12108

gcc/d/ChangeLog:

	* dmd/MERGE: Merge upstream dmd 9bba772fa.
2021-01-09 23:45:46 +01:00
Iain Buclaw
acae7b21bc d: Implement expression-based contract syntax
Expression-based contract syntax has been added.  Contracts that consist
of a single assertion can now be written more succinctly and multiple
`in` or `out` contracts can be specified for the same function.

Reviewed-on: https://github.com/dlang/dmd/pull/12106

gcc/d/ChangeLog:

	* dmd/MERGE: Merge upstream dmd e598f69c0.
2021-01-09 23:45:46 +01:00
Maciej W. Rozycki
f2a5346244 VAX/testsuite: Remove notsi comparison elimination regressions
Remove fallout from commit 0bd675183d ("match.pd: Add ~(X - Y) -> ~X
+ Y simplification [PR96685]") and paper over the regression caused, as it
is not a problem with the affected test cases themselves.

Previously assembly like this:

	.text
	.align 1
.globl eq_notsi
	.type	eq_notsi, @function
eq_notsi:
	.word 0	# 35	[c=0]  procedure_entry_mask
	subl2 $4,%sp	# 46	[c=32]  *addsi3
	mcoml 4(%ap),%r0	# 32	[c=16]  *one_cmplsi2_ccz
	jeql .L1		# 34	[c=26]  *branch_ccz
	addl2 $2,%r0	# 31	[c=32]  *addsi3
.L1:
	ret		# 40	[c=0]  return
	.size	eq_notsi, .-eq_notsi

was produced.  Now this:

	.text
	.align 1
.globl eq_notsi
	.type	eq_notsi, @function
eq_notsi:
	.word 0	# 36	[c=0]  procedure_entry_mask
	subl2 $4,%sp	# 48	[c=32]  *addsi3
	movl 4(%ap),%r0	# 33	[c=16]  *movsi_2
	cmpl %r0,$-1	# 34	[c=8]  *cmpsi_ccz/1
	jeql .L3		# 35	[c=26]  *branch_ccz
	subl3 %r0,$1,%r0	# 32	[c=32]  *subsi3/1
	ret		# 27	[c=0]  return
.L3:
	clrl %r0		# 31	[c=2]  *movsi_2
	ret		# 41	[c=0]  return
	.size	eq_notsi, .-eq_notsi

is, which cannot work with post-reload comparison elimination, due to
the comparison against -1 rather than 0.

Use subtraction from a constant then rather than addition as the former
operation is not transformed, removing these regressions:

FAIL: gcc.target/vax/cmpelim-eq-notsi.c   -O1   scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-eq-notsi.c   -O1   scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-eq-notsi.c   -O1   scan-assembler one_cmplsi[^ ]*_ccz(/[0-9]+)?\n
FAIL: gcc.target/vax/cmpelim-lt-notsi.c   -O1   scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-lt-notsi.c   -O1   scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-lt-notsi.c   -O1   scan-assembler one_cmplsi[^ ]*_ccn(/[0-9]+)?\n

and likewise across some of the other optimization levels verified.

The LE variant appears unaffected as the new transformation produces
slightly different although still suboptimal code:

	.text
	.align 1
.globl le_notsi
	.type	le_notsi, @function
le_notsi:
	.word 0	# 27	[c=0]  procedure_entry_mask
	subl2 $4,%sp	# 34	[c=32]  *addsi3
	movl 4(%ap),%r1	# 23	[c=16]  *movsi_2
	mcoml %r1,%r0	# 24	[c=8]  *one_cmplsi2_ccnz
	jleq .L1		# 26	[c=26]  *branch_ccnz
	subl3 %r1,$1,%r0	# 22	[c=32]  *subsi3/1
.L1:
	ret		# 32	[c=0]  return
	.size	le_notsi, .-le_notsi

but update the test case too, for consistency with the other two.

	gcc/testsuite/
	* gcc.target/vax/cmpelim-eq-notsi.c: Use subtraction from a
	constant then rather than addition.
	* gcc.target/vax/cmpelim-le-notsi.c: Likewise.
	* gcc.target/vax/cmpelim-lt-notsi.c: Likewise.
2021-01-09 16:30:51 +00:00
Maciej W. Rozycki
7f5c4d23db VAX: Remove a duplicate `cc' mode attribute
Remove the `cc' mode attribute that duplicates the implicitly defined
`mode' attribute.  No change to semantics.

	gcc/
	* config/vax/vax.md (cc): Remove mode attribute.
	(subst_<cc>, subst_f<cc>): Rename to...
	(subst_<mode>, subst_f<VAXccnz:mode>): ... these respectively.
	(*cbranch<VAXint:mode>4_<VAXcc:mode>): Update for `cc' removal.
	(*cbranch<VAXfp:mode>4_<VAXccnz:mode>): Likewise.
	(*branch_<mode>, *branch_<mode>_reversed): Likewise.
2021-01-09 16:30:50 +00:00
Maciej W. Rozycki
c38bbf5eed VAX: Use a mode with `const_double_zero' expressions
For predictable semantics propagate the mode from operands referred by
the FP substitution to the `const_double_zero' expressions used with the
associated condition code calculation.  Use an iterator to make copies
of the FP substitution across the FP modes supported as the substitution
now has to match the mode of the operands.

	gcc/
	* config/vax/vax.md (subst_f<cc>): Add mode to operands and
	`const_double_zero'.
2021-01-09 16:30:25 +00:00
Maciej W. Rozycki
be7e807242 PDP11: Use a mode with `const_double_zero' expressions
For predictable semantics propagate the mode from operands referred by
FP substitutions to the `const_double_zero' expressions used with the
associated condition code calculation, resulting in the following update
to insn-emit.c code produced for the `pdp11-aout' target (with machine
description line numbering change noise removed):

@@ -1514,7 +1514,7 @@
 	gen_rtx_COMPARE (CCmode,
 	gen_rtx_ABS (DFmode,
 	operand1),
-	CONST_DOUBLE_ATOF ("0", VOIDmode))),
+	CONST_DOUBLE_ATOF ("0", DFmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_ABS (DFmode,
 	copy_rtx (operand1)))));
@@ -1555,7 +1555,7 @@
 	gen_rtx_COMPARE (CCmode,
 	gen_rtx_NEG (DFmode,
 	operand1),
-	CONST_DOUBLE_ATOF ("0", VOIDmode))),
+	CONST_DOUBLE_ATOF ("0", DFmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_NEG (DFmode,
 	copy_rtx (operand1)))));
@@ -1790,7 +1790,7 @@
 	gen_rtx_MULT (DFmode,
 	operand1,
 	operand2),
-	CONST_DOUBLE_ATOF ("0", VOIDmode))),
+	CONST_DOUBLE_ATOF ("0", DFmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_MULT (DFmode,
 	copy_rtx (operand1),
@@ -1942,7 +1942,7 @@
 	gen_rtx_DIV (DFmode,
 	operand1,
 	operand2),
-	CONST_DOUBLE_ATOF ("0", VOIDmode))),
+	CONST_DOUBLE_ATOF ("0", DFmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_DIV (DFmode,
 	copy_rtx (operand1),

Provide a new iterator to provide copies of FP substitutions across the
FP modes supported as the substitutions now need to match the mode of
the operands.

	gcc/
	* config/pdp11/pdp11.md (PDPfp): New mode iterator.
	(fcc_cc, fcc_ccnz): Use it.  Add mode to `const_double_zero' and
	operands.
2021-01-09 15:50:27 +00:00
Maciej W. Rozycki
859be2e44a RTL: Update `const_double_zero' handling for mode and callable insns
Handle machine mode specification with `const_double_zero' and handle
the rtx with callable code produced from named insns.  Complementing
commit 20ab43b5ca ("RTL: Add `const_double_zero' syntactic rtx") and
removing a build regression, introduced by commit c60d0736df ("PDP11: Use
`const_double_zero' to express double zero constant"), observed with the
`pdp11-aout' target:

genemit: Internal error: abort in gen_exp, at genemit.c:202
make[2]: *** [Makefile:2427: s-emit] Error 1

where a:

(const_double 0 [0] 0 [0] 0 [0] 0 [0])

rtx coming from:

(parallel [
        (set (reg:CC 16)
            (compare:CC (abs:DF (match_operand:DF 1 ("general_operand") ("0,0")))
                (const_double 0 [0] 0 [0] 0 [0] 0 [0])))
        (set (match_operand:DF 0 ("nonimmediate_operand") ("=fR,Q"))
            (abs:DF (match_dup 1)))
    ])

and ultimately `(const_double_zero)' referred in a named RTL insn cannot
be interpreted.  Handle the rtx then by supplying the constant 0 double
operand requested, resulting in the following update to insn-emit.c code
produced for the `pdp11-aout' target, relative to before the triggering
commit:

@@ -1514,7 +1514,7 @@ gen_absdf2_cc (rtx operand0 ATTRIBUTE_UN
 	gen_rtx_COMPARE (CCmode,
 	gen_rtx_ABS (DFmode,
 	operand1),
-	const0_rtx)),
+	CONST_DOUBLE_ATOF ("0", VOIDmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_ABS (DFmode,
 	copy_rtx (operand1)))));
@@ -1555,7 +1555,7 @@ gen_negdf2_cc (rtx operand0 ATTRIBUTE_UN
 	gen_rtx_COMPARE (CCmode,
 	gen_rtx_NEG (DFmode,
 	operand1),
-	const0_rtx)),
+	CONST_DOUBLE_ATOF ("0", VOIDmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_NEG (DFmode,
 	copy_rtx (operand1)))));
@@ -1790,7 +1790,7 @@ gen_muldf3_cc (rtx operand0 ATTRIBUTE_UN
 	gen_rtx_MULT (DFmode,
 	operand1,
 	operand2),
-	const0_rtx)),
+	CONST_DOUBLE_ATOF ("0", VOIDmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_MULT (DFmode,
 	copy_rtx (operand1),
@@ -1942,7 +1942,7 @@ gen_divdf3_cc (rtx operand0 ATTRIBUTE_UN
 	gen_rtx_DIV (DFmode,
 	operand1,
 	operand2),
-	const0_rtx)),
+	CONST_DOUBLE_ATOF ("0", VOIDmode))),
 		gen_rtx_SET (operand0,
 	gen_rtx_DIV (DFmode,
 	copy_rtx (operand1),

This does not (yet) remove VOIDmode CONST_DOUBLE use, as it is up to
individual machine descriptions to choose.

	gcc/
	* genemit.c (gen_exp) <CONST_DOUBLE>: Handle `const_double_zero'
	rtx.
	* read-rtl.c (rtx_reader::read_rtx_code): Handle machine mode
	with `const_double_zero'.
	* doc/rtl.texi (Constant Expression Types): Document it.
2021-01-09 15:46:02 +00:00
Jakub Jelinek
991656092f tree-cfg: Allow enum types as result of POINTER_DIFF_EXPR [PR98556]
As conversions between signed integers and signed enums with the same
precision are useless in GIMPLE, it seems strange that we require that
POINTER_DIFF_EXPR result must be INTEGER_TYPE.

If we really wanted to require that, we'd need to change the gimplifier
to ensure it, which isn't the case for the following testcase.
What is going on during the gimplification is that when we have the
(enum T) (p - q) cast, it is stripped through
      /* Strip away as many useless type conversions as possible
         at the toplevel.  */
      STRIP_USELESS_TYPE_CONVERSION (*expr_p);
and when the MODIFY_EXPR is gimplified, the *to_p has enum T type,
while *from_p has intptr_t type and as there is no conversion in between,
we just create GIMPLE_ASSIGN from that.
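
As a hedged reduction of the kind of source involved (not the committed
c-c++-common/pr98556.c testcase):

#include <cstddef>

// The pointer difference has ptrdiff_t type; converting it to an enum of
// the same precision is a useless conversion in GIMPLE, so the result of
// the POINTER_DIFF_EXPR ends up assigned to an enum-typed lhs.
enum E : std::ptrdiff_t { A };

E
f (char *p, char *q)
{
  return (E) (p - q);
}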

2021-01-09  Jakub Jelinek  <jakub@redhat.com>

	PR c++/98556
	* tree-cfg.c (verify_gimple_assign_binary): Allow lhs of
	POINTER_DIFF_EXPR to be any integral type.

	* c-c++-common/pr98556.c: New test.
2021-01-09 10:49:38 +01:00
Jakub Jelinek
16dae48e9c vregs: Fix up instantiate_virtual_regs_in_insn for asm goto with outputs [PR98603]
If an asm insn fails constraint checking during vregs, it is just deleted.
We don't delete asm goto though because of the edges to the labels, so
instantiate_virtual_regs_in_insn would just remove the inputs and their
constraints, the pattern etc.
This worked fine when asm goto couldn't have output operands, but causes
ICEs later on when it has more than one output (and furthermore doesn't
really remove the problematic outputs).  The problem is that
for multiple outputs we have a PARALLEL with multiple ASM_OPERANDS, but
those must use the same ASM_OPERANDS_INPUT_VEC etc., but the code was
adjusting just one.

The following patch turns invalid asm goto into a bare
asm goto ("" : : : : lab, lab2, lab3);
i.e. no inputs/outputs/clobbers, just the labels.
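
As a hedged sketch of the construct involved (the committed pr98603.c tests
use target-specific constraints that actually fail; the asm below is only
meant to show the shape of an asm goto with multiple outputs):

int
f (int x)
{
  int a, b;
  // Two outputs mean a PARALLEL with multiple ASM_OPERANDS sharing one
  // input vector; if a constraint is impossible, the whole statement is
  // now replaced by a bare asm goto ("" : : : : out);
  asm goto ("" : "=r" (a), "=r" (b) : "r" (x) : : out);
  return a + b;
out:
  return 0;
}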

2021-01-09  Jakub Jelinek  <jakub@redhat.com>

	PR rtl-optimization/98603
	* function.c (instantiate_virtual_regs_in_insn): For asm goto
	with impossible constraints, drop all SETs, CLOBBERs, drop PARALLEL
	if any, set ASM_OPERANDS mode to VOIDmode and change
	ASM_OPERANDS_OUTPUT_CONSTRAINT and ASM_OPERANDS_OUTPUT_IDX.

	* gcc.target/i386/pr98603.c: New test.
	* gcc.target/aarch64/pr98603.c: New test.
2021-01-09 10:48:20 +01:00
Alexandre Oliva
57450da2fe final: accept markers at line 0
Back when I introduced debug markers, I seem to have been under the
impression that location line 0 would only ever occur for unknown and
builtin locations.

Though line 0 never comes up in normal processing of source files, and
debug info formats often cannot represent them, I suppose there's no
need to preemptively discard them during final.


for  gcc/ChangeLog

	PR debug/97714
	* final.c (notice_source_line): Narrow down the condition to
	skip a line-0 marker.

for  gcc/testsuite/ChangeLog

	PR debug/97714
	* gcc.dg/debug/pr97714.c: New.
2021-01-09 00:09:02 -03:00
GCC Administrator
bf5cbb9edf Daily bump. 2021-01-09 00:16:22 +00:00
Sergei Trofimovich
0b874e0ffd ipa-modref: avoid linebreak split in debug print
* ipa-modref.c (merge_call_side_effects): Fix
	linebreak split by reordering two print calls.
2021-01-08 21:25:29 +00:00
Ilya Leoshkevich
0e47d6c808 IBM Z: Fix constraints in vpdi patterns
The destination register is only partially overwritten, so + should be
used instead of =.

gcc/ChangeLog:

2021-01-08  Ilya Leoshkevich  <iii@linux.ibm.com>

	* config/s390/vector.md (*tf_to_fprx2_0): Rename from
	"*mov_tf_to_fprx2_0" for consistency, fix constraint.
	(*tf_to_fprx2_1): Rename from "*mov_tf_to_fprx2_1" for
	consistency, fix constraint.
2021-01-08 18:15:47 +01:00
H.J. Lu
745d04e796 x86-64: Require lp64 for PR target/98482 tests
Require lp64 for PR target/98482 tests since -mcmodel=large isn't
supported for x32.

	PR target/98482
	* gcc.target/i386/pr98482-1.c: Require lp64.
	* gcc.target/i386/pr98482-2.c: Likewise.
2021-01-08 08:47:06 -08:00