This patch adds new movmisalign<mode>_mve_load and store patterns for
MVE to help vectorization. They are very similar to their Neon
counterparts, but use different iterators and instructions.
Indeed, MVE supports fewer vector modes than Neon, so we use the
MVE_VLD_ST iterator where Neon uses VQX.
Since the supported modes are different from the ones valid for
arithmetic operators, we introduce two new sets of macros:
ARM_HAVE_NEON_<MODE>_LDST
true if Neon has vector load/store instructions for <MODE>
ARM_HAVE_<MODE>_LDST
true if any vector extension has vector load/store instructions for <MODE>
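For illustration, a hedged sketch of the intent for one mode (the actual
arm.h definitions are written out per mode and may differ in detail):
  #define ARM_HAVE_NEON_V4SI_LDST TARGET_NEON
  #define ARM_HAVE_V4SI_LDST (ARM_HAVE_NEON_V4SI_LDST || TARGET_HAVE_MVE)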
We move the movmisalign<mode> expander from neon.md to vec-common.md, and
replace the TARGET_NEON enabler with ARM_HAVE_<MODE>_LDST.
The patch also updates the mve-vneg.c test to scan for the better code
generation when loading and storing the vectors involved: it checks
that no 'orr' instruction is generated to cope with misalignment at
runtime.
This test was chosen from among the other MVE tests, but any other would
do. Using a plain vector copy loop (dest[i] = a[i]) is not a good
test because the compiler chooses to use memcpy instead.
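For reference, the function under test has roughly the following shape (the
actual mve-vneg.c builds its functions via macros, so this is only an
illustrative sketch):
  #include <stdint.h>
  void
  test_vneg_s32x4 (int32_t * __restrict__ dest, int32_t * a)
  {
    for (int i = 0; i < 4; i++)
      dest[i] = -a[i];
  }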
For instance we now generate:
test_vneg_s32x4:
vldrw.32 q3, [r1]
vneg.s32 q3, q3
vstrw.32 q3, [r0]
bx lr
instead of:
test_vneg_s32x4:
orr r3, r1, r0
lsls r3, r3, #28
bne .L15
vldrw.32 q3, [r1]
vneg.s32 q3, q3
vstrw.32 q3, [r0]
bx lr
.L15:
push {r4, r5}
ldrd r2, r3, [r1, #8]
ldrd r5, r4, [r1]
rsbs r2, r2, #0
rsbs r5, r5, #0
rsbs r4, r4, #0
rsbs r3, r3, #0
strd r5, r4, [r0]
pop {r4, r5}
strd r2, r3, [r0, #8]
bx lr
2021-01-12 Christophe Lyon <christophe.lyon@linaro.org>
PR target/97875
gcc/
* config/arm/arm.h (ARM_HAVE_NEON_V8QI_LDST): New macro.
(ARM_HAVE_NEON_V16QI_LDST, ARM_HAVE_NEON_V4HI_LDST): Likewise.
(ARM_HAVE_NEON_V8HI_LDST, ARM_HAVE_NEON_V2SI_LDST): Likewise.
(ARM_HAVE_NEON_V4SI_LDST, ARM_HAVE_NEON_V4HF_LDST): Likewise.
(ARM_HAVE_NEON_V8HF_LDST, ARM_HAVE_NEON_V4BF_LDST): Likewise.
(ARM_HAVE_NEON_V8BF_LDST, ARM_HAVE_NEON_V2SF_LDST): Likewise.
(ARM_HAVE_NEON_V4SF_LDST, ARM_HAVE_NEON_DI_LDST): Likewise.
(ARM_HAVE_NEON_V2DI_LDST): Likewise.
(ARM_HAVE_V8QI_LDST, ARM_HAVE_V16QI_LDST): Likewise.
(ARM_HAVE_V4HI_LDST, ARM_HAVE_V8HI_LDST): Likewise.
(ARM_HAVE_V2SI_LDST, ARM_HAVE_V4SI_LDST, ARM_HAVE_V4HF_LDST): Likewise.
(ARM_HAVE_V8HF_LDST, ARM_HAVE_V4BF_LDST, ARM_HAVE_V8BF_LDST): Likewise.
(ARM_HAVE_V2SF_LDST, ARM_HAVE_V4SF_LDST, ARM_HAVE_DI_LDST): Likewise.
(ARM_HAVE_V2DI_LDST): Likewise.
* config/arm/mve.md (*movmisalign<mode>_mve_store): New pattern.
(*movmisalign<mode>_mve_load): New pattern.
* config/arm/neon.md (movmisalign<mode>): Move to ...
* config/arm/vec-common.md: ... here.
PR target/97875
gcc/testsuite/
* gcc.target/arm/simd/mve-vneg.c: Update test.
LRA can loop infinitely on targets without `reg + imm` insns. Register elimination
on such targets can increase register pressure, resulting in a permanent
stack size increase and a changing elimination offset. To avoid this situation,
a simple transformation can be done to avoid the register pressure increase after
generating reload insns containing eliminated hard regs.
gcc/ChangeLog:
PR target/97969
* lra-eliminations.c (eliminate_regs_in_insn): Add transformation
of pattern 'plus (plus (hard reg, const), pseudo)'.
gcc/testsuite/ChangeLog:
PR target/97969
* gcc.target/arm/pr97969.c: New.
This patch teaches cp_walk_subtrees to visit the template represented
by a CTAD placeholder, which would otherwise not be visited during
find_template_parameters. The template may be a template template
parameter (as in the first testcase), or it may implicitly use the
template parameters of an enclosing class template (as in the second
testcase), and in either case we need to visit this tree to record the
template parameters used therein for later satisfaction.
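As a hedged illustration of the first situation (this is not the actual
testcase; the names are made up), a constraint whose satisfaction performs
CTAD through a template template parameter:
  template<template<typename> class TT>
  concept deducible_from_int = requires { TT{0}; }; // CTAD placeholder for TT

  template<typename T> struct box { T value; };
  static_assert (deducible_from_int<box>);          // box{0} deduces box<int>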
gcc/cp/ChangeLog:
PR c++/98611
* tree.c (cp_walk_subtrees) <case TEMPLATE_TYPE_PARM>: Visit
the template of a CTAD placeholder.
gcc/testsuite/ChangeLog:
PR c++/98611
* g++.dg/cpp2a/concepts-ctad1.C: New test.
* g++.dg/cpp2a/concepts-ctad2.C: New test.
This fixes the check that disqualifies BB vectorization because of
required unrolling so that it matches up with the later exact_div we do.
So as not to disable the ability to split groups that do not match up
exactly with a chosen vector type, this also introduces a soft-fail
mechanism in vect_build_slp_tree_1 which delays failing until after
the matches[] array has been populated by the other checks, and only then
determines the split point according to the vector type.
2021-01-12 Richard Biener <rguenther@suse.de>
PR tree-optimization/98550
* tree-vect-slp.c (vect_record_max_nunits): Check whether
the group size is a multiple of the vector element count.
(vect_build_slp_tree_1): When we need to fail because
the vector type chosen causes unrolling, do so lazily
without affecting matches only at the end to guide group splitting.
* g++.dg/opt/pr98550.C: New testcase.
Similarly to 7f967bd2a7, we need to
compare strings with strcmp.
gcc/ChangeLog:
PR c++/97284
* optc-save-gen.awk: Compare also n_target_save vars with
strcmp.
gcc/ChangeLog:
* gcov.c (source_info::debug): New.
(print_usage): Add --debug (-D) option.
(process_args): Likewise.
(generate_results): Call src->debug after
accumulate_line_counts.
(read_graph_file): Properly assign id for EXIT_BLOCK.
* profile.c (branch_prob): Dump function body before it is
instrumented.
As the testcase shows, my latest changes caused an ICE on it.
arith_overflow_check_p can now change the use_stmt
argument (it is passed by reference), so that if it succeeds (returns non-zero),
it redirects use_stmt from the TRUNC_DIV_EXPR assignment to the GIMPLE_COND
or the EQ/NE or COND_EXPR assignment.
The problem was that it would also change use_stmt when it returned 0 in some
cases, such as multiple imm uses of the division, and in one of the callers,
if arith_overflow_check_p returns 0, it looks at use_stmt again and performs
other checks, which of course assumes that use_stmt is the one passed
to arith_overflow_check_p and not e.g. NULL or some other unrelated
stmt.
The following patch fixes that by only changing use_stmt when we are about
to return non-zero (for the MULT_EXPR case, which is the only one that
needs to use a different use_stmt).
2021-01-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/98629
* tree-ssa-math-opts.c (arith_overflow_check_p): Don't update use_stmt
unless returning non-zero.
* gcc.c-torture/compile/pr98629.c: New test.
We already had x != 0 && y != 0 to (x | y) != 0 and
x != -1 && y != -1 to (x & y) != -1 and
x < 32U && y < 32U to (x | y) < 32U, this patch adds signed
x < 0 && y < 0 to (x | y) < 0. In that case, the low/high seem to be
always the same and just in_p indicates whether it is >= 0 or < 0;
also, all types in the same bucket (same precision) should be type
compatible, but we can have some >= 0 and some < 0 comparisons mixed,
so the patch handles that by using the right BIT_IOR_EXPR or BIT_AND_EXPR
and doing one set of < 0 or >= 0 first, then BIT_NOT_EXPR and then the other
one. I had to move optimize_range_tests_var_bound before this optimization
because that one deals with signed a >= 0 && a < b, and limited it to the
last reassoc pass, as reassoc itself virtually can't undo this optimization
yet (and I'm not sure whether vrp would be able to).
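A hedged example of code the new case handles:
  int
  all_negative (int x, int y, int z)
  {
    /* With this patch, reassoc turns this into (x | y | z) < 0.  */
    return x < 0 && y < 0 && z < 0;
  }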
2021-01-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95731
* tree-ssa-reassoc.c (optimize_range_tests_cmp_bitwise): Also optimize
x < 0 && y < 0 && z < 0 into (x | y | z) < 0 for signed x, y, z.
(optimize_range_tests): Call optimize_range_tests_cmp_bitwise
only after optimize_range_tests_var_bound.
* gcc.dg/tree-ssa/pr95731.c: New test.
* gcc.c-torture/execute/pr95731.c: New test.
As reported by Matthias, --enable-link-serialization=1 can currently start
two concurrent links first (e.g. gnat1 and cc1).
The problem is that make 'var = value' variables seem to work differently in
dependencies and in actual rule commands (where this was tested).
As the language make fragments can be in different order, we can have:
ada.prev = ... magic that will become $(c.serial) under --enable-link-serialization=1
gnat1$(exe): ..... $(ada.prev)
...
c.serial = cc1$(exe)
and while, if I add an echo $(ada.prev) to the gnat1 rule's commands, it prints
cc1, the dependencies are actually expanded earlier, when the goal's rule is
read, at which point c.serial may not yet be defined.
The configure creates (and puts into Makefile) some serialization order of
the languages and in that order c always comes first, and the rest is
actually sorted the way the all_lang_makefrags are already sorted,
so just by forcing c/Make-lang.in to come first we ensure that the X.serial
variable is always defined before some other Y.prev uses it in its goal's
dependencies.
2021-01-12 Jakub Jelinek <jakub@redhat.com>
* configure.ac: Ensure c/Make-lang.in comes first in @all_lang_makefrags@.
* configure: Regenerated.
This PR wants us not to warn about missing field initializers when
the code in question takes place in decltype and similar unevaluated
contexts. Fixed thus.
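A hedged sketch of the kind of code that should no longer warn (illustrative,
not the actual testcase):
  struct S { int a; int b; };
  /* b has no explicit initializer, which would normally trigger
     -Wmissing-field-initializers, but the expression only appears in an
     unevaluated decltype operand.  */
  using T = decltype (S{1});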
gcc/cp/ChangeLog:
PR c++/98620
* typeck2.c (process_init_constructor_record): Don't emit
-Wmissing-field-initializers warnings in unevaluated contexts.
gcc/testsuite/ChangeLog:
PR c++/98620
* g++.dg/warn/Wmissing-field-initializers-2.C: New test.
Use a dtor to automatically remove ITER from IMM_USE list in
FOR_EACH_IMM_USE_STMT.
for gcc/ChangeLog
* ssa-iterators.h (end_imm_use_stmt_traverse): Forward
declare.
(auto_end_imm_use_stmt_traverse): New struct.
(FOR_EACH_IMM_USE_STMT): Use it.
(BREAK_FROM_IMM_USE_STMT, RETURN_FROM_IMM_USE_STMT): Remove,
along with uses...
* gimple-ssa-strength-reduction.c: ... here, ...
* graphite-scop-detection.c: ... here, ...
* ipa-modref.c, ipa-pure-const.c, ipa-sra.c: ... here, ...
* tree-predcom.c, tree-ssa-ccp.c: ... here, ...
* tree-ssa-dce.c, tree-ssa-dse.c: ... here, ...
* tree-ssa-loop-ivopts.c, tree-ssa-math-opts.c: ... here, ...
* tree-ssa-phiprop.c, tree-ssa.c: ... here, ...
* tree-vect-slp.c: ... and here, ...
* doc/tree-ssa.texi: ... and the example here.
This patch adds support for both conditional and unconditional unpacked
ASRD. This meant adding a new define_insn for the unconditional form,
instead of reusing the conditional instructions. It also meant
extending the current conditional patterns to support merging with
any independent value, not just zero.
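A hedged example of the basic idiom (illustrative only; the unpacked and
conditional cases in the new tests arise from loops that mix element widths
and add a condition):
  void
  shift_div (short *__restrict r, short *__restrict a, int n)
  {
    for (int i = 0; i < n; i++)
      r[i] = a[i] / 16;   /* signed division by a power of 2 -> ASRD */
  }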
gcc/
* config/aarch64/aarch64-sve.md (sdiv_pow2<mode>3): Extend from
SVE_FULL_I to SVE_I. Generate an UNSPEC_PRED_X.
(*sdiv_pow2<mode>3): New pattern.
(@cond_<sve_int_op><mode>): Extend from SVE_FULL_I to SVE_I.
Wrap the ASRD in an UNSPEC_PRED_X.
(*cond_<sve_int_op><mode>_2): Likewise. Replace the UNSPEC_PRED_X
predicate with a constant PTRUE, if it isn't already.
(*cond_<sve_int_op><mode>_z): Replace with...
(*cond_<sve_int_op><mode>_any): ...this new pattern.
gcc/testsuite/
* gcc.target/aarch64/sve/asrdiv_4.c: New test.
* gcc.target/aarch64/sve/cond_asrd_1.c: Likewise.
* gcc.target/aarch64/sve/cond_asrd_1_run.c: Likewise.
* gcc.target/aarch64/sve/cond_asrd_2.c: Likewise.
* gcc.target/aarch64/sve/cond_asrd_2_run.c: Likewise.
* gcc.target/aarch64/sve/cond_asrd_3.c: Likewise.
* gcc.target/aarch64/sve/cond_asrd_3_run.c: Likewise.
This patch adds support for unpacked conditional BIC. The type suffix
could be taken from the element size or the container size, so the
patch continues to use the element size. This is consistent with
the existing support for unconditional BIC.
gcc/
* config/aarch64/aarch64-sve.md (*cond_bic<mode>_2): Extend from
SVE_FULL_I to SVE_I.
(*cond_bic<mode>_any): Likewise.
gcc/testsuite/
* g++.target/aarch64/sve/cond_bic_1.C: New test.
* g++.target/aarch64/sve/cond_bic_2.C: Likewise.
* g++.target/aarch64/sve/cond_bic_3.C: Likewise.
* g++.target/aarch64/sve/cond_bic_4.C: Likewise.
This patch extends the SMULH and UMULH support to unpacked vectors.
The type suffix must be taken from the element size rather than the
container size.
The main use of these patterns is to support division and modulus
by a constant. The conditional forms would be hard to trigger from
non-ACLE code, and ACLE code needs fully-packed vectors only.
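A hedged example of that main use (illustrative only); the highpart multiply
in the expansion is where SMULH comes in, and the patch lets the same
expansion apply to unpacked vectors:
  void
  div_by_7 (int *__restrict r, int *__restrict a, int n)
  {
    for (int i = 0; i < n; i++)
      r[i] = a[i] / 7;   /* expanded using a signed highpart multiply */
  }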
gcc/
* config/aarch64/aarch64-sve.md (<su>mul<mode>3_highpart)
(@aarch64_pred_<MUL_HIGHPART:optab><mode>): Extend from SVE_FULL_I
to SVE_I.
gcc/testsuite/
* gcc.target/aarch64/sve/mul_highpart_3.c: New test.
This patch adds support for unpacked SVE SABD and UABD.
It also rewrites the patterns so that they match as combine
patterns without the need for REG_EQUAL notes. Finally,
there was no pattern for merging with the second input,
which can be handled by reversing the operands.
The type suffix needs to be taken from the element size rather
than the container size.
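A hedged example of the max/min form the rewritten combine patterns match
(illustrative only):
  void
  abs_diff (int *__restrict r, int *__restrict a, int *__restrict b, int n)
  {
    for (int i = 0; i < n; i++)
      {
        int mx = a[i] > b[i] ? a[i] : b[i];
        int mn = a[i] > b[i] ? b[i] : a[i];
        r[i] = mx - mn;   /* max - min of the same operands -> SABD */
      }
  }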
gcc/
* config/aarch64/aarch64-sve.md (<su>abd<mode>_3): Extend from
SVE_FULL_I to SVE_I.
(*aarch64_cond_<su>abd<mode>_2): Likewise.
(*aarch64_cond_<su>abd<mode>_any): Likewise.
(@aarch64_pred_<su>abd<mode>): Likewise. Use UNSPEC_PRED_X
for the max and min but not for the minus.
(*aarch64_cond_<su>abd<mode>_3): New pattern.
gcc/testsuite/
* g++.target/aarch64/sve/abd_1.C: New test.
* g++.target/aarch64/sve/cond_abd_1.C: Likewise.
* g++.target/aarch64/sve/cond_abd_2.C: Likewise.
* g++.target/aarch64/sve/cond_abd_3.C: Likewise.
* g++.target/aarch64/sve/cond_abd_4.C: Likewise.
This patch extends the ADR patterns to handle unpacked vectors.
They would work with both elements and containers, but since
the instructions only support .s and .d, we get more coverage
by using containers.
gcc/
* config/aarch64/iterators.md (SVE_24I): New iterator.
* config/aarch64/aarch64-sve.md (*aarch64_adr<mode>_shift): Extend from
SVE_FULL_SDI to SVE_24I. Use containers rather than elements.
gcc/testsuite/
* gcc.target/aarch64/sve/adr_6.c: New test.
This patch adds support for conditional binary ADD, SUB, MUL, SMAX,
UMAX, SMIN, UMIN, LSL, LSR, ASR, AND, ORR and EOR. It's not really
possible to split it up further given how the patterns are written.
Min, max and right-shift need the element size rather than the container
size. The others would work with both, although MUL should be more
efficient when applied to elements instead of containers.
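A hedged example of a conditional add in which the data elements are narrower
than the controlling comparison, so the values end up in unpacked vectors
(illustrative only; the actual tests differ):
  void
  cond_add (int *__restrict pred, short *__restrict r,
            short *__restrict a, short *__restrict b, int n)
  {
    for (int i = 0; i < n; i++)
      r[i] = pred[i] ? a[i] + b[i] : a[i];  /* predicated ADD, merging with
                                               the first operand */
  }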
gcc/
* config/aarch64/aarch64-sve.md (@cond_<SVE_INT_BINARY:optab><mode>)
(*cond_<SVE_INT_BINARY:optab><mode>_2): Extend from SVE_FULL_I
to SVE_I.
(*cond_<SVE_INT_BINARY:optab><mode>_3): Likewise.
(*cond_<SVE_INT_BINARY:optab><mode>_any): Likewise.
(*cond_<SVE_INT_BINARY:optab><mode>_2_const): Likewise.
(*cond_<SVE_INT_BINARY:optab><mode>_any_const): Likewise.
gcc/testsuite/
* g++.target/aarch64/sve/cond_arith_1.C: New test.
* g++.target/aarch64/sve/cond_arith_2.C: Likewise.
* g++.target/aarch64/sve/cond_arith_3.C: Likewise.
* g++.target/aarch64/sve/cond_arith_4.C: Likewise.
* g++.target/aarch64/sve/cond_shift_1.C: New test.
* g++.target/aarch64/sve/cond_shift_2.C: Likewise.
* g++.target/aarch64/sve/cond_shift_3.C: Likewise.
* g++.target/aarch64/sve/cond_shift_4.C: Likewise.
This patch makes the SVE_INT_BINARY_IMM patterns support
unpacked arithmetic, covering MUL, SMAX, SMIN, UMAX and UMIN.
For min and max, the type suffix must be taken from the element
size rather than the container size.
The XFAILs are due to PR98602.
gcc/
* config/aarch64/aarch64-sve.md (<SVE_INT_BINARY_IMM:optab><mode>3)
(@aarch64_pred_<SVE_INT_BINARY_IMM:optab><mode>)
(*post_ra_<SVE_INT_BINARY_IMM:optab><mode>3): Extend from SVE_FULL_I
to SVE_I.
gcc/testsuite/
PR testsuite/98602
* g++.target/aarch64/sve/max_1.C: New test.
* g++.target/aarch64/sve/min_1.C: Likewise.
* gcc.target/aarch64/sve/mul_2.c: Likewise.
This patch adds support for unpacked SVE LSL, ASR and LSR.
For right shifts, the type suffix needs to be taken from the
element size rather than the container size.
gcc/
* config/aarch64/aarch64-sve.md (<ASHIFT:optab><mode>3)
(v<ASHIFT:optab><mode>3, @aarch64_pred_<optab><mode>)
(*post_ra_v<ASHIFT:optab><mode>3): Extend from SVE_FULL_I to SVE_I.
gcc/testsuite/
* gcc.target/aarch64/sve/shift_2.c: New test.
In GCC 10, cp_walk_subtrees was changed to walk template arguments.
As the following testcase shows, that changed the mangling of some functions.
I believe the previous behavior, where find_abi_tags_r doesn't recurse into
template args, was the correct one, but setting *walk_subtrees = 0
for the types and handling the type subtree walking manually in
find_abi_tags_r looks too hard: there are a lot of subtrees and details about
what should and shouldn't be walked, both in tree.c (walk_type_fields there,
which is static) and in cp_walk_subtrees itself.
The following patch abuses the fact that *walk_subtrees is an int to
tell cp_walk_subtrees that it shouldn't walk the template args.
Co-authored-by: Jason Merrill <jason@redhat.com>
gcc/cp/ChangeLog:
PR c++/98481
* class.c (find_abi_tags_r): Set *walk_subtrees to 2 instead of 1
for types.
(mark_abi_tags_r): Likewise.
* decl2.c (min_vis_r): Likewise.
* tree.c (cp_walk_subtrees): If *walk_subtrees_p is 2, look through
typedefs.
gcc/testsuite/ChangeLog:
PR c++/98481
* g++.dg/abi/abi-tag24.C: New test.
The vectorizer, for large permuted grouped loads, generates
inefficient intermediate code (cleaned up only later) that runs
into complexity issues in SCEV analysis and elsewhere. For the
non-single-element interleaving case we already put a hard limit
in place; this applies the same limit to the missing case.
2021-01-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/91403
* tree-vect-data-refs.c (vect_analyze_group_access_1): Cap
single-element interleaving group size at 4096 elements.
* gcc.dg/vect/pr91403.c: New testcase.
The .ld1_args file is not created when HAVE_GNU_LD is false.
The ltrans0.ltrans_arg file is not created when the make jobserver
is available, so remove the MAKEFLAGS variable.
Add an exception for *.gcc_args files similar to the
exception for *.cdtor.* files.
Limit both exceptions to targets that define EH_FRAME_THROUGH_COLLECT2.
That means that although the test case does not use C++ constructors
or destructors, it still uses dwarf2 frame info.
2021-01-11 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR testsuite/98225
* gcc.misc-tests/outputs.exp: Unset MAKEFLAGS.
Expect .ld1_args only when GNU LD is used.
Add an exception for *.gcc_args files.
This fixes a double-counting in the reduction cost when vectorizing
the reduction through the regular vectorizable_* functions.
2021-01-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/98526
* tree-vect-loop.c (vect_model_reduction_cost): Remove costing
of the actual reduction op for the regular case.
(vectorizable_reduction): Cost the stmts
vect_transform_reduction produces here.
The deprecation phase for access checks is finished.
The `-ftransition=import` and `-ftransition=checkimports` switches no
longer have an effect and are now removed. Symbols that are not visible
in a particular scope will no longer be found by the compiler.
Reviewed-on: https://github.com/dlang/dmd/pull/12124
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd 2d3d13748.
* d-lang.cc (d_handle_option): Remove OPT_ftransition_checkimports and
OPT_ftransition_import.
* gdc.texi (Warnings): Remove documentation for -ftransition=import
and -ftransition=checkimports.
* lang.opt (ftransition=checkimports): Remove.
(ftransition=import): Remove.
The vec-abi-varargs-1.c testcase on IBM Z currently fails.
While adding an SI mode vector to a DI mode vector, the former is unpacked using:
_28 = BIT_INSERT_EXPR <{ 0, 0, 0, 0 }, _2, 0>;
_34 = [vec_unpack_lo_expr] _28;
However, on big-endian targets, lo refers to the right-hand side of the vector, in this case the zeroes.
2021-01-11 Andreas Krebbel <krebbel@linux.ibm.com>
* tree-ssa-forwprop.c (simplify_vector_constructor): For
big-endian, use UNPACK[_FLOAT]_HI.
This fixes a memory leak in complex_add_pattern because I was not calling
vect_free_slp_tree when dissolving one side of the TWO_OPERANDS nodes.
Secondly, it also upgrades the class to the new interface required by the other
patterns.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (class complex_pattern,
class complex_add_pattern): Add parameters to matches.
(complex_add_pattern::build): Free memory.
(complex_add_pattern::matches): Move validation to the end of the match.
(complex_add_pattern::recognize): Likewise.
This fixes a bug with externals and linear_loads_p where I forgot to save the
value before returning.
It also fixes handling of nodes with multiple children on a non-VEC_PERM node.
There the child iteration already resolves the kind, and the loads are all
expected to be the same if valid, so just return one.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (linear_loads_p): Fix externals.
This fixes an issue where is_linear_load_p could return the incorrect
permutation kind because it is single-pass.
This arranges the candidates in such a way that there won't be any ambiguity,
so that the function can remain linear but still give correct values.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (is_linear_load_p): Fix ambiguity.
For floating point multiply, we have nice code in reassoc to reassociate
multiplications into an almost optimal sequence of as few multiplications as
possible (or a library call), but for integral types we just give up
because there is no __builtin_powi* for those types.
As there is no library routine we could use, instead of adding a new internal
call just to hold it temporarily and then lower it to multiplications again,
this patch, for the integral types, calls into the sincos pass routine that
expands it into multiplications right away.
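A hedged example of an integral case that can now be handled:
  int
  pow6 (int x)
  {
    /* x**6 needs only 3 multiplications: t = x*x; t2 = t*t; t2*t.  */
    return x * x * x * x * x * x;
  }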
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95867
* tree-ssa-math-opts.h: New header.
* tree-ssa-math-opts.c: Include tree-ssa-math-opts.h.
(powi_as_mults): No longer static. Use build_one_cst instead of
build_real. Formatting fix.
* tree-ssa-reassoc.c: Include tree-ssa-math-opts.h.
(attempt_builtin_powi): Handle multiplication reassociation without
powi_fndecl using powi_as_mults.
(reassociate_bb): For integral types don't require
-funsafe-math-optimizations to call attempt_builtin_powi.
* gcc.dg/tree-ssa/pr95867.c: New test.
On top of the previous widening_mul patch, this one recognizes also
(non-perfect) signed multiplication with overflow, like:
int
f5 (int x, int y, int *res)
{
*res = (unsigned) x * y;
return x && (*res / x) != y;
}
The problem with such checks is that they invoke UB if x is -1 and
y is INT_MIN during the division, but perhaps the code knows that
those values won't appear. As that case is UB, we can do whatever we want
for it, and handling it as signed overflow is the best option. If x is a
constant not equal to -1, then the checks are 100% correct, though.
I haven't tried to pattern match bullet-proof checks, because I really don't
know whether users would write them like that in real-world code; perhaps
*res = (unsigned) x * y;
return x && (x == -1 ? (*res / y) != x : (*res / x) != y);
?
https://wiki.sei.cmu.edu/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow
suggests using a twice-as-wide multiplication (perhaps we should handle that
too, for both signed and unsigned), or some very large code
with 4 different divisions nested in many conditionals; there is no way one
can match all the possible variants thereof.
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95852
* tree-ssa-math-opts.c (maybe_optimize_guarding_check): Change
mul_stmts parameter type to vec<gimple *> &. Before cond_stmt
allow in the bb any of the stmts in that vector, div_stmt and
up to 3 cast stmts.
(arith_cast_equal_p): New function.
(arith_overflow_check_p): Add cast_stmt argument, handle signed
multiply overflow checks.
(match_arith_overflow): Adjust caller. Handle signed multiply
overflow checks.
* gcc.target/i386/pr95852-3.c: New test.
* gcc.target/i386/pr95852-4.c: New test.
The following patch pattern recognizes some forms of multiplication followed
by overflow check through division by one of the operands compared to the
other one, with optional removal of the guarding non-zero check for that
operand where possible. The patterns are replaced with effectively
__builtin_mul_overflow or __builtin_mul_overflow_p. The testcases cover 64
different forms of that.
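A hedged sketch of one of the recognized unsigned shapes (just one of the 64
forms covered by the testcases):
  int
  mul_ovf (unsigned x, unsigned y, unsigned *res)
  {
    *res = x * y;
    /* Recognized as __builtin_mul_overflow (x, y, res); the x != 0 guard can
       be removed as part of the transformation.  */
    return x && (*res / x) != y;
  }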
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95852
* tree-ssa-math-opts.c (maybe_optimize_guarding_check): New function.
(uaddsub_overflow_check_p): Renamed to ...
(arith_overflow_check_p): ... this. Handle also multiplication
with overflow check.
(match_uaddsub_overflow): Renamed to ...
(match_arith_overflow): ... this. Add cfg_changed argument. Handle
also multiplication with overflow check. Adjust function comment.
(math_opts_dom_walker::after_dom_children): Adjust callers. Call
match_arith_overflow also for MULT_EXPR.
* gcc.target/i386/pr95852-1.c: New test.
* gcc.target/i386/pr95852-2.c: New test.
__builtin_convertvector seems well-suited to implementing the vmovl and
vmovn intrinsics that widen and narrow
the integer elements in a vector.
This removes some more inline assembly from the intrinsics.
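A hedged sketch of the idea (not the exact arm_neon.h code):
  #include <arm_neon.h>

  /* Widen eight 8-bit lanes to eight 16-bit lanes, as vmovl_s8 does.  */
  int16x8_t
  widen (int8x8_t a)
  {
    return __builtin_convertvector (a, int16x8_t);
  }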
gcc/
* config/aarch64/arm_neon.h (vmovl_s8): Reimplement using
__builtin_convertvector.
(vmovl_s16): Likewise.
(vmovl_s32): Likewise.
(vmovl_u8): Likewise.
(vmovl_u16): Likewise.
(vmovl_u32): Likewise.
(vmovn_s16): Likewise.
(vmovn_s32): Likewise.
(vmovn_s64): Likewise.
(vmovn_u16): Likewise.
(vmovn_u32): Likewise.
(vmovn_u64): Likewise.
gcc/testsuite/ChangeLog:
PR gcov-profile/98273
* lib/gcov.exp: Add run-gcov-pytest function which runs pytest.
* g++.dg/gcov/pr98273.C: New test.
* g++.dg/gcov/gcov.py: New test.
* g++.dg/gcov/test-pr98273.py: New test.
gcc/ChangeLog:
* gimple-if-to-switch.cc (struct condition_info): Use auto_vec.
(if_chain::is_beneficial): Delete clusters.
(find_conditions): Make the second argument of conditions_in_bbs a
pointer so that we control its lifetime.
(pass_if_to_switch::execute): Delete them.
This patch makes move_unallocated_pseudos consistent
with what we have in function find_moveable_pseudos, where we
record the original pseudo into pseudo_replaced_reg only if
validate_change succeeds with newreg. To ensure every
unallocated pseudo in move_unallocated_pseudos has the expected
information, it's better to add a check and skip it if it's
unexpected. This avoids possible ICEs in the future.
gcc/ChangeLog:
* ira.c (move_unallocated_pseudos): Check other_reg and skip if
it isn't set.
Adds TEST_OUTPUT directives and reduces the verbosity of many tests.
Reviewed-on: https://github.com/dlang/dmd/pull/12112
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd cb1106ad5.
Expression-based contract syntax has been added. Contracts that consist
of a single assertion can now be written more succinctly and multiple
`in` or `out` contracts can be specified for the same function.
Reviewed-on: https://github.com/dlang/dmd/pull/12106
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd e598f69c0.
Remove fallout from commit 0bd675183d ("match.pd: Add ~(X - Y) -> ~X
+ Y simplification [PR96685]") and paper over the regression caused, as
it is not the fault of the test cases affected.
Previously assembly like this:
.text
.align 1
.globl eq_notsi
.type eq_notsi, @function
eq_notsi:
.word 0 # 35 [c=0] procedure_entry_mask
subl2 $4,%sp # 46 [c=32] *addsi3
mcoml 4(%ap),%r0 # 32 [c=16] *one_cmplsi2_ccz
jeql .L1 # 34 [c=26] *branch_ccz
addl2 $2,%r0 # 31 [c=32] *addsi3
.L1:
ret # 40 [c=0] return
.size eq_notsi, .-eq_notsi
was produced. Now this:
.text
.align 1
.globl eq_notsi
.type eq_notsi, @function
eq_notsi:
.word 0 # 36 [c=0] procedure_entry_mask
subl2 $4,%sp # 48 [c=32] *addsi3
movl 4(%ap),%r0 # 33 [c=16] *movsi_2
cmpl %r0,$-1 # 34 [c=8] *cmpsi_ccz/1
jeql .L3 # 35 [c=26] *branch_ccz
subl3 %r0,$1,%r0 # 32 [c=32] *subsi3/1
ret # 27 [c=0] return
.L3:
clrl %r0 # 31 [c=2] *movsi_2
ret # 41 [c=0] return
.size eq_notsi, .-eq_notsi
is, which cannot work with post-reload comparison elimination, due to
the comparison against -1 rather than 0.
Therefore use subtraction from a constant rather than addition, as the former
operation is not transformed, removing these regressions:
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-assembler one_cmplsi[^ ]*_ccz(/[0-9]+)?\n
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-assembler one_cmplsi[^ ]*_ccn(/[0-9]+)?\n
and likewise across some of the other optimization levels verified.
The LE variant appears unaffected as the new transformation produces
slightly different although still suboptimal code:
.text
.align 1
.globl le_notsi
.type le_notsi, @function
le_notsi:
.word 0 # 27 [c=0] procedure_entry_mask
subl2 $4,%sp # 34 [c=32] *addsi3
movl 4(%ap),%r1 # 23 [c=16] *movsi_2
mcoml %r1,%r0 # 24 [c=8] *one_cmplsi2_ccnz
jleq .L1 # 26 [c=26] *branch_ccnz
subl3 %r1,$1,%r0 # 22 [c=32] *subsi3/1
.L1:
ret # 32 [c=0] return
.size le_notsi, .-le_notsi
but update the test case too, for consistency with the other two.
gcc/testsuite/
* gcc.target/vax/cmpelim-eq-notsi.c: Use subtraction from a
constant rather than addition.
* gcc.target/vax/cmpelim-le-notsi.c: Likewise.
* gcc.target/vax/cmpelim-lt-notsi.c: Likewise.