The following patch improves the A / (1 << B) -> A >> B simplification.
As seen in the testcase, if there is unnecessary widening for the division,
we currently optimize it into a shift on the widened type; but if the
left shift is widened too, there is no reason to do that: we can perform
the shift in the original type and convert afterwards. The
tree_nonzero_bits & wi::mask check already ensures this is fine even for
signed values.
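A minimal sketch of the kind of code this targets (hypothetical, not the
exact testcase from the PR):

  /* 'a' is extended from a narrower type matching the type of 1 << b,
     so we can do 'a >> b' in unsigned int and extend the result,
     instead of widening both operands and shifting in the wide type.  */
  unsigned long long
  f (unsigned int a, int b)
  {
    return (unsigned long long) a / (1U << b);
  }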
I've split the vr-values optimization into a separate patch as it causes
a small regression on two testcases, but this patch fixes what has been
reported in the PR alone.
2021-01-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96930
* match.pd ((A / (1 << B)) -> (A >> B)): If A is extended
from narrower value which has the same type as 1 << B, perform
the right shift on the narrower value followed by extension.
* g++.dg/tree-ssa/pr96930.C: New test.
I've tried to add such a helper, but handing over just the analysis and
letting each pass handle it differently seems complicated given the
limitations of the bswap infrastructure.
So, this patch just hooks the optimization also into store-merging so that
the original testcase from the PR can be fixed.
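A sketch of the kind of vector CONSTRUCTOR that store-merging can now
hand to the bswap machinery (hypothetical, not the exact testcase from
the PR):

  typedef unsigned char V8 __attribute__ ((vector_size (8)));

  /* The elements spell out the bytes of X from most to least
     significant, so the CONSTRUCTOR as a whole is a bswap of X.  */
  V8
  f (unsigned long long x)
  {
    return (V8) { x >> 56, x >> 48, x >> 40, x >> 32,
                  x >> 24, x >> 16, x >> 8, x };
  }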
2021-01-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96239
* gimple-ssa-store-merging.c (maybe_optimize_vector_constructor): New
function.
(get_status_for_store_merging): Don't return BB_INVALID for blocks
with potential bswap optimizable CONSTRUCTORs.
(pass_store_merging::execute): Optimize vector CONSTRUCTORs with bswap
if possible.
* gcc.dg/tree-ssa/pr96239.c: New test.
Option descriptions should be terminated with a full stop; the
FAIL: compiler driver --help=go option(s): "^ +-.*[^:.]$" absent from output: " -fgo-embedcfg=<file> List embedded files via go:embed"
test even reports that.
2021-01-05 Jakub Jelinek <jakub@redhat.com>
* lang.opt (fgo-embedcfg=): Add full stop at the end of description.
This fixes extraction of live bool vector results for the case of
integer mode vectors.
2021-01-05 Richard Biener <rguenther@suse.de>
PR tree-optimization/98381
* tree.c (vector_element_bits): Properly compute bool vector
element size.
* tree-vect-loop.c (vectorizable_live_operation): Properly
compute the last lane bit offset.
Prevent spurious FP exceptions with _mm_cvt{,t}ps_pi32 for TARGET_MMX_WITH_SSE
by clearing the top 64 bits of the input XMM register.
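For reference, a sketch of the affected intrinsic in use:

  #include <xmmintrin.h>

  /* Convert the two low single-precision elements of X to 32-bit
     integers in an MMX register; junk in the upper 64 bits of X must
     not raise FP exceptions.  */
  __m64
  f (__m128 x)
  {
    return _mm_cvtps_pi32 (x);
  }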
2021-01-05 Uroš Bizjak <ubizjak@gmail.com>
gcc/
PR target/98522
* config/i386/sse.md (sse_cvtps2pi): Redefine as define_insn_and_split.
Clear the top 64 bits of the input XMM register.
(sse_cvttps2pi): Ditto.
gcc/testsuite/
PR target/98522
* gcc.target/i386/pr98522.c: New test.
The diagnostic for a misplaced module decl was essentially 'computer
says no', which isn't the most helpful. This adjusts it to indicate
what would be acceptable.
gcc/cp/
* parser.c (cp_parser_module_declaration): Alter diagnostic
text to say where a module declaration is permissible.
gcc/testsuite/
* g++.dg/modules/mod-decl-1.C: Adjust.
* g++.dg/modules/p0713-2.C: Adjust.
* g++.dg/modules/p0713-3.C: Adjust.
_mm_extract_pi16 is the intrinsic for pextrw, whose result should be
zero-extended, not sign-extended.
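A sketch of where the difference is observable (hypothetical, not one of
the new testcases):

  #include <xmmintrin.h>

  /* pextrw zero-extends the selected 16-bit element, so for an element
     value of 0xffff this must return 65535, not -1.  */
  int
  f (__m64 x)
  {
    return _mm_extract_pi16 (x, 0);
  }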
gcc/
PR target/98495
* config/i386/xmmintrin.h (_mm_extract_pi16): Cast to unsigned
short first.
gcc/testsuite/
PR target/98495
* gcc.target/i386/pr98495-1.c: New test.
* gcc.target/i386/pr98495-2.c: New test.
* gcc.target/i386/pr98495-3.c: New test.
* gcc.target/i386/pr98495-4.c: New test.
* gcc.target/i386/pr98495-5.c: New test.
The following patch adds a define_insn_and_split to optimize
vpmovmskb %xmm0, %eax
- movzwl %ax, %eax
notl %eax
and a combine splitter to optimize
pmovmskb %xmm0, %eax
- notl %eax
- movzwl %ax, %eax
+ xorl $65535, %eax
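A C source pattern that produces such a sequence might look like this
(a hedged sketch, not the committed testcase):

  #include <emmintrin.h>

  /* The not of the 16-bit mask is zero-extended, so the notl/movzwl
     pair can be replaced by a single xorl $65535.  */
  unsigned int
  f (__m128i x)
  {
    return (unsigned short) ~_mm_movemask_epi8 (x);
  }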
gcc/ChangeLog
PR target/98461
* config/i386/sse.md (*sse2_pmovskb_zexthisi): New
define_insn_and_split for zero_extend of subreg HI of pmovskb
result.
(*sse2_pmovskb_zexthisi): Add new combine splitters for
zero_extend of not of subreg HI of pmovskb result.
gcc/testsuite/ChangeLog
* gcc.target/i386/sse2-pr98461-2.c: New test.
This patch fixes a mode/rtx mismatch for ILP32 targets in:
mem = force_const_mem (ptr_mode, imm);
where imm can be Pmode rather than ptr_mode.
The patch uses convert_memory_address to convert the Pmode address
to ptr_mode before the call. However, immediate addresses can in
general contain unspecs, and convert_memory_address wasn't set up
to handle those.
The patch therefore adds some generic unspec handling to
convert_memory_address_addr_space_1. As the comment says, we can add
a target hook if this behaviour turns out to be wrong for some targets.
But I think what the patch does is a strict improvement over the status
quo: without it, we would try to force the unspec into a register,
but nevertheless wrap the result in a (const ...). That in turn
would be invalid rtl and seems bound to generate an ICE later.
I tested the explow.c part using -fstack-protector with local hacks
to force SYMBOL_FORCE_TO_MEM for UNSPEC_SALT_ADDR.
Fixes c-c++-common/torture/pr57945.c and various other tests.
gcc/
PR target/97269
* explow.c (convert_memory_address_addr_space_1): Handle UNSPECs
nested in CONSTs.
* config/aarch64/aarch64.c (aarch64_expand_mov_immediate): Use
convert_memory_address to convert symbolic immediates to ptr_mode
before forcing them to memory.
aarch64's *add<mode>3_poly_1 has a pattern with the constraints:
"=...,r,&r"
"...,0,rk"
"...,Uai,Uat"
i.e. the penultimate alternative requires operands 0 and 1 to match,
but the final alternative does not allow them to match.
The register allocators dealt with this correctly, and so used
different input and output registers for instructions with Uat
operands. However, constrain_operands carried the penultimate
alternative's matching rule over to the final alternative,
so it would essentially ignore the earlyclobber. This in turn
allowed postreload to convert a correct Uat pairing into an
incorrect one.
The fix is simple: recompute the matching information for each
alternative.
gcc/
PR rtl-optimization/97144
* recog.c (constrain_operands): Initialize matching_operand
for each alternative, rather than only doing it once.
gcc/testsuite/
PR rtl-optimization/97144
* gcc.c-torture/compile/pr97144.c: New test.
* gcc.target/aarch64/sve/pr97144.c: Likewise.
In the PR, fwprop was changing a call instruction and tripped
an assert when trying to update a list of call clobbers.
There are two ways we could handle this: remove the call clobber
and then add it back, or assume that the clobber will stay in its
current place.
At the moment we don't have enough information to safely move
calls around, so the second approach seems simpler and more
efficient.
gcc/
PR rtl-optimization/98403
* rtl-ssa/changes.cc (function_info::finalize_new_accesses): Explain
why we don't remove call clobbers.
(function_info::apply_changes_to_insn): Don't attempt to add
call clobbers here.
gcc/testsuite/
PR rtl-optimization/98403
* g++.dg/opt/pr98403.C: New test.
On AArch64, the vectoriser tries various ways of vectorising with both
SVE and Advanced SIMD and picks the best one. All other things being
equal, it prefers earlier attempts over later attempts.
The way this works currently is that, once it has a successful
vectorisation attempt A, it analyses all other attempts as epilogue
loops of A:
/* When pick_lowest_cost_p is true, we should in principle iterate
over all the loop_vec_infos that LOOP_VINFO could replace and
try to vectorize LOOP_VINFO under the same conditions.
E.g. when trying to replace an epilogue loop, we should vectorize
LOOP_VINFO as an epilogue loop with the same VF limit. When trying
to replace the main loop, we should vectorize LOOP_VINFO as a main
loop too.
However, autovectorize_vector_modes is usually sorted as follows:
- Modes that naturally produce lower VFs usually follow modes that
naturally produce higher VFs.
- When modes naturally produce the same VF, maskable modes
usually follow unmaskable ones, so that the maskable mode
can be used to vectorize the epilogue of the unmaskable mode.
This order is preferred because it leads to the maximum
epilogue vectorization opportunities. Targets should only use
a different order if they want to make wide modes available while
disparaging them relative to earlier, smaller modes. The assumption
in that case is that the wider modes are more expensive in some
way that isn't reflected directly in the costs.
There should therefore be few interesting cases in which
LOOP_VINFO fails when treated as an epilogue loop, succeeds when
treated as a standalone loop, and ends up being genuinely cheaper
than FIRST_LOOP_VINFO. */
However, the vectoriser can normally elide alias checks for epilogue
loops, on the basis that the main loop should do them instead.
Converting an epilogue loop to a main loop can therefore cause the alias
checks to be skipped. (It probably also unfairly penalises the original
loop in the cost comparison, given that one loop will have alias checks
and the other won't.)
As the comment says, we should in principle analyse each vector mode
twice: once as a main loop and once as an epilogue. However, doing
that up-front would be quite expensive. This patch instead goes for a
compromise: if an epilogue loop for mode M2 seems better than a main
loop for mode M1, re-analyse with M2 as the main loop.
The patch fixes dg.torture.exp=pr69719.c when testing with
-msve-vector-bits=128.
gcc/
PR tree-optimization/98371
* tree-vect-loop.c (vect_reanalyze_as_main_loop): New function.
(vect_analyze_loop): If an epilogue loop appears to be cheaper
than the main loop, re-analyze it as a main loop before adopting
it as a main loop.
With the introduction of C++20 modules and libcody, cc1plus and
cc1objplus gained a dependency on the socket functions. Those functions
were only merged into libc in Solaris 11.4; before that, one needed to
link with -lsocket -lnsl on Solaris, so the new dependency broke the
Solaris 11.3 build.
While we already have 4 different checks for those libraries in the
tree, I decided to import autoconf-archive's AX_LIB_SOCKET_NSL macro
instead. At the same time, the patch only links libcody and the
networking libs where needed (cc1plus, cc1objplus).
Bootstrapped without regressions on i386-pc-solaris2.11 (Solaris 11.3
and 11.4), sparc-sun-solaris2.11, and x86_64-pc-linux-gnu.
2020-12-16 Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
c++tools:
PR c++/98316
* configure.ac: Include ../config/ax_lib_socket_nsl.m4.
(NETLIBS): Determine using AX_LIB_SOCKET_NSL.
* configure: Regenerate.
* Makefile.in (NETLIBS): Define.
(g++-mapper-server$(exeext)): Add $(NETLIBS).
gcc/objcp:
PR c++/98316
* Make-lang.in (cc1objplus$(exeext)): Add $(CODYLIB), $(NETLIBS).
gcc/cp:
PR c++/98316
* Make-lang.in (cc1plus$(exeext)): Add $(CODYLIB), $(NETLIBS).
gcc:
PR c++/98316
* configure.ac (NETLIBS): Determine using AX_LIB_SOCKET_NSL.
* aclocal.m4, configure: Regenerate.
* Makefile.in (NETLIBS): Define.
(BACKEND): Remove $(CODYLIB).
config:
PR c++/98316
* ax_lib_socket_nsl.m4: Import from autoconf-archive.
For signed x and y we don't try to optimize (int) (x - 1U) * y + y
into x * y: we can't do that with a signed x * y, because the former
is well defined for x == INT_MIN, y == -1, while the latter is not.
We could perhaps optimize it during isel or some very late optimization
where we'd magically turn on flag_wrapv, but we don't do that yet.
This patch optimizes it in simplify-rtx.c instead, so that we can
optimize it during combine.
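In source form, the expression involved looks like this (a sketch along
the lines of the new testcase):

  /* For x == INT_MIN, y == -1 this is well defined while a direct
     signed x * y would overflow; folding to x * y is safe at the RTL
     level because RTL arithmetic wraps.  */
  int
  f (int x, int y)
  {
    return (int) (x - 1U) * y + y;
  }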
2021-01-05 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/98334
* simplify-rtx.c (simplify_context::simplify_binary_operation_1):
Optimize (X - 1) * Y + Y to X * Y or (X + 1) * Y - Y to X * Y.
* gcc.target/i386/pr98334.c: New test.
This is just a precautionary fix.
2021-01-05 Bernd Edlinger <bernd.edlinger@hotmail.de>
* tree-inline.c (expand_call_inline): Restore input_location.
Return result from recursive call.
The constexpr iteration dereferenced an array element past the end of
the array.
for gcc/testsuite/ChangeLog
* g++.dg/cpp1y/constexpr-66093.C: Fix bounds issue.
This option will be used by the go command to implement go:embed directives,
which are new with the upcoming Go 1.16 release.
* lang.opt (fgo-embedcfg): New option.
* go-c.h (struct go_create_gogo_args): Add embedcfg field.
* go-lang.c (go_embedcfg): New static variable.
(go_langhook_init): Set go_create_gogo_args embedcfg field.
(go_langhook_handle_option): Handle OPT_fgo_embedcfg_.
* gccgo.texi (Invoking gccgo): Document -fgo-embedcfg.
-fsanitize=undefined with calls to nonnull functions
creates struct __ubsan_nonnull_arg_data instances
with CONSTRUCTORs for RECORD_TYPEs with NULL index values.
The analyzer was mistakenly creating an INTEGER_CST index for these
fields, leading to ICEs.
Fix the issue by iterating through the fields in the type
for such cases, imitating similar logic in varasm.c's
output_constructor.
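A sketch of code that triggers the instrumentation (hypothetical; the
committed testcase is gcc.dg/analyzer/pr98293.c):

  /* Compile with -fanalyzer -fsanitize=undefined: the nonnull check
     builds a __ubsan_nonnull_arg_data CONSTRUCTOR whose RECORD_TYPE
     fields have NULL indexes.  */
  extern void callee (void *) __attribute__ ((nonnull));

  void
  caller (void *p)
  {
    callee (p);
  }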
gcc/analyzer/ChangeLog:
PR analyzer/98293
* store.cc (binding_map::apply_ctor_to_region): When "index" is
NULL, iterate through the fields for RECORD_TYPEs, rather than
creating an INTEGER_CST index.
gcc/testsuite/ChangeLog:
PR analyzer/98293
* gcc.dg/analyzer/pr98293.c: New test.
The IFN_MASK* functions take two leading arguments: a load or
store pointer and a “cookie”. The type of the cookie is the
type of the access for TBAA purposes (like for MEM_REFs)
while the value of the cookie is the alignment of the access.
This PR was caused by a disagreement about whether the alignment
is measured in bits or bytes.
It looks like this goes back to PR68786, which made the
vectoriser create its own cookie argument rather than reusing
the one created by ifcvt. The alignment value of the new cookie
was measured in bytes (as needed by set_ptr_info_alignment)
while the existing code expected it to be measured in bits.
The folds I added for IFN_MASK_LOAD and STORE then made
things worse.
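The kind of conditional loop where ifcvt introduces these calls looks
like this (a sketch, not the PR testcase):

  /* After if-conversion and vectorization, the load of b[i] and the
     store to a[i] become IFN_MASK_LOAD/IFN_MASK_STORE calls whose
     cookie argument carries the access type and, in bits, the
     alignment.  */
  void
  f (int *a, int *b, int n)
  {
    for (int i = 0; i < n; i++)
      if (b[i] > 0)
        a[i] = b[i];
  }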
gcc/
PR tree-optimization/95401
* config/aarch64/aarch64-sve-builtins.cc
(gimple_folder::load_store_cookie): Use bits rather than bytes
for the alignment argument to IFN_MASK_LOAD and IFN_MASK_STORE.
* gimple-fold.c (gimple_fold_mask_load_store_mem_ref): Likewise.
* tree-vect-stmts.c (vectorizable_store): Likewise.
(vectorizable_load): Likewise.
gcc/testsuite/
PR tree-optimization/95401
* g++.dg/vect/pr95401.cc: New test.
* g++.dg/vect/pr95401a.cc: Likewise.
This makes sure to set the vector type on an invariant mask argument
for a masked load and SLP.
2021-01-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/98308
* tree-vect-stmts.c (vectorizable_load): Set invariant mask
SLP vectype.
* gcc.dg/vect/pr98308.c: New testcase.
As the testcase shows, we punt unnecessarily on popcount loop idioms if
the type is smaller than int or larger than long long.
A type smaller than int can be handled by zero-extending the argument to
unsigned int, and a type twice as wide as long long by doing
__builtin_popcountll on both halves of the __int128.
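The idiom in question, with one of the previously rejected types (a
sketch along the lines of the new testcase):

  /* Classic popcount loop; for unsigned char the argument can simply
     be zero-extended to unsigned int and counted with
     __builtin_popcount.  */
  int
  f (unsigned char x)
  {
    int n = 0;
    while (x)
      {
        x &= x - 1;
        n++;
      }
    return n;
  }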
2021-01-04 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95771
* tree-ssa-loop-niter.c (number_of_iterations_popcount): Handle types
with precision smaller than int's precision and types with precision
twice as large as long long. Formatting fixes.
* gcc.target/i386/pr95771.c: New test.
This does VN replacement in loop nb_iterations consistent with
the rest of the IL by using availability at the definition site
of uses.
2021-01-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/98464
* tree-ssa-sccvn.c (vn_valueize_for_srt): Rename from ...
(vn_valueize_wrapper): ... this. Temporarily adjust vn_context_bb.
(process_bb): Adjust.
* g++.dg/opt/pr98464.C: New testcase.
The documentation originally added to mention the clash between
-fsanitize=address and -fsanitize=hwaddress used confusing wording when
trying to say that -fsanitize=hwaddress is only available on AArch64:
it read as if -fsanitize=address were only supported on AArch64.
This patch fixes that wording by being more explicit.
gcc/ChangeLog:
PR other/98437
* doc/invoke.texi (-fsanitize=address): Fix wording describing
clash with -fsanitize=hwaddress.
This avoids running into memory reference code in compute_avail by
properly classifying unfolded reference trees on constants.
2021-01-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/98282
* tree-ssa-sccvn.c (vn_get_stmt_kind): Classify tcc_reference on
invariants as VN_NARY.
* g++.dg/opt/pr98282.C: New testcase.
This patch fixes a codegen regression in the handling of things like:
__temp.val[0] \
= vcombine_##funcsuffix (__b.val[0], \
vcreate_##funcsuffix (__AARCH64_UINT64_C (0))); \
in the 64-bit vst[234] functions. The zero was forced into a
register at expand time, and we relied on combine to fuse the
zero and combine back together into a single combinez pattern.
The problem is that the zero could be hoisted before combine
gets a chance to do its thing.
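A user-level sketch of where this matters (assuming arm_neon.h; not the
committed testcase):

  #include <arm_neon.h>

  /* vst2_f32 goes through the vcombine-with-zero path shown above;
     the zero should be folded into a single combinez-style instruction
     rather than hoisted into a separate register.  */
  void
  f (float32_t *p, float32x2x2_t v)
  {
    vst2_f32 (p, v);
  }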
gcc/
PR target/89057
* config/aarch64/aarch64-simd.md (aarch64_combine<mode>): Accept
aarch64_simd_reg_or_zero for operand 2. Use the combinez patterns
to handle zero operands.
gcc/testsuite/
PR target/89057
* gcc.target/aarch64/pr89057.c: New test.
The expansions of the svprf[bhwd] instructions weren't taking
advantage of the immediate addressing mode.
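For example (a sketch using the ACLE prefetch intrinsics):

  #include <arm_sve.h>

  /* The vnum offset 7 lies within [-32, 31], so it can now be encoded
     directly in the prfb instruction's immediate field.  */
  void
  f (svbool_t pg, const void *base)
  {
    svprfb_vnum (pg, base, 7, SV_PLDL1KEEP);
  }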
gcc/
* config/aarch64/aarch64.c (offset_6bit_signed_scaled_p): New function.
(offset_6bit_unsigned_scaled_p): Fix typo in comment.
(aarch64_sve_prefetch_operand_p): Accept MUL VLs in the range
[-32, 31].
gcc/testsuite/
* gcc.target/aarch64/sve/acle/asm/prfb.c: Test for a MUL VL range of
[-32, 31].
* gcc.target/aarch64/sve/acle/asm/prfh.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfw.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfd.c: Likewise.
This zeroes the matches array when failing SLP discovery because of the
work limit.
2021-01-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/98393
* tree-vect-slp.c (vect_build_slp_tree): Properly zero matches
when hitting the limit.
When the VF is one, an SLP reduction is in-order and thus we can
vectorize even when the reduction op is not associative.
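A sketch of such a reduction (assuming a vector of two doubles, so the
VF here is one; the FP additions are non-associative without
-ffast-math):

  /* An SLP reduction with group size two; with a VF of one it is
     computed in-order, so associativity is not required.  */
  double
  f (double *x, int n)
  {
    double r0 = 0.0, r1 = 0.0;
    for (int i = 0; i < n; i++)
      {
        r0 += x[2 * i];
        r1 += x[2 * i + 1];
      }
    return r0 + r1;
  }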
2021-01-04 Richard Biener <rguenther@suse.de>
PR tree-optimization/98291
* tree-vect-loop.c (vectorizable_reduction): Bypass
associativity check for SLP reductions with VF 1.
* gcc.dg/vect/slp-reduc-11.c: New testcase.
* gcc.dg/vect/vect-reduc-in-order-4.c: Adjust.
x is never equal to ~x, so we can fold such comparisons to constants.
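For example:

  /* No bit of x can equal the corresponding bit of ~x, so both
     comparisons fold to constants.  */
  int f (int x) { return x == ~x; }  /* folds to 0 */
  int g (int x) { return x != ~x; }  /* folds to 1 */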
2021-01-04 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96782
* match.pd (x == ~x -> false, x != ~x -> true): New simplifications.
* gcc.dg/tree-ssa/pr96782.c: New test.
When linking with -flto and -save-temps, various temporary files are
created in /tmp. The same happens when invoking the driver with an
@file parameter and using the -L or -I options.
gcc:
2021-01-04 Bernd Edlinger <bernd.edlinger@hotmail.de>
* collect-utils.c (collect_execute): Check dumppfx.
* collect2.c (maybe_run_lto_and_relink, do_link): Pass atsuffix
to collect_execute.
(do_link): Add new parameter atsuffix.
(main): Handle -dumpdir option. Skip one argument for
-o, -isystem and -B options.
* gcc.c (make_at_file): New helper function.
(close_at_file): Use it.
gcc/testsuite:
2021-01-04 Bernd Edlinger <bernd.edlinger@hotmail.de>
* gcc.misc-tests/outputs.exp: Adjust testcase.