gcc/ada/
* exp_spark.adb (Expand_SPARK_Array_Aggregate): Dedicated
routine for array aggregates; mostly reuses existing code, but
calls itself recursively for multi-dimensional array aggregates.
(Expand_SPARK_N_Aggregate): Call Expand_SPARK_Array_Aggregate to
do the actual expansion, starting from the first index of the
array type.
gcc/ada/
* sem_aggr.adb (Resolve_Iterated_Component_Association): New
internal subprogram Remove_References, to reset semantic
information on each reference to the index variable of the
association, so that Collect_Aggregate_Bounds can work properly
on multidimensional arrays with nested associations, and
subsequent expansion into loops can verify that dimensions of
each subaggregate are compatible.
* builtin-attrs.def (STRERRNOC): New macro.
(STRERRNOP): New macro.
(ATTR_ERRNOCONST_NOTHROW_LEAF_LIST): New attr list.
(ATTR_ERRNOPURE_NOTHROW_LEAF_LIST): New attr list.
* builtins.def (ATTR_MATHFN_ERRNO): Use
ATTR_ERRNOCONST_NOTHROW_LEAF_LIST.
(ATTR_MATHFN_FPROUNDING_ERRNO): Use ATTR_ERRNOCONST_NOTHROW_LEAF_LIST
or ATTR_ERRNOPURE_NOTHROW_LEAF_LIST.
- Generalize the logic for translating an arch string into internal
flags; this patch is infrastructure for supporting sub-extension
parsing.
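As a rough illustration, here is a minimal sketch (stub types and invented
extensions/masks; the real definitions live in
common/config/riscv/riscv-common.c) of the table-driven scheme: each entry
names an extension, points at a flag word inside the options structure, and
gives the mask to set, so the arch-string parser can walk the table instead
of hard-coding each extension.

#include <cstring>

// Stand-in for gcc_options; only the flag words matter here.
struct gcc_options_stub
{
  int x_target_flags = 0;
};

// opt_var_ref_t: pointer-to-member selecting the flag word to update.
typedef int gcc_options_stub::*opt_var_ref_t;

// riscv_ext_flag_table_t: one row per ISA extension.
struct riscv_ext_flag_table_t
{
  const char *ext;        // extension name, e.g. "m"
  opt_var_ref_t var_ref;  // which flag word to update
  int mask;               // bit to set in that word
};

// Invented masks, for illustration only.
enum { MASK_MUL = 1 << 0, MASK_ATOMIC = 1 << 1 };

static const riscv_ext_flag_table_t ext_flag_table[] = {
  { "m", &gcc_options_stub::x_target_flags, MASK_MUL },
  { "a", &gcc_options_stub::x_target_flags, MASK_ATOMIC },
};

// Called once per extension parsed out of the arch string.
static void apply_ext (gcc_options_stub *opts, const char *ext)
{
  for (const auto &e : ext_flag_table)
    if (std::strcmp (e.ext, ext) == 0)
      (opts->*e.var_ref) |= e.mask;
}

int main ()
{
  gcc_options_stub opts;
  apply_ext (&opts, "m");  // e.g. after parsing "rv64ima"
  apply_ext (&opts, "a");
  return opts.x_target_flags == (MASK_MUL | MASK_ATOMIC) ? 0 : 1;
}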
gcc/ChangeLog
* common/config/riscv/riscv-common.c (opt_var_ref_t): New.
(riscv_ext_flag_table_t): New.
(riscv_ext_flag_table): New.
(riscv_parse_arch_string): Pass gcc_options* instead of
&opts->x_target_flags only, and use riscv_ext_flag_table to
set up flags.
(riscv_handle_option): Update argument for riscv_parse_arch_string.
(riscv_expand_arch): Ditto.
(riscv_expand_arch_from_cpu): Ditto.
* tree.c (set_call_expr_flags): Fix string for ECF_RET1.
(build_common_builtin_nodes): Do not set ECF_RET1 for memcpy, memmove,
and memset. They are handled by builtin_fnspec.
This introduces a global alloc-pool for SLP nodes to reduce the
overhead of SLP allocation churn, which is going to get worse, and to
eventually be able to release SLP cycles, which retain a refcount of
one and thus are never freed at the moment.
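A minimal sketch of the technique (simplified; the real code uses GCC's
object allocator and the names in the ChangeLog below): class-local
operator new/delete route _slp_tree allocation through one global pool
that is created and destroyed around each vectorization run, so teardown
can release everything, including nodes kept alive by refcounted cycles.

#include <cstddef>
#include <vector>
#include <new>

// Toy pool: hands out raw blocks, returns them wholesale at teardown.
struct slp_node_pool
{
  std::vector<void *> blocks;
  void *allocate (std::size_t n)
  {
    void *p = ::operator new (n);
    blocks.push_back (p);
    return p;
  }
  ~slp_node_pool ()
  {
    for (void *p : blocks)
      ::operator delete (p);
  }
};

// Allocated in vectorize_loops / the SLP pass, freed afterwards.
static slp_node_pool *slp_tree_pool;

struct _slp_tree
{
  int refcnt = 1;
  static void *operator new (std::size_t n)
  { return slp_tree_pool->allocate (n); }
  // Individual delete is a no-op: memory lives until the pool dies,
  // which is what finally frees refcount cycles.
  static void operator delete (void *, std::size_t) {}
};

int main ()
{
  slp_tree_pool = new slp_node_pool;
  _slp_tree *t = new _slp_tree;   // drawn from the pool
  delete t;                       // runs dtor; memory stays pooled
  delete slp_tree_pool;           // releases all nodes at once
  slp_tree_pool = nullptr;
  return 0;
}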
2020-10-26 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (slp_tree_pool): Declare.
(_slp_tree::operator new): Likewise.
(_slp_tree::operator delete): Likewise.
* tree-vectorizer.c (vectorize_loops): Allocate and free the
slp_tree_pool.
(pass_slp_vectorize::execute): Likewise.
* tree-vect-slp.c (slp_tree_pool): Define.
(_slp_tree::operator new): Likewise.
(_slp_tree::operator delete): Likewise.
We now correctly detect that a jobserver is not active for
LTO linking:
lto-wrapper: warning: jobserver is not available: '--jobserver-auth=' is not present in 'MAKEFLAGS'
In that situation we should not call make -f abc.mk, as it can lead
to N^2 LTRANS units.
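A minimal sketch (plain C++, not the actual lto-wrapper.c logic) of the
detection the warning above describes: GNU make advertises an active
jobserver through a --jobserver-auth= token in the MAKEFLAGS environment
variable.

#include <cstdlib>
#include <cstring>
#include <cstdio>

static bool jobserver_active ()
{
  const char *makeflags = std::getenv ("MAKEFLAGS");
  return makeflags && std::strstr (makeflags, "--jobserver-auth=");
}

int main ()
{
  if (!jobserver_active ())
    {
      std::fprintf (stderr, "jobserver is not available: '--jobserver-auth='"
                            " is not present in 'MAKEFLAGS'\n");
      // Fall back to running LTRANS units directly instead of via a
      // sub-make that would fork its own parallel jobs.
    }
  return 0;
}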
gcc/ChangeLog:
* lto-wrapper.c (run_gcc): Do not use sub-make when jobserver is
not detected properly.
When running with -m32
FAIL: gcc.target/powerpc/pr94740.c (test for excess errors)
Excess errors:
cc1: error: '-mpcrel' requires '-mcmodel=medium'
The others don't run for -m32, but remove the unnecessary -mpcrel
anyway.
* gcc.target/powerpc/localentry-1.c: Remove -mpcrel from options.
* gcc.target/powerpc/notoc-direct-1.c: Likewise.
* gcc.target/powerpc/pr94740.c: Likewise.
All these tests fail with -m32 due to lack of int128 support, in some
cases with what I thought was not the best error message. For example
vsx_mask-move-runnable.c:34:3: error: unknown type name 'vector'
is misleading. The problem isn't "vector" but "vector __uint128_t".
* gcc.target/powerpc/vsx-load-element-extend-char.c: Require int128.
* gcc.target/powerpc/vsx-load-element-extend-int.c: Likewise.
* gcc.target/powerpc/vsx-load-element-extend-longlong.c: Likewise.
* gcc.target/powerpc/vsx-load-element-extend-short.c: Likewise.
* gcc.target/powerpc/vsx-store-element-truncate-char.c: Likewise.
* gcc.target/powerpc/vsx-store-element-truncate-int.c: Likewise.
* gcc.target/powerpc/vsx-store-element-truncate-longlong.c: Likewise.
* gcc.target/powerpc/vsx-store-element-truncate-short.c: Likewise.
* gcc.target/powerpc/vsx_mask-count-runnable.c: Likewise.
* gcc.target/powerpc/vsx_mask-expand-runnable.c: Likewise.
* gcc.target/powerpc/vsx_mask-extract-runnable.c: Likewise.
* gcc.target/powerpc/vsx_mask-move-runnable.c: Likewise.
This test exercises behaviour near the limit of 16-bit signed offsets.
If power10 prefixed instructions are enabled, no such testing occurs.
* gcc.target/powerpc/dimode_off.c: Add -mno-prefixed to options.
These tests require -mno-pcrel because they are testing features
of the non-pcrel ABI.
* gcc.target/powerpc/cprophard.c: Add -mno-pcrel to options.
* gcc.target/powerpc/float128-hw3.c: Likewise.
* gcc.target/powerpc/pr79439-1.c: Likewise.
* gcc.target/powerpc/pr79439-2.c: Likewise.
* gcc.target/powerpc/r2_shrink-wrap.c: Likewise.
When combining logical OR operands with a FALSE result, union the false
ranges for operand1 and operand2, not their intersection.
gcc/
PR tree-optimization/97567
* gimple-range-gori.cc (gori_compute::logical_combine): Union the
ranges of operand1 and operand2, not intersect.
gcc/testsuite/
* gcc.dg/pr97567.c: New.
* attr-fnspec.h: Update toplevel comment.
(attr_fnspec::attr_fnspec): New constructor.
(attr_fnspec::arg_read_p,
attr_fnspec::arg_written_p,
attr_fnspec::arg_access_size_given_by_arg_p,
attr_fnspec::arg_single_access_p,
attr_fnspec::loads_known_p,
attr_fnspec::stores_known_p,
attr_fnspec::clobbers_errno_p): New member functions.
(gimple_call_fnspec): Declare.
(builtin_fnspec): Declare.
* builtins.c: Include attr-fnspec.h.
(builtin_fnspec): New function.
* builtins.def (BUILT_IN_MEMCPY): Do not specify RET1 fnspec.
(BUILT_IN_MEMMOVE): Do not specify RET1 fnspec.
(BUILT_IN_MEMSET): Do not specify RET1 fnspec.
(BUILT_IN_STRCAT): Do not specify RET1 fnspec.
(BUILT_IN_STRCPY): Do not specify RET1 fnspec.
(BUILT_IN_STRNCAT): Do not specify RET1 fnspec.
(BUILT_IN_STRNCPY): Do not specify RET1 fnspec.
(BUILT_IN_MEMCPY_CHK): Do not specify RET1 fnspec.
(BUILT_IN_MEMMOVE_CHK): Do not specify RET1 fnspec.
(BUILT_IN_MEMSET_CHK): Do not specify RET1 fnspec.
(BUILT_IN_STRCAT_CHK): Do not specify RET1 fnspec.
(BUILT_IN_STRCPY_CHK): Do not specify RET1 fnspec.
(BUILT_IN_STRNCAT_CHK): Do not specify RET1 fnspec.
(BUILT_IN_STRNCPY_CHK): Do not specify RET1 fnspec.
* gimple.c (gimple_call_fnspec): Return attr_fnspec.
(gimple_call_arg_flags): Update.
(gimple_call_return_flags): Update.
* tree-ssa-alias.c (check_fnspec): New function.
(ref_maybe_used_by_call_p_1): Use fnspec for builtin handling.
(call_may_clobber_ref_p_1): Likewise.
(attr_fnspec::verify): Update verifier.
* calls.c (decl_fnspec): New function.
(decl_return_flags): Use it.
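To illustrate the shape of the new interface, here is a toy sketch of a
query wrapper like attr_fnspec over an encoded string; the character
encoding below is invented for illustration only (the real format is
described in the attr-fnspec.h toplevel comment).

#include <cassert>
#include <string>

class toy_fnspec
{
  std::string spec;  // one char per argument: 'r' read, 'w' written, '.' unknown
public:
  explicit toy_fnspec (const char *s) : spec (s) {}
  bool arg_read_p (unsigned i) const { return spec[i] == 'r'; }
  bool arg_written_p (unsigned i) const { return spec[i] == 'w'; }
};

int main ()
{
  // A memcpy-like function: argument 0 written, argument 1 read.
  toy_fnspec spec ("wr");
  assert (spec.arg_written_p (0) && spec.arg_read_p (1));
  return 0;
}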
The problem here is that we are trying to add 1 to a -1 in a signed
1-bit field and coming up with UNDEFINED because of the overflow.
Signed 1-bit fields are annoying because you can't really add or
subtract one, because +1 is unrepresentable. For invert() we have a
special subtract_one() function that handles 1-bit signed fields.
This patch implements the analogous add_one() function so that
invert() works.
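A minimal sketch of the idea (plain C++ modeling 1-bit two's complement,
not the actual wide_int code in value-range.cc): since the constant +1
does not exist at 1-bit precision, add_one is expressed as subtracting
negative one.

#include <cassert>

// Truncate to 1-bit two's complement: the only values are -1 and 0.
static int trunc1 (int v) { return (v & 1) ? -1 : 0; }

// Subtraction performed at 1-bit precision.
static int sub1 (int a, int b) { return trunc1 (a - b); }

// x + 1 without materializing the unrepresentable constant +1.
static int add_one (int x) { return sub1 (x, -1); }

int main ()
{
  assert (add_one (-1) == 0);   // -1 + 1 == 0, no longer UNDEFINED
  assert (add_one (0) == -1);   // 0 + 1 wraps to -1 at 1-bit precision
  return 0;
}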
gcc/ChangeLog:
PR tree-optimization/97555
* range-op.cc (range_tests): Test 1-bit signed invert.
* value-range.cc (subtract_one): Adjust comment.
(add_one): New.
(irange::invert): Call add_one.
gcc/testsuite/ChangeLog:
* gcc.dg/pr97555.c: New test.
This patch implements the two-state optimize_for_size predicates, so with -Os
and with profile feedback for never-executed code it returns OPTIMIZE_SIZE_MAX,
while in cases where we decide to optimize for size based on branch prediction
logic it returns OPTIMIZE_SIZE_BALANCED.
The idea is that for places where we guess that code is unlikely we do not
want to do extreme optimizations for size that lead to manyfold slowdowns
(such as using idiv rather than a few shifts, or using rep-based inlined
stringops).
I will update the RTL handling code to also support this with BB granularity
(which we currently don't). LLVM has -Os and -Oz levels, where -Oz is our -Os
and LLVM's -Os would correspond to OPTIMIZE_SIZE_BALANCED. I wonder if we want
to export this on the command line somehow? For me it would definitely be
useful for testing things; I am not sure how much a "weaker" -Os is desired
in practice.
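A sketch of the new enum as I read the description above (the name and
values come from the ChangeLog below; the consumer function is
hypothetical):

// Ordered so that a plain boolean test still works: nonzero means
// "optimize for size" at some strength.
enum optimize_size_level
{
  OPTIMIZE_SIZE_NO,        // not optimizing for size
  OPTIMIZE_SIZE_BALANCED,  // guessed-cold code: skip extreme size tricks
  OPTIMIZE_SIZE_MAX        // -Os, or profile-proven never-executed code
};

// Hypothetical consumer: only choose the slow-but-small sequence when
// size matters maximally, not for merely guessed-cold code.
static bool use_idiv_instead_of_shifts (optimize_size_level level)
{
  return level == OPTIMIZE_SIZE_MAX;
}

int main ()
{
  return use_idiv_instead_of_shifts (OPTIMIZE_SIZE_BALANCED) ? 1 : 0;
}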
gcc/ChangeLog:
* cgraph.h (cgraph_node::optimize_for_size_p): Return
optimize_size_level.
(cgraph_node::optimize_for_size_p): Update.
* coretypes.h (enum optimize_size_level): New enum.
* predict.c (unlikely_executed_edge_p): Microoptimize.
(optimize_function_for_size_p): Return optimize_size_level.
(optimize_bb_for_size_p): Likewise.
(optimize_edge_for_size_p): Likewise.
(optimize_insn_for_size_p): Likewise.
(optimize_loop_nest_for_size_p): Likewise.
* predict.h (optimize_function_for_size_p): Update declaration.
(optimize_bb_for_size_p): Update declaration.
(optimize_edge_for_size_p): Update declaration.
(optimize_insn_for_size_p): Update declaration.
(optimize_loop_for_size_p): Update declaration.
(optimize_loop_nest_for_size_p): Update declaration.
This refactors the toplevel entry point for analyzing an SLP instance,
exposing a worker that analyzes from a vector of stmts and an SLP
instance kind.
2020-10-26 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (enum slp_instance_kind): New.
(vect_build_slp_instance): Split out from...
(vect_analyze_slp_instance): ... this.
This patch fixes the ICE in the PR by bailing out of find_bswap_or_nop
on poly_int sizes.
I don't think it is intended to handle them, and from my reading of the
code this is the most appropriate place to reject them, rather than in
the callers.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/
PR tree-optimization/97546
* gimple-ssa-store-merging.c (find_bswap_or_nop): Return NULL if
the type size is not an INTEGER_CST.
gcc/testsuite/
PR tree-optimization/97546
* gcc.target/aarch64/sve/acle/general/pr97546.c: New test.
This makes us always use a single-bit boolean type component type
for integer mode mask VECTOR_BOOLEAN_TYPE_P to match the RTL and target
representation. This avoids the need for magic translation and
the inconsistencies from the translation requirement now that
we expose temporaries of those types on the GIMPLE level.
2020-10-23 Richard Biener <rguenther@suse.de>
PR middle-end/97521
* expr.c (const_scalar_mask_from_tree): Remove.
(expand_expr_real_1): Always VIEW_CONVERT integer mode
vector constants to an integer type.
* tree.c (build_truth_vector_type_for_mode): Use a single-bit
boolean component type for non-vector-mode mask_mode.
* gcc.target/i386/pr97521.c: New testcase.
Expand strncmp to "repz cmpsb" only with -minline-all-stringops, since
"repz cmpsb" can be much slower than a strncmp function implemented with
vector instructions; see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=43052
gcc/
PR target/95458
* config/i386/i386-expand.c (ix86_expand_cmpstrn_or_cmpmem):
Return false for -mno-inline-all-stringops.
gcc/testsuite/
PR target/95458
* gcc.target/i386/pr95458-1.c: New test.
* gcc.target/i386/pr95458-2.c: Likewise.
We used to expand memcmp to "repz cmpsb" via cmpstrnsi. It was changed
by
commit 9b0f6f5e51
Author: Nick Clifton <nickc@redhat.com>
Date: Fri Aug 12 16:26:11 2011 +0000
builtins.c (expand_builtin_memcmp): Do not use cmpstrnsi pattern.
* builtins.c (expand_builtin_memcmp): Do not use cmpstrnsi
pattern.
* doc/md.texi (cmpstrn): Note that the comparison stops if both
fetched bytes are zero.
(cmpstr): Likewise.
(cmpmem): Note that the comparison does not stop if both of the
fetched bytes are zero.
Duplicate the cmpstrn pattern for cmpmem. The only difference is that
the length argument of cmpmem is guaranteed to be less than or equal to
the lengths of the two memory areas. Since "repz cmpsb" can be much
slower than a memcmp function implemented with vector instructions (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=43052),
expand cmpmem to "repz cmpsb" only for -minline-all-stringops.
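A small self-contained illustration (plain C library calls, not the i386
expander) of the semantic difference the md.texi notes describe: strncmp
stops at a NUL byte, memcmp compares all n bytes regardless.

#include <cstdio>
#include <cstring>

int main ()
{
  const char a[4] = { 'a', 0, 'x', 0 };
  const char b[4] = { 'a', 0, 'y', 0 };

  // strncmp sees the equal prefix "a\0" and stops: result is 0.
  std::printf ("strncmp: %d\n", std::strncmp (a, b, 4));

  // memcmp keeps comparing past the NUL and finds 'x' != 'y'.
  std::printf ("memcmp differs: %d\n", std::memcmp (a, b, 4) != 0);
  return 0;
}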
gcc/
PR target/95151
* config/i386/i386-expand.c (ix86_expand_cmpstrn_or_cmpmem): New
function.
* config/i386/i386-protos.h (ix86_expand_cmpstrn_or_cmpmem): New
prototype.
* config/i386/i386.md (cmpmemsi): New pattern.
gcc/testsuite/
PR target/95151
* gcc.target/i386/pr95151-1.c: New test.
* gcc.target/i386/pr95151-2.c: Likewise.
* gcc.target/i386/pr95151-3.c: Likewise.
* gcc.target/i386/pr95151-4.c: Likewise.
This avoids overflow in the allocation size computations in
sbitmap_vector_alloc when the result exceeds 2GB.
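A minimal sketch of the failure mode (hypothetical sizes, not the actual
sbitmap.c computation): a byte-quantity product wraps in a 32-bit
unsigned type once it exceeds UINT_MAX, while doing the same arithmetic
in size_t stays exact on LP64 targets.

#include <cstdio>
#include <cstddef>

int main ()
{
  unsigned int n_vecs = 70000, n_elms = 70000;  // hypothetical sizes

  // 70000 * 70000 = 4.9e9 does not fit in 32 bits: wraps around.
  unsigned int narrow = n_vecs * n_elms;

  // Promoting to size_t first keeps the byte quantity exact.
  std::size_t wide = (std::size_t) n_vecs * n_elms;

  std::printf ("narrow: %u\nwide:   %zu\n", narrow, wide);
  return 0;
}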
2020-10-26 Richard Biener <rguenther@suse.de>
* sbitmap.c (sbitmap_vector_alloc): Use size_t for byte
quantities to avoid overflow.
This makes sure to reset out-of-loop debug uses before vectorizer
loop peeling as we cannot make sure to retain the use-def dominance
relationship when there are no LC SSA nodes.
2020-10-26 Richard Biener <rguenther@suse.de>
PR tree-optimization/97539
* tree-vect-loop-manip.c (vect_do_peeling): Reset out-of-loop
debug uses before peeling.
* gcc.dg/pr97539.c: New testcase.
The default duplicate and insert methods of summaries produce an empty
summary that is not useful for anything and makes it easy to introduce
bugs.
This patch makes the default hooks abort, and summaries that do not
need duplication/insertion now disable the corresponding hooks. I also
implemented the missing insertion hook for ipa-sra, which forced me to
move the analysis out of the anonymous namespace.
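A minimal sketch of the design (hypothetical names; not the real
symbol-summary.h interface): the default hooks fail loudly, and a
summary that needs no duplication opts out explicitly instead of
inheriting a silent empty-summary default.

#include <cstdio>
#include <cstdlib>

struct node {};  // stand-in for cgraph_node

template <typename T>
struct function_summary_base
{
  virtual ~function_summary_base () {}

  // Fail instead of silently producing an empty summary.
  virtual void duplicate (node *, T *, T *)
  { std::fprintf (stderr, "duplication hook not implemented\n"); std::abort (); }
  virtual void insert (node *, T *)
  { std::fprintf (stderr, "insertion hook not implemented\n"); std::abort (); }

  // Summaries that never duplicate/insert disable the hooks up front.
  void disable_duplication_hook () { m_duplicate = false; }
  void disable_insertion_hook () { m_insert = false; }

  bool m_duplicate = true, m_insert = true;
};

struct my_summary : function_summary_base<int>
{
  my_summary () { disable_duplication_hook (); }  // never duplicated
  void insert (node *, int *data) override { *data = 42; }
};

int main ()
{
  my_summary s;
  node n;
  int d = 0;
  if (s.m_insert)
    s.insert (&n, &d);
  return d == 42 ? 0 : 1;
}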
2020-10-23 Jan Hubicka <hubicka@ucw.cz>
* cgraph.h (struct cgraph_node): Make ipa_transforms_to_apply vl_ptr.
* ipa-inline-analysis.c (initialize_growth_caches): Disable insertion
and duplication hooks.
* ipa-inline-transform.c (clone_inlined_nodes): Clear
ipa_transforms_to_apply.
(save_inline_function_body): Disable insertion hook for
ipa_saved_clone_sources.
* ipa-prop.c (ipcp_transformation_initialize): Disable insertion hook.
* ipa-prop.h (ipa_node_params_t): Disable insertion hook.
* ipa-reference.c (propagate): Disable insertion hook.
* ipa-sra.c (ipa_sra_summarize_function): Move out of anonymous
namespace.
(ipa_sra_function_summaries::insert): New virtual function.
* passes.c (execute_one_pass): Do not add transforms to inline clones.
* symbol-summary.h (function_summary_base): Make insert and duplicate
hooks fail instead of silently producing empty summaries; add a way to
disable duplication hooks.
(call_summary_base): Likewise.
* tree-nested.c (nested_function_info::get_create): Disable insertion
hooks.
(maybe_record_nested_function): Likewise.
libstdc++-v3/ChangeLog:
* include/bits/shared_ptr_base.h
(_Sp_counted_base::_M_add_ref_lock_nothrow()): Add noexcept to
definitions to match declaration.
(__shared_count(const __weak_count&, nothrow_t)): Add noexcept
to declaration to match definition.
gcc/ada/
* exp_aggr.adb (Build_Array_Aggr_Code): If the aggregate
includes an Others_Choice in an association that is an
Iterated_Component_Association, generate a proper loop for it.