* constexpr.c (cxx_eval_constant_expression) <case CLEANUP_STMT>:
Temporarily change input_location to CLEANUP_STMT location.
* g++.dg/cpp2a/constexpr-dtor3.C: Expect the "in 'constexpr' expansion of"
message on the line with the variable declaration.
* g++.dg/ext/constexpr-attr-cleanup1.C: Likewise.
From-SVN: r277320
PR tree-optimization/92131
* tree-vrp.c (extract_range_from_plus_minus_expr): If the resulting
range would be symbolic, drop to varying for any explicit overflow
in the constant part or if neither range is a singleton.
From-SVN: r277314
aarch64_emit_approx_sqrt handles both vectors and scalars and was using
mode_for_int_vector even for the scalar case. Although that happened
to work, it isn't how mode_for_int_vector is supposed to be used.
2019-10-23 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64.c (aarch64_emit_approx_sqrt): Use
int_mode_for_mode rather than mode_for_int_vector for scalars.
From-SVN: r277311
2019-10-23 Richard Biener <rguenther@suse.de>
PR tree-optimization/92179
* tree-vect-stmts.c (vectorizable_shift): For shift args
that are all the same, remove the type restriction in the SLP case.
Adjust the SLP code so that converting the shift arg only
applies when the modes are different.
From-SVN: r277310
2019-10-23 Martin Liska <mliska@suse.cz>
PR ipa/91969
* ipa-inline.c (recursive_inlining): Do not print
when curr->count is not initialized.
2019-10-23 Martin Liska <mliska@suse.cz>
PR ipa/91969
* g++.dg/ipa/pr91969.C: New test.
From-SVN: r277309
2019-10-23 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (vect_build_slp_tree_2): Do not build
op from scalars in case there's a constant operand in its
definition.
From-SVN: r277308
There are some cases in which the value for the max skip to a p2align
directive can be negative.  The older assembler just ignores these cases,
whereas newer tools produce an error.  To preserve behaviour, we avoid
emitting out-of-range values.
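As a rough illustration of the guarding idea (a sketch only; the real
macro body in darwin.h differs), the max-skip operand can simply be
dropped when it is out of range:

  #define ASM_OUTPUT_MAX_SKIP_ALIGN(FILE, LOG, MAX_SKIP)            \
    do {                                                            \
      /* Omit the max-skip operand when it is zero or negative.  */ \
      if ((LOG) > 0 && (MAX_SKIP) > 0)                              \
        fprintf ((FILE), "\t.p2align %d,,%d\n", (LOG), (MAX_SKIP)); \
      else if ((LOG) > 0)                                           \
        fprintf ((FILE), "\t.p2align %d\n", (LOG));                 \
    } while (0)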
gcc/ChangeLog:
2019-10-23 Iain Sandoe <iain@sandoe.co.uk>
* config/rs6000/darwin.h (ASM_OUTPUT_MAX_SKIP_ALIGN): Guard
against out of range max skip or log values.
From-SVN: r277307
My recent change to this file broke running the testsuite with
-std=c++98 because std::unordered_map isn't available. This fixes it.
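The compatibility idea is roughly the following (a sketch, not the exact
testsuite_abi.h code):

  #include <string>
  #if __cplusplus >= 201103L
  # include <unordered_map>
  typedef std::unordered_map<std::string, int> map_type;
  #else
  # include <tr1/unordered_map>
  typedef std::tr1::unordered_map<std::string, int> map_type;
  #endif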
* testsuite/util/testsuite_abi.h: Restore use of tr1/unordered_map
when compiled as C++98.
From-SVN: r277302
C++20 removes a number of std::allocator members that have correct
defaults provided by std::allocator_traits, and so are no longer needed.
Several extensions including __gnu_cxx::hash_map and tr1 containers are
no longer usable with std::allocator in C++20 mode. They need to be
updated to use __gnu_cxx::__alloc_traits in a follow-up patch.
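As a minimal sketch of the migration direction (illustrative only, not
code from this patch), generic code goes through the traits class rather
than calling the removed std::allocator members directly:

  #include <memory>

  template <typename T>
  void construct_one (std::allocator<T>& a, T* p, const T& value)
  {
    // Works in all modes, including C++20 where allocator<T>::construct
    // no longer exists; the traits default performs placement new.
    std::allocator_traits<std::allocator<T>>::construct (a, p, value);
  }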
* include/bits/alloc_traits.h
(allocator_traits<allocator<T>>::allocate): Ignore hint for C++20.
(allocator_traits<allocator<T>>::construct): Perform placement new
directly for C++20, instead of calling allocator<T>::construct.
(allocator_traits<allocator<T>>::destroy): Call destructor directly
for C++20, instead of calling allocator<T>::destroy.
(allocator_traits<allocator<T>>::max_size): Return value directly
for C++20, instead of calling std::allocator<T>::max_size().
(__do_alloc_on_copy, __do_alloc_on_move, __do_alloc_on_swap): Do not
define for C++17 and up.
(__alloc_on_copy, __alloc_on_move, __alloc_on_swap): Use if-constexpr
for C++17 and up, instead of tag dispatching.
* include/bits/allocator.h (allocator<void>): Remove for C++20.
(allocator::pointer, allocator::const_pointer, allocator::reference)
(allocator::const_reference, allocator::rebind): Remove for C++20.
* include/bits/basic_string.h (basic_string): Use __alloc_traits to
rebind allocator.
* include/bits/memoryfwd.h (allocator<void>): Remove for C++20.
* include/ext/debug_allocator.h: Use __alloc_traits for rebinding.
* include/ext/malloc_allocator.h (malloc_allocator::~malloc_allocator)
(malloc_allocator::pointer, malloc_allocator::const_pointer)
(malloc_allocator::reference, malloc_allocator::const_reference)
(malloc_allocator::rebind, malloc_allocator::max_size)
(malloc_allocator::construct, malloc_allocator::destroy): Do not
define for C++20.
(malloc_allocator::_M_max_size): Define new function.
* include/ext/new_allocator.h (new_allocator::~new_allocator)
(new_allocator::pointer, new_allocator::const_pointer)
(new_allocator::reference, new_allocator::const_reference)
(new_allocator::rebind, new_allocator::max_size)
(new_allocator::construct, new_allocator::destroy): Do not
define for C++20.
(new_allocator::_M_max_size): Define new function.
* include/ext/rc_string_base.h (__rc_string_base::_Rep): Use
__alloc_traits to rebind allocator.
* include/ext/rope (_Rope_rep_base, _Rope_base): Likewise.
(rope::rope(CharT, const allocator_type&)): Use __alloc_traits
to construct character.
* include/ext/slist (_Slist_base): Use __alloc_traits to rebind
allocator.
* include/ext/sso_string_base.h (__sso_string_base::_M_max_size):
Use __alloc_traits.
* include/ext/throw_allocator.h (throw_allocator): Do not use optional
members of std::allocator, use __alloc_traits members instead.
* include/ext/vstring.h (__versa_string): Use __alloc_traits.
* include/ext/vstring_util.h (__vstring_utility): Likewise.
* include/std/memory: Include <bits/alloc_traits.h>.
* testsuite/20_util/allocator/8230.cc: Use __gnu_test::max_size.
* testsuite/20_util/allocator/rebind_c++20.cc: New test.
* testsuite/20_util/allocator/requirements/typedefs.cc: Do not check
for pointer, const_pointer, reference, const_reference or rebind in
C++20.
* testsuite/20_util/allocator/requirements/typedefs_c++20.cc: New test.
* testsuite/23_containers/deque/capacity/29134.cc: Use
__gnu_test::max_size.
* testsuite/23_containers/forward_list/capacity/1.cc: Likewise.
* testsuite/23_containers/list/capacity/29134.cc: Likewise.
* testsuite/23_containers/map/capacity/29134.cc: Likewise.
* testsuite/23_containers/multimap/capacity/29134.cc: Likewise.
* testsuite/23_containers/multiset/capacity/29134.cc: Likewise.
* testsuite/23_containers/set/capacity/29134.cc: Likewise.
* testsuite/23_containers/vector/capacity/29134.cc: Likewise.
* testsuite/ext/malloc_allocator/variadic_construct.cc: Do not run
test for C++20.
* testsuite/ext/new_allocator/variadic_construct.cc: Likewise.
* testsuite/ext/vstring/capacity/29134.cc: Use __gnu_test::max_size.
* testsuite/util/replacement_memory_operators.h: Do not assume
Alloc::pointer exists.
* testsuite/util/testsuite_allocator.h (__gnu_test::max_size): Define
helper to call max_size for any allocator.
From-SVN: r277300
When using lto-dump -callgraph with two or more .o files containing distinct
functions with the same name, dump_graphviz incorrectly merged those functions
into a single node. This patch fixes the issue by calling `dump_name` instead
of `name`, thereby concatenating the function name with the node's id.
To understand the issue, say you have two files:
a.c: static void foo (void) { do_something (); }
b.c: static void foo (void) { do_something_else (); }
These are distinct functions and should be represented as distinct nodes in the
callgraph dump.
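With `name` both of the above collapse into a single "foo" node in the
graphviz output; with `dump_name` they appear as distinct nodes such as
"foo/1" and "foo/2", since the node's numeric id is appended (the exact
ids here are illustrative).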
2019-10-22 Giuliano Belinassi <giuliano.belinassi@usp.br>
* cgraph.c (dump_graphviz): Change name to dump_name.
From-SVN: r277299
2019-10-22 Steven G. Kargl <kargl@gcc.gnu.org>
PR fortran/92174
* decl.c (attr_decl1): Move check for F2018:C822 from here ...
* array.c (gfc_set_array_spec): ... to here.
From-SVN: r277297
2019-10-22 Marc Glisse <marc.glisse@inria.fr>
gcc/cp/
* constexpr.c (cxx_eval_builtin_function_call): Only set
force_folding_builtin_constant_p if manifestly_const_eval.
gcc/testsuite/
* g++.dg/pr85746.C: New file.
From-SVN: r277292
Glibc has recently introduced a change to the mode field in ipc_perm
in commit 2f959dfe849e0646e27403f2e4091536496ac0f0.  For Arm this
means that the mode field no longer has the same size.
This causes an assert failure against libsanitizer's internal copy
of ipc_perm.  Since this change can't be easily detected, I am adding
Arm to the list of targets that are excluded from this check.  libsanitizer
doesn't use this field (in fact it uses only one field of the struct), so
the check can safely be skipped.
Padding bits were used by glibc when the field was changed, so sizeof and
the offsets of the remaining fields remain the same.
libsanitizer/ChangeLog:
PR sanitizer/92154
* sanitizer_common/sanitizer_platform_limits_posix.cpp (defined):
Cherry-pick compiler-rt revision r375220.
From-SVN: r277291
On Arm we have both carry and borrow operations, but borrow is
essentially '~carry'. Of course, with boolean logic ~carry is also
1-carry.
GCC transforms
(1 - X - LTU (cc, 0))
into
(GEU (cc, 0) - X)
Now the former matches a real insn in Arm state, using the RSC
instruction with #1 as the immediate, but we currently do not
recognize the canonicalized form.  Nevertheless, given the above logic,
this turns out to be quite straightforward, as the original expression
matches arm_borrow_operation and the revised form can be used with
arm_carry_operation.  Since we now match this new pattern, we also
update rtx_costs to handle it.
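A minimal standalone sketch of the arithmetic identity being relied on
(plain C, not GCC code): GEU on the carry flag is the inverse of LTU,
i.e. geu = 1 - ltu, so both forms compute the same value.

  #include <assert.h>

  int
  main (void)
  {
    for (int x = -4; x <= 4; x++)
      for (int ltu = 0; ltu <= 1; ltu++)
        {
          int geu = 1 - ltu;                 /* GEU is the inverse of LTU.  */
          assert (1 - x - ltu == geu - x);   /* the two forms agree */
        }
    return 0;
  }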
* config/arm/arm.md (rsbsi_carryin_reg): New pattern.
* config/arm/arm.c (arm_rtx_costs_internal, case MINUS): Handle
subtraction from a carry operation.
From-SVN: r277290
arm_carry_operation and arm_borrow_operation are duals: given a comparison
whose result relies solely on the carry flag, one is the inverse of the
other.  So there's no reason for one to have a CC mode that the other does
not.  This patch restores that equivalence.
* config/arm/predicates.md (arm_borrow_operation): Handle CC_ADCmode.
From-SVN: r277289
2019-10-22 Richard Biener <rguenther@suse.de>
PR tree-optimization/92173
* tree-vect-loop.c (vectorizable_reduction): If
vect_transform_reduction cannot handle code-generation try without
the single-def-use-cycle optimization. Pass optab_vector to
optab_for_tree_code to get vector shifts as that's what we'd
generate.
* gcc.dg/torture/pr92173.c: New testcase.
From-SVN: r277288
Fix PR middle-end/90796
PR middle-end/90796
* gimple-loop-jam.c (any_access_function_variant_p): New function.
(adjust_unroll_factor): Use it to constrain safety, new parameter.
(tree_loop_unroll_and_jam): Adjust call and profitable unroll factor.
testsuite/
* gcc.dg/unroll-and-jam.c: Add three invalid and one valid case.
From-SVN: r277287
2019-10-22 Richard Biener <rguenther@suse.de>
PR tree-optimization/92173
* tree-vect-loop.c (vectorizable_reduction): If
vect_transform_reduction cannot handle code-generation try without
the single-def-use-cycle optimization. Pass optab_vector to
optab_for_tree_code to get vector shifts as that's what we'd
generate.
* gcc.dg/torture/pr92173.c: New testcase.
From-SVN: r277286
r277235 was a bit too mechanical and ended up introducing use-after-free
bugs in both loop and SLP vectorisation.
2019-10-22 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-slp.c (vect_slp_bb_region): Check whether
autodetected_vector_size rather than vector_size is zero.
* tree-vect-loop.c (vect_analyze_loop): Likewise.
Set autodetected_vector_size immediately after calling
vect_analyze_loop_2. Check for a fatal error before advancing
next_size.
From-SVN: r277282
This patch extends r276951 to work for C++ too.
2019-10-22 Richard Sandiford <richard.sandiford@arm.com>
gcc/cp/
* cp-tree.h (STF_USER_VISIBLE): New constant.
(strip_typedefs, strip_typedefs_expr): Take a flags argument.
* tree.c (strip_typedefs, strip_typedefs_expr): Likewise,
updating mutual calls accordingly. When STF_USER_VISIBLE is true,
only look through typedefs if user_facing_original_type_p.
* error.c (dump_template_bindings, type_to_string): Pass
STF_USER_VISIBLE to strip_typedefs.
(dump_type): Likewise, unless pp_c_flag_gnu_v3 is set.
gcc/testsuite/
* g++.dg/diagnostic/aka5.h: New test.
* g++.dg/diagnostic/aka5a.C: Likewise.
* g++.dg/diagnostic/aka5b.C: Likewise.
* g++.target/aarch64/diag_aka_1.C: Likewise.
From-SVN: r277281
To avoid the result of this test depending on the implementation of
the system 'string.h', provide prototypes for the two functions used
in the test.
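A hedged sketch of the kind of prototypes meant here (the exact
declarations live in the test itself):

  /* Freestanding prototypes so the test does not depend on <string.h>.  */
  extern __SIZE_TYPE__ strlen (const char *);
  extern void *memcpy (void *, const void *, __SIZE_TYPE__);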
gcc/testsuite/ChangeLog:
2019-10-22 Iain Sandoe <iain@sandoe.co.uk>
* gcc.dg/Wnonnull.c: Provide prototypes for strlen and memcpy.
Use __SIZE_TYPE__ instead of size_t.
From-SVN: r277280
* lock-and-run.sh: Check for process existence rather than timeout.
Matthias Klose noted that on less powerful targets a link might take more
than 5 minutes; he mentioned a figure of 3 hours for an LTO link.  So this
patch changes the timeout to a check for whether the locking process still
exists.  If the lock has been in an erroneous state (no pid file, or the
pid can't be signalled) for 30 seconds, steal it.
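The liveness check boils down to the classic "signal 0" probe; expressed
in C terms for illustration (the actual change is in the shell script):

  #include <sys/types.h>
  #include <signal.h>
  #include <errno.h>

  /* Return nonzero if PID still refers to a live process.  */
  int
  lock_holder_alive (pid_t pid)
  {
    if (kill (pid, 0) == 0)
      return 1;                /* process exists and we may signal it */
    return errno == EPERM;     /* exists, but owned by another user */
  }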
From-SVN: r277277
2019-10-21 Kamlesh Kumar <kamleshbhalui@gmail.com>
* rtti.c (get_tinfo_decl_dynamic): Do not call
TYPE_MAIN_VARIANT for function.
(get_typeid): Likewise.
* g++.dg/rtti/pr83534.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
Co-Authored-By: Jason Merrill <jason@redhat.com>
From-SVN: r277270
has_value_dependent_address wasn't stripping location wrappers, so it
gave the wrong answer for "&x" in the static_assert.  That led us to
think that the expression isn't instantiation-dependent, and we
skipped static initialization of A<0>::x.
This patch adds stripping so that has_value_dependent_address gives the
same answer as it did before the location wrappers addition.
* pt.c (has_value_dependent_address): Strip location wrappers.
* g++.dg/cpp0x/constexpr-odr1.C: New test.
* g++.dg/cpp0x/constexpr-odr2.C: New test.
From-SVN: r277266
My DImode arithmetic patches introduced a bug on thumb2 where we could
generate a register-controlled shift into an ALU operation.  In
fairness the bug was always present, but latent.
As part of cleaning this up (and auditing to ensure I've caught them
all this time) I've gone through all the shift-generating patterns in
the MD files and cleaned them up, reducing some duplicate patterns
between the arm and thumb2 descriptions where we can now share the
same pattern.  In some cases we were missing the shift attribute; in
most cases I've eliminated an ugly attribute setting by using the fact
that we normally need separate alternatives for shift-by-immediate and
shift-by-register, which simplifies the logic.
* config/arm/iterators.md (t2_binop0): Fix typo in comment.
* config/arm/arm.md (addsi3_carryin_shift): Simplify selection of the
type attribute.
(subsi3_carryin_shift): Separate into register and constant controlled
alternatives. Use shift_amount_operand for operand 4. Set shift
attribute and simplify type attribute.
(subsi3_carryin_shift_alt): Likewise.
(rsbsi3_carryin_shift): Likewise.
(rsbsi3_carryin_shift_alt): Likewise.
(andsi_not_shiftsi_si): Enable for TARGET_32BIT. Separate constant
and register controlled shifts into distinct alternatives.
(andsi_not_shiftsi_si_scc_no_reuse): Likewise.
(andsi_not_shiftsi_si_scc): Likewise.
(arm_cmpsi_negshiftsi_si): Likewise.
(not_shiftsi): Remove redundant M constraint from alternative 1.
(not_shiftsi_compare0): Likewise.
(arm_cmpsi_insn): Remove redundant alternative 2.
(cmpsi_shift_swp): Likewise.
(sub_shiftsi): Likewise.
(sub_shiftsi_compare0_scratch): Likewise.
* config/arm/thumb2.md (thumb_andsi_not_shiftsi_si): Delete pattern.
(thumb2_cmpsi_neg_shiftsi): Likewise.
From-SVN: r277262
Extend dg-extract-results.sh and dg-extract-results.py to support the
KPASS test result status. This is required by GDB which uses a copy
of the dg-extract-results.{sh,py} scripts that it tries to keep in
sync with GCC.
ChangeLog:
* contrib/dg-extract-results.sh: Add support for KPASS.
* contrib/dg-extract-results.py: Likewise.
From-SVN: r277260
2019-10-21 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (_slp_tree::ops): New member.
(SLP_TREE_SCALAR_OPS): New.
(vect_get_slp_defs): Adjust prototype.
* tree-vect-slp.c (vect_free_slp_tree): Release
SLP_TREE_SCALAR_OPS.
(vect_create_new_slp_node): Initialize it. New overload for
initializing by an operands array.
(_slp_oprnd_info::ops): New member.
(vect_create_oprnd_info): Initialize it.
(vect_free_oprnd_info): Release it.
(vect_get_and_check_slp_defs): Populate the operands array.
Do not swap operands in the IL when not necessary.
(vect_build_slp_tree_2): Build SLP nodes for invariant operands.
Record SLP_TREE_SCALAR_OPS for all invariant nodes. Also
swap operands in the operands array. Do not swap operands in
the IL.
(vect_slp_rearrange_stmts): Re-arrange SLP_TREE_SCALAR_OPS as well.
(vect_gather_slp_loads): Fix.
(vect_detect_hybrid_slp_stmts): Likewise.
(vect_slp_analyze_node_operations_1): Search for an internal
def child for computing reduction SLP_TREE_NUMBER_OF_VEC_STMTS.
(vect_slp_analyze_node_operations): Skip ops-only stmts for
the def-type push/pop dance.
(vect_get_constant_vectors): Compute number_of_vectors here.
Use SLP_TREE_SCALAR_OPS and simplify greatly.
(vect_get_slp_vect_defs): Use gimple_get_lhs also for PHIs.
(vect_get_slp_defs): Simplify greatly.
* tree-vect-loop.c (vectorize_fold_left_reduction): Simplify.
(vect_transform_reduction): Likewise.
* tree-vect-stmts.c (vect_get_vec_defs): Simplify.
(vectorizable_call): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison): Likewise.
From-SVN: r277241
This patch implements the recently published[1] __rndr and __rndrrs
intrinsics used to access the RNG in Armv8.5-A.
The __rndrrs intrinsic can also be used to reseed the generator.
They are guarded by the __ARM_FEATURE_RNG feature macro.
A quirk of these intrinsics is that they store the random number in
their pointer argument and return a status code indicating whether the
generation succeeded.
The instructions themselves set the CC flags to indicate whether the
operation succeeded, and we can then read that result with a CSET.
Therefore this implementation makes use of the IGNORE indicator to the
builtin expand machinery to avoid generating the CSET if its result is
unused (the CC reg clobbering effect is still reflected in the pattern).
I've checked that using unspec_volatile prevents undesirable CSEing of
the instructions.
[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
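A hedged usage sketch (not part of the patch): the value comes back
through the pointer argument and the return value is a status code, with
0 meaning success.  It assumes a target compiled with the RNG extension
enabled (__ARM_FEATURE_RNG defined).

  #include <arm_acle.h>
  #include <stdint.h>

  uint64_t
  get_random_or_zero (void)
  {
    uint64_t val;
    if (__rndr (&val) == 0)   /* 0 => a random value was generated */
      return val;
    return 0;                 /* generation failed */
  }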
* config/aarch64/aarch64.md (UNSPEC_RNDR, UNSPEC_RNDRRS): Define.
(aarch64_rndr): New define_insn.
(aarch64_rndrrs): Likewise.
* config/aarch64/aarch64.h (AARCH64_ISA_RNG): Define.
(TARGET_RNG): Likewise.
* config/aarch64/aarch64.c (aarch64_expand_builtin): Use IGNORE
argument.
* config/aarch64/aarch64-protos.h (aarch64_general_expand_builtin):
Add fourth argument in prototype.
* config/aarch64/aarch64-builtins.c (enum aarch64_builtins):
Add AARCH64_BUILTIN_RNG_RNDR, AARCH64_BUILTIN_RNG_RNDRRS.
(aarch64_init_rng_builtins): Define.
(aarch64_general_init_builtins): Call aarch64_init_rng_builtins.
(aarch64_expand_rng_builtin): Define.
(aarch64_general_expand_builtin): Use IGNORE argument, handle
RNG builtins.
* config/aarch64/aarch64-c.c (aarch64_update_cpp_builtins): Define
__ARM_FEATURE_RNG when TARGET_RNG.
* config/aarch64/arm_acle.h (__rndr, __rndrrs): Define.
* gcc.target/aarch64/acle/rng_1.c: New test.
From-SVN: r277239
This patch makes sure ensure_base_align only changes alignment if the new
alignment is more restrictive. It already did this if we were dealing with
symbols, but it now does it for all types of declarations.
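The rule it enforces, sketched in plain C for illustration (this is not
the GCC function itself):

  /* Only adopt the new alignment when it is stricter than the current one.  */
  static unsigned
  merge_alignment (unsigned current_align, unsigned new_align)
  {
    return new_align > current_align ? new_align : current_align;
  }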
gcc/ChangeLog:
2019-10-21 Andre Vieira <andre.simoesdiasvieira@arm.com>
* tree-vect-stmts.c (ensure_base_align): Only change alignment if the new
alignment is more restrictive.
From-SVN: r277238