Commit Graph

164171 Commits

Martin Liska
f2db460264 Properly mark lambdas in GCOV (PR gcov-profile/86109).
2018-10-03  Martin Liska  <mliska@suse.cz>

	PR gcov-profile/86109
	* coverage.c (coverage_begin_function): Do not
	mark lambdas as artificial.
	* tree-core.h (struct GTY): Remove tm_clone_flag
	and introduce new lambda_function.
	* tree.h (DECL_LAMBDA_FUNCTION): New macro.
2018-10-03  Martin Liska  <mliska@suse.cz>

	PR gcov-profile/86109
	* parser.c (cp_parser_lambda_declarator_opt):
	Set DECL_LAMBDA_FUNCTION for lambdas.
2018-10-03  Martin Liska  <mliska@suse.cz>

	PR gcov-profile/86109
	* g++.dg/gcov/pr86109.C: New test.

From-SVN: r264806
2018-10-03 08:30:10 +00:00
François Dumont
784779d471 2018-10-03 François Dumont <fdumont@gcc.gnu.org>
* include/debug/map.h
	(map<>::emplace<>(_Args&&...)): Use C++11 direct initialization.
	(map<>::emplace_hint<>(const_iterator, _Args&&...)): Likewise.
	(map<>::insert(value_type&&)): Likewise.
	(map<>::insert<>(_Pair&&)): Likewise.
	(map<>::insert<>(const_iterator, _Pair&&)): Likewise.
	(map<>::try_emplace): Likewise.
	(map<>::insert_or_assign): Likewise.
	(map<>::insert(node_type&&)): Likewise.
	(map<>::insert(const_iterator, node_type&&)): Likewise.
	(map<>::erase(const_iterator)): Likewise.
	(map<>::erase(const_iterator, const_iterator)): Likewise.
	* include/debug/multimap.h
	(multimap<>::emplace<>(_Args&&...)): Use C++11 direct initialization.
	(multimap<>::emplace_hint<>(const_iterator, _Args&&...)): Likewise.
	(multimap<>::insert<>(_Pair&&)): Likewise.
	(multimap<>::insert<>(const_iterator, _Pair&&)): Likewise.
	(multimap<>::insert(node_type&&)): Likewise.
	(multimap<>::insert(const_iterator, node_type&&)): Likewise.
	(multimap<>::erase(const_iterator)): Likewise.
	(multimap<>::erase(const_iterator, const_iterator)): Likewise.
	* include/debug/set.h
	(set<>::emplace<>(_Args&&...)): Use C++11 direct initialization.
	(set<>::emplace_hint<>(const_iterator, _Args&&...)): Likewise.
	(set<>::insert(value_type&&)): Likewise.
	(set<>::insert<>(const_iterator, value_type&&)): Likewise.
	(set<>::insert(const_iterator, node_type&&)): Likewise.
	(set<>::erase(const_iterator)): Likewise.
	(set<>::erase(const_iterator, const_iterator)): Likewise.
	* include/debug/multiset.h
	(multiset<>::emplace<>(_Args&&...)): Use C++11 direct initialization.
	(multiset<>::emplace_hint<>(const_iterator, _Args&&...)): Likewise.
	(multiset<>::insert<>(value_type&&)): Likewise.
	(multiset<>::insert<>(const_iterator, value_type&&)): Likewise.
	(multiset<>::insert(node_type&&)): Likewise.
	(multiset<>::insert(const_iterator, node_type&&)): Likewise.
	(multiset<>::erase(const_iterator)): Likewise.
	(multiset<>::erase(const_iterator, const_iterator)): Likewise.

From-SVN: r264805
2018-10-03 05:50:01 +00:00
GCC Administrator
da76e70f62 Daily bump.
From-SVN: r264804
2018-10-03 00:16:59 +00:00
Gerald Pfeifer
ff504cc2b8 * io/close.c [!HAVE_UNLINK_OPEN_FILE]: Include <string.h>.
From-SVN: r264800
2018-10-02 20:02:03 +00:00
Aaron Sawdey
6bd2b8ec8d re PR target/87474 (ICE in extract_insn, at recog.c:2305)
2018-10-02  Aaron Sawdey  <acsawdey@linux.ibm.com>

	PR target/87474
	* config/rs6000/rs6000-string.c (expand_strn_compare): Check that both
	P8_VECTOR and VSX are enabled.

From-SVN: r264799
2018-10-02 12:31:53 -05:00
Ian Lance Taylor
d8ccfadbf2 internal/bytealg: support systems that don't have memmem
Reviewed-on: https://go-review.googlesource.com/138839

From-SVN: r264798
2018-10-02 16:45:51 +00:00
Andreas Krebbel
3c609d36a6 S/390: Support IBM z14 Model ZR1 with -march=native
This adds the CPU model number of the IBM z14 Model ZR1 machine to
-march=native.  The patch doesn't actually change anything, since we
default to z14 for unknown CPU model numbers anyway.  So this is just
for the sake of completeness.

2018-10-02  Andreas Krebbel  <krebbel@linux.ibm.com>

	* config/s390/driver-native.c (s390_host_detect_local_cpu): Add
	0x3907 as CPU model number.

From-SVN: r264797
2018-10-02 15:36:49 +00:00
Andreas Krebbel
e9e8efc982 S/390: Rename arch12 to z14
This is a mechanical change not impacting code generation.  With that
patch I try to hide the artificial CPU name arch12 which we had to use
before the announcement of the IBM z14 machine.  arch12 of course
stays a valid option to -march and -mtune.  So this is just about
making the code somewhat easier to read.

gcc/ChangeLog:

2018-10-02  Andreas Krebbel  <krebbel@linux.ibm.com>

	* common/config/s390/s390-common.c: Rename PF_ARCH12 to PF_Z14.
	* config/s390/s390.h (enum processor_flags): Rename PF_ARCH12 to
	PF_Z14.  Rename TARGET_CPU_ARCH12 to TARGET_CPU_Z14,
	TARGET_CPU_ARCH12_P to TARGET_CPU_Z14_P, TARGET_ARCH12 to
	TARGET_Z14, and TARGET_ARCH12_P to TARGET_Z14_P.
	* config/s390/s390.md: Likewise. Rename also the cpu attribute
	value from arch12 to z14.

From-SVN: r264796
2018-10-02 15:35:52 +00:00
Uros Bizjak
8dc5696fcd i386.md (fxam<mode>2_i387_with_temp): Remove.
* config/i386/i386.md (fxam<mode>2_i387_with_temp): Remove.
	(isinfxf2): Ditto.
	(isinf<mode>2): Ditto.

From-SVN: r264795
2018-10-02 17:27:07 +02:00
Uros Bizjak
34c77d0b2f i386.c (ix86_emit_i387_round): Extend op1 to XFmode before emitting fxam.
* config/i386/i386.c (ix86_emit_i387_round): Extend op1 to XFmode
	before emitting fxam.  Perform calculations in XFmode.

From-SVN: r264794
2018-10-02 17:12:30 +02:00
Ian Lance Taylor
4913fc07e0 net: don't fail test if splice fails because pipe2 is missing
This reportedly happens on CentOS 5.11.  The real code will work fine;
    this test is assuming that the unexported slice function will handle
    the splice, but if pipe2 does not work then it doesn't.  The relevant
    code in internal/poll/splice_linux.go says "Falling back to pipe is
    possible, but prior to 2.6.29 splice returns -EAGAIN instead of 0 when
    the connection is closed."
    
    Reviewed-on: https://go-review.googlesource.com/138838

From-SVN: r264793
2018-10-02 15:07:14 +00:00
Marc Glisse
0036218b10 ((X /[ex] A) +- B) * A --> X +- A * B
2018-10-02  Marc Glisse  <marc.glisse@inria.fr>

gcc/
	* match.pd (((X /[ex] A) +- B) * A): New transformation.

gcc/testsuite/
	* gcc.dg/tree-ssa/muldiv-1.c: New file.
	* gcc.dg/tree-ssa/muldiv-2.c: Likewise.

From-SVN: r264792
2018-10-02 15:02:13 +00:00
Marc Glisse
86920074bf vector<bool> _M_start and 0 offset
2018-10-02  Marc Glisse  <marc.glisse@inria.fr>

	PR libstdc++/87258
	* include/bits/stl_bvector.h (vector::begin(), vector::cbegin()):
	Rebuild _M_start with an explicit 0 offset.

From-SVN: r264791
2018-10-02 14:59:25 +00:00
Marc Glisse
057cf66ca3 No a*x+b*x factorization for signed vectors
2018-10-02  Marc Glisse  <marc.glisse@inria.fr>

	PR middle-end/87319
	* fold-const.c (fold_plusminus_mult_expr): Handle complex and vectors.
	* tree.c (signed_or_unsigned_type_for): Handle complex.

From-SVN: r264790
2018-10-02 14:55:39 +00:00
Segher Boessenkool
a1c3d798ef rs6000: Fix vec-init-6.c (PR87081)
For a while now we have used rldimi instead of rldicl/rldicr/or to
combine two words into one.


	PR target/87081
	* gcc.target/powerpc/vec-init-6.c: Fix expected asm.

From-SVN: r264789
2018-10-02 16:19:49 +02:00
Jeff Law
911ce7b502 * gimple-fold.c (get_range_strlen): Remove dead code.
From-SVN: r264788
2018-10-02 08:10:16 -06:00
Martin Sebor
6c4aa5f6bd builtins.c (unterminated_array): Add new arguments.
* builtins.c (unterminated_array): Add new arguments.
	If argument is not terminated, bubble up size and exact
	state to callers.
	(expand_builtin_strnlen): Detect, avoid expanding
	and diagnose unterminated arrays.
	(c_strlen): Fill in offset of start of unterminated strings.
	* builtins.h (unterminated_array): Update prototype.

	* gcc.dg/warn-strnlen-no-nul.c: New.

Co-Authored-By: Jeff Law <law@redhat.com>

From-SVN: r264787
2018-10-02 08:08:53 -06:00
Jonathan Wakely
469218a3f9 Avoid redundant runtime checks in std::visit
Calling std::get will check some static assertions and also do a runtime
check for a valid index before calling __detail::__variant::__get. The
std::visit function already handles the case where any variant has an
invalid index, so __get can be used directly in __visit_invoke.

	* include/std/variant (__gen_vtable_impl::__visit_invoke): Call __get
	directly instead of get, as caller ensures correct index is used.
	(holds_alternative, get, get_if): Remove redundant inline specifiers.
	(_VARIANT_RELATION_FUNCTION_TEMPLATE): Likewise.

From-SVN: r264786
2018-10-02 15:00:50 +01:00
Richard Biener
f512bf3ee9 sse.md (reduc_plus_scal_v4df): Avoid the use of haddv4df...
2018-10-02  Richard Biener  <rguenther@suse.de>

	* config/i386/sse.md (reduc_plus_scal_v4df): Avoid the use
	of haddv4df, first reduce to SSE width and exploit the fact
	that we only need element zero with the reduction result.
	(reduc_plus_scal_v2df): Likewise.

From-SVN: r264785
2018-10-02 13:06:54 +00:00
Joseph Myers
1c02928295 Use -fno-show-column in libstdc++ installed testing.
<https://gcc.gnu.org/ml/libstdc++/2016-08/msg00006.html> arranged for
libstdc++ tests to use -fno-show-column by default, but only for
build-tree testing.  This patch adds it to the options used for
installed testing as well.

Tested with installed testing for a cross to x86_64-linux-gnu, where
it fixes various test failures.

	* testsuite/lib/libstdc++.exp (libstdc++_init): Use
	-fno-show-column in default cxxflags.

From-SVN: r264784
2018-10-02 13:46:32 +01:00
Bernhard Reutner-Fischer
15b946f77d config: Remove unused define for os uClibc
__NO_STRING_INLINES was removed from uClibc around 2004 so has no
effect.

libstdc++-v3/ChangeLog:

2018-10-01  Bernhard Reutner-Fischer  <aldot@gcc.gnu.org>

        * config/os/uclibc/os_defines.h (__NO_STRING_INLINES): Delete.

From-SVN: r264783
2018-10-02 14:35:42 +02:00
Eric Botcazou
be099f3724 dojump.h (do_jump): Delete.
* dojump.h (do_jump): Delete.
	(do_jump_1): Likewise.
	(split_comparison): Move around.
	* dojump.c (do_jump): Make static.
	(do_jump_1): Likewise.
	(jumpifnot): Move around.
	(jumpifnot_1): Likewise.
	(jumpif): Likewise.
	(jumpif_1): Likewise.
	* expr.c (expand_expr_real_1): Call jumpif[not] instead of do_jump.

From-SVN: r264781
2018-10-02 10:55:33 +00:00
Eric Botcazou
5d11b4bf36 reorg.c (make_return_insns): Use emit_copy_of_insn_after for the insns in the delay slot and add_insn_after...
* reorg.c (make_return_insns): Use emit_copy_of_insn_after for the
	insns in the delay slot and add_insn_after for the jump insn.

From-SVN: r264780
2018-10-02 10:20:08 +00:00
Richard Biener
0edf3afe13 c-decl.c (warn_if_shadowing): Do not test DECL_FROM_INLINE.
2018-10-02  Richard Biener  <rguenther@suse.de>

	c/
	* c-decl.c (warn_if_shadowing): Do not test DECL_FROM_INLINE.

	cp/
	* name-lookup.c (check_local_shadow): Do not test DECL_FROM_INLINE.

From-SVN: r264779
2018-10-02 10:08:22 +00:00
Richard Biener
cd6ae11750 tree-inline.c (expand_call_inline): Use the location of the callee declaration for the inline-entry marker.
2018-10-02  Richard Biener  <rguenther@suse.de>

	* tree-inline.c (expand_call_inline): Use the location of
	the callee declaration for the inline-entry marker.
	* final.c (notice_source_line): Remove special-casing of
	NOTE_INSN_INLINE_ENTRY.

From-SVN: r264778
2018-10-02 10:07:29 +00:00
GCC Administrator
491ec3dfa9 Daily bump.
From-SVN: r264777
2018-10-02 00:16:36 +00:00
Ian Lance Taylor
b1d88684bb compiler: use the underlying type to build placeholder type for alias
When asking for a placeholder type of an alias type, build a
    placeholder for the underlying type, instead of treating the
    alias as a named type and calling get_backend. The latter may
    fail as we may not be ready to build a complete backend type. We
    have already used a unified backend type for alias type and its
    underlying type. Do the same for placeholders as well.
    
    Reviewed-on: https://go-review.googlesource.com/138635

From-SVN: r264773
2018-10-01 22:25:52 +00:00
Ian Lance Taylor
44ef03008c libgo: support x32 as GOARCH=amd64p32 GOOS=linux
This is enough to let libgo build when configured using
    --with-multilib-list=m64,m32,mx32.  I don't have an x32-enabled kernel
    so I haven't tested whether it executes correctly.
    
    For https://gcc.gnu.org/PR87470
    
    Reviewed-on: https://go-review.googlesource.com/138817

From-SVN: r264772
2018-10-01 20:17:11 +00:00
Ian Lance Taylor
1b28253347 runtime: add arm64 version of AES hash code
Rewrite the arm64 AES hashing code from gc assembler to C code using
    intrinsics.  The resulting code generates the same hash code for the
    same input as the gc code--that doesn't matter as such, but testing it
    ensures that the C code does something useful.
    
    Reviewed-on: https://go-review.googlesource.com/138535

From-SVN: r264771
2018-10-01 20:14:29 +00:00
Nathan Sidwell
df1346b423 [libiberty] Use pipe inside pex_run
https://gcc.gnu.org/ml/gcc-patches/2018-10/msg00039.html
	* configure.ac (checkfuncs): Add pipe2.
	* config.in, configure: Rebuilt.
	* pex-unix.c (pex_unix_exec_child): Communicate errors from child
	to parent with a pipe, when possible.

From-SVN: r264769
2018-10-01 18:46:51 +00:00
Joseph Myers
2649038cea * ru.po: Update.
From-SVN: r264766
2018-10-01 18:11:46 +01:00
Carl Love
84eea0f700 Update, forgot to put the PR number in the Change Log.
gcc/ChangeLog:

2018-10-01  Carl Love  <cel@us.ibm.com>

	PR 69431
	* config/rs6000/rs6000-builtin.def (__builtin_mffsl): New.
	(__builtin_mtfsb0): New.
	(__builtin_mtfsb1): New.
	(__builtin_set_fpscr_rn): New.
	(__builtin_set_fpscr_drn): New.
	* config/rs6000/rs6000.c (rs6000_expand_mtfsb_builtin): Add.
	(rs6000_expand_set_fpscr_rn_builtin): Add.
	(rs6000_expand_set_fpscr_drn_builtin): Add.
	(rs6000_expand_builtin): Add case statement entries for
	RS6000_BUILTIN_MTFSB0, RS6000_BUILTIN_MTFSB1,
	RS6000_BUILTIN_SET_FPSCR_RN, RS6000_BUILTIN_SET_FPSCR_DRN,
	RS6000_BUILTIN_MFFSL.
	(rs6000_init_builtins): Add ftype initialization and def_builtin
	calls for __builtin_mffsl, __builtin_mtfsb0, __builtin_mtfsb1,
	__builtin_set_fpscr_rn, __builtin_set_fpscr_drn.
	* config/rs6000.md (rs6000_mtfsb0, rs6000_mtfsb1, rs6000_mffscrn,
	rs6000_mffscdrn): Add define_insn.
	(rs6000_set_fpscr_rn, rs6000_set_fpscr_drn): Add define_expand.
	* doc/extend.texi: Add documentation for the builtins.

gcc/testsuite/ChangeLog:

2018-10-01  Carl Love  <cel@us.ibm.com>

	PR 69431
	* gcc.target/powerpc/test_mffsl-p9.c: New file.
	* gcc.target/powerpc/test_fpscr_rn_builtin.c: New file.
	* gcc.target/powerpc/test_fpscr_drn_builtin.c: New file.
	* gcc.target/powerpc/test_fpscr_rn_builtin_error.c: New file.
	* gcc.target/powerpc/test_fpscr_drn_builtin_error.c: New file.

From-SVN: r264764
2018-10-01 15:57:13 +00:00
Carl Love
2da14663d0 rs6000-builtin.def (__builtin_mffsl): New.
gcc/ChangeLog:

2018-10-01  Carl Love  <cel@us.ibm.com>

	* config/rs6000/rs6000-builtin.def (__builtin_mffsl): New.
	(__builtin_mtfsb0): New.
	(__builtin_mtfsb1): New.
	(__builtin_set_fpscr_rn): New.
	(__builtin_set_fpscr_drn): New.
	* config/rs6000/rs6000.c (rs6000_expand_mtfsb_builtin): Add.
	(rs6000_expand_set_fpscr_rn_builtin): Add.
	(rs6000_expand_set_fpscr_drn_builtin): Add.
	(rs6000_expand_builtin): Add case statement entries for
	RS6000_BUILTIN_MTFSB0, RS6000_BUILTIN_MTFSB1,
	RS6000_BUILTIN_SET_FPSCR_RN, RS6000_BUILTIN_SET_FPSCR_DRN,
	RS6000_BUILTIN_MFFSL.
	(rs6000_init_builtins): Add ftype initialization and def_builtin
	calls for __builtin_mffsl, __builtin_mtfsb0, __builtin_mtfsb1,
	__builtin_set_fpscr_rn, __builtin_set_fpscr_drn.
	* config/rs6000.md (rs6000_mtfsb0, rs6000_mtfsb1, rs6000_mffscrn,
	rs6000_mffscdrn): Add define_insn.
	(rs6000_set_fpscr_rn, rs6000_set_fpscr_drn): Add define_expand.
	* doc/extend.texi: Add documentation for the builtins.

gcc/testsuite/ChangeLog:

2018-10-01  Carl Love  <cel@us.ibm.com>

	* gcc.target/powerpc/test_mffsl-p9.c: New file.
	* gcc.target/powerpc/test_fpscr_rn_builtin.c: New file.
	* gcc.target/powerpc/test_fpscr_drn_builtin.c: New file.
	* gcc.target/powerpc/test_fpscr_rn_builtin_error.c: New file.
	* gcc.target/powerpc/test_fpscr_drn_builtin_error.c: New file.

From-SVN: r264762
2018-10-01 15:41:24 +00:00
Gerald Pfeifer
3553df866a allocator.xml: Adjust link to "Reconsidering Custom Memory Allocation".
* doc/xml/manual/allocator.xml: Adjust link to "Reconsidering
	Custom Memory Allocation".

From-SVN: r264761
2018-10-01 15:17:15 +00:00
Jonathan Wakely
8e4d333b0e Regenerate libstdc++ HTML pages
* doc/html/*: Regenerate.

From-SVN: r264760
2018-10-01 15:28:36 +01:00
Paul Thomas
b093d688da re PR fortran/65677 (Incomplete assignment on deferred-length character variable)
2018-10-01  Paul Thomas  <pault@gcc.gnu.org>

	PR fortran/65677
	* trans-expr.c (gfc_trans_assignment_1): Set the 'identical'
	flag in the call to gfc_check_dependency.


2018-10-01  Paul Thomas  <pault@gcc.gnu.org>

	PR fortran/65677
	* gfortran.dg/dependency_52.f90 : Expand the test to check both
	the call to adjustl and direct assignment of the substring.

From-SVN: r264759
2018-10-01 14:27:17 +00:00
Richard Biener
fd5c626c68 re PR tree-optimization/87465 (Loop removal regression)
2018-10-01  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/87465
	* tree-ssa-loop-ivcanon.c (tree_estimate_loop_size): Fix typo
	causing branch miscounts.

	* gcc.dg/tree-ssa/cunroll-15.c: New testcase.

From-SVN: r264758
2018-10-01 13:10:48 +00:00
Tamar Christina
329130cc40 Validate and set default parameters for stack-clash.
This patch defines the default parameters and validation for the AArch64
stack clash probing interval and guard sizes.  It cleans up the previous
implementation and ensures that invalid arguments are never present in
the pipeline for AArch64.  Previously they were only corrected once
cc1 initialized the back-end.

The default for AArch64 is 64 KB for both of these, and we only support 4 KB and
64 KB probes.  We also enforce that any values set for the two parameters must be
in sync.

If an invalid value is specified an error will be generated and compilation aborted.

gcc/

	* common/config/aarch64/aarch64-common.c (TARGET_OPTION_DEFAULT_PARAM,
	aarch64_option_default_param):	New.
	(params.h): Include.
	(TARGET_OPTION_VALIDATE_PARAM, aarch64_option_validate_param): New.
	* config/aarch64/aarch64.c (aarch64_override_options_internal): Simplify
	stack-clash protection validation code.

From-SVN: r264757
2018-10-01 13:09:29 +00:00
Tamar Christina
f622a56bcb Update options framework for parameters to properly handle and validate configure time params.
This patch changes it so that default parameters are validated during
initialization. This change is needed to ensure that parameters set by the
target-specific common initialization routines remain within the valid range.

gcc/

	* params.c (validate_param): New.
	(add_params): Use it.
	(set_param_value): Refactor param validation into validate_param.
	(diagnostic.h): Include.
	* diagnostic.h (diagnostic_ready_p): New.

From-SVN: r264756
2018-10-01 13:08:10 +00:00
Tamar Christina
03ced4ab9f Allow back-ends to be able to do custom validations on params.
This patch adds the ability for backends to add custom constraints to the param
values by defining a new hook, option_validate_param.

This hook is invoked on every set_param_value, which allows the back-end to
ensure that the parameters always stay within its desired range.

gcc/

	* params.c (set_param_value):
	Add index of parameter being validated.
	* common/common-target.def (option_validate_param): New.
	* common/common-targhooks.h (default_option_validate_param): New.
	* common/common-targhooks.c (default_option_validate_param): New.
	* doc/tm.texi.in (TARGET_OPTION_VALIDATE_PARAM): New.
	* doc/tm.texi: Regenerate.

From-SVN: r264755
2018-10-01 13:06:53 +00:00
Tamar Christina
c98f502f0a Cleanup the AArch64 testsuite when stack-clash is on.
This patch cleans up the testsuite when a run is done with stack clash
protection turned on.

Concretely this switches off -fstack-clash-protection for a couple of tests:

* assembler scan: some tests are quite fragile in that they check for exact
       assembly output, e.g. for an exact number of sub instructions.  These won't
       match now.
* vla: Some of the ubsan tests use negative array indices. Because the arrays
       weren't used before, the incorrect $sp wouldn't have been used. The correct
       value is restored on ret.  Now, however, we probe the $sp, which causes a
       segfault.
* params: When testing the parameters we have to skip these on AArch64 because of our
          custom constraints on them.  We already test them separately so this isn't a
          loss.

Note that the testsuite is not entirely clean due to a gdb failure caused by alloca
with stack clash.  On AArch64 we output an incorrect .loc directive, but this is
already the case with the current implementation in GCC and is a bug unrelated to
this patch series.

gcc/testsuite/

	PR target/86486
	* gcc.dg/pr82788.c: Skip for AArch64.
	* gcc.dg/guality/vla-1.c: Turn off stack-clash.
	* gcc.target/aarch64/subsp.c: Likewise.
	* gcc.dg/params/blocksort-part.c: Skip stack-clash checks
	on AArch64.
	* gcc.dg/stack-check-10.c: Add AArch64 specific checks.
	* gcc.dg/stack-check-12.c: ILP32 fixup.
	* gcc.dg/stack-check-5.c: Add AArch64 specific checks.
	* gcc.dg/stack-check-6a.c: Skip on AArch64, we don't support this.
	* testsuite/lib/target-supports.exp
	(check_effective_target_frame_pointer_for_non_leaf): AArch64 does not
	require frame pointer for non-leaf functions.

From-SVN: r264754
2018-10-01 13:05:30 +00:00
Tamar Christina
fbe9af5042 Set default values for stack-clash and do basic validation in back-end.
This patch enforces that the default guard size for stack-clash protection on
AArch64 be 64KB unless the user has overridden it via configure, in which case
the user value is used as long as that value is within the valid range.

It also does some basic validation to ensure that the guard size is only 4KB or
64KB, and enforces that for AArch64 the stack-clash probing interval is
equal to the guard size.

gcc/

	PR target/86486
	* config/aarch64/aarch64.c (aarch64_override_options_internal):
	Add validation for stack-clash parameters and set defaults.

From-SVN: r264753
2018-10-01 13:03:31 +00:00
Tamar Christina
630b1e3a18 Allow setting of stack-clash via configure options.
This patch defines a configure option to allow the setting of the default
guard size via configure flags when building the target.

The new flag is:

 * --with-stack-clash-protection-guard-size=<num>

The patch defines a new macro, DEFAULT_STK_CLASH_GUARD_SIZE, which targets need
to use explicitly if they want to support this configure flag and the values that
users may have set.

gcc/

	PR target/86486
	* configure.ac: Add stack-clash-protection-guard-size.
	* doc/install.texi: Document it.
	* config.in (DEFAULT_STK_CLASH_GUARD_SIZE): New.
	* params.def: Update comment for guard-size.
	(PARAM_STACK_CLASH_PROTECTION_GUARD_SIZE,
	PARAM_STACK_CLASH_PROTECTION_PROBE_INTERVAL): Update description.
	* configure: Regenerate.

From-SVN: r264752
2018-10-01 13:02:21 +00:00
Tamar Christina
8c6e3b2355 Ensure that outgoing argument size is at least 8 bytes when alloca and stack-clash.
This patch adds a requirement that the outgoing arguments size of a
function be at least 8 bytes when using stack-clash protection and alloca.

By using this condition we can avoid a check in the alloca code and so have
smaller and simpler code there.

A simplified version of the AArch64 stack frames is:

   +-----------------------+                                              
   |                       |                                                 
   |                       |                                              
   |                       |                                              
   +-----------------------+                                              
   |LR                     |                                              
   +-----------------------+                                              
   |FP                     |                                              
   +-----------------------+                                              
   |dynamic allocations    | ----  expanding area which will push the outgoing
   +-----------------------+       args down during each allocation.
   |padding                |
   +-----------------------+
   |outgoing stack args    | ---- safety buffer of 8 bytes (aligned)
   +-----------------------+

By always defining an outgoing argument, alloca(0) effectively is safe to probe
at $sp due to the reserved buffer being there.  It will never corrupt the stack.

This is also safe for alloca(x) where x is 0 or x % page_size == 0.  In the
former it is the same case as alloca(0) while the latter is safe because any
allocation pushes the outgoing stack args down:

   |FP                     |                                              
   +-----------------------+                                              
   |                       |
   |dynamic allocations    | ----  alloca (x)
   |                       |
   +-----------------------+
   |padding                |
   +-----------------------+
   |outgoing stack args    | ---- safety buffer of 8 bytes (aligned)
   +-----------------------+

Which means when you probe for the residual, if it's 0 you'll again just probe
in the outgoing stack args range, which we know is non-zero (at least 8 bytes).

gcc/

	PR target/86486
	* config/aarch64/aarch64.h (STACK_CLASH_MIN_BYTES_OUTGOING_ARGS,
	STACK_DYNAMIC_OFFSET): New.
	* config/aarch64/aarch64.c (aarch64_layout_frame):
	Update outgoing args size.
	(aarch64_stack_clash_protection_alloca_probe_range,
	TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE): New.

gcc/testsuite/

	PR target/86486
	* gcc.target/aarch64/stack-check-alloca-1.c: New.
	* gcc.target/aarch64/stack-check-alloca-10.c: New.
	* gcc.target/aarch64/stack-check-alloca-2.c: New.
	* gcc.target/aarch64/stack-check-alloca-3.c: New.
	* gcc.target/aarch64/stack-check-alloca-4.c: New.
	* gcc.target/aarch64/stack-check-alloca-5.c: New.
	* gcc.target/aarch64/stack-check-alloca-6.c: New.
	* gcc.target/aarch64/stack-check-alloca-7.c: New.
	* gcc.target/aarch64/stack-check-alloca-8.c: New.
	* gcc.target/aarch64/stack-check-alloca-9.c: New.
	* gcc.target/aarch64/stack-check-alloca.h: New.
	* gcc.target/aarch64/stack-check-14.c: New.
	* gcc.target/aarch64/stack-check-15.c: New.

From-SVN: r264751
2018-10-01 13:00:58 +00:00
Tamar Christina
2c25083e75 Add a hook to support telling the mid-end when to probe the stack.
This patch adds a hook to tell the mid-end about the probing requirements of the
target.  On AArch64 we allow a specific range for which no probing needs to
be done.  This same range is also the amount that will have to be probed up when
a probe is needed after dropping the stack.

Defining this probe comes with the extra requirement that the outgoing arguments
size of any function that uses alloca and stack clash be at the very least 8
bytes.  With this invariant we can skip doing the zero checks for alloca and
save some code.

A simplified version of the AArch64 stack frame is:

   +-----------------------+                                              
   |                       |                                                 
   |                       |                                              
   |                       |                                              
   +-----------------------+                                              
   |LR                     |                                              
   +-----------------------+                                              
   |FP                     |                                              
   +-----------------------+                                              
   |dynamic allocations    | -\      probe range hook effects these       
   +-----------------------+   --\   and ensures that outgoing stack      
   |padding                |      -- args is always > 8 when alloca.      
   +-----------------------+  ---/   Which means it's always safe to probe
   |outgoing stack args    |-/       at SP                                
   +-----------------------+                                              
                                                                                                           

This allows us to generate better code than without the hook without affecting
other targets.

With this patch I am also removing the stack_clash_protection_final_dynamic_probe
hook which was added specifically for AArch64 but that is no longer needed.

gcc/

	PR target/86486
	* explow.c (anti_adjust_stack_and_probe_stack_clash): Support custom
	probe ranges.
	* target.def (stack_clash_protection_alloca_probe_range): New.
	(stack_clash_protection_final_dynamic_probe): Remove.
	* targhooks.h (default_stack_clash_protection_alloca_probe_range) New.
	(default_stack_clash_protection_final_dynamic_probe): Remove.
	* targhooks.c: Likewise.
	* doc/tm.texi.in (TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE): New.
	(TARGET_STACK_CLASH_PROTECTION_FINAL_DYNAMIC_PROBE): Remove.
	* doc/tm.texi: Regenerate.

From-SVN: r264750
2018-10-01 12:58:21 +00:00
Tamar Christina
eb471ba379 Add support for SVE stack clash probing.
This patch adds basic support for SVE stack clash protection.
It is a first implementation and will use a loop to do the
probing and stack adjustments.

An example sequence is:

        .cfi_startproc
        mov     x15, sp
        cntb    x16, all, mul #11
        add     x16, x16, 304
        .cfi_def_cfa_register 15
.SVLPSPL0:
        cmp     x16, 61440
        b.lt    .SVLPEND0
        sub     sp, sp, 61440
        str     xzr, [sp, 0]
        sub     x16, x16, 61440
        b      .SVLPSPL0
.SVLPEND0:
        sub     sp, sp, x16
        .cfi_escape 0xf,0xc,0x8f,0,0x92,0x2e,0,0x8,0x58,0x1e,0x23,0xb0,0x2,0x22

for a 64KB guard size, and for a 4KB guard size

        .cfi_startproc
        mov     x15, sp
        cntb    x16, all, mul #11
        add     x16, x16, 304
        .cfi_def_cfa_register 15
.SVLPSPL0:
        cmp     x16, 3072
        b.lt    .SVLPEND0
        sub     sp, sp, 3072
        str     xzr, [sp, 0]
        sub     x16, x16, 3072
        b       .SVLPSPL0
.SVLPEND0:
        sub     sp, sp, x16
        .cfi_escape 0xf,0xc,0x8f,0,0x92,0x2e,0,0x8,0x58,0x1e,0x23,0xb0,0x2,0x22

This has about the same semantics as alloca, except we prioritize the common case
where no probe is required.  We also change the amount we adjust the stack and
the probing interval to be the nearest value to `guard size - abi buffer` that
fits in the 12-bit shifted immediate used by cmp.

While this means we probe a bit more often than strictly required, in practice the
number of SVE vectors you'd need to spill is significant, and even more so to enter
the loop more than once.


gcc/

	PR target/86486
	* config/aarch64/aarch64-protos.h (aarch64_output_probe_sve_stack_clash): New.
	* config/aarch64/aarch64.c (aarch64_output_probe_sve_stack_clash,
	aarch64_clamp_to_uimm12_shift): New.
	(aarch64_allocate_and_probe_stack_space): Add SVE specific section.
	* config/aarch64/aarch64.md (probe_sve_stack_clash): New.

gcc/testsuite/

	PR target/86486
	* gcc.target/aarch64/stack-check-prologue-16.c: New test
	* gcc.target/aarch64/stack-check-cfa-3.c: New test.
	* gcc.target/aarch64/sve/struct_vect_24.c: New test.
	* gcc.target/aarch64/sve/struct_vect_24_run.c: New test.

From-SVN: r264749
2018-10-01 12:56:40 +00:00
Tamar Christina
db6b62a858 stack-clash: Add LR assert to layout_frame.
Since stack clash depends on the LR being saved for non-leaf functions, this
patch adds an assert so that we would notice if this ever changes.

gcc/
	PR target/86486
	* config/aarch64/aarch64.c (aarch64_layout_frame): Add assert.

From-SVN: r264748
2018-10-01 12:53:34 +00:00
Jeff Law
cd1bef27d2 Updated stack-clash implementation supporting 64k probes.
This patch implements stack clash mitigation for AArch64.
On AArch64 we expect both the probing interval and the guard size to be 64KB,
and we enforce that they are always equal.

We also probe up by 1024 bytes in the general case when a probe is required.

AArch64 has the following probing conditions:

 1a) Any initial adjustment less than 63KB requires no probing.  An ABI-defined
     safe buffer of 1KB is used and a page size of 64KB is assumed.

  b) Any final adjustment residual requires a probe at SP + 1KB.
     We know this to be safe since you would have done at least one page worth
     of allocations already to get to that point.

  c) Any final adjustment residual larger than 1KB - LR offset requires a
     probe at SP.


  The safe buffer mentioned in 1a is maintained by the storing of FP/LR.
  In the case of -fomit-frame-pointer we can still count on LR being stored
  if the function makes a call, even if it's a tail call.  The AArch64 frame
  layout code guarantees this, and tests have been added to check against
  this particular case.

 2) Any allocation larger than one page is done in increments of the page size
    and probed up by 1KB, leaving the residuals.

 3a) Any residual for initial adjustment that is less than guard-size - 1KB
     requires no probing.  Essentially this is a sliding window.  The probing
     range determines the ABI safe buffer, and the amount to be probed up.

Incrementally allocating less than the probing threshold (e.g. in recursive
functions) is not an issue, as the storing of LR counts as a probe.


                            +-------------------+                                    
                            |  ABI SAFE REGION  |                                    
                  +------------------------------                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
 maximum amount   |         |                   |                                    
 not needing a    |         |                   |                                    
 probe            |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |                                    
                  |         |                   |        Probe offset when           
                  |         ---------------------------- probe is required           
                  |         |                   |                                    
                  +-------- +-------------------+ --------  Point of first probe     
                            |  ABI SAFE REGION  |                                    
                            ---------------------                                    
                            |                   |                                    
                            |                   |                                    
                            |                   |                                         

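The rules above can be modeled as a simple predicate. This is an illustrative sketch only, assuming the 64KB guard and 1KB ABI safe buffer described in this commit; the names and constants are not from the GCC sources:

```python
GUARD = 64 * 1024   # assumed probing interval == guard size (64KB)
ABI_BUFFER = 1024   # ABI safe buffer maintained by the FP/LR store

def initial_adjustment_needs_probe(size):
    """Rules 1a/3a: an initial adjustment smaller than
    guard size - ABI buffer stays inside the sliding window
    covered by the FP/LR store and needs no probe."""
    return size >= GUARD - ABI_BUFFER

# anything under 63KB stays within the safe window
print(initial_adjustment_needs_probe(63 * 1024 - 8))  # False
print(initial_adjustment_needs_probe(64 * 1024))      # True
```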
Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
The target was tested with stack clash both on and off by default.

The GLIBC testsuite was also run with stack clash on by default, with no new
regressions.


Co-Authored-By: Richard Sandiford <richard.sandiford@linaro.org>
Co-Authored-By: Tamar Christina <tamar.christina@arm.com>

From-SVN: r264747
2018-10-01 12:49:35 +00:00
Tamar Christina
041bfa6f07 Fix caching of tests for multiple variant runs and update existing target-supports tests.
Currently some target-supports checks such as vect_int cache their results
in a manner that would cause them not to be rechecked when running the same
tests against a different variant in a multi-variant run.  This causes tests
to be skipped or run when they shouldn't be.

There is already an existing caching mechanism in place that does the caching
correctly, but presumably it wasn't used because some of these tests
originally contained only static data, e.g. they only checked whether the
target is aarch64*-*-* etc.

This patch changes every function that needs to do any caching at all to use
check_cached_effective_target which will cache per variant instead of globally.

For those tests that already parameterize over et_index I have created
check_cached_effective_target_indexed to handle this common case by creating a list
containing the property name and the current value of et_index.

These changes result in a much simpler implementation for most tests and a large
reduction in lines for target-supports.exp.

Regtested on
  aarch64-none-elf
  x86_64-pc-linux-gnu
  powerpc64-unknown-linux-gnu
  arm-none-eabi

and no testsuite errors.  The difference will depend on your site.exp.
On arm we get about 4500 new test cases and on aarch64 in the low tens.
On PowerPC and x86_64 there are no changes, as expected, since the default
exp files for these just test the default configuration.

What this means for new target checks is that they should always use either
check_cached_effective_target or check_cached_effective_target_indexed if the
result of the check is to be cached.

As an example the new vect_int looks like

proc check_effective_target_vect_int { } {
    return [check_cached_effective_target_indexed <name> {
        expr {
            <condition>
        }}]
}

The debug information that was once there is now all hidden in
check_cached_effective_target (called from
check_cached_effective_target_indexed), so the only thing you are required to
do is give it a unique cache name and a condition.

The condition doesn't need to be an if statement so simple boolean expressions are enough here:

         [istarget i?86-*-*] || [istarget x86_64-*-*]
         || ([istarget powerpc*-*-*]
	     && ![istarget powerpc-*-linux*paired*])
         || ...

From-SVN: r264745
2018-10-01 12:34:05 +00:00
MCC CS
03cc70b5f1 re PR tree-optimization/87261 (Optimize bool expressions)
2018-10-01  MCC CS <deswurstes@users.noreply.github.com>

	PR tree-optimization/87261
	* match.pd: Remove trailing whitespace.
	Add (x & y) | ~(x | y) -> ~(x ^ y),
	(~x | y) ^ (x ^ y) -> x | ~y and (x ^ y) | ~(x | y) -> ~(x & y)

	* gcc.dg/pr87261.c: New test.
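The three simplifications added to match.pd can be checked exhaustively over small bit patterns. A quick brute-force sketch in Python (masking to 8 bits, since Python integers are unbounded and ~ would otherwise produce negative values):

```python
M = 0xFF  # work in 8-bit arithmetic

for x in range(256):
    for y in range(256):
        # (x & y) | ~(x | y) -> ~(x ^ y)
        assert (x & y) | (~(x | y) & M) == ~(x ^ y) & M
        # (~x | y) ^ (x ^ y) -> x | ~y
        assert ((~x & M) | y) ^ (x ^ y) == x | (~y & M)
        # (x ^ y) | ~(x | y) -> ~(x & y)
        assert (x ^ y) | (~(x | y) & M) == ~(x & y) & M

print("all three identities hold for every 8-bit pair")
```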

From-SVN: r264744
2018-10-01 11:25:45 +00:00