[gcc]
2018-01-16 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
* config/rs6000/rs6000.c (rs6000_opt_vars): Add entry for
-mspeculate-indirect-jumps.
* config/rs6000/rs6000.md (*call_indirect_elfv2<mode>): Disable
for -mno-speculate-indirect-jumps.
(*call_indirect_elfv2<mode>_nospec): New define_insn.
(*call_value_indirect_elfv2<mode>): Disable for
-mno-speculate-indirect-jumps.
(*call_value_indirect_elfv2<mode>_nospec): New define_insn.
(indirect_jump): Emit different RTL for
-mno-speculate-indirect-jumps.
(*indirect_jump<mode>): Disable for
-mno-speculate-indirect-jumps.
(*indirect_jump<mode>_nospec): New define_insn.
(tablejump): Emit different RTL for
-mno-speculate-indirect-jumps.
(tablejumpsi): Disable for -mno-speculate-indirect-jumps.
(tablejumpsi_nospec): New define_expand.
(tablejumpdi): Disable for -mno-speculate-indirect-jumps.
(tablejumpdi_nospec): New define_expand.
(*tablejump<mode>_internal1): Disable for
-mno-speculate-indirect-jumps.
(*tablejump<mode>_internal1_nospec): New define_insn.
* config/rs6000/rs6000.opt (mspeculate-indirect-jumps): New
option.
[gcc/testsuite]
2018-01-16 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
* gcc.target/powerpc/safe-indirect-jump-1.c: New file.
* gcc.target/powerpc/safe-indirect-jump-2.c: New file.
* gcc.target/powerpc/safe-indirect-jump-3.c: New file.
* gcc.target/powerpc/safe-indirect-jump-4.c: New file.
* gcc.target/powerpc/safe-indirect-jump-5.c: New file.
* gcc.target/powerpc/safe-indirect-jump-6.c: New file.
From-SVN: r256753
PR libgomp/83590
* gimplify.c (gimplify_one_sizepos): For is_gimple_constant (expr)
return early, inline manually is_gimple_sizepos. Make sure if we
call gimplify_expr we don't end up with a gimple constant.
* tree.c (variably_modified_type_p): Don't return true for
is_gimple_constant (_t). Inline manually is_gimple_sizepos.
* gimplify.h (is_gimple_sizepos): Remove.
Co-Authored-By: Richard Biener <rguenther@suse.de>
From-SVN: r256748
vect_analyze_loop_operations was calling vectorizable_live_operation
for all live-out phis, which led to a bogus ncopies calculation in
the pure SLP case. I think v_a_l_o should only be passing phis
that are vectorised using normal loop vectorisation, since
vect_slp_analyze_node_operations handles the SLP side (and knows
the correct slp_index and slp_node arguments to pass in, via
vect_analyze_stmt).
With that fixed we hit an older bug that vectorizable_live_operation
didn't handle live-out SLP inductions. Fixed by using gimple_phi_result
rather than gimple_get_lhs for phis.
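As a rough illustration only (this is not the actual gcc.dg/vect/pr83857.c
testcase, and it is not guaranteed to trigger the bug), the kind of loop
involved looks like this: a pair of stores that can form an SLP group,
together with an induction variable that is still live after the loop.

    int a[1024];

    int
    f (int n)
    {
      int i;
      /* n is assumed even and at most 1024.  */
      for (i = 0; i < n; i += 2)
        {
          a[i] = 1;      /* the two stores can be handled as an SLP group */
          a[i + 1] = 2;
        }
      return i;          /* the induction variable is live out of the loop */
    }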
2018-01-16 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR tree-optimization/83857
* tree-vect-loop.c (vect_analyze_loop_operations): Don't call
vectorizable_live_operation for pure SLP statements.
(vectorizable_live_operation): Handle PHIs.
gcc/testsuite/
PR tree-optimization/83857
* gcc.dg/vect/pr83857.c: New test.
From-SVN: r256747
2018-01-16 Richard Biener <rguenther@suse.de>
PR tree-optimization/83867
* tree-vect-stmts.c (vect_transform_stmt): Precompute
nested_in_vect_loop_p since the scalar stmt may get invalidated.
* gcc.dg/vect/pr83867.c: New testcase.
From-SVN: r256746
PR c/83844
* stor-layout.c (handle_warn_if_not_align): Use byte_position and
multiple_of_p instead of unchecked tree_to_uhwi and UHWI check.
If off is not INTEGER_CST, issue a may not be aligned warning
rather than isn't aligned. Use isn%'t rather than isn't.
* fold-const.c (multiple_of_p) <case BIT_AND_EXPR>: Don't fall through
into MULT_EXPR.
<case MULT_EXPR>: Improve the case when bottom and one of the
MULT_EXPR operands are INTEGER_CSTs and bottom is multiple of that
operand, in that case check if the other operand is multiple of
bottom divided by the INTEGER_CST operand.
* gcc.dg/pr83844.c: New test.
From-SVN: r256745
The port-local FUNCTION_ARG_SIZE:
((((MODE) != BLKmode \
? (HOST_WIDE_INT) GET_MODE_SIZE (MODE) \
: int_size_in_bytes (TYPE)) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)
is used by code in pa.c and by ASM_DECLARE_FUNCTION_NAME in som.h.
Treating GET_MODE_SIZE as a constant is OK for the former but not
the latter, which is used in target-independent code. This caused
a build failure on hppa2.0w-hp-hpux11.11.
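For reference, a self-contained sketch of the rounding the macro performs
(a hypothetical fixed word size stands in for UNITS_PER_WORD; this is
illustrative only and is not the new pa_function_arg_size):

    #include <stdio.h>

    #define WORD_BYTES 8   /* hypothetical stand-in for UNITS_PER_WORD */

    /* Round a size in bytes up to whole words, as FUNCTION_ARG_SIZE does.  */
    static long
    arg_size_in_words (long bytes)
    {
      return (bytes + WORD_BYTES - 1) / WORD_BYTES;
    }

    int
    main (void)
    {
      printf ("%ld %ld %ld\n",
              arg_size_in_words (1),    /* 1 */
              arg_size_in_words (8),    /* 1 */
              arg_size_in_words (9));   /* 2 */
      return 0;
    }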
2018-01-16 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR target/83858
* config/pa/pa.h (FUNCTION_ARG_SIZE): Delete.
* config/pa/pa-protos.h (pa_function_arg_size): Declare.
* config/pa/som.h (ASM_DECLARE_FUNCTION_NAME): Use
pa_function_arg_size instead of FUNCTION_ARG_SIZE.
* config/pa/pa.c (pa_function_arg_advance): Likewise.
(pa_function_arg, pa_arg_partial_bytes): Likewise.
(pa_function_arg_size): New function.
From-SVN: r256744
We had:
tree t = fold_vec_perm (type, arg1, arg2,
vec_perm_indices (sel, 2, nelts));
where fold_vec_perm takes a const vec_perm_indices &. GCC 4.1 apparently
required a public copy constructor:
gcc/vec-perm-indices.h:85: error: 'vec_perm_indices::vec_perm_indices(const vec_perm_indices&)' is private
gcc/fold-const.c:11410: error: within this context
even though no copy should be made here. This patch tries to work
around that by constructing the vec_perm_indices separately.
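A self-contained sketch of the quirk, with hypothetical class names rather
than GCC's own vec_perm_indices: under the C++03 rules that GCC 4.1
implemented, binding a temporary to a const reference required an
accessible copy constructor even when no copy was actually made, and
passing a named object instead sidesteps the check.

    class indices
    {
      indices (const indices &);     /* private, like vec_perm_indices's */
    public:
      explicit indices (int) {}
    };

    static void
    use (const indices &)
    {
    }

    int
    main ()
    {
      /* use (indices (1));   GCC 4.1 rejects this: the copy ctor is private.  */
      indices i (1);        /* workaround: construct the object separately...  */
      use (i);              /* ...and pass it by name.  */
      return 0;
    }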
2018-01-16 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* fold-const.c (fold_ternary_loc): Construct the vec_perm_indices
in a separate statement.
From-SVN: r256740
In the testcase we were trying to group two gather loads, even though
that isn't supported. Fixed by explicitly disallowing grouping of
gathers and scatters.
This problem didn't show up on SVE because there we convert to
IFN_GATHER_LOAD/IFN_SCATTER_STORE pattern statements, which fail
the can_group_stmts_p check.
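For illustration only (hypothetical code, not the actual
gcc.dg/torture/pr83847.c testcase), two adjacent gather-style loads, i.e.
loads indexed through another array, which must not be treated as a single
interleaved group:

    int out[256];
    int data[65536];
    unsigned short idx[256];

    void
    f (void)
    {
      for (int i = 0; i < 128; i++)
        {
          out[2 * i] = data[idx[2 * i]];          /* gather load */
          out[2 * i + 1] = data[idx[2 * i + 1]];  /* adjacent gather load */
        }
    }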
2018-01-16 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* tree-vect-data-refs.c (vect_analyze_data_ref_accesses): Don't
allow gathers and scatters to be grouped.
gcc/testsuite/
* gcc.dg/torture/pr83847.c: New test.
From-SVN: r256730
PR rtl-optimization/83620
* params.def (max-sched-ready-insns): Bump minimum value to 1.
* gcc.dg/pr64935-2.c: Use --param=max-sched-ready-insns=1
instead of --param=max-sched-ready-insns=0.
* gcc.target/i386/pr83620.c: New test.
* gcc.dg/pr83620.c: New test.
From-SVN: r256729
PR c++/83817
* pt.c (tsubst_copy_and_build) <case CALL_EXPR>: If function
is AGGR_INIT_EXPR rather than CALL_EXPR, set AGGR_INIT_FROM_THUNK_P
instead of CALL_FROM_THUNK_P.
* g++.dg/cpp1y/pr83817.C: New test.
From-SVN: r256726
PR c++/83825
* name-lookup.c (member_vec_dedup): Return early if len is 0.
(resort_type_member_vec, set_class_bindings,
insert_late_enum_def_bindings): Use vec qsort method instead of
calling qsort directly.
* g++.dg/template/pr83825.C: New test.
From-SVN: r256725
gcc/cp/ChangeLog:
PR c++/83588
* class.c (find_flexarrays): Make a record of multiple flexible array
members.
gcc/testsuite/ChangeLog:
PR c++/83588
* g++.dg/ext/flexary28.C: New test.
From-SVN: r256721
PR middle-end/83837
* omp-expand.c (expand_omp_atomic_pipeline): Use loaded_val's
type rather than the type addr's type points to.
(expand_omp_atomic_mutex): Likewise.
(expand_omp_atomic): Likewise.
From-SVN: r256710
Reclaim the memory of escape analysis Nodes before kicking off
the backend, as they are not needed in get_backend.
Reviewed-on: https://go-review.googlesource.com/86243
From-SVN: r256707
Local variables captured by the deferred closure need to be live
until the function finishes, especially when the deferred
function runs. In Function::build, for a function that has a defer,
we wrap the function body in a try block. So the backend sees
the local variables only live in the try block, without knowing
that they are needed also in the finally block where we invoke
the deferred function. Fix this by creating top-level
declarations for non-escaping address-taken locals when there
is a defer.
An example of miscompilation without this CL:
func F(fn func()) {
    didPanic := true
    defer func() {
        println(didPanic)
    }()
    fn()
    didPanic = false
}
With escape analysis turned on, at optimization level -O1 or -O2,
the store "didPanic = false" is elided by the backend's
optimizer, presumably because it thinks "didPanic" is not live
after the store, so the store is useless.
Reviewed-on: https://go-review.googlesource.com/86241
From-SVN: r256706
In this wrong-code bug we combine a VSUB.I8 and a VABS.S8
into a VABD.S8 instruction. This combination is not valid
for integer operands because in the VABD instruction the semantics
are that the difference is computed in notionally infinite precision
and the absolute difference is computed on that, whereas for a
VSUB.I8 + VABS.S8 sequence the VSUB operation will perform any
wrapping that's needed for the 8-bit signed type before the VABS
gets its hands on it.
This leads to the wrong-code in the PR where the expected
sequence from the intrinsics:
VSUB + VABS of two vectors {-100, -100, -100...}, {100, 100, 100...}
gives a result of {56, 56, 56...} (-100 - 100 wrapped to signed 8 bits),
but GCC optimises it into a single
VABD of {-100, -100, -100...}, {100, 100, 100...},
which produces a result of {200, 200, 200...}.
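A standalone, single-lane illustration of the two computations (assuming
the usual wrapping behaviour when an out-of-range value is stored into
8 bits; this is not the pr83687 testcase):

    #include <stdint.h>
    #include <stdio.h>

    int
    main (void)
    {
      int8_t a = -100, b = 100;

      /* VSUB.I8 + VABS.S8: the subtraction wraps to 8 bits first.  */
      int8_t sub = (int8_t) (a - b);            /* -200 wraps to 56 */
      int8_t sub_abs = sub < 0 ? -sub : sub;    /* 56 */

      /* VABD.S8: the difference is taken in full precision and only
         its absolute value is narrowed to 8 bits.  */
      int wide = a - b;                                   /* -200 */
      uint8_t abd = (uint8_t) (wide < 0 ? -wide : wide);  /* 200 */

      printf ("%d %u\n", sub_abs, (unsigned) abd);        /* prints 56 200 */
      return 0;
    }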
The transformation is still valid for floating-point operands,
which is why it was added in the first place, I believe (r178817),
but this patch disables it for integer operands.
The HFmode variants, though, only exist for TARGET_NEON_FP16INST, so
this patch adds the appropriate guards to the new mode iterator.
Bootstrapped and tested on arm-none-linux-gnueabihf.
PR target/83687
* config/arm/iterators.md (VF): New mode iterator.
* config/arm/neon.md (neon_vabd<mode>_2): Use the above.
Remove integer-related logic from pattern.
(neon_vabd<mode>_3): Likewise.
* gcc.target/arm/neon-combine-sub-abs-into-vabd.c: Delete integer
tests.
* gcc.target/arm/pr83687.c: New test.
From-SVN: r256696
* include/std/optional (_Optional_payload): Fix the comment in
the class head and turn into a primary and one specialization.
(_Optional_payload::_M_engaged): Strike the NSDMI.
(_Optional_payload<_Tp, false>::operator=(const _Optional_payload&)):
New.
(_Optional_payload<_Tp, false>::operator=(_Optional_payload&&)):
Likewise.
(_Optional_payload<_Tp, false>::_M_get): Likewise.
(_Optional_payload<_Tp, false>::_M_reset): Likewise.
(_Optional_base_impl): Likewise.
(_Optional_base): Turn into a primary and three specializations.
(optional(nullopt)): Change the base init.
* testsuite/20_util/optional/assignment/8.cc: New.
* testsuite/20_util/optional/cons/trivial.cc: Likewise.
* testsuite/20_util/optional/cons/value_neg.cc: Adjust.
From-SVN: r256694