2019-11-08 Andrew MacLeod <amacleod@redhat.com>
* range-op.h (range_operator::fold_range): Return result in a
reference parameter instead of by value.
(range_operator::wi_fold): Same.
* range-op.cc (range_operator::wi_fold): Return result in a reference
parameter instead of by value.
(range_operator::fold_range): Same.
(value_range_from_overflowed_bounds): Same.
(value_range_with_overflow): Same.
(create_possibly_reversed_range): Same.
(operator_equal::fold_range): Same.
(operator_not_equal::fold_range): Same.
(operator_lt::fold_range): Same.
(operator_le::fold_range): Same.
(operator_gt::fold_range): Same.
(operator_ge::fold_range): Same.
(operator_plus::wi_fold): Same.
(operator_plus::op1_range): Change call to fold_range.
(operator_plus::op2_range): Change call to fold_range.
(operator_minus::wi_fold): Return result via reference parameter.
(operator_minus::op1_range): Change call to fold_range.
(operator_minus::op2_range): Change call to fold_range.
(operator_min::wi_fold): Return result via reference parameter.
(operator_max::wi_fold): Same.
(cross_product_operator::wi_cross_product): Same.
(operator_mult::wi_fold): Same.
(operator_div::wi_fold): Same.
(operator_div op_floor_div): Fix whitespace.
(operator_exact_divide::op1_range): Change call to fold_range.
(operator_lshift::fold_range): Return result via reference parameter.
(operator_lshift::wi_fold): Same.
(operator_rshift::fold_range): Same.
(operator_rshift::wi_fold): Same.
(operator_cast::fold_range): Same.
(operator_cast::op1_range): Change calls to fold_range.
(operator_logical_and::fold_range): Return result via reference.
(wi_optimize_and_or): Adjust call to value_range_with_overflow.
(operator_bitwise_and::wi_fold): Return result via reference.
(operator_logical_or::fold_range): Same.
(operator_bitwise_or::wi_fold): Same.
(operator_bitwise_xor::wi_fold): Same.
(operator_trunc_mod::wi_fold): Same.
(operator_logical_not::fold_range): Same.
(operator_bitwise_not::fold_range): Same.
(operator_bitwise_not::op1_range): Change call to fold_range.
(operator_cst::fold_range): Return result via reference.
(operator_identity::fold_range): Same.
(operator_abs::wi_fold): Same.
(operator_absu::wi_fold): Same.
(operator_negate::fold_range): Same.
(operator_negate::op1_range): Change call to fold_range.
(operator_addr_expr::fold_range): Return result via reference.
(operator_addr_expr::op1_range): Change call to fold_range.
(operator_pointer_plus::wi_fold): Return result via reference.
(operator_pointer_min_max::wi_fold): Same.
(operator_pointer_and::wi_fold): Same.
(operator_pointer_or::wi_fold): Same.
(range_op_handler): Change call to fold_range.
(range_cast): Same.
* tree-vrp.c (range_fold_binary_symbolics_p): Change call to
fold_range.
(range_fold_unary_symbolics_p): Same.
(range_fold_binary_expr): Same.
(range_fold_unary_expr): Same.
From-SVN: r277979
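A minimal sketch of the by-reference result convention introduced by the
range-op entry above (the names and the range type are illustrative, not
GCC's actual range_operator signatures):

  #include <algorithm>

  struct range_model { long lo, hi; };

  // Old style: the folded range is built and returned by value.
  range_model fold_by_value (const range_model &a, const range_model &b)
  {
    return { std::min (a.lo, b.lo), std::max (a.hi, b.hi) };
  }

  // New style: the result is written into a caller-provided reference
  // parameter, so callers reuse an existing range object.
  void fold_by_reference (range_model &r, const range_model &a,
                          const range_model &b)
  {
    r.lo = std::min (a.lo, b.lo);
    r.hi = std::max (a.hi, b.hi);
  }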
With the new reduction vectype handling, neutral_op_for_slp_reduction
needs to know whether the caller is using STMT_VINFO_REDUC_VECTYPE
(for an epilogue value) or STMT_VINFO_VECTYPE (for a PHI argument).
This fixes various gcc.target/aarch64/sve/slp_* tests.
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop.c (neutral_op_for_slp_reduction): Take the
vector type as an argument rather than reading it from the
stmt_vec_info.
(vect_create_epilog_for_reduction): Update accordingly.
(vectorizable_reduction): Likewise.
(vect_transform_cycle_phi): Likewise.
From-SVN: r277977
* config/rs6000/predicates.md (branch_comparison_operator): Allow only
the comparison codes that make sense for the mode used, and only the
codes that can be done with a single branch instruction.
From-SVN: r277976
Allows character literals to be used to assign values to non-character
variables, in the same way that Hollerith constants are used. In addition,
character literals can be used in data statements just like Hollerith
constants. Warnings about such use are emitted to discourage it, as this is
a non-standard legacy feature that must be explicitly enabled.
Enabled by -fdec and -fdec-char-conversions.
Co-Authored-By: Jim MacArthur <jim.macarthur@codethink.co.uk>
From-SVN: r277975
gcc/ChangeLog:
2019-11-08 Andre Vieira <andre.simoesdiasvieira@arm.com>
PR tree-optimization/92351
* tree-vect-data-refs.c (vect_compute_data_ref_alignment): When we are
peeling the main loop for alignment, make sure to set the misalignment
of the epilogue's data references to DR_MISALIGNMENT_UNKNOWN.
gcc/testsuite/ChangeLog:
2019-11-08 Andre Vieira <andre.simoesdiasvieira@arm.com>
PR tree-optimization/92351
* gcc.dg/vect/vect-peel-2.c: Disable epilogue vectorization and
split the source of this test to...
* gcc.dg/vect/vect-peel-2-src.c: ... This.
* gcc.dg/vect/vect-peel-2-epilogues.c: New test.
From-SVN: r277974
2019-11-08 Richard Biener <rguenther@suse.de>
PR ipa/92409
* tree-inline.c (declare_return_variable): Properly handle
type mismatches for the return slot.
From-SVN: r277972
PR target/92095
* config/sparc/sparc-protos.h (output_load_pcrel_sym): Declare.
* config/sparc/sparc.c (sparc_cannot_force_const_mem): Revert latest
change.
(got_helper_needed): New static variable.
(output_load_pcrel_sym): New function.
(get_pc_thunk_name): Remove after inlining...
(load_got_register): ...here. Rework the initialization of the GOT
register and of the GOT helper.
(save_local_or_in_reg_p): Test the REGNO of the GOT register.
(sparc_file_end): Test got_helper_needed to decide whether the GOT
helper must be emitted. Use output_asm_insn instead of fprintf.
(sparc_init_pic_reg): In PIC mode, always initialize the PIC register
if optimization is enabled.
* config/sparc/sparc.md (load_pcrel_sym<P:mode>): Emit the assembly
by calling output_load_pcrel_sym.
From-SVN: r277966
If get_ref_base_and_extent returns poly_int offsets or sizes,
tree-sra.c:create_access prevents SRA from being applied to the base.
However, we haven't verified by that point that we have a valid base
to disqualify.
This originally led to an ICE on the attached testcase, but it
no longer triggers there after the introduction of IPA SRA.
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-sra.c (create_access): Delay disqualifying the base
for poly_int values until we know we have a base.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/inline_2.c: New test.
From-SVN: r277965
gcc/ChangeLog:
2019-11-08 Andre Vieira <andre.simoesdiasvieira@arm.com>
* tree-vect-loop.c (vect_analyze_loop): Disable epilogue vectorization
for loops with SIMDUID set. Enable epilogue vectorization for loops
with SIMDLEN set after finding a main loop with a VF that matches it.
From-SVN: r277964
PR target/92038
* gimple-ssa-store-merging.c (find_constituent_stores): For return
value only, return non-NULL if there is a single non-clobber
constituent store even if there are constituent clobbers and return
one of clobber constituent stores if all constituent stores are
clobbers.
(split_group): Handle clobbers.
(imm_store_chain_info::output_merged_store): When computing
bzero_first, look after all clobbers at the start. Don't count
clobber stmts in orig_num_stmts, except if the first orig store is
a clobber covering the whole area and split_stores cover the whole
area, consider equal number of stmts ok. Punt if split_stores
contains only ->orig stores and their number plus number of original
clobbers is equal to original number of stmts. For ->orig, look past
clobbers in the constituent stores.
(imm_store_chain_info::output_merged_stores): Don't remove clobber
stmts.
(rhs_valid_for_store_merging_p): Don't return false for clobber stmt
rhs.
(store_valid_for_store_merging_p): Allow clobber stmts.
(verify_clear_bit_region_be): Fix up a thinko in function comment.
* g++.dg/opt/store-merging-1.C: New test.
* g++.dg/opt/store-merging-2.C: New test.
* g++.dg/opt/store-merging-3.C: New test.
From-SVN: r277963
PR c++/92384
* function.c (assign_parm_setup_block, assign_parm_setup_stack): Don't
copy TYPE_EMPTY_P arguments from data->entry_parm to data->stack_parm
slot.
(assign_parms): For TREE_ADDRESSABLE parms with TYPE_EMPTY_P type
force creation of a unique data.stack_parm slot.
* g++.dg/torture/pr92384.C: New test.
From-SVN: r277962
2019-11-08 Richard Biener <rguenther@suse.de>
* genmatch.c (expr::gen_transform): Use the resimplify
member function instead of hard-coding the gimple_resimplifyN variant.
(dt_simplify::gen_1): Likewise.
From-SVN: r277961
2019-11-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/92324
* tree-vect-loop.c (vect_create_epilog_for_reduction): Use
STMT_VINFO_REDUC_VECTYPE for all computations, inserting
sign-conversions as necessary.
(vectorizable_reduction): Reject conversions in the chain
that are not sign-conversions, base analysis on a non-converting
stmt and its operation sign. Set STMT_VINFO_REDUC_VECTYPE.
* tree-vect-stmts.c (vect_stmt_relevant_p): Don't dump anything
for debug stmts.
* tree-vectorizer.h (_stmt_vec_info::reduc_vectype): New.
(STMT_VINFO_REDUC_VECTYPE): Likewise.
* gcc.dg/vect/pr92205.c: XFAIL.
* gcc.dg/vect/pr92324-1.c: New testcase.
* gcc.dg/vect/pr92324-2.c: Likewise.
From-SVN: r277958
SVE allows variable-length vectors to be returned by value,
which tripped the assert in declare_return_variable.
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-inline.c (declare_return_variable): Check for poly_int_tree_p
instead of INTEGER_CST.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/inline_1.c: New test.
From-SVN: r277956
2019-11-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/92324
* tree-vect-loop.c (vect_create_epilog_for_reduction): Use
STMT_VINFO_REDUC_VECTYPE for all computations, inserting
sign-conversions as necessary.
(vectorizable_reduction): Reject conversions in the chain
that are not sign-conversions, base analysis on a non-converting
stmt and its operation sign. Set STMT_VINFO_REDUC_VECTYPE.
* tree-vect-stmts.c (vect_stmt_relevant_p): Don't dump anything
for debug stmts.
* tree-vectorizer.h (_stmt_vec_info::reduc_vectype): New.
(STMT_VINFO_REDUC_VECTYPE): Likewise.
* gcc.dg/vect/pr92205.c: XFAIL.
* gcc.dg/vect/pr92324-1.c: New testcase.
* gcc.dg/vect/pr92324-2.c: Likewise.
From-SVN: r277955
PR target/92055
* config/avr/avr.opt (-mdouble=, -mlong-double=):
Fix a missing '-' when displaying these options in the
help screen.
From-SVN: r277954
aarch64_builtin_vectorized_function no longer needs to handle bswap*
since we have internal functions and optabs for all supported cases.
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-builtins.c
(aarch64_builtin_vectorized_function): Remove bswap handling.
From-SVN: r277951
The AArch64 port defines built-in SVE types at start-up under names
like __SVInt8_t. These types are represented in the front end and
gimple as normal VECTOR_TYPEs and are code-generated as normal vectors.
However, we'd like to stop the frontends from treating them in the
same way as GNU-style ("vector_size") vectors, for several reasons:
(1) We allowed the GNU vector extensions to be mixed with Advanced SIMD
vector types and it ended up causing a lot of confusion on big-endian
targets. Although SVE handles big-endian vectors differently from
Advanced SIMD, there are still potential surprises; see the block
comment near the head of aarch64-sve.md for details.
(2) One of the SVE vectors is a packed one-bit-per-element boolean vector.
That isn't a combination the GNU vector extensions have supported
before. E.g. it means that vectors can no longer decompose to
arrays for indexing, and that not all elements are individually
addressable. It also makes it less clear which order the initialiser
should be in (lsb first, or bitfield ordering?). We could define
all that of course, but it seems a bit weird to go to the effort
for this case when, given all the other reasons, we don't want the
extensions anyway.
(3) The GNU vector extensions only provide full-vector operations,
which is a very artificial limitation on a predicated architecture
like SVE.
(4) The set of operations provided by the GNU vector extensions is
relatively small, whereas the SVE intrinsics provide many more.
(5) It makes it easier to ensure that (with default options) code is
portable between compilers without the GNU vector extensions having
to become an official part of the SVE intrinsics spec.
(6) The length of the SVE types is usually not fixed at compile time,
whereas the GNU vector extension is geared around fixed-length
vectors.
It's possible to specify the length of an SVE vector using the
command-line option -msve-vector-bits=N, but in principle it should
be possible to have functions compiled for different N in the same
translation unit. This isn't supported yet but would be very useful
for implementing ifuncs. Once mixing lengths in a translation unit
is supported, the SVE types should represent the same type throughout
the translation unit, just as GNU vector types do.
However, when -msve-vector-bits=N is in effect, we do allow conversions
between explicit GNU vector types of N bits and the corresponding SVE
types. This doesn't undermine the intent of (5) because in this case
the use of GNU vector types is explicit and intentional. It also doesn't
undermine the intent of (6) because converting between the types is just
a conditionally-supported operation. In other words, the types still
represent the same types throughout the translation unit, it's just that
conversions between them are valid in cases where a certain precondition
is known to hold. It's similar to the way that the SVE vector types are
defined throughout the translation unit but can only be used in functions
for which SVE is enabled.
The patch adds a new flag to tree_type_common to select this behaviour.
(We currently have 17 bits free.) To avoid making the flag too specific
to vectors, I called it TYPE_INDIVISIBLE_P, to mean that the frontend
should not allow the components of the type to be accessed directly.
This could perhaps be useful in future for hiding the fact that a
type is an array, or for hiding the fields of a record or union.
The actual frontend changes are very simple, mostly just replacing
VECTOR_TYPE_P with gnu_vector_type_p in selected places.
One interesting case is:
      /* Need to convert condition operand into a vector mask.  */
      if (VECTOR_TYPE_P (TREE_TYPE (ifexp)))
        {
          tree vectype = TREE_TYPE (ifexp);
          tree elem_type = TREE_TYPE (vectype);
          tree zero = build_int_cst (elem_type, 0);
          tree zero_vec = build_vector_from_val (vectype, zero);
          tree cmp_type = build_same_sized_truth_vector_type (vectype);
          ifexp = build2 (NE_EXPR, cmp_type, ifexp, zero_vec);
        }
in build_conditional_expr. This appears to be trying to support
elementwise conditions like "vec1 ? vec2 : vec3", which is something
the C++ frontend supports. However, this code can never trigger AFAICT,
because "vec1" does not survive c_objc_common_truthvalue_conversion:
    case VECTOR_TYPE:
      error_at (location, "used vector type where scalar is required");
      return error_mark_node;
Even if it did, the operation should be a VEC_COND_EXPR rather
than a COND_EXPR.
I've therefore left that condition as-is, but added tests for the
"vec1 ? vec2 : vec3" case to make sure that we don't accidentally
allow it for SVE vectors in future.
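A self-contained model of the predicate this patch adds (hypothetical field
and type names; not GCC's actual tree accessors): a type participates in the
GNU vector extensions only if it is a vector and has not been marked
indivisible, which is how the SVE built-in types opt out.

  struct type_model
  {
    bool vector_p;       // models VECTOR_TYPE_P
    bool indivisible_p;  // models the new TYPE_INDIVISIBLE_P flag
  };

  bool gnu_vector_type_p (const type_model &type)
  {
    return type.vector_p && !type.indivisible_p;
  }

The front-end changes listed below then test this predicate instead of
VECTOR_TYPE_P wherever GNU-style element access or conversion is implied.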
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-core.h (tree_type_common::indivisible_p): New member variable.
* tree.h (TYPE_INDIVISIBLE_P): New macro.
* config/aarch64/aarch64-sve-builtins.cc (register_builtin_types):
Treat the vector types as indivisible.
gcc/c-family/
* c-common.h (gnu_vector_type_p): New function.
* c-common.c (c_build_vec_perm_expr): Require __builtin_shuffle
vectors to satisfy gnu_vector_type_p.
(c_build_vec_convert): Likewise __builtin_convertvector.
(convert_vector_to_array_for_subscript): Likewise when applying
implicit vector to array conversion.
(scalar_to_vector): Likewise when converting vector-scalar
operations to vector-vector operations.
gcc/c/
* c-convert.c (convert): Only handle vector conversions if one of
the types satisfies gnu_vector_type_p or if -flax-vector-conversions
allows it.
* c-typeck.c (build_array_ref): Only allow vector indexing if the
vectors satisfy gnu_vector_type_p.
(build_unary_op): Only allow unary operators to be applied to
vectors if they satisfy gnu_vector_type_p.
(digest_init): Only allow by-element initialization of vectors
if they satisfy gnu_vector_type_p.
(really_start_incremental_init): Likewise.
(push_init_level): Likewise.
(pop_init_level): Likewise.
(process_init_element): Likewise.
(build_binary_op): Only allow binary operators to be applied to
vectors if they satisfy gnu_vector_type_p.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/gnu_vectors_1.c: New test.
* gcc.target/aarch64/sve/acle/general-c/gnu_vectors_2.c: Likewise.
From-SVN: r277950
The gather and scatter optabs required the vector offset to be
the integer equivalent of the vector mode being loaded or stored.
This patch generalises them so that the two vectors can have different
element sizes, although they still need to have the same number of
elements.
One consequence of this is that it's possible (if unlikely)
for two IFN_GATHER_LOADs to have the same arguments but different
return types. E.g. the same scalar base and vector of 32-bit offsets
could be used to load 8-bit elements and to load 16-bit elements.
From just looking at the arguments, we could wrongly deduce that
they're equivalent.
I know we saw this happen at one point with IFN_WHILE_ULT,
and we dealt with it there by passing a zero of the return type
as an extra argument. Doing the same here also makes the load
and store functions have the same argument assignment.
For now this patch should be a no-op, but later SVE patches take
advantage of the new flexibility.
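An illustrative scalar loop (not taken from the patch) showing the kind of
gather the generalised optabs can now describe directly: 8-bit data elements
addressed through 32-bit offsets, i.e. two vectors with the same number of
elements but different element sizes.

  #include <cstddef>
  #include <cstdint>

  void gather_u8_by_s32 (uint8_t *dst, const uint8_t *base,
                         const int32_t *offsets, size_t n)
  {
    for (size_t i = 0; i < n; ++i)
      dst[i] = base[offsets[i]];  // offset vector is wider than the data
  }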
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* optabs.def (gather_load_optab, mask_gather_load_optab)
(scatter_store_optab, mask_scatter_store_optab): Turn into
conversion optabs, with the offset mode given explicitly.
* doc/md.texi: Update accordingly.
* config/aarch64/aarch64-sve-builtins-base.cc
(svld1_gather_impl::expand): Likewise.
(svst1_scatter_impl::expand): Likewise.
* internal-fn.c (gather_load_direct, scatter_store_direct): Likewise.
(expand_scatter_store_optab_fn): Likewise.
(direct_gather_load_optab_supported_p): Likewise.
(direct_scatter_store_optab_supported_p): Likewise.
(expand_gather_load_optab_fn): Likewise. Expect the mask argument
to be argument 4.
(internal_fn_mask_index): Return 4 for IFN_MASK_GATHER_LOAD.
(internal_gather_scatter_fn_supported_p): Replace the offset sign
argument with the offset vector type. Require the two vector
types to have the same number of elements but allow their element
sizes to be different. Treat the optabs as conversion optabs.
* internal-fn.h (internal_gather_scatter_fn_supported_p): Update
prototype accordingly.
* optabs-query.c (supports_at_least_one_mode_p): Replace with...
(supports_vec_convert_optab_p): ...this new function.
(supports_vec_gather_load_p): Update accordingly.
(supports_vec_scatter_store_p): Likewise.
* tree-vectorizer.h (vect_gather_scatter_fn_p): Take a vec_info.
Replace the offset sign and bits parameters with a scalar type tree.
* tree-vect-data-refs.c (vect_gather_scatter_fn_p): Likewise.
Pass back the offset vector type instead of the scalar element type.
Allow the offset to be wider than the memory elements. Search for
an offset type that the target supports, stopping once we've
reached the maximum of the element size and pointer size.
Update call to internal_gather_scatter_fn_supported_p.
(vect_check_gather_scatter): Update calls accordingly.
When testing a new scale before knowing the final offset type,
check whether the scale is supported for any signed or unsigned
offset type. Check whether the target supports the source and
target types of a conversion before deciding whether to look
through the conversion. Record the chosen offset_vectype.
* tree-vect-patterns.c (vect_get_gather_scatter_offset_type): Delete.
(vect_recog_gather_scatter_pattern): Get the scalar offset type
directly from the gs_info's offset_vectype instead. Pass a zero
of the result type to IFN_GATHER_LOAD and IFN_MASK_GATHER_LOAD.
* tree-vect-stmts.c (check_load_store_masking): Update call to
internal_gather_scatter_fn_supported_p, passing the offset vector
type recorded in the gs_info.
(vect_truncate_gather_scatter_offset): Update call to
vect_check_gather_scatter, leaving it to search for a valid
offset vector type.
(vect_use_strided_gather_scatters_p): Convert the offset to the
element type of the gs_info's offset_vectype.
(vect_get_gather_scatter_ops): Get the offset vector type directly
from the gs_info.
(vect_get_strided_load_store_ops): Likewise.
(vectorizable_load): Pass a zero of the result type to IFN_GATHER_LOAD
and IFN_MASK_GATHER_LOAD.
* config/aarch64/aarch64-sve.md (gather_load<mode>): Rename to...
(gather_load<mode><v_int_equiv>): ...this.
(mask_gather_load<mode>): Rename to...
(mask_gather_load<mode><v_int_equiv>): ...this.
(scatter_store<mode>): Rename to...
(scatter_store<mode><v_int_equiv>): ...this.
(mask_scatter_store<mode>): Rename to...
(mask_scatter_store<mode><v_int_equiv>): ...this.
From-SVN: r277949
To support full condition reduction vectorization, we have to define
vec_cmp* and vcond_mask_*.  This patch adds the related expanders.
It also adds the missing vector floating-point comparison RTL pattern
support for ungt, unge, unlt, unle, ne, lt and le.
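An illustrative condition-reduction loop (not from the patch) of the kind
that relies on vec_cmp* and vcond_mask_* when vectorized: the reduction
value is only updated on iterations where the comparison holds.

  int last_index_lt (const float *a, const float *b, int n)
  {
    int last = -1;
    for (int i = 0; i < n; ++i)
      if (a[i] < b[i])
        last = i;  // conditional update: a condition reduction
    return last;
  }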
gcc/ChangeLog
2019-11-08 Kewen Lin <linkw@gcc.gnu.org>
PR target/92132
* config/rs6000/predicates.md
(signed_or_equality_comparison_operator): New predicate.
(unsigned_or_equality_comparison_operator): Likewise.
* config/rs6000/rs6000.md (one_cmpl<mode>2): Remove expand.
(one_cmpl<mode>3_internal): Rename to one_cmpl<mode>2.
* config/rs6000/vector.md
(vcond_mask_<mode><mode> for VEC_I and VEC_I): New expand.
(vec_cmp<mode><mode> for VEC_I and VEC_I): Likewise.
(vec_cmpu<mode><mode> for VEC_I and VEC_I): Likewise.
(vcond_mask_<mode><VEC_int> for VEC_F): New expand for float
vector modes and same-size integer vector modes.
(vec_cmp<mode><VEC_int> for VEC_F): Likewise.
(vector_lt<mode> for VEC_F): New expand.
(vector_le<mode> for VEC_F): Likewise.
(vector_ne<mode> for VEC_F): Likewise.
(vector_unge<mode> for VEC_F): Likewise.
(vector_ungt<mode> for VEC_F): Likewise.
(vector_unle<mode> for VEC_F): Likewise.
(vector_unlt<mode> for VEC_F): Likewise.
(vector_uneq<mode>): Expose name.
(vector_ltgt<mode>): Likewise.
(vector_unordered<mode>): Likewise.
(vector_ordered<mode>): Likewise.
gcc/testsuite/ChangeLog
2019-11-08 Kewen Lin <linkw@gcc.gnu.org>
PR target/92132
* gcc.target/powerpc/pr92132-fp-1.c: New test.
* gcc.target/powerpc/pr92132-fp-2.c: New test.
* gcc.target/powerpc/pr92132-int-1.c: New test.
* gcc.target/powerpc/pr92132-int-2.c: New test.
From-SVN: r277947
C2x removes support for old-style function definitions with identifier
lists, changing () in function definitions to be equivalent to (void)
(while () in declarations that are not definitions still gives an
unprototyped type).
This patch updates GCC accordingly. The new semantics for () are
implemented for C2x mode (meaning () in function definitions isn't
diagnosed by -Wold-style-definition in that mode).
-Wold-style-definition is enabled by default, and turned into a
pedwarn, for C2x.
Bootstrapped with no regressions on x86_64-pc-linux-gnu.
gcc:
* doc/invoke.texi (-Wold-style-definition): Document () not being
considered an old-style definition for C2x.
gcc/c:
* c-decl.c (grokparms): Convert () in a function definition to
(void) for C2x.
(store_parm_decls_oldstyle): Pedwarn for C2x.
(store_parm_decls): Update comment about () not generating a
prototype.
gcc/c-family:
* c.opt (Wold-style-definition): Initialize to -1.
* c-opts.c (c_common_post_options): Set warn_old_style_definition
to flag_isoc2x if not set explicitly.
gcc/testsuite:
* gcc.dg/c11-old-style-definition-1.c,
gcc.dg/c11-old-style-definition-2.c,
gcc.dg/c2x-old-style-definition-1.c,
gcc.dg/c2x-old-style-definition-2.c,
gcc.dg/c2x-old-style-definition-3.c,
gcc.dg/c2x-old-style-definition-4.c,
gcc.dg/c2x-old-style-definition-5.c,
gcc.dg/c2x-old-style-definition-6.c: New tests.
From-SVN: r277945
Also process it with Doxygen.
* doc/doxygen/user.cfg.in (INPUT): Add <compare> header.
* include/precompiled/stdc++.h: Include <compare> header.
From-SVN: r277944
After the simplify-rtx patch, we can now be asked about conditions we
wouldn't be asked about before. This is perfectly fine, except we
have a little over-eager assert. Remove that one.
* config/rs6000/rs6000.c (validate_condition_mode): Don't assert for
valid conditions.
From-SVN: r277936
There is one place in the C parser that already handles a subset of
the C2x [[]] attribute syntax: c_parser_transaction_attributes.
This patch factors C2x attribute parsing out of there, extending it to
cover the full C2x attribute syntax (although currently only called
from that one place in the parser - so this is another piece of
preparation for supporting C2x attributes in the places where C2x says
they are valid, not the patch that actually enables such support).
The new C2x attribute parsing code uses the same representation for
scoped attributes as C++ does, which requires parse_tm_stmt_attr to
handle the scoped-attribute representation (C++ currently
special-cases TM attributes "to avoid the pedwarn in C++98 mode"; in C
I'm using an argument to c_parser_std_attribute_specifier to disable
the pedwarn_c11 call in the TM case).
Parsing of arguments to known attributes is shared by GNU and C2x
attributes. C2x specifies that unknown attributes are ignored (GCC
practice in such a case is to warn along with ignoring the attribute)
and gives a very general balanced-token-sequence syntax for arguments
to unknown attributes (known ones each have their own syntax which is
a subset of balanced-token-sequence), so support is added for parsing
and ignoring such balanced-token-sequences as arguments of unknown
attributes.
Some limited tests are added of different attribute usages in the TM
attribute case. The cases that become valid in the TM case include
extra commas inside [[]], and an explicit "gnu" namespace, as the
extra commas have no semantic effect for C2x attributes, while
accepting the "gnu" namespace seems appropriate because the attribute
in question is accepted inside __attribute__ (()), which is considered
equivalent to the "gnu" namespace inside [[]].
Bootstrapped with no regressions on x86_64-pc-linux-gnu.
gcc/c:
* c-parser.c (c_parser_attribute_arguments): New function.
Factored out of c_parser_gnu_attribute.
(c_parser_gnu_attribute): Use c_parser_attribute_arguments.
(c_parser_balanced_token_sequence, c_parser_std_attribute)
(c_parser_std_attribute_specifier): New functions.
(c_parser_transaction_attributes): Use
c_parser_std_attribute_specifier.
gcc/c-family:
* c-attribs.c (parse_tm_stmt_attr): Handle scoped attributes.
gcc/testsuite:
* gcc.dg/tm/attrs-1.c: New test.
* gcc.dg/tm/props-5.c: New test. Based on props-4.c.
From-SVN: r277935
The Library Working Group have approved a change to std::for_each_n that
requires it to handle negative N gracefully, which we were not doing for
random access iterators.
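A usage sketch of the behaviour described above, assuming "gracefully" means
the function is applied to no elements when the count is negative, even for
random access iterators:

  #include <algorithm>
  #include <vector>

  int main ()
  {
    std::vector<int> v{1, 2, 3};
    int calls = 0;
    std::for_each_n (v.begin (), -1, [&calls](int) { ++calls; });
    return calls;  // expected to remain 0
  }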
* include/bits/stl_algo.h (for_each_n): Handle negative count.
* testsuite/25_algorithms/for_each/for_each_n_debug.cc: New test.
From-SVN: r277932
This introduces simplify_logical_relational_operation. Currently the
only simplification it implements is the IOR of two conditions of the
same arguments.
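A toy model of the mask trick described above (the bit assignments are
illustrative, not simplify-rtx.c's actual encoding): each comparison of the
same two operands is encoded as the set of {<, =, >} outcomes it accepts,
the IOR of two conditions is the union of those sets, and the union is
mapped back to a single comparison when possible.

  enum cmp_mask
  {
    CMP_LT = 1, CMP_EQ = 2, CMP_GT = 4,
    CMP_LE = CMP_LT | CMP_EQ,
    CMP_GE = CMP_GT | CMP_EQ,
    CMP_NE = CMP_LT | CMP_GT,
  };

  // IOR of (x < y) and (x == y) yields the mask for (x <= y).
  static_assert ((CMP_LT | CMP_EQ) == CMP_LE, "LT | EQ simplifies to LE");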
* simplify-rtx.c (comparison_to_mask): New function.
(mask_to_comparison): New function.
(simplify_logical_relational_operation): New function.
(simplify_binary_operation_1): Call
simplify_logical_relational_operation.
From-SVN: r277931
This test uses -masm=intel, which isn't supported by Darwin. Add the
necessary dg-require-effective-target.
gcc/testsuite/ChangeLog:
2019-11-07 Iain Sandoe <iain@sandoe.co.uk>
* gcc.target/i386/pr92258.c: Add dg-requires for masm_intel.
From-SVN: r277930
PR c++/91370 - Implement P1041R4 and P1139R2 - Stronger Unicode reqs
* charset.c (narrow_str_to_charconst): Add TYPE argument. For
CPP_UTF8CHAR diagnose whenever number of chars is > 1, using
CPP_DL_ERROR instead of CPP_DL_WARNING.
(wide_str_to_charconst): For CPP_CHAR16 or CPP_CHAR32, use
CPP_DL_ERROR instead of CPP_DL_WARNING when multiple char16_t
or char32_t chars are needed.
(cpp_interpret_charconst): Adjust narrow_str_to_charconst caller.
* g++.dg/cpp1z/utf8-neg.C: Expect errors rather than -Wmultichar
warnings.
* g++.dg/ext/utf16-4.C: Expect errors rather than warnings.
* g++.dg/ext/utf32-4.C: Likewise.
* g++.dg/cpp2a/ucn2.C: New test.
From-SVN: r277929
* optc-save-gen.awk: Generate cl_target_option_free
and cl_optimization_option_free.
* opth-gen.awk: Declare cl_target_option_free
and cl_optimization_option_free.
* tree.c (free_node): Use it.
From-SVN: r277926
Shortly after I finished implementing the previous semantics, the
committee decided to remove the *_equality comparison categories, because
they were largely obsoleted by the earlier change that separated operator==
from its original dependency on operator<=>.
gcc/cp/
* method.c (enum comp_cat_tag, comp_cat_info): Remove *_equality.
(genericize_spaceship, common_comparison_type): Likewise.
* typeck.c (cp_build_binary_op): Move SPACESHIP_EXPR to be with the
relational operators, exclude other types no longer supported.
libstdc++-v3/
* libsupc++/compare: Remove strong_equality and weak_equality.
From-SVN: r277925