This is incomplete because std::strong_order doesn't support
floating-point types.
The partial_order and weak_order tests use VERIFY instead of
static_assert because of PR 92431.
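For reference, usage looks roughly like this (a minimal sketch, not one of
the new tests); the still-missing piece is calling std::strong_order on
floating-point arguments:

  #include <compare>

  int main()
  {
    int a = 1, b = 2;
    auto s = std::strong_order(a, b);    // std::strong_ordering::less
    double x = 1.0, y = 2.0;
    auto w = std::weak_order(x, y);      // std::weak_ordering::less
    auto p = std::partial_order(x, y);   // std::partial_ordering::less
    return (s < 0 && w < 0 && p < 0) ? 0 : 1;
  }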
* libsupc++/compare (strong_order, weak_order, partial_order)
(compare_strong_order_fallback, compare_weak_order_fallback)
(compare_partial_order_fallback): Define customization point objects
for C++20.
* testsuite/18_support/comparisons/algorithms/partial_order.cc: New
test.
* testsuite/18_support/comparisons/algorithms/strong_order.cc: New
test.
* testsuite/18_support/comparisons/algorithms/weak_order.cc: New test.
From-SVN: r278149
This is a complaint that we issue a [[nodiscard]] warning even in SFINAE
contexts. Here 'complain' is tf_decltype, but not tf_warning, so I guess
we can fix it as below.
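A minimal sketch of the situation (not the actual nodiscard7.C test): the
discarded call sits inside decltype, i.e. a SFINAE context, so no
[[nodiscard]] warning should be issued for it:

  [[nodiscard]] int f();

  template<typename T>
  auto g(T) -> decltype(f(), void())   // f() is only discarded inside decltype
  {
  }

  int main() { g(0); }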
* cvt.c (convert_to_void): Guard maybe_warn_nodiscard calls with
tf_warning.
* g++.dg/cpp1z/nodiscard7.C: New test.
From-SVN: r278147
C2x adds <float.h> constants FLT_NORM_MAX, DBL_NORM_MAX and
LDBL_NORM_MAX. These are for the maximum "normalized" finite
floating-point number, where the given definition of normalized is
that all possible values with MANT_DIG significand digits (leading
digit one, not zero) can be represented with that exponent. The effect
of that definition is that these macros are the same as the
corresponding MAX macros for all formats except IBM long double, where
the NORM_MAX value has an exponent 1 smaller than the MAX one so that
all 106 significand digits can be 1.
This patch adds those macros to GCC. They are only defined for float,
double and long double; C2x does not include such macros for DFP
types, and while the integration of TS 18661-3 into C2x has not yet
occurred, the draft proposed text does not add them for the _FloatN /
_FloatNx types (where they would always be the same as the MAX
macros).
Bootstrapped with no regressions on x86_64-pc-linux-gnu. Also tested
compilation of the new test for powerpc-linux-gnu to confirm the check
of LDBL_NORM_MAX in the IBM long double case does get properly
optimized out.
gcc:
* ginclude/float.h [__STDC_VERSION__ > 201710L] (FLT_NORM_MAX,
DBL_NORM_MAX, LDBL_NORM_MAX): Define.
* real.c (get_max_float): Add norm_max argument.
* real.h (get_max_float): Update prototype.
* builtins.c (fold_builtin_interclass_mathfn): Update calls to
get_max_float.
gcc/c-family:
* c-cppbuiltin.c (builtin_define_float_constants): Also define
NORM_MAX constants. Update call to get_max_float.
(LAZY_HEX_FP_VALUES_CNT): Update value to include NORM_MAX
constants.
gcc/d:
* d-target.cc (define_float_constants): Update call to
get_max_float.
gcc/testsuite:
* gcc.dg/c11-float-3.c, gcc.dg/c2x-float-1.c: New tests.
From-SVN: r278145
2019-11-13 Martin Liska <mliska@suse.cz>
* common.opt: Document change of -fdbg-cnt option.
* dbgcnt.c (DEBUG_COUNTER): Remove.
(dbg_cnt_is_enabled): Remove.
(dbg_cnt): Work with new intervals.
(dbg_cnt_set_limit_by_index): Set to new
list of intervals.
(dbg_cnt_set_limit_by_name): Likewise.
(dbg_cnt_process_single_pair): Process new format.
(dbg_cnt_process_opt): Likewise.
(dbg_cnt_list_all_counters): Likewise.
(cmp_tuples): New.
* doc/invoke.texi: Document change of -fdbg-cnt option.
2019-11-13 Martin Liska <mliska@suse.cz>
* gcc.dg/ipa/ipa-icf-39.c: Update -fdbg-cnt to the new format.
* gcc.dg/pr68766.c: Likewise.
From-SVN: r278140
2019-11-13 Andrew Stubbs <ams@codesourcery.com>
libgomp/
* config/gcn/team.c (gomp_gcn_enter_kernel): Set up the team arena
and use team_malloc variants.
(gomp_gcn_exit_kernel): Use team_free.
* libgomp.h (TEAM_ARENA_SIZE): Define.
(TEAM_ARENA_START): Define.
(TEAM_ARENA_FREE): Define.
(TEAM_ARENA_END): Define.
(team_malloc): New function.
(team_malloc_cleared): New function.
(team_free): New function.
* team.c (gomp_new_team): Initialize and use team_malloc.
(free_team): Use team_free.
(gomp_free_thread): Use team_free.
(gomp_pause_host): Use team_free.
* work.c (gomp_init_work_share): Use team_malloc.
(gomp_fini_work_share): Use team_free.
From-SVN: r278136
2019-11-13 Andrew Stubbs <ams@codesourcery.com>
Kwok Cheung Yeung <kcy@codesourcery.com>
Julian Brown <julian@codesourcery.com>
Tom de Vries <tom@codesourcery.com>
gcc/
* config/gcn/mkoffload.c: New file.
* config/gcn/offload.h: New file.
From-SVN: r278133
2019-11-13 Andrew Stubbs <ams@codesourcery.com>
gcc/
* config/gcn/gcn-run.c (heap_region): New global variable.
(struct hsa_runtime_fn_info): Add hsa_memory_assign_agent_fn.
(init_hsa_runtime_functions): Initialize hsa_memory_assign_agent.
(get_kernarg_region): Move contents to ...
(get_memory_region): ... here.
(get_heap_region): New function.
(init_device): Initialize the heap_region.
(device_malloc): Add region parameter.
(struct kernargs): Move heap ...
(heap): ... to global scope.
(main): Allocate heap separately from kernargs.
From-SVN: r278131
* c-ada-spec.c (get_underlying_decl): Do not look through typedefs.
(dump_forward_type): Do not generate a declaration for function types.
(dump_nested_type) <ARRAY_TYPE>: Do not generate a nested declaration
of the component type if it is declared in another file.
From-SVN: r278129
We didn't take the cost of generating loop masks into account, and so
tended to underestimate the cost of loops that need multiple masks.
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop.c (vect_estimate_min_profitable_iters): Include
the cost of generating loop masks.
gcc/testsuite/
* gcc.target/aarch64/sve/mask_struct_store_3.c: Add
-fno-vect-cost-model.
* gcc.target/aarch64/sve/mask_struct_store_3_run.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_2.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_2_run.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_3.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_3_run.c: Likewise.
From-SVN: r278125
vect_analyze_loop_costing uses two profitability thresholds: a runtime
one and a static compile-time one. The runtime one is simply the point
at which the vector loop is cheaper than the scalar loop, while the
static one also takes into account the cost of choosing between the
scalar and vector loops at runtime. We compare this static cost against
the expected execution frequency to decide whether it's worth generating
any vector code at all.
However, we never reclaimed the cost of applying the runtime threshold
if it turned out that the vector code can always be used. And we only
know whether that's true once we've calculated what the runtime
threshold would be.
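Roughly, the new predicate answers "is the runtime check needed at all?".
The sketch below only illustrates the idea; the struct and its fields are
hypothetical, not GCC's actual implementation:

  // Hypothetical loop summary, for illustration only.
  struct loop_info
  {
    bool niters_known_p;    // iteration count known at compile time?
    unsigned known_niters;  // that count, when known
    unsigned threshold;     // runtime profitability threshold
  };

  // The runtime check is only needed if the iteration count is unknown or
  // might fall below the threshold; otherwise the vector loop is always
  // used and the cost of the check need not be charged.
  bool
  apply_runtime_profitability_check_p (const loop_info &loop)
  {
    return !loop.niters_known_p || loop.known_niters < loop.threshold;
  }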
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_apply_runtime_profitability_check_p):
New function.
* tree-vect-loop-manip.c (vect_loop_versioning): Use it.
* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
(vect_transform_loop): Likewise.
(vect_analyze_loop_costing): Don't take the cost of versioning
into account for the static profitability threshold if it turns
out that no versioning is needed.
From-SVN: r278124
* ipa.c (cgraph_build_static_cdtor): Pass optimization_default_node
and target_option_default_node to get -fprofile-generate ctors working
right with LTO.
From-SVN: r278123
vectorizable_assignment handles true SSA-to-SSA copies (which hopefully
we don't see in practice) and no-op conversions that are required
to maintain correct gimple, such as changes between signed and
unsigned types. These cases shouldn't generate any code and so
shouldn't count against either the scalar or vector costs.
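For example (a sketch, not one of the later tests), the conversion below
only changes signedness, generates no code, and so should not be costed:

  void
  copy (unsigned int *dst, const int *src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = (unsigned int) src[i];   // sign change only: a nop conversion
  }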
Later patches test this, but it seemed worth splitting out.
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_nop_conversion_p): Declare.
* tree-vect-stmts.c (vect_nop_conversion_p): New function.
(vectorizable_assignment): Don't add a cost for nop conversions.
* tree-vect-loop.c (vect_compute_single_scalar_iteration_cost):
Likewise.
* tree-vect-slp.c (vect_bb_slp_scalar_cost): Likewise.
From-SVN: r278122
This patch makes two tweaks to vectorizable_conversion. The first
is to use "modifier" to distinguish between promotion, demotion,
and neither promotion nor demotion, rather than using a code for
some cases and "modifier" for others. The second is to take ncopies
into account for the promotion and demotion costs; previously we gave
multiple copies the same cost as a single copy.
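For example (a sketch, not one of the later tests), widening chars to ints
is a promotion for which the vectorizer typically needs multiple vector
copies, so the cost should scale with ncopies:

  void
  widen (int *dst, const signed char *src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[i];   // char -> int: a widening promotion
  }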
Later patches test this, but it seemed worth splitting out.
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-stmts.c (vect_model_promotion_demotion_cost): Take the
number of copies (ncopies) as an additional argument.
(vectorizable_conversion): Update call accordingly. Use "modifier"
to check whether a conversion is between vectors with the same
numbers of units.
From-SVN: r278121
This is a like-for-like change at the moment, but is a prerequisite
for removing mode_for_int_vector.
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-sve-builtins-functions.h
(unary_count::expand): Use aarch64_sve_int_mode instead of
mode_for_int_vector.
From-SVN: r278120
One of the changes in r277281 was to make the typedef variant
handling in strip_typedefs pass the raw DECL_ORIGINAL_TYPE to the
recursive call, instead of applying TYPE_MAIN_VARIANT first.
This PR shows that that interacts badly with the implementation
of DR1558, because we then refuse to strip aliases with dependent
template parameters and trip:
gcc_assert (!typedef_variant_p (result)
            || ((flags & STF_USER_VISIBLE)
                && !user_facing_original_type_p (result)));
Keeping the current behaviour but suppressing the ICE leads to a
duplicate error (the dg-bogus in the first test), so that didn't
seem like a good fix.
I assume keeping the alias should never actually be necessary for
DECL_ORIGINAL_TYPEs, because it will already have been checked
somewhere, even for implicit TYPE_DECLs. This patch therefore
passes a flag to say that we can safely strip aliases with
dependent template parameters.
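A sketch of the kind of construct involved (not the actual PR 92206 tests):
a typedef whose DECL_ORIGINAL_TYPE involves an alias with a dependent
template argument:

  template<typename T> using id = T;

  template<typename T>
  struct S
  {
    typedef id<T> type;   // original type is an alias with a dependent argument
  };

  S<int>::type x = 0;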
2019-11-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/cp/
PR c++/92206
* cp-tree.h (STF_STRIP_DEPENDENT): New constant.
* tree.c (strip_typedefs): Add STF_STRIP_DEPENDENT to the flags
when calling strip_typedefs recursively on a DECL_ORIGINAL_TYPE.
Don't apply the fix for DR1558 in that case; allow aliases with
dependent template parameters to be stripped instead.
gcc/testsuite/
PR c++/92206
* g++.dg/cpp0x/alias-decl-pr92206-1.C: New test.
* g++.dg/cpp0x/alias-decl-pr92206-2.C: Likewise.
* g++.dg/cpp0x/alias-decl-pr92206-3.C: Likewise.
From-SVN: r278119
2019-11-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/92473
* tree-vect-loop.c (vect_create_epilog_for_reduction): Perform
direct optab reduction in the correct type.
From-SVN: r278113
This test case checks that 'web' ignores naked clobbers.
-funroll-loops no longer implies -fweb for powerpc, so add -fweb to
enable 'web' for this case.
gcc/testsuite/
2019-11-13 Jiufu Guo <guojiufu@linux.ibm.com>
PR target/92465
* gcc.dg/pr47763.c: Add option -fweb.
From-SVN: r278112
C++98 does not have long long int, and does not use (unsigned) long
long int for hexadecimal literals. So let's use a ULL suffix here,
which is still not strict C++98, but which works with more compilers.
* config/rs6000/rs6000.md (rs6000_set_fpscr_drn): Use ULL on big
hexadecimal literal.
From-SVN: r278107
* config/rs6000/vsx.md (xscmpexpdp_<code> for CMP_TEST): Handle
UNORDERED if !HONOR_NANS (DFmode).
(xscmpexpqp_<code>_<mode> for CMP_TEST and IEEE128): Handle UNORDERED
if !HONOR_NANS (<MODE>mode).
From-SVN: r278103
gcc/ChangeLog:
PR middle-end/83688
* gimple-ssa-sprintf.c (format_result::alias_info): New struct.
(directive::argno): New member.
(format_result::aliases, format_result::alias_count): New data members.
(format_result::append_alias): New member function.
(fmtresult::dst_offset): New data member.
(pass_sprintf_length::call_info::dst_origin): New data member.
(pass_sprintf_length::call_info::dst_field, dst_offset): Same.
(char_type_p, array_elt_at_offset, field_at_offset): New functions.
(get_origin_and_offset): Same.
(format_string): Call it.
(format_directive): Call append_alias and set directive argument
number.
(maybe_warn_overlap): New function.
(pass_sprintf_length::compute_format_length): Call it.
(pass_sprintf_length::handle_gimple_call): Initialize new members.
* tree-ssa-strlen.c: Also enable when -Wrestrict is on.
gcc/testsuite/ChangeLog:
PR tree-optimization/35503
* gcc.dg/tree-ssa/builtin-sprintf-warn-23.c: New test.
From-SVN: r278098
try_forward_edges does not update dominance info, and merge_blocks
relies on it being up-to-date. In PR92430 stale dominance info makes
merge_blocks produce a loop in the dominator tree, which in turn makes
delete_basic_block loop forever.
Fix by freeing dominance info at the beginning of
pass_jump_after_combine::execute.
gcc/ChangeLog:
2019-11-12 Ilya Leoshkevich <iii@linux.ibm.com>
PR rtl-optimization/92430
* cfgcleanup.c (pass_jump_after_combine::execute): Free
dominance info at the beginning.
gcc/testsuite/ChangeLog:
2019-11-12 Ilya Leoshkevich <iii@linux.ibm.com>
PR rtl-optimization/92430
* gcc.dg/pr92430.c: New test (from Arseny Solokha).
From-SVN: r278095