PR c++/91369
* constexpr.c (struct constexpr_global_ctx): Add cleanups member,
initialize it in the ctor.
(cxx_eval_constant_expression) <case TARGET_EXPR>: If TARGET_EXPR_SLOT
is already in the values hash_map, don't evaluate it again. Put
TARGET_EXPR_SLOT into hash_map even if not lval, and push it into
save_exprs too. If there is TARGET_EXPR_CLEANUP and not
CLEANUP_EH_ONLY, push the cleanup to cleanups vector.
<case CLEANUP_POINT_EXPR>: Save outer cleanups, set cleanups to
local auto_vec, after evaluating the body evaluate cleanups and
restore previous cleanups.
<case TRY_CATCH_EXPR>: Don't crash if the first operand is NULL_TREE.
(cxx_eval_outermost_constant_expr): Set cleanups to local auto_vec,
after evaluating the expression evaluate cleanups.
* g++.dg/cpp2a/constexpr-new8.C: New test.
From-SVN: r278945
While looking into Firefox inlining dumps I noticed that we often do not
inline because we think a function makes comdat-local calls even though the
comdat group itself has already been dissolved.
* cgraph.c (cgraph_node::verify_node): Check that calls_comdat_local
is set only for symbols in a comdat group.
* symtab.c (symtab_node::dissolve_same_comdat_group_1): Clear it.
From-SVN: r278944
Even EXACT_DIV_EXPR doesn't distribute across addition for wrapping
types, so in general we can't fold EXACT_DIV_EXPRs of POLY_INT_CSTs
at compile time. This was causing an ICE when trying to gimplify the
element size field in an ARRAY_REF.
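As a hedged aside (the numbers are mine, not from the PR), here is why exact
division does not distribute over wrapping addition:

int main ()
{
  // Every division below is exact, yet distributing it changes the result.
  unsigned char a = 128, b = 130, c = 2;
  unsigned char sum = a + b;   // wraps modulo 256: 258 -> 2
  int lhs = sum / c;           // (a + b) / c == 1
  int rhs = a / c + b / c;     // 64 + 65 == 129
  return lhs != rhs;           // exits with 1: the two differ
}

A POLY_INT_CST is just such a sum (c0 + c1 * x), which is why its quotient
can't in general be folded term by term.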
If the result of that EXACT_DIV_EXPR is an invariant, we don't bother
recording it in the ARRAY_REF and simply read the element size from the
element type. This avoids the overhead of doing:
/* ??? tree_ssa_useless_type_conversion will eliminate casts to
   sizetype from another type of the same width and signedness. */
if (TREE_TYPE (aligned_size) != sizetype)
  aligned_size = fold_convert_loc (loc, sizetype, aligned_size);
return size_binop_loc (loc, MULT_EXPR, aligned_size,
                       size_int (TYPE_ALIGN_UNIT (elmt_type)));
each time array_ref_element_size is called.
So rather than read array_ref_element_size, do some arithmetic on it,
and only then check whether the result is an invariant, we might as
well check whether the element size is an invariant to start with.
We're then directly testing whether array_ref_element_size gives
a reusable value.
For consistency, the patch makes the same change for the offset field
in a COMPONENT_REF, although I don't think that can trigger yet.
2019-12-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* gimplify.c (gimplify_compound_lval): Don't gimplify and install
an array element size if array_ref_element_size is already an invariant.
Similarly don't gimplify and install a field offset if
component_ref_field_offset is already an invariant.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/struct_1.c: New test.
From-SVN: r278942
If SVE code is written for a specific vector length, it might load from
or store to fixed-sized objects. This needs to work even without
-msve-vector-bits=N (which should never be needed for correctness).
There's no way of handling a direct poly-int sized reference to a
fixed-size register; it would have to go via memory. And in that
case it's more efficient to mark the fixed-size object as
addressable from the outset, like we do for array references
with non-constant indices.
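A minimal sketch of the kind of code in question (mine, not the new
deref_1.c test); it has to work even without -msve-vector-bits=N:

#include <arm_sve.h>
#include <stdint.h>

/* Written on the assumption of 256-bit vectors: the buffer holds
   exactly one svint64_t's worth of data at that vector length.  */
int64_t fixed_buf[4];

svint64_t
load_fixed (void)
{
  /* A poly-int-sized access to a fixed-size object: fixed_buf must be
     forced to live in memory rather than be promoted to a register.  */
  return svld1_s64 (svptrue_b64 (), fixed_buf);
}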
2019-12-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* cfgexpand.c (discover_nonconstant_array_refs_r): If an access
with POLY_INT_CST size is made to a fixed-size object, force the
object to live in memory.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/deref_1.c: New test.
From-SVN: r278941
2019-12-03 Andrew Stubbs <ams@codesourcery.com>
gcc/
* config/gcn/gcn-valu.md: Change "vcondu" patterns to use VEC_1REG_MODE
for the data mode.
From-SVN: r278940
This patch implements C++20 P0960R3: Parenthesized initialization of aggregates
(<wg21.link/p0960>; see R0 for more background info). Essentially, if you have
an aggregate, you can now initialize it by (x, y), similarly to {x, y}. E.g.
struct A {
  int x, y;
  // no A(int, int) ctor (see paren-init14.C for = delete; case)
};
A a(1, 2);
The difference between ()-init and {}-init is that narrowing conversions are
permitted, designators are not permitted, a temporary object bound to
a reference does not have its lifetime extended, and there is no brace elision.
Further, things like
int a[](1, 2, 3); // will deduce the array size
const A& r(1, 2.3, 3); // narrowing is OK
int (&&rr)[](1, 2, 3);
int b[3](1, 2); // b[2] will be value-initialized
now work as expected. Note that
char f[]("fluff");
has always worked and this patch keeps it that way. Also note that A a((1, 2))
is not the same as A a{{1,2}}; the inner (1, 2) remains a COMPOUND_EXPR.
The approach I took was to handle (1, 2) similarly to {1, 2} -- conjure up
a CONSTRUCTOR, and introduce LOOKUP_AGGREGATE_PAREN_INIT to distinguish
between the two. This kind of initialization is only supported in C++20;
I've made no attempt to support it in earlier standards, just as we don't
support CTAD pre-C++17, for instance.
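Putting it together, a small sketch of what now compiles (mine, not one of
the new tests; -std=c++2a assumed):

struct A { int x, y; };

A a(1, 2);          // aggregate ()-init, new in C++20
A b(1, 2.5);        // OK: narrowing allowed, unlike A{1, 2.5}
int c[](1, 2, 3);   // array bound deduced to 3
int d[3](1, 2);     // d[2] is value-initialized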
* c-cppbuiltin.c (c_cpp_builtins): Predefine
__cpp_aggregate_paren_init=201902 for -std=c++2a.
* call.c (build_new_method_call_1): Handle parenthesized initialization
of aggregates by building up a CONSTRUCTOR.
(extend_ref_init_temps): Do nothing for CONSTRUCTOR_IS_PAREN_INIT.
* cp-tree.h (CONSTRUCTOR_IS_PAREN_INIT, LOOKUP_AGGREGATE_PAREN_INIT):
Define.
* decl.c (grok_reference_init): Handle aggregate initialization from
a parenthesized list of values.
(reshape_init): Do nothing for CONSTRUCTOR_IS_PAREN_INIT.
(check_initializer): Handle initialization of an array from a
parenthesized list of values. Use NULL_TREE instead of NULL.
* tree.c (build_cplus_new): Handle BRACE_ENCLOSED_INITIALIZER_P.
* typeck2.c (digest_init_r): Set LOOKUP_AGGREGATE_PAREN_INIT if it
receives a CONSTRUCTOR with CONSTRUCTOR_IS_PAREN_INIT set. Allow
narrowing when LOOKUP_AGGREGATE_PAREN_INIT.
(massage_init_elt): Don't lose LOOKUP_AGGREGATE_PAREN_INIT when passing
flags to digest_init_r.
* g++.dg/cpp0x/constexpr-99.C: Only expect an error in C++17 and
earlier.
* g++.dg/cpp0x/explicit7.C: Likewise.
* g++.dg/cpp0x/initlist12.C: Adjust dg-error.
* g++.dg/cpp0x/pr31437.C: Likewise.
* g++.dg/cpp2a/feat-cxx2a.C: Add __cpp_aggregate_paren_init test.
* g++.dg/cpp2a/paren-init1.C: New test.
* g++.dg/cpp2a/paren-init10.C: New test.
* g++.dg/cpp2a/paren-init11.C: New test.
* g++.dg/cpp2a/paren-init12.C: New test.
* g++.dg/cpp2a/paren-init13.C: New test.
* g++.dg/cpp2a/paren-init14.C: New test.
* g++.dg/cpp2a/paren-init15.C: New test.
* g++.dg/cpp2a/paren-init16.C: New test.
* g++.dg/cpp2a/paren-init17.C: New test.
* g++.dg/cpp2a/paren-init18.C: New test.
* g++.dg/cpp2a/paren-init19.C: New test.
* g++.dg/cpp2a/paren-init2.C: New test.
* g++.dg/cpp2a/paren-init3.C: New test.
* g++.dg/cpp2a/paren-init4.C: New test.
* g++.dg/cpp2a/paren-init5.C: New test.
* g++.dg/cpp2a/paren-init6.C: New test.
* g++.dg/cpp2a/paren-init7.C: New test.
* g++.dg/cpp2a/paren-init8.C: New test.
* g++.dg/cpp2a/paren-init9.C: New test.
* g++.dg/ext/desig10.C: Adjust dg-error.
* g++.dg/template/crash107.C: Likewise.
* g++.dg/template/crash95.C: Likewise.
* g++.old-deja/g++.jason/crash3.C: Likewise.
* g++.old-deja/g++.law/ctors11.C: Likewise.
* g++.old-deja/g++.law/ctors9.C: Likewise.
* g++.old-deja/g++.mike/net22.C: Likewise.
* g++.old-deja/g++.niklas/t128.C: Likewise.
From-SVN: r278939
Check that function arguments of type acc_device_t
are valid enumeration values in all publicly visible
functions from oacc-init.c.
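A hedged sketch of the shape of such a check (not the actual libgomp code;
the upper bound below is only illustrative -- the real acc_known_device_type
presumably checks against libgomp's own notion of the last known device type):

#include <openacc.h>

static bool
acc_known_device_type (acc_device_t d)
{
  /* Reject anything outside the range of known enumerators;
     acc_device_nvidia is merely the last publicly documented value
     assumed here.  */
  return d >= acc_device_none && d <= acc_device_nvidia;
}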
2019-12-03 Frederik Harwath <frederik@codesourcery.com>
libgomp/
* oacc-init.c (acc_known_device_type): Add function.
(unknown_device_type_error): Add function.
(name_of_acc_device_t): Change to call unknown_device_type_error
on unknown type.
(resolve_device): Use acc_known_device_type.
(acc_init): Fail if acc_device_t argument is not valid.
(acc_shutdown): Likewise.
(acc_get_num_devices): Likewise.
(acc_set_device_type): Likewise.
(acc_get_device_num): Likewise.
(acc_set_device_num): Likewise.
(acc_on_device): Add comment that argument validity is not checked.
Reviewed-by: Thomas Schwinge <thomas@codesourcery.com>
From-SVN: r278937
2019-12-03 Andrew Stubbs <ams@codesourcery.com>
libgomp/
* testsuite/lib/libgomp.exp (offload_target_to_openacc_device_type):
Recognize amdgcn.
(check_effective_target_openacc_amdgcn_accel_present): New proc.
(check_effective_target_openacc_amdgcn_accel_selected): New proc.
* testsuite/libgomp.oacc-c++/c++.exp: Add support for amdgcn.
* testsuite/libgomp.oacc-c/c.exp: Likewise.
* testsuite/libgomp.oacc-fortran/fortran.exp: Likewise.
From-SVN: r278935
2019-12-03 Richard Biener <rguenther@suse.de>
PR tree-optimization/92645
* gimple-fold.c (gimple_fold_builtin_memory_op): Fold memcpy
from or to a properly aligned register variable.
* gcc.target/i386/pr92645-5.c: New testcase.
From-SVN: r278934
2019-12-03 Richard Biener <rguenther@suse.de>
PR tree-optimization/92751
* tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Fail
when a clobber ends up in the partial-def vector.
(vn_reference_lookup_3): Let clobbers be handled by the
assignment from CTOR handling.
* g++.dg/tree-ssa/pr92751.C: New testcase.
From-SVN: r278931
* gcc-interface/utils.c (potential_alignment_gap): Delete.
(rest_of_record_type_compilation): Do not call above function. Use
the alignment of the field instead of that of its type, if need be.
When the original field has variable size, always lower the alignment
of the pointer type. Reset the bit-field status of the new field if
it does not encode a bit-field.
From-SVN: r278930
2019-12-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/testsuite/
* gfortran.dg/loop_versioning_6.f90: XFAIL the scans for ! lp64.
From-SVN: r278928
* gcc-interface/decl.c (gnat_to_gnu_subprog_type): With the Copy-In/
Copy-Out mechanism, do not promote the mode of the return type to an
integral mode if it contains a field on a non-integral type and even
demote it for 64-bit targets.
From-SVN: r278927
The slow_clock type was introduced to the testsuite in 2018 in the
testsuite/30_threads/condition_variable/members/2.cc test, so the new
header should have that date.
* testsuite/util/slow_clock.h: Fix copyright date.
From-SVN: r278926
PR target/92744
* config/i386/i386.md (peephole2 for *swap<mode>): Use
general_reg_operand predicates instead of register_operand.
* g++.dg/dfp/pr92744.C: New test.
From-SVN: r278924
PR c++/92705
* call.c (strip_standard_conversion): New function.
(build_new_op_1): Use it for user_conv_p.
(compare_ics): Likewise.
(source_type): Likewise.
* g++.dg/conversion/ambig4.C: New test.
From-SVN: r278922
PR c++/92695
* constexpr.c (cxx_bind_parameters_in_call): For virtual calls,
adjust the first argument to point to the derived object rather
than its base.
* g++.dg/cpp2a/constexpr-virtual14.C: New test.
From-SVN: r278921
2019-12-03 Richard Biener <rguenther@suse.de>
PR tree-optimization/92645
* tree-ssa.c (execute_update_addresses_taken): Avoid representing
a full def of a vector via a BIT_INSERT_EXPR.
From-SVN: r278920
GCC wrongly accepts [*] in old-style parameter definitions because
parm_flag is set on the scope used for those definitions and,
unlike the case of a prototype in a function definition, there is no
subsequent check to disallow this invalid usage. This patch adds such
a check. (At this point we don't have location information for the
[*], so the diagnostic location isn't ideal.)
Bootstrapped with no regressions for x86_64-pc-linux-gnu.
PR c/88704
gcc/c:
* c-decl.c (store_parm_decls_oldstyle): Diagnose use of [*] in
old-style parameter definitions.
gcc/testsuite:
* gcc.dg/vla-25.c: New test.
From-SVN: r278917
* g++.dg/lto/inline-crossmodule-1_0.C: Use -fdump-ipa-inline-details
instead of -fdump-ipa-inline. Use "inline" instead of "inlined" as
last argument to scan-wpa-ipa-dump-times, use \\\( and \\\) instead of
( and ) in the regex.
From-SVN: r278916
PR c++/92695
* constexpr.c (cxx_eval_constant_expression) <case OBJ_TYPE_REF>: Use
STRIP_NOPS before checking for ADDR_EXPR.
* g++.dg/cpp2a/constexpr-virtual15.C: New test.
From-SVN: r278912
In this PR, IPA-CP was misled into using NOP_EXPR rather than
VIEW_CONVERT_EXPR to reinterpret a vector of 4 shorts as a vector
of 2 ints. This tripped the tree-cfg.c assert I'd added in r278245.
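For illustration only (not the IPA-CP code itself), the source-level analogue
of the distinction: going from 4 shorts to 2 ints is a bit-level
reinterpretation, i.e. VIEW_CONVERT_EXPR territory, not a value conversion:

typedef short v4hi __attribute__ ((vector_size (8)));
typedef int   v2si __attribute__ ((vector_size (8)));

v2si
reinterpret (v4hi x)
{
  v2si r;
  __builtin_memcpy (&r, &x, sizeof r);   /* keep the bits, change the view */
  return r;
}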
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR middle-end/92741
* fold-const.c (fold_convertible_p): Check vector types more
thoroughly.
gcc/testsuite/
PR middle-end/92741
* gcc.dg/pr92741.c: New test.
From-SVN: r278910
This patch reports an error if code tries to use variable-length
SVE types when SVE is disabled. We already report a similar error
for definitions or uses of SVE functions when SVE is disabled.
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64.c (aarch64_report_sve_required): New function.
(aarch64_expand_mov_immediate): Use it when attempting to measure
the length of an SVE vector.
(aarch64_mov_operand_p): Only allow SVE CNT immediates when
SVE is enabled.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/nosve_4.c: New test.
* gcc.target/aarch64/sve/acle/general/nosve_5.c: Likewise.
* gcc.target/aarch64/sve/pcs/nosve_4.c: Expect a second error
for the copy.
* gcc.target/aarch64/sve/pcs/nosve_5.c: Likewise.
* gcc.target/aarch64/sve/pcs/nosve_6.c: Likewise.
From-SVN: r278909
Now that the C frontend can cope with POLY_INT_CST-length initialisers,
we can make aarch64-sve-acle.exp run the full set of tests. This will
introduce new failures for -mabi=ilp32; I'll make the testsuite ILP32
clean separately.
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/testsuite/
* gcc.target/aarch64/sve/acle/aarch64-sve-acle.exp: Run the
general/* tests too.
From-SVN: r278908
When writing vector-length specific SVE code, it's useful to be able
to store an svbool_t predicate in a GNU vector of unsigned chars.
This patch makes sure that there is no overhead when converting
to that form and then immediately reading it back again.
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-sve-builtins.h
(gimple_folder::force_vector): Declare.
* config/aarch64/aarch64-sve-builtins.cc
(gimple_folder::force_vector): New function.
* config/aarch64/aarch64-sve-builtins-base.cc
(svcmp_impl::fold): Likewise.
(svdup_impl::fold): Handle svdup_z too.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/eqne_dup_1.c: New test.
* gcc.target/aarch64/sve/acle/asm/dup_f16.c (dup_0_f16_z): Expect
the call to be folded to zero.
* gcc.target/aarch64/sve/acle/asm/dup_f32.c (dup_0_f32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_f64.c (dup_0_f64_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s8.c (dup_0_s8_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s16.c (dup_0_s16_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s32.c (dup_0_s32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s64.c (dup_0_s64_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u8.c (dup_0_u8_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u16.c (dup_0_u16_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u32.c (dup_0_u32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u64.c (dup_0_u64_z): Likewise.
From-SVN: r278907
Since r275022, which deprecates some uses of volatile, all arm-fp16-ops-*.C
tests were failing with warnings about deprecated volatile uses on arm-none-eabi
and arm-none-linux-gnueabihf. This patch removes the volatile declarations from
the header. Since none of the tests are run at high optimization levels,
this change should not defeat what the tests really exercise.
gcc/testsuite/ChangeLog:
2019-12-02 Sudakshina Das <sudi.das@arm.com>
* g++.dg/ext/arm-fp16/arm-fp16-ops.h: Remove volatile keyword.
From-SVN: r278905
This is the equivalent to PR libstdc++/91906, but for shared_mutex.
A non-standard clock may tick more slowly than std::chrono::steady_clock.
This means that we risk returning false early when the specified timeout
may not have expired. This can be avoided by looping until the timeout time
as reported by the non-standard clock has been reached.
Unfortunately, we have no way to tell whether the non-standard clock ticks
more quickly than std::chrono::steady_clock. If it does then we risk
returning later than would be expected, but that is unavoidable without
waking up periodically to check, which would be rather too expensive.
François Dumont pointed out[1] a flaw in an earlier version of this patch
that revealed a hole in the test coverage, so I've added a new test that
try_lock_until acts as try_lock if the timeout has already expired.
[1] https://gcc.gnu.org/ml/libstdc++/2019-10/msg00021.html
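A sketch of the looping idea in isolation (illustrative helper, not the
shared_mutex implementation itself):

#include <chrono>
#include <shared_mutex>

template <typename Clock, typename Duration>
bool
try_lock_until_user_clock (std::shared_timed_mutex &m,
                           const std::chrono::time_point<Clock, Duration> &abs)
{
  using steady = std::chrono::steady_clock;
  while (true)
    {
      auto now = Clock::now ();
      if (abs <= now)
        /* The caller's clock says the deadline has passed, so behave
           like try_lock.  */
        return m.try_lock ();
      /* Wait out the remaining time as measured by steady_clock...  */
      if (m.try_lock_until (steady::now () + (abs - now)))
        return true;
      /* ...and if Clock ticks more slowly, the deadline may not have
         been reached yet, so loop rather than return false early.  */
    }
}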
2019-12-02 Mike Crowe <mac@mcrowe.com>
Fix try_lock_until and try_lock_shared_until on arbitrary clock
* include/std/shared_mutex (shared_timed_mutex::try_lock_until)
(shared_timed_mutex::try_lock_shared_until): Loop until the absolute
timeout time is reached as measured against the appropriate clock.
* testsuite/30_threads/shared_timed_mutex/try_lock_until/1.cc: New
file. Test try_lock_until and try_lock_shared_until timeouts against
various clocks.
From-SVN: r278904
The pthread_rwlock_clockrdlock and pthread_rwlock_clockwrlock functions
were added to glibc in v2.30. They have also been added to Android
Bionic. If these functions are available in the C library then they can
be used to implement shared_timed_mutex::try_lock_until,
shared_timed_mutex::try_lock_for,
shared_timed_mutex::try_lock_shared_until and
shared_timed_mutex::try_lock_shared_for so that they are no longer
affected by the system clock being warped. (This is the shared_mutex
equivalent of PR libstdc++/78237 for mutex.)
If the new functions are available then steady_clock is deemed to be the
"best" clock available which means that it is used for the relative
try_lock_for calls and absolute try_lock_until calls using steady_clock
and user-defined clocks. It's not possible to have
_GLIBCXX_USE_PTHREAD_RWLOCK_CLOCKLOCK defined without
_GLIBCXX_USE_PTHREAD_RWLOCK_T, so the requirement that the clock be the
same as condition_variable is maintained. Calls explicitly using
system_clock (aka high_resolution_clock) continue to use CLOCK_REALTIME
via the old pthread_rwlock_timedrdlock and pthread_rwlock_timedwrlock
functions.
If the new functions are not available then system_clock is deemed to be
the "best" clock available which means that the previous suboptimal
behaviour remains.
Additionally, the user-defined clock used with
shared_timed_mutex::try_lock_for and shared_timed_mutex::try_lock_shared_for
may have higher precision than __clock_t. We may need to round the
duration up to ensure that the timeout is long enough. (See
__timed_mutex_impl::_M_try_lock_for)
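For reference, a minimal sketch of the new glibc 2.30 call this builds on
(my wrapper, not the libstdc++ code); because the clock is named explicitly,
a steady_clock deadline survives the system clock being warped:

#include <pthread.h>
#include <time.h>

bool
steady_wrlock_until (pthread_rwlock_t *rw, const struct timespec *abs)
{
  /* Absolute write-lock wait measured against CLOCK_MONOTONIC.  */
  return pthread_rwlock_clockwrlock (rw, CLOCK_MONOTONIC, abs) == 0;
}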
2019-12-02 Mike Crowe <mac@mcrowe.com>
Add full steady_clock support to shared_timed_mutex
* acinclude.m4 (GLIBCXX_CHECK_PTHREAD_RWLOCK_CLOCKLOCK): Define
to check for the presence of both pthread_rwlock_clockrdlock and
pthread_rwlock_clockwrlock.
* config.h.in: Regenerate.
* configure.ac: Call GLIBCXX_CHECK_PTHREAD_RWLOCK_CLOCKLOCK.
* configure: Regenerate.
* include/std/shared_mutex (shared_timed_mutex): Define __clock_t as
the best clock to use for relative waits.
(shared_timed_mutex::try_lock_for): Round up wait duration if necessary.
(shared_timed_mutex::try_lock_shared_for): Likewise.
(shared_timed_mutex::try_lock_until): Use existing try_lock_until
implementation for system_clock (which matches __clock_t when
_GLIBCXX_USE_PTHREAD_RWLOCK_CLOCKLOCK is not defined). Add new
overload for steady_clock that uses pthread_rwlock_clockwrlock if it
is available. Simplify overload for non-standard clock to just call
try_lock_for with a relative timeout.
(shared_timed_mutex::try_lock_shared_until): Likewise.
From-SVN: r278903
A non-standard clock may tick more slowly than
std::chrono::steady_clock. This means that we risk returning false
early when the specified timeout may not have expired. This can be
avoided by looping until the timeout time as reported by the
non-standard clock has been reached.
Unfortunately, we have no way to tell whether the non-standard clock
ticks more quickly than std::chrono::steady_clock. If it does then we
risk returning later than would be expected, but that is unavoidable and
permitted by the standard.
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/91906 Fix timed_mutex::try_lock_until on arbitrary clock
* include/std/mutex (__timed_mutex_impl::_M_try_lock_until): Loop
until the absolute timeout time is reached as measured against the
appropriate clock.
* testsuite/util/slow_clock.h: New file. Move implementation of
slow_clock test class.
* testsuite/30_threads/condition_variable/members/2.cc: Include
slow_clock from header.
* testsuite/30_threads/shared_timed_mutex/try_lock/3.cc: Convert
existing test to templated function so that it can be called with
both system_clock and steady_clock.
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: Also run test
using slow_clock to test above fix.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc:
Likewise.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/4.cc: Add
new test that try_lock_until behaves as try_lock if the timeout has
already expired or exactly matches the current time.
From-SVN: r278902
The pthread_mutex_clocklock function is available in glibc since the
2.30 release. If this function is available in the C library it can be
used to fix PR libstdc++/78237 by supporting steady_clock properly with
timed_mutex.
This means that code using timed_mutex::try_lock_for or
timed_mutex::try_lock_until with steady_clock is no longer subject to timing
out early or potentially waiting for much longer if the system clock is
warped at an inopportune moment.
If pthread_mutex_clocklock is available then steady_clock is deemed to
be the "best" clock available which means that it is used for the
relative try_lock_for calls and absolute try_lock_until calls using
steady_clock and user-defined clocks. Calls explicitly using
system_clock (aka high_resolution_clock) continue to use CLOCK_REALTIME
via __gthread_mutex_timedlock.
If pthread_mutex_clocklock is not available then system_clock is deemed
to be the "best" clock available which means that the previous
suboptimal behaviour remains.
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/78237 Add full steady_clock support to timed_mutex
* acinclude.m4 (GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK): Define to
detect presence of pthread_mutex_clocklock function.
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac: Call GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK.
* include/std/mutex (__timed_mutex_impl): Remove unnecessary __clock_t.
(__timed_mutex_impl::_M_try_lock_for): Use best clock to turn relative
timeout into absolute timeout.
(__timed_mutex_impl::_M_try_lock_until): Keep existing implementation
for system_clock. Add new implementation for steady_clock that calls
_M_clocklock. Modify overload for user-defined clock to use a relative
wait so that it automatically uses the best clock.
[_GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK] (timed_mutex::_M_clocklock):
New member function.
(recursive_timed_mutex::_M_clocklock): Likewise.
From-SVN: r278901
2019-12-02 Mike Crowe <mac@mcrowe.com>
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: New test.
Ensure that timed_mutex::try_lock_until actually times out after the
specified time when using both system_clock and steady_clock.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc:
New test. Likewise but for recursive_timed_mutex.
* testsuite/30_threads/timed_mutex/try_lock_until/57641.cc: Template
test functions and use them to test both steady_clock and system_clock.
* testsuite/30_threads/unique_lock/locking/4.cc: Likewise. Wrap call
to timed_mutex::try_lock_until in VERIFY macro to check its return
value.
From-SVN: r278900
2019-12-02 Richard Biener <rguenther@suse.de>
PR tree-optimization/92742
* tree-vect-loop.c (vect_fixup_reduc_chain): Do not
touch the def-type but verify it is consistent with the
original stmts.
* gcc.dg/torture/pr92742.c: New testcase.
From-SVN: r278896
PR tree-optimization/92712
* match.pd ((A * B) +- A -> (B +- 1) * A,
A +- (A * B) -> (1 +- B) * A): Allow optimizing signed integers
even when we don't know anything about range of A, but do know
something about range of B and the simplification won't introduce
new UB.
* gcc.dg/tree-ssa/pr92712-1.c: New test.
* gcc.dg/tree-ssa/pr92712-2.c: New test.
* gcc.dg/tree-ssa/pr92712-3.c: New test.
* gfortran.dg/loop_versioning_1.f90: Adjust the expected number of
"likely to be innermost dimension" messages.
* gfortran.dg/loop_versioning_10.f90: Likewise.
* gfortran.dg/loop_versioning_6.f90: Likewise.
From-SVN: r278894
2019-12-01 Sandra Loosemore <sandra@codesourcery.com>
Fix bugs relating to flexibly-sized objects in nios2 backend.
PR target/92499
gcc/c/
* c-decl.c (flexible_array_type_p): Move to common code.
gcc/
* config/nios2/nios2.c (nios2_in_small_data_p): Do not consider
objects of flexible types to be small if they have internal linkage
or are declared extern.
* config/nios2/nios2.h (ASM_OUTPUT_ALIGNED_LOCAL): Replace with...
(ASM_OUTPUT_ALIGNED_DECL_LOCAL): ...this. Use targetm.in_small_data_p
instead of the size of the object initializer.
* tree.c (flexible_array_type_p): Move from C front end, and
generalize to handle fields in non-C structures.
* tree.h (flexible_array_type_p): Declare.
gcc/testsuite/
* gcc.target/nios2/pr92499-1.c: New.
* gcc.target/nios2/pr92499-2.c: New.
* gcc.target/nios2/pr92499-3.c: New.
From-SVN: r278891
The P9LE-generated instruction sequence is not worse than the P8LE one:
mtvsrdd;xxlnot;stxv vs. not;not;std;std.
It can have longer latency, but latency via memory is not so critical,
and it saves decode and other resources. It's hard to say which is
best. Update the test case to fix the failures.
gcc/testsuite/ChangeLog:
2019-12-02 Luo Xiong Hu <luoxhu@linux.ibm.com>
PR testsuite/92398
* gcc.target/powerpc/pr72804.c: Split the store function to...
* gcc.target/powerpc/pr92398.h: ... this one. New.
* gcc.target/powerpc/pr92398.p9+.c: New.
* gcc.target/powerpc/pr92398.p9-.c: New.
* lib/target-supports.exp (check_effective_target_p8): New.
(check_effective_target_p9+): New.
From-SVN: r278890