2019-12-03 Richard Biener <rguenther@suse.de>
PR tree-optimization/92645
* tree-ssa.c (execute_update_addresses_taken): Avoid representing
a full def of a vector via a BIT_INSERT_EXPR.
From-SVN: r278920
GCC wrongly accepts [*] in old-style parameter definitions because
parm_flag is set on the scope used for those definitions and,
unlike the case of a prototype in a function definition, there is no
subsequent check to disallow this invalid usage. This patch adds such
a check. (At this point we don't have location information for the
[*], so the diagnostic location isn't ideal.)
Bootstrapped with no regressions for x86_64-pc-linux-gnu.
PR c/88704
gcc/c:
* c-decl.c (store_parm_decls_oldstyle): Diagnose use of [*] in
old-style parameter definitions.
gcc/testsuite:
* gcc.dg/vla-25.c: New test.
From-SVN: r278917
* g++.dg/lto/inline-crossmodule-1_0.C: Use -fdump-ipa-inline-details
instead of -fdump-ipa-inline. Use "inline" instead of "inlined" as
last argument to scan-wpa-ipa-dump-times, use \\\( and \\\) instead of
( and ) in the regex.
From-SVN: r278916
PR c++/92695
* constexpr.c (cxx_eval_constant_expression) <case OBJ_TYPE_REF>: Use
STRIP_NOPS before checking for ADDR_EXPR.
* g++.dg/cpp2a/constexpr-virtual15.C: New test.
From-SVN: r278912
In this PR, IPA-CP was misled into using NOP_EXPR rather than
VIEW_CONVERT_EXPR to reinterpret a vector of 4 shorts as a vector
of 2 ints. This tripped the tree-cfg.c assert I'd added in r278245.
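For illustration only (this is not the PR's testcase, which lives in
gcc.dg/pr92741.c), the kind of reinterpretation involved, written with
GNU vector extensions: the same 64 bits are viewed as a different vector
type, which is the sort of bit-level reinterpretation VIEW_CONVERT_EXPR
models, as opposed to the value conversion a NOP_EXPR would imply.

  typedef short v4hi __attribute__ ((vector_size (8)));
  typedef int   v2si __attribute__ ((vector_size (8)));

  v2si
  reinterpret_bits (v4hi x)
  {
    v2si y;
    __builtin_memcpy (&y, &x, sizeof y);  // reuse the same 64 bits as v2si
    return y;
  }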
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR middle-end/92741
* fold-const.c (fold_convertible_p): Check vector types more
thoroughly.
gcc/testsuite/
PR middle-end/92741
* gcc.dg/pr92741.c: New test.
From-SVN: r278910
This patch reports an error if code tries to use variable-length
SVE types when SVE is disabled. We already report a similar error
for definitions or uses of SVE functions when SVE is disabled.
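As a hypothetical sketch (not copied from the new nosve tests, whose
exact options may differ), assume arm_sve.h is included while SVE is
enabled and SVE is then switched off with a target pragma; copying a
variable-length SVE value then needs the vector length, and the new
error is reported at that point:

  #include <arm_sve.h>
  #pragma GCC target ("+nosve")

  void
  copy (svint32_t *dst, svint32_t *src)
  {
    *dst = *src;   // an SVE move: now cleanly diagnosed without SVE
  }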
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64.c (aarch64_report_sve_required): New function.
(aarch64_expand_mov_immediate): Use it when attempting to measure
the length of an SVE vector.
(aarch64_mov_operand_p): Only allow SVE CNT immediates when
SVE is enabled.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/nosve_4.c: New test.
* gcc.target/aarch64/sve/acle/general/nosve_5.c: Likewise.
* gcc.target/aarch64/sve/pcs/nosve_4.c: Expect a second error
for the copy.
* gcc.target/aarch64/sve/pcs/nosve_5.c: Likewise.
* gcc.target/aarch64/sve/pcs/nosve_6.c: Likewise.
From-SVN: r278909
Now that the C frontend can cope with POLY_INT_CST-length initialisers,
we can make aarch64-sve-acle.exp run the full set of tests. This will
introduce new failures for -mabi=ilp32; I'll make the testsuite ILP32
clean separately.
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/testsuite/
* gcc.target/aarch64/sve/acle/aarch64-sve-acle.exp: Run the
general/* tests too.
From-SVN: r278908
When writing vector-length specific SVE code, it's useful to be able
to store an svbool_t predicate in a GNU vector of unsigned chars.
This patch makes sure that there is no overhead when converting
to that form and then immediately reading it back again.
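A sketch of the round trip being described, under the assumed encoding
of one byte per element (1 for an active lane, 0 for an inactive one);
this is not copied from eqne_dup_1.c.  A GNU vector of unsigned chars
would sit between storing the result of pred_to_bytes and loading the
argument of bytes_to_pred, and the new svcmp/svdup_z folds let the whole
round trip collapse back to the original predicate:

  #include <arm_sve.h>

  svuint8_t
  pred_to_bytes (svbool_t p)
  {
    return svdup_u8_z (p, 1);                  // 1 where p is true, else 0
  }

  svbool_t
  bytes_to_pred (svuint8_t bytes)
  {
    return svcmpne (svptrue_b8 (), bytes, 0);  // recover the predicate
  }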
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-sve-builtins.h
(gimple_folder::force_vector): Declare.
* config/aarch64/aarch64-sve-builtins.cc
(gimple_folder::force_vector): New function.
* config/aarch64/aarch64-sve-builtins-base.cc
(svcmp_impl::fold): Likewise.
(svdup_impl::fold): Handle svdup_z too.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/eqne_dup_1.c: New test.
* gcc.target/aarch64/sve/acle/asm/dup_f16.c (dup_0_f16_z): Expect
the call to be folded to zero.
* gcc.target/aarch64/sve/acle/asm/dup_f32.c (dup_0_f32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_f64.c (dup_0_f64_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s8.c (dup_0_s8_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s16.c (dup_0_s16_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s32.c (dup_0_s32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_s64.c (dup_0_s64_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u8.c (dup_0_u8_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u16.c (dup_0_u16_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u32.c (dup_0_u32_z): Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_u64.c (dup_0_u64_z): Likewise.
From-SVN: r278907
Since r275022 which deprecates some uses of volatile, all arm-fp16-ops-*.C
were failing with warnings about deprecated volatile uses on arm-none-eabi and
arm-none-linux-gnueabihf. This patch removes the volatile declarations from
the header. Since none of the tests are run at high optimization levels,
this change should not defeat the real purpose of the tests.
gcc/testsuite/ChangeLog:
2019-12-02 Sudakshina Das <sudi.das@arm.com>
* g++.dg/ext/arm-fp16/arm-fp16-ops.h: Remove volatile keyword.
From-SVN: r278905
This is the equivalent to PR libstdc++/91906, but for shared_mutex.
A non-standard clock may tick more slowly than std::chrono::steady_clock.
This means that we risk returning false early when the specified timeout
may not have expired. This can be avoided by looping until the timeout time
as reported by the non-standard clock has been reached.
Unfortunately, we have no way to tell whether the non-standard clock ticks
more quickly than std::chrono::steady_clock. If it does then we risk
returning later than would be expected, but that is unavoidable without
waking up periodically to check, which would be rather too expensive.
François Dumont pointed out[1] a flaw in an earlier version of this patch
that revealed a hole in the test coverage, so I've added a new test checking
that try_lock_until acts as try_lock if the timeout has already expired.
[1] https://gcc.gnu.org/ml/libstdc++/2019-10/msg00021.html
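A minimal sketch of the retry loop, using invented names and assuming a
lockable type that provides try_lock and try_lock_for; this is not the
actual libstdc++ implementation:

  #include <chrono>

  template<typename Lockable, typename Clock, typename Duration>
    bool
    try_lock_until_generic(Lockable& m,
                           const std::chrono::time_point<Clock, Duration>& abs_time)
    {
      do
        {
          auto now = Clock::now();
          if (abs_time <= now)
            return m.try_lock();          // already expired: act as try_lock
          if (m.try_lock_for(abs_time - now))
            return true;                  // relative wait against steady_clock
        }
      while (Clock::now() < abs_time);    // Clock may tick more slowly
      return false;
    }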
2019-12-02 Mike Crowe <mac@mcrowe.com>
Fix try_lock_until and try_lock_shared_until on arbitrary clock
* include/std/shared_mutex (shared_timed_mutex::try_lock_until)
(shared_timed_mutex::try_lock_shared_until): Loop until the absolute
timeout time is reached as measured against the appropriate clock.
* testsuite/30_threads/shared_timed_mutex/try_lock_until/1.cc: New
file. Test try_lock_until and try_lock_shared_until timeouts against
various clocks.
From-SVN: r278904
The pthread_rwlock_clockrdlock and pthread_rwlock_clockwrlock functions
were added to glibc in v2.30. They have also been added to Android
Bionic. If these functions are available in the C library then they can
be used to implement shared_timed_mutex::try_lock_until,
shared_timed_mutex::try_lock_for,
shared_timed_mutex::try_lock_shared_until and
shared_timed_mutex::try_lock_shared_for so that they are no longer
affected by the system clock being warped. (This is the shared_mutex
equivalent of PR libstdc++/78237 for mutex.)
If the new functions are available then steady_clock is deemed to be the
"best" clock available which means that it is used for the relative
try_lock_for calls and absolute try_lock_until calls using steady_clock
and user-defined clocks. It's not possible to have
_GLIBCXX_USE_PTHREAD_RWLOCK_CLOCKLOCK defined without
_GLIBCXX_USE_PTHREAD_RWLOCK_T, so the requirement that the clock be the
same as condition_variable is maintained. Calls explicitly using
system_clock (aka high_resolution_clock) continue to use CLOCK_REALTIME
via the old pthread_rwlock_timedrdlock and pthread_rwlock_timedwrlock
functions.
If the new functions are not available then system_clock is deemed to be
the "best" clock available which means that the previous suboptimal
behaviour remains.
Additionally, the user-defined clock used with
shared_timed_mutex::try_lock_for and shared_mutex::try_lock_shared_for
may have higher precision than __clock_t. We may need to round the
duration up to ensure that the timeout is long enough. (See
__timed_mutex_impl::_M_try_lock_for)
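As a rough sketch of the mechanism (not the libstdc++ code, and assuming
steady_clock is backed by CLOCK_MONOTONIC on the platform), an absolute
steady_clock deadline can be handed straight to the new glibc 2.30
function without any clock translation:

  #include <chrono>
  #include <pthread.h>
  #include <time.h>

  bool
  try_wrlock_until_steady(pthread_rwlock_t& rw,
                          std::chrono::time_point<std::chrono::steady_clock,
                                                  std::chrono::nanoseconds> abs_time)
  {
    auto s = std::chrono::time_point_cast<std::chrono::seconds>(abs_time);
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(abs_time - s);
    struct timespec ts;
    ts.tv_sec = static_cast<time_t>(s.time_since_epoch().count());
    ts.tv_nsec = static_cast<long>(ns.count());
    return pthread_rwlock_clockwrlock(&rw, CLOCK_MONOTONIC, &ts) == 0;
  }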
2019-12-02 Mike Crowe <mac@mcrowe.com>
Add full steady_clock support to shared_timed_mutex
* acinclude.m4 (GLIBCXX_CHECK_PTHREAD_RWLOCK_CLOCKLOCK): Define
to check for the presence of both pthread_rwlock_clockrdlock and
pthread_rwlock_clockwrlock.
* config.h.in: Regenerate.
* configure.ac: Call GLIBCXX_CHECK_PTHREAD_RWLOCK_CLOCKLOCK.
* configure: Regenerate.
* include/std/shared_mutex (shared_timed_mutex): Define __clock_t as
the best clock to use for relative waits.
(shared_timed_mutex::try_lock_for): Round up wait duration if necessary.
(shared_timed_mutex::try_lock_shared_for): Likewise.
(shared_timed_mutex::try_lock_until): Use existing try_lock_until
implementation for system_clock (which matches __clock_t when
_GLIBCXX_USE_PTHREAD_RWLOCK_CLOCKLOCK is not defined). Add new
overload for steady_clock that uses pthread_rwlock_clockwrlock if it
is available. Simplify overload for non-standard clock to just call
try_lock_for with a relative timeout.
(shared_timed_mutex::try_lock_shared_until): Likewise.
From-SVN: r278903
A non-standard clock may tick more slowly than
std::chrono::steady_clock. This means that we risk returning false
early when the specified timeout may not have expired. This can be
avoided by looping until the timeout time as reported by the
non-standard clock has been reached.
Unfortunately, we have no way to tell whether the non-standard clock
ticks more quickly than std::chrono::steady_clock. If it does then we
risk returning later than would be expected, but that is unavoidable and
permitted by the standard.
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/91906 Fix timed_mutex::try_lock_until on arbitrary clock
* include/std/mutex (__timed_mutex_impl::_M_try_lock_until): Loop
until the absolute timeout time is reached as measured against the
appropriate clock.
* testsuite/util/slow_clock.h: New file. Move implementation of
slow_clock test class.
* testsuite/30_threads/condition_variable/members/2.cc: Include
slow_clock from header.
* testsuite/30_threads/shared_timed_mutex/try_lock/3.cc: Convert
existing test to templated function so that it can be called with
both system_clock and steady_clock.
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: Also run test
using slow_clock to test above fix.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc:
Likewise.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/4.cc: Add
new test that try_lock_until behaves as try_lock if the timeout has
already expired or exactly matches the current time.
From-SVN: r278902
The pthread_mutex_clocklock function is available in glibc since the
2.30 release. If this function is available in the C library it can be
used to fix PR libstdc++/78237 by supporting steady_clock properly with
timed_mutex.
This means that code using timed_mutex::try_lock_for or
timed_mutex::try_lock_until with steady_clock is no longer subject to timing
out early or potentially waiting for much longer if the system clock is
warped at an inopportune moment.
If pthread_mutex_clocklock is available then steady_clock is deemed to
be the "best" clock available which means that it is used for the
relative try_lock_for calls and absolute try_lock_until calls using
steady_clock and user-defined clocks. Calls explicitly using
system_clock (aka high_resolution_clock) continue to use CLOCK_REALTIME
via __gthread_cond_timedwait.
If pthread_mutex_clocklock is not available then system_clock is deemed
to be the "best" clock available which means that the previous
suboptimal behaviour remains.
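The user-visible effect can be illustrated with a small (hypothetical)
snippet: when pthread_mutex_clocklock is available, the wait below is
measured against the monotonic clock, so warping the system clock while
it is in progress no longer shortens or lengthens it:

  #include <chrono>
  #include <mutex>

  bool
  wait_briefly(std::timed_mutex& m)
  {
    using namespace std::chrono;
    return m.try_lock_until(steady_clock::now() + milliseconds(100));
  }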
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/78237 Add full steady_clock support to timed_mutex
* acinclude.m4 (GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK): Define to
detect presence of pthread_mutex_clocklock function.
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac: Call GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK.
* include/std/mutex (__timed_mutex_impl): Remove unnecessary __clock_t.
(__timed_mutex_impl::_M_try_lock_for): Use best clock to turn relative
timeout into absolute timeout.
(__timed_mutex_impl::_M_try_lock_until): Keep existing implementation
for system_clock. Add new implementation for steady_clock that calls
_M_clocklock. Modify overload for user-defined clock to use a relative
wait so that it automatically uses the best clock.
[_GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK] (timed_mutex::_M_clocklock):
New member function.
(recursive_timed_mutex::_M_clocklock): Likewise.
From-SVN: r278901
2019-12-02 Mike Crowe <mac@mcrowe.com>
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: New test.
Ensure that timed_mutex::try_lock_until actually times out after the
specified time when using both system_clock and steady_clock.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc: New
test. Likewise but for recursive_timed_mutex.
* testsuite/30_threads/timed_mutex/try_lock_until/57641.cc: Template
test functions and use them to test both steady_clock and system_clock.
* testsuite/30_threads/unique_lock/locking/4.cc: Likewise. Wrap call
to timed_mutex::try_lock_until in VERIFY macro to check its return
value.
From-SVN: r278900
2019-12-02 Richard Biener <rguenther@suse.de>
PR tree-optimization/92742
* tree-vect-loop.c (vect_fixup_reduc_chain): Do not
touch the def-type but verify it is consistent with the
original stmts.
* gcc.dg/torture/pr92742.c: New testcase.
From-SVN: r278896
PR tree-optimization/92712
* match.pd ((A * B) +- A -> (B +- 1) * A,
A +- (A * B) -> (1 +- B) * A): Allow optimizing signed integers
even when we don't know anything about range of A, but do know
something about range of B and the simplification won't introduce
new UB.
* gcc.dg/tree-ssa/pr92712-1.c: New test.
* gcc.dg/tree-ssa/pr92712-2.c: New test.
* gcc.dg/tree-ssa/pr92712-3.c: New test.
* gfortran.dg/loop_versioning_1.f90: Adjust expected number of
"likely to be innermost dimension" messages.
* gfortran.dg/loop_versioning_10.f90: Likewise.
* gfortran.dg/loop_versioning_6.f90: Likewise.
From-SVN: r278894
2019-12-01 Sandra Loosemore <sandra@codesourcery.com>
Fix bugs relating to flexibly-sized objects in nios2 backend.
PR target/92499
gcc/c/
* c-decl.c (flexible_array_type_p): Move to common code.
gcc/
* config/nios2/nios2.c (nios2_in_small_data_p): Do not consider
objects of flexible types to be small if they have internal linkage
or are declared extern.
* config/nios2/nios2.h (ASM_OUTPUT_ALIGNED_LOCAL): Replace with...
(ASM_OUTPUT_ALIGNED_DECL_LOCAL): ...this. Use targetm.in_small_data_p
instead of the size of the object initializer.
* tree.c (flexible_array_type_p): Move from C front end, and
generalize to handle fields in non-C structures.
* tree.h (flexible_array_type_p): Declare.
gcc/testsuite/
* gcc.target/nios2/pr92499-1.c: New.
* gcc.target/nios2/pr92499-2.c: New.
* gcc.target/nios2/pr92499-3.c: New.
From-SVN: r278891
The P9LE-generated instruction sequence is no worse than P8LE's:
mtvsrdd;xxlnot;stxv vs. not;not;std;std.
It can have a longer latency, but latency through memory is not so critical,
and it does save decode and other resources; it's hard to say which is best.
Update the test case to fix the failures.
gcc/testsuite/ChangeLog:
2019-12-02 Luo Xiong Hu <luoxhu@linux.ibm.com>
PR testsuite/92398
* gcc.target/powerpc/pr72804.c: Split the store function to...
* gcc.target/powerpc/pr92398.h: ... this one. New.
* gcc.target/powerpc/pr92398.p9+.c: New.
* gcc.target/powerpc/pr92398.p9-.c: New.
* lib/target-supports.exp (check_effective_target_p8): New.
(check_effective_target_p9+): New.
From-SVN: r278890
* profile-count.h (profile_count::operator<): Use IPA value for
comparison.
(profile_count::operator>): Likewise.
(profile_count::operator<=): Likewise.
(profile_count::operator>=): Likewise.
* predict.c (maybe_hot_count_p): Do not convert to gcov_type.
From-SVN: r278885
This patch adds a new target hook to check whether there are any
target-specific reasons why a type cannot be used in a certain
source-language context. It works in a similar way to existing
hooks like TARGET_INVALID_CONVERSION and TARGET_INVALID_UNARY_OP.
The reason for adding the hook is to report invalid uses of SVE types.
Throughout a TU, the SVE vector and predicate types represent values
that can be stored in an SVE vector or predicate register. At certain
points in the TU we might be able to generate code that assumes the
registers have a particular size, but often we can't. In some cases
we might even make multiple different assumptions in the same TU
(e.g. when implementing an ifunc for multiple vector lengths).
But SVE types themselves are the same type throughout. The register
size assumptions change how we generate code, but they don't change
the definition of the types.
This means that the types do not have a fixed size at the C level
even when -msve-vector-bits=N is in effect. It also means that the
size does not work in the same way as for C VLAs, where the abstract
machine evaluates the size at a particular point and then carries that
size forward to later code.
The SVE ACLE deals with this by making it invalid to use C and C++
constructs that depend on the size or layout of SVE types. The spec
refers to the types as "sizeless" types and defines their semantics as
edits to the standards. See:
https://gcc.gnu.org/ml/gcc-patches/2018-10/msg00868.html
for a fuller description and:
https://gcc.gnu.org/ml/gcc/2019-11/msg00088.html
for a recent update on the status.
However, since all current sizeless types are target-specific built-in
types, there's no real reason for the frontends to handle them directly.
They can just hand off the checks to target code instead. It's then
possible for the errors to refer to "SVE types" rather than "sizeless
types", which is likely to be more meaningful to users.
There is a slight overlap between the new tests and the ones for
gnu_vector_type_p in r277950, but here the emphasis is on testing
sizelessness.
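For illustration, a few of the kinds of use that the hook now lets the
target reject for SVE types (these are deliberately invalid lines, not
one of the new tests, and the exact wording of the errors is up to the
target):

  #include <arm_sve.h>

  svint32_t global_vec;                  // static storage duration: rejected
  struct wrapper { svint32_t field; };   // member with sizeless type: rejected

  unsigned long
  vec_size (void)
  {
    return sizeof (svint32_t);           // sizeof of a sizeless type: rejected
  }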
2019-11-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* target.h (type_context_kind): New enum.
(verify_type_context): Declare.
* target.def (verify_type_context): New target hook.
* doc/tm.texi.in (TARGET_VERIFY_TYPE_CONTEXT): Likewise.
* doc/tm.texi: Regenerate.
* tree.c (verify_type_context): New function.
* config/aarch64/aarch64-protos.h (aarch64_sve::verify_type_context):
Declare.
* config/aarch64/aarch64-sve-builtins.cc (verify_type_context):
New function.
* config/aarch64/aarch64.c (aarch64_verify_type_context): Likewise.
(TARGET_VERIFY_TYPE_CONTEXT): Define.
gcc/c-family/
* c-common.c (pointer_int_sum): Use verify_type_context to check
whether the target allows pointer arithmetic for the types involved.
(c_sizeof_or_alignof_type, c_alignof_expr): Use verify_type_context
to check whether the target allows sizeof and alignof operations
for the types involved.
gcc/c/
* c-decl.c (start_decl): Allow initialization of variables whose
size is a POLY_INT_CST.
(finish_decl): Use verify_type_context to check whether the target
allows variables with a particular type to have static or thread-local
storage duration. Don't raise a second error if such variables do
not have a constant size.
(grokdeclarator): Use verify_type_context to check whether the
target allows fields or array elements to have a particular type.
* c-typeck.c (pointer_diff): Use verify_type_context to test whether
the target allows pointer difference for the types involved.
(build_unary_op): Likewise for pointer increment and decrement.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/sizeless-1.c: New test.
* gcc.target/aarch64/sve/acle/general-c/sizeless-2.c: Likewise.
From-SVN: r278877
2019-11-30 Thomas Koenig <tkoenig@gcc.gnu.org>
PR fortran/91783
* dependency.c (gfc_dep_resolver): Do not look at _data
component if present.
2019-11-30 Thomas Koenig <tkoenig@gcc.gnu.org>
PR fortran/91783
* gfortran.dg/dependency_56.f90: New test.
From-SVN: r278873
Fix an issue with the GCC driver and the `-x' option where a warning is
issued in an invocation like:
$ riscv64-linux-gnu-gcc -print-multi-directory -x c++
riscv64-linux-gnu-gcc: warning: '-x c++' after last input file has no effect
lib64/lp64d
$
where no inputs were given and hence the use of `-x' is irrelevant.
The statement printed is also untrue, as the `-x' option does not come after
the last input file when no input file was given at all. Do not print the
warning if no inputs were supplied.
* gcc.c (process_command): Only warn about an ineffective `-x'
option if any input files have actually been supplied.
From-SVN: r278872
The `--enable-version-specific-runtime-libs' configuration option is now
supported throughout all of our target library subdirectories, so update
installation documentation accordingly and also mention that the default
for the option is `yes' for libada and `no' for the remaining libraries.
gcc/
* doc/install.texi (Options specification): Remove the list of
target library subdirectories supporting
`--enable-version-specific-runtime-libs'. Document defaults for
the option.
From-SVN: r278871
The u8path function failed to compile when called with a std::string.
Also, constructing a path with a char8_t string did not correctly treat
the string as already UTF-8 encoded.
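A hedged illustration of the two fixed cases (assumed calls, not taken
from the testsuite; the std::string overload only failed to compile on
Windows):

  #include <filesystem>
  #include <string>

  std::filesystem::path
  make_path()
  {
    std::string narrow = "dir/file.txt";
    auto p = std::filesystem::u8path(narrow);   // previously failed to compile
  #ifdef __cpp_char8_t
    std::u8string utf8 = u8"caf\u00e9.txt";
    p /= std::filesystem::u8path(utf8);         // now converted as UTF-8
  #endif
    return p;
  }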
* include/bits/fs_path.h (u8path(InputIterator, InputIterator))
(u8path(const Source&)) [_GLIBCXX_FILESYSTEM_IS_WINDOWS]: Simplify
conditions.
* include/experimental/bits/fs_path.h [_GLIBCXX_FILESYSTEM_IS_WINDOWS]
(__u8path(const Source&, char)): Add overloads for std::string and
types convertible to std::string.
(_Cvt::_S_wconvert): Add a new overload for char8_t strings and use
codecvt_utf8_utf16 to do the correct conversion.
From-SVN: r278869
2019-11-29 Richard Biener <rguenther@suse.de>
PR tree-optimization/91003
* tree-vect-slp.c (vect_mask_constant_operand_p): Pass in the
operand number, avoid handling the non-condition operands of
COND_EXPRs as comparisons.
(vect_get_constant_vectors): Pass down the operand number.
(vect_get_slp_defs): Likewise.
* gfortran.dg/pr91003.f90: New testcase.
From-SVN: r278860
New tests
This patch adds new tests to validate new deleted overloads of wchar_t,
char8_t, char16_t, and char32_t for ordinary and wide formatted character and
string ostream inserters.
Additionally, new tests are added to validate invocations of u8path with
sequences of char8_t for both the C++17 and filesystem TS implementations.
2019-11-29 Tom Honermann <tom@honermann.net>
New tests
* testsuite/27_io/basic_ostream/inserters_character/char/deleted.cc:
New test to validate deleted overloads of character and string
inserters for narrow ostreams.
* testsuite/27_io/basic_ostream/inserters_character/wchar_t/deleted.cc:
New test to validate deleted overloads of character and string
inserters for wide ostreams.
* testsuite/27_io/filesystem/path/factory/u8path-char8_t.cc: New test
to validate u8path invocations with sequences of char8_t.
* testsuite/experimental/filesystem/path/factory/u8path-char8_t.cc:
New test to validate u8path invocations with sequences of char8_t.
From-SVN: r278858
Updates to existing tests
This patch updates existing tests to validate the new value for the
__cpp_lib_char8_t feature test macros and to exercise u8path factory
function invocations with std::string, std::string_view, and interator
pair arguments.
2019-11-29 Tom Honermann <tom@honermann.net>
Updates to existing tests
* testsuite/experimental/feat-char8_t.cc: Updated the expected
__cpp_lib_char8_t feature test macro value.
* testsuite/27_io/filesystem/path/factory/u8path.cc: Added testing of
u8path invocation with std::string, std::string_view, and iterators
thereof.
* testsuite/experimental/filesystem/path/factory/u8path.cc: Added
testing of u8path invocation with std::string, std::string_view, and
iterators thereof.
From-SVN: r278857
Update feature test macro, add deleted operators, update u8path
This patch increments the __cpp_lib_char8_t feature test macro, adds deleted
operator<< overloads for basic_ostream, and modifies u8path to accept
sequences of char8_t for both the C++17 implementation of std::filesystem, and
the filesystem TS implementation.
The implementation mechanism used for u8path differs between the C++17 and
filesystem TS implementations. The changes to the former take advantage of
C++17 'if constexpr'. The changes to the latter retain C++11 compatibility
and rely on tag dispatching.
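For example (a sketch, not one of the new tests; u8'x' needs C++20 or
-fchar8_t), the deleted overloads turn the following accidental
insertions into compile-time errors instead of silently printing
numeric values:

  #include <iostream>

  void
  demo()
  {
    std::cout << u8'x';   // ill-formed: deleted operator<< for char8_t
    std::cout << L'x';    // ill-formed: deleted operator<< for wchar_t
    std::wcout << u'y';   // ill-formed: deleted operator<< for char16_t
  }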
2019-11-29 Tom Honermann <tom@honermann.net>
Update feature test macro, add deleted operators, update u8path
* include/bits/c++config: Bumped the value of the __cpp_lib_char8_t
feature test macro.
* include/bits/fs_path.h (u8path): Modified u8path to accept sequences
of char8_t.
* include/experimental/bits/fs_path.h (u8path): Modified u8path to
accept sequences of char8_t.
* include/std/ostream: Added deleted overloads of wchar_t, char8_t,
char16_t, and char32_t for ordinary and wide formatted character and
string inserters.
From-SVN: r278856
Decouple constraints for u8path from path constructors
This patch moves helper classes and functions for std::filesystem::path out of
the class definition to a detail namespace so that they are available to the
implementations of std::filesystem::u8path. Prior to this patch, the SFINAE
constraints for those implementations were specified via delegation to the
overloads of path constructors with a std::locale parameter; it just so
happened that those overloads had the same constraints. As of P1423R3, u8path
and those overloads no longer have the same constraints, so this dependency
must be broken.
This patch also updates the experimental implementation of the filesystem TS
to add SFINAE constraints to its implementations of u8path. These functions
were previously unconstrained and marked with a TODO comment.
This patch does not provide any intentional behavioral changes other than the
added constraints to the experimental filesystem TS implementation of u8path.
Alternatives to this refactoring would have been to make the u8path overloads
friends of class path, or to make the helpers public members. Both of those
approaches struck me as less desirable than this approach, though this
approach does require more code changes and will affect implementation detail
portions of mangled names for path constructors and inline member functions
(mostly function template specializations).
2019-11-29 Tom Honermann <tom@honermann.net>
Decouple constraints for u8path from path constructors
* include/bits/fs_path.h: Moved helper utilities out of
std::filesystem::path into a detail namespace to make them
available for use by u8path.
* include/experimental/bits/fs_path.h: Moved helper utilities out
of std::experimental::filesystem::v1::path into a detail
namespace to make them available for use by u8path.
From-SVN: r278855
The function maybe_resimplify_conditional_op uses operation_could_trap_p to
check if the resulting operation of a simplification can trap. Because of the
changes introduced by revision r276659, this results in an ICE due to a
violated assertion in operation_could_trap_p if the operation is a COND_EXPR or
a VEC_COND_EXPR. The changes allow those expressions to trap, and whether
they do cannot be determined without considering their condition, which is
not available to operation_could_trap_p.
Change maybe_resimplify_conditional_op to inspect the condition of
COND_EXPRs and VEC_COND_EXPRs to determine if they can trap.
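A source-level illustration of the point (mine, not taken from the PR):
whether the division below can trap depends entirely on the COND_EXPR's
condition, which operation_could_trap_p on its own cannot see:

  int
  safe_div (int x, int y)
  {
    return y != 0 ? x / y : 0;   // the division is guarded by the condition
  }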
From-SVN: r278853
When dissolving an SLP-only group of accesses, we should only set
the gap to group_size - 1 for normal non-strided groups.
2019-11-29 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR tree-optimization/92677
* tree-vect-loop.c (vect_dissolve_slp_only_groups): Set the gap
to zero when dissolving a group of strided accesses.
gcc/testsuite/
PR tree-optimization/92677
* gcc.dg/vect/pr92677.c: New test.
From-SVN: r278852
Now that stmt_vec_info records the choice between vector mask
types and normal nonmask types, we can use that information in
vect_get_vector_types_for_stmt instead of deferring the choice
of vector type till later.
vect_get_mask_type_for_stmt used to check whether the boolean inputs
to an operation:
(a) consistently used mask types or consistently used nonmask types; and
(b) agreed on the number of elements.
(b) shouldn't be a problem when (a) is met. If the operation
consistently uses mask types, tree-vect-patterns.c will have corrected
any mismatches in mask precision. (This is because we only use mask
types for a small well-known set of operations and tree-vect-patterns.c
knows how to handle any that could have different mask precisions.)
And if the operation consistently uses normal nonmask types, there's
no reason why booleans should need extra vector compatibility checks
compared to ordinary integers.
So the potential difficulties all seem to come from (a). Now that
we've chosen the result type ahead of time, we also have to consider
whether the outputs and inputs consistently use mask types.
Taking each vectorizable_* routine in turn:
- vectorizable_call
vect_get_vector_types_for_stmt only handled booleans specially
for gassigns, so vect_get_mask_type_for_stmt never had a chance to
handle calls. I'm not sure we support any calls that operate on
booleans, but as things stand, a boolean result would always have
a nonmask type. Presumably any vector argument would also need to
use nonmask types, unless it corresponds to internal_fn_mask_index
(which is already a special case).
For safety, I've added a check for mask/nonmask combinations here
even though we didn't check this previously.
- vectorizable_simd_clone_call
Again, vect_get_mask_type_for_stmt never had a chance to handle calls.
The result of the call will always be a nonmask type and the patch
for PR 92710 rejects mask arguments. So all booleans should
consistently use nonmask types here.
- vectorizable_conversion
The function already rejects any conversion between booleans in which
one type isn't a mask type.
- vectorizable_operation
This function definitely needs a consistency check, e.g. to handle
& and | in which one operand is loaded from memory and the other is
a comparison result (see the sketch after this list). Ideally we'd
handle this via pattern stmts instead (like we do for the all-mask
case), but that's future work.
- vectorizable_assignment
VECT_SCALAR_BOOLEAN_TYPE_P requires single-bit precision, so the
current code already rejects problematic cases.
- vectorizable_load
Loads always produce nonmask types and there are no relevant inputs
to check against.
- vectorizable_store
vect_check_store_rhs already rejects mask/nonmask combinations
via useless_type_conversion_p.
- vectorizable_reduction
- vectorizable_lc_phi
PHIs always have nonmask types. After the change above, attempts
to combine the PHI result with a mask type would be rejected by
vectorizable_operation. (Again, it would be better to handle
this using pattern stmts.)
- vectorizable_induction
We don't generate inductions for booleans.
- vectorizable_shift
The function already rejects boolean shifts via type_has_mode_precision_p.
- vectorizable_condition
The function already rejects mismatches via useless_type_conversion_p.
- vectorizable_comparison
The function already rejects comparisons between mask and nonmask types.
The result is always a mask type.
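The hybrid case mentioned under vectorizable_operation might look like
the following sketch (assumed shapes, not the PR testcase): one operand
of the & is a boolean loaded from memory, which wants a nonmask vector
type, while the other is a comparison result, which wants a mask type:

  void
  f (bool *out, const bool *b, const int *a, int n)
  {
    for (int i = 0; i < n; ++i)
      out[i] = b[i] & (a[i] > 0);
  }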
2019-11-29 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR tree-optimization/92596
* tree-vect-stmts.c (vectorizable_call): Punt on hybrid mask/nonmask
operations.
(vectorizable_operation): Likewise, instead of relying on
vect_get_mask_type_for_stmt to do this.
(vect_get_vector_types_for_stmt): Always return a vector type
immediately, rather than deferring the choice for boolean results.
Use a vector mask type instead of a normal vector if
vect_use_mask_type_p.
(vect_get_mask_type_for_stmt): Delete.
* tree-vect-loop.c (vect_determine_vf_for_stmt_1): Remove
mask_producers argument and special boolean_type_node handling.
(vect_determine_vf_for_stmt): Remove mask_producers argument and
update calls to vect_determine_vf_for_stmt_1. Remove doubled call.
(vect_determine_vectorization_factor): Update call accordingly.
* tree-vect-slp.c (vect_build_slp_tree_1): Remove special
boolean_type_node handling.
(vect_slp_analyze_node_operations_1): Likewise.
gcc/testsuite/
PR tree-optimization/92596
* gcc.dg/vect/bb-slp-pr92596.c: New test.
* gcc.dg/vect/bb-slp-43.c: Likewise.
From-SVN: r278851
search_type_for_mask uses a worklist to search a chain of boolean
operations for a natural vector mask type. This patch instead does
that in vect_determine_stmt_precisions, where we also look for
overpromoted integer operations. We then only need to compute
the precision once and can cache it in the stmt_vec_info.
The new function vect_determine_mask_precision is supposed
to handle exactly the same cases as search_type_for_mask_1,
and in the same way. There's a lot we could improve here,
but that's not stage 3 material.
I wondered about sharing mask_precision with other fields like
operation_precision, but in the end that seemed too dangerous.
We have patterns to convert between boolean and non-boolean
operations and it would be very easy to get mixed up about
which case the fields are describing.
2019-11-29 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (stmt_vec_info::mask_precision): New field.
(vect_use_mask_type_p): New function.
* tree-vect-patterns.c (vect_init_pattern_stmt): Copy the
mask precision to the pattern statement.
(append_pattern_def_seq): Add a scalar_type_for_mask parameter
and use it to initialize the new stmt's mask precision.
(search_type_for_mask_1): Delete.
(search_type_for_mask): Replace with...
(integer_type_for_mask): ...this new function. Use the information
cached in the stmt_vec_info.
(vect_recog_bool_pattern): Update accordingly.
(build_mask_conversion): Pass the scalar type associated with the
mask type to append_pattern_def_seq.
(vect_recog_mask_conversion_pattern): Likewise. Call
integer_type_for_mask instead of search_type_for_mask.
(vect_convert_mask_for_vectype): Call integer_type_for_mask instead
of search_type_for_mask.
(possible_vector_mask_operation_p): New function.
(vect_determine_mask_precision): Likewise.
(vect_determine_stmt_precisions): Call it.
From-SVN: r278850