2020-11-04 Jan Hubicka <hubicka@ucw.cz>
PR ipa/97695
* cgraph.c (cgraph_edge::redirect_call_stmt_to_callee): Fix ICE
in dumping code.
(cgraph_node::remove): Save clone info before releasing it and pass it
to unregister.
* cgraph.h (symtab_node::unregister): Add clone_info parameter.
(cgraph_clone::unregister): Likewise.
* cgraphclones.c (cgraph_node::find_replacement): Copy clone info.
* symtab-clones.cc (clone_infos_t::duplicate): Remove.
(clone_info::get_create): Simplify.
* symtab.c (symtab_node::unregister): Pass around clone info.
* varpool.c (varpool_node::remove): Update.
The patch for PR 94923 that introduced is_byte_access_type wrongly changed
build_cplus_array_type to treat even arrays of char16_t as typeless storage;
only arrays of char and unsigned char have the special alias
semantics in C++.
G++ used to treat signed char the same way, as C does, but C++ has always
excluded it.
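As a rough illustration (a hypothetical snippet, not the testcase itself):
only char and unsigned char arrays provide typeless storage in C++, so a
char16_t array must keep its normal alias semantics.

  #include <new>

  alignas (int) unsigned char ok_buf[sizeof (int)];
  int *p = new (ok_buf) int (1);    // ok_buf provides storage; accesses
                                    // through *p may alias its bytes

  alignas (int) char16_t other_buf[sizeof (int) / sizeof (char16_t)];
  int *q = new (other_buf) int (2); // char16_t arrays are not typeless
                                    // storage, so the alias machinery must
                                    // not treat them specially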
gcc/cp/ChangeLog:
* tree.c (is_byte_access_type): Don't use char_type_p.
gcc/testsuite/ChangeLog:
* g++.dg/Wclass-memaccess.C: Check that signed char and
char16_t aren't treated as byte-access types.
Bug fix for recent commit beddd1762a "[OpenACC]
More precise diagnostics for 'gang', 'worker', 'vector' clauses with arguments
on 'loop' only allowed in 'kernels' regions":
> [...], and 'inform' at the location of the enclosing parent
> compute construct/[...].
Now really.
gcc/
* omp-low.c (scan_omp_for) <OpenACC>: Use proper location to
'inform' of enclosing parent compute construct.
gcc/testsuite/
* c-c++-common/goacc/pr92793-1.c: Extend.
* gfortran.dg/goacc/pr92793-1.f90: Likewise.
While these function declarations have NULL decl_specifiers->type,
they still have type specifiers from which the default int in the
return type is derived, so we shouldn't try to parse them as
deduction guides.
2020-11-03 Jakub Jelinek <jakub@redhat.com>
PR c++/97663
* parser.c (cp_parser_init_declarator): Don't try to parse
C++17 deduction guides if there are any type specifiers even when
type is NULL.
* g++.dg/cpp1z/class-deduction75.C: New test.
This separates the definition of std::__once_proxy into two functions,
one for TLS and one for non-TLS, to make them easier to read. It also
replaces the __get_once_functor_lock_ptr() internal helper with a new
set_lock_ptr(unique_lock<mutex>*) function so that __once_proxy doesn't
need to call it twice.
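A minimal sketch of the set-and-return-previous idea (this is not the
actual libstdc++ code; the static pointer and helper below are
illustrative only):

  #include <mutex>

  namespace
  {
    // Stand-in for the lock pointer shared between
    // __set_once_functor_lock_ptr and __once_proxy on non-TLS targets.
    std::unique_lock<std::mutex> *once_functor_lock_ptr;
  }

  // Install a new pointer and hand back the old one, so a caller such as
  // __once_proxy needs only one call instead of a get followed by a set.
  std::unique_lock<std::mutex> *
  set_lock_ptr (std::unique_lock<std::mutex> *ptr)
  {
    std::unique_lock<std::mutex> *old = once_functor_lock_ptr;
    once_functor_lock_ptr = ptr;
    return old;
  }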
libstdc++-v3/ChangeLog:
* src/c++11/mutex.cc [_GLIBCXX_HAVE_TLS] (__once_proxy): Define
separately for TLS targets.
[!_GLIBCXX_HAVE_TLS] (__get_once_functor_lock_ptr): Replace with ...
(set_lock_ptr): ... this. Set new value and return previous
value.
[!_GLIBCXX_HAVE_TLS] (__set_once_functor_lock_ptr): Adjust to
use set_lock_ptr.
[!_GLIBCXX_HAVE_TLS] (__once_proxy): Likewise.
When there are two possible matches and one is a base of the other, choose
the derived class rather than fail.
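For example (a sketch of the kind of hierarchy DR2303 is about, not the
committed testcase):

  template <typename... T> struct A;
  template <> struct A<> {};
  template <typename T, typename... Ts> struct A<T, Ts...> : A<Ts...> {};

  struct C : A<int, int> {};   // C -> A<int,int> -> A<int> -> A<>

  template <typename... T> void f (const A<T...> &);

  void g ()
  {
    f (C ());   // both A<int,int> and A<int> are bases; deduction now
                // picks the more derived A<int,int> instead of failing
  }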
gcc/cp/ChangeLog
2020-10-21 Kamlesh Kumar <kamleshbhalui@gmail.com>
Jason Merrill <jason@redhat.com>
PR c++/97453
DR2303
* pt.c (get_template_base): Consider closest base in template
deduction when base of base also matches.
gcc/testsuite/ChangeLog
2020-10-21 Kamlesh Kumar <kamleshbhalui@gmail.com>
* g++.dg/DRs/dr2303.C: New test.
The current implementation of std::call_once uses pthread_once, which
only meets the C++ requirements when compiled with support for
exceptions. For most glibc targets and all non-glibc targets,
pthread_once does not work correctly if the init_routine exits via an
exception. The pthread_once_t object is left in the "active" state, and
any later attempts to run another init_routine will block forever.
This change makes std::call_once work correctly for Linux targets, by
replacing the use of pthread_once with a futex, based on the code from
__cxa_guard_acquire. For both glibc and musl, the Linux implementation
of pthread_once is already based on futexes, and pthread_once_t is just
a typedef for int, so this change does not alter the layout of
std::once_flag. By choosing the values for the int appropriately, the
new code is even ABI compatible. Code that calls the old implementation
of std::call_once will use pthread_once to manipulate the int, while new
code will use the new std::once_flag members to manipulate it, but they
should interoperate correctly. In both cases, the int is initially zero,
has the lowest bit set when there is an active execution, and equals 2
after a successful returning execution. The difference with the new code
is that exceptional exits are correctly detected and the int is
reset to zero.
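The protocol can be illustrated with a stand-alone sketch (this is not the
new libstdc++ code; it uses GCC atomic built-ins and the raw futex syscall
directly, and the function names are made up):

  #include <climits>
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static void futex_wait (int *addr, int val)
  { syscall (SYS_futex, addr, FUTEX_WAIT_PRIVATE, val, nullptr, nullptr, 0); }

  static void futex_wake (int *addr)
  { syscall (SYS_futex, addr, FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0); }

  template <typename Callable>
  void once_call (int &state, Callable fn)
  {
    // 2 means a previous execution already returned successfully.
    while (__atomic_load_n (&state, __ATOMIC_ACQUIRE) != 2)
      {
        int expected = 0;
        if (__atomic_compare_exchange_n (&state, &expected, 1, false,
                                         __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
          {
            // We own the active execution (lowest bit set).
            try { fn (); }
            catch (...)
              {
                // Exceptional exit: reset to zero so another thread can retry.
                __atomic_store_n (&state, 0, __ATOMIC_RELEASE);
                futex_wake (&state);
                throw;
              }
            __atomic_store_n (&state, 2, __ATOMIC_RELEASE);
            futex_wake (&state);   // no waiter bit, so wake unconditionally
            return;
          }
        if (expected & 1)
          futex_wait (&state, expected);   // an execution is active; block
      }
  }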
The __cxa_guard_acquire code (and musl's pthread_once) use an additional
state to say there are other threads waiting. This allows the futex wake
syscall to be skipped if there is no contention. Glibc doesn't use a
waiter bit, so we have to unconditionally issue the wake in order to be
compatible with code calling the old std::call_once that uses Glibc's
pthread_once. If we know that we're using musl (and musl's pthread_once
doesn't change) it would be possible to set a waiting state and check
for it in std::once_flag::_M_finish(bool), but this patch doesn't do
that.
This doesn't fix the bug for non-Linux targets. A similar approach could
be used for targets where we know the definition of pthread_once_t is a
mutex and an integer. We could make once_flag::_M_activate() use
pthread_mutex_lock on the mutex member within the pthread_once_t, and
then only set the integer if the execution finishes, and then unlock the
mutex. That would require careful study of each target's pthread_once
implementation and that work is left for a later date.
This also fixes PR 55394 because pthread_once is no longer needed, and
PR 84323 because the fast path is now just an atomic load.
As a consequence of the new implementation that doesn't use
pthread_once, we can also make std::call_once work for targets with no
gthreads support. The code for the single-threaded implementation
follows the same methods as on Linux, but with no need for atomics or
futexes.
libstdc++-v3/ChangeLog:
PR libstdc++/55394
PR libstdc++/66146
PR libstdc++/84323
* config/abi/pre/gnu.ver (GLIBCXX_3.4.29): Add new symbols.
* include/std/mutex [!_GLIBCXX_HAS_GTHREADS] (once_flag): Define
even when gthreads is not supported.
(once_flag::_M_once) [_GLIBCXX_HAVE_LINUX_FUTEX]: Change type
from __gthread_once_t to int.
(once_flag::_M_passive(), once_flag::_M_activate())
(once_flag::_M_finish(bool), once_flag::_Active_execution):
Define new members for futex and non-threaded implementation.
[_GLIBCXX_HAS_GTHREADS] (once_flag::_Prepare_execution): New
RAII helper type.
(call_once): Use new members of once_flag.
* src/c++11/mutex.cc (std::once_flag::_M_activate): Define.
(std::once_flag::_M_finish): Define.
* testsuite/30_threads/call_once/39909.cc: Do not require
gthreads.
* testsuite/30_threads/call_once/49668.cc: Likewise.
* testsuite/30_threads/call_once/60497.cc: Likewise.
* testsuite/30_threads/call_once/call_once1.cc: Likewise.
* testsuite/30_threads/call_once/dr2442.cc: Likewise.
* testsuite/30_threads/call_once/once_flag.cc: Add test for
constexpr constructor.
* testsuite/30_threads/call_once/66146.cc: New test.
* testsuite/30_threads/call_once/constexpr.cc: Removed.
* testsuite/30_threads/once_flag/cons/constexpr.cc: Removed.
In streaming using decls I needed to check some assumptions. This
adds those checks to the instantiation machinery.
gcc/cp/
* pt.c (tsubst_expr): Simplify using decl instantiation, add
asserts.
This patch sets copy_fndecl_with_name to always inform
rest_of_decl_compilation that it is not a top-level decl (it's a
member function). I also refactor build_cdtor_clones to conditionally
do the method vector updating. That happens to be a better interface
for modules to use.
gcc/cp/
* class.c (copy_fndecl_with_name): Always not top level.
(build_cdtor_clones): Add update_methods parm, use it to
conditionally update the method vec. Return void.
(clone_cdtor): Adjust.
(clone_constructors_and_destructors): Adjust comment.
Use up to SSE_REGPARM_MAX registers to pass function parameters
for 32bit Mach-O targets. Also, define X86_32_MMX_REGPARM_MAX
to return 0 for 32bit Mach-O targets.
2020-11-03 Uroš Bizjak <ubizjak@gmail.com>
gcc/
* config/i386/i386.c (ix86_function_arg_regno_p): Use up to
SSE_REGPARM_MAX registers to pass function parameters
for 32bit Mach-O targets.
* config/i386/i386.h (X86_32_MMX_REGPARM_MAX): New macro.
(MMX_REGPARM_MAX): Use it.
Now that I know about VAR_OR_FUNCTION_DECL_P, I found a place to use it.
Also, positively checking for a FUNCTION_DECL is clearer than
negatively checking for things that it is not.
gcc/cp/
* pt.c (primary_template_specialization_p): Use
VAR_OR_FUNCTION_DECL_P.
(tsubst_template_decl): Check for FUNCTION_DECL, not !TYPE && !VAR
for registering a specialization.
This patch moves the generation of PRAGMA_EOF earlier, to when we set
need_line, rather than when we try to get the next line. It also
prevents peeking past a PRAGMA token.
libcpp/
* lex.c (cpp_peek_token): Do not peek past CPP_PRAGMA.
(_cpp_lex_direct): Handle EOF in pragma when setting need_line,
not when needing a line.
This helps powerpc-vxworks kernel mode.
2020-11-03 Olivier Hainque <hainque@adacore.com>
gcc/testsuite/
* gcc.target/powerpc/pr67789.c: Add
dg-require-effective-target fpic.
* gcc.target/powerpc/pr83629.c: Likewise.
* gcc.target/powerpc/pr84112.c: Likewise. Remove
a superfluous target test in the dg-do compile
directive while at it.
This adds ${cpu_type}/t-lse and t-slibgcc-libgcc to the tmake_file
list for aarch64-vxworks7* configurations, as the Linux port does.
t-lse is needed by all triplets now anyway and the standard setting
for slibgcc makes sense as we are working on reintroducing PIC support
for RTPs on various targets. The VxWorks7 system environments are leaning
towards more and more similarities with Linux in general, so the
closer the configurations, the better.
2020-11-02 Pat Bernardi <bernardi@adacore.com>
libgcc/
* config.host (aarch64-vxworks7*, tmake_file): Add
${cpu_type}/t-lse and t-slibgcc-libgcc.
Co-authored-by: Olivier Hainque <hainque@adacore.com>
This patch implements the ACLE intrinsics vget_low_bf16 and vget_high_bf16
to extract the lower or higher half of a bfloat16x8 vector. The
vget_high_bf16 intrinsic is implemented with the 'dup' instruction, while
vget_low_bf16 simply returns the lower half of a vector register. Tests
include both big- and little-endian cases.
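A usage sketch of the new intrinsics (assuming an AArch64 compiler with the
bf16 extension enabled; the wrapper names are made up):

  #include <arm_neon.h>

  bfloat16x4_t
  low_half (bfloat16x8_t v)
  {
    return vget_low_bf16 (v);    // lower 64 bits of the vector register
  }

  bfloat16x4_t
  high_half (bfloat16x8_t v)
  {
    return vget_high_bf16 (v);   // upper 64 bits, implemented with 'dup'
  }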
gcc/ChangeLog:
2020-11-03 Dennis Zhang <dennis.zhang@arm.com>
* config/aarch64/aarch64-simd-builtins.def (vget_lo_half): New entry.
(vget_hi_half): Likewise.
* config/aarch64/aarch64-simd.md (aarch64_vget_lo_halfv8bf): New entry.
(aarch64_vget_hi_halfv8bf): Likewise.
* config/aarch64/arm_neon.h (vget_low_bf16): New intrinsic.
(vget_high_bf16): Likewise.
gcc/testsuite/ChangeLog
* gcc.target/aarch64/advsimd-intrinsics/bf16_get.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/bf16_get-be.c: New test.
eh-specifiers in a class definition are complete-definition contexts,
and we sometimes need to defer their parsing. We create a deferred
eh specifier, which can end up persisting in the type system due to
variants being created before the deferred parse. This causes
problems in module handling.
This patch adds fixup_deferred_exception_variants, which directly
modifies the variants of such an eh spec once parsed. As commented,
the general case is quite hard, so it doesn't deal with everything.
But I do catch the cases I encountered (from the std library).
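A sketch of the kind of member declaration that produces a deferred
exception specification (hypothetical example):

  struct S
  {
    // The noexcept operand names B, which is not declared yet; since the
    // specifier is a complete-definition context, its parse is deferred
    // until the end of S, but variants of f's type may be created earlier.
    void f () noexcept (B);

    static constexpr bool B = true;
  };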
gcc/cp/
* cp-tree.h (fixup_deferred_exception_variants): Declare.
* parser.c (cp_parser_class_specifier_1): Call it when
completing deferred parses rather than creating a variant.
(cp_parser_member_declaration): Move comment from ...
(cp_parser_noexcept_specification_opt): ... here. Refactor the
deferred parse.
* tree.c (fixup_deferred_exception_variants): New.
I noticed that we were handling lambda extra scope during template
instantiation in a different order from how we handle the non-template
case. Reordered that for consistency. Also some more RAII during
template instantiation.
gcc/cp/
* pt.c (tsubst_lambda_expr): Reorder extra-scope handling to match
the non-template case.
(instantiate_body): Move a couple of declarations to their
initializers.
duplicate_decls was being lenient about extern-c mismatches, allowing
you to have two declarations in the symbol table after emitting an
error. This resulted in duplicate error messages in modules, when we
find the same problem multiple times. Let's just not let that happen.
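For instance (a hypothetical example of the mismatch in question):

  extern "C"   int f (int);
  extern "C++" int f (int);   // error: conflicting language linkage;
                              // duplicate_decls now returns error_mark_node
                              // instead of keeping both declarations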
gcc/cp/
* decl.c (duplicate_decls): Return error_mark_node for extern-c
mismatch.
I noticed a fencepost error in the preprocessor. We should be
checking if the next char is at the limit, not the current char (which
can't be, because we're looking at it).
libcpp/
* lex.c (_cpp_clean_line): Fix DOS off-by-one error.
This is the first patch for PR96342. In order to add support for
"omp declare simd", change the type of the field "simdlen" of
struct cgraph_simd_clone from unsigned int to poly_uint64 and adapt
the related code, since the length might be variable for the SVE cases.
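For reference, the kind of declaration driving this (a generic sketch, not
taken from the PR): each clone of such a function has a simdlen, and with
SVE that length can be a runtime multiple of the vector size rather than a
compile-time constant.

  #pragma omp declare simd notinbranch
  double
  scale (double x, double factor)
  {
    return x * factor;
  }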
2020-11-03 Yang Yang <yangyang305@huawei.com>
gcc/ChangeLog:
* cgraph.h (struct cgraph_simd_clone): Change field "simdlen" of
struct cgraph_simd_clone from unsigned int to poly_uint64.
* config/aarch64/aarch64.c
(aarch64_simd_clone_compute_vecsize_and_simdlen): Adapt
operations on "simdlen".
* config/i386/i386.c (ix86_simd_clone_compute_vecsize_and_simdlen):
Update printf formats.
* gengtype.c (main): Handle poly_uint64.
* omp-simd-clone.c (simd_clone_mangle): Likewise.Re
(simd_clone_adjust_return_type): Likewise.
(create_tmp_simd_array): Likewise.
(simd_clone_adjust_argument_types): Likewise.
(simd_clone_init_simd_arrays): Likewise.
(ipa_simd_modify_function_body): Likewise.
(simd_clone_adjust): Likewise.
(expand_simd_clones): Likewise.
* poly-int-types.h (vector_unroll_factor): New macro.
* poly-int.h (constant_multiple_p): Add two-argument versions.
* tree-vect-stmts.c (vectorizable_simd_clone_call): Likewise.
This limits the insert iterations caused by PRE insertions generating
hoist-insertion opportunities and vice versa. The patch limits the
hoist-insertion iterations to three by default.
2020-11-03 Richard Biener <rguenther@suse.de>
PR tree-optimization/97623
* params.opt (-param=max-pre-hoist-insert-iterations): New.
* doc/invoke.texi (max-pre-hoist-insert-iterations): Document.
* tree-ssa-pre.c (insert): Do at most max-pre-hoist-insert-iterations
hoist insert iterations.
This fixes a mistake in the optab query done by ISEL. It
doesn't fix the PR but shifts the ICE elsewhere.
2020-11-03 Richard Biener <rguenther@suse.de>
PR middle-end/97579
* gimple-isel.cc (gimple_expand_vec_cond_expr): Use
the correct types for the vcond_mask/vec_cmp optab queries.
This patch splits the individual value propagation out from fill_block_cache,
and calls it from set_global_value when the global value is updated.
This ensures the "current" global value is reflected in the on-entry cache.
* gimple-range-cache.cc (ssa_global_cache::get_global_range): Return
true if there was a previous range set.
(ranger_cache::ranger_cache): Take a gimple_ranger parameter.
(ranger_cache::set_global_range): Propagate the value if updating.
(ranger_cache::propagate_cache): Renamed from iterative_cache_update.
(ranger_cache::propagate_updated_value): New. Split from:
(ranger_cache::fill_block_cache): Split out value propagator.
* gimple-range-cache.h (ssa_global_cache): Update prototypes.
(ranger_cache): Update prototypes.
We may not call value_dependent_expression_p on expressions that are
not potential constant expressions, otherwise value_dependent_expression_p
could crash, as I saw recently (in C++98). So beef up the checking in
instantiation_dependent_expression_p.
This revealed a curious issue: when we have __PRETTY_FUNCTION__ in
a template function, we set its DECL_VALUE_EXPR to error_mark_node
(cp_make_fname_decl), so potential_constant_expression returns false when
it gets it, but value_dependent_expression_p handles it specially and says
true. This broke lambda-generic-pretty1.C. So take care of that.
And then also tweak uses_template_parms.
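The situation looks roughly like this (a sketch along the lines of
lambda-generic-pretty1.C, not the testcase itself):

  template <typename T>
  const char *
  name ()
  {
    // __PRETTY_FUNCTION__ has DECL_VALUE_EXPR set to error_mark_node
    // until instantiation, yet it must still be treated as
    // potentially-constant here.
    auto l = [] (auto) { return __PRETTY_FUNCTION__; };
    return l (0);
  }

  const char *s = name<int> ();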
gcc/cp/ChangeLog:
* constexpr.c (potential_constant_expression_1): Treat
__PRETTY_FUNCTION__ inside a template function as
potentially-constant.
* pt.c (uses_template_parms): Call
instantiation_dependent_expression_p instead of
value_dependent_expression_p.
(instantiation_dependent_expression_p): Check
potential_constant_expression before calling
value_dependent_expression_p.
Jon suggested turning this warning off when we're not actually
evaluating the operand. This patch does that.
gcc/cp/ChangeLog:
PR c++/97632
* init.c (build_new_1): Disable -Winit-list-lifetime for an unevaluated
operand.
gcc/testsuite/ChangeLog:
PR c++/97632
* g++.dg/warn/Winit-list4.C: New test.
This removes a duplicated statement.
It was apparently introduced due to a merge mistake.
2020-11-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
* fold-const.c (getbyterep): Remove duplicated statement.
This makes sure that stack-allocated SSA_NAMEs are
at least MODE_ALIGNED. It also increases MEM_ALIGN
for the corresponding rtl objects.
gcc:
2020-11-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR target/97205
* cfgexpand.c (align_local_variable): Make SSA_NAMEs
at least MODE_ALIGNED.
(expand_one_stack_var_at): Increase MEM_ALIGN for SSA_NAMEs.
gcc/testsuite:
2020-11-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR target/97205
* gcc.c-torture/compile/pr97205.c: New test.
This allows us to release references to BLOCKs by not keeping
them rooted in the external_die_map but instead removing entries from
it as soon as we have created the corresponding stub DIE. For
decls it doesn't help since we still keep the decl_die_table.
2020-11-03 Richard Biener <rguenther@suse.de>
* dwarf2out.c (maybe_create_die_with_external_ref): Remove
hashtable entry.
A couple of small fixes. I noticed bind_template_template_parm was
not marking the parm as a template parm (this broke some module
handling). Debugging CALL_EXPR comparisons led me to refactor
cp_tree_equal's CALL_EXPR code (and my recent fix to debug printing of
same). Finally, TREE_VECs are best compared by comp_template_args. I
recall that last piece being a leftover from fixes during gcc-10.
I've been using it on the modules branch since then.
gcc/cp/
* tree.c (bind_template_template_parm): Mark the parm as a
template parm.
(cp_tree_equal): Refactor CALL_EXPR. Use comp_template_args for
TREE_VECs.
Here are a few cleanups from the modules branch. Generally some RAII,
and a bit of lazy namespace pushing.
gcc/cp/
* rtti.c (init_rtti_processing): Move var decl to its init.
(get_tinfo_decl): Likewise. Break out creation to called helper
...
(get_tinfo_decl_direct): ... here.
(build_dynamic_cast_1): Move var decls to their initializers.
(tinfo_base_init): Set decl's location to BUILTINS_LOCATION.
(get_tinfo_desc): Only push ABI namespace when needed. Set type's
context.
This patch cleans up the interface to the dependency generation a
little. We now only check the option in one place, and the
cpp_get_deps function returns nullptr if there are no dependencies. I
also reworded the -MT and -MQ help text to be make-agnostic -- as
there are ideas about emitting, say, JSON.
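The resulting caller pattern looks something like this (an illustrative
sketch; 'pfile' is a placeholder cpp_reader):

  // cpp_get_deps now returns nullptr when dependency recording is
  // disabled, so callers only need a single pointer check.
  if (mkdeps *deps = cpp_get_deps (pfile))
    deps_add_target (deps, "out.o", /*quote=*/1);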
libcpp/
* include/mkdeps.h: Include cpplib.h.
(deps_write): Adjust first parm type.
* mkdeps.c: Include internal.h.
(make_write): Adjust first parm type. Check phony option
directly.
(deps_write): Adjust first parm type.
* init.c (cpp_read_main_file): Use get_deps.
* directives.c (cpp_get_deps): Check option before initializing.
gcc/c-family/
* c.opt (MQ,MT): Reword description to be make-agnostic.
gcc/fortran/
* cpp.c (gfc_cpp_add_dep): Only add dependency if we're recording
them.
(gfc_cpp_init): Likewise for target.
This patch enables intrinsics to convert BFloat16 scalar and vector
operands to Float32 modes. The intrinsics are implemented by shifting
each BFloat16 item 16 bits to the left using shl/shll/shll2 instructions.
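A usage sketch (assuming an AArch64 compiler with the bf16 extension
enabled; the wrapper names are made up):

  #include <arm_neon.h>

  float32x4_t to_f32 (bfloat16x4_t v)       { return vcvt_f32_bf16 (v); }
  float32x4_t low_to_f32 (bfloat16x8_t v)   { return vcvtq_low_f32_bf16 (v); }
  float32x4_t high_to_f32 (bfloat16x8_t v)  { return vcvtq_high_f32_bf16 (v); }
  float32_t   scalar_to_f32 (bfloat16_t x)  { return vcvtah_f32_bf16 (x); }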
gcc/ChangeLog:
2020-11-03 Dennis Zhang <dennis.zhang@arm.com>
* config/aarch64/aarch64-simd-builtins.def (vbfcvt): New entry.
(vbfcvt_high, bfcvt): Likewise.
* config/aarch64/aarch64-simd.md (aarch64_vbfcvt<mode>): New entry.
(aarch64_vbfcvt_highv8bf, aarch64_bfcvtsf): Likewise.
* config/aarch64/arm_bf16.h (vcvtah_f32_bf16): New intrinsic.
* config/aarch64/arm_neon.h (vcvt_f32_bf16): Likewise.
(vcvtq_low_f32_bf16, vcvtq_high_f32_bf16): Likewise.
gcc/testsuite/ChangeLog
* gcc.target/aarch64/advsimd-intrinsics/bfcvt-compile.c
(test_vcvt_f32_bf16, test_vcvtq_low_f32_bf16): New tests.
(test_vcvtq_high_f32_bf16, test_vcvth_f32_bf16): Likewise.
This fixes the bad assumption that sizeof (bool) == 1.
2020-11-03 Richard Biener <rguenther@suse.de>
PR bootstrap/97666
* tree-vect-slp.c (vect_build_slp_tree_2): Scale
allocation of skip_args by sizeof (bool).