The sigsetjmp analyzer tests use jmp_buf in sigsetjmp and siglongjmp
calls. Not every system that supports sigsetjmp uses the same data
structure for setjmp and sigsetjmp, which results in type mismatches.
This patch changes the tests to use sigjmp_buf, the type that POSIX
specifies for use with sigsetjmp and siglongjmp.
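For illustration, a minimal sketch of the corrected idiom (hypothetical
code, not the exact testsuite sources):

  #include <setjmp.h>
  #include <signal.h>

  static sigjmp_buf env;   /* sigjmp_buf, not jmp_buf.  */

  static void handler (int sig)
  {
    siglongjmp (env, 1);   /* Restores the signal mask saved below.  */
  }

  int main (void)
  {
    signal (SIGINT, handler);
    if (sigsetjmp (env, 1) == 0)  /* Nonzero savemask: save the mask.  */
      raise (SIGINT);
    return 0;
  }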
for gcc/testsuite/ChangeLog
* gcc.dg/analyzer/sigsetjmp-5.c: Use sigjmp_buf.
* gcc.dg/analyzer/sigsetjmp-6.c: Likewise.
The getpass function is not available on all systems, and even where it
is, it is not necessarily declared in unistd.h as the sensitive-1
analyzer test expects.
Since this is a compile-only test, it doesn't really matter if the
function is defined in the system libraries. All we need is a
declaration, to avoid warnings from calling an undeclared function.
This patch adds the declaration, in a way that is most unlikely to
conflict with any existing declaration.
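A sketch of the idea (the exact declaration in the test may differ):

  /* Declare getpass ourselves with its traditional Unix prototype; if
     the system headers also declare it, the declarations agree.  */
  extern char *getpass (const char *prompt);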
for gcc/testsuite/ChangeLog
* gcc.dg/analyzer/sensitive-1.c: Declare getpass.
Similar to nvptx offloading, see PR65099 "nvptx offloading: hard-coded 64-bit
assumptions".
gcc/
* config/gcn/mkoffload.c (main): Create an offload image only in
64-bit configurations.
During error recovery after an invalid derived type specification it was
possible to try to resolve an invalid array specification. We now skip
this if the component has the ALLOCATABLE or POINTER attribute and the
shape is not deferred.
gcc/fortran/ChangeLog:
PR fortran/98661
* resolve.c (resolve_component): Derived type components with
ALLOCATABLE or POINTER attribute shall have a deferred shape.
gcc/testsuite/ChangeLog:
PR fortran/98661
* gfortran.dg/pr98661.f90: New test.
As recently again discussed in <https://gcc.gnu.org/PR97436> "[nvptx] -m32
support", nvptx offloading other than for 64-bit host has never been
implemented, tested, or supported. So we simply shouldn't build the
nvptx libgomp plugin in this case.
This avoids build problems if, for example, in a (standard) bi-arch
x86_64-pc-linux-gnu '-m64'/'-m32' build, libcuda is available only in a 64-bit
variant but not in a 32-bit one, which, for example, is the case if you build
GCC against the CUDA toolkit's 'stubs/libcuda.so' (see
<https://stackoverflow.com/a/52784819>).
This amends PR65099 commit a92defdab7 (r225560)
"[nvptx offloading] Only 64-bit configurations are currently supported" to
match the way we're doing this for the HSA/GCN plugins.
libgomp/
PR libgomp/65099
* plugin/configfrag.ac (PLUGIN_NVPTX): Restrict to supported
configurations.
* configure: Regenerate.
* plugin/plugin-nvptx.c (nvptx_get_num_devices): Remove 64-bit
check.
Fix ordering problem on Windows targets where filesystem_error was used
before being defined.
libstdc++-v3/ChangeLog:
PR libstdc++/98471
* include/bits/fs_path.h (__throw_conversion_error): New
function to throw or abort on character conversion errors.
(__wstr_from_utf8): Move definition after filesystem_error has
been defined. Use __throw_conversion_error.
(path::_S_convert<_EcharT>): Use __throw_conversion_error.
(path::_S_str_convert<_CharT, _Traits, _Allocator>): Likewise.
(path::u8string): Likewise.
This fixes an infinite loop one could see for:
git show b87ec922c4 | ./contrib/mklog.py
contrib/ChangeLog:
* mklog.py: Fix infinite loop for unsupported files.
-fcf-protection with CF_BRANCH inserts ENDBR32 at function entries.
ENDBR32 is NOP only on 64-bit processors and 32-bit TARGET_CMOV
processors. Issue an error for -fcf-protection with CF_BRANCH when
compiling for 32-bit non-TARGET_CMOV targets.
gcc/
PR target/98667
* config/i386/i386-options.c (ix86_option_override_internal):
Issue an error for -fcf-protection with CF_BRANCH when compiling
for 32-bit non-TARGET_CMOV targets.
gcc/testsuite/
PR target/98667
* gcc.target/i386/pr98667-1.c: New file.
* gcc.target/i386/pr98667-2.c: Likewise.
* gcc.target/i386/pr98667-3.c: Likewise.
This improves dependence analysis on refs that access the same
array but with differently typed, same-sized accesses. That's
obviously safe for types that cannot have any access function
based on them. For the testcase this is signed short vs. unsigned
short.
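A sketch of the kind of access pattern this covers (assumed shape, not
the actual testcase):

  /* The same array is read as signed short and written as unsigned
     short; the accesses have the same size, so the shapes can now be
     treated as compatible for dependence analysis.  */
  short a[1024];

  void f (int n)
  {
    for (int i = 0; i < n; i++)
      ((unsigned short *) a)[i] = a[i] + 1;
  }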
2021-01-14 Richard Biener <rguenther@suse.de>
PR tree-optimization/98674
* tree-data-ref.c (base_supports_access_fn_components_p): New.
(initialize_data_dependence_relation): For two bases without
possible access fns resort to type size equality when determining
shape compatibility.
* gcc.dg/vect/pr98674.c: New testcase.
Also pass -mpreferred-stack-boundary=4 -mno-stackrealign to avoid
disabling STV by:
/* Disable STV if -mpreferred-stack-boundary={2,3} or
-mincoming-stack-boundary={2,3} or -mstackrealign - the needed
stack realignment will be extra cost the pass doesn't take into
account and the pass can't realign the stack. */
if (ix86_preferred_stack_boundary < 128
|| ix86_incoming_stack_boundary < 128
|| opts->x_ix86_force_align_arg_pointer)
opts->x_target_flags &= ~MASK_STV;
PR target/98676
* gcc.target/i386/pr95021-1.c: Add -mpreferred-stack-boundary=4
-mno-stackrealign.
* gcc.target/i386/pr95021-3.c: Likewise.
The patch adding these files was approved in 2020 but it wasn't
committed until 2021, so the copyright years were not updated along with
the years in all the existing files.
libstdc++-v3/ChangeLog:
* include/std/barrier: Update copyright years. Fix whitespace.
* include/std/version: Fix whitespace.
* testsuite/30_threads/barrier/1.cc: Update copyright years.
* testsuite/30_threads/barrier/2.cc: Likewise.
* testsuite/30_threads/barrier/arrive.cc: Likewise.
* testsuite/30_threads/barrier/arrive_and_drop.cc: Likewise.
* testsuite/30_threads/barrier/arrive_and_wait.cc: Likewise.
* testsuite/30_threads/barrier/completion.cc: Likewise.
I flubbed an application of De Morgan's law. Let's just express the
logic directly and let the compiler figure it out. This bug made it
look like pr52830 was fixed, but it is not.
PR c++/98372
gcc/cp/
* tree.c (cp_tree_equal): Correct map_context logic.
gcc/testsuite/
* g++.dg/cpp0x/constexpr-52830.C: Restore dg-ice
* g++.dg/template/pr98372.C: New.
I've made two mistakes in the *sse4_1_zero_extend* define_insn_and_split
patterns. One is that when it uses vector_operand, it should use Bm rather
than m constraint, and the other one is that because it is a post-reload
splitter it needs isa attribute to select which alternatives are valid for
which ISAs. Sorry for messing this up.
2021-01-14 Jakub Jelinek <jakub@redhat.com>
PR target/98670
* config/i386/sse.md (*sse4_1_zero_extendv8qiv8hi2_3,
*sse4_1_zero_extendv4hiv4si2_3, *sse4_1_zero_extendv2siv2di2_3):
Use Bm instead of m for non-avx. Add isa attribute.
* gcc.target/i386/pr98670.c: New test.
This patch optimizes two GIMPLE operations into just one.
As mentioned in the PR, there is some risk this might create more
expensive constants, but sometimes it will instead make them less
expensive; it really depends on the exact value.
And if it is an important issue, we should do it in md or during expansion.
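For example, when X is itself an inversion, ~X simplifies and the two
bitwise-nots cancel (a minimal illustration, assuming the
arithmetic-shift case):

  int f (int z, int y)
  {
    /* ~(~z >> y) -> (~~z) >> y -> z >> y: the shift-of-not plus the
       outer not collapse into a single shift.  */
    return ~(~z >> y);
  }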
2021-01-14 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96688
* match.pd (~(X >> Y) -> ~X >> Y): New simplification if
~X can be simplified.
* gcc.dg/tree-ssa/pr96688.c: New test.
* gcc.dg/tree-ssa/reassoc-37.c: Adjust scan-tree-dump regex.
* gcc.target/i386/pr66821.c: Likewise.
At the moment, if we use only one vector of an LD4 result,
we'll treat the LD4 as having the cost of a single load.
But all 4 loads and any associated permutes take place
regardless of which results are actually used.
This patch therefore counts the cost of unused LOAD_LANES
results against the first statement in a group. An alternative
would be to multiply the ncopies of the first stmt by the group
size and treat other stmts in the group as having zero cost,
but I thought that might be more surprising when reading dumps.
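A sketch of the situation being costed (assumed shape): only one in
every four lanes is consumed, yet the group is loaded with a single LD4
whose other three results are unused:

  void f (int *restrict out, int *restrict in, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = in[i * 4];   /* Uses one vector of the LD4 result.  */
  }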
gcc/
* tree-vect-stmts.c (vect_model_load_cost): Account for unused
IFN_LOAD_LANES results.
gcc/testsuite/
* gcc.target/aarch64/sve/cost_model_11.c: New test.
* gcc.target/aarch64/sve/mask_struct_load_5.c: Use
-fno-vect-cost-model.
Turns out __builtin_convertvector is not as good a fit for the widening
and narrowing intrinsics as I had hoped. During the veclower phase we
lower most of it to bitfield operations and hope DCE cleans it back up
into vector pack/unpack and extend operations. I received reports that
in more complex cases GCC fails to do that and we're left with many
vector extract operations that clutter the output. I think veclower can
be improved on that front, but for GCC 10 I'd like to just implement
these intrinsics with a good old RTL builtin rather than inline asm.
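For reference, the affected intrinsics widen or narrow whole vectors;
usage is unchanged, only the implementation moves to RTL builtins:

  #include <arm_neon.h>

  int16x8_t widen (int8x8_t x)
  {
    return vmovl_s8 (x);    /* SXTL: sign-extend each lane to 16 bits.  */
  }

  int8x8_t narrow (int16x8_t x)
  {
    return vmovn_s16 (x);   /* XTN: truncate each lane to 8 bits.  */
  }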
gcc/
* config/aarch64/aarch64-simd.md (aarch64_<su>xtl<mode>):
Define.
(aarch64_xtn<mode>): Likewise.
* config/aarch64/aarch64-simd-builtins.def (sxtl, uxtl, xtn):
Define builtins.
* config/aarch64/arm_neon.h (vmovl_s8): Reimplement using
builtin.
(vmovl_s16): Likewise.
(vmovl_s32): Likewise.
(vmovl_u8): Likewise.
(vmovl_u16): Likewise.
(vmovl_u32): Likewise.
(vmovn_s16): Likewise.
(vmovn_s32): Likewise.
(vmovn_s64): Likewise.
(vmovn_u16): Likewise.
(vmovn_u32): Likewise.
(vmovn_u64): Likewise.
The vmovn_high* intrinsics are supposed to map to XTN2 instructions that
narrow their source vector and insert it into the top half of the
destination vector.
This patch reimplements them away from inline assembly to an RTL builtin
that performs a vec_concat with a truncate.
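Its semantics, for reference (usage is unchanged by the patch):

  #include <arm_neon.h>

  /* XTN2: narrow each 16-bit lane of a to 8 bits and place the result
     in the top half of the output; the bottom half comes from r.  */
  int8x16_t narrow_high (int8x8_t r, int16x8_t a)
  {
    return vmovn_high_s16 (r, a);
  }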
gcc/
* config/aarch64/aarch64-simd.md (aarch64_xtn2<mode>_le):
Define.
(aarch64_xtn2<mode>_be): Likewise.
(aarch64_xtn2<mode>): Likewise.
* config/aarch64/aarch64-simd-builtins.def (xtn2): Define
builtins.
* config/aarch64/arm_neon.h (vmovn_high_s16): Reimplement using
builtins.
(vmovn_high_s32): Likewise.
(vmovn_high_s64): Likewise.
(vmovn_high_u16): Likewise.
(vmovn_high_u32): Likewise.
(vmovn_high_u64): Likewise.
gcc/testsuite/
* gcc.target/aarch64/narrow_high-intrinsics.c: Adjust
scan-assembler-times for xtn2.
While running the glibc tests, several *-textrel tests failed, showing
that relocations remained against read-only sections. It turned out this
was related to the exception header data encoding being wrong.
By default pointer encoding always uses the DW_EH_PE_absptr format. This
patch uses the DW_EH_PE_pcrel and DW_EH_PE_sdata4 formats, optionally
including DW_EH_PE_indirect for global symbols. This eliminates the
relocations.
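A sketch of what such a definition looks like, modeled on other ports
(the actual or1k macro may differ):

  /* PC-relative, 4-byte signed encoding; indirect references to global
     symbols so that no relocations land in read-only sections.  */
  #define ASM_PREFERRED_EH_DATA_FORMAT(CODE, GLOBAL) \
    (((GLOBAL) ? DW_EH_PE_indirect : 0) | DW_EH_PE_pcrel | DW_EH_PE_sdata4)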
gcc/ChangeLog:
* config/or1k/or1k.h (ASM_PREFERRED_EH_DATA_FORMAT): New macro.
Define TARGET_ASM_FILE_END as file_end_indicate_exec_stack to allow
generation of the ".note.GNU-stack" section note. This allows binutils
to properly set PT_GNU_STACK in the program header.
This fixes a glibc execstack testsuite test failure found while working
on the OpenRISC glibc port.
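The definition follows directly from the description above:

  /* Emit the .note.GNU-stack marker so the linker sets PT_GNU_STACK.  */
  #undef  TARGET_ASM_FILE_END
  #define TARGET_ASM_FILE_END file_end_indicate_exec_stack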
gcc/ChangeLog:
* config/or1k/linux.h (TARGET_ASM_FILE_END): Define macro.
This allows the OpenRISC soft-float implementation to set exceptions.
It also switches tininess detection to after rounding, to be consistent
with the hardware and simulator implementations.
libgcc/ChangeLog:
* config/or1k/sfp-machine.h (FP_RND_NEAREST, FP_RND_ZERO,
FP_RND_PINF, FP_RND_MINF, FP_RND_MASK, FP_EX_OVERFLOW,
FP_EX_UNDERFLOW, FP_EX_INEXACT, FP_EX_INVALID, FP_EX_DIVZERO,
FP_EX_ALL): New constant macros.
(_FP_DECL_EX, FP_ROUNDMODE, FP_INIT_ROUNDMODE,
FP_HANDLE_EXCEPTIONS): New macros.
(_FP_TININESS_AFTER_ROUNDING): Change to 1.
This is used in libgcc and now glibc to detect when hardware floating
point operations are supported by the target.
gcc/ChangeLog:
* config/or1k/or1k.h (TARGET_CPU_CPP_BUILTINS): Add builtin
define for __or1k_hard_float__.
Define profiling support so that it no longer aborts, an issue found
when running tests from the glibc test suite.
We implement this with a call to _mcount with no arguments. The required
return addresses will be pulled from the stack. Passing the LR (r9) as
an argument had problems, as sometimes r9 is clobbered by the GOT logic
in the prologue before the call to _mcount.
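A sketch of the shape described above (the actual definitions in or1k.h
may differ in detail):

  #define NO_PROFILE_COUNTERS 1

  /* Call _mcount with no arguments; it finds the return addresses it
     needs on the stack.  */
  #define PROFILE_HOOK(LABEL) \
    { \
      rtx fun = gen_rtx_SYMBOL_REF (Pmode, "_mcount"); \
      emit_library_call (fun, LCT_NORMAL, VOIDmode); \
    }

  /* The work happens in PROFILE_HOOK, so this is a no-op.  */
  #define FUNCTION_PROFILER(FILE, LABELNO)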
gcc/ChangeLog:
* config/or1k/or1k.h (NO_PROFILE_COUNTERS): Define as 1.
(PROFILE_HOOK): Define to call _mcount.
(FUNCTION_PROFILER): Change from abort to no-op.
In r11-4690 we removed the call to finish_nonmember_using_decl in
tsubst_expr/DECL_EXPR in the USING_DECL block. This was done not
to perform name lookup twice for a non-dependent using-decl, which
sounds sensible.
However, finish_nonmember_using_decl also pushes the decl's bindings
which we still have to do so that we can find the USING_DECL's name
later. In this case, we've got a USING_DECL N::operator<< that we are
tsubstituting. We already looked it up while parsing the template
"foo", and lookup_using_decl stashed the OVERLOAD it found into
USING_DECL_DECLS. Now we just have to update the IDENTIFIER_BINDING of
the identifier for operator<< with the overload the name is bound to.
I didn't want to export push_local_binding so I've introduced a new
wrapper.
gcc/cp/ChangeLog:
PR c++/98231
* name-lookup.c (push_using_decl_bindings): New.
* name-lookup.h (push_using_decl_bindings): Declare.
* pt.c (tsubst_expr): Call push_using_decl_bindings.
gcc/testsuite/ChangeLog:
PR c++/98231
* g++.dg/lookup/using63.C: New test.
These simplifications are only simplifications if the (~D ^ C) or (D ^ C)
expressions fold into gimple vals, but in that case they decrease the
number of operations by 1.
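A minimal illustration with constant C and D, where the new
subexpression folds (hypothetical values):

  unsigned f (unsigned x)
  {
    /* (~x | 0xff00) ^ 0xf0f0 becomes (x | 0xff00) ^ (~0xf0f0 ^ 0xff00);
       the right operand folds to a constant, so the separate bitwise-not
       disappears and three operations become two.  */
    return (~x | 0xff00) ^ 0xf0f0;
  }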
2021-01-13 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96691
* match.pd ((~X | C) ^ D -> (X | C) ^ (~D ^ C),
(~X & C) ^ D -> (X & C) ^ (D ^ C)): New simplifications if
(~D ^ C) or (D ^ C) can be simplified.
* gcc.dg/tree-ssa/pr96691.c: New test.
contrib/ChangeLog:
* gcc-changelog/git_commit.py: Support wrapping of functions
in parentheses that can take multiple lines.
* gcc-changelog/test_email.py: Add tests for it.
* gcc-changelog/test_patches.txt: Add 2 patches.
This avoids canonicalizing BIT_FIELD_REF <T1> (a, <sz>, 0) to
(T1)a on integer typed a. This confuses the vectorizer SLP matching.
With this delayed to after vector lowering the testcase in PR92645
from Skia is now finally optimized to reasonable assembly.
2021-01-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/92645
* match.pd (BIT_FIELD_REF to conversion): Delay canonicalization
until after vector lowering.
* gcc.target/i386/pr92645-7.c: New testcase.
* gcc.dg/tree-ssa/ssa-fre-54.c: Adjust.
* gcc.dg/pr69047.c: Likewise.
I misunderstood the cp_build_function_call_vec API, thinking a NULL
vector was an acceptable way of passing no arguments. You need to
pass a vector of no elements.
PR c++/98626
gcc/cp/
* module.cc (module_add_import_initializers): Pass a
zero-element argument vector.
This patch extends the MLS/MSB patterns to support unpacked
integer vectors. The type suffix could be either the element
size or the container size, but using the element size should
be more efficient.
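A sketch of a loop that can now use the unpacked form (assumed testcase
shape): mixing element widths forces the 16-bit elements into wider
containers, and MLS can operate on them directly:

  void f (short *restrict a, short *restrict b, short *restrict c,
          long *restrict d, int n)
  {
    for (int i = 0; i < n; i++)
      {
        a[i] -= b[i] * c[i];  /* MLS on unpacked 16-bit elements.  */
        d[i] += 1;            /* Keeps the shorts in 64-bit containers.  */
      }
  }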
gcc/
* config/aarch64/aarch64-sve.md (fnma<mode>4): Extend from SVE_FULL_I
to SVE_I.
(@aarch64_pred_fnma<mode>, cond_fnma<mode>, *cond_fnma<mode>_2)
(*cond_fnma<mode>_4, *cond_fnma<mode>_any): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/mls_2.c: New test.
* g++.target/aarch64/sve/cond_mls_1.C: Likewise.
* g++.target/aarch64/sve/cond_mls_2.C: Likewise.
* g++.target/aarch64/sve/cond_mls_3.C: Likewise.
* g++.target/aarch64/sve/cond_mls_4.C: Likewise.
* g++.target/aarch64/sve/cond_mls_5.C: Likewise.
This patch extends the MLA/MAD patterns to support unpacked
integer vectors. The type suffix could be either the element
size or the container size, but using the element size should
be more efficient.
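The MLA analogue of the MLS sketch above, under the same assumptions:

  void g (short *restrict a, short *restrict b, short *restrict c,
          long *restrict d, int n)
  {
    for (int i = 0; i < n; i++)
      {
        a[i] += b[i] * c[i];  /* MLA/MAD on unpacked 16-bit elements.  */
        d[i] += 1;
      }
  }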
gcc/
* config/aarch64/aarch64-sve.md (fma<mode>4): Extend from SVE_FULL_I
to SVE_I.
(@aarch64_pred_fma<mode>, cond_fma<mode>, *cond_fma<mode>_2)
(*cond_fma<mode>_4, *cond_fma<mode>_any): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/mla_2.c: New test.
* g++.target/aarch64/sve/cond_mla_1.C: Likewise.
* g++.target/aarch64/sve/cond_mla_2.C: Likewise.
* g++.target/aarch64/sve/cond_mla_3.C: Likewise.
* g++.target/aarch64/sve/cond_mla_4.C: Likewise.
* g++.target/aarch64/sve/cond_mla_5.C: Likewise.
This improves SLP discovery in the face of existing vectors allowing
punning of the vector shape (or even punning from an integer type).
For punning from integer types this does not yet handle lane zero
extraction being represented as conversion rather than BIT_FIELD_REF.
2021-01-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/92645
* tree-vect-slp.c (vect_build_slp_tree_1): Relax supported
BIT_FIELD_REF argument.
(vect_build_slp_tree_2): Record the desired vector type
on the external vector def.
(vectorizable_slp_permutation): Handle required punning
of existing vector defs.
* gcc.target/i386/pr92645-6.c: New testcase.
Noticed while testing on a different machine that the sve/sel_*.c
tests require .variant_pcs support but don't test for it.
.variant_pcs post-dates SVE so there shouldn't be a need to test
for both.
gcc/testsuite/
* gcc.target/aarch64/sve/sel_1.c: Require aarch64_variant_pcs.
* gcc.target/aarch64/sve/sel_2.c: Likewise.
* gcc.target/aarch64/sve/sel_3.c: Likewise.
Noticed while looking at something else that the comment above
def_lookup got the description of the comparisons the wrong way
round.
gcc/
* rtl-ssa/accesses.h (def_lookup): Fix order of comparison results.
This patch fixes a regression on sh4 introduced by the rtl-ssa stuff.
The port had a pattern:
(define_insn "movsf_ie"
[(set (match_operand:SF 0 "general_movdst_operand"
"=f,r,f,f,fy, f,m, r, r,m,f,y,y,rf,r,y,<,y,y")
(match_operand:SF 1 "general_movsrc_operand"
" f,r,G,H,FQ,mf,f,FQ,mr,r,y,f,>,fr,y,r,y,>,y"))
(use (reg:SI FPSCR_MODES_REG))
(clobber (match_scratch:SI 2 "=X,X,X,X,&z, X,X, X, X,X,X,X,X, y,X,X,X,X,X"))]
"TARGET_SH2E
&& (arith_reg_operand (operands[0], SFmode)
|| fpul_operand (operands[0], SFmode)
|| arith_reg_operand (operands[1], SFmode)
|| fpul_operand (operands[1], SFmode)
|| arith_reg_operand (operands[2], SImode))"
But recog can generate this pattern from something that matches:
[(set (match_operand:SF 0 "general_movdst_operand")
(match_operand:SF 1 "general_movsrc_operand")
(use (reg:SI FPSCR_MODES_REG))]
with recog adding the (clobber (match_scratch:SI)) automatically.
recog tests the C condition before adding the clobber, so there might
not be an operands[2] to test.
Similarly, gen_movsf_ie takes only two arguments, with operand 2
being filled in automatically. The only way to create this pattern
with a REG operands[2] before RA would be to generate it directly
from RTL. AFAICT the only things that do this are the secondary
reload patterns, which are generated during RA and come with
pre-vetted operands.
arith_reg_operand rejects 6 specific registers:
return (regno != T_REG && regno != PR_REG
&& regno != FPUL_REG && regno != FPSCR_REG
&& regno != MACH_REG && regno != MACL_REG);
The fpul_operand tests allow FPUL_REG, leaving 5 invalid registers.
However, in all alternatives of movsf_ie, either operand 0 or
operand 1 is a register that belongs to class r, f or y, none of which
include any of the 5 rejected registers. This means that any
post-RA pattern would satisfy the operands[0] or operands[1]
condition without the operands[2] test being necessary.
gcc/
* config/sh/sh.md (movsf_ie): Remove operands[2] test.
contrib/ChangeLog:
* gcc-changelog/git_commit.py: Allow modifications of older
ChangeLog (or specific) files without the need to make a
ChangeLog entry.
* gcc-changelog/test_email.py: Test it.
* gcc-changelog/test_patches.txt: Add new patch.
When the application sets SA_SIGINFO, the signal trampoline parameters
are different, in order to follow POSIX.
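For reference, a handler installed with SA_SIGINFO receives pointer
arguments rather than a small integer sigcode, which is what the
unwinder now detects:

  #include <string.h>
  #include <signal.h>

  static void handler (int sig, siginfo_t *info, void *uctx)
  {
  }

  void install (void)
  {
    struct sigaction sa;
    memset (&sa, 0, sizeof sa);
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;  /* Selects the POSIX trampoline layout.  */
    sigaction (SIGSEGV, &sa, 0);
  }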
libgcc/
* config/i386/gnu-unwind.h (x86_gnu_fallback_frame_state): Add the
posix siginfo case to struct handler_args. Detect between legacy
and siginfo from the second parameter, which is a small sigcode in
the legacy case, and a pointer in the siginfo case.
The following patch implements what I've talked about, i.e. no longer
forcing operands of vec_perm_const into registers in the generic code,
but letting each of the (currently 8) targets force them into registers
individually, giving the targets better control over if and when they do
that and allowing them to do something special with particular operands.
It then adds define_insn_and_split patterns for the 256-bit and 512-bit
permutations into vpmovzx* (only the bw, wd and dq cases; in theory we
could also add patterns for the bd, bq and wq cases).
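A sketch of source that can now match the new patterns (assumed shape;
requires -mavx2): interleaving a vector's low bytes with a zero vector
is exactly a zero extension on little-endian x86:

  typedef unsigned char v32qi __attribute__ ((vector_size (32)));

  v32qi f (v32qi x)
  {
    v32qi zero = { 0 };
    /* Pairs each low byte of x with a zero byte, i.e. zero-extends
       16 bytes to 16 words: a single vpmovzxbw.  */
    return __builtin_shuffle (x, zero,
                              (v32qi) { 0, 32, 1, 33, 2, 34, 3, 35,
                                        4, 36, 5, 37, 6, 38, 7, 39,
                                        8, 40, 9, 41, 10, 42, 11, 43,
                                        12, 44, 13, 45, 14, 46, 15, 47 });
  }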
2021-01-13 Jakub Jelinek <jakub@redhat.com>
PR target/95905
* optabs.c (expand_vec_perm_const): Don't force v0 and v1 into
registers before calling targetm.vectorize.vec_perm_const, only after
that.
* config/i386/i386-expand.c (ix86_vectorize_vec_perm_const): Handle
two argument permutation when one operand is zero vector and only
after that force operands into registers.
* config/i386/sse.md (*avx2_zero_extendv16qiv16hi2_1): New
define_insn_and_split pattern.
(*avx512bw_zero_extendv32qiv32hi2_1): Likewise.
(*avx512f_zero_extendv16hiv16si2_1): Likewise.
(*avx2_zero_extendv8hiv8si2_1): Likewise.
(*avx512f_zero_extendv8siv8di2_1): Likewise.
(*avx2_zero_extendv4siv4di2_1): Likewise.
* config/mips/mips.c (mips_vectorize_vec_perm_const): Force operands
into registers.
* config/arm/arm.c (arm_vectorize_vec_perm_const): Likewise.
* config/sparc/sparc.c (sparc_vectorize_vec_perm_const): Likewise.
* config/ia64/ia64.c (ia64_vectorize_vec_perm_const): Likewise.
* config/aarch64/aarch64.c (aarch64_vectorize_vec_perm_const): Likewise.
* config/rs6000/rs6000.c (rs6000_vectorize_vec_perm_const): Likewise.
* config/gcn/gcn.c (gcn_vectorize_vec_perm_const): Likewise. Use std::swap.
* gcc.target/i386/pr95905-2.c: Use scan-assembler-times instead of
scan-assembler. Add tests with zero vector as first __builtin_shuffle
operand.
* gcc.target/i386/pr95905-3.c: New test.
* gcc.target/i386/pr95905-4.c: New test.