This reverts the previous bogus change, which caused runtime failures,
and instead recognizes that the loop condition is now if-converted and
the BB vectorization opportunity is realized during the loop
vectorization pass.
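The adjusted directive is along these lines (the exact pattern scanned
is an assumption, not quoted from the testcase):
/* { dg-final { scan-tree-dump-times "basic block vectorized" 1 "vect" } } */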
2021-10-21 Richard Biener <rguenther@suse.de>
PR testsuite/102861
* gcc.dg/vect/bb-slp-16.c: Revert previous change, scan
the vect dump instead.
This implements strictly-structured block support for Fortran, as
specified in OpenMP 5.2. A Fortran BLOCK construct can now be used as
the body of most OpenMP constructs, with the "!$omp end ..." directive
optional for that form.
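For example, the following is now accepted, with no "!$omp end
parallel" directive required after the BLOCK construct (a minimal
sketch, not one of the new testcases):
program p
  !$omp parallel
  block
    print *, "inside the parallel region"
  end block
end program p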
gcc/fortran/ChangeLog:
* decl.c (gfc_match_end): Add COMP_OMP_STRICTLY_STRUCTURED_BLOCK case
together with COMP_BLOCK.
* parse.c (parse_omp_structured_block): Change return type to
'gfc_statement', add handling for strictly-structured block case, adjust
recursive calls to parse_omp_structured_block.
(parse_executable): Adjust calls to parse_omp_structured_block.
* parse.h (enum gfc_compile_state): Add
COMP_OMP_STRICTLY_STRUCTURED_BLOCK.
* trans-openmp.c (gfc_trans_omp_workshare): Add EXEC_BLOCK case
handling.
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/cancel-1.f90: Adjust testcase.
* gfortran.dg/gomp/nesting-3.f90: Adjust testcase.
* gfortran.dg/gomp/strictly-structured-block-1.f90: New test.
* gfortran.dg/gomp/strictly-structured-block-2.f90: New test.
* gfortran.dg/gomp/strictly-structured-block-3.f90: New test.
libgomp/ChangeLog:
* libgomp.texi (Support of strictly structured blocks in Fortran):
Adjust to 'Y'.
* testsuite/libgomp.fortran/task-reduction-16.f90: Adjust testcase.
This patch reimplements the SHAPE intrinsic to be inlined similarly to
LBOUND and UBOUND, instead of as a library call, to avoid an
unnecessary array copy. Various bugs are also fixed.
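As a sketch of the effect (not taken from the new testcases), a query
like the following is now expanded inline from the array descriptor
rather than through a library call that would copy the argument:
subroutine show_extents (a)
  integer :: a(:,:)
  print *, shape (a)   ! now inlined like lbound/ubound
end subroutine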
gcc/fortran/
PR fortran/94070
* expr.c (gfc_simplify_expr): Handle GFC_ISYM_SHAPE along with
GFC_ISYM_LBOUND and GFC_ISYM_UBOUND.
* trans-array.c (gfc_conv_ss_startstride): Likewise.
(set_loop_bounds): Likewise.
* trans-intrinsic.c (gfc_trans_intrinsic_bound): Extend to
handle SHAPE. Correct logic for zero-size special cases and
detecting assumed-rank arrays associated with an assumed-size
argument.
(gfc_conv_intrinsic_shape): Deleted.
(gfc_conv_intrinsic_function): Handle GFC_ISYM_SHAPE like
GFC_ISYM_LBOUND and GFC_ISYM_UBOUND.
(gfc_add_intrinsic_ss_code): Likewise.
(gfc_walk_intrinsic_bound): Likewise.
gcc/testsuite/
PR fortran/94070
* gfortran.dg/c-interop/shape-bindc.f90: New test.
* gfortran.dg/c-interop/shape-poly.f90: New test.
* gfortran.dg/c-interop/size-bindc.f90: New test.
* gfortran.dg/c-interop/size-poly.f90: New test.
* gfortran.dg/c-interop/ubound-bindc.f90: New test.
* gfortran.dg/c-interop/ubound-poly.f90: New test.
libstdc++-v3/ChangeLog:
* include/bits/stl_iterator.h (common_iterator::__arrow_proxy):
Make fully constexpr as per LWG 3595.
(common_iterator::__postfix_proxy): Likewise.
libstdc++-v3/ChangeLog:
* include/std/ranges (lazy_split_view::base): Add forward_range
constraint as per LWG 3591.
(lazy_split_view::begin, lazy_split_view::end): Also check
simpleness of _Pattern as per LWG 3592.
(split_view::base): Relax copyable constraint as per LWG 3590.
libstdc++-v3/ChangeLog:
* include/std/ranges (join_view::__iter_cat::_S_iter_cat): Adjust
criteria for returning bidirectional_iterator_tag as per LWG 3535.
(join_view::_Iterator::_S_iter_concept): Likewise.
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (viewable_range): Adjust as per
LWG 3481.
* testsuite/std/ranges/adaptors/all.cc (test07): New test.
The constraints on transform and and_then can cause errors when checking
satisfaction. The constraints that were present in R6 of the paper were
removed for the final R8 revision, and so should not have been included
in the implementation.
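A minimal sketch of the failure mode (simplified; not the PR's
reproducer) is a generic callable whose deduced return type is only
valid for non-const arguments:
#include <optional>

struct F
{
  template<typename T>
  auto operator()(T& t) { t = 0; return std::optional<int>(t); }
};

std::optional<int>
f(std::optional<int>& o)
{
  // Overload resolution also considers the const& overload of and_then.
  // With the R6 requires-clauses, checking its satisfaction computed
  // the invoke result for 'const int&', forcing return-type deduction
  // into the ill-formed 't = 0' and producing a hard error.  Without
  // the clauses, only the selected '&' overload is instantiated.
  return o.and_then(F{});
}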
libstdc++-v3/ChangeLog:
PR libstdc++/102863
* include/std/optional (optional::and_then, optional::transform):
Remove requires-clause.
* testsuite/20_util/optional/monadic/and_then.cc: Check
overload resolution doesn't cause errors.
* testsuite/20_util/optional/monadic/transform.cc: Likewise.
cp_parser_parse_and_diagnose_invalid_type_name is called during declaration
parsing, so it should pass 'true' for the declarator_p argument. But that
caused a diagnostic regression on template/pr84789.C due to undesired lookup
in dependent scopes. To fix that, cp_parser_nested_name_specifier_opt needs
to respect the value of check_dependency_p.
This patch avoids a regression from Andrew Sharp's WIP patch for PR70417.
It would make more sense to test only check_dependency_p, not declarator_p,
but removing the declarator_p condition turns out to reveal complicated
interactions of cp_parser_constructor_declarator_p and caching of
nested-name-specifiers and template-ids that I've already spent too much
time trying to sort out.
gcc/cp/ChangeLog:
* parser.c (cp_parser_parse_and_diagnose_invalid_type_name):
Pass true for declarator_p.
(cp_parser_nested_name_specifier_opt): Only look through
TYPENAME_TYPE if check_dependency_p is false.
Looking at calls.c:initialize_argument_information, I spotted some dead
code that seems to have been left behind from when MPX support was
removed.
This change removes that code as well as the associated target hooks
(which appear to be unused).
gcc/ChangeLog:
* calls.c (initialize_argument_information): Remove some dead
code, remove handling for function_arg returning const_int.
* doc/tm.texi: Delete documentation for unused target hooks.
* doc/tm.texi.in: Likewise.
* target.def (load_bounds_for_arg): Delete.
(store_bounds_for_arg): Delete.
(load_returned_bounds): Delete.
(store_returned_bounds): Delete.
* targhooks.c (default_load_bounds_for_arg): Delete.
(default_store_bounds_for_arg): Delete.
(default_load_returned_bounds): Delete.
(default_store_returned_bounds): Delete.
* targhooks.h (default_load_bounds_for_arg): Delete.
(default_store_bounds_for_arg): Delete.
(default_load_returned_bounds): Delete.
(default_store_returned_bounds): Delete.
The test_copy_elision() function was supposed to ensure that the result
is constructed directly in the std::optional, without early temporary
materialization. But I forgot to write the test.
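The new check is along these lines (a sketch with a simplified result
type): a result that can be neither moved nor copied only compiles if
transform constructs it directly inside the returned std::optional.
#include <optional>

struct immovable
{
  immovable(int) { }
  immovable(immovable&&) = delete; // also suppresses copying
};

void
test_copy_elision()
{
  std::optional<int> o{1};
  // If the invoke result were materialized early and then moved into
  // the std::optional, this would not compile.
  auto o2 = o.transform([](int i) { return immovable(i); });
}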
libstdc++-v3/ChangeLog:
* testsuite/20_util/optional/monadic/transform.cc: Check that
an rvalue result is not materialized too soon.
The documentation on asm statements suggests asm is always a GNU
extension, but it's been part of ISO C++ since the first standard.
The documentation of -fno-asm is wrong for C++ as it states that it only
affects typeof, but actually it affects typeof and asm (despite asm
being part of ISO C++).
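For instance, an asm-declaration like the following has been valid ISO
C++ all along (with an implementation-defined meaning), no GNU
extension involved:
void f ()
{
  asm ("nop");
}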
gcc/ChangeLog:
* doc/extend.texi (Basic Asm): Clarify that asm is not an
extension in C++.
* doc/invoke.texi (-fno-asm): Fix description for C++.
This turns a bitwise inverse of an equality comparison with 0 into a
compare of bitwise nonzero (cmtst).
We already have one pattern for cmtst; this adds an additional one
which does not require an extra bitwise AND.
i.e.
#include <arm_neon.h>
uint8x8_t bar(int16x8_t abs_row0, int16x8_t row0) {
uint16x8_t row0_diff =
vreinterpretq_u16_s16(veorq_s16(abs_row0, vshrq_n_s16(row0, 15)));
uint8x8_t abs_row0_gt0 =
vmovn_u16(vcgtq_u16(vreinterpretq_u16_s16(abs_row0), vdupq_n_u16(0)));
return abs_row0_gt0;
}
now generates:
bar:
cmtst v0.8h, v0.8h, v0.8h
xtn v0.8b, v0.8h
ret
instead of:
bar:
cmeq v0.8h, v0.8h, #0
not v0.16b, v0.16b
xtn v0.8b, v0.8h
ret
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (*aarch64_cmtst_same_<mode>): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/mvn-cmeq0-1.c: New test.
This turns truncate operations on a hi/lo pair into a single permute of
half the bit size of the input, simply ignoring the top bits (which the
truncation discards).
i.e.
void d2 (short * restrict a, int *b, int n)
{
for (int i = 0; i < n; i++)
a[i] = b[i];
}
now generates:
.L4:
ldp q0, q1, [x3]
add x3, x3, 32
uzp1 v0.8h, v0.8h, v1.8h
str q0, [x5], 16
cmp x4, x3
bne .L4
instead of:
.L4:
ldp q0, q1, [x3]
add x3, x3, 32
xtn v0.4h, v0.4s
xtn2 v0.8h, v1.4s
str q0, [x5], 16
cmp x4, x3
bne .L4
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (*aarch64_narrow_trunc<mode>): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/narrow_high_combine.c: Update case.
* gcc.target/aarch64/xtn-combine-1.c: New test.
* gcc.target/aarch64/xtn-combine-2.c: New test.
* gcc.target/aarch64/xtn-combine-3.c: New test.
* gcc.target/aarch64/xtn-combine-4.c: New test.
* gcc.target/aarch64/xtn-combine-5.c: New test.
* gcc.target/aarch64/xtn-combine-6.c: New test.
This optimizes a signed right shift by BITSIZE-1 into a cmlt operation,
which is preferable because compares generally have higher throughput
than shifts. On AArch64 the result of the shift would have been either
-1 or 0, which is exactly the result of the compare.
i.e.
void e (int * restrict a, int *b, int n)
{
for (int i = 0; i < n; i++)
b[i] = a[i] >> 31;
}
now generates:
.L4:
ldr q0, [x0, x3]
cmlt v0.4s, v0.4s, #0
str q0, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
instead of:
.L4:
ldr q0, [x0, x3]
sshr v0.4s, v0.4s, 31
str q0, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_simd_ashr<mode>): Add
cmlt case.
* config/aarch64/constraints.md (D1): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/shl-combine-2.c: New test.
* gcc.target/aarch64/shl-combine-3.c: New test.
* gcc.target/aarch64/shl-combine-4.c: New test.
* gcc.target/aarch64/shl-combine-5.c: New test.
When doing a (narrowing) right shift by half the width of the original
type, we are essentially shuffling the top bits of the first number
down.
If we have a hi/lo pair we can just use a single shuffle instead of needing two
shifts.
i.e.
typedef short int16_t;
typedef unsigned short uint16_t;
void foo (uint16_t * restrict a, int16_t * restrict d, int n)
{
for( int i = 0; i < n; i++ )
d[i] = (a[i] * a[i]) >> 16;
}
now generates:
.L4:
ldr q0, [x0, x3]
umull v1.4s, v0.4h, v0.4h
umull2 v0.4s, v0.8h, v0.8h
uzp2 v0.8h, v1.8h, v0.8h
str q0, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
instead of:
.L4:
ldr q0, [x0, x3]
umull v1.4s, v0.4h, v0.4h
umull2 v0.4s, v0.8h, v0.8h
sshr v1.4s, v1.4s, 16
sshr v0.4s, v0.4s, 16
xtn v1.4h, v1.4s
xtn2 v1.8h, v0.4s
str q1, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md
(*aarch64_<srn_op>topbits_shuffle<mode>_le): New.
(*aarch64_topbits_shuffle<mode>_le): New.
(*aarch64_<srn_op>topbits_shuffle<mode>_be): New.
(*aarch64_topbits_shuffle<mode>_be): New.
* config/aarch64/predicates.md
(aarch64_simd_shift_imm_vec_exact_top): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/shrn-combine-10.c: New test.
* gcc.target/aarch64/shrn-combine-5.c: New test.
* gcc.target/aarch64/shrn-combine-6.c: New test.
* gcc.target/aarch64/shrn-combine-7.c: New test.
* gcc.target/aarch64/shrn-combine-8.c: New test.
* gcc.target/aarch64/shrn-combine-9.c: New test.
This patch implements support for the in_reduction clause in Fortran.
It also fleshes out more of the taskgroup construct handling in the
Fortran front end, allowing task_reduction to work on task and target
constructs.
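A minimal sketch of the now-supported pattern (not one of the new
testcases):
program t
  integer :: x
  x = 0
  !$omp taskgroup task_reduction(+: x)
  !$omp target in_reduction(+: x)
  x = x + 1
  !$omp end target
  !$omp end taskgroup
  print *, x
end program t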
gcc/fortran/ChangeLog:
* openmp.c (gfc_match_omp_clause_reduction): Add 'openmp_target' default
false parameter. Add 'always,tofrom' map for OMP_LIST_IN_REDUCTION case.
(gfc_match_omp_clauses): Add 'openmp_target' default false parameter,
adjust call to gfc_match_omp_clause_reduction.
(match_omp): Adjust call to gfc_match_omp_clauses.
* trans-openmp.c (gfc_trans_omp_taskgroup): Add call to
gfc_trans_omp_clauses, create and return block.
gcc/ChangeLog:
* omp-low.c (omp_copy_decl_2): For !ctx, use record_vars to add new copy
as local variable.
(scan_sharing_clauses): Place copy of OMP_CLAUSE_IN_REDUCTION decl in
ctx->outer instead of ctx.
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/reduction4.f90: Adjust 'omp target in_reduction'
scan pattern.
libgomp/ChangeLog:
* testsuite/libgomp.fortran/target-in-reduction-1.f90: New test.
* testsuite/libgomp.fortran/target-in-reduction-2.f90: New test.
Tune the case-values-threshold setting for modern cores. A value of 11
improves SPECINT2017 by 0.2% and reduces codesize by 0.04%. With -Os,
use the value 8, which reduces codesize by 0.07%.
2021-10-18 Wilco Dijkstra <wdijkstr@arm.com>
gcc/
* config/aarch64/aarch64.c (aarch64_case_values_threshold):
Change to 8 with -Os, 11 otherwise.
Enable the fast shift feature in Neoverse V1 and N2 tunings as well.
2021-10-20 Wilco Dijkstra <wdijkstr@arm.com>
gcc/
* config/aarch64/aarch64.c (neoversev1_tunings):
Enable AARCH64_EXTRA_TUNE_CHEAP_SHIFT_EXTEND.
(neoversen2_tunings): Likewise.
libffi/ChangeLog:
* testsuite/lib/libffi.exp (load_gcc_lib): Load library from GCC
testsuite.
Load target-supports.exp and target-supports-dg.exp.
(libffi-init): Use libraries in GCC build tree.
(libffi_target_compile): Link with -shared-libgcc -lstdc++ for
C++ sources.
Add scripts for syncing with libffi upstream:
1. Clone libffi repo.
2. Checkout the specific commit.
3. Remove the unused files.
4. Add new files and remove old files if needed.
libffi/ChangeLog:
* HOWTO_MERGE: New file.
* autogen.sh: Likewise.
* merge.sh: Likewise.
As preparation for a new global object that will encapsulate
asm_out_file, we need to live with a macro defining asm_out_file as
casm->out_file, which means the name can't be used in function
arguments.
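Concretely, the macro arrangement is along these lines (a sketch; the
struct name and layout are assumptions):
struct asm_out_state
{
  FILE *out_file;
  /* ...other per-file assembly output state...  */
};
extern struct asm_out_state *casm;
#define asm_out_file (casm->out_file)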
gcc/ChangeLog:
* config/arm/arm.c (arm_unwind_emit_sequence): Do not declare
already declared global variable.
(arm_unwind_emit_set): Use out_file as function argument.
(arm_unwind_emit): Likewise.
* config/darwin.c (machopic_output_data_section_indirection): Likewise.
(machopic_output_stub_indirection): Likewise.
(machopic_output_indirection): Likewise.
(machopic_finish): Likewise.
* config/i386/i386.c (ix86_asm_output_function_label): Likewise.
* config/i386/winnt.c (i386_pe_seh_unwind_emit): Likewise.
* config/ia64/ia64.c (process_epilogue): Likewise.
(process_cfa_adjust_cfa): Likewise.
(process_cfa_register): Likewise.
(process_cfa_offset): Likewise.
(ia64_asm_unwind_emit): Likewise.
* config/s390/s390.c (s390_asm_output_function_label): Likewise.
This avoids running into the assert in compute_distributive_range when
starting the analysis with operations in a trapping type.
2021-10-20 Richard Biener <rguenther@suse.de>
PR tree-optimization/102853
* tree-data-ref.c (split_constant_offset_1): Bail out
immediately if the expression traps on overflow.
These are some random obvious cleanups to the threading dumps, since
it seems I'm not the only one looking at dumps these days.
The "just threaded" debugging message is redundant since there's
already an equivalent "Registering jump thread" message.
The "about to thread" message is actually confusing, because the source
block doesn't match the IL, since the CFG update is mid-flight.
Tested on x86-64 Linux.
gcc/ChangeLog:
* tree-ssa-threadupdate.c (back_jt_path_registry::adjust_paths_after_duplication):
Remove superfluous debugging message.
(back_jt_path_registry::duplicate_thread_path): Same.
gcc/ada/
* libgnat/s-widlllu.ads: Mark in SPARK.
* libgnat/s-widllu.ads: Likewise.
* libgnat/s-widuns.ads: Likewise.
* libgnat/s-widthu.adb: Add ghost code and a
pseudo-postcondition.
gcc/ada/
* libgnat/a-nbnbin__ghost.adb (Signed_Conversions,
Unsigned_Conversions): Mark subprograms as not imported.
* libgnat/a-nbnbin__ghost.ads: Provide a dummy body.
gcc/ada/
* sem_eval.adb (Eval_Type_Conversion): If the target subtype is
a static floating-point subtype and the result is a real literal,
consider its machine-rounded value when deciding whether to raise
Constraint_Error.
(Test_In_Range): Turn local variables into constants.
gcc/ada/
* sem_eval.ads (Machine_Number): New inline function.
* sem_eval.adb (Machine_Number): New function body implementing
the machine rounding operation specified by RM 4.9(38/2).
(Check_Non_Static_Context): Call Machine_Number and set the
Is_Machine_Number flag consistently on the resulting node.
* sem_attr.adb (Eval_Attribute) <Attribute_Machine>: Likewise.
* checks.adb (Apply_Float_Conversion_Check): Call Machine_Number.
(Round_Machine): Likewise.
gcc/ada/
* sem_ch6.adb (Check_Return_Construct_Accessibility): Consolidate
the generation of accessibility checks and ensure they are
triggered properly in the required cases.
* sem_util.adb (Accessibility_Level): Add extra check within
condition to handle aliased formals properly in more cases.
gcc/ada/
* exp_ch7.adb (Make_Final_Call): Detect expanded protected types
and use the original protected type in order to calculate the
appropriate finalization routine.