As discussed, the test exercises atomics on doubles, which are 64-bit, so we
should use the sync_long_long effective target instead of sync_int_long, which
covers 64-bit atomics only on 64-bit arches. I've added -march=pentium
to follow what is documented for sync_long_long; I guess -march=zarch should
be added for s390* too, but I haven't tested that.
And using sync_long_long uncovered a syntax error in that effective target's
implementation, so I've fixed that too.
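For reference, the resulting testsuite directives presumably look along these
lines (a sketch, not the verbatim test file):

/* { dg-require-effective-target sync_long_long } */
/* { dg-additional-options "-march=pentium" { target ia32 } } */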
2021-09-14 Jakub Jelinek <jakub@redhat.com>
* c-c++-common/gomp/atomic-29.c: Add -march=pentium
dg-additional-options for ia32. Use sync_long_long effective target
instead of sync_int_long.
* lib/target-supports.exp (check_effective_target_sync_long_long): Fix
a syntax error.
This patch adds checks (goa_stabilize_expr with NULL pre_p) for more
tree codes, so that we don't gimplify their operands individually unless the
lhs appears in them. Also, so that the added checks don't cause exponential
compile time complexity, I've added a depth computation: we don't expect the
lhs to be found at depth 8 or above, as all the atomic forms must have the x
expression in specific places in the expressions.
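As a reminder of why the depth is bounded, the OpenMP atomic forms only allow
the updated variable at fixed, shallow positions, e.g. (a minimal sketch):

double x;

void
f (double y)
{
  #pragma omp atomic update
  x = x + y;  /* the lhs 'x' must appear directly in the update expression */
}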
2021-09-14 Jakub Jelinek <jakub@redhat.com>
* gimplify.c (goa_stabilize_expr): Add depth argument, propagate
it to recursive calls, for depth above 7 just gimplify or return.
Perform a test even for MODIFY_EXPR, ADDR_EXPR, COMPOUND_EXPR with
__builtin_clear_padding and TARGET_EXPR.
(gimplify_omp_atomic): Adjust goa_stabilize_expr callers.
For consistency with -Wall and -w, this makes -Werror imply -gnatwe.
gcc/ada/
PR ada/101385
* doc/gnat_ugn/building_executable_programs_with_gnat.rst
(-Wall): Minor fixes.
(-w): Likewise.
(-Werror): Document that it also sets -gnatwe by default.
* gcc-interface/lang-specs.h (ada): Expand -gnatwe if -Werror is
passed and move expansion of -gnatw switches to before -gnatez.
Recent compilers enforce the RM C.6(18) clause more strictly; it says that
volatile record types are by-reference types. This changes the typical
error message now given in these cases.
gcc/ada/
* gcc-interface/decl.c (gnat_to_gnu_entity) <is_type>: Declare new
constant. Adjust error message issued by validate_size in the case
of by-reference types.
(validate_size): Always use the error strings passed by the caller.
My C++17 hardware interference sizes patch caused a bogus warning on 32-bit
x86, where we have a default L1 cache line size of 0, and the front end
complained that the default constructive interference size of 64 was larger
than that.
gcc/cp/ChangeLog:
* decl.c (cxx_init_decl_processing): Don't warn if L1 cache line
size is smaller than maxalign.
gcc/fortran/ChangeLog:
PR fortran/82314
* decl.c (add_init_expr_to_sym): For proper initialization of
array-valued named constants, the array bounds need to be
simplified before adding the initializer.
gcc/testsuite/ChangeLog:
PR fortran/82314
* gfortran.dg/pr82314.f90: New test.
gcc/fortran/ChangeLog:
PR fortran/85130
* expr.c (find_substring_ref): Handle given substring start and
end indices as signed integers, not unsigned.
gcc/testsuite/ChangeLog:
PR fortran/85130
* gfortran.dg/substr_6.f90: Revert commit r8-7574, adding back the
test that was erroneously considered invalid.
This resolves PR101574 "gcc/sparseset.h:215:20: error: suggest parentheses
around assignment used as truth value [-Werror=parentheses]", as (bogusly)
reported at commit a61f6afbee:
In file included from [...]/source-gcc/gcc/lra-lives.c:43:
[...]/source-gcc/gcc/lra-lives.c: In function ‘void make_hard_regno_dead(int)’:
[...]/source-gcc/gcc/sparseset.h:215:20: error: suggest parentheses around assignment used as truth value [-Werror=parentheses]
215 | && (((ITER) = sparseset_iter_elm (SPARSESET)) || 1); \
| ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[...]/source-gcc/gcc/lra-lives.c:304:3: note: in expansion of macro ‘EXECUTE_IF_SET_IN_SPARSESET’
304 | EXECUTE_IF_SET_IN_SPARSESET (pseudos_live, i)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
gcc/
PR bootstrap/101574
* diagnostic-spec.c (warning_suppressed_at, copy_warning): Handle
'RESERVED_LOCATION_P' locations.
* warning-control.cc (get_nowarn_spec, suppress_warning)
(copy_warning): Likewise.
To make it obvious what exactly the key type is. No change in behavior.
gcc/
* diagnostic-spec.h (typedef xint_hash_t): Use 'location_t' instead of...
(typedef key_type_t): ... this. Remove.
(nowarn_map): Document.
* diagnostic-spec.c (nowarn_map): Likewise.
* warning-control.cc (convert_to_key): Evolve functions into...
(get_location): ... these. Adjust all users.
The last missing piece of the C++17 standard library is the hardware
interference size constants. Much of the delay in implementing these has
been due to uncertainty about what the right values are, and even whether
there is a single constant value that is suitable; the destructive
interference size is intended to be used in structure layout, so program
ABIs will depend on it.
In principle, both of these values should be the same as the target's L1
cache line size. When compiling for a generic target that is intended to
support a range of target CPUs with different cache line sizes, the
constructive size should probably be the minimum size, and the destructive
size the maximum, unless you are constrained by ABI compatibility with
previous code.
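For illustration, the intended uses of the two constants look roughly like
this (a sketch based on the C++17 <new> declarations):

#include <atomic>
#include <new>

struct counters
{
  // Destructive: keep independently modified data on separate cache
  // lines to avoid false sharing.
  alignas (std::hardware_destructive_interference_size) std::atomic<int> a;
  alignas (std::hardware_destructive_interference_size) std::atomic<int> b;
};

struct pair
{
  // Constructive: data accessed together should fit on one cache line.
  int x;
  int y;
};
static_assert (sizeof (pair) <= std::hardware_constructive_interference_size,
               "pair fits on one cache line");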
From discussion on gcc-patches, I've come to the conclusion that the
solution to the difficulty of choosing stable values is to give up on it,
and instead encourage only uses where ABI stability is unimportant: in
particular, uses where the ABI is shared at most between translation units
built at the same time with the same flags.
To that end, I've added a warning for any use of the constant value of
std::hardware_destructive_interference_size in a header or module export.
Appropriate uses within a project can disable the warning.
A previous iteration of this patch included an -finterference-tune flag to
make the value vary with -mtune; this iteration makes that the default
behavior, which should be appropriate for all reasonable uses of the
variable. The previous default of "stable-ish" seems to me likely to have
been more of an attractive nuisance; since we can't promise actual
stability, we should instead make proper uses more convenient.
JF Bastien's implementation proposal is summarized at
https://github.com/itanium-cxx-abi/cxx-abi/issues/74
I implement this by adding new --params for the two sizes. Targets can
override these values in targetm.target_option.override() to support a range
of values for the generic target; otherwise, both will default to the L1
cache line size.
64 bytes still seems correct for all x86.
I'm not sure why he proposed 64/64 for generic 32-bit ARM; since the Cortex
A9 has a 32-byte cache line, I'd think 32/64 would make more sense.
He proposed 64/128 for generic AArch64, but since the A64FX now has a 256B
cache line, I've changed that to 64/256.
Other arch maintainers are invited to set ranges for their generic targets
if that seems better than using the default cache line size for both values.
With the above choice to reject stability as a goal, getting these values
"right" is now just a matter of what we want the default optimization to be,
and we can feel free to adjust them as CPUs with different cache lines
become more and less common.
gcc/ChangeLog:
* params.opt: Add destructive-interference-size and
constructive-interference-size.
* doc/invoke.texi: Document them.
* config/aarch64/aarch64.c (aarch64_override_options_internal):
Set them.
* config/arm/arm.c (arm_option_override): Set them.
* config/i386/i386-options.c (ix86_option_override_internal):
Set them.
gcc/c-family/ChangeLog:
* c.opt: Add -Winterference-size.
* c-cppbuiltin.c (cpp_atomic_builtins): Add __GCC_DESTRUCTIVE_SIZE
and __GCC_CONSTRUCTIVE_SIZE.
gcc/cp/ChangeLog:
* constexpr.c (maybe_warn_about_constant_value):
Complain about std::hardware_destructive_interference_size.
(cxx_eval_constant_expression): Call it.
* decl.c (cxx_init_decl_processing): Check
--param *-interference-size values.
libstdc++-v3/ChangeLog:
* include/std/version: Define __cpp_lib_hardware_interference_size.
* libsupc++/new: Define hardware interference size variables.
gcc/testsuite/ChangeLog:
* g++.dg/warn/Winterference.H: New file.
* g++.dg/warn/Winterference.C: New test.
* g++.target/aarch64/interference.C: New test.
* g++.target/arm/interference.C: New test.
* g++.target/i386/interference.C: New test.
As mentioned in the PR, we are currently missing support for the x86-64
micro-architecture levels in the target and target_clone attributes. While
the levels x86-64, x86-64-v2, x86-64-v3 and x86-64-v4 are supported values
of the -march option, there they are actually only aliases for the k8 CPU.
That said, they are much closer to the __builtin_cpu_supports function, and
we decided to implement them there.
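After this change, the levels can be used for dispatching along these lines
(a sketch; the accepted strings are assumed to follow the -march spellings):

int
foo (void)
{
  if (__builtin_cpu_supports ("x86-64-v3"))
    return 3;
  if (__builtin_cpu_supports ("x86-64-v2"))
    return 2;
  return 1;
}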
PR target/101696
gcc/ChangeLog:
* common/config/i386/cpuinfo.h (cpu_indicator_init): Add support
for x86-64 micro levels for __builtin_cpu_supports.
* common/config/i386/i386-cpuinfo.h (enum feature_priority):
Add priorities for the micro-arch levels.
(enum processor_features): Add new features.
* common/config/i386/i386-isas.h: Add micro-arch features.
* config/i386/i386-builtins.c (get_builtin_code_for_version):
Support the micro-arch levels by calling
__builtin_cpu_supports.
* doc/extend.texi: Document that the levels are supported by
__builtin_cpu_supports.
gcc/testsuite/ChangeLog:
* g++.target/i386/mv30.C: New test.
* gcc.target/i386/mvc16.c: New test.
* gcc.target/i386/builtin_target.c (CHECK___builtin_cpu_supports):
New.
Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
This patch adds simple folding of __builtin_aarch64_im_lane_boundsi where
we are not going to error out. It fixes the problem by removing the call
from the IR.
OK? Bootstrapped and tested on aarch64-linux-gnu with no regressions.
gcc/ChangeLog:
PR target/95969
* config/aarch64/aarch64-builtins.c (aarch64_fold_builtin_lane_check):
New function.
(aarch64_general_fold_builtin): Handle AARCH64_SIMD_BUILTIN_LANE_CHECK.
(aarch64_general_gimple_fold_builtin): Likewise.
gcc/testsuite/ChangeLog:
PR target/95969
* gcc.target/aarch64/lane-bound-1.c: New test.
* gcc.target/aarch64/lane-bound-2.c: New test.
m32r support never made it to glibc, and the support in the Linux kernel
was removed with 4.18. This does not remove much, but there is no reason to
keep around a port which never worked, or one for which the support in other
projects is gone.
OK? Checked to make sure m32r-linux and m32rle-linux were rejected
when building.
contrib/ChangeLog:
* config-list.mk: Remove m32r-linux and m32rle-linux
from the list.
gcc/ChangeLog:
* config.gcc: Add m32r-*-linux* and m32rle-*-linux*
to the Unsupported targets list.
Remove support for m32r-*-linux* and m32rle-*-linux*.
* config/m32r/linux.h: Removed.
* config/m32r/t-linux: Removed.
libgcc/ChangeLog:
* config.host: Remove m32r-*-linux* and m32rle-*-linux*.
* config/m32r/libgcc-glibc.ver: Removed.
* config/m32r/t-linux: Removed.
Right now liblto_plugin.so exports many libiberty symbols and
simple_object symbols, but really it just needs to export onload.
This fixes the problem by using "-export-symbols-regex onload" on
the libtool link line.
lto-plugin/ChangeLog:
PR lto/49664
* Makefile.am: Export only onload.
* Makefile.in: Regenerate.
In the testcase we generate invalid assembly for an SVE load predicate instruction.
The RTL for the insn is:
(insn 9 8 10 (set (reg:VNx16BI 68 p0)
(mem:VNx16BI (plus:DI (mult:DI (reg:DI 1 x1 [93])
(const_int 8 [0x8]))
(reg/f:DI 0 x0 [92])) [2 work_3(D)->array[offset_4(D)]+0 S8 A16]))
That addressing mode is not valid for the instruction [1] as it only accepts the addressing mode:
[<Xn|SP>{, #<imm>, MUL VL}]
This patch rejects the register index form for SVE predicate modes.
Bootstrapped and tested on aarch64-none-linux-gnu.
[1] https://developer.arm.com/documentation/ddi0602/2021-06/SVE-Instructions/LDR--predicate---Load-predicate-register-
gcc/ChangeLog:
PR target/102252
* config/aarch64/aarch64.c (aarch64_classify_address): Don't allow
register index for SVE predicate modes.
gcc/testsuite/ChangeLog:
PR target/102252
* g++.target/aarch64/sve/pr102252.C: New test.
Now that the jump thread back registry has been split into the generic
copier and the custom (old) copier, it becomes trivial to remove the
FSM bits from the jump threaders.
First, there's no need for an EDGE_FSM_THREAD type. The only reason
we were looking at the threading type was to determine what type of
copier to use, and now that the copier has been split, there's no need
to even look. However, there is one check in register_jump_thread
where we verify that only the generic copier can thread through
back-edges. I've removed that check in favor of a flag passed to the
constructor.
I've also removed all the FSM references from the code and tests.
Interestingly, some tests weren't even testing the right thing. They
were testing for "FSM" which would catch jump thread paths as well as
the backward threader *failing* on registering a path. *big eye roll*
The only remaining code that was actually checking for EDGE_FSM_THREAD
was adjust_paths_after_duplication, and the checks could be written
without looking at the edge type at all. For the record, the code
there is horrible: it's convoluted, hard to read, and doesn't have any
tests. I'd smack myself if I could go back in time.
All that remains are the FSM references in the --param's themselves.
I think we should s/fsm/threader/, since I envision a day when we can
share the cost basis code between the threaders. However, I don't
know what the proper procedure is for renaming existing compiler
options.
By the way, param_fsm_maximum_phi_arguments is no longer relevant
after the rewrite. We can nuke that one right away.
Tested on x86-64 Linux.
gcc/ChangeLog:
* tree-ssa-threadbackward.c
(back_threader_profitability::profitable_path_p): Remove FSM
references.
(back_threader_registry::register_path): Same.
* tree-ssa-threadedge.c
(jump_threader::simplify_control_stmt_condition): Same.
* tree-ssa-threadupdate.c (jt_path_registry::jt_path_registry):
Add backedge_threads argument.
(fwd_jt_path_registry::fwd_jt_path_registry): Pass
backedge_threads argument.
(back_jt_path_registry::back_jt_path_registry): Same.
(dump_jump_thread_path): Adjust for FSM removal.
(back_jt_path_registry::rewire_first_differing_edge): Same.
(back_jt_path_registry::adjust_paths_after_duplication): Same.
(back_jt_path_registry::update_cfg): Same.
(jt_path_registry::register_jump_thread): Same.
* tree-ssa-threadupdate.h (enum jump_thread_edge_type): Remove
EDGE_FSM_THREAD.
(class back_jt_path_registry): Add backedge_threads to
constructor.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr21417.c: Adjust for FSM removal.
* gcc.dg/tree-ssa/pr66752-3.c: Same.
* gcc.dg/tree-ssa/pr68198.c: Same.
* gcc.dg/tree-ssa/pr69196-1.c: Same.
* gcc.dg/tree-ssa/pr70232.c: Same.
* gcc.dg/tree-ssa/pr77445.c: Same.
* gcc.dg/tree-ssa/ranger-threader-4.c: Same.
* gcc.dg/tree-ssa/ssa-dom-thread-18.c: Same.
* gcc.dg/tree-ssa/ssa-dom-thread-6.c: Same.
* gcc.dg/tree-ssa/ssa-thread-12.c: Same.
* gcc.dg/tree-ssa/ssa-thread-13.c: Same.
Here, when partially instantiating the first pack expansion, substitution
into the condition of the constexpr if yields a still-dependent tree, so
tsubst_expr returns an IF_STMT with an unsubstituted IF_COND and with the
template arguments added to IF_STMT_EXTRA_ARGS. Hence after partial
instantiation the pack expansion pattern still refers to the unlowered
parameter pack 'ts' of level 2, and it's recorded as such in the new
PACK_EXPANSION_PARAMETER_PACKS.
During the subsequent final instantiation of the regenerated lambda we
crash in tsubst_pack_expansion because it can't find an argument pack
for this unlowered 'ts', due to the level mismatch. (Likewise when the
constexpr if is replaced by a requires-expr, which also uses the extra
args mechanism for avoiding partial instantiation.)
So essentially, a pack expansion pattern that contains an "extra args"
tree doesn't play well with partial instantiation. This patch fixes
this by forcing such pack expansions to use the extra args mechanism as
well.
PR c++/101764
gcc/cp/ChangeLog:
* cp-tree.h (PACK_EXPANSION_FORCE_EXTRA_ARGS_P): New accessor
macro.
* pt.c (has_extra_args_mechanism_p): New function.
(find_parameter_pack_data::found_extra_args_tree_p): New data
member.
(find_parameter_packs_r): Set ppd->found_extra_args_tree_p
appropriately.
(make_pack_expansion): Set PACK_EXPANSION_FORCE_EXTRA_ARGS_P if
ppd.found_extra_args_tree_p.
(use_pack_expansion_extra_args_p): Return true if there were
unsubstituted packs and PACK_EXPANSION_FORCE_EXTRA_ARGS_P.
(tsubst_pack_expansion): Pass the pack expansion to
use_pack_expansion_extra_args_p.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1z/constexpr-if35.C: New test.
We need to use the pointer equivalence tracking from evrp in the jump
threader. Instead of moving it to some *evrp.h header, it's cleaner for
it to live in its own file, since it's completely independent and not
evrp specific.
Tested on x86-64 Linux.
gcc/ChangeLog:
* Makefile.in (OBJS): Add value-pointer-equiv.o.
* gimple-ssa-evrp.c (class ssa_equiv_stack): Move to
value-pointer-equiv.*.
(ssa_equiv_stack::ssa_equiv_stack): Same.
(ssa_equiv_stack::enter): Same.
(ssa_equiv_stack::leave): Same.
(ssa_equiv_stack::push_replacement): Same.
(ssa_equiv_stack::get_replacement): Same.
(is_pointer_ssa): Same.
(class pointer_equiv_analyzer): Same.
(pointer_equiv_analyzer::pointer_equiv_analyzer): Same.
(pointer_equiv_analyzer::~pointer_equiv_analyzer): Same.
(pointer_equiv_analyzer::set_global_equiv): Same.
(pointer_equiv_analyzer::set_cond_equiv): Same.
(pointer_equiv_analyzer::get_equiv): Same.
(pointer_equiv_analyzer::enter): Same.
(pointer_equiv_analyzer::leave): Same.
(pointer_equiv_analyzer::get_equiv_expr): Same.
(pta_valueize): Same.
(pointer_equiv_analyzer::visit_stmt): Same.
(pointer_equiv_analyzer::visit_edge): Same.
(hybrid_folder::value_of_expr): Same.
(hybrid_folder::value_on_edge): Same.
* value-pointer-equiv.cc: New file.
* value-pointer-equiv.h: New file.
The current restriction on folding memcpy to a single element of size
MOVE_MAX is excessively cautious on most machines and limits some
significant further optimizations. So relax the restriction, provided
the copy size does not exceed MOVE_MAX * MOVE_RATIO and a SET
insn exists for moving the value into machine registers.
Note that there were already checks in place for having misaligned
move operations when one or more of the operands were unaligned.
On Arm this now permits optimizing
uint64_t bar64(const uint8_t *rData1)
{
uint64_t buffer;
memcpy(&buffer, rData1, sizeof(buffer));
return buffer;
}
from
ldr r2, [r0] @ unaligned
sub sp, sp, #8
ldr r3, [r0, #4] @ unaligned
strd r2, [sp]
ldrd r0, [sp]
add sp, sp, #8
to
mov r3, r0
ldr r0, [r0] @ unaligned
ldr r1, [r3, #4] @ unaligned
PR target/102125 - (ARM Cortex-M3 and newer) missed optimization. memcpy not needed operations
gcc/ChangeLog:
PR target/102125
* gimple-fold.c (gimple_fold_builtin_memory_op): Allow folding
memcpy if the size is not more than MOVE_MAX * MOVE_RATIO.
DImode is currently handled only for machines with vector modes
enabled, but this is unduly restrictive; the move is generally better done
in core registers.
gcc/ChangeLog:
PR target/102125
* config/arm/arm.md (movmisaligndi): New define_expand.
* config/arm/vec-common.md (movmisalign<mode>): Iterate over VDQ mode.
gen_lowpart_general handles forming a lowpart of a MEM by using
adjust_address to rework and validate a new version of the MEM.
Do the same for gen_highpart rather than calling simplify_gen_subreg
for this case.
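The approach is roughly the following (a hedged sketch, not the exact new
gen_highpart code):

/* For a MEM, form the highpart by offsetting the address and letting
   adjust_address rework and validate the new MEM.  */
if (MEM_P (x))
  return adjust_address (x, mode,
                         subreg_highpart_offset (mode, GET_MODE (x)));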
gcc/ChangeLog:
PR target/102125
* emit-rtl.c (gen_highpart): Use adjust_address to handle
MEM rather than calling simplify_gen_subreg.
As we are still building cr16-elf via ./contrib/config-list.mk, let's add
--enable-obsolete so this has a chance to work.
contrib/ChangeLog:
* config-list.mk (LIST): Add --enable-obsolete for cr16-elf.
INIT_CUMULATIVE_ARGS() expands to multiple statements, which breaks when
the macro is used right after an `if` statement. Wrap it in a block.
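The hazard is the usual one with multi-statement macros (a made-up minimal
example, not the real macro body):

#define INIT_TWO(a, b) (a) = 0; (b) = 0

if (cond)
  INIT_TWO (x, y);  /* only '(a) = 0' is guarded by the if */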
gcc/ChangeLog:
* config/alpha/vms.h (INIT_CUMULATIVE_ARGS): Wrap multi-statement
define into a block.
This removes the always-defined DARWIN_PREFER_DWARF and the code
guarded by it being undefined, removing the possibility of
defaulting some i386 Darwin configurations to STABS when it is
not defined.
2021-09-10 Richard Biener <rguenther@suse.de>
* config/darwin.h (DARWIN_PREFER_DWARF): Do not define.
* config/i386/darwin.h (PREFERRED_DEBUGGING_TYPE): Do not
change based on DARWIN_PREFER_DWARF not being defined.
With the last adjustment I failed to remove a stray undef of
PREFERRED_DEBUGGING_TYPE from config/i386/lynx.h
2021-09-13 Richard Biener <rguenther@suse.de>
* config/i386/lynx.h: Remove undef of PREFERRED_DEBUGGING_TYPE
to inherit from elfos.h
This adds cr16-*-* to the list of obsoleted targets in config.gcc
2021-09-13 Richard Biener <rguenther@suse.de>
* config.gcc: Add cr16-*-* to the list of obsoleted targets.
This switches the AVR port to generate DWARF2 debugging info by
default since the support for STABS is going to be deprecated for
GCC 12.
2021-09-10 Richard Biener <rguenther@suse.de>
* config/avr/elf.h (PREFERRED_DEBUGGING_TYPE): Remove
override, pick up DWARF2_DEBUG define from elfos.h
The RX port defaults to STABS when -mas100-syntax is used because
the AS100 assembler does not support some of the pseudo-ops used
by DWARF2 debug emission. Since STABS is going to be deprecated,
that has to change. The following simply always uses DWARF2,
likely leaving -mas100-syntax broken when debug info is generated.
Can the RX port maintainer please sort out the situation?
2021-09-10 Richard Biener <rguenther@suse.de>
* config/rx/rx.h (PREFERRED_DEBUGGING_TYPE): Always define to
DWARF2_DEBUG.
This changes the default debug format for Alpha/VMS to DWARF2 only,
skipping emission of VMS debug info which is going to be deprecated
for GCC 12 alongside the support for STABS.
2021-09-10 Richard Biener <rguenther@suse.de>
* config/alpha/vms.h (PREFERRED_DEBUGGING_TYPE): Define to
DWARF2_DEBUG.
This removes the fallback to STABS as default for cygwin and mingw
when the assembler does not support .secrel32 and the default is
to emit 32-bit code. Support for .secrel32 was added in binutils 2.16,
released in 2005, so instead document that as the minimum requirement.
I left the now-unused check for .secrel32 in configure in case
somebody wants to turn it into an error or warning.
2021-09-10 Richard Biener <rguenther@suse.de>
* config/i386/cygming.h: Always default to DWARF2 debugging.
Do not define DBX_DEBUGGING_INFO, that's done via dbxcoff.h
already.
* doc/install.texi: Document binutils 2.16 as minimum
requirement for mingw.
We noticed that SPEC2017 503.bwaves_r run time degrades by
about 8% on P8 and P9 if we enable vectorization at O2
fast-math (with the cheap vect cost model). Compared to Ofast, the
compiler doesn't do the loop interchange on the innermost
loop, so it's not profitable to vectorize it.
As per Richi's comments [1], this follows a similar idea: over-price
the vector construction fed by VMAT_ELEMENTWISE or VMAT_STRIDED_SLP.
Instead of adding the extra cost immediately when costing the vector
construction, it first records how many loads and vectorized
statements are in the given loop; later, in rs6000_density_test
(called by finish_cost), it computes the load density ratio against
all vectorized statements, checks it against the corresponding
thresholds DENSITY_LOAD_NUM_THRESHOLD and DENSITY_LOAD_PCT_THRESHOLD,
and does the actual extra pricing if both thresholds are exceeded.
Note that this new load density heuristic check is based on
some fields in the target cost structure which are updated as needed
when scanning each add_stmt_cost entry; it's independent of the
existing code in rs6000_density_test which requires scanning
non_vect stmts. Since it checks the load stmt count vs. all
vectorized stmts, it's a kind of density measure, so I put it in
rs6000_density_test. For the same reason, to keep it independent,
I didn't put it as an else arm of the existing density threshold
check hunk, or before that hunk.
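In pseudo-code the new check amounts to something like this (a sketch; the
member names come from the ChangeLog below, while the exact formula and the
threshold comparison are assumptions):

/* Penalize loops whose loads dominate the vectorized statements.  */
if (data->nloads > DENSITY_LOAD_NUM_THRESHOLD
    && data->nloads * 100 > data->nstmts * DENSITY_LOAD_PCT_THRESHOLD)
  data->cost[vect_body] += data->extra_ctor_cost;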
While investigating a -1.04% degradation in 526.blender_r
on Power8, I noticed that the extra penalized cost of 320 on one
single vector construction for mode V16QI is much exaggerated,
which makes the final body cost unreliable, so this patch adds
a maximum bound on the extra penalized cost for each vector
construction statement.
Full SPEC2017 performance evaluation on Power8/Power9 with
option combinations:
* -O2 -ftree-vectorize {,-fvect-cost-model=very-cheap}
{,-ffast-math}
* {-O3, -Ofast} {,-funroll-loops}
The bwaves_r degradations on P8/P9 have been fixed; nothing else
remarkable was observed. The Power10 -Ofast -funroll-loops run
shows it's neutral, while the -O2 -ftree-vectorize run shows the
bwaves_r degradation is fixed as expected.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2021-May/570076.html
gcc/ChangeLog:
* config/rs6000/rs6000.c (struct rs6000_cost_data): New members
nstmts, nloads and extra_ctor_cost.
(rs6000_density_test): Add load density related heuristics. Do
extra costing on vector construction statements if needed.
(rs6000_init_cost): Init new members.
(rs6000_update_target_cost_per_stmt): New function.
(rs6000_add_stmt_cost): Factor vect_nonmem hunk out to function
rs6000_update_target_cost_per_stmt and call it.
As Segher pointed out, typedef'ing struct _rs6000_cost_data as
rs6000_cost_data is useless, so rewrite it without the typedef.
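The change is mechanical (a sketch, with the members elided):

/* Before: the typedef adds nothing in C++.  */
typedef struct _rs6000_cost_data { /* ... */ } rs6000_cost_data;

/* After.  */
struct rs6000_cost_data { /* ... */ };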
gcc/ChangeLog:
* config/rs6000/rs6000.c (struct rs6000_cost_data): Remove typedef.
(rs6000_init_cost): Adjust.