This adds a more correct .machine for most older CPUs. It should be
conservative in the sense that everything we handled before is handled at
least as well now. This does not yet revamp the server CPU handling; that
is too risky at this point in time.
Tested on powerpc64-linux {-m32,-m64}. Also manually tested with all
-mcpu=, and the output of that passed through the GNU assembler.
2022-03-04 Segher Boessenkool <segher@kernel.crashing.org>
* config/rs6000/rs6000.cc (rs6000_machine_from_flags): Restructure a
bit. Handle most older CPUs.
The std manglings for things like std::string should not apply if
we're not in the global module.
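As a hedged illustration (hypothetical, not one of the new tests): a class
named 'string' declared in namespace std inside the purview of a named module
is attached to that module, so it is not the global-module std::string and the
builtin substitutions must not apply to it:

  export module M;
  namespace std
  {
    // Module-attached declaration; not the std::string of the global module,
    // so the "Ss" shorthand must not be used when mangling it.
    export class string {};
  }
  export std::string f ();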
gcc/cp/
* mangle.cc (is_std_substitution): Check global module.
(is_std_substitution_char): Return bool.
gcc/testsuite/
* g++.dg/modules/std-subst-2.C: New.
* g++.dg/modules/std-subst-3.C: New.
* g++.dg/modules/std-subst-4_a.C: New.
* g++.dg/modules/std-subst-4_b.C: New.
* g++.dg/modules/std-subst-4_c.C: New.
PR analyzer/103521 reports that commit r12-5585-g132902177138c09803d639e12b1daebf2b9edddc
("analyzer: further false leak fixes due to overzealous state merging [PR103217]")
led to failures of gcc.dg/analyzer/pr93032-mztools.c on some targets,
where rather than reporting FILE * leaks, the analyzer would hit
complexity limits and give up.
The cause is that pr93032-mztools.c has some 'unsigned char' values that
are copied to 'char'. On targets where 'char' defaults to being signed,
this leads to casts, whereas on targets where 'char' defaults to being
unsigned, no casts are needed.
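As a hedged illustration (not the actual testcase), a copy of this shape is
enough to introduce the casts in question on targets where plain 'char' is
signed:

  // Hypothetical sketch: each copy from 'unsigned char' to plain 'char'
  // involves a conversion on -fsigned-char targets, which deepens the
  // analyzer's symbolic values for the copied data.
  void
  copy_bytes (char *dst, const unsigned char *src, unsigned n)
  {
    for (unsigned i = 0; i < n; i++)
      dst[i] = src[i];   // implicit cast when 'char' is signed
  }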
When the casts occur, various symbolic values within the loop (the
locals 'crc', 'cpsize', and 'uncpsize') become sufficiently complex as
to hit the --param=analyzer-max-svalue-depth= limit, and are treated as
UNKNOWN, allowing the analysis of the loop to quickly terminate, with
much of this state as UNKNOWN (but retaining the FILE * information, and
thus correctly reporting the FILE * leaks).
Without the casts, the symbolic values for these variables don't quite
hit the complexity limit, and the analyzer attempts to track these
values in the loop, leading to the analyzer eventually hitting the
per-program-point limit on the number of states, and giving up on
these execution paths, thus failing to report the FILE * leaks.
This patch tweaks the default value of the param
--param=analyzer-max-svalue-depth=
from 13 down to 12.  This allows the pr93032-mztools.c testcase to
succeed with both -fsigned-char and -funsigned-char, and thus allows
this integration test to succeed on both styles of target without
requiring extra command-line flags. The patch duplicates the test so
it runs with both -fsigned-char and -funsigned-char.
My hope is that this will allow similar cases to terminate loop analysis
earlier. I tried reducing it further, but doing so caused some test
cases to regress.
The tradeoff here is between:
(a) precision of individual states in the analysis, versus
(b) maximizing code-path coverage in the analysis
I can imagine a more nuanced approach that splits the current
per-program-point hard limit into soft and hard limits: on hitting the
soft limit at a program point, go into a less precise mode for states
at that program point, in the hope that we can fully explore execution
paths beyond it without hitting the hard limit, but this seems like
GCC 13 material.
Another possible future fix might be for the analysis plan to make an
attempt to prioritize parts of the code in an enode budget, rather than
setting the same hard limit uniformly across all program points.
gcc/analyzer/ChangeLog:
PR analyzer/103521
* analyzer.opt (-param=analyzer-max-svalue-depth=): Reduce from 13
to 12.
gcc/testsuite/ChangeLog:
PR analyzer/103521
* gcc.dg/analyzer/pr93032-mztools.c: Move to...
* gcc.dg/analyzer/pr93032-mztools-signed-char.c: ...this, adding
-fsigned-char to args, and...
* gcc.dg/analyzer/pr93032-mztools-unsigned-char.c: ...copy to here,
adding -funsigned-char to args.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
DECL_MD_FUNCTION_CODE() returns an int; with one particular compiler, the
code in darwin_fold_builtin() triggers a warning.
Fixed thus.
Signed-off-by: Iain Sandoe <iain@sandoe.co.uk>
gcc/ChangeLog:
* config/darwin.cc (darwin_fold_builtin): Make fcode an int to
avoid a mismatch with DECL_MD_FUNCTION_CODE().
PowerPC Darwin8 is the last version to use an unwind frame fallback routine.
This had been omitted from the new shared EH library, along with one more
header dependency that only fires there.
Signed-off-by: Iain Sandoe <iain@sandoe.co.uk>
libgcc/ChangeLog:
* config/rs6000/t-darwin-ehs: Add darwin-fallback.o.
* config/t-darwin-ehs: Add dependency on unwind.h.
This implements a new module mangling ABI, as the original one had a
few issues:
a) it was not demangleable (oops)
b) it implemented a weak ownership model.
This implements a strong ownership model, so that exported entities
from named modules are mangled to include their module attachment.
This gives more informative linker diagnostics and better module
isolation. Weak ownership was hoped to allow backwards compatibility
with non-modular code, but in practice was very brittle, and C++20
added new semantics for linkage declarations that cover the needed
functionality.
For the avoidance of doubt: Clang is also moving to this ABI, and documentation
will be added to the Itanium ABI specification.
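As a hedged illustration (the declarations and the approximate mangled name
below are hypothetical, not taken from the testsuite), strong ownership means
an exported entity attached to a named module carries the module name in its
mangled symbol:

  // Hypothetical sketch: under the new ABI, f() below is attached to module
  // Foo and its mangled name includes the module (roughly _ZW3Foo1fv), so it
  // cannot collide with a non-modular ::f() and linker diagnostics can name
  // the owning module.
  export module Foo;
  export void f ();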
gcc/cp/
* cp-tree.h (mangle_identifier): Replace with ...
(mangle_module_component): ... this.
* mangle.cc (dump_substitution_candidates): Adjust.
(add_substitution): Likewise.
(find_substitution): Likewise.
(unmangled_name_p): Likewise.
(mangle_module_substitution): Reimplement.
(mangle_module_component): New.
(write_module, maybe_write_module): Adjust.
(write_name): Drop modules here.
(write_unqualified): Do them here instead.
(mangle_global_init): Adjust.
* module.cc (module_state::mangle): Adjust.
(mangle_module): Likewise.
(get_originating_module): Adjust.
gcc/testsuite/
* g++.dg/modules/fn-inline-1_b.C: Adjust.
* g++.dg/modules/fn-inline-1_c.C: Adjust.
* g++.dg/modules/imp-inline-1_a.C: Adjust.
* g++.dg/modules/imp-inline-1_b.C: Adjust.
* g++.dg/modules/init-2_a.C: Adjust.
* g++.dg/modules/init-2_b.C: Adjust.
* g++.dg/modules/init-2_c.C: Adjust.
* g++.dg/modules/member-def-2_d.C: Adjust.
* g++.dg/modules/mod-sym-1.C: Adjust.
* g++.dg/modules/mod-sym-2.C: Adjust.
* g++.dg/modules/mod-sym-3.C: Adjust.
* g++.dg/modules/sym-subst-1.C: Adjust.
* g++.dg/modules/sym-subst-2_b.C: Adjust.
* g++.dg/modules/sym-subst-3_a.C: Adjust.
* g++.dg/modules/sym-subst-3_b.C: Adjust.
* g++.dg/modules/sym-subst-4.C: Adjust.
* g++.dg/modules/sym-subst-5.C: Adjust.
* g++.dg/modules/sym-subst-6.C: Adjust.
* g++.dg/modules/tpl-spec-1_a.C: Adjust.
* g++.dg/modules/tpl-spec-2_b.C: Adjust.
* g++.dg/modules/tpl-spec-2_d.C: Adjust.
* g++.dg/modules/tpl-spec-3_a.C: Adjust.
* g++.dg/modules/virt-1_a.C: Adjust.
* g++.dg/modules/virt-2_a.C: Adjust.
* g++.dg/modules/virt-2_b.C: Adjust.
* g++.dg/modules/virt-2_c.C: Adjust.
* g++.dg/modules/vtt-1_a.C: Adjust.
* g++.dg/modules/vtt-1_b.C: Adjust.
Follow-up discussion to the initial patch for this PR identified that it is
preferable to avoid the LRA change and instead arrange for the target to reject
the hi and lo_sum selections when presented with an invalid address.
We split the Darwin high/low selectors into two:
1. One that handles non-PIC addresses (kernel mode, mdynamic-no-pic).
2. One that handles PIC addresses and rejects SYMBOL_REFs unless they are
suitably wrapped in the MACHOPIC_OFFSET unspec.
The second case is handled by providing a new predicate (macho_pic_address)
that checks the requirements.
Signed-off-by: Iain Sandoe <iain@sandoe.co.uk>
PR target/104117
gcc/ChangeLog:
* config/rs6000/darwin.md (@machopic_high_<mode>): New.
(@machopic_low_<mode>): New.
* config/rs6000/predicates.md (macho_pic_address): New.
* config/rs6000/rs6000.cc (rs6000_legitimize_address): Do not
apply the TLS processing to Darwin.
* lra-constraints.cc (process_address_1): Revert the changes
in r12-7209.
The glibc build is showing a build error due to extra "error" checking from my
PR87496 fix. That checking was overeager, disallowing setting the long double
size to 64 bits if the 128-bit long double ABI had already been specified.
Now we only emit an error when a 128-bit long double ABI is specified but the
long double size is not 128 bits. This also fixes an erroneous error when
-mabi=ieeelongdouble is used and ISA 2.06 is not enabled, but the long double
size has been changed to 64 bits.
2022-03-04 Peter Bergner <bergner@linux.ibm.com>
gcc/
PR target/87496
PR target/104208
* config/rs6000/rs6000.cc (rs6000_option_override_internal): Make the
ISA 2.06 requirement for -mabi=ieeelongdouble conditional on
-mlong-double-128.
Move the -mabi=ieeelongdouble and -mabi=ibmlongdouble error checking
from here...
* common/config/rs6000/rs6000-common.cc (rs6000_handle_option):
... to here.
gcc/testsuite/
PR target/87496
PR target/104208
* gcc.target/powerpc/pr104208-1.c: New test.
* gcc.target/powerpc/pr104208-2.c: Likewise.
* gcc.target/powerpc/pr87496-2.c: Swap long double options to trigger
the expected error.
* gcc.target/powerpc/pr87496-3.c: Likewise.
The following testcase regressed when SRA started punting on stores to
TREE_READONLY vars. We document that:
"In a VAR_DECL, PARM_DECL or FIELD_DECL, or any kind of ..._REF node,
nonzero means it may not be the lhs of an assignment."
so the SRA change looks desirable. On the other hand, at least in this
testcase TREE_READONLY is set there intentionally by the
PR85873 fix, because gimplify_init_constructor itself uses TREE_READONLY
on the object to determine if it can perform promotion to static const
or not.
So, similarly to other spots in the gimplifier where we also clear
TREE_READONLY when we emit IL that stores into the object, this patch
does the same in gimplify_init_constructor, but in a way such that
the TREE_READONLY test for the promotion to static const keeps working
and doesn't change anything for notify_temp_creation mode, which doesn't
emit any IL, just tests if it would need a temporary or not.
This keeps PR85873 testcase working as before and fixes this regression.
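A hedged sketch of the kind of code involved (hypothetical, not the PR104529
testcase): a const automatic aggregate whose initializer depends on runtime
values cannot be promoted to static const, so the gimplifier emits stores into
it and now clears TREE_READONLY first:

  int g ();
  int f ()
  {
    // The initializer calls g(), so 'a' cannot become a static const object;
    // runtime stores into 'a' are emitted, and TREE_READONLY must be cleared
    // so later passes don't see stores into a read-only object.
    const int a[2] = { g (), 1 };
    return a[0];
  }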
2022-03-04 Jakub Jelinek <jakub@redhat.com>
PR middle-end/104529
* gimplify.cc (gimplify_init_constructor): Clear TREE_READONLY
on automatic objects which will be runtime initialized.
* g++.dg/tree-ssa/pr104529.C: New test.
... by generalizing the existing 'gcc/omp-low.cc:task_shared_vars'.
Fix-up for commit 9b32c1669a
"OpenACC 'kernels' decomposition: Mark variables used in
synthesized data clauses as addressable [PR100280]".
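As a hedged illustration (hypothetical, not one of the adjusted testcases),
the kind of code affected looks like this:

  // Hypothetical sketch: when the kernels region is decomposed, 'x' ends up
  // in a synthesized data clause and therefore has to be made addressable.
  void f ()
  {
    int x = 0;
  #pragma acc kernels
    {
      x = 1;
    }
  }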
PR middle-end/100280
PR middle-end/104132
PR middle-end/104133
gcc/
* omp-low.cc (task_shared_vars): Rename to
'make_addressable_vars'. Adjust all users.
(scan_sharing_clauses) <OMP_CLAUSE_MAP>: Use it for
'OMP_CLAUSE_MAP_DECL_MAKE_ADDRESSABLE' DECLs, too.
gcc/testsuite/
* c-c++-common/goacc/kernels-decompose-pr104061-1-3.c: Adjust.
* c-c++-common/goacc/kernels-decompose-pr104061-1-4.c: Likewise.
* c-c++-common/goacc/kernels-decompose-pr104132-1.c: Likewise.
* c-c++-common/goacc/kernels-decompose-pr104133-1.c: Likewise.
libgomp/
* testsuite/libgomp.oacc-c-c++-common/kernels-decompose-1.c:
Extend.
The r12-7287-g1b71bc7c8b18bd1b change improved the -Wdeprecated
warning for C++, but regressed it for C, in particular in
gcc.dg/deprecated.c testcase we now report a type that actually isn't
deprecated as deprecated instead of the one that is deprecated.
The following change tries to find the middle ground between what
we used to do before and what r12-7287 change does.
If TYPE_STUB_DECL (node) is non-NULL (that is what happens with
those C tests), then it will do what it used to do before (just smarter:
there is no need to call lookup_attribute here when it is called again a few
lines below this); if it is NULL, it will try
TYPE_STUB_DECL (TYPE_MAIN_VARIANT (node)) - what the deprecated-16.C
test needs.
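As a hedged illustration in the spirit of gcc.dg/deprecated.c (not copied
from it), the diagnostic should name the deprecated variant the user wrote,
not its non-deprecated main variant:

  // Hypothetical example: the warning should refer to 'INT1', the deprecated
  // typedef actually used, rather than to plain 'int'.
  typedef int INT1 __attribute__ ((deprecated));
  INT1 should_warn;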
2022-03-04 Jakub Jelinek <jakub@redhat.com>
PR c/104627
* tree.cc (warn_deprecated_use): For types prefer to use node
and only use TYPE_MAIN_VARIANT (node) if TYPE_STUB_DECL (node) is
NULL.
The std::__debug::vector isn't usable in constant expressions, so this
test fails in debug mode. Until the debug vector is fixed we can just
make the test use the non-debug one.
libstdc++-v3/ChangeLog:
PR libstdc++/104748
* testsuite/std/ranges/adaptors/all.cc: Use non-debug vector for
constexpr test.
This fixes a test failure due to a non-reserved name in an AIX system
header (included via <pthread.h>). That name clashes with one of the
names we check our own headers for, so skip checking that name on AIX.
libstdc++-v3/ChangeLog:
* testsuite/17_intro/names.cc (func): Undef on AIX.
This removes a FIXME in <compare>, defining the total order for
floating-point types. I originally opened PR96526 to request a new
compiler built-in to implement this, but now that we have std::bit_cast
it can be done entirely in the library.
The implementation is based on the glibc definitions of totalorder,
totalorderf, totalorderl etc.
I think this works for all the types that satisfy std::floating_point
today, and should also work for the types expected to be added by P1467
except for std::bfloat16_t. It also supports some additional types that
don't currently satisfy std::floating_point, such as __float80, but we
probably do want that to satisfy the concept for non-strict modes.
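A minimal sketch of the underlying technique (assuming an IEEE binary32
'float'; this is not the libstdc++ code): map each value's bit pattern to an
unsigned key that is monotonic in the IEEE totalOrder relation, then compare
the keys:

  #include <bit>
  #include <compare>
  #include <cstdint>

  // Hedged illustration only, for the 'float' case.
  std::strong_ordering
  strong_order_float (float x, float y) noexcept
  {
    auto key = [] (float f) -> std::uint32_t {
      std::uint32_t bits = std::bit_cast<std::uint32_t> (f);
      // Negative values (sign bit set): flipping all bits reverses their
      // order and places them below every non-negative value.  Non-negative
      // values: setting the top bit keeps their order and places them above.
      return (bits & 0x80000000u) ? ~bits : bits | 0x80000000u;
    };
    return key (x) <=> key (y);
  }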
libstdc++-v3/ChangeLog:
PR libstdc++/96526
* libsupc++/compare (strong_order): Add missing support for
floating-point types.
* testsuite/18_support/comparisons/algorithms/strong_order_floats.cc:
New test.
This rejects variables of array type, array elements, and derived type
members when used as the event handle inside a detach clause (in accordance
with the OpenMP specification). Previously this would lead to an ICE.
2022-03-03 Kwok Cheung Yeung <kcy@codesourcery.com>
gcc/fortran/
PR fortran/104131
* openmp.cc (gfc_match_omp_detach): Move check for type of event
handle to...
(resolve_omp_clauses) ...here. Also check that the event handle is
not an array, or an array access or structure element access.
gcc/testsuite/
PR fortran/104131
* gfortran.dg/gomp/pr104131.f90: New.
* gfortran.dg/gomp/task-detach-1.f90: Update expected error message.
In gcc-5 to gcc-11, the default ptx isa version was 3.1.
On trunk, the default is now 6.0, which is also the value that will be used in
the libraries.
Consequently, there may be setups with an older driver that worked with
gcc-11, but will become unsupported with gcc-12.
Fix this by building the libraries with mptx=3.1.
After this, setups with an older driver still won't work out of the box
with gcc-12, because the default ptx isa version has changed, but should work
after specifying mptx=3.1.
gcc/ChangeLog:
2022-03-03 Tom de Vries <tdevries@suse.de>
* config/nvptx/t-nvptx (MULTILIB_EXTRA_OPTS): Add mptx=3.1.
In gcc-11, when specifying -misa=sm_30, an executable may still contain sm_35
code (due to libraries being built with the default -misa=sm_35), so it won't
run on an sm_30 board.
Fix this by building libraries with sm_30, as was the case in gcc-5 to gcc-10.
gcc/ChangeLog:
2022-03-03 Tom de Vries <tdevries@suse.de>
PR target/104758
* config/nvptx/t-nvptx (MULTILIB_EXTRA_OPTS): Add misa=sm_30.
In PR97348, we ran into the problem that recent CUDA dropped support for
sm_30, which inhibited the build when building with CUDA bin in the path,
because the nvptx-tools assembler uses CUDA's ptxas to do ptx verification.
To fix this, in gcc-11 the default sm_xx was moved from sm_30 to sm_35.
This however broke support for sm_30 boards: an executable built for sm_30
might contain sm_35 code from the libraries, which are built with the default
sm_xx (PR104758).
We want to fix this by going back to having the libraries built with sm_30, as
was the case for gcc-5 to gcc-10. That however reintroduces the problem from
PR97348.
Deal with PR97348 in the simplest way possible: when calling the assembler for
sm_30, specify --no-verify.
This has the unfortunate effect that after fixing PR104758 by building
libraries with sm_30, the libraries are no longer verified. This can be
improved upon by:
- adding a configure test in gcc that tests if CUDA supports sm_30, and
if so disabling this patch
- dealing with this in nvptx-tools somehow, either:
  - detecting at ptxas execution time that it doesn't support sm_30, or
  - detecting this at nvptx-tools configure time.
gcc/ChangeLog:
2022-03-03 Tom de Vries <tdevries@suse.de>
* config/nvptx/nvptx.h (ASM_SPEC): Add %{misa=sm_30:--no-verify}.
With target board nvptx-none-run/-mptx=3.1 we run into:
...
cc1: error: PTX version (-mptx) needs to be at least 4.2 to support \
selected -misa (sm_53)^M
compiler exited with status 1
FAIL: gcc.target/nvptx/sm53.c (test for excess errors)
...
Fix this by adding -mptx=_ in sm53.c and similar.
Tested on nvptx.
gcc/testsuite/ChangeLog:
2022-03-03 Tom de Vries <tdevries@suse.de>
* gcc.target/nvptx/sm53.c: Add -mptx=_.
* gcc.target/nvptx/sm70.c: Same.
* gcc.target/nvptx/sm75.c: Same.
* gcc.target/nvptx/sm80.c: Same.
When offloading to nvptx is enabled, scan_omp_simd duplicates the simd
region including its clauses and body using inliner's
copy_gimple_seq_and_replace_locals. That works nicely for decls, remaps
only those that are seen in the nested bind expr vars (i.e. local variables)
and doesn't remap other vars. But it always remaps SSA_NAMEs: it doesn't
know whether their def stmt is outside of the simd (then it better shouldn't be
remapped) or inside of it (then it should), and without cfg/dominators that is
pretty hard to figure out (well, we could walk the region twice, first noting
the SSA_NAMEs defined by each stmt seen there and then remapping only those
visited SSA_NAMEs).
This patch uses a simpler way: temporarily disable into_ssa for the clauses and
body of each simd region. We already disable into_ssa e.g. in parallel/target/task
etc. regions through push_gimplify_context (), but for simd we don't push
any gimplification context, and apart from into_ssa I think we don't need one.
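As a hedged illustration (hypothetical, not the PR testcase), the kind of
region affected looks like this; its clauses and body are now gimplified with
into_ssa temporarily disabled, as described above:

  // Hypothetical sketch of a simd region that scan_omp_simd would duplicate
  // when offloading to nvptx is enabled.
  void f (int *a, int n)
  {
  #pragma omp simd
    for (int i = 0; i < n; i++)
      a[i] += 1;
  }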
2022-03-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/104757
* gimplify.cc (gimplify_omp_loop): Call gimplify_expr rather than
gimplify_omp_for.
(gimplify_expr) <case OMP_SIMD>: Temporarily disable
gimplify_ctxp->into_ssa around call to gimplify_omp_for.
* gfortran.dg/gomp/pr104757.f90: New test.
* gcc.dg/gomp/pr104757.c: New test.
The following testcase ICEs on x86_64 when asked to use the pre-GCC 8
ABI where zero sized arguments weren't ignored.
In GCC 7 the emit_push_insn calls in store_one_arg were unconditional,
it is true that they didn't actually push anything because the argument had
zero size, but because arg->locate.alignment_pad is 8 in this case,
emit_push_insn at the end performs
  if (alignment_pad && args_addr == 0)
    anti_adjust_stack (alignment_pad);
and an assert later on is upset if we don't do it.
The following patch keeps the emit_push_insn call conditional but calls
anti_adjust_stack by hand when needed for the zero-sized arguments.
For the new x86_64 ABI where zero-sized arguments are ignored,
arg->locate.alignment_pad is 0 in this case, so nothing changes
- we really do ignore the argument in that case.
There is another emit_push_insn call earlier in store_one_arg, also made
conditional on non-zero size by Marek in GCC 8, but that one is for
arguments with non-BLKmode and the only way those can be zero size is
if they are TYPE_EMPTY_P aka when they are completely ignored. But
I believe arg->locate.alignment_pad should be 0 in that case, so IMHO
there is no need to do anything in the second spot.
2022-03-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/104558
* calls.cc (store_one_arg): When not calling emit_push_insn
because size_rtx is const0_rtx, call at least anti_adjust_stack
on arg->locate.alignment_pad if !argblock and the alignment might
be non-zero.
* gcc.dg/pr104558.c: New test.
gcc/fortran/ChangeLog:
PR fortran/104573
* resolve.cc (resolve_structure_cons): Avoid NULL pointer
dereference when there is no valid component.
gcc/testsuite/ChangeLog:
PR fortran/104573
* gfortran.dg/assumed_type_14.f90: New test.
The testcase references a vector type that elicits a psabi warning.
This patch adds the option to suppress the warning.
* c-c++-common/pr104505.c: Add -Wno-psabi.
Unlike e.g. remove_inheritance_pseudos, undo_optional_reloads didn't
deal with subregs, so instead of removing multi-word moves, it
replaced the reload pseudo with the original pseudo. Besides the
redundant move, that retained the clobber of the dest, that starts a
multi-word move. After the remap, the sequence that should have
become a no-op move starts by clobbering the original pseudo and then
moving its pieces onto themselves. The problem is the clobber: it
makes earlier sets of the original pseudo to be regarded as dead: if
the optional reload sequence was an output reload, the insn for which
the output reload was attempted may be regarded as dead and deleted.
I've arranged for undo_optional_reloads to accept SUBREGs and use
get_regno, like remove_inheritance_pseudos, adjusted its insn-removal
loop to tolerate iterating over a removed clobber, and added logic to
catch any left-over reload clobbers that could trigger the problem.
for gcc/ChangeLog
* lra-constraints.cc (undo_optional_reloads): Recognize and
drop insns of multi-word move sequences, tolerate removal
iteration on an already-removed clobber, and refuse to
substitute original pseudos into clobbers.
At the same time, add -Wtrivial-auto-var-init and update the documentation.
For the following test case:
1 int g(int *);
2 int f1()
3 {
4 switch (0) {
5 int x;
6 default:
7 return g(&x);
8 }
9 }
compiling with -O -ftrivial-auto-var-init causes a spurious warning:
warning: statement will never be executed [-Wswitch-unreachable]
5 | int x;
| ^
This is due to the compiler-generated initialization at the point of
the declaration.
We can avoid the warning by excluding the following cases when
flag_auto_var_init > AUTO_INIT_UNINITIALIZED:
1) a call to .DEFERRED_INIT;
2) a call to __builtin_clear_padding if the 2nd argument is present and non-zero;
3) a gimple assign store right after the .DEFERRED_INIT call that has the LHS
as RHS.
However, we still need to warn users about the limitations of the option
-ftrivial-auto-var-init, so we add a new warning option -Wtrivial-auto-var-init
to report cases where it cannot initialize the auto variable. At the same
time, update documentation for -ftrivial-auto-var-init to connect it with
the new warning option -Wtrivial-auto-var-init, and add documentation
for -Wtrivial-auto-var-init.
gcc/ChangeLog:
PR middle-end/102276
* common.opt (-Wtrivial-auto-var-init): New option.
* doc/invoke.texi (-Wtrivial-auto-var-init): Document new option.
(-ftrivial-auto-var-init): Update option.
* gimplify.cc (emit_warn_switch_unreachable): New function.
(warn_switch_unreachable_r): Rename to ...
(warn_switch_unreachable_and_auto_init_r): This.
(maybe_warn_switch_unreachable): Rename to ...
(maybe_warn_switch_unreachable_and_auto_init): This.
(gimplify_switch_expr): Update calls to renamed function.
gcc/testsuite/ChangeLog:
PR middle-end/102276
* gcc.dg/auto-init-pr102276-1.c: New test.
* gcc.dg/auto-init-pr102276-2.c: New test.
* gcc.dg/auto-init-pr102276-3.c: New test.
* gcc.dg/auto-init-pr102276-4.c: New test.
In this PR allocnos_conflict_p takes 90% of the compile-time via
the calls from update_conflict_hard_regno_costs. This is due to
the high number of conflicts recorded in the dense bitvector
representation. Fortunately we can take advantage of the bitvector
representation here and turn the O(n) conflict test into an O(1) one,
greatly speeding up the compile of the testcase from 39s to just 4s
(93% IRA time to 26% IRA time).
While for the testcase in question the first allocno is almost always
the nicer (cheaper) one, the patch tries a more systematic approach to
finding the allocno whose object conflicts to iterate over. That does reduce
the actual number of compares for the testcase, but it doesn't make
a measurable difference wall-clock wise. That's not guaranteed in general
though, I think, so I've kept this systematic way of choosing the
cheapest allocno.
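A hedged sketch of the bitvector idea (illustration only; the real code works
on IRA's allocno/object structures in ira-color.cc):

  // Hypothetical data structure and test: with conflicts stored as a dense
  // bit vector indexed by conflict id, the conflict test is a single bit
  // lookup instead of a walk over all recorded conflicts.
  #include <cstddef>
  #include <vector>

  struct conflict_object
  {
    std::size_t min_id;               // first conflict id covered
    std::vector<bool> conflict_bits;  // dense conflict bit vector
  };

  bool
  object_conflicts_with_p (const conflict_object &obj, std::size_t id)
  {
    if (id < obj.min_id || id - obj.min_id >= obj.conflict_bits.size ())
      return false;
    return obj.conflict_bits[id - obj.min_id];   // O(1) membership test
  }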
2022-03-02 Richard Biener <rguenther@suse.de>
PR rtl-optimization/104686
* ira-color.cc (object_conflicts_with_allocno_p): New function
using a bitvector test instead of iterating when possible.
(allocnos_conflict_p): Choose the best allocno to iterate over
object conflicts.
(update_conflict_hard_regno_costs): Do allocnos_conflict_p test
last.