One (unintended) side effect of the patches to support multiple
ABIs is that we can now represent tlsdesc calls as normal calls
on SVE targets. This is likely to be handled more efficiently than
clobber_high, and for example fixes the long-standing failure in
gcc.target/aarch64/sve/tls_preserve_1.c.
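For illustration, the part-clobber test for the new TLSDESC ABI might look roughly like this; it is only a sketch (the real hook also handles ARM_PCS_SIMD and multi-register modes, and the details may differ):
  /* A TLSDESC resolver preserves the low 128 bits of the FP/SIMD
     registers, so anything wider is only partially preserved unless
     the SVE PCS applies.  */
  static bool
  aarch64_hard_regno_call_part_clobbered (unsigned int abi_id,
                                          unsigned int regno,
                                          machine_mode mode)
  {
    if (FP_REGNUM_P (regno) && abi_id == ARM_PCS_TLSDESC)
      return maybe_gt (GET_MODE_SIZE (mode), 16);
    /* ...other ABIs handled as before...  */
    return false;
  }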
2019-10-01 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR target/91452
* config/aarch64/aarch64.h (ARM_PCS_TLSDESC): New arm_pcs.
* config/aarch64/aarch64-protos.h (aarch64_tlsdesc_abi_id): Declare.
* config/aarch64/aarch64.c (aarch64_hard_regno_call_part_clobbered):
Handle ARM_PCS_TLSDESC.
(aarch64_tlsdesc_abi_id): New function.
* config/aarch64/aarch64.md (tlsdesc_small_sve_<mode>): Use a call
rtx instead of a list of clobbers and clobber_highs.
(tlsdesc_small_<mode>): Update accordingly.
From-SVN: r276392
At the moment we rely on SYMBOL_REF_DECL to get the ABI of the callee
of a call insn, falling back to the default ABI if the decl isn't
available. I think it'd be cleaner to attach the ABI directly to the
call instruction instead, which would also have the very minor benefit
of handling indirect calls more efficiently.
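To make the mechanism concrete, here is a sketch of how the callee ABI can then be read off the insn (the element position follows the ChangeLog; treat the exact code as illustrative):
  static arm_pcs
  aarch64_insn_callee_abi (const rtx_insn *insn)
  {
    /* The call pattern is a PARALLEL whose second element is
       (unspec [(const_int N)] UNSPEC_CALLEE_ABI), where N is the
       arm_pcs of the callee, so indirect calls need no
       SYMBOL_REF_DECL lookup.  */
    rtx pat = PATTERN (insn);
    gcc_assert (GET_CODE (pat) == PARALLEL);
    rtx unspec = XVECEXP (pat, 0, 1);
    gcc_assert (GET_CODE (unspec) == UNSPEC
                && XINT (unspec, 1) == UNSPEC_CALLEE_ABI);
    return (arm_pcs) INTVAL (XVECEXP (unspec, 0, 0));
  }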
2019-10-01 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-protos.h (aarch64_expand_call): Take an
extra callee_abi argument.
* config/aarch64/aarch64.c (aarch64_expand_call): Likewise.
Insert a CALLEE_ABI unspec into the call pattern as the second
element in the PARALLEL.
(aarch64_simd_call_p): Delete.
(aarch64_insn_callee_abi): Get the arm_pcs of the callee from
the new CALLEE_ABI element of the PARALLEL.
(aarch64_init_cumulative_args): Get the arm_pcs of the callee
from the function type, if given.
(aarch64_function_arg_advance): Handle ARM_PCS_SIMD.
(aarch64_function_arg): Likewise. Return the arm_pcs of the callee
when passed the function_arg_info end marker.
(aarch64_output_mi_thunk): Pass the arm_pcs of the callee as the
final argument of gen_sibcall.
* config/aarch64/aarch64.md (UNSPEC_CALLEE_ABI): New unspec.
(call): Make operand 2 a const_int_operand and pass it to expand_call.
Wrap it in an UNSPEC_CALLEE_ABI unspec for the dummy define_expand
pattern.
(call_value): Likewise operand 3.
(sibcall): Likewise operand 2. Place the unspec before rather than
after the return.
(sibcall_value): Likewise operand 3.
(*call_insn, *call_value_insn): Include an UNSPEC_CALLEE_ABI.
(tlsgd_small_<mode>, *tlsgd_small_<mode>): Likewise.
(*sibcall_insn, *sibcall_value_insn): Likewise. Remove empty
constraint strings.
(untyped_call): Pass const0_rtx as the callee ABI to gen_call.
gcc/testsuite/
* gcc.target/aarch64/torture/simd-abi-10.c: New test.
* gcc.target/aarch64/torture/simd-abi-11.c: Likewise.
From-SVN: r276391
It says "size N/2" in a few places where "size S/2" is meant.
* doc/md.texi (vec_pack_trunc_@var{m}): Fix typo.
(vec_pack_sfix_trunc_@var{m}, vec_pack_ufix_trunc_@var{m}): Ditto.
(vec_packs_float_@var{m}, vec_packu_float_@var{m}): Ditto.
From-SVN: r276387
2019-09-30 Andreas Tobler <andreast@gcc.gnu.org>
* include/experimental/internet: Include netinet/in.h if we have
_GLIBCXX_HAVE_NETINET_IN_H defined.
From-SVN: r276374
This patch improves the handling of large numbers of labels within a
rich_location: previously, overlapping labels could lead to an assertion
failure within layout::print_any_labels. Also, the labels were printed
in reverse order of insertion into the rich_location.
This patch moves the determination of whether a vertical bar should
be printed for a line_label into the
'Figure out how many "label lines" we need, and which
one each label is printed in.'
step of layout::print_any_labels, rather than doing it as the lines
are printed. It also flips the sort order, so that labels at the
same line/column are printed in order of insertion into the
rich_location.
I haven't run into these issues with our existing diagnostics, but it
affects a patch kit I'm working on that makes more extensive use of
labels.
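A purely conceptual sketch of the new vertical-bar rule (the real code lives in the step quoted above and differs in detail; assume the labels have already been sorted so that labels sharing a column are adjacent, with the highest label line last):
  for (unsigned i = 0; i < labels.length (); i++)
    {
      labels[i].m_has_vbar = true;
      if (i + 1 < labels.length ()
          && labels[i + 1].m_column == labels[i].m_column)
        /* A later label at the same column is printed on a lower label
           line and keeps its vertical bar; suppressing this one avoids
           the overlap that previously triggered the assertion.  */
        labels[i].m_has_vbar = false;
    }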
gcc/ChangeLog:
* diagnostic-show-locus.c (line_label::line_label): Initialize
m_has_vbar.
(line_label::comparator): Reverse the sort order by m_state_idx,
so that when the list is walked backwards the labels appear in
order of insertion into the rich_location.
(line_label::m_has_vbar): New field.
(layout::print_any_labels): When dealing with multiple labels at
the same line and column, only print vertical bars for the one
with the highest label_line.
(selftest::test_one_liner_labels): Update test for multiple labels
to expect the labels to be in the order of insertion into the
rich_location. Add a test for many such labels, where the column
numbers are out-of-order relative to the insertion order.
From-SVN: r276371
ix86_compute_frame_layout sets use_fast_prologue_epilogue if
the function isn't more expensive than a certain threshold,
where the threshold depends on the number of saved registers.
However, the RA is allowed to insert and delete instructions
as it goes along, which can change whether this threshold is
crossed or not.
I hit this with an RA change I'm working on. Rematerialisation
was able to remove an instruction and avoid a spill, which happened
to bring the size of the function below the threshold. But since
nothing legitimately frame-related had changed, there was no need for
the RA to lay out the frame again. We then failed the final sanity
check in lra_eliminate.
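The fix is essentially to cache the decision; something like the following, where the caching condition is my assumption and only the field names come from the ChangeLog:
      /* expensive_function_p walks the insn stream, so its answer can
         change when the RA inserts or deletes instructions.  Reuse the
         cached answer unless the number of saved registers (something
         genuinely frame-related) has changed.  */
      if (count != frame->expensive_count)
        {
          frame->expensive_count = count;
          frame->expensive_p = expensive_function_p (count);
        }
      cfun->machine->use_fast_prologue_epilogue = !frame->expensive_p;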
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/i386/i386.h (ix86_frame::expensive_p): New field.
(ix86_frame::expensive_count): Likewise.
* config/i386/i386.c (ix86_compute_frame_layout): Make the choice
of use_fast_prologue_epilogue robust against incidental changes
in function size.
From-SVN: r276361
2019-09-30 Yuliang Wang <yuliang.wang@arm.com>
gcc/
* config/aarch64/aarch64-sve.md (sdiv_pow2<mode>3):
New pattern for ASRD.
* config/aarch64/iterators.md (UNSPEC_ASRD): New unspec.
* internal-fn.def (IFN_DIV_POW2): New internal function.
* optabs.def (sdiv_pow2_optab): New optab.
* tree-vect-patterns.c (vect_recog_divmod_pattern):
Modify pattern to support new operation.
* doc/md.texi (sdiv_pow2@var{m}3): Documentation for the above.

* doc/sourcebuild.texi (vect_sdiv_pow2_si):
Document new target selector.
gcc/testsuite/
* gcc.dg/vect/vect-sdiv-pow2-1.c: New test.
* gcc.target/aarch64/sve/asrdiv_1.c: As above.
* lib/target-supports.exp (check_effective_target_vect_sdiv_pow2_si):
Return true for AArch64 with SVE.
From-SVN: r276343
This patch makes more use of the function_abi infrastructure.
We can then avoid checking specifically for the vector PCS in
a few places, and can test it more directly otherwise.
Specifically: we no longer need to call df_set_regs_ever_live
for the extra call-saved registers, since IRA now does that for us.
We also don't need to handle the vector PCS specially in
aarch64_epilogue_uses, because DF now marks the registers
as live on exit.
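For example, the register-saving test in aarch64_layout_frame becomes roughly the following (a sketch; the exact query is an assumption):
  for (regno = V0_REGNUM; regno <= V31_REGNUM; regno++)
    if (df_regs_ever_live_p (regno)
        && !fixed_regs[regno]
        && !crtl->abi->clobbers_full_reg_p (regno))
      /* The function's own ABI (base or vector PCS) requires this
         register to be preserved, so give it a save slot.  */
      cfun->machine->frame.reg_offset[regno] = SLOT_REQUIRED;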
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64.c (aarch64_layout_frame): Use crtl->abi
to test whether we're compiling a vector PCS function and to test
whether the function needs to save a particular register.
Remove the vector PCS handling of df_set_regs_ever_live.
(aarch64_components_for_bb): Use crtl->abi to test whether
the function needs to save a particular register.
(aarch64_process_components): Use crtl->abi to test whether
we're compiling a vector PCS function.
(aarch64_expand_prologue, aarch64_expand_epilogue): Likewise.
(aarch64_epilogue_uses): Remove handling of vector PCS functions.
From-SVN: r276341
With the function ABI stuff, we can now support shrink-wrapping of
non-leaf vector PCS functions. This is particularly useful if the
vector PCS function calls an ordinary function on an error path,
since we can then keep the extra saves and restores specific to
that path too.
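For example (illustrative only, not the contents of the new test):
  void error_handler (void);
  void __attribute__ ((aarch64_vector_pcs))
  vec_fn (int ok)
  {
    if (__builtin_expect (!ok, 0))
      /* The only call to a "normal" function; the extra Q-register
         saves and restores can now be shrink-wrapped onto this cold
         path.  */
      error_handler ();
    /* ...fast path that only uses registers the vector PCS preserves...  */
  }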
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-protos.h (aarch64_use_simple_return_insn_p):
Delete.
* config/aarch64/aarch64.c (aarch64_components_for_bb): Check
whether the block calls a function that clobbers more registers
than the current function is allowed to.
(aarch64_use_simple_return_insn_p): Delete.
* config/aarch64/aarch64.md (simple_return): Remove condition.
gcc/testsuite/
* gcc.target/aarch64/torture/simd-abi-9.c: New test.
From-SVN: r276340
If we support multiple ABIs in the same translation unit, it can
sometimes be the case that a callee clobbers more registers than
its caller is allowed to. We need to call df_set_regs_ever_live
on these extra registers so that the prologue and epilogue code
can handle them appropriately.
This patch does that in IRA. I wanted to avoid another full
instruction walk just for this, so I combined it with the existing
set_paradoxical_subreg walk. This happens before the first
calculation of elimination offsets.
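In outline, the walk does something like the following (note_callee_abi is assumed here; caller_save_regs is the new function from the ChangeLog):
  function_abi_aggregator callee_abis;
  for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if (CALL_P (insn))
      callee_abis.note_callee_abi (insn_callee_abi (insn));
  /* Registers that some callee clobbers but that the current function's
     ABI expects to be preserved need prologue/epilogue saves.  */
  HARD_REG_SET extra_caller_saves = callee_abis.caller_save_regs (*crtl->abi);
  for (unsigned int regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
    if (TEST_HARD_REG_BIT (extra_caller_saves, regno))
      df_set_regs_ever_live (regno, true);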
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* function-abi.h (function_abi_aggregator): New class.
* function-abi.cc (function_abi_aggregator::caller_save_regs): New
function.
* ira.c (update_equiv_regs_prescan): New function. Call
set_paradoxical_subreg here rather than...
(update_equiv_regs): ...here.
(ira): Call update_equiv_regs_prescan.
From-SVN: r276339
The previous patches removed all target-independent uses of
regs_invalidated_by_call, call_used_or_fixed_regs and
call_used_or_fixed_reg_p. This patch therefore restricts
them to target-specific code (and reginfo.c, which sets them up).
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* hard-reg-set.h (regs_invalidated_by_call): Only define if
IN_TARGET_CODE.
(call_used_or_fixed_regs): Likewise.
(call_used_or_fixed_reg_p): Likewise.
* reginfo.c (regs_invalidated_by_call): New macro.
From-SVN: r276338
This is a straight replacement of "registers we can clobber without
saving them first".
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* shrink-wrap.c: Include function-abi.h.
(requires_stack_frame_p): Use crtl->abi to test whether the
current function can use a register without saving it first.
From-SVN: r276337
The main change here is to replace a crosses_call boolean with
a bitmask of the ABIs used by the crossed calls. For space reasons,
I didn't also add a HARD_REG_SET that tracks the set of registers
that are actually clobbered, which means that this is the one part
of the series that doesn't benefit from -fipa-ra. The existing
FIXME suggests that the current structures aren't the preferred
way of representing this anyhow, and the pass already makes
conservative assumptions about call-crossing registers.
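The representation is just a mask of ABI identifiers, roughly as follows (a sketch; the accessor spellings are assumptions):
  /* Bit N means "some call crossed by this def/use uses the ABI with
     identifier N".  A nonzero mask is equivalent to the old
     crosses_call flag.  */
  if (CALL_P (insn))
    crossed_call_abis |= 1 << insn_callee_abi (insn).id ();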
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* sel-sched-ir.h (_def::crosses_call): Replace with...
(_def::crossed_call_abis): ..this new field.
(def_list_add): Take a mask of ABIs instead of a crosses_call
boolean.
* sel-sched-ir.c (def_list_add): Likewise. Update initialization
of _def accordingly.
* sel-sched.c: Include function-abi.h.
(hard_regs_data::regs_for_call_clobbered): Delete.
(reg_rename::crosses_call): Replace with...
(reg_rename::crossed_call_abis): ...this new field.
(fur_static_params::crosses_call): Replace with...
(fur_static_params::crossed_call_abis): ...this new field.
(init_regs_for_mode): Don't initialize sel_hrd.regs_for_call_clobbered.
(init_hard_regs_data): Use crtl->abi to test which registers the
current function would need to save before it uses them.
(mark_unavailable_hard_regs): Update handling of call-clobbered
registers, using call_clobbers_in_region to find out which registers
might be call-clobbered (but without taking -fipa-ra into account
for now). Remove separate handling of partially call-clobbered
registers.
(verify_target_availability): Use crossed_call_abis instead of
crosses_call.
(get_spec_check_type_for_insn, find_used_regs): Likewise.
(fur_orig_expr_found, fur_on_enter, fur_orig_expr_not_found): Likewise.
From-SVN: r276336
This is a straight replacement of an existing "full or partial"
call-clobber check.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* sched-deps.c (deps_analyze_insn): Use the ABI of the target
function to test whether a register is fully or partly clobbered.
From-SVN: r276335
The reg_set_p part is simple, since the caller is asking about
a specific REG rtx, with a known register number and mode.
The find_all_hard_reg_sets part emphasises that the "implicit"
behaviour was always a bit suspect, since it includes fully-clobbered
registers but not partially-clobbered registers. The only current
user of this path is the c6x-specific scheduler predication code,
and c6x doesn't have partly call-clobbered registers, so in practice
it's fine.  I've added a comment to try to dissuade future users.
(The !implicit path is OK and useful though.)
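The reg_set_p change boils down to something like this (a sketch; the exact query spelling is an assumption):
  /* For a specific hard REG we know both the regno and the mode, so the
     call's ABI can be asked directly whether any part of the register
     is clobbered in that mode.  */
  if (CALL_P (insn)
      && REGNO (reg) < FIRST_PSEUDO_REGISTER
      && insn_callee_abi (insn).clobbers_reg_p (GET_MODE (reg), REGNO (reg)))
    return true;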
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* rtlanal.c: Include function-abi.h.
(reg_set_p): Use insn_callee_abi to get the ABI of the called
function and clobbers_reg_p to test whether the register
is call-clobbered.
(find_all_hard_reg_sets): When implicit is true, use insn_callee_abi
to get the ABI of the called function and full_reg_clobbers to
get the set of fully call-clobbered registers. Warn about the
pitfalls of using this mode.
From-SVN: r276334
The inheritance code in find_equiv_reg can use clobbers_reg_p
to test whether a call clobbers either of the equivalent registers.
reload and find_reg use crtl->abi to test whether a register needs
to be saved in the prologue before use.
reload_as_needed can use full_and_partial_reg_clobbers and thus
avoid needing to keep its own record of which registers are part
call-clobbered.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* reload.c: Include function-abi.h.
(find_equiv_reg): Use clobbers_reg_p to test whether either
of the equivalent registers is clobbered by a call.
* reload1.c: Include function-abi.h.
(reg_reloaded_call_part_clobbered): Delete.
(reload): Use crtl->abi to test which registers would need
saving in the prologue before use.
(find_reg): Likewise.
(emit_reload_insns): Remove code for reg_reloaded_call_part_clobbered.
(reload_as_needed): Likewise. Use full_and_partial_reg_clobbers
instead of call_used_or_fixed_regs | reg_reloaded_call_part_clobbered.
From-SVN: r276333
This patch makes regrename use a similar mask-and-clobber-set
pair to IRA when tracking whether registers are clobbered by
calls in a region. Testing for a nonzero ABI mask is equivalent
to testing for a register that crosses a call.
Since AArch64 and c6x use regrename.h, they need to be updated
to include function-abi.h first. AIUI this is preferred over
including function-abi.h in regrename.h.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* regrename.h (du_head::call_clobber_mask): New field.
(du_head::need_caller_save_reg): Replace with...
(du_head::call_abis): ...this new field.
* regrename.c: Include function-abi.h.
(call_clobbered_in_chain_p): New function.
(check_new_reg_p): Use crtl->abi when deciding whether a register
is free for use after RA. Use call_clobbered_in_chain_p to test
whether a candidate register would be clobbered by a call.
(find_rename_reg): Don't add call-clobber conflicts here.
(rename_chains): Check call_abis instead of need_caller_save_reg.
(merge_chains): Update for changes to du_head.
(build_def_use): Use insn_callee_abi to get the ABI of the call insn
target. Record the ABI identifier in call_abis and the set of
fully or partially clobbered registers in call_clobber_mask.
Add fully-clobbered registers to hard_conflicts here rather
than in find_rename_reg.
* config/aarch64/cortex-a57-fma-steering.c: Include function-abi.h.
(rename_single_chain): Check call_abis instead of need_caller_save_reg.
* config/aarch64/falkor-tag-collision-avoidance.c: Include
function-abi.h.
* config/c6x/c6x.c: Likewise.
From-SVN: r276332
This is a direct replacement of an existing test for fully and
partially clobbered registers.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* regcprop.c (copyprop_hardreg_forward_1): Use the recorded
mode of the register when deciding whether it is no longer
available after a call.
From-SVN: r276331
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* recog.c: Include function-abi.h.
(peep2_find_free_register): Use crtl->abi when deciding whether
a register is free for use after RA.
From-SVN: r276330
This is another case in which we should conservatively treat
partial kills as full kills.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* postreload-gcse.c: Include regs.h and function-abi.h.
(record_opr_changes): Use insn_callee_abi to get the ABI of the
call insn target. Conservatively assume that partially-clobbered
registers are altered.
From-SVN: r276329
The "|= fixed_regs" in reload_combine isn't necessary, since the
set is only used to determine which values have changed (rather than,
for example, which registers are available for use).
In reload_cse_move2add we can be accurate about which registers
are still available. BLKmode indicates a continuation of the
previous register, and since clobbers_reg_p handles multi-register
values, it's enough to skip over BLKmode entries and just test the
start register.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* postreload.c (reload_combine_recognize_pattern): Use crtl->abi
when deciding whether a register is free for use after RA.
(reload_combine): Remove unnecessary use of fixed_reg_set.
(reload_cse_move2add): Use insn_callee_abi to get the ABI of the
call insn target. Use reg_mode when testing whether a register
is no longer available.
From-SVN: r276328
lra_reg has an actual_call_used_reg_set field that is only used during
inheritance. This in turn required a special lra_create_live_ranges
pass for flag_ipa_ra to set up this field. This patch instead makes
the inheritance code do its own live register tracking, using the
same ABI-mask-and-clobber-set pair as for IRA.
Tracking ABIs simplifies (and cheapens) the logic in lra-lives.c and
means we no longer need a separate path for -fipa-ra. It also means
we can remove TARGET_RETURN_CALL_WITH_MAX_CLOBBERS.
The patch also strengthens the sanity check in lra_assigns so that
we check that reg_renumber is consistent with the whole conflict set,
not just the call-clobbered registers.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* target.def (return_call_with_max_clobbers): Delete.
* doc/tm.texi.in (TARGET_RETURN_CALL_WITH_MAX_CLOBBERS): Delete.
* doc/tm.texi: Regenerate.
* config/aarch64/aarch64.c (aarch64_return_call_with_max_clobbers)
(TARGET_RETURN_CALL_WITH_MAX_CLOBBERS): Delete.
* lra-int.h (lra_reg::actual_call_used_reg_set): Delete.
(lra_reg::call_insn): Delete.
* lra.c: Include function-abi.h.
(initialize_lra_reg_info_element): Don't initialize the fields above.
(lra): Use crtl->abi to test whether the current function needs to
save a register in the prologue. Remove special pre-inheritance
lra_create_live_ranges pass for flag_ipa_ra.
* lra-assigns.c: Include function-abi.h.
(find_hard_regno_for_1): Use crtl->abi to test whether the current
function needs to save a register in the prologue.
(lra_assign): Assert that registers aren't allocated to a
conflicting register, rather than checking only for overlaps
with call_used_or_fixed_regs. Do this even for flag_ipa_ra,
and for registers that are not live across a call.
* lra-constraints.c (last_call_for_abi): New variable.
(full_and_partial_call_clobbers): Likewise.
(setup_next_usage_insn): Remove the register from
full_and_partial_call_clobbers.
(need_for_call_save_p): Use call_clobbered_in_region_p to test
whether the register needs a caller save.
(need_for_split_p): Use full_and_partial_reg_clobbers instead
of call_used_or_fixed_regs.
(inherit_in_ebb): Initialize and maintain last_call_for_abi and
full_and_partial_call_clobbers.
* lra-lives.c (check_pseudos_live_through_calls): Replace
last_call_used_reg_set and call_insn arguments with an abi argument.
Remove handling of lra_reg::call_insn. Use function_abi::mode_clobbers
as the set of conflicting registers.
(calls_have_same_clobbers_p): Delete.
(process_bb_lives): Track the ABI of the last call instead of an
insn/HARD_REG_SET pair. Update calls to
check_pseudos_live_through_calls. Use eh_edge_abi to calculate
the set of registers that could be clobbered by an EH edge.
Include partially-clobbered as well as fully-clobbered registers.
(lra_create_live_ranges_1): Don't initialize lra_reg::call_insn.
* lra-remat.c: Include function-abi.h.
(call_used_regs_arr_len, call_used_regs_arr): Delete.
(set_bb_regs): Use insn_callee_abi to get the set of call-clobbered
registers and bitmap_view to combine them into dead_regs.
(call_used_input_regno_present_p): Take a function_abi argument
and use it to test whether a register is call-clobbered.
(calculate_gen_cands): Use insn_callee_abi to get the ABI of the
call insn target. Update the call to call_used_input_regno_present_p.
(do_remat): Likewise.
(lra_remat): Remove the initialization of call_used_regs_arr_len
and call_used_regs_arr.
From-SVN: r276327
Similar idea to the combine.c and gcse.c patches.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* loop-iv.c: Include regs.h and function-abi.h.
(simplify_using_initial_values): Use insn_callee_abi to get the
ABI of the call insn target. Conservatively assume that
partially-clobbered registers are altered.
From-SVN: r276326
For -fipa-ra, IRA already keeps track of which specific registers
are call-clobbered in a region, rather than using global information.
The patch generalises this so that it tracks which ABIs are used
by calls in the region.
We can then use the new ABI descriptors to handle partially-clobbered
registers in the same way as fully-clobbered registers, without having
special code for targetm.hard_regno_call_part_clobbered. This in turn
makes -fipa-ra work for partially-clobbered registers too.
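The key new query is roughly as follows (a sketch: the parameter order and the exact helper shape are assumptions based on the ChangeLog below):
  /* Would the current function need to save hard register REGNO before
     allocating it to allocno A?  True if some ABI used by the calls that
     A crosses clobbers REGNO in A's mode, or if -fipa-ra recorded an
     explicit clobber of REGNO in the region.  */
  inline bool
  ira_need_caller_save_p (ira_allocno_t a, unsigned int regno)
  {
    return call_clobbered_in_region_p (ALLOCNO_CROSSED_CALLS_ABIS (a),
                                       ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a),
                                       ALLOCNO_MODE (a), regno);
  }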
A side-effect of allowing multiple ABIs is that we no longer have
an obvious set of conflicting registers for the self-described
"fragile hack" in ira-constraints.c. This code kicks in for
user-defined registers that aren't live across a call at -O0,
and it tries to avoid allocating a call-clobbered register to them.
Here I've used the set of call-clobbered registers in the current
function's ABI, applying on top of any registers that are clobbered by
called functions. This is enough to keep gcc.dg/debug/dwarf2/pr5948.c
happy.
The handling of GENERIC_STACK_CHECK in do_reload seemed to have
a reversed condition:
  for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    if (df_regs_ever_live_p (i)
        && !fixed_regs[i]
        && call_used_or_fixed_reg_p (i))
      size += UNITS_PER_WORD;
The final part of the condition counts registers that don't need to be
saved in the prologue, but I think the opposite was intended.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* function-abi.h (call_clobbers_in_region): Declare.
(call_clobbered_in_region_p): New function.
* function-abi.cc (call_clobbers_in_region): Likewise.
* ira-int.h: Include function-abi.h.
(ira_allocno::crossed_calls_abis): New field.
(ALLOCNO_CROSSED_CALLS_ABIS): New macro.
(ira_need_caller_save_regs): New function.
(ira_need_caller_save_p): Likewise.
* ira.c (setup_reg_renumber): Use ira_need_caller_save_p instead
of call_used_or_fixed_regs.
(do_reload): Use crtl->abi to test whether the current function
needs to save a register in the prologue. Count registers that
need to be saved rather than registers that don't.
* ira-build.c (create_cap_allocno): Copy ALLOCNO_CROSSED_CALLS_ABIS.
Remove unnecessary | from ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
(propagate_allocno_info): Merge ALLOCNO_CROSSED_CALLS_ABIS too.
(propagate_some_info_from_allocno): Likewise.
(copy_info_to_removed_store_destinations): Likewise.
(ira_flattening): Say that ALLOCNO_CROSSED_CALLS_ABIS and
ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS are handled conservatively.
(ira_build): Use ira_need_caller_save_regs instead of
call_used_or_fixed_regs.
* ira-color.c (calculate_saved_nregs): Use crtl->abi to test
whether the current function would need to save a register
before using it.
(calculate_spill_cost): Likewise.
(allocno_reload_assign): Use ira_need_caller_save_regs and
ira_need_caller_save_p instead of call_used_or_fixed_regs.
* ira-conflicts.c (ira_build_conflicts): Use
ira_need_caller_save_regs rather than call_used_or_fixed_regs
as the set of call-clobbered registers. Remove the
call_used_or_fixed_regs mask from the calculation of
temp_hard_reg_set and mask its use instead. Remove special
handling of partially-clobbered registers.
* ira-costs.c (ira_tune_allocno_costs): Use ira_need_caller_save_p.
* ira-lives.c (process_bb_node_lives): Use mode_clobbers to
calculate the set of conflicting registers for calls that
can throw. Record the ABIs of calls in ALLOCNO_CROSSED_CALLS_ABIS.
Use full_and_partial_reg_clobbers rather than full_reg_clobbers
for the calculation of ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
Use eh_edge_abi to calculate the set of registers that could
be clobbered by an EH edge. Include partially-clobbered as
well as fully-clobbered registers.
From-SVN: r276325
The code patched here is counting how many registers the current
function would need to save in the prologue before it uses them.
The code is called per function, so using crtl is OK.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* haifa-sched.c: Include function-abi.h.
(alloc_global_sched_pressure_data): Use crtl->abi to check whether
the function would need to save a register before using it.
From-SVN: r276324
This is another case in which we can conservatively treat partial
kills as full kills. Again this is in principle a bug fix for
TARGET_HARD_REGNO_CALL_PART_CLOBBERED targets, but in practice
it probably doesn't make a difference.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* gcse.c: Include function-abi.h.
(compute_hash_table_work): Use insn_callee_abi to get the ABI of
the call insn target. Invalidate partially call-clobbered
registers as well as fully call-clobbered ones.
From-SVN: r276323
Whatever the rights and wrongs of the way aggregate_value_p
handles call-preserved registers, it's a de facto part of the ABI,
so we shouldn't change it. The patch simply extends the current
approach to whatever call-preserved set the function happens to
be using.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* function.c (aggregate_value_p): Work out which ABI the
function is using before testing which registers are at least
partly preserved by a call.
From-SVN: r276322
This pass previously excluded rematerialisation candidates if they
clobbered a call-preserved register, on the basis that it then
wouldn't be safe to add new instances of the candidate instruction
after a call. This patch instead makes the decision on a call-by-call
basis.
The second emit_remat_insns_for_block hunk probably isn't needed,
but it seems safer and more consistent to have it, so that every call
to emit_remat_insns is preceded by a check for invalid clobbers.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* early-remat.c: Include regs.h and function-abi.h.
(early_remat::maybe_add_candidate): Don't check for call-clobbered
registers here.
(early_remat::restrict_remat_for_unavail_regs): New function.
(early_remat::restrict_remat_for_call): Likewise.
(early_remat::process_block): Before calling emit_remat_insns
for a previous call in the block, invalidate any candidates
that would clobber call-preserved registers.
(early_remat::emit_remat_insns_for_block): Likewise for the
final call in a block. Do the same thing for live-in registers
when calling emit_remat_insns at the head of a block.
From-SVN: r276321
The code patched here is seeing whether the current function
needs to save at least part of a register before using it.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* df-scan.c (df_get_entry_block_def_set): Use crtl->abi to test
whether the current function needs to save at least part of a
register before using it.
(df_get_exit_block_use_set): Likewise for epilogue restores.
From-SVN: r276320
The DF dense_invalidated_by_call and sparse_invalidated_by_call
sets are actually only used on EH edges, and so are more the set
of registers that are invalidated by a taken EH edge. Under the
new order, that means that they describe eh_edge_abi.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* df-problems.c: Include regs.h and function-abi.h.
(df_rd_problem_data): Rename sparse_invalidated_by_call to
sparse_invalidated_by_eh and dense_invalidated_by_call to
dense_invalidated_by_eh.
(df_print_bb_index): Update accordingly.
(df_rd_alloc, df_rd_start_dump, df_rd_confluence_n): Likewise.
(df_lr_confluence_n): Use eh_edge_abi to get the set of registers
that are clobbered by an EH edge. Clobber partially-clobbered
registers as well as fully-clobbered ones.
(df_md_confluence_n): Likewise.
(df_rd_local_compute): Likewise. Update for changes to
df_rd_problem_data.
* df-scan.c (df_scan_start_dump): Use eh_edge_abi to get the set
of registers that are clobbered by an EH edge. Include partially-
clobbered registers as well as fully-clobbered ones.
From-SVN: r276319
cselib_invalidate_regno is a no-op if REG_VALUES (i) is null,
so we can check that first. Then, if we know what mode the register
currently has, we can check whether it's clobbered in that mode.
Using GET_MODE (values->elt->val_rtx) to get the mode of the last
set is taken from cselib_reg_set_mode.
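Sketched out, the per-register loop becomes something like the following (illustrative; the real cselib_process_insn code differs in detail):
  function_abi callee_abi = insn_callee_abi (insn);
  for (unsigned int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    if (elt_list *values = REG_VALUES (i))
      {
        /* Prefer the mode of the last set; fall back to reg_raw_mode
           if cselib doesn't know it.  */
        machine_mode mode = (values->elt
                             ? GET_MODE (values->elt->val_rtx)
                             : reg_raw_mode[i]);
        if (callee_abi.clobbers_reg_p (mode, i))
          cselib_invalidate_regno (i, mode);
      }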
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* cselib.c (cselib_process_insn): If we know what mode a
register was set in, check whether it is clobbered in that
mode by a call. Only fall back to reg_raw_mode if that fails.
From-SVN: r276318
Like with the combine.c patch, this one keeps things simple by
invalidating values in partially-clobbered registers, rather than
trying to tell whether the value in a partially-clobbered register
is actually clobbered or not. Again, this is in principle a bug fix,
but probably never matters in practice.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* cse.c: Include regs.h and function-abi.h.
(invalidate_for_call): Take the call insn as an argument.
Use insn_callee_abi to get the ABI of the call and invalidate
partially clobbered registers as well as fully clobbered ones.
(cse_insn): Update call accordingly.
From-SVN: r276317
There shouldn't be many cases in which a useful hard register is
live across a call before RA, so we might as well keep things simple
and invalidate partially-clobbered registers here, in case the values
they hold leak into the call-clobbered part. In principle this is
a bug fix for TARGET_HARD_REGNO_CALL_PART_CLOBBERED targets,
but in practice it probably doesn't make a difference.
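The shape of the change is roughly this (a sketch; the helper that resets combine's per-register state is hypothetical and stands in for the real bookkeeping in record_dead_and_set_regs):
  function_abi callee_abi = insn_callee_abi (insn);
  hard_reg_set_iterator hrsi;
  unsigned int i;
  /* The important part is the set being iterated: the ABI's full *and*
     partial clobbers, rather than just regs_invalidated_by_call.  */
  EXECUTE_IF_SET_IN_HARD_REG_SET (callee_abi.full_and_partial_reg_clobbers (),
                                  0, i, hrsi)
    forget_register_value (i);   /* hypothetical helper */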
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* combine.c: Include function-abi.h.
(record_dead_and_set_regs): Use insn_callee_abi to get the ABI
of the target of call insns. Invalidate partially-clobbered
registers as well as fully-clobbered ones.
From-SVN: r276316
...or rather, make the use of the default ABI explicit. That seems
OK if not ideal for this heuristic.
In practical terms, the code patched here is counting GENERAL_REGS,
which are treated in the same way by all concurrent ABI variants
on AArch64. It might give bad results if used for interrupt
handlers though.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* cfgloopanal.c: Include regs.h and function-abi.h.
(init_set_costs): Use default_function_abi to test whether
a general register is call-clobbered.
From-SVN: r276315
old_insns_match_p just tests whether two instructions are
similar enough to merge. With insn_callee_abi it makes more
sense to compare the ABIs directly.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* cfgcleanup.c (old_insns_match_p): Compare the ABIs of calls
instead of the call-clobbered sets.
From-SVN: r276314
All caller-save.c uses of "|= fixed_reg_set" added in a previous patch
were redundant, since the sets are later ANDed with ~fixed_reg_set.
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* caller-save.c (setup_save_areas): Remove redundant |s of
fixed_reg_set.
(save_call_clobbered_regs): Likewise. Use the call ABI rather
than call_used_or_fixed_regs to decide whether a REG_RETURNED
value is useful.
From-SVN: r276313
choose_hard_reg_mode previously took a boolean saying whether the
mode needed to be call-preserved. This patch replaces it with an
optional ABI pointer instead, so that the function can use that
to test whether a value is call-saved.
default_dwarf_frame_reg_mode uses eh_edge_abi because that's the
ABI that matters for unwinding. Targets need to override the hook
if they want something different.
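Callers change along these lines (a sketch; the variable names are mine):
  /* Widest mode in which REGNO is preserved across an EH edge.  */
  machine_mode save_mode = choose_hard_reg_mode (regno, nregs, &eh_edge_abi);
  /* Widest mode with no call-preservation requirement at all.  */
  machine_mode any_mode = choose_hard_reg_mode (regno, nregs, NULL);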
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* rtl.h (predefined_function_abi): Declare.
(choose_hard_reg_mode): Take a pointer to a predefined_function_abi
instead of a boolean call_save flag.
* config/gcn/gcn.c (gcn_hard_regno_caller_save_mode): Update call
accordingly.
* config/i386/i386.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
* config/ia64/ia64.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
* config/mips/mips.c (mips_hard_regno_caller_save_mode): Likewise.
* config/msp430/msp430.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
* config/rs6000/rs6000.h (HARD_REGNO_CALLER_SAVE_MODE): Likewise.
* config/sh/sh.c (sh_hard_regno_caller_save_mode): Likewise.
* reginfo.c (init_reg_modes_target): Likewise.
(choose_hard_reg_mode): Take a pointer to a predefined_function_abi
instead of a boolean call_save flag.
* targhooks.c: Include function-abi.h.
(default_dwarf_frame_reg_mode): Update call to choose_hard_reg_mode,
using eh_edge_abi to choose the mode.
From-SVN: r276312
This patch replaces the rtx_insn argument to
targetm.hard_regno_call_part_clobbered with an ABI identifier, since
call insns are now just one possible way of getting an ABI handle.
This in turn allows predefined_function_abi::initialize to do the
right thing for non-default ABIs.
The horrible ?: in need_for_call_save_p goes away in a later patch,
with the series as a whole removing most direct calls to the hook in
favour of function_abi operations.
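On the caller side the idiom becomes roughly the following (a sketch, following the ChangeLog below):
  /* For a call insn, the ABI identifier comes from the insn itself
     rather than from SYMBOL_REF_DECL guesswork; non-call contexts can
     pass the identifier of whatever ABI they are asking about.  */
  bool partly_clobbered
    = targetm.hard_regno_call_part_clobbered (insn_callee_abi (insn).id (),
                                              regno, mode);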
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* target.def (hard_regno_call_part_clobbered): Take an ABI
identifier instead of an rtx_insn.
* doc/tm.texi: Regenerate.
* hooks.h (hook_bool_insn_uint_mode_false): Delete.
(hook_bool_uint_uint_mode_false): New function.
* hooks.c (hook_bool_insn_uint_mode_false): Delete.
(hook_bool_uint_uint_mode_false): New function.
* config/aarch64/aarch64.c (aarch64_hard_regno_call_part_clobbered):
Take an ABI identifier instead of an rtx_insn.
* config/avr/avr.c (avr_hard_regno_call_part_clobbered): Likewise.
* config/i386/i386.c (ix86_hard_regno_call_part_clobbered): Likewise.
* config/mips/mips.c (mips_hard_regno_call_part_clobbered): Likewise.
* config/pru/pru.c (pru_hard_regno_call_part_clobbered): Likewise.
* config/rs6000/rs6000.c (rs6000_hard_regno_call_part_clobbered):
Likewise.
* config/s390/s390.c (s390_hard_regno_call_part_clobbered): Likewise.
* cselib.c: Include function-abi.h.
(cselib_process_insn): Update call to
targetm.hard_regno_call_part_clobbered, using insn_callee_abi
to get the appropriate ABI identifier.
* function-abi.cc (predefined_function_abi::initialize): Update call
to targetm.hard_regno_call_part_clobbered.
* ira-conflicts.c (ira_build_conflicts): Likewise.
* ira-costs.c (ira_tune_allocno_costs): Likewise.
* lra-constraints.c: Include function-abi.h.
(need_for_call_save_p): Update call to
targetm.hard_regno_call_part_clobbered, using insn_callee_abi
to get the appropriate ABI identifier.
* lra-lives.c (check_pseudos_live_through_calls): Likewise.
* regcprop.c (copyprop_hardreg_forward_1): Update call
to targetm.hard_regno_call_part_clobbered.
* reginfo.c (choose_hard_reg_mode): Likewise.
* regrename.c (check_new_reg_p): Likewise.
* reload.c (find_equiv_reg): Likewise.
* reload1.c (emit_reload_insns): Likewise.
* sched-deps.c: Include function-abi.h.
(deps_analyze_insn): Update call to
targetm.hard_regno_call_part_clobbered, using insn_callee_abi
to get the appropriate ABI identifier.
* sel-sched.c (init_regs_for_mode, mark_unavailable_hard_regs): Update
call to targetm.hard_regno_call_part_clobbered.
* targhooks.c (default_dwarf_frame_reg_mode): Likewise.
From-SVN: r276311
One of the effects of the function_abi series is to make -fipa-ra
work for partially call-clobbered registers. E.g. if a call preserves
only the low 32 bits of a register R, we handled the partial clobber
separately from -fipa-ra, and so treated the upper bits of R as
clobbered even if we knew that the target function doesn't touch R.
"Fixing" this caused problems for the vzeroupper handling on x86.
The pass that inserts the vzerouppers assumes that no 256-bit or 512-bit
values are live across a call unless the call takes a 256-bit or 512-bit
argument:
  /* Needed mode is set to AVX_U128_CLEAN if there are
     no 256bit or 512bit modes used in function arguments.  */
This implicitly relies on:
/* Implement TARGET_HARD_REGNO_CALL_PART_CLOBBERED.  The only ABI that
   saves SSE registers across calls is Win64 (thus no need to check the
   current ABI here), and with AVX enabled Win64 only guarantees that
   the low 16 bytes are saved.  */
static bool
ix86_hard_regno_call_part_clobbered (rtx_insn *insn ATTRIBUTE_UNUSED,
                                     unsigned int regno, machine_mode mode)
{
  return SSE_REGNO_P (regno) && GET_MODE_SIZE (mode) > 16;
}
The comment suggests that this code is only needed for Win64 and that
not testing for Win64 is just a simplification. But in practice it was
needed for correctness on GNU/Linux and other targets too, since without
it the RA would be able to keep 256-bit and 512-bit values in SSE
registers across calls that are known not to clobber them.
This patch conservatively treats calls as AVX_U128_ANY if the RA can see
that some SSE registers are not touched by a call. There are then no
regressions if the ix86_hard_regno_call_part_clobbered check is disabled
for GNU/Linux (not something we should do; it was just for testing).
If in fact we want -fipa-ra to pretend that all functions clobber
SSE registers above 128 bits, it'd certainly be possible to arrange
that. But IMO that would be an optimisation decision, whereas what
the patch is fixing is a correctness decision. So I think we should
have this check even so.
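The check itself is small; roughly the following (a sketch: the real patch may use a different query, e.g. a mode-specific clobber set):
      /* If the callee is known to preserve some SSE registers, the RA may
         legitimately keep 256-bit or 512-bit values in them across the
         call, so we can't assume the upper state is clean afterwards.  */
      function_abi callee_abi = insn_callee_abi (insn);
      if (!hard_reg_set_subset_p (reg_class_contents[SSE_REGS],
                                  callee_abi.full_reg_clobbers ()))
        return AVX_U128_ANY;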
2019-09-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/i386/i386.c: Include function-abi.h.
(ix86_avx_u128_mode_needed): Treat function calls as AVX_U128_ANY
if they preserve some 256-bit or 512-bit SSE registers.
From-SVN: r276310