re PR middle-end/28071 (A file that can not be compiled in reasonable time/space)

* sched-int.h (ds_to_dk, dk_to_ds): Declare functions.
	
	(struct _dep): New type.
	(dep_t): New typedef.
	(DEP_PRO, DEP_CON, DEP_KIND): New access macros.
	(DEP_STATUS): New access macro.  The macro with the same name was
	renamed to DEP_LINK_STATUS.
	(init_dep): Declare function.

	(struct _dep_link): New type.
	(dep_link_t): New typedef.
	(DEP_LINK_NODE, DEP_LINK_NEXT, DEP_LINK_PREV_NEXTP): New access macros.
	(DEP_LINK_DEP, DEP_LINK_PRO, DEP_LINK_CON, DEP_LINK_KIND): New macros.
	(DEP_LINK_STATUS): New macro.
	(debug_dep_links): New debug function.

	(struct _deps_list): New type.
	(deps_list_t): New typedef.
	(DEPS_LIST_FIRST): New access macro.
	(FOR_EACH_DEP_LINK): New iteration macro.
	(create_deps_list, free_deps_list, delete_deps_list): Declare
	functions.
	(deps_list_empty_p, debug_deps_list, add_back_dep_to_deps_list): Ditto.
	(find_link_by_pro_in_deps_list, find_link_by_con_in_deps_list): Ditto.
	(copy_deps_list_change_con): Ditto.

	(move_dep_link): Declare function.

	(struct _dep_node): New type.
	(dep_node_t): New typedef.
	(DEP_NODE_BACK, DEP_NODE_DEP, DEP_NODE_FORW): New access macros.

	(struct haifa_insn_data.back_deps): New field to hold backward
	dependencies of the insn.
	(struct haifa_insn_data.depend): Rename to forw_deps.  Change its type
	to deps_list_t.
	(struct haifa_insn_data.resolved_deps): Rename to resolved_back_deps.
	Change its type to deps_list_t.
	(INSN_BACK_DEPS): New access macro to use instead of LOG_LINKS.
	(INSN_DEPEND): Rename to INSN_FORW_DEPS.
	(RESOLVED_DEPS): Rename to INSN_RESOLVED_BACK_DEPS.

	(INSN_COST): Move to haifa-sched.c.  Use insn_cost () instead.
	
	(DEP_STATUS): Rename to DEP_LINK_STATUS.  Fix typo in the comment.

	(add_forw_dep, delete_back_forw_dep, insn_cost): Update declaration and
	all callers.
	(dep_cost): Declare.
	
	* sched-deps.c (CHECK): New macro to enable or disable sanity checks.
	(ds_to_dk, dk_to_ds): New functions.
	
	(init_dep_1): New static function.
	(init_dep): New function.
	(copy_dep): New static function.
	
	(dep_link_consistent_p, attach_dep_link, add_to_deps_list): New static
	functions.
	(detach_dep_link): New static function.
	(move_dep_link): New function.
	
	(dep_links_consistent_p, dump_dep_links): New static functions.
	(debug_dep_links): New debugging function.
	
	(deps_obstack, dl_obstack, dn_obstack): New static variables.
	
	(alloc_deps_list, init_deps_list): New static functions.
	(create_deps_list): New function.
	(clear_deps_list): New static function.
	(free_deps_list, delete_deps_list, deps_list_empty_p): New functions.
	(deps_list_consistent_p, dump_deps_list): New static functions.
	(debug_deps_list): New function.
	(add_back_dep_to_deps_list, find_link_by_pro_in_deps_list): New
	functions.
	(find_link_by_con_in_deps_list, copy_deps_list_change_con): Ditto.

	(maybe_add_or_update_back_dep_1, add_or_update_back_dep_1): Update to
	use new scheduler dependencies lists.
	(add_back_dep, delete_all_dependences, fixup_sched_groups): Ditto.
	(sched_analyze): Ditto.  Initialize dependencies lists.
	(add_forw_dep, compute_forward_dependences): Update to use new
	scheduler dependencies lists.
	
	(init_dependency_caches): Init deps_obstack.
	(free_dependency_caches): Free deps_obstack.
	
	(adjust_add_sorted_back_dep, adjust_back_add_forw_dep): Update to use
	new scheduler dependencies lists.
	(delete_forw_dep, add_or_update_back_forw_dep): Ditto.
	(add_back_forw_dep, delete_back_forw_dep): Ditto.

	* sched-rgn.c (set_spec_fed, find_conditional_protection, is_pfree):
	Update to use new scheduler dependencies lists.
	(is_conditionally_protected, is_prisky, add_branch_dependences): Ditto.
	(debug_dependencies): Ditto.
	(schedule_region): Update comments.
	
	* sched-ebb.c (earliest_block_with_similiar_load): Update to use new
	scheduler dependencies lists.
	(schedule_ebb): Update comments.
	
	* rtl.def (DEPS_LIST): Remove.
	
	* lists.c (unused_deps_list): Remove.
	(free_list): Update assertions.
	
	(alloc_DEPS_LIST, free_DEPS_LIST_list, free_DEPS_LIST_node): Remove.
	(remove_free_DEPS_LIST_elem, copy_DEPS_LIST_list): Ditto.

	* rtl.h (free_DEPS_LIST_list, alloc_DEPS_LIST): Remove declarations.
	(remove_free_DEPS_LIST_elem, copy_DEPS_LIST_list): Ditto.
	
	* haifa-sched.c (comments): Update.
	(insn_cost1): Remove.  Inline the code into insn_cost ().
	(insn_cost): Update to use new scheduler dependencies lists.  Move
	processing of the dependency cost to dep_cost ().
	(dep_cost): New function.  Use it instead of insn_cost () when
	evaluating cost of the dependency.  Use compatible interface to
	interact with the target.
	(priority): Update to use new scheduler dependencies lists.
	(rank_for_schedule): Ditto.  Optimize the heuristic that prefers the
	insn on which a greater number of insns depend.
	(schedule_insn): Update to use new scheduler dependencies lists.  Add
	code to free backward dependencies lists.  Inline and optimize code
	from resolve_dep (); see PR28071.
	(ok_for_early_queue_removal): Update to use new scheduler dependencies
	lists.  Update call to targetm.sched.is_costly_dependence hook.
	
	(fix_inter_tick, try_ready, fix_tick_ready): Update to use new
	scheduler dependencies lists.
	
	(resolve_dep): Remove.  Move the logic to schedule_insn ().
	(init_h_i_d): Initialize dependencies lists.
	
	(process_insn_depend_be_in_spec): Rename to
	process_insn_forw_deps_be_in_spec.  Update to use new scheduler
	dependencies lists.
	(add_to_speculative_block, create_check_block_twin, fix_recovery_deps):
	Update to use new scheduler dependencies lists.
	(clear_priorities, calc_priorities, add_jump_dependencies): Ditto.
	
	* ddg.c (create_ddg_dependence, create_ddg_dep_no_link): Update to use
	new scheduler dependencies lists.
	(build_intra_loop_deps): Ditto.
	
	* target.h (struct _dep): Declare to use in
	gcc_target.sched.is_costly_dependence.
	(struct gcc_target.sched.adjust_cost): Fix typo.
	(struct gcc_target.sched.is_costly_dependence): Change signature to use
	single dep_t parameter instead of an equivalent triad.
	(struct gcc_target.sched.adjust_cost_2): Remove.

	* target-def.h (TARGET_SCHED_ADJUST_COST_2): Remove.

	* reg-notes.def (DEP_TRUE, DEP_OUTPUT, DEP_ANTI): Update comments.

	* doc/tm.texi (TARGET_SCHED_IS_COSTLY_DEPENDENCE): Update
	documentation.
	(TARGET_SCHED_ADJUST_COST_2): Remove documentation.

	* doc/rtl.texi (LOG_LINKS): Remove part about instruction scheduler.
	(REG_DEP_TRUE): Document.
	
	* config/ia64/ia64.c (ia64_adjust_cost_2): Rename to ia64_adjust_cost.
	Change signature to correspond to the targetm.sched.adjust_cost hook.  
	Update use in TARGET_SCHED_ADJUST_COST_2.
	(TARGET_SCHED_ADJUST_COST_2): Rename to TARGET_SCHED_ADJUST_COST.
	(ia64_dependencies_evaluation_hook, ia64_dfa_new_cycle): Update to use
	new scheduler dependencies lists.
	(ia64_gen_check): Ditto.

	* config/mips/mips.c (vr4130_swap_insns_p): Update to use new scheduler
	dependencies lists.
	
	* config/rs6000/rs6000.c (rs6000_is_costly_dependence): Change
	signature to correspond to the targetm.sched.is_costly_dependence hook.
	(is_costly_group): Update to use new scheduler dependencies lists.

	* config/spu/spu.c (spu_sched_adjust_cost): Use insn_cost () function
	instead of INSN_COST () macro.

From-SVN: r121494
Author: Maxim Kuvyrkov <mkuvyrkov@ispras.ru>
Date:   2007-02-02 09:11:11 +0000
Commit: b198261f9c (parent 9a5a8e58d2)
19 changed files with 1369 additions and 571 deletions

config/ia64/ia64.c

@@ -211,7 +211,7 @@ static void ia64_output_function_epilogue (FILE *, HOST_WIDE_INT);
static void ia64_output_function_end_prologue (FILE *);
static int ia64_issue_rate (void);
static int ia64_adjust_cost_2 (rtx, int, rtx, int);
static int ia64_adjust_cost (rtx, rtx, rtx, int);
static void ia64_sched_init (FILE *, int, int);
static void ia64_sched_init_global (FILE *, int, int);
static void ia64_sched_finish_global (FILE *, int);
@@ -326,8 +326,8 @@ static const struct attribute_spec ia64_attribute_table[] =
#undef TARGET_IN_SMALL_DATA_P
#define TARGET_IN_SMALL_DATA_P ia64_in_small_data_p
#undef TARGET_SCHED_ADJUST_COST_2
#define TARGET_SCHED_ADJUST_COST_2 ia64_adjust_cost_2
#undef TARGET_SCHED_ADJUST_COST
#define TARGET_SCHED_ADJUST_COST ia64_adjust_cost
#undef TARGET_SCHED_ISSUE_RATE
#define TARGET_SCHED_ISSUE_RATE ia64_issue_rate
#undef TARGET_SCHED_VARIABLE_ISSUE
@@ -6265,18 +6265,16 @@ ia64_single_set (rtx insn)
return ret;
}
/* Adjust the cost of a scheduling dependency.
Return the new cost of a dependency of type DEP_TYPE or INSN on DEP_INSN.
COST is the current cost. */
/* Adjust the cost of a scheduling dependency. Return the new cost of
a dependency LINK or INSN on DEP_INSN. COST is the current cost. */
static int
ia64_adjust_cost_2 (rtx insn, int dep_type1, rtx dep_insn, int cost)
ia64_adjust_cost (rtx insn, rtx link, rtx dep_insn, int cost)
{
enum reg_note dep_type = (enum reg_note) dep_type1;
enum attr_itanium_class dep_class;
enum attr_itanium_class insn_class;
if (dep_type != REG_DEP_OUTPUT)
if (REG_NOTE_KIND (link) != REG_DEP_OUTPUT)
return cost;
insn_class = ia64_safe_itanium_class (insn);
@@ -6305,7 +6303,7 @@ ia64_emit_insn_before (rtx insn, rtx before)
static void
ia64_dependencies_evaluation_hook (rtx head, rtx tail)
{
rtx insn, link, next, next_tail;
rtx insn, next, next_tail;
/* Before reload, which_alternative is not set, which means that
ia64_safe_itanium_class will produce wrong results for (at least)
@@ -6321,13 +6319,16 @@ ia64_dependencies_evaluation_hook (rtx head, rtx tail)
if (INSN_P (insn)
&& ia64_safe_itanium_class (insn) == ITANIUM_CLASS_IALU)
{
for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
dep_link_t link;
FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
{
enum attr_itanium_class c;
if (REG_NOTE_KIND (link) != REG_DEP_TRUE)
if (DEP_LINK_KIND (link) != REG_DEP_TRUE)
continue;
next = XEXP (link, 0);
next = DEP_LINK_CON (link);
c = ia64_safe_itanium_class (next);
if ((c == ITANIUM_CLASS_ST
|| c == ITANIUM_CLASS_STF)
@@ -6616,14 +6617,14 @@ ia64_dfa_new_cycle (FILE *dump, int verbose, rtx insn, int last_clock,
if (c != ITANIUM_CLASS_MMMUL && c != ITANIUM_CLASS_MMSHF)
{
rtx link;
dep_link_t link;
int d = -1;
for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
if (REG_NOTE_KIND (link) == 0)
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
{
enum attr_itanium_class dep_class;
rtx dep_insn = XEXP (link, 0);
rtx dep_insn = DEP_LINK_PRO (link);
dep_class = ia64_safe_itanium_class (dep_insn);
if ((dep_class == ITANIUM_CLASS_MMMUL
@@ -7141,13 +7142,13 @@ ia64_gen_check (rtx insn, rtx label, bool mutate_p)
As long as patterns are unique for each instruction, this can be
accomplished by matching ORIG_PAT fields. */
{
rtx link;
dep_link_t link;
int check_no = 0;
rtx orig_pat = ORIG_PAT (insn);
for (link = RESOLVED_DEPS (insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
{
rtx x = XEXP (link, 0);
rtx x = DEP_LINK_PRO (link);
if (ORIG_PAT (x) == orig_pat)
check_no = spec_check_no[INSN_UID (x)];

config/mips/mips.c

@@ -9806,7 +9806,7 @@ vr4130_true_reg_dependence_p (rtx insn)
static bool
vr4130_swap_insns_p (rtx insn1, rtx insn2)
{
rtx dep;
dep_link_t dep;
/* Check for the following case:
@@ -9816,11 +9816,11 @@ vr4130_swap_insns_p (rtx insn1, rtx insn2)
If INSN1 is the last instruction blocking X, it would be better to
choose (INSN1, X) over (INSN2, INSN1). */
for (dep = INSN_DEPEND (insn1); dep != 0; dep = XEXP (dep, 1))
if (REG_NOTE_KIND (dep) == REG_DEP_ANTI
&& INSN_PRIORITY (XEXP (dep, 0)) > INSN_PRIORITY (insn2)
&& recog_memoized (XEXP (dep, 0)) >= 0
&& get_attr_vr4130_class (XEXP (dep, 0)) == VR4130_CLASS_ALU)
FOR_EACH_DEP_LINK (dep, INSN_FORW_DEPS (insn1))
if (DEP_LINK_KIND (dep) == REG_DEP_ANTI
&& INSN_PRIORITY (DEP_LINK_CON (dep)) > INSN_PRIORITY (insn2)
&& recog_memoized (DEP_LINK_CON (dep)) >= 0
&& get_attr_vr4130_class (DEP_LINK_CON (dep)) == VR4130_CLASS_ALU)
return false;
if (vr4130_last_insn != 0

config/rs6000/rs6000.c

@@ -699,7 +699,7 @@ static bool set_to_load_agen (rtx,rtx);
static bool adjacent_mem_locations (rtx,rtx);
static int rs6000_adjust_priority (rtx, int);
static int rs6000_issue_rate (void);
static bool rs6000_is_costly_dependence (rtx, rtx, rtx, int, int);
static bool rs6000_is_costly_dependence (dep_t, int, int);
static rtx get_next_active_insn (rtx, rtx);
static bool insn_terminates_group_p (rtx , enum group_termination);
static bool insn_must_be_first_in_group (rtx);
@@ -17544,9 +17544,11 @@ get_store_dest (rtx pat)
costly by the given target. */
static bool
rs6000_is_costly_dependence (rtx insn, rtx next, rtx link, int cost,
int distance)
rs6000_is_costly_dependence (dep_t dep, int cost, int distance)
{
rtx insn;
rtx next;
/* If the flag is not enabled - no dependence is considered costly;
allow all dependent insns in the same group.
This is the most aggressive option. */
@@ -17559,6 +17561,9 @@ rs6000_is_costly_dependence (rtx insn, rtx next, rtx link, int cost,
if (rs6000_sched_costly_dep == all_deps_costly)
return true;
insn = DEP_PRO (dep);
next = DEP_CON (dep);
if (rs6000_sched_costly_dep == store_to_load_dep_costly
&& is_load_insn (next)
&& is_store_insn (insn))
@@ -17568,7 +17573,7 @@ rs6000_is_costly_dependence (rtx insn, rtx next, rtx link, int cost,
if (rs6000_sched_costly_dep == true_store_to_load_dep_costly
&& is_load_insn (next)
&& is_store_insn (insn)
&& (!link || (int) REG_NOTE_KIND (link) == 0))
&& DEP_KIND (dep) == REG_DEP_TRUE)
/* Prevent load after store in the same group if it is a true
dependence. */
return true;
@@ -18040,24 +18045,24 @@ static bool
is_costly_group (rtx *group_insns, rtx next_insn)
{
int i;
rtx link;
int cost;
int issue_rate = rs6000_issue_rate ();
for (i = 0; i < issue_rate; i++)
{
dep_link_t link;
rtx insn = group_insns[i];
if (!insn)
continue;
for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
{
rtx next = XEXP (link, 0);
if (next == next_insn)
{
cost = insn_cost (insn, link, next_insn);
if (rs6000_is_costly_dependence (insn, next_insn, link, cost, 0))
return true;
}
dep_t dep = DEP_LINK_DEP (link);
rtx next = DEP_CON (dep);
if (next == next_insn
&& rs6000_is_costly_dependence (dep, dep_cost (dep), 0))
return true;
}
}

config/spu/spu.c

@@ -2260,7 +2260,7 @@ spu_sched_adjust_cost (rtx insn, rtx link ATTRIBUTE_UNUSED,
jump_insn. We adjust here so higher cost insns will get scheduled
earlier. */
if (GET_CODE (insn) == JUMP_INSN && REG_NOTE_KIND (link) == REG_DEP_ANTI)
return INSN_COST (dep_insn) - 3;
return insn_cost (dep_insn) - 3;
return cost;
}

ddg.c

@@ -53,7 +53,7 @@ enum edge_flag {NOT_IN_SCC = 0, IN_SCC};
static void add_backarc_to_ddg (ddg_ptr, ddg_edge_ptr);
static void add_backarc_to_scc (ddg_scc_ptr, ddg_edge_ptr);
static void add_scc_to_ddg (ddg_all_sccs_ptr, ddg_scc_ptr);
static void create_ddg_dependence (ddg_ptr, ddg_node_ptr, ddg_node_ptr, rtx);
static void create_ddg_dependence (ddg_ptr, ddg_node_ptr, ddg_node_ptr, dep_t);
static void create_ddg_dep_no_link (ddg_ptr, ddg_node_ptr, ddg_node_ptr,
dep_type, dep_data_type, int);
static ddg_edge_ptr create_ddg_edge (ddg_node_ptr, ddg_node_ptr, dep_type,
@@ -148,7 +148,7 @@ mem_access_insn_p (rtx insn)
a ddg_edge and adds it to the given DDG. */
static void
create_ddg_dependence (ddg_ptr g, ddg_node_ptr src_node,
ddg_node_ptr dest_node, rtx link)
ddg_node_ptr dest_node, dep_t link)
{
ddg_edge_ptr e;
int latency, distance = 0;
@@ -166,11 +166,11 @@ create_ddg_dependence (ddg_ptr g, ddg_node_ptr src_node,
gcc_assert (link);
/* Note: REG_DEP_ANTI applies to MEM ANTI_DEP as well!! */
if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
if (DEP_KIND (link) == REG_DEP_ANTI)
t = ANTI_DEP;
else if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
else if (DEP_KIND (link) == REG_DEP_OUTPUT)
t = OUTPUT_DEP;
latency = insn_cost (src_node->insn, link, dest_node->insn);
latency = dep_cost (link);
e = create_ddg_edge (src_node, dest_node, t, dt, latency, distance);
@@ -200,15 +200,23 @@ create_ddg_dep_no_link (ddg_ptr g, ddg_node_ptr from, ddg_node_ptr to,
{
ddg_edge_ptr e;
int l;
rtx link = alloc_INSN_LIST (to->insn, NULL_RTX);
enum reg_note dep_kind;
struct _dep _dep, *dep = &_dep;
if (d_t == ANTI_DEP)
PUT_REG_NOTE_KIND (link, REG_DEP_ANTI);
dep_kind = REG_DEP_ANTI;
else if (d_t == OUTPUT_DEP)
PUT_REG_NOTE_KIND (link, REG_DEP_OUTPUT);
dep_kind = REG_DEP_OUTPUT;
else
{
gcc_assert (d_t == TRUE_DEP);
l = insn_cost (from->insn, link, to->insn);
free_INSN_LIST_node (link);
dep_kind = REG_DEP_TRUE;
}
init_dep (dep, from->insn, to->insn, dep_kind);
l = dep_cost (dep);
e = create_ddg_edge (from, to, d_t, d_dt, l, distance);
if (distance > 0)
@@ -375,7 +383,8 @@ build_intra_loop_deps (ddg_ptr g)
int i;
/* Hold the dependency analysis state during dependency calculations. */
struct deps tmp_deps;
rtx head, tail, link;
rtx head, tail;
dep_link_t link;
/* Build the dependence information, using the sched_analyze function. */
init_deps_global ();
@@ -394,16 +403,16 @@ build_intra_loop_deps (ddg_ptr g)
if (! INSN_P (dest_node->insn))
continue;
for (link = LOG_LINKS (dest_node->insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (dest_node->insn))
{
ddg_node_ptr src_node = get_node_of_insn (g, XEXP (link, 0));
dep_t dep = DEP_LINK_DEP (link);
ddg_node_ptr src_node = get_node_of_insn (g, DEP_PRO (dep));
if (!src_node)
continue;
add_forw_dep (dest_node->insn, link);
create_ddg_dependence (g, src_node, dest_node,
INSN_DEPEND (src_node->insn));
add_forw_dep (link);
create_ddg_dependence (g, src_node, dest_node, dep);
}
/* If this insn modifies memory, add an edge to all insns that access

doc/rtl.texi

@@ -3271,13 +3271,7 @@ This list is originally set up by the flow analysis pass; it is a null
pointer until then. Flow only adds links for those data dependencies
which can be used for instruction combination. For each insn, the flow
analysis pass adds a link to insns which store into registers values
that are used for the first time in this insn. The instruction
scheduling pass adds extra links so that every dependence will be
represented. Links represent data dependencies, antidependencies and
output dependencies; the machine mode of the link distinguishes these
three types: antidependencies have mode @code{REG_DEP_ANTI}, output
dependencies have mode @code{REG_DEP_OUTPUT}, and data dependencies have
mode @code{VOIDmode}.
that are used for the first time in this insn.
The @code{REG_NOTES} field of an insn is a chain similar to the
@code{LOG_LINKS} field but it includes @code{expr_list} expressions in
@@ -3500,13 +3494,18 @@ they simply have mode @code{VOIDmode}, and are printed without any
descriptive text.
@table @code
@findex REG_DEP_ANTI
@item REG_DEP_ANTI
This indicates an anti dependence (a write after read dependence).
@findex REG_DEP_TRUE
@item REG_DEP_TRUE
This indicates a true dependence (a read after write dependence).
@findex REG_DEP_OUTPUT
@item REG_DEP_OUTPUT
This indicates an output dependence (a write after write dependence).
@findex REG_DEP_ANTI
@item REG_DEP_ANTI
This indicates an anti dependence (a write after read dependence).
@end table
These notes describe information gathered from gcov profile data. They

doc/tm.texi

@@ -6112,14 +6112,13 @@ correspondingly processor cycle on which the previous insn has been
issued and the current processor cycle.
@end deftypefn
@deftypefn {Target Hook} bool TARGET_SCHED_IS_COSTLY_DEPENDENCE (rtx @var{insn1}, rtx @var{insn2}, rtx @var{dep_link}, int @var{dep_cost}, int @var{distance})
@deftypefn {Target Hook} bool TARGET_SCHED_IS_COSTLY_DEPENDENCE (struct dep_def *@var{_dep}, int @var{cost}, int @var{distance})
This hook is used to define which dependences are considered costly by
the target, so costly that it is not advisable to schedule the insns that
are involved in the dependence too close to one another. The parameters
to this hook are as follows: The second parameter @var{insn2} is dependent
upon the first parameter @var{insn1}. The dependence between @var{insn1}
and @var{insn2} is represented by the third parameter @var{dep_link}. The
fourth parameter @var{cost} is the cost of the dependence, and the fifth
to this hook are as follows: The first parameter @var{_dep} is the dependence
being evaluated. The second parameter @var{cost} is the cost of the
dependence, and the third
parameter @var{distance} is the distance in cycles between the two insns.
The hook returns @code{true} if considering the distance between the two
insns the dependence between them is considered costly by the target,
@@ -6134,14 +6133,6 @@ closer to one another---i.e., closer than the dependence distance; however,
not in cases of "costly dependences", which this hook allows to be defined.
@end deftypefn
@deftypefn {Target Hook} int TARGET_SCHED_ADJUST_COST_2 (rtx @var{insn}, int @var{dep_type}, rtx @var{dep_insn}, int @var{cost})
This hook is a modified version of @samp{TARGET_SCHED_ADJUST_COST}. Instead
of passing dependence as a second parameter, it passes a type of that
dependence. This is useful to calculate cost of dependence between insns
not having the corresponding link. If @samp{TARGET_SCHED_ADJUST_COST_2} is
defined it is used instead of @samp{TARGET_SCHED_ADJUST_COST}.
@end deftypefn
@deftypefn {Target Hook} void TARGET_SCHED_H_I_D_EXTENDED (void)
This hook is called by the insn scheduler after emitting a new instruction to
the instruction stream. The hook notifies a target backend to extend its

haifa-sched.c

@@ -82,9 +82,9 @@ Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
compute_block_backward_dependences ().
Dependencies set up by memory references are treated in exactly the
same way as other dependencies, by using LOG_LINKS backward
dependences. LOG_LINKS are translated into INSN_DEPEND forward
dependences for the purpose of forward list scheduling.
same way as other dependencies, by using insn backward dependences
INSN_BACK_DEPS. INSN_BACK_DEPS are translated into forward dependences
INSN_FORW_DEPS for the purpose of forward list scheduling.
Having optimized the critical path, we may have also unduly
extended the lifetimes of some registers. If an operation requires
@@ -251,8 +251,8 @@ static basic_block before_recovery;
sufficient time has passed to make them ready. As time passes,
insns move from the "Queued" set to the "Ready" list.
The "Pending" list (P) are the insns in the INSN_DEPEND of the unscheduled
insns, i.e., those that are ready, queued, and pending.
The "Pending" list (P) are the insns in the INSN_FORW_DEPS of the
unscheduled insns, i.e., those that are ready, queued, and pending.
The "Queued" set (Q) is implemented by the variable `insn_queue'.
The "Ready" list (R) is implemented by the variables `ready' and
`n_ready'.
@@ -489,7 +489,6 @@ haifa_classify_insn (rtx insn)
/* Forward declarations. */
HAIFA_INLINE static int insn_cost1 (rtx, enum reg_note, rtx, rtx);
static int priority (rtx);
static int rank_for_schedule (const void *, const void *);
static void swap_sort (rtx *, int);
@@ -544,7 +543,6 @@ static rtx choose_ready (struct ready_list *);
static void fix_inter_tick (rtx, rtx);
static int fix_tick_ready (rtx);
static void change_queue_index (rtx, int);
static void resolve_dep (rtx, rtx);
/* The following functions are used to implement scheduling of data/control
speculative instructions. */
@@ -555,7 +553,7 @@ static void extend_global (rtx);
static void extend_all (rtx);
static void init_h_i_d (rtx);
static void generate_recovery_code (rtx);
static void process_insn_depend_be_in_spec (rtx, rtx, ds_t);
static void process_insn_forw_deps_be_in_spec (deps_list_t, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
static dw_t dep_weak (ds_t);
@@ -607,27 +605,15 @@ static struct sched_info current_sched_info_var;
static rtx last_scheduled_insn;
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
/* Cached cost of the instruction. Use below function to get cost of the
insn. -1 here means that the field is not initialized. */
#define INSN_COST(INSN) (h_i_d[INSN_UID (INSN)].cost)
/* Compute cost of executing INSN.
This is the number of cycles between instruction issue and
instruction results. */
HAIFA_INLINE int
insn_cost (rtx insn, rtx link, rtx used)
{
return insn_cost1 (insn, used ? REG_NOTE_KIND (link) : REG_NOTE_MAX,
link, used);
}
/* Compute cost of executing INSN given the dependence on the insn USED.
If LINK is not NULL, then its REG_NOTE_KIND is used as a dependence type.
Otherwise, dependence between INSN and USED is assumed to be of type
DEP_TYPE. This function was introduced as a workaround for
targetm.adjust_cost hook.
This is the number of cycles between instruction issue and
instruction results. */
HAIFA_INLINE static int
insn_cost1 (rtx insn, enum reg_note dep_type, rtx link, rtx used)
insn_cost (rtx insn)
{
int cost = INSN_COST (insn);
@ -652,9 +638,17 @@ insn_cost1 (rtx insn, enum reg_note dep_type, rtx link, rtx used)
}
}
/* In this case estimate cost without caring how insn is used. */
if (used == 0)
return cost;
return cost;
}
/* Compute cost of dependence LINK.
This is the number of cycles between instruction issue and
instruction results. */
int
dep_cost (dep_t link)
{
rtx used = DEP_CON (link);
int cost;
/* A USE insn should never require the value used to be computed.
This allows the computation of a function's result and parameter
@ -663,7 +657,10 @@ insn_cost1 (rtx insn, enum reg_note dep_type, rtx link, rtx used)
cost = 0;
else
{
gcc_assert (!link || dep_type == REG_NOTE_KIND (link));
rtx insn = DEP_PRO (link);
enum reg_note dep_type = DEP_KIND (link);
cost = insn_cost (insn);
if (INSN_CODE (insn) >= 0)
{
@ -680,13 +677,23 @@ insn_cost1 (rtx insn, enum reg_note dep_type, rtx link, rtx used)
cost = insn_latency (insn, used);
}
if (targetm.sched.adjust_cost_2)
cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost);
else
if (targetm.sched.adjust_cost != NULL)
{
gcc_assert (link);
if (targetm.sched.adjust_cost)
cost = targetm.sched.adjust_cost (used, link, insn, cost);
/* This variable is used for backward compatibility with the
targets. */
rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
/* Make it self-cycled, so that if someone tries to walk over this
incomplete list he/she will be caught in an endless loop. */
XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
/* Targets use only REG_NOTE_KIND of the link. */
PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_KIND (link));
cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
insn, cost);
free_INSN_LIST_node (dep_cost_rtx_link);
}
if (cost < 0)
@ -701,7 +708,7 @@ insn_cost1 (rtx insn, enum reg_note dep_type, rtx link, rtx used)
static int
priority (rtx insn)
{
rtx link;
dep_link_t link;
if (! INSN_P (insn))
return 0;
@ -710,8 +717,12 @@ priority (rtx insn)
{
int this_priority = 0;
if (INSN_DEPEND (insn) == 0)
this_priority = insn_cost (insn, 0, 0);
if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
/* ??? We should set INSN_PRIORITY to insn_cost when an insn has
some forward deps but all of them are ignored by the
contributes_to_priority hook.  At the moment we set the priority
of such an insn to 0. */
this_priority = insn_cost (insn);
else
{
rtx prev_first, twin;
@ -719,8 +730,9 @@ priority (rtx insn)
/* For recovery check instructions we calculate priority slightly
different than that of normal instructions. Instead of walking
through INSN_DEPEND (check) list, we walk through INSN_DEPEND list
of each instruction in the corresponding recovery block. */
through INSN_FORW_DEPS (check) list, we walk through
INSN_FORW_DEPS list of each instruction in the corresponding
recovery block. */
rec = RECOVERY_BLOCK (insn);
if (!rec || rec == EXIT_BLOCK_PTR)
@ -736,15 +748,18 @@ priority (rtx insn)
do
{
for (link = INSN_DEPEND (twin); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (twin))
{
rtx next;
int next_priority;
next = XEXP (link, 0);
dep_t dep = DEP_LINK_DEP (link);
next = DEP_CON (dep);
if (BLOCK_FOR_INSN (next) != rec)
{
int cost;
/* Critical path is meaningful in block boundaries
only. */
if (! (*current_sched_info->contributes_to_priority)
@ -756,17 +771,23 @@ priority (rtx insn)
producers will more likely be scheduled, thus,
resolving the dependence. */
|| ((current_sched_info->flags & DO_SPECULATION)
&& (DEP_STATUS (link) & SPECULATIVE)
&& (DEP_STATUS (dep) & SPECULATIVE)
&& !(spec_info->flags
& COUNT_SPEC_IN_CRITICAL_PATH)))
continue;
next_priority = insn_cost1 (insn,
twin == insn ?
REG_NOTE_KIND (link) :
REG_DEP_ANTI,
twin == insn ? link : 0,
next) + priority (next);
if (twin == insn)
cost = dep_cost (dep);
else
{
struct _dep _dep1, *dep1 = &_dep1;
init_dep (dep1, insn, next, REG_DEP_ANTI);
cost = dep_cost (dep1);
}
next_priority = cost + priority (next);
if (next_priority > this_priority)
this_priority = next_priority;
@ -803,8 +824,8 @@ rank_for_schedule (const void *x, const void *y)
{
rtx tmp = *(const rtx *) y;
rtx tmp2 = *(const rtx *) x;
rtx link;
int tmp_class, tmp2_class, depend_count1, depend_count2;
dep_link_t link1, link2;
int tmp_class, tmp2_class;
int val, priority_val, weight_val, info_val;
/* The insn in a schedule group should be issued the first. */
@ -858,18 +879,26 @@ rank_for_schedule (const void *x, const void *y)
2) Anti/Output dependent on last scheduled insn.
3) Independent of last scheduled insn, or has latency of one.
Choose the insn from the highest numbered class if different. */
link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
link1
= find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
tmp);
if (link1 == NULL || dep_cost (DEP_LINK_DEP (link1)) == 1)
tmp_class = 3;
else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
else if (/* Data dependence. */
DEP_LINK_KIND (link1) == REG_DEP_TRUE)
tmp_class = 1;
else
tmp_class = 2;
link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
link2
= find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
tmp2);
if (link2 == NULL || dep_cost (DEP_LINK_DEP (link2)) == 1)
tmp2_class = 3;
else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
else if (/* Data dependence. */
DEP_LINK_KIND (link2) == REG_DEP_TRUE)
tmp2_class = 1;
else
tmp2_class = 2;
@ -881,17 +910,22 @@ rank_for_schedule (const void *x, const void *y)
/* Prefer the insn which has more later insns that depend on it.
This gives the scheduler more freedom when scheduling later
instructions at the expense of added register pressure. */
depend_count1 = 0;
for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
depend_count1++;
depend_count2 = 0;
for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
depend_count2++;
link1 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp));
link2 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp2));
val = depend_count2 - depend_count1;
if (val)
return val;
while (link1 != NULL && link2 != NULL)
{
link1 = DEP_LINK_NEXT (link1);
link2 = DEP_LINK_NEXT (link2);
}
if (link1 != NULL && link2 == NULL)
/* TMP (Y) has more insns that depend on it. */
return -1;
if (link1 == NULL && link2 != NULL)
/* TMP2 (X) has more insns that depend on it. */
return 1;
/* If insns are equally good, sort by INSN_LUID (original insn order),
so that we make the sort stable. This minimizes instruction movement,
@ -1127,7 +1161,7 @@ static int last_clock_var;
static int
schedule_insn (rtx insn)
{
rtx link;
dep_link_t link;
int advance = 0;
if (sched_verbose >= 1)
@ -1147,18 +1181,16 @@ schedule_insn (rtx insn)
/* Scheduling instruction should have all its dependencies resolved and
should have been removed from the ready list. */
gcc_assert (INSN_DEP_COUNT (insn) == 0);
gcc_assert (!LOG_LINKS (insn));
gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
gcc_assert (INSN_DEP_COUNT (insn) == 0
&& deps_list_empty_p (INSN_BACK_DEPS (insn)));
free_deps_list (INSN_BACK_DEPS (insn));
/* Now we can free INSN_RESOLVED_BACK_DEPS list. */
delete_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
/* Now we can free RESOLVED_DEPS list. */
if (current_sched_info->flags & USE_DEPS_LIST)
free_DEPS_LIST_list (&RESOLVED_DEPS (insn));
else
free_INSN_LIST_list (&RESOLVED_DEPS (insn));
gcc_assert (INSN_TICK (insn) >= MIN_TICK);
if (INSN_TICK (insn) > clock_var)
/* INSN has been prematurely moved from the queue to the ready list.
@ -1170,11 +1202,19 @@ schedule_insn (rtx insn)
INSN_TICK (insn) = clock_var;
/* Update dependent instructions. */
for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
{
rtx next = XEXP (link, 0);
rtx next = DEP_LINK_CON (link);
resolve_dep (next, insn);
/* Resolve the dependence between INSN and NEXT. */
INSN_DEP_COUNT (next)--;
move_dep_link (DEP_NODE_BACK (DEP_LINK_NODE (link)),
INSN_RESOLVED_BACK_DEPS (next));
gcc_assert ((INSN_DEP_COUNT (next) == 0)
== deps_list_empty_p (INSN_BACK_DEPS (next)));
if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
{
@ -1191,7 +1231,7 @@ schedule_insn (rtx insn)
/* Check always has only one forward dependence (to the first insn in
the recovery block), therefore, this will be executed only once. */
{
gcc_assert (XEXP (link, 1) == 0);
gcc_assert (DEP_LINK_NEXT (link) == NULL);
fix_recovery_deps (RECOVERY_BLOCK (insn));
}
}
@ -1525,17 +1565,22 @@ ok_for_early_queue_removal (rtx insn)
{
for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
{
rtx dep_link = 0;
int dep_cost;
int cost;
if (!NOTE_P (prev_insn))
{
dep_link = find_insn_list (insn, INSN_DEPEND (prev_insn));
dep_link_t dep_link;
dep_link = (find_link_by_con_in_deps_list
(INSN_FORW_DEPS (prev_insn), insn));
if (dep_link)
{
dep_cost = insn_cost (prev_insn, dep_link, insn) ;
if (targetm.sched.is_costly_dependence (prev_insn, insn,
dep_link, dep_cost,
dep_t dep = DEP_LINK_DEP (dep_link);
cost = dep_cost (dep);
if (targetm.sched.is_costly_dependence (dep, cost,
flag_sched_stalled_insns_dep - n_cycles))
return false;
}
@ -2705,7 +2750,7 @@ fix_inter_tick (rtx head, rtx tail)
if (INSN_P (head))
{
int tick;
rtx link;
dep_link_t link;
tick = INSN_TICK (head);
gcc_assert (tick >= MIN_TICK);
@ -2722,11 +2767,11 @@ fix_inter_tick (rtx head, rtx tail)
INSN_TICK (head) = tick;
}
for (link = INSN_DEPEND (head); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (head))
{
rtx next;
next = XEXP (link, 0);
next = DEP_LINK_CON (link);
tick = INSN_TICK (next);
if (tick != INVALID_TICK
@ -2764,7 +2809,7 @@ int
try_ready (rtx next)
{
ds_t old_ts, *ts;
rtx link;
dep_link_t link;
ts = &TODO_SPEC (next);
old_ts = *ts;
@ -2775,27 +2820,34 @@ try_ready (rtx next)
if (!(current_sched_info->flags & DO_SPECULATION))
{
if (!LOG_LINKS (next))
if (deps_list_empty_p (INSN_BACK_DEPS (next)))
*ts &= ~HARD_DEP;
}
else
{
*ts &= ~SPECULATIVE & ~HARD_DEP;
link = LOG_LINKS (next);
if (link)
*ts &= ~SPECULATIVE & ~HARD_DEP;
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (next));
if (link != NULL)
{
/* LOG_LINKS are maintained sorted.
ds_t ds = DEP_LINK_STATUS (link) & SPECULATIVE;
/* Backward dependencies of the insn are maintained sorted.
So if DEP_STATUS of the first dep is SPECULATIVE,
then all other deps are speculative too. */
if (DEP_STATUS (link) & SPECULATIVE)
if (ds != 0)
{
/* Now we've got NEXT with speculative deps only.
1. Look at the deps to see what we have to do.
2. Check if we can do 'todo'. */
*ts = DEP_STATUS (link) & SPECULATIVE;
while ((link = XEXP (link, 1)))
*ts = ds_merge (*ts, DEP_STATUS (link) & SPECULATIVE);
*ts = ds;
while ((link = DEP_LINK_NEXT (link)) != NULL)
{
ds = DEP_LINK_STATUS (link) & SPECULATIVE;
*ts = ds_merge (*ts, ds);
}
if (dep_weak (*ts) < spec_info->weakness_cutoff)
/* Too few points. */
@ -2805,25 +2857,25 @@ try_ready (rtx next)
*ts |= HARD_DEP;
}
}
if (*ts & HARD_DEP)
gcc_assert (*ts == old_ts
&& QUEUE_INDEX (next) == QUEUE_NOWHERE);
else if (current_sched_info->new_ready)
*ts = current_sched_info->new_ready (next, *ts);
*ts = current_sched_info->new_ready (next, *ts);
/* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
/* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
have its original pattern or changed (speculative) one. This is due
to changing ebb in region scheduling.
* But if (old_ts & SPECULATIVE), then we are pretty sure that insn
has speculative pattern.
We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
control-speculative NEXT could have been discarded by sched-rgn.c
(the same case as when discarded by can_schedule_ready_p ()). */
if ((*ts & SPECULATIVE)
/* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
/* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
need to change anything. */
&& *ts != old_ts)
{
@ -2920,33 +2972,34 @@ try_ready (rtx next)
static int
fix_tick_ready (rtx next)
{
rtx link;
int tick, delay;
link = RESOLVED_DEPS (next);
if (link)
if (!deps_list_empty_p (INSN_RESOLVED_BACK_DEPS (next)))
{
int full_p;
dep_link_t link;
tick = INSN_TICK (next);
/* if tick is not equal to INVALID_TICK, then update
INSN_TICK of NEXT with the most recent resolved dependence
cost. Otherwise, recalculate from scratch. */
full_p = tick == INVALID_TICK;
do
{
rtx pro;
full_p = (tick == INVALID_TICK);
FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (next))
{
dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
int tick1;
pro = XEXP (link, 0);
gcc_assert (INSN_TICK (pro) >= MIN_TICK);
tick1 = INSN_TICK (pro) + insn_cost (pro, link, next);
tick1 = INSN_TICK (pro) + dep_cost (dep);
if (tick1 > tick)
tick = tick1;
if (!full_p)
break;
}
while ((link = XEXP (link, 1)) && full_p);
}
else
tick = -1;
@ -3005,22 +3058,6 @@ change_queue_index (rtx next, int delay)
}
}
/* INSN is being scheduled. Resolve the dependence between INSN and NEXT. */
static void
resolve_dep (rtx next, rtx insn)
{
rtx dep;
INSN_DEP_COUNT (next)--;
dep = remove_list_elem (insn, &LOG_LINKS (next));
XEXP (dep, 1) = RESOLVED_DEPS (next);
RESOLVED_DEPS (next) = dep;
gcc_assert ((INSN_DEP_COUNT (next) != 0 || !LOG_LINKS (next))
&& (LOG_LINKS (next) || INSN_DEP_COUNT (next) == 0));
}
/* Extend H_I_D data. */
static void
extend_h_i_d (void)
@ -3095,7 +3132,15 @@ init_h_i_d (rtx insn)
QUEUE_INDEX (insn) = QUEUE_NOWHERE;
INSN_TICK (insn) = INVALID_TICK;
INTER_TICK (insn) = INVALID_TICK;
find_insn_reg_weight1 (insn);
find_insn_reg_weight1 (insn);
/* These two lists will be freed in schedule_insn (). */
INSN_BACK_DEPS (insn) = create_deps_list (false);
INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
/* This one should be allocated on the obstack because it should live till
the scheduling ends. */
INSN_FORW_DEPS (insn) = create_deps_list (true);
}
/* Generates recovery code for INSN. */
@ -3114,18 +3159,20 @@ generate_recovery_code (rtx insn)
/* Helper function.
Tries to add speculative dependencies of type FS between instructions
in LINK list and TWIN. */
in deps_list L and TWIN. */
static void
process_insn_depend_be_in_spec (rtx link, rtx twin, ds_t fs)
process_insn_forw_deps_be_in_spec (deps_list_t l, rtx twin, ds_t fs)
{
for (; link; link = XEXP (link, 1))
dep_link_t link;
FOR_EACH_DEP_LINK (link, l)
{
ds_t ds;
rtx consumer;
consumer = XEXP (link, 0);
consumer = DEP_LINK_CON (link);
ds = DEP_STATUS (link);
ds = DEP_LINK_STATUS (link);
if (/* If we want to create speculative dep. */
fs
@ -3152,7 +3199,7 @@ process_insn_depend_be_in_spec (rtx link, rtx twin, ds_t fs)
ds |= fs;
}
add_back_forw_dep (consumer, twin, REG_NOTE_KIND (link), ds);
add_back_forw_dep (consumer, twin, DEP_LINK_KIND (link), ds);
}
}
@ -3175,7 +3222,8 @@ static void
add_to_speculative_block (rtx insn)
{
ds_t ts;
rtx link, twins = NULL;
dep_link_t link;
rtx twins = NULL;
ts = TODO_SPEC (insn);
gcc_assert (!(ts & ~BE_IN_SPEC));
@ -3191,34 +3239,37 @@ add_to_speculative_block (rtx insn)
DONE_SPEC (insn) |= ts;
/* First we convert all simple checks to branchy. */
for (link = LOG_LINKS (insn); link;)
for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
{
rtx check;
check = XEXP (link, 0);
rtx check = DEP_LINK_PRO (link);
if (IS_SPECULATION_SIMPLE_CHECK_P (check))
{
create_check_block_twin (check, true);
link = LOG_LINKS (insn);
/* Restart search. */
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
}
else
link = XEXP (link, 1);
/* Continue search. */
link = DEP_LINK_NEXT (link);
}
clear_priorities (insn);
do
{
rtx link, check, twin;
dep_link_t link;
rtx check, twin;
basic_block rec;
link = LOG_LINKS (insn);
gcc_assert (!(DEP_STATUS (link) & BEGIN_SPEC)
&& (DEP_STATUS (link) & BE_IN_SPEC)
&& (DEP_STATUS (link) & DEP_TYPES) == DEP_TRUE);
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
check = XEXP (link, 0);
gcc_assert ((DEP_LINK_STATUS (link) & BEGIN_SPEC) == 0
&& (DEP_LINK_STATUS (link) & BE_IN_SPEC) != 0
&& (DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
check = DEP_LINK_PRO (link);
gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
&& QUEUE_INDEX (check) == QUEUE_NOWHERE);
@ -3228,7 +3279,9 @@ add_to_speculative_block (rtx insn)
twin = emit_insn_before (copy_rtx (PATTERN (insn)), BB_END (rec));
extend_global (twin);
RESOLVED_DEPS (twin) = copy_DEPS_LIST_list (RESOLVED_DEPS (insn));
copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
INSN_RESOLVED_BACK_DEPS (insn),
twin);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
@ -3246,10 +3299,11 @@ add_to_speculative_block (rtx insn)
do
{
link = XEXP (link, 1);
if (link)
link = DEP_LINK_NEXT (link);
if (link != NULL)
{
check = XEXP (link, 0);
check = DEP_LINK_PRO (link);
if (BLOCK_FOR_INSN (check) == rec)
break;
}
@ -3258,27 +3312,31 @@ add_to_speculative_block (rtx insn)
}
while (1);
}
while (link);
while (link != NULL);
process_insn_depend_be_in_spec (INSN_DEPEND (insn), twin, ts);
process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, ts);
for (link = LOG_LINKS (insn); link;)
/* Remove all dependencies between INSN and insns in REC. */
for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
{
check = XEXP (link, 0);
check = DEP_LINK_PRO (link);
if (BLOCK_FOR_INSN (check) == rec)
{
delete_back_forw_dep (insn, check);
link = LOG_LINKS (insn);
delete_back_forw_dep (link);
/* Restart search. */
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
}
else
link = XEXP (link, 1);
/* Continue search. */
link = DEP_LINK_NEXT (link);
}
}
while (LOG_LINKS (insn));
while (!deps_list_empty_p (INSN_BACK_DEPS (insn)));
/* We can't add the dependence between insn and twin earlier because
that would make twin appear in the INSN_DEPEND (insn). */
/* We couldn't have added the dependencies between INSN and TWINS earlier
because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
while (twins)
{
rtx twin;
@ -3471,7 +3529,8 @@ static void
create_check_block_twin (rtx insn, bool mutate_p)
{
basic_block rec;
rtx label, check, twin, link;
rtx label, check, twin;
dep_link_t link;
ds_t fs;
gcc_assert (ORIG_PAT (insn)
@ -3521,14 +3580,14 @@ create_check_block_twin (rtx insn, bool mutate_p)
in the recovery block). */
if (rec != EXIT_BLOCK_PTR)
{
rtx link;
for (link = RESOLVED_DEPS (insn); link; link = XEXP (link, 1))
if (DEP_STATUS (link) & DEP_OUTPUT)
FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
if ((DEP_LINK_STATUS (link) & DEP_OUTPUT) != 0)
{
RESOLVED_DEPS (check) =
alloc_DEPS_LIST (XEXP (link, 0), RESOLVED_DEPS (check), DEP_TRUE);
PUT_REG_NOTE_KIND (RESOLVED_DEPS (check), REG_DEP_TRUE);
struct _dep _dep, *dep = &_dep;
init_dep (dep, DEP_LINK_PRO (link), check, REG_DEP_TRUE);
add_back_dep_to_deps_list (INSN_RESOLVED_BACK_DEPS (check), dep);
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
@ -3549,7 +3608,9 @@ create_check_block_twin (rtx insn, bool mutate_p)
(TRUE | OUTPUT). */
}
RESOLVED_DEPS (twin) = copy_DEPS_LIST_list (RESOLVED_DEPS (insn));
copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
INSN_RESOLVED_BACK_DEPS (insn),
twin);
if (rec != EXIT_BLOCK_PTR)
/* In case of branchy check, fix CFG. */
@ -3612,8 +3673,10 @@ create_check_block_twin (rtx insn, bool mutate_p)
/* Move backward dependences from INSN to CHECK and
move forward dependences from INSN to TWIN. */
for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
{
rtx pro = DEP_LINK_PRO (link);
enum reg_note dk = DEP_LINK_KIND (link);
ds_t ds;
/* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
@ -3631,7 +3694,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
twin ~~TRUE~~> producer
twin --ANTI--> check */
ds = DEP_STATUS (link);
ds = DEP_LINK_STATUS (link);
if (ds & BEGIN_SPEC)
{
@ -3641,24 +3704,27 @@ create_check_block_twin (rtx insn, bool mutate_p)
if (rec != EXIT_BLOCK_PTR)
{
add_back_forw_dep (check, XEXP (link, 0), REG_NOTE_KIND (link), ds);
add_back_forw_dep (twin, XEXP (link, 0), REG_NOTE_KIND (link), ds);
add_back_forw_dep (check, pro, dk, ds);
add_back_forw_dep (twin, pro, dk, ds);
}
else
add_back_forw_dep (check, XEXP (link, 0), REG_NOTE_KIND (link), ds);
add_back_forw_dep (check, pro, dk, ds);
}
for (link = LOG_LINKS (insn); link;)
if ((DEP_STATUS (link) & BEGIN_SPEC)
for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
if ((DEP_LINK_STATUS (link) & BEGIN_SPEC)
|| mutate_p)
/* We can delete this dep only if we totally overcome it with
BEGIN_SPECULATION. */
{
delete_back_forw_dep (insn, XEXP (link, 0));
link = LOG_LINKS (insn);
delete_back_forw_dep (link);
/* Restart search. */
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
}
else
link = XEXP (link, 1);
/* Continue search. */
link = DEP_LINK_NEXT (link);
fs = 0;
@ -3683,7 +3749,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
CHECK_SPEC (check) = CHECK_SPEC (insn);
/* Future speculations: call the helper. */
process_insn_depend_be_in_spec (INSN_DEPEND (insn), twin, fs);
process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, fs);
if (rec != EXIT_BLOCK_PTR)
{
@ -3698,12 +3764,19 @@ create_check_block_twin (rtx insn, bool mutate_p)
}
else
{
dep_link_t link;
if (spec_info->dump)
fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
(*current_sched_info->print_insn) (insn, 0));
for (link = INSN_DEPEND (insn); link; link = INSN_DEPEND (insn))
delete_back_forw_dep (XEXP (link, 0), insn);
/* Remove all forward dependencies of the INSN. */
link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
while (link != NULL)
{
delete_back_forw_dep (link);
link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
}
if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
try_ready (check);
@ -3731,8 +3804,10 @@ create_check_block_twin (rtx insn, bool mutate_p)
static void
fix_recovery_deps (basic_block rec)
{
rtx note, insn, link, jump, ready_list = 0;
dep_link_t link;
rtx note, insn, jump, ready_list = 0;
bitmap_head in_ready;
rtx link1;
bitmap_initialize (&in_ready, 0);
@ -3745,29 +3820,31 @@ fix_recovery_deps (basic_block rec)
do
{
for (link = INSN_DEPEND (insn); link;)
for (link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn)); link != NULL;)
{
rtx consumer;
consumer = XEXP (link, 0);
consumer = DEP_LINK_CON (link);
if (BLOCK_FOR_INSN (consumer) != rec)
{
delete_back_forw_dep (consumer, insn);
delete_back_forw_dep (link);
if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
{
ready_list = alloc_INSN_LIST (consumer, ready_list);
bitmap_set_bit (&in_ready, INSN_LUID (consumer));
}
link = INSN_DEPEND (insn);
/* Restart search. */
link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
}
else
{
gcc_assert ((DEP_STATUS (link) & DEP_TYPES) == DEP_TRUE);
gcc_assert ((DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
link = XEXP (link, 1);
/* Continue search. */
link = DEP_LINK_NEXT (link);
}
}
@ -3778,8 +3855,8 @@ fix_recovery_deps (basic_block rec)
bitmap_clear (&in_ready);
/* Try to add instructions to the ready or queue list. */
for (link = ready_list; link; link = XEXP (link, 1))
try_ready (XEXP (link, 0));
for (link1 = ready_list; link1; link1 = XEXP (link1, 1))
try_ready (XEXP (link1, 0));
free_INSN_LIST_list (&ready_list);
/* Fixing jump's dependences. */
@ -4209,13 +4286,12 @@ sched_remove_insn (rtx insn)
static void
clear_priorities (rtx insn)
{
rtx link;
dep_link_t link;
for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
{
rtx pro;
rtx pro = DEP_LINK_PRO (link);
pro = XEXP (link, 0);
if (INSN_PRIORITY_KNOWN (pro))
{
INSN_PRIORITY_KNOWN (pro) = 0;
@ -4229,13 +4305,12 @@ clear_priorities (rtx insn)
static void
calc_priorities (rtx insn)
{
rtx link;
dep_link_t link;
for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
{
rtx pro;
rtx pro = DEP_LINK_PRO (link);
pro = XEXP (link, 0);
if (!INSN_PRIORITY_KNOWN (pro))
{
priority (pro);
@ -4256,11 +4331,12 @@ add_jump_dependencies (rtx insn, rtx jump)
if (insn == jump)
break;
if (!INSN_DEPEND (insn))
if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
add_back_forw_dep (jump, insn, REG_DEP_ANTI, DEP_ANTI);
}
while (1);
gcc_assert (LOG_LINKS (jump));
gcc_assert (!deps_list_empty_p (INSN_BACK_DEPS (jump)));
}
/* Return the NOTE_INSN_BASIC_BLOCK of BB. */


@ -28,7 +28,6 @@ Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
#include "ggc.h"
static void free_list (rtx *, rtx *);
static void free_DEPS_LIST_node (rtx);
/* Functions for maintaining cache-able lists of EXPR_LIST and INSN_LISTs. */
@ -38,10 +37,6 @@ static GTY ((deletable)) rtx unused_insn_list;
/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused. */
static GTY ((deletable)) rtx unused_expr_list;
/* An DEPS_LIST containing all DEPS_LISTs allocated but currently unused. */
static GTY ((deletable)) rtx unused_deps_list;
/* This function will free an entire list of either EXPR_LIST, INSN_LIST
or DEPS_LIST nodes. This is to be used only on lists that consist
exclusively of nodes of one type only. This is only called by
@ -54,17 +49,13 @@ free_list (rtx *listp, rtx *unused_listp)
prev_link = *listp;
link = XEXP (prev_link, 1);
gcc_assert ((unused_listp != &unused_insn_list
|| GET_CODE (prev_link) == INSN_LIST)
&& (unused_listp != &unused_deps_list
|| GET_CODE (prev_link) == DEPS_LIST));
gcc_assert (unused_listp != &unused_insn_list
|| GET_CODE (prev_link) == INSN_LIST);
while (link)
{
gcc_assert ((unused_listp != &unused_insn_list
|| GET_CODE (prev_link) == INSN_LIST)
&& (unused_listp != &unused_deps_list
|| GET_CODE (prev_link) == DEPS_LIST));
gcc_assert (unused_listp != &unused_insn_list
|| GET_CODE (prev_link) == INSN_LIST);
prev_link = link;
link = XEXP (link, 1);
@ -155,31 +146,6 @@ alloc_EXPR_LIST (int kind, rtx val, rtx next)
return r;
}
/* This call is used in place of a gen_rtx_DEPS_LIST. If there is a cached
node available, we'll use it, otherwise a call to gen_rtx_DEPS_LIST
is made. */
rtx
alloc_DEPS_LIST (rtx val, rtx next, int ds)
{
rtx r;
if (unused_deps_list)
{
r = unused_deps_list;
unused_deps_list = XEXP (r, 1);
XEXP (r, 0) = val;
XEXP (r, 1) = next;
XINT (r, 2) = ds;
PUT_REG_NOTE_KIND (r, VOIDmode);
gcc_assert (GET_CODE (r) == DEPS_LIST);
}
else
r = gen_rtx_DEPS_LIST (VOIDmode, val, next, ds);
return r;
}
/* This function will free up an entire list of EXPR_LIST nodes. */
void
free_EXPR_LIST_list (rtx *listp)
@ -198,15 +164,6 @@ free_INSN_LIST_list (rtx *listp)
free_list (listp, &unused_insn_list);
}
/* This function will free up an entire list of DEPS_LIST nodes. */
void
free_DEPS_LIST_list (rtx *listp)
{
if (*listp == 0)
return;
free_list (listp, &unused_deps_list);
}
/* This function will free up an individual EXPR_LIST node. */
void
free_EXPR_LIST_node (rtx ptr)
@ -224,23 +181,6 @@ free_INSN_LIST_node (rtx ptr)
unused_insn_list = ptr;
}
/* This function will free up an individual DEPS_LIST node. */
static void
free_DEPS_LIST_node (rtx ptr)
{
gcc_assert (GET_CODE (ptr) == DEPS_LIST);
XEXP (ptr, 1) = unused_deps_list;
unused_deps_list = ptr;
}
/* Remove and free corresponding to ELEM node in the DEPS_LIST pointed to
by LISTP. */
void
remove_free_DEPS_LIST_elem (rtx elem, rtx *listp)
{
free_DEPS_LIST_node (remove_list_elem (elem, listp));
}
/* Remove and free corresponding to ELEM node in the INSN_LIST pointed to
by LISTP. */
void
@ -249,20 +189,4 @@ remove_free_INSN_LIST_elem (rtx elem, rtx *listp)
free_INSN_LIST_node (remove_list_elem (elem, listp));
}
/* Create and return a copy of the DEPS_LIST LIST. */
rtx
copy_DEPS_LIST_list (rtx list)
{
rtx res = NULL_RTX, *resp = &res;
while (list)
{
*resp = alloc_DEPS_LIST (XEXP (list, 0), 0, XINT (list, 2));
PUT_REG_NOTE_KIND (*resp, REG_NOTE_KIND (list));
resp = &XEXP (*resp, 1);
list = XEXP (list, 1);
}
return res;
}
#include "gt-lists.h"


@ -26,10 +26,10 @@ Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
/* Shorthand. */
#define REG_NOTE(NAME) DEF_REG_NOTE (REG_##NAME)
/* REG_DEP_TRUE is used in LOG_LINKS to represent a read-after-write
dependency (i.e. a true data dependency). This is here, not
grouped with REG_DEP_ANTI and REG_DEP_OUTPUT, because some passes
use a literal 0 for it. */
/* REG_DEP_TRUE is used in scheduler dependencies lists to represent a
read-after-write dependency (i.e. a true data dependency). This is
here, not grouped with REG_DEP_ANTI and REG_DEP_OUTPUT, because some
passes use a literal 0 for it. */
REG_NOTE (DEP_TRUE)
/* The value in REG dies in this insn (i.e., it is not needed past
@ -97,8 +97,9 @@ REG_NOTE (CC_USER)
This note is an INSN_LIST. */
REG_NOTE (LABEL)
/* REG_DEP_ANTI and REG_DEP_OUTPUT are used in LOG_LINKS to represent
write-after-read and write-after-write dependencies respectively. */
/* REG_DEP_OUTPUT and REG_DEP_ANTI are used in scheduler dependencies lists
to represent write-after-write and write-after-read dependencies
respectively. */
REG_NOTE (DEP_OUTPUT)
REG_NOTE (DEP_ANTI)


@ -93,11 +93,6 @@ DEF_RTL_EXPR(EXPR_LIST, "expr_list", "ee", RTX_EXTRA)
The insns are represented in print by their uids. */
DEF_RTL_EXPR(INSN_LIST, "insn_list", "ue", RTX_EXTRA)
/* a linked list of dependencies.
The insns are represented in print by their uids.
Operand 2 is the status of a dependence (see sched-int.h for more). */
DEF_RTL_EXPR(DEPS_LIST, "deps_list", "uei", RTX_EXTRA)
/* SEQUENCE appears in the result of a `gen_...' function
for a DEFINE_EXPAND that wants to make several insns.
Its elements are the bodies of the insns that should be made.


@ -1751,12 +1751,8 @@ void free_EXPR_LIST_node (rtx);
void free_INSN_LIST_node (rtx);
rtx alloc_INSN_LIST (rtx, rtx);
rtx alloc_EXPR_LIST (int, rtx, rtx);
void free_DEPS_LIST_list (rtx *);
rtx alloc_DEPS_LIST (rtx, rtx, int);
void remove_free_DEPS_LIST_elem (rtx, rtx *);
void remove_free_INSN_LIST_elem (rtx, rtx *);
rtx remove_list_elem (rtx, rtx *);
rtx copy_DEPS_LIST_list (rtx);
/* regclass.c */


@ -44,6 +44,365 @@ Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
#include "cselib.h"
#include "df.h"
#ifdef ENABLE_CHECKING
#define CHECK (true)
#else
#define CHECK (false)
#endif
/* Return the major type present in the DS. */
enum reg_note
ds_to_dk (ds_t ds)
{
if (ds & DEP_TRUE)
return REG_DEP_TRUE;
if (ds & DEP_OUTPUT)
return REG_DEP_OUTPUT;
gcc_assert (ds & DEP_ANTI);
return REG_DEP_ANTI;
}
/* Return equivalent dep_status. */
ds_t
dk_to_ds (enum reg_note dk)
{
switch (dk)
{
case REG_DEP_TRUE:
return DEP_TRUE;
case REG_DEP_OUTPUT:
return DEP_OUTPUT;
default:
gcc_assert (dk == REG_DEP_ANTI);
return DEP_ANTI;
}
}
/* Functions to operate with dependence information container - dep_t. */
/* Init DEP with the arguments. */
static void
init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note kind, ds_t ds)
{
DEP_PRO (dep) = pro;
DEP_CON (dep) = con;
DEP_KIND (dep) = kind;
DEP_STATUS (dep) = ds;
}
/* Init DEP with the arguments.
While most of the scheduler (including targets) only needs the major type
of the dependency, it is convenient to hide the full dep_status from them. */
void
init_dep (dep_t dep, rtx pro, rtx con, enum reg_note kind)
{
ds_t ds;
if ((current_sched_info->flags & USE_DEPS_LIST) != 0)
ds = dk_to_ds (kind);
else
ds = -1;
init_dep_1 (dep, pro, con, kind, ds);
}
/* Make a copy of FROM in TO. */
static void
copy_dep (dep_t to, dep_t from)
{
memcpy (to, from, sizeof (*to));
}
/* Functions to operate with a single link from the dependencies lists -
dep_link_t. */
/* Return true if dep_link L is consistent. */
static bool
dep_link_consistent_p (dep_link_t l)
{
dep_link_t next = DEP_LINK_NEXT (l);
return (next == NULL
|| &DEP_LINK_NEXT (l) == DEP_LINK_PREV_NEXTP (next));
}
/* Attach L to appear after link X whose &DEP_LINK_NEXT (X) is given by
PREV_NEXT_P. */
static void
attach_dep_link (dep_link_t l, dep_link_t *prev_nextp)
{
dep_link_t next = *prev_nextp;
gcc_assert (DEP_LINK_PREV_NEXTP (l) == NULL
&& DEP_LINK_NEXT (l) == NULL);
/* Init node being inserted. */
DEP_LINK_PREV_NEXTP (l) = prev_nextp;
DEP_LINK_NEXT (l) = next;
/* Fix next node. */
if (next != NULL)
{
gcc_assert (DEP_LINK_PREV_NEXTP (next) == prev_nextp);
DEP_LINK_PREV_NEXTP (next) = &DEP_LINK_NEXT (l);
}
/* Fix prev node. */
*prev_nextp = l;
}
/* Add dep_link LINK to deps_list L. */
static void
add_to_deps_list (dep_link_t link, deps_list_t l)
{
attach_dep_link (link, &DEPS_LIST_FIRST (l));
}
/* Detach dep_link L from the list. */
static void
detach_dep_link (dep_link_t l)
{
dep_link_t *prev_nextp = DEP_LINK_PREV_NEXTP (l);
dep_link_t next = DEP_LINK_NEXT (l);
*prev_nextp = next;
if (next != NULL)
DEP_LINK_PREV_NEXTP (next) = prev_nextp;
/* Though this property is not used anywhere except in the assert in
attach_dep_link (), this can prevent latent errors. */
DEP_LINK_PREV_NEXTP (l) = NULL;
DEP_LINK_NEXT (l) = NULL;
}
/* Move LINK from whatever list it is now to L. */
void
move_dep_link (dep_link_t link, deps_list_t l)
{
detach_dep_link (link);
add_to_deps_list (link, l);
}
/* Check L's and its successors' consistency.
This is, potentially, an expensive check, hence it should be guarded by
ENABLE_CHECKING at all times. */
static bool
dep_links_consistent_p (dep_link_t l)
{
while (l != NULL)
{
if (dep_link_consistent_p (l))
l = DEP_LINK_NEXT (l);
else
return false;
}
return true;
}
/* Dump dep_nodes starting from L. */
static void
dump_dep_links (FILE *dump, dep_link_t l)
{
while (l != NULL)
{
dep_t d = DEP_LINK_DEP (l);
fprintf (dump, "%d%c>%d ", INSN_UID (DEP_PRO (d)),
dep_link_consistent_p (l) ? '-' : '!', INSN_UID (DEP_CON (d)));
l = DEP_LINK_NEXT (l);
}
fprintf (dump, "\n");
}
/* Dump dep_nodes starting from L to stderr. */
void
debug_dep_links (dep_link_t l)
{
dump_dep_links (stderr, l);
}
/* Obstack to allocate dep_nodes and deps_lists on. */
static struct obstack deps_obstack;
/* Obstack to hold forward dependencies lists (deps_list_t). */
static struct obstack *dl_obstack = &deps_obstack;
/* Obstack to hold all dependency nodes (dep_node_t). */
static struct obstack *dn_obstack = &deps_obstack;
/* Functions to operate with dependences lists - deps_list_t. */
/* Allocate deps_list.
If ON_OBSTACK_P is true, allocate the list on the obstack. This is done for
INSN_FORW_DEPS lists because they should live till the end of scheduling.
INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
store and are being freed in haifa-sched.c: schedule_insn (). */
static deps_list_t
alloc_deps_list (bool on_obstack_p)
{
if (on_obstack_p)
return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
else
return xmalloc (sizeof (struct _deps_list));
}
/* Initialize deps_list L. */
static void
init_deps_list (deps_list_t l)
{
DEPS_LIST_FIRST (l) = NULL;
}
/* Create (allocate and init) deps_list.
The meaning of ON_OBSTACK_P is the same as in alloc_deps_list (). */
deps_list_t
create_deps_list (bool on_obstack_p)
{
deps_list_t l = alloc_deps_list (on_obstack_p);
init_deps_list (l);
return l;
}
/* Free the dep_nodes present in L. */
static void
clear_deps_list (deps_list_t l)
{
/* All dep_nodes are allocated on the dn_obstack. They'll be freed with
the obstack. */
DEPS_LIST_FIRST (l) = NULL;
}
/* Free deps_list L. */
void
free_deps_list (deps_list_t l)
{
gcc_assert (deps_list_empty_p (l));
free (l);
}
/* Delete (clear and free) deps_list L. */
void
delete_deps_list (deps_list_t l)
{
clear_deps_list (l);
free_deps_list (l);
}
/* Return true if L is empty. */
bool
deps_list_empty_p (deps_list_t l)
{
return DEPS_LIST_FIRST (l) == NULL;
}
/* Check L's consistency.
This is, potentially, an expensive check, hence it should be guarded by
ENABLE_CHECKING at all times. */
static bool
deps_list_consistent_p (deps_list_t l)
{
dep_link_t first = DEPS_LIST_FIRST (l);
return (first == NULL
|| (&DEPS_LIST_FIRST (l) == DEP_LINK_PREV_NEXTP (first)
&& dep_links_consistent_p (first)));
}
/* Dump L to F. */
static void
dump_deps_list (FILE *f, deps_list_t l)
{
dump_dep_links (f, DEPS_LIST_FIRST (l));
}
/* Dump L to STDERR. */
void
debug_deps_list (deps_list_t l)
{
dump_deps_list (stderr, l);
}
/* Add a dependency described by DEP to the list L.
L should be either INSN_BACK_DEPS or INSN_RESOLVED_BACK_DEPS. */
void
add_back_dep_to_deps_list (deps_list_t l, dep_t dep_from)
{
dep_node_t n = (dep_node_t) obstack_alloc (dn_obstack,
sizeof (*n));
dep_t dep_to = DEP_NODE_DEP (n);
dep_link_t back = DEP_NODE_BACK (n);
dep_link_t forw = DEP_NODE_FORW (n);
copy_dep (dep_to, dep_from);
DEP_LINK_NODE (back) = n;
DEP_LINK_NODE (forw) = n;
/* There is no particular need to initialize these four fields except to make
the assert in attach_dep_link () happy. */
DEP_LINK_NEXT (back) = NULL;
DEP_LINK_PREV_NEXTP (back) = NULL;
DEP_LINK_NEXT (forw) = NULL;
DEP_LINK_PREV_NEXTP (forw) = NULL;
add_to_deps_list (back, l);
}
/* Find the dep_link with producer PRO in deps_list L. */
dep_link_t
find_link_by_pro_in_deps_list (deps_list_t l, rtx pro)
{
dep_link_t link;
FOR_EACH_DEP_LINK (link, l)
if (DEP_LINK_PRO (link) == pro)
return link;
return NULL;
}
/* Find the dep_link with consumer CON in deps_list L. */
dep_link_t
find_link_by_con_in_deps_list (deps_list_t l, rtx con)
{
dep_link_t link;
FOR_EACH_DEP_LINK (link, l)
if (DEP_LINK_CON (link) == con)
return link;
return NULL;
}
/* Make a copy of FROM in TO, substituting the consumer with CON.
TO and FROM should be RESOLVED_BACK_DEPS lists. */
void
copy_deps_list_change_con (deps_list_t to, deps_list_t from, rtx con)
{
dep_link_t l;
gcc_assert (deps_list_empty_p (to));
FOR_EACH_DEP_LINK (l, from)
{
add_back_dep_to_deps_list (to, DEP_LINK_DEP (l));
DEP_LINK_CON (DEPS_LIST_FIRST (to)) = con;
}
}
static regset reg_pending_sets;
static regset reg_pending_clobbers;
@@ -103,14 +462,14 @@ static rtx sched_get_condition (rtx);
static int conditions_mutex_p (rtx, rtx);
static enum DEPS_ADJUST_RESULT maybe_add_or_update_back_dep_1 (rtx, rtx,
enum reg_note, ds_t, rtx, rtx, rtx **);
enum reg_note, ds_t, rtx, rtx, dep_link_t **);
static enum DEPS_ADJUST_RESULT add_or_update_back_dep_1 (rtx, rtx,
enum reg_note, ds_t, rtx, rtx, rtx **);
enum reg_note, ds_t, rtx, rtx, dep_link_t **);
static void add_back_dep (rtx, rtx, enum reg_note, ds_t);
static void adjust_add_sorted_back_dep (rtx, rtx, rtx *);
static void adjust_back_add_forw_dep (rtx, rtx *);
static void delete_forw_dep (rtx, rtx);
static void adjust_add_sorted_back_dep (rtx, dep_link_t, dep_link_t *);
static void adjust_back_add_forw_dep (rtx, dep_link_t *);
static void delete_forw_dep (dep_link_t);
static dw_t estimate_dep_weak (rtx, rtx);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
@@ -225,13 +584,13 @@ sched_insns_conditions_mutex_p (rtx insn1, rtx insn2)
return false;
}
/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
LOG_LINKS of INSN, if it is not already there. DEP_TYPE indicates the
/* Add ELEM wrapped in a dep_link with reg note kind DEP_TYPE to the
INSN_BACK_DEPS (INSN), if it is not already there. DEP_TYPE indicates the
type of dependence that this link represents. DS, if nonzero,
indicates speculations, through which this dependence can be overcome.
MEM1 and MEM2, if non-null, corresponds to memory locations in case of
data speculation. The function returns a value indicating if an old entry
has been changed or a new entry has been added to insn's LOG_LINK.
has been changed or a new entry has been added to insn's backward deps.
In case of changed entry CHANGED_LINKPP sets to its address.
See also the definition of enum DEPS_ADJUST_RESULT in sched-int.h.
Actual manipulation of dependence data structures is performed in
@@ -240,7 +599,7 @@ sched_insns_conditions_mutex_p (rtx insn1, rtx insn2)
static enum DEPS_ADJUST_RESULT
maybe_add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
ds_t ds, rtx mem1, rtx mem2,
rtx **changed_linkpp)
dep_link_t **changed_linkpp)
{
gcc_assert (INSN_P (insn) && INSN_P (elem));
@@ -267,7 +626,7 @@ static enum DEPS_ADJUST_RESULT
add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
ds_t ds ATTRIBUTE_UNUSED,
rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED,
rtx **changed_linkpp ATTRIBUTE_UNUSED)
dep_link_t **changed_linkpp ATTRIBUTE_UNUSED)
{
bool maybe_present_p = true, present_p = false;
@@ -359,15 +718,17 @@ add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
/* Check that we don't already have this dependence. */
if (maybe_present_p)
{
rtx *linkp;
dep_link_t *linkp;
for (linkp = &LOG_LINKS (insn); *linkp; linkp = &XEXP (*linkp, 1))
for (linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
*linkp != NULL;
linkp = &DEP_LINK_NEXT (*linkp))
{
rtx link = *linkp;
dep_t link = DEP_LINK_DEP (*linkp);
gcc_assert (true_dependency_cache == 0 || present_p);
if (XEXP (link, 0) == elem)
if (DEP_PRO (link) == elem)
{
enum DEPS_ADJUST_RESULT changed_p = DEP_PRESENT;
@@ -412,7 +773,7 @@ add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
if (true_dependency_cache != NULL
&& !(current_sched_info->flags & USE_DEPS_LIST))
{
enum reg_note kind = REG_NOTE_KIND (link);
enum reg_note kind = DEP_KIND (link);
switch (kind)
{
@@ -440,9 +801,9 @@ add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
/* If this is a more restrictive type of dependence than the
existing one, then change the existing dependence to this
type. */
if ((int) dep_type < (int) REG_NOTE_KIND (link))
if ((int) dep_type < (int) DEP_KIND (link))
{
PUT_REG_NOTE_KIND (link, dep_type);
DEP_KIND (link) = dep_type;
changed_p = DEP_CHANGED;
}
@@ -453,13 +814,13 @@ add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
{
if (!(current_sched_info->flags & USE_DEPS_LIST))
{
if (REG_NOTE_KIND (link) == REG_DEP_TRUE)
if (DEP_KIND (link) == REG_DEP_TRUE)
bitmap_set_bit (&true_dependency_cache
[INSN_LUID (insn)], INSN_LUID (elem));
else if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
else if (DEP_KIND (link) == REG_DEP_OUTPUT)
bitmap_set_bit (&output_dependency_cache
[INSN_LUID (insn)], INSN_LUID (elem));
else if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
else if (DEP_KIND (link) == REG_DEP_ANTI)
bitmap_set_bit (&anti_dependency_cache
[INSN_LUID (insn)], INSN_LUID (elem));
}
@@ -511,16 +872,17 @@ add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
static void
add_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
{
struct _dep _dep, *dep = &_dep;
gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
if (current_sched_info->flags & USE_DEPS_LIST)
LOG_LINKS (insn) = alloc_DEPS_LIST (elem, LOG_LINKS (insn), ds);
init_dep_1 (dep, elem, insn, dep_type, ds);
else
LOG_LINKS (insn) = alloc_INSN_LIST (elem, LOG_LINKS (insn));
/* Insn dependency, not data dependency. */
PUT_REG_NOTE_KIND (LOG_LINKS (insn), dep_type);
init_dep_1 (dep, elem, insn, dep_type, -1);
add_back_dep_to_deps_list (INSN_BACK_DEPS (insn), dep);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
check_dep_status (dep_type, ds, false);
@@ -612,12 +974,7 @@ delete_all_dependences (rtx insn)
}
#endif
if (!(current_sched_info->flags & USE_DEPS_LIST))
/* In this case LOG_LINKS are formed from the DEPS_LISTs,
not the INSN_LISTs. */
free_INSN_LIST_list (&LOG_LINKS (insn));
else
free_DEPS_LIST_list (&LOG_LINKS (insn));
clear_deps_list (INSN_BACK_DEPS (insn));
}
/* All insns in a scheduling group except the first should only have
@@ -629,20 +986,25 @@ delete_all_dependences (rtx insn)
static void
fixup_sched_groups (rtx insn)
{
rtx link, prev_nonnote;
dep_link_t link;
rtx prev_nonnote;
for (link = LOG_LINKS (insn); link ; link = XEXP (link, 1))
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
{
rtx i = insn;
dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
do
{
i = prev_nonnote_insn (i);
if (XEXP (link, 0) == i)
if (pro == i)
goto next_link;
} while (SCHED_GROUP_P (i));
if (! sched_insns_conditions_mutex_p (i, XEXP (link, 0)))
add_dependence (i, XEXP (link, 0), REG_NOTE_KIND (link));
if (! sched_insns_conditions_mutex_p (i, pro))
add_dependence (i, pro, DEP_KIND (dep));
next_link:;
}
@@ -1450,8 +1812,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
fixup_sched_groups (insn);
}
/* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
for every dependency. */
/* Analyze every insn between HEAD and TAIL inclusive, creating backward
dependencies for each insn. */
void
sched_analyze (struct deps *deps, rtx head, rtx tail)
@@ -1474,11 +1836,22 @@ sched_analyze (struct deps *deps, rtx head, rtx tail)
{
rtx link, end_seq, r0, set;
if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
if (INSN_P (insn))
{
/* Clear out the stale LOG_LINKS from flow. */
free_INSN_LIST_list (&LOG_LINKS (insn));
/* These two lists will be freed in schedule_insn (). */
INSN_BACK_DEPS (insn) = create_deps_list (false);
INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
/* This one should be allocated on the obstack because it should live
till the scheduling ends. */
INSN_FORW_DEPS (insn) = create_deps_list (true);
}
if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
{
/* Make each JUMP_INSN a scheduling barrier for memory
references. */
if (JUMP_P (insn))
@@ -1498,9 +1871,6 @@ sched_analyze (struct deps *deps, rtx head, rtx tail)
CANT_MOVE (insn) = 1;
/* Clear out the stale LOG_LINKS from flow. */
free_INSN_LIST_list (&LOG_LINKS (insn));
if (find_reg_note (insn, REG_SETJMP, NULL))
{
/* This is setjmp. Assume that all registers, not just
@@ -1625,11 +1995,11 @@ sched_analyze (struct deps *deps, rtx head, rtx tail)
given DEP_TYPE. The forward dependence should not exist before. */
void
add_forw_dep (rtx to, rtx link)
add_forw_dep (dep_link_t link)
{
rtx new_link, from;
from = XEXP (link, 0);
dep_t dep = DEP_LINK_DEP (link);
rtx to = DEP_CON (dep);
rtx from = DEP_PRO (dep);
#ifdef ENABLE_CHECKING
/* If add_dependence is working properly there should never
@@ -1647,24 +2017,20 @@ add_forw_dep (rtx to, rtx link)
bitmap_set_bit (&forward_dependency_cache[INSN_LUID (from)],
INSN_LUID (to));
}
else
gcc_assert (!find_insn_list (to, INSN_DEPEND (from)));
gcc_assert (find_link_by_con_in_deps_list (INSN_FORW_DEPS (from), to)
== NULL);
#endif
if (!(current_sched_info->flags & USE_DEPS_LIST))
new_link = alloc_INSN_LIST (to, INSN_DEPEND (from));
else
new_link = alloc_DEPS_LIST (to, INSN_DEPEND (from), DEP_STATUS (link));
add_to_deps_list (DEP_NODE_FORW (DEP_LINK_NODE (link)),
INSN_FORW_DEPS (from));
PUT_REG_NOTE_KIND (new_link, REG_NOTE_KIND (link));
INSN_DEPEND (from) = new_link;
INSN_DEP_COUNT (to) += 1;
}
/* Examine insns in the range [ HEAD, TAIL ] and Use the backward
dependences from LOG_LINKS to build forward dependences in
INSN_DEPEND. */
dependences from INSN_BACK_DEPS list to build forward dependences in
INSN_FORW_DEPS. */
void
compute_forward_dependences (rtx head, rtx tail)
@@ -1675,26 +2041,41 @@ compute_forward_dependences (rtx head, rtx tail)
next_tail = NEXT_INSN (tail);
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
{
rtx link;
dep_link_t link;
if (! INSN_P (insn))
continue;
if (current_sched_info->flags & DO_SPECULATION)
{
rtx new = 0, link, next;
/* We will add links, preserving order, from INSN_BACK_DEPS to
NEW. */
dep_link_t new = NULL;
for (link = LOG_LINKS (insn); link; link = next)
link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
while (link != NULL)
{
next = XEXP (link, 1);
dep_link_t next = DEP_LINK_NEXT (link);
detach_dep_link (link);
adjust_add_sorted_back_dep (insn, link, &new);
link = next;
}
LOG_LINKS (insn) = new;
/* Attach NEW as the list of backward dependencies. */
if (new != NULL)
{
DEP_LINK_PREV_NEXTP (new)
= &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)) = new;
}
}
for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
add_forw_dep (insn, link);
FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
add_forw_dep (link);
}
}
@@ -1775,6 +2156,16 @@ init_dependency_caches (int luid)
cache_size = 0;
extend_dependency_caches (luid, true);
}
/* The lifetime of this obstack is the whole function scheduling (not single
region scheduling) because some dependencies can be manually generated for
outside regions. See dont_calc_deps in sched-{rgn,ebb}.c.
Possible solution would be to have two obstacks:
* the big one for regular dependencies with region scheduling lifetime,
* and the small one for manually generated dependencies with function
scheduling lifetime. */
gcc_obstack_init (&deps_obstack);
}
/* Create or extend (depending on CREATE_P) dependency caches to
@@ -1820,6 +2211,8 @@ extend_dependency_caches (int n, bool create_p)
void
free_dependency_caches (void)
{
obstack_free (&deps_obstack, NULL);
if (true_dependency_cache)
{
int i;
@@ -1878,63 +2271,67 @@ finish_deps_global (void)
/* Insert LINK into the dependence chain pointed to by LINKP and
maintain the sort order. */
static void
adjust_add_sorted_back_dep (rtx insn, rtx link, rtx *linkp)
adjust_add_sorted_back_dep (rtx insn, dep_link_t link, dep_link_t *linkp)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION);
/* If the insn cannot move speculatively, but the link is speculative,
make it hard dependence. */
if (HAS_INTERNAL_DEP (insn)
&& (DEP_STATUS (link) & SPECULATIVE))
&& (DEP_LINK_STATUS (link) & SPECULATIVE))
{
DEP_STATUS (link) &= ~SPECULATIVE;
DEP_LINK_STATUS (link) &= ~SPECULATIVE;
if (true_dependency_cache)
bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
INSN_LUID (XEXP (link, 0)));
INSN_LUID (DEP_LINK_PRO (link)));
}
/* Non-speculative links go at the head of LOG_LINKS, followed by
/* Non-speculative links go at the head of deps_list, followed by
speculative links. */
if (DEP_STATUS (link) & SPECULATIVE)
while (*linkp && !(DEP_STATUS (*linkp) & SPECULATIVE))
linkp = &XEXP (*linkp, 1);
if (DEP_LINK_STATUS (link) & SPECULATIVE)
while (*linkp && !(DEP_LINK_STATUS (*linkp) & SPECULATIVE))
linkp = &DEP_LINK_NEXT (*linkp);
XEXP (link, 1) = *linkp;
*linkp = link;
attach_dep_link (link, linkp);
if (CHECK)
gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
}
/* Move the dependence pointed to by LINKP to the back dependencies
of INSN, and also add this dependence to the forward ones. All LOG_LINKS,
of INSN, and also add this dependence to the forward ones. All dep_links,
except one pointed to by LINKP, must be sorted. */
static void
adjust_back_add_forw_dep (rtx insn, rtx *linkp)
adjust_back_add_forw_dep (rtx insn, dep_link_t *linkp)
{
rtx link;
dep_link_t link;
gcc_assert (current_sched_info->flags & DO_SPECULATION);
link = *linkp;
*linkp = XEXP (*linkp, 1);
detach_dep_link (link);
adjust_add_sorted_back_dep (insn, link, &LOG_LINKS (insn));
add_forw_dep (insn, link);
adjust_add_sorted_back_dep (insn, link,
&DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
add_forw_dep (link);
}
/* Remove forward dependence ELEM from the DEPS_LIST of INSN. */
/* Remove forward dependence described by L. */
static void
delete_forw_dep (rtx insn, rtx elem)
delete_forw_dep (dep_link_t l)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION);
#ifdef ENABLE_CHECKING
if (true_dependency_cache)
bitmap_clear_bit (&forward_dependency_cache[INSN_LUID (elem)],
INSN_LUID (insn));
bitmap_clear_bit (&forward_dependency_cache[INSN_LUID (DEP_LINK_PRO (l))],
INSN_LUID (DEP_LINK_CON (l)));
#endif
remove_free_DEPS_LIST_elem (insn, &INSN_DEPEND (elem));
INSN_DEP_COUNT (insn)--;
detach_dep_link (l);
INSN_DEP_COUNT (DEP_LINK_CON (l))--;
}
/* Estimate the weakness of dependence between MEM1 and MEM2. */
@@ -2001,16 +2398,16 @@ add_or_update_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type,
ds_t ds)
{
enum DEPS_ADJUST_RESULT res;
rtx *linkp;
dep_link_t *linkp;
res = add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, &linkp);
if (res == DEP_CHANGED || res == DEP_CREATED)
{
if (res == DEP_CHANGED)
delete_forw_dep (insn, elem);
delete_forw_dep (DEP_NODE_FORW (DEP_LINK_NODE (*linkp)));
else if (res == DEP_CREATED)
linkp = &LOG_LINKS (insn);
linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
adjust_back_add_forw_dep (insn, linkp);
}
@@ -2021,30 +2418,35 @@ add_or_update_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type,
void
add_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
{
add_back_dep (insn, elem, dep_type, ds);
adjust_back_add_forw_dep (insn, &LOG_LINKS (insn));
add_back_dep (insn, elem, dep_type, ds);
adjust_back_add_forw_dep (insn, &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
if (CHECK)
gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
}
/* Remove both backward and forward dependencies between INSN and ELEM. */
/* Remove the dependency referred to by L. */
void
delete_back_forw_dep (rtx insn, rtx elem)
delete_back_forw_dep (dep_link_t l)
{
dep_node_t n = DEP_LINK_NODE (l);
gcc_assert (current_sched_info->flags & DO_SPECULATION);
if (true_dependency_cache != NULL)
{
bitmap_clear_bit (&true_dependency_cache[INSN_LUID (insn)],
INSN_LUID (elem));
bitmap_clear_bit (&anti_dependency_cache[INSN_LUID (insn)],
INSN_LUID (elem));
bitmap_clear_bit (&output_dependency_cache[INSN_LUID (insn)],
INSN_LUID (elem));
bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
INSN_LUID (elem));
dep_t dep = DEP_NODE_DEP (n);
int elem_luid = INSN_LUID (DEP_PRO (dep));
int insn_luid = INSN_LUID (DEP_CON (dep));
bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
}
remove_free_DEPS_LIST_elem (elem, &LOG_LINKS (insn));
delete_forw_dep (insn, elem);
delete_forw_dep (DEP_NODE_FORW (n));
detach_dep_link (DEP_NODE_BACK (n));
}
/* Return weakness of speculative type TYPE in the dep_status DS. */


@@ -300,28 +300,24 @@ static struct sched_info ebb_sched_info =
static basic_block
earliest_block_with_similiar_load (basic_block last_block, rtx load_insn)
{
rtx back_link;
dep_link_t back_link;
basic_block bb, earliest_block = NULL;
for (back_link = LOG_LINKS (load_insn);
back_link;
back_link = XEXP (back_link, 1))
FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
{
rtx insn1 = XEXP (back_link, 0);
rtx insn1 = DEP_LINK_PRO (back_link);
if (GET_MODE (back_link) == VOIDmode)
if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, load_insn). */
rtx fore_link;
dep_link_t fore_link;
for (fore_link = INSN_DEPEND (insn1);
fore_link;
fore_link = XEXP (fore_link, 1))
FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
{
rtx insn2 = XEXP (fore_link, 0);
rtx insn2 = DEP_LINK_CON (fore_link);
basic_block insn2_block = BLOCK_FOR_INSN (insn2);
if (GET_MODE (fore_link) == VOIDmode)
if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
{
if (earliest_block != NULL
&& earliest_block->index < insn2_block->index)
@@ -404,7 +400,7 @@ add_deps_for_risky_insns (rtx head, rtx tail)
REG_DEP_ANTI, DEP_ANTI);
if (res == DEP_CREATED)
add_forw_dep (insn, LOG_LINKS (insn));
add_forw_dep (DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
else
gcc_assert (res != DEP_CHANGED);
}
@@ -451,12 +447,12 @@ schedule_ebb (rtx head, rtx tail)
{
init_deps_global ();
/* Compute LOG_LINKS. */
/* Compute backward dependencies. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
/* Compute INSN_DEPEND. */
/* Compute forward dependencies. */
compute_forward_dependences (head, tail);
add_deps_for_risky_insns (head, tail);


@@ -42,6 +42,218 @@ typedef int ds_t;
/* Type to represent weakness of speculative dependence. */
typedef int dw_t;
extern enum reg_note ds_to_dk (ds_t);
extern ds_t dk_to_ds (enum reg_note);
/* Information about the dependency. */
struct _dep
{
/* Producer. */
rtx pro;
/* Consumer. */
rtx con;
/* Dependency kind (aka dependency major type). This field is superseded
by STATUS below. Though, it is still in place because all the backends
use it. */
enum reg_note kind;
/* Dependency status. This field holds all dependency types and additional
information for speculative dependencies. */
ds_t status;
};
typedef struct _dep *dep_t;
#define DEP_PRO(D) ((D)->pro)
#define DEP_CON(D) ((D)->con)
#define DEP_KIND(D) ((D)->kind)
#define DEP_STATUS(D) ((D)->status)
/* Functions to work with dep. */
extern void init_dep (dep_t, rtx, rtx, enum reg_note);
/* Definition of this struct resides below. */
struct _dep_node;
/* A link in the dependency list. This is essentially an equivalent of a
single {INSN, DEPS}_LIST rtx. */
struct _dep_link
{
/* Dep node with all the data. */
struct _dep_node *node;
/* Next link in the list. For the last one it is NULL. */
struct _dep_link *next;
/* Pointer to the next field of the previous link in the list.
For the first link this points to the deps_list->first.
With the help of this field it is easy to remove and insert links into
the list. */
struct _dep_link **prev_nextp;
};
typedef struct _dep_link *dep_link_t;
#define DEP_LINK_NODE(N) ((N)->node)
#define DEP_LINK_NEXT(N) ((N)->next)
#define DEP_LINK_PREV_NEXTP(N) ((N)->prev_nextp)
/* Macros to work with dep_link. For most use cases only part of the
dependency information is needed. These macros conveniently provide that
piece of information. */
#define DEP_LINK_DEP(N) (DEP_NODE_DEP (DEP_LINK_NODE (N)))
#define DEP_LINK_PRO(N) (DEP_PRO (DEP_LINK_DEP (N)))
#define DEP_LINK_CON(N) (DEP_CON (DEP_LINK_DEP (N)))
#define DEP_LINK_KIND(N) (DEP_KIND (DEP_LINK_DEP (N)))
#define DEP_LINK_STATUS(N) (DEP_STATUS (DEP_LINK_DEP (N)))
void debug_dep_links (dep_link_t);
/* A list of dep_links. Lists of this type are now used instead of rtx
LOG_LINKS and similar lists. */
struct _deps_list
{
dep_link_t first;
};
typedef struct _deps_list *deps_list_t;
#define DEPS_LIST_FIRST(L) ((L)->first)
/* Macro to walk through deps_list. */
#define FOR_EACH_DEP_LINK(LINK, LIST) \
for ((LINK) = DEPS_LIST_FIRST (LIST); \
(LINK) != NULL; \
(LINK) = DEP_LINK_NEXT (LINK))
/* Functions to work with deps_list. */
deps_list_t create_deps_list (bool);
void free_deps_list (deps_list_t);
void delete_deps_list (deps_list_t);
bool deps_list_empty_p (deps_list_t);
void debug_deps_list (deps_list_t);
void add_back_dep_to_deps_list (deps_list_t, dep_t);
dep_link_t find_link_by_pro_in_deps_list (deps_list_t, rtx);
dep_link_t find_link_by_con_in_deps_list (deps_list_t, rtx);
void copy_deps_list_change_con (deps_list_t, deps_list_t, rtx);
void move_dep_link (dep_link_t, deps_list_t);
/* Suppose we have a dependence Y between insn pro1 and con1, where pro1 has
additional dependents con0 and con2, and con1 is dependent on additional
insns pro0 and pro2:
.con0 pro0
. ^ |
. | |
. | |
. X A
. | |
. | |
. | V
.pro1--Y-->con1
. | ^
. | |
. | |
. Z B
. | |
. | |
. V |
.con2 pro2
This is represented using a "dep_node" for each dependence arc, which are
connected as follows (diagram is centered around Y which is fully shown;
other dep_nodes shown partially):
. +------------+ +--------------+ +------------+
. : dep_node X : | dep_node Y | : dep_node Z :
. : : | | : :
. : : | | : :
. : forw : | forw | : forw :
. : +--------+ : | +--------+ | : +--------+ :
forw_deps : |dep_link| : | |dep_link| | : |dep_link| :
+-----+ : | +----+ | : | | +----+ | | : | +----+ | :
|first|----->| |next|-+------+->| |next|-+--+----->| |next|-+--->NULL
+-----+ : | +----+ | : | | +----+ | | : | +----+ | :
. ^ ^ : | ^ | : | | ^ | | : | | :
. | | : | | | : | | | | | : | | :
. | +--<----+--+ +--+---<--+--+--+ +--+--+--<---+--+ | :
. | : | | | : | | | | | : | | | :
. | : | +----+ | : | | +----+ | | : | +----+ | :
. | : | |prev| | : | | |prev| | | : | |prev| | :
. | : | |next| | : | | |next| | | : | |next| | :
. | : | +----+ | : | | +----+ | | : | +----+ | :
. | : | | :<-+ | | | |<-+ : | | :<-+
. | : | +----+ | : | | | +----+ | | | : | +----+ | : |
. | : | |node|-+----+ | | |node|-+--+--+ : | |node|-+----+
. | : | +----+ | : | | +----+ | | : | +----+ | :
. | : | | : | | | | : | | :
. | : +--------+ : | +--------+ | : +--------+ :
. | : : | | : :
. | : SAME pro1 : | +--------+ | : SAME pro1 :
. | : DIFF con0 : | |dep | | : DIFF con2 :
. | : : | | | | : :
. | | | +----+ | |
.RTX<------------------------+--+-|pro1| | |
.pro1 | | +----+ | |
. | | | |
. | | +----+ | |
.RTX<------------------------+--+-|con1| | |
.con1 | | +----+ | |
. | | | | |
. | | | +----+ | |
. | | | |kind| | |
. | | | +----+ | |
. | : : | | |stat| | | : :
. | : DIFF pro0 : | | +----+ | | : DIFF pro2 :
. | : SAME con1 : | | | | : SAME con1 :
. | : : | +--------+ | : :
. | : : | | : :
. | : back : | back | : back :
. v : +--------+ : | +--------+ | : +--------+ :
back_deps : |dep_link| : | |dep_link| | : |dep_link| :
+-----+ : | +----+ | : | | +----+ | | : | +----+ | :
|first|----->| |next|-+------+->| |next|-+--+----->| |next|-+--->NULL
+-----+ : | +----+ | : | | +----+ | | : | +----+ | :
. ^ : | ^ | : | | ^ | | : | | :
. | : | | | : | | | | | : | | :
. +--<----+--+ +--+---<--+--+--+ +--+--+--<---+--+ | :
. : | | | : | | | | | : | | | :
. : | +----+ | : | | +----+ | | : | +----+ | :
. : | |prev| | : | | |prev| | | : | |prev| | :
. : | |next| | : | | |next| | | : | |next| | :
. : | +----+ | : | | +----+ | | : | +----+ | :
. : | | :<-+ | | | |<-+ : | | :<-+
. : | +----+ | : | | | +----+ | | | : | +----+ | : |
. : | |node|-+----+ | | |node|-+--+--+ : | |node|-+----+
. : | +----+ | : | | +----+ | | : | +----+ | :
. : | | : | | | | : | | :
. : +--------+ : | +--------+ | : +--------+ :
. : : | | : :
. : dep_node A : | dep_node Y | : dep_node B :
. +------------+ +--------------+ +------------+
*/
struct _dep_node
{
/* Backward link. */
struct _dep_link back;
/* The dep. */
struct _dep dep;
/* Forward link. */
struct _dep_link forw;
};
typedef struct _dep_node *dep_node_t;
#define DEP_NODE_BACK(N) (&(N)->back)
#define DEP_NODE_DEP(N) (&(N)->dep)
#define DEP_NODE_FORW(N) (&(N)->forw)
/* Describe state of dependencies used during sched_analyze phase. */
struct deps
{
@@ -263,13 +475,23 @@ extern struct sched_info *current_sched_info;
struct haifa_insn_data
{
/* A list of insns which depend on the instruction. Unlike LOG_LINKS,
/* NB: We can't place 'struct _deps_list' here instead of deps_list_t into
h_i_d because when h_i_d extends, addresses of the deps_list->first
change without updating deps_list->first->next->prev_nextp. Thus
BACK_DEPS and RESOLVED_BACK_DEPS are allocated on the heap and FORW_DEPS
list is allocated on the obstack. */
/* A list of backward dependencies. The insn is a consumer of all the
deps mentioned here. */
deps_list_t back_deps;
/* A list of insns which depend on the instruction. Unlike 'back_deps',
it represents forward dependencies. */
rtx depend;
deps_list_t forw_deps;
/* A list of scheduled producers of the instruction. Links are being moved
from LOG_LINKS to RESOLVED_DEPS during scheduling. */
rtx resolved_deps;
from 'back_deps' to 'resolved_back_deps' while scheduling. */
deps_list_t resolved_back_deps;
/* Logical uid gives the original ordering of the insns. */
int luid;
@@ -339,14 +561,15 @@ extern regset *glat_start, *glat_end;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
#define INSN_DEPEND(INSN) (h_i_d[INSN_UID (INSN)].depend)
#define RESOLVED_DEPS(INSN) (h_i_d[INSN_UID (INSN)].resolved_deps)
#define INSN_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].back_deps)
#define INSN_FORW_DEPS(INSN) (h_i_d[INSN_UID (INSN)].forw_deps)
#define INSN_RESOLVED_BACK_DEPS(INSN) \
(h_i_d[INSN_UID (INSN)].resolved_back_deps)
#define INSN_LUID(INSN) (h_i_d[INSN_UID (INSN)].luid)
#define CANT_MOVE(insn) (h_i_d[INSN_UID (insn)].cant_move)
#define INSN_DEP_COUNT(INSN) (h_i_d[INSN_UID (INSN)].dep_count)
#define INSN_PRIORITY(INSN) (h_i_d[INSN_UID (INSN)].priority)
#define INSN_PRIORITY_KNOWN(INSN) (h_i_d[INSN_UID (INSN)].priority_known)
#define INSN_COST(INSN) (h_i_d[INSN_UID (INSN)].cost)
#define INSN_REG_WEIGHT(INSN) (h_i_d[INSN_UID (INSN)].reg_weight)
#define HAS_INTERNAL_DEP(INSN) (h_i_d[INSN_UID (INSN)].has_internal_dep)
#define TODO_SPEC(INSN) (h_i_d[INSN_UID (INSN)].todo_spec)
@@ -370,8 +593,8 @@ extern regset *glat_start, *glat_end;
#define IS_SPECULATION_BRANCHY_CHECK_P(INSN) \
(RECOVERY_BLOCK (INSN) != NULL && RECOVERY_BLOCK (INSN) != EXIT_BLOCK_PTR)
/* DEP_STATUS of the link encapsulates information, that is needed for
speculative scheduling. Namely, it is 4 integers in the range
/* Dep status (aka ds_t) of the link encapsulates information that is needed
for speculative scheduling. Namely, it is 4 integers in the range
[0, MAX_DEP_WEAK] and 3 bits.
The integers correspond to the probability of the dependence to *not*
exist, it is the probability, that overcoming of this dependence will
@@ -386,9 +609,8 @@ extern regset *glat_start, *glat_end;
   as only true dependence can be overcome.
   There also is the 4-th bit in the DEP_STATUS (HARD_DEP), that is reserved
   for using to describe instruction's status.  It is set whenever instruction
   has at least one dependence, that cannot be overcome.
   See also: check_dep_status () in sched-deps.c .  */
-#define DEP_STATUS(LINK) XINT (LINK, 2)
/* We exclude sign bit. */
#define BITS_PER_DEP_STATUS (HOST_BITS_PER_INT - 1)
@@ -610,7 +832,7 @@ extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
-extern void add_forw_dep (rtx, rtx);
+extern void add_forw_dep (dep_link_t);
extern void compute_forward_dependences (rtx, rtx);
extern rtx find_insn_list (rtx, rtx);
extern void init_dependency_caches (int);
@@ -620,7 +842,7 @@ extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
                                                       enum reg_note, ds_t);
extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void delete_back_forw_dep (rtx, rtx);
+extern void delete_back_forw_dep (dep_link_t);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
@@ -632,7 +854,8 @@ extern int no_real_insns_p (rtx, rtx);
extern void rm_other_notes (rtx, rtx);
-extern int insn_cost (rtx, rtx, rtx);
+extern int insn_cost (rtx);
+extern int dep_cost (dep_t);
extern int set_priorities (rtx, rtx);
extern void schedule_block (basic_block *, int);


@@ -1711,12 +1711,12 @@ update_live (rtx insn, int src)
static void
set_spec_fed (rtx load_insn)
{
-  rtx link;
+  dep_link_t link;

-  for (link = INSN_DEPEND (load_insn); link; link = XEXP (link, 1))
-    if (GET_MODE (link) == VOIDmode)
-      FED_BY_SPEC_LOAD (XEXP (link, 0)) = 1;
-} /* set_spec_fed */
+  FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (load_insn))
+    if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
+      FED_BY_SPEC_LOAD (DEP_LINK_CON (link)) = 1;
+}
/* On the path from the insn to load_insn_bb, find a conditional
branch depending on insn, that guards the speculative load. */
@@ -1724,17 +1724,18 @@ branch depending on insn, that guards the speculative load.  */
static int
find_conditional_protection (rtx insn, int load_insn_bb)
{
-  rtx link;
+  dep_link_t link;

  /* Iterate through DEF-USE forward dependences.  */
-  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
+  FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
    {
-      rtx next = XEXP (link, 0);
+      rtx next = DEP_LINK_CON (link);

      if ((CONTAINING_RGN (BLOCK_NUM (next)) ==
           CONTAINING_RGN (BB_TO_BLOCK (load_insn_bb)))
          && IS_REACHABLE (INSN_BB (next), load_insn_bb)
          && load_insn_bb != INSN_BB (next)
-          && GET_MODE (link) == VOIDmode
+          && DEP_LINK_KIND (link) == REG_DEP_TRUE
&& (JUMP_P (next)
|| find_conditional_protection (next, load_insn_bb)))
return 1;
@@ -1753,20 +1754,20 @@ find_conditional_protection (rtx insn, int load_insn_bb)
   and if insn1 is on the path
   region-entry -> ... -> bb_trg -> ... load_insn.
-   Locate insn1 by climbing on LOG_LINKS from load_insn.
-   Locate the branch by following INSN_DEPEND from insn1.  */
+   Locate insn1 by climbing on INSN_BACK_DEPS from load_insn.
+   Locate the branch by following INSN_FORW_DEPS from insn1.  */
static int
is_conditionally_protected (rtx load_insn, int bb_src, int bb_trg)
{
-  rtx link;
+  dep_link_t link;

-  for (link = LOG_LINKS (load_insn); link; link = XEXP (link, 1))
+  FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (load_insn))
    {
-      rtx insn1 = XEXP (link, 0);
+      rtx insn1 = DEP_LINK_PRO (link);

      /* Must be a DEF-USE dependence upon non-branch.  */
-      if (GET_MODE (link) != VOIDmode
+      if (DEP_LINK_KIND (link) != REG_DEP_TRUE
|| JUMP_P (insn1))
continue;
@@ -1809,28 +1810,27 @@ is_conditionally_protected (rtx load_insn, int bb_src, int bb_trg)
static int
is_pfree (rtx load_insn, int bb_src, int bb_trg)
{
-  rtx back_link;
+  dep_link_t back_link;
  candidate *candp = candidate_table + bb_src;

  if (candp->split_bbs.nr_members != 1)
    /* Must have exactly one escape block.  */
    return 0;

-  for (back_link = LOG_LINKS (load_insn);
-       back_link; back_link = XEXP (back_link, 1))
+  FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
    {
-      rtx insn1 = XEXP (back_link, 0);
+      rtx insn1 = DEP_LINK_PRO (back_link);

-      if (GET_MODE (back_link) == VOIDmode)
+      if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, load_insn). */
-          rtx fore_link;
+          dep_link_t fore_link;

-          for (fore_link = INSN_DEPEND (insn1);
-               fore_link; fore_link = XEXP (fore_link, 1))
+          FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
            {
-              rtx insn2 = XEXP (fore_link, 0);
-              if (GET_MODE (fore_link) == VOIDmode)
+              rtx insn2 = DEP_LINK_CON (fore_link);
+              if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, insn2). */
if (haifa_classify_insn (insn2) != PFREE_CANDIDATE)
@@ -1863,7 +1863,7 @@ is_prisky (rtx load_insn, int bb_src, int bb_trg)
  if (FED_BY_SPEC_LOAD (load_insn))
    return 1;

-  if (LOG_LINKS (load_insn) == NULL)
+  if (deps_list_empty_p (INSN_BACK_DEPS (load_insn)))
/* Dependence may 'hide' out of the region. */
return 1;
@@ -2284,7 +2284,9 @@ add_branch_dependences (rtx head, rtx tail)
    {
      if (!NOTE_P (insn))
        {
-          if (last != 0 && !find_insn_list (insn, LOG_LINKS (last)))
+          if (last != 0
+              && (find_link_by_pro_in_deps_list (INSN_BACK_DEPS (last), insn)
+                  == NULL))
{
if (! sched_insns_conditions_mutex_p (last, insn))
add_dependence (last, insn, REG_DEP_ANTI);
@@ -2573,7 +2575,7 @@ debug_dependencies (void)
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
-      rtx link;
+      dep_link_t link;
if (! INSN_P (insn))
{
@@ -2598,7 +2600,7 @@ debug_dependencies (void)
               INSN_BB (insn),
               INSN_DEP_COUNT (insn),
               INSN_PRIORITY (insn),
-               insn_cost (insn, 0, 0));
+               insn_cost (insn));
if (recog_memoized (insn) < 0)
fprintf (sched_dump, "nothing");
@@ -2606,8 +2608,8 @@ debug_dependencies (void)
        print_reservation (sched_dump, insn);
      fprintf (sched_dump, "\t: ");
-      for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
-        fprintf (sched_dump, "%d ", INSN_UID (XEXP (link, 0)));
+      FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+        fprintf (sched_dump, "%d ", INSN_UID (DEP_LINK_CON (link)));
fprintf (sched_dump, "\n");
}
}
@@ -2665,11 +2667,11 @@ schedule_region (int rgn)
  for (bb = 0; bb < current_nr_blocks; bb++)
    init_deps (bb_deps + bb);

-  /* Compute LOG_LINKS.  */
+  /* Compute backward dependencies.  */
  for (bb = 0; bb < current_nr_blocks; bb++)
    compute_block_backward_dependences (bb);

-  /* Compute INSN_DEPEND.  */
+  /* Compute forward dependencies.  */
for (bb = current_nr_blocks - 1; bb >= 0; bb--)
{
rtx head, tail;


@@ -308,7 +308,6 @@ Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD 0
#define TARGET_SCHED_DFA_NEW_CYCLE 0
#define TARGET_SCHED_IS_COSTLY_DEPENDENCE 0
-#define TARGET_SCHED_ADJUST_COST_2 0
#define TARGET_SCHED_H_I_D_EXTENDED 0
#define TARGET_SCHED_SPECULATE_INSN 0
#define TARGET_SCHED_NEEDS_BLOCK_P 0
@@ -337,7 +336,6 @@ Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
  TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD, \
  TARGET_SCHED_DFA_NEW_CYCLE, \
  TARGET_SCHED_IS_COSTLY_DEPENDENCE, \
-  TARGET_SCHED_ADJUST_COST_2, \
TARGET_SCHED_H_I_D_EXTENDED, \
TARGET_SCHED_SPECULATE_INSN, \
TARGET_SCHED_NEEDS_BLOCK_P, \


@@ -84,6 +84,8 @@ typedef struct secondary_reload_info
  int t_icode; /* Actually an enum insn_code - see above.  */
} secondary_reload_info;

+/* This is defined in sched-int.h .  */
+struct _dep;
struct gcc_target
{
@@ -241,7 +243,7 @@ struct gcc_target
  /* Given the current cost, COST, of an insn, INSN, calculate and
     return a new cost based on its relationship to DEP_INSN through
     the dependence LINK.  The default is to make no adjustment.  */
-  int (* adjust_cost) (rtx insn, rtx link, rtx def_insn, int cost);
+  int (* adjust_cost) (rtx insn, rtx link, rtx dep_insn, int cost);
/* Adjust the priority of an insn as you see fit. Returns the new
priority. */
@@ -324,22 +326,16 @@ struct gcc_target
     cycle.  */
  int (* dfa_new_cycle) (FILE *, int, rtx, int, int, int *);

-  /* The following member value is a pointer to a function called
-     by the insn scheduler.  It should return true if there exists a
-     dependence which is considered costly by the target, between
-     the insn passed as the first parameter, and the insn passed as
-     the second parameter.  The third parameter is the INSN_DEPEND
-     link that represents the dependence between the two insns.  The
-     fourth argument is the cost of the dependence as estimated by
+  /* The following member value is a pointer to a function called by the
+     insn scheduler.  It should return true if there exists a dependence
+     which is considered costly by the target, between the insn
+     DEP_PRO (&_DEP), and the insn DEP_CON (&_DEP).  The first parameter is
+     the dep that represents the dependence between the two insns.  The
+     second argument is the cost of the dependence as estimated by
     the scheduler.  The last argument is the distance in cycles
     between the already scheduled insn (first parameter) and the
     second insn (second parameter).  */
-  bool (* is_costly_dependence) (rtx, rtx, rtx, int, int);
-
-  /* Given the current cost, COST, of an insn, INSN, calculate and
-     return a new cost based on its relationship to DEP_INSN through the
-     dependence of type DEP_TYPE.  The default is to make no adjustment.  */
-  int (* adjust_cost_2) (rtx insn, int, rtx def_insn, int cost);
+  bool (* is_costly_dependence) (struct _dep *_dep, int, int);
/* The following member value is a pointer to a function called
by the insn scheduler. This hook is called to notify the backend