ra-conflict.c: New file.

2007-09-02  Kenneth Zadeck <zadeck@naturalbridge.com>

	* ra-conflict.c: New file.
	* ra.h: New file.
	* reload.c (push_reload, find_dummy_reload): Change DF_RA_LIVE
	usage to DF_LIVE usage.
	* rtlanal.c (subreg_nregs_with_regno): New function.  
	* df-scan.c (df_def_record_1, df_uses_record): Add code to set
	DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, and DF_REF_SUBREG flags.
	(df_has_eh_preds): Removed.
	(df_bb_refs_collect, df_bb_refs_collect, df_bb_refs_collect,
	df_exit_block_uses_collect): Changed call from df_has_eh_preds to
	bb_has_eh_pred.
	* global.c (allocno, max_allocno, conflicts, allocno_row_words,
	reg_allocno, EXECUTE_IF_SET_IN_ALLOCNO_SET): Moved to ra.h
	(SET_ALLOCNO_LIVE, CLEAR_ALLOCNO_LIVE): Moved to ra-conflicts.c.
	(regs_set, record_one_conflict, record_conflicts, mark_reg_store,
	mark_reg_clobber, mark_reg_conflicts, mark_reg_death): Deleted.
	(global_alloc): Turn off rescanning insns after call to
	global_conflicts and added call to set_preferences.
	(global_conflicts): Moved to ra-alloc.c.
	(set_preferences_1, set_preferences): New function.
	(mirror_conflicts): Changed types for various variables.
	(mark_elimination): Change DF_RA_LIVE
	usage to DF_LIVE usage.
	(build_insn_chain): Rewritten from scratch and made local.
	(print_insn_chain, print_insn_chains): New functions.
	(dump_conflicts): Do not print conflicts for fixed_regs.
	(rest_of_handle_global_alloc): Turn off insn rescanning.
	* hard-reg-set.h: Fixed comment.
	* local-alloc.c (update_equiv_regs): Change DF_RA_LIVE
	usage to DF_LIVE usage and delete refs to TOP sets.
	(block_alloc): Mark regs as live if they are in the artificial
	defs at top of block.
	(find_stack_regs): New function.
	(rest_of_handle_local_alloc): Changed urec problem to live
	problem and do not turn off df rescanning.
	* df.h (DF_UREC, DF_UREC_BB_INFO, DF_LIVE_TOP, DF_RA_LIVE_IN,
	DF_RA_LIVE_TOP, DF_RA_LIVE_OUT, df_urec_bb_info, df_urec,
	df_urec_add_problem, df_urec_get_bb_info, df_has_eh_preds): Removed.
	(DF_CHAIN, DF_NOTE, DF_CHAIN): Renumbered.
	(DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, DF_REF_SUBREG): New
	fields in df_ref_flags.  The rest have been renumbered.  
	* init-regs.c (initialize_uninitialized_regs): Enhanced debugging
	at -O1.
	* rtl.h (subreg_nregs_with_regno): New function.
	* df-problems.c: (df_get_live_out, df_get_live_in,
	df_get_live_top): Removed reference to DF_RA_LIVE.
	(df_lr_reset, df_lr_transfer_function, df_live_free_bb_info,
	df_live_alloc, df_live_reset, df_live_local_finalize,
	df_live_free): Make top set only if different from in set.
	(df_lr_top_dump, df_live_top_dump): Only print top set if
	different from in set.
	(df_lr_bb_local_compute): Removed unnecessary check.
	(df_urec_problem_data, df_urec_set_bb_info, df_urec_free_bb_info, 
	df_urec_alloc, df_urec_mark_reg_change, earlyclobber_regclass, 
	df_urec_check_earlyclobber, df_urec_mark_reg_use_for_earlyclobber,
	df_urec_mark_reg_use_for_earlyclobber_1, df_urec_bb_local_compute,
	df_urec_local_compute, df_urec_init, df_urec_local_finalize, 
	df_urec_confluence_n, df_urec_transfer_function, df_urec_free, 
	df_urec_top_dump, df_urec_bottom_dump, problem_UREC,
	df_urec_add_problem): Removed.
	(df_simulate_fixup_sets): Changed call from df_has_eh_preds to
	bb_has_eh_pred. 
	* Makefile.in (ra-conflict.o, ra.h): New dependencies.
	* basic_block.h (bb_has_abnormal_pred): New function.
	* reload1.c (compute_use_by_pseudos): Change DF_RA_LIVE
	usage to DF_LIVE usage.

From-SVN: r128957
This commit is contained in:
Kenneth Zadeck 2007-10-02 13:10:07 +00:00 committed by Kenneth Zadeck
parent 746025f4bc
commit ba49cb7bff
18 changed files with 1968 additions and 1740 deletions

View File

@ -1,3 +1,72 @@
2007-09-02 Kenneth Zadeck <zadeck@naturalbridge.com>
* ra-conflict.c: New file.
* ra.h: New file.
* reload.c (push_reload, find_dummy_reload): Change DF_RA_LIVE
usage to DF_LIVE usage.
* rtlanal.c (subreg_nregs_with_regno): New function.
* df-scan.c (df_def_record_1, df_uses_record): Add code to set
DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, and DF_REF_SUBREG flags.
(df_has_eh_preds): Removed.
(df_bb_refs_collect, df_bb_refs_collect, df_bb_refs_collect,
df_exit_block_uses_collect): Changed call from df_has_eh_preds to
bb_has_eh_pred.
* global.c (allocno, max_allocno, conflicts, allocno_row_words,
reg_allocno, EXECUTE_IF_SET_IN_ALLOCNO_SET): Moved to ra.h
(SET_ALLOCNO_LIVE, CLEAR_ALLOCNO_LIVE): Moved to ra-conflicts.c.
(regs_set, record_one_conflict, record_conflicts, mark_reg_store,
mark_reg_clobber, mark_reg_conflicts, mark_reg_death): Deleted.
(global_alloc): Turn off rescanning insns after call to
global_conflicts and added call to set_preferences.
(global_conflicts): Moved to ra-alloc.c.
(set_preferences_1, set_preferences): New function.
(mirror_conflicts): Changed types for various variables.
(mark_elimination): Change DF_RA_LIVE
usage to DF_LIVE usage.
(build_insn_chain): Rewritten from scratch and made local.
(print_insn_chain, print_insn_chains): New functions.
(dump_conflicts): Do not print conflicts for fixed_regs.
(rest_of_handle_global_alloc): Turn off insn rescanning.
* hard-reg-set.h: Fixed comment.
* local-alloc.c (update_equiv_regs): Change DF_RA_LIVE
usage to DF_LIVE usage and delete refs to TOP sets.
(block_alloc): Mark regs as live if they are in the artificial
defs at top of block.
(find_stack_regs): New function.
(rest_of_handle_local_alloc): Changed urec problem to live
problem and do not turn off df rescanning.
* df.h (DF_UREC, DF_UREC_BB_INFO, DF_LIVE_TOP, DF_RA_LIVE_IN,
DF_RA_LIVE_TOP, DF_RA_LIVE_OUT, df_urec_bb_info, df_urec,
df_urec_add_problem, df_urec_get_bb_info, df_has_eh_preds): Removed.
(DF_CHAIN, DF_NOTE, DF_CHAIN): Renumbered.
(DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, DF_REF_SUBREG): New
fields in df_ref_flags. The rest have been renumbered.
* init-regs.c (initialize_uninitialized_regs): Enhanced debugging
at -O1.
* rtl.h (subreg_nregs_with_regno): New function.
* df-problems.c: (df_get_live_out, df_get_live_in,
df_get_live_top): Removed reference to DF_RA_LIVE.
(df_lr_reset, df_lr_transfer_function, df_live_free_bb_info,
df_live_alloc, df_live_reset, df_live_local_finalize,
df_live_free): Make top set only if different from in set.
(df_lr_top_dump, df_live_top_dump): Only print top set if
different from in set.
(df_lr_bb_local_compute): Removed unnecessary check.
(df_urec_problem_data, df_urec_set_bb_info, df_urec_free_bb_info,
df_urec_alloc, df_urec_mark_reg_change, earlyclobber_regclass,
df_urec_check_earlyclobber, df_urec_mark_reg_use_for_earlyclobber,
df_urec_mark_reg_use_for_earlyclobber_1, df_urec_bb_local_compute,
df_urec_local_compute, df_urec_init, df_urec_local_finalize,
df_urec_confluence_n, df_urec_transfer_function, df_urec_free,
df_urec_top_dump, df_urec_bottom_dump, problem_UREC,
df_urec_add_problem): Removed.
(df_simulate_fixup_sets): Changed call from df_has_eh_preds to
bb_has_eh_pred.
* Makefile.in (ra-conflict.o, ra.h): New dependencies.
* basic_block.h (bb_has_abnormal_pred): New function.
* reload1.c (compute_use_by_pseudos): Change DF_RA_LIVE
usage to DF_LIVE usage.
2007-10-02 Revital Eres <eres@il.ibm.com>
* config/rs6000/predicates.md (easy_vector_constant): Return false

View File

@ -791,6 +791,7 @@ FUNCTION_H = function.h $(TREE_H) $(HASHTAB_H)
EXPR_H = expr.h insn-config.h $(FUNCTION_H) $(RTL_H) $(FLAGS_H) $(TREE_H) $(MACHMODE_H) $(EMIT_RTL_H)
OPTABS_H = optabs.h insn-codes.h
REGS_H = regs.h varray.h $(MACHMODE_H) $(OBSTACK_H) $(BASIC_BLOCK_H) $(FUNCTION_H)
RA_H = ra.h $(REGS_H)
RESOURCE_H = resource.h hard-reg-set.h
SCHED_INT_H = sched-int.h $(INSN_ATTR_H) $(BASIC_BLOCK_H) $(RTL_H) $(DF_H)
INTEGRATE_H = integrate.h $(VARRAY_H)
@ -1100,6 +1101,7 @@ OBJS-common = \
print-rtl.o \
print-tree.o \
profile.o \
ra-conflict.o \
real.o \
recog.o \
reg-stack.o \
@ -2702,7 +2704,11 @@ bitmap.o : bitmap.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
global.o : global.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
$(TIMEVAR_H) vecprim.h $(DF_H)
$(TIMEVAR_H) vecprim.h $(DF_H) $(DBGCNT_H) $(RA_H)
ra-conflict.o : ra-conflict.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
$(TIMEVAR_H) vecprim.h $(DF_H) $(RA_H) sbitmap.h
varray.o : varray.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(GGC_H) \
$(HASHTAB_H) $(BCONFIG_H) $(VARRAY_H) toplev.h
vec.o : vec.c $(CONFIG_H) $(SYSTEM_H) coretypes.h vec.h $(GGC_H) \

View File

@ -1135,6 +1135,21 @@ bb_has_eh_pred (basic_block bb)
return false;
}
/* Return true when one of the predecessor edges of BB is marked with EDGE_ABNORMAL. */
static inline bool
bb_has_abnormal_pred (basic_block bb)
{
edge e;
edge_iterator ei;
FOR_EACH_EDGE (e, ei, bb->preds)
{
if (e->flags & EDGE_ABNORMAL)
return true;
}
return false;
}
/* In cfgloopmanip.c. */
extern edge mfb_kj_edge;
bool mfb_keep_just (edge);

View File

@ -573,8 +573,8 @@ dce_process_block (basic_block bb, bool redo_out)
/* These regs are considered always live so if they end up dying
because of some def, we need to bring the back again.
Calling df_simulate_fixup_sets has the disadvantage of calling
df_has_eh_preds once per insn, so we cache the information here. */
if (df_has_eh_preds (bb))
bb_has_eh_pred once per insn, so we cache the information here. */
if (bb_has_eh_pred (bb))
au = df->eh_block_artificial_uses;
else
au = df->regular_block_artificial_uses;

View File

@ -71,9 +71,7 @@ df_get_live_out (basic_block bb)
{
gcc_assert (df_lr);
if (df_urec)
return DF_RA_LIVE_OUT (bb);
else if (df_live)
if (df_live)
return DF_LIVE_OUT (bb);
else
return DF_LR_OUT (bb);
@ -89,31 +87,12 @@ df_get_live_in (basic_block bb)
{
gcc_assert (df_lr);
if (df_urec)
return DF_RA_LIVE_IN (bb);
else if (df_live)
if (df_live)
return DF_LIVE_IN (bb);
else
return DF_LR_IN (bb);
}
/* Get the live at top set for BB no matter what problem happens to be
defined. This function is used by the register allocators who
choose different dataflow problems depending on the optimization
level. */
bitmap
df_get_live_top (basic_block bb)
{
gcc_assert (df_lr);
if (df_urec)
return DF_RA_LIVE_TOP (bb);
else
return DF_LR_TOP (bb);
}
/*----------------------------------------------------------------------------
Utility functions.
----------------------------------------------------------------------------*/
@ -210,9 +189,28 @@ df_unset_seen (void)
See df.h for details.
----------------------------------------------------------------------------*/
/* See the comment at the top of the Reaching Uses problem for how the
uses are represented in the kill sets. The same games are played
here for the defs. */
/* This problem plays a large number of games for the sake of
efficiency.
1) The order of the bits in the bitvectors. After the scanning
phase, all of the defs are sorted. All of the defs for the reg 0
are first, followed by all defs for reg 1 and so on.
2) There are two kill sets, one if the number of defs is less or
equal to DF_SPARSE_THRESHOLD and another if the number of defs is
greater.
<= : Data is built directly in the kill set.
> : One level of indirection is used to keep from generating long
strings of 1 bits in the kill sets. Bitvectors that are indexed
by the regnum are used to represent that there is a killing def
for the register. The confluence and transfer functions use
these along with the bitmap_clear_range call to remove ranges of
bits without actually generating a knockout vector.
The kill and sparse_kill and the dense_invalidated_by_call and
sparse_invalidated_by_call both play this game. */
/* Private data used to compute the solution for this problem. These
data structures are not accessible outside of this module. */
@ -740,14 +738,6 @@ df_lr_free_bb_info (basic_block bb ATTRIBUTE_UNUSED,
{
BITMAP_FREE (bb_info->use);
BITMAP_FREE (bb_info->def);
if (bb_info->in == bb_info->top)
bb_info->top = NULL;
else
{
BITMAP_FREE (bb_info->top);
BITMAP_FREE (bb_info->ause);
BITMAP_FREE (bb_info->adef);
}
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
pool_free (df_lr->block_pool, bb_info);
@ -777,11 +767,6 @@ df_lr_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
{
bitmap_clear (bb_info->def);
bitmap_clear (bb_info->use);
if (bb_info->adef)
{
bitmap_clear (bb_info->adef);
bitmap_clear (bb_info->ause);
}
}
else
{
@ -791,9 +776,6 @@ df_lr_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
bb_info->def = BITMAP_ALLOC (NULL);
bb_info->in = BITMAP_ALLOC (NULL);
bb_info->out = BITMAP_ALLOC (NULL);
bb_info->top = bb_info->in;
bb_info->adef = NULL;
bb_info->ause = NULL;
}
}
@ -815,7 +797,6 @@ df_lr_reset (bitmap all_blocks)
gcc_assert (bb_info);
bitmap_clear (bb_info->in);
bitmap_clear (bb_info->out);
bitmap_clear (bb_info->top);
}
}
@ -879,23 +860,18 @@ df_lr_bb_local_compute (unsigned int bb_index)
bitmap_set_bit (bb_info->use, DF_REF_REGNO (use));
}
}
/* Process the registers set in an exception handler. */
/* Process the registers set in an exception handler or the hard
frame pointer if this block is the target of a non local
goto. */
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP)
&& (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
{
unsigned int dregno = DF_REF_REGNO (def);
if (bb_info->adef == NULL)
{
gcc_assert (bb_info->ause == NULL);
gcc_assert (bb_info->top == bb_info->in);
bb_info->adef = BITMAP_ALLOC (NULL);
bb_info->ause = BITMAP_ALLOC (NULL);
bb_info->top = BITMAP_ALLOC (NULL);
}
bitmap_set_bit (bb_info->adef, dregno);
bitmap_set_bit (bb_info->def, dregno);
bitmap_clear_bit (bb_info->use, dregno);
}
}
@ -906,17 +882,7 @@ df_lr_bb_local_compute (unsigned int bb_index)
struct df_ref *use = *use_rec;
/* Add use to set of uses in this BB. */
if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
{
if (bb_info->adef == NULL)
{
gcc_assert (bb_info->ause == NULL);
gcc_assert (bb_info->top == bb_info->in);
bb_info->adef = BITMAP_ALLOC (NULL);
bb_info->ause = BITMAP_ALLOC (NULL);
bb_info->top = BITMAP_ALLOC (NULL);
}
bitmap_set_bit (bb_info->ause, DF_REF_REGNO (use));
}
bitmap_set_bit (bb_info->use, DF_REF_REGNO (use));
}
#endif
@ -1041,19 +1007,8 @@ df_lr_transfer_function (int bb_index)
bitmap out = bb_info->out;
bitmap use = bb_info->use;
bitmap def = bb_info->def;
bitmap top = bb_info->top;
bitmap ause = bb_info->ause;
bitmap adef = bb_info->adef;
bool changed;
changed = bitmap_ior_and_compl (top, use, out, def);
if (in != top)
{
gcc_assert (ause && adef);
changed |= bitmap_ior_and_compl (in, ause, top, adef);
}
return changed;
return bitmap_ior_and_compl (in, use, out, def);
}
@ -1097,14 +1052,6 @@ df_lr_free (void)
{
BITMAP_FREE (bb_info->use);
BITMAP_FREE (bb_info->def);
if (bb_info->in == bb_info->top)
bb_info->top = NULL;
else
{
BITMAP_FREE (bb_info->top);
BITMAP_FREE (bb_info->ause);
BITMAP_FREE (bb_info->adef);
}
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
}
@ -1297,7 +1244,6 @@ df_lr_verify_transfer_functions (void)
bitmap saved_adef;
bitmap saved_ause;
bitmap all_blocks;
bool need_as;
if (!df)
return;
@ -1326,33 +1272,9 @@ df_lr_verify_transfer_functions (void)
bitmap_clear (bb_info->def);
bitmap_clear (bb_info->use);
if (bb_info->adef)
{
need_as = true;
bitmap_copy (saved_adef, bb_info->adef);
bitmap_copy (saved_ause, bb_info->ause);
bitmap_clear (bb_info->adef);
bitmap_clear (bb_info->ause);
}
else
need_as = false;
df_lr_bb_local_compute (bb->index);
gcc_assert (bitmap_equal_p (saved_def, bb_info->def));
gcc_assert (bitmap_equal_p (saved_use, bb_info->use));
if (need_as)
{
gcc_assert (bb_info->adef);
gcc_assert (bb_info->ause);
gcc_assert (bitmap_equal_p (saved_adef, bb_info->adef));
gcc_assert (bitmap_equal_p (saved_ause, bb_info->ause));
}
else
{
gcc_assert (!bb_info->adef);
gcc_assert (!bb_info->ause);
}
}
}
else
@ -1633,7 +1555,7 @@ df_live_local_finalize (bitmap all_blocks)
{
struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
struct df_live_bb_info *bb_live_info = df_live_get_bb_info (bb_index);
/* No register may reach a location where it is not used. Thus
we trim the rr result to the places where it is used. */
bitmap_and_into (bb_live_info->in, bb_lr_info->in);
@ -1913,615 +1835,6 @@ df_live_verify_transfer_functions (void)
BITMAP_FREE (saved_kill);
BITMAP_FREE (all_blocks);
}
/*----------------------------------------------------------------------------
UNINITIALIZED REGISTERS WITH EARLYCLOBBER
Find the set of uses for registers that are reachable from the entry
block without passing thru a definition. In and out bitvectors are built
for each basic block. The regnum is used to index into these sets.
See df.h for details.
This is a variant of the UR problem above that has a lot of special
features just for the register allocation phase. This problem
should go away if someone would fix the interference graph.
----------------------------------------------------------------------------*/
/* Private data used to compute the solution for this problem. These
data structures are not accessible outside of this module. */
struct df_urec_problem_data
{
bool earlyclobbers_found; /* True if any instruction contains an
earlyclobber. */
#ifdef STACK_REGS
bitmap stack_regs; /* Registers that may be allocated to a STACK_REGS. */
#endif
};
/* Set basic block info. */
static void
df_urec_set_bb_info (unsigned int index,
struct df_urec_bb_info *bb_info)
{
gcc_assert (df_urec);
gcc_assert (index < df_urec->block_info_size);
df_urec->block_info[index] = bb_info;
}
/* Free basic block info. */
static void
df_urec_free_bb_info (basic_block bb ATTRIBUTE_UNUSED,
void *vbb_info)
{
struct df_urec_bb_info *bb_info = (struct df_urec_bb_info *) vbb_info;
if (bb_info)
{
BITMAP_FREE (bb_info->gen);
BITMAP_FREE (bb_info->kill);
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
BITMAP_FREE (bb_info->earlyclobber);
pool_free (df_urec->block_pool, bb_info);
}
}
/* Allocate or reset bitmaps for DF_UREC blocks. The solution bits are
not touched unless the block is new. */
static void
df_urec_alloc (bitmap all_blocks)
{
unsigned int bb_index;
bitmap_iterator bi;
struct df_urec_problem_data *problem_data
= (struct df_urec_problem_data *) df_urec->problem_data;
if (!df_urec->block_pool)
df_urec->block_pool = create_alloc_pool ("df_urec_block pool",
sizeof (struct df_urec_bb_info), 50);
if (!df_urec->problem_data)
{
problem_data = XNEW (struct df_urec_problem_data);
df_urec->problem_data = problem_data;
}
problem_data->earlyclobbers_found = false;
df_grow_bb_info (df_urec);
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
if (bb_info)
{
bitmap_clear (bb_info->kill);
bitmap_clear (bb_info->gen);
bitmap_clear (bb_info->earlyclobber);
}
else
{
bb_info = (struct df_urec_bb_info *) pool_alloc (df_urec->block_pool);
df_urec_set_bb_info (bb_index, bb_info);
bb_info->kill = BITMAP_ALLOC (NULL);
bb_info->gen = BITMAP_ALLOC (NULL);
bb_info->in = BITMAP_ALLOC (NULL);
bb_info->out = BITMAP_ALLOC (NULL);
bb_info->top = BITMAP_ALLOC (NULL);
bb_info->earlyclobber = BITMAP_ALLOC (NULL);
}
}
df_urec->optional_p = true;
}
/* The function modifies local info for register REG being changed in
SETTER. DATA is used to pass the current basic block info. */
static void
df_urec_mark_reg_change (rtx reg, const_rtx setter, void *data)
{
int regno;
int endregno;
int i;
struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
if (!REG_P (reg))
return;
regno = REGNO (reg);
if (regno < FIRST_PSEUDO_REGISTER)
{
endregno = END_HARD_REGNO (reg);
for (i = regno; i < endregno; i++)
{
bitmap_set_bit (bb_info->kill, i);
if (GET_CODE (setter) != CLOBBER)
bitmap_set_bit (bb_info->gen, i);
else
bitmap_clear_bit (bb_info->gen, i);
}
}
else
{
bitmap_set_bit (bb_info->kill, regno);
if (GET_CODE (setter) != CLOBBER)
bitmap_set_bit (bb_info->gen, regno);
else
bitmap_clear_bit (bb_info->gen, regno);
}
}
/* Classes of registers which could be early clobbered in the current
insn. */
static VEC(int,heap) *earlyclobber_regclass;
/* This function finds and stores register classes that could be early
clobbered in INSN. If any earlyclobber classes are found, the function
returns TRUE, in all other cases it returns FALSE. */
static bool
df_urec_check_earlyclobber (rtx insn)
{
int opno;
bool found = false;
extract_insn (insn);
VEC_truncate (int, earlyclobber_regclass, 0);
for (opno = 0; opno < recog_data.n_operands; opno++)
{
char c;
bool amp_p;
int i;
enum reg_class class;
const char *p = recog_data.constraints[opno];
class = NO_REGS;
amp_p = false;
for (;;)
{
c = *p;
switch (c)
{
case '=': case '+': case '?':
case '#': case '!':
case '*': case '%':
case 'm': case '<': case '>': case 'V': case 'o':
case 'E': case 'F': case 'G': case 'H':
case 's': case 'i': case 'n':
case 'I': case 'J': case 'K': case 'L':
case 'M': case 'N': case 'O': case 'P':
case 'X':
case '0': case '1': case '2': case '3': case '4':
case '5': case '6': case '7': case '8': case '9':
/* These don't say anything we care about. */
break;
case '&':
amp_p = true;
break;
case '\0':
case ',':
if (amp_p && class != NO_REGS)
{
int rc;
found = true;
for (i = 0;
VEC_iterate (int, earlyclobber_regclass, i, rc);
i++)
{
if (rc == (int) class)
goto found_rc;
}
/* We use VEC_quick_push here because
earlyclobber_regclass holds no more than
N_REG_CLASSES elements. */
VEC_quick_push (int, earlyclobber_regclass, (int) class);
found_rc:
;
}
amp_p = false;
class = NO_REGS;
break;
case 'r':
class = GENERAL_REGS;
break;
default:
class = REG_CLASS_FROM_CONSTRAINT (c, p);
break;
}
if (c == '\0')
break;
p += CONSTRAINT_LEN (c, p);
}
}
return found;
}
/* The function checks that pseudo-register *X has a class
intersecting with the class of pseudo-register could be early
clobbered in the same insn.
This function is a no-op if earlyclobber_regclass is empty.
Reload can assign the same hard register to uninitialized
pseudo-register and early clobbered pseudo-register in an insn if
the pseudo-register is used first time in given BB and not lived at
the BB start. To prevent this we don't change life information for
such pseudo-registers. */
static int
df_urec_mark_reg_use_for_earlyclobber (rtx *x, void *data)
{
enum reg_class pref_class, alt_class;
int i, regno;
struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
if (REG_P (*x) && REGNO (*x) >= FIRST_PSEUDO_REGISTER)
{
int rc;
regno = REGNO (*x);
if (bitmap_bit_p (bb_info->kill, regno)
|| bitmap_bit_p (bb_info->gen, regno))
return 0;
pref_class = reg_preferred_class (regno);
alt_class = reg_alternate_class (regno);
for (i = 0; VEC_iterate (int, earlyclobber_regclass, i, rc); i++)
{
if (reg_classes_intersect_p (rc, pref_class)
|| (rc != NO_REGS
&& reg_classes_intersect_p (rc, alt_class)))
{
bitmap_set_bit (bb_info->earlyclobber, regno);
break;
}
}
}
return 0;
}
/* The function processes all pseudo-registers in *X with the aid of
previous function. */
static void
df_urec_mark_reg_use_for_earlyclobber_1 (rtx *x, void *data)
{
for_each_rtx (x, df_urec_mark_reg_use_for_earlyclobber, data);
}
/* Compute local uninitialized register info for basic block BB. */
static void
df_urec_bb_local_compute (unsigned int bb_index)
{
basic_block bb = BASIC_BLOCK (bb_index);
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
rtx insn;
struct df_ref **def_rec;
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
{
unsigned int regno = DF_REF_REGNO (def);
bitmap_set_bit (bb_info->gen, regno);
}
}
FOR_BB_INSNS (bb, insn)
{
if (INSN_P (insn))
{
note_stores (PATTERN (insn), df_urec_mark_reg_change, bb_info);
if (df_urec_check_earlyclobber (insn))
{
struct df_urec_problem_data *problem_data
= (struct df_urec_problem_data *) df_urec->problem_data;
problem_data->earlyclobbers_found = true;
note_uses (&PATTERN (insn),
df_urec_mark_reg_use_for_earlyclobber_1, bb_info);
}
}
}
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
{
unsigned int regno = DF_REF_REGNO (def);
bitmap_set_bit (bb_info->gen, regno);
}
}
}
/* Compute local uninitialized register info. */
static void
df_urec_local_compute (bitmap all_blocks)
{
unsigned int bb_index;
bitmap_iterator bi;
#ifdef STACK_REGS
int i;
HARD_REG_SET stack_hard_regs, used;
struct df_urec_problem_data *problem_data
= (struct df_urec_problem_data *) df_urec->problem_data;
/* Any register that MAY be allocated to a register stack (like the
387) is treated poorly. Each such register is marked as being
live everywhere. This keeps the register allocator and the
subsequent passes from doing anything useful with these values.
FIXME: This seems like an incredibly poor idea. */
CLEAR_HARD_REG_SET (stack_hard_regs);
for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
SET_HARD_REG_BIT (stack_hard_regs, i);
problem_data->stack_regs = BITMAP_ALLOC (NULL);
for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
{
COPY_HARD_REG_SET (used, reg_class_contents[reg_preferred_class (i)]);
IOR_HARD_REG_SET (used, reg_class_contents[reg_alternate_class (i)]);
AND_HARD_REG_SET (used, stack_hard_regs);
if (!hard_reg_set_empty_p (used))
bitmap_set_bit (problem_data->stack_regs, i);
}
#endif
/* We know that earlyclobber_regclass holds no more than
N_REG_CLASSES elements. See df_urec_check_earlyclobber. */
earlyclobber_regclass = VEC_alloc (int, heap, N_REG_CLASSES);
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
df_urec_bb_local_compute (bb_index);
}
VEC_free (int, heap, earlyclobber_regclass);
}
/* Initialize the solution vectors. */
static void
df_urec_init (bitmap all_blocks)
{
unsigned int bb_index;
bitmap_iterator bi;
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
bitmap_copy (bb_info->out, bb_info->gen);
bitmap_clear (bb_info->in);
}
}
/* Or in the stack regs, hard regs and early clobber regs into the
urec_in sets of all of the blocks. */
static void
df_urec_local_finalize (bitmap all_blocks)
{
bitmap tmp = BITMAP_ALLOC (NULL);
bitmap_iterator bi;
unsigned int bb_index;
struct df_urec_problem_data *problem_data
= (struct df_urec_problem_data *) df_urec->problem_data;
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
if (bb_index != ENTRY_BLOCK && bb_index != EXIT_BLOCK)
{
if (problem_data->earlyclobbers_found)
bitmap_ior_into (bb_info->in, bb_info->earlyclobber);
#ifdef STACK_REGS
/* We can not use the same stack register for uninitialized
pseudo-register and another living pseudo-register
because if the uninitialized pseudo-register dies,
subsequent pass reg-stack will be confused (it will
believe that the other register dies). */
bitmap_ior_into (bb_info->in, problem_data->stack_regs);
bitmap_ior_into (bb_info->out, problem_data->stack_regs);
#endif
}
/* No register may reach a location where it is not used. Thus
we trim the rr result to the places where it is used. */
bitmap_and_into (bb_info->in, bb_lr_info->in);
bitmap_and_into (bb_info->out, bb_lr_info->out);
bitmap_copy (bb_info->top, bb_info->in);
if (bb_lr_info->adef)
bitmap_ior_into (bb_info->top, bb_lr_info->adef);
bitmap_and_into (bb_info->top, bb_lr_info->top);
#if 0
/* Hard registers may still stick in the ur_out set, but not
be in the ur_in set, if their only mention was in a call
in this block. This is because a call kills in the lr
problem but does not kill in the rr problem. To clean
this up, we execute the transfer function on the lr_in
set and then use that to knock bits out of ur_out. */
bitmap_ior_and_compl (tmp, bb_info->gen, bb_lr_info->in,
bb_info->kill);
bitmap_and_into (bb_info->out, tmp);
#endif
}
#ifdef STACK_REGS
BITMAP_FREE (problem_data->stack_regs);
#endif
BITMAP_FREE (tmp);
}
/* Confluence function that ignores fake edges. */
static void
df_urec_confluence_n (edge e)
{
bitmap op1 = df_urec_get_bb_info (e->dest->index)->in;
bitmap op2 = df_urec_get_bb_info (e->src->index)->out;
if (e->flags & EDGE_FAKE)
return;
bitmap_ior_into (op1, op2);
}
/* Transfer function. */
static bool
df_urec_transfer_function (int bb_index)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
bitmap in = bb_info->in;
bitmap out = bb_info->out;
bitmap gen = bb_info->gen;
bitmap kill = bb_info->kill;
return bitmap_ior_and_compl (out, gen, in, kill);
}
/* Free all storage associated with the problem. */
static void
df_urec_free (void)
{
if (df_urec->block_info)
{
unsigned int i;
for (i = 0; i < df_urec->block_info_size; i++)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (i);
if (bb_info)
{
BITMAP_FREE (bb_info->gen);
BITMAP_FREE (bb_info->kill);
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
BITMAP_FREE (bb_info->earlyclobber);
BITMAP_FREE (bb_info->top);
}
}
free_alloc_pool (df_urec->block_pool);
df_urec->block_info_size = 0;
free (df_urec->block_info);
free (df_urec->problem_data);
}
free (df_urec);
}
/* Debugging info at top of bb. */
static void
df_urec_top_dump (basic_block bb, FILE *file)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
if (!bb_info || !bb_info->in)
return;
fprintf (file, ";; urec in \t");
df_print_regset (file, bb_info->in);
fprintf (file, ";; urec gen \t");
df_print_regset (file, bb_info->gen);
fprintf (file, ";; urec kill\t");
df_print_regset (file, bb_info->kill);
fprintf (file, ";; urec ec\t");
df_print_regset (file, bb_info->earlyclobber);
}
/* Debugging info at bottom of bb. */
static void
df_urec_bottom_dump (basic_block bb, FILE *file)
{
struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
if (!bb_info || !bb_info->out)
return;
fprintf (file, ";; urec out \t");
df_print_regset (file, bb_info->out);
}
/* All of the information associated with every instance of the problem. */
static struct df_problem problem_UREC =
{
DF_UREC, /* Problem id. */
DF_FORWARD, /* Direction. */
df_urec_alloc, /* Allocate the problem specific data. */
NULL, /* Reset global information. */
df_urec_free_bb_info, /* Free basic block info. */
df_urec_local_compute, /* Local compute function. */
df_urec_init, /* Init the solution specific data. */
df_worklist_dataflow, /* Worklist solver. */
NULL, /* Confluence operator 0. */
df_urec_confluence_n, /* Confluence operator n. */
df_urec_transfer_function, /* Transfer function. */
df_urec_local_finalize, /* Finalize function. */
df_urec_free, /* Free all of the problem information. */
df_urec_free, /* Remove this problem from the stack of dataflow problems. */
NULL, /* Debugging. */
df_urec_top_dump, /* Debugging start block. */
df_urec_bottom_dump, /* Debugging end block. */
NULL, /* Incremental solution verify start. */
NULL, /* Incremental solution verify end. */
&problem_LR, /* Dependent problem. */
TV_DF_UREC, /* Timing variable. */
false /* Reset blocks on dropping out of blocks_to_analyze. */
};
/* Create a new DATAFLOW instance and add it to an existing instance
of DF. The returned structure is what is used to get at the
solution. */
void
df_urec_add_problem (void)
{
df_add_problem (&problem_UREC);
}
/*----------------------------------------------------------------------------
CREATE DEF_USE (DU) and / or USE_DEF (UD) CHAINS
@ -3707,7 +3020,7 @@ df_simulate_fixup_sets (basic_block bb, bitmap live)
{
/* These regs are considered always live so if they end up dying
because of some def, we need to bring the back again. */
if (df_has_eh_preds (bb))
if (bb_has_eh_pred (bb))
bitmap_ior_into (live, df->eh_block_artificial_uses);
else
bitmap_ior_into (live, df->regular_block_artificial_uses);

View File

@ -2752,23 +2752,37 @@ df_def_record_1 (struct df_collection_rec *collection_rec,
|| GET_CODE (dst) == ZERO_EXTRACT)
{
flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
if (GET_CODE (dst) == ZERO_EXTRACT)
flags |= DF_REF_EXTRACT;
else
flags |= DF_REF_STRICT_LOWER_PART;
loc = &XEXP (dst, 0);
dst = *loc;
}
if (df_read_modify_subreg_p (dst))
flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
/* At this point if we do not have a reg or a subreg, just return. */
if (REG_P (dst))
{
df_ref_record (collection_rec,
dst, loc, bb, insn, DF_REF_REG_DEF, flags);
if (REG_P (dst)
|| (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst))))
df_ref_record (collection_rec,
dst, loc, bb, insn, DF_REF_REG_DEF, flags);
/* We want to keep sp alive everywhere - by making all
writes to sp also use of sp. */
if (REGNO (dst) == STACK_POINTER_REGNUM)
df_ref_record (collection_rec,
dst, NULL, bb, insn, DF_REF_REG_USE, flags);
}
else if (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst)))
{
if (df_read_modify_subreg_p (dst))
flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
/* We want to keep sp alive everywhere - by making all
writes to sp also use of sp. */
if (REG_P (dst) && REGNO (dst) == STACK_POINTER_REGNUM)
df_ref_record (collection_rec,
dst, NULL, bb, insn, DF_REF_REG_USE, flags);
flags |= DF_REF_SUBREG;
df_ref_record (collection_rec,
dst, loc, bb, insn, DF_REF_REG_DEF, flags);
}
}
@ -2880,7 +2894,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
if (df_read_modify_subreg_p (dst))
{
df_uses_record (collection_rec, &SUBREG_REG (dst),
DF_REF_REG_USE, bb, insn, flags | DF_REF_READ_WRITE);
DF_REF_REG_USE, bb, insn,
flags | DF_REF_READ_WRITE | DF_REF_SUBREG);
break;
}
/* Fall through. */
@ -2902,13 +2917,15 @@ df_uses_record (struct df_collection_rec *collection_rec,
dst = XEXP (dst, 0);
df_uses_record (collection_rec,
(GET_CODE (dst) == SUBREG) ? &SUBREG_REG (dst) : temp,
DF_REF_REG_USE, bb, insn, DF_REF_READ_WRITE);
DF_REF_REG_USE, bb, insn,
DF_REF_READ_WRITE | DF_REF_STRICT_LOWER_PART);
}
break;
case ZERO_EXTRACT:
case SIGN_EXTRACT:
df_uses_record (collection_rec, &XEXP (dst, 0),
DF_REF_REG_USE, bb, insn, DF_REF_READ_WRITE);
DF_REF_REG_USE, bb, insn,
DF_REF_READ_WRITE | DF_REF_EXTRACT);
df_uses_record (collection_rec, &XEXP (dst, 1),
DF_REF_REG_USE, bb, insn, flags);
df_uses_record (collection_rec, &XEXP (dst, 2),
@ -3180,23 +3197,6 @@ df_insn_refs_collect (struct df_collection_rec* collection_rec,
df_canonize_collection_rec (collection_rec);
}
/* Return true if any pred of BB is an eh. */
bool
df_has_eh_preds (basic_block bb)
{
edge e;
edge_iterator ei;
FOR_EACH_EDGE (e, ei, bb->preds)
{
if (e->flags & EDGE_EH)
return true;
}
return false;
}
/* Recompute the luids for the insns in BB. */
void
@ -3261,7 +3261,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
}
#ifdef EH_RETURN_DATA_REGNO
if (df_has_eh_preds (bb))
if (bb_has_eh_pred (bb))
{
unsigned int i;
/* Mark the registers that will contain data for the handler. */
@ -3278,7 +3278,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
#ifdef EH_USES
if (df_has_eh_preds (bb))
if (bb_has_eh_pred (bb))
{
unsigned int i;
/* This code is putting in an artificial ref for the use at the
@ -3310,7 +3310,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
{
bitmap_iterator bi;
unsigned int regno;
bitmap au = df_has_eh_preds (bb)
bitmap au = bb_has_eh_pred (bb)
? df->eh_block_artificial_uses
: df->regular_block_artificial_uses;
@ -3481,8 +3481,6 @@ df_mark_reg (rtx reg, void *vset)
}
/* Set the bit for regs that are considered being defined at the entry. */
static void
@ -3780,7 +3778,7 @@ df_exit_block_uses_collect (struct df_collection_rec *collection_rec, bitmap exi
I do not know why. */
if (reload_completed
&& !bitmap_bit_p (exit_block_uses, ARG_POINTER_REGNUM)
&& df_has_eh_preds (EXIT_BLOCK_PTR)
&& bb_has_eh_pred (EXIT_BLOCK_PTR)
&& fixed_regs[ARG_POINTER_REGNUM])
df_ref_record (collection_rec, regno_reg_rtx[ARG_POINTER_REGNUM], NULL,
EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0);

143
gcc/df.h
View File

@ -43,11 +43,9 @@ struct df_link;
#define DF_SCAN 0
#define DF_LR 1 /* Live Registers backward. */
#define DF_LIVE 2 /* Live Registers & Uninitialized Registers */
#define DF_RD 3 /* Reaching Defs. */
#define DF_UREC 4 /* Uninitialized Registers with Early Clobber. */
#define DF_CHAIN 5 /* Def-Use and/or Use-Def Chains. */
#define DF_NOTE 6 /* REG_DEF and REG_UNUSED notes. */
#define DF_CHAIN 4 /* Def-Use and/or Use-Def Chains. */
#define DF_NOTE 5 /* REG_DEF and REG_UNUSED notes. */
#define DF_LAST_PROBLEM_PLUS1 (DF_NOTE + 1)
@ -70,10 +68,9 @@ enum df_ref_type {DF_REF_REG_DEF, DF_REF_REG_USE, DF_REF_REG_MEM_LOAD,
enum df_ref_flags
{
/* Read-modify-write refs generate both a use and a def and
these are marked with this flag to show that they are not
independent. */
DF_REF_READ_WRITE = 1 << 0,
/* This flag is set if this ref occurs inside of a conditional
execution instruction. */
DF_REF_CONDITIONAL = 1 << 0,
/* If this flag is set for an artificial use or def, that ref
logically happens at the top of the block. If it is not set
@ -85,14 +82,26 @@ enum df_ref_flags
note. */
DF_REF_IN_NOTE = 1 << 2,
/* This bit is true if this ref can make regs_ever_live true for
this regno. */
DF_HARD_REG_LIVE = 1 << 3,
/* This flag is set if this ref is a partial use or def of the
associated register. */
DF_REF_PARTIAL = 1 << 4,
/* Read-modify-write refs generate both a use and a def and
these are marked with this flag to show that they are not
independent. */
DF_REF_READ_WRITE = 1 << 5,
/* This flag is set if this ref, generally a def, may clobber the
referenced register. This is generally only set for hard
registers that cross a call site. With better information
about calls, some of these could be changed in the future to
DF_REF_MUST_CLOBBER. */
DF_REF_MAY_CLOBBER = 1 << 3,
DF_REF_MAY_CLOBBER = 1 << 6,
/* This flag is set if this ref, generally a def, is a real
clobber. This is not currently set for registers live across a
@ -103,34 +112,31 @@ enum df_ref_flags
clobber is to a subreg. So in order to tell if the clobber
wipes out the entire register, it is necessary to also check
the DF_REF_PARTIAL flag. */
DF_REF_MUST_CLOBBER = 1 << 4,
/* This bit is true if this ref is part of a multiword hardreg. */
DF_REF_MW_HARDREG = 1 << 5,
/* This flag is set if this ref is a partial use or def of the
associated register. */
DF_REF_PARTIAL = 1 << 6,
/* This flag is set if this ref occurs inside of a conditional
execution instruction. */
DF_REF_CONDITIONAL = 1 << 7,
DF_REF_MUST_CLOBBER = 1 << 7,
/* This flag is set if this ref is inside a pre/post modify. */
DF_REF_PRE_POST_MODIFY = 1 << 8,
/* This flag is set if the ref contains a ZERO_EXTRACT or SIGN_EXTRACT. */
DF_REF_EXTRACT = 1 << 9,
/* This flag is set if the ref contains a STRICT_LOWER_PART. */
DF_REF_STRICT_LOWER_PART = 1 << 10,
/* This flag is set if the ref contains a SUBREG. */
DF_REF_SUBREG = 1 << 11,
/* This bit is true if this ref is part of a multiword hardreg. */
DF_REF_MW_HARDREG = 1 << 12,
/* This flag is set if this ref is a usage of the stack pointer by
a function call. */
DF_REF_CALL_STACK_USAGE = 1 << 9,
DF_REF_CALL_STACK_USAGE = 1 << 13,
/* This flag is used for verification of existing refs. */
DF_REF_REG_MARKER = 1 << 10,
/* This bit is true if this ref can make regs_ever_live true for
this regno. */
DF_HARD_REG_LIVE = 1 << 11
DF_REF_REG_MARKER = 1 << 14
};
/* The possible ordering of refs within the df_ref_info. */
@ -544,7 +550,6 @@ struct df
#define DF_SCAN_BB_INFO(BB) (df_scan_get_bb_info((BB)->index))
#define DF_RD_BB_INFO(BB) (df_rd_get_bb_info((BB)->index))
#define DF_LR_BB_INFO(BB) (df_lr_get_bb_info((BB)->index))
#define DF_UREC_BB_INFO(BB) (df_urec_get_bb_info((BB)->index))
#define DF_LIVE_BB_INFO(BB) (df_live_get_bb_info((BB)->index))
/* Most transformations that wish to use live register analysis will
@ -552,17 +557,10 @@ struct df
#define DF_LIVE_IN(BB) (DF_LIVE_BB_INFO(BB)->in)
#define DF_LIVE_OUT(BB) (DF_LIVE_BB_INFO(BB)->out)
/* Live in for register allocation also takes into account several other factors. */
#define DF_RA_LIVE_IN(BB) (DF_UREC_BB_INFO(BB)->in)
#define DF_RA_LIVE_TOP(BB) (DF_UREC_BB_INFO(BB)->top)
#define DF_RA_LIVE_OUT(BB) (DF_UREC_BB_INFO(BB)->out)
/* These macros are currently used by only reg-stack since it is not
tolerant of uninitialized variables. This intolerance should be
fixed because it causes other problems. */
#define DF_LR_IN(BB) (DF_LR_BB_INFO(BB)->in)
#define DF_LR_TOP(BB) (DF_LR_BB_INFO(BB)->top)
#define DF_LR_OUT(BB) (DF_LR_BB_INFO(BB)->out)
/* Macros to access the elements within the ref structure. */
@ -685,11 +683,21 @@ extern bitmap df_invalidated_by_call;
/* One of these structures is allocated for every basic block. */
struct df_scan_bb_info
{
/* Defs at the start of a basic block that is the target of an
exception edge. */
/* The entry block has many artificial defs and these are at the
bottom of the block.
Blocks that are targets of exception edges may have some
artificial defs. These are logically located at the top of the
block.
Blocks that are the targets of non-local goto's have the hard
frame pointer defined at the top of the block. */
struct df_ref **artificial_defs;
/* Uses of hard registers that are live at every block. */
/* Blocks that are targets of exception edges may have some
artificial uses. These are logically at the top of the block.
Most blocks have artificial uses at the bottom of the block. */
struct df_ref **artificial_uses;
};
@ -710,23 +718,8 @@ struct df_rd_bb_info
};
/* Live registers. All bitmaps are referenced by the register number.
df_lr_bb_info:IN is the "in" set of the traditional dataflow sense
which is the confluence of out sets of all predecessor blocks.
The difference between IN and TOP is
due to the artificial defs and uses at the top (DF_REF_TOP)
(e.g. exception handling dispatch block, which can have
a few registers defined by the runtime) - which is NOT included
in the "in" set before this function but is included after.
For the initial live set of forward scanning, TOP should be used
instead of IN - otherwise, artificial defs won't be in IN set
causing the bad transformation. TOP set can not simply be
the union of IN set and artificial defs at the top,
because artificial defs might not be used at all,
in which case those defs are not live at any point
(except as a dangling def) - hence TOP has to be calculated
during the LR problem computation and stored in df_lr_bb_info. */
/* Live registers, a backwards dataflow problem. All bitmaps are
referenced by the register number. */
struct df_lr_bb_info
{
@ -734,12 +727,9 @@ struct df_lr_bb_info
bitmap def; /* The set of registers set in this block
- except artificial defs at the top. */
bitmap use; /* The set of registers used in this block. */
bitmap adef; /* The artificial defs at top. */
bitmap ause; /* The artificial uses at top. */
/* The results of the dataflow problem. */
bitmap in; /* Just before the block itself. */
bitmap top; /* Just before the first insn in the block. */
bitmap out; /* At the bottom of the block. */
};
@ -761,23 +751,6 @@ struct df_live_bb_info
};
/* Uninitialized registers. All bitmaps are referenced by the register number. */
struct df_urec_bb_info
{
/* Local sets to describe the basic blocks. */
bitmap earlyclobber; /* The set of registers that are referenced
with an early clobber mode. */
/* Kill and gen are defined as in the UR problem. */
bitmap kill;
bitmap gen;
/* The results of the dataflow problem. */
bitmap in; /* Just before the block. */
bitmap top; /* Just before the first insn in the block. */
bitmap out; /* At the bottom of the block. */
};
/* This is used for debugging and for the dumpers to find the latest
instance so that the df info can be added to the dumps. This
should not be used by regular code. */
@ -786,7 +759,6 @@ extern struct df *df;
#define df_rd (df->problems_by_index[DF_RD])
#define df_lr (df->problems_by_index[DF_LR])
#define df_live (df->problems_by_index[DF_LIVE])
#define df_urec (df->problems_by_index[DF_UREC])
#define df_chain (df->problems_by_index[DF_CHAIN])
#define df_note (df->problems_by_index[DF_NOTE])
@ -861,7 +833,6 @@ extern void df_chain_unlink (struct df_ref *);
extern void df_chain_copy (struct df_ref *, struct df_link *);
extern bitmap df_get_live_in (basic_block);
extern bitmap df_get_live_out (basic_block);
extern bitmap df_get_live_top (basic_block);
extern void df_grow_bb_info (struct dataflow *);
extern void df_chain_dump (struct df_link *, FILE *);
extern void df_print_bb_index (basic_block bb, FILE *file);
@ -871,7 +842,6 @@ extern void df_lr_verify_transfer_functions (void);
extern void df_live_verify_transfer_functions (void);
extern void df_live_add_problem (void);
extern void df_live_set_all_dirty (void);
extern void df_urec_add_problem (void);
extern void df_chain_add_problem (enum df_chain_flags);
extern void df_note_add_problem (void);
extern void df_simulate_find_defs (rtx, bitmap);
@ -898,7 +868,6 @@ extern void df_bb_refs_record (int, bool);
extern bool df_insn_rescan (rtx);
extern void df_insn_rescan_all (void);
extern void df_process_deferred_rescans (void);
extern bool df_has_eh_preds (basic_block);
extern void df_recompute_luids (basic_block);
extern void df_insn_change_bb (rtx);
extern void df_maybe_reorganize_use_refs (enum df_ref_order);
@ -956,16 +925,6 @@ df_live_get_bb_info (unsigned int index)
return NULL;
}
static inline struct df_urec_bb_info *
df_urec_get_bb_info (unsigned int index)
{
if (index < df_urec->block_info_size)
return (struct df_urec_bb_info *) df_urec->block_info[index];
else
return NULL;
}
/* Get the artificial defs for a basic block. */
static inline struct df_ref **

File diff suppressed because it is too large Load Diff

View File

@ -380,7 +380,7 @@ hard_reg_set_empty_p (const HARD_REG_SET x)
return x[0] == 0 && x[1] == 0 && x[2] == 0 && x[3] == 0;
}
#else /* FIRST_PSEUDO_REGISTER > 3*HOST_BITS_PER_WIDEST_FAST_INT */
#else /* FIRST_PSEUDO_REGISTER > 4*HOST_BITS_PER_WIDEST_FAST_INT */
#define CLEAR_HARD_REG_SET(TO) \
do { HARD_REG_ELT_TYPE *scan_tp_ = (TO); \

View File

@ -117,7 +117,11 @@ initialize_uninitialized_regs (void)
}
if (optimize == 1)
df_remove_problem (df_live);
{
if (dump_file)
df_dump (dump_file);
df_remove_problem (df_live);
}
BITMAP_FREE (already_genned);
}

View File

@ -1211,13 +1211,9 @@ update_equiv_regs (void)
if (!bitmap_empty_p (cleared_regs))
FOR_EACH_BB (bb)
{
bitmap_and_compl_into (DF_RA_LIVE_IN (bb), cleared_regs);
if (DF_RA_LIVE_TOP (bb))
bitmap_and_compl_into (DF_RA_LIVE_TOP (bb), cleared_regs);
bitmap_and_compl_into (DF_RA_LIVE_OUT (bb), cleared_regs);
bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
if (DF_LR_TOP (bb))
bitmap_and_compl_into (DF_LR_TOP (bb), cleared_regs);
bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
}
@ -1277,6 +1273,7 @@ block_alloc (int b)
int max_uid = get_max_uid ();
int *qty_order;
int no_conflict_combined_regno = -1;
struct df_ref ** def_rec;
/* Count the instructions in the basic block. */
@ -1299,7 +1296,19 @@ block_alloc (int b)
/* Initialize table of hardware registers currently live. */
REG_SET_TO_HARD_REG_SET (regs_live, DF_LR_TOP (BASIC_BLOCK (b)));
REG_SET_TO_HARD_REG_SET (regs_live, DF_LR_IN (BASIC_BLOCK (b)));
/* This is conservative, as this would include registers that are
artificial-def'ed-but-not-used. However, artificial-defs are
rare, and such uninitialized use is rarer still, and the chance
of this having any performance impact is even less, while the
benefit is not having to compute and keep the TOP set around. */
for (def_rec = df_get_artificial_defs (b); *def_rec; def_rec++)
{
int regno = DF_REF_REGNO (*def_rec);
if (regno < FIRST_PSEUDO_REGISTER)
SET_HARD_REG_BIT (regs_live, regno);
}
/* This loop scans the instructions of the basic block
and assigns quantities to registers.
@ -2502,6 +2511,49 @@ dump_local_alloc (FILE *file)
fprintf (file, ";; Register %d in %d.\n", i, reg_renumber[i]);
}
#ifdef STACK_REGS
static void
find_stack_regs (void)
{
bitmap stack_regs = BITMAP_ALLOC (NULL);
int i;
HARD_REG_SET stack_hard_regs, used;
basic_block bb;
/* Any register that MAY be allocated to a register stack (like the
387) is treated poorly. Each such register is marked as being
live everywhere. This keeps the register allocator and the
subsequent passes from doing anything useful with these values.
FIXME: This seems like an incredibly poor idea. */
CLEAR_HARD_REG_SET (stack_hard_regs);
for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
SET_HARD_REG_BIT (stack_hard_regs, i);
for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
{
COPY_HARD_REG_SET (used, reg_class_contents[reg_preferred_class (i)]);
IOR_HARD_REG_SET (used, reg_class_contents[reg_alternate_class (i)]);
AND_HARD_REG_SET (used, stack_hard_regs);
if (!hard_reg_set_empty_p (used))
bitmap_set_bit (stack_regs, i);
}
if (dump_file)
bitmap_print (dump_file, stack_regs, "stack regs:", "\n");
FOR_EACH_BB (bb)
{
bitmap_ior_into (DF_LIVE_IN (bb), stack_regs);
bitmap_and_into (DF_LIVE_IN (bb), DF_LR_IN (bb));
bitmap_ior_into (DF_LIVE_OUT (bb), stack_regs);
bitmap_and_into (DF_LIVE_OUT (bb), DF_LR_OUT (bb));
}
BITMAP_FREE (stack_regs);
}
#endif
/* Run old register allocator. Return TRUE if we must exit
rest_of_compilation upon return. */
static unsigned int
@ -2512,26 +2564,22 @@ rest_of_handle_local_alloc (void)
df_note_add_problem ();
if (optimize > 1)
df_remove_problem (df_live);
/* Create a new version of df that has the special version of UR if
we are doing optimization. */
if (optimize)
df_urec_add_problem ();
if (optimize == 1)
{
df_live_add_problem ();
df_live_set_all_dirty ();
}
#ifdef ENABLE_CHECKING
df->changeable_flags |= DF_VERIFY_SCHEDULED;
#endif
df_analyze ();
#ifdef STACK_REGS
if (optimize)
find_stack_regs ();
#endif
regstat_init_n_sets_and_refs ();
regstat_compute_ri ();
/* There is just too much going on in the register allocators to
keep things up to date. At the end we have to rescan anyway
because things change when the reload_completed flag is set.
So we just turn off scanning and we will rescan by hand. */
df_set_flags (DF_NO_INSN_RESCAN);
/* If we are not optimizing, then this is the only place before
register allocation where dataflow is done. And that is needed
to generate these warnings. */

1120
gcc/ra-conflict.c Normal file

File diff suppressed because it is too large Load Diff

133
gcc/ra.h Normal file
View File

@ -0,0 +1,133 @@
/* Define per-register tables for data flow info and register allocation.
Copyright (C) 2007 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.
GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#ifndef GCC_RA_H
#define GCC_RA_H
#include "regs.h"
struct allocno
{
int reg;
/* Gives the number of consecutive hard registers needed by that
pseudo reg. */
int size;
/* Number of calls crossed by each allocno. */
int calls_crossed;
/* Number of calls that might throw crossed by each allocno. */
int throwing_calls_crossed;
/* Number of refs to each allocno. */
int n_refs;
/* Frequency of uses of each allocno. */
int freq;
/* Guess at live length of each allocno.
This is actually the max of the live lengths of the regs. */
int live_length;
/* Set of hard regs conflicting with allocno N. */
HARD_REG_SET hard_reg_conflicts;
/* Set of hard regs preferred by allocno N.
This is used to make allocnos go into regs that are copied to or from them,
when possible, to reduce register shuffling. */
HARD_REG_SET hard_reg_preferences;
/* Similar, but just counts register preferences made in simple copy
operations, rather than arithmetic. These are given priority because
we can always eliminate an insn by using these, but using a register
in the above list won't always eliminate an insn. */
HARD_REG_SET hard_reg_copy_preferences;
/* Similar to hard_reg_preferences, but includes bits for subsequent
registers when an allocno is multi-word. The above variable is used for
allocation while this is used to build reg_someone_prefers, below. */
HARD_REG_SET hard_reg_full_preferences;
/* Set of hard registers that some later allocno has a preference for. */
HARD_REG_SET regs_someone_prefers;
#ifdef STACK_REGS
/* Set to true if allocno can't be allocated in the stack register. */
bool no_stack_reg;
#endif
};
extern struct allocno *allocno;
/* In ra-conflict.c */
/* Number of pseudo-registers which are candidates for allocation. */
extern int max_allocno;
/* max_allocno by max_allocno array of bits, recording whether two
allocno's conflict (can't go in the same hardware register).
`conflicts' is symmetric after the call to mirror_conflicts. */
extern HOST_WIDE_INT *conflicts;
/* Number of ints required to hold max_allocno bits.
This is the length of a row in `conflicts'. */
extern int allocno_row_words;
/* Indexed by (pseudo) reg number, gives the allocno, or -1
for pseudo registers which are not to be allocated. */
extern int *reg_allocno;
extern void global_conflicts (void);
/* In global.c */
/* For any allocno set in ALLOCNO_SET, set ALLOCNO to that allocno,
and execute CODE. */
#define EXECUTE_IF_SET_IN_ALLOCNO_SET(ALLOCNO_SET, ALLOCNO, CODE) \
do { \
int i_; \
int allocno_; \
HOST_WIDE_INT *p_ = (ALLOCNO_SET); \
\
for (i_ = allocno_row_words - 1, allocno_ = 0; i_ >= 0; \
i_--, allocno_ += HOST_BITS_PER_WIDE_INT) \
{ \
unsigned HOST_WIDE_INT word_ = (unsigned HOST_WIDE_INT) *p_++; \
\
for ((ALLOCNO) = allocno_; word_; word_ >>= 1, (ALLOCNO)++) \
{ \
if (word_ & 1) \
{CODE;} \
} \
} \
} while (0)
extern void ra_init_live_subregs (bool, sbitmap *, int *, int, rtx reg);
#endif /* GCC_RA_H */

View File

@ -1521,7 +1521,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
/* Check that we don't use a hardreg for an uninitialized
pseudo. See also find_dummy_reload(). */
&& (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
|| ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
|| ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
ORIGINAL_REGNO (XEXP (note, 0))))
&& ! refers_to_regno_for_reload_p (regno,
end_hard_regno (rel_mode,
@ -2000,7 +2000,7 @@ find_dummy_reload (rtx real_in, rtx real_out, rtx *inloc, rtx *outloc,
as they would clobber the other live pseudo using the same.
See also PR20973. */
&& (ORIGINAL_REGNO (in) < FIRST_PSEUDO_REGISTER
|| ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
|| ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
ORIGINAL_REGNO (in))))
{
unsigned int regno = REGNO (in) + in_offset;

View File

@ -548,7 +548,7 @@ compute_use_by_pseudos (HARD_REG_SET *to, regset from)
if (r < 0)
{
/* reload_combine uses the information from
DF_RA_LIVE_IN (BASIC_BLOCK), which might still
DF_LIVE_IN (BASIC_BLOCK), which might still
contain registers that have not actually been allocated
since they have an equivalence. */
gcc_assert (reload_completed);
@ -1158,10 +1158,7 @@ reload (rtx first, int global)
if (! frame_pointer_needed)
FOR_EACH_BB (bb)
{
bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
bitmap_clear_bit (df_get_live_top (bb), HARD_FRAME_POINTER_REGNUM);
}
bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
/* Come here (with failure set nonzero) if we can't get enough spill
regs. */

View File

@ -1063,6 +1063,7 @@ extern bool subreg_offset_representable_p (unsigned int, enum machine_mode,
unsigned int, enum machine_mode);
extern unsigned int subreg_regno (const_rtx);
extern unsigned int subreg_nregs (const_rtx);
extern unsigned int subreg_nregs_with_regno (unsigned int, const_rtx);
extern unsigned HOST_WIDE_INT nonzero_bits (const_rtx, enum machine_mode);
extern unsigned int num_sign_bit_copies (const_rtx, enum machine_mode);
extern bool constant_pool_constant_p (rtx);
@ -2172,7 +2173,6 @@ extern void dump_global_regs (FILE *);
/* Yes, this ifdef is silly, but HARD_REG_SET is not always defined. */
extern void retry_global_alloc (int, HARD_REG_SET);
#endif
extern void build_insn_chain (rtx);
/* In regclass.c */
extern int reg_classes_intersect_p (enum reg_class, enum reg_class);

View File

@ -3262,16 +3262,26 @@ subreg_regno (const_rtx x)
to. */
unsigned int
subreg_nregs (const_rtx x)
{
return subreg_nregs_with_regno (REGNO (SUBREG_REG (x)), x);
}
/* Return the number of registers that a subreg REG with REGNO
expression refers to. This is a copy of the rtlanal.c:subreg_nregs
changed so that the regno can be passed in. */
unsigned int
subreg_nregs_with_regno (unsigned int regno, const_rtx x)
{
struct subreg_info info;
rtx subreg = SUBREG_REG (x);
int regno = REGNO (subreg);
subreg_get_info (regno, GET_MODE (subreg), SUBREG_BYTE (x), GET_MODE (x),
&info);
return info.nregs;
}
struct parms_set_data
{
int nregs;

View File

@ -46,25 +46,25 @@ template<typename T>
void test01()
{
do_test<bool>();
do_test<char>();
do_test<signed char>();
do_test<unsigned char>();
do_test<short>();
do_test<int>();
do_test<long>();
do_test<unsigned short>();
do_test<unsigned int>();
do_test<unsigned long>();
do_test<int*>();
do_test<std::string>();
do_test<float>();
do_test<double>();
// do_test<bool>();
// do_test<char>();
// do_test<signed char>();
// do_test<unsigned char>();
// do_test<short>();
// do_test<int>();
// do_test<long>();
// do_test<unsigned short>();
// do_test<unsigned int>();
// do_test<unsigned long>();
// do_test<int*>();
// do_test<std::string>();
// do_test<float>();
// do_test<double>();
do_test<long double>();
#ifdef _GLIBCXX_USE_WCHAR_T
do_test<wchar_t>();
do_test<std::wstring>();
// do_test<wchar_t>();
// do_test<std::wstring>();
#endif
}