dbgcnt.def (ra_byte_scan): Added.

2008-04-24  Richard Sandiford  <rsandifo@nildram.co.uk>
	    Kenneth Zadeck <zadeck@naturalbridge.com>

	* dbgcnt.def (ra_byte_scan): Added.
	* dbgcnt.c (dbg_cnt): Added code to print message to dump_file
	when the last hit happens for a counter.  
	* timevar.def (TV_DF_BYTE_LR): New variable.
	* tree-pass.h (pass_fast_rtl_byte_dce): New pass.
	* passes.c (pass_fast_rtl_byte_dce): New pass.
	* fwprop.c (update_df): Added mode to call df_ref_create.
	Renamed DF_REF_WIDTH and DF_REF_OFFSET to DF_REF_EXTRACT_WIDTH and
	DF_REF_EXTRACT_OFFSET.
	* df.h (DF_BYTE_LR, DF_BYTE_LR_BB_INFO, DF_BYTE_LR_IN, 
	DF_BYTE_LR_OUT, df_byte_lr): New macros.
	(df_mm): New enum.
	(df_ref_extract): Added mode field.
	(DF_REF_WIDTH, DF_REF_OFFSET): Renamed to DF_REF_EXTRACT_WIDTH and
	DF_REF_EXTRACT_OFFSET.
	(DF_REF_EXTRACT_MODE): New macro.
	(df_byte_lr_bb_info): New structure.
	(df_print_byte_regset, df_compute_accessed_bytes,
	df_byte_lr_add_problem, df_byte_lr_get_regno_start,
	df_byte_lr_get_regno_len, df_byte_lr_simulate_defs,
	df_byte_lr_simulate_uses,
	df_byte_lr_simulate_artificial_refs_at_top,
	df_byte_lr_simulate_artificial_refs_at_end): New functions.
	(df_ref_create): Add parameter.
	(df_byte_lr_get_bb_info): New inline function.
	* df-scan.c (df_ref_record, df_uses_record,
	df_ref_create_structure): Added mode parameter.
	(df_ref_create, df_notes_rescan, df_ref_record, df_def_record_1, 
	df_defs_record, df_uses_record, df_get_conditional_uses,
	df_get_call_refs, df_insn_refs_collect, df_bb_refs_collect, 
	df_entry_block_defs_collect, df_exit_block_uses_collect):
	Added mode parameter to calls to df_ref_record, df_uses_record,
	df_ref_create_structure.
	(df_ref_equal_p, df_ref_compare): Added test for modes.
	(df_ref_create_structure): Added code to set mode.  Renamed
	DF_REF_WIDTH and DF_REF_OFFSET to DF_REF_EXTRACT_WIDTH and
	DF_REF_EXTRACT_OFFSET.
	* df-core.c (df_print_byte_regset): New function.
	* df-byte-scan.c: New file.
	* df-problems.c (df_rd_transfer_function): Removed unnecessary
	calls to BITMAP_FREE.  
	(df_byte_lr_problem_data, df_problem problem_BYTE_LR): New structure.
	(df_byte_lr_get_regno_start, df_byte_lr_get_regno_len,
	df_byte_lr_set_bb_info, df_byte_lr_free_bb_info, 
	df_byte_lr_check_regs, df_byte_lr_expand_bitmap, 
	df_byte_lr_alloc, df_byte_lr_reset, df_byte_lr_bb_local_compute,
	df_byte_lr_local_compute, df_byte_lr_init,
	df_byte_lr_confluence_0, df_byte_lr_confluence_n, 
	df_byte_lr_transfer_function, df_byte_lr_free, 
	df_byte_lr_top_dump, df_byte_lr_bottom_dump,
	df_byte_lr_add_problem, df_byte_lr_simulate_defs, 
	df_byte_lr_simulate_uses,
	df_byte_lr_simulate_artificial_refs_at_top,
	df_byte_lr_simulate_artificial_refs_at_end): New functions.
	* dce.c (byte_dce_process_block): New function.
	(dce_process_block): au is now passed in rather than computed
	locally.  Changed loops that look at artificial defs to not look
	for conditional or partial ones, because there never are any.  
	(fast_dce): Now is able to drive byte_dce_process_block or 
	dce_process_block depending on the kind of dce being done.
	(rest_of_handle_fast_dce): Add parameter to fast_dce.
	(rest_of_handle_fast_byte_dce): New function.
	(rtl_opt_pass pass_fast_rtl_byte_dce): New pass.
	* Makefile.in (df-byte-scan.o, dbgcnt.o): Added dependencies.




Co-Authored-By: Kenneth Zadeck <zadeck@naturalbridge.com>

From-SVN: r134523
Richard Sandiford 2008-04-21 18:55:13 +00:00 committed by Kenneth Zadeck
parent f7546fa716
commit cc806ac109
13 changed files with 1320 additions and 168 deletions



@@ -797,7 +797,7 @@ IPA_UTILS_H = ipa-utils.h $(TREE_H) $(CGRAPH_H)
IPA_REFERENCE_H = ipa-reference.h bitmap.h $(TREE_H)
IPA_TYPE_ESCAPE_H = ipa-type-escape.h $(TREE_H)
CGRAPH_H = cgraph.h $(TREE_H)
DF_H = df.h bitmap.h $(BASIC_BLOCK_H) alloc-pool.h
RESOURCE_H = resource.h hard-reg-set.h $(DF_H)
DDG_H = ddg.h sbitmap.h $(DF_H)
GCC_H = gcc.h version.h
@@ -1026,6 +1026,7 @@ OBJS-common = \
dce.o \
ddg.o \
debug.o \
df-byte-scan.o \
df-core.o \
df-problems.o \
df-scan.o \
@@ -2617,6 +2618,8 @@ df-scan.o : df-scan.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
insn-config.h $(RECOG_H) $(FUNCTION_H) $(REGS_H) alloc-pool.h \
hard-reg-set.h $(BASIC_BLOCK_H) $(DF_H) bitmap.h sbitmap.h $(TM_P_H) \
$(FLAGS_H) $(TARGET_H) $(TARGET_DEF_H) $(TREE_H) output.h tree-pass.h
df-byte-scan.o : df-byte-scan.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(DF_H) output.h $(DBGCNT_H)
regstat.o : regstat.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(TM_P_H) $(FLAGS_H) $(REGS_H) output.h except.h hard-reg-set.h \
$(BASIC_BLOCK_H) $(TIMEVAR_H) $(DF_H)
@@ -2729,7 +2732,7 @@ global.o : global.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
ra-conflict.o : ra-conflict.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
$(TIMEVAR_H) vecprim.h $(DF_H) $(RA_H) sbitmap.h
varray.o : varray.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(GGC_H) \
$(HASHTAB_H) $(BCONFIG_H) $(VARRAY_H) toplev.h
vec.o : vec.c $(CONFIG_H) $(SYSTEM_H) coretypes.h vec.h $(GGC_H) \
@@ -2882,7 +2885,8 @@ hooks.o: hooks.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(HOOKS_H)
pretty-print.o: $(CONFIG_H) $(SYSTEM_H) coretypes.h intl.h $(PRETTY_PRINT_H) \
$(TREE_H)
errors.o : errors.c $(CONFIG_H) $(SYSTEM_H) errors.h $(BCONFIG_H)
dbgcnt.o: dbgcnt.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(DBGCNT_H)
dbgcnt.o: dbgcnt.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(DBGCNT_H) $(TM_H) \
$(RTL_H) output.h
lower-subreg.o : lower-subreg.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
$(MACHMODE_H) $(TM_H) $(RTL_H) $(TM_P_H) $(TIMEVAR_H) $(FLAGS_H) \
insn-config.h $(BASIC_BLOCK_H) $(RECOG_H) $(OBSTACK_H) bitmap.h \


@@ -23,6 +23,9 @@ See dbgcnt.def for usage information. */
#include "system.h"
#include "coretypes.h"
#include "errors.h"
#include "tm.h"
#include "rtl.h"
#include "output.h"
#include "dbgcnt.h"
@@ -58,6 +61,10 @@ bool
dbg_cnt (enum debug_counter index)
{
count[index]++;
if (dump_file && count[index] == limit[index])
fprintf (dump_file, "***dbgcnt: limit reached for %s.***\n",
map[index].name);
return dbg_cnt_is_enabled (index);
}
@@ -132,7 +139,8 @@ dbg_cnt_process_opt (const char *arg)
/* Print name, limit and count of all counters. */
void dbg_cnt_list_all_counters (void)
void
dbg_cnt_list_all_counters (void)
{
int i;
printf (" %-30s %-5s %-5s\n", "counter name", "limit", "value");


@@ -61,6 +61,83 @@ along with GCC; see the file COPYING3. If not see
Use -fdbg-cnt=counter1:N,counter2:M,...
which sets the limit for counter1 to N, and the limit for counter2 to M, etc.
e.g. setting a limit to zero will make dbg_cnt () return false *always*.
The following shell file can then be used to binary search for
the exact transformation that causes the bug. A second shell script
should be written, say "tryTest", which exits with 1 if the
compiled program fails and exits with 0 if the program succeeds.
This shell script should take 1 parameter, the value to be passed
to set the counter of the compilation command in tryTest. Then,
assuming that the following script is called binarySearch,
the command:
binarySearch tryTest
will automatically find the highest value of the counter for which
the program fails. If tryTest never fails, binarySearch will
produce unpredictable results as it will try to find an upper bound
that does not exist.
When dbgcnt hits the limit, it writes a comment in the current
dump_file of the form:
***dbgcnt: limit reached for %s.***
Assuming that the dump file is logging the analysis/transformations
it is making, this pinpoints the exact position in the log file
where the problem transformation is being logged.
=====================================
#!/bin/bash
while getopts "l:u:i:" opt
do
case $opt in
l) lb="$OPTARG";;
u) ub="$OPTARG";;
i) init="$OPTARG";;
?) usage; exit 3;;
esac
done
shift $(($OPTIND - 1))
echo $@
cmd=${1+"${@}"}
lb=${lb:=0}
init=${init:=100}
$cmd $lb
lb_val=$?
if [ -z "$ub" ]; then
# find the upper bound
ub=$(($init + $lb))
true
while [ $? -eq $lb_val ]; do
ub=$(($ub * 10))
#ub=`expr $ub \* 10`
$cmd $ub
done
fi
echo command: $cmd
true
while [ `expr $ub - $lb` -gt 1 ]; do
try=$(($lb + ( $ub - $lb ) / 2))
$cmd $try
if [ $? -eq $lb_val ]; then
lb=$try
else
ub=$try
fi
done
echo lbound: $lb
echo ubound: $ub
=====================================
*/
/* Debug counter definitions. */
@@ -73,6 +150,7 @@ DEBUG_COUNTER (dce)
DEBUG_COUNTER (dce_fast)
DEBUG_COUNTER (dce_ud)
DEBUG_COUNTER (delete_trivial_dead)
DEBUG_COUNTER (df_byte_scan)
DEBUG_COUNTER (dse)
DEBUG_COUNTER (dse1)
DEBUG_COUNTER (dse2)

gcc/dce.c

@@ -1,5 +1,5 @@
/* RTL dead code elimination.
Copyright (C) 2005, 2006, 2007 Free Software Foundation, Inc.
Copyright (C) 2005, 2006, 2007, 2008 Free Software Foundation, Inc.
This file is part of GCC.
@@ -593,17 +593,122 @@ struct rtl_opt_pass pass_ud_rtl_dce =
Fast DCE functions
------------------------------------------------------------------------- */
/* Process basic block BB. Return true if the live_in set has changed. */
/* Process basic block BB. Return true if the live_in set has
changed. REDO_OUT is true if the info at the bottom of the block
needs to be recalculated before starting. AU is the proper set of
artificial uses. */
static bool
dce_process_block (basic_block bb, bool redo_out)
byte_dce_process_block (basic_block bb, bool redo_out, bitmap au)
{
bitmap local_live = BITMAP_ALLOC (&dce_tmp_bitmap_obstack);
bitmap au;
rtx insn;
bool block_changed;
struct df_ref **def_rec, **use_rec;
unsigned int bb_index = bb->index;
struct df_ref **def_rec;
if (redo_out)
{
/* Need to redo the live_out set of this block when one of
the succs of this block has had a change in its live_in
set. */
edge e;
edge_iterator ei;
df_confluence_function_n con_fun_n = df_byte_lr->problem->con_fun_n;
bitmap_clear (DF_BYTE_LR_OUT (bb));
FOR_EACH_EDGE (e, ei, bb->succs)
(*con_fun_n) (e);
}
if (dump_file)
{
fprintf (dump_file, "processing block %d live out = ", bb->index);
df_print_byte_regset (dump_file, DF_BYTE_LR_OUT (bb));
}
bitmap_copy (local_live, DF_BYTE_LR_OUT (bb));
df_byte_lr_simulate_artificial_refs_at_end (bb, local_live);
FOR_BB_INSNS_REVERSE (bb, insn)
if (INSN_P (insn))
{
/* The insn is needed if there is someone who uses the output. */
for (def_rec = DF_INSN_DEFS (insn); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
unsigned int last;
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = df_byte_lr_get_regno_start (dregno);
unsigned int len = df_byte_lr_get_regno_len (dregno);
unsigned int sb;
unsigned int lb;
/* This is one of the only places where DF_MM_MAY should
be used for defs. Need to make sure that we are
checking for all of the bits that may be used. */
if (!df_compute_accessed_bytes (def, DF_MM_MAY, &sb, &lb))
{
start += sb;
len = lb - sb;
}
if (bitmap_bit_p (au, dregno))
{
mark_insn (insn, true);
goto quickexit;
}
last = start + len;
while (start < last)
if (bitmap_bit_p (local_live, start++))
{
mark_insn (insn, true);
goto quickexit;
}
}
quickexit:
/* No matter if the instruction is needed or not, we remove
any regno in the defs from the live set. */
df_byte_lr_simulate_defs (insn, local_live);
/* On the other hand, we do not allow the dead uses to set
anything in local_live. */
if (marked_insn_p (insn))
df_byte_lr_simulate_uses (insn, local_live);
if (dump_file)
{
fprintf (dump_file, "finished processing insn %d live out = ",
INSN_UID (insn));
df_print_byte_regset (dump_file, local_live);
}
}
df_byte_lr_simulate_artificial_refs_at_top (bb, local_live);
block_changed = !bitmap_equal_p (local_live, DF_BYTE_LR_IN (bb));
if (block_changed)
bitmap_copy (DF_BYTE_LR_IN (bb), local_live);
BITMAP_FREE (local_live);
return block_changed;
}
/* Process basic block BB. Return true if the live_in set has
changed. REDO_OUT is true if the info at the bottom of the block
needs to be recalculated before starting. AU is the proper set of
artificial uses. */
static bool
dce_process_block (basic_block bb, bool redo_out, bitmap au)
{
bitmap local_live = BITMAP_ALLOC (&dce_tmp_bitmap_obstack);
rtx insn;
bool block_changed;
struct df_ref **def_rec;
if (redo_out)
{
@@ -626,30 +731,7 @@ dce_process_block (basic_block bb, bool redo_out)
bitmap_copy (local_live, DF_LR_OUT (bb));
/* Process the artificial defs and uses at the bottom of the block. */
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if (((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
&& (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
bitmap_clear_bit (local_live, DF_REF_REGNO (def));
}
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
if ((DF_REF_FLAGS (use) & DF_REF_AT_TOP) == 0)
bitmap_set_bit (local_live, DF_REF_REGNO (use));
}
/* These regs are considered always live so if they end up dying
because of some def, we need to bring them back again.
Calling df_simulate_fixup_sets has the disadvantage of calling
bb_has_eh_pred once per insn, so we cache the information here. */
if (bb_has_eh_pred (bb))
au = df->eh_block_artificial_uses;
else
au = df->regular_block_artificial_uses;
df_simulate_artificial_refs_at_end (bb, local_live);
FOR_BB_INSNS_REVERSE (bb, insn)
if (INSN_P (insn))
@@ -678,24 +760,7 @@ dce_process_block (basic_block bb, bool redo_out)
df_simulate_uses (insn, local_live);
}
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP)
&& (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
bitmap_clear_bit (local_live, DF_REF_REGNO (def));
}
#ifdef EH_USES
/* Process the uses that are live into an exception handler. */
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
/* Add use to set of uses in this BB. */
struct df_ref *use = *use_rec;
if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
bitmap_set_bit (local_live, DF_REF_REGNO (use));
}
#endif
df_simulate_artificial_refs_at_top (bb, local_live);
block_changed = !bitmap_equal_p (local_live, DF_LR_IN (bb));
if (block_changed)
@@ -706,10 +771,12 @@ dce_process_block (basic_block bb, bool redo_out)
}
/* Perform fast DCE once initialization is done. */
/* Perform fast DCE once initialization is done. If BYTE_LEVEL is
true, use the byte level dce, otherwise do it at the pseudo
level. */
static void
fast_dce (void)
fast_dce (bool byte_level)
{
int *postorder = df_get_postorder (DF_BACKWARD);
int n_blocks = df_get_n_blocks (DF_BACKWARD);
@@ -720,6 +787,14 @@ fast_dce (void)
bitmap redo_out = BITMAP_ALLOC (&dce_blocks_bitmap_obstack);
bitmap all_blocks = BITMAP_ALLOC (&dce_blocks_bitmap_obstack);
bool global_changed = true;
/* These regs are considered always live so if they end up dying
because of some def, we need to bring them back again. Calling
df_simulate_fixup_sets has the disadvantage of calling
bb_has_eh_pred once per insn, so we cache the information
here. */
bitmap au = df->regular_block_artificial_uses;
bitmap au_eh = df->eh_block_artificial_uses;
int i;
prescan_insns_for_dce (true);
@@ -743,8 +818,14 @@ fast_dce (void)
continue;
}
local_changed
= dce_process_block (bb, bitmap_bit_p (redo_out, index));
if (byte_level)
local_changed
= byte_dce_process_block (bb, bitmap_bit_p (redo_out, index),
bb_has_eh_pred (bb) ? au_eh : au);
else
local_changed
= dce_process_block (bb, bitmap_bit_p (redo_out, index),
bb_has_eh_pred (bb) ? au_eh : au);
bitmap_set_bit (processed, index);
if (local_changed)
@@ -780,7 +861,10 @@ fast_dce (void)
to redo the dataflow equations for the blocks that had a
change at the top of the block. Then we need to redo the
iteration. */
df_analyze_problem (df_lr, all_blocks, postorder, n_blocks);
if (byte_level)
df_analyze_problem (df_byte_lr, all_blocks, postorder, n_blocks);
else
df_analyze_problem (df_lr, all_blocks, postorder, n_blocks);
if (old_flag & DF_LR_RUN_DCE)
df_set_flags (DF_LR_RUN_DCE);
@@ -797,13 +881,26 @@ fast_dce (void)
}
/* Fast DCE. */
/* Fast register level DCE. */
static unsigned int
rest_of_handle_fast_dce (void)
{
init_dce (true);
fast_dce ();
fast_dce (false);
fini_dce (true);
return 0;
}
/* Fast byte level DCE. */
static unsigned int
rest_of_handle_fast_byte_dce (void)
{
df_byte_lr_add_problem ();
init_dce (true);
fast_dce (true);
fini_dce (true);
return 0;
}
@@ -875,3 +972,24 @@ struct rtl_opt_pass pass_fast_rtl_dce =
TODO_ggc_collect /* todo_flags_finish */
}
};
struct rtl_opt_pass pass_fast_rtl_byte_dce =
{
{
RTL_PASS,
"byte-dce", /* name */
gate_fast_dce, /* gate */
rest_of_handle_fast_byte_dce, /* execute */
NULL, /* sub */
NULL, /* next */
0, /* static_pass_number */
TV_DCE, /* tv_id */
0, /* properties_required */
0, /* properties_provided */
0, /* properties_destroyed */
0, /* todo_flags_start */
TODO_dump_func |
TODO_df_finish | TODO_verify_rtl_sharing |
TODO_ggc_collect /* todo_flags_finish */
}
};


@@ -1875,6 +1875,69 @@ df_print_regset (FILE *file, bitmap r)
}
/* Write information about registers and basic blocks into FILE. The
bitmap is in the form used by df_byte_lr. This is part of making a
debugging dump. */
void
df_print_byte_regset (FILE *file, bitmap r)
{
unsigned int max_reg = max_reg_num ();
bitmap_iterator bi;
if (r == NULL)
fputs (" (nil)", file);
else
{
unsigned int i;
for (i = 0; i < max_reg; i++)
{
unsigned int first = df_byte_lr_get_regno_start (i);
unsigned int len = df_byte_lr_get_regno_len (i);
if (len > 1)
{
bool found = false;
unsigned int j;
EXECUTE_IF_SET_IN_BITMAP (r, first, j, bi)
{
found = j < first + len;
break;
}
if (found)
{
const char * sep = "";
fprintf (file, " %d", i);
if (i < FIRST_PSEUDO_REGISTER)
fprintf (file, " [%s]", reg_names[i]);
fprintf (file, "(");
EXECUTE_IF_SET_IN_BITMAP (r, first, j, bi)
{
if (j > first + len - 1)
break;
fprintf (file, "%s%d", sep, j-first);
sep = ", ";
}
fprintf (file, ")");
}
}
else
{
if (bitmap_bit_p (r, first))
{
fprintf (file, " %d", i);
if (i < FIRST_PSEUDO_REGISTER)
fprintf (file, " [%s]", reg_names[i]);
}
}
}
}
fprintf (file, "\n");
}
/* Dump dataflow info. */
void


@@ -1,5 +1,6 @@
/* Standard problems for dataflow support routines.
Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008
Free Software Foundation, Inc.
Originally contributed by Michael P. Hayes
(m.hayes@elec.canterbury.ac.nz, mhayes@redhat.com)
@@ -566,28 +567,12 @@ df_rd_transfer_function (int bb_index)
static void
df_rd_free (void)
{
unsigned int i;
struct df_rd_problem_data *problem_data
= (struct df_rd_problem_data *) df_rd->problem_data;
if (problem_data)
{
for (i = 0; i < df_rd->block_info_size; i++)
{
struct df_rd_bb_info *bb_info = df_rd_get_bb_info (i);
if (bb_info)
{
BITMAP_FREE (bb_info->kill);
BITMAP_FREE (bb_info->sparse_kill);
BITMAP_FREE (bb_info->gen);
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
}
}
free_alloc_pool (df_rd->block_pool);
BITMAP_FREE (problem_data->sparse_invalidated_by_call);
BITMAP_FREE (problem_data->dense_invalidated_by_call);
bitmap_obstack_release (&problem_data->rd_bitmaps);
df_rd->block_info_size = 0;
@@ -706,7 +691,7 @@ df_rd_add_problem (void)
Find the locations in the function where any use of a pseudo can
reach in the backwards direction. In and out bitvectors are built
for each basic block. The regnum is used to index into these sets.
for each basic block. The regno is used to index into these sets.
See df.h for details.
----------------------------------------------------------------------------*/
@@ -1878,7 +1863,7 @@ struct df_link *
df_chain_create (struct df_ref *src, struct df_ref *dst)
{
struct df_link *head = DF_REF_CHAIN (src);
struct df_link *link = pool_alloc (df_chain->block_pool);;
struct df_link *link = pool_alloc (df_chain->block_pool);
DF_REF_CHAIN (src) = link;
link->next = head;
@@ -2344,7 +2329,733 @@ df_chain_add_problem (enum df_chain_flags chain_flags)
/*----------------------------------------------------------------------------
This pass computes REG_DEAD and REG_UNUSED notes.
BYTE LEVEL LIVE REGISTERS
Find the locations in the function where any use of a pseudo can
reach in the backwards direction. In and out bitvectors are built
for each basic block. There are two mapping functions,
df_byte_lr_get_regno_start and df_byte_lr_get_regno_len that are
used to map regnos into bit vector postions.
This problem differs from the regular df_lr function in the way
that subregs, *_extracts and strict_low_parts are handled. In lr
these are considered partial kills; here, the exact set of bytes is
modeled. Note that any reg that has none of these operations is
only modeled with a single bit since all operations access the
entire register.
This problem is more brittle than the regular lr. It currently can
be used in dce incrementally, but cannot be used in an environment
where insns are created or modified. The problem is that the
mapping of regnos to bitmap positions is relatively compact, in
that if a pseudo does not do any of the byte wise operations, only
one slot is allocated, rather than a slot for each byte. If insns
are created where a subreg is used for a reg that had no subregs,
the mapping would be wrong. Likewise, there are no checks to see
that new pseudos have been added. These issues could be addressed
by adding a problem specific flag to not use the compact mapping,
if there was a need to do so.
----------------------------------------------------------------------------*/
/* Private data used to verify the solution for this problem. */
struct df_byte_lr_problem_data
{
/* Expanded versions of bitvectors used in lr. */
bitmap invalidated_by_call;
bitmap hardware_regs_used;
/* Indexed by regno, this is true if there are subregs, extracts or
strict_low_parts for this regno. */
bitmap needs_expansion;
/* The start position and len for each regno in the various bit
vectors. */
unsigned int* regno_start;
unsigned int* regno_len;
/* An obstack for the bitmaps we need for this problem. */
bitmap_obstack byte_lr_bitmaps;
};
/* Get the starting location for REGNO in the df_byte_lr bitmaps. */
int
df_byte_lr_get_regno_start (unsigned int regno)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
return problem_data->regno_start[regno];
}
/* Get the len for REGNO in the df_byte_lr bitmaps. */
int
df_byte_lr_get_regno_len (unsigned int regno)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
return problem_data->regno_len[regno];
}
/* Set basic block info. */
static void
df_byte_lr_set_bb_info (unsigned int index,
struct df_byte_lr_bb_info *bb_info)
{
gcc_assert (df_byte_lr);
gcc_assert (index < df_byte_lr->block_info_size);
df_byte_lr->block_info[index] = bb_info;
}
/* Free basic block info. */
static void
df_byte_lr_free_bb_info (basic_block bb ATTRIBUTE_UNUSED,
void *vbb_info)
{
struct df_byte_lr_bb_info *bb_info = (struct df_byte_lr_bb_info *) vbb_info;
if (bb_info)
{
BITMAP_FREE (bb_info->use);
BITMAP_FREE (bb_info->def);
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
pool_free (df_byte_lr->block_pool, bb_info);
}
}
/* Check all of the refs in REF_REC to see if any of them are
extracts, subregs or strict_low_parts. */
static void
df_byte_lr_check_regs (struct df_ref **ref_rec)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
for (; *ref_rec; ref_rec++)
{
struct df_ref *ref = *ref_rec;
if (DF_REF_FLAGS_IS_SET (ref, DF_REF_SIGN_EXTRACT
| DF_REF_ZERO_EXTRACT
| DF_REF_STRICT_LOW_PART)
|| GET_CODE (DF_REF_REG (ref)) == SUBREG)
bitmap_set_bit (problem_data->needs_expansion, DF_REF_REGNO (ref));
}
}
/* Expand bitmap SRC which is indexed by regno to DEST which is indexed by
regno_start and regno_len. */
static void
df_byte_lr_expand_bitmap (bitmap dest, bitmap src)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
bitmap_iterator bi;
unsigned int i;
bitmap_clear (dest);
EXECUTE_IF_SET_IN_BITMAP (src, 0, i, bi)
{
bitmap_set_range (dest, problem_data->regno_start[i],
problem_data->regno_len[i]);
}
}
/* Allocate or reset bitmaps for DF_BYTE_LR blocks. The solution bits are
not touched unless the block is new. */
static void
df_byte_lr_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
{
unsigned int bb_index;
bitmap_iterator bi;
basic_block bb;
unsigned int regno;
unsigned int index = 0;
unsigned int max_reg = max_reg_num ();
struct df_byte_lr_problem_data *problem_data
= XNEW (struct df_byte_lr_problem_data);
df_byte_lr->problem_data = problem_data;
if (!df_byte_lr->block_pool)
df_byte_lr->block_pool = create_alloc_pool ("df_byte_lr_block pool",
sizeof (struct df_byte_lr_bb_info), 50);
df_grow_bb_info (df_byte_lr);
/* Create the mapping from regnos to slots. This does not change
unless the problem is destroyed and recreated. In particular, if
we end up deleting the only insn that used a subreg, we do not
want to redo the mapping because this would invalidate everything
else. */
bitmap_obstack_initialize (&problem_data->byte_lr_bitmaps);
problem_data->regno_start = XNEWVEC (unsigned int, max_reg);
problem_data->regno_len = XNEWVEC (unsigned int, max_reg);
problem_data->hardware_regs_used = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
problem_data->invalidated_by_call = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
problem_data->needs_expansion = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
/* Discover which regnos use subregs, extracts or
strict_low_parts. */
FOR_EACH_BB (bb)
{
rtx insn;
FOR_BB_INSNS (bb, insn)
{
if (INSN_P (insn))
{
df_byte_lr_check_regs (DF_INSN_DEFS (insn));
df_byte_lr_check_regs (DF_INSN_USES (insn));
}
}
bitmap_set_bit (df_byte_lr->out_of_date_transfer_functions, bb->index);
}
bitmap_set_bit (df_byte_lr->out_of_date_transfer_functions, ENTRY_BLOCK);
bitmap_set_bit (df_byte_lr->out_of_date_transfer_functions, EXIT_BLOCK);
/* Allocate the slots for each regno. */
for (regno = 0; regno < max_reg; regno++)
{
int len;
problem_data->regno_start[regno] = index;
if (bitmap_bit_p (problem_data->needs_expansion, regno))
len = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
else
len = 1;
problem_data->regno_len[regno] = len;
index += len;
}
df_byte_lr_expand_bitmap (problem_data->hardware_regs_used,
df->hardware_regs_used);
df_byte_lr_expand_bitmap (problem_data->invalidated_by_call,
df_invalidated_by_call);
EXECUTE_IF_SET_IN_BITMAP (df_byte_lr->out_of_date_transfer_functions, 0, bb_index, bi)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb_index);
if (bb_info)
{
bitmap_clear (bb_info->def);
bitmap_clear (bb_info->use);
}
else
{
bb_info = (struct df_byte_lr_bb_info *) pool_alloc (df_byte_lr->block_pool);
df_byte_lr_set_bb_info (bb_index, bb_info);
bb_info->use = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
bb_info->def = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
bb_info->in = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
bb_info->out = BITMAP_ALLOC (&problem_data->byte_lr_bitmaps);
}
}
df_byte_lr->optional_p = true;
}
/* Reset the global solution for recalculation. */
static void
df_byte_lr_reset (bitmap all_blocks)
{
unsigned int bb_index;
bitmap_iterator bi;
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb_index);
gcc_assert (bb_info);
bitmap_clear (bb_info->in);
bitmap_clear (bb_info->out);
}
}
/* Compute local live register info for basic block BB. */
static void
df_byte_lr_bb_local_compute (unsigned int bb_index)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
basic_block bb = BASIC_BLOCK (bb_index);
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb_index);
rtx insn;
struct df_ref **def_rec;
struct df_ref **use_rec;
/* Process the registers set in an exception handler. */
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
bitmap_set_range (bb_info->def, start, len);
bitmap_clear_range (bb_info->use, start, len);
}
}
/* Process the hardware registers that are always live. */
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
/* Add use to set of uses in this BB. */
if ((DF_REF_FLAGS (use) & DF_REF_AT_TOP) == 0)
{
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
bitmap_set_range (bb_info->use, start, len);
}
}
FOR_BB_INSNS_REVERSE (bb, insn)
{
unsigned int uid = INSN_UID (insn);
if (!INSN_P (insn))
continue;
for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
/* If the def is to only part of the reg, it does
not kill the other defs that reach here. */
if (!(DF_REF_FLAGS (def) & (DF_REF_CONDITIONAL)))
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
unsigned int sb;
unsigned int lb;
if (!df_compute_accessed_bytes (def, DF_MM_MUST, &sb, &lb))
{
start += sb;
len = lb - sb;
}
if (len)
{
bitmap_set_range (bb_info->def, start, len);
bitmap_clear_range (bb_info->use, start, len);
}
}
}
for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
unsigned int sb;
unsigned int lb;
if (!df_compute_accessed_bytes (use, DF_MM_MAY, &sb, &lb))
{
start += sb;
len = lb - sb;
}
/* Add use to set of uses in this BB. */
if (len)
bitmap_set_range (bb_info->use, start, len);
}
}
/* Process the registers set in an exception handler or the hard
frame pointer if this block is the target of a non local
goto. */
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
bitmap_set_range (bb_info->def, start, len);
bitmap_clear_range (bb_info->use, start, len);
}
}
#ifdef EH_USES
/* Process the uses that are live into an exception handler. */
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
/* Add use to set of uses in this BB. */
if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
{
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
bitmap_set_range (bb_info->use, start, len);
}
}
#endif
}
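The local-compute loop above relies on each register R owning a contiguous byte range [regno_start[R], regno_start[R] + regno_len[R]) in the problem's bitmaps, so a partial def or use can set or kill only the bytes it touches. A hedged sketch of that range arithmetic, with a 64-bit word standing in for the sparse bitmaps (set_range and clear_range are invented stand-ins for bitmap_set_range and bitmap_clear_range):

```c
#include <stdint.h>

/* Illustrative stand-ins for bitmap_set_range/bitmap_clear_range on a
   toy 64-bit "bitmap", one bit per tracked byte.  LEN must be < 64
   here; the real bitmaps have no such limit.  */
uint64_t
set_range (uint64_t map, unsigned start, unsigned len)
{
  return map | (((UINT64_C (1) << len) - 1) << start);
}

uint64_t
clear_range (uint64_t map, unsigned start, unsigned len)
{
  return map & ~(((UINT64_C (1) << len) - 1) << start);
}
```

For instance, if reg 0 owns bytes 0-3 and df_compute_accessed_bytes narrows a def to bytes 2-3, clear_range removes just those two use bits and leaves bytes 0-1 upward exposed.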
/* Compute local live register info for each basic block within BLOCKS. */
static void
df_byte_lr_local_compute (bitmap all_blocks ATTRIBUTE_UNUSED)
{
unsigned int bb_index;
bitmap_iterator bi;
EXECUTE_IF_SET_IN_BITMAP (df_byte_lr->out_of_date_transfer_functions, 0, bb_index, bi)
{
if (bb_index == EXIT_BLOCK)
{
/* The exit block is special for this problem and its bits are
computed from thin air. */
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (EXIT_BLOCK);
df_byte_lr_expand_bitmap (bb_info->use, df->exit_block_uses);
}
else
df_byte_lr_bb_local_compute (bb_index);
}
bitmap_clear (df_byte_lr->out_of_date_transfer_functions);
}
/* Initialize the solution vectors. */
static void
df_byte_lr_init (bitmap all_blocks)
{
unsigned int bb_index;
bitmap_iterator bi;
EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb_index);
bitmap_copy (bb_info->in, bb_info->use);
bitmap_clear (bb_info->out);
}
}
/* Confluence function that processes infinite loops. This might be a
noreturn function that throws. And even if it isn't, getting the
unwind info right helps debugging. */
static void
df_byte_lr_confluence_0 (basic_block bb)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
bitmap op1 = df_byte_lr_get_bb_info (bb->index)->out;
if (bb != EXIT_BLOCK_PTR)
bitmap_copy (op1, problem_data->hardware_regs_used);
}
/* Confluence function that ignores fake edges. */
static void
df_byte_lr_confluence_n (edge e)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
bitmap op1 = df_byte_lr_get_bb_info (e->src->index)->out;
bitmap op2 = df_byte_lr_get_bb_info (e->dest->index)->in;
/* Call-clobbered registers die across exception and call edges. */
/* ??? Abnormal call edges ignored for the moment, as this gets
confused by sibling call edges, which crashes reg-stack. */
if (e->flags & EDGE_EH)
bitmap_ior_and_compl_into (op1, op2, problem_data->invalidated_by_call);
else
bitmap_ior_into (op1, op2);
bitmap_ior_into (op1, problem_data->hardware_regs_used);
}
/* Transfer function. */
static bool
df_byte_lr_transfer_function (int bb_index)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb_index);
bitmap in = bb_info->in;
bitmap out = bb_info->out;
bitmap use = bb_info->use;
bitmap def = bb_info->def;
return bitmap_ior_and_compl (in, use, out, def);
}
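The transfer function above is the classic backward liveness equation, applied per byte rather than per register: IN = USE | (OUT & ~DEF). A minimal sketch with a machine word standing in for the byte bitmaps (the helper name is invented for illustration):

```c
/* Illustrative only: one bit per tracked byte, a plain word standing
   in for the sparse bitmaps of df_byte_lr_transfer_function.
   Computes IN = USE | (OUT & ~DEF).  */
unsigned
byte_lr_transfer (unsigned use, unsigned def, unsigned out)
{
  return use | (out & ~def);
}
```

With bytes 0-1 used, bytes 2-3 must-defined, and bytes 1-3 live out, only bytes 0-1 are live in: the use keeps byte 1 alive while the def kills bytes 2-3.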
/* Free all storage associated with the problem. */
static void
df_byte_lr_free (void)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
if (df_byte_lr->block_info)
{
free_alloc_pool (df_byte_lr->block_pool);
df_byte_lr->block_info_size = 0;
free (df_byte_lr->block_info);
}
BITMAP_FREE (df_byte_lr->out_of_date_transfer_functions);
bitmap_obstack_release (&problem_data->byte_lr_bitmaps);
free (problem_data->regno_start);
free (problem_data->regno_len);
free (problem_data);
free (df_byte_lr);
}
/* Debugging info at top of bb. */
static void
df_byte_lr_top_dump (basic_block bb, FILE *file)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb->index);
if (!bb_info || !bb_info->in)
return;
fprintf (file, ";; blr in \t");
df_print_byte_regset (file, bb_info->in);
fprintf (file, ";; blr use \t");
df_print_byte_regset (file, bb_info->use);
fprintf (file, ";; blr def \t");
df_print_byte_regset (file, bb_info->def);
}
/* Debugging info at bottom of bb. */
static void
df_byte_lr_bottom_dump (basic_block bb, FILE *file)
{
struct df_byte_lr_bb_info *bb_info = df_byte_lr_get_bb_info (bb->index);
if (!bb_info || !bb_info->out)
return;
fprintf (file, ";; blr out \t");
df_print_byte_regset (file, bb_info->out);
}
/* All of the information associated with every instance of the problem. */
static struct df_problem problem_BYTE_LR =
{
DF_BYTE_LR, /* Problem id. */
DF_BACKWARD, /* Direction. */
df_byte_lr_alloc, /* Allocate the problem specific data. */
df_byte_lr_reset, /* Reset global information. */
df_byte_lr_free_bb_info, /* Free basic block info. */
df_byte_lr_local_compute, /* Local compute function. */
df_byte_lr_init, /* Init the solution specific data. */
df_worklist_dataflow, /* Worklist solver. */
df_byte_lr_confluence_0, /* Confluence operator 0. */
df_byte_lr_confluence_n, /* Confluence operator n. */
df_byte_lr_transfer_function, /* Transfer function. */
NULL, /* Finalize function. */
df_byte_lr_free, /* Free all of the problem information. */
df_byte_lr_free, /* Remove this problem from the stack of dataflow problems. */
NULL, /* Debugging. */
df_byte_lr_top_dump, /* Debugging start block. */
df_byte_lr_bottom_dump, /* Debugging end block. */
NULL, /* Incremental solution verify start. */
NULL, /* Incremental solution verify end. */
NULL, /* Dependent problem. */
TV_DF_BYTE_LR, /* Timing variable. */
false /* Reset blocks on dropping out of blocks_to_analyze. */
};
/* Create a new DATAFLOW instance and add it to an existing instance
of DF. The returned structure is what is used to get at the
solution. */
void
df_byte_lr_add_problem (void)
{
df_add_problem (&problem_BYTE_LR);
/* These will be initialized when df_scan_blocks processes each
block. */
df_byte_lr->out_of_date_transfer_functions = BITMAP_ALLOC (NULL);
}
/* Simulate the effects of the defs of INSN on LIVE. */
void
df_byte_lr_simulate_defs (rtx insn, bitmap live)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
struct df_ref **def_rec;
unsigned int uid = INSN_UID (insn);
for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
/* If the def is to only part of the reg, it does
not kill the other defs that reach here. */
if (!(DF_REF_FLAGS (def) & DF_REF_CONDITIONAL))
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
unsigned int sb;
unsigned int lb;
if (!df_compute_accessed_bytes (def, DF_MM_MUST, &sb, &lb))
{
start += sb;
len = lb - sb;
}
if (len)
bitmap_clear_range (live, start, len);
}
}
}
/* Simulate the effects of the uses of INSN on LIVE. */
void
df_byte_lr_simulate_uses (rtx insn, bitmap live)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
struct df_ref **use_rec;
unsigned int uid = INSN_UID (insn);
for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
unsigned int sb;
unsigned int lb;
if (!df_compute_accessed_bytes (use, DF_MM_MAY, &sb, &lb))
{
start += sb;
len = lb - sb;
}
/* Add the use's bytes to LIVE. */
if (len)
bitmap_set_range (live, start, len);
}
}
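Callers walking insns backwards (as a byte-level DCE would) apply df_byte_lr_simulate_defs and then df_byte_lr_simulate_uses for each insn. A sketch of one such backward step over a toy 64-bit live set; the flat start/len parameters are an illustration, not the df API:

```c
#include <stdint.h>

/* One backward simulation step: the insn's must-defined bytes are
   killed first, then its used bytes are made live, so a byte that is
   both read and written stays live above the insn.  Ranges are
   assumed shorter than 64 bits in this toy model.  */
uint64_t
simulate_backwards (uint64_t live,
                    unsigned def_start, unsigned def_len,
                    unsigned use_start, unsigned use_len)
{
  uint64_t def_mask = ((UINT64_C (1) << def_len) - 1) << def_start;
  uint64_t use_mask = ((UINT64_C (1) << use_len) - 1) << use_start;
  live &= ~def_mask;   /* defs kill */
  live |= use_mask;    /* uses generate */
  return live;
}
```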
/* Apply the artificial uses and defs at the top of BB in a forwards
direction. */
void
df_byte_lr_simulate_artificial_refs_at_top (basic_block bb, bitmap live)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
struct df_ref **def_rec;
#ifdef EH_USES
struct df_ref **use_rec;
#endif
int bb_index = bb->index;
#ifdef EH_USES
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
{
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
bitmap_set_range (live, start, len);
}
}
#endif
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
bitmap_clear_range (live, start, len);
}
}
}
/* Apply the artificial uses and defs at the end of BB in a backwards
direction. */
void
df_byte_lr_simulate_artificial_refs_at_end (basic_block bb, bitmap live)
{
struct df_byte_lr_problem_data *problem_data
= (struct df_byte_lr_problem_data *)df_byte_lr->problem_data;
struct df_ref **def_rec;
struct df_ref **use_rec;
int bb_index = bb->index;
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
{
unsigned int dregno = DF_REF_REGNO (def);
unsigned int start = problem_data->regno_start[dregno];
unsigned int len = problem_data->regno_len[dregno];
bitmap_clear_range (live, start, len);
}
}
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
if ((DF_REF_FLAGS (use) & DF_REF_AT_TOP) == 0)
{
unsigned int uregno = DF_REF_REGNO (use);
unsigned int start = problem_data->regno_start[uregno];
unsigned int len = problem_data->regno_len[uregno];
bitmap_set_range (live, start, len);
}
}
}
/*----------------------------------------------------------------------------
This problem computes REG_DEAD and REG_UNUSED notes.
----------------------------------------------------------------------------*/
static void
@ -3053,15 +3764,19 @@ void
df_simulate_artificial_refs_at_top (basic_block bb, bitmap live)
{
struct df_ref **def_rec;
#ifdef EH_USES
struct df_ref **use_rec;
#endif
int bb_index = bb->index;
#ifdef EH_USES
for (use_rec = df_get_artificial_uses (bb_index); *use_rec; use_rec++)
{
struct df_ref *use = *use_rec;
if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
bitmap_set_bit (live, DF_REF_REGNO (use));
}
#endif
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{


@ -95,7 +95,7 @@ static struct df_mw_hardreg * df_null_mw_rec[1];
static void df_ref_record (struct df_collection_rec *,
rtx, rtx *,
basic_block, rtx, enum df_ref_type,
enum df_ref_flags, int, int);
enum df_ref_flags, int, int, enum machine_mode);
static void df_def_record_1 (struct df_collection_rec *,
rtx, basic_block, rtx,
enum df_ref_flags);
@ -104,11 +104,13 @@ static void df_defs_record (struct df_collection_rec *,
enum df_ref_flags);
static void df_uses_record (struct df_collection_rec *,
rtx *, enum df_ref_type,
basic_block, rtx, enum df_ref_flags, int, int);
basic_block, rtx, enum df_ref_flags,
int, int, enum machine_mode);
static struct df_ref *df_ref_create_structure (struct df_collection_rec *, rtx, rtx *,
basic_block, rtx, enum df_ref_type,
enum df_ref_flags, int, int);
enum df_ref_flags,
int, int, enum machine_mode);
static void df_insn_refs_collect (struct df_collection_rec*,
basic_block, rtx);
@ -616,16 +618,16 @@ df_scan_blocks (void)
LOC within INSN of BB. This function is only used externally.
If the REF_FLAGS field contain DF_REF_SIGN_EXTRACT or
DF_REF_ZERO_EXTRACT. WIDTH and OFFSET are used to access the fields
if they were constants. Otherwise they should be -1 if those flags
were set. */
DF_REF_ZERO_EXTRACT, WIDTH, OFFSET and MODE are used to access the
fields if they were constants.  Otherwise they should be -1 if
those flags were set. */
struct df_ref *
df_ref_create (rtx reg, rtx *loc, rtx insn,
basic_block bb,
enum df_ref_type ref_type,
enum df_ref_flags ref_flags,
int width, int offset)
int width, int offset, enum machine_mode mode)
{
struct df_ref *ref;
struct df_reg_info **reg_info;
@ -640,7 +642,8 @@ df_ref_create (rtx reg, rtx *loc, rtx insn,
/* You cannot hack artificial refs. */
gcc_assert (insn);
ref = df_ref_create_structure (NULL, reg, loc, bb, insn,
ref_type, ref_flags, width, offset);
ref_type, ref_flags,
width, offset, mode);
if (DF_REF_TYPE (ref) == DF_REF_REG_DEF)
{
@ -2066,7 +2069,7 @@ df_notes_rescan (rtx insn)
case REG_EQUAL:
df_uses_record (&collection_rec,
&XEXP (note, 0), DF_REF_REG_USE,
bb, insn, DF_REF_IN_NOTE, -1, -1);
bb, insn, DF_REF_IN_NOTE, -1, -1, 0);
default:
break;
}
@ -2142,8 +2145,9 @@ df_ref_equal_p (struct df_ref *ref1, struct df_ref *ref2)
compared in the next set of tests. */
if ((DF_REF_FLAGS_IS_SET (ref1, DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
&& (DF_REF_FLAGS_IS_SET (ref2, DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
&& ((DF_REF_OFFSET (ref1) != DF_REF_OFFSET (ref2))
|| (DF_REF_WIDTH (ref1) != DF_REF_WIDTH (ref2))))
&& ((DF_REF_EXTRACT_OFFSET (ref1) != DF_REF_EXTRACT_OFFSET (ref2))
|| (DF_REF_EXTRACT_WIDTH (ref1) != DF_REF_EXTRACT_WIDTH (ref2))
|| (DF_REF_EXTRACT_MODE (ref1) != DF_REF_EXTRACT_MODE (ref2))))
return false;
return (ref1 == ref2) ||
@ -2199,10 +2203,12 @@ df_ref_compare (const void *r1, const void *r2)
at ref1. */
if (DF_REF_FLAGS_IS_SET (ref1, DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
{
if (DF_REF_OFFSET (ref1) != DF_REF_OFFSET (ref2))
return DF_REF_OFFSET (ref1) - DF_REF_OFFSET (ref2);
if (DF_REF_WIDTH (ref1) != DF_REF_WIDTH (ref2))
return DF_REF_WIDTH (ref1) - DF_REF_WIDTH (ref2);
if (DF_REF_EXTRACT_OFFSET (ref1) != DF_REF_EXTRACT_OFFSET (ref2))
return DF_REF_EXTRACT_OFFSET (ref1) - DF_REF_EXTRACT_OFFSET (ref2);
if (DF_REF_EXTRACT_WIDTH (ref1) != DF_REF_EXTRACT_WIDTH (ref2))
return DF_REF_EXTRACT_WIDTH (ref1) - DF_REF_EXTRACT_WIDTH (ref2);
if (DF_REF_EXTRACT_MODE (ref1) != DF_REF_EXTRACT_MODE (ref2))
return DF_REF_EXTRACT_MODE (ref1) - DF_REF_EXTRACT_MODE (ref2);
}
return 0;
}
@ -2583,7 +2589,7 @@ df_refs_add_to_chains (struct df_collection_rec *collection_rec,
/* Allocate a ref and initialize its fields.
If the REF_FLAGS field contain DF_REF_SIGN_EXTRACT or
DF_REF_ZERO_EXTRACT. WIDTH and OFFSET are used to access the fields
DF_REF_ZERO_EXTRACT, WIDTH, OFFSET and MODE are used to access the fields
if they were constants. Otherwise they should be -1 if those flags
were set. */
@ -2593,7 +2599,7 @@ df_ref_create_structure (struct df_collection_rec *collection_rec,
basic_block bb, rtx insn,
enum df_ref_type ref_type,
enum df_ref_flags ref_flags,
int width, int offset)
int width, int offset, enum machine_mode mode)
{
struct df_ref *this_ref;
int regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
@ -2603,8 +2609,9 @@ df_ref_create_structure (struct df_collection_rec *collection_rec,
if (ref_flags & (DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
{
this_ref = pool_alloc (problem_data->ref_extract_pool);
DF_REF_WIDTH (this_ref) = width;
DF_REF_OFFSET (this_ref) = offset;
DF_REF_EXTRACT_WIDTH (this_ref) = width;
DF_REF_EXTRACT_OFFSET (this_ref) = offset;
DF_REF_EXTRACT_MODE (this_ref) = mode;
}
else
this_ref = pool_alloc (problem_data->ref_pool);
@ -2659,9 +2666,9 @@ df_ref_create_structure (struct df_collection_rec *collection_rec,
at address LOC within INSN of BB.
If the REF_FLAGS field contain DF_REF_SIGN_EXTRACT or
DF_REF_ZERO_EXTRACT. WIDTH and OFFSET are used to access the fields
if they were constants. Otherwise they should be -1 if those flags
were set. */
DF_REF_ZERO_EXTRACT, WIDTH, OFFSET and MODE are used to access the
fields if they were constants.  Otherwise they should be -1 if
those flags were set. */
static void
@ -2670,7 +2677,7 @@ df_ref_record (struct df_collection_rec *collection_rec,
basic_block bb, rtx insn,
enum df_ref_type ref_type,
enum df_ref_flags ref_flags,
int width, int offset)
int width, int offset, enum machine_mode mode)
{
unsigned int regno;
@ -2719,7 +2726,8 @@ df_ref_record (struct df_collection_rec *collection_rec,
for (i = regno; i < endregno; i++)
{
ref = df_ref_create_structure (collection_rec, regno_reg_rtx[i], loc,
bb, insn, ref_type, ref_flags, width, offset);
bb, insn, ref_type, ref_flags,
width, offset, mode);
gcc_assert (ORIGINAL_REGNO (DF_REF_REG (ref)) == i);
}
@ -2728,7 +2736,7 @@ df_ref_record (struct df_collection_rec *collection_rec,
{
struct df_ref *ref;
ref = df_ref_create_structure (collection_rec, reg, loc, bb, insn,
ref_type, ref_flags, width, offset);
ref_type, ref_flags, width, offset, mode);
}
}
@ -2764,6 +2772,7 @@ df_def_record_1 (struct df_collection_rec *collection_rec,
rtx dst;
int offset = -1;
int width = -1;
enum machine_mode mode = 0;
/* We may recursively call ourselves on EXPR_LIST when dealing with PARALLEL
construct. */
@ -2808,6 +2817,7 @@ df_def_record_1 (struct df_collection_rec *collection_rec,
{
width = INTVAL (XEXP (dst, 1));
offset = INTVAL (XEXP (dst, 2));
mode = GET_MODE (dst);
}
loc = &XEXP (dst, 0);
@ -2818,13 +2828,15 @@ df_def_record_1 (struct df_collection_rec *collection_rec,
if (REG_P (dst))
{
df_ref_record (collection_rec,
dst, loc, bb, insn, DF_REF_REG_DEF, flags, width, offset);
dst, loc, bb, insn, DF_REF_REG_DEF, flags,
width, offset, mode);
/* We want to keep sp alive everywhere - by making all
writes to sp also use of sp. */
if (REGNO (dst) == STACK_POINTER_REGNUM)
df_ref_record (collection_rec,
dst, NULL, bb, insn, DF_REF_REG_USE, flags, width, offset);
dst, NULL, bb, insn, DF_REF_REG_USE, flags,
width, offset, mode);
}
else if (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst)))
{
@ -2834,7 +2846,8 @@ df_def_record_1 (struct df_collection_rec *collection_rec,
flags |= DF_REF_SUBREG;
df_ref_record (collection_rec,
dst, loc, bb, insn, DF_REF_REG_DEF, flags, width, offset);
dst, loc, bb, insn, DF_REF_REG_DEF, flags,
width, offset, mode);
}
}
@ -2873,15 +2886,15 @@ df_defs_record (struct df_collection_rec *collection_rec,
/* Process all the registers used in the rtx at address LOC.
If the REF_FLAGS field contain DF_REF_SIGN_EXTRACT or
DF_REF_ZERO_EXTRACT. WIDTH and LOWER are used to access the fields
if they were constants. Otherwise they should be -1 if those flags
were set. */
DF_REF_ZERO_EXTRACT, WIDTH, OFFSET and MODE are used to access the
fields if they were constants.  Otherwise they should be -1 if
those flags were set. */
static void
df_uses_record (struct df_collection_rec *collection_rec,
rtx *loc, enum df_ref_type ref_type,
basic_block bb, rtx insn, enum df_ref_flags flags,
int width, int offset)
int width, int offset, enum machine_mode mode)
{
RTX_CODE code;
rtx x;
@ -2912,7 +2925,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
if (MEM_P (XEXP (x, 0)))
df_uses_record (collection_rec,
&XEXP (XEXP (x, 0), 0),
DF_REF_REG_MEM_STORE, bb, insn, flags, width, offset);
DF_REF_REG_MEM_STORE, bb, insn, flags,
width, offset, mode);
/* If we're clobbering a REG then we have a def so ignore. */
return;
@ -2920,7 +2934,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
case MEM:
df_uses_record (collection_rec,
&XEXP (x, 0), DF_REF_REG_MEM_LOAD,
bb, insn, flags & DF_REF_IN_NOTE, width, offset);
bb, insn, flags & DF_REF_IN_NOTE,
width, offset, mode);
return;
case SUBREG:
@ -2930,14 +2945,16 @@ df_uses_record (struct df_collection_rec *collection_rec,
if (!REG_P (SUBREG_REG (x)))
{
loc = &SUBREG_REG (x);
df_uses_record (collection_rec, loc, ref_type, bb, insn, flags, width, offset);
df_uses_record (collection_rec, loc, ref_type, bb, insn, flags,
width, offset, mode);
return;
}
/* ... Fall through ... */
case REG:
df_ref_record (collection_rec,
x, loc, bb, insn, ref_type, flags, width, offset);
x, loc, bb, insn, ref_type, flags,
width, offset, mode);
return;
case SIGN_EXTRACT:
@ -2951,6 +2968,7 @@ df_uses_record (struct df_collection_rec *collection_rec,
{
width = INTVAL (XEXP (x, 1));
offset = INTVAL (XEXP (x, 2));
mode = GET_MODE (x);
if (code == ZERO_EXTRACT)
flags |= DF_REF_ZERO_EXTRACT;
@ -2958,7 +2976,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
flags |= DF_REF_SIGN_EXTRACT;
df_uses_record (collection_rec,
&XEXP (x, 0), ref_type, bb, insn, flags, width, offset);
&XEXP (x, 0), ref_type, bb, insn, flags,
width, offset, mode);
return;
}
}
@ -2969,7 +2988,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
rtx dst = SET_DEST (x);
gcc_assert (!(flags & DF_REF_IN_NOTE));
df_uses_record (collection_rec,
&SET_SRC (x), DF_REF_REG_USE, bb, insn, flags, width, offset);
&SET_SRC (x), DF_REF_REG_USE, bb, insn, flags,
width, offset, mode);
switch (GET_CODE (dst))
{
@ -2978,7 +2998,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
{
df_uses_record (collection_rec, &SUBREG_REG (dst),
DF_REF_REG_USE, bb, insn,
flags | DF_REF_READ_WRITE | DF_REF_SUBREG, width, offset);
flags | DF_REF_READ_WRITE | DF_REF_SUBREG,
width, offset, mode);
break;
}
/* Fall through. */
@ -2990,7 +3011,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
break;
case MEM:
df_uses_record (collection_rec, &XEXP (dst, 0),
DF_REF_REG_MEM_STORE, bb, insn, flags, width, offset);
DF_REF_REG_MEM_STORE, bb, insn, flags,
width, offset, mode);
break;
case STRICT_LOW_PART:
{
@ -3001,7 +3023,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
df_uses_record (collection_rec,
(GET_CODE (dst) == SUBREG) ? &SUBREG_REG (dst) : temp,
DF_REF_REG_USE, bb, insn,
DF_REF_READ_WRITE | DF_REF_STRICT_LOW_PART, width, offset);
DF_REF_READ_WRITE | DF_REF_STRICT_LOW_PART,
width, offset, mode);
}
break;
case ZERO_EXTRACT:
@ -3011,18 +3034,22 @@ df_uses_record (struct df_collection_rec *collection_rec,
{
width = INTVAL (XEXP (dst, 1));
offset = INTVAL (XEXP (dst, 2));
mode = GET_MODE (dst);
}
else
{
df_uses_record (collection_rec, &XEXP (dst, 1),
DF_REF_REG_USE, bb, insn, flags, width, offset);
DF_REF_REG_USE, bb, insn, flags,
width, offset, mode);
df_uses_record (collection_rec, &XEXP (dst, 2),
DF_REF_REG_USE, bb, insn, flags, width, offset);
DF_REF_REG_USE, bb, insn, flags,
width, offset, mode);
}
df_uses_record (collection_rec, &XEXP (dst, 0),
DF_REF_REG_USE, bb, insn,
DF_REF_READ_WRITE | DF_REF_ZERO_EXTRACT, width, offset);
DF_REF_READ_WRITE | DF_REF_ZERO_EXTRACT,
width, offset, mode);
}
break;
@ -3072,7 +3099,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
df_uses_record (collection_rec, &ASM_OPERANDS_INPUT (x, j),
DF_REF_REG_USE, bb, insn, flags, width, offset);
DF_REF_REG_USE, bb, insn, flags,
width, offset, mode);
return;
}
break;
@ -3087,7 +3115,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
/* Catch the def of the register being modified. */
df_ref_record (collection_rec, XEXP (x, 0), &XEXP (x, 0), bb, insn,
DF_REF_REG_DEF,
flags | DF_REF_READ_WRITE | DF_REF_PRE_POST_MODIFY, width, offset);
flags | DF_REF_READ_WRITE | DF_REF_PRE_POST_MODIFY,
width, offset, mode);
/* ... Fall through to handle uses ... */
@ -3111,7 +3140,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
goto retry;
}
df_uses_record (collection_rec, &XEXP (x, i), ref_type,
bb, insn, flags, width, offset);
bb, insn, flags,
width, offset, mode);
}
else if (fmt[i] == 'E')
{
@ -3119,7 +3149,8 @@ df_uses_record (struct df_collection_rec *collection_rec,
for (j = 0; j < XVECLEN (x, i); j++)
df_uses_record (collection_rec,
&XVECEXP (x, i, j), ref_type,
bb, insn, flags, width, offset);
bb, insn, flags,
width, offset, mode);
}
}
}
@ -3141,19 +3172,21 @@ df_get_conditional_uses (struct df_collection_rec *collection_rec)
{
int width = -1;
int offset = -1;
enum machine_mode mode = 0;
struct df_ref *use;
if (DF_REF_FLAGS_IS_SET (ref, DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
{
width = DF_REF_WIDTH (ref);
offset = DF_REF_OFFSET (ref);
width = DF_REF_EXTRACT_WIDTH (ref);
offset = DF_REF_EXTRACT_OFFSET (ref);
mode = DF_REF_EXTRACT_MODE (ref);
}
use = df_ref_create_structure (collection_rec, DF_REF_REG (ref),
DF_REF_LOC (ref), DF_REF_BB (ref),
DF_REF_INSN (ref), DF_REF_REG_USE,
DF_REF_FLAGS (ref) & ~DF_REF_CONDITIONAL,
width, offset);
width, offset, mode);
DF_REF_REGNO (use) = DF_REF_REGNO (ref);
}
}
@ -3191,7 +3224,7 @@ df_get_call_refs (struct df_collection_rec * collection_rec,
{
if (GET_CODE (XEXP (note, 0)) == USE)
df_uses_record (collection_rec, &XEXP (XEXP (note, 0), 0),
DF_REF_REG_USE, bb, insn, flags, -1, -1);
DF_REF_REG_USE, bb, insn, flags, -1, -1, 0);
else if (GET_CODE (XEXP (note, 0)) == CLOBBER)
{
if (REG_P (XEXP (XEXP (note, 0), 0)))
@ -3203,13 +3236,14 @@ df_get_call_refs (struct df_collection_rec * collection_rec,
}
else
df_uses_record (collection_rec, &XEXP (note, 0),
DF_REF_REG_USE, bb, insn, flags, -1, -1);
DF_REF_REG_USE, bb, insn, flags, -1, -1, 0);
}
}
/* The stack ptr is used (honorarily) by a CALL insn. */
df_ref_record (collection_rec, regno_reg_rtx[STACK_POINTER_REGNUM],
NULL, bb, insn, DF_REF_REG_USE, DF_REF_CALL_STACK_USAGE | flags, -1, -1);
NULL, bb, insn, DF_REF_REG_USE, DF_REF_CALL_STACK_USAGE | flags,
-1, -1, 0);
/* Calls may also reference any of the global registers,
so they are recorded as used. */
@ -3217,9 +3251,9 @@ df_get_call_refs (struct df_collection_rec * collection_rec,
if (global_regs[i])
{
df_ref_record (collection_rec, regno_reg_rtx[i],
NULL, bb, insn, DF_REF_REG_USE, flags, -1, -1);
NULL, bb, insn, DF_REF_REG_USE, flags, -1, -1, 0);
df_ref_record (collection_rec, regno_reg_rtx[i],
NULL, bb, insn, DF_REF_REG_DEF, flags, -1, -1);
NULL, bb, insn, DF_REF_REG_DEF, flags, -1, -1, 0);
}
is_sibling_call = SIBLING_CALL_P (insn);
@ -3232,7 +3266,8 @@ df_get_call_refs (struct df_collection_rec * collection_rec,
|| refers_to_regno_p (ui, ui+1,
crtl->return_rtx, NULL)))
df_ref_record (collection_rec, regno_reg_rtx[ui],
NULL, bb, insn, DF_REF_REG_DEF, DF_REF_MAY_CLOBBER | flags, -1, -1);
NULL, bb, insn, DF_REF_REG_DEF, DF_REF_MAY_CLOBBER | flags,
-1, -1, 0);
}
BITMAP_FREE (defs_generated);
@ -3270,7 +3305,7 @@ df_insn_refs_collect (struct df_collection_rec* collection_rec,
case REG_EQUAL:
df_uses_record (collection_rec,
&XEXP (note, 0), DF_REF_REG_USE,
bb, insn, DF_REF_IN_NOTE, -1, -1);
bb, insn, DF_REF_IN_NOTE, -1, -1, 0);
break;
case REG_NON_LOCAL_GOTO:
/* The frame ptr is used by a non-local goto. */
@ -3278,13 +3313,13 @@ df_insn_refs_collect (struct df_collection_rec* collection_rec,
regno_reg_rtx[FRAME_POINTER_REGNUM],
NULL,
bb, insn,
DF_REF_REG_USE, 0, -1, -1);
DF_REF_REG_USE, 0, -1, -1, 0);
#if FRAME_POINTER_REGNUM != HARD_FRAME_POINTER_REGNUM
df_ref_record (collection_rec,
regno_reg_rtx[HARD_FRAME_POINTER_REGNUM],
NULL,
bb, insn,
DF_REF_REG_USE, 0, -1, -1);
DF_REF_REG_USE, 0, -1, -1, 0);
#endif
break;
default:
@ -3298,7 +3333,8 @@ df_insn_refs_collect (struct df_collection_rec* collection_rec,
/* Record the register uses. */
df_uses_record (collection_rec,
&PATTERN (insn), DF_REF_REG_USE, bb, insn, 0, -1, -1);
&PATTERN (insn), DF_REF_REG_USE, bb, insn, 0,
-1, -1, 0);
/* DF_REF_CONDITIONAL needs corresponding USES. */
if (is_cond_exec)
@ -3381,7 +3417,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
if (regno == INVALID_REGNUM)
break;
df_ref_record (collection_rec, regno_reg_rtx[regno], NULL,
bb, NULL, DF_REF_REG_DEF, DF_REF_AT_TOP, -1, -1);
bb, NULL, DF_REF_REG_DEF, DF_REF_AT_TOP, -1, -1, 0);
}
}
#endif
@ -3405,7 +3441,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
if (EH_USES (i))
df_ref_record (collection_rec, regno_reg_rtx[i], NULL,
bb, NULL, DF_REF_REG_USE, DF_REF_AT_TOP, -1, -1);
bb, NULL, DF_REF_REG_USE, DF_REF_AT_TOP, -1, -1, 0);
}
#endif
@ -3413,7 +3449,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
non-local goto. */
if (bb->flags & BB_NON_LOCAL_GOTO_TARGET)
df_ref_record (collection_rec, hard_frame_pointer_rtx, NULL,
bb, NULL, DF_REF_REG_DEF, DF_REF_AT_TOP, -1, -1);
bb, NULL, DF_REF_REG_DEF, DF_REF_AT_TOP, -1, -1, 0);
/* Add the artificial uses. */
if (bb->index >= NUM_FIXED_BLOCKS)
@ -3427,7 +3463,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
EXECUTE_IF_SET_IN_BITMAP (au, 0, regno, bi)
{
df_ref_record (collection_rec, regno_reg_rtx[regno], NULL,
bb, NULL, DF_REF_REG_USE, 0, -1, -1);
bb, NULL, DF_REF_REG_USE, 0, -1, -1, 0);
}
}
@ -3720,7 +3756,7 @@ df_entry_block_defs_collect (struct df_collection_rec *collection_rec,
EXECUTE_IF_SET_IN_BITMAP (entry_block_defs, 0, i, bi)
{
df_ref_record (collection_rec, regno_reg_rtx[i], NULL,
ENTRY_BLOCK_PTR, NULL, DF_REF_REG_DEF, 0, -1, -1);
ENTRY_BLOCK_PTR, NULL, DF_REF_REG_DEF, 0, -1, -1, 0);
}
df_canonize_collection_rec (collection_rec);
@ -3881,7 +3917,7 @@ df_exit_block_uses_collect (struct df_collection_rec *collection_rec, bitmap exi
EXECUTE_IF_SET_IN_BITMAP (exit_block_uses, 0, i, bi)
df_ref_record (collection_rec, regno_reg_rtx[i], NULL,
EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0, -1, -1);
EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0, -1, -1, 0);
#if FRAME_POINTER_REGNUM != ARG_POINTER_REGNUM
/* It is deliberate that this is not put in the exit block uses but
@ -3891,7 +3927,7 @@ df_exit_block_uses_collect (struct df_collection_rec *collection_rec, bitmap exi
&& bb_has_eh_pred (EXIT_BLOCK_PTR)
&& fixed_regs[ARG_POINTER_REGNUM])
df_ref_record (collection_rec, regno_reg_rtx[ARG_POINTER_REGNUM], NULL,
EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0, -1, -1);
EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0, -1, -1, 0);
#endif
df_canonize_collection_rec (collection_rec);
@ -4363,9 +4399,8 @@ df_exit_block_bitmap_verify (bool abort_if_fail)
}
/* Return true if df_ref information for all insns in all BLOCKS are
correct and complete. If BLOCKS is null, all blocks are
checked. */
/* Return true if df_ref information for all insns in all blocks is
correct and complete. */
void
df_scan_verify (void)


@ -42,12 +42,13 @@ struct df_link;
a uniform manner. The last four problems can be added or deleted
at any time are always defined (though LIVE is always there at -O2
or higher); the others are always there. */
#define DF_SCAN 0
#define DF_LR 1 /* Live Registers backward. */
#define DF_LIVE 2 /* Live Registers & Uninitialized Registers */
#define DF_RD 3 /* Reaching Defs. */
#define DF_CHAIN 4 /* Def-Use and/or Use-Def Chains. */
#define DF_NOTE 5 /* REG_DEAD and REG_UNUSED notes. */
#define DF_SCAN 0
#define DF_LR 1 /* Live Registers backward. */
#define DF_LIVE 2 /* Live Registers & Uninitialized Registers */
#define DF_RD 3 /* Reaching Defs. */
#define DF_CHAIN 4 /* Def-Use and/or Use-Def Chains. */
#define DF_BYTE_LR 5 /* Subreg tracking lr. */
#define DF_NOTE 6 /* REG_DEF and REG_UNUSED notes. */
#define DF_LAST_PROBLEM_PLUS1 (DF_NOTE + 1)
@@ -59,6 +60,13 @@ enum df_flow_dir
DF_BACKWARD
};
+/* Used in the byte scanning to determine if may or must info is to be
+returned. */
+enum df_mm
+{
+DF_MM_MAY,
+DF_MM_MUST
+};
/* The first of these is a set of a register. The remaining three are
all uses of a register (the mem_load and mem_store relate to how
@@ -398,6 +406,7 @@ struct df_ref_extract
struct df_ref ref;
int width;
int offset;
+enum machine_mode mode;
};
/* These links are used for two purposes:
@@ -573,6 +582,7 @@ struct df
#define DF_RD_BB_INFO(BB) (df_rd_get_bb_info((BB)->index))
#define DF_LR_BB_INFO(BB) (df_lr_get_bb_info((BB)->index))
#define DF_LIVE_BB_INFO(BB) (df_live_get_bb_info((BB)->index))
+#define DF_BYTE_LR_BB_INFO(BB) (df_byte_lr_get_bb_info((BB)->index))
/* Most transformations that wish to use live register analysis will
use these macros. This info is the and of the lr and live sets. */
@@ -585,6 +595,12 @@ struct df
#define DF_LR_IN(BB) (DF_LR_BB_INFO(BB)->in)
#define DF_LR_OUT(BB) (DF_LR_BB_INFO(BB)->out)
+/* These macros are used by passes that are not tolerant of
+uninitialized variables. This intolerance should eventually
+be fixed. */
+#define DF_BYTE_LR_IN(BB) (DF_BYTE_LR_BB_INFO(BB)->in)
+#define DF_BYTE_LR_OUT(BB) (DF_BYTE_LR_BB_INFO(BB)->out)
/* Macros to access the elements within the ref structure. */
@@ -619,8 +635,9 @@ struct df
#define DF_REF_PREV_REG(REF) ((REF)->prev_reg)
/* The following three macros may only be applied if one of
DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT is true. */
-#define DF_REF_WIDTH(REF) (((struct df_ref_extract *)(REF))->width)
-#define DF_REF_OFFSET(REF) (((struct df_ref_extract *)(REF))->offset)
+#define DF_REF_EXTRACT_WIDTH(REF) (((struct df_ref_extract *)(REF))->width)
+#define DF_REF_EXTRACT_OFFSET(REF) (((struct df_ref_extract *)(REF))->offset)
+#define DF_REF_EXTRACT_MODE(REF) (((struct df_ref_extract *)(REF))->mode)
/* Macros to determine the reference type. */
#define DF_REF_REG_DEF_P(REF) (DF_REF_TYPE (REF) == DF_REF_REG_DEF)
@@ -775,16 +792,33 @@ struct df_live_bb_info
};
+/* Live registers, a backwards dataflow problem. These bitmaps are
+indexed by the df_byte_lr_offset array which is indexed by pseudo. */
+struct df_byte_lr_bb_info
+{
+/* Local sets to describe the basic blocks. */
+bitmap def; /* The set of registers set in this block
+- except artificial defs at the top. */
+bitmap use; /* The set of registers used in this block. */
+/* The results of the dataflow problem. */
+bitmap in; /* Just before the block itself. */
+bitmap out; /* At the bottom of the block. */
+};
/* This is used for debugging and for the dumpers to find the latest
instance so that the df info can be added to the dumps. This
should not be used by regular code. */
extern struct df *df;
-#define df_scan (df->problems_by_index[DF_SCAN])
-#define df_rd (df->problems_by_index[DF_RD])
-#define df_lr (df->problems_by_index[DF_LR])
-#define df_live (df->problems_by_index[DF_LIVE])
-#define df_chain (df->problems_by_index[DF_CHAIN])
-#define df_note (df->problems_by_index[DF_NOTE])
+#define df_scan (df->problems_by_index[DF_SCAN])
+#define df_rd (df->problems_by_index[DF_RD])
+#define df_lr (df->problems_by_index[DF_LR])
+#define df_live (df->problems_by_index[DF_LIVE])
+#define df_chain (df->problems_by_index[DF_CHAIN])
+#define df_byte_lr (df->problems_by_index[DF_BYTE_LR])
+#define df_note (df->problems_by_index[DF_NOTE])
/* This symbol turns on checking that each modification of the cfg has
been identified to the appropriate df routines. It is not part of
@@ -831,6 +865,7 @@ extern struct df_ref *df_find_use (rtx, rtx);
extern bool df_reg_used (rtx, rtx);
extern void df_worklist_dataflow (struct dataflow *,bitmap, int *, int);
extern void df_print_regset (FILE *file, bitmap r);
+extern void df_print_byte_regset (FILE *file, bitmap r);
extern void df_dump (FILE *);
extern void df_dump_region (FILE *);
extern void df_dump_start (FILE *);
@@ -867,6 +902,13 @@ extern void df_live_verify_transfer_functions (void);
extern void df_live_add_problem (void);
extern void df_live_set_all_dirty (void);
extern void df_chain_add_problem (enum df_chain_flags);
+extern void df_byte_lr_add_problem (void);
+extern int df_byte_lr_get_regno_start (unsigned int);
+extern int df_byte_lr_get_regno_len (unsigned int);
+extern void df_byte_lr_simulate_defs (rtx, bitmap);
+extern void df_byte_lr_simulate_uses (rtx, bitmap);
+extern void df_byte_lr_simulate_artificial_refs_at_top (basic_block, bitmap);
+extern void df_byte_lr_simulate_artificial_refs_at_end (basic_block, bitmap);
extern void df_note_add_problem (void);
extern void df_simulate_find_defs (rtx, bitmap);
extern void df_simulate_defs (rtx, bitmap);
@@ -885,7 +927,7 @@ extern void df_grow_insn_info (void);
extern void df_scan_blocks (void);
extern struct df_ref *df_ref_create (rtx, rtx *, rtx, basic_block,
enum df_ref_type, enum df_ref_flags,
-int, int);
+int, int, enum machine_mode);
extern void df_ref_remove (struct df_ref *);
extern struct df_insn_info * df_insn_create_insn_record (rtx);
extern void df_insn_delete (basic_block, unsigned int);
@@ -911,6 +953,10 @@ extern void df_compute_regs_ever_live (bool);
extern bool df_read_modify_subreg_p (rtx);
extern void df_scan_verify (void);
+/* Functions defined in df-byte-scan.c. */
+extern bool df_compute_accessed_bytes (struct df_ref *, enum df_mm,
+unsigned int *, unsigned int *);
/* Get basic block info. */
@@ -950,6 +996,15 @@ df_live_get_bb_info (unsigned int index)
return NULL;
}
+static inline struct df_byte_lr_bb_info *
+df_byte_lr_get_bb_info (unsigned int index)
+{
+if (index < df_byte_lr->block_info_size)
+return (struct df_byte_lr_bb_info *) df_byte_lr->block_info[index];
+else
+return NULL;
+}
/* Get the artificial defs for a basic block. */
static inline struct df_ref **

--- gcc/fwprop.c
+++ gcc/fwprop.c

@@ -679,6 +679,7 @@ update_df (rtx insn, rtx *loc, struct df_ref **use_rec, enum df_ref_type type,
struct df_ref *orig_use = use, *new_use;
int width = -1;
int offset = -1;
+enum machine_mode mode = 0;
rtx *new_loc = find_occurrence (loc, DF_REF_REG (orig_use));
use_rec++;
@@ -687,15 +688,17 @@ update_df (rtx insn, rtx *loc, struct df_ref **use_rec, enum df_ref_type type,
if (DF_REF_FLAGS_IS_SET (orig_use, DF_REF_SIGN_EXTRACT | DF_REF_ZERO_EXTRACT))
{
-width = DF_REF_WIDTH (orig_use);
-offset = DF_REF_OFFSET (orig_use);
+width = DF_REF_EXTRACT_WIDTH (orig_use);
+offset = DF_REF_EXTRACT_OFFSET (orig_use);
+mode = DF_REF_EXTRACT_MODE (orig_use);
}
/* Add a new insn use. Use the original type, because it says if the
use was within a MEM. */
new_use = df_ref_create (DF_REF_REG (orig_use), new_loc,
insn, BLOCK_FOR_INSN (insn),
-type, DF_REF_FLAGS (orig_use) | new_flags, width, offset);
+type, DF_REF_FLAGS (orig_use) | new_flags,
+width, offset, mode);
/* Set up the use-def chain. */
df_chain_copy (new_use, DF_REF_CHAIN (orig_use));

--- gcc/passes.c
+++ gcc/passes.c

@@ -731,6 +731,7 @@ init_optimization_passes (void)
NEXT_PASS (pass_partition_blocks);
NEXT_PASS (pass_regmove);
NEXT_PASS (pass_split_all_insns);
+NEXT_PASS (pass_fast_rtl_byte_dce);
NEXT_PASS (pass_lower_subreg2);
NEXT_PASS (pass_df_initialize_no_opt);
NEXT_PASS (pass_stack_ptr_mod);

--- gcc/timevar.def
+++ gcc/timevar.def

@@ -64,6 +64,7 @@ DEFTIMEVAR (TV_DF_LR , "df live regs")
DEFTIMEVAR (TV_DF_LIVE , "df live&initialized regs")
DEFTIMEVAR (TV_DF_UREC , "df uninitialized regs 2")
DEFTIMEVAR (TV_DF_CHAIN , "df use-def / def-use chains")
+DEFTIMEVAR (TV_DF_BYTE_LR , "df live byte regs")
DEFTIMEVAR (TV_DF_NOTE , "df reg dead/unused notes")
DEFTIMEVAR (TV_REG_STATS , "register information")

--- gcc/tree-pass.h
+++ gcc/tree-pass.h

@@ -423,6 +423,7 @@ extern struct rtl_opt_pass pass_partition_blocks;
extern struct rtl_opt_pass pass_match_asm_constraints;
extern struct rtl_opt_pass pass_regmove;
extern struct rtl_opt_pass pass_split_all_insns;
+extern struct rtl_opt_pass pass_fast_rtl_byte_dce;
extern struct rtl_opt_pass pass_lower_subreg2;
extern struct rtl_opt_pass pass_mode_switching;
extern struct rtl_opt_pass pass_see;