ira-build.c (ira_create_object): New arg SUBWORD; all callers changed.

* ira-build.c (ira_create_object): New arg SUBWORD; all callers changed.
	Initialize OBJECT_SUBWORD.
	(ira_create_allocno): Clear ALLOCNO_NUM_OBJECTS.
	(ira_create_allocno_objects): Renamed from ira_create_allocno_object;
	all callers changed.
	(merge_hard_reg_conflicts): Iterate over allocno subobjects.
	(finish_allocno): Likewise.
	(move_allocno_live_ranges, copy_allocno_live_ranges): Likewise.
	(remove_low_level_allocnos): Likewise.
	(update_bad_spill_attribute): Likewise.
	(setup_min_max_allocno_live_range_point): Likewise.
	(sort_conflict_id_map): Likewise.
	(ira_flattening): Likewise.  Use ior_hard_reg_conflicts.
	(ior_hard_reg_conflicts): New function.
	(ira_allocate_object_conflicts): Renamed first argument to OBJ.
	(compress_conflict_vecs): Iterate over objects, not allocnos.
	(ira_add_live_range_to_object): New function.
	(object_range_compare_func): Renamed from allocno_range_compare_func.
	All callers changed.
	(setup_min_max_conflict_allocno_ids): For allocnos with multiple
	subobjects, widen the min/max range of the lowest-order object to
	potentially include all other such low-order objects.
	* ira.c (ira_bad_reload_regno_1): Iterate over allocno subobjects.
	(check_allocation): Likewise.  Use more fine-grained tests for register
	conflicts.
	* ira-color.c (allocnos_have_intersected_live_ranges_p): Iterate over
	allocno subobjects.
	(assign_hard_reg): Keep multiple sets of conflicts.  Make finer-grained
	choices about which bits to set in each set.  Don't use
	ira_hard_reg_not_in_set_p; perform a more elaborate test for conflicts
	using the multiple sets we computed.
	(push_allocno_to_stack): Iterate over allocno subobjects.
	(all_conflicting_hard_regs_coalesced): New static function.
	(setup_allocno_available_regs_num): Use it.
	(setup_allocno_left_conflicts_size): Likewise.  Iterate over allocno
	subobjects.
	(coalesced_allocno_conflict): Test subobject 0 in each allocno.
	(setup_allocno_priorities): Divide ALLOCNO_EXCESS_PRESSURE_POINTS_NUM
	by ALLOCNO_NUM_OBJECTS.
	(calculate_spill_cost): Likewise.
	(color_pass): Express if statement in a more normal way.
	(ira_reassign_conflict_allocnos): Iterate over allocno subobjects.
	(slot_coalesced_allocno_live_ranges_intersect_p): Likewise.
	(setup_slot_coalesced_allocno_live_ranges): Likewise.
	(allocno_reload_assign): Likewise.
	(ira_reassign_pseudos): Likewise.
	(fast_allocation): Likewise.
	* ira-conflicts.c (build_conflict_bit_table): Likewise.
	(print_allocno_conflicts): Likewise.
	(ira_build_conflicts): Likewise.
	(allocnos_conflict_for_copy_p): Renamed from allocnos_conflict_p.  All
	callers changed.  Test subword 0 of each allocno for conflicts.
	(build_object_conflicts): Renamed from build_allocno_conflicts.  All
	callers changed.  Iterate over allocno subobjects.
	* ira-emit.c (modify_move_list): Iterate over allocno subobjects.
	* ira-int.h (struct ira_allocno): New member num_objects.  Rename object
	to objects and change it into an array.
	(ALLOCNO_OBJECT): Add new argument N.
	(ALLOCNO_NUM_OBJECTS, OBJECT_SUBWORD): New macros.
	(ira_create_allocno_objects): Renamed from ira_create_allocno_object.
	(ior_hard_reg_conflicts): Declare.
	(ira_add_live_range_to_object): Declare.
	(ira_allocno_object_iterator): New.
	(ira_allocno_object_iter_init, ira_allocno_object_iter_cond): New.
	(FOR_EACH_ALLOCNO_OBJECT): New macro.
	* ira-lives.c (objects_live): Renamed from allocnos_live; all uses changed.
	(allocnos_processed): New sparseset.
	(make_object_born): Renamed from make_allocno_born; take an ira_object_t
	argument.  All callers changed.
	(make_object_dead): Renamed from make_allocno_dead; take an ira_object_t
	argument.  All callers changed.
	(update_allocno_pressure_excess_length): Take an ira_object_t argument.
	All callers changed.
	(mark_pseudo_regno_live): Iterate over allocno subobjects.
	(mark_pseudo_regno_dead): Likewise.
	(mark_pseudo_regno_subword_live, mark_pseudo_regno_subword_dead): New
	functions.
	(mark_ref_live): Detect subword accesses and call
	mark_pseudo_regno_subword_live as appropriate.
	(mark_ref_dead): Likewise for mark_pseudo_regno_subword_dead.
	(process_bb_node_lives): Deal with object-related updates first; set
	and test bits in allocnos_processed to avoid computing allocno
	statistics more than once.
	(create_start_finish_chains): Iterate over objects, not allocnos.
	(print_object_live_ranges): New function.
	(print_allocno_live_ranges): Use it.
	(ira_create_allocno_live_ranges): Allocate and free allocnos_processed
	and objects_live.
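
The central change above can be modeled compactly: an allocno whose mode spans two words now owns an array of "objects", one per word, each with its own conflict and live-range data. A minimal sketch with hypothetical, simplified types (not GCC's real ira_allocno layout):

```c
#include <assert.h>

#define MAX_OBJECTS 2

/* Simplified mirror of the new layout in ira-int.h: the single
   `object' member becomes an array plus a count.  */
struct object  { int subword; };
struct allocno {
  int num_objects;                    /* ALLOCNO_NUM_OBJECTS */
  struct object objects[MAX_OBJECTS]; /* ALLOCNO_OBJECT (a, n) */
};

/* Give allocno A one object per word, each remembering which subword
   it covers, in the spirit of ira_create_allocno_objects.  */
static void
create_objects (struct allocno *a, int nwords)
{
  a->num_objects = nwords;
  for (int i = 0; i < nwords; i++)
    a->objects[i].subword = i;        /* OBJECT_SUBWORD */
}
```

With this shape, every routine listed above that used to touch "the" object of an allocno instead loops from 0 to num_objects.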

From-SVN: r162418
Bernd Schmidt 2010-07-22 15:48:30 +00:00 committed by Bernd Schmidt
parent cd1822b80e
commit ac0ab4f718
8 changed files with 1314 additions and 667 deletions


@@ -1,3 +1,94 @@
2010-07-22 Bernd Schmidt <bernds@codesourcery.com>
2010-07-22 Richard Guenther <rguenther@suse.de>
PR lto/42451


@@ -422,12 +422,13 @@ initiate_allocnos (void)
/* Create and return an object corresponding to a new allocno A. */
static ira_object_t
ira_create_object (ira_allocno_t a)
ira_create_object (ira_allocno_t a, int subword)
{
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
ira_object_t obj = (ira_object_t) pool_alloc (object_pool);
OBJECT_ALLOCNO (obj) = a;
OBJECT_SUBWORD (obj) = subword;
OBJECT_CONFLICT_ID (obj) = ira_objects_num;
OBJECT_CONFLICT_VEC_P (obj) = false;
OBJECT_CONFLICT_ARRAY (obj) = NULL;
@@ -446,6 +447,7 @@ ira_create_object (ira_allocno_t a)
ira_object_id_map
= VEC_address (ira_object_t, ira_object_id_map_vec);
ira_objects_num = VEC_length (ira_object_t, ira_object_id_map_vec);
return obj;
}
@@ -510,10 +512,12 @@ ira_create_allocno (int regno, bool cap_p, ira_loop_tree_node_t loop_tree_node)
ALLOCNO_PREV_BUCKET_ALLOCNO (a) = NULL;
ALLOCNO_FIRST_COALESCED_ALLOCNO (a) = a;
ALLOCNO_NEXT_COALESCED_ALLOCNO (a) = a;
ALLOCNO_NUM_OBJECTS (a) = 0;
VEC_safe_push (ira_allocno_t, heap, allocno_vec, a);
ira_allocnos = VEC_address (ira_allocno_t, allocno_vec);
ira_allocnos_num = VEC_length (ira_allocno_t, allocno_vec);
return a;
}
@@ -524,14 +528,27 @@ ira_set_allocno_cover_class (ira_allocno_t a, enum reg_class cover_class)
ALLOCNO_COVER_CLASS (a) = cover_class;
}
/* Allocate an object for allocno A and set ALLOCNO_OBJECT. */
/* Determine the number of objects we should associate with allocno A
and allocate them. */
void
ira_create_allocno_object (ira_allocno_t a)
ira_create_allocno_objects (ira_allocno_t a)
{
ALLOCNO_OBJECT (a) = ira_create_object (a);
enum machine_mode mode = ALLOCNO_MODE (a);
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
int n = ira_reg_class_nregs[cover_class][mode];
int i;
if (GET_MODE_SIZE (mode) != 2 * UNITS_PER_WORD || n != 2)
n = 1;
ALLOCNO_NUM_OBJECTS (a) = n;
for (i = 0; i < n; i++)
ALLOCNO_OBJECT (a, i) = ira_create_object (a, i);
}
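
The condition above gives an allocno two objects only when its mode occupies exactly two words and the cover class needs exactly two hard registers; everything else keeps a single object. A standalone sketch of that predicate, where mode_size and units_per_word stand in for GET_MODE_SIZE and UNITS_PER_WORD:

```c
/* Mirror of the test in ira_create_allocno_objects: split into two
   subobjects only for a two-word value held in two registers.  */
static int
num_allocno_objects (int mode_size, int units_per_word, int nregs)
{
  if (mode_size != 2 * units_per_word || nregs != 2)
    return 1;
  return 2;
}
```

So a four-word vector held in four registers, or a two-word value held in one wide register, still gets a single object.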
/* For each allocno, create the corresponding ALLOCNO_OBJECT structure. */
/* For each allocno, set ALLOCNO_NUM_OBJECTS and create the
ALLOCNO_OBJECT structures. This must be called after the cover
classes are known. */
static void
create_allocno_objects (void)
{
@@ -539,22 +556,28 @@ create_allocno_objects (void)
ira_allocno_iterator ai;
FOR_EACH_ALLOCNO (a, ai)
ira_create_allocno_object (a);
ira_create_allocno_objects (a);
}
/* Merge hard register conflicts from allocno FROM into allocno TO. If
TOTAL_ONLY is true, we ignore ALLOCNO_CONFLICT_HARD_REGS. */
/* Merge hard register conflict information for all objects associated with
allocno FROM into the corresponding objects associated with TO.
If TOTAL_ONLY is true, we only merge OBJECT_TOTAL_CONFLICT_HARD_REGS. */
static void
merge_hard_reg_conflicts (ira_allocno_t from, ira_allocno_t to,
bool total_only)
{
ira_object_t from_obj = ALLOCNO_OBJECT (from);
ira_object_t to_obj = ALLOCNO_OBJECT (to);
if (!total_only)
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj),
OBJECT_CONFLICT_HARD_REGS (from_obj));
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj),
OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj));
int i;
gcc_assert (ALLOCNO_NUM_OBJECTS (to) == ALLOCNO_NUM_OBJECTS (from));
for (i = 0; i < ALLOCNO_NUM_OBJECTS (to); i++)
{
ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
if (!total_only)
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj),
OBJECT_CONFLICT_HARD_REGS (from_obj));
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj),
OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj));
}
#ifdef STACK_REGS
if (!total_only && ALLOCNO_NO_STACK_REG_P (from))
ALLOCNO_NO_STACK_REG_P (to) = true;
@@ -563,6 +586,20 @@ merge_hard_reg_conflicts (ira_allocno_t from, ira_allocno_t to,
#endif
}
/* Update hard register conflict information for all objects associated with
A to include the regs in SET. */
void
ior_hard_reg_conflicts (ira_allocno_t a, HARD_REG_SET *set)
{
ira_allocno_object_iterator i;
ira_object_t obj;
FOR_EACH_ALLOCNO_OBJECT (a, obj, i)
{
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), *set);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), *set);
}
}
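
The new ior_hard_reg_conflicts simply ORs one register set into both conflict sets of every subobject. Modeling a HARD_REG_SET as a single 64-bit mask (an assumption for brevity; GCC's type can be an array of words), the pattern is:

```c
#include <stdint.h>

struct object { uint64_t conflict_regs, total_conflict_regs; };

/* OR SET into both conflict sets of each of the N objects, as
   ior_hard_reg_conflicts does with IOR_HARD_REG_SET.  */
static void
ior_conflicts (struct object *objs, int n, uint64_t set)
{
  for (int i = 0; i < n; i++)
    {
      objs[i].conflict_regs |= set;
      objs[i].total_conflict_regs |= set;
    }
}
```

This is the helper ira_flattening and ira_build now call instead of open-coding the two IOR_HARD_REG_SET operations per allocno.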
/* Return TRUE if a conflict vector with NUM elements is more
profitable than a conflict bit vector for OBJ. */
bool
@@ -617,14 +654,14 @@ allocate_conflict_bit_vec (ira_object_t obj)
}
/* Allocate and initialize the conflict vector or conflict bit vector
of A for NUM conflicting allocnos whatever is more profitable. */
of OBJ for NUM conflicting allocnos whatever is more profitable. */
void
ira_allocate_object_conflicts (ira_object_t a, int num)
ira_allocate_object_conflicts (ira_object_t obj, int num)
{
if (ira_conflict_vector_profitable_p (a, num))
ira_allocate_conflict_vec (a, num);
if (ira_conflict_vector_profitable_p (obj, num))
ira_allocate_conflict_vec (obj, num);
else
allocate_conflict_bit_vec (a);
allocate_conflict_bit_vec (obj);
}
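
The choice made here is a size trade-off: a conflict vector holding NUM object pointers versus a bit vector spanning the object's [min, max] conflict-ID range. A rough standalone sketch of that comparison (the real ira_conflict_vector_profitable_p test differs in detail):

```c
#include <stddef.h>

/* Prefer a conflict vector of NUM pointers (plus a terminator) when
   it is smaller than a bit vector covering IDs MIN_ID..MAX_ID.
   This only sketches the shape of the profitability test.  */
static int
conflict_vec_profitable (int num, int min_id, int max_id)
{
  size_t vec_bytes = (size_t) (num + 1) * sizeof (void *);
  size_t bit_bytes = (size_t) (max_id - min_id + 8) / 8;
  return vec_bytes < bit_bytes;
}
```

Few conflicts over a wide ID range favor the vector; many conflicts over a narrow range favor the bit vector.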
/* Add OBJ2 to the conflicts of OBJ1. */
@@ -772,15 +809,14 @@ compress_conflict_vec (ira_object_t obj)
static void
compress_conflict_vecs (void)
{
ira_allocno_t a;
ira_allocno_iterator ai;
ira_object_t obj;
ira_object_iterator oi;
conflict_check = (int *) ira_allocate (sizeof (int) * ira_objects_num);
memset (conflict_check, 0, sizeof (int) * ira_objects_num);
curr_conflict_check_tick = 0;
FOR_EACH_ALLOCNO (a, ai)
FOR_EACH_OBJECT (obj, oi)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
if (OBJECT_CONFLICT_VEC_P (obj))
compress_conflict_vec (obj);
}
@@ -823,7 +859,7 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_MODE (cap) = ALLOCNO_MODE (a);
cover_class = ALLOCNO_COVER_CLASS (a);
ira_set_allocno_cover_class (cap, cover_class);
ira_create_allocno_object (cap);
ira_create_allocno_objects (cap);
ALLOCNO_AVAILABLE_REGS_NUM (cap) = ALLOCNO_AVAILABLE_REGS_NUM (a);
ALLOCNO_CAP_MEMBER (cap) = a;
ALLOCNO_CAP (a) = cap;
@@ -838,7 +874,9 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_NREFS (cap) = ALLOCNO_NREFS (a);
ALLOCNO_FREQ (cap) = ALLOCNO_FREQ (a);
ALLOCNO_CALL_FREQ (cap) = ALLOCNO_CALL_FREQ (a);
merge_hard_reg_conflicts (a, cap, false);
ALLOCNO_CALLS_CROSSED_NUM (cap) = ALLOCNO_CALLS_CROSSED_NUM (a);
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
{
@@ -849,7 +887,7 @@ create_cap_allocno (ira_allocno_t a)
return cap;
}
/* Create and return allocno live range with given attributes. */
/* Create and return a live range for OBJECT with given attributes. */
live_range_t
ira_create_live_range (ira_object_t obj, int start, int finish,
live_range_t next)
@@ -864,6 +902,17 @@ ira_create_live_range (ira_object_t obj, int start, int finish,
return p;
}
/* Create a new live range for OBJECT and queue it at the head of its
live range list. */
void
ira_add_live_range_to_object (ira_object_t object, int start, int finish)
{
live_range_t p;
p = ira_create_live_range (object, start, finish,
OBJECT_LIVE_RANGES (object));
OBJECT_LIVE_RANGES (object) = p;
}
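
ira_add_live_range_to_object is just create-then-prepend: the fresh range becomes the new head of the object's range list. A minimal self-contained sketch of that list operation (hypothetical node type):

```c
#include <stdlib.h>

struct live_range
{
  int start, finish;
  struct live_range *next;
};

/* Allocate a [START, FINISH] range and queue it at the head of the
   list, as ira_add_live_range_to_object does for an object's
   OBJECT_LIVE_RANGES.  Error handling is omitted for brevity.  */
static struct live_range *
add_range (struct live_range *head, int start, int finish)
{
  struct live_range *p = malloc (sizeof *p);
  p->start = start;
  p->finish = finish;
  p->next = head;
  return p;
}
```

Because ranges are built while scanning program points in order, prepending keeps the list sorted by decreasing start point.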
/* Copy allocno live range R and return the result. */
static live_range_t
copy_live_range (live_range_t r)
@@ -1032,13 +1081,17 @@ static void
finish_allocno (ira_allocno_t a)
{
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t obj;
ira_allocno_object_iterator oi;
ira_finish_live_range_list (OBJECT_LIVE_RANGES (obj));
ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
ira_free (OBJECT_CONFLICT_ARRAY (obj));
pool_free (object_pool, obj);
FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
{
ira_finish_live_range_list (OBJECT_LIVE_RANGES (obj));
ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
ira_free (OBJECT_CONFLICT_ARRAY (obj));
pool_free (object_pool, obj);
}
ira_allocnos[ALLOCNO_NUM (a)] = NULL;
if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
@@ -1708,44 +1761,58 @@ change_object_in_range_list (live_range_t r, ira_object_t obj)
static void
move_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
ira_object_t from_obj = ALLOCNO_OBJECT (from);
ira_object_t to_obj = ALLOCNO_OBJECT (to);
live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
int i;
int n = ALLOCNO_NUM_OBJECTS (from);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
gcc_assert (n == ALLOCNO_NUM_OBJECTS (to));
for (i = 0; i < n; i++)
{
fprintf (ira_dump_file,
" Moving ranges of a%dr%d to a%dr%d: ",
ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
fprintf (ira_dump_file,
" Moving ranges of a%dr%d to a%dr%d: ",
ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
}
change_object_in_range_list (lr, to_obj);
OBJECT_LIVE_RANGES (to_obj)
= ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
OBJECT_LIVE_RANGES (from_obj) = NULL;
}
change_object_in_range_list (lr, to_obj);
OBJECT_LIVE_RANGES (to_obj)
= ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
OBJECT_LIVE_RANGES (from_obj) = NULL;
}
/* Copy all live ranges associated with allocno FROM to allocno TO. */
static void
copy_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
ira_object_t from_obj = ALLOCNO_OBJECT (from);
ira_object_t to_obj = ALLOCNO_OBJECT (to);
live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
int i;
int n = ALLOCNO_NUM_OBJECTS (from);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
gcc_assert (n == ALLOCNO_NUM_OBJECTS (to));
for (i = 0; i < n; i++)
{
fprintf (ira_dump_file,
" Copying ranges of a%dr%d to a%dr%d: ",
ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
fprintf (ira_dump_file, " Copying ranges of a%dr%d to a%dr%d: ",
ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
}
lr = ira_copy_live_range_list (lr);
change_object_in_range_list (lr, to_obj);
OBJECT_LIVE_RANGES (to_obj)
= ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
}
lr = ira_copy_live_range_list (lr);
change_object_in_range_list (lr, to_obj);
OBJECT_LIVE_RANGES (to_obj)
= ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
}
/* Return TRUE if NODE represents a loop with low register
@@ -2125,13 +2192,15 @@ remove_low_level_allocnos (void)
regno = ALLOCNO_REGNO (a);
if (ira_loop_tree_root->regno_allocno_map[regno] == a)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t obj;
ira_allocno_object_iterator oi;
ira_regno_allocno_map[regno] = a;
ALLOCNO_NEXT_REGNO_ALLOCNO (a) = NULL;
ALLOCNO_CAP_MEMBER (a) = NULL;
COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
ALLOCNO_NO_STACK_REG_P (a) = true;
@@ -2194,6 +2263,8 @@ update_bad_spill_attribute (void)
int i;
ira_allocno_t a;
ira_allocno_iterator ai;
ira_allocno_object_iterator aoi;
ira_object_t obj;
live_range_t r;
enum reg_class cover_class;
bitmap_head dead_points[N_REG_CLASSES];
@@ -2205,31 +2276,36 @@
}
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
bitmap_set_bit (&dead_points[cover_class], r->finish);
FOR_EACH_ALLOCNO_OBJECT (a, obj, aoi)
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
bitmap_set_bit (&dead_points[cover_class], r->finish);
}
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
if (! ALLOCNO_BAD_SPILL_P (a))
continue;
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
FOR_EACH_ALLOCNO_OBJECT (a, obj, aoi)
{
for (i = r->start + 1; i < r->finish; i++)
if (bitmap_bit_p (&dead_points[cover_class], i))
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
{
for (i = r->start + 1; i < r->finish; i++)
if (bitmap_bit_p (&dead_points[cover_class], i))
break;
if (i < r->finish)
break;
}
if (r != NULL)
{
ALLOCNO_BAD_SPILL_P (a) = false;
break;
if (i < r->finish)
break;
}
}
if (r != NULL)
ALLOCNO_BAD_SPILL_P (a) = false;
}
for (i = 0; i < ira_reg_class_cover_size; i++)
{
@@ -2247,57 +2323,69 @@ setup_min_max_allocno_live_range_point (void)
int i;
ira_allocno_t a, parent_a, cap;
ira_allocno_iterator ai;
#ifdef ENABLE_IRA_CHECKING
ira_object_iterator oi;
ira_object_t obj;
#endif
live_range_t r;
ira_loop_tree_node_t parent;
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
r = OBJECT_LIVE_RANGES (obj);
if (r == NULL)
continue;
OBJECT_MAX (obj) = r->finish;
for (; r->next != NULL; r = r->next)
;
OBJECT_MIN (obj) = r->start;
int n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
r = OBJECT_LIVE_RANGES (obj);
if (r == NULL)
continue;
OBJECT_MAX (obj) = r->finish;
for (; r->next != NULL; r = r->next)
;
OBJECT_MIN (obj) = r->start;
}
}
for (i = max_reg_num () - 1; i >= FIRST_PSEUDO_REGISTER; i--)
for (a = ira_regno_allocno_map[i];
a != NULL;
a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t parent_obj;
if (OBJECT_MAX (obj) < 0)
continue;
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
/* Accumulation of range info. */
if (ALLOCNO_CAP (a) != NULL)
int j;
int n = ALLOCNO_NUM_OBJECTS (a);
for (j = 0; j < n; j++)
{
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
ira_object_t obj = ALLOCNO_OBJECT (a, j);
ira_object_t parent_obj;
if (OBJECT_MAX (obj) < 0)
continue;
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
/* Accumulation of range info. */
if (ALLOCNO_CAP (a) != NULL)
{
ira_object_t cap_obj = ALLOCNO_OBJECT (cap);
if (OBJECT_MAX (cap_obj) < OBJECT_MAX (obj))
OBJECT_MAX (cap_obj) = OBJECT_MAX (obj);
if (OBJECT_MIN (cap_obj) > OBJECT_MIN (obj))
OBJECT_MIN (cap_obj) = OBJECT_MIN (obj);
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
{
ira_object_t cap_obj = ALLOCNO_OBJECT (cap, j);
if (OBJECT_MAX (cap_obj) < OBJECT_MAX (obj))
OBJECT_MAX (cap_obj) = OBJECT_MAX (obj);
if (OBJECT_MIN (cap_obj) > OBJECT_MIN (obj))
OBJECT_MIN (cap_obj) = OBJECT_MIN (obj);
}
continue;
}
continue;
if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL)
continue;
parent_a = parent->regno_allocno_map[i];
parent_obj = ALLOCNO_OBJECT (parent_a, j);
if (OBJECT_MAX (parent_obj) < OBJECT_MAX (obj))
OBJECT_MAX (parent_obj) = OBJECT_MAX (obj);
if (OBJECT_MIN (parent_obj) > OBJECT_MIN (obj))
OBJECT_MIN (parent_obj) = OBJECT_MIN (obj);
}
if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL)
continue;
parent_a = parent->regno_allocno_map[i];
parent_obj = ALLOCNO_OBJECT (parent_a);
if (OBJECT_MAX (parent_obj) < OBJECT_MAX (obj))
OBJECT_MAX (parent_obj) = OBJECT_MAX (obj);
if (OBJECT_MIN (parent_obj) > OBJECT_MIN (obj))
OBJECT_MIN (parent_obj) = OBJECT_MIN (obj);
}
#ifdef ENABLE_IRA_CHECKING
FOR_EACH_ALLOCNO (a, ai)
FOR_EACH_OBJECT (obj, oi)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
if ((0 <= OBJECT_MIN (obj) && OBJECT_MIN (obj) <= ira_max_point)
&& (0 <= OBJECT_MAX (obj) && OBJECT_MAX (obj) <= ira_max_point))
continue;
@@ -2312,7 +2400,7 @@ setup_min_max_allocno_live_range_point (void)
(min). Allocnos with the same start are ordered according their
finish (max). */
static int
allocno_range_compare_func (const void *v1p, const void *v2p)
object_range_compare_func (const void *v1p, const void *v2p)
{
int diff;
ira_object_t obj1 = *(const ira_object_t *) v1p;
@@ -2340,9 +2428,15 @@ sort_conflict_id_map (void)
num = 0;
FOR_EACH_ALLOCNO (a, ai)
ira_object_id_map[num++] = ALLOCNO_OBJECT (a);
{
ira_allocno_object_iterator oi;
ira_object_t obj;
FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
ira_object_id_map[num++] = obj;
}
qsort (ira_object_id_map, num, sizeof (ira_object_t),
allocno_range_compare_func);
object_range_compare_func);
for (i = 0; i < num; i++)
{
ira_object_t obj = ira_object_id_map[i];
@@ -2361,7 +2455,9 @@ setup_min_max_conflict_allocno_ids (void)
int cover_class;
int i, j, min, max, start, finish, first_not_finished, filled_area_start;
int *live_range_min, *last_lived;
int word0_min, word0_max;
ira_allocno_t a;
ira_allocno_iterator ai;
live_range_min = (int *) ira_allocate (sizeof (int) * ira_objects_num);
cover_class = -1;
@@ -2388,10 +2484,10 @@
/* If we skip an allocno, the allocno with smaller ids will
be also skipped because of the secondary sorting the
range finishes (see function
allocno_range_compare_func). */
object_range_compare_func). */
while (first_not_finished < i
&& start > OBJECT_MAX (ira_object_id_map
[first_not_finished]))
[first_not_finished]))
first_not_finished++;
min = first_not_finished;
}
@@ -2442,6 +2538,38 @@
}
ira_free (last_lived);
ira_free (live_range_min);
/* For allocnos with more than one object, we may later record extra conflicts in
subobject 0 that we cannot really know about here.
For now, simply widen the min/max range of these subobjects. */
word0_min = INT_MAX;
word0_max = INT_MIN;
FOR_EACH_ALLOCNO (a, ai)
{
int n = ALLOCNO_NUM_OBJECTS (a);
ira_object_t obj0;
if (n < 2)
continue;
obj0 = ALLOCNO_OBJECT (a, 0);
if (OBJECT_CONFLICT_ID (obj0) < word0_min)
word0_min = OBJECT_CONFLICT_ID (obj0);
if (OBJECT_CONFLICT_ID (obj0) > word0_max)
word0_max = OBJECT_CONFLICT_ID (obj0);
}
FOR_EACH_ALLOCNO (a, ai)
{
int n = ALLOCNO_NUM_OBJECTS (a);
ira_object_t obj0;
if (n < 2)
continue;
obj0 = ALLOCNO_OBJECT (a, 0);
if (OBJECT_MIN (obj0) > word0_min)
OBJECT_MIN (obj0) = word0_min;
if (OBJECT_MAX (obj0) < word0_max)
OBJECT_MAX (obj0) = word0_max;
}
}
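
The two loops just added make a single pass to find the global conflict-ID span of all word-0 objects of multi-word allocnos, then widen each such object's [min, max] window to that span, since extra conflicts may be recorded in subobject 0 later. The same two-pass shape in a standalone sketch, with plain arrays standing in for OBJECT_CONFLICT_ID, OBJECT_MIN, and OBJECT_MAX:

```c
#include <limits.h>

/* MULTI[i] marks word-0 objects of multi-object allocnos; ID, MIN,
   and MAX stand in for OBJECT_CONFLICT_ID/OBJECT_MIN/OBJECT_MAX.
   Widen the range of every marked object to span all of them.  */
static void
widen_word0_ranges (int n, const int *multi, const int *id,
                    int *min, int *max)
{
  int w0_min = INT_MAX, w0_max = INT_MIN;
  for (int i = 0; i < n; i++)
    if (multi[i])
      {
        if (id[i] < w0_min) w0_min = id[i];
        if (id[i] > w0_max) w0_max = id[i];
      }
  for (int i = 0; i < n; i++)
    if (multi[i])
      {
        if (min[i] > w0_min) min[i] = w0_min;
        if (max[i] < w0_max) max[i] = w0_max;
      }
}
```

Single-object allocnos are untouched; only the word-0 subobjects pay the cost of the conservative widening.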
@@ -2529,6 +2657,7 @@ copy_info_to_removed_store_destinations (int regno)
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))])
/* This allocno will be removed. */
continue;
/* Caps will be removed. */
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
for (parent = ALLOCNO_LOOP_TREE_NODE (a)->parent;
@@ -2541,8 +2670,10 @@
break;
if (parent == NULL || parent_a == NULL)
continue;
copy_allocno_live_ranges (a, parent_a);
merge_hard_reg_conflicts (a, parent_a, true);
ALLOCNO_CALL_FREQ (parent_a) += ALLOCNO_CALL_FREQ (a);
ALLOCNO_CALLS_CROSSED_NUM (parent_a)
+= ALLOCNO_CALLS_CROSSED_NUM (a);
@@ -2582,14 +2713,16 @@ ira_flattening (int max_regno_before_emit, int ira_max_point_before_emit)
new_pseudos_p = merged_p = false;
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_allocno_object_iterator oi;
ira_object_t obj;
if (ALLOCNO_CAP_MEMBER (a) != NULL)
/* Caps are not in the regno allocno maps and they will never be
transformed into allocnos existing after IR
flattening. */
continue;
COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
OBJECT_CONFLICT_HARD_REGS (obj));
FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
OBJECT_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
ALLOCNO_TOTAL_NO_STACK_REG_P (a) = ALLOCNO_NO_STACK_REG_P (a);
#endif
@@ -2674,13 +2807,17 @@ ira_flattening (int max_regno_before_emit, int ira_max_point_before_emit)
/* Rebuild conflicts. */
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_allocno_object_iterator oi;
ira_object_t obj;
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
ira_assert (r->object == obj);
clear_conflicts (obj);
FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
{
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
ira_assert (r->object == obj);
clear_conflicts (obj);
}
}
objects_live = sparseset_alloc (ira_objects_num);
for (i = 0; i < ira_max_point; i++)
@@ -2692,6 +2829,7 @@ ira_flattening (int max_regno_before_emit, int ira_max_point_before_emit)
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
cover_class = ALLOCNO_COVER_CLASS (a);
sparseset_set_bit (objects_live, OBJECT_CONFLICT_ID (obj));
EXECUTE_IF_SET_IN_SPARSESET (objects_live, n)
@@ -2699,7 +2837,6 @@ ira_flattening (int max_regno_before_emit, int ira_max_point_before_emit)
ira_object_t live_obj = ira_object_id_map[n];
ira_allocno_t live_a = OBJECT_ALLOCNO (live_obj);
enum reg_class live_cover = ALLOCNO_COVER_CLASS (live_a);
if (ira_reg_classes_intersect_p[cover_class][live_cover]
/* Don't set up conflict for the allocno with itself. */
&& live_a != a)
@@ -2931,40 +3068,39 @@ ira_build (bool loops_p)
allocno crossing calls. */
FOR_EACH_ALLOCNO (a, ai)
if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
ior_hard_reg_conflicts (a, &call_used_reg_set);
}
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
print_copies (ira_dump_file);
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
{
int n, nr;
int n, nr, nr_big;
ira_allocno_t a;
live_range_t r;
ira_allocno_iterator ai;
n = 0;
nr = 0;
nr_big = 0;
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
n += OBJECT_NUM_CONFLICTS (obj);
int j, nobj = ALLOCNO_NUM_OBJECTS (a);
if (nobj > 1)
nr_big++;
for (j = 0; j < nobj; j++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, j);
n += OBJECT_NUM_CONFLICTS (obj);
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
nr++;
}
}
nr = 0;
FOR_EACH_ALLOCNO (a, ai)
for (r = OBJECT_LIVE_RANGES (ALLOCNO_OBJECT (a)); r != NULL;
r = r->next)
nr++;
fprintf (ira_dump_file, " regions=%d, blocks=%d, points=%d\n",
VEC_length (loop_p, ira_loops.larray), n_basic_blocks,
ira_max_point);
fprintf (ira_dump_file,
" allocnos=%d, copies=%d, conflicts=%d, ranges=%d\n",
ira_allocnos_num, ira_copies_num, n, nr);
" allocnos=%d (big %d), copies=%d, conflicts=%d, ranges=%d\n",
ira_allocnos_num, nr_big, ira_copies_num, n, nr);
}
return loops_p;
}


@@ -94,16 +94,29 @@ static VEC(ira_allocno_t,heap) *removed_splay_allocno_vec;
static bool
allocnos_have_intersected_live_ranges_p (ira_allocno_t a1, ira_allocno_t a2)
{
ira_object_t obj1 = ALLOCNO_OBJECT (a1);
ira_object_t obj2 = ALLOCNO_OBJECT (a2);
int i, j;
int n1 = ALLOCNO_NUM_OBJECTS (a1);
int n2 = ALLOCNO_NUM_OBJECTS (a2);
if (a1 == a2)
return false;
if (ALLOCNO_REG (a1) != NULL && ALLOCNO_REG (a2) != NULL
&& (ORIGINAL_REGNO (ALLOCNO_REG (a1))
== ORIGINAL_REGNO (ALLOCNO_REG (a2))))
return false;
return ira_live_ranges_intersect_p (OBJECT_LIVE_RANGES (obj1),
OBJECT_LIVE_RANGES (obj2));
for (i = 0; i < n1; i++)
{
ira_object_t c1 = ALLOCNO_OBJECT (a1, i);
for (j = 0; j < n2; j++)
{
ira_object_t c2 = ALLOCNO_OBJECT (a2, j);
if (ira_live_ranges_intersect_p (OBJECT_LIVE_RANGES (c1),
OBJECT_LIVE_RANGES (c2)))
return true;
}
}
return false;
}
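The pairwise subobject scan above reduces to an any-overlap test between two sets of closed intervals. A minimal standalone sketch of that check (the `struct range` type and helper name are invented here, not IRA's real data structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for an IRA live range: a closed interval of
   program points.  */
struct range { int start, finish; };

/* Return true if any interval in A overlaps any interval in B,
   mirroring the nested loop over subobject live ranges above.  */
static bool
ranges_intersect_p (const struct range *a, size_t na,
                    const struct range *b, size_t nb)
{
  for (size_t i = 0; i < na; i++)
    for (size_t j = 0; j < nb; j++)
      if (a[i].start <= b[j].finish && b[j].start <= a[i].finish)
        return true;
  return false;
}
```

As in the function above, the loop returns as soon as one conflicting pair is found; only when every pair is disjoint does it fall through to false.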
#ifdef ENABLE_IRA_CHECKING
@@ -442,12 +455,11 @@ print_coalesced_allocno (ira_allocno_t allocno)
static bool
assign_hard_reg (ira_allocno_t allocno, bool retry_p)
{
HARD_REG_SET conflicting_regs;
int i, j, k, hard_regno, best_hard_regno, class_size;
int cost, mem_cost, min_cost, full_cost, min_full_cost;
HARD_REG_SET conflicting_regs[2];
int i, j, hard_regno, nregs, best_hard_regno, class_size;
int cost, mem_cost, min_cost, full_cost, min_full_cost, nwords;
int *a_costs;
int *conflict_costs;
enum reg_class cover_class, conflict_cover_class;
enum reg_class cover_class;
enum machine_mode mode;
ira_allocno_t a;
static int costs[FIRST_PSEUDO_REGISTER], full_costs[FIRST_PSEUDO_REGISTER];
@@ -459,11 +471,13 @@ assign_hard_reg (ira_allocno_t allocno, bool retry_p)
bool no_stack_reg_p;
#endif
nwords = ALLOCNO_NUM_OBJECTS (allocno);
ira_assert (! ALLOCNO_ASSIGNED_P (allocno));
cover_class = ALLOCNO_COVER_CLASS (allocno);
class_size = ira_class_hard_regs_num[cover_class];
mode = ALLOCNO_MODE (allocno);
CLEAR_HARD_REG_SET (conflicting_regs);
for (i = 0; i < nwords; i++)
CLEAR_HARD_REG_SET (conflicting_regs[i]);
best_hard_regno = -1;
memset (full_costs, 0, sizeof (int) * class_size);
mem_cost = 0;
@@ -478,13 +492,9 @@ assign_hard_reg (ira_allocno_t allocno, bool retry_p)
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
int word;
mem_cost += ALLOCNO_UPDATED_MEMORY_COST (a);
IOR_HARD_REG_SET (conflicting_regs,
OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
ira_allocate_and_copy_costs (&ALLOCNO_UPDATED_HARD_REG_COSTS (a),
cover_class, ALLOCNO_HARD_REG_COSTS (a));
a_costs = ALLOCNO_UPDATED_HARD_REG_COSTS (a);
@@ -503,44 +513,68 @@ assign_hard_reg (ira_allocno_t allocno, bool retry_p)
costs[i] += cost;
full_costs[i] += cost;
}
/* Take preferences of conflicting allocnos into account. */
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
for (word = 0; word < nwords; word++)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
ira_object_t conflict_obj;
ira_object_t obj = ALLOCNO_OBJECT (allocno, word);
ira_object_conflict_iterator oci;
/* Reload can give another class so we need to check all
allocnos. */
if (retry_p || bitmap_bit_p (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
IOR_HARD_REG_SET (conflicting_regs[word],
OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
/* Take preferences of conflicting allocnos into account. */
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
enum reg_class conflict_cover_class;
/* Reload can give another class so we need to check all
allocnos. */
if (!retry_p && !bitmap_bit_p (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
continue;
conflict_cover_class = ALLOCNO_COVER_CLASS (conflict_allocno);
ira_assert (ira_reg_classes_intersect_p
[cover_class][conflict_cover_class]);
if (allocno_coalesced_p)
{
if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
continue;
bitmap_set_bit (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno));
}
if (ALLOCNO_ASSIGNED_P (conflict_allocno))
{
if ((hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno)) >= 0
hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno);
if (hard_regno >= 0
&& ira_class_hard_reg_index[cover_class][hard_regno] >= 0)
{
IOR_HARD_REG_SET
(conflicting_regs,
ira_reg_mode_hard_regset
[hard_regno][ALLOCNO_MODE (conflict_allocno)]);
enum machine_mode mode = ALLOCNO_MODE (conflict_allocno);
int conflict_nregs = hard_regno_nregs[hard_regno][mode];
int n_objects = ALLOCNO_NUM_OBJECTS (conflict_allocno);
if (conflict_nregs == n_objects && conflict_nregs > 1)
{
int num = OBJECT_SUBWORD (conflict_obj);
if (WORDS_BIG_ENDIAN)
SET_HARD_REG_BIT (conflicting_regs[word],
hard_regno + n_objects - num - 1);
else
SET_HARD_REG_BIT (conflicting_regs[word],
hard_regno + num);
}
else
IOR_HARD_REG_SET (conflicting_regs[word],
ira_reg_mode_hard_regset[hard_regno][mode]);
if (hard_reg_set_subset_p (reg_class_contents[cover_class],
conflicting_regs))
conflicting_regs[word]))
goto fail;
}
}
else if (! ALLOCNO_MAY_BE_SPILLED_P (ALLOCNO_FIRST_COALESCED_ALLOCNO
(conflict_allocno)))
{
int k, *conflict_costs;
if (allocno_coalesced_p)
{
if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
continue;
bitmap_set_bit (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno));
}
ira_allocate_and_copy_costs
(&ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno),
conflict_cover_class,
@@ -581,6 +615,7 @@ assign_hard_reg (ira_allocno_t allocno, bool retry_p)
}
update_conflict_hard_regno_costs (full_costs, cover_class, false);
min_cost = min_full_cost = INT_MAX;
/* We don't care about giving callee-saved registers to allocnos not
living through calls, because call-clobbered registers are
allocated first (it is usual practice to put them first in
@@ -588,14 +623,34 @@ assign_hard_reg (ira_allocno_t allocno, bool retry_p)
for (i = 0; i < class_size; i++)
{
hard_regno = ira_class_hard_regs[cover_class][i];
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (allocno)];
#ifdef STACK_REGS
if (no_stack_reg_p
&& FIRST_STACK_REG <= hard_regno && hard_regno <= LAST_STACK_REG)
continue;
#endif
if (! ira_hard_reg_not_in_set_p (hard_regno, mode, conflicting_regs)
|| TEST_HARD_REG_BIT (prohibited_class_mode_regs[cover_class][mode],
hard_regno))
if (TEST_HARD_REG_BIT (prohibited_class_mode_regs[cover_class][mode],
hard_regno))
continue;
for (j = 0; j < nregs; j++)
{
int k;
int set_to_test_start = 0, set_to_test_end = nwords;
if (nregs == nwords)
{
if (WORDS_BIG_ENDIAN)
set_to_test_start = nwords - j - 1;
else
set_to_test_start = j;
set_to_test_end = set_to_test_start + 1;
}
for (k = set_to_test_start; k < set_to_test_end; k++)
if (TEST_HARD_REG_BIT (conflicting_regs[k], hard_regno + j))
break;
if (k != set_to_test_end)
break;
}
if (j != nregs)
continue;
cost = costs[i];
full_cost = full_costs[i];
@@ -876,7 +931,7 @@ static splay_tree uncolorable_allocnos_splay_tree[N_REG_CLASSES];
static void
push_allocno_to_stack (ira_allocno_t allocno)
{
int left_conflicts_size, conflict_size, size;
int size;
ira_allocno_t a;
enum reg_class cover_class;
@@ -886,77 +941,90 @@ push_allocno_to_stack (ira_allocno_t allocno)
if (cover_class == NO_REGS)
return;
size = ira_reg_class_nregs[cover_class][ALLOCNO_MODE (allocno)];
if (ALLOCNO_NUM_OBJECTS (allocno) > 1)
{
/* We will deal with the subwords individually. */
gcc_assert (size == ALLOCNO_NUM_OBJECTS (allocno));
size = 1;
}
if (allocno_coalesced_p)
bitmap_clear (processed_coalesced_allocno_bitmap);
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
int i, n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
ira_object_t obj = ALLOCNO_OBJECT (a, i);
int conflict_size;
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
conflict_allocno = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (bitmap_bit_p (coloring_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
int left_conflicts_size;
conflict_allocno = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (!bitmap_bit_p (coloring_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
continue;
ira_assert (cover_class
== ALLOCNO_COVER_CLASS (conflict_allocno));
if (allocno_coalesced_p)
{
conflict_obj = ALLOCNO_OBJECT (conflict_allocno,
OBJECT_SUBWORD (conflict_obj));
if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
OBJECT_CONFLICT_ID (conflict_obj)))
continue;
bitmap_set_bit (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno));
OBJECT_CONFLICT_ID (conflict_obj));
}
if (ALLOCNO_IN_GRAPH_P (conflict_allocno)
&& ! ALLOCNO_ASSIGNED_P (conflict_allocno))
if (!ALLOCNO_IN_GRAPH_P (conflict_allocno)
|| ALLOCNO_ASSIGNED_P (conflict_allocno))
continue;
left_conflicts_size = ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno);
conflict_size
= (ira_reg_class_nregs
[cover_class][ALLOCNO_MODE (conflict_allocno)]);
ira_assert (left_conflicts_size >= size);
if (left_conflicts_size + conflict_size
<= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
{
ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) -= size;
continue;
}
left_conflicts_size -= size;
if (uncolorable_allocnos_splay_tree[cover_class] != NULL
&& !ALLOCNO_SPLAY_REMOVED_P (conflict_allocno)
&& USE_SPLAY_P (cover_class))
{
left_conflicts_size
= ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno);
conflict_size
= (ira_reg_class_nregs
[cover_class][ALLOCNO_MODE (conflict_allocno)]);
ira_assert
(ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) >= size);
if (left_conflicts_size + conflict_size
<= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
{
ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) -= size;
continue;
}
left_conflicts_size
= ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) - size;
if (uncolorable_allocnos_splay_tree[cover_class] != NULL
&& !ALLOCNO_SPLAY_REMOVED_P (conflict_allocno)
&& USE_SPLAY_P (cover_class))
{
ira_assert
(splay_tree_lookup
(uncolorable_allocnos_splay_tree[cover_class],
(splay_tree_key) conflict_allocno) != NULL);
splay_tree_remove
(uncolorable_allocnos_splay_tree[cover_class],
(splay_tree_key) conflict_allocno);
ALLOCNO_SPLAY_REMOVED_P (conflict_allocno) = true;
VEC_safe_push (ira_allocno_t, heap,
removed_splay_allocno_vec,
conflict_allocno);
}
ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno)
= left_conflicts_size;
if (left_conflicts_size + conflict_size
<= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
{
delete_allocno_from_bucket
(conflict_allocno, &uncolorable_allocno_bucket);
add_allocno_to_ordered_bucket
(conflict_allocno, &colorable_allocno_bucket);
}
(splay_tree_lookup
(uncolorable_allocnos_splay_tree[cover_class],
(splay_tree_key) conflict_allocno) != NULL);
splay_tree_remove
(uncolorable_allocnos_splay_tree[cover_class],
(splay_tree_key) conflict_allocno);
ALLOCNO_SPLAY_REMOVED_P (conflict_allocno) = true;
VEC_safe_push (ira_allocno_t, heap,
removed_splay_allocno_vec,
conflict_allocno);
}
ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno)
= left_conflicts_size;
if (left_conflicts_size + conflict_size
<= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
{
delete_allocno_from_bucket
(conflict_allocno, &uncolorable_allocno_bucket);
add_allocno_to_ordered_bucket
(conflict_allocno, &colorable_allocno_bucket);
}
}
}
@@ -1370,6 +1438,28 @@ pop_allocnos_from_stack (void)
}
}
/* Loop over all coalesced allocnos of ALLOCNO and their subobjects, collecting
total hard register conflicts in PSET (which the caller must initialize). */
static void
all_conflicting_hard_regs_coalesced (ira_allocno_t allocno, HARD_REG_SET *pset)
{
ira_allocno_t a;
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
int i;
int n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
IOR_HARD_REG_SET (*pset, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
}
if (a == allocno)
break;
}
}
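The traversal above starts one past the head of the circular "next coalesced allocno" ring and stops after it wraps back to the head. A toy model of that loop shape, with array indices standing in for allocnos and a bitmask for a HARD_REG_SET (all names invented for this sketch):

```c
#include <assert.h>

enum { N_ALLOCNOS = 4 };
/* Circular links, as in ALLOCNO_NEXT_COALESCED_ALLOCNO.  */
static int next_coalesced[N_ALLOCNOS];
/* Per-allocno conflict bitmask, standing in for the per-object
   OBJECT_TOTAL_CONFLICT_HARD_REGS sets.  */
static unsigned conflict_regs[N_ALLOCNOS];

/* OR together the conflict sets of every allocno coalesced with HEAD,
   using the same start-one-past, stop-at-head traversal as
   all_conflicting_hard_regs_coalesced.  */
static unsigned
union_coalesced_conflicts (int head)
{
  unsigned result = 0;
  for (int a = next_coalesced[head];; a = next_coalesced[a])
    {
      result |= conflict_regs[a];
      if (a == head)
        break;
    }
  return result;
}
```

Note that a singleton ring (an uncoalesced allocno pointing at itself) is still visited exactly once, which is why the exit test comes after the body.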
/* Set up number of available hard registers for ALLOCNO. */
static void
setup_allocno_available_regs_num (ira_allocno_t allocno)
@@ -1377,7 +1467,6 @@ setup_allocno_available_regs_num (ira_allocno_t allocno)
int i, n, hard_regs_num, hard_regno;
enum machine_mode mode;
enum reg_class cover_class;
ira_allocno_t a;
HARD_REG_SET temp_set;
cover_class = ALLOCNO_COVER_CLASS (allocno);
@@ -1387,14 +1476,8 @@ setup_allocno_available_regs_num (ira_allocno_t allocno)
CLEAR_HARD_REG_SET (temp_set);
ira_assert (ALLOCNO_FIRST_COALESCED_ALLOCNO (allocno) == allocno);
hard_regs_num = ira_class_hard_regs_num[cover_class];
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
if (a == allocno)
break;
}
all_conflicting_hard_regs_coalesced (allocno, &temp_set);
mode = ALLOCNO_MODE (allocno);
for (n = 0, i = hard_regs_num - 1; i >= 0; i--)
{
@@ -1423,16 +1506,11 @@ setup_allocno_left_conflicts_size (ira_allocno_t allocno)
hard_regs_num = ira_class_hard_regs_num[cover_class];
CLEAR_HARD_REG_SET (temp_set);
ira_assert (ALLOCNO_FIRST_COALESCED_ALLOCNO (allocno) == allocno);
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
if (a == allocno)
break;
}
all_conflicting_hard_regs_coalesced (allocno, &temp_set);
AND_HARD_REG_SET (temp_set, reg_class_contents[cover_class]);
AND_COMPL_HARD_REG_SET (temp_set, ira_no_alloc_regs);
conflict_allocnos_size = 0;
if (! hard_reg_set_empty_p (temp_set))
for (i = 0; i < (int) hard_regs_num; i++)
@@ -1453,19 +1531,23 @@ setup_allocno_left_conflicts_size (ira_allocno_t allocno)
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
int n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
ira_object_t obj = ALLOCNO_OBJECT (a, i);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
conflict_allocno
= ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (bitmap_bit_p (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
conflict_allocno
= ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (!bitmap_bit_p (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
continue;
ira_assert (cover_class
== ALLOCNO_COVER_CLASS (conflict_allocno));
if (allocno_coalesced_p)
@@ -1476,6 +1558,7 @@ setup_allocno_left_conflicts_size (ira_allocno_t allocno)
bitmap_set_bit (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno));
}
if (! ALLOCNO_ASSIGNED_P (conflict_allocno))
conflict_allocnos_size
+= (ira_reg_class_nregs
@@ -1485,7 +1568,7 @@ setup_allocno_left_conflicts_size (ira_allocno_t allocno)
{
int last = (hard_regno
+ hard_regno_nregs
[hard_regno][ALLOCNO_MODE (conflict_allocno)]);
[hard_regno][ALLOCNO_MODE (conflict_allocno)]);
while (hard_regno < last)
{
@@ -1568,9 +1651,9 @@ merge_allocnos (ira_allocno_t a1, ira_allocno_t a2)
ALLOCNO_NEXT_COALESCED_ALLOCNO (last) = next;
}
/* Return TRUE if there are conflicting allocnos from two sets of
coalesced allocnos given correspondingly by allocnos A1 and A2. If
RELOAD_P is TRUE, we use live ranges to find conflicts because
/* Given two sets of coalesced allocnos, A1 and A2, this function
determines whether any conflicts exist between the two sets.
If RELOAD_P is TRUE, we use live ranges to find conflicts because
conflicts are represented only for allocnos of the same cover class
and during the reload pass we coalesce allocnos for sharing stack
memory slots. */
@@ -1578,15 +1661,20 @@ static bool
coalesced_allocno_conflict_p (ira_allocno_t a1, ira_allocno_t a2,
bool reload_p)
{
ira_allocno_t a;
ira_allocno_t a, conflict_allocno;
/* When testing for conflicts, it is sufficient to examine only the
subobjects of order 0, due to the canonicalization of conflicts
we do in record_object_conflict. */
bitmap_clear (processed_coalesced_allocno_bitmap);
if (allocno_coalesced_p)
{
bitmap_clear (processed_coalesced_allocno_bitmap);
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a1);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
bitmap_set_bit (processed_coalesced_allocno_bitmap, ALLOCNO_NUM (a));
bitmap_set_bit (processed_coalesced_allocno_bitmap,
OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (a, 0)));
if (a == a1)
break;
}
@@ -1596,7 +1684,6 @@ coalesced_allocno_conflict_p (ira_allocno_t a1, ira_allocno_t a2,
{
if (reload_p)
{
ira_allocno_t conflict_allocno;
for (conflict_allocno = ALLOCNO_NEXT_COALESCED_ALLOCNO (a1);;
conflict_allocno
= ALLOCNO_NEXT_COALESCED_ALLOCNO (conflict_allocno))
@@ -1610,20 +1697,17 @@ coalesced_allocno_conflict_p (ira_allocno_t a1, ira_allocno_t a2,
}
else
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t a_obj = ALLOCNO_OBJECT (a, 0);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
if (conflict_allocno == a1
|| (allocno_coalesced_p
&& bitmap_bit_p (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno))))
return true;
}
FOR_EACH_OBJECT_CONFLICT (a_obj, conflict_obj, oci)
if (conflict_obj == ALLOCNO_OBJECT (a1, 0)
|| (allocno_coalesced_p
&& bitmap_bit_p (processed_coalesced_allocno_bitmap,
OBJECT_CONFLICT_ID (conflict_obj))))
return true;
}
if (a == a2)
break;
}
@@ -1760,6 +1844,8 @@ setup_allocno_priorities (ira_allocno_t *consideration_allocnos, int n)
{
a = consideration_allocnos[i];
length = ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a);
if (ALLOCNO_NUM_OBJECTS (a) > 1)
length /= ALLOCNO_NUM_OBJECTS (a);
if (length <= 0)
length = 1;
allocno_priorities[ALLOCNO_NUM (a)]
@@ -1969,9 +2055,8 @@ color_pass (ira_loop_tree_node_t loop_tree_node)
EXECUTE_IF_SET_IN_BITMAP (consideration_allocno_bitmap, 0, j, bi)
{
a = ira_allocnos[j];
if (! ALLOCNO_ASSIGNED_P (a))
continue;
bitmap_clear_bit (coloring_allocno_bitmap, ALLOCNO_NUM (a));
if (ALLOCNO_ASSIGNED_P (a))
bitmap_clear_bit (coloring_allocno_bitmap, ALLOCNO_NUM (a));
}
/* Color all mentioned allocnos including transparent ones. */
color_allocnos ();
@@ -2322,9 +2407,7 @@ ira_reassign_conflict_allocnos (int start_regno)
allocnos_to_color_num = 0;
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
int n = ALLOCNO_NUM_OBJECTS (a);
if (! ALLOCNO_ASSIGNED_P (a)
&& ! bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (a)))
@@ -2343,15 +2426,21 @@
if (ALLOCNO_REGNO (a) < start_regno
|| (cover_class = ALLOCNO_COVER_CLASS (a)) == NO_REGS)
continue;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
for (i = 0; i < n; i++)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
ira_assert (ira_reg_classes_intersect_p
[cover_class][ALLOCNO_COVER_CLASS (conflict_a)]);
if (bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (conflict_a)))
continue;
bitmap_set_bit (allocnos_to_color, ALLOCNO_NUM (conflict_a));
sorted_allocnos[allocnos_to_color_num++] = conflict_a;
ira_object_t obj = ALLOCNO_OBJECT (a, i);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
ira_assert (ira_reg_classes_intersect_p
[cover_class][ALLOCNO_COVER_CLASS (conflict_a)]);
if (bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (conflict_a)))
continue;
bitmap_set_bit (allocnos_to_color, ALLOCNO_NUM (conflict_a));
sorted_allocnos[allocnos_to_color_num++] = conflict_a;
}
}
}
ira_free_bitmap (allocnos_to_color);
@@ -2539,10 +2628,15 @@ slot_coalesced_allocno_live_ranges_intersect_p (ira_allocno_t allocno, int n)
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
if (ira_live_ranges_intersect_p
(slot_coalesced_allocnos_live_ranges[n], OBJECT_LIVE_RANGES (obj)))
return true;
int i;
int nr = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < nr; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
if (ira_live_ranges_intersect_p (slot_coalesced_allocnos_live_ranges[n],
OBJECT_LIVE_RANGES (obj)))
return true;
}
if (a == allocno)
break;
}
@@ -2554,7 +2648,7 @@ slot_coalesced_allocno_live_ranges_intersect_p (ira_allocno_t allocno, int n)
static void
setup_slot_coalesced_allocno_live_ranges (ira_allocno_t allocno)
{
int n;
int i, n;
ira_allocno_t a;
live_range_t r;
@@ -2562,11 +2656,15 @@ setup_slot_coalesced_allocno_live_ranges (ira_allocno_t allocno)
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
r = ira_copy_live_range_list (OBJECT_LIVE_RANGES (obj));
slot_coalesced_allocnos_live_ranges[n]
= ira_merge_live_ranges
(slot_coalesced_allocnos_live_ranges[n], r);
int nr = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < nr; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
r = ira_copy_live_range_list (OBJECT_LIVE_RANGES (obj));
slot_coalesced_allocnos_live_ranges[n]
= ira_merge_live_ranges
(slot_coalesced_allocnos_live_ranges[n], r);
}
if (a == allocno)
break;
}
@@ -2823,13 +2921,19 @@ allocno_reload_assign (ira_allocno_t a, HARD_REG_SET forbidden_regs)
int hard_regno;
enum reg_class cover_class;
int regno = ALLOCNO_REGNO (a);
HARD_REG_SET saved;
ira_object_t obj = ALLOCNO_OBJECT (a);
HARD_REG_SET saved[2];
int i, n;
COPY_HARD_REG_SET (saved, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), forbidden_regs);
if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
COPY_HARD_REG_SET (saved[i], OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), forbidden_regs);
if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
ALLOCNO_ASSIGNED_P (a) = false;
cover_class = ALLOCNO_COVER_CLASS (a);
update_curr_costs (a);
@@ -2868,7 +2972,11 @@ allocno_reload_assign (ira_allocno_t a, HARD_REG_SET forbidden_regs)
}
else if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
fprintf (ira_dump_file, "\n");
COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), saved);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), saved[i]);
}
return reg_renumber[regno] >= 0;
}
@@ -2916,25 +3024,31 @@ ira_reassign_pseudos (int *spilled_pseudo_regs, int num,
for (i = 0, n = num; i < n; i++)
{
ira_object_t obj, conflict_obj;
ira_object_conflict_iterator oci;
int nr, j;
int regno = spilled_pseudo_regs[i];
bitmap_set_bit (temp, regno);
a = ira_regno_allocno_map[regno];
obj = ALLOCNO_OBJECT (a);
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
nr = ALLOCNO_NUM_OBJECTS (a);
for (j = 0; j < nr; j++)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
if (ALLOCNO_HARD_REGNO (conflict_a) < 0
&& ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
&& ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
ira_object_t conflict_obj;
ira_object_t obj = ALLOCNO_OBJECT (a, j);
ira_object_conflict_iterator oci;
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
/* ?!? This seems wrong. */
bitmap_set_bit (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_a));
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
if (ALLOCNO_HARD_REGNO (conflict_a) < 0
&& ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
&& ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
{
spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
/* ?!? This seems wrong. */
bitmap_set_bit (consideration_allocno_bitmap,
ALLOCNO_NUM (conflict_a));
}
}
}
}
@@ -3147,7 +3261,7 @@ calculate_spill_cost (int *regnos, rtx in, rtx out, rtx insn,
hard_regno = reg_renumber[regno];
ira_assert (hard_regno >= 0);
a = ira_regno_allocno_map[regno];
length += ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a);
length += ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a) / ALLOCNO_NUM_OBJECTS (a);
cost += ALLOCNO_MEMORY_COST (a) - ALLOCNO_COVER_CLASS_COST (a);
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
for (j = 0; j < nregs; j++)
@@ -3301,13 +3415,20 @@ fast_allocation (void)
allocno_priority_compare_func);
for (i = 0; i < num; i++)
{
ira_object_t obj;
int nr, l;
a = sorted_allocnos[i];
obj = ALLOCNO_OBJECT (a);
COPY_HARD_REG_SET (conflict_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (j = r->start; j <= r->finish; j++)
IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
nr = ALLOCNO_NUM_OBJECTS (a);
CLEAR_HARD_REG_SET (conflict_hard_regs);
for (l = 0; l < nr; l++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, l);
IOR_HARD_REG_SET (conflict_hard_regs,
OBJECT_CONFLICT_HARD_REGS (obj));
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (j = r->start; j <= r->finish; j++)
IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
}
cover_class = ALLOCNO_COVER_CLASS (a);
ALLOCNO_ASSIGNED_P (a) = true;
ALLOCNO_HARD_REGNO (a) = -1;
@@ -3332,10 +3453,14 @@ fast_allocation (void)
(prohibited_class_mode_regs[cover_class][mode], hard_regno)))
continue;
ALLOCNO_HARD_REGNO (a) = hard_regno;
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (k = r->start; k <= r->finish; k++)
IOR_HARD_REG_SET (used_hard_regs[k],
ira_reg_mode_hard_regset[hard_regno][mode]);
for (l = 0; l < nr; l++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, l);
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (k = r->start; k <= r->finish; k++)
IOR_HARD_REG_SET (used_hard_regs[k],
ira_reg_mode_hard_regset[hard_regno][mode]);
}
break;
}
}

@@ -48,11 +48,11 @@ along with GCC; see the file COPYING3. If not see
allocno's conflict (can't go in the same hardware register).
Some arrays will be used as conflict bit vector of the
corresponding allocnos see function build_allocno_conflicts. */
corresponding allocnos see function build_object_conflicts. */
static IRA_INT_TYPE **conflicts;
/* Macro to test a conflict of C1 and C2 in `conflicts'. */
#define OBJECTS_CONFLICT_P(C1, C2) \
#define OBJECTS_CONFLICT_P(C1, C2) \
(OBJECT_MIN (C1) <= OBJECT_CONFLICT_ID (C2) \
&& OBJECT_CONFLICT_ID (C2) <= OBJECT_MAX (C1) \
&& TEST_MINMAX_SET_BIT (conflicts[OBJECT_CONFLICT_ID (C1)], \
@@ -60,6 +60,36 @@ static IRA_INT_TYPE **conflicts;
OBJECT_MIN (C1), OBJECT_MAX (C1)))
/* Record a conflict between objects OBJ1 and OBJ2. If necessary,
canonicalize the conflict by recording it for lower-order subobjects
of the corresponding allocnos. */
static void
record_object_conflict (ira_object_t obj1, ira_object_t obj2)
{
ira_allocno_t a1 = OBJECT_ALLOCNO (obj1);
ira_allocno_t a2 = OBJECT_ALLOCNO (obj2);
int w1 = OBJECT_SUBWORD (obj1);
int w2 = OBJECT_SUBWORD (obj2);
int id1, id2;
/* Canonicalize the conflict. If two identically-numbered words
conflict, always record this as a conflict between words 0. That
is the only information we need, and it is easier to test for if
it is collected in each allocno's lowest-order object. */
if (w1 == w2 && w1 > 0)
{
obj1 = ALLOCNO_OBJECT (a1, 0);
obj2 = ALLOCNO_OBJECT (a2, 0);
}
id1 = OBJECT_CONFLICT_ID (obj1);
id2 = OBJECT_CONFLICT_ID (obj2);
SET_MINMAX_SET_BIT (conflicts[id1], id2, OBJECT_MIN (obj1),
OBJECT_MAX (obj1));
SET_MINMAX_SET_BIT (conflicts[id2], id1, OBJECT_MIN (obj2),
OBJECT_MAX (obj2));
}
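The canonicalization performed by record_object_conflict can be shown with a toy model: a symmetric conflict between identically-numbered nonzero subwords is redirected to subword 0 of each allocno. The dense matrix and plain-int object ids below are invented for this sketch; IRA stores conflicts in windowed bit vectors instead:

```c
#include <assert.h>
#include <stdbool.h>

enum { N_ALLOCNOS = 4, N_WORDS = 2 };
/* conflict[a1][w1][a2][w2] records that subword W1 of allocno A1
   conflicts with subword W2 of allocno A2.  */
static bool conflict[N_ALLOCNOS][N_WORDS][N_ALLOCNOS][N_WORDS];

/* Record a symmetric conflict between (A1, W1) and (A2, W2).  When the
   two subword numbers are equal and nonzero, redirect the conflict to
   subword 0 of each allocno, mirroring the canonicalization above.  */
static void
record_conflict (int a1, int w1, int a2, int w2)
{
  if (w1 == w2 && w1 > 0)
    w1 = w2 = 0;
  conflict[a1][w1][a2][w2] = true;
  conflict[a2][w2][a1][w1] = true;
}
```

Collecting same-word conflicts on the lowest-order object is what later lets consumers such as allocnos_conflict_for_copy_p test only subword 0 of each allocno.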
/* Build allocno conflict table by processing allocno live ranges.
Return true if the table was built. The table is not built if it
is too big. */
@@ -74,51 +104,53 @@ build_conflict_bit_table (void)
ira_allocno_t allocno;
ira_allocno_iterator ai;
sparseset objects_live;
ira_object_t obj;
ira_allocno_object_iterator aoi;
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (allocno);
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
FOR_EACH_ALLOCNO_OBJECT (allocno, obj, aoi)
{
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
continue;
conflict_bit_vec_words_num
= ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
if ((unsigned long long) allocated_words_num * sizeof (IRA_INT_TYPE)
> (unsigned long long) IRA_MAX_CONFLICT_TABLE_SIZE * 1024 * 1024)
{
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
fprintf
(ira_dump_file,
"+++Conflict table will be too big(>%dMB) -- don't use it\n",
IRA_MAX_CONFLICT_TABLE_SIZE);
return false;
}
}
conflict_bit_vec_words_num
= ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
if ((unsigned long long) allocated_words_num * sizeof (IRA_INT_TYPE)
> (unsigned long long) IRA_MAX_CONFLICT_TABLE_SIZE * 1024 * 1024)
{
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
fprintf
(ira_dump_file,
"+++Conflict table will be too big(>%dMB) -- don't use it\n",
IRA_MAX_CONFLICT_TABLE_SIZE);
return false;
}
}
conflicts = (IRA_INT_TYPE **) ira_allocate (sizeof (IRA_INT_TYPE *)
* ira_objects_num);
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (allocno);
int id = OBJECT_CONFLICT_ID (obj);
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
{
conflicts[id] = NULL;
continue;
}
conflict_bit_vec_words_num
= ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
conflicts[id]
= (IRA_INT_TYPE *) ira_allocate (sizeof (IRA_INT_TYPE)
* conflict_bit_vec_words_num);
memset (conflicts[id], 0,
sizeof (IRA_INT_TYPE) * conflict_bit_vec_words_num);
}
FOR_EACH_ALLOCNO_OBJECT (allocno, obj, aoi)
{
int id = OBJECT_CONFLICT_ID (obj);
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
{
conflicts[id] = NULL;
continue;
}
conflict_bit_vec_words_num
= ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
conflicts[id]
= (IRA_INT_TYPE *) ira_allocate (sizeof (IRA_INT_TYPE)
* conflict_bit_vec_words_num);
memset (conflicts[id], 0,
sizeof (IRA_INT_TYPE) * conflict_bit_vec_words_num);
}
object_set_words = (ira_objects_num + IRA_INT_BITS - 1) / IRA_INT_BITS;
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
@@ -137,33 +169,27 @@ build_conflict_bit_table (void)
ira_allocno_t allocno = OBJECT_ALLOCNO (obj);
int id = OBJECT_CONFLICT_ID (obj);
gcc_assert (id < ira_objects_num);
cover_class = ALLOCNO_COVER_CLASS (allocno);
sparseset_set_bit (objects_live, id);
EXECUTE_IF_SET_IN_SPARSESET (objects_live, j)
{
ira_object_t live_cr = ira_object_id_map[j];
ira_allocno_t live_a = OBJECT_ALLOCNO (live_cr);
ira_object_t live_obj = ira_object_id_map[j];
ira_allocno_t live_a = OBJECT_ALLOCNO (live_obj);
enum reg_class live_cover_class = ALLOCNO_COVER_CLASS (live_a);
if (ira_reg_classes_intersect_p[cover_class][live_cover_class]
/* Don't set up conflict for the allocno with itself. */
&& id != (int) j)
&& live_a != allocno)
{
SET_MINMAX_SET_BIT (conflicts[id], j,
OBJECT_MIN (obj),
OBJECT_MAX (obj));
SET_MINMAX_SET_BIT (conflicts[j], id,
OBJECT_MIN (live_cr),
OBJECT_MAX (live_cr));
record_object_conflict (obj, live_obj);
}
}
}
for (r = ira_finish_point_ranges[i]; r != NULL; r = r->finish_next)
{
ira_object_t obj = r->object;
sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
}
sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (r->object));
}
sparseset_free (objects_live);
return true;
@@ -173,10 +199,13 @@
register due to conflicts. */
static bool
allocnos_conflict_p (ira_allocno_t a1, ira_allocno_t a2)
allocnos_conflict_for_copy_p (ira_allocno_t a1, ira_allocno_t a2)
{
ira_object_t obj1 = ALLOCNO_OBJECT (a1);
ira_object_t obj2 = ALLOCNO_OBJECT (a2);
/* Due to the fact that we canonicalize conflicts (see
record_object_conflict), we only need to test for conflicts of
the lowest order words. */
ira_object_t obj1 = ALLOCNO_OBJECT (a1, 0);
ira_object_t obj2 = ALLOCNO_OBJECT (a2, 0);
return OBJECTS_CONFLICT_P (obj1, obj2);
}
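The comment's canonicalization argument can be illustrated with a toy model: if every recorded conflict is also mirrored onto the two allocnos' lowest-order objects, a copy-eligibility test never has to look at higher subwords. A sketch under that assumption (the matrix representation and the name record_conflict are hypothetical, not the actual record_object_conflict):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_OBJS 8

/* Toy symmetric conflict matrix indexed by object id.  */
static bool conflict[MAX_OBJS][MAX_OBJS];

/* An allocno with up to two subobjects, identified by their ids.  */
typedef struct { int obj_id[2]; int nobj; } allocno_t;

/* Record a conflict between subword W1 of A1 and subword W2 of A2,
   canonicalizing by also marking the lowest-order objects.  */
static void
record_conflict (allocno_t *a1, int w1, allocno_t *a2, int w2)
{
  assert (w1 < a1->nobj && w2 < a2->nobj);
  conflict[a1->obj_id[w1]][a2->obj_id[w2]] = true;
  conflict[a2->obj_id[w2]][a1->obj_id[w1]] = true;
  conflict[a1->obj_id[0]][a2->obj_id[0]] = true;
  conflict[a2->obj_id[0]][a1->obj_id[0]] = true;
}

/* Thanks to canonicalization, testing subword 0 of each allocno is
   enough to decide whether any pair of subwords conflicts.  */
static bool
allocnos_conflict_for_copy_p (allocno_t *a1, allocno_t *a2)
{
  return conflict[a1->obj_id[0]][a2->obj_id[0]];
}
```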
@@ -387,7 +416,7 @@ process_regs_for_copy (rtx reg1, rtx reg2, bool constraint_p,
{
ira_allocno_t a1 = ira_curr_regno_allocno_map[REGNO (reg1)];
ira_allocno_t a2 = ira_curr_regno_allocno_map[REGNO (reg2)];
if (!allocnos_conflict_p (a1, a2) && offset1 == offset2)
if (!allocnos_conflict_for_copy_p (a1, a2) && offset1 == offset2)
{
cp = ira_add_allocno_copy (a1, a2, freq, constraint_p, insn,
ira_curr_loop_tree_node);
@@ -560,7 +589,7 @@ propagate_copies (void)
parent_a1 = ira_parent_or_cap_allocno (a1);
parent_a2 = ira_parent_or_cap_allocno (a2);
ira_assert (parent_a1 != NULL && parent_a2 != NULL);
if (! allocnos_conflict_p (parent_a1, parent_a2))
if (! allocnos_conflict_for_copy_p (parent_a1, parent_a2))
ira_add_allocno_copy (parent_a1, parent_a2, cp->freq,
cp->constraint_p, cp->insn, cp->loop_tree_node);
}
@@ -570,23 +599,20 @@ propagate_copies (void)
static ira_object_t *collected_conflict_objects;
/* Build conflict vectors or bit conflict vectors (whatever is more
profitable) for allocno A from the conflict table and propagate the
conflicts to upper level allocno. */
profitable) for object OBJ from the conflict table. */
static void
build_allocno_conflicts (ira_allocno_t a)
build_object_conflicts (ira_object_t obj)
{
int i, px, parent_num;
int conflict_bit_vec_words_num;
ira_allocno_t parent_a, another_parent_a;
ira_object_t *vec;
IRA_INT_TYPE *allocno_conflicts;
ira_object_t obj, parent_obj;
ira_object_t parent_obj;
ira_allocno_t a = OBJECT_ALLOCNO (obj);
IRA_INT_TYPE *object_conflicts;
minmax_set_iterator asi;
obj = ALLOCNO_OBJECT (a);
allocno_conflicts = conflicts[OBJECT_CONFLICT_ID (obj)];
object_conflicts = conflicts[OBJECT_CONFLICT_ID (obj)];
px = 0;
FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
FOR_EACH_BIT_IN_MINMAX_SET (object_conflicts,
OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
ira_object_t another_obj = ira_object_id_map[i];
@@ -597,6 +623,7 @@ build_allocno_conflicts (ira_allocno_t a)
}
if (ira_conflict_vector_profitable_p (obj, px))
{
ira_object_t *vec;
ira_allocate_conflict_vec (obj, px);
vec = OBJECT_CONFLICT_VEC (obj);
memcpy (vec, collected_conflict_objects, sizeof (ira_object_t) * px);
@@ -605,7 +632,8 @@ build_allocno_conflicts (ira_allocno_t a)
}
else
{
OBJECT_CONFLICT_ARRAY (obj) = allocno_conflicts;
int conflict_bit_vec_words_num;
OBJECT_CONFLICT_ARRAY (obj) = object_conflicts;
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
conflict_bit_vec_words_num = 0;
else
@@ -615,28 +643,35 @@ build_allocno_conflicts (ira_allocno_t a)
OBJECT_CONFLICT_ARRAY_SIZE (obj)
= conflict_bit_vec_words_num * sizeof (IRA_INT_TYPE);
}
parent_a = ira_parent_or_cap_allocno (a);
if (parent_a == NULL)
return;
ira_assert (ALLOCNO_COVER_CLASS (a) == ALLOCNO_COVER_CLASS (parent_a));
parent_obj = ALLOCNO_OBJECT (parent_a);
ira_assert (ALLOCNO_NUM_OBJECTS (a) == ALLOCNO_NUM_OBJECTS (parent_a));
parent_obj = ALLOCNO_OBJECT (parent_a, OBJECT_SUBWORD (obj));
parent_num = OBJECT_CONFLICT_ID (parent_obj);
FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
FOR_EACH_BIT_IN_MINMAX_SET (object_conflicts,
OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
ira_object_t another_obj = ira_object_id_map[i];
ira_allocno_t another_a = OBJECT_ALLOCNO (another_obj);
int another_word = OBJECT_SUBWORD (another_obj);
ira_assert (ira_reg_classes_intersect_p
[ALLOCNO_COVER_CLASS (a)][ALLOCNO_COVER_CLASS (another_a)]);
another_parent_a = ira_parent_or_cap_allocno (another_a);
if (another_parent_a == NULL)
continue;
ira_assert (ALLOCNO_NUM (another_parent_a) >= 0);
ira_assert (ALLOCNO_COVER_CLASS (another_a)
== ALLOCNO_COVER_CLASS (another_parent_a));
ira_assert (ALLOCNO_NUM_OBJECTS (another_a)
== ALLOCNO_NUM_OBJECTS (another_parent_a));
SET_MINMAX_SET_BIT (conflicts[parent_num],
OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (another_parent_a)),
OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (another_parent_a,
another_word)),
OBJECT_MIN (parent_obj),
OBJECT_MAX (parent_obj));
}
@@ -658,9 +693,18 @@ build_conflicts (void)
a != NULL;
a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
{
build_allocno_conflicts (a);
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
build_allocno_conflicts (cap);
int j, nregs = ALLOCNO_NUM_OBJECTS (a);
for (j = 0; j < nregs; j++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, j);
build_object_conflicts (obj);
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
{
ira_object_t cap_obj = ALLOCNO_OBJECT (cap, j);
gcc_assert (ALLOCNO_NUM_OBJECTS (cap) == ALLOCNO_NUM_OBJECTS (a));
build_object_conflicts (cap_obj);
}
}
}
ira_free (collected_conflict_objects);
}
@@ -700,9 +744,8 @@ static void
print_allocno_conflicts (FILE * file, bool reg_p, ira_allocno_t a)
{
HARD_REG_SET conflicting_hard_regs;
ira_object_t obj, conflict_obj;
ira_object_conflict_iterator oci;
basic_block bb;
int n, i;
if (reg_p)
fprintf (file, ";; r%d", ALLOCNO_REGNO (a));
@@ -717,39 +760,52 @@ print_allocno_conflicts (FILE * file, bool reg_p, ira_allocno_t a)
}
fputs (" conflicts:", file);
obj = ALLOCNO_OBJECT (a);
if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
if (reg_p)
fprintf (file, " r%d,", ALLOCNO_REGNO (conflict_a));
else
{
fprintf (file, " a%d(r%d,", ALLOCNO_NUM (conflict_a),
ALLOCNO_REGNO (conflict_a));
if ((bb = ALLOCNO_LOOP_TREE_NODE (conflict_a)->bb) != NULL)
fprintf (file, "b%d)", bb->index);
else
fprintf (file, "l%d)",
ALLOCNO_LOOP_TREE_NODE (conflict_a)->loop->num);
}
}
n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
print_hard_reg_set (file, "\n;; total conflict hard regs:",
conflicting_hard_regs);
if (OBJECT_CONFLICT_ARRAY (obj) == NULL)
continue;
if (n > 1)
fprintf (file, "\n;; subobject %d:", i);
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
if (reg_p)
fprintf (file, " r%d,", ALLOCNO_REGNO (conflict_a));
else
{
fprintf (file, " a%d(r%d", ALLOCNO_NUM (conflict_a),
ALLOCNO_REGNO (conflict_a));
if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1)
fprintf (file, ",w%d", OBJECT_SUBWORD (conflict_obj));
if ((bb = ALLOCNO_LOOP_TREE_NODE (conflict_a)->bb) != NULL)
fprintf (file, ",b%d", bb->index);
else
fprintf (file, ",l%d",
ALLOCNO_LOOP_TREE_NODE (conflict_a)->loop->num);
putc (')', file);
}
}
COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
print_hard_reg_set (file, "\n;; total conflict hard regs:",
conflicting_hard_regs);
COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
print_hard_reg_set (file, ";; conflict hard regs:",
conflicting_hard_regs);
putc ('\n', file);
}
COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
print_hard_reg_set (file, ";; conflict hard regs:",
conflicting_hard_regs);
putc ('\n', file);
}
/* Print information about allocno or only regno (if REG_P) conflicts
@@ -799,7 +855,7 @@ ira_build_conflicts (void)
propagate_copies ();
/* Now we can free memory for the conflict table (see function
build_allocno_conflicts for details). */
build_object_conflicts for details). */
FOR_EACH_OBJECT (obj, oi)
{
if (OBJECT_CONFLICT_ARRAY (obj) != conflicts[OBJECT_CONFLICT_ID (obj)])
@@ -819,29 +875,38 @@ ira_build_conflicts (void)
}
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
reg_attrs *attrs;
tree decl;
int i, n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
reg_attrs *attrs = REG_ATTRS (regno_reg_rtx [ALLOCNO_REGNO (a)]);
tree decl;
if ((! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
/* For debugging purposes don't put user defined variables in
callee-clobbered registers. */
|| (optimize == 0
&& (attrs = REG_ATTRS (regno_reg_rtx [ALLOCNO_REGNO (a)])) != NULL
&& (decl = attrs->decl) != NULL
&& VAR_OR_FUNCTION_DECL_P (decl)
&& ! DECL_ARTIFICIAL (decl)))
{
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), call_used_reg_set);
}
else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
{
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
no_caller_save_reg_set);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), no_caller_save_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
if ((! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
/* For debugging purposes don't put user defined variables in
callee-clobbered registers. */
|| (optimize == 0
&& attrs != NULL
&& (decl = attrs->decl) != NULL
&& VAR_OR_FUNCTION_DECL_P (decl)
&& ! DECL_ARTIFICIAL (decl)))
{
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
{
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
no_caller_save_reg_set);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
temp_hard_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
no_caller_save_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
temp_hard_reg_set);
}
}
}
if (optimize && ira_conflicts_p


@@ -715,8 +715,8 @@ modify_move_list (move_t list)
&& ALLOCNO_HARD_REGNO
(hard_regno_last_set[hard_regno + i]->to) >= 0)
{
int n, j;
ira_allocno_t new_allocno;
ira_object_t new_obj;
set_move = hard_regno_last_set[hard_regno + i];
/* It does not matter what loop_tree_node (of TO or
@@ -729,19 +729,25 @@ modify_move_list (move_t list)
ALLOCNO_MODE (new_allocno) = ALLOCNO_MODE (set_move->to);
ira_set_allocno_cover_class
(new_allocno, ALLOCNO_COVER_CLASS (set_move->to));
ira_create_allocno_object (new_allocno);
ira_create_allocno_objects (new_allocno);
ALLOCNO_ASSIGNED_P (new_allocno) = true;
ALLOCNO_HARD_REGNO (new_allocno) = -1;
ALLOCNO_REG (new_allocno)
= create_new_reg (ALLOCNO_REG (set_move->to));
new_obj = ALLOCNO_OBJECT (new_allocno);
/* Make it possibly conflicting with all earlier
created allocnos. Cases where temporary allocnos
created to remove the cycles are quite rare. */
OBJECT_MIN (new_obj) = 0;
OBJECT_MAX (new_obj) = ira_objects_num - 1;
n = ALLOCNO_NUM_OBJECTS (new_allocno);
gcc_assert (n == ALLOCNO_NUM_OBJECTS (set_move->to));
for (j = 0; j < n; j++)
{
ira_object_t new_obj = ALLOCNO_OBJECT (new_allocno, j);
OBJECT_MIN (new_obj) = 0;
OBJECT_MAX (new_obj) = ira_objects_num - 1;
}
new_move = create_move (set_move->to, new_allocno);
set_move->to = new_allocno;
VEC_safe_push (move_t, heap, move_vec, new_move);
@@ -937,21 +943,26 @@ add_range_and_copies_from_move_list (move_t list, ira_loop_tree_node_t node,
{
ira_allocno_t from = move->from;
ira_allocno_t to = move->to;
ira_object_t from_obj = ALLOCNO_OBJECT (from);
ira_object_t to_obj = ALLOCNO_OBJECT (to);
if (OBJECT_CONFLICT_ARRAY (to_obj) == NULL)
{
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file, " Allocate conflicts for a%dr%d\n",
ALLOCNO_NUM (to), REGNO (ALLOCNO_REG (to)));
ira_allocate_object_conflicts (to_obj, n);
}
int nr, i;
bitmap_clear_bit (live_through, ALLOCNO_REGNO (from));
bitmap_clear_bit (live_through, ALLOCNO_REGNO (to));
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
nr = ALLOCNO_NUM_OBJECTS (to);
for (i = 0; i < nr; i++)
{
ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
if (OBJECT_CONFLICT_ARRAY (to_obj) == NULL)
{
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file, " Allocate conflicts for a%dr%d\n",
ALLOCNO_NUM (to), REGNO (ALLOCNO_REG (to)));
ira_allocate_object_conflicts (to_obj, n);
}
}
ior_hard_reg_conflicts (from, &hard_regs_live);
ior_hard_reg_conflicts (to, &hard_regs_live);
update_costs (from, true, freq);
update_costs (to, false, freq);
cp = ira_add_allocno_copy (from, to, freq, false, move->insn, NULL);
@@ -960,58 +971,73 @@ add_range_and_copies_from_move_list (move_t list, ira_loop_tree_node_t node,
cp->num, ALLOCNO_NUM (cp->first),
REGNO (ALLOCNO_REG (cp->first)), ALLOCNO_NUM (cp->second),
REGNO (ALLOCNO_REG (cp->second)));
r = OBJECT_LIVE_RANGES (from_obj);
if (r == NULL || r->finish >= 0)
nr = ALLOCNO_NUM_OBJECTS (from);
for (i = 0; i < nr; i++)
{
OBJECT_LIVE_RANGES (from_obj)
= ira_create_live_range (from_obj, start, ira_max_point, r);
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
start, ira_max_point, ALLOCNO_NUM (from),
REGNO (ALLOCNO_REG (from)));
}
else
{
r->finish = ira_max_point;
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
r->start, ira_max_point, ALLOCNO_NUM (from),
REGNO (ALLOCNO_REG (from)));
ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
r = OBJECT_LIVE_RANGES (from_obj);
if (r == NULL || r->finish >= 0)
{
ira_add_live_range_to_object (from_obj, start, ira_max_point);
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
start, ira_max_point, ALLOCNO_NUM (from),
REGNO (ALLOCNO_REG (from)));
}
else
{
r->finish = ira_max_point;
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
r->start, ira_max_point, ALLOCNO_NUM (from),
REGNO (ALLOCNO_REG (from)));
}
}
ira_max_point++;
OBJECT_LIVE_RANGES (to_obj)
= ira_create_live_range (to_obj, ira_max_point, -1,
OBJECT_LIVE_RANGES (to_obj));
nr = ALLOCNO_NUM_OBJECTS (to);
for (i = 0; i < nr; i++)
{
ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
ira_add_live_range_to_object (to_obj, ira_max_point, -1);
}
ira_max_point++;
}
for (move = list; move != NULL; move = move->next)
{
ira_object_t to_obj = ALLOCNO_OBJECT (move->to);
r = OBJECT_LIVE_RANGES (to_obj);
if (r->finish < 0)
int nr, i;
nr = ALLOCNO_NUM_OBJECTS (move->to);
for (i = 0; i < nr; i++)
{
r->finish = ira_max_point - 1;
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
r->start, r->finish, ALLOCNO_NUM (move->to),
REGNO (ALLOCNO_REG (move->to)));
ira_object_t to_obj = ALLOCNO_OBJECT (move->to, i);
r = OBJECT_LIVE_RANGES (to_obj);
if (r->finish < 0)
{
r->finish = ira_max_point - 1;
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
r->start, r->finish, ALLOCNO_NUM (move->to),
REGNO (ALLOCNO_REG (move->to)));
}
}
}
EXECUTE_IF_SET_IN_BITMAP (live_through, FIRST_PSEUDO_REGISTER, regno, bi)
{
ira_allocno_t to;
ira_object_t obj;
int nr, i;
a = node->regno_allocno_map[regno];
to = ALLOCNO_MEM_OPTIMIZED_DEST (a);
if (to != NULL)
if ((to = ALLOCNO_MEM_OPTIMIZED_DEST (a)) != NULL)
a = to;
obj = ALLOCNO_OBJECT (a);
OBJECT_LIVE_RANGES (obj)
= ira_create_live_range (obj, start, ira_max_point - 1,
OBJECT_LIVE_RANGES (obj));
nr = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < nr; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
ira_add_live_range_to_object (obj, start, ira_max_point - 1);
}
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf
(ira_dump_file,


@@ -192,7 +192,6 @@ extern ira_loop_tree_node_t ira_loop_nodes;
#define IRA_LOOP_NODE(loop) IRA_LOOP_NODE_BY_INDEX ((loop)->num)
/* The structure describes program points where a given allocno lives.
To save memory we store allocno conflicts only for the same cover
class allocnos which is enough to assign hard registers. To find
@@ -201,7 +200,7 @@ extern ira_loop_tree_node_t ira_loop_nodes;
intersected, the allocnos are in conflict. */
struct live_range
{
/* Allocno whose live range is described by given structure. */
/* Object whose live range is described by given structure. */
ira_object_t object;
/* Program point range. */
int start, finish;
@@ -233,7 +232,7 @@ struct ira_object
ira_allocno_t allocno;
/* Vector of accumulated conflicting conflict_redords with NULL end
marker (if OBJECT_CONFLICT_VEC_P is true) or conflict bit vector
otherwise. Only objects belonging to allocnos with the
otherwise. Only ira_objects belonging to allocnos with the
same cover class are in the vector or in the bit vector. */
void *conflicts_array;
/* Pointer to structures describing at what program point the
@@ -241,25 +240,27 @@ struct ira_object
ranges in the list are not intersected and ordered by decreasing
their program points. */
live_range_t live_ranges;
/* The subword within ALLOCNO which is represented by this object.
Zero means the lowest-order subword (or the entire allocno in case
it is not being tracked in subwords). */
int subword;
/* Allocated size of the conflicts array. */
unsigned int conflicts_array_size;
/* A unique number for every instance of this structure which is used
/* A unique number for every instance of this structure, which is used
to represent it in conflict bit vectors. */
int id;
/* Before building conflicts, MIN and MAX are initialized to
correspondingly minimal and maximal points of the accumulated
allocno live ranges. Afterwards, they hold the minimal and
maximal ids of other objects that this one can conflict
with. */
live ranges. Afterwards, they hold the minimal and maximal ids
of other ira_objects that this one can conflict with. */
int min, max;
/* Initial and accumulated hard registers conflicting with this
conflict record and as a consequence cannot be assigned to the
allocno. All non-allocatable hard regs and hard regs of cover
classes different from given allocno one are included in the
sets. */
object and as a consequence cannot be assigned to the allocno.
All non-allocatable hard regs and hard regs of cover classes
different from given allocno one are included in the sets. */
HARD_REG_SET conflict_hard_regs, total_conflict_hard_regs;
/* Number of accumulated conflicts in the vector of conflicting
conflict records. */
objects. */
int num_accumulated_conflicts;
/* TRUE if conflicts are represented by a vector of pointers to
ira_object structures. Otherwise, we use a bit vector indexed
@@ -346,9 +347,13 @@ struct ira_allocno
list is chained by NEXT_COALESCED_ALLOCNO. */
ira_allocno_t first_coalesced_allocno;
ira_allocno_t next_coalesced_allocno;
/* Pointer to a structure describing conflict information about this
allocno. */
ira_object_t object;
/* The number of objects tracked in the following array. */
int num_objects;
/* An array of structures describing conflict information and live
ranges for each object associated with the allocno. There may be
more than one such object in cases where the allocno represents a
multi-word register. */
ira_object_t objects[2];
/* Accumulated frequency of calls which given allocno
intersects. */
int call_freq;
@@ -483,9 +488,11 @@ struct ira_allocno
#define ALLOCNO_TEMP(A) ((A)->temp)
#define ALLOCNO_FIRST_COALESCED_ALLOCNO(A) ((A)->first_coalesced_allocno)
#define ALLOCNO_NEXT_COALESCED_ALLOCNO(A) ((A)->next_coalesced_allocno)
#define ALLOCNO_OBJECT(A) ((A)->object)
#define ALLOCNO_OBJECT(A,N) ((A)->objects[N])
#define ALLOCNO_NUM_OBJECTS(A) ((A)->num_objects)
#define OBJECT_ALLOCNO(C) ((C)->allocno)
#define OBJECT_SUBWORD(C) ((C)->subword)
#define OBJECT_CONFLICT_ARRAY(C) ((C)->conflicts_array)
#define OBJECT_CONFLICT_VEC(C) ((ira_object_t *)(C)->conflicts_array)
#define OBJECT_CONFLICT_BITVEC(C) ((IRA_INT_TYPE *)(C)->conflicts_array)
@@ -497,7 +504,7 @@ struct ira_allocno
#define OBJECT_MIN(C) ((C)->min)
#define OBJECT_MAX(C) ((C)->max)
#define OBJECT_CONFLICT_ID(C) ((C)->id)
#define OBJECT_LIVE_RANGES(C) ((C)->live_ranges)
#define OBJECT_LIVE_RANGES(A) ((A)->live_ranges)
/* Map regno -> allocnos with given regno (see comments for
allocno member `next_regno_allocno'). */
@@ -593,6 +600,7 @@ extern int ira_move_loops_num, ira_additional_jumps_num;
/* The type used as elements in the array, and the number of bits in
this type. */
#define IRA_INT_BITS HOST_BITS_PER_WIDE_INT
#define IRA_INT_TYPE HOST_WIDE_INT
@@ -690,7 +698,7 @@ minmax_set_iter_init (minmax_set_iterator *i, IRA_INT_TYPE *vec, int min,
i->word = i->nel == 0 ? 0 : vec[0];
}
/* Return TRUE if we have more elements to visit, in which case *N is
/* Return TRUE if we have more allocnos to visit, in which case *N is
set to the number of the element to be visited. Otherwise, return
FALSE. */
static inline bool
@@ -929,12 +937,14 @@ extern void ira_traverse_loop_tree (bool, ira_loop_tree_node_t,
extern ira_allocno_t ira_parent_allocno (ira_allocno_t);
extern ira_allocno_t ira_parent_or_cap_allocno (ira_allocno_t);
extern ira_allocno_t ira_create_allocno (int, bool, ira_loop_tree_node_t);
extern void ira_create_allocno_object (ira_allocno_t);
extern void ira_create_allocno_objects (ira_allocno_t);
extern void ira_set_allocno_cover_class (ira_allocno_t, enum reg_class);
extern bool ira_conflict_vector_profitable_p (ira_object_t, int);
extern void ira_allocate_conflict_vec (ira_object_t, int);
extern void ira_allocate_object_conflicts (ira_object_t, int);
extern void ior_hard_reg_conflicts (ira_allocno_t, HARD_REG_SET *);
extern void ira_print_expanded_allocno (ira_allocno_t);
extern void ira_add_live_range_to_object (ira_object_t, int, int);
extern live_range_t ira_create_live_range (ira_object_t, int, int,
live_range_t);
extern live_range_t ira_copy_live_range_list (live_range_t);
@@ -1059,7 +1069,7 @@ ira_allocno_iter_cond (ira_allocno_iterator *i, ira_allocno_t *a)
/* The iterator for all objects. */
typedef struct {
/* The number of the current element in IRA_OBJECT_ID_MAP. */
/* The number of the current element in ira_object_id_map. */
int n;
} ira_object_iterator;
@@ -1087,13 +1097,44 @@ ira_object_iter_cond (ira_object_iterator *i, ira_object_t *obj)
return false;
}
/* Loop over all objects. In each iteration, A is set to the next
conflict. ITER is an instance of ira_object_iterator used to iterate
/* Loop over all objects. In each iteration, OBJ is set to the next
object. ITER is an instance of ira_object_iterator used to iterate
the objects. */
#define FOR_EACH_OBJECT(OBJ, ITER) \
for (ira_object_iter_init (&(ITER)); \
ira_object_iter_cond (&(ITER), &(OBJ));)
/* The iterator for objects associated with an allocno. */
typedef struct {
/* The number of the current element in the allocno's object array. */
int n;
} ira_allocno_object_iterator;
/* Initialize the iterator I. */
static inline void
ira_allocno_object_iter_init (ira_allocno_object_iterator *i)
{
i->n = 0;
}
/* Return TRUE if we have more objects to visit in allocno A, in which
case *O is set to the object to be visited. Otherwise, return
FALSE. */
static inline bool
ira_allocno_object_iter_cond (ira_allocno_object_iterator *i, ira_allocno_t a,
                              ira_object_t *o)
{
  int n = i->n++;
  if (n < ALLOCNO_NUM_OBJECTS (a))
    {
      *o = ALLOCNO_OBJECT (a, n);
      return true;
    }
  return false;
}
/* Loop over all objects associated with allocno A. In each
iteration, O is set to the next object. ITER is an instance of
ira_allocno_object_iterator used to iterate the conflicts. */
#define FOR_EACH_ALLOCNO_OBJECT(A, O, ITER) \
for (ira_allocno_object_iter_init (&(ITER)); \
ira_allocno_object_iter_cond (&(ITER), (A), &(O));)
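As a usage sketch, the new iterator pattern looks like this in miniature (a standalone model with hypothetical types, not the IRA declarations; note that this version checks the bound before reading the array):

```c
#include <assert.h>

typedef struct { int id; } obj_t;
typedef struct { int num_objects; obj_t *objects[2]; } allocno_t;

/* Iterator state: the index of the next subobject to visit.  */
typedef struct { int n; } alloc_obj_iter;

static inline void
iter_init (alloc_obj_iter *i)
{
  i->n = 0;
}

/* Check the bound before dereferencing the object array.  */
static inline int
iter_cond (alloc_obj_iter *i, allocno_t *a, obj_t **o)
{
  if (i->n >= a->num_objects)
    return 0;
  *o = a->objects[i->n++];
  return 1;
}

#define FOR_EACH_OBJ(A, O, ITER) \
  for (iter_init (&(ITER)); iter_cond (&(ITER), (A), &(O));)

/* Example client: count the subobjects of an allocno.  */
static int
count_objects (allocno_t *a)
{
  obj_t *o;
  alloc_obj_iter it;
  int count = 0;
  FOR_EACH_OBJ (a, o, it)
    count++;
  return count;
}
```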
/* The iterator for copies. */
typedef struct {
@@ -1132,9 +1173,10 @@ ira_copy_iter_cond (ira_copy_iterator *i, ira_copy_t *cp)
for (ira_copy_iter_init (&(ITER)); \
ira_copy_iter_cond (&(ITER), &(C));)
/* The iterator for allocno conflicts. */
/* The iterator for object conflicts. */
typedef struct {
/* TRUE if the conflicts are represented by vector of objects. */
/* TRUE if the conflicts are represented by vector of allocnos. */
bool conflict_vec_p;
/* The conflict vector or conflict bit vector. */


@@ -68,8 +68,12 @@ static int curr_point;
classes. */
static int high_pressure_start_point[N_REG_CLASSES];
/* Allocnos live at current point in the scan. */
static sparseset allocnos_live;
/* Objects live at current point in the scan. */
static sparseset objects_live;
/* A temporary sparse set used in functions that wish to avoid visiting
an allocno multiple times. */
static sparseset allocnos_processed;
/* Set of hard regs (except eliminable ones) currently live. */
static HARD_REG_SET hard_regs_live;
@@ -82,18 +86,17 @@ static int last_call_num;
/* The number of last call at which given allocno was saved. */
static int *allocno_saved_at_call;
/* Record the birth of hard register REGNO, updating hard_regs_live
and hard reg conflict information for living allocno. */
/* Record the birth of hard register REGNO, updating hard_regs_live and
hard reg conflict information for living allocnos. */
static void
make_hard_regno_born (int regno)
{
unsigned int i;
SET_HARD_REG_BIT (hard_regs_live, regno);
EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
{
ira_allocno_t allocno = ira_allocnos[i];
ira_object_t obj = ALLOCNO_OBJECT (allocno);
ira_object_t obj = ira_object_id_map[i];
SET_HARD_REG_BIT (OBJECT_CONFLICT_HARD_REGS (obj), regno);
SET_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno);
}
@@ -107,29 +110,29 @@ make_hard_regno_dead (int regno)
CLEAR_HARD_REG_BIT (hard_regs_live, regno);
}
/* Record the birth of allocno A, starting a new live range for
it if necessary, and updating hard reg conflict information. We also
record it in allocnos_live. */
/* Record the birth of object OBJ. Set a bit for it in objects_live,
start a new live range for it if necessary and update hard register
conflicts. */
static void
make_allocno_born (ira_allocno_t a)
make_object_born (ira_object_t obj)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
live_range_t p = OBJECT_LIVE_RANGES (obj);
live_range_t lr = OBJECT_LIVE_RANGES (obj);
sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
sparseset_set_bit (objects_live, OBJECT_CONFLICT_ID (obj));
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), hard_regs_live);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), hard_regs_live);
if (p == NULL
|| (p->finish != curr_point && p->finish + 1 != curr_point))
OBJECT_LIVE_RANGES (obj)
= ira_create_live_range (obj, curr_point, -1, p);
if (lr == NULL
|| (lr->finish != curr_point && lr->finish + 1 != curr_point))
ira_add_live_range_to_object (obj, curr_point, -1);
}
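The range-creation condition in make_object_born can be sketched in isolation: a new range is pushed only when the head of the list does not already touch the current point; otherwise the head range is reused and its finish is updated later when the object dies. Types and names here are a hypothetical model, not ira_create_live_range:

```c
#include <assert.h>
#include <stdlib.h>

/* A live range; finish == -1 means the range is still open.  */
typedef struct live_range
{
  int start, finish;
  struct live_range *next;
} live_range_t;

/* Start a range at POINT unless the head of the list already abuts it
   (finish == point or finish + 1 == point), in which case the head is
   reused and a later death will update its finish.  */
static live_range_t *
object_born (live_range_t *head, int point)
{
  if (head != NULL && (head->finish == point || head->finish + 1 == point))
    return head;                /* contiguous with the last range */
  live_range_t *r = malloc (sizeof *r);
  r->start = point;
  r->finish = -1;               /* open until the object dies */
  r->next = head;
  return r;
}
```

This keeps the list ordered by decreasing program points without ever merging ranges after the fact.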
/* Update ALLOCNO_EXCESS_PRESSURE_POINTS_NUM for allocno A. */
/* Update ALLOCNO_EXCESS_PRESSURE_POINTS_NUM for the allocno
associated with object OBJ. */
static void
update_allocno_pressure_excess_length (ira_allocno_t a)
update_allocno_pressure_excess_length (ira_object_t obj)
{
ira_allocno_t a = OBJECT_ALLOCNO (obj);
int start, i;
enum reg_class cover_class, cl;
live_range_t p;
@@ -139,7 +142,6 @@ update_allocno_pressure_excess_length (ira_allocno_t a)
(cl = ira_reg_class_super_classes[cover_class][i]) != LIM_REG_CLASSES;
i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
if (high_pressure_start_point[cl] < 0)
continue;
p = OBJECT_LIVE_RANGES (obj);
@@ -150,18 +152,18 @@ update_allocno_pressure_excess_length (ira_allocno_t a)
}
}
/* Process the death of allocno A. This finishes the current live
range for it. */
/* Process the death of object OBJ, which is associated with allocno
A. This finishes the current live range for it. */
static void
make_allocno_dead (ira_allocno_t a)
make_object_dead (ira_object_t obj)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
live_range_t p = OBJECT_LIVE_RANGES (obj);
live_range_t lr;
ira_assert (p != NULL);
p->finish = curr_point;
update_allocno_pressure_excess_length (a);
sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (a));
sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
lr = OBJECT_LIVE_RANGES (obj);
ira_assert (lr != NULL);
lr->finish = curr_point;
update_allocno_pressure_excess_length (obj);
}
/* The current register pressures for each cover class for the current
@@ -216,8 +218,8 @@ dec_register_pressure (enum reg_class cover_class, int nregs)
}
if (set_p)
{
EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
update_allocno_pressure_excess_length (ira_allocnos[j]);
EXECUTE_IF_SET_IN_SPARSESET (objects_live, j)
update_allocno_pressure_excess_length (ira_object_id_map[j]);
for (i = 0;
(cl = ira_reg_class_super_classes[cover_class][i])
!= LIM_REG_CLASSES;
@@ -234,8 +236,8 @@ static void
mark_pseudo_regno_live (int regno)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
int i, n, nregs;
enum reg_class cl;
int nregs;
if (a == NULL)
return;
@@ -243,18 +245,66 @@ mark_pseudo_regno_live (int regno)
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
n = ALLOCNO_NUM_OBJECTS (a);
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
if (n > 1)
{
/* We track every subobject separately. */
gcc_assert (nregs == n);
nregs = 1;
}
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
if (sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
continue;
inc_register_pressure (cl, nregs);
make_object_born (obj);
}
}
/* Like mark_pseudo_regno_live, but try to only mark one subword of
the pseudo as live. SUBWORD indicates which; a value of 0
indicates the low part. */
static void
mark_pseudo_regno_subword_live (int regno, int subword)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
int n, nregs;
enum reg_class cl;
ira_object_t obj;
if (a == NULL)
return;
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
n = ALLOCNO_NUM_OBJECTS (a);
if (n == 1)
{
mark_pseudo_regno_live (regno);
return;
}
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
gcc_assert (nregs == n);
obj = ALLOCNO_OBJECT (a, subword);
if (sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
return;
inc_register_pressure (cl, nregs);
make_allocno_born (a);
make_object_born (obj);
}
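The pressure bookkeeping in the two functions above follows one rule: a multi-object allocno contributes one register per subword that becomes live, while a single-object allocno contributes its full register footprint at once. A standalone sketch of that rule (pressure_delta is a hypothetical helper, not an IRA function):

```c
#include <assert.h>

/* Registers added to a cover class's pressure when part of an allocno
   becomes live.  NUM_OBJECTS models ALLOCNO_NUM_OBJECTS, NREGS the
   ira_reg_class_nregs entry for the allocno's mode, and
   NEWLY_LIVE_SUBWORDS how many subobjects just became live.  */
static int
pressure_delta (int num_objects, int nregs, int newly_live_subwords)
{
  if (num_objects > 1)
    {
      /* Every subword is tracked separately, so the subobject count
         must equal the register count and each contributes 1.  */
      assert (nregs == num_objects);
      return newly_live_subwords;
    }
  /* A single tracked object carries the whole register footprint.  */
  return nregs;
}
```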
/* Mark the hard register REG as live. Store a 1 in hard_regs_live
for this register, record how many consecutive hardware registers
it actually needs. */
/* Mark the register REG as live. Store a 1 in hard_regs_live for
this register, record how many consecutive hardware registers it
actually needs. */
static void
mark_hard_reg_live (rtx reg)
{
@@ -282,13 +332,22 @@ mark_hard_reg_live (rtx reg)
static void
mark_ref_live (df_ref ref)
{
rtx reg;
rtx reg = DF_REF_REG (ref);
rtx orig_reg = reg;
reg = DF_REF_REG (ref);
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
mark_pseudo_regno_live (REGNO (reg));
{
if (df_read_modify_subreg_p (orig_reg))
{
mark_pseudo_regno_subword_live (REGNO (reg),
subreg_lowpart_p (orig_reg) ? 0 : 1);
}
else
mark_pseudo_regno_live (REGNO (reg));
}
else
mark_hard_reg_live (reg);
}
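mark_ref_live's SUBREG handling can be modeled in isolation: a read-modify-write subreg of a two-word pseudo brings only one subword to life, chosen by whether the subreg is the low part; any other access marks the whole register. A sketch with hypothetical types (mark_access_live stands in for the dispatch between mark_pseudo_regno_subword_live and mark_pseudo_regno_live):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a pseudo register tracked per subword.  */
typedef struct { bool live[2]; int num_words; } pseudo_t;

/* Mirror of the mark_ref_live dispatch: a read-modify-write SUBREG of
   a multi-word pseudo marks only the touched word; anything else
   marks the whole register.  */
static void
mark_access_live (pseudo_t *p, bool is_rmw_subreg, bool is_lowpart)
{
  if (is_rmw_subreg && p->num_words > 1)
    p->live[is_lowpart ? 0 : 1] = true;
  else
    for (int i = 0; i < p->num_words; i++)
      p->live[i] = true;
}
```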
@@ -299,8 +358,8 @@ static void
mark_pseudo_regno_dead (int regno)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
int n, i, nregs;
enum reg_class cl;
int nregs;
if (a == NULL)
return;
@@ -308,18 +367,61 @@ mark_pseudo_regno_dead (int regno)
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
n = ALLOCNO_NUM_OBJECTS (a);
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
if (n > 1)
{
/* We track every subobject separately. */
gcc_assert (nregs == n);
nregs = 1;
}
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
if (!sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
continue;
dec_register_pressure (cl, nregs);
make_object_dead (obj);
}
}
/* Like mark_pseudo_regno_dead, but called when we know that only part of the
register dies. SUBWORD indicates which; a value of 0 indicates the low part. */
static void
mark_pseudo_regno_subword_dead (int regno, int subword)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
int n, nregs;
enum reg_class cl;
ira_object_t obj;
if (a == NULL)
return;
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
n = ALLOCNO_NUM_OBJECTS (a);
if (n == 1)
/* The allocno as a whole doesn't die in this case. */
return;
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
gcc_assert (nregs == n);
obj = ALLOCNO_OBJECT (a, subword);
if (!sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
return;
dec_register_pressure (cl, 1);
make_object_dead (obj);
}
/* Mark the hard register REG as dead. Store a 0 in hard_regs_live for the
register. */
static void
mark_hard_reg_dead (rtx reg)
{
@@ -347,17 +449,31 @@ mark_hard_reg_dead (rtx reg)
static void
mark_ref_dead (df_ref def)
{
rtx reg = DF_REF_REG (def);
rtx orig_reg = reg;
if (DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL))
return;
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
if (DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL)
&& (GET_CODE (orig_reg) != SUBREG
|| REGNO (reg) < FIRST_PSEUDO_REGISTER
|| !df_read_modify_subreg_p (orig_reg)))
return;
if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
{
if (df_read_modify_subreg_p (orig_reg))
{
mark_pseudo_regno_subword_dead (REGNO (reg),
subreg_lowpart_p (orig_reg) ? 0 : 1);
}
else
mark_pseudo_regno_dead (REGNO (reg));
}
else
mark_hard_reg_dead (reg);
}
@@ -468,7 +584,7 @@ check_and_make_def_conflict (int alt, int def, enum reg_class def_cl)
/* If there's any alternative that allows USE to match DEF, do not
record a conflict. If that causes us to create an invalid
instruction due to the earlyclobber, reload must fix it up. */
for (alt1 = 0; alt1 < recog_data.n_alternatives; alt1++)
if (recog_op_alt[use][alt1].matches == def
|| (use < recog_data.n_operands - 1
@@ -836,13 +952,12 @@ process_single_reg_class_operands (bool in_p, int freq)
}
}
EXECUTE_IF_SET_IN_SPARSESET (objects_live, px)
{
ira_object_t obj = ira_object_id_map[px];
a = OBJECT_ALLOCNO (obj);
if (a != operand_a)
{
/* We could increase costs of A instead of making it
conflicting with the hard register. But it works worse
because it will be spilled in reload anyway. */
@@ -897,7 +1012,7 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
}
curr_bb_node = loop_tree_node;
reg_live_out = DF_LR_OUT (bb);
sparseset_clear (objects_live);
REG_SET_TO_HARD_REG_SET (hard_regs_live, reg_live_out);
AND_COMPL_HARD_REG_SET (hard_regs_live, eliminable_regset);
AND_COMPL_HARD_REG_SET (hard_regs_live, ira_no_alloc_regs);
@@ -1011,21 +1126,14 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
if (call_p)
{
last_call_num++;
sparseset_clear (allocnos_processed);
/* The current set of live allocnos are live across the call. */
EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
{
ira_object_t obj = ira_object_id_map[i];
ira_allocno_t a = OBJECT_ALLOCNO (obj);
int num = ALLOCNO_NUM (a);
/* Don't allocate allocnos that cross setjmps or any
call, if this function receives a nonlocal
goto. */
@@ -1033,18 +1141,31 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
if (cfun->has_nonlocal_label
|| find_reg_note (insn, REG_SETJMP,
NULL_RTX) != NULL_RTX)
{
SET_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj));
SET_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
}
if (can_throw_internal (insn))
{
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
if (sparseset_bit_p (allocnos_processed, num))
continue;
sparseset_set_bit (allocnos_processed, num);
if (allocno_saved_at_call[num] != last_call_num)
/* Here we are mimicking caller-save.c behaviour
which does not save hard register at a call if
it was saved on previous call in the same basic
block and the hard register was not mentioned
between the two calls. */
ALLOCNO_CALL_FREQ (a) += freq;
/* Mark it as saved at the next call. */
allocno_saved_at_call[num] = last_call_num + 1;
ALLOCNO_CALLS_CROSSED_NUM (a)++;
}
}
@@ -1102,10 +1223,11 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
if (bb_has_abnormal_pred (bb))
{
#ifdef STACK_REGS
EXECUTE_IF_SET_IN_SPARSESET (objects_live, px)
{
ira_allocno_t a = OBJECT_ALLOCNO (ira_object_id_map[px]);
ALLOCNO_NO_STACK_REG_P (a) = true;
ALLOCNO_TOTAL_NO_STACK_REG_P (a) = true;
}
for (px = FIRST_STACK_REG; px <= LAST_STACK_REG; px++)
make_hard_regno_born (px);
@@ -1119,8 +1241,8 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
make_hard_regno_born (px);
}
EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
make_object_dead (ira_object_id_map[i]);
curr_point++;
@@ -1144,31 +1266,24 @@
static void
create_start_finish_chains (void)
{
ira_object_t obj;
ira_object_iterator oi;
live_range_t r;
ira_start_point_ranges
= (live_range_t *) ira_allocate (ira_max_point * sizeof (live_range_t));
memset (ira_start_point_ranges, 0, ira_max_point * sizeof (live_range_t));
ira_finish_point_ranges
= (live_range_t *) ira_allocate (ira_max_point * sizeof (live_range_t));
memset (ira_finish_point_ranges, 0, ira_max_point * sizeof (live_range_t));
FOR_EACH_OBJECT (obj, oi)
for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
{
r->start_next = ira_start_point_ranges[r->start];
ira_start_point_ranges[r->start] = r;
r->finish_next = ira_finish_point_ranges[r->finish];
ira_finish_point_ranges[r->finish] = r;
}
}
/* Rebuild IRA_START_POINT_RANGES and IRA_FINISH_POINT_RANGES after
@@ -1202,7 +1317,7 @@ remove_some_program_points_and_update_live_ranges (void)
{
ira_assert (r->start <= r->finish);
bitmap_set_bit (born_or_died, r->start);
bitmap_set_bit (born_or_died, r->finish);
}
map = (int *) ira_allocate (sizeof (int) * ira_max_point);
@@ -1223,6 +1338,7 @@ remove_some_program_points_and_update_live_ranges (void)
r->start = map[r->start];
r->finish = map[r->finish];
}
ira_free (map);
}
@@ -1242,13 +1358,27 @@ ira_debug_live_range_list (live_range_t r)
ira_print_live_range_list (stderr, r);
}
/* Print live ranges of object OBJ to file F. */
static void
print_object_live_ranges (FILE *f, ira_object_t obj)
{
ira_print_live_range_list (f, OBJECT_LIVE_RANGES (obj));
}
/* Print live ranges of allocno A to file F. */
static void
print_allocno_live_ranges (FILE *f, ira_allocno_t a)
{
int n = ALLOCNO_NUM_OBJECTS (a);
int i;
for (i = 0; i < n; i++)
{
fprintf (f, " a%d(r%d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
if (n > 1)
fprintf (f, " [%d]", i);
fprintf (f, "):");
print_object_live_ranges (f, ALLOCNO_OBJECT (a, i));
}
}
/* Print live ranges of allocno A to stderr. */
@@ -1277,12 +1407,13 @@ ira_debug_live_ranges (void)
}
/* The main entry function creates live ranges, set up
CONFLICT_HARD_REGS and TOTAL_CONFLICT_HARD_REGS for objects, and
calculate register pressure info. */
void
ira_create_allocno_live_ranges (void)
{
objects_live = sparseset_alloc (ira_objects_num);
allocnos_processed = sparseset_alloc (ira_allocnos_num);
curr_point = 0;
last_call_num = 0;
allocno_saved_at_call
@@ -1296,7 +1427,8 @@ ira_create_allocno_live_ranges (void)
print_live_ranges (ira_dump_file);
/* Clean up. */
ira_free (allocno_saved_at_call);
sparseset_free (objects_live);
sparseset_free (allocnos_processed);
}
/* Compress allocno live ranges. */

@@ -1241,9 +1241,8 @@ setup_prohibited_mode_move_regs (void)
static bool
ira_bad_reload_regno_1 (int regno, rtx x)
{
int x_regno, n, i;
ira_allocno_t a;
enum reg_class pref;
/* We only deal with pseudo regs. */
@@ -1263,10 +1262,13 @@ ira_bad_reload_regno_1 (int regno, rtx x)
/* If the pseudo conflicts with REGNO, then we consider REGNO a
poor choice for a reload regno. */
a = ira_regno_allocno_map[x_regno];
n = ALLOCNO_NUM_OBJECTS (a);
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
return true;
}
return false;
}
@@ -1610,32 +1612,60 @@ static void
check_allocation (void)
{
ira_allocno_t a;
int hard_regno, nregs, conflict_nregs;
ira_allocno_iterator ai;
FOR_EACH_ALLOCNO (a, ai)
{
int n = ALLOCNO_NUM_OBJECTS (a);
int i;
if (ALLOCNO_CAP_MEMBER (a) != NULL
|| (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
continue;
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
if (n > 1)
{
gcc_assert (n == nregs);
nregs = 1;
}
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
int this_regno = hard_regno;
if (n > 1)
{
if (WORDS_BIG_ENDIAN)
this_regno += n - i - 1;
else
this_regno += i;
}
FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
if (conflict_hard_regno < 0)
continue;
if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1)
{
if (WORDS_BIG_ENDIAN)
conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
- OBJECT_SUBWORD (conflict_obj) - 1);
else
conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
conflict_nregs = 1;
}
else
conflict_nregs
= (hard_regno_nregs
[conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
if ((conflict_hard_regno <= this_regno
&& this_regno < conflict_hard_regno + conflict_nregs)
|| (this_regno <= conflict_hard_regno
&& conflict_hard_regno < this_regno + nregs))
{
fprintf (stderr, "bad allocation for %d and %d\n",
ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));