tree-optimization/95172 - avoid mixing conditionalized and ordered SM

The following testcase shows a missed optimization that then leads to
wrong code when issuing SMed stores on exits.  When we were able to
compute an ordered sequence of stores for an exit we need to emit
it in that order, and we can emit it disregarding any conditional
for whether a store actually happened (we know it did).  We can also
improve the detection of whether we need conditional processing at
all.  Both parts fix the testcase.
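
To make the ordering constraint concrete, here is a hand-written sketch
(an illustration only, not actual GCC output; the a_sm/b_sm temporaries
are made-up names) of what store motion conceptually does to the
testcase added below.  Because b points to a, the two promoted stores
alias, so on the exit they must be materialized unconditionally and in
their original order:

/* Sketch only: conceptual effect of store motion on the testcase.  */
int a, d;
int *b = &a;
short c;
int main()
{
  int a_sm = 0, b_sm = 0;   /* registers carrying the promoted values */
  for (; c <= 4; c--) {
    for (; d;)
      ;
    a_sm = 1;               /* was: a = 1;  */
    b_sm = 0;               /* was: *b = 0; */
  }
  /* Ordered materialization on the exit: first a, then *b.  Since
     b == &a, emitting these out of order, or guarding one with a
     store flag, can leave a == 1 and trip the abort.  */
  a = a_sm;
  *b = b_sm;
  if (a != 0)
    __builtin_abort ();
  return 0;
}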

2020-05-18  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/95172
	* tree-ssa-loop-im.c (execute_sm): Get flag whether we
	eventually need the conditional processing.
	(execute_sm_exit): When processing an ordered sequence
	avoid doing any conditional processing.
	(hoist_memory_references): Pass down whether all edges
	have ordered processing for a ref to execute_sm.

	* gcc.dg/torture/pr95172.c: New testcase.
commit 52a0f83980
parent 03d549090e
Author: Richard Biener <rguenther@suse.de>
Date:   2020-05-18 09:17:24 +02:00

4 changed files with 38 additions and 5 deletions

--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,13 @@
+2020-05-18  Richard Biener  <rguenther@suse.de>
+
+	PR tree-optimization/95172
+	* tree-ssa-loop-im.c (execute_sm): Get flag whether we
+	eventually need the conditional processing.
+	(execute_sm_exit): When processing an ordered sequence
+	avoid doing any conditional processing.
+	(hoist_memory_references): Pass down whether all edges
+	have ordered processing for a ref to execute_sm.
+
 2020-05-17  Jeff Law  <law@redhat.com>
 
 	* config/h8300/predicates.md (pc_or_label_operand): New predicate.

--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,3 +1,8 @@
+2020-05-18  Richard Biener  <rguenther@suse.de>
+
+	PR tree-optimization/95172
+	* gcc.dg/torture/pr95172.c: New testcase.
+
 2020-05-17  H.J. Lu  <hongjiu.lu@intel.com>
 
 	PR target/95021

--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/pr95172.c
@@ -0,0 +1,17 @@
+/* { dg-do run } */
+
+int a, d;
+int *b = &a;
+short c;
+int main()
+{
+  for (; c <= 4; c--) {
+    for (; d;)
+      ;
+    a = 1;
+    *b = 0;
+  }
+  if (a != 0)
+    __builtin_abort ();
+  return 0;
+}

--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -2130,7 +2130,7 @@ struct sm_aux
 
 static void
 execute_sm (class loop *loop, im_mem_ref *ref,
-	    hash_map<im_mem_ref *, sm_aux *> &aux_map)
+	    hash_map<im_mem_ref *, sm_aux *> &aux_map, bool maybe_mt)
 {
   gassign *load;
   struct fmt_data fmt_data;
@@ -2154,8 +2154,9 @@ execute_sm (class loop *loop, im_mem_ref *ref,
   for_each_index (&ref->mem.ref, force_move_till, &fmt_data);
 
   bool always_stored = ref_always_accessed_p (loop, ref, true);
-  if (bb_in_transaction (loop_preheader_edge (loop)->src)
-      || (! flag_store_data_races && ! always_stored))
+  if (maybe_mt
+      && (bb_in_transaction (loop_preheader_edge (loop)->src)
+	  || (! flag_store_data_races && ! always_stored)))
     multi_threaded_model_p = true;
 
   if (multi_threaded_model_p)
@@ -2244,7 +2245,7 @@ execute_sm_exit (class loop *loop, edge ex, vec<seq_entry> &seq,
   else
     {
       sm_aux *aux = *aux_map.get (ref);
-      if (!aux->store_flag)
+      if (!aux->store_flag || kind == sm_ord)
 	{
 	  gassign *store;
 	  store = gimple_build_assign (unshare_expr (ref->mem.ref),
@@ -2630,7 +2631,7 @@ hoist_memory_references (class loop *loop, bitmap mem_refs,
   EXECUTE_IF_SET_IN_BITMAP (mem_refs, 0, i, bi)
     {
       ref = memory_accesses.refs_list[i];
-      execute_sm (loop, ref, aux_map);
+      execute_sm (loop, ref, aux_map, bitmap_bit_p (refs_not_supported, i));
     }
 
   /* Materialize ordered store sequences on exits.  */
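
In short, after this change the per-exit materialization distinguishes
the two cases roughly as in the following condensed sketch (paraphrased,
not the actual tree-ssa-loop-im.c code; emit_store and
emit_conditional_store are hypothetical helpers):

/* Paraphrased sketch of execute_sm_exit's decision after the patch;
   emit_store and emit_conditional_store are hypothetical helpers.  */
if (kind == sm_ord || !aux->store_flag)
  /* Ordered sequence: the store is known to have executed, so emit
     it unconditionally, preserving sequence order.  */
  emit_store (loop, ex, ref);
else
  /* Conditional (sm_unord) case: guard the store with the runtime
     flag recording whether the loop actually stored.  */
  emit_conditional_store (loop, ex, ref, aux->store_flag);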