builtin-types.def (BT_FN_VOID_BOOL, [...]): New.

gcc/
	* builtin-types.def (BT_FN_VOID_BOOL, BT_FN_VOID_SIZE_SIZE_PTR,
	BT_FN_UINT_UINT_PTR_PTR, BT_FN_UINT_OMPFN_PTR_UINT_UINT,
	BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
	BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
	BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
	BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR): New.
	* gengtype.c (open_base_files): Add omp-general.h.
	* gimple.c (gimple_build_omp_critical):
	(gimple_build_omp_taskgroup): Add CLAUSES argument.  Call
	gimple_omp_taskgroup_set_clauses.
	(gimple_build_omp_atomic_load): Add mo argument, call
	gimple_omp_atomic_set_memory_order.
	(gimple_build_omp_atomic_store): Likewise.
	(gimple_copy): Adjust handling of GIMPLE_OMP_TASKGROUP.
	* gimple.def (GIMPLE_OMP_TASKGROUP): Use GSS_OMP_SINGLE_LAYOUT
	instead of GSS_OMP.
	(GIMPLE_OMP_TEAMS): Use GSS_OMP_PARALLEL_LAYOUT instead
	of GSS_OMP_SINGLE_LAYOUT, adjust comments.
	* gimple.h (enum gf_mask): Add GF_OMP_TEAMS_HOST, GF_OMP_TASK_TASKWAIT
	and GF_OMP_ATOMIC_MEMORY_ORDER.  Remove GF_OMP_ATOMIC_SEQ_CST, use
	different value for GF_OMP_ATOMIC_NEED_VALUE.
	(struct gimple_statement_omp_taskreg): Add GIMPLE_OMP_TEAMS to
	comments.
	(struct gimple_statement_omp_single_layout): And remove here.
	(struct gomp_teams): Inherit from gimple_statement_omp_taskreg rather
	than gimple_statement_omp_single_layout.
	(is_a_helper <gimple_statement_omp_taskreg *>::test): Allow
	GIMPLE_OMP_TEAMS.
	(is_a_helper <const gimple_statement_omp_taskreg *>::test): Likewise.
	(gimple_omp_subcode): Formatting fix.
	(gimple_omp_teams_child_fn, gimple_omp_teams_child_fn_ptr,
	gimple_omp_teams_set_child_fn, gimple_omp_teams_data_arg,
	gimple_omp_teams_data_arg_ptr, gimple_omp_teams_set_data_arg,
	gimple_omp_teams_host, gimple_omp_teams_set_host,
	gimple_omp_task_taskwait_p, gimple_omp_task_set_taskwait_p,
	gimple_omp_taskgroup_clauses, gimple_omp_taskgroup_clauses_ptr,
	gimple_omp_taskgroup_set_clauses): New inline functions.
	(gimple_build_omp_atomic_load): Add enum omp_memory_order argument.
	(gimple_build_omp_atomic_store): Likewise.
	(gimple_omp_atomic_seq_cst_p): Remove.
	(gimple_omp_atomic_memory_order): New function.
	(gimple_omp_atomic_set_seq_cst): Remove.
	(gimple_omp_atomic_set_memory_order): New function.
	(gimple_build_omp_taskgroup): Add clauses argument.
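
As an illustration of the new atomic memory-order plumbing above, the following sketch (not part of the patch; function and variable names are made up) shows the OpenMP 5.0 source forms that OMP_ATOMIC_MEMORY_ORDER / gimple_omp_atomic_memory_order now carry instead of the old boolean seq_cst flag:

/* Illustrative only.  */
#pragma omp requires atomic_default_mem_order(relaxed)

int x;

int
atomic_forms (int val)
{
  int v;
  /* Explicit release ordering on an update.  */
  #pragma omp atomic update release
  x += val;
  /* Explicit acquire ordering on a read.  */
  #pragma omp atomic read acquire
  v = x;
  /* No memory-order clause: the requires directive above supplies the
     default ordering.  */
  #pragma omp atomic
  x += v;
  return v;
}
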
	* gimple-pretty-print.c (dump_gimple_omp_taskgroup): New function.
	(dump_gimple_omp_task): Print taskwait with depend clauses.
	(dump_gimple_omp_atomic_load, dump_gimple_omp_atomic_store): Use
	dump_omp_atomic_memory_order.
	(pp_gimple_stmt_1): Handle GIMPLE_OMP_TASKGROUP.
	* gimplify.c (enum gimplify_omp_var_data): Add GOVD_MAP_ALLOC_ONLY,
	GOVD_MAP_FROM_ONLY and GOVD_NONTEMPORAL.
	(enum omp_region_type): Reserve bits 1 and 2 for auxiliary flags,
	renumber values of most of ORT_* enumerators, add ORT_HOST_TEAMS,
	ORT_COMBINED_HOST_TEAMS, ORT_TASKGROUP, ORT_TASKLOOP and
	ORT_UNTIED_TASKLOOP enumerators.
	(enum gimplify_defaultmap_kind): New.
	(struct gimplify_omp_ctx): Remove target_map_scalars_firstprivate and
	target_map_pointers_as_0len_arrays members, add defaultmap.
	(new_omp_context): Initialize defaultmap member.
	(gimple_add_tmp_var): Handle ORT_TASKGROUP like ORT_WORKSHARE.
	(maybe_fold_stmt): Don't fold even in host teams regions.
	(omp_firstprivatize_variable): Handle ORT_TASKGROUP like
	ORT_WORKSHARE.  Test ctx->defaultmap[GDMK_SCALAR] instead of
	ctx->omp_firstprivatize_variable.
	(omp_add_variable): Don't add private/firstprivate for VLAs in
	ORT_TASKGROUP.
	(omp_default_clause): Print "taskloop" rather than "task" if
	ORT_*TASKLOOP.
	(omp_notice_variable): Handle ORT_TASKGROUP like ORT_WORKSHARE.
	Handle new defaultmap clause kinds.
	(omp_is_private): Handle ORT_TASKGROUP like ORT_WORKSHARE.  Allow simd
	iterator to be lastprivate or private.  Fix up diagnostics if linear
	is used on collapse>1 simd iterator.
	(omp_check_private): Handle ORT_TASKGROUP like ORT_WORKSHARE.
	(gimplify_omp_depend): New function.
	(gimplify_scan_omp_clauses): Add shared clause on parallel for
	combined parallel master taskloop{, simd} if taskloop has
	firstprivate, lastprivate or reduction clause.  Handle
	OMP_CLAUSE_REDUCTION_TASK diagnostics.  Adjust tests for
	ORT_COMBINED_TEAMS.  Gimplify depend clauses with iterators.  Handle
	cancel and simd OMP_CLAUSE_IF_MODIFIERs.  Handle
	OMP_CLAUSE_NONTEMPORAL.  Handle new defaultmap clause kinds.  Handle
	OMP_CLAUSE_{TASK,IN}_REDUCTION.  Diagnose invalid conditional
	lastprivate.
	(gimplify_adjust_omp_clauses_1): Ignore GOVD_NONTEMPORAL.  Handle
	GOVD_MAP_ALLOC_ONLY and GOVD_MAP_FROM_ONLY.  
	(gimplify_adjust_omp_clauses): Handle OMP_CLAUSE_NONTEMPORAL.  Handle
	OMP_CLAUSE_{TASK,IN}_REDUCTION.
	(gimplify_omp_task): Handle taskwait with depend clauses.
	(gimplify_omp_for): Add shared clause on parallel for combined
	parallel master taskloop{, simd} if taskloop has firstprivate,
	lastprivate or reduction clause.  Use ORT_TASKLOOP or
	ORT_UNTIED_TASKLOOP instead of ORT_TASK or ORT_UNTIED_TASK.  Adjust
	tests for ORT_COMBINED_TEAMS.  Handle C++ range for loops with
	NULL TREE_PURPOSE in OMP_FOR_ORIG_DECLS.  Firstprivatize
	__for_end and __for_range temporaries on OMP_PARALLEL for
	distribute parallel for{, simd}.  Move OMP_CLAUSE_REDUCTION
	and OMP_CLAUSE_IN_REDUCTION from taskloop to the task construct
	sandwiched in between two taskloops.
	(computable_teams_clause): Test ctx->defaultmap[GDMK_SCALAR]
	instead of ctx->omp_firstprivatize_variable.
	(gimplify_omp_workshare): Set ort to ORT_HOST_TEAMS or
	ORT_COMBINED_HOST_TEAMS if not inside of target construct.  If
	host teams, use gimplify_and_return_first etc. for body like
	for target or target data constructs, and at the end call
	gimple_omp_teams_set_host on the GIMPLE_OMP_TEAMS object.
	(gimplify_omp_atomic): Use OMP_ATOMIC_MEMORY_ORDER instead
	of OMP_ATOMIC_SEQ_CST, pass it as new argument to
	gimple_build_omp_atomic_load and gimple_build_omp_atomic_store, remove
	gimple_omp_atomic_set_seq_cst calls.
	(gimplify_expr) <case OMP_TASKGROUP>: Move handling into a separate
	case, handle taskgroup clauses.
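
The taskwait-with-depend handling mentioned above corresponds to OpenMP 5.0 code along these lines (illustrative sketch, not a testcase from the patch):

int a, b;

void
producer_consumer (void)
{
  #pragma omp parallel
  #pragma omp single
  {
    #pragma omp task depend(out: a)
    a = 1;
    #pragma omp task depend(out: b)
    b = 2;
    /* Wait only for the sibling task writing 'a'; the task writing 'b'
       may still be running.  This is kept as a standalone directive and
       ends up calling the new GOMP_taskwait_depend entry point.  */
    #pragma omp taskwait depend(in: a)
    if (a != 1)
      __builtin_abort ();
  }
}
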
	* lto-streamer-out.c (hash_tree): Handle
	OMP_CLAUSE_{TASK,IN}_REDUCTION.
	* Makefile.in (GTFILES): Add omp-general.h.
	* omp-builtins.def (BUILT_IN_GOMP_TASKWAIT_DEPEND,
	BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_START,
	BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_START,
	BUILT_IN_GOMP_LOOP_START, BUILT_IN_GOMP_LOOP_ORDERED_START,
	BUILT_IN_GOMP_LOOP_DOACROSS_START,
	BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_NEXT,
	BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
	BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_START,
	BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_START,
	BUILT_IN_GOMP_LOOP_ULL_START, BUILT_IN_GOMP_LOOP_ULL_ORDERED_START,
	BUILT_IN_GOMP_LOOP_ULL_DOACROSS_START,
	BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_NEXT,
	BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
	BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_RUNTIME,
	BUILT_IN_GOMP_PARALLEL_LOOP_MAYBE_NONMONOTONIC_RUNTIME,
	BUILT_IN_GOMP_PARALLEL_REDUCTIONS, BUILT_IN_GOMP_SECTIONS2_START,
	BUILT_IN_GOMP_TEAMS_REG, BUILT_IN_GOMP_TASKGROUP_REDUCTION_REGISTER,
	BUILT_IN_GOMP_TASKGROUP_REDUCTION_UNREGISTER,
	BUILT_IN_GOMP_TASK_REDUCTION_REMAP,
	BUILT_IN_GOMP_WORKSHARE_TASK_REDUCTION_UNREGISTER): New builtins.
	* omp-expand.c (workshare_safe_to_combine_p): Return false for
	non-worksharing loops.
	(omp_adjust_chunk_size): Don't adjust anything if chunk_size is zero.
	(determine_parallel_type): Don't combine parallel with worksharing
	which has _reductemp_ clause.
	(expand_parallel_call): Emit the GOMP_*nonmonotonic_runtime* or
	GOMP_*maybe_nonmonotonic_runtime* builtins instead of GOMP_*runtime*
	if there is nonmonotonic modifier or if there is no modifier and no
	ordered clause.  For dynamic and guided schedule without monotonic
	and nonmonotonic modifier, default to nonmonotonic.
	(expand_omp_for): Likewise.  Adjust expand_omp_for_generic caller, use
	GOMP_loop{,_ull}{,_ordered,_doacross}_start builtins if there are
	task reductions.
	(expand_task_call): Add GOMP_TASK_FLAG_REDUCTION flag to flags if
	there are any reduction clauses.
	(expand_taskwait_call): New function.
	(expand_teams_call): New function.
	(expand_omp_taskreg): Allow GIMPLE_OMP_TEAMS and call
	expand_teams_call for it.  Formatting fix.  Handle taskwait with
	depend clauses.
	(expand_omp_for_generic): Add SCHED_ARG argument.  Handle expansion
	of worksharing loops with task reductions.
	(expand_omp_for_static_nochunk, expand_omp_for_static_chunk): Handle
	expansion of worksharing loops with task reductions.
	(expand_omp_sections): Handle expansion of sections with task
	reductions.
	(expand_omp_synch): For host teams call expand_omp_taskreg.
	(omp_memory_order_to_memmodel): New function.
	(expand_omp_atomic_load, expand_omp_atomic_store,
	expand_omp_atomic_fetch_op): Use it and gimple_omp_atomic_memory_order
	instead of gimple_omp_atomic_seq_cst_p.
	(build_omp_regions_1, omp_make_gimple_edges): Treat taskwait with
	depend clauses as a standalone directive.
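
For reference, the schedule forms affected by the nonmonotonic changes above look like this (illustrative sketch, not from the patch); the first loop asks for nonmonotonic explicitly, the second relies on the new default for dynamic/guided without a modifier:

void
scale (int n, double *a, double s)
{
  #pragma omp parallel for schedule(nonmonotonic: dynamic, 4)
  for (int i = 0; i < n; i++)
    a[i] *= s;

  /* No modifier: dynamic and guided now default to nonmonotonic, so the
     GOMP_*nonmonotonic* entry points can be used here as well.  */
  #pragma omp parallel for schedule(guided)
  for (int i = 0; i < n; i++)
    a[i] += s;
}
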
	* omp-general.c (enum omp_requires): New variable.
	(omp_extract_for_data): Initialize have_reductemp member.  Allow
	NE_EXPR even in OpenMP loops, transform them into LT_EXPR or
	GT_EXPR loops depending on incr sign.  Formatting fixes.
	* omp-general.h (struct omp_for_data): Add have_reductemp member.
	(enum omp_requires): New enum.
	(omp_requires_mask): Declare.
	* omp-grid.c (grid_eliminate_combined_simd_part): Formatting fix.
	Fix comment typos.
	* omp-low.c (struct omp_context): Add task_reductions and
	task_reduction_map fields.
	(is_host_teams_ctx): New function.
	(is_taskreg_ctx): Return true also if is_host_teams_ctx.
	(use_pointer_for_field): Use is_global_var instead of
	TREE_STATIC || DECL_EXTERNAL, and apply only if not privatized
	in outer contexts.
	(build_outer_var_ref): Ignore taskgroup outer contexts.
	(delete_omp_context): Release task_reductions and task_reduction_map.
	(scan_sharing_clauses): Don't add any fields for reduction clause on
	taskloop.  Handle OMP_CLAUSE__REDUCTEMP_.  Handle
	OMP_CLAUSE_{IN,TASK}_REDUCTION and OMP_CLAUSE_REDUCTION with task
	modifier.  Don't ignore shared clauses in is_host_teams_ctx contexts.
	Handle OMP_CLAUSE_NONTEMPORAL.
	(add_taskreg_looptemp_clauses): Add OMP_CLAUSE__REDUCTEMP_ clause if
	needed.
	(scan_omp_parallel): Add _reductemp_ clause if there are any reduction
	clauses with task modifier.
	(scan_omp_task): Handle taskwait with depend clauses.
	(finish_taskreg_scan): Move field corresponding to _reductemp_ clause
	first.  Move also OMP_CLAUSE__REDUCTEMP_ clause in front if present.
	Handle GIMPLE_OMP_TEAMS like GIMPLE_OMP_PARALLEL.
	(scan_omp_for): Fix comment formatting.
	(scan_omp_teams): Handle host teams constructs.
	(check_omp_nesting_restrictions): Allow teams with no outer
	OpenMP context.  Adjust diagnostics for teams strictly nested into
	some explicit OpenMP construct other than target.  Allow OpenMP atomics
	inside of simd regions.
	(scan_omp_1_stmt): Call scan_sharing_clauses for taskgroups.
	(scan_omp_1_stmt) <case GIMPLE_OMP_TEAMS>: Temporarily bump
	taskreg_nesting_level while scanning host teams construct.
	(task_reduction_read): New function.
	(lower_rec_input_clauses): Handle OMP_CLAUSE_REDUCTION on taskloop
	construct.  Handle OMP_CLAUSE_IN_REDUCTION and OMP_CLAUSE__REDUCTEMP_
	clauses.  Handle OMP_CLAUSE_REDUCTION with task modifier.  Remove
	second argument to create_tmp_var if it is NULL.  Don't ignore shared
	clauses in is_host_teams_ctx contexts.  Handle
	OMP_CLAUSE_FIRSTPRIVATE_NO_REFERENCE on OMP_CLAUSE_FIRSTPRIVATE
	clauses.
	(lower_reduction_clauses): Ignore reduction clauses with task
	modifier.  Remove second argument to create_tmp_var if it is NULL.
	Initialize OMP_ATOMIC_MEMORY_ORDER to relaxed.
	(lower_send_clauses): Ignore reduction clauses with task modifier.
	Handle OMP_CLAUSE__REDUCTEMP_.  Don't send anything for
	OMP_CLAUSE_REDUCTION on taskloop.  Handle OMP_CLAUSE_IN_REDUCTION.
	(maybe_add_implicit_barrier_cancel): Add OMP_RETURN argument, don't
	rely that it is the last stmt in body so far.  Ignore outer taskgroup
	contexts.
	(omp_task_reductions_find_first, omp_task_reduction_iterate,
	lower_omp_task_reductions): New functions.
	(lower_omp_sections): Handle reduction clauses with taskgroup
	modifiers.  Adjust maybe_add_implicit_barrier_cancel caller.
	(lower_omp_single): Adjust maybe_add_implicit_barrier_cancel caller.
	(lower_omp_for): Likewise.  Handle reduction clauses with taskgroup
	modifiers.
	(lower_omp_taskgroup): Handle taskgroup reductions.
	(create_task_copyfn): Copy over OMP_CLAUSE__REDUCTEMP_ pointer.
	Handle OMP_CLAUSE_IN_REDUCTION and OMP_CLAUSE_REDUCTION clauses.
	(lower_depend_clauses): If there are any
	OMP_CLAUSE_DEPEND_DEPOBJ or OMP_CLAUSE_DEPEND_MUTEXINOUTSET
	depend clauses, use a new array format.  If OMP_CLAUSE_DEPEND_LAST is
	seen, assume lowering is done already and return early.  Set kind
	on artificial depend clause to OMP_CLAUSE_DEPEND_LAST.
	(lower_omp_taskreg): Handle reduction clauses with task modifier on
	parallel construct.  Handle reduction clause on taskloop construct.
	Handle taskwait with depend clauses.
	(lower_omp_1): Use lower_omp_taskreg instead of lower_omp_teams
	for host teams constructs.
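
The task reduction machinery above (scan_sharing_clauses, lower_omp_task_reductions and friends) lowers OpenMP 5.0 code of roughly this shape (illustrative sketch only):

int
sum_array (int n, const int *a)
{
  int sum = 0;
  #pragma omp parallel
  #pragma omp single
  #pragma omp taskgroup task_reduction(+: sum)
  {
    for (int i = 0; i < n; i++)
      {
        /* Each task contributes to the enclosing taskgroup's reduction.  */
        #pragma omp task in_reduction(+: sum)
        sum += a[i];
      }
  }
  return sum;
}
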
	* tree.c (omp_clause_num_ops): Add in_reduction, task_reduction,
	nontemporal and _reductemp_ clause entries.
	(omp_clause_code_name): Likewise.
	(walk_tree_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION,
	OMP_CLAUSE_NONTEMPORAL and OMP_CLAUSE__REDUCTEMP_.
	* tree-core.h (enum omp_clause_code): Add
	OMP_CLAUSE_{{IN,TASK}_REDUCTION,NONTEMPORAL,_REDUCTEMP_}.
	(enum omp_clause_defaultmap_kind, enum omp_memory_order): New.
	(struct tree_base): Add omp_atomic_memory_order field into union.
	Remove OMP_ATOMIC_SEQ_CST comment.
	(enum omp_clause_depend_kind): Add OMP_CLAUSE_DEPEND_MUTEXINOUTSET
	and OMP_CLAUSE_DEPEND_DEPOBJ.
	(struct tree_omp_clause): Add subcode.defaultmap_kind.
	* tree.def (OMP_TASKGROUP): Add another operand, move next to other
	OpenMP constructs with body and clauses operands.
	* tree.h (OMP_BODY): Use OMP_MASTER instead of OMP_TASKGROUP.
	(OMP_CLAUSES): Use OMP_TASKGROUP instead of OMP_SINGLE.
	(OMP_TASKGROUP_CLAUSES): Define.
	(OMP_CLAUSE_DECL): Use OMP_CLAUSE__REDUCTEMP_ instead of
	OMP_CLAUSE__LOOPTEMP_.
	(OMP_ATOMIC_SEQ_CST): Remove.
	(OMP_ATOMIC_MEMORY_ORDER, OMP_CLAUSE_FIRSTPRIVATE_NO_REFERENCE,
	OMP_CLAUSE_LASTPRIVATE_CONDITIONAL): Define.
	(OMP_CLAUSE_REDUCTION_CODE, OMP_CLAUSE_REDUCTION_INIT,
	OMP_CLAUSE_REDUCTION_MERGE, OMP_CLAUSE_REDUCTION_PLACEHOLDER,
	OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER,
	OMP_CLAUSE_REDUCTION_OMP_ORIG_REF): Handle
	OMP_CLAUSE_{,IN_,TASK_}REDUCTION.
	(OMP_CLAUSE_REDUCTION_TASK, OMP_CLAUSE_REDUCTION_INSCAN,
	OMP_CLAUSE_DEFAULTMAP_KIND, OMP_CLAUSE_DEFAULTMAP_CATEGORY,
	OMP_CLAUSE_DEFAULTMAP_BEHAVIOR, OMP_CLAUSE_DEFAULTMAP_SET_KIND):
	Define.
	* tree-inline.c (remap_gimple_stmt): Remap taskgroup clauses.
	* tree-nested.c (convert_nonlocal_omp_clauses): Handle
	OMP_CLAUSE__REDUCTEMP_, OMP_CLAUSE_NONTEMPORAL.
	(convert_local_omp_clauses): Likewise.  Remove useless test.
	* tree-parloops.c (create_call_for_reduction_1): Pass
	OMP_MEMORY_ORDER_RELAXED as new argument to
	gimple_build_omp_atomic_load and gimple_build_omp_atomic_store.
	* tree-pretty-print.c (dump_omp_iterators): New function.
	(dump_omp_clause): Handle OMP_CLAUSE__REDUCTEMP_,
	OMP_CLAUSE_NONTEMPORAL, OMP_CLAUSE_{TASK,IN}_REDUCTION.  Print
	reduction modifiers.  Handle OMP_CLAUSE_DEPEND_DEPOBJ and
	OMP_CLAUSE_DEPEND_MUTEXINOUTSET.  Print iterators in depend clauses.
	Print __internal__ for OMP_CLAUSE_DEPEND_LAST.  Handle cancel and
	simd OMP_CLAUSE_IF_MODIFIERs.  Handle new kinds of
	OMP_CLAUSE_DEFAULTMAP. Print conditional: for
	OMP_CLAUSE_LASTPRIVATE_CONDITIONAL.
	(dump_omp_atomic_memory_order): New function.
	(dump_generic_node): Use it.  Print taskgroup clauses.  Print
	taskwait with depend clauses.
	* tree-pretty-print.h (dump_omp_atomic_memory_order): Declare.
	* tree-streamer-in.c (unpack_ts_omp_clause_value_fields):
	Handle OMP_CLAUSE_{TASK,IN}_REDUCTION.
	* tree-streamer-out.c (pack_ts_omp_clause_value_fields,
	write_ts_omp_clause_tree_pointers): Likewise.
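
Another user-visible piece of the middle-end changes above is the host teams construct: a teams region no longer has to be strictly nested inside target.  A minimal sketch (not from the patch, names invented):

int
count_host_teams (void)
{
  int n = 0;
  /* Outside of a target region this now runs on the host and is
     expanded through the new GOMP_teams_reg builtin.  */
  #pragma omp teams num_teams(4) reduction(+: n)
  n += 1;
  return n;  /* Number of teams the runtime actually created.  */
}
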
gcc/c-family/
	* c-common.h (c_finish_omp_taskgroup): Add CLAUSES argument.
	(c_finish_omp_atomic): Replace bool SEQ_CST argument with
	enum omp_memory_order MEMORY_ORDER.
	(c_finish_omp_flush): Add MO argument.
	(c_omp_depend_t_p, c_finish_omp_depobj): Declare.
	(c_finish_omp_for): Add FINAL_P argument.
	* c-omp.c: Include memmodel.h.
	(c_finish_omp_taskgroup): Add CLAUSES argument.  Set
	OMP_TASKGROUP_CLAUSES to it.
	(c_finish_omp_atomic): Replace bool SEQ_CST argument with
	enum omp_memory_order MEMORY_ORDER.  Set OMP_ATOMIC_MEMORY_ORDER
	instead of OMP_ATOMIC_SEQ_CST.
	(c_omp_depend_t_p, c_finish_omp_depobj): New functions.
	(c_finish_omp_flush): Add MO argument, if not MEMMODEL_LAST, emit
	__atomic_thread_fence call with the given value.
	(check_omp_for_incr_expr): Formatting fixes.
	(c_finish_omp_for): Add FINAL_P argument.  Allow NE_EXPR
	even in OpenMP loops, diagnose if NE_EXPR and incr expression
	is not constant expression 1 or -1.  Transform NE_EXPR loops
	with iterators pointers to VLA into LT_EXPR or GT_EXPR loops.
	(c_omp_check_loop_iv_r): Look for orig decl of C++ range for
	loops too.
	(c_omp_split_clauses): Add support for combined
	#pragma omp parallel master and
	#pragma omp {,parallel }master taskloop{, simd} constructs.
	Handle OMP_CLAUSE_IN_REDUCTION.  Handle OMP_CLAUSE_REDUCTION_TASK.
	Handle OMP_CLAUSE_NONTEMPORAL.  Handle splitting OMP_CLAUSE_IF
	also to OMP_SIMD.  Copy OMP_CLAUSE_LASTPRIVATE_CONDITIONAL.
	(c_omp_predetermined_sharing): Don't return
	OMP_CLAUSE_DEFAULT_SHARED for const qualified decls.
	* c-pragma.c (omp_pragmas): Add PRAGMA_OMP_DEPOBJ and
	PRAGMA_OMP_REQUIRES.
	* c-pragma.h (enum pragma_kind): Likewise.
	(enum pragma_omp_clause): Add PRAGMA_OMP_CLAUSE_NONTEMPORAL
	and PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION.
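
The depobj and flush changes above (c_finish_omp_depobj, the MO argument to c_finish_omp_flush) cover OpenMP 5.0 source like the following sketch (illustrative, not from the patch):

#include <omp.h>

omp_depend_t dep_obj;
int payload;

void
use_depobj (void)
{
  /* New depobj directive: capture a dependence in an omp_depend_t.  */
  #pragma omp depobj(dep_obj) depend(inout: payload)

  #pragma omp task depend(depobj: dep_obj)
  payload++;

  /* flush may now carry a memory-order clause; it is lowered to an
     __atomic_thread_fence call.  */
  #pragma omp flush acq_rel

  #pragma omp taskwait
  #pragma omp depobj(dep_obj) destroy
}
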
gcc/c/
	* c-parser.c: Include memmodel.h.
	(c_parser_omp_depobj, c_parser_omp_requires): New functions.
	(c_parser_pragma): Handle PRAGMA_OMP_DEPOBJ and PRAGMA_OMP_REQUIRES.
	(c_parser_omp_clause_name): Handle nontemporal, in_reduction and
	task_reduction clauses.
	(c_parser_omp_variable_list): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION.
	For OMP_CLAUSE_DEPEND, parse clause operands as either an array
	section, or lvalue assignment expression.
	(c_parser_omp_clause_if): Handle cancel and simd modifiers.
	(c_parser_omp_clause_lastprivate): Parse optional
	conditional: modifier.
	(c_parser_omp_clause_hint): Require constant integer expression rather
	than just integer expression.
	(c_parser_omp_clause_defaultmap): Parse new kinds of defaultmap
	clause.
	(c_parser_omp_clause_reduction): Add IS_OMP and KIND arguments.
	Parse reduction modifiers.  Pass KIND to c_parser_omp_variable_list.
	(c_parser_omp_clause_nontemporal, c_parser_omp_iterators): New
	functions.
	(c_parser_omp_clause_depend): Parse iterator modifier and handle
	iterators.  Parse mutexinoutset and depobj kinds.
	(c_parser_oacc_all_clauses): Adjust c_parser_omp_clause_reduction
	callers.
	(c_parser_omp_all_clauses): Likewise.  Handle
	PRAGMA_OMP_CLAUSE_NONTEMPORAL and
	PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION.
	(c_parser_omp_atomic): Parse hint and memory order clauses.  Handle
	default memory order from requires directive if any.  Adjust
	c_finish_omp_atomic caller.
	(c_parser_omp_critical): Allow comma in between (name) and hint clause.
	(c_parser_omp_flush): Parse flush with memory-order-clause.
	(c_parser_omp_for_loop): Allow NE_EXPR even in
	OpenMP loops, adjust c_finish_omp_for caller.
	(OMP_SIMD_CLAUSE_MASK): Add if and nontemporal clauses.
	(c_parser_omp_master): Add p_name, mask and cclauses arguments.
	Allow to be called while parsing combined parallel master.
	Parse combined master taskloop{, simd}.
	(c_parser_omp_parallel): Parse combined
	parallel master{, taskloop{, simd}} constructs.
	(OMP_TASK_CLAUSE_MASK): Add in_reduction clause.
	(OMP_TASKGROUP_CLAUSE_MASK): Define.
	(c_parser_omp_taskgroup): Add LOC argument.  Parse taskgroup clauses.
	(OMP_TASKWAIT_CLAUSE_MASK): Define.
	(c_parser_omp_taskwait): Handle taskwait with depend clauses.
	(c_parser_omp_teams): Force a BIND_EXPR with BLOCK
	around teams body.  Use SET_EXPR_LOCATION.
	(c_parser_omp_target_data): Allow target data
	with only use_device_ptr clauses.
	(c_parser_omp_target): Use SET_EXPR_LOCATION.  Set
	OMP_REQUIRES_TARGET_USED bit in omp_requires_mask.
	(c_parser_omp_requires): New function.
	(c_finish_taskloop_clauses): New function.
	(OMP_TASKLOOP_CLAUSE_MASK): Add reduction and in_reduction clauses.
	(c_parser_omp_taskloop): Use c_finish_taskloop_clauses.  Add forward
	declaration.  Disallow in_reduction clause when combined with parallel
	master.
	(c_parser_omp_construct): Adjust c_parser_omp_master and
	c_parser_omp_taskgroup callers.
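
As an example of the new combined constructs the C parser learns above, a parallel master taskloop simd loop might look like this (illustrative sketch, not a testcase from the patch):

void
saxpy (int n, float a, const float *x, float *y)
{
  /* Clauses are distributed to the constituent constructs by
     c_omp_split_clauses.  */
  #pragma omp parallel master taskloop simd grainsize(64)
  for (int i = 0; i < n; i++)
    y[i] += a * x[i];
}
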
	* c-typeck.c (c_finish_omp_cancel): Diagnose if clause with modifier
	other than cancel.
	(handle_omp_array_sections_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION
	like OMP_CLAUSE_REDUCTION.
	(handle_omp_array_sections): Likewise.  Call save_expr on array
	reductions before calling build_index_type.  Handle depend clauses
	with iterators.
	(struct c_find_omp_var_s): New type.
	(c_find_omp_var_r, c_omp_finish_iterators): New functions.
	(c_finish_omp_clauses): Don't diagnose nonmonotonic clause
	with static, runtime or auto schedule kinds.  Call save_expr for whole
	array reduction sizes.  Diagnose reductions with zero sized elements
	or variable length structures.  Diagnose nogroup clause used with
	reduction clause(s).  Handle depend clause with
	OMP_CLAUSE_DEPEND_DEPOBJ.  Diagnose bit-fields.  Require
	omp_depend_t type for OMP_CLAUSE_DEPEND_DEPOBJ kinds and
	some different type for other kinds.  Use build_unary_op with
	ADDR_EXPR and build_indirect_ref instead of c_mark_addressable.
	Handle depend clauses with iterators.  Remove no longer needed special
	case that predetermined const qualified vars may be specified in
	firstprivate clause.  Complain if const qualified vars are mentioned
	in data-sharing clauses other than firstprivate or shared.  Use
	error_at with OMP_CLAUSE_LOCATION (c) as first argument instead of
	error.  Formatting fix.  Handle OMP_CLAUSE_NONTEMPORAL and
	OMP_CLAUSE_{IN,TASK}_REDUCTION.  Allow any lvalue as
	OMP_CLAUSE_DEPEND operand (besides array section), adjust diagnostics.
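
The depend-clause iterator support referenced throughout this section allows a single clause to stand for a whole set of dependences, e.g. (illustrative sketch, not from the patch):

void
init_then_bump (int n, int *a)
{
  #pragma omp parallel
  #pragma omp single
  {
    for (int j = 0; j < n; j++)
      {
        #pragma omp task depend(out: a[j])
        a[j] = j;
      }
    /* One depend clause expanding to n element dependences.  */
    #pragma omp task depend(iterator(k = 0:n), in: a[k])
    for (int k = 0; k < n; k++)
      a[k]++;
  }
}
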
gcc/cp/
	* constexpr.c (potential_constant_expression_1): Handle OMP_DEPOBJ.
	* cp-gimplify.c (cp_genericize_r): Handle
	OMP_CLAUSE_{IN,TASK}_REDUCTION.
	(cxx_omp_predetermined_sharing_1): Don't return
	OMP_CLAUSE_DEFAULT_SHARED for const qualified decls with no mutable
	member.  Return OMP_CLAUSE_DEFAULT_FIRSTPRIVATE for this pointer.
	* cp-objcp-common.c (cp_common_init_ts): Handle OMP_DEPOBJ.
	* cp-tree.def (OMP_DEPOBJ): New tree code.
	* cp-tree.h (OMP_ATOMIC_DEPENDENT_P): Return true also for first
	argument being OMP_CLAUSE.
	(OMP_DEPOBJ_DEPOBJ, OMP_DEPOBJ_CLAUSES): Define.
	(cp_convert_omp_range_for, cp_finish_omp_range_for): Declare.
	(finish_omp_atomic): Add LOC, CLAUSES and MO arguments.  Remove
	SEQ_CST argument.
	(finish_omp_for_block): Declare.
	(finish_omp_flush): Add MO argument.
	(finish_omp_depobj): Declare.
	* cxx-pretty-print.c (cxx_pretty_printer::statement): Handle
	OMP_DEPOBJ.
	* dump.c (cp_dump_tree): Likewise.
	* lex.c (cxx_init): Likewise.
	* parser.c: Include memmodel.h.
	(cp_parser_for): Pass false as new is_omp argument to
	cp_parser_range_for.
	(cp_parser_range_for): Add IS_OMP argument, return before finalizing
	if it is true.
	(cp_parser_omp_clause_name): Handle nontemporal, in_reduction and
	task_reduction clauses.
	(cp_parser_omp_var_list_no_open): Handle
	OMP_CLAUSE_{IN,TASK}_REDUCTION.  For OMP_CLAUSE_DEPEND, parse clause
	operands as either an array section, or lvalue assignment expression.
	(cp_parser_omp_clause_if): Handle cancel and simd modifiers.
	(cp_parser_omp_clause_defaultmap): Parse new kinds of defaultmap
	clause.
	(cp_parser_omp_clause_reduction): Add IS_OMP and KIND arguments.
	Parse reduction modifiers.  Pass KIND to
	cp_parser_omp_var_list_no_open.
	(cp_parser_omp_clause_lastprivate, cp_parser_omp_iterators): New
	functions.
	(cp_parser_omp_clause_depend): Parse iterator modifier and handle
	iterators.  Parse mutexinoutset and depobj kinds.
	(cp_parser_oacc_all_clauses): Adjust cp_parser_omp_clause_reduction
	callers.
	(cp_parser_omp_all_clauses): Likewise.  Handle
	PRAGMA_OMP_CLAUSE_NONTEMPORAL and
	PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION.  Call
	cp_parser_omp_clause_lastprivate for OpenMP lastprivate clause.
	(cp_parser_omp_atomic): Pass pragma_tok->location as
	LOC to finish_omp_atomic.  Parse hint and memory order clauses.
	Handle default memory order from requires directive if any.  Adjust
	finish_omp_atomic caller.
	(cp_parser_omp_critical): Allow comma in between (name) and hint
	clause.
	(cp_parser_omp_depobj): New function.
	(cp_parser_omp_flush): Parse flush with memory-order-clause.
	(cp_parser_omp_for_cond): Allow NE_EXPR even in OpenMP loops.
	(cp_convert_omp_range_for, cp_finish_omp_range_for): New functions.
	(cp_parser_omp_for_loop): Parse C++11 range for loops among omp
	loops.  Handle OMP_CLAUSE_IN_REDUCTION like OMP_CLAUSE_REDUCTION.
	(OMP_SIMD_CLAUSE_MASK): Add if and nontemporal clauses.
	(cp_parser_omp_simd, cp_parser_omp_for): Call keep_next_level before
	begin_omp_structured_block and call finish_omp_for_block on
	finish_omp_structured_block result.
	(cp_parser_omp_master): Add p_name, mask and cclauses arguments.
	Allow to be called while parsing combined parallel master.
	Parse combined master taskloop{, simd}.
	(cp_parser_omp_parallel): Parse combined
	parallel master{, taskloop{, simd}} constructs.
	(cp_parser_omp_single): Use SET_EXPR_LOCATION.
	(OMP_TASK_CLAUSE_MASK): Add in_reduction clause.
	(OMP_TASKWAIT_CLAUSE_MASK): Define.
	(cp_parser_omp_taskwait): Handle taskwait with depend clauses.
	(OMP_TASKGROUP_CLAUSE_MASK): Define.
	(cp_parser_omp_taskgroup): Parse taskgroup clauses, adjust
	c_finish_omp_taskgroup caller.
	(cp_parser_omp_distribute): Call keep_next_level before
	begin_omp_structured_block and call finish_omp_for_block on
	finish_omp_structured_block result.
	(cp_parser_omp_teams): Force a BIND_EXPR with BLOCK around teams
	body.
	(cp_parser_omp_target_data): Allow target data with only
	use_device_ptr clauses.
	(cp_parser_omp_target): Set OMP_REQUIRES_TARGET_USED bit in
	omp_requires_mask.
	(cp_parser_omp_requires): New function.
	(OMP_TASKLOOP_CLAUSE_MASK): Add reduction and in_reduction clauses.
	(cp_parser_omp_taskloop): Add forward declaration.  Disallow
	in_reduction clause when combined with parallel master.  Call
	keep_next_level before begin_omp_structured_block and call
	finish_omp_for_block on finish_omp_structured_block result.
	(cp_parser_omp_construct): Adjust cp_parser_omp_master caller.
	(cp_parser_pragma): Handle PRAGMA_OMP_DEPOBJ and PRAGMA_OMP_REQUIRES.
	* pt.c (tsubst_omp_clause_decl): Add iterators_cache argument.
	Adjust recursive calls.  Handle iterators.
	(tsubst_omp_clauses): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION and
	OMP_CLAUSE_NONTEMPORAL.  Adjust tsubst_omp_clause_decl callers.
	(tsubst_decomp_names):
	(tsubst_omp_for_iterator): Change orig_declv into a reference.
	Handle range for loops.  Move orig_declv handling after declv/initv
	handling.
	(tsubst_expr): Force a BIND_EXPR with BLOCK around teams body.
	Adjust finish_omp_atomic caller.  Call keep_next_level before
	begin_omp_structured_block.  Call cp_finish_omp_range_for for range
	for loops and use {begin,finish}_omp_structured_block instead of
	{push,pop}_stmt_list if there are any range for loops.  Call
	finish_omp_for_block on finish_omp_structured_block result.
	Handle OMP_DEPOBJ.  Handle taskwait with depend clauses.  For
	OMP_ATOMIC call tsubst_omp_clauses on clauses if any, adjust
	finish_omp_atomic caller.  Use OMP_ATOMIC_MEMORY_ORDER rather
	than OMP_ATOMIC_SEQ_CST.  Handle clauses on OMP_TASKGROUP.
	(dependent_omp_for_p): Always return true for range for loops if
	processing_template_decl.  Return true if class type iterator
	does not have INTEGER_CST increment.
	* semantics.c: Include memmodel.h.
	(handle_omp_array_sections_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION
	like OMP_CLAUSE_REDUCTION.
	(handle_omp_array_sections): Likewise.  Call save_expr on array
	reductions before calling build_index_type.  Handle depend clauses
	with iterators.
	(finish_omp_reduction_clause): Call save_expr for whole array
	reduction sizes.  Don't mark OMP_CLAUSE_DECL addressable if it has
	reference type.  Do mark decl_placeholder addressable if needed.
	Use error_at with OMP_CLAUSE_LOCATION (c) as first argument instead
	of error.
	(cp_omp_finish_iterators): New function.
	(finish_omp_clauses): Don't diagnose nonmonotonic clause with static,
	runtime or auto schedule kinds.  Diagnose nogroup clause used with
	reduction clause(s).  Handle depend clause with
	OMP_CLAUSE_DEPEND_DEPOBJ.  Diagnose bit-fields.  Require
	omp_depend_t type for OMP_CLAUSE_DEPEND_DEPOBJ kinds and
	some different type for other kinds.  Use cp_build_addr_expr
	and cp_build_indirect_ref instead of cxx_mark_addressable.
	Handle depend clauses with iterators.  Only handle static data members
	in the special case that const qualified vars may be specified in
	firstprivate clause.  Complain if const qualified vars without mutable
	members are mentioned in data-sharing clauses other than firstprivate
	or shared.  Use error_at with OMP_CLAUSE_LOCATION (c) as first
	argument instead of error.  Diagnose more than one nontemporal clause
	referring to the same variable.  Use error_at rather than error for
	priority and hint clause diagnostics.  Fix pasto for hint clause.
	Diagnose hint expression that doesn't fold into INTEGER_CST.
	Diagnose if clause with modifier other than cancel.  Handle
	OMP_CLAUSE_{IN,TASK}_REDUCTION like OMP_CLAUSE_REDUCTION.  Allow any
	lvalue as OMP_CLAUSE_DEPEND operand (besides array section), adjust
	diagnostics.
	(handle_omp_for_class_iterator): Don't create a new TREE_LIST if one
	has been created already for range for, just fill TREE_PURPOSE and
	TREE_VALUE.  Call cp_fully_fold on incr.
	(finish_omp_for): Don't check cond/incr if cond is global_namespace.
	Pass to c_omp_check_loop_iv_exprs orig_declv if non-NULL.  Don't
	use IS_EMPTY_STMT on NULL pre_body.  Adjust c_finish_omp_for caller.
	(finish_omp_for_block): New function.
	(finish_omp_atomic): Add LOC argument, pass it through
	to c_finish_omp_atomic and set it as location of OMP_ATOMIC* trees.
	Remove SEQ_CST argument.  Add CLAUSES and MO arguments.  Adjust
	c_finish_omp_atomic caller.  Stick clauses if any into first argument
	of wrapping OMP_ATOMIC.
	(finish_omp_depobj): New function.
	(finish_omp_flush): Add MO argument, if not
	MEMMODEL_LAST, emit __atomic_thread_fence call with the given value.
	(finish_omp_cancel): Diagnose if clause with modifier other than
	cancel.
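
The cp_convert_omp_range_for / cp_finish_omp_range_for work above makes C++11 range-based for loops usable as OpenMP loops, e.g. (illustrative sketch, not from the patch):

#include <vector>

void
double_all (std::vector<int> &v)
{
  /* The range for is rewritten into an iterator-based loop by the
     front end before gimplification.  */
  #pragma omp parallel for
  for (int &x : v)
    x *= 2;
}
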
gcc/fortran/
	* trans-openmp.c (gfc_trans_omp_clauses): Use
	OMP_CLAUSE_DEFAULTMAP_SET_KIND.
	(gfc_trans_omp_atomic): Set OMP_ATOMIC_MEMORY_ORDER
	rather than OMP_ATOMIC_SEQ_CST.
	(gfc_trans_omp_taskgroup): Build OMP_TASKGROUP using
	make_node instead of build1_loc.
	* types.def (BT_FN_VOID_BOOL, BT_FN_VOID_SIZE_SIZE_PTR,
	BT_FN_UINT_UINT_PTR_PTR, BT_FN_UINT_OMPFN_PTR_UINT_UINT,
	BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
	BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
	BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
	BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR): New.
	(BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_PTR): Formatting fix.
gcc/testsuite/
	* c-c++-common/gomp/atomic-17.c: New test.
	* c-c++-common/gomp/atomic-18.c: New test.
	* c-c++-common/gomp/atomic-19.c: New test.
	* c-c++-common/gomp/atomic-20.c: New test.
	* c-c++-common/gomp/atomic-21.c: New test.
	* c-c++-common/gomp/atomic-22.c: New test.
	* c-c++-common/gomp/clauses-1.c (r2): New variable.
	(foo): Add ntm argument and test if and nontemporal clauses on
	constructs with simd.
	(bar): Put taskloop simd inside of taskgroup with task_reduction,
	use in_reduction clause instead of reduction.  Add another
	taskloop simd without nogroup clause, but with reduction clause and
	a new in_reduction.  Add ntm and i3 arguments.  Test if and
	nontemporal clauses on constructs with simd.  Change if clauses on
	some constructs from specific to the particular constituents to one
	without a modifier.  Add new tests for combined host teams and for
	new parallel master and {,parallel }master taskloop{, simd} combined
	constructs.
	(baz): New function with host teams tests.
	* gcc.dg/gomp/combined-1.c: Moved to ...
	* c-c++-common/gomp/combined-1.c: ... here.  Adjust expected library
	call.
	* c-c++-common/gomp/combined-2.c: New test.
	* c-c++-common/gomp/combined-3.c: New test.
	* c-c++-common/gomp/critical-1.c: New test.
	* c-c++-common/gomp/critical-2.c: New test.
	* c-c++-common/gomp/default-1.c: New test.
	* c-c++-common/gomp/defaultmap-1.c: New test.
	* c-c++-common/gomp/defaultmap-2.c: New test.
	* c-c++-common/gomp/defaultmap-3.c: New test.
	* c-c++-common/gomp/depend-5.c: New test.
	* c-c++-common/gomp/depend-6.c: New test.
	* c-c++-common/gomp/depend-iterator-1.c: New test.
	* c-c++-common/gomp/depend-iterator-2.c: New test.
	* c-c++-common/gomp/depobj-1.c: New test.
	* c-c++-common/gomp/flush-1.c: New test.
	* c-c++-common/gomp/flush-2.c: New test.
	* c-c++-common/gomp/for-1.c: New test.
	* c-c++-common/gomp/for-2.c: New test.
	* c-c++-common/gomp/for-3.c: New test.
	* c-c++-common/gomp/for-4.c: New test.
	* c-c++-common/gomp/for-5.c: New test.
	* c-c++-common/gomp/for-6.c: New test.
	* c-c++-common/gomp/for-7.c: New test.
	* c-c++-common/gomp/if-1.c (foo): Add some further tests.
	* c-c++-common/gomp/if-2.c (foo): Likewise.  Expect slightly different
	diagnostics wording in one case.
	* c-c++-common/gomp/if-3.c: New test.
	* c-c++-common/gomp/master-combined-1.c: New test.
	* c-c++-common/gomp/master-combined-2.c: New test.
	* c-c++-common/gomp/nontemporal-1.c: New test.
	* c-c++-common/gomp/nontemporal-2.c: New test.
	* c-c++-common/gomp/reduction-task-1.c: New test.
	* c-c++-common/gomp/reduction-task-2.c: New test.
	* c-c++-common/gomp/requires-1.c: New test.
	* c-c++-common/gomp/requires-2.c: New test.
	* c-c++-common/gomp/requires-3.c: New test.
	* c-c++-common/gomp/requires-4.c: New test.
	* c-c++-common/gomp/schedule-modifiers-1.c (bar): Don't expect
	diagnostics for nonmonotonic modifier with static, runtime or auto
	schedule kinds.
	* c-c++-common/gomp/simd7.c: New test.
	* c-c++-common/gomp/target-data-1.c: New test.
	* c-c++-common/gomp/taskloop-reduction-1.c: New test.
	* c-c++-common/gomp/taskwait-depend-1.c: New test.
	* c-c++-common/gomp/teams-1.c: New test.
	* c-c++-common/gomp/teams-2.c: New test.
	* gcc.dg/gomp/appendix-a/a.24.1.c: Update from OpenMP examples.  Add
	shared(c) clause.
	* gcc.dg/gomp/atomic-5.c (f1): Add another expected error.
	* gcc.dg/gomp/clause-1.c: Adjust expected diagnostics for const
	qualified vars without mutable member no longer being predetermined
	shared.
	* gcc.dg/gomp/sharing-1.c: Likewise.
	* g++.dg/gomp/clause-3.C: Likewise.
	* g++.dg/gomp/member-2.C: Likewise.
	* g++.dg/gomp/predetermined-1.C: Likewise.
	* g++.dg/gomp/private-1.C: Likewise.
	* g++.dg/gomp/sharing-1.C: Likewise.
	* g++.dg/gomp/sharing-2.C: Likewise.  Add a few tests with aggregate
	const static data member without mutable elements.
	* gcc.dg/gomp/for-4.c: Expect nonmonotonic functions in the dumps.
	* gcc.dg/gomp/for-5.c: Likewise.
	* gcc.dg/gomp/for-6.c: Change expected library call.
	* gcc.dg/gomp/pr39495-2.c (foo): Don't expect errors on !=.
	* gcc.dg/gomp/reduction-2.c: New test.
	* gcc.dg/gomp/simd-1.c: New test.
	* gcc.dg/gomp/teams-1.c: Adjust expected diagnostic lines.
	* g++.dg/gomp/atomic-18.C: New test.
	* g++.dg/gomp/atomic-19.C: New test.
	* g++.dg/gomp/atomic-5.C (f1): Adjust expected lines of read-only
	variable messages.  Add another expected error.
	* g++.dg/gomp/critical-3.C: New test.
	* g++.dg/gomp/depend-iterator-1.C: New test.
	* g++.dg/gomp/depend-iterator-2.C: New test.
	* g++.dg/gomp/depobj-1.C: New test.
	* g++.dg/gomp/doacross-1.C: New test.
	* g++.dg/gomp/for-21.C: New test.
	* g++.dg/gomp/for-4.C: Expect nonmonotonic functions in the dumps.
	* g++.dg/gomp/for-5.C: Likewise.
	* g++.dg/gomp/for-6.C: Change expected library call.
	* g++.dg/gomp/loop-4.C: New test.
	* g++.dg/gomp/pr33372-1.C: Adjust location of the expected
	diagnostics.
	* g++.dg/gomp/pr33372-3.C: Likewise.
	* g++.dg/gomp/pr39495-2.C (foo): Don't expect errors on !=.
	* g++.dg/gomp/simd-2.C: New test.
	* g++.dg/gomp/tpl-atomic-2.C: Adjust expected diagnostic lines.
include/
	* gomp-constants.h (GOMP_TASK_FLAG_REDUCTION,
	GOMP_DEPEND_IN, GOMP_DEPEND_OUT, GOMP_DEPEND_INOUT,
	GOMP_DEPEND_MUTEXINOUTSET): Define.
libgomp/
	* affinity.c (gomp_display_affinity_place): New function.
	* affinity-fmt.c: New file.
	* alloc.c (gomp_aligned_alloc, gomp_aligned_free): New functions.
	* config/linux/affinity.c (gomp_display_affinity_place): New function.
	* config/nvptx/icv-device.c (omp_get_num_teams, omp_get_team_num):
	Move these functions to ...
	* config/nvptx/teams.c: ... here.  New file.
	* config/nvptx/target.c (omp_pause_resource, omp_pause_resource_all):
	New functions.
	* config/nvptx/team.c (gomp_team_start, gomp_pause_host): New
	functions.
	* configure.ac: Check for aligned_alloc, posix_memalign, memalign
	and _aligned_malloc.
	(HAVE_UNAME, HAVE_GETHOSTNAME, HAVE_GETPID): Add new tests.
	* configure.tgt: Add -DUSING_INITIAL_EXEC_TLS to XCFLAGS for Linux.
	* env.c (gomp_display_affinity_var, gomp_affinity_format_var,
	gomp_affinity_format_len): New variables.
	(parse_schedule): Parse monotonic and nonmonotonic modifiers in
	OMP_SCHEDULE variable.  Set GFS_MONOTONIC for monotonic schedules.
	(handle_omp_display_env): Display monotonic/nonmonotonic schedule
	modifiers.  Display (non-default) chunk sizes.  Print
	OMP_DISPLAY_AFFINITY and OMP_AFFINITY_FORMAT.
	(initialize_env): Don't call pthread_attr_setdetachstate.  Handle
	OMP_DISPLAY_AFFINITY and OMP_AFFINITY_FORMAT env vars.
	* fortran.c: Include stdio.h and string.h.
	(omp_pause_resource, omp_pause_resource_all): Add ialias_redirect.
	(omp_get_schedule_, omp_get_schedule_8_): Mask off GFS_MONOTONIC bit.
	(omp_set_affinity_format_, omp_get_affinity_format_,
	omp_display_affinity_, omp_capture_affinity_, omp_pause_resource_,
	omp_pause_resource_all_): New functions.
	* icv.c (omp_set_schedule): Mask off omp_sched_monotonic bit in
	switch.
	* icv-device.c (omp_get_num_teams, omp_get_team_num): Move these
	functions to ...
	* teams.c: ... here.  New file.
	* libgomp_g.h: Include gstdint.h.
	(GOMP_loop_nonmonotonic_runtime_start,
	GOMP_loop_maybe_nonmonotonic_runtime_start, GOMP_loop_start,
	GOMP_loop_ordered_start, GOMP_loop_nonmonotonic_runtime_next,
	GOMP_loop_maybe_nonmonotonic_runtime_next, GOMP_loop_doacross_start,
	GOMP_parallel_loop_nonmonotonic_runtime,
	GOMP_parallel_loop_maybe_nonmonotonic_runtime,
	GOMP_loop_ull_nonmonotonic_runtime_start,
	GOMP_loop_ull_maybe_nonmonotonic_runtime_start, GOMP_loop_ull_start,
	GOMP_loop_ull_ordered_start, GOMP_loop_ull_nonmonotonic_runtime_next,
	GOMP_loop_ull_maybe_nonmonotonic_runtime_next,
	GOMP_loop_ull_doacross_start, GOMP_parallel_reductions,
	GOMP_taskwait_depend, GOMP_taskgroup_reduction_register,
	GOMP_taskgroup_reduction_unregister, GOMP_task_reduction_remap,
	GOMP_workshare_task_reduction_unregister, GOMP_sections2_start,
	GOMP_teams_reg): Declare.
	* libgomp.h (GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC): Define unless
	gomp_aligned_alloc uses fallback implementation.
	(gomp_aligned_alloc, gomp_aligned_free): Declare.
	(enum gomp_schedule_type): Add GFS_MONOTONIC.
	(struct gomp_doacross_work_share): Add extra field.
	(struct gomp_work_share): Add task_reductions field.
	(struct gomp_taskgroup): Add workshare and reductions fields.
	(GOMP_NEEDS_THREAD_HANDLE): Define if needed.
	(gomp_thread_handle): New typedef.
	(gomp_display_affinity_place, gomp_set_affinity_format,
	gomp_display_string, gomp_display_affinity,
	gomp_display_affinity_thread): Declare.
	(gomp_doacross_init, gomp_doacross_ull_init): Add size_t argument.
	(gomp_parallel_reduction_register, gomp_workshare_taskgroup_start,
	gomp_workshare_task_reduction_register): Declare.
	(gomp_team_start): Add taskgroup argument.
	(gomp_pause_host): Declare.
	(gomp_init_work_share, gomp_work_share_start): Change bool argument
	to size_t.
	(gomp_thread_self, gomp_thread_to_pthread_t): New inline functions.
	* libgomp.map (GOMP_5.0): Export GOMP_loop_start,
	GOMP_loop_ordered_start, GOMP_loop_doacross_start,
	GOMP_loop_ull_start, GOMP_loop_ull_ordered_start,
	GOMP_loop_ull_doacross_start,
	GOMP_workshare_task_reduction_unregister, GOMP_sections2_start,
	GOMP_loop_maybe_nonmonotonic_runtime_next,
	GOMP_loop_maybe_nonmonotonic_runtime_start,
	GOMP_loop_nonmonotonic_runtime_next,
	GOMP_loop_nonmonotonic_runtime_start,
	GOMP_loop_ull_maybe_nonmonotonic_runtime_next,
	GOMP_loop_ull_maybe_nonmonotonic_runtime_start,
	GOMP_loop_ull_nonmonotonic_runtime_next,
	GOMP_loop_ull_nonmonotonic_runtime_start,
	GOMP_parallel_loop_maybe_nonmonotonic_runtime,
	GOMP_parallel_loop_nonmonotonic_runtime, GOMP_parallel_reductions,
	GOMP_taskgroup_reduction_register,
	GOMP_taskgroup_reduction_unregister, GOMP_task_reduction_remap,
	GOMP_teams_reg and GOMP_taskwait_depend.
	(OMP_5.0): Export omp_pause_resource{,_all}{,_},
	omp_{capture,display}_affinity{,_}, and
	omp_[gs]et_affinity_format{,_}.
	* loop.c: Include string.h.
	(GOMP_loop_runtime_next): Add ialias.
	(GOMP_taskgroup_reduction_register): Add ialias_redirect.
	(gomp_loop_static_start, gomp_loop_dynamic_start,
	gomp_loop_guided_start, gomp_loop_ordered_static_start,
	gomp_loop_ordered_dynamic_start, gomp_loop_ordered_guided_start,
	gomp_loop_doacross_static_start, gomp_loop_doacross_dynamic_start,
	gomp_loop_doacross_guided_start): Adjust gomp_work_share_start
	or gomp_doacross_init callers.
	(gomp_adjust_sched, GOMP_loop_start, GOMP_loop_ordered_start,
	GOMP_loop_doacross_start): New functions.
	(GOMP_loop_runtime_start, GOMP_loop_ordered_runtime_start,
	GOMP_loop_doacross_runtime_start, GOMP_parallel_loop_runtime_start):
	Mask off GFS_MONOTONIC bit.
	(GOMP_loop_maybe_nonmonotonic_runtime_next,
	GOMP_loop_maybe_nonmonotonic_runtime_start,
	GOMP_loop_nonmonotonic_runtime_next,
	GOMP_loop_nonmonotonic_runtime_start,
	GOMP_parallel_loop_maybe_nonmonotonic_runtime,
	GOMP_parallel_loop_nonmonotonic_runtime): New aliases or wrapper
	functions.
	(gomp_parallel_loop_start): Pass NULL as taskgroup to
	gomp_team_start.
	* loop_ull.c: Include string.h.
	(GOMP_loop_ull_runtime_next): Add ialias.
	(GOMP_taskgroup_reduction_register): Add ialias_redirect.
	(gomp_loop_ull_static_start, gomp_loop_ull_dynamic_start,
	gomp_loop_ull_guided_start, gomp_loop_ull_ordered_static_start,
	gomp_loop_ull_ordered_dynamic_start,
	gomp_loop_ull_ordered_guided_start,
	gomp_loop_ull_doacross_static_start,
	gomp_loop_ull_doacross_dynamic_start,
	gomp_loop_ull_doacross_guided_start): Adjust gomp_work_share_start
	and gomp_doacross_ull_init callers.
	(gomp_adjust_sched, GOMP_loop_ull_start, GOMP_loop_ull_ordered_start,
	GOMP_loop_ull_doacross_start): New functions.
	(GOMP_loop_ull_runtime_start,
	GOMP_loop_ull_ordered_runtime_start,
	GOMP_loop_ull_doacross_runtime_start): Mask off GFS_MONOTONIC bit.
	(GOMP_loop_ull_maybe_nonmonotonic_runtime_next,
	GOMP_loop_ull_maybe_nonmonotonic_runtime_start,
	GOMP_loop_ull_nonmonotonic_runtime_next,
	GOMP_loop_ull_nonmonotonic_runtime_start): Likewise.
	* Makefile.am (libgomp_la_SOURCES): Add teams.c and affinity-fmt.c.
	* omp.h.in (enum omp_sched_t): Add omp_sched_monotonic.
	(omp_pause_resource_t, omp_depend_t): New typedefs.
	(enum omp_lock_hint_t): Renamed to ...
	(enum omp_sync_hint_t): ... this.  Define omp_sync_hint_*
	enumerators using numbers and omp_lock_hint_* as their aliases.
	(omp_lock_hint_t): New typedef.  Rename to ...
	(omp_sync_hint_t): ... this.
	(omp_init_lock_with_hint, omp_init_nest_lock_with_hint): Use
	omp_sync_hint_t instead of omp_lock_hint_t.
	(omp_pause_resource, omp_pause_resource_all, omp_set_affinity_format,
	omp_get_affinity_format, omp_display_affinity, omp_capture_affinity):
	Declare.
	(omp_target_is_present, omp_target_disassociate_ptr):
	Change first argument from void * to const void *.
	(omp_target_memcpy, omp_target_memcpy_rect): Change second argument
	from void * to const void *.
	(omp_target_associate_ptr): Change first and second arguments from
	void * to const void *.
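
A short sketch of the new omp.h API declared above (not from the patch; the affinity format string just uses a few of the OpenMP 5.0 field specifiers):

#include <omp.h>
#include <stdio.h>

int
main ()
{
  omp_set_affinity_format ("thread %0.4n on host %H bound to %A");

  #pragma omp parallel num_threads(2)
  {
    char buf[128];
    size_t len = omp_capture_affinity (buf, sizeof buf, NULL);
    #pragma omp critical
    printf ("%zu: %s\n", len, buf);
  }

  /* Release resources (e.g. threads) held by the runtime on the host.  */
  if (omp_pause_resource_all (omp_pause_soft) != 0)
    return 1;
  return 0;
}
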
	* omp_lib.f90.in (omp_pause_resource_kind, omp_pause_soft,
	omp_pause_hard): New parameters.
	(omp_pause_resource, omp_pause_resource_all, omp_set_affinity_format,
	omp_get_affinity_format, omp_display_affinity, omp_capture_affinity):
	New interfaces.
	* omp_lib.h.in (omp_pause_resource_kind, omp_pause_soft,
	omp_pause_hard): New parameters.
	(omp_pause_resource, omp_pause_resource_all, omp_set_affinity_format,
	omp_get_affinity_format, omp_display_affinity, omp_capture_affinity):
	New externals.
	* ordered.c (gomp_doacross_init, gomp_doacross_ull_init): Add
	EXTRA argument.  If not needed to prepare array, if extra is 0,
	clear ws->doacross, otherwise allocate just doacross structure and
	extra payload.  If array is needed, allocate also extra payload.
	(GOMP_doacross_post, GOMP_doacross_wait, GOMP_doacross_ull_post,
	GOMP_doacross_ull_wait): Handle doacross->array == NULL like
	doacross == NULL.
	* parallel.c (GOMP_parallel_start): Pass NULL as taskgroup to
	gomp_team_start.
	(GOMP_parallel): Likewise.  Formatting fix.
	(GOMP_parallel_reductions): New function.
	(GOMP_cancellation_point): If taskgroup has workshare
	flag set, check cancelled of prev taskgroup if any.
	(GOMP_cancel): If taskgroup has workshare flag set, set cancelled
	on prev taskgroup if any.
	* sections.c: Include string.h.
	(GOMP_taskgroup_reduction_register): Add ialias_redirect.
	(GOMP_sections_start): Adjust gomp_work_share_start caller.
	(GOMP_sections2_start): New function.
	(GOMP_parallel_sections_start, GOMP_parallel_sections):
	Pass NULL as taskgroup to gomp_team_start.
	* single.c (GOMP_single_start, GOMP_single_copy_start): Adjust
	gomp_work_share_start callers.
	* target.c (GOMP_target_update_ext, GOMP_target_enter_exit_data):
	If taskgroup has workshare flag set, check cancelled on prev
	taskgroup if any.  Guard all cancellation tests with
	gomp_cancel_var test.
	(omp_target_is_present, omp_target_disassociate_ptr):
	Change ptr argument from void * to const void *.
	(omp_target_memcpy): Change src argument from void * to const void *.
	(omp_target_memcpy_rect): Likewise.
	(omp_target_memcpy_rect_worker): Likewise.  Use const char * casts
	instead of char * where needed.
	(omp_target_associate_ptr): Change host_ptr and device_ptr arguments
	from void * to const void *.
	(omp_pause_resource, omp_pause_resource_all): New functions.
	* task.c (gomp_task_handle_depend): Handle new depend array format
	in addition to the old.  Handle mutexinoutset kinds the same as
	inout for now, handle unspecified kinds.
	(gomp_create_target_task): If taskgroup has workshare flag set, check
	cancelled on prev taskgroup if any.  Guard all cancellation tests with
	gomp_cancel_var test.  Handle new depend array format count in
	addition to the old.
	(GOMP_task): Likewise.  Adjust function comment.
	(gomp_task_run_pre): If taskgroup has workshare flag set, check
	cancelled on prev taskgroup if any.  Guard all cancellation tests with
	gomp_cancel_var test.
	(GOMP_taskwait_depend): New function.
	(gomp_task_maybe_wait_for_dependencies): Handle new depend array
	format in addition to the old.  Handle mutexinoutset kinds the same as
	inout for now, handle unspecified kinds.  Fix a function comment typo.
	(gomp_taskgroup_init): New function.
	(GOMP_taskgroup_start): Use it.
	(gomp_reduction_register, gomp_create_artificial_team,
	GOMP_taskgroup_reduction_register,
	GOMP_taskgroup_reduction_unregister, GOMP_task_reduction_remap,
	gomp_parallel_reduction_register,
	gomp_workshare_task_reduction_register,
	gomp_workshare_taskgroup_start,
	GOMP_workshare_task_reduction_unregister): New functions.
	* taskloop.c (GOMP_taskloop): If taskgroup has workshare flag set,
	check cancelled on prev taskgroup if any.  Guard all cancellation
	tests with gomp_cancel_var test.  Handle GOMP_TASK_FLAG_REDUCTION flag
	by calling GOMP_taskgroup_reduction_register.
	* team.c (gomp_thread_attr): Remove comment.
	(struct gomp_thread_start_data): Add handle field.
	(gomp_thread_start): Call pthread_detach.
	(gomp_new_team): Adjust gomp_init_work_share caller.
	(gomp_free_pool_helper): Call pthread_detach.
	(gomp_team_start): Add taskgroup argument, initialize implicit
	tasks' taskgroup field to that.  Don't call
	pthread_attr_setdetachstate.  Handle OMP_DISPLAY_AFFINITY env var.
	(gomp_team_end): Determine nesting by thr->ts.level != 0
	rather than thr->ts.team != NULL.
	(gomp_pause_pool_helper, gomp_pause_host): New functions.
	* work.c (alloc_work_share): Use gomp_aligned_alloc instead of
	gomp_malloc if GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined.
	(gomp_init_work_share): Change ORDERED argument from bool to size_t,
	if more than 1 allocate also extra payload at the end of array.  Never
	keep ordered_team_ids NULL, set it to inline_ordered_team_ids instead.
	(gomp_work_share_start): Change ORDERED argument from bool to size_t,
	return true instead of ws.
	* Makefile.in: Regenerated.
	* configure: Regenerated.
	* config.h.in: Regenerated.
	* testsuite/libgomp.c/cancel-for-2.c (foo): Use cancel modifier
	in some cases.
	* testsuite/libgomp.c-c++-common/cancel-parallel-1.c: New test.
	* testsuite/libgomp.c-c++-common/cancel-taskgroup-3.c: New test.
	* testsuite/libgomp.c-c++-common/depend-iterator-1.c: New test.
	* testsuite/libgomp.c-c++-common/depend-iterator-2.c: New test.
	* testsuite/libgomp.c-c++-common/depend-mutexinout-1.c: New test.
	* testsuite/libgomp.c-c++-common/depend-mutexinout-2.c: New test.
	* testsuite/libgomp.c-c++-common/depobj-1.c: New test.
	* testsuite/libgomp.c-c++-common/display-affinity-1.c: New test.
	* testsuite/libgomp.c-c++-common/for-10.c: New test.
	* testsuite/libgomp.c-c++-common/for-11.c: New test.
	* testsuite/libgomp.c-c++-common/for-12.c: New test.
	* testsuite/libgomp.c-c++-common/for-13.c: New test.
	* testsuite/libgomp.c-c++-common/for-14.c: New test.
	* testsuite/libgomp.c-c++-common/for-15.c: New test.
	* testsuite/libgomp.c-c++-common/for-2.h: If CONDNE macro is defined,
	define a different N(test), don't define N(f0) to N(f14), but instead
	define N(f20) to N(f34) using != comparisons.
	* testsuite/libgomp.c-c++-common/for-7.c: New test.
	* testsuite/libgomp.c-c++-common/for-8.c: New test.
	* testsuite/libgomp.c-c++-common/for-9.c: New test.
	* testsuite/libgomp.c-c++-common/master-combined-1.c: New test.
	* testsuite/libgomp.c-c++-common/pause-1.c: New test.
	* testsuite/libgomp.c-c++-common/pause-2.c: New test.
	* testsuite/libgomp.c-c++-common/pr66199-10.c: New test.
	* testsuite/libgomp.c-c++-common/pr66199-11.c: New test.
	* testsuite/libgomp.c-c++-common/pr66199-12.c: New test.
	* testsuite/libgomp.c-c++-common/pr66199-13.c: New test.
	* testsuite/libgomp.c-c++-common/pr66199-14.c: New test.
	* testsuite/libgomp.c-c++-common/simd-1.c: New test.
	* testsuite/libgomp.c-c++-common/taskloop-reduction-1.c: New test.
	* testsuite/libgomp.c-c++-common/taskloop-reduction-2.c: New test.
	* testsuite/libgomp.c-c++-common/taskloop-reduction-3.c: New test.
	* testsuite/libgomp.c-c++-common/taskloop-reduction-4.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-11.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-12.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-1.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-2.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-3.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-4.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-5.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-6.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-7.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-8.c: New test.
	* testsuite/libgomp.c-c++-common/task-reduction-9.c: New test.
	* testsuite/libgomp.c-c++-common/taskwait-depend-1.c: New test.
	* testsuite/libgomp.c++/depend-1.C: New test.
	* testsuite/libgomp.c++/depend-iterator-1.C: New test.
	* testsuite/libgomp.c++/depobj-1.C: New test.
	* testsuite/libgomp.c++/for-16.C: New test.
	* testsuite/libgomp.c++/for-21.C: New test.
	* testsuite/libgomp.c++/for-22.C: New test.
	* testsuite/libgomp.c++/for-23.C: New test.
	* testsuite/libgomp.c++/for-24.C: New test.
	* testsuite/libgomp.c++/for-25.C: New test.
	* testsuite/libgomp.c++/for-26.C: New test.
	* testsuite/libgomp.c++/taskloop-reduction-1.C: New test.
	* testsuite/libgomp.c++/taskloop-reduction-2.C: New test.
	* testsuite/libgomp.c++/taskloop-reduction-3.C: New test.
	* testsuite/libgomp.c++/taskloop-reduction-4.C: New test.
	* testsuite/libgomp.c++/task-reduction-10.C: New test.
	* testsuite/libgomp.c++/task-reduction-11.C: New test.
	* testsuite/libgomp.c++/task-reduction-12.C: New test.
	* testsuite/libgomp.c++/task-reduction-13.C: New test.
	* testsuite/libgomp.c++/task-reduction-14.C: New test.
	* testsuite/libgomp.c++/task-reduction-15.C: New test.
	* testsuite/libgomp.c++/task-reduction-16.C: New test.
	* testsuite/libgomp.c++/task-reduction-17.C: New test.
	* testsuite/libgomp.c++/task-reduction-18.C: New test.
	* testsuite/libgomp.c++/task-reduction-19.C: New test.
	* testsuite/libgomp.c/task-reduction-1.c: New test.
	* testsuite/libgomp.c++/task-reduction-1.C: New test.
	* testsuite/libgomp.c/task-reduction-2.c: New test.
	* testsuite/libgomp.c++/task-reduction-2.C: New test.
	* testsuite/libgomp.c++/task-reduction-3.C: New test.
	* testsuite/libgomp.c++/task-reduction-4.C: New test.
	* testsuite/libgomp.c++/task-reduction-5.C: New test.
	* testsuite/libgomp.c++/task-reduction-6.C: New test.
	* testsuite/libgomp.c++/task-reduction-7.C: New test.
	* testsuite/libgomp.c++/task-reduction-8.C: New test.
	* testsuite/libgomp.c++/task-reduction-9.C: New test.
	* testsuite/libgomp.c/teams-1.c: New test.
	* testsuite/libgomp.c/teams-2.c: New test.
	* testsuite/libgomp.c/thread-limit-4.c: New test.
	* testsuite/libgomp.c/thread-limit-5.c: New test.
	* testsuite/libgomp.fortran/display-affinity-1.f90: New test.

From-SVN: r265930
(gimplify_omp_atomic): Use OMP_ATOMIC_MEMORY_ORDER instead
of OMP_ATOMIC_SEQ_CST, pass it as new argument to
gimple_build_omp_atomic_load and gimple_build_omp_atomic_store, remove
gimple_omp_atomic_set_seq_cst calls.
(gimplify_expr) <case OMP_TASKGROUP>: Move handling into a separate
case, handle taskgroup clauses.
* lto-streamer-out.c (hash_tree): Handle
OMP_CLAUSE_{TASK,IN}_REDUCTION.
* Makefile.in (GTFILES): Add omp-general.h.
* omp-builtins.def (BUILT_IN_GOMP_TASKWAIT_DEPEND,
BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_START,
BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_START,
BUILT_IN_GOMP_LOOP_START, BUILT_IN_GOMP_LOOP_ORDERED_START,
BUILT_IN_GOMP_LOOP_DOACROSS_START,
BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_NEXT,
BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_START,
BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_START,
BUILT_IN_GOMP_LOOP_ULL_START, BUILT_IN_GOMP_LOOP_ULL_ORDERED_START,
BUILT_IN_GOMP_LOOP_ULL_DOACROSS_START,
BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_NEXT,
BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_RUNTIME,
BUILT_IN_GOMP_PARALLEL_LOOP_MAYBE_NONMONOTONIC_RUNTIME,
BUILT_IN_GOMP_PARALLEL_REDUCTIONS, BUILT_IN_GOMP_SECTIONS2_START,
BUILT_IN_GOMP_TEAMS_REG, BUILT_IN_GOMP_TASKGROUP_REDUCTION_REGISTER,
BUILT_IN_GOMP_TASKGROUP_REDUCTION_UNREGISTER,
BUILT_IN_GOMP_TASK_REDUCTION_REMAP,
BUILT_IN_GOMP_WORKSHARE_TASK_REDUCTION_UNREGISTER): New builtins.
* omp-expand.c (workshare_safe_to_combine_p): Return false for
non-worksharing loops.
(omp_adjust_chunk_size): Don't adjust anything if chunk_size is zero.
(determine_parallel_type): Don't combine parallel with worksharing
which has _reductemp_ clause.
(expand_parallel_call): Emit the GOMP_*nonmonotonic_runtime* or
GOMP_*maybe_nonmonotonic_runtime* builtins instead of GOMP_*runtime*
if there is nonmonotonic modifier or if there is no modifier and no
ordered clause. For dynamic and guided schedule without monotonic
and nonmonotonic modifier, default to nonmonotonic.
(expand_omp_for): Likewise. Adjust expand_omp_for_generic caller, use
GOMP_loop{,_ull}{,_ordered,_doacross}_start builtins if there are
task reductions.
(expand_task_call): Add GOMP_TASK_FLAG_REDUCTION flag to flags if
there are any reduction clauses.
(expand_taskwait_call): New function.
(expand_teams_call): New function.
(expand_omp_taskreg): Allow GIMPLE_OMP_TEAMS and call
expand_teams_call for it. Formatting fix. Handle taskwait with
depend clauses.
(expand_omp_for_generic): Add SCHED_ARG argument. Handle expansion
of worksharing loops with task reductions.
(expand_omp_for_static_nochunk, expand_omp_for_static_chunk): Handle
expansion of worksharing loops with task reductions.
(expand_omp_sections): Handle expansion of sections with task
reductions.
(expand_omp_synch): For host teams call expand_omp_taskreg.
(omp_memory_order_to_memmodel): New function.
(expand_omp_atomic_load, expand_omp_atomic_store,
expand_omp_atomic_fetch_op): Use it and gimple_omp_atomic_memory_order
instead of gimple_omp_atomic_seq_cst_p.
(build_omp_regions_1, omp_make_gimple_edges): Treat taskwait with
depend clauses as a standalone directive.
* omp-general.c (enum omp_requires): New variable.
(omp_extract_for_data): Initialize have_reductemp member. Allow
NE_EXPR even in OpenMP loops, transform them into LT_EXPR or
GT_EXPR loops depending on incr sign. Formatting fixes.
* omp-general.h (struct omp_for_data): Add have_reductemp member.
(enum omp_requires): New enum.
(omp_requires_mask): Declare.
* omp-grid.c (grid_eliminate_combined_simd_part): Formatting fix.
Fix comment typos.
* omp-low.c (struct omp_context): Add task_reductions and
task_reduction_map fields.
(is_host_teams_ctx): New function.
(is_taskreg_ctx): Return true also if is_host_teams_ctx.
(use_pointer_for_field): Use is_global_var instead of
TREE_STATIC || DECL_EXTERNAL, and apply only if not privatized
in outer contexts.
(build_outer_var_ref): Ignore taskgroup outer contexts.
(delete_omp_context): Release task_reductions and task_reduction_map.
(scan_sharing_clauses): Don't add any fields for reduction clause on
taskloop. Handle OMP_CLAUSE__REDUCTEMP_. Handle
OMP_CLAUSE_{IN,TASK}_REDUCTION and OMP_CLAUSE_REDUCTION with task
modifier. Don't ignore shared clauses in is_host_teams_ctx contexts.
Handle OMP_CLAUSE_NONTEMPORAL.
(add_taskreg_looptemp_clauses): Add OMP_CLAUSE__REDUCTEMP_ clause if
needed.
(scan_omp_parallel): Add _reductemp_ clause if there are any reduction
clauses with task modifier.
(scan_omp_task): Handle taskwait with depend clauses.
(finish_taskreg_scan): Move field corresponding to _reductemp_ clause
first. Move also OMP_CLAUSE__REDUCTEMP_ clause in front if present.
Handle GIMPLE_OMP_TEAMS like GIMPLE_OMP_PARALLEL.
(scan_omp_for): Fix comment formatting.
(scan_omp_teams): Handle host teams constructs.
(check_omp_nesting_restrictions): Allow teams with no outer
OpenMP context. Adjust diagnostics for teams strictly nested into
some explicit OpenMP construct other than target. Allow OpenMP atomics
inside of simd regions.
(scan_omp_1_stmt): Call scan_sharing_clauses for taskgroups.
(scan_omp_1_stmt) <case GIMPLE_OMP_TEAMS>: Temporarily bump
taskreg_nesting_level while scanning host teams construct.
(task_reduction_read): New function.
(lower_rec_input_clauses): Handle OMP_CLAUSE_REDUCTION on taskloop
construct. Handle OMP_CLAUSE_IN_REDUCTION and OMP_CLAUSE__REDUCTEMP_
clauses. Handle OMP_CLAUSE_REDUCTION with task modifier. Remove
second argument create_tmp_var if it is NULL. Don't ignore shared
clauses in is_host_teams_ctx contexts. Handle
OMP_CLAUSE_FIRSTPRIVATE_NO_REFERENCE on OMP_CLAUSE_FIRSTPRIVATE
clauses.
(lower_reduction_clauses): Ignore reduction clauses with task
modifier. Remove second argument create_tmp_var if it is NULL.
Initialize OMP_ATOMIC_MEMORY_ORDER to relaxed.
(lower_send_clauses): Ignore reduction clauses with task modifier.
Handle OMP_CLAUSE__REDUCTEMP_. Don't send anything for
OMP_CLAUSE_REDUCTION on taskloop. Handle OMP_CLAUSE_IN_REDUCTION.
(maybe_add_implicit_barrier_cancel): Add OMP_RETURN argument, don't
rely that it is the last stmt in body so far. Ignore outer taskgroup
contexts.
(omp_task_reductions_find_first, omp_task_reduction_iterate,
lower_omp_task_reductions): New functions.
(lower_omp_sections): Handle reduction clauses with taskgroup
modifiers. Adjust maybe_add_implicit_barrier_cancel caller.
(lower_omp_single): Adjust maybe_add_implicit_barrier_cancel caller.
(lower_omp_for): Likewise. Handle reduction clauses with taskgroup
modifiers.
(lower_omp_taskgroup): Handle taskgroup reductions.
(create_task_copyfn): Copy over OMP_CLAUSE__REDUCTEMP_ pointer.
Handle OMP_CLAUSE_IN_REDUCTION and OMP_CLAUSE_REDUCTION clauses.
(lower_depend_clauses): If there are any
OMP_CLAUSE_DEPEND_DEPOBJ or OMP_CLAUSE_DEPEND_MUTEXINOUTSET
depend clauses, use a new array format. If OMP_CLAUSE_DEPEND_LAST is
seen, assume lowering is done already and return early. Set kind
on artificial depend clause to OMP_CLAUSE_DEPEND_LAST.
(lower_omp_taskreg): Handle reduction clauses with task modifier on
parallel construct. Handle reduction clause on taskloop construct.
Handle taskwait with depend clauses.
(lower_omp_1): Use lower_omp_taskreg instead of lower_omp_teams
for host teams constructs.
* tree.c (omp_clause_num_ops): Add in_reduction, task_reduction,
nontemporal and _reductemp_ clause entries.
(omp_clause_code_name): Likewise.
(walk_tree_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION,
OMP_CLAUSE_NONTEMPORAL and OMP_CLAUSE__REDUCTEMP_.
* tree-core.h (enum omp_clause_code): Add
OMP_CLAUSE_{{IN,TASK}_REDUCTION,NONTEMPORAL,_REDUCTEMP_}.
(enum omp_clause_defaultmap_kind, enum omp_memory_order): New.
(struct tree_base): Add omp_atomic_memory_order field into union.
Remove OMP_ATOMIC_SEQ_CST comment.
(enum omp_clause_depend_kind): Add OMP_CLAUSE_DEPEND_MUTEXINOUTSET
and OMP_CLAUSE_DEPEND_DEPOBJ.
(struct tree_omp_clause): Add subcode.defaultmap_kind.
* tree.def (OMP_TASKGROUP): Add another operand, move next to other
OpenMP constructs with body and clauses operands.
* tree.h (OMP_BODY): Use OMP_MASTER instead of OMP_TASKGROUP.
(OMP_CLAUSES): Use OMP_TASKGROUP instead of OMP_SINGLE.
(OMP_TASKGROUP_CLAUSES): Define.
(OMP_CLAUSE_DECL): Use OMP_CLAUSE__REDUCTEMP_ instead of
OMP_CLAUSE__LOOPTEMP_.
(OMP_ATOMIC_SEQ_CST): Remove.
(OMP_ATOMIC_MEMORY_ORDER, OMP_CLAUSE_FIRSTPRIVATE_NO_REFERENCE,
OMP_CLAUSE_LASTPRIVATE_CONDITIONAL): Define.
(OMP_CLAUSE_REDUCTION_CODE, OMP_CLAUSE_REDUCTION_INIT,
OMP_CLAUSE_REDUCTION_MERGE, OMP_CLAUSE_REDUCTION_PLACEHOLDER,
OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER,
OMP_CLAUSE_REDUCTION_OMP_ORIG_REF): Handle
OMP_CLAUSE_{,IN_,TASK_}REDUCTION.
(OMP_CLAUSE_REDUCTION_TASK, OMP_CLAUSE_REDUCTION_INSCAN,
OMP_CLAUSE_DEFAULTMAP_KIND, OMP_CLAUSE_DEFAULTMAP_CATEGORY,
OMP_CLAUSE_DEFAULTMAP_BEHAVIOR, OMP_CLAUSE_DEFAULTMAP_SET_KIND):
Define.
* tree-inline.c (remap_gimple_stmt): Remap taskgroup clauses.
* tree-nested.c (convert_nonlocal_omp_clauses): Handle
OMP_CLAUSE__REDUCTEMP_, OMP_CLAUSE_NONTEMPORAL.
(convert_local_omp_clauses): Likewise. Remove useless test.
* tree-parloops.c (create_call_for_reduction_1): Pass
OMP_MEMORY_ORDER_RELAXED as new argument to
dump_gimple_omp_atomic_load and dump_gimple_omp_atomic_store.
* tree-pretty-print.c (dump_omp_iterators): New function.
(dump_omp_clause): Handle OMP_CLAUSE__REDUCTEMP_,
OMP_CLAUSE_NONTEMPORAL, OMP_CLAUSE_{TASK,IN}_REDUCTION. Print
reduction modifiers. Handle OMP_CLAUSE_DEPEND_DEPOBJ and
OMP_CLAUSE_DEPEND_MUTEXINOUTSET. Print iterators in depend clauses.
Print __internal__ for OMP_CLAUSE_DEPEND_LAST. Handle cancel and
simd OMP_CLAUSE_IF_MODIFIERs. Handle new kinds of
OMP_CLAUSE_DEFAULTMAP. Print conditional: for
OMP_CLAUSE_LASTPRIVATE_CONDITIONAL.
(dump_omp_atomic_memory_order): New function.
(dump_generic_node): Use it. Print taskgroup clauses. Print
taskwait with depend clauses.
* tree-pretty-print.h (dump_omp_atomic_memory_order): Declare.
* tree-streamer-in.c (unpack_ts_omp_clause_value_fields):
Handle OMP_CLAUSE_{TASK,IN}_REDUCTION.
* tree-streamer-out.c (pack_ts_omp_clause_value_fields,
write_ts_omp_clause_tree_pointers): Likewise.
2018-11-08 David Malcolm <dmalcolm@redhat.com>
PR ipa/86395


@ -2579,6 +2579,7 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
$(srcdir)/internal-fn.h \
$(srcdir)/hsa-common.c \
$(srcdir)/calls.c \
$(srcdir)/omp-general.h \
@all_gtfiles@
# Compute the list of GT header files from the corresponding C sources,


@ -251,6 +251,7 @@ DEF_FUNCTION_TYPE_1 (BT_FN_INT_CONST_STRING, BT_INT, BT_CONST_STRING)
DEF_FUNCTION_TYPE_1 (BT_FN_PTR_PTR, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_1 (BT_FN_VOID_VALIST_REF, BT_VOID, BT_VALIST_REF)
DEF_FUNCTION_TYPE_1 (BT_FN_VOID_INT, BT_VOID, BT_INT)
DEF_FUNCTION_TYPE_1 (BT_FN_VOID_BOOL, BT_VOID, BT_BOOL)
DEF_FUNCTION_TYPE_1 (BT_FN_FLOAT_CONST_STRING, BT_FLOAT, BT_CONST_STRING)
DEF_FUNCTION_TYPE_1 (BT_FN_DOUBLE_CONST_STRING, BT_DOUBLE, BT_CONST_STRING)
DEF_FUNCTION_TYPE_1 (BT_FN_LONGDOUBLE_CONST_STRING,
@ -621,6 +622,9 @@ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_UINT32_UINT64_PTR,
BT_VOID, BT_UINT32, BT_UINT64, BT_PTR)
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_UINT32_UINT32_PTR,
BT_VOID, BT_UINT32, BT_UINT32, BT_PTR)
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_SIZE_SIZE_PTR, BT_VOID, BT_SIZE, BT_SIZE,
BT_PTR)
DEF_FUNCTION_TYPE_3 (BT_FN_UINT_UINT_PTR_PTR, BT_UINT, BT_UINT, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_4 (BT_FN_SIZE_CONST_PTR_SIZE_SIZE_FILEPTR,
BT_SIZE, BT_CONST_PTR, BT_SIZE, BT_SIZE, BT_FILEPTR)
@ -644,6 +648,8 @@ DEF_FUNCTION_TYPE_4 (BT_FN_INT_FILEPTR_INT_CONST_STRING_VALIST_ARG,
BT_INT, BT_FILEPTR, BT_INT, BT_CONST_STRING, BT_VALIST_ARG)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_OMPFN_PTR_UINT_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
DEF_FUNCTION_TYPE_4 (BT_FN_UINT_OMPFN_PTR_UINT_UINT,
BT_UINT, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_VOID, BT_SIZE,
@ -729,6 +735,12 @@ DEF_FUNCTION_TYPE_7 (BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR,
DEF_FUNCTION_TYPE_8 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_UINT)
DEF_FUNCTION_TYPE_8 (BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_BOOL, BT_UINT, BT_PTR_LONG, BT_LONG, BT_LONG,
BT_PTR_LONG, BT_PTR_LONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_8 (BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
BT_BOOL, BT_UINT, BT_PTR_ULONGLONG, BT_LONG, BT_ULONGLONG,
BT_PTR_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_9 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,
@ -737,6 +749,14 @@ DEF_FUNCTION_TYPE_9 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT,
DEF_FUNCTION_TYPE_9 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_PTR,
BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
BT_PTR, BT_PTR, BT_UINT, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_9 (BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_BOOL, BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_LONG,
BT_PTR_LONG, BT_PTR_LONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_10 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
BT_ULONGLONG, BT_LONG, BT_ULONGLONG, BT_PTR_ULONGLONG,
BT_PTR_ULONGLONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_11 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,


@ -1,3 +1,41 @@
2018-11-08 Jakub Jelinek <jakub@redhat.com>
* c-common.h (c_finish_omp_taskgroup): Add CLAUSES argument.
(c_finish_omp_atomic): Replace bool SEQ_CST argument with
enum omp_memory_order MEMORY_ORDER.
(c_finish_omp_flush): Add MO argument.
(c_omp_depend_t_p, c_finish_omp_depobj): Declare.
(c_finish_omp_for): Add FINAL_P argument.
* c-omp.c: Include memmodel.h.
(c_finish_omp_taskgroup): Add CLAUSES argument. Set
OMP_TASKGROUP_CLAUSES to it.
(c_finish_omp_atomic): Replace bool SEQ_CST argument with
enum omp_memory_order MEMORY_ORDER. Set OMP_ATOMIC_MEMORY_ORDER
instead of OMP_ATOMIC_SEQ_CST.
(c_omp_depend_t_p, c_finish_omp_depobj): New functions.
(c_finish_omp_flush): Add MO argument, if not MEMMODEL_LAST, emit
__atomic_thread_fence call with the given value.
(check_omp_for_incr_expr): Formatting fixes.
(c_finish_omp_for): Add FINAL_P argument. Allow NE_EXPR
even in OpenMP loops, diagnose if NE_EXPR and incr expression
is not constant expression 1 or -1. Transform NE_EXPR loops
with iterators pointers to VLA into LT_EXPR or GT_EXPR loops.
(c_omp_check_loop_iv_r): Look for orig decl of C++ range for
loops too.
(c_omp_split_clauses): Add support for combined
#pragma omp parallel master and
#pragma omp {,parallel }master taskloop{, simd} constructs.
Handle OMP_CLAUSE_IN_REDUCTION. Handle OMP_CLAUSE_REDUCTION_TASK.
Handle OMP_CLAUSE_NONTEMPORAL. Handle splitting OMP_CLAUSE_IF
also to OMP_SIMD. Copy OMP_CLAUSE_LASTPRIVATE_CONDITIONAL.
(c_omp_predetermined_sharing): Don't return
OMP_CLAUSE_DEFAULT_SHARED for const qualified decls.
* c-pragma.c (omp_pragmas): Add PRAGMA_OMP_DEPOBJ and
PRAGMA_OMP_REQUIRES.
* c-pragma.h (enum pragma_kind): Likewise.
(enum pragma_omp_clause): Add PRAGMA_OMP_CLAUSE_NONTEMPORAL
and PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION.
2018-11-08 David Malcolm <dmalcolm@redhat.com>
* c-format.c (gcc_dump_printf_char_table): Add entry for %f.


@ -1149,18 +1149,21 @@ enum c_omp_region_type
};
extern tree c_finish_omp_master (location_t, tree);
extern tree c_finish_omp_taskgroup (location_t, tree);
extern tree c_finish_omp_taskgroup (location_t, tree, tree);
extern tree c_finish_omp_critical (location_t, tree, tree, tree);
extern tree c_finish_omp_ordered (location_t, tree, tree);
extern void c_finish_omp_barrier (location_t);
extern tree c_finish_omp_atomic (location_t, enum tree_code, enum tree_code,
tree, tree, tree, tree, tree, bool, bool,
bool = false);
extern void c_finish_omp_flush (location_t);
tree, tree, tree, tree, tree, bool,
enum omp_memory_order, bool = false);
extern bool c_omp_depend_t_p (tree);
extern void c_finish_omp_depobj (location_t, tree, enum omp_clause_depend_kind,
tree);
extern void c_finish_omp_flush (location_t, int);
extern void c_finish_omp_taskwait (location_t);
extern void c_finish_omp_taskyield (location_t);
extern tree c_finish_omp_for (location_t, enum tree_code, tree, tree, tree,
tree, tree, tree, tree);
tree, tree, tree, tree, bool);
extern bool c_omp_check_loop_iv (tree, tree, walk_tree_lh);
extern bool c_omp_check_loop_iv_exprs (location_t, tree, tree, tree, tree,
walk_tree_lh);


@ -28,8 +28,10 @@ along with GCC; see the file COPYING3. If not see
#include "c-common.h"
#include "gimple-expr.h"
#include "c-pragma.h"
#include "stringpool.h"
#include "omp-general.h"
#include "gomp-constants.h"
#include "memmodel.h"
/* Complete a #pragma oacc wait construct. LOC is the location of
@ -70,7 +72,7 @@ c_finish_oacc_wait (location_t loc, tree parms, tree clauses)
}
/* Complete a #pragma omp master construct. STMT is the structured-block
that follows the pragma. LOC is the l*/
that follows the pragma. LOC is the location of the #pragma. */
tree
c_finish_omp_master (location_t loc, tree stmt)
@ -80,18 +82,21 @@ c_finish_omp_master (location_t loc, tree stmt)
return t;
}
/* Complete a #pragma omp taskgroup construct. STMT is the structured-block
that follows the pragma. LOC is the l*/
/* Complete a #pragma omp taskgroup construct. BODY is the structured-block
that follows the pragma. LOC is the location of the #pragma. */
tree
c_finish_omp_taskgroup (location_t loc, tree stmt)
c_finish_omp_taskgroup (location_t loc, tree body, tree clauses)
{
tree t = add_stmt (build1 (OMP_TASKGROUP, void_type_node, stmt));
SET_EXPR_LOCATION (t, loc);
return t;
tree stmt = make_node (OMP_TASKGROUP);
TREE_TYPE (stmt) = void_type_node;
OMP_TASKGROUP_BODY (stmt) = body;
OMP_TASKGROUP_CLAUSES (stmt) = clauses;
SET_EXPR_LOCATION (stmt, loc);
return add_stmt (stmt);
}
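For context (not part of the patch): the new CLAUSES operand exists because OpenMP 5.0 allows a task_reduction clause on taskgroup. A minimal user-level sketch, with a[] and n purely illustrative:
  int sum = 0;
  #pragma omp taskgroup task_reduction (+: sum)
  {
    for (int i = 0; i < n; i++)
      #pragma omp task in_reduction (+: sum)
      sum += a[i];    /* each task contributes to the taskgroup's reduction */
  }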
/* Complete a #pragma omp critical construct. STMT is the structured-block
/* Complete a #pragma omp critical construct. BODY is the structured-block
that follows the pragma, NAME is the identifier in the pragma, or null
if it was omitted. LOC is the location of the #pragma. */
@ -181,8 +186,8 @@ c_finish_omp_taskyield (location_t loc)
tree
c_finish_omp_atomic (location_t loc, enum tree_code code,
enum tree_code opcode, tree lhs, tree rhs,
tree v, tree lhs1, tree rhs1, bool swapped, bool seq_cst,
bool test)
tree v, tree lhs1, tree rhs1, bool swapped,
enum omp_memory_order memory_order, bool test)
{
tree x, type, addr, pre = NULL_TREE;
HOST_WIDE_INT bitpos = 0, bitsize = 0;
@ -264,7 +269,7 @@ c_finish_omp_atomic (location_t loc, enum tree_code code,
{
x = build1 (OMP_ATOMIC_READ, type, addr);
SET_EXPR_LOCATION (x, loc);
OMP_ATOMIC_SEQ_CST (x) = seq_cst;
OMP_ATOMIC_MEMORY_ORDER (x) = memory_order;
if (blhs)
x = build3_loc (loc, BIT_FIELD_REF, TREE_TYPE (blhs), x,
bitsize_int (bitsize), bitsize_int (bitpos));
@ -315,7 +320,7 @@ c_finish_omp_atomic (location_t loc, enum tree_code code,
type = void_type_node;
x = build2 (code, type, addr, rhs);
SET_EXPR_LOCATION (x, loc);
OMP_ATOMIC_SEQ_CST (x) = seq_cst;
OMP_ATOMIC_MEMORY_ORDER (x) = memory_order;
/* Generally it is hard to prove lhs1 and lhs are the same memory
location, just diagnose different variables. */
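For reference (illustrative, not from the patch): the memory_order value stored into OMP_ATOMIC_MEMORY_ORDER above comes from the new OpenMP 5.0 memory-order clauses on atomic, replacing the old boolean seq_cst flag, e.g.:
  #pragma omp atomic update seq_cst    /* presumably OMP_MEMORY_ORDER_SEQ_CST, by analogy with the
                                          OMP_MEMORY_ORDER_RELAXED enumerator used later in this patch */
  x += 1;
  #pragma omp atomic read acquire      /* acquire ordering, not expressible with the old flag */
  v = x;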
@ -413,17 +418,173 @@ c_finish_omp_atomic (location_t loc, enum tree_code code,
}
/* Return true if TYPE is the implementation's omp_depend_t. */
bool
c_omp_depend_t_p (tree type)
{
type = TYPE_MAIN_VARIANT (type);
return (TREE_CODE (type) == RECORD_TYPE
&& TYPE_NAME (type)
&& ((TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
? DECL_NAME (TYPE_NAME (type)) : TYPE_NAME (type))
== get_identifier ("omp_depend_t"))
&& (!TYPE_CONTEXT (type)
|| TREE_CODE (TYPE_CONTEXT (type)) == TRANSLATION_UNIT_DECL)
&& COMPLETE_TYPE_P (type)
&& TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
&& !compare_tree_int (TYPE_SIZE (type),
2 * tree_to_uhwi (TYPE_SIZE (ptr_type_node))));
}
/* Complete a #pragma omp depobj construct. LOC is the location of the
#pragma. */
void
c_finish_omp_depobj (location_t loc, tree depobj,
enum omp_clause_depend_kind kind, tree clause)
{
tree t = NULL_TREE;
if (!error_operand_p (depobj))
{
if (!c_omp_depend_t_p (TREE_TYPE (depobj)))
{
error_at (EXPR_LOC_OR_LOC (depobj, loc),
"type of %<depobj%> expression is not %<omp_depend_t%>");
depobj = error_mark_node;
}
else if (TYPE_READONLY (TREE_TYPE (depobj)))
{
error_at (EXPR_LOC_OR_LOC (depobj, loc),
"%<const%> qualified %<depobj%> expression");
depobj = error_mark_node;
}
}
else
depobj = error_mark_node;
if (clause == error_mark_node)
return;
if (clause)
{
gcc_assert (TREE_CODE (clause) == OMP_CLAUSE
&& OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_DEPEND);
if (OMP_CLAUSE_CHAIN (clause))
error_at (OMP_CLAUSE_LOCATION (clause),
"more than one locator in %<depend%> clause on %<depobj%> "
"construct");
switch (OMP_CLAUSE_DEPEND_KIND (clause))
{
case OMP_CLAUSE_DEPEND_DEPOBJ:
error_at (OMP_CLAUSE_LOCATION (clause),
"%<depobj%> dependence type specified in %<depend%> "
"clause on %<depobj%> construct");
return;
case OMP_CLAUSE_DEPEND_SOURCE:
case OMP_CLAUSE_DEPEND_SINK:
error_at (OMP_CLAUSE_LOCATION (clause),
"%<depend(%s)%> is only allowed in %<omp ordered%>",
OMP_CLAUSE_DEPEND_KIND (clause) == OMP_CLAUSE_DEPEND_SOURCE
? "source" : "sink");
return;
case OMP_CLAUSE_DEPEND_IN:
case OMP_CLAUSE_DEPEND_OUT:
case OMP_CLAUSE_DEPEND_INOUT:
case OMP_CLAUSE_DEPEND_MUTEXINOUTSET:
kind = OMP_CLAUSE_DEPEND_KIND (clause);
t = OMP_CLAUSE_DECL (clause);
gcc_assert (t);
if (TREE_CODE (t) == TREE_LIST
&& TREE_PURPOSE (t)
&& TREE_CODE (TREE_PURPOSE (t)) == TREE_VEC)
{
error_at (OMP_CLAUSE_LOCATION (clause),
"%<iterator%> modifier may not be specified on "
"%<depobj%> construct");
return;
}
if (TREE_CODE (t) == COMPOUND_EXPR)
{
tree t1 = build_fold_addr_expr (TREE_OPERAND (t, 1));
t = build2 (COMPOUND_EXPR, TREE_TYPE (t1), TREE_OPERAND (t, 0),
t1);
}
else
t = build_fold_addr_expr (t);
break;
default:
gcc_unreachable ();
}
}
else
gcc_assert (kind != OMP_CLAUSE_DEPEND_SOURCE);
if (depobj == error_mark_node)
return;
depobj = build_fold_addr_expr_loc (EXPR_LOC_OR_LOC (depobj, loc), depobj);
tree dtype
= build_pointer_type_for_mode (ptr_type_node, TYPE_MODE (ptr_type_node),
true);
depobj = fold_convert (dtype, depobj);
tree r;
if (clause)
{
depobj = save_expr (depobj);
r = build_indirect_ref (loc, depobj, RO_UNARY_STAR);
add_stmt (build2 (MODIFY_EXPR, void_type_node, r, t));
}
int k;
switch (kind)
{
case OMP_CLAUSE_DEPEND_IN:
k = GOMP_DEPEND_IN;
break;
case OMP_CLAUSE_DEPEND_OUT:
k = GOMP_DEPEND_OUT;
break;
case OMP_CLAUSE_DEPEND_INOUT:
k = GOMP_DEPEND_INOUT;
break;
case OMP_CLAUSE_DEPEND_MUTEXINOUTSET:
k = GOMP_DEPEND_MUTEXINOUTSET;
break;
case OMP_CLAUSE_DEPEND_LAST:
k = -1;
break;
default:
gcc_unreachable ();
}
t = build_int_cst (ptr_type_node, k);
depobj = build2_loc (loc, POINTER_PLUS_EXPR, TREE_TYPE (depobj), depobj,
TYPE_SIZE_UNIT (ptr_type_node));
r = build_indirect_ref (loc, depobj, RO_UNARY_STAR);
add_stmt (build2 (MODIFY_EXPR, void_type_node, r, t));
}
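For context (illustrative only): the function above lowers the OpenMP 5.0 depobj construct by storing the dependence address and a GOMP_DEPEND_* kind into an omp_depend_t object two pointers in size. Typical usage looks like:
  omp_depend_t d;
  #pragma omp depobj (d) depend (inout: x)   /* initialize the dependence object */
  #pragma omp task depend (depobj: d)        /* reuse the stored dependence */
  x++;
  #pragma omp depobj (d) destroy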
/* Complete a #pragma omp flush construct. We don't do anything with
the variable list that the syntax allows. LOC is the location of
the #pragma. */
void
c_finish_omp_flush (location_t loc)
c_finish_omp_flush (location_t loc, int mo)
{
tree x;
x = builtin_decl_explicit (BUILT_IN_SYNC_SYNCHRONIZE);
x = build_call_expr_loc (loc, x, 0);
if (mo == MEMMODEL_LAST)
{
x = builtin_decl_explicit (BUILT_IN_SYNC_SYNCHRONIZE);
x = build_call_expr_loc (loc, x, 0);
}
else
{
x = builtin_decl_explicit (BUILT_IN_ATOMIC_THREAD_FENCE);
x = build_call_expr_loc (loc, x, 1,
build_int_cst (integer_type_node, mo));
}
add_stmt (x);
}
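A sketch of the corresponding source forms (hypothetical example): a plain flush still expands to __sync_synchronize, while an OpenMP 5.0 flush with a memory-order clause becomes an __atomic_thread_fence call with that model:
  #pragma omp flush              /* mo == MEMMODEL_LAST -> __sync_synchronize () */
  #pragma omp flush acq_rel      /* -> __atomic_thread_fence (__ATOMIC_ACQ_REL) */
  #pragma omp flush release      /* -> __atomic_thread_fence (__ATOMIC_RELEASE) */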
@ -454,17 +615,17 @@ check_omp_for_incr_expr (location_t loc, tree exp, tree decl)
t = check_omp_for_incr_expr (loc, TREE_OPERAND (exp, 0), decl);
if (t != error_mark_node)
return fold_build2_loc (loc, MINUS_EXPR,
TREE_TYPE (exp), t, TREE_OPERAND (exp, 1));
TREE_TYPE (exp), t, TREE_OPERAND (exp, 1));
break;
case PLUS_EXPR:
t = check_omp_for_incr_expr (loc, TREE_OPERAND (exp, 0), decl);
if (t != error_mark_node)
return fold_build2_loc (loc, PLUS_EXPR,
TREE_TYPE (exp), t, TREE_OPERAND (exp, 1));
TREE_TYPE (exp), t, TREE_OPERAND (exp, 1));
t = check_omp_for_incr_expr (loc, TREE_OPERAND (exp, 1), decl);
if (t != error_mark_node)
return fold_build2_loc (loc, PLUS_EXPR,
TREE_TYPE (exp), TREE_OPERAND (exp, 0), t);
TREE_TYPE (exp), TREE_OPERAND (exp, 0), t);
break;
case COMPOUND_EXPR:
{
@ -530,7 +691,7 @@ c_omp_for_incr_canonicalize_ptr (location_t loc, tree decl, tree incr)
tree
c_finish_omp_for (location_t locus, enum tree_code code, tree declv,
tree orig_declv, tree initv, tree condv, tree incrv,
tree body, tree pre_body)
tree body, tree pre_body, bool final_p)
{
location_t elocus;
bool fail = false;
@ -667,7 +828,8 @@ c_finish_omp_for (location_t locus, enum tree_code code, tree declv,
{
if (!INTEGRAL_TYPE_P (TREE_TYPE (decl)))
{
cond_ok = false;
if (code == OACC_LOOP || TREE_CODE (cond) == EQ_EXPR)
cond_ok = false;
}
else if (operand_equal_p (TREE_OPERAND (cond, 1),
TYPE_MIN_VALUE (TREE_TYPE (decl)),
@ -679,7 +841,7 @@ c_finish_omp_for (location_t locus, enum tree_code code, tree declv,
0))
TREE_SET_CODE (cond, TREE_CODE (cond) == NE_EXPR
? LT_EXPR : GE_EXPR);
else
else if (code == OACC_LOOP || TREE_CODE (cond) == EQ_EXPR)
cond_ok = false;
}
@ -730,6 +892,21 @@ c_finish_omp_for (location_t locus, enum tree_code code, tree declv,
break;
incr_ok = true;
if (!fail
&& TREE_CODE (cond) == NE_EXPR
&& TREE_CODE (TREE_TYPE (decl)) == POINTER_TYPE
&& TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (decl)))
&& (TREE_CODE (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (decl))))
!= INTEGER_CST))
{
/* For pointer to VLA, transform != into < or >
depending on whether incr is increment or decrement. */
if (TREE_CODE (incr) == PREINCREMENT_EXPR
|| TREE_CODE (incr) == POSTINCREMENT_EXPR)
TREE_SET_CODE (cond, LT_EXPR);
else
TREE_SET_CODE (cond, GT_EXPR);
}
incr = c_omp_for_incr_canonicalize_ptr (elocus, decl, incr);
break;
@ -765,6 +942,58 @@ c_finish_omp_for (location_t locus, enum tree_code code, tree declv,
incr = build2 (MODIFY_EXPR, void_type_node, decl, t);
}
}
if (!fail
&& incr_ok
&& TREE_CODE (cond) == NE_EXPR)
{
tree i = TREE_OPERAND (incr, 1);
i = TREE_OPERAND (i, TREE_OPERAND (i, 0) == decl);
i = c_fully_fold (i, false, NULL);
if (!final_p
&& TREE_CODE (i) != INTEGER_CST)
;
else if (TREE_CODE (TREE_TYPE (decl)) == POINTER_TYPE)
{
tree unit
= TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (decl)));
if (unit)
{
enum tree_code ccode = GT_EXPR;
unit = c_fully_fold (unit, false, NULL);
i = fold_convert (TREE_TYPE (unit), i);
if (operand_equal_p (unit, i, 0))
ccode = LT_EXPR;
if (ccode == GT_EXPR)
{
i = fold_unary (NEGATE_EXPR, TREE_TYPE (i), i);
if (i == NULL_TREE
|| !operand_equal_p (unit, i, 0))
{
error_at (elocus,
"increment is not constant 1 or "
"-1 for != condition");
fail = true;
}
}
if (TREE_CODE (unit) != INTEGER_CST)
/* For pointer to VLA, transform != into < or >
depending on whether the pointer is
incremented or decremented in each
iteration. */
TREE_SET_CODE (cond, ccode);
}
}
else
{
if (!integer_onep (i) && !integer_minus_onep (i))
{
error_at (elocus,
"increment is not constant 1 or -1 for"
" != condition");
fail = true;
}
}
}
break;
default:
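For context (illustrative): the new code above implements the OpenMP 5.0 relaxation that a canonical loop may use a != condition, provided the increment is a constant 1 or -1 (for pointers to VLAs the condition is instead rewritten to < or >):
  #pragma omp parallel for
  for (int i = 0; i != n; i++)   /* accepted in OpenMP 5.0: != with step 1 */
    a[i] = 0;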
@ -829,7 +1058,13 @@ c_omp_check_loop_iv_r (tree *tp, int *walk_subtrees, void *data)
for (i = 0; i < TREE_VEC_LENGTH (d->declv); i++)
if (*tp == TREE_VEC_ELT (d->declv, i)
|| (TREE_CODE (TREE_VEC_ELT (d->declv, i)) == TREE_LIST
&& *tp == TREE_PURPOSE (TREE_VEC_ELT (d->declv, i))))
&& *tp == TREE_PURPOSE (TREE_VEC_ELT (d->declv, i)))
|| (TREE_CODE (TREE_VEC_ELT (d->declv, i)) == TREE_LIST
&& TREE_CHAIN (TREE_VEC_ELT (d->declv, i))
&& (TREE_CODE (TREE_CHAIN (TREE_VEC_ELT (d->declv, i)))
== TREE_VEC)
&& *tp == TREE_VEC_ELT (TREE_CHAIN (TREE_VEC_ELT (d->declv,
i)), 2)))
{
location_t loc = d->expr_loc;
if (loc == UNKNOWN_LOCATION)
@ -1025,18 +1260,24 @@ c_oacc_split_loop_clauses (tree clauses, tree *not_loop_clauses,
}
/* This function attempts to split or duplicate clauses for OpenMP
combined/composite constructs. Right now there are 21 different
combined/composite constructs. Right now there are 26 different
constructs. CODE is the innermost construct in the combined construct,
and MASK allows to determine which constructs are combined together,
as every construct has at least one clause that no other construct
has (except for OMP_SECTIONS, but that can be only combined with parallel).
has (except for OMP_SECTIONS, but that can be only combined with parallel,
and OMP_MASTER, which doesn't have any clauses at all).
OpenMP combined/composite constructs are:
#pragma omp distribute parallel for
#pragma omp distribute parallel for simd
#pragma omp distribute simd
#pragma omp for simd
#pragma omp master taskloop
#pragma omp master taskloop simd
#pragma omp parallel for
#pragma omp parallel for simd
#pragma omp parallel master
#pragma omp parallel master taskloop
#pragma omp parallel master taskloop simd
#pragma omp parallel sections
#pragma omp target parallel
#pragma omp target parallel for
@ -1070,8 +1311,9 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
{
case OMP_FOR:
case OMP_SIMD:
cclauses[C_OMP_CLAUSE_SPLIT_FOR]
= build_omp_clause (loc, OMP_CLAUSE_NOWAIT);
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_SCHEDULE)) != 0)
cclauses[C_OMP_CLAUSE_SPLIT_FOR]
= build_omp_clause (loc, OMP_CLAUSE_NOWAIT);
break;
case OMP_SECTIONS:
cclauses[C_OMP_CLAUSE_SPLIT_SECTIONS]
@ -1118,6 +1360,7 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
case OMP_CLAUSE_SAFELEN:
case OMP_CLAUSE_SIMDLEN:
case OMP_CLAUSE_ALIGNED:
case OMP_CLAUSE_NONTEMPORAL:
s = C_OMP_CLAUSE_SPLIT_SIMD;
break;
case OMP_CLAUSE_GRAINSIZE:
@ -1175,8 +1418,8 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
else
s = C_OMP_CLAUSE_SPLIT_DISTRIBUTE;
break;
/* Private clause is supported on all constructs,
it is enough to put it on the innermost one. For
/* Private clause is supported on all constructs but master,
it is enough to put it on the innermost one other than master. For
#pragma omp {for,sections} put it on parallel though,
as that's what we did for OpenMP 3.1. */
case OMP_CLAUSE_PRIVATE:
@ -1187,12 +1430,14 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
case OMP_PARALLEL: s = C_OMP_CLAUSE_SPLIT_PARALLEL; break;
case OMP_DISTRIBUTE: s = C_OMP_CLAUSE_SPLIT_DISTRIBUTE; break;
case OMP_TEAMS: s = C_OMP_CLAUSE_SPLIT_TEAMS; break;
case OMP_MASTER: s = C_OMP_CLAUSE_SPLIT_PARALLEL; break;
case OMP_TASKLOOP: s = C_OMP_CLAUSE_SPLIT_TASKLOOP; break;
default: gcc_unreachable ();
}
break;
/* Firstprivate clause is supported on all constructs but
simd. Put it on the outermost of those and duplicate on teams
and parallel. */
simd and master. Put it on the outermost of those and duplicate on
teams and parallel. */
case OMP_CLAUSE_FIRSTPRIVATE:
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_MAP))
!= 0)
@ -1231,6 +1476,11 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
else
s = C_OMP_CLAUSE_SPLIT_DISTRIBUTE;
}
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NOGROUP)) != 0)
/* This must be
#pragma omp parallel master taskloop{, simd}. */
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
else
/* This must be
#pragma omp parallel{, for{, simd}, sections}
@ -1260,8 +1510,10 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NOGROUP)) != 0)
{
/* This must be #pragma omp taskloop simd. */
gcc_assert (code == OMP_SIMD);
/* This must be #pragma omp {,{,parallel }master }taskloop simd
or
#pragma omp {,parallel }master taskloop. */
gcc_assert (code == OMP_SIMD || code == OMP_TASKLOOP);
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
}
else
@ -1271,9 +1523,9 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
s = C_OMP_CLAUSE_SPLIT_FOR;
}
break;
/* Lastprivate is allowed on distribute, for, sections and simd. In
parallel {for{, simd},sections} we actually want to put it on
parallel rather than for or sections. */
/* Lastprivate is allowed on distribute, for, sections, taskloop and
simd. In parallel {for{, simd},sections} we actually want to put
it on parallel rather than for or sections. */
case OMP_CLAUSE_LASTPRIVATE:
if (code == OMP_DISTRIBUTE)
{
@ -1287,6 +1539,8 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
OMP_CLAUSE_LASTPRIVATE);
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_DISTRIBUTE];
OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (c)
= OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (clauses);
cclauses[C_OMP_CLAUSE_SPLIT_DISTRIBUTE] = c;
}
if (code == OMP_FOR || code == OMP_SECTIONS)
@ -1298,12 +1552,19 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
s = C_OMP_CLAUSE_SPLIT_FOR;
break;
}
if (code == OMP_TASKLOOP)
{
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
break;
}
gcc_assert (code == OMP_SIMD);
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_SCHEDULE)) != 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_LASTPRIVATE);
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (c)
= OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (clauses);
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NUM_THREADS))
!= 0)
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
@ -1312,6 +1573,16 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
OMP_CLAUSE_CHAIN (c) = cclauses[s];
cclauses[s] = c;
}
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP)) != 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_LASTPRIVATE);
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (c)
= OMP_CLAUSE_LASTPRIVATE_CONDITIONAL (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP];
cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP] = c;
}
s = C_OMP_CLAUSE_SPLIT_SIMD;
break;
/* Shared and default clauses are allowed on parallel, teams and
@ -1321,6 +1592,19 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP))
!= 0)
{
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NUM_THREADS))
!= 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_CODE (clauses));
if (OMP_CLAUSE_CODE (clauses) == OMP_CLAUSE_SHARED)
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
else
OMP_CLAUSE_DEFAULT_KIND (c)
= OMP_CLAUSE_DEFAULT_KIND (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_PARALLEL];
cclauses[C_OMP_CLAUSE_SPLIT_PARALLEL] = c;
}
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
break;
}
@ -1345,10 +1629,33 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
}
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
break;
/* Reduction is allowed on simd, for, parallel, sections and teams.
Duplicate it on all of them, but omit on for or sections if
parallel is present. */
/* Reduction is allowed on simd, for, parallel, sections, taskloop
and teams. Duplicate it on all of them, but omit on for or
sections if parallel is present. If taskloop is combined with
parallel, omit it on parallel. */
case OMP_CLAUSE_REDUCTION:
if (OMP_CLAUSE_REDUCTION_TASK (clauses))
{
if (code == OMP_SIMD /* || code == OMP_LOOP */)
{
error_at (OMP_CLAUSE_LOCATION (clauses),
"invalid %<task%> reduction modifier on construct "
"combined with %<simd%>" /* or %<loop%> */);
OMP_CLAUSE_REDUCTION_TASK (clauses) = 0;
}
else if (code != OMP_SECTIONS
&& (mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) == 0
&& (mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_SCHEDULE)) == 0)
{
error_at (OMP_CLAUSE_LOCATION (clauses),
"invalid %<task%> reduction modifier on construct "
"not combined with %<parallel%>, %<for%> or "
"%<sections%>");
OMP_CLAUSE_REDUCTION_TASK (clauses) = 0;
}
}
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_SCHEDULE)) != 0)
{
if (code == OMP_SIMD)
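Illustrative example of the task reduction modifier being validated above (sum, a[] and n are hypothetical): it is accepted on parallel/for/sections combinations and rejected when combined with simd:
  #pragma omp parallel for reduction (task, +: sum)
  for (int i = 0; i < n; i++)
    {
      sum += a[i];
      #pragma omp task in_reduction (+: sum)   /* explicit tasks may join the reduction */
      sum += a[i] * a[i];
    }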
@ -1377,9 +1684,9 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
= OMP_CLAUSE_REDUCTION_PLACEHOLDER (clauses);
OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (c)
= OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_PARALLEL];
cclauses[C_OMP_CLAUSE_SPLIT_PARALLEL] = c;
s = C_OMP_CLAUSE_SPLIT_TEAMS;
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_TEAMS];
cclauses[C_OMP_CLAUSE_SPLIT_TEAMS] = c;
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
}
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) != 0)
@ -1387,46 +1694,154 @@ c_omp_split_clauses (location_t loc, enum tree_code code,
else
s = C_OMP_CLAUSE_SPLIT_FOR;
}
else if (code == OMP_SECTIONS || code == OMP_PARALLEL)
else if (code == OMP_SECTIONS
|| code == OMP_PARALLEL
|| code == OMP_MASTER)
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
else if (code == OMP_TASKLOOP)
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
else if (code == OMP_SIMD)
s = C_OMP_CLAUSE_SPLIT_SIMD;
{
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP))
!= 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_REDUCTION);
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
OMP_CLAUSE_REDUCTION_CODE (c)
= OMP_CLAUSE_REDUCTION_CODE (clauses);
OMP_CLAUSE_REDUCTION_PLACEHOLDER (c)
= OMP_CLAUSE_REDUCTION_PLACEHOLDER (clauses);
OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (c)
= OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP];
cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP] = c;
}
s = C_OMP_CLAUSE_SPLIT_SIMD;
}
else
s = C_OMP_CLAUSE_SPLIT_TEAMS;
break;
case OMP_CLAUSE_IN_REDUCTION:
/* in_reduction on taskloop simd becomes reduction on the simd
and keeps being in_reduction on taskloop. */
if (code == OMP_SIMD)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_REDUCTION);
OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
OMP_CLAUSE_REDUCTION_CODE (c)
= OMP_CLAUSE_REDUCTION_CODE (clauses);
OMP_CLAUSE_REDUCTION_PLACEHOLDER (c)
= OMP_CLAUSE_REDUCTION_PLACEHOLDER (clauses);
OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (c)
= OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_SIMD];
cclauses[C_OMP_CLAUSE_SPLIT_SIMD] = c;
}
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
break;
case OMP_CLAUSE_IF:
if (OMP_CLAUSE_IF_MODIFIER (clauses) != ERROR_MARK)
{
s = C_OMP_CLAUSE_SPLIT_COUNT;
switch (OMP_CLAUSE_IF_MODIFIER (clauses))
{
case OMP_PARALLEL:
if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) != 0)
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
break;
case OMP_SIMD:
if (code == OMP_SIMD)
s = C_OMP_CLAUSE_SPLIT_SIMD;
break;
case OMP_TASKLOOP:
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP))
!= 0)
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
break;
case OMP_TARGET:
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_MAP))
!= 0)
s = C_OMP_CLAUSE_SPLIT_TARGET;
break;
default:
break;
}
if (s != C_OMP_CLAUSE_SPLIT_COUNT)
break;
/* Error-recovery here, invalid if-modifier specified, add the
clause to just one construct. */
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_MAP)) != 0)
s = C_OMP_CLAUSE_SPLIT_TARGET;
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) != 0)
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NOGROUP)) != 0)
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
else if (code == OMP_SIMD)
s = C_OMP_CLAUSE_SPLIT_SIMD;
else
gcc_unreachable ();
break;
}
/* Otherwise, duplicate if clause to all constructs. */
if (code == OMP_SIMD)
{
if ((mask & ((OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_MAP)
| (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP)))
!= 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_IF);
OMP_CLAUSE_IF_MODIFIER (c)
= OMP_CLAUSE_IF_MODIFIER (clauses);
OMP_CLAUSE_IF_EXPR (c) = OMP_CLAUSE_IF_EXPR (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_SIMD];
cclauses[C_OMP_CLAUSE_SPLIT_SIMD] = c;
}
else
{
s = C_OMP_CLAUSE_SPLIT_SIMD;
break;
}
}
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NOGROUP))
!= 0)
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
{
if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) != 0)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_IF);
OMP_CLAUSE_IF_MODIFIER (c)
= OMP_CLAUSE_IF_MODIFIER (clauses);
OMP_CLAUSE_IF_EXPR (c) = OMP_CLAUSE_IF_EXPR (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP];
cclauses[C_OMP_CLAUSE_SPLIT_TASKLOOP] = c;
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
}
else
s = C_OMP_CLAUSE_SPLIT_TASKLOOP;
}
else if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_NUM_THREADS)) != 0)
{
if ((mask & (OMP_CLAUSE_MASK_1
<< PRAGMA_OMP_CLAUSE_MAP)) != 0)
{
if (OMP_CLAUSE_IF_MODIFIER (clauses) == OMP_PARALLEL)
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
else if (OMP_CLAUSE_IF_MODIFIER (clauses) == OMP_TARGET)
s = C_OMP_CLAUSE_SPLIT_TARGET;
else if (OMP_CLAUSE_IF_MODIFIER (clauses) == ERROR_MARK)
{
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_IF);
OMP_CLAUSE_IF_MODIFIER (c)
= OMP_CLAUSE_IF_MODIFIER (clauses);
OMP_CLAUSE_IF_EXPR (c) = OMP_CLAUSE_IF_EXPR (clauses);
OMP_CLAUSE_CHAIN (c)
= cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
cclauses[C_OMP_CLAUSE_SPLIT_TARGET] = c;
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
}
else
{
error_at (OMP_CLAUSE_LOCATION (clauses),
"expected %<parallel%> or %<target%> %<if%> "
"clause modifier");
continue;
}
c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
OMP_CLAUSE_IF);
OMP_CLAUSE_IF_MODIFIER (c)
= OMP_CLAUSE_IF_MODIFIER (clauses);
OMP_CLAUSE_IF_EXPR (c) = OMP_CLAUSE_IF_EXPR (clauses);
OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
cclauses[C_OMP_CLAUSE_SPLIT_TARGET] = c;
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
}
else
s = C_OMP_CLAUSE_SPLIT_PARALLEL;
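For context (illustrative): the code above splits OpenMP if clauses with directive-name modifiers across combined constructs, each modifier routing the clause to one leaf construct; the simd and cancel modifiers are new in 5.0:
  #pragma omp target parallel for simd \
          if (target: offload_ok) if (parallel: n > 1024) if (simd: n > 16)
  for (int i = 0; i < n; i++)
    a[i] *= 2;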
@ -1610,11 +2025,6 @@ c_omp_declare_simd_clauses_to_decls (tree fndecl, tree clauses)
enum omp_clause_default_kind
c_omp_predetermined_sharing (tree decl)
{
/* Variables with const-qualified type having no mutable member
are predetermined shared. */
if (TREE_READONLY (decl))
return OMP_CLAUSE_DEFAULT_SHARED;
/* Predetermine artificial variables holding integral values, those
are usually result of gimplify_one_sizepos or SAVE_EXPR
gimplification. */
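User-visible effect of the removal above (a hedged sketch; use () is hypothetical): const-qualified variables are no longer predetermined shared under OpenMP 5.0, so with default(none) they now have to be listed explicitly, typically as firstprivate or shared:
  const int c = 42;
  #pragma omp parallel default (none) firstprivate (c)
  use (c);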


@ -1286,9 +1286,11 @@ static const struct omp_pragma_def omp_pragmas[] = {
{ "cancel", PRAGMA_OMP_CANCEL },
{ "cancellation", PRAGMA_OMP_CANCELLATION_POINT },
{ "critical", PRAGMA_OMP_CRITICAL },
{ "depobj", PRAGMA_OMP_DEPOBJ },
{ "end", PRAGMA_OMP_END_DECLARE_TARGET },
{ "flush", PRAGMA_OMP_FLUSH },
{ "master", PRAGMA_OMP_MASTER },
{ "requires", PRAGMA_OMP_REQUIRES },
{ "section", PRAGMA_OMP_SECTION },
{ "sections", PRAGMA_OMP_SECTIONS },
{ "single", PRAGMA_OMP_SINGLE },


@ -47,6 +47,7 @@ enum pragma_kind {
PRAGMA_OMP_CANCELLATION_POINT,
PRAGMA_OMP_CRITICAL,
PRAGMA_OMP_DECLARE,
PRAGMA_OMP_DEPOBJ,
PRAGMA_OMP_DISTRIBUTE,
PRAGMA_OMP_END_DECLARE_TARGET,
PRAGMA_OMP_FLUSH,
@ -54,6 +55,7 @@ enum pragma_kind {
PRAGMA_OMP_MASTER,
PRAGMA_OMP_ORDERED,
PRAGMA_OMP_PARALLEL,
PRAGMA_OMP_REQUIRES,
PRAGMA_OMP_SECTION,
PRAGMA_OMP_SECTIONS,
PRAGMA_OMP_SIMD,
@ -75,8 +77,8 @@ enum pragma_kind {
};
/* All clauses defined by OpenACC 2.0, and OpenMP 2.5, 3.0, 3.1, 4.0 and 4.5.
Used internally by both C and C++ parsers. */
/* All clauses defined by OpenACC 2.0, and OpenMP 2.5, 3.0, 3.1, 4.0, 4.5
and 5.0. Used internally by both C and C++ parsers. */
enum pragma_omp_clause {
PRAGMA_OMP_CLAUSE_NONE = 0,
@ -96,6 +98,7 @@ enum pragma_omp_clause {
PRAGMA_OMP_CLAUSE_GRAINSIZE,
PRAGMA_OMP_CLAUSE_HINT,
PRAGMA_OMP_CLAUSE_IF,
PRAGMA_OMP_CLAUSE_IN_REDUCTION,
PRAGMA_OMP_CLAUSE_INBRANCH,
PRAGMA_OMP_CLAUSE_IS_DEVICE_PTR,
PRAGMA_OMP_CLAUSE_LASTPRIVATE,
@ -104,6 +107,7 @@ enum pragma_omp_clause {
PRAGMA_OMP_CLAUSE_MAP,
PRAGMA_OMP_CLAUSE_MERGEABLE,
PRAGMA_OMP_CLAUSE_NOGROUP,
PRAGMA_OMP_CLAUSE_NONTEMPORAL,
PRAGMA_OMP_CLAUSE_NOTINBRANCH,
PRAGMA_OMP_CLAUSE_NOWAIT,
PRAGMA_OMP_CLAUSE_NUM_TASKS,
@ -121,6 +125,7 @@ enum pragma_omp_clause {
PRAGMA_OMP_CLAUSE_SHARED,
PRAGMA_OMP_CLAUSE_SIMD,
PRAGMA_OMP_CLAUSE_SIMDLEN,
PRAGMA_OMP_CLAUSE_TASK_REDUCTION,
PRAGMA_OMP_CLAUSE_TASKGROUP,
PRAGMA_OMP_CLAUSE_THREAD_LIMIT,
PRAGMA_OMP_CLAUSE_THREADS,


@ -1,3 +1,90 @@
2018-11-08 Jakub Jelinek <jakub@redhat.com>
* c-parser.c: Include memmodel.h.
(c_parser_omp_depobj, c_parser_omp_requires): New functions.
(c_parser_pragma): Handle PRAGMA_OMP_DEPOBJ and PRAGMA_OMP_REQUIRES.
(c_parser_omp_clause_name): Handle nontemporal, in_reduction and
task_reduction clauses.
(c_parser_omp_variable_list): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION.
For OMP_CLAUSE_DEPEND, parse clause operands as either an array
section, or lvalue assignment expression.
(c_parser_omp_clause_if): Handle cancel and simd modifiers.
(c_parser_omp_clause_lastprivate): Parse optional
conditional: modifier.
(c_parser_omp_clause_hint): Require constant integer expression rather
than just integer expression.
(c_parser_omp_clause_defaultmap): Parse new kinds of defaultmap
clause.
(c_parser_omp_clause_reduction): Add IS_OMP and KIND arguments.
Parse reduction modifiers. Pass KIND to c_parser_omp_variable_list.
(c_parser_omp_clause_nontemporal, c_parser_omp_iterators): New
functions.
(c_parser_omp_clause_depend): Parse iterator modifier and handle
iterators. Parse mutexinoutset and depobj kinds.
(c_parser_oacc_all_clauses): Adjust c_parser_omp_clause_reduction
callers.
(c_parser_omp_all_clauses): Likewise. Handle
PRAGMA_OMP_CLAUSE_NONTEMPORAL and
PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION.
(c_parser_omp_atomic): Parse hint and memory order clauses. Handle
default memory order from requires directive if any. Adjust
c_finish_omp_atomic caller.
(c_parser_omp_critical): Allow comma in between (name) and hint clause.
(c_parser_omp_flush): Parse flush with memory-order-clause.
(c_parser_omp_for_loop): Allow NE_EXPR even in
OpenMP loops, adjust c_finish_omp_for caller.
(OMP_SIMD_CLAUSE_MASK): Add if and nontemporal clauses.
(c_parser_omp_master): Add p_name, mask and cclauses arguments.
Allow to be called while parsing combined parallel master.
Parse combined master taskloop{, simd}.
(c_parser_omp_parallel): Parse combined
parallel master{, taskloop{, simd}} constructs.
(OMP_TASK_CLAUSE_MASK): Add in_reduction clause.
(OMP_TASKGROUP_CLAUSE_MASK): Define.
(c_parser_omp_taskgroup): Add LOC argument. Parse taskgroup clauses.
(OMP_TASKWAIT_CLAUSE_MASK): Define.
(c_parser_omp_taskwait): Handle taskwait with depend clauses.
(c_parser_omp_teams): Force a BIND_EXPR with BLOCK
around teams body. Use SET_EXPR_LOCATION.
(c_parser_omp_target_data): Allow target data
with only use_device_ptr clauses.
(c_parser_omp_target): Use SET_EXPR_LOCATION. Set
OMP_REQUIRES_TARGET_USED bit in omp_requires_mask.
(c_parser_omp_requires): New function.
(c_finish_taskloop_clauses): New function.
(OMP_TASKLOOP_CLAUSE_MASK): Add reduction and in_reduction clauses.
(c_parser_omp_taskloop): Use c_finish_taskloop_clauses. Add forward
declaration. Disallow in_reduction clause when combined with parallel
master.
(c_parser_omp_construct): Adjust c_parser_omp_master and
c_parser_omp_taskgroup callers.
* c-typeck.c (c_finish_omp_cancel): Diagnose if clause with modifier
other than cancel.
(handle_omp_array_sections_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION
like OMP_CLAUSE_REDUCTION.
(handle_omp_array_sections): Likewise. Call save_expr on array
reductions before calling build_index_type. Handle depend clauses
with iterators.
(struct c_find_omp_var_s): New type.
(c_find_omp_var_r, c_omp_finish_iterators): New functions.
(c_finish_omp_clauses): Don't diagnose nonmonotonic clause
with static, runtime or auto schedule kinds. Call save_expr for whole
array reduction sizes. Diagnose reductions with zero sized elements
or variable length structures. Diagnose nogroup clause used with
reduction clause(s). Handle depend clause with
OMP_CLAUSE_DEPEND_DEPOBJ. Diagnose bit-fields. Require
omp_depend_t type for OMP_CLAUSE_DEPEND_DEPOBJ kinds and
some different type for other kinds. Use build_unary_op with
ADDR_EXPR and build_indirect_ref instead of c_mark_addressable.
Handle depend clauses with iterators. Remove no longer needed special
case that predetermined const qualified vars may be specified in
firstprivate clause. Complain if const qualified vars are mentioned
in data-sharing clauses other than firstprivate or shared. Use
error_at with OMP_CLAUSE_LOCATION (c) as first argument instead of
error. Formatting fix. Handle OMP_CLAUSE_NONTEMPORAL and
OMP_CLAUSE_{IN,TASK}_REDUCTION. Allow any lvalue as
OMP_CLAUSE_DEPEND operand (besides array section), adjust diagnostics.
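As a companion to the parser changes listed above (illustrative only), the new requires directive can set the default memory order that atomic constructs without an explicit clause pick up:
  #pragma omp requires atomic_default_mem_order (seq_cst)

  void inc (int *p)
  {
    #pragma omp atomic update    /* inherits seq_cst from the requires directive */
    *p += 1;
  }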
2018-10-29 David Malcolm <dmalcolm@redhat.com>
* c-decl.c (implicit_decl_warning): Update "is there a suggestion"

File diff suppressed because it is too large.


@ -12525,6 +12525,11 @@ c_finish_omp_cancel (location_t loc, tree clauses)
tree ifc = omp_find_clause (clauses, OMP_CLAUSE_IF);
if (ifc != NULL_TREE)
{
if (OMP_CLAUSE_IF_MODIFIER (ifc) != ERROR_MARK
&& OMP_CLAUSE_IF_MODIFIER (ifc) != VOID_CST)
error_at (OMP_CLAUSE_LOCATION (ifc),
"expected %<cancel%> %<if%> clause modifier");
tree type = TREE_TYPE (OMP_CLAUSE_IF_EXPR (ifc));
ifc = fold_build2_loc (OMP_CLAUSE_LOCATION (ifc), NE_EXPR,
boolean_type_node, OMP_CLAUSE_IF_EXPR (ifc),
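Illustrative source forms for the check above: the only directive-name modifier accepted on the if clause of cancel is cancel itself:
  #pragma omp cancel for if (cancel: err != 0)     /* OK */
  #pragma omp cancel for if (parallel: err != 0)   /* diagnosed by the new check */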
@ -12717,7 +12722,9 @@ handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
if (!integer_nonzerop (length))
{
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_DEPEND
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION)
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_IN_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TASK_REDUCTION)
{
if (integer_zerop (length))
{
@ -12783,7 +12790,9 @@ handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
if (tree_int_cst_equal (size, low_bound))
{
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_DEPEND
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION)
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_IN_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TASK_REDUCTION)
{
error_at (OMP_CLAUSE_LOCATION (c),
"zero length array section in %qs clause",
@ -12802,7 +12811,9 @@ handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
else if (length == NULL_TREE)
{
if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_DEPEND
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_REDUCTION)
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_REDUCTION
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_IN_REDUCTION
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_TASK_REDUCTION)
maybe_zero_len = true;
if (first_non_one == types.length ())
first_non_one++;
@ -12838,7 +12849,9 @@ handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
else if (length == NULL_TREE)
{
if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_DEPEND
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_REDUCTION)
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_REDUCTION
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_IN_REDUCTION
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_TASK_REDUCTION)
maybe_zero_len = true;
if (first_non_one == types.length ())
first_non_one++;
@ -12910,7 +12923,13 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
bool maybe_zero_len = false;
unsigned int first_non_one = 0;
auto_vec<tree, 10> types;
tree first = handle_omp_array_sections_1 (c, OMP_CLAUSE_DECL (c), types,
tree *tp = &OMP_CLAUSE_DECL (c);
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_DEPEND
&& TREE_CODE (*tp) == TREE_LIST
&& TREE_PURPOSE (*tp)
&& TREE_CODE (TREE_PURPOSE (*tp)) == TREE_VEC)
tp = &TREE_VALUE (*tp);
tree first = handle_omp_array_sections_1 (c, *tp, types,
maybe_zero_len, first_non_one,
ort);
if (first == error_mark_node)
@ -12919,7 +12938,7 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
return false;
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_DEPEND)
{
tree t = OMP_CLAUSE_DECL (c);
tree t = *tp;
tree tem = NULL_TREE;
/* Need to evaluate side effects in the length expressions
if any. */
@ -12938,7 +12957,7 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
if (tem)
first = build2 (COMPOUND_EXPR, TREE_TYPE (first), tem, first);
first = c_fully_fold (first, false, NULL, true);
OMP_CLAUSE_DECL (c) = first;
*tp = first;
}
else
{
@ -13010,7 +13029,9 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
if (i > first_non_one
&& ((length && integer_nonzerop (length))
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION))
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_IN_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TASK_REDUCTION))
continue;
if (length)
l = fold_convert (sizetype, length);
@ -13038,7 +13059,9 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
tree eltype = TREE_TYPE (types[num - 1]);
while (TREE_CODE (eltype) == ARRAY_TYPE)
eltype = TREE_TYPE (eltype);
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION)
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_IN_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TASK_REDUCTION)
{
if (integer_zerop (size)
|| integer_zerop (size_in_bytes (eltype)))
@ -13062,10 +13085,13 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort)
}
if (side_effects)
size = build2 (COMPOUND_EXPR, sizetype, side_effects, size);
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION)
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_IN_REDUCTION
|| OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TASK_REDUCTION)
{
size = size_binop (MINUS_EXPR, size, size_one_node);
size = c_fully_fold (size, false, NULL);
size = save_expr (size);
tree index_type = build_index_type (size);
tree eltype = TREE_TYPE (first);
while (TREE_CODE (eltype) == ARRAY_TYPE)
@ -13195,6 +13221,178 @@ c_find_omp_placeholder_r (tree *tp, int *, void *data)
return NULL_TREE;
}
/* Similarly, but also walk aggregate fields. */
struct c_find_omp_var_s { tree var; hash_set<tree> *pset; };
static tree
c_find_omp_var_r (tree *tp, int *, void *data)
{
if (*tp == ((struct c_find_omp_var_s *) data)->var)
return *tp;
if (RECORD_OR_UNION_TYPE_P (*tp))
{
tree field;
hash_set<tree> *pset = ((struct c_find_omp_var_s *) data)->pset;
for (field = TYPE_FIELDS (*tp); field;
field = DECL_CHAIN (field))
if (TREE_CODE (field) == FIELD_DECL)
{
tree ret = walk_tree (&DECL_FIELD_OFFSET (field),
c_find_omp_var_r, data, pset);
if (ret)
return ret;
ret = walk_tree (&DECL_SIZE (field), c_find_omp_var_r, data, pset);
if (ret)
return ret;
ret = walk_tree (&DECL_SIZE_UNIT (field), c_find_omp_var_r, data,
pset);
if (ret)
return ret;
ret = walk_tree (&TREE_TYPE (field), c_find_omp_var_r, data, pset);
if (ret)
return ret;
}
}
else if (INTEGRAL_TYPE_P (*tp))
return walk_tree (&TYPE_MAX_VALUE (*tp), c_find_omp_var_r, data,
((struct c_find_omp_var_s *) data)->pset);
return NULL_TREE;
}
/* Finish OpenMP iterators ITER. Return true if they are erroneous
and clauses containing them should be removed. */
static bool
c_omp_finish_iterators (tree iter)
{
bool ret = false;
for (tree it = iter; it; it = TREE_CHAIN (it))
{
tree var = TREE_VEC_ELT (it, 0);
tree begin = TREE_VEC_ELT (it, 1);
tree end = TREE_VEC_ELT (it, 2);
tree step = TREE_VEC_ELT (it, 3);
tree orig_step;
tree type = TREE_TYPE (var);
location_t loc = DECL_SOURCE_LOCATION (var);
if (type == error_mark_node)
{
ret = true;
continue;
}
if (!INTEGRAL_TYPE_P (type) && !POINTER_TYPE_P (type))
{
error_at (loc, "iterator %qD has neither integral nor pointer type",
var);
ret = true;
continue;
}
else if (TYPE_ATOMIC (type))
{
error_at (loc, "iterator %qD has %<_Atomic%> qualified type", var);
ret = true;
continue;
}
else if (TYPE_READONLY (type))
{
error_at (loc, "iterator %qD has const qualified type", var);
ret = true;
continue;
}
else if (step == error_mark_node
|| TREE_TYPE (step) == error_mark_node)
{
ret = true;
continue;
}
else if (!INTEGRAL_TYPE_P (TREE_TYPE (step)))
{
error_at (EXPR_LOC_OR_LOC (step, loc),
"iterator step with non-integral type");
ret = true;
continue;
}
begin = c_fully_fold (build_c_cast (loc, type, begin), false, NULL);
end = c_fully_fold (build_c_cast (loc, type, end), false, NULL);
orig_step = save_expr (c_fully_fold (step, false, NULL));
tree stype = POINTER_TYPE_P (type) ? sizetype : type;
step = c_fully_fold (build_c_cast (loc, stype, orig_step), false, NULL);
if (POINTER_TYPE_P (type))
{
begin = save_expr (begin);
step = pointer_int_sum (loc, PLUS_EXPR, begin, step);
step = fold_build2_loc (loc, MINUS_EXPR, sizetype,
fold_convert (sizetype, step),
fold_convert (sizetype, begin));
step = fold_convert (ssizetype, step);
}
if (integer_zerop (step))
{
error_at (loc, "iterator %qD has zero step", var);
ret = true;
continue;
}
if (begin == error_mark_node
|| end == error_mark_node
|| step == error_mark_node
|| orig_step == error_mark_node)
{
ret = true;
continue;
}
hash_set<tree> pset;
tree it2;
for (it2 = TREE_CHAIN (it); it2; it2 = TREE_CHAIN (it2))
{
tree var2 = TREE_VEC_ELT (it2, 0);
tree begin2 = TREE_VEC_ELT (it2, 1);
tree end2 = TREE_VEC_ELT (it2, 2);
tree step2 = TREE_VEC_ELT (it2, 3);
tree type2 = TREE_TYPE (var2);
location_t loc2 = DECL_SOURCE_LOCATION (var2);
struct c_find_omp_var_s data = { var, &pset };
if (walk_tree (&type2, c_find_omp_var_r, &data, &pset))
{
error_at (loc2,
"type of iterator %qD refers to outer iterator %qD",
var2, var);
break;
}
else if (walk_tree (&begin2, c_find_omp_var_r, &data, &pset))
{
error_at (EXPR_LOC_OR_LOC (begin2, loc2),
"begin expression refers to outer iterator %qD", var);
break;
}
else if (walk_tree (&end2, c_find_omp_var_r, &data, &pset))
{
error_at (EXPR_LOC_OR_LOC (end2, loc2),
"end expression refers to outer iterator %qD", var);
break;
}
else if (walk_tree (&step2, c_find_omp_var_r, &data, &pset))
{
error_at (EXPR_LOC_OR_LOC (step2, loc2),
"step expression refers to outer iterator %qD", var);
break;
}
}
if (it2)
{
ret = true;
continue;
}
TREE_VEC_ELT (it, 1) = begin;
TREE_VEC_ELT (it, 2) = end;
TREE_VEC_ELT (it, 3) = step;
TREE_VEC_ELT (it, 4) = orig_step;
}
return ret;
}
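
As a rough illustration only (not part of the patch; the function, array and bounds below are invented), user code that reaches c_omp_finish_iterators would look like this OpenMP 5.0 depend-iterator example, compiled with -fopenmp:

void prefix_sums (double *a, int n)
{
  #pragma omp parallel
  #pragma omp single
  for (int j = 1; j < n; j++)
    {
      /* The iterator modifier expands to one "in" dependence per value
	 of i; c_omp_finish_iterators folds and validates begin/end/step.  */
      #pragma omp task depend(iterator(int i = 0 : j), in : a[i]) \
		       depend(out : a[j])
      a[j] += a[j - 1];
    }
}
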
/* For all elements of CLAUSES, validate them against their constraints.
Remove any elements from the list that are invalid. */
@ -13212,14 +13410,20 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
bool ordered_seen = false;
tree schedule_clause = NULL_TREE;
bool oacc_async = false;
tree last_iterators = NULL_TREE;
bool last_iterators_remove = false;
tree *nogroup_seen = NULL;
bool reduction_seen = false;
bitmap_obstack_initialize (NULL);
bitmap_initialize (&generic_head, &bitmap_default_obstack);
bitmap_initialize (&firstprivate_head, &bitmap_default_obstack);
bitmap_initialize (&lastprivate_head, &bitmap_default_obstack);
bitmap_initialize (&aligned_head, &bitmap_default_obstack);
/* If ort == C_ORT_OMP_DECLARE_SIMD used as uniform_head instead. */
bitmap_initialize (&map_head, &bitmap_default_obstack);
bitmap_initialize (&map_field_head, &bitmap_default_obstack);
/* If ort == C_ORT_OMP used as nontemporal_head instead. */
bitmap_initialize (&oacc_reduction_head, &bitmap_default_obstack);
if (ort & C_ORT_ACC)
@ -13248,6 +13452,10 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
goto check_dup_generic;
case OMP_CLAUSE_REDUCTION:
reduction_seen = true;
/* FALLTHRU */
case OMP_CLAUSE_IN_REDUCTION:
case OMP_CLAUSE_TASK_REDUCTION:
need_implicitly_determined = true;
t = OMP_CLAUSE_DECL (c);
if (TREE_CODE (t) == TREE_LIST)
@ -13296,6 +13504,7 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
break;
}
size = size_binop (MINUS_EXPR, size, size_one_node);
size = save_expr (size);
tree index_type = build_index_type (size);
tree atype = build_array_type (type, index_type);
tree ptype = build_pointer_type (type);
@ -13311,6 +13520,28 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
remove = true;
break;
}
if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_REDUCTION
|| OMP_CLAUSE_REDUCTION_TASK (c))
{
/* Disallow zero sized or potentially zero sized task
reductions. */
if (integer_zerop (TYPE_SIZE_UNIT (type)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"zero sized type %qT in %qs clause", type,
omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
break;
}
else if (TREE_CODE (TYPE_SIZE_UNIT (type)) != INTEGER_CST)
{
error_at (OMP_CLAUSE_LOCATION (c),
"variable sized type %qT in %qs clause", type,
omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
break;
}
}
if (OMP_CLAUSE_REDUCTION_PLACEHOLDER (c) == NULL_TREE
&& (FLOAT_TYPE_P (type)
|| TREE_CODE (type) == COMPLEX_TYPE))
@ -13512,7 +13743,7 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
if (TYPE_ATOMIC (TREE_TYPE (t)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"%<_Atomic%> %qD in %<linear%> clause", t);
"%<_Atomic%> %qD in %<linear%> clause", t);
remove = true;
break;
}
@ -13570,7 +13801,9 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
{
if (bitmap_bit_p (&oacc_reduction_head, DECL_UID (t)))
{
error ("%qD appears more than once in reduction clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in reduction clauses",
t);
remove = true;
}
else
@ -13588,9 +13821,11 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
&& bitmap_bit_p (&map_head, DECL_UID (t)))
{
if (ort == C_ORT_ACC)
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
else
error ("%qD appears both in data and map clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears both in data and map clauses", t);
remove = true;
}
else
@ -13617,9 +13852,11 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
if (ort == C_ORT_ACC)
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
else
error ("%qD appears both in data and map clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears both in data and map clauses", t);
remove = true;
}
else
@ -13681,6 +13918,25 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
bitmap_set_bit (&aligned_head, DECL_UID (t));
break;
case OMP_CLAUSE_NONTEMPORAL:
t = OMP_CLAUSE_DECL (c);
if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
{
error_at (OMP_CLAUSE_LOCATION (c),
"%qE is not a variable in %<nontemporal%> clause", t);
remove = true;
}
else if (bitmap_bit_p (&oacc_reduction_head, DECL_UID (t)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"%qE appears more than once in %<nontemporal%> "
"clauses", t);
remove = true;
}
else
bitmap_set_bit (&oacc_reduction_head, DECL_UID (t));
break;
case OMP_CLAUSE_DEPEND:
t = OMP_CLAUSE_DECL (c);
if (t == NULL_TREE)
@ -13717,22 +13973,89 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
}
break;
}
if (TREE_CODE (t) == TREE_LIST
&& TREE_PURPOSE (t)
&& TREE_CODE (TREE_PURPOSE (t)) == TREE_VEC)
{
if (TREE_PURPOSE (t) != last_iterators)
last_iterators_remove
= c_omp_finish_iterators (TREE_PURPOSE (t));
last_iterators = TREE_PURPOSE (t);
t = TREE_VALUE (t);
if (last_iterators_remove)
t = error_mark_node;
}
else
last_iterators = NULL_TREE;
if (TREE_CODE (t) == TREE_LIST)
{
if (handle_omp_array_sections (c, ort))
remove = true;
else if (OMP_CLAUSE_DEPEND_KIND (c) == OMP_CLAUSE_DEPEND_DEPOBJ)
{
error_at (OMP_CLAUSE_LOCATION (c),
"%<depend%> clause with %<depobj%> dependence "
"type on array section");
remove = true;
}
break;
}
if (t == error_mark_node)
remove = true;
else if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
else if (!lvalue_p (t))
{
error_at (OMP_CLAUSE_LOCATION (c),
"%qE is not a variable in %<depend%> clause", t);
"%qE is not lvalue expression nor array section in "
"%<depend%> clause", t);
remove = true;
}
else if (!c_mark_addressable (t))
remove = true;
else if (TREE_CODE (t) == COMPONENT_REF
&& DECL_C_BIT_FIELD (TREE_OPERAND (t, 1)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"bit-field %qE in %qs clause", t, "depend");
remove = true;
}
else if (OMP_CLAUSE_DEPEND_KIND (c) == OMP_CLAUSE_DEPEND_DEPOBJ)
{
if (!c_omp_depend_t_p (TREE_TYPE (t)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"%qE does not have %<omp_depend_t%> type in "
"%<depend%> clause with %<depobj%> dependence "
"type", t);
remove = true;
}
}
else if (c_omp_depend_t_p (TREE_TYPE (t)))
{
error_at (OMP_CLAUSE_LOCATION (c),
"%qE should not have %<omp_depend_t%> type in "
"%<depend%> clause with dependence type other than "
"%<depobj%>", t);
remove = true;
}
if (!remove)
{
tree addr = build_unary_op (OMP_CLAUSE_LOCATION (c), ADDR_EXPR,
t, false);
if (addr == error_mark_node)
remove = true;
else
{
t = build_indirect_ref (OMP_CLAUSE_LOCATION (c), addr,
RO_UNARY_STAR);
if (t == error_mark_node)
remove = true;
else if (TREE_CODE (OMP_CLAUSE_DECL (c)) == TREE_LIST
&& TREE_PURPOSE (OMP_CLAUSE_DECL (c))
&& (TREE_CODE (TREE_PURPOSE (OMP_CLAUSE_DECL (c)))
== TREE_VEC))
TREE_VALUE (OMP_CLAUSE_DECL (c)) = t;
else
OMP_CLAUSE_DECL (c) = t;
}
}
break;
case OMP_CLAUSE_MAP:
@ -13774,14 +14097,17 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
error ("%qD appears more than once in motion"
" clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in motion "
"clauses", t);
else if (ort == C_ORT_ACC)
error ("%qD appears more than once in data"
" clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data "
"clauses", t);
else
error ("%qD appears more than once in map"
" clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in map "
"clauses", t);
remove = true;
}
else
@ -13891,15 +14217,18 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
if (bitmap_bit_p (&generic_head, DECL_UID (t))
|| bitmap_bit_p (&firstprivate_head, DECL_UID (t)))
{
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
remove = true;
}
else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
if (ort == C_ORT_ACC)
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
else
error ("%qD appears both in data and map clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears both in data and map clauses", t);
remove = true;
}
else
@ -13908,20 +14237,25 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
error ("%qD appears more than once in motion clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in motion clauses", t);
else if (ort == C_ORT_ACC)
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
else
error ("%qD appears more than once in map clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in map clauses", t);
remove = true;
}
else if (bitmap_bit_p (&generic_head, DECL_UID (t))
|| bitmap_bit_p (&firstprivate_head, DECL_UID (t)))
{
if (ort == C_ORT_ACC)
error ("%qD appears more than once in data clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears more than once in data clauses", t);
else
error ("%qD appears both in data and map clauses", t);
error_at (OMP_CLAUSE_LOCATION (c),
"%qD appears both in data and map clauses", t);
remove = true;
}
else
@ -14041,7 +14375,6 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
case OMP_CLAUSE_PRIORITY:
case OMP_CLAUSE_GRAINSIZE:
case OMP_CLAUSE_NUM_TASKS:
case OMP_CLAUSE_NOGROUP:
case OMP_CLAUSE_THREADS:
case OMP_CLAUSE_SIMD:
case OMP_CLAUSE_HINT:
@ -14063,30 +14396,12 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
pc = &OMP_CLAUSE_CHAIN (c);
continue;
case OMP_CLAUSE_NOGROUP:
nogroup_seen = pc;
pc = &OMP_CLAUSE_CHAIN (c);
continue;
case OMP_CLAUSE_SCHEDULE:
if (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
{
const char *p = NULL;
switch (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_MASK)
{
case OMP_CLAUSE_SCHEDULE_STATIC: p = "static"; break;
case OMP_CLAUSE_SCHEDULE_DYNAMIC: break;
case OMP_CLAUSE_SCHEDULE_GUIDED: break;
case OMP_CLAUSE_SCHEDULE_AUTO: p = "auto"; break;
case OMP_CLAUSE_SCHEDULE_RUNTIME: p = "runtime"; break;
default: gcc_unreachable ();
}
if (p)
{
error_at (OMP_CLAUSE_LOCATION (c),
"%<nonmonotonic%> modifier specified for %qs "
"schedule kind", p);
OMP_CLAUSE_SCHEDULE_KIND (c)
= (enum omp_clause_schedule_kind)
(OMP_CLAUSE_SCHEDULE_KIND (c)
& ~OMP_CLAUSE_SCHEDULE_NONMONOTONIC);
}
}
schedule_clause = c;
pc = &OMP_CLAUSE_CHAIN (c);
continue;
@ -14145,10 +14460,6 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
case OMP_CLAUSE_DEFAULT_UNSPECIFIED:
break;
case OMP_CLAUSE_DEFAULT_SHARED:
/* const vars may be specified in firstprivate clause. */
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FIRSTPRIVATE
&& TREE_READONLY (t))
break;
share_name = "shared";
break;
case OMP_CLAUSE_DEFAULT_PRIVATE:
@ -14165,6 +14476,15 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
}
else if (TREE_READONLY (t)
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_SHARED
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE_FIRSTPRIVATE)
{
error_at (OMP_CLAUSE_LOCATION (c),
"%<const%> qualified %qE may appear only in "
"%<shared%> or %<firstprivate%> clauses", t);
remove = true;
}
}
}
@ -14222,6 +14542,14 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
pc = &OMP_CLAUSE_CHAIN (c);
}
if (nogroup_seen && reduction_seen)
{
error_at (OMP_CLAUSE_LOCATION (*nogroup_seen),
"%<nogroup%> clause must not be used together with "
"%<reduction%> clause");
*nogroup_seen = OMP_CLAUSE_CHAIN (*nogroup_seen);
}
bitmap_obstack_release (NULL);
return clauses;
}
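
For orientation only, a hypothetical translation unit (f, a, x and n are invented names) exercising two of the checks added to c_finish_omp_clauses above: the new nontemporal clause and the nogroup/reduction conflict.

void f (float *a, int n)
{
  float x = 0.0f;

  /* OpenMP 5.0 nontemporal hint; each list item may appear only once.  */
  #pragma omp simd nontemporal(a)
  for (int i = 0; i < n; i++)
    a[i] *= 2.0f;

#if 0
  /* Now rejected: nogroup together with a reduction clause.  */
  #pragma omp taskloop nogroup reduction(+ : x)
  for (int i = 0; i < n; i++)
    x += a[i];
#endif
}
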


@ -1,3 +1,167 @@
2018-11-08 Jakub Jelinek <jakub@redhat.com>
* constexpr.c (potential_constant_expression_1): Handle OMP_DEPOBJ.
* cp-gimplify.c (cp_genericize_r): Handle
OMP_CLAUSE_{IN,TASK}_REDUCTION.
(cxx_omp_predetermined_sharing_1): Don't return
OMP_CLAUSE_DEFAULT_SHARED for const qualified decls with no mutable
member. Return OMP_CLAUSE_DEFAULT_FIRSTPRIVATE for this pointer.
* cp-objcp-common.c (cp_common_init_ts): Handle OMP_DEPOBJ.
* cp-tree.def (OMP_DEPOBJ): New tree code.
* cp-tree.h (OMP_ATOMIC_DEPENDENT_P): Return true also for first
argument being OMP_CLAUSE.
(OMP_DEPOBJ_DEPOBJ, OMP_DEPOBJ_CLAUSES): Define.
(cp_convert_omp_range_for, cp_finish_omp_range_for): Declare.
(finish_omp_atomic): Add LOC, CLAUSES and MO arguments. Remove
SEQ_CST argument.
(finish_omp_for_block): Declare.
(finish_omp_flush): Add MO argument.
(finish_omp_depobj): Declare.
* cxx-pretty-print.c (cxx_pretty_printer::statement): Handle
OMP_DEPOBJ.
* dump.c (cp_dump_tree): Likewise.
* lex.c (cxx_init): Likewise.
* parser.c: Include memmodel.h.
(cp_parser_for): Pass false as new is_omp argument to
cp_parser_range_for.
(cp_parser_range_for): Add IS_OMP argument, return before finalizing
if it is true.
(cp_parser_omp_clause_name): Handle nontemporal, in_reduction and
task_reduction clauses.
(cp_parser_omp_var_list_no_open): Handle
OMP_CLAUSE_{IN,TASK}_REDUCTION. For OMP_CLAUSE_DEPEND, parse clause
operands as either an array section, or lvalue assignment expression.
(cp_parser_omp_clause_if): Handle cancel and simd modifiers.
(cp_parser_omp_clause_defaultmap): Parse new kinds of defaultmap
clause.
(cp_parser_omp_clause_reduction): Add IS_OMP and KIND arguments.
Parse reduction modifiers. Pass KIND to c_parser_omp_variable_list.
(cp_parser_omp_clause_lastprivate, cp_parser_omp_iterators): New
functions.
(cp_parser_omp_clause_depend): Parse iterator modifier and handle
iterators. Parse mutexinoutset and depobj kinds.
(cp_parser_oacc_all_clauses): Adjust cp_parser_omp_clause_reduction
callers.
(cp_parser_omp_all_clauses): Likewise. Handle
PRAGMA_OMP_CLAUSE_NONTEMPORAL and
PRAGMA_OMP_CLAUSE_{IN,TASK}_REDUCTION. Call
cp_parser_omp_clause_lastprivate for OpenMP lastprivate clause.
(cp_parser_omp_atomic): Pass pragma_tok->location as
LOC to finish_omp_atomic. Parse hint and memory order clauses.
Handle default memory order from requires directive if any. Adjust
finish_omp_atomic caller.
(cp_parser_omp_critical): Allow comma in between (name) and hint
clause.
(cp_parser_omp_depobj): New function.
(cp_parser_omp_flush): Parse flush with memory-order-clause.
(cp_parser_omp_for_cond): Allow NE_EXPR even in OpenMP loops.
(cp_convert_omp_range_for, cp_finish_omp_range_for): New functions.
(cp_parser_omp_for_loop): Parse C++11 range for loops among omp
loops. Handle OMP_CLAUSE_IN_REDUCTION like OMP_CLAUSE_REDUCTION.
(OMP_SIMD_CLAUSE_MASK): Add if and nontemporal clauses.
(cp_parser_omp_simd, cp_parser_omp_for): Call keep_next_level before
begin_omp_structured_block and call finish_omp_for_block on
finish_omp_structured_block result.
(cp_parser_omp_master): Add p_name, mask and cclauses arguments.
Allow to be called while parsing combined parallel master.
Parse combined master taskloop{, simd}.
(cp_parser_omp_parallel): Parse combined
parallel master{, taskloop{, simd}} constructs.
(cp_parser_omp_single): Use SET_EXPR_LOCATION.
(OMP_TASK_CLAUSE_MASK): Add in_reduction clause.
(OMP_TASKWAIT_CLAUSE_MASK): Define.
(cp_parser_omp_taskwait): Handle taskwait with depend clauses.
(OMP_TASKGROUP_CLAUSE_MASK): Define.
(cp_parser_omp_taskgroup): Parse taskgroup clauses, adjust
c_finish_omp_taskgroup caller.
(cp_parser_omp_distribute): Call keep_next_level before
begin_omp_structured_block and call finish_omp_for_block on
finish_omp_structured_block result.
(cp_parser_omp_teams): Force a BIND_EXPR with BLOCK around teams
body.
(cp_parser_omp_target_data): Allow target data with only
use_device_ptr clauses.
(cp_parser_omp_target): Set OMP_REQUIRES_TARGET_USED bit in
omp_requires_mask.
(cp_parser_omp_requires): New function.
(OMP_TASKLOOP_CLAUSE_MASK): Add reduction and in_reduction clauses.
(cp_parser_omp_taskloop): Add forward declaration. Disallow
in_reduction clause when combined with parallel master. Call
keep_next_level before begin_omp_structured_block and call
finish_omp_for_block on finish_omp_structured_block result.
(cp_parser_omp_construct): Adjust cp_parser_omp_master caller.
(cp_parser_pragma): Handle PRAGMA_OMP_DEPOBJ and PRAGMA_OMP_REQUIRES.
* pt.c (tsubst_omp_clause_decl): Add iterators_cache argument.
Adjust recursive calls. Handle iterators.
(tsubst_omp_clauses): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION and
OMP_CLAUSE_NONTEMPORAL. Adjust tsubst_omp_clause_decl callers.
(tsubst_decomp_names): Add forward declaration.
(tsubst_omp_for_iterator): Change orig_declv into a reference.
Handle range for loops. Move orig_declv handling after declv/initv
handling.
(tsubst_expr): Force a BIND_EXPR with BLOCK around teams body.
Adjust finish_omp_atomic caller. Call keep_next_level before
begin_omp_structured_block. Call cp_finish_omp_range_for for range
for loops and use {begin,finish}_omp_structured_block instead of
{push,pop}_stmt_list if there are any range for loops. Call
finish_omp_for_block on finish_omp_structured_block result.
Handle OMP_DEPOBJ. Handle taskwait with depend clauses. For
OMP_ATOMIC call tsubst_omp_clauses on clauses if any, adjust
finish_omp_atomic caller. Use OMP_ATOMIC_MEMORY_ORDER rather
than OMP_ATOMIC_SEQ_CST. Handle clauses on OMP_TASKGROUP.
(dependent_omp_for_p): Always return true for range for loops if
processing_template_decl. Return true if class type iterator
does not have INTEGER_CST increment.
* semantics.c: Include memmodel.h.
(handle_omp_array_sections_1): Handle OMP_CLAUSE_{IN,TASK}_REDUCTION
like OMP_CLAUSE_REDUCTION.
(handle_omp_array_sections): Likewise. Call save_expr on array
reductions before calling build_index_type. Handle depend clauses
with iterators.
(finish_omp_reduction_clause): Call save_expr for whole array
reduction sizes. Don't mark OMP_CLAUSE_DECL addressable if it has
reference type. Do mark decl_placeholder addressable if needed.
Use error_at with OMP_CLAUSE_LOCATION (c) as first argument instead
of error.
(cp_omp_finish_iterators): New function.
(finish_omp_clauses): Don't diagnose nonmonotonic clause with static,
runtime or auto schedule kinds. Diagnose nogroup clause used with
reduction clause(s). Handle depend clause with
OMP_CLAUSE_DEPEND_DEPOBJ. Diagnose bit-fields. Require
omp_depend_t type for OMP_CLAUSE_DEPEND_DEPOBJ kinds and
some different type for other kinds. Use cp_build_addr_expr
and cp_build_indirect_ref instead of cxx_mark_addressable.
Handle depend clauses with iterators. Only handle static data members
in the special case that const qualified vars may be specified in
firstprivate clause. Complain if const qualified vars without mutable
members are mentioned in data-sharing clauses other than firstprivate
or shared. Use error_at with OMP_CLAUSE_LOCATION (c) as first
argument instead of error. Diagnose more than one nontemporal clause
referring to the same variable. Use error_at rather than error for
priority and hint clause diagnostics. Fix pasto for hint clause.
Diagnose hint expression that doesn't fold into INTEGER_CST.
Diagnose if clause with modifier other than cancel. Handle
OMP_CLAUSE_{IN,TASK}_REDUCTION like OMP_CLAUSE_REDUCTION. Allow any
lvalue as OMP_CLAUSE_DEPEND operand (besides array section), adjust
diagnostics.
(handle_omp_for_class_iterator): Don't create a new TREE_LIST if one
has been created already for range for, just fill TREE_PURPOSE and
TREE_VALUE. Call cp_fully_fold on incr.
(finish_omp_for): Don't check cond/incr if cond is global_namespace.
Pass to c_omp_check_loop_iv_exprs orig_declv if non-NULL. Don't
use IS_EMPTY_STMT on NULL pre_body. Adjust c_finish_omp_for caller.
(finish_omp_for_block): New function.
(finish_omp_atomic): Add LOC argument, pass it through
to c_finish_omp_atomic and set it as location of OMP_ATOMIC* trees.
Remove SEQ_CST argument. Add CLAUSES and MO arguments. Adjust
c_finish_omp_atomic caller. Stick clauses if any into first argument
of wrapping OMP_ATOMIC.
(finish_omp_depobj): New function.
(finish_omp_flush): Add MO argument, if not
MEMMODEL_LAST, emit __atomic_thread_fence call with the given value.
(finish_omp_cancel): Diagnose if clause with modifier other than
cancel.
2018-11-07 Nathan Sidwell <nathan@acm.org>
PR c++/87904


@ -5917,6 +5917,7 @@ potential_constant_expression_1 (tree t, bool want_rval, bool strict, bool now,
case OMP_ATOMIC_READ:
case OMP_ATOMIC_CAPTURE_OLD:
case OMP_ATOMIC_CAPTURE_NEW:
case OMP_DEPOBJ:
case OACC_PARALLEL:
case OACC_KERNELS:
case OACC_DATA:


@ -1178,6 +1178,8 @@ cp_genericize_r (tree *stmt_p, int *walk_subtrees, void *data)
*walk_subtrees = 0;
break;
case OMP_CLAUSE_REDUCTION:
case OMP_CLAUSE_IN_REDUCTION:
case OMP_CLAUSE_TASK_REDUCTION:
/* Don't dereference an invisiref in reduction clause's
OMP_CLAUSE_DECL either. OMP_CLAUSE_REDUCTION_{INIT,MERGE}
still needs to be genericized. */
@ -1986,10 +1988,10 @@ cxx_omp_predetermined_sharing_1 (tree decl)
return OMP_CLAUSE_DEFAULT_SHARED;
}
/* Const qualified vars having no mutable member are predetermined
shared. */
if (cxx_omp_const_qual_no_mutable (decl))
return OMP_CLAUSE_DEFAULT_SHARED;
/* this may not be specified in data-sharing clauses, still we need
to predetermine it firstprivate. */
if (decl == current_class_ptr)
return OMP_CLAUSE_DEFAULT_FIRSTPRIVATE;
return OMP_CLAUSE_DEFAULT_UNSPECIFIED;
}


@ -446,6 +446,7 @@ cp_common_init_ts (void)
MARK_TS_TYPED (UNARY_RIGHT_FOLD_EXPR);
MARK_TS_TYPED (BINARY_LEFT_FOLD_EXPR);
MARK_TS_TYPED (BINARY_RIGHT_FOLD_EXPR);
MARK_TS_TYPED (OMP_DEPOBJ);
}
#include "gt-cp-cp-objcp-common.h"


@ -499,6 +499,11 @@ DEFTREECODE (BASES, "bases", tcc_type, 0)
instantiation time. */
DEFTREECODE (TEMPLATE_INFO, "template_info", tcc_exceptional, 0)
/* OpenMP - #pragma omp depobj
Operand 0: OMP_DEPOBJ_DEPOBJ: Depobj expression
Operand 1: OMP_DEPOBJ_CLAUSES: List of clauses. */
DEFTREECODE (OMP_DEPOBJ, "omp_depobj", tcc_statement, 2)
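
A minimal, hypothetical use of the directive this tree code represents (assuming a GCC with this patch and -fopenmp; the variable names are arbitrary):

#include <omp.h>

void g (int *p)
{
  omp_depend_t d;
  #pragma omp depobj(d) depend(inout : p[0])  /* OMP_DEPOBJ_CLAUSES is an OMP_CLAUSE chain */
  #pragma omp task depend(depobj : d)	      /* task uses the stored dependence */
  p[0]++;
  #pragma omp taskwait
  #pragma omp depobj(d) update(in)	      /* OMP_DEPOBJ_CLAUSES as INTEGER_CST kind */
  #pragma omp depobj(d) destroy		      /* OMP_CLAUSE_DEPEND_LAST */
}
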
/* Extensions for Concepts. */
/* Used to represent information associated with constrained declarations. */


@ -4874,9 +4874,10 @@ more_aggr_init_expr_args_p (const aggr_init_expr_arg_iterator *iter)
(TREE_LANG_FLAG_1 (SCOPE_REF_CHECK (NODE)))
/* True for an OMP_ATOMIC that has dependent parameters. These are stored
as an expr in operand 1, and integer_zero_node in operand 0. */
as an expr in operand 1, and integer_zero_node or clauses in operand 0. */
#define OMP_ATOMIC_DEPENDENT_P(NODE) \
(TREE_CODE (TREE_OPERAND (OMP_ATOMIC_CHECK (NODE), 0)) == INTEGER_CST)
(TREE_CODE (TREE_OPERAND (OMP_ATOMIC_CHECK (NODE), 0)) == INTEGER_CST \
|| TREE_CODE (TREE_OPERAND (OMP_ATOMIC_CHECK (NODE), 0)) == OMP_CLAUSE)
/* Used while gimplifying continue statements bound to OMP_FOR nodes. */
#define OMP_FOR_GIMPLIFYING_P(NODE) \
@ -5016,6 +5017,13 @@ more_aggr_init_expr_args_p (const aggr_init_expr_arg_iterator *iter)
#define ALIGNOF_EXPR_STD_P(NODE) \
TREE_LANG_FLAG_0 (ALIGNOF_EXPR_CHECK (NODE))
/* OMP_DEPOBJ accessors. These give access to the depobj expression of the
#pragma omp depobj directive and the clauses, respectively. If
OMP_DEPOBJ_CLAUSES is INTEGER_CST, it is instead the update clause kind
or OMP_CLAUSE_DEPEND_LAST for destroy clause. */
#define OMP_DEPOBJ_DEPOBJ(NODE) TREE_OPERAND (OMP_DEPOBJ_CHECK (NODE), 0)
#define OMP_DEPOBJ_CLAUSES(NODE) TREE_OPERAND (OMP_DEPOBJ_CHECK (NODE), 1)
/* An enumeration of the kind of tags that C++ accepts. */
enum tag_types {
none_type = 0, /* Not a tag type. */
@ -6630,6 +6638,9 @@ extern bool maybe_clone_body (tree);
/* In parser.c */
extern tree cp_convert_range_for (tree, tree, tree, tree, unsigned int, bool,
unsigned short);
extern void cp_convert_omp_range_for (tree &, vec<tree, va_gc> *, tree &,
tree &, tree &, tree &, tree &, tree &);
extern void cp_finish_omp_range_for (tree, tree);
extern bool parsing_nsdmi (void);
extern bool parsing_default_capturing_generic_lambda_in_template (void);
extern void inject_this_parameter (tree, cp_cv_quals);
@ -7054,11 +7065,16 @@ extern tree finish_omp_task (tree, tree);
extern tree finish_omp_for (location_t, enum tree_code,
tree, tree, tree, tree, tree,
tree, tree, vec<tree> *, tree);
extern void finish_omp_atomic (enum tree_code, enum tree_code,
tree, tree, tree, tree, tree,
bool);
extern tree finish_omp_for_block (tree, tree);
extern void finish_omp_atomic (location_t, enum tree_code,
enum tree_code, tree, tree,
tree, tree, tree, tree,
enum omp_memory_order);
extern void finish_omp_barrier (void);
extern void finish_omp_flush (void);
extern void finish_omp_depobj (location_t, tree,
enum omp_clause_depend_kind,
tree);
extern void finish_omp_flush (int);
extern void finish_omp_taskwait (void);
extern void finish_omp_taskyield (void);
extern void finish_omp_cancel (tree);
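
To illustrate the new MO parameter of finish_omp_flush (a sketch, not from the patch; data and flag are invented globals), the OpenMP 5.0 release/acquire flush pattern it now accepts:

int data, flag;

void producer (void)
{
  data = 42;
  #pragma omp flush release	   /* lowered via an __atomic_thread_fence */
  #pragma omp atomic write relaxed
  flag = 1;
}

void consumer (void)
{
  int ready;
  #pragma omp atomic read relaxed
  ready = flag;
  if (ready)
    {
      #pragma omp flush acquire
      /* data is guaranteed to be visible here.  */
    }
}
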


@ -2112,6 +2112,42 @@ cxx_pretty_printer::statement (tree t)
declaration (t);
break;
case OMP_DEPOBJ:
pp_cxx_ws_string (this, "#pragma omp depobj");
pp_space (this);
pp_cxx_left_paren (this);
expression (OMP_DEPOBJ_DEPOBJ (t));
pp_cxx_right_paren (this);
if (OMP_DEPOBJ_CLAUSES (t) && OMP_DEPOBJ_CLAUSES (t) != error_mark_node)
{
if (TREE_CODE (OMP_DEPOBJ_CLAUSES (t)) == OMP_CLAUSE)
dump_omp_clauses (this, OMP_DEPOBJ_CLAUSES (t),
pp_indentation (this), TDF_NONE);
else
switch (tree_to_uhwi (OMP_DEPOBJ_CLAUSES (t)))
{
case OMP_CLAUSE_DEPEND_IN:
pp_cxx_ws_string (this, " update(in)");
break;
case OMP_CLAUSE_DEPEND_INOUT:
pp_cxx_ws_string (this, " update(inout)");
break;
case OMP_CLAUSE_DEPEND_OUT:
pp_cxx_ws_string (this, " update(out)");
break;
case OMP_CLAUSE_DEPEND_MUTEXINOUTSET:
pp_cxx_ws_string (this, " update(mutexinoutset)");
break;
case OMP_CLAUSE_DEPEND_LAST:
pp_cxx_ws_string (this, " destroy");
break;
default:
break;
}
}
pp_needs_newline (this) = true;
break;
default:
c_pretty_printer::statement (t);
break;


@ -328,6 +328,12 @@ cp_dump_tree (void* dump_info, tree t)
dump_child ("expr", EXPR_STMT_EXPR (t));
break;
case OMP_DEPOBJ:
dump_stmt (di, t);
dump_child ("depobj", OMP_DEPOBJ_DEPOBJ (t));
dump_child ("clauses", OMP_DEPOBJ_CLAUSES (t));
break;
default:
break;
}


@ -293,7 +293,7 @@ cxx_init (void)
IF_STMT, CLEANUP_STMT, FOR_STMT,
RANGE_FOR_STMT, WHILE_STMT, DO_STMT,
BREAK_STMT, CONTINUE_STMT, SWITCH_STMT,
EXPR_STMT
EXPR_STMT, OMP_DEPOBJ
};
memset (&statement_code_p, 0, sizeof (statement_code_p));

File diff suppressed because it is too large.


@ -16097,11 +16097,49 @@ tsubst_copy (tree t, tree args, tsubst_flags_t complain, tree in_decl)
static tree
tsubst_omp_clause_decl (tree decl, tree args, tsubst_flags_t complain,
tree in_decl)
tree in_decl, tree *iterator_cache)
{
if (decl == NULL_TREE)
return NULL_TREE;
/* Handle OpenMP iterators. */
if (TREE_CODE (decl) == TREE_LIST
&& TREE_PURPOSE (decl)
&& TREE_CODE (TREE_PURPOSE (decl)) == TREE_VEC)
{
tree ret;
if (iterator_cache[0] == TREE_PURPOSE (decl))
ret = iterator_cache[1];
else
{
tree *tp = &ret;
begin_scope (sk_omp, NULL);
for (tree it = TREE_PURPOSE (decl); it; it = TREE_CHAIN (it))
{
*tp = copy_node (it);
TREE_VEC_ELT (*tp, 0)
= tsubst_decl (TREE_VEC_ELT (it, 0), args, complain);
TREE_VEC_ELT (*tp, 1)
= tsubst_expr (TREE_VEC_ELT (it, 1), args, complain, in_decl,
/*integral_constant_expression_p=*/false);
TREE_VEC_ELT (*tp, 2)
= tsubst_expr (TREE_VEC_ELT (it, 2), args, complain, in_decl,
/*integral_constant_expression_p=*/false);
TREE_VEC_ELT (*tp, 3)
= tsubst_expr (TREE_VEC_ELT (it, 3), args, complain, in_decl,
/*integral_constant_expression_p=*/false);
TREE_CHAIN (*tp) = NULL_TREE;
tp = &TREE_CHAIN (*tp);
}
TREE_VEC_ELT (ret, 5) = poplevel (1, 1, 0);
iterator_cache[0] = TREE_PURPOSE (decl);
iterator_cache[1] = ret;
}
return build_tree_list (ret, tsubst_omp_clause_decl (TREE_VALUE (decl),
args, complain,
in_decl, NULL));
}
/* Handle an OpenMP array section represented as a TREE_LIST (or
OMP_CLAUSE_DEPEND_KIND). An OMP_CLAUSE_DEPEND (with a depend
kind of OMP_CLAUSE_DEPEND_SINK) can also be represented as a
@ -16116,7 +16154,7 @@ tsubst_omp_clause_decl (tree decl, tree args, tsubst_flags_t complain,
tree length = tsubst_expr (TREE_VALUE (decl), args, complain, in_decl,
/*integral_constant_expression_p=*/false);
tree chain = tsubst_omp_clause_decl (TREE_CHAIN (decl), args, complain,
in_decl);
in_decl, NULL);
if (TREE_PURPOSE (decl) == low_bound
&& TREE_VALUE (decl) == length
&& TREE_CHAIN (decl) == chain)
@ -16144,6 +16182,7 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
{
tree new_clauses = NULL_TREE, nc, oc;
tree linear_no_step = NULL_TREE;
tree iterator_cache[2] = { NULL_TREE, NULL_TREE };
for (oc = clauses; oc ; oc = OMP_CLAUSE_CHAIN (oc))
{
@ -16173,11 +16212,12 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
case OMP_CLAUSE_FROM:
case OMP_CLAUSE_TO:
case OMP_CLAUSE_MAP:
case OMP_CLAUSE_NONTEMPORAL:
case OMP_CLAUSE_USE_DEVICE_PTR:
case OMP_CLAUSE_IS_DEVICE_PTR:
OMP_CLAUSE_DECL (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
in_decl);
in_decl, iterator_cache);
break;
case OMP_CLAUSE_TILE:
case OMP_CLAUSE_IF:
@ -16208,6 +16248,8 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
in_decl, /*integral_constant_expression_p=*/false);
break;
case OMP_CLAUSE_REDUCTION:
case OMP_CLAUSE_IN_REDUCTION:
case OMP_CLAUSE_TASK_REDUCTION:
if (OMP_CLAUSE_REDUCTION_PLACEHOLDER (oc))
{
tree placeholder = OMP_CLAUSE_REDUCTION_PLACEHOLDER (oc);
@ -16225,13 +16267,13 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
}
OMP_CLAUSE_DECL (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
in_decl);
in_decl, NULL);
break;
case OMP_CLAUSE_GANG:
case OMP_CLAUSE_ALIGNED:
OMP_CLAUSE_DECL (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
in_decl);
in_decl, NULL);
OMP_CLAUSE_OPERAND (nc, 1)
= tsubst_expr (OMP_CLAUSE_OPERAND (oc, 1), args, complain,
in_decl, /*integral_constant_expression_p=*/false);
@ -16239,7 +16281,7 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
case OMP_CLAUSE_LINEAR:
OMP_CLAUSE_DECL (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
in_decl);
in_decl, NULL);
if (OMP_CLAUSE_LINEAR_STEP (oc) == NULL_TREE)
{
gcc_assert (!linear_no_step);
@ -16248,7 +16290,7 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
else if (OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (oc))
OMP_CLAUSE_LINEAR_STEP (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_LINEAR_STEP (oc), args,
complain, in_decl);
complain, in_decl, NULL);
else
OMP_CLAUSE_LINEAR_STEP (nc)
= tsubst_expr (OMP_CLAUSE_LINEAR_STEP (oc), args, complain,
@ -16289,6 +16331,8 @@ tsubst_omp_clauses (tree clauses, enum c_omp_region_type ort,
case OMP_CLAUSE_COPYPRIVATE:
case OMP_CLAUSE_LINEAR:
case OMP_CLAUSE_REDUCTION:
case OMP_CLAUSE_IN_REDUCTION:
case OMP_CLAUSE_TASK_REDUCTION:
case OMP_CLAUSE_USE_DEVICE_PTR:
case OMP_CLAUSE_IS_DEVICE_PTR:
/* tsubst_expr on SCOPE_REF results in returning
@ -16403,10 +16447,13 @@ tsubst_copy_asm_operands (tree t, tree args, tsubst_flags_t complain,
static tree *omp_parallel_combined_clauses;
static tree tsubst_decomp_names (tree, tree, tree, tsubst_flags_t, tree,
tree *, unsigned int *);
/* Substitute one OMP_FOR iterator. */
static void
tsubst_omp_for_iterator (tree t, int i, tree declv, tree orig_declv,
static bool
tsubst_omp_for_iterator (tree t, int i, tree declv, tree &orig_declv,
tree initv, tree condv, tree incrv, tree *clauses,
tree args, tsubst_flags_t complain, tree in_decl,
bool integral_constant_expression_p)
@ -16414,26 +16461,56 @@ tsubst_omp_for_iterator (tree t, int i, tree declv, tree orig_declv,
#define RECUR(NODE) \
tsubst_expr ((NODE), args, complain, in_decl, \
integral_constant_expression_p)
tree decl, init, cond, incr;
tree decl, init, cond = NULL_TREE, incr = NULL_TREE;
bool ret = false;
init = TREE_VEC_ELT (OMP_FOR_INIT (t), i);
gcc_assert (TREE_CODE (init) == MODIFY_EXPR);
if (orig_declv && OMP_FOR_ORIG_DECLS (t))
{
tree o = TREE_VEC_ELT (OMP_FOR_ORIG_DECLS (t), i);
if (TREE_CODE (o) == TREE_LIST)
TREE_VEC_ELT (orig_declv, i)
= tree_cons (RECUR (TREE_PURPOSE (o)),
RECUR (TREE_VALUE (o)), NULL_TREE);
else
TREE_VEC_ELT (orig_declv, i) = RECUR (o);
}
decl = TREE_OPERAND (init, 0);
init = TREE_OPERAND (init, 1);
tree decl_expr = NULL_TREE;
if (init && TREE_CODE (init) == DECL_EXPR)
bool range_for = TREE_VEC_ELT (OMP_FOR_COND (t), i) == global_namespace;
if (range_for)
{
bool decomp = false;
if (decl != error_mark_node && DECL_HAS_VALUE_EXPR_P (decl))
{
tree v = DECL_VALUE_EXPR (decl);
if (TREE_CODE (v) == ARRAY_REF
&& VAR_P (TREE_OPERAND (v, 0))
&& DECL_DECOMPOSITION_P (TREE_OPERAND (v, 0)))
{
tree decomp_first = NULL_TREE;
unsigned decomp_cnt = 0;
tree d = tsubst_decl (TREE_OPERAND (v, 0), args, complain);
maybe_push_decl (d);
d = tsubst_decomp_names (d, TREE_OPERAND (v, 0), args, complain,
in_decl, &decomp_first, &decomp_cnt);
decomp = true;
if (d == error_mark_node)
decl = error_mark_node;
else
for (unsigned int i = 0; i < decomp_cnt; i++)
{
if (!DECL_HAS_VALUE_EXPR_P (decomp_first))
{
tree v = build_nt (ARRAY_REF, d,
size_int (decomp_cnt - i - 1),
NULL_TREE, NULL_TREE);
SET_DECL_VALUE_EXPR (decomp_first, v);
DECL_HAS_VALUE_EXPR_P (decomp_first) = 1;
}
fit_decomposition_lang_decl (decomp_first, d);
decomp_first = DECL_CHAIN (decomp_first);
}
}
}
decl = tsubst_decl (decl, args, complain);
if (!decomp)
maybe_push_decl (decl);
}
else if (init && TREE_CODE (init) == DECL_EXPR)
{
/* We need to jump through some hoops to handle declarations in the
init-statement, since we might need to handle auto deduction,
@ -16480,14 +16557,44 @@ tsubst_omp_for_iterator (tree t, int i, tree declv, tree orig_declv,
}
init = RECUR (init);
if (orig_declv && OMP_FOR_ORIG_DECLS (t))
{
tree o = TREE_VEC_ELT (OMP_FOR_ORIG_DECLS (t), i);
if (TREE_CODE (o) == TREE_LIST)
TREE_VEC_ELT (orig_declv, i)
= tree_cons (RECUR (TREE_PURPOSE (o)),
RECUR (TREE_VALUE (o)),
NULL_TREE);
else
TREE_VEC_ELT (orig_declv, i) = RECUR (o);
}
if (range_for)
{
tree this_pre_body = NULL_TREE;
tree orig_init = NULL_TREE;
tree orig_decl = NULL_TREE;
cp_convert_omp_range_for (this_pre_body, NULL, decl, orig_decl, init,
orig_init, cond, incr);
if (orig_decl)
{
if (orig_declv == NULL_TREE)
orig_declv = copy_node (declv);
TREE_VEC_ELT (orig_declv, i) = orig_decl;
ret = true;
}
else if (orig_declv)
TREE_VEC_ELT (orig_declv, i) = decl;
}
tree auto_node = type_uses_auto (TREE_TYPE (decl));
if (auto_node && init)
if (!range_for && auto_node && init)
TREE_TYPE (decl)
= do_auto_deduction (TREE_TYPE (decl), init, auto_node, complain);
gcc_assert (!type_dependent_expression_p (decl));
if (!CLASS_TYPE_P (TREE_TYPE (decl)))
if (!CLASS_TYPE_P (TREE_TYPE (decl)) || range_for)
{
if (decl_expr)
{
@ -16498,22 +16605,27 @@ tsubst_omp_for_iterator (tree t, int i, tree declv, tree orig_declv,
DECL_INITIAL (DECL_EXPR_DECL (decl_expr)) = init_sav;
}
cond = RECUR (TREE_VEC_ELT (OMP_FOR_COND (t), i));
incr = TREE_VEC_ELT (OMP_FOR_INCR (t), i);
if (TREE_CODE (incr) == MODIFY_EXPR)
if (!range_for)
{
tree lhs = RECUR (TREE_OPERAND (incr, 0));
tree rhs = RECUR (TREE_OPERAND (incr, 1));
incr = build_x_modify_expr (EXPR_LOCATION (incr), lhs,
NOP_EXPR, rhs, complain);
cond = RECUR (TREE_VEC_ELT (OMP_FOR_COND (t), i));
incr = TREE_VEC_ELT (OMP_FOR_INCR (t), i);
if (TREE_CODE (incr) == MODIFY_EXPR)
{
tree lhs = RECUR (TREE_OPERAND (incr, 0));
tree rhs = RECUR (TREE_OPERAND (incr, 1));
incr = build_x_modify_expr (EXPR_LOCATION (incr), lhs,
NOP_EXPR, rhs, complain);
}
else
incr = RECUR (incr);
if (orig_declv && !OMP_FOR_ORIG_DECLS (t))
TREE_VEC_ELT (orig_declv, i) = decl;
}
else
incr = RECUR (incr);
TREE_VEC_ELT (declv, i) = decl;
TREE_VEC_ELT (initv, i) = init;
TREE_VEC_ELT (condv, i) = cond;
TREE_VEC_ELT (incrv, i) = incr;
return;
return ret;
}
if (decl_expr)
@ -16640,10 +16752,13 @@ tsubst_omp_for_iterator (tree t, int i, tree declv, tree orig_declv,
break;
}
if (orig_declv && !OMP_FOR_ORIG_DECLS (t))
TREE_VEC_ELT (orig_declv, i) = decl;
TREE_VEC_ELT (declv, i) = decl;
TREE_VEC_ELT (initv, i) = init;
TREE_VEC_ELT (condv, i) = cond;
TREE_VEC_ELT (incrv, i) = incr;
return false;
#undef RECUR
}
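
As a hedged sketch of the source-level feature handled above (the template, container and scale factor are invented), an OpenMP loop over a C++ range-based for, which is only finalized at instantiation time:

#include <vector>

template <typename C>
void scale (C &c, typename C::value_type f)
{
  /* OpenMP 5.0 range-based for as the canonical loop; inside a template
     the parser stores global_namespace in the cond slot, and
     tsubst_omp_for_iterator rebuilds the loop via cp_convert_omp_range_for
     during substitution.  */
  #pragma omp parallel for
  for (auto &x : c)
    x *= f;
}

void use (std::vector<double> &v) { scale (v, 2.0); }
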
@ -17254,6 +17369,15 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
break;
case OMP_TASK:
if (OMP_TASK_BODY (t) == NULL_TREE)
{
tmp = tsubst_omp_clauses (OMP_TASK_CLAUSES (t), C_ORT_OMP, args,
complain, in_decl);
t = copy_node (t);
OMP_TASK_CLAUSES (t) = tmp;
add_stmt (t);
break;
}
r = push_omp_privatization_clauses (false);
tmp = tsubst_omp_clauses (OMP_TASK_CLAUSES (t), C_ORT_OMP, args,
complain, in_decl);
@ -17274,6 +17398,7 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
tree orig_declv = NULL_TREE;
tree incrv = NULL_TREE;
enum c_omp_region_type ort = C_ORT_OMP;
bool any_range_for = false;
int i;
if (TREE_CODE (t) == OACC_LOOP)
@ -17292,6 +17417,7 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
incrv = make_tree_vec (TREE_VEC_LENGTH (OMP_FOR_INIT (t)));
}
keep_next_level (true);
stmt = begin_omp_structured_block ();
pre_body = push_stmt_list ();
@ -17300,14 +17426,31 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
if (OMP_FOR_INIT (t) != NULL_TREE)
for (i = 0; i < TREE_VEC_LENGTH (OMP_FOR_INIT (t)); i++)
tsubst_omp_for_iterator (t, i, declv, orig_declv, initv, condv,
incrv, &clauses, args, complain, in_decl,
integral_constant_expression_p);
any_range_for
|= tsubst_omp_for_iterator (t, i, declv, orig_declv, initv,
condv, incrv, &clauses, args,
complain, in_decl,
integral_constant_expression_p);
omp_parallel_combined_clauses = NULL;
body = push_stmt_list ();
if (any_range_for)
{
gcc_assert (orig_declv);
body = begin_omp_structured_block ();
for (i = 0; i < TREE_VEC_LENGTH (OMP_FOR_INIT (t)); i++)
if (TREE_VEC_ELT (orig_declv, i) != TREE_VEC_ELT (declv, i)
&& TREE_CODE (TREE_VEC_ELT (orig_declv, i)) == TREE_LIST
&& TREE_CHAIN (TREE_VEC_ELT (orig_declv, i)))
cp_finish_omp_range_for (TREE_VEC_ELT (orig_declv, i),
TREE_VEC_ELT (declv, i));
}
else
body = push_stmt_list ();
RECUR (OMP_FOR_BODY (t));
body = pop_stmt_list (body);
if (any_range_for)
body = finish_omp_structured_block (body);
else
body = pop_stmt_list (body);
if (OMP_FOR_INIT (t) != NULL_TREE)
t = finish_omp_for (EXPR_LOCATION (t), TREE_CODE (t), declv,
@ -17324,7 +17467,8 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
add_stmt (t);
}
add_stmt (finish_omp_structured_block (stmt));
add_stmt (finish_omp_for_block (finish_omp_structured_block (stmt),
t));
pop_omp_privatization_clauses (r);
}
break;
@ -17335,13 +17479,24 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
case OMP_SINGLE:
case OMP_TEAMS:
case OMP_CRITICAL:
case OMP_TASKGROUP:
r = push_omp_privatization_clauses (TREE_CODE (t) == OMP_TEAMS
&& OMP_TEAMS_COMBINED (t));
tmp = tsubst_omp_clauses (OMP_CLAUSES (t), C_ORT_OMP, args, complain,
in_decl);
stmt = push_stmt_list ();
RECUR (OMP_BODY (t));
stmt = pop_stmt_list (stmt);
if (TREE_CODE (t) == OMP_TEAMS)
{
keep_next_level (true);
stmt = begin_omp_structured_block ();
RECUR (OMP_BODY (t));
stmt = finish_omp_structured_block (stmt);
}
else
{
stmt = push_stmt_list ();
RECUR (OMP_BODY (t));
stmt = pop_stmt_list (stmt);
}
t = copy_node (t);
OMP_BODY (t) = stmt;
@ -17350,6 +17505,32 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
pop_omp_privatization_clauses (r);
break;
case OMP_DEPOBJ:
r = RECUR (OMP_DEPOBJ_DEPOBJ (t));
if (OMP_DEPOBJ_CLAUSES (t) && OMP_DEPOBJ_CLAUSES (t) != error_mark_node)
{
enum omp_clause_depend_kind kind = OMP_CLAUSE_DEPEND_SOURCE;
if (TREE_CODE (OMP_DEPOBJ_CLAUSES (t)) == OMP_CLAUSE)
{
tmp = tsubst_omp_clauses (OMP_DEPOBJ_CLAUSES (t), C_ORT_OMP,
args, complain, in_decl);
if (tmp == NULL_TREE)
tmp = error_mark_node;
}
else
{
kind = (enum omp_clause_depend_kind)
tree_to_uhwi (OMP_DEPOBJ_CLAUSES (t));
tmp = NULL_TREE;
}
finish_omp_depobj (EXPR_LOCATION (t), r, kind, tmp);
}
else
finish_omp_depobj (EXPR_LOCATION (t), r,
OMP_CLAUSE_DEPEND_SOURCE,
OMP_DEPOBJ_CLAUSES (t));
break;
case OACC_DATA:
case OMP_TARGET_DATA:
case OMP_TARGET:
@ -17441,7 +17622,6 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
case OMP_SECTION:
case OMP_MASTER:
case OMP_TASKGROUP:
stmt = push_stmt_list ();
RECUR (OMP_BODY (t));
stmt = pop_stmt_list (stmt);
@ -17453,6 +17633,10 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
case OMP_ATOMIC:
gcc_assert (OMP_ATOMIC_DEPENDENT_P (t));
tmp = NULL_TREE;
if (TREE_CODE (TREE_OPERAND (t, 0)) == OMP_CLAUSE)
tmp = tsubst_omp_clauses (TREE_OPERAND (t, 0), C_ORT_OMP, args,
complain, in_decl);
if (TREE_CODE (TREE_OPERAND (t, 1)) != MODIFY_EXPR)
{
tree op1 = TREE_OPERAND (t, 1);
@ -17465,9 +17649,9 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
}
lhs = RECUR (TREE_OPERAND (op1, 0));
rhs = RECUR (TREE_OPERAND (op1, 1));
finish_omp_atomic (OMP_ATOMIC, TREE_CODE (op1), lhs, rhs,
NULL_TREE, NULL_TREE, rhs1,
OMP_ATOMIC_SEQ_CST (t));
finish_omp_atomic (EXPR_LOCATION (t), OMP_ATOMIC, TREE_CODE (op1),
lhs, rhs, NULL_TREE, NULL_TREE, rhs1, tmp,
OMP_ATOMIC_MEMORY_ORDER (t));
}
else
{
@ -17504,8 +17688,8 @@ tsubst_expr (tree t, tree args, tsubst_flags_t complain, tree in_decl,
lhs = RECUR (TREE_OPERAND (op1, 0));
rhs = RECUR (TREE_OPERAND (op1, 1));
}
finish_omp_atomic (code, opcode, lhs, rhs, v, lhs1, rhs1,
OMP_ATOMIC_SEQ_CST (t));
finish_omp_atomic (EXPR_LOCATION (t), code, opcode, lhs, rhs, v,
lhs1, rhs1, tmp, OMP_ATOMIC_MEMORY_ORDER (t));
}
break;
@ -25865,6 +26049,9 @@ dependent_omp_for_p (tree declv, tree initv, tree condv, tree incrv)
if (init && type_dependent_expression_p (init))
return true;
if (cond == global_namespace)
return true;
if (type_dependent_expression_p (cond))
return true;
@ -25891,6 +26078,20 @@ dependent_omp_for_p (tree declv, tree initv, tree condv, tree incrv)
if (type_dependent_expression_p (TREE_OPERAND (t, 0))
|| type_dependent_expression_p (TREE_OPERAND (t, 1)))
return true;
/* If this loop has a class iterator with != comparison
with increment other than i++/++i/i--/--i, make sure the
increment is constant. */
if (CLASS_TYPE_P (TREE_TYPE (decl))
&& TREE_CODE (cond) == NE_EXPR)
{
if (TREE_OPERAND (t, 0) == decl)
t = TREE_OPERAND (t, 1);
else
t = TREE_OPERAND (t, 0);
if (TREE_CODE (t) != INTEGER_CST)
return true;
}
}
}
}

File diff suppressed because it is too large.

View File

@ -1,3 +1,19 @@
2018-11-08 Jakub Jelinek <jakub@redhat.com>
* trans-openmp.c (gfc_trans_omp_clauses): Use
OMP_CLAUSE_DEFAULTMAP_SET_KIND.
(gfc_trans_omp_atomic): Set OMP_ATOMIC_MEMORY_ORDER
rather than OMP_ATOMIC_SEQ_CST.
(gfc_trans_omp_taskgroup): Build OMP_TASKGROUP using
make_node instead of build1_loc.
* types.def (BT_FN_VOID_BOOL, BT_FN_VOID_SIZE_SIZE_PTR,
BT_FN_UINT_UINT_PTR_PTR, BT_FN_UINT_OMPFN_PTR_UINT_UINT,
BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR): New.
(BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_PTR): Formatting fix.
2018-11-02 Thomas Koenig <tkoenig@gcc.gnu.org>
PR fortran/46020


@ -2866,6 +2866,8 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
if (clauses->defaultmap)
{
c = build_omp_clause (where.lb->location, OMP_CLAUSE_DEFAULTMAP);
OMP_CLAUSE_DEFAULTMAP_SET_KIND (c, OMP_CLAUSE_DEFAULTMAP_TOFROM,
OMP_CLAUSE_DEFAULTMAP_CATEGORY_SCALAR);
omp_clauses = gfc_trans_add_clause (c, omp_clauses);
}
if (clauses->depend_source)
@ -3166,7 +3168,9 @@ gfc_trans_omp_atomic (gfc_code *code)
enum tree_code op = ERROR_MARK;
enum tree_code aop = OMP_ATOMIC;
bool var_on_left = false;
bool seq_cst = (atomic_code->ext.omp_atomic & GFC_OMP_ATOMIC_SEQ_CST) != 0;
enum omp_memory_order mo
= ((atomic_code->ext.omp_atomic & GFC_OMP_ATOMIC_SEQ_CST)
? OMP_MEMORY_ORDER_SEQ_CST : OMP_MEMORY_ORDER_RELAXED);
code = code->block->next;
gcc_assert (code->op == EXEC_ASSIGN);
@ -3198,7 +3202,7 @@ gfc_trans_omp_atomic (gfc_code *code)
lhsaddr = gfc_build_addr_expr (NULL, lse.expr);
x = build1 (OMP_ATOMIC_READ, type, lhsaddr);
OMP_ATOMIC_SEQ_CST (x) = seq_cst;
OMP_ATOMIC_MEMORY_ORDER (x) = mo;
x = convert (TREE_TYPE (vse.expr), x);
gfc_add_modify (&block, vse.expr, x);
@ -3398,7 +3402,7 @@ gfc_trans_omp_atomic (gfc_code *code)
if (aop == OMP_ATOMIC)
{
x = build2_v (OMP_ATOMIC, lhsaddr, convert (type, x));
OMP_ATOMIC_SEQ_CST (x) = seq_cst;
OMP_ATOMIC_MEMORY_ORDER (x) = mo;
gfc_add_expr_to_block (&block, x);
}
else
@ -3421,7 +3425,7 @@ gfc_trans_omp_atomic (gfc_code *code)
gfc_add_block_to_block (&block, &lse.pre);
}
x = build2 (aop, type, lhsaddr, convert (type, x));
OMP_ATOMIC_SEQ_CST (x) = seq_cst;
OMP_ATOMIC_MEMORY_ORDER (x) = mo;
x = convert (TREE_TYPE (vse.expr), x);
gfc_add_modify (&block, vse.expr, x);
}
@ -4586,8 +4590,12 @@ gfc_trans_omp_task (gfc_code *code)
static tree
gfc_trans_omp_taskgroup (gfc_code *code)
{
tree stmt = gfc_trans_code (code->block->next);
return build1_loc (input_location, OMP_TASKGROUP, void_type_node, stmt);
tree body = gfc_trans_code (code->block->next);
tree stmt = make_node (OMP_TASKGROUP);
TREE_TYPE (stmt) = void_type_node;
OMP_TASKGROUP_BODY (stmt) = body;
OMP_TASKGROUP_CLAUSES (stmt) = NULL_TREE;
return stmt;
}
static tree


@ -86,6 +86,7 @@ DEF_FUNCTION_TYPE_1 (BT_FN_INT_INT, BT_INT, BT_INT)
DEF_FUNCTION_TYPE_1 (BT_FN_UINT_UINT, BT_UINT, BT_UINT)
DEF_FUNCTION_TYPE_1 (BT_FN_PTR_PTR, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_1 (BT_FN_VOID_INT, BT_VOID, BT_INT)
DEF_FUNCTION_TYPE_1 (BT_FN_VOID_BOOL, BT_VOID, BT_BOOL)
DEF_FUNCTION_TYPE_1 (BT_FN_BOOL_INT, BT_BOOL, BT_INT)
DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR, BT_FN_VOID_PTR)
@ -145,9 +146,14 @@ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I2_INT, BT_VOID, BT_VOLATILE_PTR, BT_I2, BT
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I4_INT, BT_VOID, BT_VOLATILE_PTR, BT_I4, BT_INT)
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I8_INT, BT_VOID, BT_VOLATILE_PTR, BT_I8, BT_INT)
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I16_INT, BT_VOID, BT_VOLATILE_PTR, BT_I16, BT_INT)
DEF_FUNCTION_TYPE_3 (BT_FN_VOID_SIZE_SIZE_PTR, BT_VOID, BT_SIZE, BT_SIZE,
BT_PTR)
DEF_FUNCTION_TYPE_3 (BT_FN_UINT_UINT_PTR_PTR, BT_UINT, BT_UINT, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_OMPFN_PTR_UINT_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
DEF_FUNCTION_TYPE_4 (BT_FN_UINT_OMPFN_PTR_UINT_UINT,
BT_UINT, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_VOID, BT_SIZE,
@ -217,14 +223,28 @@ DEF_FUNCTION_TYPE_7 (BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR,
DEF_FUNCTION_TYPE_8 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_UINT)
DEF_FUNCTION_TYPE_8 (BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_BOOL, BT_UINT, BT_PTR_LONG, BT_LONG, BT_LONG,
BT_PTR_LONG, BT_PTR_LONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_8 (BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
BT_BOOL, BT_UINT, BT_PTR_ULONGLONG, BT_LONG, BT_ULONGLONG,
BT_PTR_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_9 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,
BT_PTR_FN_VOID_PTR_PTR, BT_LONG, BT_LONG,
BT_BOOL, BT_UINT, BT_PTR, BT_INT)
DEF_FUNCTION_TYPE_9 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_PTR,
BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
BT_PTR, BT_PTR, BT_UINT, BT_PTR, BT_PTR)
BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
BT_PTR, BT_PTR, BT_UINT, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_9 (BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
BT_BOOL, BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_LONG,
BT_PTR_LONG, BT_PTR_LONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_10 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
BT_ULONGLONG, BT_LONG, BT_ULONGLONG, BT_PTR_ULONGLONG,
BT_PTR_ULONGLONG, BT_PTR, BT_PTR)
DEF_FUNCTION_TYPE_11 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,


@ -1724,7 +1724,8 @@ open_base_files (void)
"tree-dfa.h", "tree-ssa.h", "reload.h", "cpplib.h", "tree-chrec.h",
"except.h", "output.h", "cfgloop.h", "target.h", "lto-streamer.h",
"target-globals.h", "ipa-ref.h", "cgraph.h", "symbol-summary.h",
"ipa-prop.h", "ipa-fnsummary.h", "dwarf2out.h", "omp-offload.h", NULL
"ipa-prop.h", "ipa-fnsummary.h", "dwarf2out.h", "omp-general.h",
"omp-offload.h", NULL
};
const char *const *ifp;
outf_p gtype_desc_c;


@ -1554,6 +1554,35 @@ dump_gimple_omp_single (pretty_printer *buffer, gomp_single *gs,
}
}
/* Dump a GIMPLE_OMP_TASKGROUP tuple on the pretty_printer BUFFER. */
static void
dump_gimple_omp_taskgroup (pretty_printer *buffer, gimple *gs,
int spc, dump_flags_t flags)
{
if (flags & TDF_RAW)
{
dump_gimple_fmt (buffer, spc, flags, "%G <%+BODY <%S>%nCLAUSES <", gs,
gimple_omp_body (gs));
dump_omp_clauses (buffer, gimple_omp_taskgroup_clauses (gs), spc, flags);
dump_gimple_fmt (buffer, spc, flags, " >");
}
else
{
pp_string (buffer, "#pragma omp taskgroup");
dump_omp_clauses (buffer, gimple_omp_taskgroup_clauses (gs), spc, flags);
if (!gimple_seq_empty_p (gimple_omp_body (gs)))
{
newline_and_indent (buffer, spc + 2);
pp_left_brace (buffer);
pp_newline (buffer);
dump_gimple_seq (buffer, gimple_omp_body (gs), spc + 4, flags);
newline_and_indent (buffer, spc + 2);
pp_right_brace (buffer);
}
}
}
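
For context (a sketch; count_even and its parameters are invented), source whose GIMPLE_OMP_TASKGROUP carries clauses that this dumper now prints: an OpenMP 5.0 taskgroup-scoped task reduction.

long count_even (const int *a, int n)
{
  long res = 0;
  #pragma omp parallel
  #pragma omp single
  #pragma omp taskgroup task_reduction(+ : res)
  for (int i = 0; i < n; i += 1024)
    {
      /* Each task contributes through the enclosing taskgroup's
	 task_reduction; i is implicitly firstprivate in the task.  */
      #pragma omp task in_reduction(+ : res)
      for (int j = i; j < i + 1024 && j < n; j++)
	if ((a[j] & 1) == 0)
	  res++;
    }
  return res;
}
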
/* Dump a GIMPLE_OMP_TARGET tuple on the pretty_printer BUFFER. */
static void
@ -1712,7 +1741,7 @@ dump_gimple_omp_sections (pretty_printer *buffer, gomp_sections *gs,
}
}
/* Dump a GIMPLE_OMP_{MASTER,TASKGROUP,ORDERED,SECTION} tuple on the
/* Dump a GIMPLE_OMP_{MASTER,ORDERED,SECTION} tuple on the
pretty_printer BUFFER. */
static void
@ -2301,6 +2330,8 @@ dump_gimple_omp_task (pretty_printer *buffer, gomp_task *gs, int spc,
gimple_seq body;
if (gimple_omp_task_taskloop_p (gs))
pp_string (buffer, "#pragma omp taskloop");
else if (gimple_omp_task_taskwait_p (gs))
pp_string (buffer, "#pragma omp taskwait");
else
pp_string (buffer, "#pragma omp task");
dump_omp_clauses (buffer, gimple_omp_task_clauses (gs), spc, flags);
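
A hypothetical source-level counterpart (names invented): OpenMP 5.0 taskwait with a depend clause, which the middle end represents as a GIMPLE_OMP_TASK flagged GF_OMP_TASK_TASKWAIT and which this dumper prints as "#pragma omp taskwait".

void h (int *x, int *y)
{
  #pragma omp parallel
  #pragma omp single
  {
    #pragma omp task depend(out : x[0])
    x[0] = 1;
    #pragma omp task depend(out : y[0])
    y[0] = 2;
    /* Waits only for tasks with a dependence on x[0]; the second task
       may still be running afterwards.  */
    #pragma omp taskwait depend(in : x[0])
  }
}
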
@ -2353,8 +2384,8 @@ dump_gimple_omp_atomic_load (pretty_printer *buffer, gomp_atomic_load *gs,
else
{
pp_string (buffer, "#pragma omp atomic_load");
if (gimple_omp_atomic_seq_cst_p (gs))
pp_string (buffer, " seq_cst");
dump_omp_atomic_memory_order (buffer,
gimple_omp_atomic_memory_order (gs));
if (gimple_omp_atomic_need_value_p (gs))
pp_string (buffer, " [needed]");
newline_and_indent (buffer, spc + 2);
@ -2385,9 +2416,10 @@ dump_gimple_omp_atomic_store (pretty_printer *buffer,
}
else
{
pp_string (buffer, "#pragma omp atomic_store ");
if (gimple_omp_atomic_seq_cst_p (gs))
pp_string (buffer, "seq_cst ");
pp_string (buffer, "#pragma omp atomic_store");
dump_omp_atomic_memory_order (buffer,
gimple_omp_atomic_memory_order (gs));
pp_space (buffer);
if (gimple_omp_atomic_need_value_p (gs))
pp_string (buffer, "[needed] ");
pp_left_paren (buffer);
@ -2569,8 +2601,11 @@ pp_gimple_stmt_1 (pretty_printer *buffer, gimple *gs, int spc,
pp_string (buffer, "GIMPLE_SECTIONS_SWITCH");
break;
case GIMPLE_OMP_MASTER:
case GIMPLE_OMP_TASKGROUP:
dump_gimple_omp_taskgroup (buffer, gs, spc, flags);
break;
case GIMPLE_OMP_MASTER:
case GIMPLE_OMP_SECTION:
case GIMPLE_OMP_GRID_BODY:
dump_gimple_omp_block (buffer, gs, spc, flags);


@ -924,7 +924,7 @@ gimple_build_omp_critical (gimple_seq body, tree name, tree clauses)
BODY is sequence of statements inside the for loop.
KIND is the `for' variant.
CLAUSES, are any of the construct's clauses.
CLAUSES are any of the construct's clauses.
COLLAPSE is the collapse count.
PRE_BODY is the sequence of statements that are loop invariant. */
@ -950,7 +950,7 @@ gimple_build_omp_for (gimple_seq body, int kind, tree clauses, size_t collapse,
/* Build a GIMPLE_OMP_PARALLEL statement.
BODY is sequence of statements which are executed in parallel.
CLAUSES, are the OMP parallel construct's clauses.
CLAUSES are the OMP parallel construct's clauses.
CHILD_FN is the function created for the parallel threads to execute.
DATA_ARG are the shared data argument(s). */
@ -973,7 +973,7 @@ gimple_build_omp_parallel (gimple_seq body, tree clauses, tree child_fn,
/* Build a GIMPLE_OMP_TASK statement.
BODY is sequence of statements which are executed by the explicit task.
CLAUSES, are the OMP parallel construct's clauses.
CLAUSES are the OMP task construct's clauses.
CHILD_FN is the function created for the parallel threads to execute.
DATA_ARG are the shared data argument(s).
COPY_FN is the optional function for firstprivate initialization.
@ -1044,12 +1044,14 @@ gimple_build_omp_grid_body (gimple_seq body)
/* Build a GIMPLE_OMP_TASKGROUP statement.
BODY is the sequence of statements to be executed by the taskgroup
construct. */
construct.
CLAUSES are any of the construct's clauses. */
gimple *
gimple_build_omp_taskgroup (gimple_seq body)
gimple_build_omp_taskgroup (gimple_seq body, tree clauses)
{
gimple *p = gimple_alloc (GIMPLE_OMP_TASKGROUP, 0);
gimple_omp_taskgroup_set_clauses (p, clauses);
if (body)
gimple_omp_set_body (p, body);
@ -1192,12 +1194,13 @@ gimple_build_omp_teams (gimple_seq body, tree clauses)
/* Build a GIMPLE_OMP_ATOMIC_LOAD statement. */
gomp_atomic_load *
gimple_build_omp_atomic_load (tree lhs, tree rhs)
gimple_build_omp_atomic_load (tree lhs, tree rhs, enum omp_memory_order mo)
{
gomp_atomic_load *p
= as_a <gomp_atomic_load *> (gimple_alloc (GIMPLE_OMP_ATOMIC_LOAD, 0));
gimple_omp_atomic_load_set_lhs (p, lhs);
gimple_omp_atomic_load_set_rhs (p, rhs);
gimple_omp_atomic_set_memory_order (p, mo);
return p;
}
@ -1206,11 +1209,12 @@ gimple_build_omp_atomic_load (tree lhs, tree rhs)
VAL is the value we are storing. */
gomp_atomic_store *
gimple_build_omp_atomic_store (tree val)
gimple_build_omp_atomic_store (tree val, enum omp_memory_order mo)
{
gomp_atomic_store *p
= as_a <gomp_atomic_store *> (gimple_alloc (GIMPLE_OMP_ATOMIC_STORE, 0));
gimple_omp_atomic_store_set_val (p, val);
gimple_omp_atomic_set_memory_order (p, mo);
return p;
}
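
An illustration only, not part of the patch: the memory order passed to these
builders comes from the OpenMP 5.0 memory-order clauses on the atomic
construct.  A minimal sketch with hypothetical variable names, assuming
-fopenmp:

int x, v;

void
atomic_mo_example (void)
{
  #pragma omp atomic release
  x = x + 1;
  #pragma omp atomic read acquire
  v = x;
}
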
@ -1935,6 +1939,11 @@ gimple_copy (gimple *stmt)
gimple_omp_ordered_set_clauses (as_a <gomp_ordered *> (copy), t);
goto copy_omp_body;
case GIMPLE_OMP_TASKGROUP:
t = unshare_expr (gimple_omp_taskgroup_clauses (stmt));
gimple_omp_taskgroup_set_clauses (copy, t);
goto copy_omp_body;
case GIMPLE_OMP_SECTIONS:
t = unshare_expr (gimple_omp_sections_clauses (stmt));
gimple_omp_sections_set_clauses (copy, t);
@ -1971,7 +1980,6 @@ gimple_copy (gimple *stmt)
case GIMPLE_OMP_SECTION:
case GIMPLE_OMP_MASTER:
case GIMPLE_OMP_TASKGROUP:
case GIMPLE_OMP_GRID_BODY:
copy_omp_body:
new_seq = gimple_seq_copy (gimple_omp_body (stmt));


@ -279,9 +279,10 @@ DEFGSCODE(GIMPLE_OMP_FOR, "gimple_omp_for", GSS_OMP_FOR)
BODY is the sequence of statements to execute in the master section. */
DEFGSCODE(GIMPLE_OMP_MASTER, "gimple_omp_master", GSS_OMP)
/* GIMPLE_OMP_TASKGROUP <BODY> represents #pragma omp taskgroup.
BODY is the sequence of statements to execute in the taskgroup section. */
DEFGSCODE(GIMPLE_OMP_TASKGROUP, "gimple_omp_taskgroup", GSS_OMP)
/* GIMPLE_OMP_TASKGROUP <BODY, CLAUSES> represents #pragma omp taskgroup.
BODY is the sequence of statements inside the taskgroup section.
CLAUSES is an OMP_CLAUSE chain holding the associated clauses. */
DEFGSCODE(GIMPLE_OMP_TASKGROUP, "gimple_omp_taskgroup", GSS_OMP_SINGLE_LAYOUT)
/* GIMPLE_OMP_PARALLEL <BODY, CLAUSES, CHILD_FN, DATA_ARG> represents
@ -366,10 +367,12 @@ DEFGSCODE(GIMPLE_OMP_SINGLE, "gimple_omp_single", GSS_OMP_SINGLE_LAYOUT)
implement the MAP clauses. */
DEFGSCODE(GIMPLE_OMP_TARGET, "gimple_omp_target", GSS_OMP_PARALLEL_LAYOUT)
/* GIMPLE_OMP_TEAMS <BODY, CLAUSES> represents #pragma omp teams
/* GIMPLE_OMP_TEAMS <BODY, CLAUSES, CHILD_FN, DATA_ARG> represents
#pragma omp teams
BODY is the sequence of statements inside the single section.
CLAUSES is an OMP_CLAUSE chain holding the associated clauses. */
DEFGSCODE(GIMPLE_OMP_TEAMS, "gimple_omp_teams", GSS_OMP_SINGLE_LAYOUT)
CLAUSES is an OMP_CLAUSE chain holding the associated clauses.
CHILD_FN and DATA_ARG like for GIMPLE_OMP_PARALLEL. */
DEFGSCODE(GIMPLE_OMP_TEAMS, "gimple_omp_teams", GSS_OMP_PARALLEL_LAYOUT)
/* GIMPLE_OMP_ORDERED <BODY, CLAUSES> represents #pragma omp ordered.
BODY is the sequence of statements to execute in the ordered section.


@ -151,6 +151,7 @@ enum gf_mask {
GF_OMP_PARALLEL_COMBINED = 1 << 0,
GF_OMP_PARALLEL_GRID_PHONY = 1 << 1,
GF_OMP_TASK_TASKLOOP = 1 << 0,
GF_OMP_TASK_TASKWAIT = 1 << 1,
GF_OMP_FOR_KIND_MASK = (1 << 4) - 1,
GF_OMP_FOR_KIND_FOR = 0,
GF_OMP_FOR_KIND_DISTRIBUTE = 1,
@ -183,6 +184,7 @@ enum gf_mask {
GF_OMP_TARGET_KIND_OACC_DECLARE = 10,
GF_OMP_TARGET_KIND_OACC_HOST_DATA = 11,
GF_OMP_TEAMS_GRID_PHONY = 1 << 0,
GF_OMP_TEAMS_HOST = 1 << 1,
/* True on an GIMPLE_OMP_RETURN statement if the return does not require
a thread synchronization via some sort of barrier. The exact barrier
@ -191,8 +193,8 @@ enum gf_mask {
GF_OMP_RETURN_NOWAIT = 1 << 0,
GF_OMP_SECTION_LAST = 1 << 0,
GF_OMP_ATOMIC_NEED_VALUE = 1 << 0,
GF_OMP_ATOMIC_SEQ_CST = 1 << 1,
GF_OMP_ATOMIC_MEMORY_ORDER = (1 << 3) - 1,
GF_OMP_ATOMIC_NEED_VALUE = 1 << 3,
GF_PREDICT_TAKEN = 1 << 15
};
@ -637,7 +639,7 @@ struct GTY((tag("GSS_OMP_FOR")))
};
/* GIMPLE_OMP_PARALLEL, GIMPLE_OMP_TARGET, GIMPLE_OMP_TASK */
/* GIMPLE_OMP_PARALLEL, GIMPLE_OMP_TARGET, GIMPLE_OMP_TASK, GIMPLE_OMP_TEAMS */
struct GTY((tag("GSS_OMP_PARALLEL_LAYOUT")))
gimple_statement_omp_parallel_layout : public gimple_statement_omp
@ -663,7 +665,8 @@ struct GTY((tag("GSS_OMP_PARALLEL_LAYOUT")))
{
/* No extra fields; adds invariant:
stmt->code == GIMPLE_OMP_PARALLEL
|| stmt->code == GIMPLE_OMP_TASK. */
|| stmt->code == GIMPLE_OMP_TASK
|| stmt->code == GIMPLE_OMP_TEAMS. */
};
/* GIMPLE_OMP_PARALLEL */
@ -737,7 +740,7 @@ struct GTY((tag("GSS_OMP_CONTINUE")))
tree control_use;
};
/* GIMPLE_OMP_SINGLE, GIMPLE_OMP_TEAMS, GIMPLE_OMP_ORDERED */
/* GIMPLE_OMP_SINGLE, GIMPLE_OMP_ORDERED, GIMPLE_OMP_TASKGROUP. */
struct GTY((tag("GSS_OMP_SINGLE_LAYOUT")))
gimple_statement_omp_single_layout : public gimple_statement_omp
@ -755,8 +758,8 @@ struct GTY((tag("GSS_OMP_SINGLE_LAYOUT")))
stmt->code == GIMPLE_OMP_SINGLE. */
};
struct GTY((tag("GSS_OMP_SINGLE_LAYOUT")))
gomp_teams : public gimple_statement_omp_single_layout
struct GTY((tag("GSS_OMP_PARALLEL_LAYOUT")))
gomp_teams : public gimple_statement_omp_taskreg
{
/* No extra fields; adds invariant:
stmt->code == GIMPLE_OMP_TEAMS. */
@ -1121,7 +1124,9 @@ template <>
inline bool
is_a_helper <gimple_statement_omp_taskreg *>::test (gimple *gs)
{
return gs->code == GIMPLE_OMP_PARALLEL || gs->code == GIMPLE_OMP_TASK;
return (gs->code == GIMPLE_OMP_PARALLEL
|| gs->code == GIMPLE_OMP_TASK
|| gs->code == GIMPLE_OMP_TEAMS);
}
template <>
@ -1337,7 +1342,9 @@ template <>
inline bool
is_a_helper <const gimple_statement_omp_taskreg *>::test (const gimple *gs)
{
return gs->code == GIMPLE_OMP_PARALLEL || gs->code == GIMPLE_OMP_TASK;
return (gs->code == GIMPLE_OMP_PARALLEL
|| gs->code == GIMPLE_OMP_TASK
|| gs->code == GIMPLE_OMP_TEAMS);
}
template <>
@ -1463,7 +1470,7 @@ gomp_task *gimple_build_omp_task (gimple_seq, tree, tree, tree, tree,
gimple *gimple_build_omp_section (gimple_seq);
gimple *gimple_build_omp_master (gimple_seq);
gimple *gimple_build_omp_grid_body (gimple_seq);
gimple *gimple_build_omp_taskgroup (gimple_seq);
gimple *gimple_build_omp_taskgroup (gimple_seq, tree);
gomp_continue *gimple_build_omp_continue (tree, tree);
gomp_ordered *gimple_build_omp_ordered (gimple_seq, tree);
gimple *gimple_build_omp_return (bool);
@ -1472,8 +1479,9 @@ gimple *gimple_build_omp_sections_switch (void);
gomp_single *gimple_build_omp_single (gimple_seq, tree);
gomp_target *gimple_build_omp_target (gimple_seq, int, tree);
gomp_teams *gimple_build_omp_teams (gimple_seq, tree);
gomp_atomic_load *gimple_build_omp_atomic_load (tree, tree);
gomp_atomic_store *gimple_build_omp_atomic_store (tree);
gomp_atomic_load *gimple_build_omp_atomic_load (tree, tree,
enum omp_memory_order);
gomp_atomic_store *gimple_build_omp_atomic_store (tree, enum omp_memory_order);
gtransaction *gimple_build_transaction (gimple_seq);
extern void gimple_seq_add_stmt (gimple_seq *, gimple *);
extern void gimple_seq_add_stmt_without_update (gimple_seq *, gimple *);
@ -2193,7 +2201,7 @@ static inline unsigned
gimple_omp_subcode (const gimple *s)
{
gcc_gimple_checking_assert (gimple_code (s) >= GIMPLE_OMP_ATOMIC_LOAD
&& gimple_code (s) <= GIMPLE_OMP_TEAMS);
&& gimple_code (s) <= GIMPLE_OMP_TEAMS);
return s->subcode;
}
@ -2331,26 +2339,27 @@ gimple_omp_atomic_set_need_value (gimple *g)
}
/* Return true if OMP atomic load/store statement G has the
GF_OMP_ATOMIC_SEQ_CST flag set. */
/* Return the memory order of the OMP atomic load/store statement G. */
static inline bool
gimple_omp_atomic_seq_cst_p (const gimple *g)
static inline enum omp_memory_order
gimple_omp_atomic_memory_order (const gimple *g)
{
if (gimple_code (g) != GIMPLE_OMP_ATOMIC_LOAD)
GIMPLE_CHECK (g, GIMPLE_OMP_ATOMIC_STORE);
return (gimple_omp_subcode (g) & GF_OMP_ATOMIC_SEQ_CST) != 0;
return (enum omp_memory_order)
(gimple_omp_subcode (g) & GF_OMP_ATOMIC_MEMORY_ORDER);
}
/* Set the GF_OMP_ATOMIC_SEQ_CST flag on G. */
/* Set the memory order on G. */
static inline void
gimple_omp_atomic_set_seq_cst (gimple *g)
gimple_omp_atomic_set_memory_order (gimple *g, enum omp_memory_order mo)
{
if (gimple_code (g) != GIMPLE_OMP_ATOMIC_LOAD)
GIMPLE_CHECK (g, GIMPLE_OMP_ATOMIC_STORE);
g->subcode |= GF_OMP_ATOMIC_SEQ_CST;
g->subcode = ((g->subcode & ~GF_OMP_ATOMIC_MEMORY_ORDER)
| (mo & GF_OMP_ATOMIC_MEMORY_ORDER));
}
@ -4915,6 +4924,40 @@ gimple_omp_ordered_set_clauses (gomp_ordered *ord_stmt, tree clauses)
}
/* Return the clauses associated with OMP_TASKGROUP statement GS. */
static inline tree
gimple_omp_taskgroup_clauses (const gimple *gs)
{
GIMPLE_CHECK (gs, GIMPLE_OMP_TASKGROUP);
return
static_cast <const gimple_statement_omp_single_layout *> (gs)->clauses;
}
/* Return a pointer to the clauses associated with OMP taskgroup statement
GS. */
static inline tree *
gimple_omp_taskgroup_clauses_ptr (gimple *gs)
{
GIMPLE_CHECK (gs, GIMPLE_OMP_TASKGROUP);
return &static_cast <gimple_statement_omp_single_layout *> (gs)->clauses;
}
/* Set CLAUSES to be the clauses associated with OMP taskgroup statement
GS. */
static inline void
gimple_omp_taskgroup_set_clauses (gimple *gs, tree clauses)
{
GIMPLE_CHECK (gs, GIMPLE_OMP_TASKGROUP);
static_cast <gimple_statement_omp_single_layout *> (gs)->clauses
= clauses;
}
/* Return the kind of the OMP_FOR statement G. */
static inline int
@ -5441,6 +5484,31 @@ gimple_omp_task_set_taskloop_p (gimple *g, bool taskloop_p)
}
/* Return true if OMP task statement G has the
GF_OMP_TASK_TASKWAIT flag set. */
static inline bool
gimple_omp_task_taskwait_p (const gimple *g)
{
GIMPLE_CHECK (g, GIMPLE_OMP_TASK);
return (gimple_omp_subcode (g) & GF_OMP_TASK_TASKWAIT) != 0;
}
/* Set the GF_OMP_TASK_TASKWAIT field in G depending on the boolean
value of TASKWAIT_P. */
static inline void
gimple_omp_task_set_taskwait_p (gimple *g, bool taskwait_p)
{
GIMPLE_CHECK (g, GIMPLE_OMP_TASK);
if (taskwait_p)
g->subcode |= GF_OMP_TASK_TASKWAIT;
else
g->subcode &= ~GF_OMP_TASK_TASKWAIT;
}
/* Return the child function used to hold the body of OMP_TASK GS. */
static inline tree
@ -5857,6 +5925,60 @@ gimple_omp_teams_set_clauses (gomp_teams *omp_teams_stmt, tree clauses)
omp_teams_stmt->clauses = clauses;
}
/* Return the child function used to hold the body of OMP_TEAMS_STMT. */
static inline tree
gimple_omp_teams_child_fn (const gomp_teams *omp_teams_stmt)
{
return omp_teams_stmt->child_fn;
}
/* Return a pointer to the child function used to hold the body of
OMP_TEAMS_STMT. */
static inline tree *
gimple_omp_teams_child_fn_ptr (gomp_teams *omp_teams_stmt)
{
return &omp_teams_stmt->child_fn;
}
/* Set CHILD_FN to be the child function for OMP_TEAMS_STMT. */
static inline void
gimple_omp_teams_set_child_fn (gomp_teams *omp_teams_stmt, tree child_fn)
{
omp_teams_stmt->child_fn = child_fn;
}
/* Return the artificial argument used to send variables and values
from the parent to the children threads in OMP_TEAMS_STMT. */
static inline tree
gimple_omp_teams_data_arg (const gomp_teams *omp_teams_stmt)
{
return omp_teams_stmt->data_arg;
}
/* Return a pointer to the data argument for OMP_TEAMS_STMT. */
static inline tree *
gimple_omp_teams_data_arg_ptr (gomp_teams *omp_teams_stmt)
{
return &omp_teams_stmt->data_arg;
}
/* Set DATA_ARG to be the data argument for OMP_TEAMS_STMT. */
static inline void
gimple_omp_teams_set_data_arg (gomp_teams *omp_teams_stmt, tree data_arg)
{
omp_teams_stmt->data_arg = data_arg;
}
/* Return the kernel_phony flag of an OMP_TEAMS_STMT. */
static inline bool
@ -5876,6 +5998,25 @@ gimple_omp_teams_set_grid_phony (gomp_teams *omp_teams_stmt, bool value)
omp_teams_stmt->subcode &= ~GF_OMP_TEAMS_GRID_PHONY;
}
/* Return the host flag of an OMP_TEAMS_STMT. */
static inline bool
gimple_omp_teams_host (const gomp_teams *omp_teams_stmt)
{
return (gimple_omp_subcode (omp_teams_stmt) & GF_OMP_TEAMS_HOST) != 0;
}
/* Set host flag of an OMP_TEAMS_STMT to VALUE. */
static inline void
gimple_omp_teams_set_host (gomp_teams *omp_teams_stmt, bool value)
{
if (value)
omp_teams_stmt->subcode |= GF_OMP_TEAMS_HOST;
else
omp_teams_stmt->subcode &= ~GF_OMP_TEAMS_HOST;
}
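
Illustration only (not from the patch): the host flag and the parallel-style
CHILD_FN/DATA_ARG fields let a teams construct that is not nested inside a
target region be outlined and executed on the host.  A minimal sketch,
assuming -fopenmp, with a hypothetical function name:

void
host_teams_example (void)
{
  #pragma omp teams num_teams (2) thread_limit (4)
  {
    /* Runs as a league of teams on the host; expanded through the new
       GOMP_teams_reg entry point instead of the offloading path.  */
  }
}
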
/* Return the clauses associated with OMP_SECTIONS GS. */
static inline tree

File diff suppressed because it is too large.


@ -1356,6 +1356,8 @@ hash_tree (struct streamer_tree_cache_d *cache, hash_map<tree, hashval_t> *map,
val = OMP_CLAUSE_PROC_BIND_KIND (t);
break;
case OMP_CLAUSE_REDUCTION:
case OMP_CLAUSE_TASK_REDUCTION:
case OMP_CLAUSE_IN_REDUCTION:
val = OMP_CLAUSE_REDUCTION_CODE (t);
break;
default:


@ -75,6 +75,8 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER_CANCEL, "GOMP_barrier_cancel",
BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT, "GOMP_taskwait",
BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT_DEPEND, "GOMP_taskwait_depend",
BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKYIELD, "GOMP_taskyield",
BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKGROUP_START, "GOMP_taskgroup_start",
@ -122,6 +124,14 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_START,
"GOMP_loop_nonmonotonic_guided_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_START,
"GOMP_loop_nonmonotonic_runtime_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_START,
"GOMP_loop_maybe_nonmonotonic_runtime_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ORDERED_STATIC_START,
"GOMP_loop_ordered_static_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
@ -154,6 +164,18 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_DOACROSS_RUNTIME_START,
"GOMP_loop_doacross_runtime_start",
BT_FN_BOOL_UINT_LONGPTR_LONGPTR_LONGPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_START,
"GOMP_loop_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ORDERED_START,
"GOMP_loop_ordered_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_DOACROSS_START,
"GOMP_loop_doacross_start",
BT_FN_BOOL_UINT_LONGPTR_LONG_LONG_LONGPTR_LONGPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_STATIC_NEXT, "GOMP_loop_static_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_DYNAMIC_NEXT, "GOMP_loop_dynamic_next",
@ -168,6 +190,12 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_DYNAMIC_NEXT,
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_NEXT,
"GOMP_loop_nonmonotonic_guided_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_RUNTIME_NEXT,
"GOMP_loop_nonmonotonic_runtime_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
"GOMP_loop_maybe_nonmonotonic_runtime_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ORDERED_STATIC_NEXT,
"GOMP_loop_ordered_static_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
@ -204,6 +232,14 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_START,
"GOMP_loop_ull_nonmonotonic_guided_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_START,
"GOMP_loop_ull_nonmonotonic_runtime_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_START,
"GOMP_loop_ull_maybe_nonmonotonic_runtime_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_ORDERED_STATIC_START,
"GOMP_loop_ull_ordered_static_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
@ -236,6 +272,18 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_DOACROSS_RUNTIME_START,
"GOMP_loop_ull_doacross_runtime_start",
BT_FN_BOOL_UINT_ULLPTR_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_START,
"GOMP_loop_ull_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_ORDERED_START,
"GOMP_loop_ull_ordered_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_DOACROSS_START,
"GOMP_loop_ull_doacross_start",
BT_FN_BOOL_UINT_ULLPTR_LONG_ULL_ULLPTR_ULLPTR_PTR_PTR,
ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_STATIC_NEXT,
"GOMP_loop_ull_static_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
@ -254,6 +302,12 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_DYNAMIC_NEXT,
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_NEXT,
"GOMP_loop_ull_nonmonotonic_guided_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_RUNTIME_NEXT,
"GOMP_loop_ull_nonmonotonic_runtime_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_MAYBE_NONMONOTONIC_RUNTIME_NEXT,
"GOMP_loop_ull_maybe_nonmonotonic_runtime_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_ORDERED_STATIC_NEXT,
"GOMP_loop_ull_ordered_static_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
@ -293,6 +347,14 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_GUIDED,
"GOMP_parallel_loop_nonmonotonic_guided",
BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_RUNTIME,
"GOMP_parallel_loop_nonmonotonic_runtime",
BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_LOOP_MAYBE_NONMONOTONIC_RUNTIME,
"GOMP_parallel_loop_maybe_nonmonotonic_runtime",
BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END, "GOMP_loop_end",
BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END_CANCEL, "GOMP_loop_end_cancel",
@ -313,6 +375,9 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_DOACROSS_ULL_WAIT, "GOMP_doacross_ull_wait",
BT_FN_VOID_ULL_VAR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL, "GOMP_parallel",
BT_FN_VOID_OMPFN_PTR_UINT_UINT, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_REDUCTIONS,
"GOMP_parallel_reductions",
BT_FN_UINT_OMPFN_PTR_UINT_UINT, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASK, "GOMP_task",
BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT,
ATTR_NOTHROW_LIST)
@ -324,6 +389,8 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKLOOP_ULL, "GOMP_taskloop_ull",
ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS_START, "GOMP_sections_start",
BT_FN_UINT_UINT, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS2_START, "GOMP_sections2_start",
BT_FN_UINT_UINT_PTR_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS_NEXT, "GOMP_sections_next",
BT_FN_UINT, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_SECTIONS,
@ -363,5 +430,19 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_ENTER_EXIT_DATA,
BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TEAMS, "GOMP_teams",
BT_FN_VOID_UINT_UINT, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TEAMS_REG, "GOMP_teams_reg",
BT_FN_VOID_OMPFN_PTR_UINT_UINT_UINT, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKGROUP_REDUCTION_REGISTER,
"GOMP_taskgroup_reduction_register",
BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKGROUP_REDUCTION_UNREGISTER,
"GOMP_taskgroup_reduction_unregister",
BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASK_REDUCTION_REMAP,
"GOMP_task_reduction_remap",
BT_FN_VOID_SIZE_SIZE_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_WORKSHARE_TASK_REDUCTION_UNREGISTER,
"GOMP_workshare_task_reduction_unregister",
BT_FN_VOID_BOOL, ATTR_NOTHROW_LEAF_LIST)
DEF_GOACC_BUILTIN (BUILT_IN_GOACC_DECLARE, "GOACC_declare",
BT_FN_VOID_INT_SIZE_PTR_PTR_PTR, ATTR_NOTHROW_LIST)
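
A minimal source-level sketch (hypothetical names, assuming -fopenmp) of the
construct served by the new GOMP_taskwait_depend entry point:

int x;

void
taskwait_depend_example (void)
{
  #pragma omp task depend(out: x)
  x = 1;
  /* Stand-alone taskwait with a depend clause; lowered to
     GOMP_taskwait_depend rather than plain GOMP_taskwait.  */
  #pragma omp taskwait depend(in: x)
}
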


@ -174,6 +174,8 @@ workshare_safe_to_combine_p (basic_block ws_entry_bb)
return true;
gcc_assert (gimple_code (ws_stmt) == GIMPLE_OMP_FOR);
if (gimple_omp_for_kind (ws_stmt) != GF_OMP_FOR_KIND_FOR)
return false;
omp_extract_for_data (as_a <gomp_for *> (ws_stmt), &fd, NULL);
@ -202,7 +204,7 @@ workshare_safe_to_combine_p (basic_block ws_entry_bb)
static tree
omp_adjust_chunk_size (tree chunk_size, bool simd_schedule)
{
if (!simd_schedule)
if (!simd_schedule || integer_zerop (chunk_size))
return chunk_size;
poly_uint64 vf = omp_max_vf ();
@ -310,6 +312,13 @@ determine_parallel_type (struct omp_region *region)
ws_entry_bb = region->inner->entry;
ws_exit_bb = region->inner->exit;
/* Give up for task reductions on the parallel, while it is implementable,
adding another big set of APIs or slowing down the normal paths is
not acceptable. */
tree pclauses = gimple_omp_parallel_clauses (last_stmt (par_entry_bb));
if (omp_find_clause (pclauses, OMP_CLAUSE__REDUCTEMP_))
return;
if (single_succ (par_entry_bb) == ws_entry_bb
&& single_succ (ws_exit_bb) == par_exit_bb
&& workshare_safe_to_combine_p (ws_entry_bb)
@ -336,13 +345,14 @@ determine_parallel_type (struct omp_region *region)
if (c == NULL
|| ((OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_MASK)
== OMP_CLAUSE_SCHEDULE_STATIC)
|| omp_find_clause (clauses, OMP_CLAUSE_ORDERED))
{
region->is_combined_parallel = false;
region->inner->is_combined_parallel = false;
return;
}
|| omp_find_clause (clauses, OMP_CLAUSE_ORDERED)
|| omp_find_clause (clauses, OMP_CLAUSE__REDUCTEMP_))
return;
}
else if (region->inner->type == GIMPLE_OMP_SECTIONS
&& omp_find_clause (gimple_omp_sections_clauses (ws_stmt),
OMP_CLAUSE__REDUCTEMP_))
return;
region->is_combined_parallel = true;
region->inner->is_combined_parallel = true;
@ -534,7 +544,7 @@ adjust_context_and_scope (tree entry_block, tree child_fndecl)
}
}
/* Build the function calls to GOMP_parallel_start etc to actually
/* Build the function calls to GOMP_parallel etc to actually
generate the parallel operation. REGION is the parallel region
being expanded. BB is the block where to insert the code. WS_ARGS
will be set if this is a call to a combined parallel+workshare
@ -559,7 +569,10 @@ expand_parallel_call (struct omp_region *region, basic_block bb,
/* Determine what flavor of GOMP_parallel we will be
emitting. */
start_ix = BUILT_IN_GOMP_PARALLEL;
if (is_combined_parallel (region))
tree rtmp = omp_find_clause (clauses, OMP_CLAUSE__REDUCTEMP_);
if (rtmp)
start_ix = BUILT_IN_GOMP_PARALLEL_REDUCTIONS;
else if (is_combined_parallel (region))
{
switch (region->inner->type)
{
@ -568,12 +581,19 @@ expand_parallel_call (struct omp_region *region, basic_block bb,
switch (region->inner->sched_kind)
{
case OMP_CLAUSE_SCHEDULE_RUNTIME:
start_ix2 = 3;
if ((region->inner->sched_modifiers
& OMP_CLAUSE_SCHEDULE_NONMONOTONIC) != 0)
start_ix2 = 6;
else if ((region->inner->sched_modifiers
& OMP_CLAUSE_SCHEDULE_MONOTONIC) == 0)
start_ix2 = 7;
else
start_ix2 = 3;
break;
case OMP_CLAUSE_SCHEDULE_DYNAMIC:
case OMP_CLAUSE_SCHEDULE_GUIDED:
if (region->inner->sched_modifiers
& OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
if ((region->inner->sched_modifiers
& OMP_CLAUSE_SCHEDULE_MONOTONIC) == 0)
{
start_ix2 = 3 + region->inner->sched_kind;
break;
@ -716,6 +736,13 @@ expand_parallel_call (struct omp_region *region, basic_block bb,
t = build_call_expr_loc_vec (UNKNOWN_LOCATION,
builtin_decl_explicit (start_ix), args);
if (rtmp)
{
tree type = TREE_TYPE (OMP_CLAUSE_DECL (rtmp));
t = build2 (MODIFY_EXPR, type, OMP_CLAUSE_DECL (rtmp),
fold_convert (type,
fold_convert (pointer_sized_int_node, t)));
}
force_gimple_operand_gsi (&gsi, t, true, NULL_TREE,
false, GSI_CONTINUE_LINKING);
@ -792,6 +819,8 @@ expand_task_call (struct omp_region *region, basic_block bb,
if (omp_find_clause (tclauses, OMP_CLAUSE_NOGROUP))
iflags |= GOMP_TASK_FLAG_NOGROUP;
ull = fd.iter_type == long_long_unsigned_type_node;
if (omp_find_clause (clauses, OMP_CLAUSE_REDUCTION))
iflags |= GOMP_TASK_FLAG_REDUCTION;
}
else if (priority)
iflags |= GOMP_TASK_FLAG_PRIORITY;
@ -866,6 +895,82 @@ expand_task_call (struct omp_region *region, basic_block bb,
false, GSI_CONTINUE_LINKING);
}
/* Build the function call to GOMP_taskwait_depend to actually
generate the taskwait operation. BB is the block where to insert the
code. */
static void
expand_taskwait_call (basic_block bb, gomp_task *entry_stmt)
{
tree clauses = gimple_omp_task_clauses (entry_stmt);
tree depend = omp_find_clause (clauses, OMP_CLAUSE_DEPEND);
if (depend == NULL_TREE)
return;
depend = OMP_CLAUSE_DECL (depend);
gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb);
tree t
= build_call_expr (builtin_decl_explicit (BUILT_IN_GOMP_TASKWAIT_DEPEND),
1, depend);
force_gimple_operand_gsi (&gsi, t, true, NULL_TREE,
false, GSI_CONTINUE_LINKING);
}
/* Build the function call to GOMP_teams_reg to actually
generate the host teams operation. REGION is the teams region
being expanded. BB is the block where to insert the code. */
static void
expand_teams_call (basic_block bb, gomp_teams *entry_stmt)
{
tree clauses = gimple_omp_teams_clauses (entry_stmt);
tree num_teams = omp_find_clause (clauses, OMP_CLAUSE_NUM_TEAMS);
if (num_teams == NULL_TREE)
num_teams = build_int_cst (unsigned_type_node, 0);
else
{
num_teams = OMP_CLAUSE_NUM_TEAMS_EXPR (num_teams);
num_teams = fold_convert (unsigned_type_node, num_teams);
}
tree thread_limit = omp_find_clause (clauses, OMP_CLAUSE_THREAD_LIMIT);
if (thread_limit == NULL_TREE)
thread_limit = build_int_cst (unsigned_type_node, 0);
else
{
thread_limit = OMP_CLAUSE_THREAD_LIMIT_EXPR (thread_limit);
thread_limit = fold_convert (unsigned_type_node, thread_limit);
}
gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb);
tree t = gimple_omp_teams_data_arg (entry_stmt), t1;
if (t == NULL)
t1 = null_pointer_node;
else
t1 = build_fold_addr_expr (t);
tree child_fndecl = gimple_omp_teams_child_fn (entry_stmt);
tree t2 = build_fold_addr_expr (child_fndecl);
adjust_context_and_scope (gimple_block (entry_stmt), child_fndecl);
vec<tree, va_gc> *args;
vec_alloc (args, 5);
args->quick_push (t2);
args->quick_push (t1);
args->quick_push (num_teams);
args->quick_push (thread_limit);
/* For future extensibility. */
args->quick_push (build_zero_cst (unsigned_type_node));
t = build_call_expr_loc_vec (UNKNOWN_LOCATION,
builtin_decl_explicit (BUILT_IN_GOMP_TEAMS_REG),
args);
force_gimple_operand_gsi (&gsi, t, true, NULL_TREE,
false, GSI_CONTINUE_LINKING);
}
/* Chain all the DECLs in LIST by their TREE_CHAIN fields. */
static tree
@ -1112,6 +1217,17 @@ expand_omp_taskreg (struct omp_region *region)
vec<tree, va_gc> *ws_args;
entry_stmt = last_stmt (region->entry);
if (gimple_code (entry_stmt) == GIMPLE_OMP_TASK
&& gimple_omp_task_taskwait_p (entry_stmt))
{
new_bb = region->entry;
gsi = gsi_last_nondebug_bb (region->entry);
gcc_assert (gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_TASK);
gsi_remove (&gsi, true);
expand_taskwait_call (new_bb, as_a <gomp_task *> (entry_stmt));
return;
}
child_fn = gimple_omp_taskreg_child_fn (entry_stmt);
child_cfun = DECL_STRUCT_FUNCTION (child_fn);
@ -1137,7 +1253,8 @@ expand_omp_taskreg (struct omp_region *region)
gsi = gsi_last_nondebug_bb (entry_bb);
gcc_assert (gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_PARALLEL
|| gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_TASK);
|| gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_TASK
|| gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_TEAMS);
gsi_remove (&gsi, true);
new_bb = entry_bb;
@ -1190,8 +1307,8 @@ expand_omp_taskreg (struct omp_region *region)
effectively doing a STRIP_NOPS. */
if (TREE_CODE (arg) == ADDR_EXPR
&& TREE_OPERAND (arg, 0)
== gimple_omp_taskreg_data_arg (entry_stmt))
&& (TREE_OPERAND (arg, 0)
== gimple_omp_taskreg_data_arg (entry_stmt)))
{
parcopy_stmt = stmt;
break;
@ -1251,12 +1368,13 @@ expand_omp_taskreg (struct omp_region *region)
gsi = gsi_last_nondebug_bb (entry_bb);
stmt = gsi_stmt (gsi);
gcc_assert (stmt && (gimple_code (stmt) == GIMPLE_OMP_PARALLEL
|| gimple_code (stmt) == GIMPLE_OMP_TASK));
|| gimple_code (stmt) == GIMPLE_OMP_TASK
|| gimple_code (stmt) == GIMPLE_OMP_TEAMS));
e = split_block (entry_bb, stmt);
gsi_remove (&gsi, true);
entry_bb = e->dest;
edge e2 = NULL;
if (gimple_code (entry_stmt) == GIMPLE_OMP_PARALLEL)
if (gimple_code (entry_stmt) != GIMPLE_OMP_TASK)
single_succ_edge (entry_bb)->flags = EDGE_FALLTHRU;
else
{
@ -1382,6 +1500,8 @@ expand_omp_taskreg (struct omp_region *region)
if (gimple_code (entry_stmt) == GIMPLE_OMP_PARALLEL)
expand_parallel_call (region, new_bb,
as_a <gomp_parallel *> (entry_stmt), ws_args);
else if (gimple_code (entry_stmt) == GIMPLE_OMP_TEAMS)
expand_teams_call (new_bb, as_a <gomp_teams *> (entry_stmt));
else
expand_task_call (region, new_bb, as_a <gomp_task *> (entry_stmt));
if (gimple_in_ssa_p (cfun))
@ -2499,6 +2619,7 @@ expand_omp_for_generic (struct omp_region *region,
struct omp_for_data *fd,
enum built_in_function start_fn,
enum built_in_function next_fn,
tree sched_arg,
gimple *inner_stmt)
{
tree type, istart0, iend0, iend;
@ -2546,6 +2667,30 @@ expand_omp_for_generic (struct omp_region *region,
&& omp_find_clause (gimple_omp_for_clauses (gsi_stmt (gsi)),
OMP_CLAUSE_LASTPRIVATE))
ordered_lastprivate = false;
tree reductions = NULL_TREE;
tree mem = NULL_TREE;
if (sched_arg)
{
if (fd->have_reductemp)
{
tree c = omp_find_clause (gimple_omp_for_clauses (gsi_stmt (gsi)),
OMP_CLAUSE__REDUCTEMP_);
reductions = OMP_CLAUSE_DECL (c);
gcc_assert (TREE_CODE (reductions) == SSA_NAME);
gimple *g = SSA_NAME_DEF_STMT (reductions);
reductions = gimple_assign_rhs1 (g);
OMP_CLAUSE_DECL (c) = reductions;
entry_bb = gimple_bb (g);
edge e = split_block (entry_bb, g);
if (region->entry == entry_bb)
region->entry = e->dest;
gsi = gsi_last_bb (entry_bb);
}
else
reductions = null_pointer_node;
/* For now. */
mem = null_pointer_node;
}
if (fd->collapse > 1 || fd->ordered)
{
int first_zero_iter1 = -1, first_zero_iter2 = -1;
@ -2732,7 +2877,18 @@ expand_omp_for_generic (struct omp_region *region,
{
t = fold_convert (fd->iter_type, fd->chunk_size);
t = omp_adjust_chunk_size (t, fd->simd_schedule);
if (fd->ordered)
if (sched_arg)
{
if (fd->ordered)
t = build_call_expr (builtin_decl_explicit (start_fn),
8, t0, t1, sched_arg, t, t3, t4,
reductions, mem);
else
t = build_call_expr (builtin_decl_explicit (start_fn),
9, t0, t1, t2, sched_arg, t, t3, t4,
reductions, mem);
}
else if (fd->ordered)
t = build_call_expr (builtin_decl_explicit (start_fn),
5, t0, t1, t, t3, t4);
else
@ -2765,7 +2921,11 @@ expand_omp_for_generic (struct omp_region *region,
tree bfn_decl = builtin_decl_explicit (start_fn);
t = fold_convert (fd->iter_type, fd->chunk_size);
t = omp_adjust_chunk_size (t, fd->simd_schedule);
t = build_call_expr (bfn_decl, 7, t5, t0, t1, t2, t, t3, t4);
if (sched_arg)
t = build_call_expr (bfn_decl, 10, t5, t0, t1, t2, sched_arg,
t, t3, t4, reductions, mem);
else
t = build_call_expr (bfn_decl, 7, t5, t0, t1, t2, t, t3, t4);
}
else
t = build_call_expr (builtin_decl_explicit (start_fn),
@ -2784,6 +2944,17 @@ expand_omp_for_generic (struct omp_region *region,
gsi_insert_before (&gsi, gimple_build_assign (arr, clobber),
GSI_SAME_STMT);
}
if (fd->have_reductemp)
{
gimple *g = gsi_stmt (gsi);
gsi_remove (&gsi, true);
release_ssa_name (gimple_assign_lhs (g));
entry_bb = region->entry;
gsi = gsi_last_nondebug_bb (entry_bb);
gcc_assert (gimple_code (gsi_stmt (gsi)) == GIMPLE_OMP_FOR);
}
gsi_insert_after (&gsi, gimple_build_cond_empty (t), GSI_SAME_STMT);
/* Remove the GIMPLE_OMP_FOR statement. */
@ -3082,9 +3253,6 @@ expand_omp_for_generic (struct omp_region *region,
else
t = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END);
gcall *call_stmt = gimple_build_call (t, 0);
if (gimple_omp_return_lhs (gsi_stmt (gsi)))
gimple_call_set_lhs (call_stmt, gimple_omp_return_lhs (gsi_stmt (gsi)));
gsi_insert_after (&gsi, call_stmt, GSI_SAME_STMT);
if (fd->ordered)
{
tree arr = counts[fd->ordered];
@ -3093,6 +3261,17 @@ expand_omp_for_generic (struct omp_region *region,
gsi_insert_after (&gsi, gimple_build_assign (arr, clobber),
GSI_SAME_STMT);
}
if (gimple_omp_return_lhs (gsi_stmt (gsi)))
{
gimple_call_set_lhs (call_stmt, gimple_omp_return_lhs (gsi_stmt (gsi)));
if (fd->have_reductemp)
{
gimple *g = gimple_build_assign (reductions, NOP_EXPR,
gimple_call_lhs (call_stmt));
gsi_insert_after (&gsi, g, GSI_SAME_STMT);
}
}
gsi_insert_after (&gsi, call_stmt, GSI_SAME_STMT);
gsi_remove (&gsi, true);
/* Connect the new blocks. */
@ -3275,6 +3454,7 @@ expand_omp_for_static_nochunk (struct omp_region *region,
bool broken_loop = region->cont == NULL;
tree *counts = NULL;
tree n1, n2, step;
tree reductions = NULL_TREE;
itype = type = TREE_TYPE (fd->loop.v);
if (POINTER_TYPE_P (type))
@ -3358,6 +3538,29 @@ expand_omp_for_static_nochunk (struct omp_region *region,
gsi = gsi_last_bb (entry_bb);
}
if (fd->have_reductemp)
{
tree t1 = build_int_cst (long_integer_type_node, 0);
tree t2 = build_int_cst (long_integer_type_node, 1);
tree t3 = build_int_cstu (long_integer_type_node,
(HOST_WIDE_INT_1U << 31) + 1);
tree clauses = gimple_omp_for_clauses (fd->for_stmt);
clauses = omp_find_clause (clauses, OMP_CLAUSE__REDUCTEMP_);
reductions = OMP_CLAUSE_DECL (clauses);
gcc_assert (TREE_CODE (reductions) == SSA_NAME);
gimple *g = SSA_NAME_DEF_STMT (reductions);
reductions = gimple_assign_rhs1 (g);
OMP_CLAUSE_DECL (clauses) = reductions;
gimple_stmt_iterator gsi2 = gsi_for_stmt (g);
tree t
= build_call_expr (builtin_decl_explicit (BUILT_IN_GOMP_LOOP_START),
9, t1, t2, t2, t3, t1, null_pointer_node,
null_pointer_node, reductions, null_pointer_node);
force_gimple_operand_gsi (&gsi2, t, true, NULL_TREE,
true, GSI_SAME_STMT);
gsi_remove (&gsi2, true);
release_ssa_name (gimple_assign_lhs (g));
}
switch (gimple_omp_for_kind (fd->for_stmt))
{
case GF_OMP_FOR_KIND_FOR:
@ -3628,7 +3831,25 @@ expand_omp_for_static_nochunk (struct omp_region *region,
if (!gimple_omp_return_nowait_p (gsi_stmt (gsi)))
{
t = gimple_omp_return_lhs (gsi_stmt (gsi));
gsi_insert_after (&gsi, omp_build_barrier (t), GSI_SAME_STMT);
if (fd->have_reductemp)
{
tree fn;
if (t)
fn = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END_CANCEL);
else
fn = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END);
gcall *g = gimple_build_call (fn, 0);
if (t)
{
gimple_call_set_lhs (g, t);
gsi_insert_after (&gsi, gimple_build_assign (reductions,
NOP_EXPR, t),
GSI_SAME_STMT);
}
gsi_insert_after (&gsi, g, GSI_SAME_STMT);
}
else
gsi_insert_after (&gsi, omp_build_barrier (t), GSI_SAME_STMT);
}
gsi_remove (&gsi, true);
@ -3765,6 +3986,7 @@ expand_omp_for_static_chunk (struct omp_region *region,
bool broken_loop = region->cont == NULL;
tree *counts = NULL;
tree n1, n2, step;
tree reductions = NULL_TREE;
itype = type = TREE_TYPE (fd->loop.v);
if (POINTER_TYPE_P (type))
@ -3852,6 +4074,29 @@ expand_omp_for_static_chunk (struct omp_region *region,
gsi = gsi_last_bb (entry_bb);
}
if (fd->have_reductemp)
{
tree t1 = build_int_cst (long_integer_type_node, 0);
tree t2 = build_int_cst (long_integer_type_node, 1);
tree t3 = build_int_cstu (long_integer_type_node,
(HOST_WIDE_INT_1U << 31) + 1);
tree clauses = gimple_omp_for_clauses (fd->for_stmt);
clauses = omp_find_clause (clauses, OMP_CLAUSE__REDUCTEMP_);
reductions = OMP_CLAUSE_DECL (clauses);
gcc_assert (TREE_CODE (reductions) == SSA_NAME);
gimple *g = SSA_NAME_DEF_STMT (reductions);
reductions = gimple_assign_rhs1 (g);
OMP_CLAUSE_DECL (clauses) = reductions;
gimple_stmt_iterator gsi2 = gsi_for_stmt (g);
tree t
= build_call_expr (builtin_decl_explicit (BUILT_IN_GOMP_LOOP_START),
9, t1, t2, t2, t3, t1, null_pointer_node,
null_pointer_node, reductions, null_pointer_node);
force_gimple_operand_gsi (&gsi2, t, true, NULL_TREE,
true, GSI_SAME_STMT);
gsi_remove (&gsi2, true);
release_ssa_name (gimple_assign_lhs (g));
}
switch (gimple_omp_for_kind (fd->for_stmt))
{
case GF_OMP_FOR_KIND_FOR:
@ -4155,7 +4400,25 @@ expand_omp_for_static_chunk (struct omp_region *region,
if (!gimple_omp_return_nowait_p (gsi_stmt (gsi)))
{
t = gimple_omp_return_lhs (gsi_stmt (gsi));
gsi_insert_after (&gsi, omp_build_barrier (t), GSI_SAME_STMT);
if (fd->have_reductemp)
{
tree fn;
if (t)
fn = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END_CANCEL);
else
fn = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END);
gcall *g = gimple_build_call (fn, 0);
if (t)
{
gimple_call_set_lhs (g, t);
gsi_insert_after (&gsi, gimple_build_assign (reductions,
NOP_EXPR, t),
GSI_SAME_STMT);
}
gsi_insert_after (&gsi, g, GSI_SAME_STMT);
}
else
gsi_insert_after (&gsi, omp_build_barrier (t), GSI_SAME_STMT);
}
gsi_remove (&gsi, true);
@ -5690,39 +5953,72 @@ expand_omp_for (struct omp_region *region, gimple *inner_stmt)
else
{
int fn_index, start_ix, next_ix;
unsigned HOST_WIDE_INT sched = 0;
tree sched_arg = NULL_TREE;
gcc_assert (gimple_omp_for_kind (fd.for_stmt)
== GF_OMP_FOR_KIND_FOR);
if (fd.chunk_size == NULL
&& fd.sched_kind == OMP_CLAUSE_SCHEDULE_STATIC)
fd.chunk_size = integer_zero_node;
gcc_assert (fd.sched_kind != OMP_CLAUSE_SCHEDULE_AUTO);
switch (fd.sched_kind)
{
case OMP_CLAUSE_SCHEDULE_RUNTIME:
fn_index = 3;
if ((fd.sched_modifiers & OMP_CLAUSE_SCHEDULE_NONMONOTONIC) != 0)
{
gcc_assert (!fd.have_ordered);
fn_index = 6;
sched = 4;
}
else if ((fd.sched_modifiers & OMP_CLAUSE_SCHEDULE_MONOTONIC) == 0
&& !fd.have_ordered)
fn_index = 7;
else
{
fn_index = 3;
sched = (HOST_WIDE_INT_1U << 31);
}
break;
case OMP_CLAUSE_SCHEDULE_DYNAMIC:
case OMP_CLAUSE_SCHEDULE_GUIDED:
if ((fd.sched_modifiers & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
&& !fd.ordered
if ((fd.sched_modifiers & OMP_CLAUSE_SCHEDULE_MONOTONIC) == 0
&& !fd.have_ordered)
{
fn_index = 3 + fd.sched_kind;
sched = (fd.sched_kind == OMP_CLAUSE_SCHEDULE_GUIDED) + 2;
break;
}
/* FALLTHRU */
default:
fn_index = fd.sched_kind;
sched = (fd.sched_kind == OMP_CLAUSE_SCHEDULE_GUIDED) + 2;
sched += (HOST_WIDE_INT_1U << 31);
break;
case OMP_CLAUSE_SCHEDULE_STATIC:
gcc_assert (fd.have_ordered);
fn_index = 0;
sched = (HOST_WIDE_INT_1U << 31) + 1;
break;
default:
gcc_unreachable ();
}
if (!fd.ordered)
fn_index += fd.have_ordered * 6;
fn_index += fd.have_ordered * 8;
if (fd.ordered)
start_ix = ((int)BUILT_IN_GOMP_LOOP_DOACROSS_STATIC_START) + fn_index;
else
start_ix = ((int)BUILT_IN_GOMP_LOOP_STATIC_START) + fn_index;
next_ix = ((int)BUILT_IN_GOMP_LOOP_STATIC_NEXT) + fn_index;
if (fd.have_reductemp)
{
if (fd.ordered)
start_ix = (int)BUILT_IN_GOMP_LOOP_DOACROSS_START;
else if (fd.have_ordered)
start_ix = (int)BUILT_IN_GOMP_LOOP_ORDERED_START;
else
start_ix = (int)BUILT_IN_GOMP_LOOP_START;
sched_arg = build_int_cstu (long_integer_type_node, sched);
if (!fd.chunk_size)
fd.chunk_size = integer_zero_node;
}
if (fd.iter_type == long_long_unsigned_type_node)
{
start_ix += ((int)BUILT_IN_GOMP_LOOP_ULL_STATIC_START
@ -5731,7 +6027,8 @@ expand_omp_for (struct omp_region *region, gimple *inner_stmt)
- (int)BUILT_IN_GOMP_LOOP_STATIC_NEXT);
}
expand_omp_for_generic (region, &fd, (enum built_in_function) start_ix,
(enum built_in_function) next_ix, inner_stmt);
(enum built_in_function) next_ix, sched_arg,
inner_stmt);
}
if (gimple_in_ssa_p (cfun))
@ -5831,7 +6128,25 @@ expand_omp_sections (struct omp_region *region)
sections_stmt = as_a <gomp_sections *> (gsi_stmt (si));
gcc_assert (gimple_code (sections_stmt) == GIMPLE_OMP_SECTIONS);
vin = gimple_omp_sections_control (sections_stmt);
if (!is_combined_parallel (region))
tree clauses = gimple_omp_sections_clauses (sections_stmt);
tree reductmp = omp_find_clause (clauses, OMP_CLAUSE__REDUCTEMP_);
if (reductmp)
{
tree reductions = OMP_CLAUSE_DECL (reductmp);
gcc_assert (TREE_CODE (reductions) == SSA_NAME);
gimple *g = SSA_NAME_DEF_STMT (reductions);
reductions = gimple_assign_rhs1 (g);
OMP_CLAUSE_DECL (reductmp) = reductions;
gimple_stmt_iterator gsi = gsi_for_stmt (g);
t = build_int_cst (unsigned_type_node, len - 1);
u = builtin_decl_explicit (BUILT_IN_GOMP_SECTIONS2_START);
stmt = gimple_build_call (u, 3, t, reductions, null_pointer_node);
gimple_call_set_lhs (stmt, vin);
gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
gsi_remove (&gsi, true);
release_ssa_name (gimple_assign_lhs (g));
}
else if (!is_combined_parallel (region))
{
/* If we are not inside a combined parallel+sections region,
call GOMP_sections_start. */
@ -5845,8 +6160,11 @@ expand_omp_sections (struct omp_region *region)
u = builtin_decl_explicit (BUILT_IN_GOMP_SECTIONS_NEXT);
stmt = gimple_build_call (u, 0);
}
gimple_call_set_lhs (stmt, vin);
gsi_insert_after (&si, stmt, GSI_SAME_STMT);
if (!reductmp)
{
gimple_call_set_lhs (stmt, vin);
gsi_insert_after (&si, stmt, GSI_SAME_STMT);
}
gsi_remove (&si, true);
/* The switch() statement replacing GIMPLE_OMP_SECTIONS_SWITCH goes in
@ -6004,6 +6322,12 @@ expand_omp_synch (struct omp_region *region)
|| gimple_code (gsi_stmt (si)) == GIMPLE_OMP_ORDERED
|| gimple_code (gsi_stmt (si)) == GIMPLE_OMP_CRITICAL
|| gimple_code (gsi_stmt (si)) == GIMPLE_OMP_TEAMS);
if (gimple_code (gsi_stmt (si)) == GIMPLE_OMP_TEAMS
&& gimple_omp_teams_host (as_a <gomp_teams *> (gsi_stmt (si))))
{
expand_omp_taskreg (region);
return;
}
gsi_remove (&si, true);
single_succ_edge (entry_bb)->flags = EDGE_FALLTHRU;
@ -6016,6 +6340,24 @@ expand_omp_synch (struct omp_region *region)
}
}
/* Translate enum omp_memory_order to enum memmodel. The two enums
are using different numbers so that OMP_MEMORY_ORDER_UNSPECIFIED
is 0. */
static enum memmodel
omp_memory_order_to_memmodel (enum omp_memory_order mo)
{
switch (mo)
{
case OMP_MEMORY_ORDER_RELAXED: return MEMMODEL_RELAXED;
case OMP_MEMORY_ORDER_ACQUIRE: return MEMMODEL_ACQUIRE;
case OMP_MEMORY_ORDER_RELEASE: return MEMMODEL_RELEASE;
case OMP_MEMORY_ORDER_ACQ_REL: return MEMMODEL_ACQ_REL;
case OMP_MEMORY_ORDER_SEQ_CST: return MEMMODEL_SEQ_CST;
default: gcc_unreachable ();
}
}
/* A subroutine of expand_omp_atomic. Attempt to implement the atomic
operation as a normal volatile load. */
@ -6047,11 +6389,9 @@ expand_omp_atomic_load (basic_block load_bb, tree addr,
type = TREE_TYPE (loaded_val);
itype = TREE_TYPE (TREE_TYPE (decl));
call = build_call_expr_loc (loc, decl, 2, addr,
build_int_cst (NULL,
gimple_omp_atomic_seq_cst_p (stmt)
? MEMMODEL_SEQ_CST
: MEMMODEL_RELAXED));
enum omp_memory_order omo = gimple_omp_atomic_memory_order (stmt);
tree mo = build_int_cst (NULL, omp_memory_order_to_memmodel (omo));
call = build_call_expr_loc (loc, decl, 2, addr, mo);
if (!useless_type_conversion_p (type, itype))
call = fold_build1_loc (loc, VIEW_CONVERT_EXPR, type, call);
call = build2_loc (loc, MODIFY_EXPR, void_type_node, loaded_val, call);
@ -6122,11 +6462,9 @@ expand_omp_atomic_store (basic_block load_bb, tree addr,
if (!useless_type_conversion_p (itype, type))
stored_val = fold_build1_loc (loc, VIEW_CONVERT_EXPR, itype, stored_val);
call = build_call_expr_loc (loc, decl, 3, addr, stored_val,
build_int_cst (NULL,
gimple_omp_atomic_seq_cst_p (stmt)
? MEMMODEL_SEQ_CST
: MEMMODEL_RELAXED));
enum omp_memory_order omo = gimple_omp_atomic_memory_order (stmt);
tree mo = build_int_cst (NULL, omp_memory_order_to_memmodel (omo));
call = build_call_expr_loc (loc, decl, 3, addr, stored_val, mo);
if (exchange)
{
if (!useless_type_conversion_p (type, itype))
@ -6167,7 +6505,6 @@ expand_omp_atomic_fetch_op (basic_block load_bb,
enum tree_code code;
bool need_old, need_new;
machine_mode imode;
bool seq_cst;
/* We expect to find the following sequences:
@ -6200,7 +6537,9 @@ expand_omp_atomic_fetch_op (basic_block load_bb,
return false;
need_new = gimple_omp_atomic_need_value_p (gsi_stmt (gsi));
need_old = gimple_omp_atomic_need_value_p (last_stmt (load_bb));
seq_cst = gimple_omp_atomic_seq_cst_p (last_stmt (load_bb));
enum omp_memory_order omo
= gimple_omp_atomic_memory_order (last_stmt (load_bb));
enum memmodel mo = omp_memory_order_to_memmodel (omo);
gcc_checking_assert (!need_old || !need_new);
if (!operand_equal_p (gimple_assign_lhs (stmt), stored_val, 0))
@ -6267,9 +6606,7 @@ expand_omp_atomic_fetch_op (basic_block load_bb,
use the RELAXED memory model. */
call = build_call_expr_loc (loc, decl, 3, addr,
fold_convert_loc (loc, itype, rhs),
build_int_cst (NULL,
seq_cst ? MEMMODEL_SEQ_CST
: MEMMODEL_RELAXED));
build_int_cst (NULL, mo));
if (need_old || need_new)
{
@ -7921,6 +8258,10 @@ build_omp_regions_1 (basic_block bb, struct omp_region *parent,
/* #pragma omp ordered depend is also just a stand-alone
directive. */
region = NULL;
else if (code == GIMPLE_OMP_TASK
&& gimple_omp_task_taskwait_p (stmt))
/* #pragma omp taskwait depend(...) is a stand-alone directive. */
region = NULL;
/* ..., this directive becomes the parent for a new region. */
if (region)
parent = region;
@ -8111,7 +8452,6 @@ omp_make_gimple_edges (basic_block bb, struct omp_region **region,
switch (code)
{
case GIMPLE_OMP_PARALLEL:
case GIMPLE_OMP_TASK:
case GIMPLE_OMP_FOR:
case GIMPLE_OMP_SINGLE:
case GIMPLE_OMP_TEAMS:
@ -8124,6 +8464,13 @@ omp_make_gimple_edges (basic_block bb, struct omp_region **region,
fallthru = true;
break;
case GIMPLE_OMP_TASK:
cur_region = new_omp_region (bb, code, cur_region);
fallthru = true;
if (gimple_omp_task_taskwait_p (last))
cur_region = cur_region->outer;
break;
case GIMPLE_OMP_ORDERED:
cur_region = new_omp_region (bb, code, cur_region);
fallthru = true;
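
For orientation only, not part of the patch: the OMP_CLAUSE__REDUCTEMP_
plumbing in this file implements OpenMP 5.0 task reductions.  A minimal
user-level sketch with hypothetical names, assuming -fopenmp:

int sum;

void
task_reduction_example (void)
{
  #pragma omp parallel reduction(task, +: sum)
  {
    #pragma omp task in_reduction(+: sum)
    sum++;
  }
  /* A parallel carrying a task reduction is emitted as a call to the new
     GOMP_parallel_reductions builtin.  */
}
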


@ -36,6 +36,8 @@ along with GCC; see the file COPYING3. If not see
#include "stringpool.h"
#include "attribs.h"
enum omp_requires omp_requires_mask;
tree
omp_find_clause (tree clauses, enum omp_clause_code kind)
{
@ -136,6 +138,7 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
fd->pre = NULL;
fd->have_nowait = distribute || simd;
fd->have_ordered = false;
fd->have_reductemp = false;
fd->tiling = NULL_TREE;
fd->collapse = 1;
fd->ordered = 0;
@ -186,6 +189,8 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
collapse_iter = &OMP_CLAUSE_TILE_ITERVAR (t);
collapse_count = &OMP_CLAUSE_TILE_COUNT (t);
break;
case OMP_CLAUSE__REDUCTEMP_:
fd->have_reductemp = true;
default:
break;
}
@ -250,13 +255,45 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
loop->cond_code = gimple_omp_for_cond (for_stmt, i);
loop->n2 = gimple_omp_for_final (for_stmt, i);
gcc_assert (loop->cond_code != NE_EXPR);
gcc_assert (loop->cond_code != NE_EXPR
|| (gimple_omp_for_kind (for_stmt)
!= GF_OMP_FOR_KIND_OACC_LOOP));
omp_adjust_for_condition (loc, &loop->cond_code, &loop->n2);
t = gimple_omp_for_incr (for_stmt, i);
gcc_assert (TREE_OPERAND (t, 0) == var);
loop->step = omp_get_for_step_from_incr (loc, t);
if (loop->cond_code == NE_EXPR)
{
gcc_assert (TREE_CODE (loop->step) == INTEGER_CST);
if (TREE_CODE (TREE_TYPE (loop->v)) == INTEGER_TYPE)
{
if (integer_onep (loop->step))
loop->cond_code = LT_EXPR;
else
{
gcc_assert (integer_minus_onep (loop->step));
loop->cond_code = GT_EXPR;
}
}
else
{
tree unit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (loop->v)));
gcc_assert (TREE_CODE (unit) == INTEGER_CST);
if (tree_int_cst_equal (unit, loop->step))
loop->cond_code = LT_EXPR;
else
{
gcc_assert (wi::neg (wi::to_widest (unit))
== wi::to_widest (loop->step));
loop->cond_code = GT_EXPR;
}
}
}
omp_adjust_for_condition (loc, &loop->cond_code, &loop->n2);
if (simd
|| (fd->sched_kind == OMP_CLAUSE_SCHEDULE_STATIC
&& !fd->have_ordered))
@ -281,9 +318,8 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
tree n;
if (loop->cond_code == LT_EXPR)
n = fold_build2_loc (loc,
PLUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
n = fold_build2_loc (loc, PLUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
else
n = loop->n1;
if (TREE_CODE (n) != INTEGER_CST
@ -298,15 +334,13 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
if (loop->cond_code == LT_EXPR)
{
n1 = loop->n1;
n2 = fold_build2_loc (loc,
PLUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
n2 = fold_build2_loc (loc, PLUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
}
else
{
n1 = fold_build2_loc (loc,
MINUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
n1 = fold_build2_loc (loc, MINUS_EXPR, TREE_TYPE (loop->v),
loop->n2, loop->step);
n2 = loop->n1;
}
if (TREE_CODE (n1) != INTEGER_CST
@ -338,27 +372,31 @@ omp_extract_for_data (gomp_for *for_stmt, struct omp_for_data *fd,
if (POINTER_TYPE_P (itype))
itype = signed_type_for (itype);
t = build_int_cst (itype, (loop->cond_code == LT_EXPR ? -1 : 1));
t = fold_build2_loc (loc,
PLUS_EXPR, itype,
fold_convert_loc (loc, itype, loop->step), t);
t = fold_build2_loc (loc, PLUS_EXPR, itype,
fold_convert_loc (loc, itype, loop->step),
t);
t = fold_build2_loc (loc, PLUS_EXPR, itype, t,
fold_convert_loc (loc, itype, loop->n2));
fold_convert_loc (loc, itype, loop->n2));
t = fold_build2_loc (loc, MINUS_EXPR, itype, t,
fold_convert_loc (loc, itype, loop->n1));
fold_convert_loc (loc, itype, loop->n1));
if (TYPE_UNSIGNED (itype) && loop->cond_code == GT_EXPR)
t = fold_build2_loc (loc, TRUNC_DIV_EXPR, itype,
fold_build1_loc (loc, NEGATE_EXPR, itype, t),
fold_build1_loc (loc, NEGATE_EXPR, itype,
fold_convert_loc (loc, itype,
loop->step)));
{
tree step = fold_convert_loc (loc, itype, loop->step);
t = fold_build2_loc (loc, TRUNC_DIV_EXPR, itype,
fold_build1_loc (loc, NEGATE_EXPR,
itype, t),
fold_build1_loc (loc, NEGATE_EXPR,
itype, step));
}
else
t = fold_build2_loc (loc, TRUNC_DIV_EXPR, itype, t,
fold_convert_loc (loc, itype, loop->step));
fold_convert_loc (loc, itype,
loop->step));
t = fold_convert_loc (loc, long_long_unsigned_type_node, t);
if (count != NULL_TREE)
count = fold_build2_loc (loc,
MULT_EXPR, long_long_unsigned_type_node,
count, t);
count = fold_build2_loc (loc, MULT_EXPR,
long_long_unsigned_type_node,
count, t);
else
count = t;
if (TREE_CODE (count) != INTEGER_CST)


@ -62,7 +62,7 @@ struct omp_for_data
tree tiling; /* Tiling values (if non null). */
int collapse; /* Collapsed loops, 1 for a non-collapsed loop. */
int ordered;
bool have_nowait, have_ordered, simd_schedule;
bool have_nowait, have_ordered, simd_schedule, have_reductemp;
unsigned char sched_modifiers;
enum omp_clause_schedule_kind sched_kind;
struct omp_for_data_loop *loops;
@ -89,4 +89,16 @@ extern bool offloading_function_p (tree fn);
extern int oacc_get_fn_dim_size (tree fn, int axis);
extern int oacc_get_ifn_dim_arg (const gimple *stmt);
enum omp_requires {
OMP_REQUIRES_ATOMIC_DEFAULT_MEM_ORDER = 0xf,
OMP_REQUIRES_UNIFIED_ADDRESS = 0x10,
OMP_REQUIRES_UNIFIED_SHARED_MEMORY = 0x20,
OMP_REQUIRES_DYNAMIC_ALLOCATORS = 0x40,
OMP_REQUIRES_REVERSE_OFFLOAD = 0x80,
OMP_REQUIRES_ATOMIC_DEFAULT_MEM_ORDER_USED = 0x100,
OMP_REQUIRES_TARGET_USED = 0x200
};
extern GTY(()) enum omp_requires omp_requires_mask;
#endif /* GCC_OMP_GENERAL_H */
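
Illustration only: the omp_requires_mask bits above record source-level
requires directives, e.g. (assuming -fopenmp)

#pragma omp requires atomic_default_mem_order(relaxed)

which makes a plain #pragma omp atomic default to the relaxed memory order
for the rest of the compilation unit.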


@ -1053,8 +1053,8 @@ grid_eliminate_combined_simd_part (gomp_for *parloop)
while (*tgt)
tgt = &OMP_CLAUSE_CHAIN (*tgt);
/* Copy over all clauses, except for linaer clauses, which are turned into
private clauses, and all other simd-specificl clauses, which are
/* Copy over all clauses, except for linear clauses, which are turned into
private clauses, and all other simd-specific clauses, which are
ignored. */
tree *pc = gimple_omp_for_clauses_ptr (simd);
while (*pc)
@ -1083,7 +1083,7 @@ grid_eliminate_combined_simd_part (gomp_for *parloop)
*pc = OMP_CLAUSE_CHAIN (c);
OMP_CLAUSE_CHAIN (c) = NULL;
*tgt = c;
tgt = &OMP_CLAUSE_CHAIN(c);
tgt = &OMP_CLAUSE_CHAIN (c);
break;
}
}

File diff suppressed because it is too large.


@ -1,3 +1,114 @@
2018-11-08 Jakub Jelinek <jakub@redhat.com>
* c-c++-common/gomp/atomic-17.c: New test.
* c-c++-common/gomp/atomic-18.c: New test.
* c-c++-common/gomp/atomic-19.c: New test.
* c-c++-common/gomp/atomic-20.c: New test.
* c-c++-common/gomp/atomic-21.c: New test.
* c-c++-common/gomp/atomic-22.c: New test.
* c-c++-common/gomp/clauses-1.c (r2): New variable.
(foo): Add ntm argument and test if and nontemporal clauses on
constructs with simd.
(bar): Put taskloop simd inside of taskgroup with task_reduction,
use in_reduction clause instead of reduction. Add another
taskloop simd without nogroup clause, but with reduction clause and
a new in_reduction. Add ntm and i3 arguments. Test if and
nontemporal clauses on constructs with simd. Change if clauses on
some constructs from specific to the particular constituents to one
without a modifier. Add new tests for combined host teams and for
new parallel master and {,parallel }master taskloop{, simd} combined
constructs.
(baz): New function with host teams tests.
* gcc.dg/gomp/combined-1.c: Moved to ...
* c-c++-common/gomp/combined-1.c: ... here. Adjust expected library
call.
* c-c++-common/gomp/combined-2.c: New test.
* c-c++-common/gomp/combined-3.c: New test.
* c-c++-common/gomp/critical-1.c: New test.
* c-c++-common/gomp/critical-2.c: New test.
* c-c++-common/gomp/default-1.c: New test.
* c-c++-common/gomp/defaultmap-1.c: New test.
* c-c++-common/gomp/defaultmap-2.c: New test.
* c-c++-common/gomp/defaultmap-3.c: New test.
* c-c++-common/gomp/depend-5.c: New test.
* c-c++-common/gomp/depend-6.c: New test.
* c-c++-common/gomp/depend-iterator-1.c: New test.
* c-c++-common/gomp/depend-iterator-2.c: New test.
* c-c++-common/gomp/depobj-1.c: New test.
* c-c++-common/gomp/flush-1.c: New test.
* c-c++-common/gomp/flush-2.c: New test.
* c-c++-common/gomp/for-1.c: New test.
* c-c++-common/gomp/for-2.c: New test.
* c-c++-common/gomp/for-3.c: New test.
* c-c++-common/gomp/for-4.c: New test.
* c-c++-common/gomp/for-5.c: New test.
* c-c++-common/gomp/for-6.c: New test.
* c-c++-common/gomp/for-7.c: New test.
* c-c++-common/gomp/if-1.c (foo): Add some further tests.
* c-c++-common/gomp/if-2.c (foo): Likewise. Expect slightly different
diagnostics wording in one case.
* c-c++-common/gomp/if-3.c: New test.
* c-c++-common/gomp/master-combined-1.c: New test.
* c-c++-common/gomp/master-combined-2.c: New test.
* c-c++-common/gomp/nontemporal-1.c: New test.
* c-c++-common/gomp/nontemporal-2.c: New test.
* c-c++-common/gomp/reduction-task-1.c: New test.
* c-c++-common/gomp/reduction-task-2.c: New test.
* c-c++-common/gomp/requires-1.c: New test.
* c-c++-common/gomp/requires-2.c: New test.
* c-c++-common/gomp/requires-3.c: New test.
* c-c++-common/gomp/requires-4.c: New test.
* c-c++-common/gomp/schedule-modifiers-1.c (bar): Don't expect
diagnostics for nonmonotonic modifier with static, runtime or auto
schedule kinds.
* c-c++-common/gomp/simd7.c: New test.
* c-c++-common/gomp/target-data-1.c: New test.
* c-c++-common/gomp/taskloop-reduction-1.c: New test.
* c-c++-common/gomp/taskwait-depend-1.c: New test.
* c-c++-common/gomp/teams-1.c: New test.
* c-c++-common/gomp/teams-2.c: New test.
* gcc.dg/gomp/appendix-a/a.24.1.c: Update from OpenMP examples. Add
shared(c) clause.
* gcc.dg/gomp/atomic-5.c (f1): Add another expected error.
* gcc.dg/gomp/clause-1.c: Adjust expected diagnostics for const
qualified vars without mutable member no longer being predetermined
shared.
* gcc.dg/gomp/sharing-1.c: Likewise.
* g++.dg/gomp/clause-3.C: Likewise.
* g++.dg/gomp/member-2.C: Likewise.
* g++.dg/gomp/predetermined-1.C: Likewise.
* g++.dg/gomp/private-1.C: Likewise.
* g++.dg/gomp/sharing-1.C: Likewise.
* g++.dg/gomp/sharing-2.C: Likewise. Add a few tests with aggregate
const static data member without mutable elements.
* gcc.dg/gomp/for-4.c: Expected nonmonotonic functions in the dumps.
* gcc.dg/gomp/for-5.c: Likewise.
* gcc.dg/gomp/for-6.c: Change expected library call.
* gcc.dg/gomp/pr39495-2.c (foo): Don't expect errors on !=.
* gcc.dg/gomp/reduction-2.c: New test.
* gcc.dg/gomp/simd-1.c: New test.
* gcc.dg/gomp/teams-1.c: Adjust expected diagnostic lines.
* g++.dg/gomp/atomic-18.C: New test.
* g++.dg/gomp/atomic-19.C: New test.
* g++.dg/gomp/atomic-5.C (f1): Adjust expected lines of read-only
variable messages. Add another expected error.
* g++.dg/gomp/critical-3.C: New test.
* g++.dg/gomp/depend-iterator-1.C: New test.
* g++.dg/gomp/depend-iterator-2.C: New test.
* g++.dg/gomp/depobj-1.C: New test.
* g++.dg/gomp/doacross-1.C: New test.
* g++.dg/gomp/for-21.C: New test.
* g++.dg/gomp/for-4.C: Expected nonmonotonic functions in the dumps.
* g++.dg/gomp/for-5.C: Likewise.
* g++.dg/gomp/for-6.C: Change expected library call.
* g++.dg/gomp/loop-4.C: New test.
* g++.dg/gomp/pr33372-1.C: Adjust location of the expected
diagnostics.
* g++.dg/gomp/pr33372-3.C: Likewise.
* g++.dg/gomp/pr39495-2.C (foo): Don't expect errors on !=.
* g++.dg/gomp/simd-2.C: New test.
* g++.dg/gomp/tpl-atomic-2.C: Adjust expected diagnostic lines.

2018-11-08  Uros Bizjak  <ubizjak@gmail.com>

* gcc.dg/pr87874.c (em): Declare uint64_max as

@@ -0,0 +1,29 @@
int i, v;
float f;
void
foo ()
{
#pragma omp atomic release, hint (0), update
i = i + 1;
#pragma omp atomic hint(0)seq_cst
i = i + 1;
#pragma omp atomic relaxed,update,hint (0)
i = i + 1;
#pragma omp atomic release
i = i + 1;
#pragma omp atomic relaxed
i = i + 1;
#pragma omp atomic acq_rel capture
v = i = i + 1;
#pragma omp atomic capture,acq_rel , hint (1)
v = i = i + 1;
#pragma omp atomic hint(0),acquire capture
v = i = i + 1;
#pragma omp atomic read acquire
v = i;
#pragma omp atomic release,write
i = v;
#pragma omp atomic hint(1),update,release
f = f + 2.0;
}

@@ -0,0 +1,35 @@
int i, v;
float f;
void
foo (int j)
{
#pragma omp atomic update,update /* { dg-error "too many atomic clauses" } */
i = i + 1;
#pragma omp atomic seq_cst release /* { dg-error "too many memory order clauses" } */
i = i + 1;
#pragma omp atomic read,release /* { dg-error "incompatible with 'acq_rel' or 'release' clauses" } */
v = i;
#pragma omp atomic acq_rel read /* { dg-error "incompatible with 'acq_rel' or 'release' clauses" } */
v = i;
#pragma omp atomic write acq_rel /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = v;
#pragma omp atomic acquire , write /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = v;
#pragma omp atomic update ,acquire /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = i + 1;
#pragma omp atomic acq_rel update /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = i + 1;
#pragma omp atomic acq_rel,hint(0) /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = i + 1;
#pragma omp atomic acquire /* { dg-error "incompatible with 'acq_rel' or 'acquire' clauses" } */
i = i + 1;
#pragma omp atomic capture hint (0) capture /* { dg-error "too many atomic clauses" } */
v = i = i + 1;
#pragma omp atomic hint(j + 2) /* { dg-error "constant integer expression" } */
i = i + 1;
#pragma omp atomic hint(f) /* { dg-error "integ" } */
i = i + 1;
#pragma omp atomic foobar /* { dg-error "expected 'read', 'write', 'update', 'capture', 'seq_cst', 'acq_rel', 'release', 'relaxed' or 'hint' clause" } */
i = i + 1; /* { dg-error "expected end of line before" "" { target *-*-* } .-1 } */
}

@@ -0,0 +1,27 @@
/* { dg-do compile } */
/* { dg-additional-options "-fdump-tree-original" } */
/* { dg-final { scan-tree-dump-times "omp atomic release" 1 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic relaxed" 3 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic read relaxed" 1 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic capture relaxed" 1 "original" } } */
int i, j, k, l, m, n;
void
foo ()
{
int v;
#pragma omp atomic release
i = i + 1;
#pragma omp requires atomic_default_mem_order (relaxed)
#pragma omp atomic
j = j + 1;
#pragma omp atomic update
k = k + 1;
#pragma omp atomic read
v = l;
#pragma omp atomic write
m = v;
#pragma omp atomic capture
v = n = n + 1;
}

@@ -0,0 +1,27 @@
/* { dg-do compile } */
/* { dg-additional-options "-fdump-tree-original" } */
/* { dg-final { scan-tree-dump-times "omp atomic release" 1 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic seq_cst" 3 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic read seq_cst" 1 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic capture seq_cst" 1 "original" } } */
int i, j, k, l, m, n;
void
foo ()
{
int v;
#pragma omp atomic release
i = i + 1;
#pragma omp requires atomic_default_mem_order (seq_cst)
#pragma omp atomic
j = j + 1;
#pragma omp atomic update
k = k + 1;
#pragma omp atomic read
v = l;
#pragma omp atomic write
m = v;
#pragma omp atomic capture
v = n = n + 1;
}

@@ -0,0 +1,26 @@
/* { dg-do compile } */
/* { dg-additional-options "-fdump-tree-original" } */
/* { dg-final { scan-tree-dump-times "omp atomic release" 4 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic read acquire" 1 "original" } } */
/* { dg-final { scan-tree-dump-times "omp atomic capture acq_rel" 1 "original" } } */
int i, j, k, l, m, n;
void
foo ()
{
int v;
#pragma omp atomic release
i = i + 1;
#pragma omp requires atomic_default_mem_order (acq_rel)
#pragma omp atomic
j = j + 1;
#pragma omp atomic update
k = k + 1;
#pragma omp atomic read
v = l;
#pragma omp atomic write
m = v;
#pragma omp atomic capture
v = n = n + 1;
}

@@ -0,0 +1,12 @@
int i, j;
void
foo ()
{
int v;
#pragma omp atomic release
i = i + 1;
#pragma omp atomic read
v = j;
#pragma omp requires atomic_default_mem_order (acq_rel) /* { dg-error "'atomic_default_mem_order' clause used lexically after first 'atomic' construct without memory order clause" } */
}

@@ -5,11 +5,11 @@ int t;
#pragma omp threadprivate (t)
#pragma omp declare target
int f, l, ll, r;
int f, l, ll, r, r2;
void
foo (int d, int m, int i1, int i2, int p, int *idp, int s,
int nte, int tl, int nth, int g, int nta, int fi, int pp, int *q)
int nte, int tl, int nth, int g, int nta, int fi, int pp, int *q, int ntm)
{
#pragma omp distribute parallel for \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
@@ -19,26 +19,50 @@ foo (int d, int m, int i1, int i2, int p, int *idp, int s,
ll++;
#pragma omp distribute parallel for simd \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
if (parallel: i2) default(shared) shared(s) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) \
if (parallel: i2) if(simd: i1) default(shared) shared(s) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) nontemporal(ntm) \
safelen(8) simdlen(4) aligned(q: 32)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp distribute simd \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
safelen(8) simdlen(4) aligned(q: 32) reduction(+:r)
safelen(8) simdlen(4) aligned(q: 32) reduction(+:r) if(i1) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
}
#pragma omp end declare target
void
bar (int d, int m, int i1, int i2, int p, int *idp, int s,
int nte, int tl, int nth, int g, int nta, int fi, int pp, int *q, int *dd)
baz (int d, int m, int i1, int i2, int p, int *idp, int s,
int nte, int tl, int nth, int g, int nta, int fi, int pp, int *q, int ntm)
{
#pragma omp distribute parallel for \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
if (parallel: i2) default(shared) shared(s) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp distribute parallel for simd \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
if (parallel: i2) if(simd: i1) default(shared) shared(s) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) nontemporal(ntm) \
safelen(8) simdlen(4) aligned(q: 32) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp distribute simd \
private (p) firstprivate (f) collapse(1) dist_schedule(static, 16) \
safelen(8) simdlen(4) aligned(q: 32) reduction(+:r) if(i1) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
}
void
bar (int d, int m, int i1, int i2, int i3, int p, int *idp, int s,
int nte, int tl, int nth, int g, int nta, int fi, int pp, int *q, int *dd, int ntm)
{
#pragma omp for simd \
private (p) firstprivate (f) lastprivate (l) linear (ll:1) reduction(+:r) schedule(static, 4) collapse(1) nowait \
safelen(8) simdlen(4) aligned(q: 32)
safelen(8) simdlen(4) aligned(q: 32) nontemporal(ntm) if(i1)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel for \
@@ -47,9 +71,9 @@ bar (int d, int m, int i1, int i2, int p, int *idp, int s,
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel for simd \
private (p) firstprivate (f) if (parallel: i2) default(shared) shared(s) copyin(t) reduction(+:r) num_threads (nth) proc_bind(spread) \
private (p) firstprivate (f) if (i2) default(shared) shared(s) copyin(t) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) linear (ll:1) schedule(static, 4) collapse(1) \
safelen(8) simdlen(4) aligned(q: 32)
safelen(8) simdlen(4) aligned(q: 32) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel sections \
@@ -76,7 +100,7 @@ bar (int d, int m, int i1, int i2, int p, int *idp, int s,
device(d) map (tofrom: m) if (target: i1) private (p) firstprivate (f) defaultmap(tofrom: scalar) is_device_ptr (idp) \
if (parallel: i2) default(shared) shared(s) reduction(+:r) num_threads (nth) proc_bind(spread) \
lastprivate (l) linear (ll:1) schedule(static, 4) collapse(1) \
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0])
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0]) nontemporal(ntm) if (simd: i3)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp target teams \
@@ -103,31 +127,38 @@ bar (int d, int m, int i1, int i2, int p, int *idp, int s,
collapse(1) dist_schedule(static, 16) \
if (parallel: i2) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) \
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0])
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0]) nontemporal(ntm) if (simd: i3)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp target teams distribute simd \
device(d) map (tofrom: m) if (target: i1) private (p) firstprivate (f) defaultmap(tofrom: scalar) is_device_ptr (idp) \
device(d) map (tofrom: m) if (i1) private (p) firstprivate (f) defaultmap(tofrom: scalar) is_device_ptr (idp) \
shared(s) default(shared) reduction(+:r) num_teams(nte) thread_limit(tl) \
collapse(1) dist_schedule(static, 16) \
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0])
safelen(8) simdlen(4) aligned(q: 32) nowait depend(inout: dd[0]) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp target simd \
device(d) map (tofrom: m) if (target: i1) private (p) firstprivate (f) defaultmap(tofrom: scalar) is_device_ptr (idp) \
safelen(8) simdlen(4) lastprivate (l) linear(ll: 1) aligned(q: 32) reduction(+:r) \
nowait depend(inout: dd[0])
nowait depend(inout: dd[0]) nontemporal(ntm) if(simd:i3)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskgroup task_reduction(+:r2)
#pragma omp taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) final(fi) mergeable nogroup priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(+:r)
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) if(simd: i2) final(fi) mergeable priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(default, +:r) in_reduction(+:r2) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskgroup task_reduction(+:r)
#pragma omp taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(i1) final(fi) mergeable nogroup priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) in_reduction(+:r) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskwait
#pragma omp taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) num_tasks (nta) collapse(1) if(taskloop: i1) final(fi) priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(+:r)
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(+:r) if (simd: i3) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp target nowait depend(inout: dd[0])
@@ -150,14 +181,83 @@ bar (int d, int m, int i1, int i2, int p, int *idp, int s,
collapse(1) dist_schedule(static, 16) \
if (parallel: i2) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) \
safelen(8) simdlen(4) aligned(q: 32)
safelen(8) simdlen(4) aligned(q: 32) if (simd: i3) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp target
#pragma omp teams distribute simd \
private(p) firstprivate (f) shared(s) default(shared) reduction(+:r) num_teams(nte) thread_limit(tl) \
collapse(1) dist_schedule(static, 16) \
safelen(8) simdlen(4) aligned(q: 32)
safelen(8) simdlen(4) aligned(q: 32) if(i3) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp teams distribute parallel for \
private(p) firstprivate (f) shared(s) default(shared) reduction(+:r) num_teams(nte) thread_limit(tl) \
collapse(1) dist_schedule(static, 16) \
if (parallel: i2) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp teams distribute parallel for simd \
private(p) firstprivate (f) shared(s) default(shared) reduction(+:r) num_teams(nte) thread_limit(tl) \
collapse(1) dist_schedule(static, 16) \
if (parallel: i2) num_threads (nth) proc_bind(spread) \
lastprivate (l) schedule(static, 4) \
safelen(8) simdlen(4) aligned(q: 32) if (simd: i3) nontemporal(ntm) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp teams distribute simd \
private(p) firstprivate (f) shared(s) default(shared) reduction(+:r) num_teams(nte) thread_limit(tl) \
collapse(1) dist_schedule(static, 16) \
safelen(8) simdlen(4) aligned(q: 32) if(i3) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel master \
private (p) firstprivate (f) if (parallel: i2) default(shared) shared(s) reduction(+:r) \
num_threads (nth) proc_bind(spread) copyin(t)
;
#pragma omp taskgroup task_reduction (+:r2)
#pragma omp master taskloop \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) final(fi) mergeable priority (pp) \
reduction(default, +:r) in_reduction(+:r2)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskgroup task_reduction (+:r2)
#pragma omp master taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) if(simd: i2) final(fi) mergeable priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(default, +:r) in_reduction(+:r2) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel master taskloop \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) final(fi) mergeable priority (pp) \
reduction(default, +:r) if (parallel: i2) num_threads (nth) proc_bind(spread) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel master taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) grainsize (g) collapse(1) untied if(taskloop: i1) if(simd: i2) final(fi) mergeable priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(default, +:r) nontemporal(ntm) if (parallel: i2) num_threads (nth) proc_bind(spread) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskgroup task_reduction (+:r2)
#pragma omp master taskloop \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) num_tasks (nta) collapse(1) untied if(i1) final(fi) mergeable priority (pp) \
reduction(default, +:r) in_reduction(+:r2)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp taskgroup task_reduction (+:r2)
#pragma omp master taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) num_tasks (nta) collapse(1) untied if(i1) final(fi) mergeable priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(default, +:r) in_reduction(+:r2) nontemporal(ntm)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel master taskloop \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) num_tasks (nta) collapse(1) untied if(i1) final(fi) mergeable priority (pp) \
reduction(default, +:r) num_threads (nth) proc_bind(spread) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
#pragma omp parallel master taskloop simd \
private (p) firstprivate (f) lastprivate (l) shared (s) default(shared) num_tasks (nta) collapse(1) untied if(i1) final(fi) mergeable priority (pp) \
safelen(8) simdlen(4) linear(ll: 1) aligned(q: 32) reduction(default, +:r) nontemporal(ntm) num_threads (nth) proc_bind(spread) copyin(t)
for (int i = 0; i < 64; i++)
ll++;
}

@@ -0,0 +1,23 @@
/* { dg-do compile } */
/* { dg-options "-O1 -fopenmp -fdump-tree-optimized" } */
int a[10];
void foo (void)
{
int i;
#pragma omp parallel for schedule(runtime)
for (i = 0; i < 10; i++)
a[i] = i;
#pragma omp parallel
#pragma omp for schedule(runtime)
for (i = 0; i < 10; i++)
a[i] = 10 - i;
#pragma omp parallel
{
#pragma omp for schedule(runtime)
for (i = 0; i < 10; i++)
a[i] = i;
}
}
/* { dg-final { scan-tree-dump-times "GOMP_parallel_loop_maybe_nonmonotonic_runtime" 3 "optimized" } } */

@@ -0,0 +1,23 @@
/* { dg-do compile } */
/* { dg-options "-O1 -fopenmp -fdump-tree-optimized" } */
int a[10];
void foo (void)
{
int i;
#pragma omp parallel for schedule(monotonic:runtime)
for (i = 0; i < 10; i++)
a[i] = i;
#pragma omp parallel
#pragma omp for schedule(monotonic :runtime)
for (i = 0; i < 10; i++)
a[i] = 10 - i;
#pragma omp parallel
{
#pragma omp for schedule(monotonic: runtime)
for (i = 0; i < 10; i++)
a[i] = i;
}
}
/* { dg-final { scan-tree-dump-times "GOMP_parallel_loop_runtime" 3 "optimized" } } */

@@ -0,0 +1,23 @@
/* { dg-do compile } */
/* { dg-options "-O1 -fopenmp -fdump-tree-optimized" } */
int a[10];
void foo (void)
{
int i;
#pragma omp parallel for schedule(nonmonotonic:runtime)
for (i = 0; i < 10; i++)
a[i] = i;
#pragma omp parallel
#pragma omp for schedule(nonmonotonic :runtime)
for (i = 0; i < 10; i++)
a[i] = 10 - i;
#pragma omp parallel
{
#pragma omp for schedule(nonmonotonic: runtime)
for (i = 0; i < 10; i++)
a[i] = i;
}
}
/* { dg-final { scan-tree-dump-times "GOMP_parallel_loop_nonmonotonic_runtime" 3 "optimized" } } */

@@ -0,0 +1,14 @@
int i;
void
foo (void)
{
#pragma omp critical
i = i + 1;
#pragma omp critical (foo)
i = i + 1;
#pragma omp critical (foo) hint (0)
i = i + 1;
#pragma omp critical (foo),hint(1)
i = i + 1;
}

@@ -0,0 +1,10 @@
int i;
void
foo (int j)
{
#pragma omp critical (foo) hint (j + 1) /* { dg-error "constant integer expression" } */
i = i + 1;
#pragma omp critical (foo),hint(j) /* { dg-error "constant integer expression" } */
i = i + 1;
}

@@ -0,0 +1,22 @@
void
foo (void)
{
int x = 0, i;
#pragma omp task default(none) /* { dg-error "enclosing 'task'" } */
{
x++; /* { dg-error "'x' not specified in enclosing 'task'" } */
}
#pragma omp taskloop default(none) /* { dg-error "enclosing 'taskloop'" } */
for (i = 0; i < 64; i++)
{
x++; /* { dg-error "'x' not specified in enclosing 'taskloop'" } */
}
#pragma omp teams default(none) /* { dg-error "enclosing 'teams'" } */
{
x++; /* { dg-error "'x' not specified in enclosing 'teams'" } */
}
#pragma omp parallel default(none) /* { dg-error "enclosing 'parallel'" } */
{
x++; /* { dg-error "'x' not specified in enclosing 'parallel'" } */
}
}

@@ -0,0 +1,30 @@
void
foo (void)
{
#pragma omp target defaultmap(alloc) defaultmap(alloc) /* { dg-error "too many 'defaultmap' clauses with unspecified category" } */
;
#pragma omp target defaultmap(to) defaultmap(from) /* { dg-error "too many 'defaultmap' clauses with unspecified category" } */
;
#pragma omp target defaultmap(tofrom) defaultmap(firstprivate:scalar) /* { dg-error "too many 'defaultmap' clauses with 'scalar' category" } */
;
#pragma omp target defaultmap(none:aggregate) defaultmap(alloc:scalar) defaultmap(none:scalar) /* { dg-error "too many 'defaultmap' clauses with 'scalar' category" } */
;
#pragma omp target defaultmap(none : pointer) defaultmap ( none ) /* { dg-error "too many 'defaultmap' clauses with 'pointer' category" } */
;
#pragma omp target defaultmap() /* { dg-error "expected" } */
;
#pragma omp target defaultmap(for) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(blah) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(tofrom:) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(tofrom scalar) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(tofrom,scalar) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(default ;) /* { dg-error "expected" } */
;
#pragma omp target defaultmap(default : qux) /* { dg-error "expected" } */
;
}

@@ -0,0 +1,131 @@
/* { dg-do compile } */
/* { dg-additional-options "-fdump-tree-gimple" } */
struct S { int s; };
void foo (char *);
void bar (int, char *, struct S, int *);
#pragma omp declare target to (bar)
#define N 16
void
f1 (int sc1, struct S ag1, int *pt1)
{
char ar1[N];
foo (ar1);
#pragma omp target
bar (sc1, ar1, ag1, pt1);
/* { dg-final { scan-tree-dump "firstprivate\\(sc1\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ar1" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ag1" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(firstprivate:pt1 .pointer assign" "gimple" } } */
}
void
f2 (int sc2, struct S ag2, int *pt2)
{
char ar2[N];
foo (ar2);
#pragma omp target firstprivate (sc2, ar2, ag2, pt2) defaultmap (none)
bar (sc2, ar2, ag2, pt2);
/* { dg-final { scan-tree-dump "firstprivate\\(sc2\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ar2\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ag2\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(pt2\\)" "gimple" } } */
}
void
f3 (int sc3, struct S ag3, int *pt3)
{
char ar3[N];
foo (ar3);
#pragma omp target defaultmap(none:scalar) defaultmap(none:aggregate) \
map (sc3, ar3, ag3, pt3) defaultmap(none:pointer)
bar (sc3, ar3, ag3, pt3);
/* { dg-final { scan-tree-dump "map\\(tofrom:sc3" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ar3" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ag3" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:pt3" "gimple" } } */
}
void
f4 (int sc4, struct S ag4, int *pt4)
{
char ar4[N];
foo (ar4);
#pragma omp target defaultmap(tofrom:scalar)
bar (sc4, ar4, ag4, pt4);
/* { dg-final { scan-tree-dump "map\\(tofrom:sc4" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ar4" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:ag4" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(firstprivate:pt4 .pointer assign" "gimple" } } */
}
void
f5 (int sc5, struct S ag5, int *pt5)
{
char ar5[N];
foo (ar5);
#pragma omp target defaultmap(to)
bar (sc5, ar5, ag5, pt5);
/* { dg-final { scan-tree-dump "map\\(to:sc5" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(to:ar5" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(to:ag5" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(to:pt5" "gimple" } } */
}
void
f6 (int sc6, struct S ag6, int *pt6)
{
char ar6[N];
foo (ar6);
#pragma omp target defaultmap(firstprivate)
bar (sc6, ar6, ag6, pt6);
/* { dg-final { scan-tree-dump "firstprivate\\(sc6\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ar6\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ag6\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(pt6\\)" "gimple" } } */
}
void
f7 (int sc7, struct S ag7, int *pt7)
{
char ar7[N];
foo (ar7);
#pragma omp target defaultmap(alloc: scalar) defaultmap(from: aggregate) defaultmap(default: pointer)
{
int *q = &sc7;
*q = 6;
ag7.s = 5;
int i;
for (i = 0; i < N; ++i)
ar7[i] = 7;
bar (sc7, ar7, ag7, pt7);
}
/* { dg-final { scan-tree-dump "map\\(alloc:sc7" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(from:ar7" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(from:ag7" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(firstprivate:pt7 .pointer assign" "gimple" } } */
}
void
f8 (int sc8, struct S ag8, int *pt8)
{
char ar8[N];
foo (ar8);
#pragma omp target defaultmap(firstprivate:aggregate) defaultmap(none:scalar) \
defaultmap(tofrom:pointer) map(to: sc8)
bar (sc8, ar8, ag8, pt8);
/* { dg-final { scan-tree-dump "map\\(to:sc8" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ar8\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "firstprivate\\(ag8\\)" "gimple" } } */
/* { dg-final { scan-tree-dump "map\\(tofrom:pt8" "gimple" } } */
}
void
f9 (int sc9, struct S ag9)
{
char ar9[sc9 + 2];
foo (ar9);
#pragma omp target defaultmap(none) map(to: ar9, ag9) firstprivate (sc9)
bar (sc9, ar9, ag9, &sc9);
}

@@ -0,0 +1,34 @@
/* { dg-do compile } */
struct S { int s; };
void foo (char *);
void bar (int, char *, struct S, int *);
#pragma omp declare target to (bar)
#define N 16
void
f1 (int sc1, struct S ag1, int *pt1)
{
char ar1[N];
foo (ar1);
#pragma omp target defaultmap(default:scalar) defaultmap(to:aggregate) defaultmap(none:pointer) /* { dg-error "enclosing 'target'" } */
bar (sc1, ar1, ag1, pt1); /* { dg-error "'pt1' not specified in enclosing 'target'" } */
}
void
f2 (int sc2, struct S ag2, int *pt2)
{
char ar2[N];
foo (ar2);
#pragma omp target defaultmap(none:scalar) defaultmap(from:aggregate) defaultmap(default:pointer) /* { dg-error "enclosing 'target'" } */
bar (sc2, ar2, ag2, pt2); /* { dg-error "'sc2' not specified in enclosing 'target'" } */
}
void
f3 (int sc3, struct S ag3, int *pt3)
{
char ar3[N];
foo (ar3);
#pragma omp target defaultmap(firstprivate:scalar) defaultmap(none:aggregate) defaultmap(to:pointer) /* { dg-error "enclosing 'target'" } */
bar (sc3, ar3, ag3, pt3); /* { dg-error "'ar3' not specified in enclosing 'target'" } */
} /* { dg-error "'ag3' not specified in enclosing 'target'" "" { target *-*-* } .-1 } */

@@ -0,0 +1,48 @@
/* { dg-do compile } */
/* { dg-options "-fopenmp" } */
struct T { int c[3]; };
struct S { int a; struct T *b; struct T g; };
struct S d[10];
struct S *e[10];
struct S *f;
struct S h;
void
foo (void)
{
#pragma omp task depend(inout: d)
;
#pragma omp task depend(out: d[2])
;
#pragma omp task depend(in: d[:])
;
#pragma omp task depend(in: d[2:2])
;
#pragma omp task depend(in: d[:2])
;
#pragma omp task depend(inout: d[1].b->c[2])
;
#pragma omp task depend(out: d[0].a)
;
#pragma omp task depend(in: e[3]->a)
;
#pragma omp task depend(inout: e[2]->b->c)
;
#pragma omp task depend(in: e[1]->b->c[2])
;
#pragma omp task depend(out: (*f).a)
;
#pragma omp task depend(inout: f->b->c[0])
;
#pragma omp task depend(in: f)
;
#pragma omp task depend(out: *f)
;
#pragma omp task depend(inout: f[0])
;
#pragma omp task depend(in: f[0].a)
;
#pragma omp task depend(inout: h.g.c[2])
;
}

@@ -0,0 +1,36 @@
/* { dg-do compile } */
/* { dg-options "-fopenmp" } */
struct T { int c[3]; };
struct S { int a; struct T *b; struct T g; };
struct U { int a : 5; };
struct S d[10];
struct S *e[10];
struct S *f;
struct S h;
struct U i;
void
foo (void)
{
#pragma omp task depend(in: d[:2].b->c[2]) /* { dg-error "expected" } */
;
#pragma omp task depend(inout: d[1:].b->c[2]) /* { dg-error "expected" } */
;
#pragma omp task depend(out: d[0:1].a) /* { dg-error "expected" } */
;
#pragma omp task depend(in: e[3:2]->a) /* { dg-error "expected" } */
;
#pragma omp task depend(inout: e[2:2]->b->c) /* { dg-error "expected" } */
;
#pragma omp task depend(in: e[1]->b->c[2:1]) /* { dg-error "expected" } */
;
#pragma omp task depend(out: f + 0) /* { dg-error "not lvalue expression" } */
;
#pragma omp task depend(inout: f[0:1].a) /* { dg-error "expected" } */
;
#pragma omp task depend(inout: h.g.c[2:1]) /* { dg-error "expected" } */
;
#pragma omp task depend(in: i.a) /* { dg-error "bit-field '\[^\n\r]*' in 'depend' clause" } */
;
}

@@ -0,0 +1,75 @@
int arr[64], arr2[64];
struct S { int a[4]; } k;
short arr4[4];
volatile int v;
#define TEST_EQ(x,y) ({ int o[x == y ? 1 : -1]; 0; })
void
foo (unsigned char i, signed char j)
{
#pragma omp task depend (iterator (j=6:2:-2) , out : \
arr[TEST_EQ (sizeof (j), sizeof (int)), \
TEST_EQ (sizeof (i), sizeof (unsigned char)), \
TEST_EQ (sizeof (k), sizeof (struct S)), j], \
arr2[TEST_EQ (((__typeof (j)) -1) < 0, 1), \
TEST_EQ (((__typeof (i)) -1) < 0, 0), \
TEST_EQ (((__typeof (k.a[0])) -1) < 0, 1), j]) \
depend(out: arr[0]) \
depend (iterator (long long i=__LONG_LONG_MAX__ - 4:__LONG_LONG_MAX__ - 2:2, \
unsigned short j=~0U-16:~0U-8:3, \
short *k=&arr4[1]:&arr4[2]:1) , in : \
arr[TEST_EQ (sizeof (i), sizeof (long long)), \
TEST_EQ (sizeof (j), sizeof (unsigned short)), \
TEST_EQ (sizeof (k), sizeof (short *)), \
TEST_EQ (sizeof (*k), sizeof (short)), i - __LONG_LONG_MAX__ + 4], \
arr2[TEST_EQ (((__typeof (i)) -1) < 0, 1), \
TEST_EQ (((__typeof (j)) -1) < 0, 0), \
TEST_EQ (((__typeof (*k)) -1) < 0, 1), j - (~0U-16)], \
arr2[k - &arr4[0]]) \
depend(in : k)
v++;
}
void
bar (unsigned char i, signed char j)
{
int m = j;
int n = j + 2;
#pragma omp task depend (iterator (j=6:2:m) , out : \
arr[TEST_EQ (sizeof (j), sizeof (int)), \
TEST_EQ (sizeof (i), sizeof (unsigned char)), \
TEST_EQ (sizeof (k), sizeof (struct S)), j], \
arr2[TEST_EQ (((__typeof (j)) -1) < 0, 1), \
TEST_EQ (((__typeof (i)) -1) < 0, 0), \
TEST_EQ (((__typeof (k.a[0])) -1) < 0, 1), j]) \
depend(out: arr[0]) \
depend (iterator (long long i=__LONG_LONG_MAX__ - 4 - n:__LONG_LONG_MAX__ - 2:2, \
unsigned short j=~0U-16:~0U-8-n:3, \
short *k=&arr4[1]:&arr4[n + 2]:1) , in : \
arr[TEST_EQ (sizeof (i), sizeof (long long)), \
TEST_EQ (sizeof (j), sizeof (unsigned short)), \
TEST_EQ (sizeof (k), sizeof (short *)), \
TEST_EQ (sizeof (*k), sizeof (short)), i - __LONG_LONG_MAX__ + 4], \
arr2[TEST_EQ (((__typeof (i)) -1) < 0, 1), \
TEST_EQ (((__typeof (j)) -1) < 0, 0), \
TEST_EQ (((__typeof (*k)) -1) < 0, 1), j - (~0U-16)], \
arr2[k - &arr4[0]:10]) \
depend(in : k)
v++;
}
void
baz (void)
{
#pragma omp parallel
#pragma omp master
{
#pragma omp task depend(iterator(unsigned long int k = 0 : 2) , inout : \
arr[TEST_EQ (sizeof (k), sizeof (unsigned long)), \
TEST_EQ (((__typeof (k)) -1) < 0, 0), k]) \
depend(iterator(signed char s = -3 : -12 : -1) , out : \
arr[TEST_EQ (sizeof (s), sizeof (signed char)), \
TEST_EQ (((__typeof (s)) -1) < 0, 1), s + 12])
v++;
}
}

@@ -0,0 +1,97 @@
int a, b[64];
struct S { int c; } *d, *e;
struct T;
struct T *f, *g;
int *h;
void
f1 (void)
{
#pragma omp task depend (iterator , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (for = 0 : 2) , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (5 = 0 : 2) , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (i : 0 : 2) , in : a) /* { dg-error "expected '='|name a type|expected" } */
;
#pragma omp task depend (iterator (i = 0, 1 : 2) , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (i = (0, 1) : 2) , in : a)
;
#pragma omp task depend (iterator (i = 0 : 1 : 2 : 3) , in : a) /* { dg-error "expected '.'" } */
;
#pragma omp task depend (iterator (i = 0 : 2, 3) , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (i = 0 : 10 : 2, 3) , in : a) /* { dg-error "expected" } */
;
#pragma omp task depend (iterator (i = 0:1), iterator (j = 0:1) , in : a) /* { dg-error "invalid depend kind" } */
;
#pragma omp task depend (iterator (i = 0:32) , in : b[i*2:2])
;
#pragma omp task depend (iterator (struct S i = 0:1), in : a) /* { dg-error "iterator 'i' has neither integral nor pointer type" } */
;
#pragma omp task depend (iterator (void i = 0:1) , in : a) /* { dg-error "iterator 'i' has neither integral nor pointer type" } */
;
#pragma omp task depend (iterator (float f = 0.2:0.4) , in : a) /* { dg-error "iterator 'f' has neither integral nor pointer type" } */
;
#pragma omp task depend (iterator (struct S *p = d:e:2) , in : a)
;
#pragma omp task depend (iterator (struct T *p = f:g) , in : a) /* { dg-error "invalid use of" } */
;
#pragma omp task depend (iterator (int i = 0:4, \
struct U { int (*p)[i + 2]; } *p = 0:2) , in : a) /* { dg-error "type of iterator 'p' refers to outer iterator 'i'" "" { target c } } */
; /* { dg-error "types may not be defined in iterator type|not an integer constant" "" { target c++ } .-1 } */
#pragma omp task depend (iterator (i = 0:4, j = i:16) , in : a) /* { dg-error "begin expression refers to outer iterator 'i'" } */
;
#pragma omp task depend (iterator (i = 0:4, j = 2:i:1) , in : a) /* { dg-error "end expression refers to outer iterator 'i'" } */
;
#pragma omp task depend (iterator (i = 0:4, j = 2:8:i) , in : a) /* { dg-error "step expression refers to outer iterator 'i'" } */
;
#pragma omp task depend (iterator (i = *d:2) , in : a) /* { dg-error "aggregate value used where an integer was expected" "" { target c } } */
; /* { dg-error "invalid cast from type 'S' to type 'int'" "" { target c++ } .-1 } */
#pragma omp task depend (iterator (i = 2:*d:2) , in : a) /* { dg-error "aggregate value used where an integer was expected" "" { target c } } */
; /* { dg-error "invalid cast from type 'S' to type 'int'" "" { target c++ } .-1 } */
#pragma omp task depend (iterator (i = 2:4:*d) , in : a) /* { dg-error "iterator step with non-integral type" } */
;
#pragma omp task depend (iterator (i = 1.25:2.5:3) , in : a)
;
#pragma omp task depend (iterator (i = 1:2:3.5) , in : a) /* { dg-error "iterator step with non-integral type" } */
;
#pragma omp task depend (iterator (int *p = 23 : h) , in : a)
;
#pragma omp task depend (iterator (short i=1:3:0) , in : a) /* { dg-error "iterator 'i' has zero step" } */
;
#pragma omp task depend (iterator (i = 1 : 3 : 3 - 3) , in : a) /* { dg-error "iterator 'i' has zero step" } */
;
#pragma omp task depend (iterator (int *p = &b[6]:&b[9]:4 - 4) , in : a) /* { dg-error "iterator 'p' has zero step" } */
;
#pragma omp task depend (iterator (const int i = 0 : 2) , in : a) /* { dg-error "const qualified" } */
;
#pragma omp task depend (iterator (const long long unsigned i = 0 : 2) , in : a) /* { dg-error "const qualified" } */
;
#if !defined (__cplusplus) && __STDC_VERSION__ >= 201112L
#pragma omp task depend (iterator (_Atomic unsigned i = 0 : 2) , in : a) /* { dg-error "_Atomic" "" { target c } } */
;
#endif
}
void
f2 (void)
{
int i, j;
#pragma omp for ordered(2)
for (i = 0; i < 64; i++)
for (j = 0; j < 64; j++)
{
#pragma omp ordered depend (iterator (k=0:1) , sink: i - 1, j - 1) /* { dg-error "'iterator' modifier incompatible with 'sink'" } */
#pragma omp ordered depend (iterator (int l = 0:2:3) , source) /* { dg-error "'iterator' modifier incompatible with 'source'" } */
}
}
void
f3 (void)
{
#pragma omp task depend (iterator (i = 0:1), iterator (j = 0:1) , in : a) /* { dg-error "invalid depend kind" } */
;
}

@@ -0,0 +1,63 @@
typedef struct __attribute__((__aligned__ (sizeof (void *)))) omp_depend_t {
char __omp_depend_t__[2 * sizeof (void *)];
} omp_depend_t;
omp_depend_t bar (void);
extern const omp_depend_t cdepobj;
extern omp_depend_t depobj;
extern omp_depend_t depobja[4];
extern omp_depend_t *pdepobj;
int a, b, i, j;
void
f1 (void)
{
#pragma omp depobj(depobj) depend(in : a)
#pragma omp depobj(depobj) update(inout)
#pragma omp task depend (depobj: depobj)
;
#pragma omp depobj(depobj) destroy
#pragma omp task depend (iterator (i=1:3) , depobj: *(depobja + i))
;
#pragma omp depobj(pdepobj[0]) depend(mutexinoutset:a)
#pragma omp depobj(*pdepobj) destroy
}
void
f2 (void)
{
omp_depend_t depobjb[4];
#pragma omp depobj /* { dg-error "expected" } */
#pragma omp depobj destroy /* { dg-error "expected" } */
#pragma omp depobj (depobj) /* { dg-error "expected 'depend', 'destroy' or 'update' clause" } */
#pragma omp depobj (depobj) foobar /* { dg-error "expected 'depend', 'destroy' or 'update' clause" } */
#pragma omp depobj(bar ()) update(inout) /* { dg-error "'depobj' expression is not lvalue expression" } */
#pragma omp depobj (cdepobj) update(in) /* { dg-error "'const' qualified 'depobj' expression" } */
#pragma omp depobj (depobjb) depend(in: a) /* { dg-error "type of 'depobj' expression is not 'omp_depend_t'" } */
#pragma omp depobj (pdepobj) depend(in: a) /* { dg-error "type of 'depobj' expression is not 'omp_depend_t'" } */
#pragma omp depobj (a) destroy /* { dg-error "type of 'depobj' expression is not 'omp_depend_t'" } */
#pragma omp depobj (depobj) depend(depobj:a) /* { dg-error "does not have 'omp_depend_t' type in 'depend' clause with 'depobj' dependence type" } */
#pragma omp depobj (depobj) depend(depobj:*depobjb) /* { dg-error "'depobj' dependence type specified in 'depend' clause on 'depobj' construct" } */
#pragma omp depobj (depobj) update(foobar) /* { dg-error "expected 'in', 'out', 'inout' or 'mutexinoutset'" } */
#pragma omp depobj (depobj) depend(in: *depobja) /* { dg-error "should not have 'omp_depend_t' type in 'depend' clause with dependence type" } */
#pragma omp depobj (depobj) depend(in: a) depend(in: b) /* { dg-error "expected" } */
#pragma omp depobj (depobj) depend(in: a) update(out) /* { dg-error "expected" } */
#pragma omp depobj (depobj) depend(in: a, b) /* { dg-error "more than one locator in 'depend' clause on 'depobj' construct" } */
#pragma omp depobj (depobj) depend(source) /* { dg-error "'depend\\(source\\)' is only allowed in 'omp ordered'" } */
#pragma omp depobj (depobj) depend(sink: i + 1, j - 1) /* { dg-error "'depend\\(sink\\)' is only allowed in 'omp ordered'" } */
#pragma omp depobj (depobj) depend(iterator (i = 0:2) , in : a) /* { dg-error "'iterator' modifier may not be specified on 'depobj' construct" } */
if (0)
#pragma omp depobj (depobj) destroy /* { dg-error "'#pragma omp depobj' may only be used in compound statements" } */
;
}
void
f3 (void)
{
#pragma omp task depend (depobj: depobja[1:2]) /* { dg-error "'depend' clause with 'depobj' dependence type on array section" } */
;
#pragma omp task depend (depobj: a) /* { dg-error "'a' does not have 'omp_depend_t' type in 'depend' clause with 'depobj' dependence type" } */
;
#pragma omp task depend (in: depobj) /* { dg-error "'depobj' should not have 'omp_depend_t' type in 'depend' clause with dependence type" } */
;
}

@@ -0,0 +1,39 @@
/* { dg-additional-options "-fdump-tree-gimple" } */
/* { dg-final { scan-tree-dump "foo \\(4\\);\[\n\r]* __atomic_thread_fence \\(4\\);\[\n\r]* foo \\(4\\);" "gimple" } } */
/* { dg-final { scan-tree-dump "foo \\(3\\);\[\n\r]* __atomic_thread_fence \\(3\\);\[\n\r]* foo \\(3\\);" "gimple" } } */
/* { dg-final { scan-tree-dump "foo \\(2\\);\[\n\r]* __atomic_thread_fence \\(2\\);\[\n\r]* foo \\(2\\);" "gimple" } } */
/* { dg-final { scan-tree-dump "foo \\(5\\);\[\n\r]* __sync_synchronize \\(\\);\[\n\r]* foo \\(5\\);" "gimple" } } */
void foo (int);
void
f1 (void)
{
foo (4);
#pragma omp flush acq_rel
foo (4);
}
void
f2 (void)
{
foo (3);
#pragma omp flush release
foo (3);
}
void
f3 (void)
{
foo (2);
#pragma omp flush acquire
foo (2);
}
void
f4 (void)
{
foo (5);
#pragma omp flush
foo (5);
}

@@ -0,0 +1,17 @@
int a, b;
void
foo (void)
{
#pragma omp flush
#pragma omp flush (a, b)
#pragma omp flush acquire
#pragma omp flush release
#pragma omp flush acq_rel
#pragma omp flush relaxed /* { dg-error "expected 'acq_rel', 'release' or 'acquire'" } */
#pragma omp flush seq_cst /* { dg-error "expected 'acq_rel', 'release' or 'acquire'" } */
#pragma omp flush foobar /* { dg-error "expected 'acq_rel', 'release' or 'acquire'" } */
#pragma omp flush acquire (a, b) /* { dg-error "'flush' list specified together with memory order clause" } */
#pragma omp flush release (a, b) /* { dg-error "'flush' list specified together with memory order clause" } */
#pragma omp flush acq_rel (a, b) /* { dg-error "'flush' list specified together with memory order clause" } */
}

@@ -0,0 +1,60 @@
void bar (int);
int a[256];
void
foo (void)
{
int i;
#pragma omp for
for (i = 0; i != 64; i++)
bar (i);
#pragma omp for
for (i = 128; i != 64; i--)
bar (i);
#pragma omp for
for (i = 0; i != 64; i = i + 1)
bar (i);
#pragma omp for
for (i = 128; i != 64; i = i - 1)
bar (i);
#pragma omp for
for (i = 0; i != 64; i = 1 + i)
bar (i);
#pragma omp for
for (i = 128; i != 64; i = -1 + i)
bar (i);
#pragma omp for
for (i = 0; i != 64; i += 1)
bar (i);
#pragma omp for
for (i = 128; i != 64; i -= 1)
bar (i);
#pragma omp single
{
#pragma omp simd
for (i = 0; i != 64; i++)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i--)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 0; i != 64; i = i + 1)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i = i - 1)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 0; i != 64; i = 1 + i)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i = -1 + i)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 0; i != 64; i += 1)
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i -= 1)
a[i] = a[i] + 1;
}
}

@@ -0,0 +1,31 @@
void bar (short *);
void
foo (short *q, short *r, short *s)
{
short *p;
#pragma omp for
for (p = q; p != r; p++)
bar (p);
#pragma omp for
for (p = s; p != r; p--)
bar (p);
#pragma omp for
for (p = q; p != r; p = p + 1)
bar (p);
#pragma omp for
for (p = s; p != r; p = p - 1)
bar (p);
#pragma omp for
for (p = q; p != r; p = 1 + p)
bar (p);
#pragma omp for
for (p = s; p != r; p = -1 + p)
bar (p);
#pragma omp for
for (p = q; p != r; p += 1)
bar (p);
#pragma omp for
for (p = s; p != r; p -= 1)
bar (p);
}

@@ -0,0 +1,48 @@
void bar (int);
int a[256];
void
foo (int j)
{
int i;
#pragma omp for
for (i = 0; i != 64; i = i + 4) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp for
for (i = 128; i != 64; i = i - 4) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp for
for (i = 0; i != 64; i = j + i) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp for
for (i = 128; i != 64; i = -16 + i) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp for
for (i = 0; i != 64; i += j) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp for
for (i = 128; i != 64; i -= 8) /* { dg-error "increment is not constant 1 or -1" } */
bar (i);
#pragma omp single
{
#pragma omp simd
for (i = 0; i != 64; i = i + 16) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i = i - 2) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
#pragma omp simd
for (i = 0; i != 64; i = j + i) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i = -j + i) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
#pragma omp simd
for (i = 0; i != 64; i += 8) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
#pragma omp simd
for (i = 128; i != 64; i -= j) /* { dg-error "increment is not constant 1 or -1" } */
a[i] = a[i] + 1;
}
}

@@ -0,0 +1,25 @@
void bar (short *);
void
foo (short *q, short *r, short *s, long t)
{
short *p;
#pragma omp for
for (p = q; p != r; p = p + 5) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = p - 2) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p = t + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = -t + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p += t) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p -= 7) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
}

@@ -0,0 +1,50 @@
// { dg-options "-fopenmp" }
void bar (void *);
__attribute__((noinline, noclone)) void
foo (void *qx, void *rx, void *sx, int n)
{
unsigned short (*q)[n], (*r)[n], (*s)[n], (*p)[n];
q = (typeof (q)) qx;
r = (typeof (r)) rx;
s = (typeof (s)) sx;
int t = 1;
int o = -1;
#pragma omp for
for (p = q; p != r; p += t) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p += o) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p = p + t) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = p + o) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p = t + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = o + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p += 2) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p -= 2) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p = p + 3) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = p - 3) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = q; p != r; p = 4 + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
#pragma omp for
for (p = s; p != r; p = -5 + p) /* { dg-error "increment is not constant 1 or -1" } */
bar (p);
}

@@ -0,0 +1,16 @@
/* { dg-do compile } */
/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
extern void bar(int);
void foo (int n)
{
int i;
#pragma omp for schedule(monotonic:runtime)
for (i = 0; i < n; ++i)
bar(i);
}
/* { dg-final { scan-tree-dump-times "GOMP_loop_runtime_start" 1 "ompexp" } } */
/* { dg-final { scan-tree-dump-times "GOMP_loop_runtime_next" 1 "ompexp" } } */

@@ -0,0 +1,16 @@
/* { dg-do compile } */
/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
extern void bar(int);
void foo (int n)
{
int i;
#pragma omp for schedule(nonmonotonic:runtime)
for (i = 0; i < n; ++i)
bar(i);
}
/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_runtime_start" 1 "ompexp" } } */
/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_runtime_next" 1 "ompexp" } } */

@@ -12,6 +12,12 @@ foo (int a, int b, int *p, int *q)
for (i = 0; i < 16; i++)
;
#pragma omp parallel for simd if (parallel : a)
for (i = 0; i < 16; i++)
;
#pragma omp parallel for simd if (simd : a)
for (i = 0; i < 16; i++)
;
#pragma omp parallel for simd if (simd : a) if (parallel:b)
for (i = 0; i < 16; i++)
;
#pragma omp task if (a)
@@ -22,16 +28,37 @@ foo (int a, int b, int *p, int *q)
for (i = 0; i < 16; i++)
;
#pragma omp taskloop if (taskloop : a)
for (i = 0; i < 16; i++)
;
#pragma omp taskloop simd if (a)
for (i = 0; i < 16; i++)
;
#pragma omp taskloop simd if (taskloop : a)
for (i = 0; i < 16; i++)
;
#pragma omp taskloop simd if (simd : a)
for (i = 0; i < 16; i++)
;
#pragma omp taskloop simd if (taskloop:b) if (simd : a)
for (i = 0; i < 16; i++)
;
#pragma omp target if (a)
;
#pragma omp target if (target: a)
;
#pragma omp target simd if (a)
for (i = 0; i < 16; i++)
;
#pragma omp target simd if (simd : a) if (target: b)
for (i = 0; i < 16; i++)
;
#pragma omp target teams distribute parallel for simd if (a)
for (i = 0; i < 16; i++)
;
#pragma omp target teams distribute parallel for simd if (parallel : a) if (target: b)
for (i = 0; i < 16; i++)
;
#pragma omp target teams distribute parallel for simd if (simd : a) if (target: b)
for (i = 0; i < 16; i++)
;
#pragma omp target data if (a) map (p[0:2])
@@ -44,4 +71,47 @@ foo (int a, int b, int *p, int *q)
#pragma omp target exit data if (target exit data: a) map (from: p[0:2])
#pragma omp target update if (a) to (q[0:3])
#pragma omp target update if (target update:a) to (q[0:3])
#pragma omp parallel
{
#pragma omp cancel parallel if (a)
}
#pragma omp parallel
{
#pragma omp cancel parallel if (cancel:a)
}
#pragma omp for
for (i = 0; i < 16; i++)
{
#pragma omp cancel for if (a)
}
#pragma omp for
for (i = 0; i < 16; i++)
{
#pragma omp cancel for if (cancel: a)
}
#pragma omp sections
{
#pragma omp section
{
#pragma omp cancel sections if (a)
}
}
#pragma omp sections
{
#pragma omp section
{
#pragma omp cancel sections if (cancel: a)
}
}
#pragma omp taskgroup
{
#pragma omp task
{
#pragma omp cancel taskgroup if (a)
}
#pragma omp task
{
#pragma omp cancel taskgroup if (cancel: a)
}
}
}

@@ -18,6 +18,8 @@ foo (int a, int b, int *p, int *q, int task)
;
#pragma omp parallel if (target update:a) /* { dg-error "expected .parallel. .if. clause modifier rather than .target update." } */
;
#pragma omp parallel if (cancel:a) /* { dg-error "expected .parallel. .if. clause modifier rather than .cancel." } */
;
#pragma omp parallel for simd if (target update: a) /* { dg-error "expected .parallel. .if. clause modifier rather than .target update." } */
for (i = 0; i < 16; i++)
;
@@ -27,12 +29,15 @@ foo (int a, int b, int *p, int *q, int task)
;
#pragma omp task if (parallel: a) /* { dg-error "expected .task. .if. clause modifier rather than .parallel." } */
;
#pragma omp simd if (cancel: a) /* { dg-error "expected .simd. .if. clause modifier rather than .cancel." } */
for (i = 0; i < 16; i++)
;
#pragma omp taskloop if (task : a) /* { dg-error "expected .taskloop. .if. clause modifier rather than .task." } */
for (i = 0; i < 16; i++)
;
#pragma omp target if (taskloop: a) /* { dg-error "expected .target. .if. clause modifier rather than .taskloop." } */
;
#pragma omp target teams distribute parallel for simd if (target exit data : a) /* { dg-error "expected .parallel. or .target. .if. clause modifier" } */
#pragma omp target teams distribute parallel for simd if (target exit data : a) /* { dg-error "expected .target. .if. clause modifier" } */
for (i = 0; i < 16; i++)
;
#pragma omp target data if (target: a) map (p[0:2]) /* { dg-error "expected .target data. .if. clause modifier rather than .target." } */
@@ -40,4 +45,9 @@ foo (int a, int b, int *p, int *q, int task)
#pragma omp target enter data if (target data: a) map (to: p[0:2]) /* { dg-error "expected .target enter data. .if. clause modifier rather than .target data." } */
#pragma omp target exit data if (target enter data: a) map (from: p[0:2]) /* { dg-error "expected .target exit data. .if. clause modifier rather than .target enter data." } */
#pragma omp target update if (target exit data:a) to (q[0:3]) /* { dg-error "expected .target update. .if. clause modifier rather than .target exit data." } */
#pragma omp for
for (i = 0; i < 16; i++)
{
#pragma omp cancel for if (target exit data:a) /* { dg-error "expected .cancel. .if. clause modifier" } */
}
}

@@ -0,0 +1,13 @@
/* { dg-do compile } */
/* { dg-additional-options "-O2" } */
#define N 1024
void
foo (int *x, int *y, int *z, int a)
{
int i;
#pragma omp simd if (simd: a > 2) aligned (x, y, z : 16)
for (i = 0; i < N; i++)
x[i] = y[i] + z[i];
}

@@ -0,0 +1,32 @@
void bar (int *);
void
foo (int *a)
{
int i, j, k, u = 0, v = 0, w = 0, x = 0, y = 0, z = 0;
#pragma omp parallel master default(none) private (k)
bar (&k);
#pragma omp parallel default(none) firstprivate(a) shared(x, y, z)
{
#pragma omp master taskloop reduction (+:x) default(none) firstprivate(a)
for (i = 0; i < 64; i++)
x += a[i];
#pragma omp master taskloop simd reduction (+:y) default(none) firstprivate(a) private (i)
for (i = 0; i < 64; i++)
y += a[i];
#pragma omp master taskloop simd collapse(2) reduction (+:z) default(none) firstprivate(a) private (i, j)
for (j = 0; j < 1; j++)
for (i = 0; i < 64; ++i)
z += a[i];
}
#pragma omp parallel master taskloop reduction (+:u) default(none) firstprivate(a)
for (i = 0; i < 64; i++)
u += a[i];
#pragma omp parallel master taskloop simd reduction (+:v) default(none) firstprivate(a)
for (i = 0; i < 64; i++)
v += a[i];
#pragma omp parallel master taskloop simd collapse(2) reduction (+:w) default(none) firstprivate(a)
for (j = 0; j < 1; j++)
for (i = 0; i < 64; ++i)
w += a[i];
}

@@ -0,0 +1,13 @@
void
foo (int *a)
{
int i, r = 0, s = 0;
#pragma omp taskgroup task_reduction(+:r)
#pragma omp parallel master taskloop in_reduction(+:r) /* { dg-error "'in_reduction' is not valid for '#pragma omp parallel master taskloop'" } */
for (i = 0; i < 64; i++)
r += a[i];
#pragma omp taskgroup task_reduction(+:s)
#pragma omp parallel master taskloop simd in_reduction(+:s) /* { dg-error "'in_reduction' is not valid for '#pragma omp parallel master taskloop simd'" } */
for (i = 0; i < 64; i++)
s += a[i];
}

@@ -0,0 +1,17 @@
/* { dg-do compile } */
/* { dg-additional-options "-O2" } */
#define N 1024
int a[N], b[N], c[N], d[N];
void
foo (void)
{
int i;
#pragma omp simd nontemporal (a, b)
for (i = 0; i < N; ++i)
a[i] = b[i] + c[i];
#pragma omp simd nontemporal (d)
for (i = 0; i < N; ++i)
d[i] = 2 * c[i];
}

@@ -0,0 +1,19 @@
/* { dg-do compile } */
#define N 1024
extern int a[N], b[N], c[N], d[N];
void
foo (void)
{
int i;
#pragma omp simd nontemporal (a, b) aligned (a, b, c)
for (i = 0; i < N; ++i)
a[i] = b[i] + c[i];
#pragma omp simd nontemporal (d) nontemporal (d) /* { dg-error "'d' appears more than once in 'nontemporal' clauses" } */
for (i = 0; i < N; ++i)
d[i] = 2 * c[i];
#pragma omp simd nontemporal (a, b, b) /* { dg-error "'b' appears more than once in 'nontemporal' clauses" } */
for (i = 0; i < N; ++i)
a[i] += b[i] + c[i];
}

@@ -0,0 +1,86 @@
int v;
extern void foo (int);
void
bar (void)
{
int i;
#pragma omp for reduction (task, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp sections reduction (task, +: v)
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp parallel reduction (task, +: v)
foo (-1);
#pragma omp parallel for reduction (task, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp parallel sections reduction (task, +: v)
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp teams distribute parallel for reduction (task, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp for reduction (default, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp sections reduction (default, +: v)
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp parallel reduction (default, +: v)
foo (-1);
#pragma omp parallel for reduction (default, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp parallel sections reduction (default, +: v)
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp teams distribute parallel for reduction (default, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp for reduction (default, +: v) nowait
for (i = 0; i < 64; i++)
foo (i);
#pragma omp sections nowait reduction (default, +: v)
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp simd reduction (default, +: v)
for (i = 0; i < 64; i++)
v++;
#pragma omp for simd reduction (default, +: v)
for (i = 0; i < 64; i++)
v++;
#pragma omp parallel for simd reduction (default, +: v)
for (i = 0; i < 64; i++)
v++;
#pragma omp teams distribute parallel for simd reduction (default, +: v)
for (i = 0; i < 64; i++)
v++;
#pragma omp taskloop reduction (default, +: v)
for (i = 0; i < 64; i++)
foo (i);
#pragma omp taskloop simd reduction (default, +: v)
for (i = 0; i < 64; i++)
v++;
#pragma omp teams reduction (default, +: v)
foo (i);
#pragma omp teams distribute reduction (default, +: v)
for (i = 0; i < 64; i++)
foo (i);
}

@@ -0,0 +1,40 @@
int v;
extern void foo (int);
void
bar (void)
{
int i;
#pragma omp for reduction (task, +: v) nowait /* { dg-error "'task' reduction modifier on a construct with a 'nowait' clause" } */
for (i = 0; i < 64; i++)
foo (i);
#pragma omp sections nowait reduction (task, +: v) /* { dg-error "'task' reduction modifier on a construct with a 'nowait' clause" } */
{
foo (-2);
#pragma omp section
foo (-3);
}
#pragma omp simd reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct other than 'parallel', 'for' or 'sections'" } */
for (i = 0; i < 64; i++)
v++;
#pragma omp for simd reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct combined with 'simd'" } */
for (i = 0; i < 64; i++)
v++;
#pragma omp parallel for simd reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct combined with 'simd'" } */
for (i = 0; i < 64; i++)
v++;
#pragma omp teams distribute parallel for simd reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct combined with 'simd'" } */
for (i = 0; i < 64; i++)
v++;
#pragma omp taskloop reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct other than 'parallel', 'for' or 'sections'" } */
for (i = 0; i < 64; i++)
foo (i);
#pragma omp taskloop simd reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct combined with 'simd'" } */
for (i = 0; i < 64; i++)
v++;
#pragma omp teams reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct other than 'parallel', 'for' or 'sections'" } */
foo (i);
#pragma omp teams distribute reduction (task, +: v) /* { dg-error "invalid 'task' reduction modifier on construct not combined with 'parallel', 'for' or 'sections'" } */
for (i = 0; i < 64; i++)
foo (i);
}

@@ -0,0 +1,15 @@
#pragma omp requires unified_address
#pragma omp requires unified_shared_memory
#pragma omp requires unified_shared_memory unified_address
#pragma omp requires dynamic_allocators,reverse_offload
int i;
void
foo ()
{
if (0)
#pragma omp requires unified_shared_memory unified_address
i++;
#pragma omp requires atomic_default_mem_order(seq_cst)
}

@@ -0,0 +1,18 @@
#pragma omp requires /* { dg-error "requires at least one clause" } */
#pragma omp requires unified_shared_memory,unified_shared_memory /* { dg-error "too many 'unified_shared_memory' clauses" } */
#pragma omp requires unified_address unified_address /* { dg-error "too many 'unified_address' clauses" } */
#pragma omp requires reverse_offload reverse_offload /* { dg-error "too many 'reverse_offload' clauses" } */
#pragma omp requires foobarbaz /* { dg-error "expected 'unified_address', 'unified_shared_memory', 'dynamic_allocators', 'reverse_offload' or 'atomic_default_mem_order' clause" } */
int i;
void
foo ()
{
#pragma omp requires dynamic_allocators , dynamic_allocators /* { dg-error "too many 'dynamic_allocators' clauses" } */
if (0)
#pragma omp requires atomic_default_mem_order(seq_cst) atomic_default_mem_order(seq_cst) /* { dg-error "too many 'atomic_default_mem_order' clauses" } */
i++;
}
#pragma omp requires atomic_default_mem_order (seq_cst) /* { dg-error "more than one 'atomic_default_mem_order' clause in a single compilation unit" } */

@@ -0,0 +1,3 @@
#pragma omp requires atomic_default_mem_order(acquire) /* { dg-error "expected 'seq_cst', 'relaxed' or 'acq_rel'" } */
#pragma omp requires atomic_default_mem_order(release) /* { dg-error "expected 'seq_cst', 'relaxed' or 'acq_rel'" } */
#pragma omp requires atomic_default_mem_order(foobar) /* { dg-error "expected 'seq_cst', 'relaxed' or 'acq_rel'" } */

@@ -0,0 +1,11 @@
#pragma omp requires unified_shared_memory,unified_address,reverse_offload
void
foo (void)
{
#pragma omp target
;
#pragma omp requires unified_shared_memory /* { dg-error "'unified_shared_memory' clause used lexically after first target construct or offloading API" } */
}
#pragma omp requires unified_address /* { dg-error "'unified_address' clause used lexically after first target construct or offloading API" } */
#pragma omp requires reverse_offload /* { dg-error "'reverse_offload' clause used lexically after first target construct or offloading API" } */

@@ -68,18 +68,26 @@ void
bar (void)
{
int i;
#pragma omp for schedule (nonmonotonic: static, 2) /* { dg-error ".nonmonotonic. modifier specified for .static. schedule kind" } */
#pragma omp for schedule (nonmonotonic: static, 2)
for (i = 0; i < 64; i++)
;
#pragma omp for schedule (nonmonotonic : static) /* { dg-error ".nonmonotonic. modifier specified for .static. schedule kind" } */
#pragma omp for schedule (nonmonotonic : static)
for (i = 0; i < 64; i++)
;
#pragma omp for schedule (nonmonotonic : runtime) /* { dg-error ".nonmonotonic. modifier specified for .runtime. schedule kind" } */
#pragma omp for schedule (nonmonotonic : runtime)
for (i = 0; i < 64; i++)
;
#pragma omp for schedule (nonmonotonic : auto) /* { dg-error ".nonmonotonic. modifier specified for .auto. schedule kind" } */
#pragma omp for schedule (nonmonotonic : auto)
for (i = 0; i < 64; i++)
;
#pragma omp for schedule (nonmonotonic : static) ordered /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
for (i = 0; i < 64; i++)
#pragma omp ordered
;
#pragma omp for ordered schedule (nonmonotonic: static, 4) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
for (i = 0; i < 64; i++)
#pragma omp ordered
;
#pragma omp for schedule (nonmonotonic : dynamic) ordered /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
for (i = 0; i < 64; i++)
#pragma omp ordered
@@ -95,6 +103,12 @@ bar (void)
#pragma omp ordered depend(source)
}
#pragma omp for ordered(1) schedule(nonmonotonic : guided, 2) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
for (i = 0; i < 64; i++)
{
#pragma omp ordered depend(source)
#pragma omp ordered depend(sink: i - 1)
}
#pragma omp for schedule(nonmonotonic : runtime) ordered(1) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
for (i = 0; i < 64; i++)
{
#pragma omp ordered depend(source)

@@ -0,0 +1,21 @@
int a[64];
#pragma omp declare simd linear(x)
int
bar (int x, int y)
{
int v;
#pragma omp atomic capture
v = a[x] += y;
return v;
}
void
foo (void)
{
int i;
#pragma omp simd
for (i = 0; i < 64; i++)
#pragma omp atomic
a[i] += 1;
}

@@ -0,0 +1,18 @@
/* { dg-do compile } */
void
foo (void)
{
int a[4] = { 1, 2, 3, 4 };
#pragma omp target data map(to:a)
#pragma omp target data use_device_ptr(a)
#pragma omp target is_device_ptr(a)
{
a[0]++;
}
#pragma omp target data /* { dg-error "must contain at least one" } */
a[0]++;
#pragma omp target data map(to:a)
#pragma omp target data use_device_ptr(a) use_device_ptr(a) /* { dg-error "appears more than once in data clauses" } */
a[0]++;
}

@@ -0,0 +1,10 @@
int
foo (int *a)
{
int x = 0;
#pragma omp taskloop reduction (+:x) nogroup /* { dg-error "'nogroup' clause must not be used together with 'reduction' clause" } */
for (int i = 0; i < 64; i++)
x += a[i];
#pragma omp taskwait
return x;
}

@@ -0,0 +1,11 @@
void
foo (int *p)
{
#pragma omp taskwait depend(iterator(i = 0:16) , in : p[i]) depend(out : p[32])
}
void
bar (int *p)
{
#pragma omp taskwait depend(mutexinoutset : p[0]) /* { dg-error "'mutexinoutset' kind in 'depend' clause on a 'taskwait' construct" } */
}

@@ -0,0 +1,64 @@
#ifdef __cplusplus
extern "C" {
#endif
int omp_get_num_teams (void);
int omp_get_team_num (void);
#ifdef __cplusplus
}
#endif
void bar (int *, int *, int *, int, int, int, int);
void
foo (void)
{
int a = 1, b = 2, c = 3, d = 4, e = 5, f = 6;
#pragma omp teams num_teams (4) shared (b) firstprivate (c, d) private (e, f)
{
f = 7;
bar (&a, &c, &e, b, d, f, 0);
}
bar (&a, (int *) 0, (int *) 0, b, 0, 0, 1);
}
void
baz (void)
{
#pragma omp teams
{
#pragma omp distribute
for (int i = 0; i < 64; i++)
;
#pragma omp distribute simd
for (int i = 0; i < 64; i++)
;
#pragma omp distribute parallel for
for (int i = 0; i < 64; i++)
;
#pragma omp distribute parallel for
for (int i = 0; i < 64; i++)
;
#pragma omp distribute parallel for simd
for (int i = 0; i < 64; i++)
;
#pragma omp parallel
;
#pragma omp parallel for
for (int i = 0; i < 64; i++)
;
#pragma omp parallel for simd
for (int i = 0; i < 64; i++)
;
int a, b;
#pragma omp parallel sections
{
a = 5;
#pragma omp section
b = 6;
}
int c = omp_get_num_teams ();
int d = omp_get_team_num ();
}
}

@@ -0,0 +1,119 @@
void
foo (void)
{
int i;
#pragma omp parallel
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
#pragma omp teams
{
#pragma omp teams /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
}
#pragma omp target
{
#pragma omp parallel
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
}
#pragma omp for
for (i = 0; i < 4; i++)
if (i == 0)
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
#pragma omp single
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
#pragma omp master
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
#pragma omp critical
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
#pragma omp sections
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
#pragma omp section
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
}
#pragma omp target data map (to: i)
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
#pragma omp task
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
#pragma omp taskgroup
{
#pragma omp teams /* { dg-error "'teams' construct must be closely nested inside of 'target' construct or not nested in any OpenMP construct" } */
;
}
}
void
bar (void)
{
#pragma omp teams
{
int x, y, v = 4;
#pragma omp target /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp target data map (to: v) /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp for /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
for (int i = 0; i < 64; ++i)
;
#pragma omp simd /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
for (int i = 0; i < 64; ++i)
;
#pragma omp for simd /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
for (int i = 0; i < 64; ++i)
;
#pragma omp single /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp master /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp sections /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
{
x = 1;
#pragma omp section
y = 2;
}
#pragma omp critical /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp target enter data map (to: v) /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp target exit data map (from: v) /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp cancel parallel /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp cancellation point parallel /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp barrier /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp ordered /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp task /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp taskloop /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
for (int i = 0; i < 64; ++i)
;
#pragma omp atomic /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
v++;
#pragma omp taskgroup /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
;
#pragma omp taskwait /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
#pragma omp taskyield /* { dg-error "only 'distribute' or 'parallel' regions are allowed to be strictly nested inside 'teams' region" } */
}
}

@@ -0,0 +1,50 @@
// { dg-do compile }
// { dg-additional-options "-fdump-tree-original" }
// { dg-final { scan-tree-dump-times "omp atomic release" 5 "original" } }
// { dg-final { scan-tree-dump-times "omp atomic seq_cst" 1 "original" } }
// { dg-final { scan-tree-dump-times "omp atomic relaxed" 2 "original" } }
// { dg-final { scan-tree-dump-times "omp atomic capture acq_rel" 3 "original" } }
// { dg-final { scan-tree-dump-times "omp atomic capture acquire" 1 "original" } }
// { dg-final { scan-tree-dump-times "omp atomic read acquire" 1 "original" } }
int i, v;
float f;
template <int N, int M, typename T>
void
foo (T *p)
{
#pragma omp atomic release, hint (N), update
i = i + 1;
#pragma omp atomic hint(0)seq_cst
i = i + 1;
#pragma omp atomic relaxed,update,hint (N)
i = i + 1;
#pragma omp atomic release
i = i + 1;
#pragma omp atomic relaxed
i = i + 1;
#pragma omp atomic acq_rel capture
v = i = i + 1;
#pragma omp atomic capture,acq_rel , hint (M)
v = i = i + 1;
#pragma omp atomic hint(N),acquire capture
v = i = i + 1;
#pragma omp atomic read acquire
v = i;
#pragma omp atomic release,write
i = v;
#pragma omp atomic hint(1),update,release
f = f + 2.0;
#pragma omp requires atomic_default_mem_order (acq_rel)
#pragma omp atomic hint (M - 1) update
*p += 1;
#pragma omp atomic capture, hint (M)
v = *p = *p + 1;
}
void
bar ()
{
foo <0, 1, int> (&i);
}

@@ -0,0 +1,17 @@
int i;
template <int N, typename T>
void
foo (T x)
{
#pragma omp atomic hint (x) // { dg-error "must be integral" }
i = i + 1;
#pragma omp atomic hint (N + i) // { dg-error "constant integer expression" }
i = i + 1;
}
void
bar ()
{
foo <0, float> (1.0f);
}

@@ -12,18 +12,18 @@ void f1(void)
x = x + 1;
#pragma omp atomic
x = 1; /* { dg-error "invalid form" } */
#pragma omp atomic
#pragma omp atomic /* { dg-error "read-only variable" } */
++y; /* { dg-error "read-only variable" } */
#pragma omp atomic
#pragma omp atomic /* { dg-error "read-only variable" } */
y--; /* { dg-error "read-only variable" } */
#pragma omp atomic
y += 1; /* { dg-error "read-only variable" } */
#pragma omp atomic /* { dg-error "read-only variable" } */
y += 1;
#pragma omp atomic
bar(); /* { dg-error "invalid operator" } */
#pragma omp atomic
bar() += 1; /* { dg-error "lvalue required" } */
#pragma omp atomic a /* { dg-error "expected end of line" } */
x++;
x++; /* { dg-error "expected 'read', 'write', 'update', 'capture', 'seq_cst', 'acq_rel', 'release', 'relaxed' or 'hint' clause" "" { target *-*-* } .-1 } */
#pragma omp atomic
; /* { dg-error "expected primary-expression" } */
#pragma omp atomic

@@ -86,18 +86,18 @@ foo (int x)
#pragma omp p for linear (t) // { dg-error "predetermined 'threadprivate'" }
for (i = 0; i < 10; i++)
;
#pragma omp p shared (c) // { dg-error "predetermined 'shared'" }
#pragma omp p shared (c)
;
#pragma omp p private (c) // { dg-error "predetermined 'shared'" }
#pragma omp p private (c) // { dg-error "may appear only in 'shared' or 'firstprivate' clauses" }
;
#pragma omp p firstprivate (c)
;
#pragma omp p for lastprivate (c) // { dg-error "predetermined 'shared'" }
#pragma omp p for lastprivate (c) // { dg-error "may appear only in 'shared' or 'firstprivate' clauses" }
for (i = 0; i < 10; i++)
;
#pragma omp p reduction (*:c) // { dg-error "predetermined 'shared'" }
#pragma omp p reduction (*:c) // { dg-error "may appear only in 'shared' or 'firstprivate' clauses" }
;
#pragma omp p for linear (c:2) // { dg-error "predetermined 'shared'" }
#pragma omp p for linear (c:2) // { dg-error "may appear only in 'shared' or 'firstprivate' clauses" }
for (i = 0; i < 10; i++)
;
}

@@ -0,0 +1,33 @@
int i;
template <int N>
void
foo (void)
{
#pragma omp critical (foo), hint (N + 1)
i++;
}
template <int N>
void
bar (void)
{
#pragma omp critical (bar), hint (N + i) // { dg-error "constant integer expression" }
i++;
}
template <typename T>
void
baz (T x)
{
#pragma omp critical (baz) hint (x) // { dg-error "expression must be integral" }
i++;
}
void
test ()
{
foo <0> ();
bar <0> ();
baz (0.0);
}

@@ -0,0 +1,86 @@
int arr[64], arr2[64];
struct S { int a[4]; } k;
short arr4[4];
volatile int v;
#define TEST_EQ(x,y) ({ int o[x == y ? 1 : -1]; 0; })
template <typename T, typename U, typename V, typename W, int N>
void
foo (unsigned char i, signed char j)
{
#pragma omp task depend (iterator (T j=6:N:-2) , out : \
arr[TEST_EQ (sizeof (j), sizeof (int)), \
TEST_EQ (sizeof (i), sizeof (unsigned char)), \
TEST_EQ (sizeof (k), sizeof (struct S)), j], \
arr2[TEST_EQ (((__typeof (j)) -1) < 0, 1), \
TEST_EQ (((__typeof (i)) -1) < 0, 0), \
TEST_EQ (((__typeof (k.a[0])) -1) < 0, 1), j]) \
depend(out: arr[0]) \
depend (iterator (U i=__LONG_LONG_MAX__ - 4:__LONG_LONG_MAX__ - N:N, \
V j=~0U-16:~0U-8:3, \
W *k=&arr4[1]:&arr4[2]:1) , in : \
arr[TEST_EQ (sizeof (i), sizeof (long long)), \
TEST_EQ (sizeof (j), sizeof (unsigned short)), \
TEST_EQ (sizeof (k), sizeof (short *)), \
TEST_EQ (sizeof (*k), sizeof (short)), i - __LONG_LONG_MAX__ + 4], \
arr2[TEST_EQ (((__typeof (i)) -1) < 0, 1), \
TEST_EQ (((__typeof (j)) -1) < 0, 0), \
TEST_EQ (((__typeof (*k)) -1) < 0, 1), j - (~0U-16)], \
arr2[k - &arr4[0]]) \
depend(in : k)
v++;
}
template <typename U, typename W, int N>
void
bar (unsigned char i, signed char j)
{
int m = j;
int n = j + 2;
#pragma omp task depend (iterator (j=N:2:m) , out : \
arr[TEST_EQ (sizeof (j), sizeof (int)), \
TEST_EQ (sizeof (i), sizeof (unsigned char)), \
TEST_EQ (sizeof (k), sizeof (struct S)), j], \
arr2[TEST_EQ (((__typeof (j)) -1) < 0, 1), \
TEST_EQ (((__typeof (i)) -1) < 0, 0), \
TEST_EQ (((__typeof (k.a[0])) -1) < 0, 1), j]) \
depend(out: arr[0]) \
depend (iterator (U i=__LONG_LONG_MAX__ - 4 - n:__LONG_LONG_MAX__ - 2:2, \
unsigned short j=~0U-16:~0U-8-n:3, \
W k=&arr4[N-5]:&arr4[n + 2]:1) , in : \
arr[TEST_EQ (sizeof (i), sizeof (long long)), \
TEST_EQ (sizeof (j), sizeof (unsigned short)), \
TEST_EQ (sizeof (k), sizeof (short *)), \
TEST_EQ (sizeof (*k), sizeof (short)), i - __LONG_LONG_MAX__ + 4], \
arr2[TEST_EQ (((__typeof (i)) -1) < 0, 1), \
TEST_EQ (((__typeof (j)) -1) < 0, 0), \
TEST_EQ (((__typeof (*k)) -1) < 0, 1), j - (~0U-16)], \
arr2[k - &arr4[0]:10]) \
depend(in : k)
v++;
}
template <typename T, typename U, int N>
void
baz (void)
{
#pragma omp parallel
#pragma omp master
{
#pragma omp task depend(iterator(T k = N : 2) , inout : \
arr[TEST_EQ (sizeof (k), sizeof (unsigned long)), \
TEST_EQ (((__typeof (k)) -1) < N, 0), k]) \
depend(iterator(U s = -3 : -12 : -1 + N) , out : \
arr[TEST_EQ (sizeof (s), sizeof (signed char)), \
TEST_EQ (((__typeof (s)) -1) < 0, 1), s + 12])
v++;
}
}
void
test (void)
{
foo <int, long long, unsigned short, short, 2> (0, 0);
bar <long long, short *, 6> (0, -2);
baz <unsigned long int, signed char, 0> ();
}

@@ -0,0 +1,121 @@
int a, b[64];
struct S { int c; } *d, *e;
struct T;
struct T *f, *g;
int *h;
template <typename U, typename V, typename W, W N>
void
f1 ()
{
#pragma omp task depend (iterator , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (for = 0 : 2) , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (5 = 0 : 2) , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (i : N : 2) , in : a) // { dg-error "expected '='|name a type|expected" }
;
#pragma omp task depend (iterator (i = 0, 1 : 2) , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (i = (0, 1) : 2) , in : a)
;
#pragma omp task depend (iterator (i = 0 : 1 : 2 : 3) , in : a) // { dg-error "expected '.'" }
;
#pragma omp task depend (iterator (i = 0 : 2, 3) , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (i = N : 10 : 2, 3) , in : a) // { dg-error "expected" }
;
#pragma omp task depend (iterator (i = 0:1), iterator (j = 0:1) , in : a) // { dg-error "invalid depend kind" }
;
#pragma omp task depend (iterator (i = N:32) , in : b[i*2:2])
;
#pragma omp task depend (iterator (void i = 0:1) , in : a) // { dg-error "iterator 'i' has neither integral nor pointer type" }
;
#pragma omp task depend (iterator (U *p = d:e:2) , in : a)
;
#pragma omp task depend (iterator (W i = N:4, \
struct U2 { W *p; } *p = 0:2) , in : a) // { dg-error "types may not be defined in iterator type" }
;
#pragma omp task depend (iterator (i = 0:4, j = i:16) , in : a) // { dg-error "begin expression refers to outer iterator 'i'" }
;
#pragma omp task depend (iterator (i = N:4, j = 2:i:1) , in : a) // { dg-error "end expression refers to outer iterator 'i'" }
;
#pragma omp task depend (iterator (i = 0:4, j = 2:8:i) , in : a) // { dg-error "step expression refers to outer iterator 'i'" }
;
#pragma omp task depend (iterator (i = 1.25:2.5:3) , in : a)
;
#pragma omp task depend (iterator (i = 1:2:3.5) , in : a) // { dg-error "iterator step with non-integral type" }
;
#pragma omp task depend (iterator (W *p = 23 : h) , in : a)
;
#pragma omp task depend (iterator (const int i = N : 2) , in : a) // { dg-error "const qualified" }
;
#pragma omp task depend (iterator (const long long unsigned i = 0 : 2) , in : a) // { dg-error "const qualified" }
;
}
template <typename W, int N>
void
f2 ()
{
int i, j;
#pragma omp for ordered(2)
for (i = 0; i < 64; i++)
for (j = 0; j < 64; j++)
{
#pragma omp ordered depend (iterator (k=0:N) , sink: i - 1, j - 1) // { dg-error "'iterator' modifier incompatible with 'sink'" }
#pragma omp ordered depend (iterator (W l = 0:2:3) , source) // { dg-error "'iterator' modifier incompatible with 'source'" }
}
}
template <typename U, typename V, typename W, W N, typename X, typename Y>
void
f3 ()
{
#pragma omp task depend (iterator (U i = 0:1), in : a) // { dg-error "iterator 'i' has neither integral nor pointer type" }
;
#pragma omp task depend (iterator (V f = 0.2:0.4) , in : a) // { dg-error "iterator 'f' has neither integral nor pointer type" }
;
#pragma omp task depend (iterator (struct T *p = f:g) , in : a) // { dg-error "invalid use of" }
;
#pragma omp task depend (iterator (i = *d:2) , in : a) // { dg-error "invalid cast from type 'S' to type 'int'" }
;
#pragma omp task depend (iterator (i = 2:*d:2) , in : a) // { dg-error "invalid cast from type 'S' to type 'int'" }
;
#pragma omp task depend (iterator (i = 2:4:*d) , in : a) // { dg-error "iterator step with non-integral type" }
;
#pragma omp task depend (iterator (i = 1.25:2.5:3) , in : a)
;
#pragma omp task depend (iterator (i = 1:2:3.5) , in : a) // { dg-error "iterator step with non-integral type" }
;
#pragma omp task depend (iterator (W *p = 23 : h) , in : a)
;
#pragma omp task depend (iterator (short i=1:3:N) , in : a) // { dg-error "iterator 'i' has zero step" }
;
#pragma omp task depend (iterator (i = 1 : 3 : N + 3 - 3) , in : a) // { dg-error "iterator 'i' has zero step" }
;
#pragma omp task depend (iterator (int *p = &b[6]:&b[9]:4 - 4) , in : a) // { dg-error "iterator 'p' has zero step" }
;
#pragma omp task depend (iterator (X i = N : 2) , in : a) // { dg-error "const qualified" }
;
#pragma omp task depend (iterator (Y i = 0 : 2) , in : a) // { dg-error "const qualified" }
;
}
template <int N>
void
f4 ()
{
#pragma omp task depend (iterator (i = 0:1), iterator (j = 0:1) , in : a) // { dg-error "invalid depend kind" }
;
}
void
f5 ()
{
f1 <struct S, float, int, 0> ();
f2 <int, 1> ();
f3 <struct S, float, int, 0, const int, const long long unsigned> ();
f4 <0> ();
}

@@ -0,0 +1,118 @@
typedef struct __attribute__((__aligned__ (sizeof (void *)))) omp_depend_t {
char __omp_depend_t__[2 * sizeof (void *)];
} omp_depend_t;
omp_depend_t bar (void);
extern const omp_depend_t cdepobj;
extern omp_depend_t depobj, depobj4;
extern omp_depend_t depobja[4];
extern omp_depend_t *pdepobj;
int a, b, i, j;
template <int N>
void
f1 (bool x)
{
#pragma omp depobj(x ? depobj : depobj4) depend(in : x ? a : b)
#pragma omp depobj(x ? depobj : depobj4) update(inout)
#pragma omp task depend (depobj:depobj)
;
#pragma omp depobj(depobj) destroy
#pragma omp task depend (iterator (i=1:3) , depobj: *(depobja + i))
;
#pragma omp depobj(pdepobj[0]) depend(mutexinoutset:a)
#pragma omp depobj(*pdepobj) destroy
}
template <typename T, typename T2>
void
f2 (T &depobj2, T2 depobj3, T *pdepobj)
{
T depobj1;
T depobja[4];
#pragma omp depobj(depobj1) depend(in : --a)
#pragma omp depobj(depobj1) update(inout)
#pragma omp task depend (depobj: depobj1)
;
#pragma omp depobj(depobj1) destroy
#pragma omp depobj(depobj2) depend(in : a)
#pragma omp depobj(depobj2) update(inout)
#pragma omp task depend (depobj :depobj2)
;
#pragma omp depobj(depobj2) destroy
#pragma omp depobj(depobj3) depend(in : a)
#pragma omp depobj(depobj3) update(inout)
#pragma omp task depend (depobj : depobj3)
;
#pragma omp depobj(depobj3) destroy
for (int q = 1; q < 3; q++)
{
#pragma omp depobj(depobja[q]) depend (in:a)
}
#pragma omp task depend (iterator (i=1:3) , depobj : *(depobja + i))
;
for (int q = 1; q < 3; q++)
{
#pragma omp depobj(depobja[q]) destroy
}
#pragma omp depobj(pdepobj[0]) depend(mutexinoutset:a)
#pragma omp depobj(*pdepobj) destroy
}
void
f3 (bool x)
{
omp_depend_t depobjx, depobjy;
f1 <0> (x);
f2 <omp_depend_t, omp_depend_t &> (depobjx, depobjy, pdepobj);
}
template <int N>
void
f4 (void)
{
omp_depend_t depobjb[4];
#pragma omp depobj // { dg-error "expected" }
#pragma omp depobj destroy // { dg-error "expected" }
#pragma omp depobj (depobj) // { dg-error "expected 'depend', 'destroy' or 'update' clause" }
#pragma omp depobj (depobj) foobar // { dg-error "expected 'depend', 'destroy' or 'update' clause" }
#pragma omp depobj(bar ()) update(inout) // { dg-error "'depobj' expression is not lvalue expression" }
#pragma omp depobj (cdepobj) update(in) // { dg-error "'const' qualified 'depobj' expression" }
#pragma omp depobj (depobjb) depend(in: a) // { dg-error "type of 'depobj' expression is not 'omp_depend_t'" }
#pragma omp depobj (pdepobj) depend(in: a) // { dg-error "type of 'depobj' expression is not 'omp_depend_t'" }
#pragma omp depobj (a) destroy // { dg-error "type of 'depobj' expression is not 'omp_depend_t'" }
#pragma omp depobj (depobj) depend(depobj:a) // { dg-error "does not have 'omp_depend_t' type in 'depend' clause with 'depobj' dependence type" }
#pragma omp depobj (depobj) depend(depobj:*depobjb) // { dg-error "'depobj' dependence type specified in 'depend' clause on 'depobj' construct" }
#pragma omp depobj (depobj) update(foobar) // { dg-error "expected 'in', 'out', 'inout' or 'mutexinoutset'" }
#pragma omp depobj (depobj) depend(in: *depobja) // { dg-error "should not have 'omp_depend_t' type in 'depend' clause with dependence type" }
#pragma omp depobj (depobj) depend(in: a) depend(in: b) // { dg-error "expected" }
#pragma omp depobj (depobj) depend(in: a) update(out) // { dg-error "expected" }
#pragma omp depobj (depobj) depend(in: a, b) // { dg-error "more than one locator in 'depend' clause on 'depobj' construct" }
#pragma omp depobj (depobj) depend(source) // { dg-error "'depend\\(source\\)' is only allowed in 'omp ordered'" }
#pragma omp depobj (depobj) depend(sink: i + 1, j - 1) // { dg-error "'depend\\(sink\\)' is only allowed in 'omp ordered'" }
#pragma omp depobj (depobj) depend(iterator (i = 0:2) , in : a) // { dg-error "'iterator' modifier may not be specified on 'depobj' construct" }
if (0)
#pragma omp depobj (depobj) destroy // { dg-error "'#pragma omp depobj' may only be used in compound statements" }
;
}
template <int N>
void
f5 (void)
{
#pragma omp task depend (depobj:depobja[1:2]) // { dg-error "'depend' clause with 'depobj' dependence type on array section" }
;
#pragma omp task depend (depobj : a) // { dg-error "'a' does not have 'omp_depend_t' type in 'depend' clause with 'depobj' dependence type" }
;
#pragma omp task depend (in: depobj) // { dg-error "'depobj' should not have 'omp_depend_t' type in 'depend' clause with dependence type" }
;
}
void
f6 (omp_depend_t &x)
{
f4 <0> ();
f5 <0> ();
#pragma omp depobj (x) depend(in: a)
#pragma omp depobj (depobj) depend(in: x) // { dg-error "should not have 'omp_depend_t' type in 'depend' clause with dependence type" }
}

@@ -0,0 +1,21 @@
// { dg-do compile { target c++11 } }
// { dg-options "-fopenmp" }
int a[42];
void
foo ()
{
#pragma omp for ordered (1) // { dg-error "'ordered' clause with parameter on range-based 'for' loop" }
for (auto x : a)
;
}
void
bar ()
{
#pragma omp for ordered (2) // { dg-error "'ordered' clause with parameter on range-based 'for' loop" }
for (int i = 0; i < 1; i++)
for (auto x : a)
;
}

@@ -0,0 +1,104 @@
// { dg-do compile { target c++17 } }
void
f1 (int a[10][10])
{
#pragma omp for collapse (2)
for (int i = 0; i < 10; ++i)
for (auto j : a[i]) // { dg-error "initializer expression refers to iteration variable 'i'" }
;
}
void
f2 (int (&a)[10])
{
#pragma omp for collapse (2)
for (auto i : a)
for (int j = i * 2; j < i * 4; j++) // { dg-error "initializer expression refers to iteration variable 'i'" }
;
}
struct S { int a, b, c; };
void
f3 (S (&a)[10])
{
#pragma omp for collapse (2)
for (auto [i, j, k] : a) // { dg-error "use of 'i' before deduction of 'auto'" "" { target *-*-* } .+1 }
for (int l = i; l < j; l += k) // { dg-error "use of 'j' before deduction of 'auto'" }
; // { dg-error "use of 'k' before deduction of 'auto'" "" { target *-*-* } .-1 }
}
template <int N>
void
f4 (int a[10][10])
{
#pragma omp for collapse (2)
for (int i = 0; i < 10; ++i) // { dg-error "initializer expression refers to iteration variable 'i'" }
for (auto j : a[i])
;
}
template <int N>
void
f5 (int (&a)[10])
{
#pragma omp for collapse (2)
for (auto i : a)
for (int j = i * 2; j < i * 4; j++) // { dg-error "initializer expression refers to iteration variable 'i'" }
;
}
template <int N>
void
f6 (S (&a)[10])
{
#pragma omp for collapse (2)
for (auto [i, j, k] : a) // { dg-error "use of 'i' before deduction of 'auto'" "" { target *-*-* } .-1 }
for (int l = i; l < j; l += k) // { dg-error "use of 'j' before deduction of 'auto'" }
; // { dg-error "use of 'k' before deduction of 'auto'" "" { target *-*-* } .-3 }
}
template <typename T>
void
f7 (T a[10][10])
{
#pragma omp for collapse (2)
for (T i = 0; i < 10; ++i)
for (auto j : a[i]) // { dg-error "initializer expression refers to iteration variable 'i'" }
;
}
template <typename T>
void
f8 (T (&a)[10])
{
#pragma omp for collapse (2)
for (auto i : a)
for (T j = i * 2; j < i * 4; j++) // { dg-error "initializer expression refers to iteration variable 'i'" }
;
}
template <typename T, typename U>
void
f9 (U (&a)[10])
{
#pragma omp for collapse (2)
for (auto [i, j, k] : a) // { dg-error "use of 'i' before deduction of 'auto'" "" { target *-*-* } .-1 }
for (T l = i; l < j; l += k) // { dg-error "use of 'j' before deduction of 'auto'" }
; // { dg-error "use of 'k' before deduction of 'auto'" "" { target *-*-* } .-3 }
}
void
test ()
{
int a[10][10] {};
int b[10] {};
S c[10] {};
f4 <0> (a);
f5 <0> (b);
f6 <0> (c);
f7 (a);
f8 (b);
f9 <int, S> (c);
}

Some files were not shown because too many files have changed in this diff.