gimplify.c (omp_notice_variable): If n is non-NULL and no flags change in ORT_TARGET region, don't jump to do_outer.
2014-06-18  Jakub Jelinek  <jakub@redhat.com>

gcc/
        * gimplify.c (omp_notice_variable): If n is non-NULL and no flags
        change in ORT_TARGET region, don't jump to do_outer.
        (struct gimplify_adjust_omp_clauses_data): New type.
        (gimplify_adjust_omp_clauses_1): Adjust for data being a struct
        gimplify_adjust_omp_clauses_data pointer instead of tree *.
        Pass pre_p as a new argument to lang_hooks.decls.omp_finish_clause
        hook.
        (gimplify_adjust_omp_clauses): Add pre_p argument, adjust
        splay_tree_foreach to pass both list_p and pre_p.
        (gimplify_omp_parallel, gimplify_omp_task, gimplify_omp_for,
        gimplify_omp_workshare, gimplify_omp_target_update): Adjust
        gimplify_adjust_omp_clauses callers.
        * langhooks.c (lhd_omp_finish_clause): New function.
        * langhooks-def.h (lhd_omp_finish_clause): New prototype.
        (LANG_HOOKS_OMP_FINISH_CLAUSE): Define to lhd_omp_finish_clause.
        * langhooks.h (struct lang_hooks_for_decls): Add a new gimple_seq *
        argument to omp_finish_clause hook.
        * omp-low.c (scan_sharing_clauses): Call scan_omp_op on non-DECL_P
        OMP_CLAUSE_DECL if ctx->outer.
        (scan_omp_parallel, lower_omp_for): When adding _LOOPTEMP_ clause
        var, add it to outer ctx's decl_map as identity.
        * tree-core.h (OMP_CLAUSE_MAP_TO_PSET): New map kind.
        * tree-nested.c (convert_nonlocal_omp_clauses,
        convert_local_omp_clauses): Handle various OpenMP 4.0 clauses.
        * tree-pretty-print.c (dump_omp_clause): Handle
        OMP_CLAUSE_MAP_TO_PSET.

gcc/cp/
        * cp-gimplify.c (cxx_omp_finish_clause): Add a gimple_seq *
        argument.
        * cp-tree.h (cxx_omp_finish_clause): Adjust prototype.

gcc/fortran/
        * cpp.c (cpp_define_builtins): Change _OPENMP macro to 201307.
        * dump-parse-tree.c (show_omp_namelist): Add list_type argument.
        Adjust for rop being u.reduction_op now, handle depend_op or
        map_op.
        (show_omp_node): Adjust callers.  Print some new OpenMP 4.0
        clauses, adjust for OMP_LIST_DEPEND_{IN,OUT} becoming a single
        OMP_LIST_DEPEND.
        * f95-lang.c (gfc_handle_omp_declare_target_attribute): New
        function.
        (gfc_attribute_table): New variable.
        (LANG_HOOKS_OMP_FINISH_CLAUSE, LANG_HOOKS_ATTRIBUTE_TABLE):
        Redefine.
        * frontend-passes.c (gfc_code_walker): Handle new OpenMP target
        EXEC_OMP_* codes and new clauses.
        * gfortran.h (gfc_statement): Add ST_OMP_TARGET, ST_OMP_END_TARGET,
        ST_OMP_TARGET_DATA, ST_OMP_END_TARGET_DATA, ST_OMP_TARGET_UPDATE,
        ST_OMP_DECLARE_TARGET, ST_OMP_TEAMS, ST_OMP_END_TEAMS,
        ST_OMP_DISTRIBUTE, ST_OMP_END_DISTRIBUTE, ST_OMP_DISTRIBUTE_SIMD,
        ST_OMP_END_DISTRIBUTE_SIMD, ST_OMP_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_END_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD,
        ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD, ST_OMP_TARGET_TEAMS,
        ST_OMP_END_TARGET_TEAMS, ST_OMP_TEAMS_DISTRIBUTE,
        ST_OMP_END_TEAMS_DISTRIBUTE, ST_OMP_TEAMS_DISTRIBUTE_SIMD,
        ST_OMP_END_TEAMS_DISTRIBUTE_SIMD, ST_OMP_TARGET_TEAMS_DISTRIBUTE,
        ST_OMP_END_TARGET_TEAMS_DISTRIBUTE,
        ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD,
        ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD,
        ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
        ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
        ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
        ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD and
        ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD.
        (symbol_attribute): Add omp_declare_target field.
        (gfc_omp_depend_op, gfc_omp_map_op): New enums.
        (gfc_omp_namelist): Replace rop field with union containing
        reduction_op, depend_op and map_op.
        (OMP_LIST_DEPEND_IN, OMP_LIST_DEPEND_OUT): Remove.
        (OMP_LIST_DEPEND, OMP_LIST_MAP, OMP_LIST_TO, OMP_LIST_FROM): New.
        (gfc_omp_clauses): Add num_teams, device, thread_limit,
        dist_sched_kind, dist_chunk_size fields.
        (gfc_common_head): Add omp_declare_target field.
        (gfc_exec_op): Add EXEC_OMP_TARGET, EXEC_OMP_TARGET_DATA,
        EXEC_OMP_TEAMS, EXEC_OMP_DISTRIBUTE, EXEC_OMP_DISTRIBUTE_SIMD,
        EXEC_OMP_DISTRIBUTE_PARALLEL_DO,
        EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD, EXEC_OMP_TARGET_TEAMS,
        EXEC_OMP_TEAMS_DISTRIBUTE, EXEC_OMP_TEAMS_DISTRIBUTE_SIMD,
        EXEC_OMP_TARGET_TEAMS_DISTRIBUTE,
        EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD,
        EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO,
        EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
        EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
        EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD and
        EXEC_OMP_TARGET_UPDATE.
        (gfc_add_omp_declare_target): New prototype.
        * match.h (gfc_match_omp_declare_target, gfc_match_omp_distribute,
        gfc_match_omp_distribute_parallel_do,
        gfc_match_omp_distribute_parallel_do_simd,
        gfc_match_omp_distribute_simd, gfc_match_omp_target,
        gfc_match_omp_target_data, gfc_match_omp_target_teams,
        gfc_match_omp_target_teams_distribute,
        gfc_match_omp_target_teams_distribute_parallel_do,
        gfc_match_omp_target_teams_distribute_parallel_do_simd,
        gfc_match_omp_target_teams_distribute_simd,
        gfc_match_omp_target_update, gfc_match_omp_teams,
        gfc_match_omp_teams_distribute,
        gfc_match_omp_teams_distribute_parallel_do,
        gfc_match_omp_teams_distribute_parallel_do_simd,
        gfc_match_omp_teams_distribute_simd): New prototypes.
        * module.c (ab_attribute): Add AB_OMP_DECLARE_TARGET.
        (attr_bits): Likewise.
        (mio_symbol_attribute): Handle omp_declare_target attribute.
        (gfc_free_omp_clauses): Free num_teams, device, thread_limit
        and dist_chunk_size expressions.
        (OMP_CLAUSE_PRIVATE, OMP_CLAUSE_FIRSTPRIVATE,
        OMP_CLAUSE_LASTPRIVATE, OMP_CLAUSE_COPYPRIVATE, OMP_CLAUSE_SHARED,
        OMP_CLAUSE_COPYIN, OMP_CLAUSE_REDUCTION, OMP_CLAUSE_IF,
        OMP_CLAUSE_NUM_THREADS, OMP_CLAUSE_SCHEDULE, OMP_CLAUSE_DEFAULT,
        OMP_CLAUSE_ORDERED, OMP_CLAUSE_COLLAPSE, OMP_CLAUSE_UNTIED,
        OMP_CLAUSE_FINAL, OMP_CLAUSE_MERGEABLE, OMP_CLAUSE_ALIGNED,
        OMP_CLAUSE_DEPEND, OMP_CLAUSE_INBRANCH, OMP_CLAUSE_LINEAR,
        OMP_CLAUSE_NOTINBRANCH, OMP_CLAUSE_PROC_BIND, OMP_CLAUSE_SAFELEN,
        OMP_CLAUSE_SIMDLEN, OMP_CLAUSE_UNIFORM): Use 1U instead of 1.
        (OMP_CLAUSE_DEVICE, OMP_CLAUSE_MAP, OMP_CLAUSE_TO, OMP_CLAUSE_FROM,
        OMP_CLAUSE_NUM_TEAMS, OMP_CLAUSE_THREAD_LIMIT,
        OMP_CLAUSE_DIST_SCHEDULE): Define.
        (gfc_match_omp_clauses): Change mask parameter to unsigned int.
        Adjust for rop becoming u.reduction_op.  Disallow inbranch with
        notinbranch.  For depend clause, always create OMP_LIST_DEPEND
        and fill in u.depend_op.  Handle num_teams, device, map, to,
        from, thread_limit and dist_schedule clauses.
        (OMP_DECLARE_SIMD_CLAUSES): Or in OMP_CLAUSE_INBRANCH and
        OMP_CLAUSE_NOTINBRANCH.
        (OMP_TARGET_CLAUSES, OMP_TARGET_DATA_CLAUSES,
        OMP_TARGET_UPDATE_CLAUSES, OMP_TEAMS_CLAUSES,
        OMP_DISTRIBUTE_CLAUSES): Define.
        (match_omp): New function.
        (gfc_match_omp_do, gfc_match_omp_do_simd, gfc_match_omp_parallel,
        gfc_match_omp_parallel_do, gfc_match_omp_parallel_do_simd,
        gfc_match_omp_parallel_sections, gfc_match_omp_parallel_workshare,
        gfc_match_omp_sections, gfc_match_omp_simd, gfc_match_omp_single,
        gfc_match_omp_task): Rewritten using match_omp.
        (gfc_match_omp_threadprivate, gfc_match_omp_declare_reduction):
        Diagnose if the directives are followed by unexpected junk.
        (gfc_match_omp_distribute, gfc_match_omp_distribute_parallel_do,
        gfc_match_omp_distribute_parallel_do_simd,
        gfc_match_omp_distribute_simd, gfc_match_omp_declare_target,
        gfc_match_omp_target, gfc_match_omp_target_data,
        gfc_match_omp_target_teams, gfc_match_omp_target_teams_distribute,
        gfc_match_omp_target_teams_distribute_parallel_do,
        gfc_match_omp_target_teams_distribute_parallel_do_simd,
        gfc_match_omp_target_teams_distribute_simd,
        gfc_match_omp_target_update, gfc_match_omp_teams,
        gfc_match_omp_teams_distribute,
        gfc_match_omp_teams_distribute_parallel_do,
        gfc_match_omp_teams_distribute_parallel_do_simd,
        gfc_match_omp_teams_distribute_simd): New functions.
        * openmp.c (resolve_omp_clauses): Adjust for
        OMP_LIST_DEPEND_{IN,OUT} being changed to OMP_LIST_DEPEND.
        Handle OMP_LIST_MAP, OMP_LIST_FROM, OMP_LIST_TO, num_teams,
        device, dist_chunk_size and thread_limit.
        (gfc_resolve_omp_parallel_blocks): Only put sharing clauses into
        ctx.sharing_clauses.  Call gfc_resolve_omp_do_blocks for various
        new EXEC_OMP_* codes.
        (resolve_omp_do): Handle various new EXEC_OMP_* codes.
        (gfc_resolve_omp_directive): Likewise.
        (gfc_resolve_omp_declare_simd): Add missing space to diagnostics.
        * parse.c (decode_omp_directive): Handle parsing of OpenMP 4.0
        offloading related directives.
        (case_executable): Add ST_OMP_TARGET_UPDATE.
        (case_exec_markers): Add ST_OMP_TARGET*, ST_OMP_TEAMS*,
        ST_OMP_DISTRIBUTE*.
        (case_decl): Add ST_OMP_DECLARE_TARGET.
        (gfc_ascii_statement): Handle new ST_OMP_* codes.
        (parse_omp_do): Handle various new ST_OMP_* codes.
        (parse_executable): Likewise.
        * resolve.c (gfc_resolve_blocks): Handle various new EXEC_OMP_*
        codes.
        (resolve_code): Likewise.
        (resolve_symbol): Change so that !$OMP DECLARE TARGET variables
        are saved.
        * st.c (gfc_free_statement): Handle various new EXEC_OMP_* codes.
        * symbol.c (check_conflict): Check omp_declare_target conflicts.
        (gfc_add_omp_declare_target): New function.
        (gfc_copy_attr): Copy omp_declare_target.
        * trans.c (trans_code): Handle various new EXEC_OMP_* codes.
        * trans-common.c (build_common_decl): Add "omp declare target"
        attribute if needed.
        * trans-decl.c (add_attributes_to_decl): Likewise.
        * trans.h (gfc_omp_finish_clause): New prototype.
        * trans-openmp.c (gfc_omp_finish_clause): New function.
        (gfc_trans_omp_reduction_list): Adjust for rop being renamed to
        u.reduction_op.
        (gfc_trans_omp_clauses): Adjust for OMP_LIST_DEPEND_{IN,OUT}
        change to OMP_LIST_DEPEND and fix up depend handling.  Handle
        OMP_LIST_MAP, OMP_LIST_TO, OMP_LIST_FROM, num_teams,
        thread_limit, device, dist_chunk_size and dist_sched_kind.
        (gfc_trans_omp_do): Handle EXEC_OMP_DISTRIBUTE.
        (GFC_OMP_SPLIT_DISTRIBUTE, GFC_OMP_SPLIT_TEAMS,
        GFC_OMP_SPLIT_TARGET, GFC_OMP_SPLIT_NUM, GFC_OMP_MASK_DISTRIBUTE,
        GFC_OMP_MASK_TEAMS, GFC_OMP_MASK_TARGET, GFC_OMP_MASK_NUM): New.
        (gfc_split_omp_clauses): Handle splitting of clauses for new
        EXEC_OMP_* codes.
        (gfc_trans_omp_do_simd): Add pblock argument, adjust for being
        callable for combined constructs.
        (gfc_trans_omp_parallel_do, gfc_trans_omp_parallel_do_simd):
        Likewise.
        (gfc_trans_omp_distribute, gfc_trans_omp_teams,
        gfc_trans_omp_target, gfc_trans_omp_target_data,
        gfc_trans_omp_target_update): New functions.
        (gfc_trans_omp_directive): Adjust gfc_trans_omp_* callers, handle
        new EXEC_OMP_* codes.

gcc/testsuite/
        * gfortran.dg/gomp/declare-simd-1.f90: New test.
        * gfortran.dg/gomp/depend-1.f90: New test.
        * gfortran.dg/gomp/target1.f90: New test.
        * gfortran.dg/gomp/target2.f90: New test.
        * gfortran.dg/gomp/target3.f90: New test.
        * gfortran.dg/gomp/udr4.f90: Adjust expected diagnostics.
        * gfortran.dg/openmp-define-3.f90: Expect _OPENMP 201307 instead
        of 201107.

libgomp/
        * omp_lib.f90.in (openmp_version): Set to 201307.
        * omp_lib.h.in (openmp_version): Likewise.
        * testsuite/libgomp.c/target-8.c: New test.
        * testsuite/libgomp.fortran/declare-simd-1.f90: Add notinbranch
        and inbranch clauses.
        * testsuite/libgomp.fortran/depend-3.f90: New test.
        * testsuite/libgomp.fortran/openmp_version-1.f: Adjust for new
        openmp_version.
        * testsuite/libgomp.fortran/openmp_version-2.f90: Likewise.
        * testsuite/libgomp.fortran/target1.f90: New test.
        * testsuite/libgomp.fortran/target2.f90: New test.
        * testsuite/libgomp.fortran/target3.f90: New test.
        * testsuite/libgomp.fortran/target4.f90: New test.
        * testsuite/libgomp.fortran/target5.f90: New test.
        * testsuite/libgomp.fortran/target6.f90: New test.
        * testsuite/libgomp.fortran/target7.f90: New test.

From-SVN: r211768
commit f014c65363 (parent 3e9c4087cc)
Author: Jakub Jelinek <jakub@redhat.com>
Date:   2014-06-18
gcc/cp/cp-gimplify.c
@@ -1592,7 +1592,7 @@ cxx_omp_predetermined_sharing (tree decl)
 /* Finalize an implicitly determined clause.  */
 
 void
-cxx_omp_finish_clause (tree c)
+cxx_omp_finish_clause (tree c, gimple_seq *)
 {
   tree decl, inner_type;
   bool make_shared = false;

gcc/cp/cp-tree.h
@@ -6228,7 +6228,7 @@ extern tree cxx_omp_clause_default_ctor (tree, tree, tree);
 extern tree cxx_omp_clause_copy_ctor (tree, tree, tree);
 extern tree cxx_omp_clause_assign_op (tree, tree, tree);
 extern tree cxx_omp_clause_dtor (tree, tree);
-extern void cxx_omp_finish_clause (tree);
+extern void cxx_omp_finish_clause (tree, gimple_seq *);
 extern bool cxx_omp_privatize_by_reference (const_tree);
 
 /* in name-lookup.c */
gcc/fortran/cpp.c
@@ -171,7 +171,7 @@ cpp_define_builtins (cpp_reader *pfile)
   cpp_define (pfile, "_LANGUAGE_FORTRAN=1");
 
   if (gfc_option.gfc_flag_openmp)
-    cpp_define (pfile, "_OPENMP=201107");
+    cpp_define (pfile, "_OPENMP=201307");
 
   /* The defines below are necessary for the TARGET_* macros.
gcc/fortran/dump-parse-tree.c
@@ -1016,32 +1016,51 @@ show_code (int level, gfc_code *c)
 }
 
 static void
-show_omp_namelist (gfc_omp_namelist *n)
+show_omp_namelist (int list_type, gfc_omp_namelist *n)
 {
   for (; n; n = n->next)
     {
-      switch (n->rop)
-        {
-        case OMP_REDUCTION_PLUS:
-        case OMP_REDUCTION_TIMES:
-        case OMP_REDUCTION_MINUS:
-        case OMP_REDUCTION_AND:
-        case OMP_REDUCTION_OR:
-        case OMP_REDUCTION_EQV:
-        case OMP_REDUCTION_NEQV:
-          fprintf (dumpfile, "%s:", gfc_op2string ((gfc_intrinsic_op) n->rop));
-          break;
-        case OMP_REDUCTION_MAX: fputs ("max:", dumpfile); break;
-        case OMP_REDUCTION_MIN: fputs ("min:", dumpfile); break;
-        case OMP_REDUCTION_IAND: fputs ("iand:", dumpfile); break;
-        case OMP_REDUCTION_IOR: fputs ("ior:", dumpfile); break;
-        case OMP_REDUCTION_IEOR: fputs ("ieor:", dumpfile); break;
-        case OMP_REDUCTION_USER:
-          if (n->udr)
-            fprintf (dumpfile, "%s:", n->udr->name);
-          break;
-        default: break;
-        }
+      if (list_type == OMP_LIST_REDUCTION)
+        switch (n->u.reduction_op)
+          {
+          case OMP_REDUCTION_PLUS:
+          case OMP_REDUCTION_TIMES:
+          case OMP_REDUCTION_MINUS:
+          case OMP_REDUCTION_AND:
+          case OMP_REDUCTION_OR:
+          case OMP_REDUCTION_EQV:
+          case OMP_REDUCTION_NEQV:
+            fprintf (dumpfile, "%s:",
+                     gfc_op2string ((gfc_intrinsic_op) n->u.reduction_op));
+            break;
+          case OMP_REDUCTION_MAX: fputs ("max:", dumpfile); break;
+          case OMP_REDUCTION_MIN: fputs ("min:", dumpfile); break;
+          case OMP_REDUCTION_IAND: fputs ("iand:", dumpfile); break;
+          case OMP_REDUCTION_IOR: fputs ("ior:", dumpfile); break;
+          case OMP_REDUCTION_IEOR: fputs ("ieor:", dumpfile); break;
+          case OMP_REDUCTION_USER:
+            if (n->udr)
+              fprintf (dumpfile, "%s:", n->udr->name);
+            break;
+          default: break;
+          }
+      else if (list_type == OMP_LIST_DEPEND)
+        switch (n->u.depend_op)
+          {
+          case OMP_DEPEND_IN: fputs ("in:", dumpfile); break;
+          case OMP_DEPEND_OUT: fputs ("out:", dumpfile); break;
+          case OMP_DEPEND_INOUT: fputs ("inout:", dumpfile); break;
+          default: break;
+          }
+      else if (list_type == OMP_LIST_MAP)
+        switch (n->u.map_op)
+          {
+          case OMP_MAP_ALLOC: fputs ("alloc:", dumpfile); break;
+          case OMP_MAP_TO: fputs ("to:", dumpfile); break;
+          case OMP_MAP_FROM: fputs ("from:", dumpfile); break;
+          case OMP_MAP_TOFROM: fputs ("tofrom:", dumpfile); break;
+          default: break;
+          }
       fprintf (dumpfile, "%s", n->sym->name);
       if (n->expr)
         {
@@ -1117,7 +1136,7 @@ show_omp_node (int level, gfc_code *c)
       if (c->ext.omp_namelist)
         {
           fputs (" (", dumpfile);
-          show_omp_namelist (c->ext.omp_namelist);
+          show_omp_namelist (OMP_LIST_NUM, c->ext.omp_namelist);
           fputc (')', dumpfile);
         }
       return;
@@ -1226,18 +1245,12 @@ show_omp_node (int level, gfc_code *c)
           case OMP_LIST_ALIGNED: type = "ALIGNED"; break;
           case OMP_LIST_LINEAR: type = "LINEAR"; break;
           case OMP_LIST_REDUCTION: type = "REDUCTION"; break;
-          case OMP_LIST_DEPEND_IN:
-            fprintf (dumpfile, " DEPEND(IN:");
-            break;
-          case OMP_LIST_DEPEND_OUT:
-            fprintf (dumpfile, " DEPEND(OUT:");
-            break;
+          case OMP_LIST_DEPEND: type = "DEPEND"; break;
           default:
             gcc_unreachable ();
           }
-        if (type)
-          fprintf (dumpfile, " %s(", type);
-        show_omp_namelist (omp_clauses->lists[list_type]);
+        fprintf (dumpfile, " %s(", type);
+        show_omp_namelist (list_type, omp_clauses->lists[list_type]);
         fputc (')', dumpfile);
       }
   if (omp_clauses->safelen_expr)
@@ -1269,6 +1282,34 @@ show_omp_node (int level, gfc_code *c)
         }
       fprintf (dumpfile, " PROC_BIND(%s)", type);
     }
+  if (omp_clauses->num_teams)
+    {
+      fputs (" NUM_TEAMS(", dumpfile);
+      show_expr (omp_clauses->num_teams);
+      fputc (')', dumpfile);
+    }
+  if (omp_clauses->device)
+    {
+      fputs (" DEVICE(", dumpfile);
+      show_expr (omp_clauses->device);
+      fputc (')', dumpfile);
+    }
+  if (omp_clauses->thread_limit)
+    {
+      fputs (" THREAD_LIMIT(", dumpfile);
+      show_expr (omp_clauses->thread_limit);
+      fputc (')', dumpfile);
+    }
+  if (omp_clauses->dist_sched_kind != OMP_SCHED_NONE)
+    {
+      fprintf (dumpfile, " DIST_SCHEDULE (static");
+      if (omp_clauses->dist_chunk_size)
+        {
+          fputc (',', dumpfile);
+          show_expr (omp_clauses->dist_chunk_size);
+        }
+      fputc (')', dumpfile);
+    }
   fputc ('\n', dumpfile);
   if (c->op == EXEC_OMP_SECTIONS || c->op == EXEC_OMP_PARALLEL_SECTIONS)
@@ -1296,7 +1337,8 @@ show_omp_node (int level, gfc_code *c)
       if (omp_clauses->lists[OMP_LIST_COPYPRIVATE])
         {
           fputs (" COPYPRIVATE(", dumpfile);
-          show_omp_namelist (omp_clauses->lists[OMP_LIST_COPYPRIVATE]);
+          show_omp_namelist (OMP_LIST_COPYPRIVATE,
+                             omp_clauses->lists[OMP_LIST_COPYPRIVATE]);
           fputc (')', dumpfile);
         }
       else if (omp_clauses->nowait)
@ -87,6 +87,24 @@ static alias_set_type gfc_get_alias_set (tree);
|
||||
static void gfc_init_ts (void);
|
||||
static tree gfc_builtin_function (tree);
/* Handle an "omp declare target" attribute; arguments as in
struct attribute_spec.handler. */
static tree
gfc_handle_omp_declare_target_attribute (tree *, tree, tree, int, bool *)
{
return NULL_TREE;
}

/* Table of valid Fortran attributes. */
static const struct attribute_spec gfc_attribute_table[] =
{
/* { name, min_len, max_len, decl_req, type_req, fn_type_req, handler,
affects_type_identity } */
{ "omp declare target", 0, 0, true, false, false,
gfc_handle_omp_declare_target_attribute, false },
{ NULL, 0, 0, false, false, false, NULL, false }
};

#undef LANG_HOOKS_NAME
#undef LANG_HOOKS_INIT
#undef LANG_HOOKS_FINISH
@@ -109,6 +127,7 @@ static tree gfc_builtin_function (tree);
#undef LANG_HOOKS_OMP_CLAUSE_COPY_CTOR
#undef LANG_HOOKS_OMP_CLAUSE_ASSIGN_OP
#undef LANG_HOOKS_OMP_CLAUSE_DTOR
#undef LANG_HOOKS_OMP_FINISH_CLAUSE
#undef LANG_HOOKS_OMP_DISREGARD_VALUE_EXPR
#undef LANG_HOOKS_OMP_PRIVATE_DEBUG_CLAUSE
#undef LANG_HOOKS_OMP_PRIVATE_OUTER_REF
@@ -116,6 +135,7 @@ static tree gfc_builtin_function (tree);
#undef LANG_HOOKS_BUILTIN_FUNCTION
#undef LANG_HOOKS_GET_ARRAY_DESCR_INFO
#undef LANG_HOOKS_ATTRIBUTE_TABLE

/* Define lang hooks. */
#define LANG_HOOKS_NAME "GNU Fortran"
@@ -139,13 +159,15 @@ static tree gfc_builtin_function (tree);
#define LANG_HOOKS_OMP_CLAUSE_COPY_CTOR gfc_omp_clause_copy_ctor
#define LANG_HOOKS_OMP_CLAUSE_ASSIGN_OP gfc_omp_clause_assign_op
#define LANG_HOOKS_OMP_CLAUSE_DTOR gfc_omp_clause_dtor
#define LANG_HOOKS_OMP_FINISH_CLAUSE gfc_omp_finish_clause
#define LANG_HOOKS_OMP_DISREGARD_VALUE_EXPR gfc_omp_disregard_value_expr
#define LANG_HOOKS_OMP_PRIVATE_DEBUG_CLAUSE gfc_omp_private_debug_clause
#define LANG_HOOKS_OMP_PRIVATE_OUTER_REF gfc_omp_private_outer_ref
#define LANG_HOOKS_OMP_FIRSTPRIVATIZE_TYPE_SIZES \
gfc_omp_firstprivatize_type_sizes
#define LANG_HOOKS_BUILTIN_FUNCTION gfc_builtin_function
#define LANG_HOOKS_GET_ARRAY_DESCR_INFO gfc_get_array_descr_info
#define LANG_HOOKS_ATTRIBUTE_TABLE gfc_attribute_table

struct lang_hooks lang_hooks = LANG_HOOKS_INITIALIZER;
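The new gfc_attribute_table is a sentinel-terminated array scanned by the middle end's attribute machinery. A minimal sketch of that lookup pattern, using a simplified stand-in struct (the real GCC attribute_spec has the field order shown in the table's comment; the names here are otherwise hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for GCC's attribute_spec; the fields mirror the
   comment above the table: { name, min_len, max_len, decl_req, type_req,
   fn_type_req, handler, affects_type_identity }.  */
struct attr_spec
{
  const char *name;
  int min_len, max_len;
  bool decl_req, type_req, fn_type_req;
  void *handler;               /* handler callback elided in this sketch */
  bool affects_type_identity;
};

/* Like gfc_attribute_table: an entry with a NULL name terminates it.  */
static const struct attr_spec table[] =
{
  { "omp declare target", 0, 0, true, false, false, NULL, false },
  { NULL, 0, 0, false, false, false, NULL, false }
};

/* Scan up to the sentinel, the way attribute lookup walks the table.  */
static const struct attr_spec *
find_attr (const char *name)
{
  for (const struct attr_spec *p = table; p->name; p++)
    if (strcmp (p->name, name) == 0)
      return p;
  return NULL;
}
```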
@@ -2147,14 +2147,31 @@ gfc_code_walker (gfc_code **c, walk_code_fn_t codefn, walk_expr_fn_t exprfn,
in_omp_workshare = true;

/* Fall through */

case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_SECTIONS:
case EXEC_OMP_SINGLE:
case EXEC_OMP_END_SINGLE:
case EXEC_OMP_SIMD:
case EXEC_OMP_TARGET:
case EXEC_OMP_TARGET_DATA:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TARGET_UPDATE:
case EXEC_OMP_TASK:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:

/* Come to this label only from the
EXEC_OMP_PARALLEL_* cases above. */
@@ -2163,28 +2180,28 @@ gfc_code_walker (gfc_code **c, walk_code_fn_t codefn, walk_expr_fn_t exprfn,
if (co->ext.omp_clauses)
{
gfc_omp_namelist *n;
static int list_types[]
= { OMP_LIST_ALIGNED, OMP_LIST_LINEAR, OMP_LIST_DEPEND,
OMP_LIST_MAP, OMP_LIST_TO, OMP_LIST_FROM };
size_t idx;
WALK_SUBEXPR (co->ext.omp_clauses->if_expr);
WALK_SUBEXPR (co->ext.omp_clauses->final_expr);
WALK_SUBEXPR (co->ext.omp_clauses->num_threads);
WALK_SUBEXPR (co->ext.omp_clauses->chunk_size);
WALK_SUBEXPR (co->ext.omp_clauses->safelen_expr);
WALK_SUBEXPR (co->ext.omp_clauses->simdlen_expr);
WALK_SUBEXPR (co->ext.omp_clauses->num_teams);
WALK_SUBEXPR (co->ext.omp_clauses->device);
WALK_SUBEXPR (co->ext.omp_clauses->thread_limit);
WALK_SUBEXPR (co->ext.omp_clauses->dist_chunk_size);
for (idx = 0;
idx < sizeof (list_types) / sizeof (list_types[0]);
idx++)
for (n = co->ext.omp_clauses->lists[list_types[idx]];
n; n = n->next)
WALK_SUBEXPR (n->expr);
}
break;
default:
break;
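The new walker replaces several copy-pasted per-list loops with one table-driven nested loop: an array of list kinds indexes into the clause's lists[] array. A self-contained sketch of that shape, with hypothetical node and list names standing in for gfc_omp_namelist and OMP_LIST_*:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for the gfc_omp_namelist chains walked above.  */
struct node { int expr; struct node *next; };

enum { LIST_ALIGNED, LIST_LINEAR, LIST_DEPEND, LIST_MAP, LIST_NUM };

/* Same shape as the loop in gfc_code_walker: iterate over a static table
   of list kinds, then over the chain stored at each lists[] slot.  */
static int
walk_lists (struct node *lists[LIST_NUM])
{
  static const int list_types[]
    = { LIST_ALIGNED, LIST_LINEAR, LIST_DEPEND, LIST_MAP };
  int walked = 0;
  for (size_t idx = 0;
       idx < sizeof (list_types) / sizeof (list_types[0]);
       idx++)
    for (struct node *n = lists[list_types[idx]]; n; n = n->next)
      walked++;   /* the real walker does WALK_SUBEXPR (n->expr) here */
  return walked;
}
```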
@@ -215,6 +215,24 @@ typedef enum
ST_OMP_TASKGROUP, ST_OMP_END_TASKGROUP, ST_OMP_SIMD, ST_OMP_END_SIMD,
ST_OMP_DO_SIMD, ST_OMP_END_DO_SIMD, ST_OMP_PARALLEL_DO_SIMD,
ST_OMP_END_PARALLEL_DO_SIMD, ST_OMP_DECLARE_SIMD, ST_OMP_DECLARE_REDUCTION,
ST_OMP_TARGET, ST_OMP_END_TARGET, ST_OMP_TARGET_DATA, ST_OMP_END_TARGET_DATA,
ST_OMP_TARGET_UPDATE, ST_OMP_DECLARE_TARGET,
ST_OMP_TEAMS, ST_OMP_END_TEAMS, ST_OMP_DISTRIBUTE, ST_OMP_END_DISTRIBUTE,
ST_OMP_DISTRIBUTE_SIMD, ST_OMP_END_DISTRIBUTE_SIMD,
ST_OMP_DISTRIBUTE_PARALLEL_DO, ST_OMP_END_DISTRIBUTE_PARALLEL_DO,
ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD, ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD,
ST_OMP_TARGET_TEAMS, ST_OMP_END_TARGET_TEAMS, ST_OMP_TEAMS_DISTRIBUTE,
ST_OMP_END_TEAMS_DISTRIBUTE, ST_OMP_TEAMS_DISTRIBUTE_SIMD,
ST_OMP_END_TEAMS_DISTRIBUTE_SIMD, ST_OMP_TARGET_TEAMS_DISTRIBUTE,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE, ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD, ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO,
ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO,
ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
ST_PROCEDURE, ST_GENERIC, ST_CRITICAL, ST_END_CRITICAL,
ST_GET_FCN_CHARACTERISTICS, ST_LOCK, ST_UNLOCK, ST_NONE
}
@@ -821,6 +839,9 @@ typedef struct
!$OMP DECLARE REDUCTION. */
unsigned omp_udr_artificial_var:1;

/* Mentioned in OMP DECLARE TARGET. */
unsigned omp_declare_target:1;

/* Attributes set by compiler extensions (!GCC$ ATTRIBUTES). */
unsigned ext_attr:EXT_ATTR_NUM;
@@ -1060,6 +1081,23 @@ typedef enum
}
gfc_omp_reduction_op;

typedef enum
{
OMP_DEPEND_IN,
OMP_DEPEND_OUT,
OMP_DEPEND_INOUT
}
gfc_omp_depend_op;

typedef enum
{
OMP_MAP_ALLOC,
OMP_MAP_TO,
OMP_MAP_FROM,
OMP_MAP_TOFROM
}
gfc_omp_map_op;

/* For use in OpenMP clauses in case we need extra information
(aligned clause alignment, linear clause step, etc.). */

@@ -1067,7 +1105,12 @@ typedef struct gfc_omp_namelist
{
struct gfc_symbol *sym;
struct gfc_expr *expr;
union
{
gfc_omp_reduction_op reduction_op;
gfc_omp_depend_op depend_op;
gfc_omp_map_op map_op;
} u;
struct gfc_omp_udr *udr;
struct gfc_omp_namelist *next;
}
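The patch turns the single rop field into a union u whose live member depends on which clause list the namelist node sits on (reduction, depend, or map), matching the ChangeLog note that rop became u.reduction_op. A sketch of that discriminated-union access, with illustrative enumerator values; in gfortran the discriminant is implicit in the lists[] index rather than stored in the node:

```c
#include <assert.h>

/* Illustrative counterparts of the enums declared above.  */
enum reduction_op { RED_PLUS, RED_TIMES };
enum depend_op { DEP_IN, DEP_OUT, DEP_INOUT };
enum map_op { MAP_ALLOC, MAP_TO, MAP_FROM, MAP_TOFROM };

enum list_kind { LIST_REDUCTION, LIST_DEPEND, LIST_MAP };

struct namelist
{
  enum list_kind kind;   /* gfortran keeps this implicit in lists[] */
  union
  {
    enum reduction_op reduction_op;
    enum depend_op depend_op;
    enum map_op map_op;
  } u;
};

/* Only read u.map_op for nodes that live on the map list.  */
static enum map_op
get_map_op (const struct namelist *n)
{
  assert (n->kind == LIST_MAP);
  return n->u.map_op;
}
```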
@@ -1086,8 +1129,10 @@ enum
OMP_LIST_UNIFORM,
OMP_LIST_ALIGNED,
OMP_LIST_LINEAR,
OMP_LIST_DEPEND,
OMP_LIST_MAP,
OMP_LIST_TO,
OMP_LIST_FROM,
OMP_LIST_REDUCTION,
OMP_LIST_NUM
};
@@ -1147,6 +1192,11 @@ typedef struct gfc_omp_clauses
enum gfc_omp_proc_bind_kind proc_bind;
struct gfc_expr *safelen_expr;
struct gfc_expr *simdlen_expr;
struct gfc_expr *num_teams;
struct gfc_expr *device;
struct gfc_expr *thread_limit;
enum gfc_omp_sched_kind dist_sched_kind;
struct gfc_expr *dist_chunk_size;
}
gfc_omp_clauses;
@@ -1387,7 +1437,7 @@ struct gfc_undo_change_set
typedef struct gfc_common_head
{
locus where;
char use_assoc, saved, threadprivate, omp_declare_target;
char name[GFC_MAX_SYMBOL_LEN + 1];
struct gfc_symbol *head;
const char* binding_label;
@@ -2217,7 +2267,17 @@ typedef enum
EXEC_OMP_END_SINGLE, EXEC_OMP_TASK, EXEC_OMP_TASKWAIT,
EXEC_OMP_TASKYIELD, EXEC_OMP_CANCEL, EXEC_OMP_CANCELLATION_POINT,
EXEC_OMP_TASKGROUP, EXEC_OMP_SIMD, EXEC_OMP_DO_SIMD,
EXEC_OMP_PARALLEL_DO_SIMD, EXEC_OMP_TARGET, EXEC_OMP_TARGET_DATA,
EXEC_OMP_TEAMS, EXEC_OMP_DISTRIBUTE, EXEC_OMP_DISTRIBUTE_SIMD,
EXEC_OMP_DISTRIBUTE_PARALLEL_DO, EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD,
EXEC_OMP_TARGET_TEAMS, EXEC_OMP_TEAMS_DISTRIBUTE,
EXEC_OMP_TEAMS_DISTRIBUTE_SIMD, EXEC_OMP_TARGET_TEAMS_DISTRIBUTE,
EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD,
EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO,
EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO,
EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD,
EXEC_OMP_TARGET_UPDATE
}
gfc_exec_op;

@@ -2682,6 +2742,7 @@ bool gfc_add_protected (symbol_attribute *, const char *, locus *);
bool gfc_add_result (symbol_attribute *, const char *, locus *);
bool gfc_add_save (symbol_attribute *, save_state, const char *, locus *);
bool gfc_add_threadprivate (symbol_attribute *, const char *, locus *);
bool gfc_add_omp_declare_target (symbol_attribute *, const char *, locus *);
bool gfc_add_saved_common (symbol_attribute *, locus *);
bool gfc_add_target (symbol_attribute *, locus *);
bool gfc_add_dummy (symbol_attribute *, const char *, locus *);
@@ -131,6 +131,11 @@ match gfc_match_omp_cancellation_point (void);
match gfc_match_omp_critical (void);
match gfc_match_omp_declare_reduction (void);
match gfc_match_omp_declare_simd (void);
match gfc_match_omp_declare_target (void);
match gfc_match_omp_distribute (void);
match gfc_match_omp_distribute_parallel_do (void);
match gfc_match_omp_distribute_parallel_do_simd (void);
match gfc_match_omp_distribute_simd (void);
match gfc_match_omp_do (void);
match gfc_match_omp_do_simd (void);
match gfc_match_omp_flush (void);
@@ -144,10 +149,23 @@ match gfc_match_omp_parallel_workshare (void);
match gfc_match_omp_sections (void);
match gfc_match_omp_simd (void);
match gfc_match_omp_single (void);
match gfc_match_omp_target (void);
match gfc_match_omp_target_data (void);
match gfc_match_omp_target_teams (void);
match gfc_match_omp_target_teams_distribute (void);
match gfc_match_omp_target_teams_distribute_parallel_do (void);
match gfc_match_omp_target_teams_distribute_parallel_do_simd (void);
match gfc_match_omp_target_teams_distribute_simd (void);
match gfc_match_omp_target_update (void);
match gfc_match_omp_task (void);
match gfc_match_omp_taskgroup (void);
match gfc_match_omp_taskwait (void);
match gfc_match_omp_taskyield (void);
match gfc_match_omp_teams (void);
match gfc_match_omp_teams_distribute (void);
match gfc_match_omp_teams_distribute_parallel_do (void);
match gfc_match_omp_teams_distribute_parallel_do_simd (void);
match gfc_match_omp_teams_distribute_simd (void);
match gfc_match_omp_threadprivate (void);
match gfc_match_omp_workshare (void);
match gfc_match_omp_end_nowait (void);
@@ -1877,7 +1877,7 @@ typedef enum
AB_IS_BIND_C, AB_IS_C_INTEROP, AB_IS_ISO_C, AB_ABSTRACT, AB_ZERO_COMP,
AB_IS_CLASS, AB_PROCEDURE, AB_PROC_POINTER, AB_ASYNCHRONOUS, AB_CODIMENSION,
AB_COARRAY_COMP, AB_VTYPE, AB_VTAB, AB_CONTIGUOUS, AB_CLASS_POINTER,
AB_IMPLICIT_PURE, AB_ARTIFICIAL, AB_UNLIMITED_POLY, AB_OMP_DECLARE_TARGET
}
ab_attribute;

@@ -1932,6 +1932,7 @@ static const mstring attr_bits[] =
minit ("CLASS_POINTER", AB_CLASS_POINTER),
minit ("IMPLICIT_PURE", AB_IMPLICIT_PURE),
minit ("UNLIMITED_POLY", AB_UNLIMITED_POLY),
minit ("OMP_DECLARE_TARGET", AB_OMP_DECLARE_TARGET),
minit (NULL, -1)
};

@@ -2110,6 +2111,8 @@ mio_symbol_attribute (symbol_attribute *attr)
MIO_NAME (ab_attribute) (AB_VTYPE, attr_bits);
if (attr->vtab)
MIO_NAME (ab_attribute) (AB_VTAB, attr_bits);
if (attr->omp_declare_target)
MIO_NAME (ab_attribute) (AB_OMP_DECLARE_TARGET, attr_bits);

mio_rparen ();

@@ -2273,6 +2276,9 @@ mio_symbol_attribute (symbol_attribute *attr)
case AB_VTAB:
attr->vtab = 1;
break;
case AB_OMP_DECLARE_TARGET:
attr->omp_declare_target = 1;
break;
}
}
}
@@ -633,12 +633,29 @@ decode_omp_directive (void)
ST_OMP_DECLARE_REDUCTION);
matchs ("declare simd", gfc_match_omp_declare_simd,
ST_OMP_DECLARE_SIMD);
matcho ("declare target", gfc_match_omp_declare_target,
ST_OMP_DECLARE_TARGET);
matchs ("distribute parallel do simd",
gfc_match_omp_distribute_parallel_do_simd,
ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("distribute parallel do", gfc_match_omp_distribute_parallel_do,
ST_OMP_DISTRIBUTE_PARALLEL_DO);
matchs ("distribute simd", gfc_match_omp_distribute_simd,
ST_OMP_DISTRIBUTE_SIMD);
matcho ("distribute", gfc_match_omp_distribute, ST_OMP_DISTRIBUTE);
matchs ("do simd", gfc_match_omp_do_simd, ST_OMP_DO_SIMD);
matcho ("do", gfc_match_omp_do, ST_OMP_DO);
break;
case 'e':
matcho ("end atomic", gfc_match_omp_eos, ST_OMP_END_ATOMIC);
matcho ("end critical", gfc_match_omp_critical, ST_OMP_END_CRITICAL);
matchs ("end distribute parallel do simd", gfc_match_omp_eos,
ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("end distribute parallel do", gfc_match_omp_eos,
ST_OMP_END_DISTRIBUTE_PARALLEL_DO);
matchs ("end distribute simd", gfc_match_omp_eos,
ST_OMP_END_DISTRIBUTE_SIMD);
matcho ("end distribute", gfc_match_omp_eos, ST_OMP_END_DISTRIBUTE);
matchs ("end do simd", gfc_match_omp_end_nowait, ST_OMP_END_DO_SIMD);
matcho ("end do", gfc_match_omp_end_nowait, ST_OMP_END_DO);
matchs ("end simd", gfc_match_omp_eos, ST_OMP_END_SIMD);
@@ -654,8 +671,29 @@ decode_omp_directive (void)
matcho ("end parallel", gfc_match_omp_eos, ST_OMP_END_PARALLEL);
matcho ("end sections", gfc_match_omp_end_nowait, ST_OMP_END_SECTIONS);
matcho ("end single", gfc_match_omp_end_single, ST_OMP_END_SINGLE);
matcho ("end target data", gfc_match_omp_eos, ST_OMP_END_TARGET_DATA);
matchs ("end target teams distribute parallel do simd",
gfc_match_omp_eos,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("end target teams distribute parallel do", gfc_match_omp_eos,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO);
matchs ("end target teams distribute simd", gfc_match_omp_eos,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD);
matcho ("end target teams distribute", gfc_match_omp_eos,
ST_OMP_END_TARGET_TEAMS_DISTRIBUTE);
matcho ("end target teams", gfc_match_omp_eos, ST_OMP_END_TARGET_TEAMS);
matcho ("end target", gfc_match_omp_eos, ST_OMP_END_TARGET);
matcho ("end taskgroup", gfc_match_omp_eos, ST_OMP_END_TASKGROUP);
matcho ("end task", gfc_match_omp_eos, ST_OMP_END_TASK);
matchs ("end teams distribute parallel do simd", gfc_match_omp_eos,
ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("end teams distribute parallel do", gfc_match_omp_eos,
ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO);
matchs ("end teams distribute simd", gfc_match_omp_eos,
ST_OMP_END_TEAMS_DISTRIBUTE_SIMD);
matcho ("end teams distribute", gfc_match_omp_eos,
ST_OMP_END_TEAMS_DISTRIBUTE);
matcho ("end teams", gfc_match_omp_eos, ST_OMP_END_TEAMS);
matcho ("end workshare", gfc_match_omp_end_nowait,
ST_OMP_END_WORKSHARE);
break;
@@ -685,10 +723,37 @@ decode_omp_directive (void)
matcho ("single", gfc_match_omp_single, ST_OMP_SINGLE);
break;
case 't':
matcho ("target data", gfc_match_omp_target_data, ST_OMP_TARGET_DATA);
matchs ("target teams distribute parallel do simd",
gfc_match_omp_target_teams_distribute_parallel_do_simd,
ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("target teams distribute parallel do",
gfc_match_omp_target_teams_distribute_parallel_do,
ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO);
matchs ("target teams distribute simd",
gfc_match_omp_target_teams_distribute_simd,
ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD);
matcho ("target teams distribute", gfc_match_omp_target_teams_distribute,
ST_OMP_TARGET_TEAMS_DISTRIBUTE);
matcho ("target teams", gfc_match_omp_target_teams, ST_OMP_TARGET_TEAMS);
matcho ("target update", gfc_match_omp_target_update,
ST_OMP_TARGET_UPDATE);
matcho ("target", gfc_match_omp_target, ST_OMP_TARGET);
matcho ("taskgroup", gfc_match_omp_taskgroup, ST_OMP_TASKGROUP);
matcho ("taskwait", gfc_match_omp_taskwait, ST_OMP_TASKWAIT);
matcho ("taskyield", gfc_match_omp_taskyield, ST_OMP_TASKYIELD);
matcho ("task", gfc_match_omp_task, ST_OMP_TASK);
matchs ("teams distribute parallel do simd",
gfc_match_omp_teams_distribute_parallel_do_simd,
ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD);
matcho ("teams distribute parallel do",
gfc_match_omp_teams_distribute_parallel_do,
ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO);
matchs ("teams distribute simd", gfc_match_omp_teams_distribute_simd,
ST_OMP_TEAMS_DISTRIBUTE_SIMD);
matcho ("teams distribute", gfc_match_omp_teams_distribute,
ST_OMP_TEAMS_DISTRIBUTE);
matcho ("teams", gfc_match_omp_teams, ST_OMP_TEAMS);
matcho ("threadprivate", gfc_match_omp_threadprivate,
ST_OMP_THREADPRIVATE);
break;
@@ -1094,8 +1159,8 @@ next_statement (void)
case ST_LABEL_ASSIGNMENT: case ST_FLUSH: case ST_OMP_FLUSH: \
case ST_OMP_BARRIER: case ST_OMP_TASKWAIT: case ST_OMP_TASKYIELD: \
case ST_OMP_CANCEL: case ST_OMP_CANCELLATION_POINT: \
case ST_OMP_TARGET_UPDATE: case ST_ERROR_STOP: case ST_SYNC_ALL: \
case ST_SYNC_IMAGES: case ST_SYNC_MEMORY: case ST_LOCK: case ST_UNLOCK

/* Statements that mark other executable statements. */

@@ -1108,14 +1173,27 @@ next_statement (void)
case ST_OMP_DO: case ST_OMP_PARALLEL_DO: case ST_OMP_ATOMIC: \
case ST_OMP_WORKSHARE: case ST_OMP_PARALLEL_WORKSHARE: \
case ST_OMP_TASK: case ST_OMP_TASKGROUP: case ST_OMP_SIMD: \
case ST_OMP_DO_SIMD: case ST_OMP_PARALLEL_DO_SIMD: case ST_OMP_TARGET: \
case ST_OMP_TARGET_DATA: case ST_OMP_TARGET_TEAMS: \
case ST_OMP_TARGET_TEAMS_DISTRIBUTE: \
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD: \
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO: \
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD: \
case ST_OMP_TEAMS: case ST_OMP_TEAMS_DISTRIBUTE: \
case ST_OMP_TEAMS_DISTRIBUTE_SIMD: \
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO: \
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD: case ST_OMP_DISTRIBUTE: \
case ST_OMP_DISTRIBUTE_SIMD: case ST_OMP_DISTRIBUTE_PARALLEL_DO: \
case ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD: \
case ST_CRITICAL

/* Declaration statements */

#define case_decl case ST_ATTR_DECL: case ST_COMMON: case ST_DATA_DECL: \
case ST_EQUIVALENCE: case ST_NAMELIST: case ST_STATEMENT_FUNCTION: \
case ST_TYPE: case ST_INTERFACE: case ST_OMP_THREADPRIVATE: \
case ST_PROCEDURE: case ST_OMP_DECLARE_SIMD: case ST_OMP_DECLARE_REDUCTION: \
case ST_OMP_DECLARE_TARGET

/* Block end statements. Errors associated with interchanging these
are detected in gfc_match_end(). */
@@ -1621,6 +1699,21 @@ gfc_ascii_statement (gfc_statement st)
case ST_OMP_DECLARE_SIMD:
p = "!$OMP DECLARE SIMD";
break;
case ST_OMP_DECLARE_TARGET:
p = "!$OMP DECLARE TARGET";
break;
case ST_OMP_DISTRIBUTE:
p = "!$OMP DISTRIBUTE";
break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_DISTRIBUTE_SIMD:
p = "!$OMP DISTRIBUTE SIMD";
break;
case ST_OMP_DO:
p = "!$OMP DO";
break;
@@ -1633,6 +1726,18 @@ gfc_ascii_statement (gfc_statement st)
case ST_OMP_END_CRITICAL:
p = "!$OMP END CRITICAL";
break;
case ST_OMP_END_DISTRIBUTE:
p = "!$OMP END DISTRIBUTE";
break;
case ST_OMP_END_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP END DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP END DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_END_DISTRIBUTE_SIMD:
p = "!$OMP END DISTRIBUTE SIMD";
break;
case ST_OMP_END_DO:
p = "!$OMP END DO";
break;
@@ -1672,9 +1777,45 @@ gfc_ascii_statement (gfc_statement st)
case ST_OMP_END_TASK:
p = "!$OMP END TASK";
break;
case ST_OMP_END_TARGET:
p = "!$OMP END TARGET";
break;
case ST_OMP_END_TARGET_DATA:
p = "!$OMP END TARGET DATA";
break;
case ST_OMP_END_TARGET_TEAMS:
p = "!$OMP END TARGET TEAMS";
break;
case ST_OMP_END_TARGET_TEAMS_DISTRIBUTE:
p = "!$OMP END TARGET TEAMS DISTRIBUTE";
break;
case ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP END TARGET TEAMS DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP END TARGET TEAMS DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD:
p = "!$OMP END TARGET TEAMS DISTRIBUTE SIMD";
break;
case ST_OMP_END_TASKGROUP:
p = "!$OMP END TASKGROUP";
break;
case ST_OMP_END_TEAMS:
p = "!$OMP END TEAMS";
break;
case ST_OMP_END_TEAMS_DISTRIBUTE:
p = "!$OMP END TEAMS DISTRIBUTE";
break;
case ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP END TEAMS DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP END TEAMS DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_END_TEAMS_DISTRIBUTE_SIMD:
p = "!$OMP END TEAMS DISTRIBUTE SIMD";
break;
case ST_OMP_END_WORKSHARE:
p = "!$OMP END WORKSHARE";
break;
@@ -1714,6 +1855,30 @@ gfc_ascii_statement (gfc_statement st)
case ST_OMP_SINGLE:
p = "!$OMP SINGLE";
break;
case ST_OMP_TARGET:
p = "!$OMP TARGET";
break;
case ST_OMP_TARGET_DATA:
p = "!$OMP TARGET DATA";
break;
case ST_OMP_TARGET_TEAMS:
p = "!$OMP TARGET TEAMS";
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE:
p = "!$OMP TARGET TEAMS DISTRIBUTE";
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP TARGET TEAMS DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP TARGET TEAMS DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
p = "!$OMP TARGET TEAMS DISTRIBUTE SIMD";
break;
case ST_OMP_TARGET_UPDATE:
p = "!$OMP TARGET UPDATE";
break;
case ST_OMP_TASK:
p = "!$OMP TASK";
break;
@@ -1726,6 +1891,21 @@ gfc_ascii_statement (gfc_statement st)
case ST_OMP_TASKYIELD:
p = "!$OMP TASKYIELD";
break;
case ST_OMP_TEAMS:
p = "!$OMP TEAMS";
break;
case ST_OMP_TEAMS_DISTRIBUTE:
p = "!$OMP TEAMS DISTRIBUTE";
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
p = "!$OMP TEAMS DISTRIBUTE PARALLEL DO";
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
p = "!$OMP TEAMS DISTRIBUTE PARALLEL DO SIMD";
break;
case ST_OMP_TEAMS_DISTRIBUTE_SIMD:
p = "!$OMP TEAMS DISTRIBUTE SIMD";
break;
case ST_OMP_THREADPRIVATE:
p = "!$OMP THREADPRIVATE";
break;
@@ -3699,13 +3879,47 @@ parse_omp_do (gfc_statement omp_st)
gfc_statement omp_end_st = ST_OMP_END_DO;
switch (omp_st)
{
case ST_OMP_DISTRIBUTE: omp_end_st = ST_OMP_END_DISTRIBUTE; break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_DISTRIBUTE_SIMD;
break;
case ST_OMP_DO: omp_end_st = ST_OMP_END_DO; break;
case ST_OMP_DO_SIMD: omp_end_st = ST_OMP_END_DO_SIMD; break;
case ST_OMP_PARALLEL_DO: omp_end_st = ST_OMP_END_PARALLEL_DO; break;
case ST_OMP_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_PARALLEL_DO_SIMD;
break;
case ST_OMP_SIMD: omp_end_st = ST_OMP_END_SIMD; break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD;
break;
case ST_OMP_TEAMS_DISTRIBUTE:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE;
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_TEAMS_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_SIMD;
break;
default: gcc_unreachable ();
}
if (st == omp_end_st)
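The parse_omp_do switch above pairs each combined loop directive with its matching END statement, with gcc_unreachable for anything else. A trimmed, self-contained illustration of the same mapping, using hypothetical enumerators in place of the full gfc_statement set:

```c
#include <assert.h>

/* Illustrative subset of the statement codes used above.  */
enum st
{
  ST_DO, ST_DO_SIMD, ST_DISTRIBUTE,
  ST_END_DO, ST_END_DO_SIMD, ST_END_DISTRIBUTE
};

/* Map a directive to its matching END statement; unhandled directives
   are a programming error (gcc_unreachable in the real parser).  */
static enum st
omp_end_for (enum st omp_st)
{
  switch (omp_st)
    {
    case ST_DO: return ST_END_DO;
    case ST_DO_SIMD: return ST_END_DO_SIMD;
    case ST_DISTRIBUTE: return ST_END_DISTRIBUTE;
    default: assert (0 && "unhandled directive");
    }
  return ST_END_DO; /* not reached */
}
```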
@@ -3814,12 +4028,60 @@ parse_omp_structured_block (gfc_statement omp_st, bool workshare_stmts_only)
case ST_OMP_SINGLE:
omp_end_st = ST_OMP_END_SINGLE;
break;
case ST_OMP_TARGET:
omp_end_st = ST_OMP_END_TARGET;
break;
case ST_OMP_TARGET_DATA:
omp_end_st = ST_OMP_END_TARGET_DATA;
break;
case ST_OMP_TARGET_TEAMS:
omp_end_st = ST_OMP_END_TARGET_TEAMS;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_TARGET_TEAMS_DISTRIBUTE_SIMD;
break;
case ST_OMP_TASK:
omp_end_st = ST_OMP_END_TASK;
break;
case ST_OMP_TASKGROUP:
omp_end_st = ST_OMP_END_TASKGROUP;
break;
case ST_OMP_TEAMS:
omp_end_st = ST_OMP_END_TEAMS;
break;
case ST_OMP_TEAMS_DISTRIBUTE:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE;
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_TEAMS_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_TEAMS_DISTRIBUTE_SIMD;
break;
case ST_OMP_DISTRIBUTE:
omp_end_st = ST_OMP_END_DISTRIBUTE;
break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO:
omp_end_st = ST_OMP_END_DISTRIBUTE_PARALLEL_DO;
break;
case ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
omp_end_st = ST_OMP_END_DISTRIBUTE_PARALLEL_DO_SIMD;
break;
case ST_OMP_DISTRIBUTE_SIMD:
omp_end_st = ST_OMP_END_DISTRIBUTE_SIMD;
break;
case ST_OMP_WORKSHARE:
omp_end_st = ST_OMP_END_WORKSHARE;
break;
@@ -4052,6 +4314,10 @@ parse_executable (gfc_statement st)
case ST_OMP_CRITICAL:
case ST_OMP_MASTER:
case ST_OMP_SINGLE:
case ST_OMP_TARGET:
case ST_OMP_TARGET_DATA:
case ST_OMP_TARGET_TEAMS:
case ST_OMP_TEAMS:
case ST_OMP_TASK:
case ST_OMP_TASKGROUP:
parse_omp_structured_block (st, false);
@@ -4062,11 +4328,23 @@ parse_executable (gfc_statement st)
parse_omp_structured_block (st, true);
break;

case ST_OMP_DISTRIBUTE:
case ST_OMP_DISTRIBUTE_PARALLEL_DO:
case ST_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case ST_OMP_DISTRIBUTE_SIMD:
case ST_OMP_DO:
case ST_OMP_DO_SIMD:
case ST_OMP_PARALLEL_DO:
case ST_OMP_PARALLEL_DO_SIMD:
case ST_OMP_SIMD:
case ST_OMP_TARGET_TEAMS_DISTRIBUTE:
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case ST_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case ST_OMP_TEAMS_DISTRIBUTE:
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case ST_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case ST_OMP_TEAMS_DISTRIBUTE_SIMD:
st = parse_omp_do (st);
if (st == ST_IMPLIED_ENDDO)
return st;
@@ -9032,6 +9032,10 @@ gfc_resolve_blocks (gfc_code *b, gfc_namespace *ns)

case EXEC_OMP_ATOMIC:
case EXEC_OMP_CRITICAL:
case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_MASTER:
@@ -9044,10 +9048,23 @@ gfc_resolve_blocks (gfc_code *b, gfc_namespace *ns)
case EXEC_OMP_SECTIONS:
case EXEC_OMP_SIMD:
case EXEC_OMP_SINGLE:
case EXEC_OMP_TARGET:
case EXEC_OMP_TARGET_DATA:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TARGET_UPDATE:
case EXEC_OMP_TASK:
case EXEC_OMP_TASKGROUP:
case EXEC_OMP_TASKWAIT:
case EXEC_OMP_TASKYIELD:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_WORKSHARE:
break;
@@ -9827,11 +9844,23 @@ resolve_code (gfc_code *code, gfc_namespace *ns)
case EXEC_OMP_PARALLEL_DO:
case EXEC_OMP_PARALLEL_DO_SIMD:
case EXEC_OMP_PARALLEL_SECTIONS:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TASK:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:
omp_workshare_save = omp_workshare_flag;
omp_workshare_flag = 0;
gfc_resolve_omp_parallel_blocks (code, ns);
break;
case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_SIMD:
@@ -10160,6 +10189,10 @@ resolve_code (gfc_code *code, gfc_namespace *ns)
case EXEC_OMP_CANCELLATION_POINT:
case EXEC_OMP_CRITICAL:
case EXEC_OMP_FLUSH:
case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_MASTER:
@@ -10167,9 +10200,23 @@ resolve_code (gfc_code *code, gfc_namespace *ns)
case EXEC_OMP_SECTIONS:
case EXEC_OMP_SIMD:
case EXEC_OMP_SINGLE:
case EXEC_OMP_TARGET:
case EXEC_OMP_TARGET_DATA:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TARGET_UPDATE:
case EXEC_OMP_TASK:
case EXEC_OMP_TASKGROUP:
case EXEC_OMP_TASKWAIT:
case EXEC_OMP_TASKYIELD:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_WORKSHARE:
gfc_resolve_omp_directive (code, ns);
break;
@@ -10179,7 +10226,6 @@ resolve_code (gfc_code *code, gfc_namespace *ns)
case EXEC_OMP_PARALLEL_DO_SIMD:
case EXEC_OMP_PARALLEL_SECTIONS:
case EXEC_OMP_PARALLEL_WORKSHARE:
omp_workshare_save = omp_workshare_flag;
omp_workshare_flag = 0;
gfc_resolve_omp_directive (code, ns);
@@ -13541,6 +13587,18 @@ resolve_symbol (gfc_symbol *sym)
|
||||
|| sym->ns->proc_name->attr.flavor != FL_MODULE)))
|
||||
gfc_error ("Threadprivate at %L isn't SAVEd", &sym->declared_at);
|
||||
|
||||
/* Check omp declare target restrictions. */
|
||||
if (sym->attr.omp_declare_target
|
||||
&& sym->attr.flavor == FL_VARIABLE
|
||||
&& !sym->attr.save
|
||||
&& !sym->ns->save_all
|
||||
&& (!sym->attr.in_common
|
||||
&& sym->module == NULL
|
||||
&& (sym->ns->proc_name == NULL
|
||||
|| sym->ns->proc_name->attr.flavor != FL_MODULE)))
|
||||
gfc_error ("!$OMP DECLARE TARGET variable '%s' at %L isn't SAVEd",
|
||||
sym->name, &sym->declared_at);
|
||||
|
||||
/* If we have come this far we can apply default-initializers, as
|
||||
described in 14.7.5, to those variables that have not already
|
||||
been assigned one. */
|
||||
|
@@ -187,6 +187,10 @@ gfc_free_statement (gfc_code *p)

case EXEC_OMP_CANCEL:
case EXEC_OMP_CANCELLATION_POINT:
case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_END_SINGLE:

@@ -197,7 +201,20 @@ gfc_free_statement (gfc_code *p)
case EXEC_OMP_SECTIONS:
case EXEC_OMP_SIMD:
case EXEC_OMP_SINGLE:
case EXEC_OMP_TARGET:
case EXEC_OMP_TARGET_DATA:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TARGET_UPDATE:
case EXEC_OMP_TASK:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_WORKSHARE:
case EXEC_OMP_PARALLEL_WORKSHARE:
gfc_free_omp_clauses (p->ext.omp_clauses);
@@ -367,6 +367,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
*asynchronous = "ASYNCHRONOUS", *codimension = "CODIMENSION",
*contiguous = "CONTIGUOUS", *generic = "GENERIC";
static const char *threadprivate = "THREADPRIVATE";
+static const char *omp_declare_target = "OMP DECLARE TARGET";

const char *a1, *a2;
int standard;

@@ -453,6 +454,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf (dummy, entry);
conf (dummy, intrinsic);
conf (dummy, threadprivate);
+conf (dummy, omp_declare_target);
conf (pointer, target);
conf (pointer, intrinsic);
conf (pointer, elemental);

@@ -495,6 +497,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf (in_equivalence, entry);
conf (in_equivalence, allocatable);
conf (in_equivalence, threadprivate);
+conf (in_equivalence, omp_declare_target);

conf (dummy, result);
conf (entry, result);

@@ -543,6 +546,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf (cray_pointee, in_common);
conf (cray_pointee, in_equivalence);
conf (cray_pointee, threadprivate);
+conf (cray_pointee, omp_declare_target);

conf (data, dummy);
conf (data, function);

@@ -596,6 +600,8 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf (proc_pointer, abstract)

+conf (entry, omp_declare_target)
+
a1 = gfc_code2string (flavors, attr->flavor);

if (attr->in_namelist

@@ -631,6 +637,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf2 (function);
conf2 (subroutine);
conf2 (threadprivate);
+conf2 (omp_declare_target);

if (attr->access == ACCESS_PUBLIC || attr->access == ACCESS_PRIVATE)
{

@@ -712,6 +719,7 @@ check_conflict (symbol_attribute *attr, const char *name, locus *where)
conf2 (subroutine);
conf2 (threadprivate);
conf2 (result);
+conf2 (omp_declare_target);

if (attr->intent != INTENT_UNKNOWN)
{

@@ -1206,6 +1214,22 @@ gfc_add_threadprivate (symbol_attribute *attr, const char *name, locus *where)
}

+bool
+gfc_add_omp_declare_target (symbol_attribute *attr, const char *name,
+			    locus *where)
+{
+  if (check_used (attr, name, where))
+    return false;
+
+  if (attr->omp_declare_target)
+    return true;
+
+  attr->omp_declare_target = 1;
+  return check_conflict (attr, name, where);
+}
+

bool
gfc_add_target (symbol_attribute *attr, locus *where)
{

@@ -1761,6 +1785,9 @@ gfc_copy_attr (symbol_attribute *dest, symbol_attribute *src, locus *where)
if (src->threadprivate
    && !gfc_add_threadprivate (dest, NULL, where))
  goto fail;
+if (src->omp_declare_target
+    && !gfc_add_omp_declare_target (dest, NULL, where))
+  goto fail;
if (src->target && !gfc_add_target (dest, where))
  goto fail;
if (src->dummy && !gfc_add_dummy (dest, NULL, where))
@@ -456,6 +456,11 @@ build_common_decl (gfc_common_head *com, tree union_type, bool is_init)
if (com->threadprivate)
  set_decl_tls_model (decl, decl_default_tls_model (decl));

+if (com->omp_declare_target)
+  DECL_ATTRIBUTES (decl)
+    = tree_cons (get_identifier ("omp declare target"),
+		 NULL_TREE, DECL_ATTRIBUTES (decl));
+
/* Place the back end declaration for this common block in
   GLOBAL_BINDING_LEVEL.  */
gfc_map_of_all_commons[identifier] = pushdecl_top_level (decl);

@@ -1222,6 +1222,10 @@ add_attributes_to_decl (symbol_attribute sym_attr, tree list)
  list = chainon (list, attr);
}

+if (sym_attr.omp_declare_target)
+  list = tree_cons (get_identifier ("omp declare target"),
+		    NULL_TREE, list);
+
return list;
}
[File diff suppressed because it is too large]
@@ -1851,6 +1851,10 @@ trans_code (gfc_code * code, tree cond)
case EXEC_OMP_CANCEL:
case EXEC_OMP_CANCELLATION_POINT:
case EXEC_OMP_CRITICAL:
case EXEC_OMP_DISTRIBUTE:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_DISTRIBUTE_SIMD:
case EXEC_OMP_DO:
case EXEC_OMP_DO_SIMD:
case EXEC_OMP_FLUSH:

@@ -1864,10 +1868,23 @@ trans_code (gfc_code * code, tree cond)
case EXEC_OMP_SECTIONS:
case EXEC_OMP_SIMD:
case EXEC_OMP_SINGLE:
case EXEC_OMP_TARGET:
case EXEC_OMP_TARGET_DATA:
case EXEC_OMP_TARGET_TEAMS:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TARGET_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_TARGET_UPDATE:
case EXEC_OMP_TASK:
case EXEC_OMP_TASKGROUP:
case EXEC_OMP_TASKWAIT:
case EXEC_OMP_TASKYIELD:
case EXEC_OMP_TEAMS:
case EXEC_OMP_TEAMS_DISTRIBUTE:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO:
case EXEC_OMP_TEAMS_DISTRIBUTE_PARALLEL_DO_SIMD:
case EXEC_OMP_TEAMS_DISTRIBUTE_SIMD:
case EXEC_OMP_WORKSHARE:
res = gfc_trans_omp_directive (code);
break;

@@ -671,6 +671,7 @@ tree gfc_omp_clause_default_ctor (tree, tree, tree);
tree gfc_omp_clause_copy_ctor (tree, tree, tree);
tree gfc_omp_clause_assign_op (tree, tree, tree);
tree gfc_omp_clause_dtor (tree, tree);
+void gfc_omp_finish_clause (tree, gimple_seq *);
bool gfc_omp_disregard_value_expr (tree, bool);
bool gfc_omp_private_debug_clause (tree, bool);
bool gfc_omp_private_outer_ref (tree);
@@ -5650,6 +5650,7 @@ omp_notice_variable (struct gimplify_omp_ctx *ctx, tree decl, bool in_code)
  n = splay_tree_lookup (ctx->variables, (splay_tree_key)decl);
  if (ctx->region_type == ORT_TARGET)
    {
+      ret = lang_hooks.decls.omp_disregard_value_expr (decl, true);
      if (n == NULL)
	{
	  if (!lang_hooks.types.omp_mappable_type (TREE_TYPE (decl)))

@@ -5662,8 +5663,12 @@ omp_notice_variable (struct gimplify_omp_ctx *ctx, tree decl, bool in_code)
	  omp_add_variable (ctx, decl, GOVD_MAP | flags);
	}
      else
-	n->value |= flags;
-      ret = lang_hooks.decls.omp_disregard_value_expr (decl, true);
+	{
+	  /* If nothing changed, there's nothing left to do.  */
+	  if ((n->value & flags) == flags)
+	    return ret;
+	  n->value |= flags;
+	}
      goto do_outer;
    }

@@ -6201,13 +6206,21 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
  gimplify_omp_ctxp = ctx;
}

+struct gimplify_adjust_omp_clauses_data
+{
+  tree *list_p;
+  gimple_seq *pre_p;
+};
+
/* For all variables that were not actually used within the context,
   remove PRIVATE, SHARED, and FIRSTPRIVATE clauses.  */

static int
gimplify_adjust_omp_clauses_1 (splay_tree_node n, void *data)
{
-  tree *list_p = (tree *) data;
+  tree *list_p = ((struct gimplify_adjust_omp_clauses_data *) data)->list_p;
+  gimple_seq *pre_p
+    = ((struct gimplify_adjust_omp_clauses_data *) data)->pre_p;
  tree decl = (tree) n->key;
  unsigned flags = n->value;
  enum omp_clause_code code;

@@ -6308,15 +6321,21 @@ gimplify_adjust_omp_clauses_1 (splay_tree_node n, void *data)
	  OMP_CLAUSE_LASTPRIVATE_FIRSTPRIVATE (nc) = 1;
	  OMP_CLAUSE_CHAIN (nc) = *list_p;
	  OMP_CLAUSE_CHAIN (clause) = nc;
-	  lang_hooks.decls.omp_finish_clause (nc);
+	  struct gimplify_omp_ctx *ctx = gimplify_omp_ctxp;
+	  gimplify_omp_ctxp = ctx->outer_context;
+	  lang_hooks.decls.omp_finish_clause (nc, pre_p);
+	  gimplify_omp_ctxp = ctx;
	}
  *list_p = clause;
-  lang_hooks.decls.omp_finish_clause (clause);
+  struct gimplify_omp_ctx *ctx = gimplify_omp_ctxp;
+  gimplify_omp_ctxp = ctx->outer_context;
+  lang_hooks.decls.omp_finish_clause (clause, pre_p);
+  gimplify_omp_ctxp = ctx;
  return 0;
}

static void
-gimplify_adjust_omp_clauses (tree *list_p)
+gimplify_adjust_omp_clauses (gimple_seq *pre_p, tree *list_p)
{
  struct gimplify_omp_ctx *ctx = gimplify_omp_ctxp;
  tree c, decl;

@@ -6521,7 +6540,10 @@ gimplify_adjust_omp_clauses (tree *list_p)
    }

  /* Add in any implicit data sharing.  */
-  splay_tree_foreach (ctx->variables, gimplify_adjust_omp_clauses_1, list_p);
+  struct gimplify_adjust_omp_clauses_data data;
+  data.list_p = list_p;
+  data.pre_p = pre_p;
+  splay_tree_foreach (ctx->variables, gimplify_adjust_omp_clauses_1, &data);

  gimplify_omp_ctxp = ctx->outer_context;
  delete_omp_context (ctx);

@@ -6552,7 +6574,7 @@ gimplify_omp_parallel (tree *expr_p, gimple_seq *pre_p)
  else
    pop_gimplify_context (NULL);

-  gimplify_adjust_omp_clauses (&OMP_PARALLEL_CLAUSES (expr));
+  gimplify_adjust_omp_clauses (pre_p, &OMP_PARALLEL_CLAUSES (expr));

  g = gimple_build_omp_parallel (body,
				 OMP_PARALLEL_CLAUSES (expr),

@@ -6588,7 +6610,7 @@ gimplify_omp_task (tree *expr_p, gimple_seq *pre_p)
  else
    pop_gimplify_context (NULL);

-  gimplify_adjust_omp_clauses (&OMP_TASK_CLAUSES (expr));
+  gimplify_adjust_omp_clauses (pre_p, &OMP_TASK_CLAUSES (expr));

  g = gimple_build_omp_task (body,
			     OMP_TASK_CLAUSES (expr),

@@ -6934,7 +6956,7 @@ gimplify_omp_for (tree *expr_p, gimple_seq *pre_p)
	TREE_OPERAND (TREE_OPERAND (t, 1), 0) = var;
      }

-  gimplify_adjust_omp_clauses (&OMP_FOR_CLAUSES (orig_for_stmt));
+  gimplify_adjust_omp_clauses (pre_p, &OMP_FOR_CLAUSES (orig_for_stmt));

  int kind;
  switch (TREE_CODE (orig_for_stmt))

@@ -7034,7 +7056,7 @@ gimplify_omp_workshare (tree *expr_p, gimple_seq *pre_p)
    }
  else
    gimplify_and_add (OMP_BODY (expr), &body);
-  gimplify_adjust_omp_clauses (&OMP_CLAUSES (expr));
+  gimplify_adjust_omp_clauses (pre_p, &OMP_CLAUSES (expr));

  switch (TREE_CODE (expr))
    {

@@ -7073,7 +7095,7 @@ gimplify_omp_target_update (tree *expr_p, gimple_seq *pre_p)

  gimplify_scan_omp_clauses (&OMP_TARGET_UPDATE_CLAUSES (expr), pre_p,
			     ORT_WORKSHARE);
-  gimplify_adjust_omp_clauses (&OMP_TARGET_UPDATE_CLAUSES (expr));
+  gimplify_adjust_omp_clauses (pre_p, &OMP_TARGET_UPDATE_CLAUSES (expr));
  stmt = gimple_build_omp_target (NULL, GF_OMP_TARGET_KIND_UPDATE,
				  OMP_TARGET_UPDATE_CLAUSES (expr));
@@ -75,6 +75,7 @@ extern bool lhd_handle_option (size_t, const char *, int, int, location_t,
extern int lhd_gimplify_expr (tree *, gimple_seq *, gimple_seq *);
extern enum omp_clause_default_kind lhd_omp_predetermined_sharing (tree);
extern tree lhd_omp_assignment (tree, tree, tree);
+extern void lhd_omp_finish_clause (tree, gimple_seq *);
struct gimplify_omp_ctx;
extern void lhd_omp_firstprivatize_type_sizes (struct gimplify_omp_ctx *,
					       tree);

@@ -215,7 +216,7 @@ extern tree lhd_make_node (enum tree_code);
#define LANG_HOOKS_OMP_CLAUSE_COPY_CTOR lhd_omp_assignment
#define LANG_HOOKS_OMP_CLAUSE_ASSIGN_OP lhd_omp_assignment
#define LANG_HOOKS_OMP_CLAUSE_DTOR hook_tree_tree_tree_null
-#define LANG_HOOKS_OMP_FINISH_CLAUSE hook_void_tree
+#define LANG_HOOKS_OMP_FINISH_CLAUSE lhd_omp_finish_clause

#define LANG_HOOKS_DECLS { \
  LANG_HOOKS_GLOBAL_BINDINGS_P, \
@@ -515,6 +515,13 @@ lhd_omp_assignment (tree clause ATTRIBUTE_UNUSED, tree dst, tree src)
  return build2 (MODIFY_EXPR, TREE_TYPE (dst), dst, src);
}

+/* Finalize clause C.  */
+
+void
+lhd_omp_finish_clause (tree, gimple_seq *)
+{
+}
+
/* Register language specific type size variables as potentially OpenMP
   firstprivate variables.  */
@@ -230,7 +230,7 @@ struct lang_hooks_for_decls
  tree (*omp_clause_dtor) (tree clause, tree decl);

  /* Do language specific checking on an implicitly determined clause.  */
-  void (*omp_finish_clause) (tree clause);
+  void (*omp_finish_clause) (tree clause, gimple_seq *pre_p);
};

/* Language hooks related to LTO serialization.  */
@@ -1678,6 +1678,11 @@ scan_sharing_clauses (tree clauses, omp_context *ctx)
	}
      else
	{
+	  if (ctx->outer)
+	    {
+	      scan_omp_op (&OMP_CLAUSE_DECL (c), ctx->outer);
+	      decl = OMP_CLAUSE_DECL (c);
+	    }
	  gcc_assert (!splay_tree_lookup (ctx->field_map,
					  (splay_tree_key) decl));
	  tree field

@@ -2011,6 +2016,7 @@ scan_omp_parallel (gimple_stmt_iterator *gsi, omp_context *outer_ctx)
	  tree temp = create_tmp_var (type, NULL);
	  tree c = build_omp_clause (UNKNOWN_LOCATION,
				     OMP_CLAUSE__LOOPTEMP_);
+	  insert_decl_map (&outer_ctx->cb, temp, temp);
	  OMP_CLAUSE_DECL (c) = temp;
	  OMP_CLAUSE_CHAIN (c) = gimple_omp_parallel_clauses (stmt);
	  gimple_omp_parallel_set_clauses (stmt, c);

@@ -2508,6 +2514,23 @@ check_omp_nesting_restrictions (gimple stmt, omp_context *ctx)
	  return false;
	}
      break;
+    case GIMPLE_OMP_TARGET:
+      for (; ctx != NULL; ctx = ctx->outer)
+	if (gimple_code (ctx->stmt) == GIMPLE_OMP_TARGET
+	    && gimple_omp_target_kind (ctx->stmt) == GF_OMP_TARGET_KIND_REGION)
+	  {
+	    const char *name;
+	    switch (gimple_omp_target_kind (stmt))
+	      {
+	      case GF_OMP_TARGET_KIND_REGION: name = "target"; break;
+	      case GF_OMP_TARGET_KIND_DATA: name = "target data"; break;
+	      case GF_OMP_TARGET_KIND_UPDATE: name = "target update"; break;
+	      default: gcc_unreachable ();
+	      }
+	    warning_at (gimple_location (stmt), 0,
+			"%s construct inside of target region", name);
+	  }
+      break;
    default:
      break;
    }

@@ -9041,7 +9064,10 @@ lower_omp_for (gimple_stmt_iterator *gsi_p, omp_context *ctx)
				OMP_CLAUSE__LOOPTEMP_);
	}
      else
-	temp = create_tmp_var (type, NULL);
+	{
+	  temp = create_tmp_var (type, NULL);
+	  insert_decl_map (&ctx->outer->cb, temp, temp);
+	}
      *pc = build_omp_clause (UNKNOWN_LOCATION, OMP_CLAUSE__LOOPTEMP_);
      OMP_CLAUSE_DECL (*pc) = temp;
      pc = &OMP_CLAUSE_CHAIN (*pc);
@@ -1,3 +1,14 @@
+2014-06-18  Jakub Jelinek  <jakub@redhat.com>
+
+	* gfortran.dg/gomp/declare-simd-1.f90: New test.
+	* gfortran.dg/gomp/depend-1.f90: New test.
+	* gfortran.dg/gomp/target1.f90: New test.
+	* gfortran.dg/gomp/target2.f90: New test.
+	* gfortran.dg/gomp/target3.f90: New test.
+	* gfortran.dg/gomp/udr4.f90: Adjust expected diagnostics.
+	* gfortran.dg/openmp-define-3.f90: Expect _OPENMP 201307 instead of
+	201107.
+
 2014-06-18  Dominique d'Humieres  <dominiq@lps.ens.fr>

 	PR fortran/61126
gcc/testsuite/gfortran.dg/gomp/declare-simd-1.f90 (new file)
@@ -0,0 +1,9 @@
! { dg-do compile }

subroutine fn1 (x)
integer :: x
!$omp declare simd (fn1) inbranch notinbranch uniform (x) ! { dg-error "Unclassifiable OpenMP directive" }
end subroutine fn1
subroutine fn2 (x)
!$omp declare simd (fn100) ! { dg-error "should refer to containing procedure" }
end subroutine fn2
gcc/testsuite/gfortran.dg/gomp/depend-1.f90 (new file)
@@ -0,0 +1,13 @@
! { dg-do compile }

subroutine foo (x)
integer :: x(5, *)
!$omp parallel
!$omp single
!$omp task depend(in:x(:,5))
!$omp end task
!$omp task depend(in:x(5,:)) ! { dg-error "Rightmost upper bound of assumed size array section|proper array section" }
!$omp end task
!$omp end single
!$omp end parallel
end
gcc/testsuite/gfortran.dg/gomp/target1.f90 (new file)
@@ -0,0 +1,520 @@
! { dg-do compile }
! { dg-options "-fopenmp" }

module target1
interface
subroutine dosomething (a, n, m)
integer :: a (:), n, m
!$omp declare target
end subroutine dosomething
end interface
contains
subroutine foo (n, o, p, q, r, pp)
integer :: n, o, p, q, r, s, i, j
integer :: a (2:o)
integer, pointer :: pp
!$omp target data device (n + 1) if (n .ne. 6) map (tofrom: n, r)
!$omp target device (n + 1) if (n .ne. 6) map (from: n) map (alloc: a(2:o))
call dosomething (a, n, 0)
!$omp end target
!$omp target teams device (n + 1) num_teams (n + 4) thread_limit (n * 2) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r)
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp end target teams
!$omp target teams distribute device (n + 1) num_teams (n + 4) collapse (2) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp target teams distribute device (n + 1) num_teams (n + 4) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end target teams distribute
!$omp target teams distribute parallel do device (n + 1) num_teams (n + 4) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s) &
!$omp & ordered schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp ordered
p = q
!$omp end ordered
s = i * 10 + j
end do
end do
!$omp target teams distribute parallel do device (n + 1) num_teams (n + 4) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) ordered schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
!$omp ordered
p = q
!$omp end ordered
s = i * 10
end do
!$omp end target teams distribute parallel do
!$omp target teams distribute parallel do simd device (n + 1) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s) &
!$omp & schedule (static, 8) num_teams (n + 4) safelen(8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp target teams distribute parallel do simd device (n + 1) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) schedule (static, 8) &
!$omp & num_teams (n + 4) safelen(16) linear(i:1) aligned (pp:4)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end target teams distribute parallel do simd
!$omp target teams distribute simd device (n + 1) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & lastprivate (s) num_teams (n + 4) safelen(8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp target teams distribute simd device (n + 1) &
!$omp & if (n .ne. 6)map (from: n) map (alloc: a(2:o)) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) lastprivate (s) &
!$omp & num_teams (n + 4) safelen(16) linear(i:1) aligned (pp:4)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end target teams distribute simd
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams num_teams (n + 4) thread_limit (n * 2) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r)
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp end teams
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute num_teams (n + 4) collapse (2) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute num_teams (n + 4) default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end teams distribute
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute parallel do num_teams (n + 4) &
!$omp & if (n .ne. 6) default(shared) ordered schedule (static, 8) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp ordered
p = q
!$omp end ordered
s = i * 10 + j
end do
end do
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute parallel do num_teams (n + 4)if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) ordered schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
!$omp ordered
p = q
!$omp end ordered
s = i * 10
end do
!$omp end teams distribute parallel do
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute parallel do simd if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s) &
!$omp & schedule (static, 8) num_teams (n + 4) safelen(8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute parallel do simd if (n .ne. 6)default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) schedule (static, 8) &
!$omp & num_teams (n + 4) safelen(16) linear(i:1) aligned (pp:4)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end teams distribute parallel do simd
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute simd default(shared) safelen(8) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) collapse (2) &
!$omp & lastprivate (s) num_teams (n + 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp end target
!$omp target device (n + 1) if (n .ne. 6)map (from: n) map (alloc: a(2:o))
!$omp teams distribute simd default(shared) aligned (pp:4) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & thread_limit (n * 2) dist_schedule (static, 4) lastprivate (s)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end teams distribute simd
!$omp end target
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction ( + : r )
!$omp distribute collapse (2) firstprivate (q) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute firstprivate (q) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end distribute
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do if (n .ne. 6) default(shared) &
!$omp & ordered schedule (static, 8) private (p) firstprivate (q) &
!$omp & shared(n)reduction(+:r)dist_schedule(static,4)collapse(2)&
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp ordered
p = q
!$omp end ordered
s = i * 10 + j
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) ordered schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
!$omp ordered
p = q
!$omp end ordered
s = i * 10
end do
!$omp end distribute parallel do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do simd if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & dist_schedule (static, 4) collapse (2) safelen(8) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s) &
|
||||
!$omp & schedule (static, 8)
|
||||
do i = 1, 10
|
||||
do j = 1, 10
|
||||
r = r + 1
|
||||
p = q
|
||||
a(2+i*10+j) = p + q
|
||||
s = i * 10 + j
|
||||
end do
|
||||
end do
|
||||
!$omp end target teams
|
||||
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
|
||||
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
|
||||
!$omp & default(shared) shared(n) private (p) reduction(+:r)
|
||||
!$omp distribute parallel do simd if (n .ne. 6)default(shared) &
|
||||
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
|
||||
!$omp & dist_schedule (static, 4) num_threads (n + 4) &
|
||||
!$omp & proc_bind (master) lastprivate (s) schedule (static, 8) &
|
||||
!$omp & safelen(16) linear(i:1) aligned (pp:4)
|
||||
do i = 1, 10
|
||||
r = r + 1
|
||||
p = q
|
||||
a(1+i) = p + q
|
||||
s = i * 10
|
||||
end do
|
||||
!$omp end distribute parallel do simd
|
||||
!$omp end target teams
|
||||
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
|
||||
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
|
||||
!$omp & default(shared) shared(n) private (p) reduction(+:r)
|
||||
!$omp distribute simd safelen(8) lastprivate(s) &
|
||||
!$omp & private (p) firstprivate (q) reduction (+: r) &
|
||||
!$omp & dist_schedule (static, 4) collapse (2)
|
||||
do i = 1, 10
|
||||
do j = 1, 10
|
||||
r = r + 1
|
||||
p = q
|
||||
a(2+i*10+j) = p + q
|
||||
s = i * 10 + j
|
||||
end do
|
||||
end do
|
||||
!$omp end target teams
|
||||
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
|
||||
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
|
||||
!$omp & default(shared) shared(n) private (p) reduction(+:r)
|
||||
!$omp distribute simd aligned (pp:4) &
|
||||
!$omp & private (p) firstprivate (q) reduction (+: r) &
|
||||
!$omp & dist_schedule (static, 4) lastprivate (s)
|
||||
do i = 1, 10
|
||||
r = r + 1
|
||||
p = q
|
||||
a(1+i) = p + q
|
||||
s = i * 10
|
||||
end do
|
||||
!$omp end distribute simd
|
||||
!$omp end target teams
|
||||
!$omp end target data
|
||||
end subroutine
|
||||
subroutine bar (n, o, p, r, pp)
integer :: n, o, p, q, r, s, i, j
integer :: a (2:o)
integer, pointer :: pp
common /blk/ i, j, q
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction ( + : r )
!$omp distribute collapse (2) firstprivate (q) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute firstprivate (q) dist_schedule (static, 4)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
end do
!$omp end distribute
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do if (n .ne. 6) default(shared) &
!$omp & ordered schedule (static, 8) private (p) firstprivate (q) &
!$omp & shared(n)reduction(+:r)dist_schedule(static,4)collapse(2)&
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
!$omp ordered
p = q
!$omp end ordered
s = i * 10 + j
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) ordered schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
call dosomething (a, n, p + q)
end do
!$omp ordered
p = q
!$omp end ordered
s = i * 10
end do
!$omp end distribute parallel do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do simd if(n.ne.6)default(shared)&
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & dist_schedule (static, 4) collapse (2) safelen(8) &
!$omp & num_threads (n + 4) proc_bind (spread) lastprivate (s) &
!$omp & schedule (static, 8)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute parallel do simd if (n .ne. 6)default(shared) &
!$omp & private (p) firstprivate (q) shared (n) reduction (+: r) &
!$omp & dist_schedule (static, 4) num_threads (n + 4) &
!$omp & proc_bind (master) lastprivate (s) schedule (static, 8) &
!$omp & safelen(16) linear(i:1) aligned (pp:4)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end distribute parallel do simd
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute simd safelen(8) lastprivate(s) &
!$omp & private (p) firstprivate (q) reduction (+: r) &
!$omp & dist_schedule (static, 4) collapse (2)
do i = 1, 10
do j = 1, 10
r = r + 1
p = q
a(2+i*10+j) = p + q
s = i * 10 + j
end do
end do
!$omp end target teams
!$omp target teams device (n + 1) if (n .ne. 6)map (from: n) &
!$omp & map (alloc: a(2:o)) num_teams (n + 4) thread_limit (n * 2) &
!$omp & default(shared) shared(n) private (p) reduction(+:r)
!$omp distribute simd aligned (pp:4) &
!$omp & private (p) firstprivate (q) reduction (+: r) &
!$omp & dist_schedule (static, 4) lastprivate (s)
do i = 1, 10
r = r + 1
p = q
a(1+i) = p + q
s = i * 10
end do
!$omp end distribute simd
!$omp end target teams
end subroutine
end module
new file: gcc/testsuite/gfortran.dg/gomp/target2.f90 (74 lines)
@@ -0,0 +1,74 @@
! { dg-do compile }
! { dg-options "-fopenmp -ffree-line-length-160" }

subroutine foo (n, s, t, u, v, w)
integer :: n, i, s, t, u, v, w
common /bar/ i
!$omp simd safelen(s + 1)
do i = 1, n
end do
!$omp do schedule (static, t * 2)
do i = 1, n
end do
!$omp do simd safelen(s + 1) schedule (static, t * 2)
do i = 1, n
end do
!$omp parallel do schedule (static, t * 2) num_threads (u - 1)
do i = 1, n
end do
!$omp parallel do simd safelen(s + 1) schedule (static, t * 2) num_threads (u - 1)
do i = 1, n
end do
!$omp distribute dist_schedule (static, v + 8)
do i = 1, n
end do
!$omp distribute simd dist_schedule (static, v + 8) safelen(s + 1)
do i = 1, n
end do
!$omp distribute parallel do simd dist_schedule (static, v + 8) safelen(s + 1) &
!$omp & schedule (static, t * 2) num_threads (u - 1)
do i = 1, n
end do
!$omp distribute parallel do dist_schedule (static, v + 8) num_threads (u - 1) &
!$omp & schedule (static, t * 2)
do i = 1, n
end do
!$omp target
!$omp teams distribute dist_schedule (static, v + 8) num_teams (w + 8)
do i = 1, n
end do
!$omp end target
!$omp target
!$omp teams distribute simd dist_schedule (static, v + 8) safelen(s + 1) &
!$omp & num_teams (w + 8)
do i = 1, n
end do
!$omp end target
!$omp target
!$omp teams distribute parallel do simd dist_schedule (static, v + 8) safelen(s + 1) &
!$omp & schedule (static, t * 2) num_threads (u - 1) num_teams (w + 8)
do i = 1, n
end do
!$omp end target
!$omp target
!$omp teams distribute parallel do dist_schedule (static, v + 8) num_threads (u - 1) &
!$omp & schedule (static, t * 2) num_teams (w + 8)
do i = 1, n
end do
!$omp end target
!$omp target teams distribute dist_schedule (static, v + 8) num_teams (w + 8)
do i = 1, n
end do
!$omp target teams distribute simd dist_schedule (static, v + 8) safelen(s + 1) &
!$omp & num_teams (w + 8)
do i = 1, n
end do
!$omp target teams distribute parallel do simd dist_schedule (static, v + 8) safelen(s + 1) &
!$omp & schedule (static, t * 2) num_threads (u - 1) num_teams (w + 8)
do i = 1, n
end do
!$omp target teams distribute parallel do dist_schedule (static, v + 8) num_threads (u - 1) &
!$omp & schedule (static, t * 2) num_teams (w + 8)
do i = 1, n
end do
end subroutine
new file: gcc/testsuite/gfortran.dg/gomp/target3.f90 (12 lines)
@@ -0,0 +1,12 @@
! { dg-do compile }
! { dg-options "-fopenmp" }

subroutine foo (r)
integer :: i, r
!$omp target
!$omp target teams distribute parallel do reduction (+: r) ! { dg-warning "target construct inside of target region" }
do i = 1, 10
r = r + 1
end do
!$omp end target
end subroutine
@@ -6,7 +6,7 @@ subroutine f3
!$omp declare reduction (foo) ! { dg-error "Unclassifiable OpenMP directive" }
!$omp declare reduction (foo:integer) ! { dg-error "Unclassifiable OpenMP directive" }
!$omp declare reduction (foo:integer:omp_out=omp_out+omp_in) &
-!$omp & initializer(omp_priv=0) initializer(omp_priv=0) ! { dg-error "Unclassifiable statement" }
+!$omp & initializer(omp_priv=0) initializer(omp_priv=0) ! { dg-error "Unexpected junk after" }
end subroutine f3
subroutine f4
implicit integer (o)
@@ -6,6 +6,6 @@
# error _OPENMP not defined
#endif

-#if _OPENMP != 201107
+#if _OPENMP != 201307
# error _OPENMP defined to wrong value
#endif
@@ -1152,6 +1152,11 @@ enum omp_clause_map_kind
     array sections.  OMP_CLAUSE_SIZE for these is not the pointer size,
     which is implicitly POINTER_SIZE / BITS_PER_UNIT, but the bias.  */
  OMP_CLAUSE_MAP_POINTER,
  /* Also internal, behaves like OMP_CLAUSE_MAP_TO, but additionally any
     OMP_CLAUSE_MAP_POINTER records consecutive after it which have addresses
     falling into that range will not be ignored if OMP_CLAUSE_MAP_TO_PSET
     wasn't mapped already.  */
  OMP_CLAUSE_MAP_TO_PSET,
  OMP_CLAUSE_MAP_LAST
};
@@ -1085,6 +1085,10 @@ convert_nonlocal_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_LINEAR:
	  if (OMP_CLAUSE_LINEAR_GIMPLE_SEQ (clause))
	    need_stmts = true;
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_nonlocal_reference_op (&OMP_CLAUSE_LINEAR_STEP (clause),
					 &dummy, wi);
	  goto do_decl_clause;

	case OMP_CLAUSE_PRIVATE:
@@ -1113,10 +1117,42 @@ convert_nonlocal_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_IF:
	case OMP_CLAUSE_NUM_THREADS:
	case OMP_CLAUSE_DEPEND:
	case OMP_CLAUSE_DEVICE:
	case OMP_CLAUSE_NUM_TEAMS:
	case OMP_CLAUSE_THREAD_LIMIT:
	case OMP_CLAUSE_SAFELEN:
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_nonlocal_reference_op (&OMP_CLAUSE_OPERAND (clause, 0),
					 &dummy, wi);
	  break;

	case OMP_CLAUSE_DIST_SCHEDULE:
	  if (OMP_CLAUSE_DIST_SCHEDULE_CHUNK_EXPR (clause) != NULL)
	    {
	      wi->val_only = true;
	      wi->is_lhs = false;
	      convert_nonlocal_reference_op (&OMP_CLAUSE_OPERAND (clause, 0),
					     &dummy, wi);
	    }
	  break;

	case OMP_CLAUSE_MAP:
	case OMP_CLAUSE_TO:
	case OMP_CLAUSE_FROM:
	  if (OMP_CLAUSE_SIZE (clause))
	    {
	      wi->val_only = true;
	      wi->is_lhs = false;
	      convert_nonlocal_reference_op (&OMP_CLAUSE_SIZE (clause),
					     &dummy, wi);
	    }
	  if (DECL_P (OMP_CLAUSE_DECL (clause)))
	    goto do_decl_clause;
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_nonlocal_reference_op (&OMP_CLAUSE_DECL (clause),
					 &dummy, wi);
	  break;

	case OMP_CLAUSE_NOWAIT:
@@ -1126,6 +1162,7 @@ convert_nonlocal_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_COLLAPSE:
	case OMP_CLAUSE_UNTIED:
	case OMP_CLAUSE_MERGEABLE:
	case OMP_CLAUSE_PROC_BIND:
	  break;

	default:
@@ -1620,6 +1657,10 @@ convert_local_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_LINEAR:
	  if (OMP_CLAUSE_LINEAR_GIMPLE_SEQ (clause))
	    need_stmts = true;
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_local_reference_op (&OMP_CLAUSE_LINEAR_STEP (clause), &dummy,
				      wi);
	  goto do_decl_clause;

	case OMP_CLAUSE_PRIVATE:
@@ -1653,12 +1694,45 @@ convert_local_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_IF:
	case OMP_CLAUSE_NUM_THREADS:
	case OMP_CLAUSE_DEPEND:
	case OMP_CLAUSE_DEVICE:
	case OMP_CLAUSE_NUM_TEAMS:
	case OMP_CLAUSE_THREAD_LIMIT:
	case OMP_CLAUSE_SAFELEN:
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_local_reference_op (&OMP_CLAUSE_OPERAND (clause, 0), &dummy,
				      wi);
	  break;

	case OMP_CLAUSE_DIST_SCHEDULE:
	  if (OMP_CLAUSE_DIST_SCHEDULE_CHUNK_EXPR (clause) != NULL)
	    {
	      wi->val_only = true;
	      wi->is_lhs = false;
	      convert_local_reference_op (&OMP_CLAUSE_OPERAND (clause, 0),
					  &dummy, wi);
	    }
	  break;

	case OMP_CLAUSE_MAP:
	case OMP_CLAUSE_TO:
	case OMP_CLAUSE_FROM:
	  if (OMP_CLAUSE_SIZE (clause))
	    {
	      wi->val_only = true;
	      wi->is_lhs = false;
	      convert_local_reference_op (&OMP_CLAUSE_SIZE (clause),
					  &dummy, wi);
	    }
	  if (DECL_P (OMP_CLAUSE_DECL (clause)))
	    goto do_decl_clause;
	  wi->val_only = true;
	  wi->is_lhs = false;
	  convert_local_reference_op (&OMP_CLAUSE_DECL (clause),
				      &dummy, wi);
	  break;

	case OMP_CLAUSE_NOWAIT:
	case OMP_CLAUSE_ORDERED:
	case OMP_CLAUSE_DEFAULT:
@@ -1666,6 +1740,7 @@ convert_local_omp_clauses (tree *pclauses, struct walk_stmt_info *wi)
	case OMP_CLAUSE_COLLAPSE:
	case OMP_CLAUSE_UNTIED:
	case OMP_CLAUSE_MERGEABLE:
	case OMP_CLAUSE_PROC_BIND:
	  break;

	default:
@@ -500,6 +500,7 @@ dump_omp_clause (pretty_printer *buffer, tree clause, int spc, int flags)
	  pp_string (buffer, "alloc");
	  break;
	case OMP_CLAUSE_MAP_TO:
	case OMP_CLAUSE_MAP_TO_PSET:
	  pp_string (buffer, "to");
	  break;
	case OMP_CLAUSE_MAP_FROM:
@@ -520,6 +521,9 @@ dump_omp_clause (pretty_printer *buffer, tree clause, int spc, int flags)
      if (OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_MAP
	  && OMP_CLAUSE_MAP_KIND (clause) == OMP_CLAUSE_MAP_POINTER)
	pp_string (buffer, " [pointer assign, bias: ");
      else if (OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_MAP
	       && OMP_CLAUSE_MAP_KIND (clause) == OMP_CLAUSE_MAP_TO_PSET)
	pp_string (buffer, " [pointer set, len: ");
      else
	pp_string (buffer, " [len: ");
      dump_generic_node (buffer, OMP_CLAUSE_SIZE (clause),
@@ -1,3 +1,22 @@
2014-06-18  Jakub Jelinek  <jakub@redhat.com>

	* omp_lib.f90.in (openmp_version): Set to 201307.
	* omp_lib.h.in (openmp_version): Likewise.
	* testsuite/libgomp.c/target-8.c: New test.
	* testsuite/libgomp.fortran/declare-simd-1.f90: Add notinbranch
	and inbranch clauses.
	* testsuite/libgomp.fortran/depend-3.f90: New test.
	* testsuite/libgomp.fortran/openmp_version-1.f: Adjust for new
	openmp_version.
	* testsuite/libgomp.fortran/openmp_version-2.f90: Likewise.
	* testsuite/libgomp.fortran/target1.f90: New test.
	* testsuite/libgomp.fortran/target2.f90: New test.
	* testsuite/libgomp.fortran/target3.f90: New test.
	* testsuite/libgomp.fortran/target4.f90: New test.
	* testsuite/libgomp.fortran/target5.f90: New test.
	* testsuite/libgomp.fortran/target6.f90: New test.
	* testsuite/libgomp.fortran/target7.f90: New test.

2014-06-10  Jakub Jelinek  <jakub@redhat.com>

	PR fortran/60928
@@ -42,7 +42,7 @@
module omp_lib
use omp_lib_kinds
implicit none
-        integer, parameter :: openmp_version = 201107
+        integer, parameter :: openmp_version = 201307

interface
subroutine omp_init_lock (svar)
@@ -45,7 +45,7 @@
parameter (omp_proc_bind_master = 2)
parameter (omp_proc_bind_close = 3)
parameter (omp_proc_bind_spread = 4)
-      parameter (openmp_version = 201107)
+      parameter (openmp_version = 201307)

external omp_init_lock, omp_init_nest_lock
external omp_destroy_lock, omp_destroy_nest_lock
new file: libgomp/testsuite/libgomp.c/target-8.c (26 lines)
@@ -0,0 +1,26 @@
/* { dg-do run } */
/* { dg-options "-fopenmp" } */

void
foo (int *p)
{
  int i;
  #pragma omp parallel
  #pragma omp single
  #pragma omp target teams distribute parallel for map(p[0:24])
  for (i = 0; i < 24; i++)
    p[i] = p[i] + 1;
}

int
main ()
{
  int p[24], i;
  for (i = 0; i < 24; i++)
    p[i] = i;
  foo (p);
  for (i = 0; i < 24; i++)
    if (p[i] != i + 1)
      __builtin_abort ();
  return 0;
}
@@ -6,7 +6,8 @@
module declare_simd_1_mod
contains
real function foo (a, b, c)
-!$omp declare simd (foo) simdlen (4) uniform (a) linear (b : 5)
+!$omp declare simd (foo) simdlen (4) uniform (a) linear (b : 5) &
+!$omp & notinbranch
double precision, value :: a
real, value :: c
!$omp declare simd (foo)
@@ -22,6 +23,7 @@ end module declare_simd_1_mod
real, value :: c
real :: bar
!$omp declare simd (bar) simdlen (4) linear (b : 2)
+!$omp declare simd (bar) simdlen (16) inbranch
double precision, value :: a
end function bar
end interface
new file: libgomp/testsuite/libgomp.fortran/depend-3.f90 (42 lines)
@@ -0,0 +1,42 @@
! { dg-do run }

integer :: x(2, 3)
integer, allocatable :: z(:, :)
allocate (z(-2:3, 2:4))
call foo (x, z)
contains
subroutine foo (x, z)
integer :: x(:, :), y
integer, allocatable :: z(:, :)
y = 1
!$omp parallel shared (x, y, z)
!$omp single
!$omp taskgroup
!$omp task depend(in: x)
if (y.ne.1) call abort
!$omp end task
!$omp task depend(out: x(1:2, 1:3))
y = 2
!$omp end task
!$omp end taskgroup
!$omp taskgroup
!$omp task depend(in: z)
if (y.ne.2) call abort
!$omp end task
!$omp task depend(out: z(-2:3, 2:4))
y = 3
!$omp end task
!$omp end taskgroup
!$omp taskgroup
!$omp task depend(in: x)
if (y.ne.3) call abort
!$omp end task
!$omp task depend(out: x(1:, 1:))
y = 4
!$omp end task
!$omp end taskgroup
!$omp end single
!$omp end parallel
if (y.ne.4) call abort
end subroutine
end
@@ -4,6 +4,6 @@
implicit none
include "omp_lib.h"

-      if (openmp_version .ne. 201107) call abort;
+      if (openmp_version .ne. 201307) call abort;

end program main
@@ -4,6 +4,6 @@ program main
use omp_lib
implicit none

-  if (openmp_version .ne. 201107) call abort;
+  if (openmp_version .ne. 201307) call abort;

end program main
new file: libgomp/testsuite/libgomp.fortran/target1.f90 (58 lines)
@@ -0,0 +1,58 @@
! { dg-do run }

module target1
contains
subroutine foo (p, v, w, n)
double precision, pointer :: p(:), v(:), w(:)
double precision :: q(n)
integer :: i, n
!$omp target if (n > 256) map (to: v(1:n), w(:n)) map (from: p(1:n), q)
!$omp parallel do simd
do i = 1, n
p(i) = v(i) * w(i)
q(i) = p(i)
end do
!$omp end target
if (any (p /= q)) call abort
do i = 1, n
if (p(i) /= i * iand (i, 63)) call abort
end do
!$omp target data if (n > 256) map (to: v(1:n), w) map (from: p, q)
!$omp target if (n > 256)
do i = 1, n
p(i) = 1.0
q(i) = 2.0
end do
!$omp end target
!$omp target if (n > 256)
do i = 1, n
p(i) = p(i) + v(i) * w(i)
q(i) = q(i) + v(i) * w(i)
end do
!$omp end target
!$omp target if (n > 256)
!$omp teams distribute parallel do simd linear(i:1)
do i = 1, n
p(i) = p(i) + 2.0
q(i) = q(i) + 3.0
end do
!$omp end target
!$omp end target data
if (any (p + 2.0 /= q)) call abort
end subroutine
end module target1
use target1, only : foo
integer :: n, i
double precision, pointer :: p(:), v(:), w(:)
n = 10000
allocate (p(n), v(n), w(n))
do i = 1, n
v(i) = i
w(i) = iand (i, 63)
end do
call foo (p, v, w, n)
do i = 1, n
if (p(i) /= i * iand (i, 63) + 3) call abort
end do
deallocate (p, v, w)
end
new file: libgomp/testsuite/libgomp.fortran/target2.f90 (96 lines)
@@ -0,0 +1,96 @@
! { dg-do run }
! { dg-options "-fopenmp -ffree-line-length-160" }

module target2
contains
subroutine foo (a, b, c, d, e, f, g, n, q)
integer :: n, q
integer :: a, b(3:n), c(5:), d(2:*), e(:,:)
integer, pointer :: f, g(:)
integer :: h, i(3:n)
integer, pointer :: j, k(:)
logical :: r
allocate (j, k(4:n))
h = 14
i = 15
j = 16
k = 17
!$omp target map (to: a, b, c, d(2:n+1), e, f, g, h, i, j, k, n) map (from: r)
r = a /= 7
r = r .or. (any (b /= 8)) .or. (lbound (b, 1) /= 3) .or. (ubound (b, 1) /= n)
r = r .or. (any (c /= 9)) .or. (lbound (c, 1) /= 5) .or. (ubound (c, 1) /= n + 4)
r = r .or. (any (d(2:n+1) /= 10)) .or. (lbound (d, 1) /= 2)
r = r .or. (any (e /= 11)) .or. (lbound (e, 1) /= 1) .or. (ubound (e, 1) /= 2)
r = r .or. (lbound (e, 2) /= 1) .or. (ubound (e, 2) /= 2)
r = r .or. (f /= 12)
r = r .or. (any (g /= 13)) .or. (lbound (g, 1) /= 3) .or. (ubound (g, 1) /= n)
r = r .or. (h /= 14)
r = r .or. (any (i /= 15)) .or. (lbound (i, 1) /= 3) .or. (ubound (i, 1) /= n)
r = r .or. (j /= 16)
r = r .or. (any (k /= 17)) .or. (lbound (k, 1) /= 4) .or. (ubound (k, 1) /= n)
!$omp end target
if (r) call abort
!$omp target map (to: b(3:n), c(5:n+4), d(2:n+1), e(1:,:2), g(3:n), i(3:n), k(4:n), n) map (from: r)
r = (any (b /= 8)) .or. (lbound (b, 1) /= 3) .or. (ubound (b, 1) /= n)
r = r .or. (any (c /= 9)) .or. (lbound (c, 1) /= 5) .or. (ubound (c, 1) /= n + 4)
r = r .or. (any (d(2:n+1) /= 10)) .or. (lbound (d, 1) /= 2)
r = r .or. (any (e /= 11)) .or. (lbound (e, 1) /= 1) .or. (ubound (e, 1) /= 2)
r = r .or. (lbound (e, 2) /= 1) .or. (ubound (e, 2) /= 2)
r = r .or. (any (g /= 13)) .or. (lbound (g, 1) /= 3) .or. (ubound (g, 1) /= n)
r = r .or. (any (i /= 15)) .or. (lbound (i, 1) /= 3) .or. (ubound (i, 1) /= n)
r = r .or. (any (k /= 17)) .or. (lbound (k, 1) /= 4) .or. (ubound (k, 1) /= n)
!$omp end target
if (r) call abort
!$omp target map (to: b(5:n-2), c(7:n), d(4:n-2), e(1:,2:), g(5:n-3), i(6:n-4), k(5:n-5), n) map (from: r)
r = (any (b(5:n-2) /= 8)) .or. (lbound (b, 1) /= 3) .or. (ubound (b, 1) /= n)
r = r .or. (any (c(7:n) /= 9)) .or. (lbound (c, 1) /= 5) .or. (ubound (c, 1) /= n + 4)
r = r .or. (any (d(4:n-2) /= 10)) .or. (lbound (d, 1) /= 2)
r = r .or. (any (e(1:,2:) /= 11)) .or. (lbound (e, 1) /= 1) .or. (ubound (e, 1) /= 2)
r = r .or. (lbound (e, 2) /= 1) .or. (ubound (e, 2) /= 2)
r = r .or. (any (g(5:n-3) /= 13)) .or. (lbound (g, 1) /= 3) .or. (ubound (g, 1) /= n)
r = r .or. (any (i(6:n-4) /= 15)) .or. (lbound (i, 1) /= 3) .or. (ubound (i, 1) /= n)
r = r .or. (any (k(5:n-5) /= 17)) .or. (lbound (k, 1) /= 4) .or. (ubound (k, 1) /= n)
!$omp end target
!$omp target map (to: b(q+5:n-2+q), c(q+7:q+n), d(q+4:q+n-2), e(1:q+2,2:q+2), g(5+q:n-3+q), &
!$omp & i(6+q:n-4+q), k(5+q:n-5+q), n) map (from: r)
r = (any (b(5:n-2) /= 8)) .or. (lbound (b, 1) /= 3) .or. (ubound (b, 1) /= n)
r = r .or. (any (c(7:n) /= 9)) .or. (lbound (c, 1) /= 5) .or. (ubound (c, 1) /= n + 4)
r = r .or. (any (d(4:n-2) /= 10)) .or. (lbound (d, 1) /= 2)
r = r .or. (any (e(1:,2:) /= 11)) .or. (lbound (e, 1) /= 1) .or. (ubound (e, 1) /= 2)
r = r .or. (lbound (e, 2) /= 1) .or. (ubound (e, 2) /= 2)
r = r .or. (any (g(5:n-3) /= 13)) .or. (lbound (g, 1) /= 3) .or. (ubound (g, 1) /= n)
r = r .or. (any (i(6:n-4) /= 15)) .or. (lbound (i, 1) /= 3) .or. (ubound (i, 1) /= n)
r = r .or. (any (k(5:n-5) /= 17)) .or. (lbound (k, 1) /= 4) .or. (ubound (k, 1) /= n)
!$omp end target
if (r) call abort
!$omp target map (to: d(2:n+1), n)
r = a /= 7
r = r .or. (any (b /= 8)) .or. (lbound (b, 1) /= 3) .or. (ubound (b, 1) /= n)
r = r .or. (any (c /= 9)) .or. (lbound (c, 1) /= 5) .or. (ubound (c, 1) /= n + 4)
r = r .or. (any (d(2:n+1) /= 10)) .or. (lbound (d, 1) /= 2)
r = r .or. (any (e /= 11)) .or. (lbound (e, 1) /= 1) .or. (ubound (e, 1) /= 2)
r = r .or. (lbound (e, 2) /= 1) .or. (ubound (e, 2) /= 2)
r = r .or. (f /= 12)
r = r .or. (any (g /= 13)) .or. (lbound (g, 1) /= 3) .or. (ubound (g, 1) /= n)
r = r .or. (h /= 14)
r = r .or. (any (i /= 15)) .or. (lbound (i, 1) /= 3) .or. (ubound (i, 1) /= n)
r = r .or. (j /= 16)
r = r .or. (any (k /= 17)) .or. (lbound (k, 1) /= 4) .or. (ubound (k, 1) /= n)
!$omp end target
if (r) call abort
end subroutine foo
end module target2
use target2, only : foo
integer, parameter :: n = 15, q = 0
integer :: a, b(2:n-1), c(n), d(n), e(3:4, 3:4)
integer, pointer :: f, g(:)
allocate (f, g(3:n))
a = 7
b = 8
c = 9
d = 10
e = 11
f = 12
g = 13
call foo (a, b, c, d, e, f, g, n, q)
end
new file: libgomp/testsuite/libgomp.fortran/target3.f90 (29 lines)
@@ -0,0 +1,29 @@
! { dg-do run }

module target3
contains
subroutine foo (f, g)
integer :: n
integer, pointer :: f, g(:)
integer, pointer :: j, k(:)
logical :: r
nullify (j)
k => null ()
!$omp target map (tofrom: f, g, j, k) map (from: r)
r = associated (f) .or. associated (g)
r = r .or. associated (j) .or. associated (k)
!$omp end target
if (r) call abort
!$omp target
r = associated (f) .or. associated (g)
r = r .or. associated (j) .or. associated (k)
!$omp end target
if (r) call abort
end subroutine foo
end module target3
use target3, only : foo
integer, pointer :: f, g(:)
f => null ()
nullify (g)
call foo (f, g)
end
new file: libgomp/testsuite/libgomp.fortran/target4.f90 (48 lines)
@@ -0,0 +1,48 @@
! { dg-do run }
|
||||
|
||||
module target4
|
||||
contains
|
||||
subroutine foo (a,m,n)
|
||||
integer :: m,n,i,j
|
||||
double precision :: a(m, n), t
|
||||
!$omp target data map(a) map(to: m, n)
|
||||
do i=1,n
|
||||
t = 0.0d0
|
||||
!$omp target
|
||||
!$omp parallel do reduction(+:t)
|
||||
do j=1,m
|
||||
t = t + a(j,i) * a(j,i)
|
||||
end do
|
||||
!$omp end target
|
||||
t = 2.0d0 * t
|
||||
!$omp target
|
||||
!$omp parallel do
|
||||
do j=1,m
|
||||
a(j,i) = a(j,i) * t
|
||||
end do
|
||||
!$omp end target
|
||||
end do
|
||||
!$omp end target data
|
||||
end subroutine foo
|
||||
end module target4
|
||||
use target4, only : foo
|
||||
integer :: i, j
|
||||
double precision :: a(8, 9), res(8, 9)
|
||||
do i = 1, 8
|
||||
do j = 1, 9
|
||||
a(i, j) = i + j
|
||||
end do
|
||||
end do
|
||||
call foo (a, 8, 9)
|
||||
res = reshape ((/ 1136.0d0, 1704.0d0, 2272.0d0, 2840.0d0, 3408.0d0, 3976.0d0, &
|
||||
& 4544.0d0, 5112.0d0, 2280.0d0, 3040.0d0, 3800.0d0, 4560.0d0, 5320.0d0, 6080.0d0, &
|
||||
& 6840.0d0, 7600.0d0, 3936.0d0, 4920.0d0, 5904.0d0, 6888.0d0, 7872.0d0, 8856.0d0, &
|
||||
& 9840.0d0, 10824.0d0, 6200.0d0, 7440.0d0, 8680.0d0, 9920.0d0, 11160.0d0, 12400.0d0, &
|
||||
& 13640.0d0, 14880.0d0, 9168.0d0, 10696.0d0, 12224.0d0, 13752.0d0, 15280.0d0, 16808.0d0, &
|
||||
& 18336.0d0, 19864.0d0, 12936.0d0, 14784.0d0, 16632.0d0, 18480.0d0, 20328.0d0, 22176.0d0, &
|
||||
& 24024.0d0, 25872.0d0, 17600.0d0, 19800.0d0, 22000.0d0, 24200.0d0, 26400.0d0, 28600.0d0, &
|
||||
& 30800.0d0, 33000.0d0, 23256.0d0, 25840.0d0, 28424.0d0, 31008.0d0, 33592.0d0, 36176.0d0, &
|
||||
& 38760.0d0, 41344.0d0, 30000.0d0, 33000.0d0, 36000.0d0, 39000.0d0, 42000.0d0, 45000.0d0, &
|
||||
& 48000.0d0, 51000.0d0 /), (/ 8, 9 /))
|
||||
if (any (a /= res)) call abort
|
||||
end
|
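The reference values in `res` can be cross-checked independently of the test run. A minimal Python sketch (not part of the testsuite) that recomputes the transformation `foo` applies: for each column, `t` is twice the column's sum of squares, and every element of the column is then scaled by `t`:

```python
# Recompute the expected result of target4.f90's call foo (a, 8, 9).
m, n = 8, 9
# a(i, j) = i + j, stored here as a[i][j] with 0-based indices
a = [[(i + 1) + (j + 1) for j in range(n)] for i in range(m)]

for col in range(n):                       # the Fortran "do i = 1, n"
    t = sum(a[row][col] * a[row][col] for row in range(m))
    t *= 2.0                               # t = 2.0d0 * t
    for row in range(m):
        a[row][col] *= t                   # a(j, i) = a(j, i) * t

# First entries of res in column-major order, and the last one:
print(a[0][0], a[1][0], a[0][1], a[7][8])  # 1136.0 1704.0 2280.0 51000.0
```

The printed values match the first three and last entries of the `reshape` constant above, confirming the table.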
libgomp/testsuite/libgomp.fortran/target5.f90 (new file, 21 lines)
@@ -0,0 +1,21 @@
! { dg-do compile }
! { dg-options "-fopenmp" }

integer :: r
r = 0
call foo (r)
if (r /= 11) call abort
contains
subroutine foo (r)
integer :: i, r
!$omp parallel
!$omp single
!$omp target teams distribute parallel do reduction (+: r)
do i = 1, 10
r = r + 1
end do
r = r + 1
!$omp end single
!$omp end parallel
end subroutine
end
libgomp/testsuite/libgomp.fortran/target6.f90 (new file, 50 lines)
@@ -0,0 +1,50 @@
! { dg-do run }

module target6
contains
subroutine foo (p, v, w, n)
double precision, pointer :: p(:), v(:), w(:)
double precision :: q(n)
integer :: i, n
!$omp target data if (n > 256) map (to: v(1:n), w(:n)) map (from: p(1:n), q)
!$omp target if (n > 256)
!$omp parallel do simd
do i = 1, n
p(i) = v(i) * w(i)
q(i) = p(i)
end do
!$omp end target
!$omp target update if (n > 256) from (p)
do i = 1, n
if (p(i) /= i * iand (i, 63)) call abort
v(i) = v(i) + 1
end do
!$omp target update if (n > 256) to (v(1:n))
!$omp target if (n > 256)
!$omp parallel do simd
do i = 1, n
p(i) = v(i) * w(i)
end do
!$omp end target
!$omp end target data
do i = 1, n
if (q(i) /= (v(i) - 1) * w(i)) call abort
if (p(i) /= q(i) + w(i)) call abort
end do
end subroutine
end module target6
use target6, only : foo
integer :: n, i
double precision, pointer :: p(:), v(:), w(:)
n = 10000
allocate (p(n), v(n), w(n))
do i = 1, n
v(i) = i
w(i) = iand (i, 63)
end do
call foo (p, v, w, n)
do i = 1, n
if (p(i) /= (i + 1) * iand (i, 63)) call abort
end do
deallocate (p, v, w)
end
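The host/device data flow this test exercises can be walked through per element: `p` is computed on the device, `q` snapshots it inside the `target data` region, `v` is bumped on the host and pushed back with `target update to`, and `p` is recomputed. A hedged Python model of that sequence for a single index (the mapping machinery itself is elided; `run` is a hypothetical helper, not anything from libgomp):

```python
# Scalar walk-through of target6.f90's data flow for one element i.
def run(i):
    v = float(i)                 # v(i) = i
    w = float(i & 63)            # w(i) = iand (i, 63)
    p = v * w                    # first target region: p(i) = v(i) * w(i)
    q = p                        # q(i) = p(i), captured before v changes
    assert p == i * (i & 63)     # the check after "target update from (p)"
    v += 1                       # host-side v(i) = v(i) + 1, then "update to"
    p = v * w                    # second target region recomputes p
    return p, q, v, w

p, q, v, w = run(5)
assert q == (v - 1) * w          # final host-side checks in the main program
assert p == q + w
```

This mirrors why the final loop can assert `p(i) /= q(i) + w(i)`: the two kernels differ only by the host's `+ 1` to `v`, so `p - q == w` element-wise.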
libgomp/testsuite/libgomp.fortran/target7.f90 (new file, 34 lines)
@@ -0,0 +1,34 @@
! { dg-do run }

interface
real function foo (x)
!$omp declare target
real, intent(in) :: x
end function foo
end interface
integer, parameter :: n = 1000
integer, parameter :: c = 100
integer :: i, j
real :: a(n)
do i = 1, n
a(i) = i
end do
do i = 1, n, c
!$omp task shared(a)
!$omp target map(a(i:i+c-1))
!$omp parallel do
do j = i, i + c - 1
a(j) = foo (a(j))
end do
!$omp end target
!$omp end task
end do
do i = 1, n
if (a(i) /= i + 1) call abort
end do
end
real function foo (x)
!$omp declare target
real, intent(in) :: x
foo = x + 1
end function foo
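The chunking pattern above (process `n` elements in slices of `c`, mapping only `a(i:i+c-1)` per task so each target region transfers one chunk rather than the whole array) can be sketched outside Fortran; a minimal Python analogue, with the device call reduced to the test's `foo(x) = x + 1`:

```python
# Chunked-slice pattern from target7.f90: n elements, chunks of c,
# each "task" touches only its own slice of a.
n, c = 1000, 100
a = [float(i) for i in range(1, n + 1)]       # a(i) = i, 1-based values

for i in range(0, n, c):
    chunk = a[i:i + c]                        # map(a(i:i+c-1)): one slice
    a[i:i + c] = [x + 1 for x in chunk]       # foo applied on the "device"

# 1-based a(i) == i + 1, i.e. 0-based a[i] == i + 2
assert all(a[i] == i + 2 for i in range(n))
```

In the real test the slices are independent, which is what makes mapping per-chunk (and running the chunks as concurrent tasks) safe.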