d9a6bd32ad
gcc/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> Ilya Verbin <ilya.verbin@intel.com> * builtin-types.def (BT_FN_BOOL_UINT_LONGPTR_LONGPTR_LONGPTR, BT_FN_BOOL_UINT_ULLPTR_ULLPTR_ULLPTR, BT_FN_BOOL_UINT_LONGPTR_LONG_LONGPTR_LONGPTR, BT_FN_BOOL_UINT_ULLPTR_ULL_ULLPTR_ULLPTR, BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR, BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_ULL_ULL_ULL, BT_FN_VOID_LONG_VAR, BT_FN_VOID_ULL_VAR): New. (BT_FN_VOID_INT_PTR_SIZE_PTR_PTR_PTR, BT_FN_VOID_INT_OMPFN_PTR_SIZE_PTR_PTR_PTR, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR): Remove. * cgraph.h (enum cgraph_simd_clone_arg_type): Add SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP, SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP and SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP. (struct cgraph_simd_clone_arg): Adjust comment. * coretypes.h (struct gomp_ordered): New forward decl. * gimple.c (gimple_build_omp_critical): Add CLAUSES argument, set critical clauses to it. (gimple_build_omp_ordered): Return gomp_ordered * instead of gimple *. Add CLAUSES argument, set ordered clauses to it. (gimple_copy): Unshare clauses on GIMPLE_OMP_CRITICAL and GIMPLE_OMP_ORDERED. * gimple.def (GIMPLE_OMP_ORDERED): Change from GSS_OMP to GSS_OMP_SINGLE_LAYOUT, move it after GIMPLE_OMP_TEAMS. * gimple.h (enum gf_mask): Add GF_OMP_TASK_TASKLOOP. Add another bit to GF_OMP_FOR_KIND_MASK mask. Add GF_OMP_FOR_KIND_TASKLOOP, renumber GF_OMP_FOR_KIND_CILKFOR and GF_OMP_FOR_KIND_OACC_LOOP. Adjust GF_OMP_FOR_SIMD, GF_OMP_FOR_COMBINED and GF_OMP_FOR_COMBINED_INTO. Add another bit to GF_OMP_TARGET_KIND_MASK mask. Add GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA, renumber GF_OMP_TARGET_KIND_OACC_{PARALLEL,KERNELS,DATA,UPDATE,ENTER_EXIT_DATA}. (gomp_critical): Add clauses field. (gomp_ordered): New struct. (is_a_helper <gomp_ordered *>::test): New inline. (gimple_build_omp_critical): Add CLAUSES argument. (gimple_build_omp_ordered): Likewise. Return gomp_ordered * instead of gimple *. (gimple_omp_critical_clauses, gimple_omp_critical_clauses_ptr, gimple_omp_critical_set_clauses, gimple_omp_ordered_clauses, gimple_omp_ordered_clauses_ptr, gimple_omp_ordered_set_clauses, gimple_omp_task_taskloop_p, gimple_omp_task_set_taskloop_p): New inline functions. * gimple-pretty-print.c (dump_gimple_omp_for): Handle taskloop. (dump_gimple_omp_target): Handle enter data and exit data. (dump_gimple_omp_block): Don't handle GIMPLE_OMP_ORDERED here. (dump_gimple_omp_critical): Print clauses. (dump_gimple_omp_ordered): New function. (dump_gimple_omp_task): Handle taskloop. (pp_gimple_stmt_1): Use dump_gimple_omp_ordered for GIMPLE_OMP_ORDERED. * gimple-walk.c (walk_gimple_op): Walk clauses on GIMPLE_OMP_CRITICAL and GIMPLE_OMP_ORDERED. * gimplify.c (enum gimplify_omp_var_data): Add GOVD_MAP_0LEN_ARRAY. (enum omp_region_type): Add ORT_COMBINED_TARGET and ORT_NONE. (struct gimplify_omp_ctx): Add loop_iter_var, target_map_scalars_firstprivate, target_map_pointers_as_0len_arrays and target_firstprivatize_array_bases fields. (delete_omp_context): Release loop_iter_var. (gimplify_bind_expr): Handle ORT_NONE. (maybe_fold_stmt): Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. (is_gimple_stmt): Return true for OMP_TASKLOOP, OMP_TEAMS and OMP_TARGET{,_DATA,_UPDATE,_ENTER_DATA,_EXIT_DATA}. 
(omp_firstprivatize_variable): Handle ORT_NONE. Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. Handle ctx->target_map_scalars_firstprivate. (omp_add_variable): Handle ORT_NONE. Allow map clause together with data sharing clauses. For data sharing clause with VLA decl on omp target/target data don't add firstprivate for the pointer. Call omp_notice_variable on TYPE_SIZE_UNIT only if it is a DECL_P. (omp_notice_threadprivate_variable): Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. (omp_notice_variable): Handle ORT_NONE. Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. Handle implicit mapping of pointers as zero length array sections and ctx->target_map_scalars_firstprivate mapping of scalars as firstprivate data sharing. (omp_check_private): Handle omp_member_access_dummy_var vars. (find_decl_expr): New function. (gimplify_scan_omp_clauses): Add CODE argument. For OMP_CLAUSE_IF complain if OMP_CLAUSE_IF_MODIFIER is present and does not match code. Handle OMP_CLAUSE_GANG separately. Handle OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,NOGROUP,THREADS,SIMD,SIMDLEN} clauses. Diagnose linear clause on combined distribute {, parallel for} simd construct, unless it is the loop iterator. Handle struct element GOMP_MAP_FIRSTPRIVATE_POINTER. Handle map clauses with COMPONENT_REF. Initialize ctx->target_map_scalars_firstprivate, ctx->target_firstprivatize_array_bases and ctx->target_map_pointers_as_0len_arrays. Add firstprivate for linear clause even to target region if combined. Remove map clauses with GOMP_MAP_FIRSTPRIVATE_POINTER kind from OMP_TARGET_{,ENTER_,EXIT_}DATA. For GOMP_MAP_FIRSTPRIVATE_POINTER map kind with non-INTEGER_CST OMP_CLAUSE_SIZE firstprivatize the bias. Handle OMP_CLAUSE_DEPEND_{SINK,SOURCE}. Handle OMP_CLAUSE_{{USE,IS}_DEVICE_PTR,DEFAULTMAP,HINT}. For linear clause on worksharing loop combined with parallel add shared clause on the parallel. Handle OMP_CLAUSE_REDUCTION with MEM_REF OMP_CLAUSE_DECL. Set DECL_NAME on omp_member_access_dummy_var vars. Add lastprivate clause to outer taskloop if needed. (gimplify_adjust_omp_clauses_1): Handle GOVD_MAP_0LEN_ARRAY. If gimplify_omp_ctxp->target_firstprivatize_array_bases, use GOMP_MAP_FIRSTPRIVATE_POINTER map kind instead of GOMP_MAP_POINTER. (gimplify_adjust_omp_clauses): Add CODE argument. Handle removal of GOMP_MAP_FIRSTPRIVATE_POINTER struct elements for struct not seen in target body. Handle removal of struct mapping if struct is not seen in target body. Remove GOMP_MAP_STRUCT map clause on OMP_TARGET_EXIT_DATA. Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. Use GOMP_MAP_FIRSTPRIVATE_POINTER instead of GOMP_MAP_POINTER if ctx->target_firstprivatize_array_bases for VLAs. Set OMP_CLAUSE_MAP_PRIVATE if both data sharing and map clause appear together. Handle OMP_CLAUSE_{{USE,IS}_DEVICE_PTR,DEFAULTMAP,HINT}. Don't remove map clause if it has map-type-modifier always. Handle OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,NOGROUP,THREADS,SIMD,SIMDLEN} clauses. (gimplify_oacc_cache, gimplify_omp_parallel, gimplify_omp_task): Adjust gimplify_scan_omp_clauses and gimplify_adjust_omp_clauses callers. (gimplify_omp_for): Likewise. Handle OMP_TASKLOOP. Initialize loop_iter_var. Use OMP_FOR_ORIG_DECLS. Fix handling of lastprivate iterators in doacross loops. (gimplify_omp_workshare): Adjust gimplify_scan_omp_clauses and gimplify_adjust_omp_clauses callers. Use ORT_COMBINED_TARGET for OMP_TARGET_COMBINED. 
Adjust check for ORT_TARGET for the addition of ORT_COMBINED_TARGET. (gimplify_omp_target_update): Adjust gimplify_scan_omp_clauses and gimplify_adjust_omp_clauses callers. Handle OMP_TARGET_ENTER_DATA and OMP_TARGET_EXIT_DATA. (gimplify_omp_ordered): New function. (gimplify_expr): Handle OMP_TASKLOOP, OMP_TARGET_ENTER_DATA and OMP_TARGET_EXIT_DATA. Use gimplify_omp_ordered for OMP_ORDERED. Gimplify clauses on OMP_CRITICAL. * internal-fn.c (expand_GOMP_SIMD_ORDERED_START, expand_GOMP_SIMD_ORDERED_END): New functions. * internal-fn.def (GOMP_SIMD_ORDERED_START, GOMP_SIMD_ORDERED_END): New internal functions. * omp-builtins.def (BUILT_IN_GOMP_LOOP_DOACROSS_STATIC_START, BUILT_IN_GOMP_LOOP_DOACROSS_DYNAMIC_START, BUILT_IN_GOMP_LOOP_DOACROSS_GUIDED_START, BUILT_IN_GOMP_LOOP_DOACROSS_RUNTIME_START, BUILT_IN_GOMP_LOOP_ULL_DOACROSS_STATIC_START, BUILT_IN_GOMP_LOOP_ULL_DOACROSS_DYNAMIC_START, BUILT_IN_GOMP_LOOP_ULL_DOACROSS_GUIDED_START, BUILT_IN_GOMP_LOOP_ULL_DOACROSS_RUNTIME_START, BUILT_IN_GOMP_DOACROSS_POST, BUILT_IN_GOMP_DOACROSS_WAIT, BUILT_IN_GOMP_DOACROSS_ULL_POST, BUILT_IN_GOMP_DOACROSS_ULL_WAIT, BUILT_IN_GOMP_TARGET_ENTER_EXIT_DATA, BUILT_IN_GOMP_TASKLOOP, BUILT_IN_GOMP_TASKLOOP_ULL): New built-ins. (BUILT_IN_GOMP_TASK): Add INT argument to the end. (BUILT_IN_GOMP_TARGET): Rename from GOMP_target to GOMP_target_41, adjust type. (BUILT_IN_GOMP_TARGET_DATA): Rename from GOMP_target_data to GOMP_target_data_41, adjust type. (BUILT_IN_GOMP_TARGET_UPDATE): Rename from GOMP_target_update to GOMP_target_update_41, adjust type. * omp-low.c (struct omp_region): Adjust comments, add ord_stmt field. (struct omp_for_data): Add ordered and simd_schedule fields. (omp_member_access_dummy_var, unshare_and_remap_1, unshare_and_remap, is_taskloop_ctx): New functions. (is_taskreg_ctx): Use is_parallel_ctx and is_task_ctx. (extract_omp_for_data): Handle taskloops and doacross loops and simd schedule modifier. (omp_adjust_chunk_size): New function. (get_ws_args_for): Use it. (lookup_sfield): Change first argument to splay_tree_key, add overload with first argument tree. (maybe_lookup_field): Likewise. (use_pointer_for_field): Handle omp_member_access_dummy_var. (omp_copy_decl_2): If var is TREE_ADDRESSABLE listed in task_shared_vars, clear TREE_ADDRESSABLE on the copy. (build_outer_var_ref): Add LASTPRIVATE argument, handle taskloops and omp_member_access_dummy_var vars. (build_sender_ref): Change first argument to splay_tree_key, add overload with first argument tree. (install_var_field): For mask & 8 use &DECL_UID as key instead of the tree itself. (fixup_child_record_type): Const qualify *.omp_data_i. (scan_sharing_clauses): Handle OMP_CLAUSE_SHARED_FIRSTPRIVATE, C/C++ array reductions, OMP_CLAUSE_{IS,USE}_DEVICE_PTR clauses, OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,SIMDLEN,THREADS,SIMD} and OMP_CLAUSE_{NOGROUP,DEFAULTMAP} clauses, OMP_CLAUSE__LOOPTEMP_ clause on taskloop, GOMP_MAP_FIRSTPRIVATE_POINTER, OMP_CLAUSE_MAP_PRIVATE. (create_omp_child_function): Set TREE_READONLY on .omp_data_i. (find_combined_for): Allow searching for different GIMPLE_OMP_FOR kinds. (add_taskreg_looptemp_clauses): New function. (scan_omp_parallel): Use it. (scan_omp_task): Likewise. (finish_taskreg_scan): Handle OMP_CLAUSE_SHARED_FIRSTPRIVATE. For taskloop, move fields for the first two _LOOPTEMP_ clauses first. (check_omp_nesting_restrictions): Handle GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA. Formatting fixes. Allow the sandwiched taskloop constructs. Type check OMP_CLAUSE_DEPEND_{KIND,SOURCE}. 
Allow ordered simd inside of simd region. Diagnose depend(source) or depend(sink:...) on target constructs or task/taskloop. (handle_simd_reference): Use get_name. (lower_rec_input_clauses): Likewise. Ignore all OMP_CLAUSE_LASTPRIVATE_FIRSTPRIVATE clauses on taskloop construct. Allow _LOOPTEMP_ clause on GOMP_TASK. Unshare new_var before passing it to omp_clause_{default,copy}_ctor. Handle OMP_CLAUSE_REDUCTION with MEM_REF OMP_CLAUSE_DECL. Set lastprivate_firstprivate flag for linear that needs copyin and copyout. Use BUILT_IN_ALLOCA_WITH_ALIGN instead of BUILT_IN_ALLOCA. (lower_lastprivate_clauses): For OMP_CLAUSE_LASTPRIVATE_FIRSTPRIVATE on taskloop lookup decl in outer context. Pass true to build_outer_var_ref lastprivate argument. Handle OMP_CLAUSE_LASTPRIVATE_TASKLOOP_IV lastprivate if the decl is global outside of outer taskloop for. (lower_reduction_clauses): Handle OMP_CLAUSE_REDUCTION with MEM_REF OMP_CLAUSE_DECL. (lower_send_clauses): Ignore first two _LOOPTEMP_ clauses in taskloop GOMP_TASK. Handle OMP_CLAUSE_SHARED_FIRSTPRIVATE. Handle omp_member_access_dummy_var vars. Handle OMP_CLAUSE_REDUCTION with MEM_REF OMP_CLAUSE_DECL. Use new lookup_sfield overload. (lower_send_shared_vars): Ignore fields with NULL or FIELD_DECL abstract origin. Handle omp_member_access_dummy_var vars. (expand_parallel_call): Use expand_omp_build_assign. (expand_task_call): Handle taskloop construct expansion. Add REGION argument. Use GOMP_TASK_* defines instead of hardcoded integers. Add priority argument to GOMP_task* calls. Or in GOMP_TASK_FLAG_PRIORITY into flags if priority is present for GOMP_task call. (expand_omp_build_assign): Add prototype. Add AFTER argument, if true emit statements after *GSI_P and continue linking. (expand_omp_taskreg): Adjust expand_task_call caller. (expand_omp_for_init_counts): Rename zero_iter_bb argument to zero_iter1_bb and first_zero_iter to first_zero_iter1, add zero_iter2_bb and first_zero_iter2 arguments, handle computation of counts even for ordered loops. (expand_omp_for_init_vars): Handle GOMP_TASK inner_stmt. (expand_omp_ordered_source, expand_omp_ordered_sink, expand_omp_ordered_source_sink, expand_omp_for_ordered_loops): New functions. (expand_omp_for_generic): Use omp_adjust_chunk_size. Handle linear clauses on worksharing loop. Handle DOACROSS loop expansion. (expand_omp_for_static_nochunk): Handle linear clauses on worksharing loop. Adjust expand_omp_for_init_counts callers. (expand_omp_for_static_chunk): Likewise. Use omp_adjust_chunk_size. (expand_omp_simd): Handle addressable fd->loop.v. Adjust expand_omp_for_init_counts callers. (expand_omp_taskloop_for_outer, expand_omp_taskloop_for_inner): New functions. (expand_omp_for): Call expand_omp_taskloop_for_* for taskloop. Handle doacross loops. (expand_omp_target): Handle GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA. Pass flags and depend arguments to GOMP_target_{41,update_41,enter_exit_data} libcalls. (expand_omp): Don't expand ordered depend constructs here, record ord_stmt instead for later expand_omp_for_generic. (build_omp_regions_1): Handle GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA. Treat GIMPLE_OMP_ORDERED with depend clause as stand-alone directive. (lower_omp_ordered_clauses): New function. (lower_omp_ordered): Handle OMP_CLAUSE_SIMD, for OMP_CLAUSE_DEPEND don't lower anything. (lower_omp_for_lastprivate): Use last _looptemp_ clause on taskloop for comparison. (lower_omp_for): Handle taskloop constructs. 
Adjust OMP_CLAUSE_DECL and OMP_CLAUSE_LINEAR_STEP so that expand_omp_for_* can use it during expansion for linear adjustments. (create_task_copyfn): Handle OMP_CLAUSE_SHARED_FIRSTPRIVATE. (lower_depend_clauses): Assert not seeing sink/source depend kinds. Set TREE_ADDRESSABLE on array. Change first argument from gimple * to tree * pointing to the stmt's clauses. (lower_omp_taskreg): Adjust lower_depend_clauses caller. (lower_omp_target): Handle GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA, depend clauses, GOMP_MAP_{RELEASE,ALWAYS_{TO,FROM,TOFROM},FIRSTPRIVATE_POINTER,STRUCT} map kinds, OMP_CLAUSE_{FIRSTPRIVATE,PRIVATE,{IS,USE}_DEVICE_PTR clauses. Always use short kind and 8-bit align shift. (lower_omp_regimplify_p): Use IS_TYPE_OR_DECL_P macro. (struct lower_omp_regimplify_operands_data): New type. (lower_omp_regimplify_operands_p, lower_omp_regimplify_operands): New functions. (lower_omp_1): Use lower_omp_regimplify_operands instead of gimple_regimplify_operands. (make_gimple_omp_edges): Handle GF_OMP_TARGET_KIND_ENTER_DATA and GF_OMP_TARGET_KIND_EXIT_DATA. Treat GIMPLE_OMP_ORDERED with depend clause as stand-alone directive. (simd_clone_clauses_extract): Honor OMP_CLAUSE_LINEAR_KIND. (simd_clone_mangle): Mangle the various linear kinds per the new ABI. (simd_clone_adjust_argument_types): Handle SIMD_CLONE_ARG_TYPE_LINEAR_*_CONSTANT_STEP. (simd_clone_init_simd_arrays): Don't do anything for uval. (simd_clone_adjust): Handle SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP like SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP. Handle SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP. * omp-low.h (omp_member_access_dummy_var): New prototype. * passes.def (pass_simduid_cleanup): Schedule another copy of the pass after all optimizations. * tree.c (omp_clause_code_name): Add entries for OMP_CLAUSE_{TO_DECLARE,LINK,{USE,IS}_DEVICE_PTR,DEFAULTMAP,HINT} and OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,NOGROUP,THREADS,SIMD}. (omp_clause_num_ops): Likewise. Bump number of OMP_CLAUSE_REDUCTION arguments to 5 and for OMP_CLAUSE_ORDERED to 1. (walk_tree_1): Adjust for OMP_CLAUSE_ORDERED having 1 argument and OMP_CLAUSE_REDUCTION 5 arguments. Handle OMP_CLAUSE_{TO_DECLARE,LINK,{USE,IS}_DEVICE_PTR,DEFAULTMAP,HINT} and OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,NOGROUP,THREADS,SIMD} clauses. * tree-core.h (enum omp_clause_linear_kind): New. (struct tree_omp_clause): Change type of map_kind from unsigned char to unsigned int. Add subcode.if_modifier and subcode.linear_kind fields. (enum omp_clause_code): Add OMP_CLAUSE_{TO_DECLARE,LINK,{USE,IS}_DEVICE_PTR,DEFAULTMAP,HINT} and OMP_CLAUSE_{PRIORITY,GRAINSIZE,NUM_TASKS,NOGROUP,THREADS,SIMD}. (OMP_CLAUSE_REDUCTION): Document OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER. (enum omp_clause_depend_kind): Add OMP_CLAUSE_DEPEND_{SOURCE,SINK}. * tree.def (OMP_FOR): Add OMP_FOR_ORIG_DECLS operand. (OMP_CRITICAL): Move before OMP_SINGLE. Add OMP_CRITICAL_CLAUSES operand. (OMP_ORDERED): Move before OMP_SINGLE. Add OMP_ORDERED_CLAUSES operand. (OMP_TASKLOOP, OMP_TARGET_ENTER_DATA, OMP_TARGET_EXIT_DATA): New tree codes. * tree.h (OMP_BODY): Replace OMP_CRITICAL with OMP_TASKGROUP. (OMP_CLAUSE_SET_MAP_KIND): Cast to unsigned int rather than unsigned char. (OMP_CRITICAL_NAME): Adjust to be 3rd operand instead of 2nd. (OMP_CLAUSE_NUM_TASKS_EXPR): Formatting fix. (OMP_STANDALONE_CLAUSES): Adjust to cover OMP_TARGET_{ENTER,EXIT}_DATA. 
(OMP_CLAUSE_DEPEND_SINK_NEGATIVE, OMP_TARGET_COMBINED, OMP_CLAUSE_MAP_PRIVATE, OMP_FOR_ORIG_DECLS, OMP_CLAUSE_IF_MODIFIER, OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION, OMP_CRITICAL_CLAUSES, OMP_CLAUSE_PRIVATE_TASKLOOP_IV, OMP_CLAUSE_LASTPRIVATE_TASKLOOP_IV, OMP_CLAUSE_HINT_EXPR, OMP_CLAUSE_SCHEDULE_SIMD, OMP_CLAUSE_LINEAR_KIND, OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER, OMP_CLAUSE_SHARED_FIRSTPRIVATE, OMP_ORDERED_CLAUSES, OMP_TARGET_ENTER_DATA_CLAUSES, OMP_TARGET_EXIT_DATA_CLAUSES, OMP_CLAUSE_NUM_TASKS_EXPR, OMP_CLAUSE_GRAINSIZE_EXPR, OMP_CLAUSE_PRIORITY_EXPR, OMP_CLAUSE_ORDERED_EXPR): Define. * tree-inline.c (remap_gimple_stmt): Handle clauses on GIMPLE_OMP_ORDERED and GIMPLE_OMP_CRITICAL. For IFN_GOMP_SIMD_ORDERED_{START,END} set has_simduid_loops. * tree-nested.c (convert_nonlocal_omp_clauses): Handle OMP_CLAUSE_{TO_DECLARE,LINK,{USE,IS}_DEVICE_PTR,SIMDLEN,PRIORITY,SIMD} and OMP_CLAUSE_{GRAINSIZE,NUM_TASKS,HINT,NOGROUP,THREADS,DEFAULTMAP} clauses. Handle OMP_CLAUSE_REDUCTION_DECL_PLACEHOLDER. (convert_local_omp_clauses): Likewise. * tree-pretty-print.c (dump_omp_clause): Handle OMP_CLAUSE_{TO_DECLARE,LINK,{USE,IS}_DEVICE_PTR,SIMDLEN,PRIORITY,SIMD} and OMP_CLAUSE_{GRAINSIZE,NUM_TASKS,HINT,NOGROUP,THREADS,DEFAULTMAP} clauses. Handle OMP_CLAUSE_IF_MODIFIER, OMP_CLAUSE_ORDERED_EXPR, OMP_CLAUSE_SCHEDULE_SIMD, OMP_CLAUSE_LINEAR_KIND, OMP_CLAUSE_DEPEND_{SOURCE,SINK}. Use "delete" for GOMP_MAP_FORCE_DEALLOC. Handle GOMP_MAP_{ALWAYS_{TO,FROM,TOFROM},RELEASE,FIRSTPRIVATE_POINTER,STRUCT}. (dump_generic_node): Handle OMP_TASKLOOP, OMP_TARGET_{ENTER,EXIT}_DATA and clauses on OMP_ORDERED and OMP_CRITICAL. * tree-vectorizer.c (adjust_simduid_builtins): Adjust comment. Remove IFN_GOMP_SIMD_ORDERED_{START,END}. (vectorize_loops): Adjust comments. (pass_simduid_cleanup::execute): Likewise. * tree-vect-stmts.c (vectorizable_simd_clone_call): Handle SIMD_CLONE_ARG_TYPE_LINEAR_{REF,VAL,UVAL}_CONSTANT_STEP. * wide-int.h (wi::gcd): New. gcc/c-family/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> * c-common.c (enum c_builtin_type): Define DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10 and DEF_FUNCTION_TYPE_11. (c_define_builtins): Likewise. * c-common.h (enum c_omp_clause_split): Add C_OMP_CLAUSE_SPLIT_TASKLOOP. (c_finish_omp_critical, c_finish_omp_ordered): Add CLAUSES argument. (c_finish_omp_for): Add ORIG_DECLV argument. * c-cppbuiltin.c (c_cpp_builtins): Predefine _OPENMP as 201511 instead of 201307. * c-omp.c (c_finish_omp_critical): Add CLAUSES argument, set OMP_CRITICAL_CLAUSES to it. (c_finish_omp_ordered): Add CLAUSES argument, set OMP_ORDERED_CLAUSES to it. (c_finish_omp_for): Add ORIG_DECLV argument, set OMP_FOR_ORIG_DECLS to it if OMP_FOR. Clear DECL_INITIAL on the IVs. (c_omp_split_clauses): Handle OpenMP 4.5 combined/composite constructs and new OpenMP 4.5 clauses. Clear OMP_CLAUSE_SCHEDULE_SIMD if not combined with OMP_SIMD. Add verification code. * c-pragma.c (omp_pragmas_simd): Add taskloop. * c-pragma.h (enum pragma_kind): Add PRAGMA_OMP_TASKLOOP. (enum pragma_omp_clause): Add PRAGMA_OMP_CLAUSE_{DEFAULTMAP,GRAINSIZE,HINT,{IS,USE}_DEVICE_PTR} and PRAGMA_OMP_CLAUSE_{LINK,NOGROUP,NUM_TASKS,PRIORITY,SIMD,THREADS}. gcc/c/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> * c-parser.c (c_parser_pragma): Handle PRAGMA_OMP_ORDERED here. (c_parser_omp_clause_name): Handle OpenMP 4.5 clauses. (c_parser_omp_variable_list): Handle structure elements for map, to and from clauses. Handle array sections in reduction clause. Formatting fixes. 
(c_parser_omp_clause_if): Add IS_OMP argument, handle parsing of if clause modifiers. (c_parser_omp_clause_num_tasks, c_parser_omp_clause_grainsize, c_parser_omp_clause_priority, c_parser_omp_clause_hint, c_parser_omp_clause_defaultmap, c_parser_omp_clause_use_device_ptr, c_parser_omp_clause_is_device_ptr): New functions. (c_parser_omp_clause_ordered): Parse optional parameter. (c_parser_omp_clause_reduction): Handle array reductions. (c_parser_omp_clause_schedule): Parse optional simd modifier. (c_parser_omp_clause_nogroup, c_parser_omp_clause_orderedkind): New functions. (c_parser_omp_clause_linear): Parse linear clause modifiers. (c_parser_omp_clause_depend_sink): New function. (c_parser_omp_clause_depend): Parse source/sink depend kinds. (c_parser_omp_clause_map): Parse release/delete map kinds and optional always modifier. (c_parser_oacc_all_clauses): Adjust c_parser_omp_clause_if and c_finish_omp_clauses callers. (c_parser_omp_all_clauses): Likewise. Parse OpenMP 4.5 clauses. Parse "to" as OMP_CLAUSE_TO_DECLARE if on declare target directive. (c_parser_oacc_cache): Adjust c_finish_omp_clauses caller. (OMP_CRITICAL_CLAUSE_MASK): Define. (c_parser_omp_critical): Parse critical clauses. (c_parser_omp_for_loop): Handle doacross loops, adjust c_finish_omp_for and c_finish_omp_clauses callers. (OMP_SIMD_CLAUSE_MASK): Add simdlen clause. (c_parser_omp_simd): Allow ordered clause if it has no parameter. (OMP_FOR_CLAUSE_MASK): Add linear clause. (c_parser_omp_for): Disallow ordered clause when combined with distribute. Disallow linear clause when combined with distribute and not combined with simd. (OMP_ORDERED_CLAUSE_MASK, OMP_ORDERED_DEPEND_CLAUSE_MASK): Define. (c_parser_omp_ordered): Add CONTEXT argument, remove LOC argument, parse clauses and if depend clause is found, don't parse a body. (c_parser_omp_parallel): Disallow copyin clause on target parallel. Allow target parallel without for after it. (OMP_TASK_CLAUSE_MASK): Add priority clause. (OMP_TARGET_DATA_CLAUSE_MASK): Add use_device_ptr clause. (c_parser_omp_target_data): Diagnose no map clauses or clauses with invalid kinds. (OMP_TARGET_UPDATE_CLAUSE_MASK): Add depend and nowait clauses. (OMP_TARGET_ENTER_DATA_CLAUSE_MASK, OMP_TARGET_EXIT_DATA_CLAUSE_MASK): Define. (c_parser_omp_target_enter_data, c_parser_omp_target_exit_data): New functions. (OMP_TARGET_CLAUSE_MASK): Add depend, nowait, private, firstprivate, defaultmap and is_device_ptr clauses. (c_parser_omp_target): Parse target parallel and target simd. Set OMP_TARGET_COMBINED on combined constructs. Parse target enter data and target exit data. Diagnose invalid map kinds. (OMP_DECLARE_TARGET_CLAUSE_MASK): Define. (c_parser_omp_declare_target): Parse OpenMP 4.5 forms of this construct. (c_parser_omp_declare_reduction): Use STRIP_NOPS when checking for &omp_priv. (OMP_TASKLOOP_CLAUSE_MASK): Define. (c_parser_omp_taskloop): New function. (c_parser_omp_construct): Don't handle PRAGMA_OMP_ORDERED here, handle PRAGMA_OMP_TASKLOOP. (c_parser_cilk_for): Adjust c_finish_omp_clauses callers. * c-tree.h (c_finish_omp_clauses): Add two new arguments. * c-typeck.c (handle_omp_array_sections_1): Fix comment typo. Add IS_OMP argument, handle structure element bases, diagnose bitfields, pass IS_OMP recursively, diagnose known zero length array sections in depend clauses, handle array sections in reduction clause, diagnose negative length even for pointers. 
(handle_omp_array_sections): Add IS_OMP argument, use auto_vec for types, pass IS_OMP down to handle_omp_array_sections_1, handle array sections in reduction clause, set OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION if map could be zero length array section, use GOMP_MAP_FIRSTPRIVATE_POINTER for IS_OMP. (c_finish_omp_clauses): Add IS_OMP and DECLARE_SIMD arguments. Handle new OpenMP 4.5 clauses and new restrictions for the old ones. gcc/cp/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> * class.c (finish_struct_1): Call finish_omp_declare_simd_methods. * cp-gimplify.c (cp_gimplify_expr): Handle OMP_TASKLOOP. (cp_genericize_r): Likewise. (cxx_omp_finish_clause): Don't diagnose references. (cxx_omp_disregard_value_expr): New function. * cp-objcp-common.h (LANG_HOOKS_OMP_DISREGARD_VALUE_EXPR): Redefine. * cp-tree.h (OMP_FOR_GIMPLIFYING_P): Document for OMP_TASKLOOP. (DECL_OMP_PRIVATIZED_MEMBER): Define. (finish_omp_declare_simd_methods, push_omp_privatization_clauses, pop_omp_privatization_clauses, save_omp_privatization_clauses, restore_omp_privatization_clauses, omp_privatize_field, cxx_omp_disregard_value_expr): New prototypes. (finish_omp_clauses): Add two new arguments. (finish_omp_for): Add ORIG_DECLV argument. * parser.c (cp_parser_lambda_body): Call save_omp_privatization_clauses and restore_omp_privatization_clauses. (cp_parser_omp_clause_name): Handle OpenMP 4.5 clauses. (cp_parser_omp_var_list_no_open): Handle structure elements for map, to and from clauses. Handle array sections in reduction clause. Parse this keyword. Formatting fixes. (cp_parser_omp_clause_if): Add IS_OMP argument, handle parsing of if clause modifiers. (cp_parser_omp_clause_num_tasks, cp_parser_omp_clause_grainsize, cp_parser_omp_clause_priority, cp_parser_omp_clause_hint, cp_parser_omp_clause_defaultmap): New functions. (cp_parser_omp_clause_ordered): Parse optional parameter. (cp_parser_omp_clause_reduction): Handle array reductions. (cp_parser_omp_clause_schedule): Parse optional simd modifier. (cp_parser_omp_clause_nogroup, cp_parser_omp_clause_orderedkind): New functions. (cp_parser_omp_clause_linear): Parse linear clause modifiers. (cp_parser_omp_clause_depend_sink): New function. (cp_parser_omp_clause_depend): Parse source/sink depend kinds. (cp_parser_omp_clause_map): Parse release/delete map kinds and optional always modifier. (cp_parser_oacc_all_clauses): Adjust cp_parser_omp_clause_if and finish_omp_clauses callers. (cp_parser_omp_all_clauses): Likewise. Parse OpenMP 4.5 clauses. Parse "to" as OMP_CLAUSE_TO_DECLARE if on declare target directive. (OMP_CRITICAL_CLAUSE_MASK): Define. (cp_parser_omp_critical): Parse critical clauses. (cp_parser_omp_for_incr): Use cp_tree_equal if processing_template_decl. (cp_parser_omp_for_loop_init): Return tree instead of bool. Handle non-static data member iterators. (cp_parser_omp_for_loop): Handle doacross loops, adjust finish_omp_for and finish_omp_clauses callers. (cp_omp_split_clauses): Adjust finish_omp_clauses caller. (OMP_SIMD_CLAUSE_MASK): Add simdlen clause. (cp_parser_omp_simd): Allow ordered clause if it has no parameter. (OMP_FOR_CLAUSE_MASK): Add linear clause. (cp_parser_omp_for): Disallow ordered clause when combined with distribute. Disallow linear clause when combined with distribute and not combined with simd. (OMP_ORDERED_CLAUSE_MASK, OMP_ORDERED_DEPEND_CLAUSE_MASK): Define. (cp_parser_omp_ordered): Add CONTEXT argument, return bool instead of tree, parse clauses and if depend clause is found, don't parse a body. 
(cp_parser_omp_parallel): Disallow copyin clause on target parallel. Allow target parallel without for after it. (OMP_TASK_CLAUSE_MASK): Add priority clause. (OMP_TARGET_DATA_CLAUSE_MASK): Add use_device_ptr clause. (cp_parser_omp_target_data): Diagnose no map clauses or clauses with invalid kinds. (OMP_TARGET_UPDATE_CLAUSE_MASK): Add depend and nowait clauses. (OMP_TARGET_ENTER_DATA_CLAUSE_MASK, OMP_TARGET_EXIT_DATA_CLAUSE_MASK): Define. (cp_parser_omp_target_enter_data, cp_parser_omp_target_exit_data): New functions. (OMP_TARGET_CLAUSE_MASK): Add depend, nowait, private, firstprivate, defaultmap and is_device_ptr clauses. (cp_parser_omp_target): Parse target parallel and target simd. Set OMP_TARGET_COMBINED on combined constructs. Parse target enter data and target exit data. Diagnose invalid map kinds. (cp_parser_oacc_cache): Adjust finish_omp_clauses caller. (OMP_DECLARE_TARGET_CLAUSE_MASK): Define. (cp_parser_omp_declare_target): Parse OpenMP 4.5 forms of this construct. (OMP_TASKLOOP_CLAUSE_MASK): Define. (cp_parser_omp_taskloop): New function. (cp_parser_omp_construct): Don't handle PRAGMA_OMP_ORDERED here, handle PRAGMA_OMP_TASKLOOP. (cp_parser_pragma): Handle PRAGMA_OMP_ORDERED here directly, handle PRAGMA_OMP_TASKLOOP, call push_omp_privatization_clauses and pop_omp_privatization_clauses around parsing calls. (cp_parser_cilk_for): Adjust finish_omp_clauses caller. * pt.c (apply_late_template_attributes): Adjust tsubst_omp_clauses and finish_omp_clauses callers. (tsubst_omp_clause_decl): Return NULL if decl is NULL. For TREE_LIST, copy over OMP_CLAUSE_DEPEND_SINK_NEGATIVE bit. Use tsubst_expr instead of tsubst_copy, undo convert_from_reference effects. (tsubst_omp_clauses): Add ALLOW_FIELDS argument. Handle new OpenMP 4.5 clauses. Use tsubst_omp_clause_decl for more clauses. If ALLOW_FIELDS, handle non-static data members in the clauses. Clear OMP_CLAUSE_LINEAR_STEP if it has been cleared before. (omp_parallel_combined_clauses): New variable. (tsubst_omp_for_iterator): Add ORIG_DECLV argument, recur on OMP_FOR_ORIG_DECLS, handle non-static data member iterators. Improve handling of clauses on combined constructs. (tsubst_expr): Call push_omp_privatization_clauses and pop_omp_privatization_clauses around instantiation of certain OpenMP constructs, improve handling of clauses on combined constructs, handle OMP_TASKLOOP, adjust tsubst_omp_for_iterator, tsubst_omp_clauses and finish_omp_for callers, handle clauses on critical and ordered, handle OMP_TARGET_{ENTER,EXIT}_DATA. (instantiate_decl): Call save_omp_privatization_clauses and restore_omp_privatization_clauses around instantiation. (dependent_omp_for_p): Fix up comment typo. Handle SCOPE_REF. * semantics.c (omp_private_member_map, omp_private_member_vec, omp_private_member_ignore_next): New variables. (finish_non_static_data_member): Return dummy decl for privatized non-static data members. (omp_clause_decl_field, omp_clause_printable_decl, omp_note_field_privatization, omp_privatize_field): New functions. (handle_omp_array_sections_1): Fix comment typo. Add IS_OMP argument, handle structure element bases, diagnose bitfields, pass IS_OMP recursively, diagnose known zero length array sections in depend clauses, handle array sections in reduction clause, diagnose negative length even for pointers. 
(handle_omp_array_sections): Add IS_OMP argument, use auto_vec for types, pass IS_OMP down to handle_omp_array_sections_1, handle array sections in reduction clause, set OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION if map could be zero length array section, use GOMP_MAP_FIRSTPRIVATE_POINTER for IS_OMP. (finish_omp_reduction_clause): Handle array sections and arrays. Use omp_clause_printable_decl. (finish_omp_declare_simd_methods, cp_finish_omp_clause_depend_sink): New functions. (finish_omp_clauses): Add ALLOW_FIELDS and DECLARE_SIMD arguments. Handle new OpenMP 4.5 clauses and new restrictions for the old ones, handle non-static data members, reject this keyword when not allowed. (push_omp_privatization_clauses, pop_omp_privatization_clauses, save_omp_privatization_clauses, restore_omp_privatization_clauses): New functions. (handle_omp_for_class_iterator): Handle OMP_TASKLOOP class iterators. Add collapse and ordered arguments. Fix handling of lastprivate iterators in doacross loops. (finish_omp_for): Add ORIG_DECLV argument, handle doacross loops, adjust c_finish_omp_for, handle_omp_for_class_iterator and finish_omp_clauses callers. Fill in OMP_CLAUSE_LINEAR_STEP on simd loops with non-static data member iterators. gcc/fortran/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Ilya Verbin <ilya.verbin@intel.com> * f95-lang.c (DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10, DEF_FUNCTION_TYPE_11, DEF_FUNCTION_TYPE_VAR_1): Define. * trans-openmp.c (gfc_trans_omp_clauses): Set OMP_CLAUSE_IF_MODIFIER to ERROR_MARK, OMP_CLAUSE_ORDERED_EXPR to NULL. (gfc_trans_omp_critical): Adjust for addition of clauses. (gfc_trans_omp_ordered): Likewise. * types.def (BT_FN_BOOL_UINT_LONGPTR_LONGPTR_LONGPTR, BT_FN_BOOL_UINT_ULLPTR_ULLPTR_ULLPTR, BT_FN_BOOL_UINT_LONGPTR_LONG_LONGPTR_LONGPTR, BT_FN_BOOL_UINT_ULLPTR_ULL_ULLPTR_ULLPTR, BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR, BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_ULL_ULL_ULL, BT_FN_VOID_LONG_VAR, BT_FN_VOID_ULL_VAR): New. (BT_FN_VOID_INT_PTR_SIZE_PTR_PTR_PTR, BT_FN_VOID_INT_OMPFN_PTR_SIZE_PTR_PTR_PTR, BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR): Remove. gcc/lto/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> * lto-lang.c (DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10, DEF_FUNCTION_TYPE_11): Define. gcc/jit/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> * jit-builtins.c (DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10, DEF_FUNCTION_TYPE_11): Define. * jit-builtins.h (DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10, DEF_FUNCTION_TYPE_11): Define. gcc/ada/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> * gcc-interface/utils.c (DEF_FUNCTION_TYPE_9, DEF_FUNCTION_TYPE_10, DEF_FUNCTION_TYPE_11): Define. gcc/testsuite/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> * c-c++-common/gomp/cancel-1.c (f2): Add map clause to target data. * c-c++-common/gomp/clauses-1.c: New test. * c-c++-common/gomp/clauses-2.c: New test. * c-c++-common/gomp/clauses-3.c: New test. * c-c++-common/gomp/clauses-4.c: New test. * c-c++-common/gomp/declare-target-1.c: New test. * c-c++-common/gomp/declare-target-2.c: New test. * c-c++-common/gomp/depend-3.c: New test. * c-c++-common/gomp/depend-4.c: New test. * c-c++-common/gomp/doacross-1.c: New test. * c-c++-common/gomp/if-1.c: New test. * c-c++-common/gomp/if-2.c: New test. * c-c++-common/gomp/linear-1.c: New test. 
* c-c++-common/gomp/map-2.c: New test. * c-c++-common/gomp/map-3.c: New test. * c-c++-common/gomp/nesting-1.c (f_omp_parallel, f_omp_target_data): Add map clause to target data. * c-c++-common/gomp/nesting-warn-1.c (f_omp_target): Likewise. * c-c++-common/gomp/ordered-1.c: New test. * c-c++-common/gomp/ordered-2.c: New test. * c-c++-common/gomp/ordered-3.c: New test. * c-c++-common/gomp/pr61486-1.c (foo): Remove linear clause on non-iterator. * c-c++-common/gomp/pr61486-2.c (test, test2): Remove ordered clause and ordered construct where no longer allowed. * c-c++-common/gomp/priority-1.c: New test. * c-c++-common/gomp/reduction-1.c: New test. * c-c++-common/gomp/schedule-simd-1.c: New test. * c-c++-common/gomp/sink-1.c: New test. * c-c++-common/gomp/sink-2.c: New test. * c-c++-common/gomp/sink-3.c: New test. * c-c++-common/gomp/sink-4.c: New test. * c-c++-common/gomp/udr-1.c: New test. * c-c++-common/taskloop-1.c: New test. * c-c++-common/cpp/openmp-define-3.c: Adjust for the new value of _OPENMP macro. * c-c++-common/cilk-plus/PS/body.c (foo): Adjust expected diagnostics. * c-c++-common/goacc-gomp/nesting-fail-1.c (f_acc_parallel, f_acc_kernels, f_acc_data, f_acc_loop): Add map clause to target data. * gcc.dg/gomp/clause-1.c: * gcc.dg/gomp/reduction-1.c: New test. * gcc.dg/gomp/sink-fold-1.c: New test. * gcc.dg/gomp/sink-fold-2.c: New test. * gcc.dg/gomp/sink-fold-3.c: New test. * gcc.dg/vect/vect-simd-clone-15.c: New test. * g++.dg/gomp/clause-1.C (T::test): Remove dg-error on privatization of non-static data members. * g++.dg/gomp/clause-3.C (foo): Remove one dg-error directive. Add some linear clause tests. * g++.dg/gomp/declare-simd-3.C: New test. * g++.dg/gomp/linear-1.C: New test. * g++.dg/gomp/member-1.C: New test. * g++.dg/gomp/member-2.C: New test. * g++.dg/gomp/pr66571-2.C: New test. * g++.dg/gomp/pr67504.C (foo): Add test for ordered clause with dependent argument. * g++.dg/gomp/pr67522.C (foo): Add test for invalid array section in reduction clause. * g++.dg/gomp/reference-1.C: New test. * g++.dg/gomp/sink-1.C: New test. * g++.dg/gomp/sink-2.C: New test. * g++.dg/gomp/sink-3.C: New test. * g++.dg/gomp/task-1.C: Remove both dg-error directives. * g++.dg/gomp/this-1.C: New test. * g++.dg/gomp/this-2.C: New test. * g++.dg/vect/simd-clone-2.cc: New test. * g++.dg/vect/simd-clone-2.h: New test. * g++.dg/vect/simd-clone-3.cc: New test. * g++.dg/vect/simd-clone-4.cc: New test. * g++.dg/vect/simd-clone-4.h: New test. * g++.dg/vect/simd-clone-5.cc: New test. include/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Ilya Verbin <ilya.verbin@intel.com> * gomp-constants.h (GOMP_MAP_FLAG_ALWAYS): Define. (enum gomp_map_kind): Add GOMP_MAP_FIRSTPRIVATE, GOMP_MAP_FIRSTPRIVATE_INT, GOMP_MAP_USE_DEVICE_PTR, GOMP_MAP_ZERO_LEN_ARRAY_SECTION, GOMP_MAP_ALWAYS_TO, GOMP_MAP_ALWAYS_FROM, GOMP_MAP_ALWAYS_TOFROM, GOMP_MAP_STRUCT, GOMP_MAP_DELETE_ZERO_LEN_ARRAY_SECTION, GOMP_MAP_DELETE, GOMP_MAP_RELEASE, GOMP_MAP_FIRSTPRIVATE_POINTER. (GOMP_MAP_ALWAYS_TO_P, GOMP_MAP_ALWAYS_FROM_P): Define. (GOMP_TASK_FLAG_UNTIED, GOMP_TASK_FLAG_FINAL, GOMP_TASK_FLAG_MERGEABLE, GOMP_TASK_FLAG_DEPEND, GOMP_TASK_FLAG_PRIORITY, GOMP_TASK_FLAG_UP, GOMP_TASK_FLAG_GRAINSIZE, GOMP_TASK_FLAG_IF, GOMP_TASK_FLAG_NOGROUP, GOMP_TARGET_FLAG_NOWAIT, GOMP_TARGET_FLAG_EXIT_DATA, GOMP_TARGET_FLAG_UPDATE): Define. 
libgomp/ 2015-10-13 Jakub Jelinek <jakub@redhat.com> Aldy Hernandez <aldyh@redhat.com> Ilya Verbin <ilya.verbin@intel.com> * config/linux/affinity.c (omp_get_place_num_procs, omp_get_place_proc_ids, gomp_get_place_proc_ids_8): New functions. * config/linux/doacross.h: New file. * config/posix/affinity.c (omp_get_place_num_procs, omp_get_place_proc_ids, gomp_get_place_proc_ids_8): New functions. * config/posix/doacross.h: New file. * env.c: Include gomp-constants.h. (struct gomp_task_icv): Rename run_sched_modifier to run_sched_chunk_size. (gomp_max_task_priority_var): New variable. (parse_schedule): Rename run_sched_modifier to run_sched_chunk_size. (handle_omp_display_env): Change _OPENMP value from 201307 to 201511. Print OMP_MAX_TASK_PRIORITY. (initialize_env): Parse OMP_MAX_TASK_PRIORITY. (omp_set_schedule, omp_get_schedule): Rename modifier argument to chunk_size and run_sched_modifier to run_sched_chunk_size. (omp_get_max_task_priority, omp_get_initial_device, omp_get_num_places, omp_get_place_num, omp_get_partition_num_places, omp_get_partition_place_nums): New functions. * fortran.c (omp_set_schedule_, omp_set_schedule_8_, omp_get_schedule_, omp_get_schedule_8_): Rename modifier argument to chunk_size. (omp_get_num_places_, omp_get_place_num_procs_, omp_get_place_num_procs_8_, omp_get_place_proc_ids_, omp_get_place_proc_ids_8_, omp_get_place_num_, omp_get_partition_num_places_, omp_get_partition_place_nums_, omp_get_partition_place_nums_8_, omp_get_initial_device_, omp_get_max_task_priority_): New functions. * libgomp_g.h (GOMP_loop_doacross_static_start, GOMP_loop_doacross_dynamic_start, GOMP_loop_doacross_guided_start, GOMP_loop_doacross_runtime_start, GOMP_loop_ull_doacross_static_start, GOMP_loop_ull_doacross_dynamic_start, GOMP_loop_ull_doacross_guided_start, GOMP_loop_ull_doacross_runtime_start, GOMP_doacross_post, GOMP_doacross_wait, GOMP_doacross_ull_post, GOMP_doacross_wait, GOMP_taskloop, GOMP_taskloop_ull, GOMP_target_41, GOMP_target_data_41, GOMP_target_update_41, GOMP_target_enter_exit_data): New prototypes. (GOMP_task): Add prototype argument. * libgomp.h (_LIBGOMP_CHECKING_): Define to 0 if not yet defined. (struct gomp_doacross_work_share): New type. (struct gomp_work_share): Add doacross field. (struct gomp_task_icv): Rename run_sched_modifier to run_sched_chunk_size. (enum gomp_task_kind): Rename GOMP_TASK_IFFALSE to GOMP_TASK_UNDEFERRED. Add comments. (struct gomp_task_depend_entry): Add comments. (struct gomp_task): Likewise. (struct gomp_taskgroup): Likewise. (struct gomp_target_task): New type. (struct gomp_team): Add comment. (gomp_get_place_proc_ids_8, gomp_doacross_init, gomp_doacross_ull_init, gomp_task_maybe_wait_for_dependencies, gomp_create_target_task, gomp_target_task_fn): New prototypes. (struct target_var_desc): New type. (struct target_mem_desc): Adjust comment. Use struct target_var_desc instead of splay_tree_key for list. (REFCOUNT_INFINITY): Define. (struct splay_tree_key_s): Remove copy_from field. (struct gomp_device_descr): Add dev2dev_func field. (enum gomp_map_vars_kind): New enum. (gomp_map_vars): Add one argument. 
* libgomp.map (OMP_4.5): Export omp_get_max_task_priority, omp_get_max_task_priority_, omp_get_num_places, omp_get_num_places_, omp_get_place_num_procs, omp_get_place_num_procs_, omp_get_place_num_procs_8_, omp_get_place_proc_ids, omp_get_place_proc_ids_, omp_get_place_proc_ids_8_, omp_get_place_num, omp_get_place_num_, omp_get_partition_num_places, omp_get_partition_num_places_, omp_get_partition_place_nums, omp_get_partition_place_nums_, omp_get_partition_place_nums_8_, omp_get_initial_device, omp_get_initial_device_, omp_target_alloc, omp_target_free, omp_target_is_present, omp_target_memcpy, omp_target_memcpy_rect, omp_target_associate_ptr and omp_target_disassociate_ptr. (GOMP_4.0.2): Renamed to ... (GOMP_4.5): ... this. Export GOMP_target_41, GOMP_target_data_41, GOMP_target_update_41, GOMP_target_enter_exit_data, GOMP_taskloop, GOMP_taskloop_ull, GOMP_loop_doacross_dynamic_start, GOMP_loop_doacross_guided_start, GOMP_loop_doacross_runtime_start, GOMP_loop_doacross_static_start, GOMP_doacross_post, GOMP_doacross_wait, GOMP_loop_ull_doacross_dynamic_start, GOMP_loop_ull_doacross_guided_start, GOMP_loop_ull_doacross_runtime_start, GOMP_loop_ull_doacross_static_start, GOMP_doacross_ull_post and GOMP_doacross_ull_wait. * libgomp.texi: Document omp_get_max_task_priority. Rename modifier argument to chunk_size for omp_set_schedule and omp_get_schedule. Document OMP_MAX_TASK_PRIORITY env var. * loop.c (GOMP_loop_runtime_start): Adjust for run_sched_modifier to run_sched_chunk_size renaming. (GOMP_loop_ordered_runtime_start): Likewise. (gomp_loop_doacross_static_start, gomp_loop_doacross_dynamic_start, gomp_loop_doacross_guided_start, GOMP_loop_doacross_runtime_start, GOMP_parallel_loop_runtime_start): New functions. (GOMP_parallel_loop_runtime): Adjust for run_sched_modifier to run_sched_chunk_size renaming. (GOMP_loop_doacross_static_start, GOMP_loop_doacross_dynamic_start, GOMP_loop_doacross_guided_start): New functions or aliases. * loop_ull.c (GOMP_loop_ull_runtime_start): Adjust for run_sched_modifier to run_sched_chunk_size renaming. (GOMP_loop_ull_ordered_runtime_start): Likewise. (gomp_loop_ull_doacross_static_start, gomp_loop_ull_doacross_dynamic_start, gomp_loop_ull_doacross_guided_start, GOMP_loop_ull_doacross_runtime_start): New functions. (GOMP_loop_ull_doacross_static_start, GOMP_loop_ull_doacross_dynamic_start, GOMP_loop_ull_doacross_guided_start): New functions or aliases. * oacc-mem.c (acc_map_data, present_create_copy, gomp_acc_insert_pointer): Pass GOMP_MAP_VARS_OPENACC instead of false to gomp_map_vars. (gomp_acc_remove_pointer): Use copy_from from target_var_desc. * oacc-parallel.c (GOACC_data_start): Pass GOMP_MAP_VARS_OPENACC instead of false to gomp_map_vars. (GOACC_parallel_keyed): Likewise. Use copy_from from target_var_desc. * omp.h.in (omp_lock_hint_t): New type. (omp_init_lock_with_hint, omp_init_nest_lock_with_hint, omp_get_num_places, omp_get_place_num_procs, omp_get_place_proc_ids, omp_get_place_num, omp_get_partition_num_places, omp_get_partition_place_nums, omp_get_initial_device, omp_get_max_task_priority, omp_target_alloc, omp_target_free, omp_target_is_present, omp_target_memcpy, omp_target_memcpy_rect, omp_target_associate_ptr, omp_target_disassociate_ptr): New prototypes. * omp_lib.f90.in (omp_lock_hint_kind): New parameter. (omp_lock_hint_none, omp_lock_hint_uncontended, omp_lock_hint_contended, omp_lock_hint_nonspeculative, omp_lock_hint_speculative): New parameters. 
(omp_init_lock_with_hint, omp_init_nest_lock_with_hint, omp_get_num_places, omp_get_place_num_procs, omp_get_place_proc_ids, omp_get_place_num, omp_get_partition_num_places, omp_get_partition_place_nums, omp_get_initial_device, omp_get_max_task_priority): New interfaces. (omp_set_schedule, omp_get_schedule): Rename modifier argument to chunk_size. * omp_lib.h.in (omp_lock_hint_kind): New parameter. (omp_lock_hint_none, omp_lock_hint_uncontended, omp_lock_hint_contended, omp_lock_hint_nonspeculative, omp_lock_hint_speculative): New parameters. (omp_init_lock_with_hint, omp_init_nest_lock_with_hint, omp_get_num_places, omp_get_place_num_procs, omp_get_place_proc_ids, omp_get_place_num, omp_get_partition_num_places, omp_get_partition_place_nums, omp_get_initial_device, omp_get_max_task_priority): New functions and subroutines. * ordered.c: Include stdarg.h and string.h. (MAX_COLLAPSED_BITS): Define. (gomp_doacross_init, GOMP_doacross_post, GOMP_doacross_wait, gomp_doacross_ull_init, GOMP_doacross_ull_post, GOMP_doacross_ull_wait): New functions. * target.c: Include errno.h. (resolve_device): If device is not initialized, call gomp_init_device on it. (gomp_map_lookup): New function. (gomp_map_vars_existing): Add tgt_var argument, fill it in. Don't bump refcount if REFCOUNT_INFINITY. Handle GOMP_MAP_ALWAYS_TO_P. (get_kind): Rename is_openacc argument to short_mapkind. (gomp_map_pointer): Use gomp_map_lookup. (gomp_map_fields_existing): New function. (gomp_map_vars): Rename is_openacc argument to short_mapkind and is_target to pragma_kind. Handle GOMP_MAP_VARS_ENTER_DATA, handle GOMP_MAP_FIRSTPRIVATE_INT, GOMP_MAP_STRUCT, GOMP_MAP_USE_DEVICE_PTR, GOMP_MAP_ZERO_LEN_ARRAY_SECTION. Adjust for tgt->list changed type and copy_from living in there. (gomp_copy_from_async): Adjust for tgt->list changed type and copy_from living in there. (gomp_unmap_vars): Likewise. (gomp_update): Likewise. Rename is_openacc argument to short_mapkind. Don't fail if object is not mapped. (gomp_load_image_to_device): Initialize refcount to REFCOUNT_INFINITY. (gomp_target_fallback): New function. (gomp_get_target_fn_addr): Likewise. (GOMP_target): Adjust gomp_map_vars caller, use gomp_get_target_fn_addr and gomp_target_fallback. (GOMP_target_41): New function. (gomp_target_data_fallback): New function. (GOMP_target_data): Use it, adjust gomp_map_vars caller. (GOMP_target_data_41): New function. (GOMP_target_update): Adjust gomp_update caller. (GOMP_target_update_41): New function. (gomp_exit_data, GOMP_target_enter_exit_data, gomp_target_task_fn, omp_target_alloc, omp_target_free, omp_target_is_present, omp_target_memcpy, omp_target_memcpy_rect_worker, omp_target_memcpy_rect, omp_target_associate_ptr, omp_target_disassociate_ptr, gomp_load_plugin_for_device): New functions. * task.c: Include gomp-constants.h. Include taskloop.c twice to get GOMP_taskloop and GOMP_taskloop_ull definitions. (gomp_task_handle_depend): New function. (GOMP_task): Use it. Add priority argument. Use gomp-constant.h constants instead of hardcoded numbers. Rename GOMP_TASK_IFFALSE to GOMP_TASK_UNDEFERRED. (gomp_create_target_task): New function. (verify_children_queue, verify_taskgroup_queue, verify_task_queue): New functions. (gomp_task_run_pre): Call verify_*_queue functions. If an upcoming tied task is about to leave the sibling or taskgroup queues in an invalid state, adjust appropriately. Remove taskgroup argument. Add comments. (gomp_task_run_post_handle_dependers): Add comments. (gomp_task_run_post_remove_parent): Likewise. 
(gomp_barrier_handle_tasks): Adjust gomp_task_run_pre caller. (GOMP_taskwait): Likewise. Add comments. (gomp_task_maybe_wait_for_dependencies): Fix scheduling problem such that the first non parent_depends_on task does not end up at the end of the children queue. (GOMP_taskgroup_start): Rename GOMP_TASK_IFFALSE to GOMP_TASK_UNDEFERRED. (GOMP_taskgroup_end): Adjust gomp_task_run_pre caller. * taskloop.c: New file. * testsuite/lib/libgomp.exp (check_effective_target_offload_device_nonshared_as): New proc. * testsuite/libgomp.c/affinity-2.c: New test. * testsuite/libgomp.c/doacross-1.c: New test. * testsuite/libgomp.c/doacross-2.c: New test. * testsuite/libgomp.c/examples-4/declare_target-1.c (fib_wrapper): Add map clause to target. * testsuite/libgomp.c/examples-4/declare_target-4.c (accum): Likewise. * testsuite/libgomp.c/examples-4/declare_target-5.c (accum): Likewise. * testsuite/libgomp.c/examples-4/device-1.c (main): Likewise. * testsuite/libgomp.c/examples-4/device-3.c (main): Likewise. * testsuite/libgomp.c/examples-4/target_data-3.c (gramSchmidt): Likewise. * testsuite/libgomp.c/examples-4/teams-2.c (dotprod): Likewise. * testsuite/libgomp.c/examples-4/teams-3.c (dotprod): Likewise. * testsuite/libgomp.c/examples-4/teams-4.c (dotprod): Likewise. * testsuite/libgomp.c/for-2.h (OMPTGT, OMPTO, OMPFROM): Define if not defined. Use those where needed. * testsuite/libgomp.c/for-4.c: New test. * testsuite/libgomp.c/for-5.c: New test. * testsuite/libgomp.c/for-6.c: New test. * testsuite/libgomp.c/linear-1.c: New test. * testsuite/libgomp.c/ordered-4.c: New test. * testsuite/libgomp.c/pr66199-2.c (f2): Adjust for linear clause only allowed on the loop iterator. * testsuite/libgomp.c/pr66199-3.c: New test. * testsuite/libgomp.c/pr66199-4.c: New test. * testsuite/libgomp.c/reduction-7.c: New test. * testsuite/libgomp.c/reduction-8.c: New test. * testsuite/libgomp.c/reduction-9.c: New test. * testsuite/libgomp.c/reduction-10.c: New test. * testsuite/libgomp.c/target-1.c (fn2, fn3, fn4): Add map(tofrom:s). * testsuite/libgomp.c/target-2.c (fn2, fn3, fn4): Likewise. * testsuite/libgomp.c/target-7.c (foo): Add map(h) where needed. * testsuite/libgomp.c/target-11.c: New test. * testsuite/libgomp.c/target-12.c: New test. * testsuite/libgomp.c/target-13.c: New test. * testsuite/libgomp.c/target-14.c: New test. * testsuite/libgomp.c/target-15.c: New test. * testsuite/libgomp.c/target-16.c: New test. * testsuite/libgomp.c/target-17.c: New test. * testsuite/libgomp.c/target-18.c: New test. * testsuite/libgomp.c/target-19.c: New test. * testsuite/libgomp.c/target-20.c: New test. * testsuite/libgomp.c/target-21.c: New test. * testsuite/libgomp.c/target-22.c: New test. * testsuite/libgomp.c/target-23.c: New test. * testsuite/libgomp.c/target-24.c: New test. * testsuite/libgomp.c/target-25.c: New test. * testsuite/libgomp.c/target-26.c: New test. * testsuite/libgomp.c/target-27.c: New test. * testsuite/libgomp.c/taskloop-1.c: New test. * testsuite/libgomp.c/taskloop-2.c: New test. * testsuite/libgomp.c/taskloop-3.c: New test. * testsuite/libgomp.c/taskloop-4.c: New test. * testsuite/libgomp.c++/ctor-13.C: New test. * testsuite/libgomp.c++/doacross-1.C: New test. * testsuite/libgomp.c++/examples-4/declare_target-2.C: Replace offload_device with offload_device_nonshared_as. * testsuite/libgomp.c++/for-12.C: New test. * testsuite/libgomp.c++/for-13.C: New test. * testsuite/libgomp.c++/for-14.C: New test. * testsuite/libgomp.c++/linear-1.C: New test. * testsuite/libgomp.c++/member-1.C: New test. 
* testsuite/libgomp.c++/member-2.C: New test. * testsuite/libgomp.c++/member-3.C: New test. * testsuite/libgomp.c++/member-4.C: New test. * testsuite/libgomp.c++/member-5.C: New test. * testsuite/libgomp.c++/ordered-1.C: New test. * testsuite/libgomp.c++/reduction-5.C: New test. * testsuite/libgomp.c++/reduction-6.C: New test. * testsuite/libgomp.c++/reduction-7.C: New test. * testsuite/libgomp.c++/reduction-8.C: New test. * testsuite/libgomp.c++/reduction-9.C: New test. * testsuite/libgomp.c++/reduction-10.C: New test. * testsuite/libgomp.c++/reference-1.C: New test. * testsuite/libgomp.c++/simd14.C: New test. * testsuite/libgomp.c++/target-2.C (fn2): Add map(tofrom: s) clause. * testsuite/libgomp.c++/target-5.C: New test. * testsuite/libgomp.c++/target-6.C: New test. * testsuite/libgomp.c++/target-7.C: New test. * testsuite/libgomp.c++/target-8.C: New test. * testsuite/libgomp.c++/target-9.C: New test. * testsuite/libgomp.c++/target-10.C: New test. * testsuite/libgomp.c++/target-11.C: New test. * testsuite/libgomp.c++/target-12.C: New test. * testsuite/libgomp.c++/taskloop-1.C: New test. * testsuite/libgomp.c++/taskloop-2.C: New test. * testsuite/libgomp.c++/taskloop-3.C: New test. * testsuite/libgomp.c++/taskloop-4.C: New test. * testsuite/libgomp.c++/taskloop-5.C: New test. * testsuite/libgomp.c++/taskloop-6.C: New test. * testsuite/libgomp.c++/taskloop-7.C: New test. * testsuite/libgomp.c++/taskloop-8.C: New test. * testsuite/libgomp.c++/taskloop-9.C: New test. * testsuite/libgomp.fortran/affinity1.f90: New test. * testsuite/libgomp.fortran/affinity2.f90: New test. liboffloadmic/ 2015-10-13 Ilya Verbin <ilya.verbin@intel.com> * plugin/libgomp-plugin-intelmic.cpp (GOMP_OFFLOAD_dev2dev): New function. * plugin/offload_target_main.cpp (__offload_target_tgt2tgt): New static function, register it in liboffloadmic. From-SVN: r228777
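For reference, a minimal sketch of the OpenMP 4.5 constructs this change implements -- taskloop with the new grainsize and priority clauses, and a doacross loop using ordered(n) with depend(sink)/depend(source) -- might look like the following (function names and clause values are arbitrary, not taken from the patch or its testsuite):

/* Illustrative sketch only -- not part of this commit.  Requires a
   compiler with OpenMP 4.5 support, compiled with -fopenmp.  */

void
saxpy_taskloop (int n, float a, float *x, float *y)
{
  #pragma omp parallel
  #pragma omp single
  /* taskloop is new in OpenMP 4.5; grainsize and priority are among
     the new clauses handled by this patch.  */
  #pragma omp taskloop grainsize(64) priority(1)
  for (int i = 0; i < n; i++)
    y[i] = a * x[i] + y[i];
}

void
doacross_example (int n, float *a)
{
  /* Doacross loops: ordered(1) plus depend(sink)/depend(source)
     express cross-iteration dependences.  */
  #pragma omp parallel for ordered(1)
  for (int i = 1; i < n; i++)
    {
      #pragma omp ordered depend(sink: i - 1)
      a[i] += a[i - 1];
      #pragma omp ordered depend(source)
    }
}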
\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2015 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below). A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

A GNU Manual

(b) The FSF's Back-Cover Text is:

You have freedom to copy and modify this GNU Manual, like GNU
software. Copies published by the Free Software Foundation raise
funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library. This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library. This includes the GNU
implementation of the @uref{http://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{http://www.openacc.org/, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
on this, support for OpenACC and offloading (both OpenACC and OpenMP
4's target construct) has been added later on, and the library's name
changed to GNU Offloading and Multi Processing Runtime Library.



@comment
@comment When you add a new menu item, please keep the right hand
@comment aligned to the same column. Do not use tabs. This provides
@comment better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* Runtime Library Routines::   The OpenMP runtime application programming
                               interface.
* Environment Variables::      Influencing runtime behavior with environment
                               variables.
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
|
|
@c Enabling OpenMP
|
|
@c ---------------------------------------------------------------------
|
|
|
|
@node Enabling OpenMP
|
|
@chapter Enabling OpenMP
|
|
|
|
To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
|
|
flag @command{-fopenmp} must be specified. This enables the OpenMP directive
|
|
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
|
|
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
|
|
@code{!$} conditional compilation sentinels in free form and @code{c$},
|
|
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
|
|
arranges for automatic linking of the OpenMP runtime library
|
|
(@ref{Runtime Library Routines}).
|
|
|
|
A complete description of all OpenMP directives accepted may be found in
|
|
the @uref{http://www.openmp.org, OpenMP Application Program Interface} manual,
|
|
version 4.0.
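
As a brief illustration (this example is not taken from the specification),
the following C program runs a single parallel region; built with, for
example, @command{gcc -fopenmp hello.c}, it prints one line per thread of
the team:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  /* -fopenmp enables the pragma below and links against libgomp.  */
  #pragma omp parallel
  printf ("Hello from thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
@}
@end smallexample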
|
|
|
|
|
|
@c ---------------------------------------------------------------------
|
|
@c Runtime Library Routines
|
|
@c ---------------------------------------------------------------------
|
|
|
|
@node Runtime Library Routines
|
|
@chapter Runtime Library Routines
|
|
|
|
The runtime routines described here are defined by Section 3 of the OpenMP
|
|
specification in version 4.0. The routines are structured in the following
|
|
three parts:
|
|
|
|
@menu
|
|
Control threads, processors and the parallel environment. They have C
|
|
linkage, and do not throw exceptions.
|
|
|
|
* omp_get_active_level:: Number of active parallel regions
|
|
* omp_get_ancestor_thread_num:: Ancestor thread ID
|
|
* omp_get_cancellation:: Whether cancellation support is enabled
|
|
* omp_get_default_device:: Get the default device for target regions
|
|
* omp_get_dynamic:: Dynamic teams setting
|
|
* omp_get_level:: Number of parallel regions
|
|
* omp_get_max_active_levels:: Maximum number of active regions
|
|
* omp_get_max_task_priority:: Maximum task priority value that can be set
|
|
* omp_get_max_threads:: Maximum number of threads of parallel region
|
|
* omp_get_nested:: Nested parallel regions
|
|
* omp_get_num_devices:: Number of target devices
|
|
* omp_get_num_procs:: Number of processors online
|
|
* omp_get_num_teams:: Number of teams
|
|
* omp_get_num_threads:: Size of the active team
|
|
* omp_get_proc_bind:: Whether threads may be moved between CPUs
|
|
* omp_get_schedule:: Obtain the runtime scheduling method
|
|
* omp_get_team_num:: Get team number
|
|
* omp_get_team_size:: Number of threads in a team
|
|
* omp_get_thread_limit:: Maximum number of threads
|
|
* omp_get_thread_num:: Current thread ID
|
|
* omp_in_parallel:: Whether a parallel region is active
|
|
* omp_in_final:: Whether in final or included task region
|
|
* omp_is_initial_device:: Whether executing on the host device
|
|
* omp_set_default_device:: Set the default device for target regions
|
|
* omp_set_dynamic:: Enable/disable dynamic teams
|
|
* omp_set_max_active_levels:: Limits the number of active parallel regions
|
|
* omp_set_nested:: Enable/disable nested parallel regions
|
|
* omp_set_num_threads:: Set upper team size limit
|
|
* omp_set_schedule:: Set the runtime scheduling method
|
|
|
|
Initialize, set, test, unset and destroy simple and nested locks.
|
|
|
|
* omp_init_lock:: Initialize simple lock
|
|
* omp_set_lock:: Wait for and set simple lock
|
|
* omp_test_lock:: Test and set simple lock if available
|
|
* omp_unset_lock:: Unset simple lock
|
|
* omp_destroy_lock:: Destroy simple lock
|
|
* omp_init_nest_lock:: Initialize nested lock
|
|
* omp_set_nest_lock:: Wait for and set nested lock
|
|
* omp_test_nest_lock:: Test and set nested lock if available
|
|
* omp_unset_nest_lock:: Unset nested lock
|
|
* omp_destroy_nest_lock:: Destroy nested lock
|
|
|
|
Portable, thread-based, wall clock timer.
|
|
|
|
* omp_get_wtick:: Get timer precision.
|
|
* omp_get_wtime:: Elapsed wall clock time.
|
|
@end menu
|
|
|
|
|
|
|
|
@node omp_get_active_level
|
|
@section @code{omp_get_active_level} -- Number of active parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the nesting level of the active parallel blocks
which enclose the call of this function.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.20.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_ancestor_thread_num
|
|
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the thread identification number for the given
|
|
nesting level of the current thread. For values of @var{level} outside
|
|
the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
|
|
@code{omp_get_level} the result is identical to @code{omp_get_thread_num}.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
|
|
@item @tab @code{integer level}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.18.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_cancellation
|
|
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if cancellation is activated, @code{false}
|
|
otherwise. Here, @code{true} and @code{false} represent their language-specific
|
|
counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
|
|
deactivated.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_CANCELLATION}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.9.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_default_device
|
|
@section @code{omp_get_default_device} -- Get the default device for target regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Get the default device for target regions without device clause.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.24.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_dynamic
|
|
@section @code{omp_get_dynamic} -- Dynamic teams setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if enabled, @code{false} otherwise.
|
|
Here, @code{true} and @code{false} represent their language-specific
|
|
counterparts.
|
|
|
|
The dynamic team setting may be initialized at startup by the
|
|
@env{OMP_DYNAMIC} environment variable or at runtime using
|
|
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
|
|
disabled by default.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.8.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_level
|
|
@section @code{omp_get_level} -- Obtain the current nesting level
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the nesting level of the parallel blocks
which enclose the call of this function.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.17.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_max_active_levels
|
|
@section @code{omp_get_max_active_levels} -- Maximum number of active regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function obtains the maximum allowed number of nested, active parallel regions.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.16.
|
|
@end table
|
|
|
|
|
|
@node omp_get_max_task_priority
|
|
@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function obtains the maximum allowed priority number for tasks.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.5}, Section 3.2.29.
|
|
@end table
|
|
|
|
|
|
@node omp_get_max_threads
|
|
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Return the maximum number of threads that would be used to form a new team
if a parallel region without a @code{num_threads} clause were encountered.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.3.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_nested
|
|
@section @code{omp_get_nested} -- Nested parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if nested parallel regions are
|
|
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
|
|
represent their language-specific counterparts.
|
|
|
|
Nested parallel regions may be initialized at startup by the
|
|
@env{OMP_NESTED} environment variable or at runtime using
|
|
@code{omp_set_nested}. If undefined, nested parallel regions are
|
|
disabled by default.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nested}, @ref{OMP_NESTED}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.11.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_devices
|
|
@section @code{omp_get_num_devices} -- Number of target devices
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of target devices.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.25.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_procs
|
|
@section @code{omp_get_num_procs} -- Number of processors online
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of processors online on that device.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.5.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_teams
|
|
@section @code{omp_get_num_teams} -- Number of teams
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of teams in the current team region.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.26.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_threads
|
|
@section @code{omp_get_num_threads} -- Size of the active team
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of threads in the current team. In a sequential section of
|
|
the program @code{omp_get_num_threads} returns 1.
|
|
|
|
The default team size may be initialized at startup by the
|
|
@env{OMP_NUM_THREADS} environment variable. At runtime, the size
|
|
of the current team may be set either by the @code{num_threads}
|
|
clause or by @code{omp_set_num_threads}. If none of the above were
|
|
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
|
|
one thread per CPU online is used.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.2.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_proc_bind
|
|
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the currently active thread affinity policy, which is
|
|
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
|
|
@code{omp_proc_bind_true}, @code{omp_proc_bind_master},
|
|
@code{omp_proc_bind_close} and @code{omp_proc_bind_spread}.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.22.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_schedule
|
|
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Obtain the runtime scheduling method. The @var{kind} argument will be
|
|
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
|
|
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
|
|
@var{chunk_size}, is set to the chunk size.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
|
|
@item @tab @code{integer(kind=omp_sched_kind) kind}
|
|
@item @tab @code{integer chunk_size}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.13.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_team_num
|
|
@section @code{omp_get_team_num} -- Get team number
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the team number of the calling thread.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.27.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_team_size
|
|
@section @code{omp_get_team_size} -- Number of threads in a team
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the number of threads in a thread team to which
|
|
either the current thread or its ancestor belongs. For values of @var{level}
|
|
outside the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
|
|
1 is returned, and for @code{omp_get_level}, the result is identical
|
|
to @code{omp_get_num_threads}.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
|
|
@item @tab @code{integer level}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.19.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_thread_limit
|
|
@section @code{omp_get_thread_limit} -- Maximum number of threads
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Return the maximum number of threads of the program.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.14.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_thread_num
|
|
@section @code{omp_get_thread_num} -- Current thread ID
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns a unique thread identification number within the current team.
|
|
In sequential parts of the program, @code{omp_get_thread_num}
|
|
always returns 0. In parallel regions the return value varies
|
|
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
|
|
value of the master thread of a team is always 0.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.4.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_in_parallel
|
|
@section @code{omp_in_parallel} -- Whether a parallel region is active
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running in parallel,
|
|
@code{false} otherwise. Here, @code{true} and @code{false} represent
|
|
their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.6.
|
|
@end table
|
|
|
|
|
|
@node omp_in_final
|
|
@section @code{omp_in_final} -- Whether in final or included task region
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running in a final
|
|
or included task region, @code{false} otherwise. Here, @code{true}
|
|
and @code{false} represent their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.21.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_is_initial_device
|
|
@section @code{omp_is_initial_device} -- Whether executing on the host device
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running on the host device,
|
|
@code{false} otherwise. Here, @code{true} and @code{false} represent
|
|
their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.28.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_default_device
|
|
@section @code{omp_set_default_device} -- Set the default device for target regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set the default device for target regions without device clause. The argument
|
|
shall be a nonnegative device number.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
|
|
@item @tab @code{integer device_num}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.23.
|
|
@end table
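
The following sketch is illustrative only: it selects device number 0 as the
default (if any non-host device is available) and offloads a @code{target}
region to it; @code{omp_is_initial_device} reports whether the region in
fact ran on the host.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  int on_host = 1;

  if (omp_get_num_devices () > 0)
    /* Used unless a device clause says otherwise.  */
    omp_set_default_device (0);

  #pragma omp target map(tofrom: on_host)
  on_host = omp_is_initial_device ();

  printf ("target region ran on the %s\n", on_host ? "host" : "device");
  return 0;
@}
@end smallexample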
|
|
|
|
|
|
|
|
@node omp_set_dynamic
|
|
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable the dynamic adjustment of the number of threads
|
|
within a team. The function takes the language-specific equivalent
|
|
of @code{true} and @code{false}, where @code{true} enables dynamic
|
|
adjustment of team sizes and @code{false} disables it.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
|
|
@item @tab @code{logical, intent(in) :: dynamic_threads}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.7.
|
|
@end table
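
A typical pattern, shown here only as an illustration, is to disable dynamic
adjustment before requesting a specific team size, so that the following
parallel region uses the requested number of threads unless limited by other
constraints such as @env{OMP_THREAD_LIMIT}:

@smallexample
#include <omp.h>

int
main (void)
@{
  omp_set_dynamic (0);       /* Disable dynamic adjustment of team sizes.  */
  omp_set_num_threads (16);  /* Request 16 threads for the next region.  */

  #pragma omp parallel
  @{
    /* Parallel work using the requested team size goes here.  */
  @}
  return 0;
@}
@end smallexample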
|
|
|
|
|
|
|
|
@node omp_set_max_active_levels
|
|
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function limits the maximum allowed number of nested, active
|
|
parallel regions.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
|
|
@item @tab @code{integer max_levels}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.15.
|
|
@end table
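
The sketch below (illustrative, not normative) enables nesting, limits the
number of active levels to two, and shows how @code{omp_get_level} and
@code{omp_get_active_level} differ once the limit is reached:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_set_nested (1);             /* Allow nested parallel regions.  */
  omp_set_max_active_levels (2);  /* Keep at most two of them active.  */

  #pragma omp parallel num_threads(2)
  @{
    #pragma omp parallel num_threads(2)
    @{
      /* This third level is serialized, so it counts as inactive.  */
      #pragma omp parallel num_threads(2)
      @{
        #pragma omp single
        printf ("nesting level %d, active levels %d\n",
                omp_get_level (), omp_get_active_level ());
      @}
    @}
  @}
  return 0;
@}
@end smallexample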
|
|
|
|
|
|
|
|
@node omp_set_nested
|
|
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable nested parallel regions, i.e., whether team members
|
|
are allowed to create new teams. The function takes the language-specific
|
|
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
|
|
@item @tab @code{logical, intent(in) :: nested}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NESTED}, @ref{omp_get_nested}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.10.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_num_threads
|
|
@section @code{omp_set_num_threads} -- Set upper team size limit
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the number of threads used by default in subsequent parallel
|
|
sections, if those do not specify a @code{num_threads} clause. The
|
|
argument of @code{omp_set_num_threads} shall be a positive integer.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
|
|
@item @tab @code{integer, intent(in) :: num_threads}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.1.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_schedule
|
|
@section @code{omp_set_schedule} -- Set the runtime scheduling method
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Sets the runtime scheduling method. The @var{kind} argument can have the
|
|
value @code{omp_sched_static}, @code{omp_sched_dynamic},
|
|
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
|
|
@code{omp_sched_auto}, the chunk size is set to the value of
|
|
@var{chunk_size} if positive, or to the default value if zero or negative.
|
|
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
|
|
@item @tab @code{integer(kind=omp_sched_kind) kind}
|
|
@item @tab @code{integer chunk_size}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.12.
|
|
@end table
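
As an illustration only, the following fragment selects dynamic scheduling
with a chunk size of 4 for loops that use @code{schedule(runtime)} and then
reads the setting back with @code{omp_get_schedule}:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk;

  /* Loops with schedule(runtime) now use dynamic scheduling, chunk 4.  */
  omp_set_schedule (omp_sched_dynamic, 4);

  omp_get_schedule (&kind, &chunk);
  printf ("kind = %d, chunk size = %d\n", (int) kind, chunk);

  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < 100; i++)
    @{
      /* Iterations are handed out in chunks of 4.  */
    @}
  return 0;
@}
@end smallexample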
|
|
|
|
|
|
|
|
@node omp_init_lock
|
|
@section @code{omp_init_lock} -- Initialize simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Initialize a simple lock. After initialization, the lock is in
|
|
an unlocked state.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_destroy_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.1.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_lock
|
|
@section @code{omp_set_lock} -- Wait for and set simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a simple lock, the lock variable must be initialized by
|
|
@code{omp_init_lock}. The calling thread is blocked until the lock
|
|
is available. If the lock is already held by the current thread,
|
|
a deadlock occurs.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.3.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_test_lock
|
|
@section @code{omp_test_lock} -- Test and set simple lock if available
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a simple lock, the lock variable must be initialized by
|
|
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
|
|
does not block if the lock is not available. This function returns
|
|
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
|
|
@code{false} represent their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.5.
|
|
@end table
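
The following sketch (illustrative only) uses @code{omp_test_lock} to avoid
blocking: a thread that cannot acquire the lock performs other work and
retries later.

@smallexample
#include <omp.h>

/* The lock is assumed to have been initialized with omp_init_lock.  */
static omp_lock_t lock;

void
update_shared_state (void)
@{
  /* Try to acquire the lock without blocking.  */
  while (!omp_test_lock (&lock))
    @{
      /* Lock not available: do some independent work, then retry.  */
    @}

  /* Work on the shared state protected by the lock goes here.  */

  omp_unset_lock (&lock);
@}
@end smallexample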
|
|
|
|
|
|
|
|
@node omp_unset_lock
|
|
@section @code{omp_unset_lock} -- Unset simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
A simple lock about to be unset must have been locked by @code{omp_set_lock}
|
|
or @code{omp_test_lock} before. In addition, the lock must be held by the
|
|
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
|
|
or more threads attempted to set the lock before, one of them is chosen to,
|
|
again, set the lock to itself.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_lock}, @ref{omp_test_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.4.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_destroy_lock
|
|
@section @code{omp_destroy_lock} -- Destroy simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Destroy a simple lock. In order to be destroyed, a simple lock must be
|
|
in the unlocked state.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.2.
|
|
@end table
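
Putting the simple-lock routines together, a typical lifecycle (shown only
as an illustrative sketch) is: initialize the lock, let the team serialize
access to shared data, then destroy the lock once it is no longer needed.

@smallexample
#include <omp.h>

int
main (void)
@{
  omp_lock_t lock;
  int counter = 0;

  omp_init_lock (&lock);

  #pragma omp parallel
  @{
    omp_set_lock (&lock);    /* Only one thread at a time past this point.  */
    counter++;
    omp_unset_lock (&lock);
  @}

  omp_destroy_lock (&lock);  /* The lock is unlocked here, so it may be destroyed.  */
  return counter > 0 ? 0 : 1;
@}
@end smallexample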
|
|
|
|
|
|
|
|
@node omp_init_nest_lock
|
|
@section @code{omp_init_nest_lock} -- Initialize nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Initialize a nested lock. After initialization, the lock is in
|
|
an unlocked state and the nesting count is set to zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_destroy_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.1.
|
|
@end table
|
|
|
|
|
|
@node omp_set_nest_lock
|
|
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a nested lock, the lock variable must be initialized by
|
|
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
|
|
is available. If the lock is already held by the current thread, the
|
|
nesting count for the lock is incremented.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.3.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_test_nest_lock
|
|
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a nested lock, the lock variable must be initialized by
|
|
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
|
|
@code{omp_test_nest_lock} does not block if the lock is not available.
|
|
If the lock is already held by the current thread, the new nesting count
|
|
is returned. Otherwise, the return value equals zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.5.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_unset_nest_lock
|
|
@section @code{omp_unset_nest_lock} -- Unset nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to, again, set the lock to itself.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.4.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_destroy_nest_lock
|
|
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Destroy a nested lock. In order to be destroyed, a nested lock must be
|
|
in the unlocked state and its nesting count must equal zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.2.
|
|
@end table
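
Nested locks allow the same thread to re-acquire a lock it already holds,
which is useful when a locked routine may call itself. A minimal sketch,
for illustration only:

@smallexample
#include <omp.h>

static omp_nest_lock_t nlock;

static void
update (int depth)
@{
  /* Re-acquiring in the same thread just increments the nesting count.  */
  omp_set_nest_lock (&nlock);
  if (depth > 0)
    update (depth - 1);
  /* The lock is released once the nesting count drops back to zero.  */
  omp_unset_nest_lock (&nlock);
@}

int
main (void)
@{
  omp_init_nest_lock (&nlock);

  #pragma omp parallel
  update (3);

  omp_destroy_nest_lock (&nlock);
  return 0;
@}
@end smallexample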
|
|
|
|
|
|
|
|
@node omp_get_wtick
|
|
@section @code{omp_get_wtick} -- Get timer precision
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Gets the timer precision, i.e., the number of seconds between two
|
|
successive clock ticks.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_wtime}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.4.2.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_wtime
|
|
@section @code{omp_get_wtime} -- Elapsed wall clock time
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some ``time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_wtick}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.4.1.
|
|
@end table
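
As an illustration only, wall clock time is typically measured by taking the
difference of two @code{omp_get_wtime} calls made in the same thread:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  double start = omp_get_wtime ();

  #pragma omp parallel
  @{
    /* Work to be timed goes here.  */
  @}

  double end = omp_get_wtime ();
  printf ("elapsed: %f s (timer resolution %g s)\n",
          end - start, omp_get_wtick ());
  return 0;
@}
@end smallexample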
|
|
|
|
|
|
|
|
@c ---------------------------------------------------------------------
|
|
@c Environment Variables
|
|
@c ---------------------------------------------------------------------
|
|
|
|
@node Environment Variables
|
|
@chapter Environment Variables
|
|
|
|
The environment variables beginning with @env{OMP_} are defined by
|
|
section 4 of the OpenMP specification in version 4.0, while those
|
|
beginning with @env{GOMP_} are GNU extensions.
|
|
|
|
@menu
|
|
* OMP_CANCELLATION:: Set whether cancellation is activated
|
|
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
|
|
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
|
|
* OMP_DYNAMIC:: Dynamic adjustment of threads
|
|
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
|
|
* OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
|
|
* OMP_NESTED:: Nested parallel regions
|
|
* OMP_NUM_THREADS:: Specifies the number of threads to use
|
|
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
|
|
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
|
|
* OMP_STACKSIZE:: Set default thread stack size
|
|
* OMP_SCHEDULE:: How threads are scheduled
|
|
* OMP_THREAD_LIMIT:: Set the maximum number of threads
|
|
* OMP_WAIT_POLICY:: How waiting threads are handled
|
|
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
|
|
* GOMP_DEBUG:: Enable debugging output
|
|
* GOMP_STACKSIZE:: Set default thread stack size
|
|
* GOMP_SPINCOUNT:: Set the busy-wait spin count
|
|
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
|
|
@end menu
|
|
|
|
|
|
@node OMP_CANCELLATION
|
|
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
|
|
if unset, cancellation is disabled and the @code{cancel} construct is ignored.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_cancellation}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.11
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DISPLAY_ENV
|
|
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
If set to @code{TRUE}, the OpenMP version number and the values
|
|
associated with the OpenMP environment variables are printed to @code{stderr}.
|
|
If set to @code{VERBOSE}, it additionally shows the value of the environment
|
|
variables which are GNU extensions. If undefined or set to @code{FALSE},
|
|
this information will not be shown.
|
|
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.12
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DEFAULT_DEVICE
|
|
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set to choose the device which is used in a @code{target} region, unless the
|
|
value is overridden by @code{omp_set_default_device} or by a @code{device}
|
|
clause. The value shall be the nonnegative device number. If no device with
|
|
the given device number exists, the code is executed on the host. If unset,
|
|
device number 0 will be used.
|
|
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_default_device}, @ref{omp_set_default_device}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.11
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DYNAMIC
|
|
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable the dynamic adjustment of the number of threads
|
|
within a team. The value of this environment variable shall be
|
|
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
|
|
disabled by default.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_dynamic}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.3
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_MAX_ACTIVE_LEVELS
|
|
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the initial value for the maximum number of nested parallel
|
|
regions. The value of this variable shall be a positive integer.
|
|
If undefined, the number of active levels is unlimited.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_max_active_levels}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.9
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_MAX_TASK_PRIORITY
|
|
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority number that can be set for a task
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the initial value for the maximum priority value that can be
|
|
set for a task. The value of this variable shall be a non-negative
|
|
integer, and zero is allowed. If undefined, the default priority is
|
|
0.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_task_priority}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.5}, Section 4.14
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_NESTED
|
|
@section @env{OMP_NESTED} -- Nested parallel regions
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable nested parallel regions, i.e., whether team members
|
|
are allowed to create new teams. The value of this environment variable
|
|
shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
|
|
regions are disabled by default.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nested}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.6
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_NUM_THREADS
|
|
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the default number of threads to use in parallel regions. The
|
|
value of this variable shall be a comma-separated list of positive integers;
|
|
each value specifies the number of threads to use for the corresponding nesting
level. If undefined, one thread per CPU is used.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_num_threads}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.2
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_PROC_BIND
|
|
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies whether threads may be moved between processors. If set to
|
|
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
|
|
they may be moved. Alternatively, a comma separated list with the
|
|
values @code{MASTER}, @code{CLOSE} and @code{SPREAD} can be used to specify
|
|
the thread affinity policy for the corresponding nesting level. With
|
|
@code{MASTER} the worker threads are in the same place partition as the
|
|
master thread. With @code{CLOSE} those are kept close to the master thread
|
|
in contiguous place partitions. And with @code{SPREAD} a sparse distribution
|
|
across the place partitions is used.
|
|
|
|
When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
|
|
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.4
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_PLACES
|
|
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
The thread placement can be specified either by using an abstract name or by an
|
|
explicit list of the places. The abstract names @code{threads}, @code{cores}
|
|
and @code{sockets} can be optionally followed by a positive number in
|
|
parentheses, which denotes how many places shall be created. With
|
|
@code{threads} each place corresponds to a single hardware thread; @code{cores}
|
|
to a single core with the corresponding number of hardware threads; and with
|
|
@code{sockets} the place corresponds to a single socket. The resulting
|
|
placement can be shown by setting the @env{OMP_DISPLAY_ENV} environment
|
|
variable.
|
|
|
|
Alternatively, the placement can be specified explicitly as comma-separated
|
|
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The hardware threads
|
|
belonging to a place can either be specified as comma-separated list of
|
|
nonnegative thread numbers or using an interval. Multiple places can also be
|
|
either specified by a comma-separated list of places or by an interval. To
|
|
specify an interval, a colon followed by the count is placed after
|
|
the hardware thread number or the place. Optionally, the length can be
|
|
followed by a colon and the stride number -- otherwise a unit stride is
|
|
assumed. For instance, the following specifies the same places list:
|
|
@code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
|
|
@code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
|
|
|
|
If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
|
|
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
|
|
between CPUs following no placement policy.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
|
|
@ref{OMP_DISPLAY_ENV}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.5
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_STACKSIZE
|
|
@section @env{OMP_STACKSIZE} -- Set default thread stack size
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set the default thread stack size in kilobytes, unless the number
|
|
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
|
|
case the size is, respectively, in bytes, kilobytes, megabytes
|
|
or gigabytes. This is different from @code{pthread_attr_setstacksize}
|
|
which gets the number of bytes as an argument. If the stack size cannot
|
|
be set due to system constraints, an error is reported and the initial
|
|
stack size is left unchanged. If undefined, the stack size is system
|
|
dependent.
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.7
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_SCHEDULE
|
|
@section @env{OMP_SCHEDULE} -- How threads are scheduled
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Allows specifying @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form: @code{type[,chunk]} where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer. If undefined,
dynamic scheduling and a chunk size of 1 are used.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_schedule}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Sections 2.7.1 and 4.1
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_THREAD_LIMIT
|
|
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the number of threads to use for the whole program. The
|
|
value of this variable shall be a positive integer. If undefined,
|
|
the number of threads is not limited.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.10
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_WAIT_POLICY
|
|
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies whether waiting threads should be active or passive. If
|
|
the value is @code{PASSIVE}, waiting threads should not consume CPU
|
|
power while waiting; if the value is @code{ACTIVE}, they may consume
CPU power while waiting. If undefined, threads wait actively for a short time
|
|
before waiting passively.
|
|
|
|
@item @emph{See also}:
|
|
@ref{GOMP_SPINCOUNT}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.8
|
|
@end table
|
|
|
|
|
|
|
|
@node GOMP_CPU_AFFINITY
|
|
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Binds threads to specific CPUs. The variable should contain a space-separated
|
|
or comma-separated list of CPUs. This list may contain different kinds of
|
|
entries: either single CPU numbers in any order, a range of CPUs (M-N)
|
|
or a range with some stride (M-N:S). CPU numbers are zero based. For example,
|
|
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
|
|
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
|
|
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
|
|
and 14 respectively and then start assigning back from the beginning of
|
|
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
|
|
|
|
There is no libgomp library routine to determine whether a CPU affinity
|
|
specification is in effect. As a workaround, language-specific library
|
|
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
|
|
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
|
|
environment variable. A defined CPU affinity on startup cannot be changed
|
|
or disabled during the runtime of the application.
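
For instance, a minimal C sketch of the @code{getenv} workaround mentioned
above:

@smallexample
#include <stdio.h>
#include <stdlib.h>

int
main (void)
@{
  const char *affinity = getenv ("GOMP_CPU_AFFINITY");
  if (affinity != NULL)
    printf ("GOMP_CPU_AFFINITY=%s\n", affinity);
  else
    printf ("GOMP_CPU_AFFINITY is not set\n");
  return 0;
@}
@end smallexample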

If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence.  If neither has been set,
or when @env{OMP_PROC_BIND} is set to @code{FALSE}, the host system will
handle the assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@end table


@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable debugging output.  The variable should be set to @code{0}
(disabled, also the default if not set), or @code{1} (enabled).

If enabled, some debugging output will be printed during execution.
This is currently not specified in more detail, and subject to change.
@end table


@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes.  This is different from
@code{pthread_attr_setstacksize}, which takes the number of bytes as an
argument.  If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged.  If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@end table


@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power.  The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop.  The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively; unless @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.

@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@end table


@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools.  The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}.  If a priority
value is omitted, a worker thread inherits the priority of the OpenMP
master thread that created it.  The priority of a worker thread is not
changed after creation, even if a new OpenMP master thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
If no thread pool configuration is specified for a scheduler instance,
each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool.  To limit the worker thread count of the
thread pools, each OpenMP master thread must call @code{omp_set_num_threads}.

@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}.  Then there are no thread pool restrictions for
scheduler instance @code{IO}.  In the scheduler instance @code{WRK0} there is
one thread pool available.  Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP master thread
that created it.  In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternatively, we generate two copies of the parallel subfunction
and only include this in the version run by the master thread.
Surely this is not worthwhile though...
@node Implementing CRITICAL construct
|
|
@section Implementing CRITICAL construct
|
|
|
|
Without a specified name,
|
|
|
|
@smallexample
|
|
void GOMP_critical_start (void);
|
|
void GOMP_critical_end (void);
|
|
@end smallexample
|
|
|
|
so that we don't get COPY relocations from libgomp to the main
|
|
application.
|
|
|
|
With a specified name, use omp_set_lock and omp_unset_lock with
|
|
name being transformed into a variable declared like
|
|
|
|
@smallexample
|
|
omp_lock_t gomp_critical_user_<name> __attribute__((common))
|
|
@end smallexample
|
|
|
|
Ideally the ABI would specify that all zero is a valid unlocked
|
|
state, and so we wouldn't need to initialize this at
|
|
startup.
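
As a sketch of the transformation described above for a named construct
(the emitted variable name is illustrative), @code{#pragma omp critical (foo)}
might be lowered roughly to:

@smallexample
  omp_lock_t gomp_critical_user_foo __attribute__((common));

  omp_set_lock (&gomp_critical_user_foo);
  body;
  omp_unset_lock (&gomp_critical_user_foo);
@end smallexample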


@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that, we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.
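
For example, on a target that provides the builtins, a statement such as
@code{x += 1;} under @code{#pragma omp atomic} can be implemented with a
single @code{__sync} operation; a minimal sketch:

@smallexample
  /* #pragma omp atomic
       x += 1;  */
  __sync_fetch_and_add (&x, 1);
@end smallexample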


@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.
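
That is, a minimal sketch:

@smallexample
  /* #pragma omp flush  */
  __sync_synchronize ();
@end smallexample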


@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.
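
For the plain C case without constructors, a minimal sketch of the direct
mapping mentioned above:

@smallexample
  /* int x;
     #pragma omp threadprivate (x)  */
  __thread int x;
@end smallexample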

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.


@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.
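
A minimal sketch of the first case, assuming the usual outlining into a
parallel subfunction:

@smallexample
  /* #pragma omp parallel private (x)
       body;  */
  void subfunction (void *data)
  @{
    int x;  /* private copy; never initialized from the original */
    body;
  @}
@end smallexample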


@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
  #pragma omp for firstprivate(x) lastprivate(y)
  for (int i = 0; i < n; ++i)
    body;
@end smallexample

which becomes

@smallexample
  @{
    int x = x, y;

    // for stuff

    if (i == n)
      y = y;
  @}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.


@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the master thread iterates over the
array to collect the values.
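
A sketch of this scheme for @code{reduction(+:sum)}; the struct and field
names shown here are illustrative, and the explicit @code{GOMP_barrier}
stands for the barrier at the end of the region:

@smallexample
  void subfunction (struct shared_data *data)
  @{
    long partial = 0;
    body;  /* accumulates into partial */
    data->reduction_array[omp_get_thread_num ()] = partial;
    GOMP_barrier ();
    if (omp_get_thread_num () == 0)
      @{
        long sum = 0;
        for (int i = 0; i < omp_get_num_threads (); i++)
          sum += data->reduction_array[i];
        data->sum = sum;
      @}
  @}
@end smallexample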


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.
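
For example, a parallel region with @code{if (cond) num_threads (4)} might
pass (a sketch):

@smallexample
  GOMP_parallel_start (subfunction, &data, cond ? 4 : 1);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample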

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.


@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
      @{
        long _e1 = _e0, i;
        for (i = _s0; i < _e1; i++)
          body;
      @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also choose not to.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations, which would mean that we wouldn't need to call any
of these routines.
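
A sketch of that per-thread arithmetic for a block partition of @code{n}
iterations (illustrative only; boundary handling simplified):

@smallexample
  int nthr = omp_get_num_threads ();
  int id = omp_get_thread_num ();
  long chunk = (n + nthr - 1) / nthr;   /* ceiling division */
  long _s0 = id * chunk;
  long _e0 = _s0 + chunk < n ? _s0 + chunk : n;
  for (i = _s0; i < _e0; i++)
    body;
@end smallexample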

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...


@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample
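
Within such a loop, a sketch of how these calls might bracket the body of
an @code{ordered} region:

@smallexample
  for (i = _s0; i < _e1; i++)
    @{
      /* ... unordered part of the body ... */
      GOMP_ordered_start ();
      /* statements inside #pragma omp ordered */
      GOMP_ordered_end ();
    @}
@end smallexample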


@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample


@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{http://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.


@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi


@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi


@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye