Merge master r12-8312.

* Merge master r12-8312-gb85e79dce149.
Iain Sandoe 2022-04-29 17:54:39 +01:00
commit 3e5f7ca352
611 changed files with 20235 additions and 4899 deletions


@@ -1,3 +1,7 @@
2022-04-19 Richard Henderson <rth@gcc.gnu.org>
* MAINTAINERS: Update my email address.
2022-04-01 Qian Jianhua <qianjh@fujitsu.com>
* MAINTAINERS: Update my email address.


@@ -53,7 +53,7 @@ aarch64 port Richard Earnshaw <richard.earnshaw@arm.com>
aarch64 port Richard Sandiford <richard.sandiford@arm.com>
aarch64 port Marcus Shawcroft <marcus.shawcroft@arm.com>
aarch64 port Kyrylo Tkachov <kyrylo.tkachov@arm.com>
-alpha port Richard Henderson <rth@twiddle.net>
+alpha port Richard Henderson <rth@gcc.gnu.org>
amdgcn port Julian Brown <julian@codesourcery.com>
amdgcn port Andrew Stubbs <ams@codesourcery.com>
arc port Joern Rennecke <gnu@amylaar.uk>


@@ -1,3 +1,13 @@
2022-04-25 Martin Liska <mliska@suse.cz>
* filter-clang-warnings.py: Filter out
-Wc++20-attribute-extensions in lex.cc.
2022-04-25 Martin Liska <mliska@suse.cz>
* filter-clang-warnings.py: Filter out
-Wbitwise-instead-of-logical.
2022-04-04 Martin Liska <mliska@suse.cz>
* gcc-changelog/git_update_version.py: Ignore the revision.


@@ -38,7 +38,8 @@ def skip_warning(filename, message):
'when in C++ mode, this behavior is deprecated',
'-Wignored-attributes', '-Wgnu-zero-variadic-macro-arguments',
'-Wformat-security', '-Wundefined-internal',
-'-Wunknown-warning-option', '-Wc++20-extensions'],
+'-Wunknown-warning-option', '-Wc++20-extensions',
+'-Wbitwise-instead-of-logical'],
'insn-modes.cc': ['-Wshift-count-overflow'],
'insn-emit.cc': ['-Wtautological-compare'],
'insn-attrtab.cc': ['-Wparentheses-equality'],
@@ -52,7 +53,8 @@ def skip_warning(filename, message):
'genautomata.cc': ['-Wstring-plus-int'],
'fold-const-call.cc': ['-Wreturn-type'],
'gfortran.texi': [''],
-'libtool': ['']
+'libtool': [''],
+'lex.cc': ['-Wc++20-attribute-extensions'],
}
for name, ignores in ignores.items():
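The hunk above extends a per-file table that maps source files to warning options to be ignored. As a rough sketch of how such a table can drive filtering (illustrative entries and matching rules, not the actual filter-clang-warnings.py implementation):

```python
# Sketch of a per-file warning filter in the spirit of the table above.
# The entries and matching rules here are illustrative assumptions, not
# the real filter-clang-warnings.py logic.
ignores = {
    'insn-modes.cc': ['-Wshift-count-overflow'],
    'lex.cc': ['-Wc++20-attribute-extensions'],
}

def skip_warning(filename, message):
    """Return True when the warning message is ignorable for this file."""
    for name, flags in ignores.items():
        # Substring match, so 'lex.cc' also matches paths like 'cp/lex.cc'.
        if name in filename:
            if any(flag in message for flag in flags):
                return True
    return False
```

With such a table, a clang warning tagged `-Wc++20-attribute-extensions` reported against `lex.cc` would be dropped, while any other warning is kept.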


@@ -1,3 +1,838 @@
2022-04-27 Lulu Cheng <chenglulu@loongson.cn>
* config/loongarch/loongarch.md: Add fdiv define_expand template,
then generate floating-point division and floating-point reciprocal
instructions.
2022-04-27 Lulu Cheng <chenglulu@loongson.cn>
* config/loongarch/loongarch.md: Add '(clobber (mem:BLK (scratch)))'
to PLV instruction templates.
2022-04-27 Richard Biener <rguenther@suse.de>
PR middle-end/104492
* gimple-ssa-warn-access.cc
(pass_waccess::warn_invalid_pointer): Exclude equality compare
diagnostics for all kind of invalidations.
(pass_waccess::check_dangling_uses): Fix post-dominator query.
(pass_waccess::check_pointer_uses): Likewise.
2022-04-27 Andreas Krebbel <krebbel@linux.ibm.com>
PR target/102024
* config/s390/s390-protos.h (s390_function_arg_vector): Remove
prototype.
* config/s390/s390.cc (s390_single_field_struct_p): New function.
(s390_function_arg_vector): Invoke s390_single_field_struct_p.
(s390_function_arg_float): Likewise.
2022-04-27 Jakub Jelinek <jakub@redhat.com>
PR sanitizer/105396
* asan.cc (asan_redzone_buffer::emit_redzone_byte): Handle the case
where offset is bigger than off but smaller than m_prev_offset + 32
bits by pushing one or more 0 bytes. Sink the
m_shadow_bytes.safe_push (value); flush_if_full (); statements from
all cases to the end of the function.
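The asan.cc fix above deals with a buffer of redzone shadow bytes in which a gap between the previously emitted position and the new offset must be filled with zero bytes before the new value is pushed. A much-simplified model of that padding step (illustrative Python, not the actual `asan_redzone_buffer` code, which also tracks shadow-memory granularity and flushes in fixed-size chunks):

```python
# Simplified model of gap padding in a redzone byte buffer: when the
# requested offset lies past the next expected position, zero bytes
# fill the gap first.  Illustrative only.
class RedzoneBuffer:
    def __init__(self):
        self.next_offset = 0
        self.out = []

    def emit_byte(self, offset, value):
        assert offset >= self.next_offset, "offsets must be emitted in order"
        self.out.extend([0] * (offset - self.next_offset))  # pad the gap
        self.out.append(value)
        self.next_offset = offset + 1
```

Emitting a byte at offset 0 and then at offset 4 yields three zero pad bytes in between.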
2022-04-27 Kewen Lin <linkw@linux.ibm.com>
PR target/105271
* config/rs6000/rs6000-builtins.def (NEG_V2DI): Move to [power8-vector]
stanza.
2022-04-26 Thomas Schwinge <thomas@codesourcery.com>
* config/gcn/gcn.cc (gcn_print_lds_decl): Make "gang-private
data-share memory exhausted" error more verbose.
2022-04-26 Martin Liska <mliska@suse.cz>
PR lto/105364
* lto-wrapper.cc (print_lto_docs_link): Use global_dc.
(run_gcc): Parse OPT_fdiagnostics_urls_.
(main): Initialize global_dc.
2022-04-26 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/105314
* ifcvt.cc (noce_try_store_flag_mask): Don't require that the non-zero
operand is equal to if_info->x, instead use the non-zero operand
as one of the operands of AND with if_info->x as target.
2022-04-26 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105374
* tree-ssa-reassoc.cc (eliminate_redundant_comparison): Punt if
!fold_convertible_p rather than assuming fold_convert must succeed.
2022-04-26 Jakub Jelinek <jakub@redhat.com>
PR target/105367
* config/i386/i386.cc (ix86_veclibabi_svml, ix86_veclibabi_acml): Pass
el_mode == DFmode ? double_type_node : float_type_node instead of
TREE_TYPE (type_in) as first arguments to mathfn_built_in.
2022-04-25 David Malcolm <dmalcolm@redhat.com>
PR analyzer/104308
* gimple-fold.cc (gimple_fold_builtin_memory_op): Explicitly set
the location of new_stmt in all places that don't already set it,
whether explicitly, or via a call to gsi_replace.
2022-04-25 Paul A. Clarke <pc@us.ibm.com>
* doc/extend.texi (Other Builtins): Correct reference to 'modff'.
2022-04-25 Andrew MacLeod <amacleod@redhat.com>
PR tree-optimization/105276
* gimple-range.cc (gimple_ranger::prefill_stmt_dependencies): Include
existing global range with calculated value.
2022-04-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/105368
* tree-ssa-math-opts.cc (powi_cost): Use absu_hwi.
2022-04-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/100810
* tree-ssa-loop-ivopts.cc (struct iv_cand): Add involves_undefs flag.
(find_ssa_undef): New function.
(add_candidate_1): Avoid adding derived candidates with
undefined SSA names and mark the original ones.
(determine_group_iv_cost_generic): Reject rewriting
uses with a different IV when that involves undefined SSA names.
2022-04-25 Steven G. Kargl <kargl@gcc.gnu.org>
PR target/89125
* config/freebsd.h: Define TARGET_LIBC_HAS_FUNCTION to be
bsd_libc_has_function.
* targhooks.cc (bsd_libc_has_function): New function.
Expand the supported math functions to include C99 libm.
* targhooks.h (bsd_libc_has_function): New prototype.
2022-04-25 Richard Biener <rguenther@suse.de>
PR rtl-optimization/105231
* combine.cc (distribute_notes): Assert that a REG_EH_REGION
with landing pad > 0 is from i3. Put any REG_EH_REGION note
on i3 or drop it if the insn can not trap.
(try_combine): Ensure that we can merge REG_EH_REGION notes
with non-call exceptions. Ensure we are not splitting a
trapping part of an insn with non-call exceptions when there
is any REG_EH_REGION note to preserve.
2022-04-25 Hongyu Wang <hongyu.wang@intel.com>
PR target/105339
* config/i386/avx512fintrin.h (_mm512_scalef_round_pd):
Add parentheses for parameters and adjust format.
(_mm512_mask_scalef_round_pd): Ditto.
(_mm512_maskz_scalef_round_pd): Ditto.
(_mm512_scalef_round_ps): Ditto.
(_mm512_mask_scalef_round_ps): Ditto.
(_mm512_maskz_scalef_round_ps): Ditto.
(_mm_scalef_round_sd): Use _mm_undefined_pd.
(_mm_scalef_round_ss): Use _mm_undefined_ps.
(_mm_mask_scalef_round_sd): New macro.
(_mm_mask_scalef_round_ss): Ditto.
(_mm_maskz_scalef_round_sd): Ditto.
(_mm_maskz_scalef_round_ss): Ditto.
2022-04-23 Jakub Jelinek <jakub@redhat.com>
PR target/105338
* config/i386/i386-expand.cc (ix86_expand_int_movcc): Handle
op0 == cst1 ? op0 : op3 like op0 == cst1 ? cst1 : op3 for the non-cmov
cases.
2022-04-22 Segher Boessenkool <segher@kernel.crashing.org>
PR target/105334
* config/rs6000/rs6000.md (pack<mode> for FMOVE128): New expander.
(pack<mode> for FMOVE128): Rename and split the insn_and_split to...
(pack<mode>_hard for FMOVE128): ... this...
(pack<mode>_soft for FMOVE128): ... and this.
2022-04-22 Paul A. Clarke <pc@us.ibm.com>
* doc/extend.texi: Correct "This" to "These".
2022-04-22 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/105333
* rtlanal.cc (replace_rtx): Use simplify_subreg or
simplify_unary_operation if CONST_SCALAR_INT_P rather than just
CONST_INT_P.
2022-04-21 Segher Boessenkool <segher@kernel.crashing.org>
PR target/103197
PR target/102146
* config/rs6000/rs6000.md (zero_extendqi<mode>2 for EXTQI): Disparage
the "Z" alternatives in {l,st}{f,xs}iwzx.
(zero_extendhi<mode>2 for EXTHI): Ditto.
(zero_extendsi<mode>2 for EXTSI): Ditto.
(*movsi_internal1): Ditto.
(*mov<mode>_internal1 for QHI): Ditto.
(movsd_hardfloat): Ditto.
2022-04-21 Martin Liska <mliska@suse.cz>
* configure.ac: Enable compressed debug sections for mold
linker.
* configure: Regenerate.
2022-04-21 Jakub Jelinek <jakub@redhat.com>
PR debug/105203
* emit-rtl.cc (emit_copy_of_insn_after): Don't call mark_jump_label
on DEBUG_INSNs.
2022-04-20 Richard Biener <rguenther@suse.de>
PR tree-optimization/104912
* tree-vect-loop-manip.cc (vect_loop_versioning): Split
the cost model check to a separate BB to make sure it is
checked first and not combined with other version checks.
2022-04-20 Richard Biener <rguenther@suse.de>
PR tree-optimization/105312
* gimple-isel.cc (gimple_expand_vec_cond_expr): Query both
VCOND and VCONDU for EQ and NE.
2022-04-20 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103818
* ipa-modref-tree.cc (modref_access_node::closer_pair_p): Use
poly_offset_int to avoid overflow.
(modref_access_node::update2): Likewise.
2022-04-20 Jakub Jelinek <jakub@redhat.com>
PR ipa/105306
* cgraph.cc (cgraph_node::create): Set node->semantic_interposition
to opt_for_fn (decl, flag_semantic_interposition).
* cgraphclones.cc (cgraph_node::create_clone): Copy over
semantic_interposition flag.
2022-04-19 Sergei Trofimovich <siarheit@google.com>
PR gcov-profile/105282
* value-prof.cc (stream_out_histogram_value): Allow negative counts
on HIST_TYPE_INDIR_CALL.
2022-04-19 Jakub Jelinek <jakub@redhat.com>
PR target/105257
* config/sparc/sparc.cc (epilogue_renumber): If ORIGINAL_REGNO,
use gen_raw_REG instead of gen_rtx_REG and copy over also
ORIGINAL_REGNO. Use return 0; instead of /* fallthrough */.
2022-04-19 Richard Biener <rguenther@suse.de>
PR tree-optimization/104010
PR tree-optimization/103941
* tree-vect-slp.cc (vect_bb_slp_scalar_cost): When
we run into stmts in patterns continue walking those
for uses outside of the vectorized region instead of
marking the lane live.
2022-04-18 Hans-Peter Nilsson <hp@axis.com>
* doc/install.texi <CRIS>: Remove references to removed websites and
adjust for cris-*-elf being the only remaining toolchain.
2022-04-18 Hans-Peter Nilsson <hp@axis.com>
* doc/invoke.texi <CRIS>: Remove references to options for removed
subtarget cris-axis-linux-gnu and tweak wording accordingly.
2022-04-16 Gerald Pfeifer <gerald@pfeifer.com>
* doc/install.texi (Specific): Adjust mingw-w64 download link.
2022-04-15 Hongyu Wang <hongyu.wang@intel.com>
* config/i386/smmintrin.h: Correct target pragma from sse4.1
and sse4.2 to crc32 for crc32 intrinsics.
2022-04-14 Indu Bhagat <indu.bhagat@oracle.com>
PR debug/105089
* ctfc.cc (ctf_dvd_ignore_insert): New function.
(ctf_dvd_ignore_lookup): Likewise.
(ctf_add_variable): Keep track of non-defining decl DIEs.
(new_ctf_container): Initialize the new hash-table.
(ctfc_delete_container): Empty hash-table.
* ctfc.h (struct ctf_container): Add new hash-table.
(ctf_dvd_ignore_lookup): New declaration.
(ctf_add_variable): Add additional argument.
* ctfout.cc (ctf_dvd_preprocess_cb): Skip adding CTF variable
record for non-defining decl for which a defining decl exists
in the same TU.
(ctf_preprocess): Defer updating the number of global objts
until here.
(output_ctf_header): Use ctfc_vars_list_count as some CTF
variables may not make it to the final output.
(output_ctf_vars): Likewise.
* dwarf2ctf.cc (gen_ctf_variable): Skip generating CTF variable
if this is known to be a non-defining decl DIE.
2022-04-14 Indu Bhagat <indu.bhagat@oracle.com>
* ctfc.h (struct ctf_container): Introduce a new member.
* ctfout.cc (ctf_list_add_ctf_vars): Use it instead of static
variable.
2022-04-14 Jakub Jelinek <jakub@redhat.com>
PR target/105247
* simplify-rtx.cc (simplify_const_binary_operation): For shifts
or rotates by VOIDmode constant integer shift count use word_mode
for the operand if int_mode is narrower than word.
2022-04-14 Robin Dapp <rdapp@linux.ibm.com>
* config/s390/s390.cc (s390_get_sched_attrmask): Add z16.
(s390_get_unit_mask): Likewise.
(s390_is_fpd): Likewise.
(s390_is_fxd): Likewise.
* config/s390/s390.h (s390_tune_attr): Set max tune level to z16.
* config/s390/s390.md (z900,z990,z9_109,z9_ec,z10,z196,zEC12,z13,z14,z15):
Add z16.
(z900,z990,z9_109,z9_ec,z10,z196,zEC12,z13,z14,z15,z16):
Likewise.
* config/s390/3931.md: New file.
2022-04-13 Richard Sandiford <richard.sandiford@arm.com>
PR tree-optimization/105254
* config/aarch64/aarch64.cc
(aarch64_vector_costs::determine_suggested_unroll_factor): Take a
loop_vec_info as argument. Restrict the unroll factor to values
that divide the VF.
(aarch64_vector_costs::finish_cost): Update call accordingly.
2022-04-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/105263
* tree-ssa-reassoc.cc (try_special_add_to_ops): Do not consume
negates in multiplication chains with DFP.
2022-04-13 Jakub Jelinek <jakub@redhat.com>
PR middle-end/105253
* tree.cc (tree_builtin_call_types_compatible_p): If PROP_gimple,
use useless_type_conversion_p checks instead of TYPE_MAIN_VARIANT
comparisons or tree_nop_conversion_p checks.
2022-04-13 Hongyu Wang <hongyu.wang@intel.com>
PR target/103069
* config/i386/i386-expand.cc (ix86_expand_cmpxchg_loop):
Add missing set to target_val at pause label.
2022-04-13 Jakub Jelinek <jakub@redhat.com>
PR target/105234
* attribs.cc (decl_attributes): Don't set
DECL_FUNCTION_SPECIFIC_TARGET if target_option_default_node is
NULL.
2022-04-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/105250
* fold-const.cc (fold_convertible_p): Revert
r12-7979-geaaf77dd85c333, instead check for size equality
of the vector types involved.
2022-04-13 Richard Biener <rguenther@suse.de>
Revert:
2022-04-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/104912
* tree-vect-loop-manip.cc (vect_loop_versioning): Split
the cost model check to a separate BB to make sure it is
checked first and not combined with other version checks.
2022-04-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/104912
* tree-vect-loop-manip.cc (vect_loop_versioning): Split
the cost model check to a separate BB to make sure it is
checked first and not combined with other version checks.
2022-04-13 Jakub Jelinek <jakub@redhat.com>
* tree-scalar-evolution.cc (expression_expensive_p): Fix a comment typo.
2022-04-12 Antoni Boucher <bouanto@zoho.com>
PR jit/104072
* reginfo.cc: New functions (clear_global_regs_cache,
reginfo_cc_finalize) to avoid an issue where compiling the same
code multiple times gives an error about assigning the same
register to 2 global variables.
* rtl.h: New function (reginfo_cc_finalize).
* toplev.cc: Call it.
2022-04-12 Antoni Boucher <bouanto@zoho.com>
PR jit/104071
* toplev.cc: Call the new function tree_cc_finalize in
toplev::finalize.
* tree.cc: New functions (clear_nonstandard_integer_type_cache
and tree_cc_finalize) to clear the cache of non-standard integer
types to avoid having issues with some optimizations of
bitcast where the SSA_NAME will have a size of a cached
integer type that should have been invalidated, causing a
comparison of integer constant to fail.
* tree.h: New function (tree_cc_finalize).
2022-04-12 Thomas Schwinge <thomas@codesourcery.com>
PR target/97348
* config/nvptx/nvptx.h (ASM_SPEC): Don't set.
* config/nvptx/nvptx.opt (misa): Adjust comment.
2022-04-12 Thomas Schwinge <thomas@codesourcery.com>
Revert:
2022-03-03 Tom de Vries <tdevries@suse.de>
* config/nvptx/nvptx.h (ASM_SPEC): Add %{misa=sm_30:--no-verify}.
2022-04-12 Thomas Schwinge <thomas@codesourcery.com>
Revert:
2022-03-31 Tom de Vries <tdevries@suse.de>
* config/nvptx/nvptx.h (ASM_SPEC): Use "-m sm_35" for -misa=sm_30.
2022-04-12 Richard Biener <rguenther@suse.de>
PR ipa/104303
* tree-ssa-dce.cc (mark_stmt_if_obviously_necessary): Do not
include local escaped memory as obviously necessary stores.
2022-04-12 Richard Biener <rguenther@suse.de>
PR tree-optimization/105235
* tree-ssa-math-opts.cc (execute_cse_conv_1): Clean EH and
return whether the CFG changed.
(execute_cse_sincos_1): Adjust.
2022-04-12 Przemyslaw Wirkus <Przemyslaw.Wirkus@arm.com>
PR target/104144
* config/arm/t-aprofile (MULTI_ARCH_OPTS_A): Remove Armv9-a options.
(MULTI_ARCH_DIRS_A): Remove Armv9-a directories.
(MULTILIB_REQUIRED): Don't require Armv9-a libraries.
(MULTILIB_MATCHES): Treat Armv9-a as equivalent to Armv8-a.
(MULTILIB_REUSE): Remove remap rules for Armv9-a.
* config/arm/t-multilib (v9_a_nosimd_variants): Delete.
(MULTILIB_MATCHES): Remove mappings for v9_a_nosimd_variants.
2022-04-12 Richard Biener <rguenther@suse.de>
PR tree-optimization/105232
* tree.cc (component_ref_size): Bail out for too large
or non-constant sizes.
2022-04-12 Richard Biener <rguenther@suse.de>
PR tree-optimization/105226
* tree-vect-loop-manip.cc (vect_loop_versioning): Verify
we can split the exit of an outer loop we choose to version.
2022-04-12 Jakub Jelinek <jakub@redhat.com>
* config/i386/i386-expand.cc (ix86_emit_i387_sinh, ix86_emit_i387_cosh,
ix86_emit_i387_tanh, ix86_emit_i387_asinh, ix86_emit_i387_acosh,
ix86_emit_i387_atanh, ix86_emit_i387_log1p, ix86_emit_i387_round,
ix86_emit_swdivsf, ix86_emit_swsqrtsf,
ix86_expand_atomic_fetch_op_loop, ix86_expand_cmpxchg_loop):
Formatting fix.
* config/i386/i386.cc (warn_once_call_ms2sysv_xlogues): Likewise.
2022-04-12 Jakub Jelinek <jakub@redhat.com>
PR target/105214
* config/i386/i386-expand.cc (ix86_emit_i387_log1p): Call
do_pending_stack_adjust.
2022-04-12 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/105211
* builtins.cc (expand_builtin_int_roundingfn_2): If mathfn_built_in_1
fails for TREE_TYPE (arg), retry it with
TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl))) and if even that
fails, emit call normally.
2022-04-12 Andreas Krebbel <krebbel@linux.ibm.com>
* common/config/s390/s390-common.cc: Rename PF_ARCH14 to PF_Z16.
* config.gcc: Add z16 as march/mtune switch.
* config/s390/driver-native.cc (s390_host_detect_local_cpu):
Recognize z16 with -march=native.
* config/s390/s390-opts.h (enum processor_type): Rename
PROCESSOR_ARCH14 to PROCESSOR_3931_Z16.
* config/s390/s390.cc (PROCESSOR_ARCH14): Rename to ...
(PROCESSOR_3931_Z16): ... throughout the file.
(s390_processor processor_table): Add z16 as cpu string.
* config/s390/s390.h (enum processor_flags): Rename PF_ARCH14 to
PF_Z16.
(TARGET_CPU_ARCH14): Rename to ...
(TARGET_CPU_Z16): ... this.
(TARGET_CPU_ARCH14_P): Rename to ...
(TARGET_CPU_Z16_P): ... this.
(TARGET_ARCH14): Rename to ...
(TARGET_Z16): ... this.
(TARGET_ARCH14_P): Rename to ...
(TARGET_Z16_P): ... this.
* config/s390/s390.md (cpu_facility): Rename arch14 to z16 and
check TARGET_Z16 instead of TARGET_ARCH14.
* config/s390/s390.opt: Add z16 to processor_type.
* doc/invoke.texi: Document z16 and arch14.
2022-04-12 chenglulu <chenglulu@loongson.cn>
* config/loongarch/loongarch.cc: Fix bug for
tmpdir-g++.dg-struct-layout-1/t033.
2022-04-11 Peter Bergner <bergner@linux.ibm.com>
PR target/104894
* config/rs6000/rs6000.cc (rs6000_sibcall_aix): Handle pcrel sibcalls
to longcall functions.
2022-04-11 Jason Merrill <jason@redhat.com>
* ipa-free-lang-data.cc (free_lang_data_in_decl): Fix typos.
2022-04-11 Segher Boessenkool <segher@kernel.crashing.org>
PR target/105213
PR target/103623
* config/rs6000/rs6000.md (unpack<mode>_nodm): Add m,r,i alternative.
2022-04-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105218
* tree-ssa-phiopt.cc (value_replacement): If middle_bb has
more than one predecessor or phi's bb more than 2 predecessors,
reset phi result uses instead of adding a debug temp.
2022-04-11 Kito Cheng <kito.cheng@sifive.com>
PR target/104853
* config.gcc: Pass -misa-spec to arch-canonicalize and
multilib-generator.
* config/riscv/arch-canonicalize: Add -misa-spec option.
(SUPPORTED_ISA_SPEC): New.
(arch_canonicalize): New argument `isa_spec`.
Handle multiple ISA spec versions.
* config/riscv/multilib-generator: Add -misa-spec option.
2022-04-11 Kito Cheng <kito.cheng@sifive.com>
* config/riscv/arch-canonicalize: Add TODO item.
(IMPLIED_EXT): Sync.
(arch_canonicalize): Checking until no change.
2022-04-11 Tamar Christina <tamar.christina@arm.com>
PR target/105197
* tree-vect-stmts.cc (vectorizable_condition): Prevent cond swap when
not masked.
2022-04-11 Jason Merrill <jason@redhat.com>
PR c++/100370
* pointer-query.cc (compute_objsize_r) [POINTER_PLUS_EXPR]: Require
deref == -1.
2022-04-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/104639
* tree-ssa-phiopt.cc: Include tree-ssa-propagate.h.
(value_replacement): Optimize (x != cst1 ? x : cst2) != cst3
into x != cst3.
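The phiopt entry above folds a conditional select followed by a comparison. The rewrite can be sanity-checked by brute force; the constants below (cst1=0, cst2=1, cst3=2) are illustrative assumptions, not taken from the PR testcase, chosen so that both cst1 and cst2 differ from cst3:

```python
# Brute-force check of (x != cst1 ? x : cst2) != cst3  ->  x != cst3
# for one assumed constant assignment.
def check_transform(cst1, cst2, cst3, lo=-16, hi=16):
    for x in range(lo, hi + 1):
        original = (x if x != cst1 else cst2) != cst3
        simplified = x != cst3
        if original != simplified:
            return False
    return True
```

With cst1=0, cst2=1, cst3=2 the two forms agree for every sampled x; a constant set where cst2 equals cst3 but cst1 does not would fail the check, which is why such a rewrite must be guarded by conditions on the constants.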
2022-04-11 Jeff Law <jeffreyalaw@gmail.com>
* config/bfin/bfin.md (rol_one): Fix pattern to indicate the
sign bit of the source ends up in CC.
2022-04-09 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103376
* cgraphunit.cc (cgraph_node::analyze): update semantic_interposition
flag.
2022-04-09 Jan Hubicka <hubicka@ucw.cz>
* ipa-modref.cc (ipa_merge_modref_summary_after_inlining): Propagate
nondeterministic and side_effects flags.
2022-04-08 Andre Vieira <andre.simoesdiasvieira@arm.com>
PR target/105157
* config.gcc: Shift ext_mask by TARGET_CPU_NBITS.
* config/aarch64/aarch64.h (TARGET_CPU_NBITS): New macro.
(TARGET_CPU_MASK): Likewise.
(TARGET_CPU_DEFAULT): Use TARGET_CPU_NBITS.
* config/aarch64/aarch64.cc (aarch64_get_tune_cpu): Use TARGET_CPU_MASK.
(aarch64_get_arch): Likewise.
(aarch64_override_options): Use TARGET_CPU_NBITS.
2022-04-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/105198
* tree-predcom.cc (find_looparound_phi): Check whether
the found memory location of the entry value is clobbered
in between the value we want to use and loop entry.
2022-04-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105189
* fold-const.cc (make_range_step): Fix up handling of
(unsigned) x +[low, -] ranges for signed x if low fits into
typeof (x).
2022-04-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/105175
* tree-vect-stmts.cc (vectorizable_operation): Suppress
-Wvector-operation-performance if using emulated vectors.
* tree-vect-generic.cc (expand_vector_piecewise): Do not diagnose
-Wvector-operation-performance when suppressed.
(expand_vector_parallel): Likewise.
(expand_vector_comparison): Likewise.
(expand_vector_condition): Likewise.
(lower_vec_perm): Likewise.
(expand_vector_conversion): Likewise.
2022-04-07 Tamar Christina <tamar.christina@arm.com>
PR target/104409
* config/aarch64/aarch64-builtins.cc (handle_arm_acle_h): New.
(aarch64_general_init_builtins): Move LS64 code.
* config/aarch64/aarch64-c.cc (aarch64_pragma_aarch64): Support
arm_acle.h.
* config/aarch64/aarch64-protos.h (handle_arm_acle_h): New.
* config/aarch64/arm_acle.h: Add pragma GCC aarch64 "arm_acle.h".
2022-04-07 Richard Biener <rguenther@suse.de>
Jan Hubicka <hubicka@ucw.cz>
PR ipa/104303
* tree-ssa-alias.h (ptr_deref_may_alias_global_p,
ref_may_alias_global_p, ref_may_alias_global_p,
stmt_may_clobber_global_p, pt_solution_includes_global): Add
bool parameters indicating whether escaped locals should be
considered global.
* tree-ssa-structalias.cc (pt_solution_includes_global):
When the new escaped_nonlocal_p flag is true also consider
pt->vars_contains_escaped.
* tree-ssa-alias.cc (ptr_deref_may_alias_global_p):
Pass down new escaped_nonlocal_p flag.
(ref_may_alias_global_p): Likewise.
(stmt_may_clobber_global_p): Likewise.
(ref_may_alias_global_p_1): Likewise. For decls also
query the escaped solution if true.
(ref_may_access_global_memory_p): Remove.
(modref_may_conflict): Use ref_may_alias_global_p with
escaped locals considered global.
(ref_maybe_used_by_stmt_p): Adjust.
* ipa-fnsummary.cc (points_to_local_or_readonly_memory_p):
Likewise.
* tree-ssa-dse.cc (dse_classify_store): Likewise.
* trans-mem.cc (thread_private_new_memory): Likewise, but
consider escaped locals global.
* tree-ssa-dce.cc (mark_stmt_if_obviously_necessary): Likewise.
2022-04-07 Richard Biener <rguenther@suse.de>
PR tree-optimization/105185
* tree-ssa-sccvn.cc (visit_reference_op_call): Simplify
modref query again.
2022-04-07 Tamar Christina <tamar.christina@arm.com>
PR target/104049
* config/aarch64/aarch64-simd.md
(aarch64_reduc_plus_internal<mode>): Fix RTL and rename to...
(reduc_plus_scal_<mode>): ... This.
(reduc_plus_scal_v4sf): Moved.
(aarch64_reduc_plus_internalv2si): Fix RTL and rename to...
(reduc_plus_scal_v2si): ... This.
2022-04-07 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/102586
* langhooks.h (struct lang_hooks_for_types): Add classtype_as_base
langhook.
* langhooks-def.h (LANG_HOOKS_CLASSTYPE_AS_BASE): Define.
(LANG_HOOKS_FOR_TYPES_INITIALIZER): Add it.
* gimple-fold.cc (clear_padding_type): Use ftype instead of
TREE_TYPE (field) some more. For artificial FIELD_DECLs without
name try the lang_hooks.types.classtype_as_base langhook and
if it returns non-NULL, use that instead of ftype for recursive call.
2022-04-07 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105150
* tree.cc (tree_builtin_call_types_compatible_p): New function.
(get_call_combined_fn): Use it.
2022-04-07 Richard Biener <rguenther@suse.de>
PR middle-end/105165
* tree-complex.cc (expand_complex_asm): Sorry for asm goto
_Complex outputs.
2022-04-07 liuhongt <hongtao.liu@intel.com>
* config/i386/sse.md (<sse2_avx2>_andnot<mode>3_mask):
Removed.
(<sse>_andnot<mode>3<mask_name>): Disable V*HFmode patterns
for mask_applied.
(<code><mode>3<mask_name>): Ditto.
(*<code><mode>3<mask_name>): Ditto.
(VFB_128_256): Adjust condition of V8HF/V16HFmode according to
real instruction.
(VFB_512): Ditto.
(VFB): Ditto.
2022-04-06 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/104985
* combine.cc (struct undo): Add where.regno member.
(do_SUBST_MODE): Rename to ...
(subst_mode): ... this. Change first argument from rtx * into int,
operate on regno_reg_rtx[regno] and save regno into where.regno.
(SUBST_MODE): Remove.
(try_combine): Use subst_mode instead of SUBST_MODE, change first
argument from regno_reg_rtx[whatever] to whatever. For UNDO_MODE, use
regno_reg_rtx[undo->where.regno] instead of *undo->where.r.
(undo_to_marker): For UNDO_MODE, use regno_reg_rtx[undo->where.regno]
instead of *undo->where.r.
(simplify_set): Use subst_mode instead of SUBST_MODE, change first
argument from regno_reg_rtx[whatever] to whatever.
2022-04-06 Jakub Jelinek <jakub@redhat.com>
PR target/105069
* config/sh/sh.opt (mdiv=): Add Save.
2022-04-06 Martin Liska <mliska@suse.cz>
PR driver/105096
* common.opt: Document properly based on what it does.
* gcc.cc (display_help): Unify with what we have in common.opt.
* opts.cc (common_handle_option): Do not print undocumented
options.
2022-04-06 Xi Ruoyao <xry111@mengyan1223.wang>
* config/mips/mips.cc (mips_fpr_return_fields): Ignore
cxx17_empty_base_field_p fields and set an indicator.
(mips_return_in_msb): Adjust for mips_fpr_return_fields change.
(mips_function_value_1): Inform psABI change about C++17 empty
bases.
2022-04-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105150
* gimple.cc (gimple_builtin_call_types_compatible_p): Use
builtin_decl_explicit here...
(gimple_call_builtin_p, gimple_call_combined_fn): ... rather than
here.
2022-04-06 Richard Biener <rguenther@suse.de>
PR tree-optimization/105173
* tree-ssa-reassoc.cc (find_insert_point): Get extra
insert_before output argument and compute it.
(insert_stmt_before_use): Adjust.
(rewrite_expr_tree): Likewise.
2022-04-06 Richard Biener <rguenther@suse.de>
PR ipa/105166
* ipa-modref-tree.cc (modref_access_node::get_ao_ref ): Bail
out for non-pointer arguments.
2022-04-06 Richard Biener <rguenther@suse.de>
PR tree-optimization/105163
* tree-ssa-reassoc.cc (repropagate_negates): Avoid propagating
negated abnormals.
2022-04-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105150
* gimple.cc (gimple_call_builtin_p, gimple_call_combined_fn):
For BUILT_IN_NORMAL calls, call gimple_builtin_call_types_compatible_p
preferably on builtin_decl_explicit decl rather than fndecl.
* tree-ssa-strlen.cc (valid_builtin_call): Don't call
gimple_builtin_call_types_compatible_p here.
2022-04-06 Richard Sandiford <richard.sandiford@arm.com>
PR tree-optimization/103761
* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Replace
the ncopies parameter with an slp_node parameter. Calculate the
number of vectors based on it and vectype. Rename lambda to
group_memory_nvectors.
(vectorizable_store, vectorizable_load): Update calls accordingly.
2022-04-06 Martin Liska <mliska@suse.cz>
* doc/invoke.texi: Document it.
2022-04-06 Richard Biener <rguenther@suse.de>
PR tree-optimization/105148
* tree-ssa-loop-ivopts.cc (idx_record_use): Walk raw operands
2 and 3 of ARRAY_REFs.
2022-04-06 Roger Sayle <roger@nextmovesoftware.com>
* config/i386/sse.md (ANDNOT_MODE): New mode iterator for TF and V1TI.
(*andnottf3): Replace with...
(*andnot<mode>3): New define_insn using ANDNOT_MODE.
2022-04-06 Richard Biener <rguenther@suse.de>
PR tree-optimization/105142
* gimple-fold.h (maybe_fold_and_comparisons): Add defaulted
basic-block parameter.
(maybe_fold_or_comparisons): Likewise.
* gimple-fold.cc (follow_outer_ssa_edges): New.
(maybe_fold_comparisons_from_match_pd): Use follow_outer_ssa_edges
when an outer condition basic-block is specified.
(and_comparisons_1, and_var_with_comparison,
and_var_with_comparison_1, or_comparisons_1,
or_var_with_comparison, or_var_with_comparison_1): Receive and pass
down the outer condition basic-block.
* tree-ssa-ifcombine.cc (ifcombine_ifandif): Pass down the
basic-block of the outer condition.
2022-04-06 Kewen Lin <linkw@linux.ibm.com>
PR target/105002
* config/rs6000/rs6000.cc (rs6000_maybe_emit_maxc_minc): Support more
comparison codes UNLT/UNLE/UNGT/UNGE.
2022-04-05 David Malcolm <dmalcolm@redhat.com>
* doc/extend.texi (Common Function Attributes): Document that


@@ -1 +1 @@
-20220406
+20220428


@@ -1,3 +1,14 @@
2022-04-27 Sebastian Huber <sebastian.huber@embedded-brains.de>
* tracebak.c: Add support for ARM RTEMS. Add support for RTEMS to PPC
ELF. Add support for RTEMS to SPARC. Merge aarch64 support of Linux
and RTEMS.
2022-04-27 Pierre-Marie de Rodat <derodat@adacore.com>
PR ada/104027
* gnat1drv.adb: Remove the goto End_Of_Program.
2022-03-24 Pascal Obry <obry@adacore.com>
PR ada/104767


@@ -1429,11 +1429,6 @@ begin
Ecode := E_Success;
Back_End.Gen_Or_Update_Object_File;
--- Use a goto instead of calling Exit_Program so that finalization
--- occurs normally.
-goto End_Of_Program;
-- Otherwise the unit is missing a crucial piece that prevents code
-- generation.


@@ -316,6 +316,13 @@ __gnat_backtrace (void **array,
#define PC_ADJUST -2
#define USING_ARM_UNWINDING 1
+/*---------------------- ARM RTEMS ------------------------------------ -*/
+#elif (defined (__arm__) && defined (__rtems__))
+#define USE_GCC_UNWINDER
+#define PC_ADJUST -2
+#define USING_ARM_UNWINDING 1
/*---------------------- PPC AIX/PPC Lynx 178/Older Darwin --------------*/
#elif ((defined (_POWER) && defined (_AIX)) || \
(defined (__powerpc__) && defined (__Lynx__) && !defined(__ELF__)) || \
@@ -370,11 +377,12 @@ extern void __runnit(); /* thread entry point. */
#define BASE_SKIP 1
-/*----------- PPC ELF (GNU/Linux & VxWorks & Lynx178e) -------------------*/
+/*----------- PPC ELF (GNU/Linux & VxWorks & Lynx178e & RTEMS ) ----------*/
#elif (defined (_ARCH_PPC) && defined (__vxworks)) || \
(defined (__powerpc__) && defined (__Lynx__) && defined(__ELF__)) || \
-(defined (__linux__) && defined (__powerpc__))
+(defined (__linux__) && defined (__powerpc__)) || \
+(defined (__powerpc__) && defined (__rtems__))
#if defined (_ARCH_PPC64) && !defined (__USING_SJLJ_EXCEPTIONS__)
#define USE_GCC_UNWINDER
@@ -404,9 +412,9 @@ struct layout
#define BASE_SKIP 1
-/*-------------------------- SPARC Solaris -----------------------------*/
+/*-------------------------- SPARC Solaris or RTEMS --------------------*/
-#elif defined (__sun__) && defined (__sparc__)
+#elif (defined (__sun__) || defined (__rtems__)) && defined (__sparc__)
#define USE_GENERIC_UNWINDER
@@ -551,21 +559,9 @@ is_return_from(void *symbol_addr, void *ret_addr)
#error Unhandled QNX architecture.
#endif
/*---------------------------- RTEMS ---------------------------------*/
/*------------------- aarch64-linux or aarch64-rtems -----------------*/
#elif defined (__rtems__)
#define USE_GCC_UNWINDER
#if defined (__aarch64__)
#define PC_ADJUST -4
#else
#error Unhandled RTEMS architecture.
#endif
/*------------------- aarch64-linux ----------------------------------*/
#elif (defined (__aarch64__) && defined (__linux__))
#elif (defined (__aarch64__) && (defined (__linux__) || defined (__rtems__)))
#define USE_GCC_UNWINDER
#define PC_ADJUST -4

View File

@ -1,3 +1,77 @@
2022-04-25 David Malcolm <dmalcolm@redhat.com>
PR analyzer/105365
PR analyzer/105366
* svalue.cc
(cmp_cst): Rename to...
(cmp_csts_same_type): ...this. Convert all recursive calls to
calls to...
(cmp_csts_and_types): ...this new function.
(svalue::cmp_ptr): Update for renaming of cmp_cst.
2022-04-14 David Malcolm <dmalcolm@redhat.com>
PR analyzer/105264
* region-model-reachability.cc (reachable_regions::handle_parm):
Use maybe_get_deref_base_region rather than just region_svalue, to
handle pointer arithmetic also.
* svalue.cc (svalue::maybe_get_deref_base_region): New.
* svalue.h (svalue::maybe_get_deref_base_region): New decl.
2022-04-14 David Malcolm <dmalcolm@redhat.com>
PR analyzer/105252
* svalue.cc (cmp_cst): When comparing VECTOR_CSTs, compare the
types of the encoded elements before calling cmp_cst on them.
2022-04-09 David Malcolm <dmalcolm@redhat.com>
PR analyzer/103892
* region-model-manager.cc
(region_model_manager::get_unknown_symbolic_region): New,
extracted from...
(region_model_manager::get_field_region): ...here.
(region_model_manager::get_element_region): Use it here.
(region_model_manager::get_offset_region): Likewise.
(region_model_manager::get_sized_region): Likewise.
(region_model_manager::get_cast_region): Likewise.
(region_model_manager::get_bit_range): Likewise.
* region-model.h
(region_model_manager::get_unknown_symbolic_region): New decl.
* region.cc (symbolic_region::symbolic_region): Handle sval_ptr
having NULL type.
(symbolic_region::dump_to_pp): Handle having NULL type.
2022-04-07 David Malcolm <dmalcolm@redhat.com>
PR analyzer/102208
* store.cc (binding_map::remove_overlapping_bindings): Add
"always_overlap" param, using it to generalize to the case where
we want to remove all bindings. Update "uncertainty" logic to
only record maybe-bound values for cases where there is a symbolic
write involved.
(binding_cluster::mark_region_as_unknown): Split param "reg" into
"reg_to_bind" and "reg_for_overlap".
(binding_cluster::maybe_get_compound_binding): Pass "false" to
binding_map::remove_overlapping_bindings new "always_overlap" param.
(binding_cluster::remove_overlapping_bindings): Determine
"always_overlap" and pass it to
binding_map::remove_overlapping_bindings.
(store::set_value): Pass uncertainty to remove_overlapping_bindings
call. Update for new param of
binding_cluster::mark_region_as_unknown, passing both the base
region of the iter_cluster, and the lhs_reg.
(store::mark_region_as_unknown): Update for new param of
binding_cluster::mark_region_as_unknown, passing "reg" for both.
(store::remove_overlapping_bindings): Add param "uncertainty", and
pass it on to call to
binding_cluster::remove_overlapping_bindings.
* store.h (binding_map::remove_overlapping_bindings): Add
"always_overlap" param.
(binding_cluster::mark_region_as_unknown): Split param "reg" into
"reg_to_bind" and "reg_for_overlap".
(store::remove_overlapping_bindings): Add param "uncertainty".
2022-03-29 David Malcolm <dmalcolm@redhat.com>
PR testsuite/105085

View File

@ -1362,6 +1362,19 @@ region_model_manager::get_region_for_global (tree expr)
return reg;
}
/* Return the region for an unknown access of type REGION_TYPE,
creating it if necessary.
This is a symbolic_region, where the pointer is an unknown_svalue
of type &REGION_TYPE. */
const region *
region_model_manager::get_unknown_symbolic_region (tree region_type)
{
tree ptr_type = region_type ? build_pointer_type (region_type) : NULL_TREE;
const svalue *unknown_ptr = get_or_create_unknown_svalue (ptr_type);
return get_symbolic_region (unknown_ptr);
}
/* Return the region that describes accessing field FIELD of PARENT,
creating it if necessary. */
@ -1372,12 +1385,7 @@ region_model_manager::get_field_region (const region *parent, tree field)
/* (*UNKNOWN_PTR).field is (*UNKNOWN_PTR_OF_&FIELD_TYPE). */
if (parent->symbolic_for_unknown_ptr_p ())
{
tree ptr_to_field_type = build_pointer_type (TREE_TYPE (field));
const svalue *unknown_ptr_to_field
= get_or_create_unknown_svalue (ptr_to_field_type);
return get_symbolic_region (unknown_ptr_to_field);
}
return get_unknown_symbolic_region (TREE_TYPE (field));
field_region::key_t key (parent, field);
if (field_region *reg = m_field_regions.get (key))
@ -1397,6 +1405,10 @@ region_model_manager::get_element_region (const region *parent,
tree element_type,
const svalue *index)
{
/* (UNKNOWN_PTR[IDX]) is (UNKNOWN_PTR). */
if (parent->symbolic_for_unknown_ptr_p ())
return get_unknown_symbolic_region (element_type);
element_region::key_t key (parent, element_type, index);
if (element_region *reg = m_element_regions.get (key))
return reg;
@ -1416,6 +1428,10 @@ region_model_manager::get_offset_region (const region *parent,
tree type,
const svalue *byte_offset)
{
/* (UNKNOWN_PTR + OFFSET) is (UNKNOWN_PTR). */
if (parent->symbolic_for_unknown_ptr_p ())
return get_unknown_symbolic_region (type);
/* If BYTE_OFFSET is zero, return PARENT. */
if (tree cst_offset = byte_offset->maybe_get_constant ())
if (zerop (cst_offset))
@ -1451,6 +1467,9 @@ region_model_manager::get_sized_region (const region *parent,
tree type,
const svalue *byte_size_sval)
{
if (parent->symbolic_for_unknown_ptr_p ())
return get_unknown_symbolic_region (type);
if (byte_size_sval->get_type () != size_type_node)
byte_size_sval = get_or_create_cast (size_type_node, byte_size_sval);
@ -1486,6 +1505,9 @@ region_model_manager::get_cast_region (const region *original_region,
if (type == original_region->get_type ())
return original_region;
if (original_region->symbolic_for_unknown_ptr_p ())
return get_unknown_symbolic_region (type);
cast_region::key_t key (original_region, type);
if (cast_region *reg = m_cast_regions.get (key))
return reg;
@ -1558,6 +1580,9 @@ region_model_manager::get_bit_range (const region *parent, tree type,
{
gcc_assert (parent);
if (parent->symbolic_for_unknown_ptr_p ())
return get_unknown_symbolic_region (type);
bit_range_region::key_t key (parent, type, bits);
if (bit_range_region *reg = m_bit_range_regions.get (key))
return reg;

View File

@ -252,12 +252,8 @@ reachable_regions::handle_parm (const svalue *sval, tree param_type)
m_mutable_svals.add (sval);
else
m_reachable_svals.add (sval);
if (const region_svalue *parm_ptr
= sval->dyn_cast_region_svalue ())
{
const region *pointee_reg = parm_ptr->get_pointee ();
add (pointee_reg, is_mutable);
}
if (const region *base_reg = sval->maybe_get_deref_base_region ())
add (base_reg, is_mutable);
/* Treat all svalues within a compound_svalue as reachable. */
if (const compound_svalue *compound_sval
= sval->dyn_cast_compound_svalue ())

View File

@ -327,6 +327,8 @@ public:
const region *get_bit_range (const region *parent, tree type,
const bit_range &bits);
const region *get_unknown_symbolic_region (tree region_type);
const region *
get_region_for_unexpected_tree_code (region_model_context *ctxt,
tree t,

View File

@ -1016,7 +1016,9 @@ root_region::dump_to_pp (pretty_printer *pp, bool simple) const
symbolic_region::symbolic_region (unsigned id, region *parent,
const svalue *sval_ptr)
: region (complexity::from_pair (parent, sval_ptr), id, parent,
TREE_TYPE (sval_ptr->get_type ())),
(sval_ptr->get_type ()
? TREE_TYPE (sval_ptr->get_type ())
: NULL_TREE)),
m_sval_ptr (sval_ptr)
{
}
@ -1045,8 +1047,11 @@ symbolic_region::dump_to_pp (pretty_printer *pp, bool simple) const
{
pp_string (pp, "symbolic_region(");
get_parent_region ()->dump_to_pp (pp, simple);
pp_string (pp, ", ");
print_quoted_type (pp, get_type ());
if (get_type ())
{
pp_string (pp, ", ");
print_quoted_type (pp, get_type ());
}
pp_string (pp, ", ");
m_sval_ptr->dump_to_pp (pp, simple);
pp_string (pp, ")");

View File

@ -997,27 +997,61 @@ binding_map::get_overlapping_bindings (const binding_key *key,
value: {BITS_WITHIN(bytes 4-7, inner_val: INIT_VAL((*INIT_VAL(p_33(D))).arr))}
If UNCERTAINTY is non-NULL, use it to record any svalues that
were removed, as being maybe-bound. */
were removed, as being maybe-bound.
If ALWAYS_OVERLAP, then assume that DROP_KEY can overlap anything
in the map, due to one or both of the underlying clusters being
symbolic (but not the same symbolic region). Hence even if DROP_KEY is a
concrete binding it could actually be referring to the same memory as
distinct concrete bindings in the map. Remove all bindings, but
register any svalues with *UNCERTAINTY. */
void
binding_map::remove_overlapping_bindings (store_manager *mgr,
const binding_key *drop_key,
uncertainty_t *uncertainty)
uncertainty_t *uncertainty,
bool always_overlap)
{
/* Get the bindings of interest within this map. */
auto_vec<const binding_key *> bindings;
get_overlapping_bindings (drop_key, &bindings);
if (always_overlap)
for (auto iter : *this)
bindings.safe_push (iter.first); /* Add all bindings. */
else
/* Just add overlapping bindings. */
get_overlapping_bindings (drop_key, &bindings);
unsigned i;
const binding_key *iter_binding;
FOR_EACH_VEC_ELT (bindings, i, iter_binding)
{
/* Record any svalues that were removed to *UNCERTAINTY as being
maybe-bound, provided at least some part of the binding is symbolic.
Specifically, if at least one of the bindings is symbolic, or we
have ALWAYS_OVERLAP for the case where we have possibly aliasing
regions, then we don't know that the svalue has been overwritten,
and should record that to *UNCERTAINTY.
However, if we have concrete keys accessing within the same symbolic
region, then we *know* that the symbolic region has been overwritten,
so we don't record it to *UNCERTAINTY, as this could be a genuine
leak. */
const svalue *old_sval = get (iter_binding);
if (uncertainty)
if (uncertainty
&& (drop_key->symbolic_p ()
|| iter_binding->symbolic_p ()
|| always_overlap))
uncertainty->on_maybe_bound_sval (old_sval);
/* Begin by removing the old binding. */
m_map.remove (iter_binding);
/* Don't attempt to handle prefixes/suffixes for the
"always_overlap" case; everything's being removed. */
if (always_overlap)
continue;
/* Now potentially add the prefix and suffix. */
if (const concrete_binding *drop_ckey
= drop_key->dyn_cast_concrete_binding ())
@ -1335,22 +1369,30 @@ binding_cluster::zero_fill_region (store_manager *mgr, const region *reg)
fill_region (mgr, reg, zero_sval);
}
/* Mark REG within this cluster as being unknown.
/* Mark REG_TO_BIND within this cluster as being unknown.
Remove any bindings overlapping REG_FOR_OVERLAP.
If UNCERTAINTY is non-NULL, use it to record any svalues that
had bindings to them removed, as being maybe-bound. */
had bindings to them removed, as being maybe-bound.
REG_TO_BIND and REG_FOR_OVERLAP are the same for
store::mark_region_as_unknown, but are different in
store::set_value's alias handling, for handling the case where
we have a write to a symbolic REG_FOR_OVERLAP. */
void
binding_cluster::mark_region_as_unknown (store_manager *mgr,
const region *reg,
const region *reg_to_bind,
const region *reg_for_overlap,
uncertainty_t *uncertainty)
{
remove_overlapping_bindings (mgr, reg, uncertainty);
remove_overlapping_bindings (mgr, reg_for_overlap, uncertainty);
/* Add a default binding to "unknown". */
region_model_manager *sval_mgr = mgr->get_svalue_manager ();
const svalue *sval
= sval_mgr->get_or_create_unknown_svalue (reg->get_type ());
bind (mgr, reg, sval);
= sval_mgr->get_or_create_unknown_svalue (reg_to_bind->get_type ());
bind (mgr, reg_to_bind, sval);
}
/* Purge state involving SVAL. */
@ -1595,7 +1637,7 @@ binding_cluster::maybe_get_compound_binding (store_manager *mgr,
it overlaps with offset_concrete_key. */
default_map.remove_overlapping_bindings (mgr,
offset_concrete_key,
NULL);
NULL, false);
}
else if (bound_range.contains_p (reg_range, &subrange))
{
@ -1629,7 +1671,7 @@ binding_cluster::maybe_get_compound_binding (store_manager *mgr,
it overlaps with overlap_concrete_key. */
default_map.remove_overlapping_bindings (mgr,
overlap_concrete_key,
NULL);
NULL, false);
}
}
else
@ -1652,7 +1694,13 @@ binding_cluster::maybe_get_compound_binding (store_manager *mgr,
}
/* Remove, truncate, and/or split any bindings within this map that
overlap REG.
could overlap REG.
If REG's base region or this cluster is symbolic and they're different
base regions, then remove everything in this cluster's map, on the
grounds that REG could be referring to the same memory as anything
in the map.
If UNCERTAINTY is non-NULL, use it to record any svalues that
were removed, as being maybe-bound. */
@ -1663,7 +1711,19 @@ binding_cluster::remove_overlapping_bindings (store_manager *mgr,
{
const binding_key *reg_binding = binding_key::make (mgr, reg);
m_map.remove_overlapping_bindings (mgr, reg_binding, uncertainty);
const region *cluster_base_reg = get_base_region ();
const region *other_base_reg = reg->get_base_region ();
/* If at least one of the base regions involved is symbolic, and they're
not the same base region, then consider everything in the map as
potentially overlapping with reg_binding (even if it's a concrete
binding and things in the map are concrete - they could be referring
to the same memory when the symbolic base regions are taken into
account). */
bool always_overlap = (cluster_base_reg != other_base_reg
&& (cluster_base_reg->get_kind () == RK_SYMBOLIC
|| other_base_reg->get_kind () == RK_SYMBOLIC));
m_map.remove_overlapping_bindings (mgr, reg_binding, uncertainty,
always_overlap);
}
/* Attempt to merge CLUSTER_A and CLUSTER_B into OUT_CLUSTER, using
@ -2368,7 +2428,7 @@ store::set_value (store_manager *mgr, const region *lhs_reg,
logger *logger = mgr->get_logger ();
LOG_SCOPE (logger);
remove_overlapping_bindings (mgr, lhs_reg);
remove_overlapping_bindings (mgr, lhs_reg, uncertainty);
rhs_sval = simplify_for_binding (rhs_sval);
@ -2438,8 +2498,14 @@ store::set_value (store_manager *mgr, const region *lhs_reg,
lhs_reg->dump_to_pp (pp, true);
logger->end_log_line ();
}
iter_cluster->mark_region_as_unknown (mgr, iter_base_reg,
uncertainty);
/* Mark all of iter_cluster's iter_base_reg as unknown,
using LHS_REG when considering overlaps, to handle
symbolic vs concrete issues. */
iter_cluster->mark_region_as_unknown
(mgr,
iter_base_reg, /* reg_to_bind */
lhs_reg, /* reg_for_overlap */
uncertainty);
break;
case tristate::TS_TRUE:
@ -2603,7 +2669,7 @@ store::mark_region_as_unknown (store_manager *mgr, const region *reg,
|| !base_reg->tracked_p ())
return;
binding_cluster *cluster = get_or_create_cluster (base_reg);
cluster->mark_region_as_unknown (mgr, reg, uncertainty);
cluster->mark_region_as_unknown (mgr, reg, reg, uncertainty);
}
/* Purge state involving SVAL. */
@ -2826,10 +2892,14 @@ store::get_representative_path_vars (const region_model *model,
}
/* Remove all bindings overlapping REG within this store, removing
any clusters that become redundant. */
any clusters that become redundant.
If UNCERTAINTY is non-NULL, use it to record any svalues that
were removed, as being maybe-bound. */
void
store::remove_overlapping_bindings (store_manager *mgr, const region *reg)
store::remove_overlapping_bindings (store_manager *mgr, const region *reg,
uncertainty_t *uncertainty)
{
const region *base_reg = reg->get_base_region ();
if (binding_cluster **cluster_slot = m_cluster_map.get (base_reg))
@ -2842,7 +2912,7 @@ store::remove_overlapping_bindings (store_manager *mgr, const region *reg)
delete cluster;
return;
}
cluster->remove_overlapping_bindings (mgr, reg, NULL);
cluster->remove_overlapping_bindings (mgr, reg, uncertainty);
}
}

View File

@ -509,7 +509,8 @@ public:
void remove_overlapping_bindings (store_manager *mgr,
const binding_key *drop_key,
uncertainty_t *uncertainty);
uncertainty_t *uncertainty,
bool always_overlap);
private:
void get_overlapping_bindings (const binding_key *key,
@ -574,7 +575,9 @@ public:
void purge_region (store_manager *mgr, const region *reg);
void fill_region (store_manager *mgr, const region *reg, const svalue *sval);
void zero_fill_region (store_manager *mgr, const region *reg);
void mark_region_as_unknown (store_manager *mgr, const region *reg,
void mark_region_as_unknown (store_manager *mgr,
const region *reg_to_bind,
const region *reg_for_overlap,
uncertainty_t *uncertainty);
void purge_state_involving (const svalue *sval,
region_model_manager *sval_mgr);
@ -765,7 +768,8 @@ public:
region_model_manager *mgr);
private:
void remove_overlapping_bindings (store_manager *mgr, const region *reg);
void remove_overlapping_bindings (store_manager *mgr, const region *reg,
uncertainty_t *uncertainty);
tristate eval_alias_1 (const region *base_reg_a,
const region *base_reg_b) const;

View File

@ -59,6 +59,8 @@ along with GCC; see the file COPYING3. If not see
namespace ana {
static int cmp_csts_and_types (const_tree cst1, const_tree cst2);
/* class svalue and its various subclasses. */
/* class svalue. */
@ -304,7 +306,7 @@ svalue::implicitly_live_p (const svalue_set *, const region_model *) const
of the same type. */
static int
cmp_cst (const_tree cst1, const_tree cst2)
cmp_csts_same_type (const_tree cst1, const_tree cst2)
{
gcc_assert (TREE_TYPE (cst1) == TREE_TYPE (cst2));
gcc_assert (TREE_CODE (cst1) == TREE_CODE (cst2));
@ -323,9 +325,10 @@ cmp_cst (const_tree cst1, const_tree cst2)
TREE_REAL_CST_PTR (cst2),
sizeof (real_value));
case COMPLEX_CST:
if (int cmp_real = cmp_cst (TREE_REALPART (cst1), TREE_REALPART (cst2)))
if (int cmp_real = cmp_csts_and_types (TREE_REALPART (cst1),
TREE_REALPART (cst2)))
return cmp_real;
return cmp_cst (TREE_IMAGPART (cst1), TREE_IMAGPART (cst2));
return cmp_csts_and_types (TREE_IMAGPART (cst1), TREE_IMAGPART (cst2));
case VECTOR_CST:
if (int cmp_log2_npatterns
= ((int)VECTOR_CST_LOG2_NPATTERNS (cst1)
@ -337,13 +340,29 @@ cmp_cst (const_tree cst1, const_tree cst2)
return cmp_nelts_per_pattern;
unsigned encoded_nelts = vector_cst_encoded_nelts (cst1);
for (unsigned i = 0; i < encoded_nelts; i++)
if (int el_cmp = cmp_cst (VECTOR_CST_ENCODED_ELT (cst1, i),
VECTOR_CST_ENCODED_ELT (cst2, i)))
return el_cmp;
{
const_tree elt1 = VECTOR_CST_ENCODED_ELT (cst1, i);
const_tree elt2 = VECTOR_CST_ENCODED_ELT (cst2, i);
if (int el_cmp = cmp_csts_and_types (elt1, elt2))
return el_cmp;
}
return 0;
}
}
/* Comparator for imposing a deterministic order on constants that might
not be of the same type. */
static int
cmp_csts_and_types (const_tree cst1, const_tree cst2)
{
int t1 = TYPE_UID (TREE_TYPE (cst1));
int t2 = TYPE_UID (TREE_TYPE (cst2));
if (int cmp_type = t1 - t2)
return cmp_type;
return cmp_csts_same_type (cst1, cst2);
}
/* Comparator for imposing a deterministic order on svalues. */
int
@ -375,7 +394,7 @@ svalue::cmp_ptr (const svalue *sval1, const svalue *sval2)
const constant_svalue *constant_sval2 = (const constant_svalue *)sval2;
const_tree cst1 = constant_sval1->get_constant ();
const_tree cst2 = constant_sval2->get_constant ();
return cmp_cst (cst1, cst2);
return cmp_csts_same_type (cst1, cst2);
}
break;
case SK_UNKNOWN:
@ -644,6 +663,48 @@ svalue::all_zeroes_p () const
return false;
}
/* If this svalue is a pointer, attempt to determine the base region it points
to. Return NULL on any problems. */
const region *
svalue::maybe_get_deref_base_region () const
{
const svalue *iter = this;
while (1)
{
switch (iter->get_kind ())
{
default:
return NULL;
case SK_REGION:
{
const region_svalue *region_sval
= as_a <const region_svalue *> (iter);
return region_sval->get_pointee ()->get_base_region ();
}
case SK_BINOP:
{
const binop_svalue *binop_sval
= as_a <const binop_svalue *> (iter);
switch (binop_sval->get_op ())
{
case POINTER_PLUS_EXPR:
/* If we have a symbolic value expressing pointer arithmetic,
use the LHS. */
iter = binop_sval->get_arg0 ();
continue;
default:
return NULL;
}
return NULL;
}
}
}
}
/* class region_svalue : public svalue. */
/* Implementation of svalue::dump_to_pp vfunc for region_svalue. */

View File

@ -175,6 +175,8 @@ public:
per-type and thus it's meaningless for them to "have state". */
virtual bool can_have_associated_state_p () const { return true; }
const region *maybe_get_deref_base_region () const;
protected:
svalue (complexity c, tree type)
: m_complexity (c), m_type (type)

View File

@ -1497,10 +1497,14 @@ asan_redzone_buffer::emit_redzone_byte (HOST_WIDE_INT offset,
HOST_WIDE_INT off
= m_prev_offset + ASAN_SHADOW_GRANULARITY * m_shadow_bytes.length ();
if (off == offset)
/* Consecutive shadow memory byte. */;
else if (offset < m_prev_offset + (HOST_WIDE_INT) (ASAN_SHADOW_GRANULARITY
* RZ_BUFFER_SIZE)
&& !m_shadow_bytes.is_empty ())
{
/* Consecutive shadow memory byte. */
m_shadow_bytes.safe_push (value);
flush_if_full ();
/* Shadow memory byte with a small gap. */
for (; off < offset; off += ASAN_SHADOW_GRANULARITY)
m_shadow_bytes.safe_push (0);
}
else
{
@ -1521,9 +1525,9 @@ asan_redzone_buffer::emit_redzone_byte (HOST_WIDE_INT offset,
m_shadow_mem = adjust_address (m_shadow_mem, VOIDmode,
diff >> ASAN_SHADOW_SHIFT);
m_prev_offset = offset;
m_shadow_bytes.safe_push (value);
flush_if_full ();
}
m_shadow_bytes.safe_push (value);
flush_if_full ();
}
/* Emit RTX emission of the content of the buffer. */

View File

@ -636,15 +636,20 @@ decl_attributes (tree *node, tree attributes, int flags,
&& !DECL_FUNCTION_SPECIFIC_OPTIMIZATION (*node))
{
DECL_FUNCTION_SPECIFIC_OPTIMIZATION (*node) = optimization_current_node;
tree cur_tree
= build_target_option_node (&global_options, &global_options_set);
tree old_tree = DECL_FUNCTION_SPECIFIC_TARGET (*node);
if (!old_tree)
old_tree = target_option_default_node;
/* The changes on optimization options can cause the changes in
target options, update it accordingly if it's changed. */
if (old_tree != cur_tree)
DECL_FUNCTION_SPECIFIC_TARGET (*node) = cur_tree;
/* Don't set DECL_FUNCTION_SPECIFIC_TARGET for targets that don't
support #pragma GCC target or target attribute. */
if (target_option_default_node)
{
tree cur_tree
= build_target_option_node (&global_options, &global_options_set);
tree old_tree = DECL_FUNCTION_SPECIFIC_TARGET (*node);
if (!old_tree)
old_tree = target_option_default_node;
/* The changes on optimization options can cause the changes in
target options, update it accordingly if it's changed. */
if (old_tree != cur_tree)
DECL_FUNCTION_SPECIFIC_TARGET (*node) = cur_tree;
}
}
/* If this is a function and the user used #pragma GCC target, add the

View File

@ -2967,16 +2967,28 @@ expand_builtin_int_roundingfn_2 (tree exp, rtx target)
BUILT_IN_IROUND and if __builtin_iround is called directly, emit
a call to lround in the hope that the target provides at least some
C99 functions. This should result in the best user experience for
not full C99 targets. */
tree fallback_fndecl = mathfn_built_in_1
(TREE_TYPE (arg), as_combined_fn (fallback_fn), 0);
not full C99 targets.
As scalar float conversions with same mode are useless in GIMPLE,
we can end up e.g. with _Float32 argument passed to float builtin,
try to get the type from the builtin prototype first. */
tree fallback_fndecl = NULL_TREE;
if (tree argtypes = TYPE_ARG_TYPES (TREE_TYPE (fndecl)))
fallback_fndecl
= mathfn_built_in_1 (TREE_VALUE (argtypes),
as_combined_fn (fallback_fn), 0);
if (fallback_fndecl == NULL_TREE)
fallback_fndecl
= mathfn_built_in_1 (TREE_TYPE (arg),
as_combined_fn (fallback_fn), 0);
if (fallback_fndecl)
{
exp = build_call_nofold_loc (EXPR_LOCATION (exp),
fallback_fndecl, 1, arg);
exp = build_call_nofold_loc (EXPR_LOCATION (exp),
fallback_fndecl, 1, arg);
target = expand_call (exp, NULL_RTX, target == const0_rtx);
target = maybe_emit_group_store (target, TREE_TYPE (exp));
return convert_to_mode (mode, target, 0);
target = expand_call (exp, NULL_RTX, target == const0_rtx);
target = maybe_emit_group_store (target, TREE_TYPE (exp));
return convert_to_mode (mode, target, 0);
}
}
return expand_call (exp, target, target == const0_rtx);

View File

@ -1,3 +1,16 @@
2022-04-26 Patrick Palka <ppalka@redhat.com>
PR c++/105304
* c-common.cc (verify_tree) [restart]: Move up to before the
NULL test.
2022-04-11 Jakub Jelinek <jakub@redhat.com>
PR c++/105186
* c-common.cc (c_common_nodes_and_builtins): After registering __int%d
and __int%d__ builtin types, initialize corresponding ridpointers
entry.
2022-03-30 Marek Polacek <polacek@redhat.com>
PR c++/101030

View File

@ -2009,12 +2009,12 @@ verify_tree (tree x, struct tlist **pbefore_sp, struct tlist **pno_sp,
enum tree_code code;
enum tree_code_class cl;
restart:
/* X may be NULL if it is the operand of an empty statement expression
({ }). */
if (x == NULL)
return;
restart:
code = TREE_CODE (x);
cl = TREE_CODE_CLASS (code);
@ -4278,6 +4278,8 @@ c_common_nodes_and_builtins (void)
sprintf (name, "__int%d__", int_n_data[i].bitsize);
record_builtin_type ((enum rid)(RID_FIRST_INT_N + i), name,
int_n_trees[i].signed_type);
ridpointers[RID_FIRST_INT_N + i]
= DECL_NAME (TYPE_NAME (int_n_trees[i].signed_type));
sprintf (name, "__int%d unsigned", int_n_data[i].bitsize);
record_builtin_type (RID_MAX, name, int_n_trees[i].unsigned_type);

View File

@ -1,3 +1,8 @@
2022-04-08 Jakub Jelinek <jakub@redhat.com>
PR c/105149
* c-typeck.cc (c_build_va_arg): Reject function types.
2022-03-22 Marek Polacek <polacek@redhat.com>
PR c/82283

View File

@ -15896,6 +15896,12 @@ c_build_va_arg (location_t loc1, tree expr, location_t loc2, tree type)
"type %qT", type);
return error_mark_node;
}
else if (TREE_CODE (type) == FUNCTION_TYPE)
{
error_at (loc2, "second argument to %<va_arg%> is a function type %qT",
type);
return error_mark_node;
}
else if (warn_cxx_compat && TREE_CODE (type) == ENUMERAL_TYPE)
warning_at (loc2, OPT_Wc___compat,
"C++ requires promoted type, not enum type, in %<va_arg%>");

View File

@ -507,6 +507,7 @@ cgraph_node::create (tree decl)
gcc_assert (TREE_CODE (decl) == FUNCTION_DECL);
node->decl = decl;
node->semantic_interposition = opt_for_fn (decl, flag_semantic_interposition);
if ((flag_openacc || flag_openmp)
&& lookup_attribute ("omp declare target", DECL_ATTRIBUTES (decl)))
@ -3487,7 +3488,11 @@ cgraph_node::verify_node (void)
"returns a pointer");
error_found = true;
}
if (definition && externally_visible
if (definition
&& externally_visible
/* For aliases in lto1 free_lang_data doesn't guarantee preservation
of opt_for_fn (decl, flag_semantic_interposition). See PR105399. */
&& (!alias || !in_lto_p)
&& semantic_interposition
!= opt_for_fn (decl, flag_semantic_interposition))
{

View File

@ -394,6 +394,7 @@ cgraph_node::create_clone (tree new_decl, profile_count prof_count,
new_node->versionable = versionable;
new_node->can_change_signature = can_change_signature;
new_node->redefined_extern_inline = redefined_extern_inline;
new_node->semantic_interposition = semantic_interposition;
new_node->tm_may_enter_irr = tm_may_enter_irr;
new_node->externally_visible = false;
new_node->no_reorder = no_reorder;

View File

@ -621,6 +621,7 @@ cgraph_node::analyze (void)
tree decl = this->decl;
location_t saved_loc = input_location;
input_location = DECL_SOURCE_LOCATION (decl);
semantic_interposition = opt_for_fn (decl, flag_semantic_interposition);
if (thunk)
{

View File

@ -2569,6 +2569,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
rtx new_other_notes;
int i;
scalar_int_mode dest_mode, temp_mode;
bool has_non_call_exception = false;
/* Immediately return if any of I0,I1,I2 are the same insn (I3 can
never be). */
@ -2951,6 +2952,32 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
return 0;
}
/* With non-call exceptions we can end up trying to combine multiple
insns with possible EH side effects. Make sure we can combine
that to a single insn which means there must be at most one insn
in the combination with an EH side effect. */
if (cfun->can_throw_non_call_exceptions)
{
if (find_reg_note (i3, REG_EH_REGION, NULL_RTX)
|| find_reg_note (i2, REG_EH_REGION, NULL_RTX)
|| (i1 && find_reg_note (i1, REG_EH_REGION, NULL_RTX))
|| (i0 && find_reg_note (i0, REG_EH_REGION, NULL_RTX)))
{
has_non_call_exception = true;
if (insn_could_throw_p (i3)
+ insn_could_throw_p (i2)
+ (i1 ? insn_could_throw_p (i1) : 0)
+ (i0 ? insn_could_throw_p (i0) : 0) > 1)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "Can't combine multiple insns with EH "
"side-effects\n");
undo_all ();
return 0;
}
}
}
/* Record whether i2 and i3 are trivial moves. */
i2_was_move = is_just_move (i2);
i3_was_move = is_just_move (i3);
@ -3685,7 +3712,13 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
|| !modified_between_p (*split, i2, i3))
/* We can't overwrite I2DEST if its value is still used by
NEWPAT. */
&& ! reg_referenced_p (i2dest, newpat))
&& ! reg_referenced_p (i2dest, newpat)
/* We should not split a possibly trapping part when we
care about non-call EH and have REG_EH_REGION notes
to distribute. */
&& ! (cfun->can_throw_non_call_exceptions
&& has_non_call_exception
&& may_trap_p (*split)))
{
rtx newdest = i2dest;
enum rtx_code split_code = GET_CODE (*split);
@ -14175,23 +14208,35 @@ distribute_notes (rtx notes, rtx_insn *from_insn, rtx_insn *i3, rtx_insn *i2,
break;
case REG_EH_REGION:
/* These notes must remain with the call or trapping instruction. */
if (CALL_P (i3))
place = i3;
else if (i2 && CALL_P (i2))
place = i2;
else
{
gcc_assert (cfun->can_throw_non_call_exceptions);
if (may_trap_p (i3))
place = i3;
else if (i2 && may_trap_p (i2))
place = i2;
/* ??? Otherwise assume we've combined things such that we
can now prove that the instructions can't trap. Drop the
note in this case. */
}
break;
{
/* The landing pad handling needs to be kept in sync with the
prerequisite checking in try_combine. */
int lp_nr = INTVAL (XEXP (note, 0));
/* A REG_EH_REGION note transferring control can only ever come
from i3. */
if (lp_nr > 0)
gcc_assert (from_insn == i3);
/* We are making sure there is a single effective REG_EH_REGION
note and it's valid to put it on i3. */
if (!insn_could_throw_p (from_insn))
/* Throw away stray notes on insns that can never throw. */
;
else
{
if (CALL_P (i3))
place = i3;
else
{
gcc_assert (cfun->can_throw_non_call_exceptions);
/* If i3 can still trap preserve the note, otherwise we've
combined things such that we can now prove that the
instructions can't trap. Drop the note in this case. */
if (may_trap_p (i3))
place = i3;
}
}
break;
}
case REG_ARGS_SIZE:
/* ??? How to distribute between i3-i1. Assume i3 contains the

View File

@ -50,10 +50,10 @@ EXPORTED_CONST int processor_flags_table[] =
/* z15 */ PF_IEEE_FLOAT | PF_ZARCH | PF_LONG_DISPLACEMENT
| PF_EXTIMM | PF_DFP | PF_Z10 | PF_Z196 | PF_ZEC12 | PF_TX
| PF_Z13 | PF_VX | PF_VXE | PF_Z14 | PF_VXE2 | PF_Z15,
/* arch14 */ PF_IEEE_FLOAT | PF_ZARCH | PF_LONG_DISPLACEMENT
/* z16 */ PF_IEEE_FLOAT | PF_ZARCH | PF_LONG_DISPLACEMENT
| PF_EXTIMM | PF_DFP | PF_Z10 | PF_Z196 | PF_ZEC12 | PF_TX
| PF_Z13 | PF_VX | PF_VXE | PF_Z14 | PF_VXE2 | PF_Z15
| PF_NNPA | PF_ARCH14
| PF_NNPA | PF_Z16
};
/* Change optimizations to be performed, depending on the

View File

@ -4261,7 +4261,7 @@ case "${target}" in
ext_val=`echo $ext_val | sed -e 's/[a-z0-9]\+//'`
done
ext_mask="(("$ext_mask") << 6)"
ext_mask="(("$ext_mask") << TARGET_CPU_NBITS)"
if [ x"$base_id" != x ]; then
target_cpu_cname="TARGET_CPU_$base_id | $ext_mask"
fi
@ -4717,7 +4717,7 @@ case "${target}" in
esac
PYTHON=`which python || which python3 || which python2`
if test "x${PYTHON}" != x; then
with_arch=`${PYTHON} ${srcdir}/config/riscv/arch-canonicalize ${with_arch}`
with_arch=`${PYTHON} ${srcdir}/config/riscv/arch-canonicalize -misa-spec=${with_isa_spec} ${with_arch}`
fi
tm_defines="${tm_defines} TARGET_RISCV_DEFAULT_ARCH=${with_arch}"
@ -4766,6 +4766,7 @@ case "${target}" in
case "${target}" in
riscv*-*-elf*)
if ${srcdir}/config/riscv/multilib-generator \
-misa-spec=${with_isa_spec} \
`echo ${with_multilib_generator} | sed 's/;/ /g'`\
> t-multilib-config;
then
@@ -5531,7 +5532,7 @@ case "${target}" in
for which in arch tune; do
eval "val=\$with_$which"
case ${val} in
"" | native | z900 | z990 | z9-109 | z9-ec | z10 | z196 | zEC12 | z13 | z14 | z15 | arch5 | arch6 | arch7 | arch8 | arch9 | arch10 | arch11 | arch12 | arch13 | arch14 )
"" | native | z900 | z990 | z9-109 | z9-ec | z10 | z196 | zEC12 | z13 | z14 | z15 | z16 | arch5 | arch6 | arch7 | arch8 | arch9 | arch10 | arch11 | arch12 | arch13 | arch14 )
# OK
;;
*)

View File

@@ -1664,6 +1664,14 @@ aarch64_init_ls64_builtins (void)
= aarch64_general_add_builtin (data[i].name, data[i].type, data[i].code);
}
/* Implement #pragma GCC aarch64 "arm_acle.h". */
void
handle_arm_acle_h (void)
{
if (TARGET_LS64)
aarch64_init_ls64_builtins ();
}
/* Initialize fpsr fpcr getters and setters. */
static void
@@ -1755,9 +1763,6 @@ aarch64_general_init_builtins (void)
if (TARGET_MEMTAG)
aarch64_init_memtag_builtins ();
if (TARGET_LS64)
aarch64_init_ls64_builtins ();
}
/* Implement TARGET_BUILTIN_DECL for the AARCH64_BUILTIN_GENERAL group. */

View File

@@ -302,6 +302,8 @@ aarch64_pragma_aarch64 (cpp_reader *)
aarch64_sve::handle_arm_sve_h ();
else if (strcmp (name, "arm_neon.h") == 0)
handle_arm_neon_h ();
else if (strcmp (name, "arm_acle.h") == 0)
handle_arm_acle_h ();
else
error ("unknown %<#pragma GCC aarch64%> option %qs", name);
}

View File

@@ -995,6 +995,7 @@ rtx aarch64_general_expand_builtin (unsigned int, tree, rtx, int);
tree aarch64_general_builtin_decl (unsigned, bool);
tree aarch64_general_builtin_rsqrt (unsigned int);
tree aarch64_builtin_vectorized_function (unsigned int, tree, tree);
void handle_arm_acle_h (void);
void handle_arm_neon_h (void);
namespace aarch64_sve {

View File

@@ -3385,20 +3385,6 @@
;; 'across lanes' add.
(define_expand "reduc_plus_scal_<mode>"
[(match_operand:<VEL> 0 "register_operand")
(unspec:VDQ_I [(match_operand:VDQ_I 1 "register_operand")]
UNSPEC_ADDV)]
"TARGET_SIMD"
{
rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);
rtx scratch = gen_reg_rtx (<MODE>mode);
emit_insn (gen_aarch64_reduc_plus_internal<mode> (scratch, operands[1]));
emit_insn (gen_aarch64_get_lane<mode> (operands[0], scratch, elt));
DONE;
}
)
(define_insn "aarch64_faddp<mode>"
[(set (match_operand:VHSDF 0 "register_operand" "=w")
(unspec:VHSDF [(match_operand:VHSDF 1 "register_operand" "w")
@@ -3409,15 +3395,58 @@
[(set_attr "type" "neon_fp_reduc_add_<stype><q>")]
)
(define_insn "aarch64_reduc_plus_internal<mode>"
[(set (match_operand:VDQV 0 "register_operand" "=w")
(unspec:VDQV [(match_operand:VDQV 1 "register_operand" "w")]
(define_insn "reduc_plus_scal_<mode>"
[(set (match_operand:<VEL> 0 "register_operand" "=w")
(unspec:<VEL> [(match_operand:VDQV 1 "register_operand" "w")]
UNSPEC_ADDV))]
"TARGET_SIMD"
"add<VDQV:vp>\\t%<Vetype>0, %1.<Vtype>"
[(set_attr "type" "neon_reduc_add<q>")]
)
(define_insn "reduc_plus_scal_v2si"
[(set (match_operand:SI 0 "register_operand" "=w")
(unspec:SI [(match_operand:V2SI 1 "register_operand" "w")]
UNSPEC_ADDV))]
"TARGET_SIMD"
"addp\\t%0.2s, %1.2s, %1.2s"
[(set_attr "type" "neon_reduc_add")]
)
;; ADDV with result zero-extended to SI/DImode (for popcount).
(define_insn "aarch64_zero_extend<GPI:mode>_reduc_plus_<VDQV_E:mode>"
[(set (match_operand:GPI 0 "register_operand" "=w")
(zero_extend:GPI
(unspec:<VDQV_E:VEL> [(match_operand:VDQV_E 1 "register_operand" "w")]
UNSPEC_ADDV)))]
"TARGET_SIMD"
"add<VDQV_E:vp>\\t%<VDQV_E:Vetype>0, %1.<VDQV_E:Vtype>"
[(set_attr "type" "neon_reduc_add<VDQV_E:q>")]
)
(define_insn "reduc_plus_scal_<mode>"
[(set (match_operand:<VEL> 0 "register_operand" "=w")
(unspec:<VEL> [(match_operand:V2F 1 "register_operand" "w")]
UNSPEC_FADDV))]
"TARGET_SIMD"
"faddp\\t%<Vetype>0, %1.<Vtype>"
[(set_attr "type" "neon_fp_reduc_add_<Vetype><q>")]
)
(define_expand "reduc_plus_scal_v4sf"
[(set (match_operand:SF 0 "register_operand")
(unspec:SF [(match_operand:V4SF 1 "register_operand")]
UNSPEC_FADDV))]
"TARGET_SIMD"
{
rtx elt = aarch64_endian_lane_rtx (V4SFmode, 0);
rtx scratch = gen_reg_rtx (V4SFmode);
emit_insn (gen_aarch64_faddpv4sf (scratch, operands[1], operands[1]));
emit_insn (gen_aarch64_faddpv4sf (scratch, scratch, scratch));
emit_insn (gen_aarch64_get_lanev4sf (operands[0], scratch, elt));
DONE;
})
(define_insn "aarch64_<su>addlv<mode>"
[(set (match_operand:<VWIDE_S> 0 "register_operand" "=w")
(unspec:<VWIDE_S> [(match_operand:VDQV_L 1 "register_operand" "w")]
@@ -3436,49 +3465,6 @@
[(set_attr "type" "neon_reduc_add<q>")]
)
;; ADDV with result zero-extended to SI/DImode (for popcount).
(define_insn "aarch64_zero_extend<GPI:mode>_reduc_plus_<VDQV_E:mode>"
[(set (match_operand:GPI 0 "register_operand" "=w")
(zero_extend:GPI
(unspec:<VDQV_E:VEL> [(match_operand:VDQV_E 1 "register_operand" "w")]
UNSPEC_ADDV)))]
"TARGET_SIMD"
"add<VDQV_E:vp>\\t%<VDQV_E:Vetype>0, %1.<VDQV_E:Vtype>"
[(set_attr "type" "neon_reduc_add<VDQV_E:q>")]
)
(define_insn "aarch64_reduc_plus_internalv2si"
[(set (match_operand:V2SI 0 "register_operand" "=w")
(unspec:V2SI [(match_operand:V2SI 1 "register_operand" "w")]
UNSPEC_ADDV))]
"TARGET_SIMD"
"addp\\t%0.2s, %1.2s, %1.2s"
[(set_attr "type" "neon_reduc_add")]
)
(define_insn "reduc_plus_scal_<mode>"
[(set (match_operand:<VEL> 0 "register_operand" "=w")
(unspec:<VEL> [(match_operand:V2F 1 "register_operand" "w")]
UNSPEC_FADDV))]
"TARGET_SIMD"
"faddp\\t%<Vetype>0, %1.<Vtype>"
[(set_attr "type" "neon_fp_reduc_add_<Vetype><q>")]
)
(define_expand "reduc_plus_scal_v4sf"
[(set (match_operand:SF 0 "register_operand")
(unspec:V4SF [(match_operand:V4SF 1 "register_operand")]
UNSPEC_FADDV))]
"TARGET_SIMD"
{
rtx elt = aarch64_endian_lane_rtx (V4SFmode, 0);
rtx scratch = gen_reg_rtx (V4SFmode);
emit_insn (gen_aarch64_faddpv4sf (scratch, operands[1], operands[1]));
emit_insn (gen_aarch64_faddpv4sf (scratch, scratch, scratch));
emit_insn (gen_aarch64_get_lanev4sf (operands[0], scratch, elt));
DONE;
})
(define_insn "clrsb<mode>2"
[(set (match_operand:VDQ_BHSI 0 "register_operand" "=w")
(clrsb:VDQ_BHSI (match_operand:VDQ_BHSI 1 "register_operand" "w")))]

View File

@@ -15637,7 +15637,7 @@ private:
unsigned int adjust_body_cost (loop_vec_info, const aarch64_vector_costs *,
unsigned int);
bool prefer_unrolled_loop () const;
unsigned int determine_suggested_unroll_factor ();
unsigned int determine_suggested_unroll_factor (loop_vec_info);
/* True if we have performed one-time initialization based on the
vec_info. */
@@ -16746,7 +16746,8 @@ adjust_body_cost_sve (const aarch64_vec_op_count *ops,
}
unsigned int
aarch64_vector_costs::determine_suggested_unroll_factor ()
aarch64_vector_costs::
determine_suggested_unroll_factor (loop_vec_info loop_vinfo)
{
bool sve = m_vec_flags & VEC_ANY_SVE;
/* If we are trying to unroll an Advanced SIMD main loop that contains
@@ -16760,6 +16761,7 @@ aarch64_vector_costs::determine_suggested_unroll_factor ()
return 1;
unsigned int max_unroll_factor = 1;
auto vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
for (auto vec_ops : m_ops)
{
aarch64_simd_vec_issue_info const *vec_issue
@@ -16768,7 +16770,8 @@ aarch64_vector_costs::determine_suggested_unroll_factor ()
return 1;
/* Limit unroll factor to a value adjustable by the user, the default
value is 4. */
unsigned int unroll_factor = aarch64_vect_unroll_limit;
unsigned int unroll_factor = MIN (aarch64_vect_unroll_limit,
(int) known_alignment (vf));
unsigned int factor
= vec_ops.reduction_latency > 1 ? vec_ops.reduction_latency : 1;
unsigned int temp;
@@ -16946,7 +16949,8 @@ aarch64_vector_costs::finish_cost (const vector_costs *uncast_scalar_costs)
{
m_costs[vect_body] = adjust_body_cost (loop_vinfo, scalar_costs,
m_costs[vect_body]);
m_suggested_unroll_factor = determine_suggested_unroll_factor ();
m_suggested_unroll_factor
= determine_suggested_unroll_factor (loop_vinfo);
}
/* Apply the heuristic described above m_stp_sequence_cost. Prefer
@@ -18053,6 +18057,9 @@ aarch64_validate_mtune (const char *str, const struct processor **res)
return false;
}
static_assert (TARGET_CPU_generic < TARGET_CPU_MASK,
"TARGET_CPU_NBITS is big enough");
/* Return the CPU corresponding to the enum CPU.
If it doesn't specify a cpu, return the default. */
@@ -18062,12 +18069,12 @@ aarch64_get_tune_cpu (enum aarch64_processor cpu)
if (cpu != aarch64_none)
return &all_cores[cpu];
/* The & 0x3f is to extract the bottom 6 bits that encode the
default cpu as selected by the --with-cpu GCC configure option
/* The & TARGET_CPU_MASK is to extract the bottom TARGET_CPU_NBITS bits that
encode the default cpu as selected by the --with-cpu GCC configure option
in config.gcc.
???: The whole TARGET_CPU_DEFAULT and AARCH64_CPU_DEFAULT_FLAGS
flags mechanism should be reworked to make it more sane. */
return &all_cores[TARGET_CPU_DEFAULT & 0x3f];
return &all_cores[TARGET_CPU_DEFAULT & TARGET_CPU_MASK];
}
/* Return the architecture corresponding to the enum ARCH.
@@ -18079,7 +18086,8 @@ aarch64_get_arch (enum aarch64_arch arch)
if (arch != aarch64_no_arch)
return &all_architectures[arch];
const struct processor *cpu = &all_cores[TARGET_CPU_DEFAULT & 0x3f];
const struct processor *cpu
= &all_cores[TARGET_CPU_DEFAULT & TARGET_CPU_MASK];
return &all_architectures[cpu->arch];
}
@@ -18166,7 +18174,7 @@ aarch64_override_options (void)
{
/* Get default configure-time CPU. */
selected_cpu = aarch64_get_tune_cpu (aarch64_none);
aarch64_isa_flags = TARGET_CPU_DEFAULT >> 6;
aarch64_isa_flags = TARGET_CPU_DEFAULT >> TARGET_CPU_NBITS;
}
if (selected_tune)

View File

@@ -813,10 +813,16 @@ enum target_cpus
TARGET_CPU_generic
};
/* Define how many bits are used to represent the CPU in TARGET_CPU_DEFAULT.
This needs to be big enough to fit the value of TARGET_CPU_generic.
All bits after this are used to represent the AARCH64_CPU_DEFAULT_FLAGS. */
#define TARGET_CPU_NBITS 8
#define TARGET_CPU_MASK ((1 << TARGET_CPU_NBITS) - 1)
/* If there is no CPU defined at configure, use generic as default. */
#ifndef TARGET_CPU_DEFAULT
#define TARGET_CPU_DEFAULT \
(TARGET_CPU_generic | (AARCH64_CPU_DEFAULT_FLAGS << 6))
(TARGET_CPU_generic | (AARCH64_CPU_DEFAULT_FLAGS << TARGET_CPU_NBITS))
#endif
/* If inserting NOP before a mult-accumulate insn remember to adjust the

View File

@@ -29,6 +29,8 @@
#include <stdint.h>
#pragma GCC aarch64 "arm_acle.h"
#ifdef __cplusplus
extern "C" {
#endif

View File

@@ -26,8 +26,8 @@
# Arch and FPU variants to build libraries with
MULTI_ARCH_OPTS_A = march=armv7-a/march=armv7-a+fp/march=armv7-a+simd/march=armv7ve+simd/march=armv8-a/march=armv8-a+simd/march=armv9-a/march=armv9-a+simd
MULTI_ARCH_DIRS_A = v7-a v7-a+fp v7-a+simd v7ve+simd v8-a v8-a+simd v9-a v9-a+simd
MULTI_ARCH_OPTS_A = march=armv7-a/march=armv7-a+fp/march=armv7-a+simd/march=armv7ve+simd/march=armv8-a/march=armv8-a+simd
MULTI_ARCH_DIRS_A = v7-a v7-a+fp v7-a+simd v7ve+simd v8-a v8-a+simd
# ARMv7-A - build nofp, fp-d16 and SIMD variants
@@ -46,11 +46,6 @@ MULTILIB_REQUIRED += mthumb/march=armv8-a/mfloat-abi=soft
MULTILIB_REQUIRED += mthumb/march=armv8-a+simd/mfloat-abi=hard
MULTILIB_REQUIRED += mthumb/march=armv8-a+simd/mfloat-abi=softfp
# Armv9-A - build nofp and SIMD variants.
MULTILIB_REQUIRED += mthumb/march=armv9-a/mfloat-abi=soft
MULTILIB_REQUIRED += mthumb/march=armv9-a+simd/mfloat-abi=hard
MULTILIB_REQUIRED += mthumb/march=armv9-a+simd/mfloat-abi=softfp
# Matches
# Arch Matches
@@ -135,14 +130,12 @@ MULTILIB_MATCHES += $(foreach ARCH, $(v8_6_a_simd_variants), \
march?armv8-a+simd=march?armv8.6-a$(ARCH))
# Armv9 without SIMD: map down to base architecture
MULTILIB_MATCHES += $(foreach ARCH, $(v9_a_nosimd_variants), \
march?armv9-a=march?armv9-a$(ARCH))
MULTILIB_MATCHES += march?armv8-a=march?armv9-a
# No variants without SIMD.
# Armv9 with SIMD: map down to base arch + simd
MULTILIB_MATCHES += march?armv9-a+simd=march?armv9-a+crc+simd \
$(foreach ARCH, $(filter-out +simd, $(v9_a_simd_variants)), \
march?armv9-a+simd=march?armv9-a$(ARCH) \
march?armv9-a+simd=march?armv9-a+crc$(ARCH))
MULTILIB_MATCHES += $(foreach ARCH, $(v9_a_simd_variants), \
march?armv8-a+simd=march?armv9-a$(ARCH))
# Use Thumb libraries for everything.
@@ -150,13 +143,11 @@ MULTILIB_REUSE += mthumb/march.armv7-a/mfloat-abi.soft=marm/march.armv7-a/mfloa
MULTILIB_REUSE += mthumb/march.armv8-a/mfloat-abi.soft=marm/march.armv8-a/mfloat-abi.soft
MULTILIB_REUSE += mthumb/march.armv9-a/mfloat-abi.soft=marm/march.armv9-a/mfloat-abi.soft
MULTILIB_REUSE += $(foreach ABI, hard softfp, \
$(foreach ARCH, armv7-a+fp armv7-a+simd armv7ve+simd armv8-a+simd armv9-a+simd, \
$(foreach ARCH, armv7-a+fp armv7-a+simd armv7ve+simd armv8-a+simd, \
mthumb/march.$(ARCH)/mfloat-abi.$(ABI)=marm/march.$(ARCH)/mfloat-abi.$(ABI)))
# Softfp but no FP, use the soft-float libraries.
MULTILIB_REUSE += $(foreach MODE, arm thumb, \
$(foreach ARCH, armv7-a armv8-a armv9-a, \
$(foreach ARCH, armv7-a armv8-a, \
mthumb/march.$(ARCH)/mfloat-abi.soft=m$(MODE)/march.$(ARCH)/mfloat-abi.softfp))

View File

@@ -78,7 +78,6 @@ v8_4_a_simd_variants := $(call all_feat_combs, simd fp16 crypto i8mm bf16)
v8_5_a_simd_variants := $(call all_feat_combs, simd fp16 crypto i8mm bf16)
v8_6_a_simd_variants := $(call all_feat_combs, simd fp16 crypto i8mm bf16)
v8_r_nosimd_variants := +crc
v9_a_nosimd_variants := +crc
v9_a_simd_variants := $(call all_feat_combs, simd fp16 crypto i8mm bf16)
ifneq (,$(HAS_APROFILE))
@@ -206,14 +205,10 @@ MULTILIB_MATCHES += $(foreach ARCH, $(v8_6_a_simd_variants), \
# Armv9
MULTILIB_MATCHES += march?armv7=march?armv9-a
MULTILIB_MATCHES += $(foreach ARCH, $(v9_a_nosimd_variants), \
march?armv7=march?armv9-a$(ARCH))
# Armv9 with SIMD
MULTILIB_MATCHES += march?armv7+fp=march?armv9-a+crc+simd \
$(foreach ARCH, $(v9_a_simd_variants), \
march?armv7+fp=march?armv9-a$(ARCH) \
march?armv7+fp=march?armv9-a+crc$(ARCH))
MULTILIB_MATCHES += $(foreach ARCH, $(v9_a_simd_variants), \
march?armv7+fp=march?armv9-a$(ARCH))
endif # Not APROFILE.
# Use Thumb libraries for everything.

View File

@@ -1741,7 +1741,7 @@
(ior:SI (ashift:SI (match_operand:SI 1 "register_operand" "d") (const_int 1))
(zero_extend:SI (reg:BI REG_CC))))
(set (reg:BI REG_CC)
(zero_extract:BI (match_dup 1) (const_int 31) (const_int 0)))]
(zero_extract:BI (match_dup 1) (const_int 1) (const_int 31)))]
""
"%0 = ROT %1 BY 1%!"
[(set_attr "type" "dsp32shiftimm")])

View File

@@ -55,7 +55,7 @@ along with GCC; see the file COPYING3. If not see
#endif
#undef TARGET_LIBC_HAS_FUNCTION
#define TARGET_LIBC_HAS_FUNCTION no_c99_libc_has_function
#define TARGET_LIBC_HAS_FUNCTION bsd_libc_has_function
/* Use --as-needed -lgcc_s for eh support. */
#ifdef HAVE_LD_AS_NEEDED

View File

@@ -5588,8 +5588,9 @@ gcn_print_lds_decl (FILE *f, tree var)
fprintf (f, "%u", gang_private_hwm);
gang_private_hwm += size;
if (gang_private_hwm > gang_private_size_opt)
error ("gang-private data-share memory exhausted (increase with "
"%<-mgang-private-size=<number>%>)");
error ("%d bytes of gang-private data-share memory exhausted"
" (increase with %<-mgang-private-size=%d%>, for example)",
gang_private_size_opt, gang_private_hwm);
}
}

View File

@@ -3286,31 +3286,67 @@ _mm_maskz_scalef_round_ss (__mmask8 __U, __m128 __A, __m128 __B, const int __R)
(__mmask8) __U, __R);
}
#else
#define _mm512_scalef_round_pd(A, B, C) \
(__m512d)__builtin_ia32_scalefpd512_mask(A, B, (__v8df)_mm512_undefined_pd(), -1, C)
#define _mm512_scalef_round_pd(A, B, C) \
((__m512d) \
__builtin_ia32_scalefpd512_mask((A), (B), \
(__v8df) _mm512_undefined_pd(), \
-1, (C)))
#define _mm512_mask_scalef_round_pd(W, U, A, B, C) \
(__m512d)__builtin_ia32_scalefpd512_mask(A, B, W, U, C)
#define _mm512_mask_scalef_round_pd(W, U, A, B, C) \
((__m512d) __builtin_ia32_scalefpd512_mask((A), (B), (W), (U), (C)))
#define _mm512_maskz_scalef_round_pd(U, A, B, C) \
(__m512d)__builtin_ia32_scalefpd512_mask(A, B, (__v8df)_mm512_setzero_pd(), U, C)
#define _mm512_maskz_scalef_round_pd(U, A, B, C) \
((__m512d) \
__builtin_ia32_scalefpd512_mask((A), (B), \
(__v8df) _mm512_setzero_pd(), \
(U), (C)))
#define _mm512_scalef_round_ps(A, B, C) \
(__m512)__builtin_ia32_scalefps512_mask(A, B, (__v16sf)_mm512_undefined_ps(), -1, C)
#define _mm512_scalef_round_ps(A, B, C) \
((__m512) \
__builtin_ia32_scalefps512_mask((A), (B), \
(__v16sf) _mm512_undefined_ps(), \
-1, (C)))
#define _mm512_mask_scalef_round_ps(W, U, A, B, C) \
(__m512)__builtin_ia32_scalefps512_mask(A, B, W, U, C)
#define _mm512_mask_scalef_round_ps(W, U, A, B, C) \
((__m512) __builtin_ia32_scalefps512_mask((A), (B), (W), (U), (C)))
#define _mm512_maskz_scalef_round_ps(U, A, B, C) \
(__m512)__builtin_ia32_scalefps512_mask(A, B, (__v16sf)_mm512_setzero_ps(), U, C)
#define _mm512_maskz_scalef_round_ps(U, A, B, C) \
((__m512) \
__builtin_ia32_scalefps512_mask((A), (B), \
(__v16sf) _mm512_setzero_ps(), \
(U), (C)))
#define _mm_scalef_round_sd(A, B, C) \
(__m128d)__builtin_ia32_scalefsd_mask_round (A, B, \
(__v2df)_mm_setzero_pd (), -1, C)
#define _mm_scalef_round_sd(A, B, C) \
((__m128d) \
__builtin_ia32_scalefsd_mask_round ((A), (B), \
(__v2df) _mm_undefined_pd (), \
-1, (C)))
#define _mm_scalef_round_ss(A, B, C) \
(__m128)__builtin_ia32_scalefss_mask_round (A, B, \
(__v4sf)_mm_setzero_ps (), -1, C)
#define _mm_scalef_round_ss(A, B, C) \
((__m128) \
__builtin_ia32_scalefss_mask_round ((A), (B), \
(__v4sf) _mm_undefined_ps (), \
-1, (C)))
#define _mm_mask_scalef_round_sd(W, U, A, B, C) \
((__m128d) \
__builtin_ia32_scalefsd_mask_round ((A), (B), (W), (U), (C)))
#define _mm_mask_scalef_round_ss(W, U, A, B, C) \
((__m128) \
__builtin_ia32_scalefss_mask_round ((A), (B), (W), (U), (C)))
#define _mm_maskz_scalef_round_sd(U, A, B, C) \
((__m128d) \
__builtin_ia32_scalefsd_mask_round ((A), (B), \
(__v2df) _mm_setzero_pd (), \
(U), (C)))
#define _mm_maskz_scalef_round_ss(U, A, B, C) \
((__m128) \
__builtin_ia32_scalefss_mask_round ((A), (B), \
(__v4sf) _mm_setzero_ps (), \
(U), (C)))
#endif
#define _mm_mask_scalef_sd(W, U, A, B) \

View File

@@ -3136,6 +3136,8 @@ ix86_expand_int_movcc (rtx operands[])
bool sign_bit_compare_p = false;
rtx op0 = XEXP (operands[1], 0);
rtx op1 = XEXP (operands[1], 1);
rtx op2 = operands[2];
rtx op3 = operands[3];
if (GET_MODE (op0) == TImode
|| (GET_MODE (op0) == DImode
@@ -3153,17 +3155,29 @@ ix86_expand_int_movcc (rtx operands[])
|| (op1 == constm1_rtx && (code == GT || code == LE)))
sign_bit_compare_p = true;
/* op0 == op1 ? op0 : op3 is equivalent to op0 == op1 ? op1 : op3,
but if op1 is a constant, the latter form allows more optimizations,
either through the last 2 ops being constant handling, or the one
constant and one variable cases. On the other side, for cmov the
former might be better as we don't need to load the constant into
another register. */
if (code == EQ && CONST_INT_P (op1) && rtx_equal_p (op0, op2))
op2 = op1;
/* Similarly for op0 != op1 ? op2 : op0 and op0 != op1 ? op2 : op1. */
else if (code == NE && CONST_INT_P (op1) && rtx_equal_p (op0, op3))
op3 = op1;
/* Don't attempt mode expansion here -- if we had to expand 5 or 6
HImode insns, we'd be swallowed in word prefix ops. */
if ((mode != HImode || TARGET_FAST_PREFIX)
&& (mode != (TARGET_64BIT ? TImode : DImode))
&& CONST_INT_P (operands[2])
&& CONST_INT_P (operands[3]))
&& CONST_INT_P (op2)
&& CONST_INT_P (op3))
{
rtx out = operands[0];
HOST_WIDE_INT ct = INTVAL (operands[2]);
HOST_WIDE_INT cf = INTVAL (operands[3]);
HOST_WIDE_INT ct = INTVAL (op2);
HOST_WIDE_INT cf = INTVAL (op3);
HOST_WIDE_INT diff;
diff = ct - cf;
@@ -3559,6 +3573,9 @@ ix86_expand_int_movcc (rtx operands[])
if (BRANCH_COST (optimize_insn_for_speed_p (), false) <= 2)
return false;
operands[2] = op2;
operands[3] = op3;
/* If one of the two operands is an interesting constant, load a
constant with the above and mask it in with a logical operation. */
@@ -17036,7 +17053,8 @@ ix86_emit_fp_unordered_jump (rtx label)
/* Output code to perform a sinh XFmode calculation. */
void ix86_emit_i387_sinh (rtx op0, rtx op1)
void
ix86_emit_i387_sinh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17084,7 +17102,8 @@ void ix86_emit_i387_sinh (rtx op0, rtx op1)
/* Output code to perform a cosh XFmode calculation. */
void ix86_emit_i387_cosh (rtx op0, rtx op1)
void
ix86_emit_i387_cosh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17106,7 +17125,8 @@ void ix86_emit_i387_cosh (rtx op0, rtx op1)
/* Output code to perform a tanh XFmode calculation. */
void ix86_emit_i387_tanh (rtx op0, rtx op1)
void
ix86_emit_i387_tanh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17152,7 +17172,8 @@ void ix86_emit_i387_tanh (rtx op0, rtx op1)
/* Output code to perform an asinh XFmode calculation. */
void ix86_emit_i387_asinh (rtx op0, rtx op1)
void
ix86_emit_i387_asinh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17204,7 +17225,8 @@ void ix86_emit_i387_asinh (rtx op0, rtx op1)
/* Output code to perform an acosh XFmode calculation. */
void ix86_emit_i387_acosh (rtx op0, rtx op1)
void
ix86_emit_i387_acosh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17230,7 +17252,8 @@ void ix86_emit_i387_acosh (rtx op0, rtx op1)
/* Output code to perform an atanh XFmode calculation. */
void ix86_emit_i387_atanh (rtx op0, rtx op1)
void
ix86_emit_i387_atanh (rtx op0, rtx op1)
{
rtx e1 = gen_reg_rtx (XFmode);
rtx e2 = gen_reg_rtx (XFmode);
@@ -17281,7 +17304,8 @@ void ix86_emit_i387_atanh (rtx op0, rtx op1)
/* Output code to perform a log1p XFmode calculation. */
void ix86_emit_i387_log1p (rtx op0, rtx op1)
void
ix86_emit_i387_log1p (rtx op0, rtx op1)
{
rtx_code_label *label1 = gen_label_rtx ();
rtx_code_label *label2 = gen_label_rtx ();
@@ -17291,6 +17315,11 @@ void ix86_emit_i387_log1p (rtx op0, rtx op1)
rtx cst, cstln2, cst1;
rtx_insn *insn;
/* The emit_jump call emits pending stack adjust, make sure it is emitted
before the conditional jump, otherwise the stack adjustment will be
only conditional. */
do_pending_stack_adjust ();
cst = const_double_from_real_value
(REAL_VALUE_ATOF ("0.29289321881345247561810596348408353", XFmode), XFmode);
cstln2 = force_reg (XFmode, standard_80387_constant_rtx (4)); /* fldln2 */
@@ -17320,7 +17349,8 @@ void ix86_emit_i387_log1p (rtx op0, rtx op1)
}
/* Emit code for round calculation. */
void ix86_emit_i387_round (rtx op0, rtx op1)
void
ix86_emit_i387_round (rtx op0, rtx op1)
{
machine_mode inmode = GET_MODE (op1);
machine_mode outmode = GET_MODE (op0);
@@ -17434,7 +17464,8 @@ void ix86_emit_i387_round (rtx op0, rtx op1)
/* Output code to perform a Newton-Raphson approximation of a single precision
floating point divide [http://en.wikipedia.org/wiki/N-th_root_algorithm]. */
void ix86_emit_swdivsf (rtx res, rtx a, rtx b, machine_mode mode)
void
ix86_emit_swdivsf (rtx res, rtx a, rtx b, machine_mode mode)
{
rtx x0, x1, e0, e1;
@@ -17485,7 +17516,8 @@ void ix86_emit_swdivsf (rtx res, rtx a, rtx b, machine_mode mode)
/* Output code to perform a Newton-Raphson approximation of a
single precision floating point [reciprocal] square root. */
void ix86_emit_swsqrtsf (rtx res, rtx a, machine_mode mode, bool recip)
void
ix86_emit_swsqrtsf (rtx res, rtx a, machine_mode mode, bool recip)
{
rtx x0, e0, e1, e2, e3, mthree, mhalf;
REAL_VALUE_TYPE r;
@@ -23240,9 +23272,10 @@ ix86_expand_divmod_libfunc (rtx libfunc, machine_mode mode,
*rem_p = rem;
}
void ix86_expand_atomic_fetch_op_loop (rtx target, rtx mem, rtx val,
enum rtx_code code, bool after,
bool doubleword)
void
ix86_expand_atomic_fetch_op_loop (rtx target, rtx mem, rtx val,
enum rtx_code code, bool after,
bool doubleword)
{
rtx old_reg, new_reg, old_mem, success;
machine_mode mode = GET_MODE (target);
@@ -23286,10 +23319,11 @@ void ix86_expand_atomic_fetch_op_loop (rtx target, rtx mem, rtx val,
it will be relaxed to an atomic load + compare, and skip
cmpxchg instruction if mem != exp_input. */
void ix86_expand_cmpxchg_loop (rtx *ptarget_bool, rtx target_val,
rtx mem, rtx exp_input, rtx new_input,
rtx mem_model, bool doubleword,
rtx_code_label *loop_label)
void
ix86_expand_cmpxchg_loop (rtx *ptarget_bool, rtx target_val,
rtx mem, rtx exp_input, rtx new_input,
rtx mem_model, bool doubleword,
rtx_code_label *loop_label)
{
rtx_code_label *cmp_label = NULL;
rtx_code_label *done_label = NULL;
@@ -23388,6 +23422,7 @@ void ix86_expand_cmpxchg_loop (rtx *ptarget_bool, rtx target_val,
/* If mem is not expected, pause and loop back. */
emit_label (cmp_label);
emit_move_insn (target_val, new_mem);
emit_insn (gen_pause ());
emit_jump_insn (gen_jump (loop_label));
emit_barrier ();

View File

@@ -4891,6 +4891,7 @@ ix86_gimplify_va_arg (tree valist, tree type, gimple_seq *pre_p,
{
int i, prev_size = 0;
tree temp = create_tmp_var (type, "va_arg_tmp");
TREE_ADDRESSABLE (temp) = 1;
/* addr = &temp; */
t = build1 (ADDR_EXPR, build_pointer_type (type), temp);
@@ -6524,7 +6525,8 @@ ix86_initial_elimination_offset (int from, int to)
}
/* Emits a warning for unsupported msabi to sysv pro/epilogues. */
void warn_once_call_ms2sysv_xlogues (const char *feature)
void
warn_once_call_ms2sysv_xlogues (const char *feature)
{
static bool warned_once = false;
if (!warned_once)
@@ -18806,7 +18808,8 @@ ix86_veclibabi_svml (combined_fn fn, tree type_out, tree type_in)
return NULL_TREE;
}
tree fndecl = mathfn_built_in (TREE_TYPE (type_in), fn);
tree fndecl = mathfn_built_in (el_mode == DFmode
? double_type_node : float_type_node, fn);
bname = IDENTIFIER_POINTER (DECL_NAME (fndecl));
if (DECL_FUNCTION_CODE (fndecl) == BUILT_IN_LOGF)
@@ -18898,7 +18901,8 @@ ix86_veclibabi_acml (combined_fn fn, tree type_out, tree type_in)
return NULL_TREE;
}
tree fndecl = mathfn_built_in (TREE_TYPE (type_in), fn);
tree fndecl = mathfn_built_in (el_mode == DFmode
? double_type_node : float_type_node, fn);
bname = IDENTIFIER_POINTER (DECL_NAME (fndecl));
sprintf (name + 7, "%s", bname+10);

View File

@@ -810,17 +810,11 @@ _mm_cmpgt_epi64 (__m128i __X, __m128i __Y)
#include <popcntintrin.h>
#ifndef __SSE4_1__
#ifndef __CRC32__
#pragma GCC push_options
#pragma GCC target("sse4.1")
#define __DISABLE_SSE4_1__
#endif /* __SSE4_1__ */
#ifndef __SSE4_2__
#pragma GCC push_options
#pragma GCC target("sse4.2")
#define __DISABLE_SSE4_2__
#endif /* __SSE4_1__ */
#pragma GCC target("crc32")
#define __DISABLE_CRC32__
#endif /* __CRC32__ */
/* Accumulate CRC32 (polynomial 0x11EDC6F41) value. */
extern __inline unsigned int __attribute__((__gnu_inline__, __always_inline__, __artificial__))
@@ -849,14 +843,9 @@ _mm_crc32_u64 (unsigned long long __C, unsigned long long __V)
}
#endif
#ifdef __DISABLE_SSE4_2__
#undef __DISABLE_SSE4_2__
#ifdef __DISABLE_CRC32__
#undef __DISABLE_CRC32__
#pragma GCC pop_options
#endif /* __DISABLE_SSE4_2__ */
#ifdef __DISABLE_SSE4_1__
#undef __DISABLE_SSE4_1__
#pragma GCC pop_options
#endif /* __DISABLE_SSE4_1__ */
#endif /* __DISABLE_CRC32__ */
#endif /* _SMMINTRIN_H_INCLUDED */

View File

@@ -327,9 +327,7 @@
;; 128-, 256- and 512-bit float vector modes for bitwise operations
(define_mode_iterator VFB
[(V32HF "TARGET_AVX512FP16")
(V16HF "TARGET_AVX512FP16")
(V8HF "TARGET_AVX512FP16")
[(V32HF "TARGET_AVX512F") (V16HF "TARGET_AVX") (V8HF "TARGET_SSE2")
(V16SF "TARGET_AVX512F") (V8SF "TARGET_AVX") V4SF
(V8DF "TARGET_AVX512F") (V4DF "TARGET_AVX") (V2DF "TARGET_SSE2")])
@@ -340,8 +338,7 @@
;; 128- and 256-bit float vector modes for bitwise operations
(define_mode_iterator VFB_128_256
[(V16HF "TARGET_AVX512FP16")
(V8HF "TARGET_AVX512FP16")
[(V16HF "TARGET_AVX") (V8HF "TARGET_SSE2")
(V8SF "TARGET_AVX") V4SF
(V4DF "TARGET_AVX") (V2DF "TARGET_SSE2")])
@@ -399,7 +396,7 @@
;; All 512bit vector float modes for bitwise operations
(define_mode_iterator VFB_512
[(V32HF "TARGET_AVX512FP16") V16SF V8DF])
[V32HF V16SF V8DF])
(define_mode_iterator VI48_AVX512VL
[V16SI (V8SI "TARGET_AVX512VL") (V4SI "TARGET_AVX512VL")
@@ -4581,7 +4578,8 @@
(not:VFB_128_256
(match_operand:VFB_128_256 1 "register_operand" "0,x,v,v"))
(match_operand:VFB_128_256 2 "vector_operand" "xBm,xm,vm,vm")))]
"TARGET_SSE && <mask_avx512vl_condition>"
"TARGET_SSE && <mask_avx512vl_condition>
&& (!<mask_applied> || <ssescalarmode>mode != HFmode)"
{
char buf[128];
const char *ops;
@@ -4648,7 +4646,7 @@
(not:VFB_512
(match_operand:VFB_512 1 "register_operand" "v"))
(match_operand:VFB_512 2 "nonimmediate_operand" "vm")))]
"TARGET_AVX512F"
"TARGET_AVX512F && (!<mask_applied> || <ssescalarmode>mode != HFmode)"
{
char buf[128];
const char *ops;
@@ -4683,7 +4681,8 @@
(any_logic:VFB_128_256
(match_operand:VFB_128_256 1 "vector_operand")
(match_operand:VFB_128_256 2 "vector_operand")))]
"TARGET_SSE && <mask_avx512vl_condition>"
"TARGET_SSE && <mask_avx512vl_condition>
&& (!<mask_applied> || <ssescalarmode>mode != HFmode)"
"ix86_fixup_binary_operands_no_copy (<CODE>, <MODE>mode, operands);")
(define_expand "<code><mode>3<mask_name>"
@@ -4691,7 +4690,7 @@
(any_logic:VFB_512
(match_operand:VFB_512 1 "nonimmediate_operand")
(match_operand:VFB_512 2 "nonimmediate_operand")))]
"TARGET_AVX512F"
"TARGET_AVX512F && (!<mask_applied> || <ssescalarmode>mode != HFmode)"
"ix86_fixup_binary_operands_no_copy (<CODE>, <MODE>mode, operands);")
(define_insn "*<code><mode>3<mask_name>"
@@ -4700,6 +4699,7 @@
(match_operand:VFB_128_256 1 "vector_operand" "%0,x,v,v")
(match_operand:VFB_128_256 2 "vector_operand" "xBm,xm,vm,vm")))]
"TARGET_SSE && <mask_avx512vl_condition>
&& (!<mask_applied> || <ssescalarmode>mode != HFmode)
&& !(MEM_P (operands[1]) && MEM_P (operands[2]))"
{
char buf[128];
@@ -4766,7 +4766,8 @@
(any_logic:VFB_512
(match_operand:VFB_512 1 "nonimmediate_operand" "%v")
(match_operand:VFB_512 2 "nonimmediate_operand" "vm")))]
"TARGET_AVX512F && !(MEM_P (operands[1]) && MEM_P (operands[2]))"
"TARGET_AVX512F && !(MEM_P (operands[1]) && MEM_P (operands[2]))
&& (!<mask_applied> || <ssescalarmode>mode != HFmode)"
{
char buf[128];
const char *ops;
@@ -16741,17 +16742,6 @@
(match_operand:<avx512fmaskmode> 4 "register_operand")))]
"TARGET_AVX512F")
(define_expand "<sse2_avx2>_andnot<mode>3_mask"
[(set (match_operand:VI12_AVX512VL 0 "register_operand")
(vec_merge:VI12_AVX512VL
(and:VI12_AVX512VL
(not:VI12_AVX512VL
(match_operand:VI12_AVX512VL 1 "register_operand"))
(match_operand:VI12_AVX512VL 2 "nonimmediate_operand"))
(match_operand:VI12_AVX512VL 3 "nonimm_or_0_operand")
(match_operand:<avx512fmaskmode> 4 "register_operand")))]
"TARGET_AVX512BW")
(define_insn "*andnot<mode>3"
[(set (match_operand:VI 0 "register_operand" "=x,x,v")
(and:VI

View File

@@ -329,6 +329,9 @@ loongarch_flatten_aggregate_field (const_tree type,
if (!TYPE_P (TREE_TYPE (f)))
return -1;
if (DECL_SIZE (f) && integer_zerop (DECL_SIZE (f)))
continue;
HOST_WIDE_INT pos = offset + int_byte_position (f);
n = loongarch_flatten_aggregate_field (TREE_TYPE (f), fields, n,
pos);
@@ -473,13 +476,14 @@ loongarch_pass_aggregate_in_fpr_and_gpr_p (const_tree type,
static rtx
loongarch_pass_fpr_single (machine_mode type_mode, unsigned regno,
machine_mode value_mode)
machine_mode value_mode,
HOST_WIDE_INT offset)
{
rtx x = gen_rtx_REG (value_mode, regno);
if (type_mode != value_mode)
{
x = gen_rtx_EXPR_LIST (VOIDmode, x, const0_rtx);
x = gen_rtx_EXPR_LIST (VOIDmode, x, GEN_INT (offset));
x = gen_rtx_PARALLEL (type_mode, gen_rtvec (1, x));
}
return x;
@@ -539,7 +543,8 @@ loongarch_get_arg_info (struct loongarch_arg_info *info,
{
case 1:
return loongarch_pass_fpr_single (mode, fregno,
TYPE_MODE (fields[0].type));
TYPE_MODE (fields[0].type),
fields[0].offset);
case 2:
return loongarch_pass_fpr_pair (mode, fregno,

View File

@@ -713,6 +713,12 @@
;;
;; Float division and modulus.
(define_expand "div<mode>3"
[(set (match_operand:ANYF 0 "register_operand")
(div:ANYF (match_operand:ANYF 1 "reg_or_1_operand")
(match_operand:ANYF 2 "register_operand")))]
"")
(define_insn "*div<mode>3"
[(set (match_operand:ANYF 0 "register_operand" "=f")
(div:ANYF (match_operand:ANYF 1 "register_operand" "f")
@@ -2047,13 +2053,17 @@
(define_insn "loongarch_ibar"
[(unspec_volatile:SI
[(match_operand 0 "const_uimm15_operand")] UNSPECV_IBAR)]
[(match_operand 0 "const_uimm15_operand")]
UNSPECV_IBAR)
(clobber (mem:BLK (scratch)))]
""
"ibar\t%0")
(define_insn "loongarch_dbar"
[(unspec_volatile:SI
[(match_operand 0 "const_uimm15_operand")] UNSPECV_DBAR)]
[(match_operand 0 "const_uimm15_operand")]
UNSPECV_DBAR)
(clobber (mem:BLK (scratch)))]
""
"dbar\t%0")
@@ -2072,13 +2082,17 @@
(define_insn "loongarch_syscall"
[(unspec_volatile:SI
[(match_operand 0 "const_uimm15_operand")] UNSPECV_SYSCALL)]
[(match_operand 0 "const_uimm15_operand")]
UNSPECV_SYSCALL)
(clobber (mem:BLK (scratch)))]
""
"syscall\t%0")
(define_insn "loongarch_break"
[(unspec_volatile:SI
[(match_operand 0 "const_uimm15_operand")] UNSPECV_BREAK)]
[(match_operand 0 "const_uimm15_operand")]
UNSPECV_BREAK)
(clobber (mem:BLK (scratch)))]
""
"break\t%0")
@@ -2103,7 +2117,8 @@
(define_insn "loongarch_csrrd_<d>"
[(set (match_operand:GPR 0 "register_operand" "=r")
(unspec_volatile:GPR [(match_operand 1 "const_uimm14_operand")]
UNSPECV_CSRRD))]
UNSPECV_CSRRD))
(clobber (mem:BLK (scratch)))]
""
"csrrd\t%0,%1"
[(set_attr "type" "load")
@@ -2114,7 +2129,8 @@
(unspec_volatile:GPR
[(match_operand:GPR 1 "register_operand" "0")
(match_operand 2 "const_uimm14_operand")]
UNSPECV_CSRWR))]
UNSPECV_CSRWR))
(clobber (mem:BLK (scratch)))]
""
"csrwr\t%0,%2"
[(set_attr "type" "store")
@@ -2126,7 +2142,8 @@
[(match_operand:GPR 1 "register_operand" "0")
(match_operand:GPR 2 "register_operand" "q")
(match_operand 3 "const_uimm14_operand")]
UNSPECV_CSRXCHG))]
UNSPECV_CSRXCHG))
(clobber (mem:BLK (scratch)))]
""
"csrxchg\t%0,%2,%3"
[(set_attr "type" "load")
@@ -2135,7 +2152,8 @@
(define_insn "loongarch_iocsrrd_<size>"
[(set (match_operand:QHWD 0 "register_operand" "=r")
(unspec_volatile:QHWD [(match_operand:SI 1 "register_operand" "r")]
UNSPECV_IOCSRRD))]
UNSPECV_IOCSRRD))
(clobber (mem:BLK (scratch)))]
""
"iocsrrd.<size>\t%0,%1"
[(set_attr "type" "load")
@@ -2144,7 +2162,8 @@
(define_insn "loongarch_iocsrwr_<size>"
[(unspec_volatile:QHWD [(match_operand:QHWD 0 "register_operand" "r")
(match_operand:SI 1 "register_operand" "r")]
UNSPECV_IOCSRWR)]
UNSPECV_IOCSRWR)
(clobber (mem:BLK (scratch)))]
""
"iocsrwr.<size>\t%0,%1"
[(set_attr "type" "load")
@@ -2154,7 +2173,8 @@
[(unspec_volatile:X [(match_operand 0 "const_uimm5_operand")
(match_operand:X 1 "register_operand" "r")
(match_operand 2 "const_imm12_operand")]
UNSPECV_CACOP)]
UNSPECV_CACOP)
(clobber (mem:BLK (scratch)))]
""
"cacop\t%0,%1,%2"
[(set_attr "type" "load")
@@ -2164,7 +2184,8 @@
[(unspec_volatile:X [(match_operand:X 0 "register_operand" "r")
(match_operand:X 1 "register_operand" "r")
(match_operand 2 "const_uimm5_operand")]
UNSPECV_LDDIR)]
UNSPECV_LDDIR)
(clobber (mem:BLK (scratch)))]
""
"lddir\t%0,%1,%2"
[(set_attr "type" "load")
@@ -2173,7 +2194,8 @@
(define_insn "loongarch_ldpte_<d>"
[(unspec_volatile:X [(match_operand:X 0 "register_operand" "r")
(match_operand 1 "const_uimm5_operand")]
UNSPECV_LDPTE)]
UNSPECV_LDPTE)
(clobber (mem:BLK (scratch)))]
""
"ldpte\t%0,%1"
[(set_attr "type" "load")


@@ -29,25 +29,6 @@
#define STARTFILE_SPEC "%{mmainkernel:crt0.o}"
/* Newer versions of CUDA no longer support sm_30, and nvptx-tools as
currently doesn't handle that gracefully when verifying
( https://github.com/MentorEmbedded/nvptx-tools/issues/30 ). Work around
this by verifying with sm_35 when having misa=sm_30 (either implicitly
or explicitly). */
#define ASM_SPEC \
"%{" \
/* Explict misa=sm_30. */ \
"misa=sm_30:-m sm_35" \
/* Separator. */ \
"; " \
/* Catch-all. */ \
"misa=*:-m %*" \
/* Separator. */ \
"; " \
/* Implicit misa=sm_30. */ \
":-m sm_35" \
"}"
#define TARGET_CPU_CPP_BUILTINS() nvptx_cpu_cpp_builtins ()
/* Avoid the default in ../../gcc.cc, which adds "-pthread", which is not


@@ -52,7 +52,6 @@ mgomp
Target Mask(GOMP)
Generate code for OpenMP offloading: enables -msoft-stack and -muniform-simt.
; Default needs to be in sync with default in ASM_SPEC in nvptx.h.
misa=
Target RejectNegative ToLower Joined Enum(ptx_isa) Var(ptx_isa_option) Init(PTX_ISA_SM30)
Specify the PTX ISA target architecture to use.


@@ -20,14 +20,18 @@
# along with GCC; see the file COPYING3. If not see
# <http://www.gnu.org/licenses/>.
# TODO: Extract riscv_subset_t from riscv-common.cc and make it can be compiled
# standalone to replace this script, that also prevents us implementing
# that twice and keep sync again and again.
from __future__ import print_function
import sys
import argparse
import collections
import itertools
from functools import reduce
SUPPORTED_ISA_SPEC = ["2.2", "20190608", "20191213"]
CANONICAL_ORDER = "imafdgqlcbjktpvn"
LONG_EXT_PREFIXES = ['z', 's', 'h', 'x']
@@ -35,29 +39,42 @@ LONG_EXT_PREFIXES = ['z', 's', 'h', 'x']
# IMPLIED_EXT(ext) -> implied extension list.
#
IMPLIED_EXT = {
"d" : ["f"],
"zk" : ["zkn"],
"zk" : ["zkr"],
"zk" : ["zkt"],
"zkn" : ["zbkb"],
"zkn" : ["zbkc"],
"zkn" : ["zbkx"],
"zkn" : ["zkne"],
"zkn" : ["zknd"],
"zkn" : ["zknh"],
"zks" : ["zbkb"],
"zks" : ["zbkc"],
"zks" : ["zbkx"],
"zks" : ["zksed"],
"zks" : ["zksh"],
"d" : ["f", "zicsr"],
"f" : ["zicsr"],
"zk" : ["zkn", "zkr", "zkt"],
"zkn" : ["zbkb", "zbkc", "zbkx", "zkne", "zknd", "zknh"],
"zks" : ["zbkb", "zbkc", "zbkx", "zksed", "zksh"],
"v" : ["zvl128b", "zve64d"],
"zve32x" : ["zvl32b"],
"zve64x" : ["zve32x", "zvl64b"],
"zve32f" : ["f", "zve32x"],
"zve64f" : ["f", "zve32f", "zve64x"],
"zve64d" : ["d", "zve64f"],
"zvl64b" : ["zvl32b"],
"zvl128b" : ["zvl64b"],
"zvl256b" : ["zvl128b"],
"zvl512b" : ["zvl256b"],
"zvl1024b" : ["zvl512b"],
"zvl2048b" : ["zvl1024b"],
"zvl4096b" : ["zvl2048b"],
"zvl8192b" : ["zvl4096b"],
"zvl16384b" : ["zvl8192b"],
"zvl32768b" : ["zvl16384b"],
"zvl65536b" : ["zvl32768b"],
}
def arch_canonicalize(arch):
def arch_canonicalize(arch, isa_spec):
# TODO: Support extension version.
is_isa_spec_2p2 = isa_spec == '2.2'
new_arch = ""
extra_long_ext = []
if arch[:5] in ['rv32e', 'rv32i', 'rv32g', 'rv64i', 'rv64g']:
# TODO: We should expand g to imad_zifencei once we support newer spec.
new_arch = arch[:5].replace("g", "imafd")
if arch[:5] in ['rv32g', 'rv64g']:
if not is_isa_spec_2p2:
extra_long_ext = ['zicsr', 'zifencei']
else:
raise Exception("Unexpected arch: `%s`" % arch[:5])
@@ -74,15 +91,24 @@ def arch_canonicalize(arch):
long_exts = []
std_exts = list(arch[5:])
long_exts += extra_long_ext
#
# Handle implied extensions.
#
for ext in std_exts + long_exts:
if ext in IMPLIED_EXT:
implied_exts = IMPLIED_EXT[ext]
for implied_ext in implied_exts:
if implied_ext not in std_exts + long_exts:
long_exts.append(implied_ext)
any_change = True
while any_change:
any_change = False
for ext in std_exts + long_exts:
if ext in IMPLIED_EXT:
implied_exts = IMPLIED_EXT[ext]
for implied_ext in implied_exts:
if implied_ext == 'zicsr' and is_isa_spec_2p2:
continue
if implied_ext not in std_exts + long_exts:
long_exts.append(implied_ext)
any_change = True
# Single letter extension might appear in the long_exts list,
# becasue we just append extensions list to the arch string.
@@ -99,6 +125,9 @@ def arch_canonicalize(arch):
return (exts.startswith("x"), exts.startswith("zxm"),
LONG_EXT_PREFIXES.index(exts[0]), canonical_sort, exts[1:])
# Removing duplicates.
long_exts = list(set(long_exts))
# Multi-letter extension must be in lexicographic order.
long_exts = list(sorted(filter(lambda x:len(x) != 1, long_exts),
key=longext_sort))
@@ -118,11 +147,20 @@ def arch_canonicalize(arch):
# Concat rest of the multi-char extensions.
if long_exts:
new_arch += "_" + "_".join(long_exts)
return new_arch
if len(sys.argv) < 2:
print ("Usage: %s <arch_str> [<arch_str>*]" % sys.argv)
sys.exit(1)
for arg in sys.argv[1:]:
print (arch_canonicalize(arg))
parser = argparse.ArgumentParser()
parser.add_argument('-misa-spec', type=str,
default='20191213',
choices=SUPPORTED_ISA_SPEC)
parser.add_argument('arch_strs', nargs=argparse.REMAINDER)
args = parser.parse_args()
for arch in args.arch_strs:
print (arch_canonicalize(arch, args.misa_spec))
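The arch-canonicalize change above replaces several single-shot `IMPLIED_EXT` lookups with a fixed-point loop, so transitively implied extensions (e.g. `zk` → `zkn` → `zkne`) are all collected, and `zicsr` can be suppressed under ISA spec 2.2. A minimal sketch of that closure, using an illustrative subset of the real `IMPLIED_EXT` table:

```python
# Illustrative subset of the IMPLIED_EXT table from the script above.
IMPLIED_EXT = {
    "d": ["f", "zicsr"],
    "f": ["zicsr"],
    "zk": ["zkn", "zkr", "zkt"],
    "zkn": ["zbkb", "zbkc", "zbkx", "zkne", "zknd", "zknh"],
    "v": ["zvl128b", "zve64d"],
}

def expand_implied(exts, skip=()):
    """Return the closure of `exts` under IMPLIED_EXT, skipping `skip`."""
    exts = list(exts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for ext in list(exts):
            for implied in IMPLIED_EXT.get(ext, []):
                if implied in skip:     # e.g. zicsr under ISA spec 2.2
                    continue
                if implied not in exts:
                    exts.append(implied)
                    changed = True
    return exts
```

The loop terminates because each pass either adds a new extension or stops; this mirrors the `any_change` flag in the hunk above.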


@@ -46,16 +46,18 @@ import argparse
# TODO: Add test for this script.
#
SUPPORTED_ISA_SPEC = ["2.2", "20190608", "20191213"]
arches = collections.OrderedDict()
abis = collections.OrderedDict()
required = []
reuse = []
def arch_canonicalize(arch):
def arch_canonicalize(arch, isa_spec):
this_file = os.path.abspath(os.path.join( __file__))
arch_can_script = \
os.path.join(os.path.dirname(this_file), "arch-canonicalize")
proc = subprocess.Popen([sys.executable, arch_can_script, arch],
proc = subprocess.Popen([sys.executable, arch_can_script,
'-misa-spec=%s' % isa_spec, arch],
stdout=subprocess.PIPE)
out, err = proc.communicate()
return out.decode().strip()
@@ -133,6 +135,9 @@ options = filter(lambda x:x.startswith("--"), sys.argv[1:])
parser = argparse.ArgumentParser()
parser.add_argument("--cmodel", type=str)
parser.add_argument('-misa-spec', type=str,
default='20191213',
choices=SUPPORTED_ISA_SPEC)
parser.add_argument("cfgs", type=str, nargs='*')
args = parser.parse_args()
@@ -158,13 +163,14 @@ for cmodel in cmodels:
if cmodel == "compact" and arch.startswith("rv32"):
continue
arch = arch_canonicalize (arch)
arch = arch_canonicalize (arch, args.misa_spec)
arches[arch] = 1
abis[abi] = 1
extra = list(filter(None, extra.split(',')))
ext_combs = expand_combination(ext)
alts = sum([[x] + [x + y for y in ext_combs] for x in [arch] + extra], [])
alts = list(map(arch_canonicalize, alts))
alts = filter(lambda x: len(x) != 0, alts)
alts = list(map(lambda a : arch_canonicalize(a, args.misa_spec), alts))
# Drop duplicated entry.
alts = unique(alts)
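The `alts` construction in the hunk above (each base arch plus every extension combination, then order-preserving de-duplication) can be sketched as follows; the helper names mirror the script, but the arch strings are purely illustrative:

```python
def unique(seq):
    # Order-preserving de-duplication, so the generated multilib
    # list stays stable across runs.
    seen = set()
    return [x for x in seq if not (x in seen or seen.add(x))]

def build_alts(arch, extra, ext_combs):
    # Every base arch, plus each base arch with each extension
    # combination appended; duplicates dropped afterwards.
    return unique(sum([[x] + [x + y for y in ext_combs]
                       for x in [arch] + extra], []))
```

In the real script each alternative is then passed through `arch_canonicalize` (with `-misa-spec`) before the final de-duplication.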


@@ -1190,9 +1190,6 @@
const vd __builtin_altivec_neg_v2df (vd);
NEG_V2DF negv2df2 {}
const vsll __builtin_altivec_neg_v2di (vsll);
NEG_V2DI negv2di2 {}
void __builtin_altivec_stvx_v2df (vd, signed long, void *);
STVX_V2DF altivec_stvx_v2df {stvec}
@@ -2136,6 +2133,9 @@
const vus __builtin_altivec_nand_v8hi_uns (vus, vus);
NAND_V8HI_UNS nandv8hi3 {}
const vsll __builtin_altivec_neg_v2di (vsll);
NEG_V2DI negv2di2 {}
const vsc __builtin_altivec_orc_v16qi (vsc, vsc);
ORC_V16QI orcv16qi3 {}


@@ -25678,11 +25678,20 @@ rs6000_sibcall_aix (rtx value, rtx func_desc, rtx tlsarg, rtx cookie)
rtx r12 = NULL_RTX;
rtx func_addr = func_desc;
gcc_assert (INTVAL (cookie) == 0);
if (global_tlsarg)
tlsarg = global_tlsarg;
/* Handle longcall attributes. */
if (INTVAL (cookie) & CALL_LONG && SYMBOL_REF_P (func_desc))
{
/* PCREL can do a sibling call to a longcall function
because we don't need to restore the TOC register. */
gcc_assert (rs6000_pcrel_p ());
func_desc = rs6000_longcall_ref (func_desc, tlsarg);
}
else
gcc_assert (INTVAL (cookie) == 0);
/* For ELFv2, r12 and CTR need to hold the function address
for an indirect call. */
if (GET_CODE (func_desc) != SYMBOL_REF && DEFAULT_ABI == ABI_ELFv2)


@@ -835,8 +835,8 @@
;; complex forms. Basic data transfer is done later.
(define_insn "zero_extendqi<mode>2"
[(set (match_operand:EXTQI 0 "gpc_reg_operand" "=r,r,^wa,^v")
(zero_extend:EXTQI (match_operand:QI 1 "reg_or_mem_operand" "m,r,Z,v")))]
[(set (match_operand:EXTQI 0 "gpc_reg_operand" "=r,r,wa,^v")
(zero_extend:EXTQI (match_operand:QI 1 "reg_or_mem_operand" "m,r,?Z,v")))]
""
"@
lbz%U1%X1 %0,%1
@@ -889,8 +889,8 @@
(define_insn "zero_extendhi<mode>2"
[(set (match_operand:EXTHI 0 "gpc_reg_operand" "=r,r,^wa,^v")
(zero_extend:EXTHI (match_operand:HI 1 "reg_or_mem_operand" "m,r,Z,v")))]
[(set (match_operand:EXTHI 0 "gpc_reg_operand" "=r,r,wa,^v")
(zero_extend:EXTHI (match_operand:HI 1 "reg_or_mem_operand" "m,r,?Z,v")))]
""
"@
lhz%U1%X1 %0,%1
@@ -944,7 +944,7 @@
(define_insn "zero_extendsi<mode>2"
[(set (match_operand:EXTSI 0 "gpc_reg_operand" "=r,r,d,wa,wa,r,wa")
(zero_extend:EXTSI (match_operand:SI 1 "reg_or_mem_operand" "m,r,Z,Z,r,wa,wa")))]
(zero_extend:EXTSI (match_operand:SI 1 "reg_or_mem_operand" "m,r,?Z,?Z,r,wa,wa")))]
""
"@
lwz%U1%X1 %0,%1
@@ -7496,7 +7496,7 @@
[(set (match_operand:SI 0 "nonimmediate_operand"
"=r, r,
r, d, v,
m, Z, Z,
m, ?Z, ?Z,
r, r, r, r,
wa, wa, wa, v,
wa, v, v,
@@ -7504,7 +7504,7 @@
r, *h, *h")
(match_operand:SI 1 "input_operand"
"r, U,
m, Z, Z,
m, ?Z, ?Z,
r, d, v,
I, L, eI, n,
wa, O, wM, wB,
@@ -7785,11 +7785,11 @@
;; MTVSRWZ MF%1 MT%1 NOP
(define_insn "*mov<mode>_internal"
[(set (match_operand:QHI 0 "nonimmediate_operand"
"=r, r, wa, m, Z, r,
"=r, r, wa, m, ?Z, r,
wa, wa, wa, v, ?v, r,
wa, r, *c*l, *h")
(match_operand:QHI 1 "input_operand"
"r, m, Z, r, wa, i,
"r, m, ?Z, r, wa, i,
wa, O, wM, wB, wS, wa,
r, *h, r, 0"))]
"gpc_reg_operand (operands[0], <MODE>mode)
@@ -7973,10 +7973,10 @@
;; FMR MR MT%0 MF%1 NOP
(define_insn "movsd_hardfloat"
[(set (match_operand:SD 0 "nonimmediate_operand"
"=!r, d, m, Z, ?d, ?r,
"=!r, d, m, ?Z, ?d, ?r,
f, !r, *c*l, !r, *h")
(match_operand:SD 1 "input_operand"
"m, Z, r, wx, r, d,
"m, ?Z, r, wx, r, d,
f, r, r, *h, 0"))]
"(register_operand (operands[0], SDmode)
|| register_operand (operands[1], SDmode))
@@ -14580,10 +14580,10 @@
[(set_attr "type" "fp,fpstore,mtvsr,mfvsr,store")])
(define_insn_and_split "unpack<mode>_nodm"
[(set (match_operand:<FP128_64> 0 "nonimmediate_operand" "=d,m")
[(set (match_operand:<FP128_64> 0 "nonimmediate_operand" "=d,m,m")
(unspec:<FP128_64>
[(match_operand:FMOVE128 1 "register_operand" "d,d")
(match_operand:QI 2 "const_0_to_1_operand" "i,i")]
[(match_operand:FMOVE128 1 "register_operand" "d,d,r")
(match_operand:QI 2 "const_0_to_1_operand" "i,i,i")]
UNSPEC_UNPACK_128BIT))]
"(!TARGET_POWERPC64 || !TARGET_DIRECT_MOVE) && FLOAT128_2REG_P (<MODE>mode)"
"#"
@@ -14600,15 +14600,28 @@
operands[3] = gen_rtx_REG (<FP128_64>mode, fp_regno);
}
[(set_attr "type" "fp,fpstore")])
[(set_attr "type" "fp,fpstore,store")])
(define_insn_and_split "pack<mode>"
(define_expand "pack<mode>"
[(use (match_operand:FMOVE128 0 "register_operand"))
(use (match_operand:<FP128_64> 1 "register_operand"))
(use (match_operand:<FP128_64> 2 "register_operand"))]
"FLOAT128_2REG_P (<MODE>mode)"
{
if (TARGET_HARD_FLOAT)
emit_insn (gen_pack<mode>_hard (operands[0], operands[1], operands[2]));
else
emit_insn (gen_pack<mode>_soft (operands[0], operands[1], operands[2]));
DONE;
})
(define_insn_and_split "pack<mode>_hard"
[(set (match_operand:FMOVE128 0 "register_operand" "=&d")
(unspec:FMOVE128
[(match_operand:<FP128_64> 1 "register_operand" "d")
(match_operand:<FP128_64> 2 "register_operand" "d")]
UNSPEC_PACK_128BIT))]
"FLOAT128_2REG_P (<MODE>mode)"
"FLOAT128_2REG_P (<MODE>mode) && TARGET_HARD_FLOAT"
"#"
"&& reload_completed"
[(set (match_dup 3) (match_dup 1))
@@ -14626,6 +14639,34 @@
[(set_attr "type" "fp")
(set_attr "length" "8")])
(define_insn_and_split "pack<mode>_soft"
[(set (match_operand:FMOVE128 0 "register_operand" "=&r")
(unspec:FMOVE128
[(match_operand:<FP128_64> 1 "register_operand" "r")
(match_operand:<FP128_64> 2 "register_operand" "r")]
UNSPEC_PACK_128BIT))]
"FLOAT128_2REG_P (<MODE>mode) && TARGET_SOFT_FLOAT"
"#"
"&& reload_completed"
[(set (match_dup 3) (match_dup 1))
(set (match_dup 4) (match_dup 2))]
{
unsigned dest_hi = REGNO (operands[0]);
unsigned dest_lo = dest_hi + (TARGET_POWERPC64 ? 1 : 2);
gcc_assert (!IN_RANGE (REGNO (operands[1]), dest_hi, dest_lo));
gcc_assert (!IN_RANGE (REGNO (operands[2]), dest_hi, dest_lo));
operands[3] = gen_rtx_REG (<FP128_64>mode, dest_hi);
operands[4] = gen_rtx_REG (<FP128_64>mode, dest_lo);
}
[(set_attr "type" "integer")
(set (attr "length")
(if_then_else
(match_test "TARGET_POWERPC64")
(const_string "8")
(const_string "16")))])
(define_insn "unpack<mode>"
[(set (match_operand:DI 0 "register_operand" "=wa,wa")
(unspec:DI [(match_operand:FMOVE128_VSX 1 "register_operand" "0,wa")

gcc/config/s390/3931.md (new file, 2562 lines): diff suppressed because it is too large.


@@ -123,8 +123,12 @@ s390_host_detect_local_cpu (int argc, const char **argv)
case 0x8562:
cpu = "z15";
break;
case 0x3931:
case 0x3932:
cpu = "z16";
break;
default:
cpu = "arch14";
cpu = "z16";
break;
}
}
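The machine-type dispatch above amounts to a table lookup with a fallback: known machine types map to a CPU name, and unknown (presumably newer) types now fall back to `z16` instead of `arch14`. A rough model, including only the machine types visible in this hunk:

```python
# Illustrative model of the -march=native machine-type dispatch above.
# Only the machine types visible in this hunk are included.
CPU_BY_MACHINE_TYPE = {
    0x8562: "z15",
    0x3931: "z16",
    0x3932: "z16",
}

def detect_cpu(machine_type):
    # Anything unknown is assumed newer than the newest supported
    # level and mapped to the most recent name ("z16" after this change).
    return CPU_BY_MACHINE_TYPE.get(machine_type, "z16")
```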


@@ -38,7 +38,7 @@ enum processor_type
PROCESSOR_2964_Z13,
PROCESSOR_3906_Z14,
PROCESSOR_8561_Z15,
PROCESSOR_ARCH14,
PROCESSOR_3931_Z16,
PROCESSOR_NATIVE,
PROCESSOR_max
};


@@ -49,7 +49,6 @@ extern void s390_function_profiler (FILE *, int);
extern void s390_set_has_landing_pad_p (bool);
extern bool s390_hard_regno_rename_ok (unsigned int, unsigned int);
extern int s390_class_max_nregs (enum reg_class, machine_mode);
extern bool s390_function_arg_vector (machine_mode, const_tree);
extern bool s390_return_addr_from_memory(void);
extern bool s390_fma_allowed_p (machine_mode);
#if S390_USE_TARGET_ATTRIBUTE


@@ -337,7 +337,7 @@ const struct s390_processor processor_table[] =
{ "z13", "z13", PROCESSOR_2964_Z13, &zEC12_cost, 11 },
{ "z14", "arch12", PROCESSOR_3906_Z14, &zEC12_cost, 12 },
{ "z15", "arch13", PROCESSOR_8561_Z15, &zEC12_cost, 13 },
{ "arch14", "arch14", PROCESSOR_ARCH14, &zEC12_cost, 14 },
{ "z16", "arch14", PROCESSOR_3931_Z16, &zEC12_cost, 14 },
{ "native", "", PROCESSOR_NATIVE, NULL, 0 }
};
@@ -853,12 +853,6 @@ s390_expand_builtin (tree exp, rtx target, rtx subtarget ATTRIBUTE_UNUSED,
error ("Builtin %qF requires z15 or higher", fndecl);
return const0_rtx;
}
if ((bflags & B_NNPA) && !TARGET_NNPA)
{
error ("Builtin %qF requires arch14 or higher.", fndecl);
return const0_rtx;
}
}
if (fcode >= S390_OVERLOADED_BUILTIN_VAR_OFFSET
&& fcode < S390_ALL_BUILTIN_MAX)
@@ -8525,7 +8519,7 @@ s390_issue_rate (void)
case PROCESSOR_2827_ZEC12:
case PROCESSOR_2964_Z13:
case PROCESSOR_3906_Z14:
case PROCESSOR_ARCH14:
case PROCESSOR_3931_Z16:
default:
return 1;
}
@@ -12148,29 +12142,26 @@ s390_function_arg_size (machine_mode mode, const_tree type)
gcc_unreachable ();
}
/* Return true if a function argument of type TYPE and mode MODE
is to be passed in a vector register, if available. */
/* Return true if a variable of TYPE should be passed as single value
with type CODE. If STRICT_SIZE_CHECK_P is true the sizes of the
record type and the field type must match.
bool
s390_function_arg_vector (machine_mode mode, const_tree type)
The ABI says that record types with a single member are treated
just like that member would be. This function is a helper to
detect such cases. The function also produces the proper
diagnostics for cases where the outcome might be different
depending on the GCC version. */
static bool
s390_single_field_struct_p (enum tree_code code, const_tree type,
bool strict_size_check_p)
{
if (!TARGET_VX_ABI)
return false;
if (s390_function_arg_size (mode, type) > 16)
return false;
/* No type info available for some library calls ... */
if (!type)
return VECTOR_MODE_P (mode);
/* The ABI says that record types with a single member are treated
just like that member would be. */
int empty_base_seen = 0;
bool zero_width_bf_skipped_p = false;
const_tree orig_type = type;
while (TREE_CODE (type) == RECORD_TYPE)
{
tree field, single = NULL_TREE;
tree field, single_type = NULL_TREE;
for (field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field))
{
@@ -12187,48 +12178,108 @@ s390_function_arg_vector (machine_mode mode, const_tree type)
continue;
}
if (single == NULL_TREE)
single = TREE_TYPE (field);
if (DECL_FIELD_CXX_ZERO_WIDTH_BIT_FIELD (field))
{
zero_width_bf_skipped_p = true;
continue;
}
if (single_type == NULL_TREE)
single_type = TREE_TYPE (field);
else
return false;
}
if (single == NULL_TREE)
if (single_type == NULL_TREE)
return false;
else
{
/* If the field declaration adds extra byte due to
e.g. padding this is not accepted as vector type. */
if (int_size_in_bytes (single) <= 0
|| int_size_in_bytes (single) != int_size_in_bytes (type))
return false;
type = single;
}
/* Reaching this point we have a struct with a single member and
zero or more zero-sized bit-fields which have been skipped in the
past. */
/* If ZERO_WIDTH_BF_SKIPPED_P then the struct will not be accepted. In case
we are not supposed to emit a warning exit early. */
if (zero_width_bf_skipped_p && !warn_psabi)
return false;
/* If the field declaration adds extra bytes due to padding this
is not accepted with STRICT_SIZE_CHECK_P. */
if (strict_size_check_p
&& (int_size_in_bytes (single_type) <= 0
|| int_size_in_bytes (single_type) != int_size_in_bytes (type)))
return false;
type = single_type;
}
if (!VECTOR_TYPE_P (type))
if (TREE_CODE (type) != code)
return false;
if (warn_psabi && empty_base_seen)
if (warn_psabi)
{
static unsigned last_reported_type_uid;
unsigned uid = TYPE_UID (TYPE_MAIN_VARIANT (orig_type));
if (uid != last_reported_type_uid)
if (empty_base_seen)
{
const char *url = CHANGES_ROOT_URL "gcc-10/changes.html#empty_base";
last_reported_type_uid = uid;
if (empty_base_seen & 1)
inform (input_location,
"parameter passing for argument of type %qT when C++17 "
"is enabled changed to match C++14 %{in GCC 10.1%}",
orig_type, url);
else
inform (input_location,
"parameter passing for argument of type %qT with "
"%<[[no_unique_address]]%> members changed "
"%{in GCC 10.1%}", orig_type, url);
static unsigned last_reported_type_uid_empty_base;
if (uid != last_reported_type_uid_empty_base)
{
last_reported_type_uid_empty_base = uid;
const char *url = CHANGES_ROOT_URL "gcc-10/changes.html#empty_base";
if (empty_base_seen & 1)
inform (input_location,
"parameter passing for argument of type %qT when C++17 "
"is enabled changed to match C++14 %{in GCC 10.1%}",
orig_type, url);
else
inform (input_location,
"parameter passing for argument of type %qT with "
"%<[[no_unique_address]]%> members changed "
"%{in GCC 10.1%}", orig_type, url);
}
}
/* For C++ older GCCs ignored zero width bitfields and therefore
passed structs more often as single values than GCC 12 does.
So diagnostics are only required in cases where we do NOT
accept the struct to be passed as single value. */
if (zero_width_bf_skipped_p)
{
static unsigned last_reported_type_uid_zero_width;
if (uid != last_reported_type_uid_zero_width)
{
last_reported_type_uid_zero_width = uid;
inform (input_location,
"parameter passing for argument of type %qT with "
"zero-width bit fields members changed in GCC 12",
orig_type);
}
}
}
return !zero_width_bf_skipped_p;
}
/* Return true if a function argument of type TYPE and mode MODE
is to be passed in a vector register, if available. */
static bool
s390_function_arg_vector (machine_mode mode, const_tree type)
{
if (!TARGET_VX_ABI)
return false;
if (s390_function_arg_size (mode, type) > 16)
return false;
/* No type info available for some library calls ... */
if (!type)
return VECTOR_MODE_P (mode);
if (!s390_single_field_struct_p (VECTOR_TYPE, type, true))
return false;
return true;
}
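The new `s390_single_field_struct_p` factors out the ABI rule that a record whose only member is another record (or a leaf type) is passed as that member would be, applied recursively. A rough, simplified model of the peeling loop, ignoring the zero-width bit-field and empty-base diagnostics and using an illustrative type encoding:

```python
def peel_single_field(type_desc):
    # type_desc is either a leaf like "vector"/"float", or a tuple
    # ("record", [field_types...]) -- an illustrative encoding, not
    # GCC's tree representation.
    while isinstance(type_desc, tuple) and type_desc[0] == "record":
        fields = type_desc[1]
        if len(fields) != 1:
            return None          # more than one member: not peelable
        type_desc = fields[0]    # treat the record as its sole member
    return type_desc
```

The caller then checks whether the peeled type has the expected `TREE_CODE` (`VECTOR_TYPE` or `REAL_TYPE` in the two call sites above).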
@@ -12249,64 +12300,9 @@ s390_function_arg_float (machine_mode mode, const_tree type)
if (!type)
return mode == SFmode || mode == DFmode || mode == SDmode || mode == DDmode;
/* The ABI says that record types with a single member are treated
just like that member would be. */
int empty_base_seen = 0;
const_tree orig_type = type;
while (TREE_CODE (type) == RECORD_TYPE)
{
tree field, single = NULL_TREE;
for (field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field))
{
if (TREE_CODE (field) != FIELD_DECL)
continue;
if (DECL_FIELD_ABI_IGNORED (field))
{
if (lookup_attribute ("no_unique_address",
DECL_ATTRIBUTES (field)))
empty_base_seen |= 2;
else
empty_base_seen |= 1;
continue;
}
if (single == NULL_TREE)
single = TREE_TYPE (field);
else
return false;
}
if (single == NULL_TREE)
return false;
else
type = single;
}
if (TREE_CODE (type) != REAL_TYPE)
if (!s390_single_field_struct_p (REAL_TYPE, type, false))
return false;
if (warn_psabi && empty_base_seen)
{
static unsigned last_reported_type_uid;
unsigned uid = TYPE_UID (TYPE_MAIN_VARIANT (orig_type));
if (uid != last_reported_type_uid)
{
const char *url = CHANGES_ROOT_URL "gcc-10/changes.html#empty_base";
last_reported_type_uid = uid;
if (empty_base_seen & 1)
inform (input_location,
"parameter passing for argument of type %qT when C++17 "
"is enabled changed to match C++14 %{in GCC 10.1%}",
orig_type, url);
else
inform (input_location,
"parameter passing for argument of type %qT with "
"%<[[no_unique_address]]%> members changed "
"%{in GCC 10.1%}", orig_type, url);
}
}
return true;
}
@@ -14879,7 +14875,6 @@ s390_get_sched_attrmask (rtx_insn *insn)
mask |= S390_SCHED_ATTR_MASK_GROUPOFTWO;
break;
case PROCESSOR_8561_Z15:
case PROCESSOR_ARCH14:
if (get_attr_z15_cracked (insn))
mask |= S390_SCHED_ATTR_MASK_CRACKED;
if (get_attr_z15_expanded (insn))
@@ -14891,6 +14886,18 @@ s390_get_sched_attrmask (rtx_insn *insn)
if (get_attr_z15_groupoftwo (insn))
mask |= S390_SCHED_ATTR_MASK_GROUPOFTWO;
break;
case PROCESSOR_3931_Z16:
if (get_attr_z16_cracked (insn))
mask |= S390_SCHED_ATTR_MASK_CRACKED;
if (get_attr_z16_expanded (insn))
mask |= S390_SCHED_ATTR_MASK_EXPANDED;
if (get_attr_z16_endgroup (insn))
mask |= S390_SCHED_ATTR_MASK_ENDGROUP;
if (get_attr_z16_groupalone (insn))
mask |= S390_SCHED_ATTR_MASK_GROUPALONE;
if (get_attr_z16_groupoftwo (insn))
mask |= S390_SCHED_ATTR_MASK_GROUPOFTWO;
break;
default:
gcc_unreachable ();
}
@@ -14927,7 +14934,6 @@ s390_get_unit_mask (rtx_insn *insn, int *units)
mask |= 1 << 3;
break;
case PROCESSOR_8561_Z15:
case PROCESSOR_ARCH14:
*units = 4;
if (get_attr_z15_unit_lsu (insn))
mask |= 1 << 0;
@@ -14938,6 +14944,17 @@ s390_get_unit_mask (rtx_insn *insn, int *units)
if (get_attr_z15_unit_vfu (insn))
mask |= 1 << 3;
break;
case PROCESSOR_3931_Z16:
*units = 4;
if (get_attr_z16_unit_lsu (insn))
mask |= 1 << 0;
if (get_attr_z16_unit_fxa (insn))
mask |= 1 << 1;
if (get_attr_z16_unit_fxb (insn))
mask |= 1 << 2;
if (get_attr_z16_unit_vfu (insn))
mask |= 1 << 3;
break;
default:
gcc_unreachable ();
}
@@ -14951,7 +14968,7 @@ s390_is_fpd (rtx_insn *insn)
return false;
return get_attr_z13_unit_fpd (insn) || get_attr_z14_unit_fpd (insn)
|| get_attr_z15_unit_fpd (insn);
|| get_attr_z15_unit_fpd (insn) || get_attr_z16_unit_fpd (insn);
}
static bool
@@ -14961,7 +14978,7 @@ s390_is_fxd (rtx_insn *insn)
return false;
return get_attr_z13_unit_fxd (insn) || get_attr_z14_unit_fxd (insn)
|| get_attr_z15_unit_fxd (insn);
|| get_attr_z15_unit_fxd (insn) || get_attr_z16_unit_fxd (insn);
}
/* Returns TRUE if INSN is a long-running instruction. */


@@ -43,12 +43,12 @@ enum processor_flags
PF_VXE2 = 8192,
PF_Z15 = 16384,
PF_NNPA = 32768,
PF_ARCH14 = 65536
PF_Z16 = 65536
};
/* This is necessary to avoid a warning about comparing different enum
types. */
#define s390_tune_attr ((enum attr_cpu)(s390_tune > PROCESSOR_8561_Z15 ? PROCESSOR_8561_Z15 : s390_tune ))
#define s390_tune_attr ((enum attr_cpu)(s390_tune > PROCESSOR_3931_Z16 ? PROCESSOR_3931_Z16 : s390_tune ))
/* These flags indicate that the generated code should run on a cpu
providing the respective hardware facility regardless of the
@@ -110,10 +110,10 @@ enum processor_flags
(s390_arch_flags & PF_VXE2)
#define TARGET_CPU_VXE2_P(opts) \
(opts->x_s390_arch_flags & PF_VXE2)
#define TARGET_CPU_ARCH14 \
(s390_arch_flags & PF_ARCH14)
#define TARGET_CPU_ARCH14_P(opts) \
(opts->x_s390_arch_flags & PF_ARCH14)
#define TARGET_CPU_Z16 \
(s390_arch_flags & PF_Z16)
#define TARGET_CPU_Z16_P(opts) \
(opts->x_s390_arch_flags & PF_Z16)
#define TARGET_CPU_NNPA \
(s390_arch_flags & PF_NNPA)
#define TARGET_CPU_NNPA_P(opts) \
@@ -177,9 +177,9 @@ enum processor_flags
(TARGET_VX && TARGET_CPU_VXE2)
#define TARGET_VXE2_P(opts) \
(TARGET_VX_P (opts) && TARGET_CPU_VXE2_P (opts))
#define TARGET_ARCH14 (TARGET_ZARCH && TARGET_CPU_ARCH14)
#define TARGET_ARCH14_P(opts) \
(TARGET_ZARCH_P (opts->x_target_flags) && TARGET_CPU_ARCH14_P (opts))
#define TARGET_Z16 (TARGET_ZARCH && TARGET_CPU_Z16)
#define TARGET_Z16_P(opts) \
(TARGET_ZARCH_P (opts->x_target_flags) && TARGET_CPU_Z16_P (opts))
#define TARGET_NNPA \
(TARGET_ZARCH && TARGET_CPU_NNPA)
#define TARGET_NNPA_P(opts) \


@@ -518,11 +518,11 @@
;; Processor type. This attribute must exactly match the processor_type
;; enumeration in s390.h.
(define_attr "cpu" "z900,z990,z9_109,z9_ec,z10,z196,zEC12,z13,z14,z15"
(define_attr "cpu" "z900,z990,z9_109,z9_ec,z10,z196,zEC12,z13,z14,z15,z16"
(const (symbol_ref "s390_tune_attr")))
(define_attr "cpu_facility"
"standard,ieee,zarch,cpu_zarch,longdisp,extimm,dfp,z10,z196,zEC12,vx,z13,z14,vxe,z15,vxe2,arch14,nnpa"
"standard,ieee,zarch,cpu_zarch,longdisp,extimm,dfp,z10,z196,zEC12,vx,z13,z14,vxe,z15,vxe2,z16,nnpa"
(const_string "standard"))
(define_attr "enabled" ""
@@ -588,8 +588,8 @@
(match_test "TARGET_VXE2"))
(const_int 1)
(and (eq_attr "cpu_facility" "arch14")
(match_test "TARGET_ARCH14"))
(and (eq_attr "cpu_facility" "z16")
(match_test "TARGET_Z16"))
(const_int 1)
(and (eq_attr "cpu_facility" "nnpa")
@@ -629,6 +629,9 @@
;; Pipeline description for z15
(include "8561.md")
;; Pipeline description for z16
(include "3931.md")
;; Predicates
(include "predicates.md")


@@ -116,7 +116,10 @@ EnumValue
Enum(processor_type) String(arch13) Value(PROCESSOR_8561_Z15)
EnumValue
Enum(processor_type) String(arch14) Value(PROCESSOR_ARCH14)
Enum(processor_type) String(arch14) Value(PROCESSOR_3931_Z16)
EnumValue
Enum(processor_type) String(z16) Value(PROCESSOR_3931_Z16)
EnumValue
Enum(processor_type) String(native) Value(PROCESSOR_NATIVE) DriverOnly


@@ -8884,8 +8884,20 @@ epilogue_renumber (rtx *where, int test)
if (REGNO (*where) >= 8 && REGNO (*where) < 24) /* oX or lX */
return 1;
if (! test && REGNO (*where) >= 24 && REGNO (*where) < 32)
*where = gen_rtx_REG (GET_MODE (*where), OUTGOING_REGNO (REGNO(*where)));
/* fallthrough */
{
if (ORIGINAL_REGNO (*where))
{
rtx n = gen_raw_REG (GET_MODE (*where),
OUTGOING_REGNO (REGNO (*where)));
ORIGINAL_REGNO (n) = ORIGINAL_REGNO (*where);
*where = n;
}
else
*where = gen_rtx_REG (GET_MODE (*where),
OUTGOING_REGNO (REGNO (*where)));
}
return 0;
case SCRATCH:
case PC:
case CONST_INT:

gcc/configure (vendored, 10 lines changed)

@@ -30625,7 +30625,10 @@ $as_echo_n "checking linker for compressed debug sections... " >&6; }
# In binutils 2.26, gld gained support for the ELF gABI format.
if test $in_tree_ld = yes ; then
gcc_cv_ld_compress_debug=0
if test "$gcc_cv_gld_major_version" -eq 2 -a "$gcc_cv_gld_minor_version" -ge 19 -o "$gcc_cv_gld_major_version" -gt 2 \
if test $ld_is_mold = yes; then
gcc_cv_ld_compress_debug=3
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
elif test "$gcc_cv_gld_major_version" -eq 2 -a "$gcc_cv_gld_minor_version" -ge 19 -o "$gcc_cv_gld_major_version" -gt 2 \
&& test $in_tree_ld_is_elf = yes && test $ld_is_gold = yes; then
gcc_cv_ld_compress_debug=2
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
@@ -30638,7 +30641,10 @@ if test $in_tree_ld = yes ; then
gcc_cv_ld_compress_debug=1
fi
elif echo "$ld_ver" | grep GNU > /dev/null; then
if test "$ld_vers_major" -lt 2 \
if test $ld_is_mold = yes; then
gcc_cv_ld_compress_debug=3
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
elif test "$ld_vers_major" -lt 2 \
|| test "$ld_vers_major" -eq 2 -a "$ld_vers_minor" -lt 21; then
gcc_cv_ld_compress_debug=0
elif test "$ld_vers_major" -eq 2 -a "$ld_vers_minor" -lt 26; then
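Both configure hunks (here and in configure.ac below) add the same early case: if the linker is mold, compressed debug sections are assumed supported unconditionally. A rough model of the resulting decision, with version thresholds taken from the visible context; the meaning of each numeric level is an assumption here, not something the hunk states:

```python
def compress_debug_level(linker, major=0, minor=0):
    # mold: always supports --compress-debug-sections (new case above)
    if linker == "mold":
        return 3
    # gold: supported since binutils 2.19 (per the existing test)
    if linker == "gold":
        return 2 if (major, minor) >= (2, 19) else 0
    # GNU ld: no support before 2.21 (per the existing test)
    if linker == "gnu" and (major, minor) < (2, 21):
        return 0
    return 1
```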


@@ -6266,7 +6266,10 @@ AC_MSG_CHECKING(linker for compressed debug sections)
# In binutils 2.26, gld gained support for the ELF gABI format.
if test $in_tree_ld = yes ; then
gcc_cv_ld_compress_debug=0
if test "$gcc_cv_gld_major_version" -eq 2 -a "$gcc_cv_gld_minor_version" -ge 19 -o "$gcc_cv_gld_major_version" -gt 2 \
if test $ld_is_mold = yes; then
gcc_cv_ld_compress_debug=3
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
elif test "$gcc_cv_gld_major_version" -eq 2 -a "$gcc_cv_gld_minor_version" -ge 19 -o "$gcc_cv_gld_major_version" -gt 2 \
&& test $in_tree_ld_is_elf = yes && test $ld_is_gold = yes; then
gcc_cv_ld_compress_debug=2
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
@@ -6279,7 +6282,10 @@ if test $in_tree_ld = yes ; then
gcc_cv_ld_compress_debug=1
fi
elif echo "$ld_ver" | grep GNU > /dev/null; then
if test "$ld_vers_major" -lt 2 \
if test $ld_is_mold = yes; then
gcc_cv_ld_compress_debug=3
gcc_cv_ld_compress_debug_option="--compress-debug-sections"
elif test "$ld_vers_major" -lt 2 \
|| test "$ld_vers_major" -eq 2 -a "$ld_vers_minor" -lt 21; then
gcc_cv_ld_compress_debug=0
elif test "$ld_vers_major" -eq 2 -a "$ld_vers_minor" -lt 26; then


@@ -1,3 +1,280 @@
2022-04-27 Jason Merrill <jason@redhat.com>
* tree.cc (strip_typedefs): Add default argument comments.
2022-04-27 Marek Polacek <polacek@redhat.com>
PR c++/105398
* pt.cc (uses_template_parms): Return false for any NAMESPACE_DECL.
2022-04-26 Jason Merrill <jason@redhat.com>
PR c++/102629
* pt.cc (gen_elem_of_pack_expansion_instantiation): Clear
TEMPLATE_TYPE_PARAMETER_PACK on auto.
2022-04-26 Patrick Palka <ppalka@redhat.com>
PR c++/105386
* semantics.cc (finish_decltype_type): Pass tf_decltype to
instantiate_non_dependent_expr_sfinae.
2022-04-26 Jason Merrill <jason@redhat.com>
PR c++/104624
* pt.cc (check_for_bare_parameter_packs): Check for lambda
function parameter pack.
2022-04-26 Patrick Palka <ppalka@redhat.com>
PR c++/105289
PR c++/86193
* pt.cc (process_partial_specialization): Downgrade "partial
specialization isn't more specialized" diagnostic from permerror
to an on-by-default pedwarn.
(unify) <case TEMPLATE_PARM_INDEX>: When substituting into the
NTTP type a second time, use the original type not the
substituted type.
2022-04-25 Marek Polacek <polacek@redhat.com>
PR c++/105353
* typeck.cc (build_x_shufflevector): Use
instantiation_dependent_expression_p except for the first two
arguments.
2022-04-21 Marek Polacek <polacek@redhat.com>
* constexpr.cc (cxx_eval_logical_expression): Remove unused
parameter.
(cxx_eval_constant_expression) <case TRUTH_ANDIF_EXPR>,
<case TRUTH_OR_EXPR>: Adjust calls to cxx_eval_logical_expression.
2022-04-21 Marek Polacek <polacek@redhat.com>
PR c++/105321
* constexpr.cc (cxx_eval_logical_expression): Always pass false for lval
to cxx_eval_constant_expression.
2022-04-20 Ed Catmur <ed@catmur.uk>
PR c++/104996
* call.cc (compare_ics): When comparing list-initialization
sequences, do not return early.
2022-04-19 Jakub Jelinek <jakub@redhat.com>
PR c++/105256
* typeck2.cc (process_init_constructor_array,
process_init_constructor_record, process_init_constructor_union): Move
CONSTRUCTOR_PLACEHOLDER_BOUNDARY flag from CONSTRUCTOR elements to the
containing CONSTRUCTOR.
2022-04-15 Marek Polacek <polacek@redhat.com>
PR c++/105268
* parser.cc (cp_parser_placeholder_type_specifier): Return
error_mark_node when trying to build up a constrained parameter in
a default argument.
2022-04-15 Jason Merrill <jason@redhat.com>
PR c++/102804
* decl.cc (grokdeclarator): Drop typedef used with 'unsigned'.
2022-04-15 Jason Merrill <jason@redhat.com>
PR c++/102987
* error.cc (dump_expr): Handle USING_DECL.
[VIEW_CONVERT_EXPR]: Just look through location wrapper.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/104646
* constexpr.cc (maybe_save_constexpr_fundef): Don't do extra
checks for defaulted ctors.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/82980
* lambda.cc (type_deducible_expression_p): New.
(lambda_capture_field_type): Check it.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/65211
* pt.cc (tsubst_decl) [TYPE_DECL]: Copy TYPE_ALIGN.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/97219
* name-lookup.cc (dependent_local_decl_p): New.
* cp-tree.h (dependent_local_decl_p): Declare.
* semantics.cc (finish_call_expr): Use it.
* pt.cc (tsubst_arg_types): Also substitute default args
for local externs.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/101698
* pt.cc (tsubst_baselink): Also check dependent optype.
2022-04-14 Jason Merrill <jason@redhat.com>
PR c++/101442
* decl.cc (cp_finish_decl): Don't pass decl to push_cleanup.
* init.cc (perform_member_init): Likewise.
* semantics.cc (push_cleanup): Adjust comment.
2022-04-13 Jason Merrill <jason@redhat.com>
PR c++/105245
PR c++/100111
* constexpr.cc (cxx_eval_store_expression): Build a CONSTRUCTOR
as needed in empty base handling.
2022-04-13 Jakub Jelinek <jakub@redhat.com>
PR c++/105233
* decl2.cc (cp_check_const_attributes): For aligned attribute
pass manifestly_const_eval=true to fold_non_dependent_expr.
2022-04-13 Marek Polacek <polacek@redhat.com>
PR c++/97296
* call.cc (direct_reference_binding): strip_top_quals when creating
a ck_qual.
2022-04-12 Jason Merrill <jason@redhat.com>
PR c++/104669
* decl.cc (decls_match): Compare versions even if not recording.
(duplicate_decls): Propagate attributes to alias.
* decl2.cc (find_last_decl): Give up if versioned.
2022-04-12 Jason Merrill <jason@redhat.com>
PR c++/102071
* init.cc (build_new_1): Check array_p for alignment.
2022-04-12 Patrick Palka <ppalka@redhat.com>
PR c++/103105
* pt.cc (build_extra_args): Call preserve_args.
2022-04-12 Jason Merrill <jason@redhat.com>
PR c++/104142
* decl.cc (check_initializer): Check TREE_SIDE_EFFECTS.
2022-04-12 Jason Merrill <jason@redhat.com>
PR c++/105223
PR c++/92918
* class.cc (finish_struct): Always using op=.
2022-04-11 Jason Merrill <jason@redhat.com>
PR c++/98249
* call.cc (build_operator_new_call): Just look in ::.
2022-04-11 Alexandre Oliva <oliva@adacore.com>
* constexpr.cc (cxx_eval_call_expression): Disregard dtor
result.
2022-04-11 Alexandre Oliva <oliva@adacore.com>
* semantics.cc (set_cleanup_locs): Propagate locus to call
wrapped in cast-to-void.
2022-04-11 Jason Merrill <jason@redhat.com>
PR c++/100370
* init.cc (warn_placement_new_too_small): Check deref.
2022-04-09 Jason Merrill <jason@redhat.com>
PR c++/105191
PR c++/92385
* tree.cc (build_vec_init_elt): Do {}-init for aggregates.
* constexpr.cc (cxx_eval_vec_init): Only treat {} as value-init
for non-aggregate types.
(build_vec_init_expr): Also check constancy of explicit
initializer elements.
2022-04-09 Jason Merrill <jason@redhat.com>
PR c++/91618
PR c++/96604
* name-lookup.cc (set_decl_namespace): Set
DECL_IMPLICIT_INSTANTIATION if no non-template match.
* pt.cc (check_explicit_specialization): Check it.
* decl2.cc (check_classfn): Call it.
2022-04-07 Patrick Palka <ppalka@redhat.com>
PR c++/99479
* name-lookup.cc (name_lookup::using_queue): Change to an
auto_vec (with 16 elements of internal storage).
(name_lookup::queue_namespace): Change return type to void,
take queue parameter by reference and adjust function body
accordingly.
(name_lookup::do_queue_usings): Inline into ...
(name_lookup::queue_usings): ... here. As in queue_namespace.
(name_lookup::search_unqualified): Don't make queue static,
remove length variable, and adjust function body accordingly.
2022-04-07 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/102586
* cp-objcp-common.h (cp_classtype_as_base): Declare.
(LANG_HOOKS_CLASSTYPE_AS_BASE): Redefine.
* cp-objcp-common.cc (cp_classtype_as_base): New function.
2022-04-07 Jason Merrill <jason@redhat.com>
PR c++/101051
* decl.cc (grokdeclarator): Reject conversion with trailing return
sooner.
2022-04-07 Jason Merrill <jason@redhat.com>
PR c++/101717
* lambda.cc (lambda_expr_this_capture): Check all enclosing
lambdas for completeness.
2022-04-07 Jason Merrill <jason@redhat.com>
PR c++/105187
* typeck2.cc (store_init_value): Allow TREE_HAS_CONSTRUCTOR for
vectors.
2022-04-06 Jakub Jelinek <jakub@redhat.com>
PR c++/104668
* decl2.cc (splice_template_attributes): Return NULL if *p is
error_mark_node.
(cplus_decl_attributes): Return early if attributes is
error_mark_node. Don't check that later.
2022-04-06 Patrick Palka <ppalka@redhat.com>
PR c++/105143
* pt.cc (do_class_deduction): Check complain before attempting
to issue a -Wctad-maybe-unsupported warning.
2022-04-06 Jason Merrill <jason@redhat.com>
PR c++/104702
* init.cc (build_vec_init): Use a reference for the result.
2022-04-06 Jason Merrill <jason@redhat.com>
PR c++/100608
* name-lookup.cc (check_local_shadow): Use -Wshadow=local
if exactly one of 'old' and 'decl' is a type.
2022-04-05 Jason Merrill <jason@redhat.com>
PR c++/103852

View File

@@ -1680,8 +1680,19 @@ direct_reference_binding (tree type, conversion *conv)
because the types "int *" and "const int *const" are
reference-related and we were binding both directly and they
had the same rank. To break it up, we add a ck_qual under the
ck_ref_bind so that conversion sequence ranking chooses #1. */
conv = build_conv (ck_qual, t, conv);
ck_ref_bind so that conversion sequence ranking chooses #1.
We strip_top_quals here which is also what standard_conversion
does. Failure to do so would confuse comp_cv_qual_signature
into thinking that in
void f(const int * const &); // #1
void f(const int *); // #2
int *x;
f(x);
#2 is a better match than #1 even though they're ambiguous (97296). */
conv = build_conv (ck_qual, strip_top_quals (t), conv);
return build_conv (ck_ref_bind, type, conv);
}
@@ -4899,8 +4910,7 @@ build_operator_new_call (tree fnname, vec<tree, va_gc> **args,
up in the global scope.
we disregard block-scope declarations of "operator new". */
fns = lookup_name (fnname, LOOK_where::NAMESPACE);
fns = lookup_arg_dependent (fnname, fns, *args);
fns = lookup_qualified_name (global_namespace, fnname);
if (align_arg)
{
@@ -11536,12 +11546,9 @@ compare_ics (conversion *ics1, conversion *ics2)
P0388R4.) */
else if (t1->kind == ck_aggr
&& TREE_CODE (t1->type) == ARRAY_TYPE
&& TREE_CODE (t2->type) == ARRAY_TYPE)
&& TREE_CODE (t2->type) == ARRAY_TYPE
&& same_type_p (TREE_TYPE (t1->type), TREE_TYPE (t2->type)))
{
/* The type of the array elements must be the same. */
if (!same_type_p (TREE_TYPE (t1->type), TREE_TYPE (t2->type)))
return 0;
tree n1 = nelts_initialized_by_list_init (t1);
tree n2 = nelts_initialized_by_list_init (t2);
if (tree_int_cst_lt (n1, n2))

View File

@@ -7723,17 +7723,14 @@ finish_struct (tree t, tree attributes)
lookup not to fail or recurse into bases. This isn't added
to the template decl list so we drop this at instantiation
time. */
if (!get_class_binding_direct (t, assign_op_identifier, false))
{
tree ass_op = build_lang_decl (USING_DECL, assign_op_identifier,
NULL_TREE);
DECL_CONTEXT (ass_op) = t;
USING_DECL_SCOPE (ass_op) = t;
DECL_DEPENDENT_P (ass_op) = true;
DECL_ARTIFICIAL (ass_op) = true;
DECL_CHAIN (ass_op) = TYPE_FIELDS (t);
TYPE_FIELDS (t) = ass_op;
}
tree ass_op = build_lang_decl (USING_DECL, assign_op_identifier,
NULL_TREE);
DECL_CONTEXT (ass_op) = t;
USING_DECL_SCOPE (ass_op) = t;
DECL_DEPENDENT_P (ass_op) = true;
DECL_ARTIFICIAL (ass_op) = true;
DECL_CHAIN (ass_op) = TYPE_FIELDS (t);
TYPE_FIELDS (t) = ass_op;
TYPE_SIZE (t) = bitsize_zero_node;
TYPE_SIZE_UNIT (t) = size_zero_node;

View File

@@ -920,7 +920,8 @@ maybe_save_constexpr_fundef (tree fun)
if (!potential && complain)
require_potential_rvalue_constant_expression (massaged);
if (DECL_CONSTRUCTOR_P (fun) && potential)
if (DECL_CONSTRUCTOR_P (fun) && potential
&& !DECL_DEFAULTED_FN (fun))
{
if (cx_check_missing_mem_inits (DECL_CONTEXT (fun),
massaged, complain))
@@ -2889,7 +2890,8 @@ cxx_eval_call_expression (const constexpr_ctx *ctx, tree t,
else
{
result = *ctx->global->values.get (res);
if (result == NULL_TREE && !*non_constant_p)
if (result == NULL_TREE && !*non_constant_p
&& !DECL_DESTRUCTOR_P (fun))
{
if (!ctx->quiet)
error ("%<constexpr%> call flows off the end "
@@ -4564,19 +4566,18 @@ cxx_eval_bit_cast (const constexpr_ctx *ctx, tree t, bool *non_constant_p,
static tree
cxx_eval_logical_expression (const constexpr_ctx *ctx, tree t,
tree bailout_value, tree continue_value,
bool lval,
bool *non_constant_p, bool *overflow_p)
{
tree r;
tree lhs = cxx_eval_constant_expression (ctx, TREE_OPERAND (t, 0),
lval,
non_constant_p, overflow_p);
/*lval*/false, non_constant_p,
overflow_p);
VERIFY_CONSTANT (lhs);
if (tree_int_cst_equal (lhs, bailout_value))
return lhs;
gcc_assert (tree_int_cst_equal (lhs, continue_value));
r = cxx_eval_constant_expression (ctx, TREE_OPERAND (t, 1),
lval, non_constant_p,
/*lval*/false, non_constant_p,
overflow_p);
VERIFY_CONSTANT (r);
return r;
@@ -5008,7 +5009,8 @@ cxx_eval_vec_init (const constexpr_ctx *ctx, tree t,
bool value_init = VEC_INIT_EXPR_VALUE_INIT (t);
if (!init || !BRACE_ENCLOSED_INITIALIZER_P (init))
;
else if (CONSTRUCTOR_NELTS (init) == 0)
else if (CONSTRUCTOR_NELTS (init) == 0
&& !CP_AGGREGATE_TYPE_P (strip_array_types (atype)))
{
/* Handle {} as value-init. */
init = NULL_TREE;
@@ -5931,6 +5933,12 @@ cxx_eval_store_expression (const constexpr_ctx *ctx, tree t,
{
/* See above on initialization of empty bases. */
gcc_assert (is_empty_class (TREE_TYPE (init)) && !lval);
if (!*valp)
{
/* But do make sure we have something in *valp. */
*valp = build_constructor (type, nullptr);
CONSTRUCTOR_NO_CLEARING (*valp) = no_zero_init;
}
return init;
}
else
@@ -7097,7 +7105,6 @@ cxx_eval_constant_expression (const constexpr_ctx *ctx, tree t,
case TRUTH_ANDIF_EXPR:
r = cxx_eval_logical_expression (ctx, t, boolean_false_node,
boolean_true_node,
lval,
non_constant_p, overflow_p);
break;
@@ -7105,7 +7112,6 @@ cxx_eval_constant_expression (const constexpr_ctx *ctx, tree t,
case TRUTH_ORIF_EXPR:
r = cxx_eval_logical_expression (ctx, t, boolean_true_node,
boolean_false_node,
lval,
non_constant_p, overflow_p);
break;

View File

@@ -513,8 +513,14 @@ coro_promise_type_found_p (tree fndecl, location_t loc)
coro_info->promise_type);
inform (DECL_SOURCE_LOCATION (BASELINK_FUNCTIONS (has_ret_void)),
"%<return_void%> declared here");
inform (DECL_SOURCE_LOCATION (BASELINK_FUNCTIONS (has_ret_val)),
"%<return_value%> declared here");
has_ret_val = BASELINK_FUNCTIONS (has_ret_val);
const char *message = "%<return_value%> declared here";
if (TREE_CODE (has_ret_val) == OVERLOAD)
{
has_ret_val = OVL_FIRST (has_ret_val);
message = "%<return_value%> first declared here";
}
inform (DECL_SOURCE_LOCATION (has_ret_val), message);
coro_info->coro_co_return_error_emitted = true;
return false;
}
@@ -877,13 +883,14 @@ coro_diagnose_throwing_fn (tree fndecl)
static bool
coro_diagnose_throwing_final_aw_expr (tree expr)
{
tree t = TARGET_EXPR_INITIAL (expr);
if (TREE_CODE (expr) == TARGET_EXPR)
expr = TARGET_EXPR_INITIAL (expr);
tree fn = NULL_TREE;
if (TREE_CODE (t) == CALL_EXPR)
fn = CALL_EXPR_FN(t);
else if (TREE_CODE (t) == AGGR_INIT_EXPR)
fn = AGGR_INIT_EXPR_FN (t);
else if (TREE_CODE (t) == CONSTRUCTOR)
if (TREE_CODE (expr) == CALL_EXPR)
fn = CALL_EXPR_FN (expr);
else if (TREE_CODE (expr) == AGGR_INIT_EXPR)
fn = AGGR_INIT_EXPR_FN (expr);
else if (TREE_CODE (expr) == CONSTRUCTOR)
return false;
else
{
@@ -1148,10 +1155,13 @@ finish_co_await_expr (location_t kw, tree expr)
extraneous warnings during substitution. */
suppress_warning (current_function_decl, OPT_Wreturn_type);
/* If we don't know the promise type, we can't proceed, build the
co_await with the expression unchanged. */
tree functype = TREE_TYPE (current_function_decl);
if (dependent_type_p (functype) || type_dependent_expression_p (expr))
/* Defer expansion when we are processing a template.
FIXME: If the coroutine function's type is not dependent, and the operand
is not dependent, we should determine the type of the co_await expression
using the DEPENDENT_EXPR wrapper machinery. That allows us to determine
the subexpression type, but leave its operand unchanged and then
instantiate it later. */
if (processing_template_decl)
{
tree aw_expr = build5_loc (kw, CO_AWAIT_EXPR, unknown_type_node, expr,
NULL_TREE, NULL_TREE, NULL_TREE,
@@ -1222,10 +1232,9 @@ finish_co_yield_expr (location_t kw, tree expr)
extraneous warnings during substitution. */
suppress_warning (current_function_decl, OPT_Wreturn_type);
/* If we don't know the promise type, we can't proceed, build the
co_await with the expression unchanged. */
tree functype = TREE_TYPE (current_function_decl);
if (dependent_type_p (functype) || type_dependent_expression_p (expr))
/* Defer expansion when we are processing a template; see FIXME in the
co_await code. */
if (processing_template_decl)
return build2_loc (kw, CO_YIELD_EXPR, unknown_type_node, expr, NULL_TREE);
if (!coro_promise_type_found_p (current_function_decl, kw))
@@ -1307,10 +1316,9 @@ finish_co_return_stmt (location_t kw, tree expr)
&& check_for_bare_parameter_packs (expr))
return error_mark_node;
/* If we don't know the promise type, we can't proceed, build the
co_return with the expression unchanged. */
tree functype = TREE_TYPE (current_function_decl);
if (dependent_type_p (functype) || type_dependent_expression_p (expr))
/* Defer expansion when we are processing a template; see FIXME in the
co_await code. */
if (processing_template_decl)
{
/* co_return expressions are always void type, regardless of the
expression type. */
@@ -3109,7 +3117,7 @@ maybe_promote_temps (tree *stmt, void *d)
If the initializer is a conditional expression, we need to collect
and declare any promoted variables nested within it. DTORs for such
variables must be run conditionally too. */
if (t->var && DECL_NAME (t->var))
if (t->var)
{
tree var = t->var;
DECL_CHAIN (var) = vlist;
@@ -3310,7 +3318,7 @@ add_var_to_bind (tree& bind, tree var_type,
tree b_vars = BIND_EXPR_VARS (bind);
/* Build a variable to hold the condition, this will be included in the
frame as a local var. */
char *nam = xasprintf ("%s.%d", nam_root, nam_vers);
char *nam = xasprintf ("__%s_%d", nam_root, nam_vers);
tree newvar = build_lang_decl (VAR_DECL, get_identifier (nam), var_type);
free (nam);
DECL_CHAIN (newvar) = b_vars;
@@ -3955,7 +3963,7 @@ register_local_var_uses (tree *stmt, int *do_subtree, void *d)
scopes with identically named locals and still be able to
identify them in the coroutine frame. */
tree lvname = DECL_NAME (lvar);
char *buf;
char *buf = NULL;
/* The outermost bind scope contains the artificial variables that
we inject to implement the coro state machine. We want to be able
@@ -3965,14 +3973,14 @@ register_local_var_uses (tree *stmt, int *do_subtree, void *d)
else if (lvname != NULL_TREE)
buf = xasprintf ("%s_%u_%u", IDENTIFIER_POINTER (lvname),
lvd->nest_depth, lvd->bind_indx);
else
buf = xasprintf ("_D%u_%u_%u", DECL_UID (lvar), lvd->nest_depth,
lvd->bind_indx);
/* TODO: Figure out if we should build a local type that has any
excess alignment or size from the original decl. */
local_var.field_id
= coro_make_frame_entry (lvd->field_list, buf, lvtype, lvd->loc);
free (buf);
if (buf)
{
local_var.field_id = coro_make_frame_entry (lvd->field_list, buf,
lvtype, lvd->loc);
free (buf);
}
/* We don't walk any of the local var sub-trees, they won't contain
any bind exprs. */
}

View File

@@ -280,6 +280,22 @@ cp_unit_size_without_reusable_padding (tree type)
return TYPE_SIZE_UNIT (type);
}
/* Returns type corresponding to FIELD's type when FIELD is a C++ base class
i.e., type without virtual base classes or tail padding. Returns
NULL_TREE otherwise. */
tree
cp_classtype_as_base (const_tree field)
{
if (DECL_FIELD_IS_BASE (field))
{
tree type = TREE_TYPE (field);
if (TYPE_LANG_SPECIFIC (type))
return CLASSTYPE_AS_BASE (type);
}
return NULL_TREE;
}
/* Stubs to keep c-opts.cc happy. */
void
push_file_scope (void)

View File

@@ -31,6 +31,7 @@ extern int cp_decl_dwarf_attribute (const_tree, int);
extern int cp_type_dwarf_attribute (const_tree, int);
extern void cp_common_init_ts (void);
extern tree cp_unit_size_without_reusable_padding (tree);
extern tree cp_classtype_as_base (const_tree);
extern tree cp_get_global_decls ();
extern tree cp_pushdecl (tree);
extern void cp_register_dumps (gcc::dump_manager *);
@@ -167,6 +168,8 @@ extern tree cxx_simulate_record_decl (location_t, const char *,
#define LANG_HOOKS_TYPE_DWARF_ATTRIBUTE cp_type_dwarf_attribute
#undef LANG_HOOKS_UNIT_SIZE_WITHOUT_REUSABLE_PADDING
#define LANG_HOOKS_UNIT_SIZE_WITHOUT_REUSABLE_PADDING cp_unit_size_without_reusable_padding
#undef LANG_HOOKS_CLASSTYPE_AS_BASE
#define LANG_HOOKS_CLASSTYPE_AS_BASE cp_classtype_as_base
#undef LANG_HOOKS_OMP_PREDETERMINED_SHARING
#define LANG_HOOKS_OMP_PREDETERMINED_SHARING cxx_omp_predetermined_sharing

View File

@@ -8243,6 +8243,7 @@ extern tree fold_builtin_source_location (location_t);
/* in name-lookup.cc */
extern tree strip_using_decl (tree);
extern void diagnose_name_conflict (tree, tree);
extern bool dependent_local_decl_p (tree);
/* Tell the binding oracle what kind of binding we are looking for. */

View File

@@ -1069,11 +1069,14 @@ decls_match (tree newdecl, tree olddecl, bool record_versions /* = true */)
if (types_match
&& !DECL_EXTERN_C_P (newdecl)
&& !DECL_EXTERN_C_P (olddecl)
&& record_versions
&& maybe_version_functions (newdecl, olddecl,
(!DECL_FUNCTION_VERSIONED (newdecl)
|| !DECL_FUNCTION_VERSIONED (olddecl))))
return 0;
&& targetm.target_option.function_versions (newdecl, olddecl))
{
if (record_versions)
maybe_version_functions (newdecl, olddecl,
(!DECL_FUNCTION_VERSIONED (newdecl)
|| !DECL_FUNCTION_VERSIONED (olddecl)));
return 0;
}
}
else if (TREE_CODE (newdecl) == TEMPLATE_DECL)
{
@@ -2598,7 +2601,12 @@ duplicate_decls (tree newdecl, tree olddecl, bool hiding, bool was_hidden)
else
{
retrofit_lang_decl (newdecl);
DECL_LOCAL_DECL_ALIAS (newdecl) = DECL_LOCAL_DECL_ALIAS (olddecl);
tree alias = DECL_LOCAL_DECL_ALIAS (newdecl)
= DECL_LOCAL_DECL_ALIAS (olddecl);
DECL_ATTRIBUTES (alias)
= (*targetm.merge_decl_attributes) (alias, newdecl);
if (TREE_CODE (newdecl) == FUNCTION_DECL)
merge_attribute_bits (newdecl, alias);
}
}
@@ -7444,6 +7452,10 @@ check_initializer (tree decl, tree init, int flags, vec<tree, va_gc> **cleanups)
if (init && init != error_mark_node)
init_code = build2 (INIT_EXPR, type, decl, init);
if (init_code && !TREE_SIDE_EFFECTS (init_code)
&& init_code != error_mark_node)
init_code = NULL_TREE;
if (init_code)
{
/* We might have set these in cp_finish_decl. */
@@ -8542,7 +8554,7 @@ cp_finish_decl (tree decl, tree init, bool init_const_expr_p,
{
for (tree t : *cleanups)
{
push_cleanup (decl, t, false);
push_cleanup (NULL_TREE, t, false);
/* As in initialize_local_var. */
wrap_temporary_cleanups (init, t);
}
@@ -12231,6 +12243,8 @@ grokdeclarator (const cp_declarator *declarator,
pedwarn (loc, OPT_Wpedantic, "%qs specified with %qT",
key, type);
ok = !flag_pedantic_errors;
type = DECL_ORIGINAL_TYPE (typedef_decl);
typedef_decl = NULL_TREE;
}
else if (declspecs->decltype_p)
error_at (loc, "%qs specified with %<decltype%>", key);
@@ -12866,6 +12880,11 @@ grokdeclarator (const cp_declarator *declarator,
"type specifier", name);
return error_mark_node;
}
if (late_return_type && sfk == sfk_conversion)
{
error ("a conversion function cannot have a trailing return type");
return error_mark_node;
}
type = splice_late_return_type (type, late_return_type);
if (type == error_mark_node)
return error_mark_node;
@@ -13030,8 +13049,6 @@ grokdeclarator (const cp_declarator *declarator,
maybe_warn_cpp0x (CPP0X_EXPLICIT_CONVERSION);
explicitp = 2;
}
if (late_return_type_p)
error ("a conversion function cannot have a trailing return type");
}
else if (sfk == sfk_deduction_guide)
{

View File

@@ -734,11 +734,15 @@ check_classfn (tree ctype, tree function, tree template_parms)
tree pushed_scope = push_scope (ctype);
tree matched = NULL_TREE;
tree fns = get_class_binding (ctype, DECL_NAME (function));
bool saw_template = false;
for (ovl_iterator iter (fns); !matched && iter; ++iter)
{
tree fndecl = *iter;
if (TREE_CODE (fndecl) == TEMPLATE_DECL)
saw_template = true;
/* A member template definition only matches a member template
declaration. */
if (is_template != (TREE_CODE (fndecl) == TEMPLATE_DECL))
@@ -788,6 +792,23 @@ check_classfn (tree ctype, tree function, tree template_parms)
matched = fndecl;
}
if (!matched && !is_template && saw_template
&& !processing_template_decl && DECL_UNIQUE_FRIEND_P (function))
{
/* "[if no non-template match is found,] each remaining function template
is replaced with the specialization chosen by deduction from the
friend declaration or discarded if deduction fails."
So ask check_explicit_specialization to find a matching template. */
SET_DECL_IMPLICIT_INSTANTIATION (function);
tree spec = check_explicit_specialization (DECL_NAME (function),
function, /* tcount */0,
/* friend flag */4,
/* attrlist */NULL_TREE);
if (spec != error_mark_node)
matched = spec;
}
if (!matched)
{
if (!COMPLETE_TYPE_P (ctype))
@@ -1513,12 +1534,19 @@ cp_check_const_attributes (tree attributes)
for (attr = attributes; attr; attr = TREE_CHAIN (attr))
{
tree arg;
/* As we implement alignas using gnu::aligned attribute and
alignas argument is a constant expression, force manifestly
constant evaluation of aligned attribute argument. */
bool manifestly_const_eval
= is_attribute_p ("aligned", get_attribute_name (attr));
for (arg = TREE_VALUE (attr); arg && TREE_CODE (arg) == TREE_LIST;
arg = TREE_CHAIN (arg))
{
tree expr = TREE_VALUE (arg);
if (EXPR_P (expr))
TREE_VALUE (arg) = fold_non_dependent_expr (expr);
TREE_VALUE (arg)
= fold_non_dependent_expr (expr, tf_warning_or_error,
manifestly_const_eval);
}
}
}
@@ -1616,8 +1644,16 @@ find_last_decl (tree decl)
if (TREE_CODE (*iter) == OVERLOAD)
continue;
if (decls_match (decl, *iter, /*record_decls=*/false))
return *iter;
tree d = *iter;
/* We can't compare versions in the middle of processing the
attribute that has the version. */
if (TREE_CODE (d) == FUNCTION_DECL
&& DECL_FUNCTION_VERSIONED (d))
return NULL_TREE;
if (decls_match (decl, d, /*record_decls=*/false))
return d;
}
return NULL_TREE;
}

View File

@@ -2203,6 +2203,7 @@ dump_expr (cxx_pretty_printer *pp, tree t, int flags)
case WILDCARD_DECL:
case OVERLOAD:
case TYPE_DECL:
case USING_DECL:
case IDENTIFIER_NODE:
dump_decl (pp, t, ((flags & ~(TFF_DECL_SPECIFIERS|TFF_RETURN_TYPE
|TFF_TEMPLATE_HEADER))
@@ -2584,6 +2585,13 @@ dump_expr (cxx_pretty_printer *pp, tree t, int flags)
case VIEW_CONVERT_EXPR:
{
tree op = TREE_OPERAND (t, 0);
if (location_wrapper_p (t))
{
dump_expr (pp, op, flags);
break;
}
tree ttype = TREE_TYPE (t);
tree optype = TREE_TYPE (op);

View File

@@ -1066,7 +1066,7 @@ perform_member_init (tree member, tree init, hash_set<tree> &uninitialized)
init = build2 (INIT_EXPR, type, decl, init);
finish_expr_stmt (init);
FOR_EACH_VEC_ELT (*cleanups, i, t)
push_cleanup (decl, t, false);
push_cleanup (NULL_TREE, t, false);
}
else if (type_build_ctor_call (type)
|| (init && CLASS_TYPE_P (strip_array_types (type))))
@@ -2811,6 +2811,11 @@ warn_placement_new_too_small (tree type, tree nelts, tree size, tree oper)
if (!objsize)
return;
/* We can only draw conclusions if ref.deref == -1,
i.e. oper is the address of the object. */
if (ref.deref != -1)
return;
offset_int bytes_avail = wi::to_offset (objsize);
offset_int bytes_need;
@@ -3287,7 +3292,7 @@ build_new_1 (vec<tree, va_gc> **placement, tree type, tree nelts,
{
unsigned align = TYPE_ALIGN_UNIT (elt_type);
/* Also consider the alignment of the cookie, if any. */
if (TYPE_VEC_NEW_USES_COOKIE (elt_type))
if (array_p && TYPE_VEC_NEW_USES_COOKIE (elt_type))
align = MAX (align, TYPE_ALIGN_UNIT (size_type_node));
align_arg = build_int_cst (align_type_node, align);
}

View File

@@ -183,6 +183,24 @@ lambda_function (tree lambda)
return lambda;
}
/* True if EXPR is an expression whose type can be used directly in lambda
capture. Not to be used for 'auto'. */
static bool
type_deducible_expression_p (tree expr)
{
if (!type_dependent_expression_p (expr))
return true;
if (BRACE_ENCLOSED_INITIALIZER_P (expr)
|| TREE_CODE (expr) == EXPR_PACK_EXPANSION)
return false;
tree t = non_reference (TREE_TYPE (expr));
if (!t) return false;
while (TREE_CODE (t) == POINTER_TYPE)
t = TREE_TYPE (t);
return currently_open_class (t);
}
/* Returns the type to use for the FIELD_DECL corresponding to the
capture of EXPR. EXPLICIT_INIT_P indicates whether this is a
C++14 init capture, and BY_REFERENCE_P indicates whether we're
@@ -211,7 +229,7 @@ lambda_capture_field_type (tree expr, bool explicit_init_p,
else
type = do_auto_deduction (type, expr, auto_node);
}
else if (type_dependent_expression_p (expr))
else if (!type_deducible_expression_p (expr))
{
type = cxx_make_type (DECLTYPE_TYPE);
DECLTYPE_TYPE_EXPR (type) = expr;
@@ -741,6 +759,7 @@ lambda_expr_this_capture (tree lambda, int add_capture_p)
{
tree lambda_stack = NULL_TREE;
tree init = NULL_TREE;
bool saw_complete = false;
/* If we are in a lambda function, we can move out until we hit:
1. a non-lambda function or NSDMI,
@@ -759,6 +778,11 @@ lambda_expr_this_capture (tree lambda, int add_capture_p)
lambda_stack);
tree closure = LAMBDA_EXPR_CLOSURE (tlambda);
if (COMPLETE_TYPE_P (closure))
/* We're instantiating a generic lambda op(), the containing
scope may be gone. */
saw_complete = true;
tree containing_function
= decl_function_context (TYPE_NAME (closure));
@@ -768,7 +792,7 @@ lambda_expr_this_capture (tree lambda, int add_capture_p)
/* Lambda in an NSDMI. We don't have a function to look up
'this' in, but we can find (or rebuild) the fake one from
inject_this_parameter. */
if (!containing_function && !COMPLETE_TYPE_P (closure))
if (!containing_function && !saw_complete)
/* If we're parsing a lambda in a non-local class,
we can find the fake 'this' in scope_chain. */
init = scope_chain->x_current_class_ptr;

View File

@@ -429,7 +429,7 @@ class name_lookup
{
public:
typedef std::pair<tree, tree> using_pair;
typedef vec<using_pair, va_heap, vl_embed> using_queue;
typedef auto_vec<using_pair, 16> using_queue;
public:
tree name; /* The identifier being looked for. */
@@ -528,16 +528,8 @@ private:
bool search_usings (tree scope);
private:
using_queue *queue_namespace (using_queue *queue, int depth, tree scope);
using_queue *do_queue_usings (using_queue *queue, int depth,
vec<tree, va_gc> *usings);
using_queue *queue_usings (using_queue *queue, int depth,
vec<tree, va_gc> *usings)
{
if (usings)
queue = do_queue_usings (queue, depth, usings);
return queue;
}
void queue_namespace (using_queue& queue, int depth, tree scope);
void queue_usings (using_queue& queue, int depth, vec<tree, va_gc> *usings);
private:
void add_fns (tree);
@@ -1084,39 +1076,35 @@ name_lookup::search_qualified (tree scope, bool usings)
/* Add SCOPE to the unqualified search queue, recursively add its
inlines and those via using directives. */
name_lookup::using_queue *
name_lookup::queue_namespace (using_queue *queue, int depth, tree scope)
void
name_lookup::queue_namespace (using_queue& queue, int depth, tree scope)
{
if (see_and_mark (scope))
return queue;
return;
/* Record it. */
tree common = scope;
while (SCOPE_DEPTH (common) > depth)
common = CP_DECL_CONTEXT (common);
vec_safe_push (queue, using_pair (common, scope));
queue.safe_push (using_pair (common, scope));
/* Queue its inline children. */
if (vec<tree, va_gc> *inlinees = DECL_NAMESPACE_INLINEES (scope))
for (unsigned ix = inlinees->length (); ix--;)
queue = queue_namespace (queue, depth, (*inlinees)[ix]);
queue_namespace (queue, depth, (*inlinees)[ix]);
/* Queue its using targets. */
queue = queue_usings (queue, depth, NAMESPACE_LEVEL (scope)->using_directives);
return queue;
queue_usings (queue, depth, NAMESPACE_LEVEL (scope)->using_directives);
}
/* Add the namespaces in USINGS to the unqualified search queue. */
name_lookup::using_queue *
name_lookup::do_queue_usings (using_queue *queue, int depth,
vec<tree, va_gc> *usings)
void
name_lookup::queue_usings (using_queue& queue, int depth, vec<tree, va_gc> *usings)
{
for (unsigned ix = usings->length (); ix--;)
queue = queue_namespace (queue, depth, (*usings)[ix]);
return queue;
if (usings)
for (unsigned ix = usings->length (); ix--;)
queue_namespace (queue, depth, (*usings)[ix]);
}
/* Unqualified namespace lookup in SCOPE.
@@ -1128,15 +1116,12 @@ name_lookup::do_queue_usings (using_queue *queue, int depth,
bool
name_lookup::search_unqualified (tree scope, cp_binding_level *level)
{
/* Make static to avoid continual reallocation. We're not
recursive. */
static using_queue *queue = NULL;
using_queue queue;
bool found = false;
int length = vec_safe_length (queue);
/* Queue local using-directives. */
for (; level->kind != sk_namespace; level = level->level_chain)
queue = queue_usings (queue, SCOPE_DEPTH (scope), level->using_directives);
queue_usings (queue, SCOPE_DEPTH (scope), level->using_directives);
for (; !found; scope = CP_DECL_CONTEXT (scope))
{
@@ -1144,19 +1129,19 @@ name_lookup::search_unqualified (tree scope, cp_binding_level *level)
int depth = SCOPE_DEPTH (scope);
/* Queue namespaces reachable from SCOPE. */
queue = queue_namespace (queue, depth, scope);
queue_namespace (queue, depth, scope);
/* Search every queued namespace where SCOPE is the common
ancestor. Adjust the others. */
unsigned ix = length;
unsigned ix = 0;
do
{
using_pair &pair = (*queue)[ix];
using_pair &pair = queue[ix];
while (pair.first == scope)
{
found |= search_namespace_only (pair.second);
pair = queue->pop ();
if (ix == queue->length ())
pair = queue.pop ();
if (ix == queue.length ())
goto done;
}
/* The depth is the same as SCOPE, find the parent scope. */
@@ -1164,7 +1149,7 @@ name_lookup::search_unqualified (tree scope, cp_binding_level *level)
pair.first = CP_DECL_CONTEXT (pair.first);
ix++;
}
while (ix < queue->length ());
while (ix < queue.length ());
done:;
if (scope == global_namespace)
break;
@@ -1181,9 +1166,6 @@ name_lookup::search_unqualified (tree scope, cp_binding_level *level)
dedup (false);
/* Restore to incoming length. */
vec_safe_truncate (queue, length);
return found;
}
@@ -5916,6 +5898,7 @@ set_decl_namespace (tree decl, tree scope, bool friendp)
tree found = NULL_TREE;
bool hidden_p = false;
bool saw_template = false;
for (lkp_iterator iter (old); iter; ++iter)
{
@@ -5940,6 +5923,20 @@ set_decl_namespace (tree decl, tree scope, bool friendp)
found = ofn;
hidden_p = iter.hidden_p ();
}
else if (TREE_CODE (decl) == FUNCTION_DECL
&& TREE_CODE (ofn) == TEMPLATE_DECL)
saw_template = true;
}
if (!found && friendp && saw_template)
{
/* "[if no non-template match is found,] each remaining function template
is replaced with the specialization chosen by deduction from the
friend declaration or discarded if deduction fails."
So tell check_explicit_specialization to look for a match. */
SET_DECL_IMPLICIT_INSTANTIATION (decl);
return;
}
if (found)
@@ -8979,4 +8976,22 @@ cp_emit_debug_info_for_using (tree t, tree context)
}
}
/* True if D is a local declaration in dependent scope. Assumes that it is
(part of) the current lookup result for its name. */
bool
dependent_local_decl_p (tree d)
{
if (!DECL_LOCAL_DECL_P (d))
return false;
cxx_binding *b = IDENTIFIER_BINDING (DECL_NAME (d));
cp_binding_level *l = b->scope;
while (!l->this_entity)
l = l->level_chain;
return uses_template_parms (l->this_entity);
}
#include "gt-cp-name-lookup.h"

View File

@@ -20041,7 +20041,16 @@ cp_parser_placeholder_type_specifier (cp_parser *parser, location_t loc,
/* In a template parameter list, a type-parameter can be introduced
by type-constraints alone. */
if (processing_template_parmlist && !placeholder)
return build_constrained_parameter (con, proto, args);
{
/* In a default argument we may not be creating new parameters. */
if (parser->local_variables_forbidden_p & LOCAL_VARS_FORBIDDEN)
{
/* If this assert turns out to be false, do error() instead. */
gcc_assert (tentative);
return error_mark_node;
}
return build_constrained_parameter (con, proto, args);
}
/* Diagnose placeholder issues. */
if (!flag_concepts_ts
@@ -25924,6 +25933,7 @@ cp_parser_class_specifier_1 (cp_parser* parser)
case CPP_OPEN_PAREN:
case CPP_CLOSE_PAREN:
case CPP_COMMA:
case CPP_SCOPE:
want_semicolon = false;
break;

View File

@@ -2863,7 +2863,9 @@ check_explicit_specialization (tree declarator,
specialization = 1;
SET_DECL_TEMPLATE_SPECIALIZATION (decl);
}
else if (TREE_CODE (declarator) == TEMPLATE_ID_EXPR)
else if (TREE_CODE (declarator) == TEMPLATE_ID_EXPR
|| (DECL_LANG_SPECIFIC (decl)
&& DECL_IMPLICIT_INSTANTIATION (decl)))
{
if (is_friend)
/* This could be something like:
@@ -4339,7 +4341,9 @@ check_for_bare_parameter_packs (tree t, location_t loc /* = UNKNOWN_LOCATION */)
parameter_packs = TREE_CHAIN (parameter_packs))
{
tree pack = TREE_VALUE (parameter_packs);
if (is_capture_proxy (pack))
if (is_capture_proxy (pack)
|| (TREE_CODE (pack) == PARM_DECL
&& DECL_CONTEXT (DECL_CONTEXT (pack)) == lam))
break;
}
@@ -5225,8 +5229,9 @@ process_partial_specialization (tree decl)
&& !get_partial_spec_bindings (maintmpl, maintmpl, specargs))
{
auto_diagnostic_group d;
if (permerror (input_location, "partial specialization %qD is not "
"more specialized than", decl))
if (pedwarn (input_location, 0,
"partial specialization %qD is not more specialized than",
decl))
inform (DECL_SOURCE_LOCATION (maintmpl), "primary template %qD",
maintmpl);
}
@@ -10900,7 +10905,7 @@ uses_template_parms (tree t)
|| uses_template_parms (TREE_CHAIN (t)));
else if (TREE_CODE (t) == TYPE_DECL)
dependent_p = dependent_type_p (TREE_TYPE (t));
else if (t == error_mark_node)
else if (t == error_mark_node || TREE_CODE (t) == NAMESPACE_DECL)
dependent_p = false;
else
dependent_p = instantiation_dependent_expression_p (t);
@@ -12677,7 +12682,13 @@ gen_elem_of_pack_expansion_instantiation (tree pattern,
t = tsubst_expr (pattern, args, complain, in_decl,
/*integral_constant_expression_p=*/false);
else
t = tsubst (pattern, args, complain, in_decl);
{
t = tsubst (pattern, args, complain, in_decl);
if (is_auto (t) && !ith_elem_is_expansion)
/* When expanding the fake auto... pack expansion from add_capture, we
need to mark that the expansion is no longer a pack. */
TEMPLATE_TYPE_PARAMETER_PACK (t) = false;
}
/* If the Ith argument pack element is a pack expansion, then
the Ith element resulting from the substituting is going to
@@ -13046,7 +13057,7 @@ build_extra_args (tree pattern, tree args, tsubst_flags_t complain)
{
/* Make a copy of the extra arguments so that they won't get changed
out from under us. */
tree extra = copy_template_args (args);
tree extra = preserve_args (copy_template_args (args), /*cow_p=*/false);
if (local_specializations)
if (tree locals = extract_local_specs (pattern, complain))
extra = tree_cons (NULL_TREE, extra, locals);
@@ -15082,6 +15093,12 @@ tsubst_decl (tree t, tree args, tsubst_flags_t complain)
{
DECL_ORIGINAL_TYPE (r) = NULL_TREE;
set_underlying_type (r);
/* common_handle_aligned_attribute doesn't apply the alignment
to DECL_ORIGINAL_TYPE. */
if (TYPE_USER_ALIGN (TREE_TYPE (t)))
TREE_TYPE (r) = build_aligned_type (TREE_TYPE (r),
TYPE_ALIGN (TREE_TYPE (t)));
}
layout_decl (r, 0);
@@ -15180,7 +15197,9 @@ tsubst_arg_types (tree arg_types,
/* Except that we do substitute default arguments under tsubst_lambda_expr,
since the new op() won't have any associated template arguments for us
to refer to later. */
if (lambda_fn_in_template_p (in_decl))
if (lambda_fn_in_template_p (in_decl)
|| (in_decl && TREE_CODE (in_decl) == FUNCTION_DECL
&& DECL_LOCAL_DECL_P (in_decl)))
default_arg = tsubst_copy_and_build (default_arg, args, complain, in_decl,
false/*fn*/, false/*constexpr*/);
@@ -16468,7 +16487,8 @@ tsubst_baselink (tree baselink, tree object_type,
tree binfo_type = BINFO_TYPE (BASELINK_BINFO (baselink));
binfo_type = tsubst (binfo_type, args, complain, in_decl);
bool dependent_p = binfo_type != BINFO_TYPE (BASELINK_BINFO (baselink));
bool dependent_p = (binfo_type != BINFO_TYPE (BASELINK_BINFO (baselink))
|| optype != BASELINK_OPTYPE (baselink));
if (dependent_p)
{
@@ -24288,7 +24308,7 @@ unify (tree tparms, tree targs, tree parm, tree arg, int strict,
/* Now check whether the type of this parameter is still
dependent, and give up if so. */
++processing_template_decl;
tparm = tsubst (tparm, targs, tf_none, NULL_TREE);
tparm = tsubst (TREE_TYPE (parm), targs, tf_none, NULL_TREE);
--processing_template_decl;
if (uses_template_parms (tparm))
return unify_success (explain_p);

View File

@@ -609,7 +609,17 @@ set_cleanup_locs (tree stmts, location_t loc)
{
if (TREE_CODE (stmts) == CLEANUP_STMT)
{
protected_set_expr_location (CLEANUP_EXPR (stmts), loc);
tree t = CLEANUP_EXPR (stmts);
protected_set_expr_location (t, loc);
/* Avoid locus differences for C++ cdtor calls depending on whether
cdtor_returns_this: a conversion to void is added to discard the return
value, and this conversion ends up carrying the location, and when it
gets discarded, the location is lost. So hold it in the call as
well. */
if (TREE_CODE (t) == NOP_EXPR
&& TREE_TYPE (t) == void_type_node
&& TREE_CODE (TREE_OPERAND (t, 0)) == CALL_EXPR)
protected_set_expr_location (TREE_OPERAND (t, 0), loc);
set_cleanup_locs (CLEANUP_BODY (stmts), loc);
}
else if (TREE_CODE (stmts) == STATEMENT_LIST)
@@ -656,7 +666,8 @@ do_pushlevel (scope_kind sk)
/* Queue a cleanup. CLEANUP is an expression/statement to be executed
when the current scope is exited. EH_ONLY is true when this is not
meant to apply to normal control flow transfer. */
meant to apply to normal control flow transfer. DECL is the VAR_DECL
being cleaned up, if any, or null for temporaries or subobjects. */
void
push_cleanup (tree decl, tree cleanup, bool eh_only)
@@ -2679,13 +2690,13 @@ finish_call_expr (tree fn, vec<tree, va_gc> **args, bool disallow_virtual,
if (processing_template_decl)
{
/* If FN is a local extern declaration or set thereof, look them up
again at instantiation time. */
/* If FN is a local extern declaration (or set thereof) in a template,
look it up again at instantiation time. */
if (is_overloaded_fn (fn))
{
tree ifn = get_first_fn (fn);
if (TREE_CODE (ifn) == FUNCTION_DECL
&& DECL_LOCAL_DECL_P (ifn))
&& dependent_local_decl_p (ifn))
orig_fn = DECL_NAME (ifn);
}
@@ -11241,7 +11252,7 @@ finish_decltype_type (tree expr, bool id_expression_or_member_access_p,
}
else if (processing_template_decl)
{
expr = instantiate_non_dependent_expr_sfinae (expr, complain);
expr = instantiate_non_dependent_expr_sfinae (expr, complain|tf_decltype);
if (expr == error_mark_node)
return error_mark_node;
/* Keep processing_template_decl cleared for the rest of the function

View File

@@ -740,7 +740,7 @@ build_cplus_new (tree type, tree init, tsubst_flags_t complain)
constructor calls until gimplification time; now we only do it to set
VEC_INIT_EXPR_IS_CONSTEXPR.
We assume that init is either NULL_TREE, void_type_node (indicating
We assume that init is either NULL_TREE, {}, void_type_node (indicating
value-initialization), or another array to copy. */
static tree
@@ -752,7 +752,20 @@ build_vec_init_elt (tree type, tree init, tsubst_flags_t complain)
|| !CLASS_TYPE_P (inner_type))
/* No interesting initialization to do. */
return integer_zero_node;
else if (init == void_type_node)
if (init && BRACE_ENCLOSED_INITIALIZER_P (init))
{
/* Even if init has initializers for some array elements,
we're interested in the {}-init of trailing elements. */
if (CP_AGGREGATE_TYPE_P (inner_type))
{
tree empty = build_constructor (init_list_type_node, nullptr);
return digest_init (inner_type, empty, complain);
}
else
/* It's equivalent to value-init. */
init = void_type_node;
}
if (init == void_type_node)
return build_value_init (inner_type, complain);
releasing_vec argvec;
@@ -808,9 +821,13 @@ build_vec_init_expr (tree type, tree init, tsubst_flags_t complain)
TREE_SIDE_EFFECTS (init) = true;
SET_EXPR_LOCATION (init, input_location);
if (cxx_dialect >= cxx11
&& potential_constant_expression (elt_init))
VEC_INIT_EXPR_IS_CONSTEXPR (init) = true;
if (cxx_dialect >= cxx11)
{
bool cx = potential_constant_expression (elt_init);
if (BRACE_ENCLOSED_INITIALIZER_P (init))
cx &= potential_constant_expression (init);
VEC_INIT_EXPR_IS_CONSTEXPR (init) = cx;
}
VEC_INIT_EXPR_VALUE_INIT (init) = value_init;
return init;
@@ -1566,7 +1583,8 @@ apply_identity_attributes (tree result, tree attribs, bool *remove_attributes)
stripped. */
tree
strip_typedefs (tree t, bool *remove_attributes, unsigned int flags)
strip_typedefs (tree t, bool *remove_attributes /* = NULL */,
unsigned int flags /* = 0 */)
{
tree result = NULL, type = NULL, t0 = NULL;

View File

@@ -6315,7 +6315,9 @@ build_x_shufflevector (location_t loc, vec<tree, va_gc> *args,
if (processing_template_decl)
{
for (unsigned i = 0; i < args->length (); ++i)
if (type_dependent_expression_p ((*args)[i]))
if (i <= 1
? type_dependent_expression_p ((*args)[i])
: instantiation_dependent_expression_p ((*args)[i]))
{
tree exp = build_min_nt_call_vec (NULL, args);
CALL_EXPR_IFN (exp) = IFN_SHUFFLEVECTOR;

View File

@@ -922,6 +922,7 @@ store_init_value (tree decl, tree init, vec<tree, va_gc>** cleanups, int flags)
here it should have been digested into an actual value for the type. */
gcc_checking_assert (TREE_CODE (value) != CONSTRUCTOR
|| processing_template_decl
|| TREE_CODE (type) == VECTOR_TYPE
|| !TREE_HAS_CONSTRUCTOR (value));
/* If the initializer is not a constant, fill in DECL_INITIAL with
@@ -1514,6 +1515,14 @@ process_init_constructor_array (tree type, tree init, int nested, int flags,
strip_array_types (TREE_TYPE (ce->value)))));
picflags |= picflag_from_initializer (ce->value);
/* Propagate CONSTRUCTOR_PLACEHOLDER_BOUNDARY to outer
CONSTRUCTOR. */
if (TREE_CODE (ce->value) == CONSTRUCTOR
&& CONSTRUCTOR_PLACEHOLDER_BOUNDARY (ce->value))
{
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (init) = 1;
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (ce->value) = 0;
}
}
/* No more initializers. If the array is unbounded, we are done. Otherwise,
@@ -1559,6 +1568,14 @@ process_init_constructor_array (tree type, tree init, int nested, int flags,
}
picflags |= picflag_from_initializer (next);
/* Propagate CONSTRUCTOR_PLACEHOLDER_BOUNDARY to outer
CONSTRUCTOR. */
if (TREE_CODE (next) == CONSTRUCTOR
&& CONSTRUCTOR_PLACEHOLDER_BOUNDARY (next))
{
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (init) = 1;
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (next) = 0;
}
if (len > i+1)
{
tree range = build2 (RANGE_EXPR, size_type_node,
@@ -1753,6 +1770,13 @@ process_init_constructor_record (tree type, tree init, int nested, int flags,
if (fldtype != TREE_TYPE (field))
next = cp_convert_and_check (TREE_TYPE (field), next, complain);
picflags |= picflag_from_initializer (next);
/* Propagate CONSTRUCTOR_PLACEHOLDER_BOUNDARY to outer CONSTRUCTOR. */
if (TREE_CODE (next) == CONSTRUCTOR
&& CONSTRUCTOR_PLACEHOLDER_BOUNDARY (next))
{
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (init) = 1;
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (next) = 0;
}
CONSTRUCTOR_APPEND_ELT (v, field, next);
}
@@ -1893,6 +1917,14 @@ process_init_constructor_union (tree type, tree init, int nested, int flags,
ce->value = massage_init_elt (TREE_TYPE (ce->index), ce->value, nested,
flags, complain);
/* Propagate CONSTRUCTOR_PLACEHOLDER_BOUNDARY to outer CONSTRUCTOR. */
if (ce->value
&& TREE_CODE (ce->value) == CONSTRUCTOR
&& CONSTRUCTOR_PLACEHOLDER_BOUNDARY (ce->value))
{
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (init) = 1;
CONSTRUCTOR_PLACEHOLDER_BOUNDARY (ce->value) = 0;
}
return picflag_from_initializer (ce->value);
}

View File

@@ -179,6 +179,40 @@ ctf_dvd_lookup (const ctf_container_ref ctfc, dw_die_ref die)
return NULL;
}
/* Insert a dummy CTF variable into the list of variables to be ignored. */
static void
ctf_dvd_ignore_insert (ctf_container_ref ctfc, ctf_dvdef_ref dvd)
{
bool existed = false;
ctf_dvdef_ref entry = dvd;
ctf_dvdef_ref * item = ctfc->ctfc_ignore_vars->find_slot (entry, INSERT);
if (*item == NULL)
*item = dvd;
else
existed = true;
/* Duplicate variable records are not expected to be inserted. */
gcc_assert (!existed);
}
/* Lookup the dummy CTF variable given the DWARF die for the non-defining
decl to be ignored. */
bool
ctf_dvd_ignore_lookup (const ctf_container_ref ctfc, dw_die_ref die)
{
ctf_dvdef_t entry;
entry.dvd_key = die;
ctf_dvdef_ref * slot = ctfc->ctfc_ignore_vars->find_slot (&entry, NO_INSERT);
if (slot)
return true;
return false;
}
/* Append member definition to the list. Member list is a singly-linked list
with list start pointing to the head. */
@@ -666,9 +700,10 @@ ctf_add_member_offset (ctf_container_ref ctfc, dw_die_ref sou,
int
ctf_add_variable (ctf_container_ref ctfc, const char * name, ctf_id_t ref,
dw_die_ref die, unsigned int external_vis)
dw_die_ref die, unsigned int external_vis,
dw_die_ref die_var_decl)
{
ctf_dvdef_ref dvd;
ctf_dvdef_ref dvd, dvd_ignore;
gcc_assert (name);
@@ -680,6 +715,24 @@ ctf_add_variable (ctf_container_ref ctfc, const char * name, ctf_id_t ref,
dvd->dvd_name = ctf_add_string (ctfc, name, &(dvd->dvd_name_offset));
dvd->dvd_visibility = external_vis;
dvd->dvd_type = ref;
/* If the DW_AT_specification attribute exists, keep track of it, as this is
the non-defining declaration corresponding to the variable. We will
skip emitting a CTF variable for such incomplete, non-defining
declarations.
There could be some non-defining declarations, however, for which a
defining declaration does not show up in the same CU. For such
cases, the compiler continues to emit a CTF variable record as
usual. */
if (die_var_decl)
{
dvd_ignore = ggc_cleared_alloc<ctf_dvdef_t> ();
dvd_ignore->dvd_key = die_var_decl;
/* It's alright to leave other fields as zero. No valid CTF
variable will be added for these DW_TAG_variable DIEs. */
ctf_dvd_ignore_insert (ctfc, dvd_ignore);
}
ctf_dvd_insert (ctfc, dvd);
if (strcmp (name, ""))
@@ -900,6 +953,8 @@ new_ctf_container (void)
= hash_table<ctfc_dtd_hasher>::create_ggc (100);
tu_ctfc->ctfc_vars
= hash_table<ctfc_dvd_hasher>::create_ggc (100);
tu_ctfc->ctfc_ignore_vars
= hash_table<ctfc_dvd_hasher>::create_ggc (10);
return tu_ctfc;
}
@@ -952,6 +1007,9 @@ ctfc_delete_container (ctf_container_ref ctfc)
ctfc->ctfc_vars->empty ();
ctfc->ctfc_types = NULL;
ctfc->ctfc_ignore_vars->empty ();
ctfc->ctfc_ignore_vars = NULL;
ctfc_delete_strtab (&ctfc->ctfc_strtable);
ctfc_delete_strtab (&ctfc->ctfc_aux_strtable);
if (ctfc->ctfc_vars_list)

View File

@@ -274,6 +274,8 @@ typedef struct GTY (()) ctf_container
hash_table <ctfc_dtd_hasher> * GTY (()) ctfc_types;
/* CTF variables. */
hash_table <ctfc_dvd_hasher> * GTY (()) ctfc_vars;
/* CTF variables to be ignored. */
hash_table <ctfc_dvd_hasher> * GTY (()) ctfc_ignore_vars;
/* CTF string table. */
ctf_strtable_t ctfc_strtable;
@@ -301,6 +303,8 @@ typedef struct GTY (()) ctf_container
/* List of pre-processed CTF Variables. CTF requires that the variables
appear in the sorted order of their names. */
ctf_dvdef_t ** GTY ((length ("0"))) ctfc_vars_list;
/* Count of pre-processed CTF Variables in the list. */
uint64_t ctfc_vars_list_count;
/* List of pre-processed CTF types. CTF requires that a shared type must
appear before the type that uses it. For the compiler, this means types
are emitted in sorted order of their type IDs. */
@@ -392,6 +396,8 @@ extern ctf_dtdef_ref ctf_dtd_lookup (const ctf_container_ref ctfc,
dw_die_ref die);
extern ctf_dvdef_ref ctf_dvd_lookup (const ctf_container_ref ctfc,
dw_die_ref die);
extern bool ctf_dvd_ignore_lookup (const ctf_container_ref ctfc,
dw_die_ref die);
extern const char * ctf_add_string (ctf_container_ref, const char *,
uint32_t *, int);
@@ -428,7 +434,7 @@ extern int ctf_add_member_offset (ctf_container_ref, dw_die_ref, const char *,
extern int ctf_add_function_arg (ctf_container_ref, dw_die_ref,
const char *, ctf_id_t);
extern int ctf_add_variable (ctf_container_ref, const char *, ctf_id_t,
dw_die_ref, unsigned int);
dw_die_ref, unsigned int, dw_die_ref);
extern ctf_id_t ctf_lookup_tree_type (ctf_container_ref, const tree);
extern ctf_id_t get_btf_id (ctf_id_t);

View File

@@ -173,9 +173,7 @@ ctf_calc_num_vbytes (ctf_dtdef_ref ctftype)
static void
ctf_list_add_ctf_vars (ctf_container_ref ctfc, ctf_dvdef_ref var)
{
/* FIXME - static may not fly with multiple CUs. */
static int num_vars_added = 0;
ctfc->ctfc_vars_list[num_vars_added++] = var;
ctfc->ctfc_vars_list[ctfc->ctfc_vars_list_count++] = var;
}
/* Initialize the various sections and labels for CTF output. */
@@ -214,6 +212,13 @@ ctf_dvd_preprocess_cb (ctf_dvdef_ref * slot, void * arg)
ctf_dvdef_ref var = (ctf_dvdef_ref) *slot;
ctf_container_ref arg_ctfc = dvd_arg->dvd_arg_ctfc;
/* If the CTF variable corresponds to an extern variable declaration with
a defining declaration later on, skip it. Only the CTF variable
corresponding to the defining declaration of the extern variable is
desirable. */
if (ctf_dvd_ignore_lookup (arg_ctfc, var->dvd_key))
return 1;
ctf_preprocess_var (arg_ctfc, var);
/* Keep track of global objects. */
@@ -278,16 +283,16 @@ static void
ctf_preprocess (ctf_container_ref ctfc)
{
size_t num_ctf_types = ctfc->ctfc_types->elements ();
size_t num_ctf_vars = ctfc_get_num_ctf_vars (ctfc);
/* Initialize an array to keep track of the CTF variables at global
scope. */
size_t num_global_objts = ctfc->ctfc_num_global_objts;
scope. At this time, size it conservatively. */
size_t num_global_objts = num_ctf_vars;
if (num_global_objts)
{
ctfc->ctfc_gobjts_list = ggc_vec_alloc<ctf_dvdef_t*>(num_global_objts);
}
size_t num_ctf_vars = ctfc_get_num_ctf_vars (ctfc);
if (num_ctf_vars)
{
ctf_dvd_preprocess_arg_t dvd_arg;
@@ -301,8 +306,11 @@ ctf_preprocess (ctf_container_ref ctfc)
list for sorting. */
ctfc->ctfc_vars->traverse<void *, ctf_dvd_preprocess_cb> (&dvd_arg);
/* Sort the list. */
qsort (ctfc->ctfc_vars_list, num_ctf_vars, sizeof (ctf_dvdef_ref),
ctf_varent_compare);
qsort (ctfc->ctfc_vars_list, ctfc->ctfc_vars_list_count,
sizeof (ctf_dvdef_ref), ctf_varent_compare);
/* Update the actual number of the generated CTF variables at global
scope. */
ctfc->ctfc_num_global_objts = dvd_arg.dvd_global_obj_idx;
}
/* Initialize an array to keep track of the CTF functions types for global
@@ -478,7 +486,7 @@ output_ctf_header (ctf_container_ref ctfc)
/* Vars appear after function index. */
varoff = funcidxoff + ctfc->ctfc_num_global_funcs * sizeof (uint32_t);
/* CTF types appear after vars. */
typeoff = varoff + ctfc_get_num_ctf_vars (ctfc) * sizeof (ctf_varent_t);
typeoff = varoff + (ctfc->ctfc_vars_list_count) * sizeof (ctf_varent_t);
/* The total number of bytes for CTF types is the sum of the number of
times struct ctf_type_t, struct ctf_stype_t are written, plus the
amount of variable length data after each one of these. */
@@ -597,7 +605,7 @@ static void
output_ctf_vars (ctf_container_ref ctfc)
{
size_t i;
size_t num_ctf_vars = ctfc_get_num_ctf_vars (ctfc);
unsigned int num_ctf_vars = ctfc->ctfc_vars_list_count;
if (num_ctf_vars)
{
/* Iterate over the list of sorted vars and output the asm. */

View File

@@ -1,3 +1,19 @@
2022-04-21 Iain Buclaw <ibuclaw@gdcproject.org>
* dmd/MERGE: Merge upstream dmd eb7bee331.
* dmd/VERSION: Update version to v2.100.0-beta.1.
* d-lang.cc (d_handle_option): Handle OPT_frevert_dip1000.
* lang.opt (frevert=dip1000): New option.
2022-04-13 Iain Buclaw <ibuclaw@gdcproject.org>
* Make-lang.in (D_FRONTEND_OBJS): Add d/common-bitfields.o,
d/mustuse.o.
* d-ctfloat.cc (CTFloat::isIdentical): Don't treat NaN values as
identical.
* dmd/MERGE: Merge upstream dmd 4d1bfcf14.
* expr.cc (ExprVisitor::visit (VoidInitExp *)): New.
2022-04-03 Iain Buclaw <ibuclaw@gdcproject.org>
* d-lang.cc: Include dmd/template.h.

View File

@@ -89,6 +89,7 @@ D_FRONTEND_OBJS = \
d/canthrow.o \
d/chkformat.o \
d/clone.o \
d/common-bitfields.o \
d/common-file.o \
d/common-outbuffer.o \
d/common-string.o \
@@ -143,6 +144,7 @@ D_FRONTEND_OBJS = \
d/lambdacomp.o \
d/lexer.o \
d/mtype.o \
d/mustuse.o \
d/nogc.o \
d/nspace.o \
d/ob.o \

View File

@@ -55,8 +55,7 @@ CTFloat::isIdentical (real_t x, real_t y)
{
real_value rx = x.rv ();
real_value ry = y.rv ();
return (REAL_VALUE_ISNAN (rx) && REAL_VALUE_ISNAN (ry))
|| real_identical (&rx, &ry);
return real_identical (&rx, &ry);
}
/* Return true if real_t value R is NaN. */

View File

@@ -637,12 +637,17 @@ d_handle_option (size_t scode, const char *arg, HOST_WIDE_INT value,
break;
case OPT_frevert_all:
global.params.useDIP1000 = FeatureState::disabled;
global.params.useDIP25 = FeatureState::disabled;
global.params.dtorFields = FeatureState::disabled;
global.params.fix16997 = !value;
global.params.markdown = !value;
break;
case OPT_frevert_dip1000:
global.params.useDIP1000 = FeatureState::disabled;
break;
case OPT_frevert_dip25:
global.params.useDIP25 = FeatureState::disabled;
break;

View File

@@ -31,11 +31,11 @@ along with GCC; see the file COPYING3. If not see
/* Compare the first N bytes of S1 and S2 without regard to the case. */
int
Port::memicmp (const char *s1, const char *s2, size_t n)
Port::memicmp (const char *s1, const char *s2, d_size_t n)
{
int result = 0;
for (size_t i = 0; i < n; i++)
for (d_size_t i = 0; i < n; i++)
{
char c1 = s1[i];
char c2 = s2[i];
@@ -143,9 +143,9 @@ Port::readlongBE (const void *buffer)
/* Write an SZ-byte sized VALUE to BUFFER, ignoring endian-ness. */
void
Port::valcpy (void *buffer, uint64_t value, size_t sz)
Port::valcpy (void *buffer, uint64_t value, d_size_t sz)
{
gcc_assert (((size_t) buffer) % sz == 0);
gcc_assert (((d_size_t) buffer) % sz == 0);
switch (sz)
{

View File

@@ -1,4 +1,4 @@
47871363d804f54b29ccfd444b082c19716c2301
313d28b3db7523e67880ae3baf8ef28ce9abe9bd
The first line of this file holds the git revision number of the last
merge done from the dlang/dmd repository.

View File

@@ -130,6 +130,8 @@ Note that these groups have no strict meaning, the category assignments are a bi
| [impcnvtab.d](https://github.com/dlang/dmd/blob/master/src/dmd/impcnvtab.d) | Define an implicit conversion table for basic types |
| [importc.d](https://github.com/dlang/dmd/blob/master/src/dmd/importc.d) | Helpers specific to ImportC |
| [sideeffect.d](https://github.com/dlang/dmd/blob/master/src/dmd/sideeffect.d) | Extract side-effects of expressions for certain lowerings. |
| [mustuse.d](https://github.com/dlang/dmd/blob/master/src/dmd/mustuse.d) | Helpers related to the `@mustuse` attribute |
**Compile Time Function Execution (CTFE)**

View File

@@ -1 +1 @@
v2.099.1-beta.1
v2.100.0-beta.1

View File

@@ -57,6 +57,28 @@ enum ClassKind : ubyte
c,
}
/**
* Give a nice string for a class kind for error messages
* Params:
* c = class kind
* Returns:
* 0-terminated string for `c`
*/
const(char)* toChars(ClassKind c)
{
final switch (c)
{
case ClassKind.d:
return "D";
case ClassKind.cpp:
return "C++";
case ClassKind.objc:
return "Objective-C";
case ClassKind.c:
return "C";
}
}
/**
* If an aggregate has a pragma(mangle, ...) this holds the information
* to mangle.

Some files were not shown because too many files have changed in this diff.