2020-04-01 Fritz Reese <foreese@gcc.gnu.org>
PR fortran/85982
* fortran/decl.c (match_attr_spec): Lump COMP_STRUCTURE/COMP_MAP into
attribute checking used by TYPE.
2020-04-01 Fritz Reese <foreese@gcc.gnu.org>
PR fortran/85982
* gfortran.dg/dec_structure_28.f90: New test.
Since r278669 (the fix for PR ipa/91956), IPA-SRA makes sure that the clone
it creates is put into the same same_comdat_group as the original
cgraph_node, so that it can call private comdats (such as the ipa-split bits
of a comdat, which are private).
However, that means that if there is a non-comdat caller of a public
comdat that is modified by IPA-SRA, it now finds itself calling a
private comdat, which the call graph verifier does not like (and for a
reason: in theory the private comdat can disappear, and since it is private
it would not be available from other CUs).
The patch fixes this by performing the fix for PR 91956 only when the
node in question actually calls a local comdat, and when it does, also
making sure that no callers come from a different same_comdat_group
(disabling IPA-SRA if both conditions are true), so that it plays by the
rules in both modes: it does not violate the private comdat calling rule
and at the same time does not disable the transformation unnecessarily.
The patch also fixes up the calls_comdat_local of callers of the
modified node, despite that not triggering any known issues.
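As a rough illustration of the scenario (a hedged sketch with invented
names, not the PR testcase): an inline function gets comdat (vague) linkage,
IPA-SRA may clone it, and an ordinary non-comdat function may call it from
outside any same_comdat_group, so the clone must not unconditionally become
comdat-local.

  // 'widget' is a public comdat; IPA-SRA may create a clone of it.
  inline int widget (const int *p)
  {
    return *p + 1;
  }

  // Ordinary non-comdat caller, outside widget's same_comdat_group.
  int caller (int v)
  {
    return widget (&v);
  }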
2020-04-02 Martin Jambor <mjambor@suse.cz>
PR ipa/92676
* ipa-sra.c (struct caller_issues): New fields candidate and
call_from_outside_comdat.
(check_for_caller_issues): Check for calls from outside of
candidate's same_comdat_group.
(check_all_callers_for_issues): Set up issues.candidate, check result
of the new check.
(mark_callers_calls_comdat_local): New function.
(process_isra_node_results): Set calls_comdat_local of callers if
appropriate.
This does away with enabling -ffinite-loops at -O2+ for all languages
and instead enables it selectively for C++ only.
It also makes -ffinite-loops loop-private at CFG construction time,
fixing correctness issues with inlining.
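For reference, a hedged sketch of the kind of code affected (illustrative,
not from the PR): C++11's forward-progress rules allow assuming that a loop
without observable side effects terminates, which is what the per-loop
finite_p flag now records.

  // With -ffinite-loops (now the C++11+ default at -O2) the compiler may
  // assume this loop terminates, so if the result is unused the whole loop
  // can be removed.  For C the option stays off by default.
  unsigned count_steps (unsigned n)
  {
    unsigned steps = 0;
    while (n != 1)
      {
        n = (n & 1) ? 3 * n + 1 : n / 2;
        ++steps;
      }
    return steps;
  }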
2020-04-02 Richard Biener <rguenther@suse.de>
PR c/94392
* c-opts.c (c_common_post_options): Enable -ffinite-loops
for -O2 and C++11 or newer.
* common.opt (ffinite-loops): Initialize to zero.
* opts.c (default_options_table): Remove OPT_ffinite_loops
entry.
* cfgloop.h (loop::finite_p): New member.
* cfgloopmanip.c (copy_loop_info): Copy finite_p.
* ipa-icf-gimple.c (func_checker::compare_loops): Compare
finite_p.
* lto-streamer-in.c (input_cfg): Stream finite_p.
* lto-streamer-out.c (output_cfg): Likewise.
* tree-cfg.c (replace_loop_annotate): Initialize finite_p
from flag_finite_loops at CFG build time.
* tree-ssa-loop-niter.c (finite_loop_p): Check the loops
finite_p flag instead of flag_finite_loops.
* doc/invoke.texi (ffinite-loops): Adjust documentation of
default setting.
* gcc.dg/torture/pr94392.c: New testcase.
This removes the DW_TAG_imported_unit we generate for each referenced
early debug unit in LTRANS units. They do more harm than good, and their
semantics can be read in a way that makes them outright wrong.
2020-04-02 Richard Biener <rguenther@suse.de>
PR debug/94450
* dwarf2out.c (dwarf2out_early_finish): Remove code emitting
DW_TAG_imported_unit.
Complement commit bfe78b0847 ("RISC-V: Using fmv.x.w/fmv.w.x rather
than fmv.x.s/fmv.s.x") and document a binutils 2.30 requirement in the
installation manual, matching the addition of fmv.x.w/fmv.w.x mnemonics
to GAS.
gcc/
* doc/install.texi (Specific) <riscv32-*-elf, riscv32-*-linux>
<riscv64-*-elf, riscv64-*-linux>: Update binutils requirement to
2.30.
Commit r10-7415 brought scalar type consideration into eliminating
epilogue peeling for gaps, but it exposed one problem: the current
handling doesn't consider the memory access type
VMAT_CONTIGUOUS_REVERSE, for which the overrun happens on the
low-address side. This patch makes the code take care of it by updating
the offset and the element construction order accordingly.
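For illustration (a hedged sketch, not the PR testcase), the affected
access pattern is a contiguous load with a negative step, where loading a
full vector can overrun below the first accessed element rather than above
the last one:

  // The load from 'src' has a negative step and is vectorized as
  // VMAT_CONTIGUOUS_REVERSE, so the overrun is on the low-address side.
  void copy_reversed (double *dst, const double *src, int n)
  {
    for (int i = 0; i < n; i++)
      dst[i] = src[n - 1 - i];
  }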
Bootstrapped/regtested on powerpc64le-linux-gnu P8
and aarch64-linux-gnu.
2020-04-02 Kewen Lin <linkw@gcc.gnu.org>
gcc/ChangeLog
PR tree-optimization/94401
* tree-vect-loop.c (vectorizable_load): Handle VMAT_CONTIGUOUS_REVERSE
access type when loading halves of vector to avoid peeling for gaps.
While trying to reproduce PR92989, I've noticed the following warning:
In file included from ./tm.h:42,
from ../../gcc/backend.h:28,
from ../../gcc/lra-assigns.c:80:
../../gcc/config/mips/mti-linux.h:31:5: warning: invalid suffix on literal; C++11 requires a space between literal and string macro [-Wliteral-suffix]
"/%{mmicromips:micro}mips%{mel|EL:el}-"MIPS_SYSVERSION_SPEC \
^
This fixes it; string concatenation works just fine even with whitespace
in between.
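The underlying rule, shown on a stand-alone example (the macro here is
hypothetical, not the actual mti-linux.h spec): in C++11, "..."FOO is lexed
as a string literal with a user-defined suffix FOO, while "..." FOO is plain
concatenation of the literal with the macro expansion.

  #define SYSVERSION "r2"                    /* hypothetical macro */
  const char *ok = "/mips" SYSVERSION "-";   /* concatenation, fine */
  /* const char *bad = "/mips"SYSVERSION"-";    C++11: -Wliteral-suffix,
                                                SYSVERSION would be parsed
                                                as a literal suffix */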
2020-04-02 Jakub Jelinek <jakub@redhat.com>
* config/mips/mti-linux.h (SYSROOT_SUFFIX_SPEC): Add a space in
between a string literal and MIPS_SYSVERSION_SPEC macro.
I forgot to document the new param in invoke.texi; does the text below
look OK?
Tested with make info and make pdf.
Thanks,
Martin
2020-04-02 Martin Jambor <mjambor@suse.cz>
* doc/invoke.texi (Optimize Options): Document sra-max-propagations.
For the PR in question, my proposal is to also lower the
-param=max-find-base-term-values= default from 2000 to 200 after this;
at least in the above 4 bootstraps/regtests there is nothing that would
ever result in find_base_term returning non-NULL with more than 200
VALUEs being processed.
2020-04-02 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/92264
* params.opt (-param=max-find-base-term-values=): Decrease default
from 2000 to 200.
As discussed in the PR, with !ACCUMULATE_OUTGOING_ARGS large functions can
have hundreds of thousands of stack pointer adjustments, and cselib
creates a new VALUE after each sp adjustment; these form extremely deep
VALUE chains, which is very harmful e.g. for find_base_term.
E.g. if we have
sp -= 4
sp -= 4
sp += 4
sp += 4
sp -= 4
sp += 4
that means 7 VALUEs: one for the sp at the beginning (val1), then val2 =
val1 - 4, then val3 = val2 - 4, then val4 = val3 + 4, then val5 = val4 + 4,
then val6 = val5 - 4, then val7 = val6 + 4.
This patch tweaks cselib, so that it is smarter about sp adjustments.
When cselib_lookup (stack_pointer_rtx, Pmode, 1, VOIDmode) and we know
nothing about sp yet (this happens at the start of the function, for
non-var-tracking also after cselib_reset_table and for var-tracking after
processing fp_setter insn where we forget about former sp values because
that is now hfp related while everything after it is sp related), we
look it up normally, but in addition to what we have been doing before
we mark the VALUE as SP_DERIVED_VALUE_P. Further lookups of sp + offset
are then special cased, so that it is canonicalized to that
SP_DERIVED_VALUE_P VALUE + CONST_INT (if possible). So, for the above,
we get val1 with SP_DERIVED_VALUE_P set, then val2 = val1 - 4, val3 = val1 -
8 (note, no longer val2 - 4!), then we get val2 again, val1 again, val2
again, val1 again.
Collecting statistics on find_base_term calls with visited_vals.length ()
> 100 during a combined x86_64-linux and i686-linux bootstrap+regtest
cycle, without the patch I see:

  find_base_term > 100    returning NULL  returning non-NULL
  32-bit compilations            4229178                 407
  64-bit compilations             217523                   0

with the largest visited_vals.length () when returning non-NULL being 206.
With the patch the same numbers are:

  find_base_term > 100    returning NULL  returning non-NULL
  32-bit compilations            1249588                 135
  64-bit compilations               3510                   0

with the largest visited_vals.length () when returning non-NULL being 173.
This shows significant reduction of the deep VALUE chains.
On powerpc64{,le}-linux, these stats didn't change at all, we have
1008 0
for all of -m32, -m64 and little-endian -m64, just the
gcc.dg/pr85180.c and gcc.dg/pr87985.c testcases which are unrelated to sp.
My earlier version of the patch, which contained just the rtl.h and cselib.c
changes, regressed some tests:
gcc.dg/guality/{pr36728-{1,3},pr68860-{1,2}}.c
gcc.target/i386/{pr88416,sse-{13,23,24,25,26}}.c
The problem with the former tests was worse debug info: with -m32, where
arg7 was passed in a stack slot, we thought a later push might have
invalidated it when it couldn't. This is something I've solved with the
var-tracking.c (vt_initialize) changes. In those problematic functions, we
create a cfa_base VALUE (argp) and want to record that at the start of
the function the argp VALUE is sp + off and also record that current sp
VALUE is argp's VALUE - off. The second permanent equivalence didn't make
it after the patch though, because cselib_add_permanent_equiv will
cselib_lookup the value of the expression it wants to add as the equivalence
and if it is the same VALUE as we are calling it on, it doesn't do anything;
and due to the cselib changes for sp based accesses that is exactly what
happened. By reversing the order of the cselib_add_permanent_equiv calls we
get both equivalences though and thus are able to canonicalize the sp based
accesses in var-tracking to the cfa_base value + offset.
The i386 FAILs were all ICEs, where we had a pushf instruction pushing the
flags and then a pop into a pseudo reading that value again. With the
cselib changes, cselib during RTL DSE is able to see through the sp
adjustment and wanted to replace_read what the pushf had stored, by moving
the flags register into a pseudo and replacing the memory read in the pop
with that pseudo. That is
wrong for two reasons: one is that the backend doesn't have an instruction
to move the flags hard register into some other register, but replace_read
has been validating just the mem -> pseudo replacement and not the insns
emitted by copy_to_mode_reg. And the second issue is that it is obviously
wrong to replace a stack pop which contains stack post-increment by a copy
of pseudo into destination. dse.c has some code to handle RTX_AUTOINC, but
only uses it when actually removing stores and only when there is REG_INC
note (stack RTX_AUTOINC does not have those), in check_for_inc_dec* where
it emits the reg adjustment(s) before the insn that is going to be deleted.
replace_read doesn't remove the insn, so if it e.g. contained a REG_INC
note, it would be kept there, and we might have the RTX_AUTOINC not just in
*loc but in other spots.
So, the dse.c changes try to validate the added insns and punt on all
RTX_AUTOINC in *loc. Furthermore, it seems that with the cselib.c changes
on the gfortran.dg/pr87360.f90 and gcc.target/i386/pr88416.c testcases
check_for_inc_dec{,_1} happily throws stack pointer autoinc on the floor,
which is also wrong. While we could perhaps do the for_each_inc_dec
call regardless of whether we have REG_INC note or not, we aren't prepared
to handle e.g. REG_ARGS_SIZE distribution and thus could end up with wrong
unwind info or ICEs in dwarf2cfi.c. So the patch also punts on those;
after all, if we had in theory managed to optimize such pushes before,
we would have created wrong code.
On x86_64-linux and i686-linux, the patch has some minor debug info coverage
differences, but it doesn't appear very significant to me.
The https://github.com/pmachata/dwlocstat tool gives the following (where
"before" is vanilla trunk plus the rtl.h change but not the
{cselib,var-tracking,dse}.c changes, bootstrapped with
--enable-checking=yes,rtl,extra, then with the {cselib,var-tracking,dse}.c
hunks applied and cc1plus rebuilt, while "after" is trunk with the whole
patch applied).
64-bit cc1plus
before
cov% samples cumul
0..10 1232756/48% 1232756/48%
11..20 31089/1% 1263845/49%
21..30 39172/1% 1303017/51%
31..40 38853/1% 1341870/52%
41..50 47473/1% 1389343/54%
51..60 45171/1% 1434514/56%
61..70 69393/2% 1503907/59%
71..80 61988/2% 1565895/61%
81..90 104528/4% 1670423/65%
91..100 875402/34% 2545825/100%
after
cov% samples cumul
0..10 1233238/48% 1233238/48%
11..20 31086/1% 1264324/49%
21..30 39157/1% 1303481/51%
31..40 38819/1% 1342300/52%
41..50 47447/1% 1389747/54%
51..60 45151/1% 1434898/56%
61..70 69379/2% 1504277/59%
71..80 61946/2% 1566223/61%
81..90 104508/4% 1670731/65%
91..100 875094/34% 2545825/100%
32-bit cc1plus
before
cov% samples cumul
0..10 1231221/48% 1231221/48%
11..20 30992/1% 1262213/49%
21..30 36422/1% 1298635/51%
31..40 35793/1% 1334428/52%
41..50 47102/1% 1381530/54%
51..60 41201/1% 1422731/56%
61..70 65467/2% 1488198/58%
71..80 59560/2% 1547758/61%
81..90 104076/4% 1651834/65%
91..100 881879/34% 2533713/100%
after
cov% samples cumul
0..10 1230469/48% 1230469/48%
11..20 30390/1% 1260859/49%
21..30 36362/1% 1297221/51%
31..40 36042/1% 1333263/52%
41..50 47619/1% 1380882/54%
51..60 41674/1% 1422556/56%
61..70 65849/2% 1488405/58%
71..80 59857/2% 1548262/61%
81..90 104178/4% 1652440/65%
91..100 881273/34% 2533713/100%
2020-04-02 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/92264
* rtl.h (struct rtx_def): Mention that call bit is used as
SP_DERIVED_VALUE_P in cselib.c.
* cselib.c (SP_DERIVED_VALUE_P): Define.
(PRESERVED_VALUE_P, SP_BASED_VALUE_P): Move definitions earlier.
(cselib_hasher::equal): Handle equality between SP_DERIVED_VALUE_P
val_rtx and sp based expression where offsets cancel each other.
(preserve_constants_and_equivs): Formatting fix.
(cselib_reset_table): Add reverse op loc to SP_DERIVED_VALUE_P
locs list for cfa_base_preserved_val if needed. Formatting fix.
(autoinc_split): If the to be returned value is a REG, MEM or
VALUE which has SP_DERIVED_VALUE_P + CONST_INT as one of its
locs, return the SP_DERIVED_VALUE_P VALUE and adjust *off.
(rtx_equal_for_cselib_1): Call autoinc_split even if both
expressions are PLUS in Pmode with CONST_INT second operands.
Handle SP_DERIVED_VALUE_P cases.
(cselib_hash_plus_const_int): New function.
(cselib_hash_rtx): Use it for PLUS in Pmode with CONST_INT
second operand, as well as for PRE_DEC etc. that ought to be
hashed the same way.
(cselib_subst_to_values): Substitute PLUS with Pmode and
CONST_INT operand if the first operand is a VALUE which has
SP_DERIVED_VALUE_P + CONST_INT as one of its locs for the
SP_DERIVED_VALUE_P + adjusted offset.
(cselib_lookup_1): When creating a new VALUE for stack_pointer_rtx,
set SP_DERIVED_VALUE_P on it. Set PRESERVED_VALUE_P when adding
SP_DERIVED_VALUE_P PRESERVED_VALUE_P subseted VALUE location.
* var-tracking.c (vt_initialize): Call cselib_add_permanent_equiv
on the sp value before calling cselib_add_permanent_equiv on the
cfa_base value.
* dse.c (check_for_inc_dec_1, check_for_inc_dec): Punt on RTX_AUTOINC
in the insn without REG_INC note.
(replace_read): Punt on RTX_AUTOINC in the *loc being replaced.
Punt on invalid insns added by copy_to_mode_reg. Formatting fixes.
The following testcase ICEs because aarch64_gen_compare_reg_maybe_ze emits
invalid RTL.
For y_mode [QH]Imode it expects y to be of that mode (or a CONST_INT that
fits into that mode) and x to be SImode; for non-CONST_INT y it zero-extends
y into SImode and compares that against x, while for CONST_INT y it
zero-extends y into SImode. The problem is that when the zero-extended
constant isn't usable directly, it is forced into a REG, but in y_mode, and
then used in the comparison. That is wrong, because it should be forced
into an SImode REG and compared that way.
2020-04-02 Jakub Jelinek <jakub@redhat.com>
PR target/94435
* config/aarch64/aarch64.c (aarch64_gen_compare_reg_maybe_ze): For
y_mode E_[QH]Imode and y being a CONST_INT, change y_mode to SImode.
* gcc.target/aarch64/pr94435.c: New test.
For operands with an identical set of alternatives there is no point
in marking them commutative. This patch removes the superfluous
constraint modifiers in vector.md and vx-builtins.md since it might
slow down reload without buying us anything.
There were even two patterns where the constraint modifier was plain
wrong: "sub<VF_HW>3" and "ior_not<VT>3". Fortunately it never had any effect.
gcc/ChangeLog:
2020-04-02 Andreas Krebbel <krebbel@linux.ibm.com>
* config/s390/vector.md ("<ti*>add<mode>3", "mul<mode>3")
("and<mode>3", "notand<mode>3", "ior<mode>3", "ior_not<mode>3")
("xor<mode>3", "notxor<mode>3", "smin<mode>3", "smax<mode>3")
("umin<mode>3", "umax<mode>3", "vec_widen_smult_even_<mode>")
("vec_widen_umult_even_<mode>", "vec_widen_smult_odd_<mode>")
("vec_widen_umult_odd_<mode>", "add<mode>3", "sub<mode>3")
("mul<mode>3", "fma<mode>4", "fms<mode>4", "neg_fma<mode>4")
("neg_fms<mode>4", "*smax<mode>3_vxe", "*smaxv2df3_vx")
("*smin<mode>3_vxe", "*sminv2df3_vx"): Remove % constraint
modifier.
("vec_widen_umult_lo_<mode>", "vec_widen_umult_hi_<mode>")
("vec_widen_smult_lo_<mode>", "vec_widen_smult_hi_<mode>"):
Remove constraints from expander.
* config/s390/vx-builtins.md ("vacc<bhfgq>_<mode>", "vacq")
("vacccq", "vec_avg<mode>", "vec_avgu<mode>", "vec_vmal<mode>")
("vec_vmah<mode>", "vec_vmalh<mode>", "vec_vmae<mode>")
("vec_vmale<mode>", "vec_vmao<mode>", "vec_vmalo<mode>")
("vec_smulh<mode>", "vec_umulh<mode>", "vec_nor<mode>3")
("vfmin<mode>", "vfmax<mode>"): Remove % constraint modifier.
An ICE occurs when findloc is used with character arguments of different
kinds. If the character kinds differ, reject the code.
Original patch provided by Steven G. Kargl <kargl@gcc.gnu.org>.
gcc/fortran/ChangeLog:
PR fortran/93498
* check.c (gfc_check_findloc): If the kinds of the arguments
differ goto label "incompat".
gcc/testsuite/ChangeLog:
PR fortran/93498
* gfortran.dg/pr93498_1.f90: New test.
* gfortran.dg/pr93498_2.f90: New test.
Deferred-size arrays cannot be used in equivalence statements.
gcc/fortran/ChangeLog:
PR fortran/94030
* resolve.c (resolve_equivalence): Correct formatting
around the label "identical_types". Instead of using
gfc_resolve_array_spec use is_non_constants_shape_array
to determine whether the array can be used in an
equivalence statement.
gcc/testsuite/ChangeLog:
PR fortran/94030
* gfortran.dg/pr94030_1.f90: New test.
* gfortran.dg/pr94030_2.f90: New test.
The scan-file match is likely too strict to always succeed, so it has been
split up into a set of smaller matches.
gcc/testsuite/ChangeLog:
PR d/94315
* gdc.dg/pr93038.d: Split dg-final into multiple tests.
* gdc.dg/pr93038b.d: Likewise.
The symbol being scanned for only matched on 64-bit targets.
gcc/testsuite/ChangeLog:
PR d/94321
* gdc.dg/pr92216.d: Update to work on targets with 16 or 32-bit
pointers.
PR analyzer/94378 reports a false -Wanalyzer-malloc-leak
when returning a struct containing a malloc-ed pointer.
The issue is that the assignment code was not handling
compound copies, only copying top-level values from region to region,
and not copying child values.
This patch introduces a region_model::copy_region function, using
it for assignments and when analyzing function return values.
It recursively copies nested values within structs, unions, and
arrays, fixing the bug.
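A minimal sketch of the kind of code involved (illustrative names, not the
PR testcase): the malloc-ed pointer travels inside a returned struct, so
without compound copies the analyzer lost track of it and reported a leak.

  #include <stdlib.h>

  struct wrapper { void *buf; };

  static struct wrapper make_wrapper (void)
  {
    struct wrapper w;
    w.buf = malloc (16);
    return w;            /* the copy must preserve the child value */
  }

  void user (void)
  {
    struct wrapper w = make_wrapper ();
    free (w.buf);        /* no leak should be reported here */
  }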
gcc/analyzer/ChangeLog:
PR analyzer/94378
* checker-path.cc: Include "bitmap.h".
* constraint-manager.cc: Likewise.
* diagnostic-manager.cc: Likewise.
* engine.cc: Likewise.
(exploded_node::detect_leaks): Pass null region_id to pop_frame.
* program-point.cc: Include "bitmap.h".
* program-state.cc: Likewise.
* region-model.cc (id_set<region_id>::id_set): Convert to...
(region_id_set::region_id_set): ...this.
(svalue_id_set::svalue_id_set): New ctor.
(region_model::copy_region): New function.
(region_model::copy_struct_region): New function.
(region_model::copy_union_region): New function.
(region_model::copy_array_region): New function.
(stack_region::pop_frame): Drop return value. Add
"result_dst_rid" param; if it is non-null, use copy_region to copy
the result to it. Rather than capture and pass a single "known
used" return value to be used by purge_unused_values, instead
gather and pass a set of known used return values.
(root_region::pop_frame): Drop return value. Add "result_dst_rid"
param.
(region_model::on_assignment): Use copy_region.
(region_model::on_return): Likewise for the result.
(region_model::on_longjmp): Pass null for pop_frame's
result_dst_rid.
(region_model::update_for_return_superedge): Pass the region for the
return value of the call, if any, to pop_frame, rather than setting
the lvalue for the lhs of the result.
(region_model::pop_frame): Drop return value. Add
"result_dst_rid" param.
(region_model::purge_unused_svalues): Convert third param from an
svalue_id * to an svalue_id_set *, updating the initial populating
of the "used" bitmap accordingly. Don't remap it when done.
(struct selftest::coord_test): New selftest fixture, extracted from...
(selftest::test_dump_2): ...here.
(selftest::test_compound_assignment): New selftest.
(selftest::test_stack_frames): Pass null to new param of pop_frame.
(selftest::analyzer_region_model_cc_tests): Call the new selftest.
* region-model.h (class id_set): Delete template.
(class region_id_set): Reimplement, using old id_set implementation.
(class svalue_id_set): Likewise. Convert from auto_sbitmap to
auto_bitmap.
(region::get_active_view): New accessor.
(stack_region::pop_frame): Drop return value. Add
"result_dst_rid" param.
(root_region::pop_frame): Likewise.
(region_model::pop_frame): Likewise.
(region_model::copy_region): New decl.
(region_model::purge_unused_svalues): Convert third param from an
svalue_id * to an svalue_id_set *.
(region_model::copy_struct_region): New decl.
(region_model::copy_union_region): New decl.
(region_model::copy_array_region): New decl.
gcc/testsuite/ChangeLog:
PR analyzer/94378
* gcc.dg/analyzer/compound-assignment-1.c: New test.
* gcc.dg/analyzer/compound-assignment-2.c: New test.
* gcc.dg/analyzer/compound-assignment-3.c: New test.
Segher's patch that added -fsplit-wide-types-early and enabled it by
default for rs6000 caused pr87507.c to FAIL, because when running
lower-subreg earlier, we don't see any pseudo-to-pseudo copies of our wide
type (which are created by combine), and therefore we skip decomposing our
TImode accesses. The fix here is just to always run the third pass of
lower-subreg instead of disabling it if we ran the second pass.
2020-04-01 Peter Bergner <bergner@linux.ibm.com>
PR rtl-optimization/94123
* lower-subreg.c (pass_lower_subreg3::gate): Remove test for
flag_split_wide_types_early.
The example code in the PR uses r2 (the TOC register) directly. In the
RTL generated for that, r2 is copied to some pseudo, and then cprop
propagates that into a "*tocref<mode>" insn, because nothing is
preventing it from doing that.
So, put into the insn condition the same condition as we will later
encounter in the constraint anyway, fixing this.
2020-04-01 Segher Boessenkool <segher@kernel.crashing.org>
PR target/94420
* config/rs6000/rs6000.md (*tocref<mode> for P): Add insn condition
on operands[1].
Fix failures of pr93365.f90, pr93600_1.f90 and pr93600_2.f90.
The changes made for PR94246 deleted and changed code in expr.c that had
been introduced for PR93600, and the deletion broke the PR93600 test
cases. Restoring the deleted code while leaving the changed code alone
allows the test cases for both PR93600 and PR94246 to pass.
gcc/fortran/ChangeLog:
PR fortran/94386
* expr.c (simplify_parameter_variable): Restore code deleted
in PR94246.
The following testcase ICEs because the objsz pass calls replace_uses_by
on an SSA_NAME_OCCURS_IN_ABNORMAL_PHI SSA_NAME. The following patch
instead calls replace_call_with_value, which will turn it into
xyz_123(ab) = 234;
2020-04-01 Jakub Jelinek <jakub@redhat.com>
PR middle-end/94423
* tree-object-size.c (pass_object_sizes::execute): Don't call
replace_uses_by for SSA_NAME_OCCURS_IN_ABNORMAL_PHI lhs, instead
call replace_call_with_value.
* gcc.dg/ubsan/pr94423.c: New test.
As PR94043 shows, my commit r10-4524 exposed one issue in
vectorizable_live_operation, which inserts one extra BB
before the single exit, leading to unexpected operand expansion
and an unexpected loop depth assertion. As Richi suggested,
this patch teaches vectorizable_live_operation to
generate a loop-closed PHI for vec_lhs; it looks like:
loop;
# lhs' = PHI <lhs>
=>
loop;
# vec_lhs' = PHI <vec_lhs>
new_tree = BIT_FIELD_REF <vec_lhs', ...>;
lhs' = new_tree;
I noticed that there are some SLP cases that have the same lhs
and vec_lhs but different offsets, which can leave us with
multiple PHIs for the same vec_lhs. But I think that is fine,
since only one of them is actually live; the others should be
eliminated by the following DCE. So the patch doesn't check
whether there is already a PHI for vec_lhs, it just creates one
directly.
Bootstrapped/regtested on powerpc64le-linux-gnu (LE) P8.
2020-04-01 Kewen Lin <linkw@gcc.gnu.org>
gcc/ChangeLog
PR tree-optimization/94043
* tree-vect-loop.c (vectorizable_live_operation): Generate loop-closed
phi for vec_lhs and use it for lane extraction.
gcc/testsuite/ChangeLog
PR tree-optimization/94043
* gfortran.dg/graphite/vect-pr94043.f90: New test.
We represent 'this' in a default member initializer with a PLACEHOLDER_EXPR.
Normally in constexpr evaluation when we encounter one it refers to
ctx->ctor, but when we're creating a temporary of class type, that replaces
ctx->ctor, so a PLACEHOLDER_EXPR that refers to the type of the member being
initialized needs to be replaced before that happens.
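A hedged illustration of the construct involved (not the PR testcase): the
default member initializer for y refers to the object under construction
via 'this', which is the PLACEHOLDER_EXPR that must be resolved against the
right object during constexpr evaluation.

  struct A
  {
    int x = 1;
    int y = this->x + 1;   // 'this' is represented as a PLACEHOLDER_EXPR
  };

  constexpr int f ()
  {
    A a{};                 // NSDMIs evaluated for an object of class type
    return a.y;
  }
  static_assert (f () == 2, "");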
gcc/cp/ChangeLog
2020-03-31 Jason Merrill <jason@redhat.com>
PR c++/94205
* constexpr.c (cxx_eval_constant_expression) [TARGET_EXPR]: Call
replace_placeholders.
* typeck2.c (store_init_value): Fix arguments to
fold_non_dependent_expr.
This patch has no semantic effect; committing it separately makes the change
for 94205 easier to read.
gcc/cp/ChangeLog
2020-03-31 Jason Merrill <jason@redhat.com>
* constexpr.c (cxx_eval_constant_expression) [TARGET_EXPR]: Use
local variables.
This change fixes the symbol merging in get_symbol_decl to also consider
prototypes. This allows user-defined attributes to be set on
the prototype of a function; they are then applied to the definition
if it is found later in the compilation.
The lowering of UDAs to GCC attributes has been commonized into a single
function called apply_user_attributes.
gcc/d/ChangeLog:
PR d/90136
* d-attribs.cc: Include dmd/attrib.h.
(build_attributes): Redeclare as static.
(apply_user_attributes): New function.
* d-tree.h (class UserAttributeDeclaration): Remove.
(build_attributes): Remove.
(apply_user_attributes): Declare.
(finish_aggregate_type): Remove attrs argument.
* decl.cc (get_symbol_decl): Merge declaration prototypes with
definitions. Use apply_user_attributes.
* modules.cc (layout_moduleinfo_fields): Remove last argument to
finish_aggregate_type.
* typeinfo.cc (layout_classinfo_interfaces): Likewise.
* types.cc (layout_aggregate_members): Likewise.
(finish_aggregate_type): Remove attrs argument.
(TypeVisitor::visit (TypeEnum *)): Use apply_user_attributes.
(TypeVisitor::visit (TypeStruct *)): Remove last argument to
finish_aggregate_type. Use apply_user_attributes.
(TypeVisitor::visit (TypeClass *)): Likewise.
gcc/testsuite/ChangeLog:
PR d/90136
* gdc.dg/pr90136a.d: New test.
* gdc.dg/pr90136b.d: New test.
* gdc.dg/pr90136c.d: New test.
This attribute is not directly accessible from user code; rather, it is
added indirectly by the @forceinline attribute. Even so, a handler
should be present for it to prevent false-positive warnings.
Said warnings are not something that could happen currently, but they
will become a problem once PR90136 is fixed later.
gcc/d/ChangeLog:
* d-attribs.cc (d_langhook_common_attribute_table): Add always_inline.
(handle_always_inline_attribute): New function.
gcc/jit/ChangeLog
2020-03-31 Andrea Corallo <andrea.corallo@arm.com>
David Malcolm <dmalcolm@redhat.com>
* docs/topics/compatibility.rst (LIBGCCJIT_ABI_13): New ABI tag
plus add version paragraph.
* libgccjit++.h (namespace gccjit::version): Add new namespace.
* libgccjit.c (gcc_jit_version_major, gcc_jit_version_minor)
(gcc_jit_version_patchlevel): New functions.
* libgccjit.h (LIBGCCJIT_HAVE_gcc_jit_version): New macro.
(gcc_jit_version_major, gcc_jit_version_minor)
(gcc_jit_version_patchlevel): New functions.
* libgccjit.map (LIBGCCJIT_ABI_13): New ABI tag.
gcc/testsuite/ChangeLog
2020-03-31 Andrea Corallo <andrea.corallo@arm.com>
* jit.dg/test-version.c: New testcase.
* jit.dg/all-non-failing-tests.h: Add test-version.c.
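A short usage sketch for the new entry points (client code, guarded by the
feature-test macro listed above; illustrative only):

  #include <libgccjit.h>
  #include <stdio.h>

  int main (void)
  {
  #ifdef LIBGCCJIT_HAVE_gcc_jit_version
    printf ("linked against libgccjit %d.%d.%d\n",
            gcc_jit_version_major (),
            gcc_jit_version_minor (),
            gcc_jit_version_patchlevel ());
  #endif
    return 0;
  }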
This patch removes the manual insertion of padding for fields in
constructed struct literals, and instead uses memset() on the
declaration being initialized.
When compiling optimized builds, the intent of the manual padding is
usually missed, and alignment holes end up with non-zero values in them
anyway.
gcc/d/ChangeLog:
PR d/94424
* d-codegen.cc (build_alignment_field): Remove.
(build_struct_literal): Don't insert alignment padding.
* expr.cc (ExprVisitor::visit (AssignExp *)): Call memset before
assigning struct literals.
gcc/testsuite/ChangeLog:
PR d/94424
* gdc.dg/pr94424.d: New test.
In the testcase for PR94398, we're trying to compute:
alignment_support_scheme
= vect_supportable_dr_alignment (first_dr_info, false);
gcc_assert (alignment_support_scheme);
even for VMAT_GATHER_SCATTER, which always accesses individual elements.
Here we should set alignment_support_scheme to dr_unaligned_supported in
the gather/scatter case instead of calling vect_supportable_dr_alignment.
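For illustration (a hedged sketch, not the actual pr94398.c testcase), an
indexed load of this kind is vectorized with VMAT_GATHER_SCATTER on targets
with gather support and only ever touches individual elements:

  // The load in[idx[i]] becomes a gather; alignment of a full vector
  // access is irrelevant, hence dr_unaligned_supported.
  void gather (double *out, const double *in, const int *idx, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = in[idx[i]];
  }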
2020-03-31 Felix Yang <felix.yang@huawei.com>
gcc/
PR tree-optimization/94398
* tree-vect-stmts.c (vectorizable_store): Instead of calling
vect_supportable_dr_alignment, set alignment_support_scheme to
dr_unaligned_supported for gather-scatter accesses.
(vectorizable_load): Likewise.
gcc/testsuite/
PR tree-optimization/94398
* gcc.target/aarch64/pr94398.c: New test.
This adds weak linkage to internal TypeInfo data on top of the existing
DECL_COMDAT, which helps in the unlikely event that two copies of the same
TypeInfo data end up in multiple places.
gcc/d/ChangeLog:
* typeinfo.cc (TypeInfoVisitor::internal_reference): Call
d_comdat_linkage on generated decl.
Replace all relevant explicit uses of V64 vectors with an iterator (albeit
with only one entry). This is a prerequisite to adding extra vector lengths.
The changes are purely mechanical: comparing the mddump files from before
and after shows only white-space differences and the use of GET_MODE_NUNITS.
2020-03-31 Andrew Stubbs <ams@codesourcery.com>
gcc/
* config/gcn/gcn-valu.md (V_QI, V_HI, V_HF, V_SI, V_SF, V_DI, V_DF):
New mode iterators.
(vnsi, VnSI, vndi, VnDI): New mode attributes.
(mov<mode>): Use <VnDI> in place of V64DI.
(mov<mode>_exec): Likewise.
(mov<mode>_sgprbase): Likewise.
(reload_out<mode>): Likewise.
(*vec_set<mode>_1): Use GET_MODE_NUNITS instead of constant 64.
(gather_load<mode>v64si): Rename to ...
(gather_load<mode><vnsi>): ... this, and use <VnSI> in place of V64SI,
and <VnDI> in place of V64DI.
(gather<mode>_insn_1offset<exec>): Use <VnDI> in place of V64DI.
(gather<mode>_insn_1offset_ds<exec>): Use <VnSI> in place of V64SI.
(gather<mode>_insn_2offsets<exec>): Use <VnSI> and <VnDI>.
(scatter_store<mode>v64si): Rename to ...
(scatter_store<mode><vnsi>): ... this, and use <VnSI> and <VnDI>.
(scatter<mode>_expr<exec_scatter>): Use <VnSI> and <VnDI>.
(scatter<mode>_insn_1offset<exec_scatter>): Likewise.
(scatter<mode>_insn_1offset_ds<exec_scatter>): Likewise.
(scatter<mode>_insn_2offsets<exec_scatter>): Likewise.
(ds_bpermute<mode>): Use <VnSI>.
(addv64si3_vcc<exec_vcc>): Rename to ...
(add<mode>3_vcc<exec_vcc>): ... this, and use V_SI.
(addv64si3_vcc_dup<exec_vcc>): Rename to ...
(add<mode>3_vcc_dup<exec_vcc>): ... this, and use V_SI.
(addcv64si3<exec_vcc>): Rename to ...
(addc<mode>3<exec_vcc>): ... this, and use V_SI.
(subv64si3_vcc<exec_vcc>): Rename to ...
(sub<mode>3_vcc<exec_vcc>): ... this, and use V_SI.
(subcv64si3<exec_vcc>): Rename to ...
(subc<mode>3<exec_vcc>): ... this, and use V_SI.
(addv64di3): Rename to ...
(add<mode>3): ... this, and use V_DI.
(addv64di3_exec): Rename to ...
(add<mode>3_exec): ... this, and use V_DI.
(subv64di3): Rename to ...
(sub<mode>3): ... this, and use V_DI.
(subv64di3_exec): Rename to ...
(sub<mode>3_exec): ... this, and use V_DI.
(addv64di3_zext): Rename to ...
(add<mode>3_zext): ... this, and use V_DI and <VnSI>.
(addv64di3_zext_exec): Rename to ...
(add<mode>3_zext_exec): ... this, and use V_DI and <VnSI>.
(addv64di3_zext_dup): Rename to ...
(add<mode>3_zext_dup): ... this, and use V_DI and <VnSI>.
(addv64di3_zext_dup_exec): Rename to ...
(add<mode>3_zext_dup_exec): ... this, and use V_DI and <VnSI>.
(addv64di3_zext_dup2): Rename to ...
(add<mode>3_zext_dup2): ... this, and use V_DI and <VnSI>.
(addv64di3_zext_dup2_exec): Rename to ...
(add<mode>3_zext_dup2_exec): ... this, and use V_DI and <VnSI>.
(addv64di3_sext_dup2): Rename to ...
(add<mode>3_sext_dup2): ... this, and use V_DI and <VnSI>.
(addv64di3_sext_dup2_exec): Rename to ...
(add<mode>3_sext_dup2_exec): ... this, and use V_DI and <VnSI>.
(<su>mulv64si3_highpart<exec>): Rename to ...
(<su>mul<mode>3_highpart<exec>): ... this and use V_SI and <VnDI>.
(mulv64di3): Rename to ...
(mul<mode>3): ... this, and use V_DI and <VnSI>.
(mulv64di3_exec): Rename to ...
(mul<mode>3_exec): ... this, and use V_DI and <VnSI>.
(mulv64di3_zext): Rename to ...
(mul<mode>3_zext): ... this, and use V_DI and <VnSI>.
(mulv64di3_zext_exec): Rename to ...
(mul<mode>3_zext_exec): ... this, and use V_DI and <VnSI>.
(mulv64di3_zext_dup2): Rename to ...
(mul<mode>3_zext_dup2): ... this, and use V_DI and <VnSI>.
(mulv64di3_zext_dup2_exec): Rename to ...
(mul<mode>3_zext_dup2_exec): ... this, and use V_DI and <VnSI>.
(<expander>v64di3): Rename to ...
(<expander><mode>3): ... this, and use V_DI and <VnSI>.
(<expander>v64di3_exec): Rename to ...
(<expander><mode>3_exec): ... this, and use V_DI and <VnSI>.
(<expander>v64si3<exec>): Rename to ...
(<expander><mode>3<exec>): ... this, and use V_SI and <VnSI>.
(v<expander>v64si3<exec>): Rename to ...
(v<expander><mode>3<exec>): ... this, and use V_SI and <VnSI>.
(<expander>v64si3<exec>): Rename to ...
(<expander><vnsi>3<exec>): ... this, and use V_SI.
(subv64df3<exec>): Rename to ...
(sub<mode>3<exec>): ... this, and use V_DF.
(truncv64di<mode>2): Rename to ...
(trunc<vndi><mode>2): ... this, and use <VnDI>.
(truncv64di<mode>2_exec): Rename to ...
(trunc<vndi><mode>2_exec): ... this, and use <VnDI>.
(<convop><mode>v64di2): Rename to ...
(<convop><mode><vndi>2): ... this, and use <VnDI>.
(<convop><mode>v64di2_exec): Rename to ...
(<convop><mode><vndi>2_exec): ... this, and use <VnDI>.
(vec_cmp<u>v64qidi): Rename to ...
(vec_cmp<u><mode>di): ... this, and use <VnSI>.
(vec_cmp<u>v64qidi_exec): Rename to ...
(vec_cmp<u><mode>di_exec): ... this, and use <VnSI>.
(vcond_mask_<mode>di): Use <VnDI>.
(maskload<mode>di): Likewise.
(maskstore<mode>di): Likewise.
(mask_gather_load<mode>v64si): Rename to ...
(mask_gather_load<mode><vnsi>): ... this, and use <VnSI> and <VnDI>.
(mask_scatter_store<mode>v64si): Rename to ...
(mask_scatter_store<mode><vnsi>): ... this, and use <VnSI> and <VnDI>.
(*<reduc_op>_dpp_shr_v64di): Rename to ...
(*<reduc_op>_dpp_shr_<mode>): ... this, and use V_DI and <VnSI>.
(*plus_carry_in_dpp_shr_v64si): Rename to ...
(*plus_carry_in_dpp_shr_<mode>): ... this, and use V_SI.
(*plus_carry_dpp_shr_v64di): Rename to ...
(*plus_carry_dpp_shr_<mode>): ... this, and use V_DI and <VnSI>.
(vec_seriesv64si): Rename to ...
(vec_series<mode>): ... this, and use V_SI.
(vec_seriesv64di): Rename to ...
(vec_series<mode>): ... this, and use V_DI.
Use the HOST_WIDE_INT_PRINT_DEC macro instead of %ld for format printing.
gcc/
xxxx-xx-xx Claudiu Zissulescu <claziss@synopsys.com>
* config/arc/arc.c (arc_print_operand): Use
HOST_WIDE_INT_PRINT_DEC macro.
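For reference, a self-contained sketch of the idea (the typedef and the
macro below are stand-ins for GCC's own definitions, not the real ones from
hwint.h): HOST_WIDE_INT_PRINT_DEC expands to the printf format that matches
HOST_WIDE_INT, so it stays correct on hosts where a plain %ld would not.

  #include <inttypes.h>
  #include <stdio.h>

  typedef int64_t HOST_WIDE_INT;              /* stand-in for GCC's typedef */
  #define HOST_WIDE_INT_PRINT_DEC "%" PRId64  /* stand-in for GCC's macro */

  int main (void)
  {
    HOST_WIDE_INT value = 1234567890123LL;
    printf (HOST_WIDE_INT_PRINT_DEC "\n", value);   /* instead of "%ld" */
    return 0;
  }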