Split the aarch64_<su>qmovn<mode> pattern into separate scalar and
vector variants. Further split the vector RTL pattern into big/
little endian variants that model the zero-high-half semantics of the
underlying instruction. Modeling these semantics allows for better
RTL combinations while also removing some register allocation issues
as the compiler now knows that the operation is totally destructive.
Add new tests to narrow_zero_high_half.c to verify the benefit of
this change.
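For illustration, a test in the style of narrow_zero_high_half.c (function
name ours) shows the benefit: once the zero-high-half semantics are modeled,
the explicit combine with a zero vector folds away, leaving a single sqxtn:

#include <arm_neon.h>

/* Expect a lone sqxtn: the high half is known to be zeroed.  */
int16x8_t foo (int32x4_t a)
{
  return vcombine_s16 (vqmovn_s32 (a), vdup_n_s16 (0));
}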
gcc/ChangeLog:
2021-06-14 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Split generator
for aarch64_<su>qmovn builtins into scalar and vector
variants.
* config/aarch64/aarch64-simd.md (aarch64_<su>qmovn<mode>_insn_le):
Define.
(aarch64_<su>qmovn<mode>_insn_be): Define.
(aarch64_<su>qmovn<mode>): Split into scalar and vector
variants. Change vector variant to an expander that emits the
correct instruction depending on endianness.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/narrow_zero_high_half.c: Add new tests.
Split the aarch64_sqmovun<mode> pattern into separate scalar and
vector variants. Further split the vector pattern into big/little
endian variants that model the zero-high-half semantics of the
underlying instruction. Modeling these semantics allows for better
RTL combinations while also removing some register allocation issues
as the compiler now knows that the operation is totally destructive.
Add new tests to narrow_zero_high_half.c to verify the benefit of
this change.
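Again for illustration (function name ours), the unsigned-saturating narrow
can be checked the same way; with the patch the zeroing combine disappears,
leaving a lone sqxtun:

#include <arm_neon.h>

/* Expect a lone sqxtun: the high half is known to be zeroed.  */
uint16x8_t foo (int32x4_t a)
{
  return vcombine_u16 (vqmovun_s32 (a), vdup_n_u16 (0));
}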
gcc/ChangeLog:
2021-06-14 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Split generator
for aarch64_sqmovun builtins into scalar and vector variants.
* config/aarch64/aarch64-simd.md (aarch64_sqmovun<mode>):
Split into scalar and vector variants. Change vector variant
to an expander that emits the correct instruction depending
on endianness.
(aarch64_sqmovun<mode>_insn_le): Define.
(aarch64_sqmovun<mode>_insn_be): Define.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/narrow_zero_high_half.c: Add new tests.
Modeling the zero-high-half semantics of the XTN narrowing
instruction in RTL indicates to the compiler that this is a totally
destructive operation. This enables more RTL simplifications and also
prevents some register allocation issues.
Add new tests to narrow_zero_high_half.c to verify the benefit of
this change.
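As a sketch of what the new tests verify (function name ours), a plain
truncating narrow now also absorbs the zeroing of the high half into a
single xtn:

#include <arm_neon.h>

/* Expect a lone xtn: the high half is known to be zeroed.  */
uint16x8_t foo (uint32x4_t a)
{
  return vcombine_u16 (vmovn_u32 (a), vdup_n_u16 (0));
}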
gcc/ChangeLog:
2021-06-11 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd.md (aarch64_xtn<mode>_insn_le):
Define - modeling zero-high-half semantics.
(aarch64_xtn<mode>): Change to an expander that emits the
appropriate instruction depending on endianness.
(aarch64_xtn<mode>_insn_be): Define - modeling zero-high-half
semantics.
(aarch64_xtn2<mode>_le): Rename to...
(aarch64_xtn2<mode>_insn_le): This.
(aarch64_xtn2<mode>_be): Rename to...
(aarch64_xtn2<mode>_insn_be): This.
(vec_pack_trunc_<mode>): Emit truncation instruction instead
of aarch64_xtn.
* config/aarch64/iterators.md (Vnarrowd): Add Vnarrowd mode
attribute iterator.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/narrow_zero_high_half.c: Add new tests.
Add tests to verify that Neon narrowing-shift instructions clear the
top half of the result vector. It is sufficient to show that a
subsequent combine with a zero-vector is optimized away - leaving
just the narrowing-shift instruction.
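A minimal sketch of one such test (identifiers and shift count ours): if the
narrowing shift is known to clear the top half, the combine with the zero
vector is optimized away and only a shrn remains:

#include <arm_neon.h>

/* Expect a lone shrn: the combine with zero should be optimized away.  */
uint16x8_t foo (uint32x4_t a)
{
  return vcombine_u16 (vshrn_n_u32 (a, 5), vdup_n_u16 (0));
}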
gcc/testsuite/ChangeLog:
2021-06-15 Jonathan Wright <jonathan.wright@arm.com>
* gcc.target/aarch64/narrow_zero_high_half.c: New test.
When SRA transforms an assignment where the RHS is an aggregate decl
that it creates replacements for, the (least efficient) fallback
method of dealing with them is to store all the replacements back into
the original decl and then let the original assignment take its
course.
That of course should not need to be done for TREE_READONLY bases,
which cannot change contents. The SRA code handled this situation
only for DECL_IN_CONSTANT_POOL const decls; this patch modifies the
check so that it tests for TREE_READONLY. I also looked at all other
callers of generate_subtree_copies and added checks to another one
dealing with the exact same situation and to one which deals with it
in a non-assignment context.
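A hedged illustration of the situation (not the PR testcase):

/* 'c' is TREE_READONLY: if SRA uses the fallback for the assignment
   below, it must not store the scalar replacements back into 'c'.  */
struct S { int a, b; };
static const struct S c = { 1, 2 };

int f (void)
{
  struct S s = c;   /* aggregate copy from a read-only base */
  return s.a + s.b;
}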
gcc/ChangeLog:
2021-06-11 Martin Jambor <mjambor@suse.cz>
PR tree-optimization/100453
* tree-sra.c (create_access): Disqualify any const candidates
which are written to.
(sra_modify_expr): Do not store sub-replacements back to a const base.
(handle_unscalarized_data_in_subtree): Likewise.
(sra_modify_assign): Likewise. Earlier, use TREE_READONLY test
instead of constant_decl_p.
gcc/testsuite/ChangeLog:
2021-06-10 Martin Jambor <mjambor@suse.cz>
PR tree-optimization/100453
* gcc.dg/tree-ssa/pr100453.c: New test.
I've noticed that this test now sometimes FAILs and sometimes PASSes
on various arches (the line 12 test in particular).
The problem is that the a = 0; initialization in the caller no longer
happens before the f(&a) call, as what the argument points to is only
used in debug info.
Making the function noipa forces the caller to initialize it and still
tests what the test wants to test, namely that we don't consider *p a
valid location for the c variable at line 18 (after it has been
overwritten with *p = 1;).
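A hedged sketch of the shape of the fix (the real test is
gcc.dg/guality/pr49888.c and differs in detail):

__attribute__ ((noipa))   /* was: noinline, noclone */
void f (int *p)
{
  *p = 1;   /* past this point *p no longer holds the watched value */
}

int main (void)
{
  int a = 0;   /* with noipa this store must happen before the call */
  f (&a);
  return 0;
}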
2021-06-16 Jakub Jelinek <jakub@redhat.com>
* gcc.dg/guality/pr49888.c (f): Use noipa attribute instead of
noinline, noclone.
The following testcase is miscompiled on x86_64-linux: the bitfield
store is implemented as a 64-bit RMW operation at d+24 even though the
d variable has a size of only 28 bytes, and scheduling moves a store
to a different variable, which happens to sit right after d, in
between the read and the write.
The reason for this is that we weren't creating
DECL_BIT_FIELD_REPRESENTATIVEs for bitfields in unions.
The following patch does create them, but treats all such bitfields as if
they were in a structure where the particular bitfield is the only field.
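A hedged illustration of the hazard (layout and names ours, not the PR
testcase):

/* d is 28 bytes: a 64-bit read-modify-write of the bitfield at offset
   24 would also read and write 4 bytes past the end of d, clobbering
   whatever the linker placed there.  */
union U { unsigned int bf : 7; };
struct D { int pad[6]; union U u; } d;   /* u sits at offset 24 */
int neighbour;                           /* may be placed right after d */

void g (void)
{
  d.u.bf = 5;   /* must stay a narrow store, not an 8-byte RMW */
}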
2021-06-16 Jakub Jelinek <jakub@redhat.com>
PR middle-end/101062
* stor-layout.c (finish_bitfield_representative): For fields in unions
assume nextf is always NULL.
(finish_bitfield_layout): Compute bit field representatives also in
unions, but handle it as if each bitfield was the only field in the
aggregate.
* gcc.dg/pr101062.c: New test.
When we face an sm_ord vs. sm_unord for the same ref during
store sequence merging we assert that the ref is already marked
unsupported. But it can be that it will only be marked so
during the ongoing merging, so instead of asserting, mark it here.
Also apply some optimization to avoid wasting resources searching
for already unsupported refs.
2021-06-16 Richard Biener <rguenther@suse.de>
PR tree-optimization/101088
* tree-ssa-loop-im.c (sm_seq_valid_bb): Only look for
supported refs on edges. Do not assert that same-ref but
different-kind stores are unsupported, but mark them so.
(hoist_memory_references): Only look for supported refs
on exits.
* gcc.dg/torture/pr101088.c: New testcase.
This patch tackles PR46235 to improve the code generated for bit tests
on x86_64 by making more use of the bt instruction. Currently, GCC emits
bt instructions when followed by conditional jumps (thanks to Uros' splitters).
This patch adds splitters in i386.md, to catch the cases where bt is followed
by a conditional move (as in the original report), or by a setc/setnc (as in
comment 5 of the Bugzilla PR).
With this patch, the function in the original PR
int foo(int a, int x, int y) {
  if (a & (1 << x))
    return a;
  return 1;
}
which with -O2 on mainline generates:
foo:    movl    %edi, %eax
        movl    %esi, %ecx
        sarl    %cl, %eax
        testb   $1, %al
        movl    $1, %eax
        cmovne  %edi, %eax
        ret
now generates:
foo:    btl     %esi, %edi
        movl    $1, %eax
        cmovc   %edi, %eax
        ret
Likewise, IsBitSet1 and IsBitSet2 (from comment 5)
bool IsBitSet1(unsigned char byte, int index) {
  return (byte & (1 << index)) != 0;
}
bool IsBitSet2(unsigned char byte, int index) {
  return (byte >> index) & 1;
}
Before:
        movzbl  %dil, %eax
        movl    %esi, %ecx
        sarl    %cl, %eax
        andl    $1, %eax
        ret
After:
        movzbl  %dil, %edi
        btl     %esi, %edi
        setc    %al
        ret
According to Agner Fog, SAR/SHR r,cl takes 2 cycles on Skylake,
where BT r,r takes only one, so the performance improvements on
recent hardware may be more significant than implied by just
the reduced number of instructions. I've avoided transforming cases
(such as btsi_setcsi) where using bt sequences may not be a clear
win (over sarq/andl).
2021-06-15 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR rtl-optimization/46235
* config/i386/i386.md: New define_split for bt followed by cmov.
(*bt<mode>_setcqi): New define_insn_and_split for bt followed by setc.
(*bt<mode>_setncqi): New define_insn_and_split for bt then setnc.
(*bt<mode>_setnc<mode>): New define_insn_and_split for bt followed
by setnc with zero extension.
gcc/testsuite/ChangeLog
PR rtl-optimization/46235
* gcc.target/i386/bt-5.c: New test.
* gcc.target/i386/bt-6.c: New test.
* gcc.target/i386/bt-7.c: New test.
As the following testcase shows, libffi didn't properly handle
classification of structure arguments at byte offsets not divisible
by UNITS_PER_WORD. The following patch adjusts it to match what
config/i386/ classify_argument does for that and also ports the
PR38781 fix there (the second chunk).
This has been committed to upstream libffi already:
5651bea284
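A hedged illustration of the classification issue (types ours):

/* 'in' starts at byte offset 6, so its 3 bytes straddle the first
   eightbyte boundary: classify_argument must compute words from
   type->size + byte_offset (9 bytes -> 2 words), not from
   type->size alone (3 bytes -> 1 word).  */
struct inner { char a, b, c; };
struct outer { char pad[6]; struct inner in; };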
2021-06-16 Jakub Jelinek <jakub@redhat.com>
* src/x86/ffi64.c (classify_argument): For FFI_TYPE_STRUCT set words
to number of words needed for type->size + byte_offset bytes rather
than just type->size bytes. Compute pos before the loop and check
total size of the structure.
* testsuite/libffi.call/nested_struct12.c: New test.
gcc/ada/
* sem_util.adb (Is_Volatile_Function): Follow the exact wording
of SPARK (regarding volatile functions) and Ada (regarding
protected functions).
gcc/ada/
* sem_util.adb (Is_OK_Volatile_Context): All references to
volatile objects are legal in preanalysis.
(Within_Volatile_Function): Previously it was wrongly called on
Empty entities; now it is only called on E_Return_Statement,
which allows the body to be greatly simplified.
gcc/ada/
* sem_res.adb (Set_Slice_Subtype): Revert the special case
introduced previously, which is not needed, as Itypes created
for slices are precisely always used.
gcc/ada/
* urealp.adb (Scale): Change first parameter to Uint and adjust.
(Equivalent_Decimal_Exponent): Pass U.Den directly to Scale.
* libgnat/s-exponr.adb (Negative): Rename to...
(Safe_Negative): ...this and change its lower bound.
(Exponr): Adjust to above renaming and deal with Integer'First.
gcc/ada/
* sem_res.adb (Flag_Effectively_Volatile_Objects): Also detect
allocators within restricted contexts, not just entity names.
(Resolve_Actuals): Remove duplicated code for detecting
restricted contexts; it is now exclusively done in
Is_OK_Volatile_Context.
(Resolve_Entity_Name): Adapt to new parameter of
Is_OK_Volatile_Context.
* sem_util.ads, sem_util.adb (Is_OK_Volatile_Context): Adapt to
handle contexts both inside and outside of subprogram call
actual parameters.
(Within_Subprogram_Call): Remove; now handled by
Is_OK_Volatile_Context itself and its parameter.
gcc/ada/
* checks.adb (Apply_Scalar_Range_Check): Fix handling of check depending
on the parameter passing mechanism. Grammar adjustment ("has"
=> "have").
(Parameter_Passing_Mechanism_Specified): Add a hyphen in a comment.
gcc/ada/
* sem_res.adb (Is_Assignment_Or_Object_Expression): Whitespace
cleanup.
(Is_Attribute_Expression): Prevent AST climbing from going to
the root of the compilation unit.
gcc/ada/
* ghost.adb: Add another special case where full analysis is
needed. This bug is due to quirks in the way
Mark_And_Set_Ghost_Assignment works (it happens very early,
before name resolution is done).
gcc/ada/
* sem_res.adb (Resolve_Raise_Expression): Apply Ada_2020 rules
concerning the need for parentheses around Raise_Expressions in
various contexts.
gcc/ada/
* sem_ch13.adb (Validate_Unchecked_Conversion): Move detection
of generic types before switching to their private views; fix
style in using AND THEN.
gcc/ada/
* sem_ch3.adb (Analyze_Component_Declaration): Do not special
case raise expressions.
gcc/testsuite/
* gnat.dg/limited4.adb: Disable illegal code.
gcc/ada/
* doc/gnat_ugn/building_executable_programs_with_gnat.rst:
Instead of referring to the formatting of the Ada examples in
the Ada RM, use the list of checks that are actually performed.
* gnat_ugn.texi: Regenerate.
gcc/ada/
* initialize.c: Do not include vxWorks.h and fcntl.h from here.
(__gnat_initialize) [__MINGW32__]: Remove #ifdef and attribute.
(__gnat_initialize) [init_float]: Delete.
(__gnat_initialize) [VxWorks]: Likewise.
(__gnat_initialize) [PA-RISC HP-UX 10]: Likewise.
* runtime.h: Add comment about vxWorks.h include.
This makes us pass down the vector type for the two-operand
SLP node build rather than picking it from operand one which,
when constant or external, could be NULL.
2021-06-16 Richard Biener <rguenther@suse.de>
PR tree-optimization/101083
* tree-vect-slp.c (vect_slp_build_two_operator_nodes): Get
vectype as argument.
(vect_build_slp_tree_2): Adjust.
* gcc.dg/vect/pr97832-4.c: New testcase.
Looks like my patch for PR analyzer/99212 implicitly assumed
little-endian, which the following patch fixes.
Fixes bitfields-1.c on:
- armeb-none-linux-gnueabihf
- cris-elf
- powerpc64-darwin
- s390-linux-gnu
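For context, a hedged sketch of the kind of code involved (not the
actual testcase):

/* The optimizer may narrow this bitfield comparison to something like
   (byte & MASK) == CST; which bits MASK covers depends on endianness,
   so the analyzer must consult BYTES_BIG_ENDIAN when undoing the fold.  */
struct bits { unsigned a : 3; unsigned b : 5; };

int cmp (struct bits x)
{
  return x.a == 5;
}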
gcc/analyzer/ChangeLog:
PR analyzer/99212
PR analyzer/101082
* engine.cc: Include "target.h".
(impl_run_checkers): Log BITS_BIG_ENDIAN, BYTES_BIG_ENDIAN, and
WORDS_BIG_ENDIAN.
* region-model-manager.cc
(region_model_manager::maybe_fold_binop): Move support for masking
via ARG0 & CST into...
(region_model_manager::maybe_undo_optimize_bit_field_compare):
...this new function. Flatten by converting from nested
conditionals to a series of early return statements to reject
failures. Reject if type is not unsigned_char_type_node.
Handle BYTES_BIG_ENDIAN when determining which bits are bound
in the binding_map.
* region-model.h
(region_model_manager::maybe_undo_optimize_bit_field_compare):
New decl.
* store.cc (bit_range::dump): New function.
* store.h (bit_range::dump): New decl.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This restricts the API of the CPOs and other function objects so they
cannot be misused by deriving from them or taking their addresses.
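A simplified sketch of the technique (not the actual libstdc++
definitions): marking the type final prevents deriving from it, and a
deleted operator& makes taking the object's address ill-formed:

#include <iterator>

namespace detail
{
  // Simplified stand-in for a [range.access] CPO type.
  struct Begin final
  {
    void operator&() const = delete;   // &begin is now ill-formed

    template<typename Range>
      constexpr auto operator()(Range&& r) const
      { return std::begin(r); }
  };
}

inline constexpr detail::Begin begin{};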
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (ranges::begin, ranges::end)
(ranges::cbegin, ranges::cend, ranges::rbegin, ranges::rend)
(ranges::crbegin, ranges::crend, ranges::size, ranges::ssize)
(ranges::empty, ranges::data, ranges::cdata): Make types final.
Add deleted operator& overloads.
(ranges::advance, ranges::distance, ranges::next, ranges::prev):
Likewise.
* testsuite/std/ranges/headers/ranges/synopsis.cc: Replace
ill-formed & expressions with using-declarations. Add checks for
other function objects.
The assertion in the subrange constructor causes semantic changes,
because the call to ranges::distance performs additional operations that
are not part of the constructor's specification. That will fail to
compile if the iterator is move-only, because the argument to
ranges::distance is passed by value. It will modify the subrange if the
iterator is not a forward iterator, because incrementing the copy also
affects the _M_begin member. Those problems could be prevented by using
if-constexpr to only do the assertion for copyable forward iterators,
but the call to ranges::distance can also prevent the constructor from
being usable in constant expressions. If the member initializers are
usable in constant expressions, but iterator increments or equality
comparisons are not, then the checks done by __glibcxx_assert might
make constant evaluation fail.
This change removes the assertion. Additionally, a new typedef is
introduced to simplify the declarations using __make_unsigned_like_t on
the iterator's difference type.
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/ranges_util.h (subrange): Add __size_type typedef
and use it to simplify declarations.
(subrange(i, s, n)): Remove assertion.
* testsuite/std/ranges/subrange/constexpr.cc: New test.
By changing __cust_access::__decay_copy from a function template to a
function object we avoid ADL. That means it's fine to call it
unqualified (the compiler won't waste time doing ADL in associated
namespaces, and won't try to complete associated types).
This also makes some minor simplifications to other concepts for the
[range.access] CPOs.
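A simplified sketch of the change's shape (not the actual libstdc++
code): a function object's unqualified name is found by ordinary
lookup, so calling it never triggers ADL, unlike an unqualified call
to a function template:

#include <type_traits>
#include <utility>

// A function object: unqualified calls use ordinary lookup, no ADL.
struct DecayCopy
{
  template<typename T>
    constexpr std::decay_t<T> operator()(T&& t) const
    { return std::forward<T>(t); }
};

inline constexpr DecayCopy decay_copy{};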
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/bits/iterator_concepts.h (__cust_access::__decay_copy):
Replace with function object.
(__cust_access::__member_begin, __cust_access::__adl_begin): Use
__decay_copy unqualified.
* include/bits/ranges_base.h (__member_end, __adl_end):
Likewise. Use __range_iter_t for type of ranges::begin.
(__member_rend): Use correct value category for rbegin argument.
(__member_data): Use __decay_copy unqualified.
(__begin_data): Use __range_iter_t for type of ranges::begin.
For bitwise or, nonzero|X is always nonzero. Make sure we don't drop to
varying in this case.
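To illustrate the property (example ours): OR can only set bits, so if
one operand's range excludes zero the result is nonzero, and a later
comparison can fold:

/* If a is known nonzero, a | b has at least the bits of a set, so
   this comparison folds to 1 once the range of a | b is not varying.  */
int f (int a, int b)
{
  if (a == 0)
    __builtin_unreachable ();
  return (a | b) != 0;
}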
gcc/ChangeLog:
* range-op.cc (operator_bitwise_or::wi_fold): Make sure
nonzero|X is nonzero.
(range_op_bitwise_and_tests): Add tests for above.