libstdc++-v3/ChangeLog:
* include/bits/ranges_util.h
(__detail::__uses_nonqualification_pointer_conversion): Define
and use it ...
(__detail::__convertible_to_non_slicing): ... here, as per LWG 3470.
* testsuite/std/ranges/subrange/1.cc: New test.
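For illustration, LWG 3470 makes conversions that only add cv-qualification
non-slicing, so the following now compiles (a minimal sketch; the variable
names are invented for the example):
#include <ranges>
int main()
{
  int a[3] = { 1, 2, 3 };
  int* b[3] = { &a[0], &a[1], &a[2] };
  // int** -> const int* const* only adds qualification, so per
  // LWG 3470 it is a non-slicing conversion and this is valid.
  std::ranges::subrange<const int* const*> s(b);
  return s.size() == 3 ? 0 : 1;
}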
libstdc++-v3/ChangeLog:
* include/std/ranges (iota_view::_Iterator): Befriend iota_view.
(iota_view::_Sentinel): Likewise.
(iota_view::iota_view): Add three overloads, each taking an
iterator/sentinel pair as per LWG 3523.
* testsuite/std/ranges/iota/iota_view.cc (test06): New test.
This patch also reverts r11-3504 since that workaround is now obsolete
after this resolution.
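A minimal sketch of what LWG 3523 permits (the variable names are invented
for the example):
#include <ranges>
int main()
{
  auto v = std::views::iota(0, 10);
  // LWG 3523: reconstruct the same iota_view from its iterator/sentinel pair.
  decltype(v) w(v.begin(), v.end());
  return (w.front() == 0 && w.back() == 9) ? 0 : 1;
}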
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (view_interface): Forward declare.
(__detail::__is_derived_from_view_interface_fn): Declare.
(__detail::__is_derived_from_view_interface): Define as per LWG 3549.
(enable_view): Adjust as per LWG 3549.
* include/bits/ranges_util.h (view_interface): Don't derive from
view_base.
* include/std/ranges (filter_view): Revert r11-3504 change.
(transform_view): Likewise.
(take_view): Likewise.
(take_while_view): Likewise.
(drop_view): Likewise.
(drop_while_view): Likewise.
(join_view): Likewise.
(lazy_split_view): Likewise.
(split_view): Likewise.
(reverse_view): Likewise.
* testsuite/std/ranges/adaptors/sizeof.cc: Update expected sizes.
* testsuite/std/ranges/view.cc (test_view::test_view): Remove
this default ctor since views no longer need to be default-initializable.
(test01): New test.
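A minimal sketch of the user-visible effect of LWG 3549 (the type name is
invented for the example):
#include <ranges>
// Deriving from view_interface now opts a type into enable_view
// without the type also deriving from view_base.
struct my_view : std::ranges::view_interface<my_view>
{
  int* begin() const { return nullptr; }
  int* end() const { return nullptr; }
};
static_assert(std::ranges::enable_view<my_view>);
static_assert(std::ranges::view<my_view>);
int main() { }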
Currently this function only returns a non-zero value for /dev/random
and /dev/urandom. When a hardware instruction such as RDRAND is in use,
it should (in theory) be perfectly random and produce 32 bits of entropy
in each 32-bit result. Add a helper function to identify the source of
randomness from the _M_func and _M_file data members, and return a
suitable value when RDRAND or RDSEED is being used.
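For example (a sketch; whether the "rdrand" token is accepted depends on
hardware support):
#include <random>
#include <iostream>
int main()
{
  std::random_device dev;               // default source
  std::cout << dev.entropy() << '\n';   // entropy estimate for the source
  // On CPUs with the RDRAND instruction, the libstdc++-specific token
  // selects the hardware source, which now also reports its entropy:
  // std::random_device hw("rdrand");
  // std::cout << hw.entropy() << '\n'; // e.g. 32 after this change
}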
libstdc++-v3/ChangeLog:
* src/c++11/random.cc (which_source): New helper function.
(random_device::_M_getentropy()): Use which_source and return
suitable values for sources other than device files.
* testsuite/26_numerics/random/random_device/entropy.cc: New test.
Some compatibility implementations of x86 intrinsics include
Power intrinsics which require POWER8. Guard them.
emmintrin.h:
- _mm_cmpord_pd: Remove code which was ostensibly for pre-POWER8,
but which indeed depended on POWER8 (vec_cmpgt(v2du)/vcmpgtud).
The "POWER8" version works fine on pre-POWER8.
- _mm_mul_epu32: vec_mule(v4su) uses vmuleuw.
pmmintrin.h:
- _mm_movehdup_ps: vec_mergeo(v4su) uses vmrgow.
- _mm_moveldup_ps: vec_mergee(v4su) uses vmrgew.
smmintrin.h:
- _mm_cmpeq_epi64: vec_cmpeq(v2di) uses vcmpequd.
- _mm_mul_epi32: vec_mule(v4si) uses vmuluwm.
- _mm_cmpgt_epi64: vec_cmpgt(v2di) uses vcmpgtsd.
tmmintrin.h:
- _mm_sign_epi8: vec_neg(v4si) uses vsububm.
- _mm_sign_epi16: vec_neg(v4si) uses vsubuhm.
- _mm_sign_epi32: vec_neg(v4si) uses vsubuwm.
Note that the above three could actually be supported pre-POWER8,
but current GCC does not support them before POWER8.
- _mm_sign_pi8: depends on _mm_sign_epi8.
- _mm_sign_pi16: depends on _mm_sign_epi16.
- _mm_sign_pi32: depends on _mm_sign_epi32.
sse4_2-pcmpgtq.c:
- _mm_cmpgt_epi64: vec_cmpeq(v2di) uses vcmpequd.
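The guarding follows the usual pattern in these headers, roughly (a sketch
based on the smmintrin.h style; the function body is abbreviated from the
real header):
#ifdef _ARCH_PWR8
extern __inline __m128i
__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
_mm_cmpeq_epi64 (__m128i __A, __m128i __B)
{
  /* vec_cmpeq on v2di requires the POWER8 vcmpequd instruction.  */
  return (__m128i) vec_cmpeq ((__v2di) __A, (__v2di) __B);
}
#endif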
2021-10-19 Paul A. Clarke <pc@us.ibm.com>
gcc
PR target/101893
PR target/102719
* config/rs6000/emmintrin.h: Guard POWER8 intrinsics.
* config/rs6000/pmmintrin.h: Same.
* config/rs6000/smmintrin.h: Same.
* config/rs6000/tmmintrin.h: Same.
gcc/testsuite
* gcc.target/powerpc/sse4_2-pcmpgtq.c: Tighten dg constraints
to minimally Power8.
In r12-826 I tried to remove some redundant steps from the doxygen
build, but they are needed when configure is run via a relative path. The
use of pwd is to resolve the relative path to an absolute one.
libstdc++-v3/ChangeLog:
* doc/Makefile.am (stamp-html-doxygen)
(stamp-latex-doxygen, stamp-man-doxygen): Fix recipes for
relative ${top_srcdir}.
* doc/Makefile.in: Regenerate.
This now shows up with gfortran.dg/deferred_type_param_6.f90 due to more
middle-end optimizations, which cause failures without this commit.
gcc/fortran/ChangeLog:
* trans-types.c (create_fn_spec): For allocatable/pointer
character(len=:), use 'w' not 'R' as fn spec for the length dummy
argument.
This refactors vect_supportable_dr_alignment to get the misalignment
as input parameter which allows us to elide modifying/restoring
of DR_MISALIGNMENT during alignment peeling analysis which eventually
makes it more straightforward to split out the negative step
handling.
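Schematically (not the exact code; argument lists are abbreviated), the
peeling analysis changes from temporarily rewriting the misalignment to
simply passing it down:
  /* Before: override and restore DR_MISALIGNMENT around the query.  */
  save_misalignment = DR_MISALIGNMENT (dr_info);
  SET_DR_MISALIGNMENT (dr_info, 0);
  supportable = vect_supportable_dr_alignment (vinfo, dr_info, ...);
  SET_DR_MISALIGNMENT (dr_info, save_misalignment);
  /* After: pass the would-be misalignment directly.  */
  supportable = vect_supportable_dr_alignment (vinfo, dr_info, ..., 0);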
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_supportable_dr_alignment): Add
misalignment parameter.
* tree-vect-data-refs.c (vect_get_peeling_costs_all_drs):
Do not change DR_MISALIGNMENT in place, instead pass the
adjusted misalignment to vect_supportable_dr_alignment.
(vect_peeling_supportable): Likewise.
(vect_peeling_hash_get_lowest_cost): Adjust.
(vect_enhance_data_refs_alignment): Likewise.
(vect_vfa_access_size): Likewise.
(vect_supportable_dr_alignment): Add misalignment
parameter and simplify.
* tree-vect-stmts.c (get_negative_load_store_type): Adjust.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.
This more clearly expresses the intent (a completely unused, trivial
type) than using char. It's also consistent with the unions in
std::optional.
libstdc++-v3/ChangeLog:
* include/std/variant (_Uninitialized): Use an empty struct
for the unused union member, instead of char.
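A simplified sketch of the idea (the real code in <variant> differs in
details):
template<typename _Type>
struct _Uninitialized
{
  struct _Empty_byte { };   // completely unused, trivial type
  union
  {
    _Empty_byte _M_empty;   // previously a char
    _Type _M_storage;
  };
};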
Another new addition to the C++23 working draft.
The new member functions of std::optional are only defined for C++23,
but the new members of _Optional_payload_base are defined for C++20 so
that they can be used in non-propagating-cache in <ranges>. The
_Optional_payload_base::_M_construct member can also be used in
non-propagating-cache now, because it's constexpr since r12-4389.
There will be an LWG issue about the feature test macro, suggesting that
we should just bump the value of __cpp_lib_optional instead. I haven't
done that here, but it can be changed once consensus is reached on the
change.
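Usage looks like this (compiled with -std=c++2b; the parse/describe helpers
are invented for the example):
#include <optional>
#include <string>
std::optional<int> parse(const std::string& s)
{
  if (s.empty() || s.find_first_not_of("0123456789") != std::string::npos)
    return std::nullopt;
  return std::stoi(s);
}
std::optional<std::string> describe(const std::string& s)
{
  return parse(s)
    .and_then([](int i) -> std::optional<int> {
      return i > 0 ? std::optional<int>(i) : std::nullopt;
    })
    .transform([](int i) { return std::to_string(i); })
    .or_else([] { return std::optional<std::string>("invalid"); });
}
int main()
{
  return describe("42").value() == "42" ? 0 : 1;
}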
libstdc++-v3/ChangeLog:
* include/std/optional (_Optional_payload_base::_Storage): Add
constructor taking a callable function to invoke.
(_Optional_payload_base::_M_apply): New function.
(__cpp_lib_monadic_optional): Define for C++23.
(optional::and_then, optional::transform, optional::or_else):
Define for C++23.
* include/std/ranges (__detail::__cached): Remove.
(__detail::__non_propagating_cache): Remove use of __cached for
contained value. Use _Optional_payload_base::_M_construct and
_Optional_payload_base::_M_apply to set the contained value.
* include/std/version (__cpp_lib_monadic_optional): Define.
* testsuite/20_util/optional/monadic/and_then.cc: New test.
* testsuite/20_util/optional/monadic/or_else.cc: New test.
* testsuite/20_util/optional/monadic/or_else_neg.cc: New test.
* testsuite/20_util/optional/monadic/transform.cc: New test.
* testsuite/20_util/optional/monadic/version.cc: New test.
PR fortran/92482
gcc/fortran/ChangeLog:
* trans-expr.c (gfc_conv_procedure_call): Use TREE_OPERAND not
build_fold_indirect_ref_loc to undo an ADDR_EXPR.
gcc/testsuite/ChangeLog:
* gfortran.dg/bind-c-char-descr.f90: Remove xfail; extend a bit.
The AIX linker's garbage collector might remove the reference to
__tls_get_addr if it is added inside an unused csect, which can be
the case for .data with very simple programs.
gcc/ChangeLog:
2021-10-19 Clément Chigot <clement.chigot@atos.net>
* config/rs6000/rs6000.c (rs6000_xcoff_file_end): Move
__tls_get_addr reference to .text csect.
This passes down the already available alignment scheme and
misalignment to the load/store costing routines, removing
redundant queries.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_get_store_cost): Adjust signature.
(vect_get_load_cost): Likewise.
* tree-vect-data-refs.c (vect_get_data_access_cost): Get
alignment support scheme and misalignment as arguments
and pass them down.
(vect_get_peeling_costs_all_drs): Compute that info here
and note that we shouldn't need to.
* tree-vect-stmts.c (vect_model_store_cost): Get
alignment support scheme and misalignment as arguments.
(vect_get_store_cost): Likewise.
(vect_model_load_cost): Likewise.
(vect_get_load_cost): Likewise.
(vectorizable_store): Pass down alignment support scheme
and misalignment to costing.
(vectorizable_load): Likewise.
This moves the computation of a negative offset that needs to be
applied when we vectorize a negative stride access to
get_load_store_type alongside where we compute the actual access
method.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (get_negative_load_store_type): Add
offset output parameter and initialize it.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.
(vectorizable_store): Use offset as computed by
get_load_store_type.
(vectorizable_load): Likewise.
The PR shows that when carefully crafting the runtime alias
condition in the vectorizer we might end up using defs from
the loop preheader while inserting the condition
before the .LOOP_VECTORIZED call. So the following makes
sure to insert invariant stmts before that call when we have
versioned the loop, preserving the invariant the vectorizer
relies on.
2021-10-19 Richard Biener <rguenther@suse.de>
PR tree-optimization/102827
* tree-if-conv.c (predicate_statements): Add pe parameter
and use that edge to insert invariant stmts on.
(combine_blocks): Pass through pe.
(tree_if_conversion): Compute the edge to insert invariant
stmts on and pass it along.
* gcc.dg/pr102827.c: New testcase.
This patch resolves PR target/102785 where my recent patch to constant
fold saturating addition/subtraction exposed a latent bug in the bfin
backend. The patterns used for Blackfin's V2HI ssaddsub and sssubadd
instructions had the indices/operations swapped. This was harmless
until we started evaluating these expressions at compile-time, when
the mismatch was caught by the testsuite.
2021-10-19 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/102785
* config/bfin/bfin.md (addsubv2hi3, subaddv2hi3, ssaddsubv2hi3,
sssubaddv2hi3): Swap the order of operators in vec_concat.
gcc/
* config/rs6000/rs6000-call.c (altivec_expand_lxvr_builtin):
Modify the expansion for sign extension. All extensions are done
within VSX registers.
gcc/testsuite/
* gcc.target/powerpc/p10_vec_xl_sext.c: New test.
This makes us compute the misalignment alongside the alignment support
scheme in get_load_store_type, removing some out-of-place calls to
the DR alignment API.
2021-10-18 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (get_group_load_store_type): Add
misalignment output parameter and initialize it.
(get_load_store_type): Likewise.
(vectorizable_store): Remove now redundant queries.
(vectorizable_load): Likewise.
The following testcase's c initializer is incorrectly rejected:
while in the s.*a case cxx_eval_* sees .__pfn reads etc.,
in the s.*&S::foo case get_member_function_from_ptrfunc creates
expressions which use INTEGER_CSTs with type of pointer to METHOD_TYPE,
and cxx_eval_constant_expression rejects any INTEGER_CSTs with pointer
type if they aren't 0.
Either we'd need to defer such folding until cp_fold (but
get_member_function_from_ptrfunc and pfn_from_ptrmemfunc are used from
lots of places), or, as the following patch does, reject non-zero
INTEGER_CSTs with pointer type only if they don't point to METHOD_TYPE,
in the hope that all such INTEGER_CSTs with POINTER_TYPE to METHOD_TYPE
are the result of folding valid pointer-to-member function expressions.
I don't immediately see how one could create such INTEGER_CSTs otherwise:
a cast of an integer to a PMF is rejected and would have the PMF
RECORD_TYPE anyway, etc.
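An example along the lines of the new testcase (simplified; the actual
constexpr-virtual19.C may differ; compiled with -std=c++20):
struct S
{
  constexpr virtual int foo () const { return 42; }
};
constexpr S s;
constexpr auto a = &S::foo;
constexpr int b = (s.*a)();        // the s.*a case, already accepted
constexpr int c = (s.*&S::foo)();  // previously rejected
static_assert (b == 42 && c == 42);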
2021-10-19 Jakub Jelinek <jakub@redhat.com>
PR c++/102786
* constexpr.c (cxx_eval_constant_expression): Don't reject
INTEGER_CSTs with type POINTER_TYPE to METHOD_TYPE.
* g++.dg/cpp2a/constexpr-virtual19.C: New test.
There are two calls passing true for the check_aligned argument: one is
only relevant when the misalignment is unknown, which means the
access is never aligned there; the other is in the peeling hash
insert code used conditional on the unlimited cost model, which
adds an artificial count. But the way it works right now is
that it boosts the count if the specific misalignment when not peeling
is unsupported - in particular, when the access is currently aligned
we'll query the backend with a misalign value of zero. I've
changed it to boost the peeling when unknown alignment is not
supported instead and noted how we could in principle improve this.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_supportable_dr_alignment): Remove
check_aligned argument.
* tree-vect-data-refs.c (vect_supportable_dr_alignment):
Likewise.
(vect_peeling_hash_insert): Add supportable_if_not_aligned
argument and do not call vect_supportable_dr_alignment here.
(vect_peeling_supportable): Adjust.
(vect_enhance_data_refs_alignment): Compute whether the
access is supported with different alignment here and
pass that down to vect_peeling_hash_insert.
(vect_vfa_access_size): Adjust.
* tree-vect-stmts.c (vect_get_store_cost): Likewise.
(vect_get_load_cost): Likewise.
(get_negative_load_store_type): Likewise.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.
PR tree-optimization/102796
gcc/
* gimple-range.cc (gimple_ranger::range_on_edge): Process EH edges
normally. Return get_tree_range for non gimple_range_ssa_p names.
(gimple_ranger::range_of_stmt): Use get_tree_range for non
gimple_range_ssa_p names.
gcc/testsuite/
* g++.dg/pr102796.C: New.
Add tests to check that explicitly specifying the containing procedure as the
base name for declare variant works.
2021-10-18 Kwok Cheung Yeung <kcy@codesourcery.com>
gcc/testsuite/
* gfortran.dg/gomp/declare-variant-15.f90 (variant2, base2, test2):
Add tests.
* gfortran.dg/gomp/declare-variant-16.f90 (base2, variant2, test2):
Add tests.
In r208350 I improved the diagnostic location of the initializer-list
pedwarn in C++98 mode on crash90.C, but didn't adjust the testcase to verify
the location, so reverting that change didn't break regression testing.
gcc/testsuite/ChangeLog:
* g++.dg/template/crash90.C: Check location of pedwarn.
This fixes handling of the return value of vect_supportable_dr_alignment
in multiple places. We should use the enum type and not int for
storage and not auto-convert the enum return value to bool. It also
commonizes the read/write path in vect_supportable_dr_alignment.
2021-10-18 Richard Biener <rguenther@suse.de>
* tree-vect-data-refs.c (vect_peeling_hash_insert): Do
not auto-convert dr_alignment_support to bool.
(vect_peeling_supportable): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_supportable_dr_alignment): Commonize read/write case.
* tree-vect-stmts.c (vect_get_store_cost): Use
dr_alignment_support, not int, for the vect_supportable_dr_alignment
result.
(vect_get_load_cost): Likewise.
This is a minor cleanup to bail out early if the result of
__builtin_object_size is not assigned to anything and avoid initializing
the object size arrays.
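For example (illustrative), in the following the call's result is unused,
so there is no LHS to fold and the pass can return immediately:
void
f (char *p)
{
  /* No LHS: nothing for the object-size pass to do.  */
  __builtin_object_size (p, 0);
}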
gcc/ChangeLog:
* tree-object-size.c (object_sizes_execute): Consolidate LHS
null check and do it early.
Signed-off-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
This uses the computed alignment scheme in vectorizable_store
much like vectorizable_load does instead of re-querying
it via aligned_access_p.
2021-10-18 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (vectorizable_store): Use the
computed alignment scheme instead of querying
aligned_access_p.
The following avoids the recomputation of the alignment scheme
which is already fully determined by get_load_store_type.
2021-10-18 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (vectorizable_store): Do not recompute
alignment scheme already determined by get_load_store_type.
If numa-domains is used with num-places count, sometimes the function
could create more places than requested and crash. This depended on the
content of the /sys/devices/system/node/online file, e.g. if the file
contains
0-1,16-17
and all NUMA nodes contain at least one CPU in the cpuset of the program,
then numa_domains(2) or numa_domains(4) (or 5+) work fine while
numa_domains(1) or numa_domains(3) misbehave. I.e. the function was able
to stop after reaching the limit at the ',' separators (or trivially at
the end), but not within the ranges.
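Schematically, the fix adds the length check to the inner range loop as
well (variable and helper names here are illustrative, not the actual
libgomp code):
  /* Parse entries like "0-1,16-17", creating at most COUNT places.  */
  while (parse_next_range (&nfirst, &nlast)  /* hypothetical helper */
         && gomp_places_list_len < count)
    for (unsigned long n = nfirst;
         n <= nlast && gomp_places_list_len < count;  /* the added check */
         n++)
      add_numa_domain_place (n);  /* hypothetical helper */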
2021-10-18 Jakub Jelinek <jakub@redhat.com>
* config/linux/affinity.c (gomp_affinity_init_numa_domains): Add
&& gomp_places_list_len < count after nfirst <= nlast loop condition.
On x86-64,
$ make check RUNTESTFLAGS="--target_board='unix{-m32,}'"
can be used to test both 64-bit and 32-bit targets. Require ia32 target
instead of explicit -m32 for 32-bit only test.
* gcc.target/i386/387-12.c (dg-do compile): Require ia32.
(dg-options): Remove -m32.
My recent attempts to come up with a testcase for my patch to evaluate
ss_plus in simplify-rtx.c, identified a missed optimization opportunity
(that's potentially a long-standing regression): the RTL optimizers no longer
place constants in the constant pool.
The motivating x86_64 example is the simple program:
typedef char v8qi __attribute__ ((vector_size (8)));
v8qi foo()
{
v8qi tx = { 1, 0, 0, 0, 0, 0, 0, 0 };
v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
v8qi t = __builtin_ia32_paddsb(tx, ty);
return t;
}
which (with my previous patch) currently results in:
foo: movq .LC0(%rip), %xmm0
movq .LC1(%rip), %xmm1
paddsb %xmm1, %xmm0
ret
even though the RTL contains the result in a REG_EQUAL note:
(insn 7 6 12 2 (set (reg:V8QI 83)
(ss_plus:V8QI (reg:V8QI 84)
(reg:V8QI 85))) "ssaddqi3.c":7:12 1419 {*mmx_ssaddv8qi3}
(expr_list:REG_DEAD (reg:V8QI 85)
(expr_list:REG_DEAD (reg:V8QI 84)
(expr_list:REG_EQUAL (const_vector:V8QI [
(const_int 3 [0x3])
(const_int 0 [0]) repeated x7
])
(nil)))))
Together with the patch below, GCC will now generate the much
more sensible:
foo: movq .LC2(%rip), %xmm0
ret
My first approach was to look in cse.c (where the REG_EQUAL note gets
added) and notice that the constant pool handling functionality has been
unreachable for a while. A quick search for constant_pool_entries_cost
shows that it's initialized to zero, but never set to a non-zero value,
meaning that force_const_mem is never called. This functionality used
to work way back in 2003, but has been lost over time:
https://gcc.gnu.org/pipermail/gcc-patches/2003-October/116435.html
The changes to cse.c below restore this functionality (placing suitable
constants in the constant pool) with two significant refinements:
(i) it only attempts to do this if the function already uses a constant
pool (thanks to the availability of crtl->uses_constant_pool since 2003).
(ii) it allows different constants (i.e. modes) to have different costs,
so that floating point "doubles" and 64-bit, 128-bit, 256-bit and 512-bit
vectors don't all have to share the same cost. Back in 2003, the
assumption was that everything in a constant pool had the same
cost, hence the global variable constant_pool_entries_cost.
Although this is a useful CSE fix, it turns out that it doesn't cure
my motivating problem above. CSE only considers a single instruction,
so determines that it's cheaper to perform the ss_plus (COSTS_N_INSNS(1))
than read the result from the constant pool (COSTS_N_INSNS(2)). It's
only when the other reads from the constant pool are also eliminated
that this transformation is a win. Hence a better place to perform
this transformation is in combine, where after failing to "recog" the
load of a suitable constant, it can retry after calling force_const_mem.
This achieves the desired transformation and allows the backend insn_cost
call-back to control whether or not using the constant pool is preferable.
Alas, it's rare to change code generation without affecting something in
GCC's testsuite. On x86_64-pc-linux-gnu there were two families of new
failures (and I'd predict similar benign fallout on other platforms).
One failure was gcc.target/i386/387-12.c (aka PR target/26915), where
the test is missing an explicit -m32 flag. On i686, it's very reasonable
to materialize -1.0 using "fld1; fchs", but on x86_64-pc-linux-gnu we
currently generate the awkward:
testm1: fld1
fchs
fstpl -8(%rsp)
movsd -8(%rsp), %xmm0
ret
which combine now very reasonably simplifies to just:
testm1: movsd .LC3(%rip), %xmm0
ret
The other class of x86_64-pc-linux-gnu failure was from materialization
of vector constants using vpbroadcast (e.g. gcc.target/i386/pr90773-17.c)
where the decision is finely balanced; the load of an integer register
with an immediate constant, followed by a vpbroadcast is deemed to be
COSTS_N_INSNS(2), whereas a load from the constant pool is also reported
as COSTS_N_INSNS(2). My solution is to tweak the i386.c's rtx_costs
so that all other things being equal, an instruction (sequence) that
accesses memory is fractionally more expensive than one that doesn't.
2021-10-18 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* combine.c (recog_for_combine): For an unrecognized move/set of
a constant, try force_const_mem to place it in the constant pool.
* cse.c (constant_pool_entries_cost, constant_pool_entries_regcost):
Delete global variables (that are no longer assigned a cost value).
(cse_insn): Simplify logic for deciding whether to place a folded
constant in the constant pool using force_const_mem.
(cse_main): Remove zero initialization of constant_pool_entries_cost
and constant_pool_entries_regcost.
* config/i386/i386.c (ix86_rtx_costs): Make memory accesses
fractionally more expensive, when optimizing for speed.
gcc/testsuite/ChangeLog
* gcc.target/i386/387-12.c: Add explicit -m32 option.
Blackfin processors support a ONES instruction that implements a
32-bit popcount returning a 16-bit result. This instruction was
previously described by GCC's bfin backend using an UNSPEC, which
this patch changes to use a popcount:SI rtx thats capture its semantics,
allowing it to evaluated and simplified at compile-time. I've decided
to keep the instruction name the same (avoiding any changes to the
__builtin_bfin_ones machinery), but have provided popcountsi2 and
popcounthi2 expanders so that the middle-end can use this instruction
to implement __builtin_popcount (and __builtin_parity).
The new testcase ones.c
short foo ()
{
int t = 5;
short r = __builtin_bfin_ones(t);
return r;
}
previously generated:
_foo: nop;
nop;
R0 = 5 (X);
R0.L = ONES R0;
rts;
with this patch, now generates:
_foo: nop;
nop;
nop;
R0 = 2 (X);
rts;
The new testcase popcount.c
int foo(int x)
{
return __builtin_popcount(x);
}
previously generated:
_foo: [--SP] = RETS;
SP += -12;
call ___popcountsi2;
SP += 12;
RETS = [SP++];
rts;
now generates:
_foo: nop;
nop;
R0.L = ONES R0;
R0 = R0.L (Z);
rts;
And the new testcase parity.c
int foo(int x)
{
return __builtin_parity(x);
}
previously generated:
_foo: [--SP] = RETS;
SP += -12;
call ___paritysi2;
SP += 12;
RETS = [SP++];
rts;
now generates:
_foo: nop;
R1 = 1 (X);
R0.L = ONES R0;
R0 = R1 & R0;
rts;
2021-10-18 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/bfin/bfin.md (define_constants): Remove UNSPEC_ONES.
(define_insn "ones"): Replace UNSPEC_ONES with a truncate of
a popcount, allowing compile-time evaluation/simplification.
(popcountsi2, popcounthi2): New expanders using a "ones" insn.
gcc/testsuite/ChangeLog
* gcc.target/bfin/ones.c: New test case.
* gcc.target/bfin/parity.c: New test case.
* gcc.target/bfin/popcount.c: New test case.