gcc/ada/
* sem_ch4.adb (Analyze_Quantified_Expression): Issue warning on
conjunct/disjunct sub-expression of the full expression inside a
quantified expression, when it does not reference the quantified
variable.
The removal of --param=max-fsm-thread-length is causing code
explosion. I thought that --param=max-fsm-thread-path-insns was a
better gauge for path profitability than raw BB length, but it turns
out that we don't take into account PHIs when estimating the number of
statements.
In this PR, we have a sequence of very large PHIs that have us
traversing extremely large paths that blow up the compilation.
We could fix this a couple of different ways. We could avoid
traversing more than a certain number of PHI arguments, or ignore
large PHIs altogether. The old implementation certainly had this
knob, and we could cut things off before we even got to the ranger.
We could also adjust the instruction estimation to take PHIs into
account, but I'm sure we'd mess something else up in the process ;-).
The easiest thing to do is just restore the knob.
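To illustrate the shape of the restored check, here is a minimal
standalone sketch (names and the default value are illustrative, not
taken verbatim from the patch):

#include <cstdio>
#include <vector>

// Illustrative stand-in for the --param default.
constexpr unsigned max_fsm_thread_length = 10;

// Reject a candidate path purely on its raw block count, before any
// per-statement (and in particular per-PHI) estimation is done.
bool profitable_path_p (const std::vector<int> &path_blocks)
{
  if (path_blocks.size () > max_fsm_thread_length)
    {
      std::fprintf (stderr, "FAIL: path too long (%zu blocks)\n",
                    path_blocks.size ());
      return false;
    }
  // ... instruction-count based profitability checks would follow ...
  return true;
}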
At a later time we could tweak this further, for instance,
disregarding empty blocks in the count. BTW, this is the reason I
didn't chop things off in the low-level registry for all threaders: the
forward threader can't really explore very deep paths, but it could
theoretically get there while threading over empty blocks.
This fixes PRs 102814 and 102852, and I bet it solves the Linux kernel
cross-compile issue.
Tested on x86-64 Linux.
gcc/ChangeLog:
PR tree-optimization/102814
* doc/invoke.texi: Document --param=max-fsm-thread-length.
* params.opt: Add --param=max-fsm-thread-length.
* tree-ssa-threadbackward.c
(back_threader_profitability::profitable_path_p): Fail on paths
longer than max-fsm-thread-length.
This is a regression present on the mainline, in the form of a
-fcompare-debug failure at -O3 on a compiler-generated testcase. Fixed
by disregarding a debug statement in the last position of a basic block
when resetting the current location for the outgoing edges.
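A sketch of the idea using the usual GIMPLE iterator idioms (the exact
context inside expand_gimple_basic_block is assumed):

/* Walk back over trailing debug statements so the location used to
   reset the outgoing edges comes from the last real statement; this
   keeps the output identical with and without -g.  */
gimple_stmt_iterator gsi = gsi_last_bb (bb);
while (!gsi_end_p (gsi) && is_gimple_debug (gsi_stmt (gsi)))
  gsi_prev (&gsi);
if (!gsi_end_p (gsi) && gimple_has_location (gsi_stmt (gsi)))
  set_curr_insn_location (gimple_location (gsi_stmt (gsi)));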
gcc/
PR middle-end/102764
* cfgexpand.c (expand_gimple_basic_block): Disregard a final debug
statement to reset the current location for the outgoing edges.
gcc/testsuite/
* gcc.dg/pr102764.c: New test.
This addresses PR ada/100486, which is the bootstrap failure of GCC 11 for
32-bit Windows in the MSYS setup. The PR shows that we cannot rely on
exception propagation being operational during the bootstrap, at least on
the 11 branch, so fix this by removing the problematic raise statement.
gcc/ada/
PR ada/100486
* sem_prag.adb (Check_Valid_Library_Unit_Pragma): Do not raise an
exception as part of the bootstrap.
If GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is not defined, the intent was to
treat the split of the structure (first cacheline (64 bytes) mostly
write-once and read afterwards, second cacheline read-write) just as an
optimization. But as has been reported, with vectorization enabled
at -O2 it can now result in aligned vector 16-byte or larger stores.
When posix_memalign/aligned_alloc/memalign or another similar API is not
available, alloc.c emulates it, but it needs to allocate extra memory
for the dynamic realignment.
So, for the GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC not defined case, this patch
stops using aligned (64) attribute in the middle of the structure and instead
inserts padding that puts the second half of the structure at offset 64 bytes.
And when GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined, the structure was
usually allocated as aligned, but in the orphaned case it could still be
allocated just with gomp_malloc, without guaranteed proper alignment.
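The layout trick can be illustrated with this standalone sketch (field
names invented; the real structure lives in libgomp.h):

#include <cstddef>

struct gomp_work_share_1st_cacheline_sketch
{
  int sched;                   // write-once scheduling kind
  long chunk_size, end, incr;  // ... more write-once fields ...
};

struct gomp_work_share_sketch
{
  gomp_work_share_1st_cacheline_sketch first;
  // Pad instead of using aligned(64) mid-struct, so that plain
  // gomp_malloc'ed storage still puts the rw half at offset 64.
  char pad[64 - sizeof (gomp_work_share_1st_cacheline_sketch)];
  int lock;                    // read-write part starts here
};

static_assert (offsetof (gomp_work_share_sketch, lock) == 64,
               "rw half must start on the second cacheline");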
2021-10-20 Jakub Jelinek <jakub@redhat.com>
PR libgomp/102838
* libgomp.h (struct gomp_work_share_1st_cacheline): New type.
(struct gomp_work_share): Only use aligned(64) attribute if
GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined, otherwise just
add padding before lock to ensure lock is at offset 64 bytes
into the structure.
(gomp_workshare_struct_check1, gomp_workshare_struct_check2):
New poor man's static assertions.
* work.c (gomp_work_share_start): Use gomp_aligned_alloc instead of
gomp_malloc if GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC.
My recent push_local_extern_decl_alias change broke error recovery:
do_pushdecl can return error_mark_node, and set_decl_tls_model can't be
called on that. There are other code paths that store error_mark_node
into DECL_LOCAL_DECL_ALIAS, with the intent to differentiate the cases
where we haven't yet tried to push it into the namespace scope (NULL)
and one where we have tried it but it failed (error_mark_node), but looking
around, there are other spots where we call functions or do processing
which doesn't tolerate error_mark_node.
So, the first hunk fixes the testcase; the others fix what I've spotted
where the fix was easy to figure out (there are, I think, 3 other spots,
mainly for function multiversioning).
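The guard applied in each spot follows the same pattern; a sketch, with
the surrounding context assumed:

/* Only act on the alias if pushing it into the namespace scope
   succeeded; error_mark_node means we tried and failed.  */
tree alias = DECL_LOCAL_DECL_ALIAS (decl);
if (alias && alias != error_mark_node)
  set_decl_tls_model (alias, DECL_TLS_MODEL (decl));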
2021-10-20 Jakub Jelinek <jakub@redhat.com>
PR c++/102642
* name-lookup.c (push_local_extern_decl_alias): Don't call
set_decl_tls_model on error_mark_node.
* decl.c (make_rtl_for_nonlocal_decl): Don't call
set_user_assembler_name on error_mark_node.
* parser.c (cp_parser_oacc_declare): Ignore DECL_LOCAL_DECL_ALIAS
if it is error_mark_node.
(cp_parser_omp_declare_target): Likewise.
* g++.dg/tls/pr102642.C: New test.
There is a lot of fall-out from this patch, as many threading tests
depended on threads that the restrictions introduced by this patch now
disallow.
Some tests have merely shifted the threading to after loop
optimizations, but others ended up with no threading opportunities at
all. Surprisingly some tests ended up with more total threads. It was
a crapshoot all around.
On a positive note, there are 6 tests that no longer XFAIL, and one
guality test which now passes.
I felt a bit queasy about such a fundamental change wrt threading, so I
ran it through my callgrind test harness (.ii files from a bootstrap).
There was no change in overall compilation, DOM, or the VRP threaders.
However, there was a slight increase of 1.63% in the backward threader.
I'm pretty sure we could reduce this if we incorporated the restrictions
into the profitability code; that way we could stop the search when we
ran into one of these restrictions. Not sure it's worth it at this
point.
Tested on x86-64 Linux.
Co-authored-by: Richard Biener <rguenther@suse.de>
gcc/ChangeLog:
* tree-ssa-threadupdate.c (cancel_thread): Dump threading reason
on the same line as the threading cancellation.
(jt_path_registry::cancel_invalid_paths): Avoid rotating loops.
Avoid threading through loop headers where the path remains in the
loop.
libgomp/ChangeLog:
* testsuite/libgomp.graphite/force-parallel-5.c: Remove xfail.
gcc/testsuite/ChangeLog:
* gcc.dg/Warray-bounds-87.c: Remove xfail.
* gcc.dg/analyzer/pr94851-2.c: Remove xfail.
* gcc.dg/graphite/pr69728.c: Remove xfail.
* gcc.dg/graphite/scop-dsyr2k.c: Remove xfail.
* gcc.dg/graphite/scop-dsyrk.c: Remove xfail.
* gcc.dg/shrink-wrap-loop.c: Remove xfail.
* gcc.dg/loop-8.c: Adjust for new threading restrictions.
* gcc.dg/tree-ssa/ifc-20040816-1.c: Same.
* gcc.dg/tree-ssa/pr21559.c: Same.
* gcc.dg/tree-ssa/pr59597.c: Same.
* gcc.dg/tree-ssa/pr71437.c: Same.
* gcc.dg/tree-ssa/pr77445-2.c: Same.
* gcc.dg/tree-ssa/ssa-dom-thread-4.c: Same.
* gcc.dg/tree-ssa/ssa-dom-thread-7.c: Same.
* gcc.dg/vect/bb-slp-16.c: Same.
* gcc.dg/tree-ssa/ssa-dom-thread-6.c: Remove.
* gcc.dg/tree-ssa/ssa-dom-thread-18.c: Remove.
* gcc.dg/tree-ssa/ssa-dom-thread-2a.c: Remove.
* gcc.dg/tree-ssa/ssa-thread-invalid.c: New test.
Compute the unknown size value as a function of the min/max bit of
object_size_type. This transforms into a neat little branchless
sequence on x86_64:
movl %edi, %eax
sarl %eax
xorl $1, %eax
negl %eax
cltq
which should be faster than loading the value from memory. A quick
unscientific test using
`time make check-gcc RUNTESTFLAGS="dg.exp=builtin*"`
shaves about half a second off execution time with this. Also simplify
implementation of unknown_object_size.
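Reconstructed from the assembly above, the function presumably boils
down to something like this (not copied from the patch):

#include <cstddef>

// object_size_type's bit 1 selects a minimum query.  Maximum queries
// (types 0/1) get SIZE_MAX as "unknown", minimum queries (types 2/3)
// get 0, with no branch:
//   (0 >> 1) ^ 1 = 1, negated and sign-extended -> all ones
//   (2 >> 1) ^ 1 = 0, negated                   -> 0
static inline std::size_t
unknown (int object_size_type)
{
  return (std::size_t) -(long) ((object_size_type >> 1) ^ 1);
}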
gcc/ChangeLog:
* tree-object-size.c (unknown): Make into a function. Adjust
all uses.
(unknown_object_size): Simplify implementation.
Signed-off-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
As discussed in [1], this patch adds xfail/target selectors to those
testcases, and also makes a copy of them so that they can be tested
without vectorization.
The newly added xfail/target selectors are used to check the
vectorization capability for contiguous byte/double-byte stores; these
scenarios are exactly the parts of the testcases that regressed after
-O2 vectorization.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2021-October/581456.html.
2021-10-19 Hongtao Liu <hongtao.liu@intel.com>
Kewen Lin <linkw@linux.ibm.com>
gcc/ChangeLog
* doc/sourcebuild.texi (Effective-Target Keywords): Document
vect_slp_v2qi_store, vect_slp_v4qi_store, vect_slp_v8qi_store,
vect_slp_v16qi_store, vect_slp_v2hi_store,
vect_slp_v4hi_store, vect_slp_v2si_store, vect_slp_v4si_store.
gcc/testsuite/ChangeLog
PR middle-end/102722
PR middle-end/102697
PR middle-end/102462
PR middle-end/102706
PR middle-end/102744
* c-c++-common/Wstringop-overflow-2.c: Adjust testcase with new
xfail/target selector.
* gcc.dg/Warray-bounds-51.c: Ditto.
* gcc.dg/Warray-parameter-3.c: Ditto.
* gcc.dg/Wstringop-overflow-14.c: Ditto.
* gcc.dg/Wstringop-overflow-21.c: Ditto.
* gcc.dg/Wstringop-overflow-68.c: Ditto.
* gcc.dg/Wstringop-overflow-76.c: Ditto.
* gcc.dg/Warray-bounds-48.c: Ditto.
* gcc.dg/Wzero-length-array-bounds-2.c: Ditto.
* lib/target-supports.exp (check_vect_slp_aligned_store_usage):
New function.
(check_effective_target_vect_slp_v2qi_store): Ditto.
(check_effective_target_vect_slp_v4qi_store): Ditto.
(check_effective_target_vect_slp_v8qi_store): Ditto.
(check_effective_target_vect_slp_v16qi_store): Ditto.
(check_effective_target_vect_slp_v2hi_store): Ditto.
(check_effective_target_vect_slp_v4hi_store): Ditto.
(check_effective_target_vect_slp_v2si_store): Ditto.
(check_effective_target_vect_slp_v4si_store): Ditto.
* c-c++-common/Wstringop-overflow-2-novec.c: New test.
* gcc.dg/Warray-bounds-51-novec.c: New test.
* gcc.dg/Warray-bounds-48-novec.c: New test.
* gcc.dg/Warray-parameter-3-novec.c: New test.
* gcc.dg/Wstringop-overflow-14-novec.c: New test.
* gcc.dg/Wstringop-overflow-21-novec.c: New test.
* gcc.dg/Wstringop-overflow-76-novec.c: New test.
* gcc.dg/Wzero-length-array-bounds-2-novec.c: New test.
libstdc++-v3/ChangeLog:
* include/bits/ranges_util.h
(__detail::__uses_nonqualification_pointer_conversion): Define
and use it ...
(__detail::__convertible_to_nonslicing): ... here, as per LWG 3470.
* testsuite/std/ranges/subrange/1.cc: New test.
libstdc++-v3/ChangeLog:
* include/std/ranges (iota_view::_Iterator): Befriend iota_view.
(iota_view::_Sentinel): Likewise.
(iota_view::iota_view): Add three overloads, each taking an
iterator/sentinel pair as per LWG 3523.
* testsuite/std/ranges/iota/iota_view.cc (test06): New test.
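A usage sketch of the new constructors (assuming an int range, where the
iterator and the bound have the same type so end() returns an iterator):

#include <ranges>

std::ranges::iota_view<int, int> v (0, 10);
// Per LWG 3523, an iota_view can now be reconstructed from its own
// iterator/sentinel pair:
auto w = decltype(v) (v.begin (), v.end ());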
This patch also reverts r11-3504 since that workaround is now obsolete
after this resolution.
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (view_interface): Forward declare.
(__detail::__is_derived_from_view_interface_fn): Declare.
(__detail::__is_derived_from_view_interface): Define as per LWG 3549.
(enable_view): Adjust as per LWG 3549.
* include/bits/ranges_util.h (view_interface): Don't derive from
view_base.
* include/std/ranges (filter_view): Revert r11-3504 change.
(transform_view): Likewise.
(take_view): Likewise.
(take_while_view): Likewise.
(drop_view): Likewise.
(drop_while_view): Likewise.
(join_view): Likewise.
(lazy_split_view): Likewise.
(split_view): Likewise.
(reverse_view): Likewise.
* testsuite/std/ranges/adaptors/sizeof.cc: Update expected sizes.
* testsuite/std/ranges/view.cc (test_view::test_view): Remove
this default ctor since views no longer need to be
default-initializable.
(test01): New test.
Currently this function only returns a non-zero value for /dev/random
and /dev/urandom. When a hardware instruction such as RDRAND is in use
it should (in theory) be perfectly random and produce 32 bits of entropy
in each 32-bit result. Add a helper function to identify the source of
randomness from the _M_func and _M_file data members, and return a
suitable value when RDRAND or RDSEED is being used.
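A standalone sketch of the classification idea (the enum and its values
are invented; the real code inspects the _M_func and _M_file members):

enum class source { device_file, rdrand, rdseed, other };

double entropy_for (source s)
{
  switch (s)
    {
    case source::rdrand:
    case source::rdseed:
      return 32.0;  // each 32-bit result is assumed fully random
    case source::device_file:
      return 32.0;  // the real code asks the kernel via ioctl here
    default:
      return 0.0;   // can't vouch for anything else
    }
}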
libstdc++-v3/ChangeLog:
* src/c++11/random.cc (which_source): New helper function.
(random_device::_M_getentropy()): Use which_source and return
suitable values for sources other than device files.
* testsuite/26_numerics/random/random_device/entropy.cc: New test.
Some compatibility implementations of x86 intrinsics include
Power intrinsics which require POWER8. Guard them.
emmintrin.h:
- _mm_cmpord_pd: Remove code which was ostensibly for pre-POWER8,
but which indeed depended on POWER8 (vec_cmpgt(v2du)/vcmpgtud).
The "POWER8" version works fine on pre-POWER8.
- _mm_mul_epu32: vec_mule(v4su) uses vmuleuw.
pmmintrin.h:
- _mm_movehdup_ps: vec_mergeo(v4su) uses vmrgow.
- _mm_moveldup_ps: vec_mergee(v4su) uses vmrgew.
smmintrin.h:
- _mm_cmpeq_epi64: vec_cmpeq(v2di) uses vcmpequd.
- _mm_mul_epi32: vec_mule(v4si) uses vmuluwm.
- _mm_cmpgt_epi64: vec_cmpgt(v2di) uses vcmpgtsd.
tmmintrin.h:
- _mm_sign_epi8: vec_neg(v4si) uses vsububm.
- _mm_sign_epi16: vec_neg(v4si) uses vsubuhm.
- _mm_sign_epi32: vec_neg(v4si) uses vsubuwm.
Note that the above three could actually be supported pre-POWER8,
but current GCC does not support them before POWER8.
- _mm_sign_pi8: depends on _mm_sign_epi8.
- _mm_sign_pi16: depends on _mm_sign_epi16.
- _mm_sign_pi32: depends on _mm_sign_epi32.
sse4_2-pcmpgtq.c:
- _mm_cmpgt_epi64: vec_cmpeq(v2di) uses vcmpequd.
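The guarding idiom, sketched for one of the affected intrinsics
(reconstructed in the style of the rs6000 headers, not copied from the
patch):

#ifdef _ARCH_PWR8
extern __inline __m128i
__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
_mm_cmpeq_epi64 (__m128i __A, __m128i __B)
{
  return (__m128i) vec_cmpeq ((__v2di) __A, (__v2di) __B);
}
#endif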
2021-10-19 Paul A. Clarke <pc@us.ibm.com>
gcc
PR target/101893
PR target/102719
* config/rs6000/emmintrin.h: Guard POWER8 intrinsics.
* config/rs6000/pmmintrin.h: Same.
* config/rs6000/smmintrin.h: Same.
* config/rs6000/tmmintrin.h: Same.
gcc/testsuite
* gcc.target/powerpc/sse4_2-pcmpgtq.c: Tighten dg constraints
to minimally Power8.
In r12-826 I tried to remove some redundant steps from the doxygen
build, but they are needed when configure is run via a relative path.
The use of pwd is to resolve the relative path to an absolute one.
libstdc++-v3/ChangeLog:
* doc/Makefile.am (stamp-html-doxygen, stamp-latex-doxygen)
(stamp-man-doxygen): Fix recipes for relative ${top_srcdir}.
* doc/Makefile.in: Regenerate.
This now shows up with gfortran.dg/deferred_type_param_6.f90 due to more
middle-end optimizations, causing failures without this commit.
gcc/fortran/ChangeLog:
* trans-types.c (create_fn_spec): For allocatable/pointer
character(len=:), use 'w' not 'R' as fn spec for the length dummy
argument.
This refactors vect_supportable_dr_alignment to get the misalignment
as input parameter which allows us to elide modifying/restoring
of DR_MISALIGNMENT during alignment peeling analysis which eventually
makes it more straightforward to split out the negative step
handling.
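The refactored query presumably ends up with a shape like this (the
parameter list is assumed, not taken from the patch):

/* Misalignment is now an explicit input, so callers can ask "would
   this access be supported at misalignment M?" without temporarily
   rewriting DR_MISALIGNMENT.  */
extern enum dr_alignment_support
vect_supportable_dr_alignment (vec_info *, dr_vec_info *, tree vectype,
                               int misalignment);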
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_supportable_dr_alignment): Add
misalignment parameter.
* tree-vect-data-refs.c (vect_get_peeling_costs_all_drs):
Do not change DR_MISALIGNMENT in place; instead pass the
adjusted misalignment to vect_supportable_dr_alignment.
(vect_peeling_supportable): Likewise.
(vect_peeling_hash_get_lowest_cost): Adjust.
(vect_enhance_data_refs_alignment): Likewise.
(vect_vfa_access_size): Likewise.
(vect_supportable_dr_alignment): Add misalignment
parameter and simplify.
* tree-vect-stmts.c (get_negative_load_store_type): Adjust.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.
This more clearly expresses the intent (a completely unused, trivial
type) than using char. It's also consistent with the unions in
std::optional.
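A simplified sketch of the resulting shape (not the exact libstdc++
code):

template<typename _Tp>
  union _Uninitialized_sketch
  {
    constexpr _Uninitialized_sketch () : _M_empty () { }

    struct _Empty { } _M_empty;  // completely unused, trivial type
    _Tp _M_storage;
  };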
libstdc++-v3/ChangeLog:
* include/std/variant (_Uninitialized): Use an empty struct
for the unused union member, instead of char.
Another new addition to the C++23 working draft.
The new member functions of std::optional are only defined for C++23,
but the new members of _Optional_payload_base are defined for C++20 so
that they can be used in non-propagating-cache in <ranges>. The
_Optional_payload_base::_M_construct member can also be used in
non-propagating-cache now, because it's constexpr since r12-4389.
There will be an LWG issue about the feature test macro, suggesting that
we should just bump the value of __cpp_lib_optional instead. I haven't
done that here, but it can be changed once consensus is reached on the
change.
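A usage sketch of the three new members (parse is a stand-in helper):

#include <charconv>
#include <optional>
#include <string>

std::optional<int> parse (const std::string &s)
{
  int v{};
  auto [p, ec] = std::from_chars (s.data (), s.data () + s.size (), v);
  return ec == std::errc{} ? std::optional<int>(v) : std::nullopt;
}

int demo (const std::string &s)
{
  return parse (s)
    .and_then ([](int i) {  // only runs if parse succeeded
      return i != 0 ? std::optional<int>(100 / i) : std::nullopt;
    })
    .transform ([](int i) { return i + 1; })          // map the value
    .or_else ([] { return std::optional<int>(0); })   // fallback
    .value ();
}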
libstdc++-v3/ChangeLog:
* include/std/optional (_Optional_payload_base::_Storage): Add
constructor taking a callable function to invoke.
(_Optional_payload_base::_M_apply): New function.
(__cpp_lib_monadic_optional): Define for C++23.
(optional::and_then, optional::transform, optional::or_else):
Define for C++23.
* include/std/ranges (__detail::__cached): Remove.
(__detail::__non_propagating_cache): Remove use of __cached for
contained value. Use _Optional_payload_base::_M_construct and
_Optional_payload_base::_M_apply to set the contained value.
* include/std/version (__cpp_lib_monadic_optional): Define.
* testsuite/20_util/optional/monadic/and_then.cc: New test.
* testsuite/20_util/optional/monadic/or_else.cc: New test.
* testsuite/20_util/optional/monadic/or_else_neg.cc: New test.
* testsuite/20_util/optional/monadic/transform.cc: New test.
* testsuite/20_util/optional/monadic/version.cc: New test.
PR fortran/92482
gcc/fortran/ChangeLog:
* trans-expr.c (gfc_conv_procedure_call): Use TREE_OPERAND not
build_fold_indirect_ref_loc to undo an ADDR_EXPR.
gcc/testsuite/ChangeLog:
* gfortran.dg/bind-c-char-descr.f90: Remove xfail; extend a bit.
The AIX linker's garbage collector might remove the reference to
__tls_get_addr if it is added inside an unused csect, which can be
the case for .data with very simple programs.
gcc/ChangeLog:
2021-10-19 Clément Chigot <clement.chigot@atos.net>
* config/rs6000/rs6000.c (rs6000_xcoff_file_end): Move
__tls_get_addr reference to .text csect.
This passes down the already available alignment scheme and
misalignment to the load/store costing routines, removing
redundant queries.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_get_store_cost): Adjust signature.
(vect_get_load_cost): Likewise.
* tree-vect-data-refs.c (vect_get_data_access_cost): Get
alignment support scheme and misalignment as arguments
and pass them down.
(vect_get_peeling_costs_all_drs): Compute that info here
and note that we shouldn't need to.
* tree-vect-stmts.c (vect_model_store_cost): Get
alignment support scheme and misalignment as arguments.
(vect_get_store_cost): Likewise.
(vect_model_load_cost): Likewise.
(vect_get_load_cost): Likewise.
(vectorizable_store): Pass down alignment support scheme
and misalignment to costing.
(vectorizable_load): Likewise.
This moves the computation of the negative offset that needs to be
applied when we vectorize a negative-stride access into
get_load_store_type, alongside where we compute the actual access
method.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (get_negative_load_store_type): Add
offset output parameter and initialize it.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.
(vectorizable_store): Use offset as computed by
get_load_store_type.
(vectorizable_load): Likewise.
The PR shows that when carefully crafting the runtime alias
condition in the vectorizer we might end up using defs from
the loop preheader but will end up inserting the condition
before the .LOOP_VECTORIZED call. So the following makes sure to
insert invariant statements before that call when we have versioned
the loop, preserving the property the vectorizer relies on.
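A sketch of the insertion (pe is the edge computed by
tree_if_conversion; the surrounding code is assumed):

/* Commit invariant statements on the preheader edge PE of the
   versioned loop so they dominate the .LOOP_VECTORIZED condition,
   rather than emitting them at their original position.  */
if (!gimple_seq_empty_p (stmts))
  gsi_insert_seq_on_edge_immediate (pe, stmts);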
2021-10-19 Richard Biener <rguenther@suse.de>
PR tree-optimization/102827
* tree-if-conv.c (predicate_statements): Add pe parameter
and use that edge to insert invariant stmts on.
(combine_blocks): Pass through pe.
(tree_if_conversion): Compute the edge to insert invariant
stmts on and pass it along.
* gcc.dg/pr102827.c: New testcase.
This patch resolves PR target/102785 where my recent patch to constant
fold saturating addition/subtraction exposed a latent bug in the bfin
backend. The patterns used for Blackfin's V2HI ssaddsub and sssubadd
instructions had the indices/operations swapped. This was harmless
until we started evaluating these expressions at compile-time, when
the mismatch was caught by the testsuite.
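A standalone model of the folded semantics (which lane takes the add and
which the subtract is exactly the detail the md patterns had swapped;
the lane assignment below is illustrative only):

#include <algorithm>
#include <cstdint>

static int16_t sat16 (int32_t v)
{
  return (int16_t) std::clamp (v, -32768, 32767);
}

struct v2hi { int16_t lo, hi; };

// vec_concat of a saturating subtract and a saturating add.
v2hi ssaddsub (v2hi a, v2hi b)
{
  return { sat16 (a.lo - b.lo), sat16 (a.hi + b.hi) };
}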
2021-10-19 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/102785
* config/bfin/bfin.md (addsubv2hi3, subaddv2hi3, ssaddsubv2hi3,
sssubaddv2hi3): Swap the order of operators in vec_concat.
gcc/
* config/rs6000/rs6000-call.c (altivec_expand_lxvr_builtin):
Modify the expansion for sign extension. All extensions are done
within VSX registers.
gcc/testsuite/
* gcc.target/powerpc/p10_vec_xl_sext.c: New test.
This makes us compute the misalignment alongside the alignment support
scheme in get_load_store_type, removing some out-of-place calls to
the DR alignment API.
2021-10-18 Richard Biener <rguenther@suse.de>
* tree-vect-stmts.c (get_group_load_store_type): Add
misalignment output parameter and initialize it.
(get_load_store_type): Likewise.
(vectorizable_store): Remove now redundant queries.
(vectorizable_load): Likewise.
In the following testcase, the c initializer is incorrectly rejected:
while in the s.*a case cxx_eval_* sees .__pfn reads etc., in the
s.*&S::foo case get_member_function_from_ptrfunc creates expressions
which use INTEGER_CSTs with type pointer to METHOD_TYPE, and
cxx_eval_constant_expression rejects any INTEGER_CSTs with pointer
type if they aren't 0.
Either we'd need to make sure we defer such folding until cp_fold (but
that function and pfn_from_ptrmemfunc are used from lots of places), or,
as the following patch does, reject only non-zero INTEGER_CSTs with
pointer types if they don't point to a METHOD_TYPE, in the hope that
all INTEGER_CSTs with POINTER_TYPE to METHOD_TYPE are the result of
folding valid pointer-to-member function expressions.
I don't immediately see how one could create such INTEGER_CSTs otherwise,
cast of integers to PMF is rejected and would have the PMF RECORD_TYPE
anyway, etc.
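A reconstruction of the testcase's shape (details assumed):

struct S
{
  constexpr virtual int foo () const { return 42; }
};

constexpr S s;
constexpr auto a = &S::foo;
constexpr int b = (s.*a) ();        // evaluated via .__pfn reads
constexpr int c = (s.*&S::foo) ();  // folds to an INTEGER_CST with
                                    // pointer-to-METHOD_TYPE type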
2021-10-19 Jakub Jelinek <jakub@redhat.com>
PR c++/102786
* constexpr.c (cxx_eval_constant_expression): Don't reject
INTEGER_CSTs with type POINTER_TYPE to METHOD_TYPE.
* g++.dg/cpp2a/constexpr-virtual19.C: New test.
There are two calls with true as parameter, one is only relevant
for the case of the misalignment being unknown which means the
access is never aligned there, the other is in the peeling hash
insert code used conditional on the unlimited cost model which
adds an artificial count. But the way it works right now is
that it boosts the count if the specific misalignment when not peeling
is unsupported - in particular when the access is currently aligned
we'll query the backend with a misalign value of zero. I've
changed it to boost the peeling when unknown alignment is not
supported instead and noted how we could in principle improve this.
2021-10-19 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_supportable_dr_alignment): Remove
check_aligned argument.
* tree-vect-data-refs.c (vect_supportable_dr_alignment):
Likewise.
(vect_peeling_hash_insert): Add supportable_if_not_aligned
argument and do not call vect_supportable_dr_alignment here.
(vect_peeling_supportable): Adjust.
(vect_enhance_data_refs_alignment): Compute whether the
access is supported with different alignment here and
pass that down to vect_peeling_hash_insert.
(vect_vfa_access_size): Adjust.
* tree-vect-stmts.c (vect_get_store_cost): Likewise.
(vect_get_load_cost): Likewise.
(get_negative_load_store_type): Likewise.
(get_group_load_store_type): Likewise.
(get_load_store_type): Likewise.