Commit patch #3 of 4 for Power7 VSX support

Co-Authored-By: Pat Haugen <pthaugen@us.ibm.com>
Co-Authored-By: Revital Eres <eres@il.ibm.com>

From-SVN: r150018
Michael Meissner, 2009-07-23 16:05:37 +0000 (committed by Michael Meissner)
commit a72c65c754 (parent 2304116044)
13 changed files with 3128 additions and 1625 deletions


@@ -1,3 +1,12 @@
2009-07-17 Michael Meissner <meissner@linux.vnet.ibm.com>
PR boehm-gc/40785
* include/private/gc_locks.h (GC_test_and_set): If GCC 4.4, use
the __sync_lock_test_and_set and __sync_lock_release builtins on
the powerpc. If not GCC 4.4, fix up the constraints so that it
builds without error.
(GC_clear): Ditto.
2009-07-17 Kai Tietz <kai.tietz@onevision.com>
* configure.ac: Add rule for mingw targets to add -DGC_BUILD=1 to


@@ -139,49 +139,35 @@
# define GC_TEST_AND_SET_DEFINED
# endif
# if defined(POWERPC)
# if 0 /* CPP_WORDSZ == 64 totally broken to use int locks with ldarx */
inline static int GC_test_and_set(volatile unsigned int *addr) {
unsigned long oldval;
unsigned long temp = 1; /* locked value */
__asm__ __volatile__(
"1:\tldarx %0,0,%3\n" /* load and reserve */
"\tcmpdi %0, 0\n" /* if load is */
"\tbne 2f\n" /* non-zero, return already set */
"\tstdcx. %2,0,%1\n" /* else store conditional */
"\tbne- 1b\n" /* retry if lost reservation */
"\tsync\n" /* import barrier */
"2:\t\n" /* oldval is zero if we set */
: "=&r"(oldval), "=p"(addr)
: "r"(temp), "1"(addr)
: "cr0","memory");
return (int)oldval;
}
# define GC_TEST_AND_SET_DEFINED
# define GC_CLEAR_DEFINED
# if (__GNUC__>4)||((__GNUC__==4)&&(__GNUC_MINOR__>=4))
# define GC_test_and_set(addr) __sync_lock_test_and_set (addr, 1)
# define GC_clear(addr) __sync_lock_release (addr)
# else
inline static int GC_test_and_set(volatile unsigned int *addr) {
int oldval;
int temp = 1; /* locked value */
__asm__ __volatile__(
"1:\tlwarx %0,0,%3\n" /* load and reserve */
"\n1:\n"
"\tlwarx %0,%y3\n" /* load and reserve, 32-bits */
"\tcmpwi %0, 0\n" /* if load is */
"\tbne 2f\n" /* non-zero, return already set */
"\tstwcx. %2,0,%1\n" /* else store conditional */
"\tstwcx. %2,%y3\n" /* else store conditional */
"\tbne- 1b\n" /* retry if lost reservation */
"\tsync\n" /* import barrier */
"2:\t\n" /* oldval is zero if we set */
: "=&r"(oldval), "=p"(addr)
: "r"(temp), "1"(addr)
: "=&r"(oldval), "=m"(addr)
: "r"(temp), "Z"(addr)
: "cr0","memory");
return oldval;
}
# endif
# define GC_TEST_AND_SET_DEFINED
inline static void GC_clear(volatile unsigned int *addr) {
__asm__ __volatile__("lwsync" : : : "memory");
*(addr) = 0;
}
# define GC_CLEAR_DEFINED
# endif
# endif
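The hunk above switches boehm-gc's PowerPC test-and-set to the GCC atomic builtins when GCC >= 4.4 is available, instead of hand-written lwarx/stwcx. asm. A minimal sketch of that path (the function names `my_test_and_set`/`my_clear` are illustrative, not boehm-gc's):

```c
/* Sketch of the GCC >= 4.4 path from the patch: the generic __sync
   builtins replace the hand-written lwarx/stwcx. inline asm. */
static volatile unsigned int lock_word = 0;

/* Returns the previous value: 0 means the lock was just acquired. */
static int my_test_and_set(volatile unsigned int *addr) {
  return (int) __sync_lock_test_and_set(addr, 1);  /* acquire barrier */
}

static void my_clear(volatile unsigned int *addr) {
  __sync_lock_release(addr);  /* release barrier + store of 0 */
}
```

On PowerPC the builtin expands to essentially the same load-reserve/store-conditional loop the old asm spelled out, but the compiler chooses the correct constraints and barriers, which is exactly the class of constraint bug this patch fixes.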
# if defined(ALPHA)
inline static int GC_test_and_set(volatile unsigned int * addr)


@@ -1,3 +1,188 @@
2009-07-22 Michael Meissner <meissner@linux.vnet.ibm.com>
Pat Haugen <pthaugen@us.ibm.com>
Revital Eres <eres@il.ibm.com>
* config/rs6000/vector.md: New file. Move most of the vector
expander support here from altivec.md to allow for the VSX vector
unit in the future. Add support for secondary_reload patterns.
Rewrite the patterns for vector comparison, and vector comparison
predicate instructions so that the RTL expresses the desired
behavior, instead of using unspec.
* config/rs6000/constraints.md ("f" constraint): Use
rs6000_constraints to hold the precalculated register class.
("d" constraint): Ditto.
("wd" constraint): New constraint for VSX.
("wf" constraint): Ditto.
("ws" constraint): Ditto.
("wa" constraint): Ditto.
("wZ" constraint): Ditto.
("j" constraint): Ditto.
* config/rs6000/predicates.md (vsx_register_operand): New
predicate for VSX.
(vfloat_operand): New predicate for vector.md.
(vint_operand): Ditto.
(vlogical_operand): Ditto.
(easy_fp_constant): If VSX, 0.0 is an easy constant.
(easy_vector_constant): Add VSX support.
(altivec_indexed_or_indirect_operand): New predicate for
recognizing Altivec style memory references with AND -16.
* config/rs6000/rs6000.c (rs6000_vector_reload): New static global
for vector secondary reload support.
(rs6000_vector_reg_class): Delete, replacing it with rs6000_constraints.
(rs6000_vsx_reg_class): Ditto.
(rs6000_constraints): New array to hold the register classes of
each of the register constraints that can vary at runtime.
(builtin_mode_to_type): New static array for builtin function type
creation.
(builtin_hash_table): New static hash table for builtin function
type creation.
(TARGET_SECONDARY_RELOAD): Define target hook.
(TARGET_IRA_COVER_CLASSES): Ditto.
(rs6000_hard_regno_nregs_internal): If -mvsx, floating point
registers are 128 bits if VSX memory reference instructions are
used.
(rs6000_hard_regno_mode_ok): For VSX, only check if the VSX memory
unit is being used.
(rs6000_debug_vector_unit): Move into rs6000_debug_reg_global.
(rs6000_debug_reg_global): Move -mdebug=reg statements here.
Print several of the scheduling related parameters.
(rs6000_init_hard_regno_mode_ok): Switch to putting constraints in
rs6000_constraints instead of rs6000_vector_reg_class. Move
-mdebug=reg code to rs6000_debug_reg_global. Add support for
-mvsx-align-128 debug switch. Drop testing float_p if VSX or
Altivec. Add VSX support. Setup for secondary reload support on
Altivec/VSX registers.
(rs6000_override_options): Make power7 set the scheduling groups
like the power5. Add support for new debug switches to override
the scheduling defaults. Temporarily disable -mcpu=power7 from
setting -mvsx. Add support for debug switches -malways-hint,
-msched-groups, and -malign-branch-targets.
(rs6000_builtin_conversion): Add support for returning unsigned
vector conversion functions to fix regressions due to stricter
type checking.
(rs6000_builtin_mul_widen_even): Ditto.
(rs6000_builtin_mul_widen_odd): Ditto.
(rs6000_builtin_vec_perm): Ditto.
(rs6000_vec_const_move): On VSX, use xxlxor to clear register.
(rs6000_expand_vector_init): Initial VSX support for using xxlxor
to zero a register.
(rs6000_emit_move): Fixup invalid const symbol_ref+reg that is
generated upstream.
(bdesc_3arg): Add builtins for unsigned types. Add builtins for
VSX types for bit operations. Changes to accommodate vector.md.
(bdesc_2arg): Ditto.
(bdesc_1arg): Ditto.
(struct builtin_description_predicates): Rewrite predicate
handling so that RTL describes the operation, instead of passing
the instruction to be used as a string argument.
(bdesc_altivec_preds): Ditto.
(altivec_expand_predicate_builtin): Ditto.
(altivec_expand_builtin): Ditto.
(rs6000_expand_ternop_builtin): Use a switch instead of an if
statement for vsldoi support.
(altivec_expand_ld_builtin): Change to use new names from
vector.md.
(altivec_expand_st_builtin): Ditto.
(paired_expand_builtin): Whitespace changes.
(rs6000_init_builtins): Add V2DF/V2DI types. Initialize the
builtin_mode_to_type table for secondary reload. Call
builtin_function_type to build random builtin functions.
(altivec_init_builtins): Change to use builtin_function_type to
create builtin function types dynamically as we need them.
(builtin_hash_function): New support for hashing the tree types
for builtin function as we need it, rather than trying to build
all of the trees that we need. Add initial preliminary VSX
support.
(builtin_function_type): Ditto.
(builtin_function_eq): Ditto.
(builtin_hash_struct): Ditto.
(rs6000_init_builtins): Ditto.
(rs6000_common_init_builtins): Ditto.
(altivec_init_builtins): Ditto.
(rs6000_common_init_builtins): Ditto.
(enum reload_reg_type): New enum for simplifying reg classes.
(rs6000_reload_register_type): Simplify register classes into GPR,
Vector, and other registers.
(rs6000_secondary_reload): New target hook function to handle
Altivec and VSX addresses in reload.
(rs6000_secondary_reload_inner): Ditto.
(rs6000_ira_cover_classes): New target hook, that returns the
appropriate cover classes, based on -mvsx being used or not.
(rs6000_secondary_reload_class): Add VSX support.
(get_vec_cmp_insn): Delete, rewrite vector conditionals.
(get_vsel_insn): Ditto.
(rs6000_emit_vector_compare): Rewrite vector conditional support
so that where we can, we use RTL operators, instead of blindly using
UNSPEC.
(rs6000_emit_vector_select): Ditto.
(rs6000_emit_vector_cond_expr): Ditto.
(rs6000_emit_minmax): Directly generate min/max under altivec,
vsx.
(create_TOC_reference): Add -mdebug=addr support.
(emit_frame_save): VSX loads/stores need register indexed
addressing.
* config/rs6000/rs6000.md: Include vector.md.
* config/rs6000/t-rs6000 (MD_INCLUDES): Add vector.md.
* config/rs6000/rs6000-c.c (altivec_overloaded_builtins): Add
support for V2DI, V2DF in logical, permute, select operations.
* config/rs6000/rs6000.opt (-mvsx-scalar-double): Add new debug
switch for vsx/power7.
(-mvsx-scalar-memory): Ditto.
(-mvsx-align-128): Ditto.
(-mallow-movmisalign): Ditto.
(-mallow-df-permute): Ditto.
(-msched-groups): Ditto.
(-malways-hint): Ditto.
(-malign-branch-targets): Ditto.
* config/rs6000/rs6000.h (IRA_COVER_CLASSES): Delete, use target
hook instead.
(IRA_COVER_CLASSES_PRE_VSX): Cover classes if not -mvsx.
(IRA_COVER_CLASSES_VSX): Cover classes if -mvsx.
(rs6000_vector_reg_class): Delete.
(rs6000_vsx_reg_class): Ditto.
(enum rs6000_reg_class_enum): New enum for the constraints that
vary based on target switches.
(rs6000_constraints): New array to hold the register class for all
of the register constraints that vary based on the switches used.
(ALTIVEC_BUILTIN_*_UNS): Add unsigned builtin functions.
(enum rs6000_builtins): Add unsigned variants for the builtin
declarations returned by target hooks for expanding multiplies,
select, and permute operations. Add VSX builtins.
(enum rs6000_builtin_type_index): Add entries for VSX.
(V2DI_type_node): Ditto.
(V2DF_type_node): Ditto.
(unsigned_V2DI_type_node): Ditto.
(bool_long_type_node): Ditto.
(intDI_type_internal_node): Ditto.
(uintDI_type_internal_node): Ditto.
(double_type_internal_node): Ditto.
* config/rs6000/altivec.md (whole file): Move all expanders to
vector.md from altivec.md. Rename insn matching functions to be
altivec_foo.
(UNSPEC_VCMP*): Delete, rewrite vector comparisons.
(altivec_vcmp*): Ditto.
(UNSPEC_VPERM_UNS): New, add for unsigned types using vperm.
(VM): New iterator for moves that includes the VSX types.
(altivec_vperm_<mode>): Add VSX types. Add unsigned types.
(altivec_vperm_<mode>_uns): New, for unsigned types.
(altivec_vsel_*): Rewrite vector comparisons and predicate
builtins.
(altivec_eq<mode>): Ditto.
(altivec_gt<mode>): Ditto.
(altivec_gtu<mode>): Ditto.
(altivec_eqv4sf): Ditto.
(altivec_gev4sf): Ditto.
(altivec_gtv4sf): Ditto.
(altivec_vcmpbfp_p): Ditto.
2009-07-23 Richard Earnshaw <rearnsha@arm.com>
(split for ior/xor with shift and zero-extend): Cast op3 to

(File diff suppressed because it is too large.)


@@ -17,14 +17,14 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
;; Available constraint letters: "e", "k", "u", "A", "B", "C", "D"
;; Register constraints
(define_register_constraint "f" "TARGET_HARD_FLOAT && TARGET_FPRS
? FLOAT_REGS : NO_REGS"
(define_register_constraint "f" "rs6000_constraints[RS6000_CONSTRAINT_f]"
"@internal")
(define_register_constraint "d" "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_DOUBLE_FLOAT
? FLOAT_REGS : NO_REGS"
(define_register_constraint "d" "rs6000_constraints[RS6000_CONSTRAINT_d]"
"@internal")
(define_register_constraint "b" "BASE_REGS"
@@ -54,6 +54,28 @@
(define_register_constraint "z" "XER_REGS"
"@internal")
;; Use w as a prefix to add VSX modes
;; vector double (V2DF)
(define_register_constraint "wd" "rs6000_constraints[RS6000_CONSTRAINT_wd]"
"@internal")
;; vector float (V4SF)
(define_register_constraint "wf" "rs6000_constraints[RS6000_CONSTRAINT_wf]"
"@internal")
;; scalar double (DF)
(define_register_constraint "ws" "rs6000_constraints[RS6000_CONSTRAINT_ws]"
"@internal")
;; any VSX register
(define_register_constraint "wa" "rs6000_constraints[RS6000_CONSTRAINT_wa]"
"@internal")
;; Altivec style load/store that ignores the bottom bits of the address
(define_memory_constraint "wZ"
"Indexed or indirect memory operand, ignoring the bottom 4 bits"
(match_operand 0 "altivec_indexed_or_indirect_operand"))
;; Integer constraints
(define_constraint "I"
@@ -173,3 +195,7 @@ usually better to use @samp{m} or @samp{es} in @code{asm} statements)"
(define_constraint "W"
"vector constant that does not require memory"
(match_operand 0 "easy_vector_constant"))
(define_constraint "j"
"Zero vector constant"
(match_test "(op == const0_rtx || op == CONST0_RTX (GET_MODE (op)))"))


@@ -38,6 +38,37 @@
|| ALTIVEC_REGNO_P (REGNO (op))
|| REGNO (op) > LAST_VIRTUAL_REGISTER")))
;; Return 1 if op is a VSX register.
(define_predicate "vsx_register_operand"
(and (match_operand 0 "register_operand")
(match_test "GET_CODE (op) != REG
|| VSX_REGNO_P (REGNO (op))
|| REGNO (op) > LAST_VIRTUAL_REGISTER")))
;; Return 1 if op is a vector register that operates on floating point vectors
;; (either altivec or VSX).
(define_predicate "vfloat_operand"
(and (match_operand 0 "register_operand")
(match_test "GET_CODE (op) != REG
|| VFLOAT_REGNO_P (REGNO (op))
|| REGNO (op) > LAST_VIRTUAL_REGISTER")))
;; Return 1 if op is a vector register that operates on integer vectors
;; (only altivec, VSX doesn't support integer vectors)
(define_predicate "vint_operand"
(and (match_operand 0 "register_operand")
(match_test "GET_CODE (op) != REG
|| VINT_REGNO_P (REGNO (op))
|| REGNO (op) > LAST_VIRTUAL_REGISTER")))
;; Return 1 if op is a vector register to do logical operations on (and, or,
;; xor, etc.)
(define_predicate "vlogical_operand"
(and (match_operand 0 "register_operand")
(match_test "GET_CODE (op) != REG
|| VLOGICAL_REGNO_P (REGNO (op))
|| REGNO (op) > LAST_VIRTUAL_REGISTER")))
;; Return 1 if op is XER register.
(define_predicate "xer_operand"
(and (match_code "reg")
@@ -234,6 +265,10 @@
&& num_insns_constant_wide ((HOST_WIDE_INT) k[3]) == 1);
case DFmode:
/* The constant 0.f is easy under VSX. */
if (op == CONST0_RTX (DFmode) && VECTOR_UNIT_VSX_P (DFmode))
return 1;
/* Force constants to memory before reload to utilize
compress_float_constant.
Avoid this when flag_unsafe_math_optimizations is enabled
@@ -292,6 +327,9 @@
if (TARGET_PAIRED_FLOAT)
return false;
if ((VSX_VECTOR_MODE (mode) || mode == TImode) && zero_constant (op, mode))
return true;
if (ALTIVEC_VECTOR_MODE (mode))
{
if (zero_constant (op, mode))
@@ -394,16 +432,36 @@
(match_code "mem")
{
op = XEXP (op, 0);
if (TARGET_ALTIVEC
&& ALTIVEC_VECTOR_MODE (mode)
if (VECTOR_MEM_ALTIVEC_P (mode)
&& GET_CODE (op) == AND
&& GET_CODE (XEXP (op, 1)) == CONST_INT
&& INTVAL (XEXP (op, 1)) == -16)
op = XEXP (op, 0);
else if (VECTOR_MEM_VSX_P (mode)
&& GET_CODE (op) == PRE_MODIFY)
op = XEXP (op, 1);
return indexed_or_indirect_address (op, mode);
})
;; Return 1 if the operand is an indexed or indirect memory operand with an
;; AND -16 in it, used to recognize when we need to switch to Altivec loads
;; to realign loops instead of VSX (altivec silently ignores the bottom bits,
;; while VSX uses the full address and traps)
(define_predicate "altivec_indexed_or_indirect_operand"
(match_code "mem")
{
op = XEXP (op, 0);
if (VECTOR_MEM_ALTIVEC_OR_VSX_P (mode)
&& GET_CODE (op) == AND
&& GET_CODE (XEXP (op, 1)) == CONST_INT
&& INTVAL (XEXP (op, 1)) == -16)
return indexed_or_indirect_address (XEXP (op, 0), mode);
return 0;
})
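The comment above records the behavioral difference this predicate guards against: Altivec loads silently ignore the low 4 bits of the effective address (as if ANDed with -16), while VSX uses the full address and can trap. A small model of that masking rule (the helper name is hypothetical, for illustration only):

```c
#include <stdint.h>

/* Models the Altivec addressing rule described in the predicate's
   comment: the hardware behaves as if the effective address were
   ANDed with -16, i.e. the bottom 4 bits are ignored.  VSX, by
   contrast, uses the full address. */
static uintptr_t altivec_effective_address(uintptr_t addr) {
  return addr & (uintptr_t) -16;  /* clear the low 4 bits */
}
```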
;; Return 1 if the operand is an indexed or indirect address.
(define_special_predicate "indexed_or_indirect_address"
(and (match_test "REG_P (op)


@@ -670,6 +670,12 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
@@ -718,6 +724,12 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
@@ -1482,6 +1494,8 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
@@ -1506,6 +1520,12 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
@@ -2122,6 +2142,12 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V2DF, RS6000_BTI_bool_V4SI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
@@ -2366,6 +2392,10 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V4SI },
{ ALTIVEC_BUILTIN_VEC_NMSUB, ALTIVEC_BUILTIN_VNMSUBFP,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_2DF,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_4SF,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_4SI,
@@ -2392,10 +2422,28 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = {
RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_16QI,
RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DF,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DF,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_unsigned_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DF,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DF,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_unsigned_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SF,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SF,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_unsigned_V4SI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SI,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SI,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SI,
RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SI,

(File diff suppressed because it is too large.)


@@ -1280,12 +1280,24 @@ enum reg_class
purpose. Any move between two registers of a cover class should be
cheaper than load or store of the registers. The macro value is
array of register classes with LIM_REG_CLASSES used as the end
marker. */
marker.
#define IRA_COVER_CLASSES \
We need two IRA_COVER_CLASSES, one for pre-VSX, and the other for VSX to
account for the Altivec and Floating registers being subsets of the VSX
register set. */
#define IRA_COVER_CLASSES_PRE_VSX \
{ \
GENERAL_REGS, SPECIAL_REGS, FLOAT_REGS, ALTIVEC_REGS, \
/*VRSAVE_REGS,*/ VSCR_REGS, SPE_ACC_REGS, SPEFSCR_REGS, \
GENERAL_REGS, SPECIAL_REGS, FLOAT_REGS, ALTIVEC_REGS, /* VSX_REGS, */ \
/* VRSAVE_REGS,*/ VSCR_REGS, SPE_ACC_REGS, SPEFSCR_REGS, \
/* MQ_REGS, LINK_REGS, CTR_REGS, */ \
CR_REGS, XER_REGS, LIM_REG_CLASSES \
}
#define IRA_COVER_CLASSES_VSX \
{ \
GENERAL_REGS, SPECIAL_REGS, /* FLOAT_REGS, ALTIVEC_REGS, */ VSX_REGS, \
/* VRSAVE_REGS,*/ VSCR_REGS, SPE_ACC_REGS, SPEFSCR_REGS, \
/* MQ_REGS, LINK_REGS, CTR_REGS, */ \
CR_REGS, XER_REGS, LIM_REG_CLASSES \
}
@@ -1306,9 +1318,20 @@ extern enum reg_class rs6000_regno_regclass[FIRST_PSEUDO_REGISTER];
#define REGNO_REG_CLASS(REGNO) rs6000_regno_regclass[(REGNO)]
#endif
/* Register classes for altivec registers (and eventually other vector
units). */
extern enum reg_class rs6000_vector_reg_class[];
/* Register classes for various constraints that are based on the target
switches. */
enum r6000_reg_class_enum {
RS6000_CONSTRAINT_d, /* fpr registers for double values */
RS6000_CONSTRAINT_f, /* fpr registers for single values */
RS6000_CONSTRAINT_v, /* Altivec registers */
RS6000_CONSTRAINT_wa, /* Any VSX register */
RS6000_CONSTRAINT_wd, /* VSX register for V2DF */
RS6000_CONSTRAINT_wf, /* VSX register for V4SF */
RS6000_CONSTRAINT_ws, /* VSX register for DF */
RS6000_CONSTRAINT_MAX
};
extern enum reg_class rs6000_constraints[RS6000_CONSTRAINT_MAX];
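The rs6000_constraints array above replaces hard-coded per-constraint register classes with a table filled in once the target switches are known. A simplified sketch of that pattern (the enum values, class names, and init function here are stand-ins, not GCC's real definitions):

```c
/* Sketch of runtime-selected constraint classes: an array indexed by a
   constraint enum, filled in at option-processing time (analogous to
   what rs6000_init_hard_regno_mode_ok does in the patch). */
enum demo_reg_class { NO_REGS, FLOAT_REGS, ALTIVEC_REGS, VSX_REGS };

enum demo_constraint {
  CONSTRAINT_d,   /* FPRs for double values */
  CONSTRAINT_wd,  /* VSX register for V2DF */
  CONSTRAINT_wa,  /* any VSX register */
  CONSTRAINT_MAX
};

static enum demo_reg_class demo_constraints[CONSTRAINT_MAX];

/* Pick a register class for each constraint based on the switches. */
static void init_constraints(int have_vsx, int have_hard_float) {
  demo_constraints[CONSTRAINT_d]  = have_hard_float ? FLOAT_REGS : NO_REGS;
  demo_constraints[CONSTRAINT_wd] = have_vsx ? VSX_REGS : NO_REGS;
  demo_constraints[CONSTRAINT_wa] = have_vsx ? VSX_REGS : NO_REGS;
}
```

The payoff is that constraints.md can say `rs6000_constraints[RS6000_CONSTRAINT_wd]` once, and a constraint degrades to NO_REGS on targets without the feature instead of needing a separate conditional expression per constraint.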
/* The class value for index registers, and the one for base regs. */
#define INDEX_REG_CLASS GENERAL_REGS
@@ -2493,24 +2516,40 @@ enum rs6000_builtins
ALTIVEC_BUILTIN_VMINSW,
ALTIVEC_BUILTIN_VMINFP,
ALTIVEC_BUILTIN_VMULEUB,
ALTIVEC_BUILTIN_VMULEUB_UNS,
ALTIVEC_BUILTIN_VMULESB,
ALTIVEC_BUILTIN_VMULEUH,
ALTIVEC_BUILTIN_VMULEUH_UNS,
ALTIVEC_BUILTIN_VMULESH,
ALTIVEC_BUILTIN_VMULOUB,
ALTIVEC_BUILTIN_VMULOUB_UNS,
ALTIVEC_BUILTIN_VMULOSB,
ALTIVEC_BUILTIN_VMULOUH,
ALTIVEC_BUILTIN_VMULOUH_UNS,
ALTIVEC_BUILTIN_VMULOSH,
ALTIVEC_BUILTIN_VNMSUBFP,
ALTIVEC_BUILTIN_VNOR,
ALTIVEC_BUILTIN_VOR,
ALTIVEC_BUILTIN_VSEL_2DF, /* needed for VSX */
ALTIVEC_BUILTIN_VSEL_2DI, /* needed for VSX */
ALTIVEC_BUILTIN_VSEL_4SI,
ALTIVEC_BUILTIN_VSEL_4SF,
ALTIVEC_BUILTIN_VSEL_8HI,
ALTIVEC_BUILTIN_VSEL_16QI,
ALTIVEC_BUILTIN_VSEL_2DI_UNS,
ALTIVEC_BUILTIN_VSEL_4SI_UNS,
ALTIVEC_BUILTIN_VSEL_8HI_UNS,
ALTIVEC_BUILTIN_VSEL_16QI_UNS,
ALTIVEC_BUILTIN_VPERM_2DF, /* needed for VSX */
ALTIVEC_BUILTIN_VPERM_2DI, /* needed for VSX */
ALTIVEC_BUILTIN_VPERM_4SI,
ALTIVEC_BUILTIN_VPERM_4SF,
ALTIVEC_BUILTIN_VPERM_8HI,
ALTIVEC_BUILTIN_VPERM_16QI,
ALTIVEC_BUILTIN_VPERM_2DI_UNS,
ALTIVEC_BUILTIN_VPERM_4SI_UNS,
ALTIVEC_BUILTIN_VPERM_8HI_UNS,
ALTIVEC_BUILTIN_VPERM_16QI_UNS,
ALTIVEC_BUILTIN_VPKUHUM,
ALTIVEC_BUILTIN_VPKUWUM,
ALTIVEC_BUILTIN_VPKPX,
@@ -3138,6 +3177,219 @@ enum rs6000_builtins
RS6000_BUILTIN_RSQRTF,
RS6000_BUILTIN_BSWAP_HI,
/* VSX builtins. */
VSX_BUILTIN_LXSDUX,
VSX_BUILTIN_LXSDX,
VSX_BUILTIN_LXVD2UX,
VSX_BUILTIN_LXVD2X,
VSX_BUILTIN_LXVDSX,
VSX_BUILTIN_LXVW4UX,
VSX_BUILTIN_LXVW4X,
VSX_BUILTIN_STXSDUX,
VSX_BUILTIN_STXSDX,
VSX_BUILTIN_STXVD2UX,
VSX_BUILTIN_STXVD2X,
VSX_BUILTIN_STXVW4UX,
VSX_BUILTIN_STXVW4X,
VSX_BUILTIN_XSABSDP,
VSX_BUILTIN_XSADDDP,
VSX_BUILTIN_XSCMPODP,
VSX_BUILTIN_XSCMPUDP,
VSX_BUILTIN_XSCPSGNDP,
VSX_BUILTIN_XSCVDPSP,
VSX_BUILTIN_XSCVDPSXDS,
VSX_BUILTIN_XSCVDPSXWS,
VSX_BUILTIN_XSCVDPUXDS,
VSX_BUILTIN_XSCVDPUXWS,
VSX_BUILTIN_XSCVSPDP,
VSX_BUILTIN_XSCVSXDDP,
VSX_BUILTIN_XSCVUXDDP,
VSX_BUILTIN_XSDIVDP,
VSX_BUILTIN_XSMADDADP,
VSX_BUILTIN_XSMADDMDP,
VSX_BUILTIN_XSMAXDP,
VSX_BUILTIN_XSMINDP,
VSX_BUILTIN_XSMOVDP,
VSX_BUILTIN_XSMSUBADP,
VSX_BUILTIN_XSMSUBMDP,
VSX_BUILTIN_XSMULDP,
VSX_BUILTIN_XSNABSDP,
VSX_BUILTIN_XSNEGDP,
VSX_BUILTIN_XSNMADDADP,
VSX_BUILTIN_XSNMADDMDP,
VSX_BUILTIN_XSNMSUBADP,
VSX_BUILTIN_XSNMSUBMDP,
VSX_BUILTIN_XSRDPI,
VSX_BUILTIN_XSRDPIC,
VSX_BUILTIN_XSRDPIM,
VSX_BUILTIN_XSRDPIP,
VSX_BUILTIN_XSRDPIZ,
VSX_BUILTIN_XSREDP,
VSX_BUILTIN_XSRSQRTEDP,
VSX_BUILTIN_XSSQRTDP,
VSX_BUILTIN_XSSUBDP,
VSX_BUILTIN_XSTDIVDP_FE,
VSX_BUILTIN_XSTDIVDP_FG,
VSX_BUILTIN_XSTSQRTDP_FE,
VSX_BUILTIN_XSTSQRTDP_FG,
VSX_BUILTIN_XVABSDP,
VSX_BUILTIN_XVABSSP,
VSX_BUILTIN_XVADDDP,
VSX_BUILTIN_XVADDSP,
VSX_BUILTIN_XVCMPEQDP,
VSX_BUILTIN_XVCMPEQSP,
VSX_BUILTIN_XVCMPGEDP,
VSX_BUILTIN_XVCMPGESP,
VSX_BUILTIN_XVCMPGTDP,
VSX_BUILTIN_XVCMPGTSP,
VSX_BUILTIN_XVCMPEQDP_P,
VSX_BUILTIN_XVCMPEQSP_P,
VSX_BUILTIN_XVCMPGEDP_P,
VSX_BUILTIN_XVCMPGESP_P,
VSX_BUILTIN_XVCMPGTDP_P,
VSX_BUILTIN_XVCMPGTSP_P,
VSX_BUILTIN_XVCPSGNDP,
VSX_BUILTIN_XVCPSGNSP,
VSX_BUILTIN_XVCVDPSP,
VSX_BUILTIN_XVCVDPSXDS,
VSX_BUILTIN_XVCVDPSXWS,
VSX_BUILTIN_XVCVDPUXDS,
VSX_BUILTIN_XVCVDPUXDS_UNS,
VSX_BUILTIN_XVCVDPUXWS,
VSX_BUILTIN_XVCVSPDP,
VSX_BUILTIN_XVCVSPSXDS,
VSX_BUILTIN_XVCVSPSXWS,
VSX_BUILTIN_XVCVSPUXDS,
VSX_BUILTIN_XVCVSPUXWS,
VSX_BUILTIN_XVCVSXDDP,
VSX_BUILTIN_XVCVSXDSP,
VSX_BUILTIN_XVCVSXWDP,
VSX_BUILTIN_XVCVSXWSP,
VSX_BUILTIN_XVCVUXDDP,
VSX_BUILTIN_XVCVUXDDP_UNS,
VSX_BUILTIN_XVCVUXDSP,
VSX_BUILTIN_XVCVUXWDP,
VSX_BUILTIN_XVCVUXWSP,
VSX_BUILTIN_XVDIVDP,
VSX_BUILTIN_XVDIVSP,
VSX_BUILTIN_XVMADDDP,
VSX_BUILTIN_XVMADDSP,
VSX_BUILTIN_XVMAXDP,
VSX_BUILTIN_XVMAXSP,
VSX_BUILTIN_XVMINDP,
VSX_BUILTIN_XVMINSP,
VSX_BUILTIN_XVMSUBDP,
VSX_BUILTIN_XVMSUBSP,
VSX_BUILTIN_XVMULDP,
VSX_BUILTIN_XVMULSP,
VSX_BUILTIN_XVNABSDP,
VSX_BUILTIN_XVNABSSP,
VSX_BUILTIN_XVNEGDP,
VSX_BUILTIN_XVNEGSP,
VSX_BUILTIN_XVNMADDDP,
VSX_BUILTIN_XVNMADDSP,
VSX_BUILTIN_XVNMSUBDP,
VSX_BUILTIN_XVNMSUBSP,
VSX_BUILTIN_XVRDPI,
VSX_BUILTIN_XVRDPIC,
VSX_BUILTIN_XVRDPIM,
VSX_BUILTIN_XVRDPIP,
VSX_BUILTIN_XVRDPIZ,
VSX_BUILTIN_XVREDP,
VSX_BUILTIN_XVRESP,
VSX_BUILTIN_XVRSPI,
VSX_BUILTIN_XVRSPIC,
VSX_BUILTIN_XVRSPIM,
VSX_BUILTIN_XVRSPIP,
VSX_BUILTIN_XVRSPIZ,
VSX_BUILTIN_XVRSQRTEDP,
VSX_BUILTIN_XVRSQRTESP,
VSX_BUILTIN_XVSQRTDP,
VSX_BUILTIN_XVSQRTSP,
VSX_BUILTIN_XVSUBDP,
VSX_BUILTIN_XVSUBSP,
VSX_BUILTIN_XVTDIVDP_FE,
VSX_BUILTIN_XVTDIVDP_FG,
VSX_BUILTIN_XVTDIVSP_FE,
VSX_BUILTIN_XVTDIVSP_FG,
VSX_BUILTIN_XVTSQRTDP_FE,
VSX_BUILTIN_XVTSQRTDP_FG,
VSX_BUILTIN_XVTSQRTSP_FE,
VSX_BUILTIN_XVTSQRTSP_FG,
VSX_BUILTIN_XXSEL_2DI,
VSX_BUILTIN_XXSEL_2DF,
VSX_BUILTIN_XXSEL_4SI,
VSX_BUILTIN_XXSEL_4SF,
VSX_BUILTIN_XXSEL_8HI,
VSX_BUILTIN_XXSEL_16QI,
VSX_BUILTIN_XXSEL_2DI_UNS,
VSX_BUILTIN_XXSEL_4SI_UNS,
VSX_BUILTIN_XXSEL_8HI_UNS,
VSX_BUILTIN_XXSEL_16QI_UNS,
VSX_BUILTIN_VPERM_2DI,
VSX_BUILTIN_VPERM_2DF,
VSX_BUILTIN_VPERM_4SI,
VSX_BUILTIN_VPERM_4SF,
VSX_BUILTIN_VPERM_8HI,
VSX_BUILTIN_VPERM_16QI,
VSX_BUILTIN_VPERM_2DI_UNS,
VSX_BUILTIN_VPERM_4SI_UNS,
VSX_BUILTIN_VPERM_8HI_UNS,
VSX_BUILTIN_VPERM_16QI_UNS,
VSX_BUILTIN_XXPERMDI_2DF,
VSX_BUILTIN_XXPERMDI_2DI,
VSX_BUILTIN_XXPERMDI_4SF,
VSX_BUILTIN_XXPERMDI_4SI,
VSX_BUILTIN_XXPERMDI_8HI,
VSX_BUILTIN_XXPERMDI_16QI,
VSX_BUILTIN_CONCAT_2DF,
VSX_BUILTIN_CONCAT_2DI,
VSX_BUILTIN_SET_2DF,
VSX_BUILTIN_SET_2DI,
VSX_BUILTIN_SPLAT_2DF,
VSX_BUILTIN_SPLAT_2DI,
VSX_BUILTIN_XXMRGHW_4SF,
VSX_BUILTIN_XXMRGHW_4SI,
VSX_BUILTIN_XXMRGLW_4SF,
VSX_BUILTIN_XXMRGLW_4SI,
VSX_BUILTIN_XXSLDWI_16QI,
VSX_BUILTIN_XXSLDWI_8HI,
VSX_BUILTIN_XXSLDWI_4SI,
VSX_BUILTIN_XXSLDWI_4SF,
VSX_BUILTIN_XXSLDWI_2DI,
VSX_BUILTIN_XXSLDWI_2DF,
VSX_BUILTIN_VEC_INIT_V2DF,
VSX_BUILTIN_VEC_INIT_V2DI,
VSX_BUILTIN_VEC_SET_V2DF,
VSX_BUILTIN_VEC_SET_V2DI,
VSX_BUILTIN_VEC_EXT_V2DF,
VSX_BUILTIN_VEC_EXT_V2DI,
/* VSX overloaded builtins, add the overloaded functions not present in
Altivec. */
VSX_BUILTIN_VEC_MUL,
VSX_BUILTIN_OVERLOADED_FIRST = VSX_BUILTIN_VEC_MUL,
VSX_BUILTIN_VEC_MSUB,
VSX_BUILTIN_VEC_NMADD,
VSX_BUITLIN_VEC_NMSUB,
VSX_BUILTIN_VEC_DIV,
VSX_BUILTIN_VEC_XXMRGHW,
VSX_BUILTIN_VEC_XXMRGLW,
VSX_BUILTIN_VEC_XXPERMDI,
VSX_BUILTIN_VEC_XXSLDWI,
VSX_BUILTIN_VEC_XXSPLTD,
VSX_BUILTIN_VEC_XXSPLTW,
VSX_BUILTIN_OVERLOADED_LAST = VSX_BUILTIN_VEC_XXSPLTW,
/* Combined VSX/Altivec builtins. */
VECTOR_BUILTIN_FLOAT_V4SI_V4SF,
VECTOR_BUILTIN_UNSFLOAT_V4SI_V4SF,
VECTOR_BUILTIN_FIX_V4SF_V4SI,
VECTOR_BUILTIN_FIXUNS_V4SF_V4SI,
/* Power7 builtins, that aren't VSX instructions. */
POWER7_BUILTIN_BPERMD,
RS6000_BUILTIN_COUNT
};
@@ -3151,6 +3403,8 @@ enum rs6000_builtin_type_index
RS6000_BTI_V16QI,
RS6000_BTI_V2SI,
RS6000_BTI_V2SF,
RS6000_BTI_V2DI,
RS6000_BTI_V2DF,
RS6000_BTI_V4HI,
RS6000_BTI_V4SI,
RS6000_BTI_V4SF,
@@ -3158,13 +3412,16 @@ enum rs6000_builtin_type_index
RS6000_BTI_unsigned_V16QI,
RS6000_BTI_unsigned_V8HI,
RS6000_BTI_unsigned_V4SI,
RS6000_BTI_unsigned_V2DI,
RS6000_BTI_bool_char, /* __bool char */
RS6000_BTI_bool_short, /* __bool short */
RS6000_BTI_bool_int, /* __bool int */
RS6000_BTI_bool_long, /* __bool long */
RS6000_BTI_pixel, /* __pixel */
RS6000_BTI_bool_V16QI, /* __vector __bool char */
RS6000_BTI_bool_V8HI, /* __vector __bool short */
RS6000_BTI_bool_V4SI, /* __vector __bool int */
RS6000_BTI_bool_V2DI, /* __vector __bool long */
RS6000_BTI_pixel_V8HI, /* __vector __pixel */
RS6000_BTI_long, /* long_integer_type_node */
RS6000_BTI_unsigned_long, /* long_unsigned_type_node */
@@ -3174,7 +3431,10 @@ enum rs6000_builtin_type_index
RS6000_BTI_UINTHI, /* unsigned_intHI_type_node */
RS6000_BTI_INTSI, /* intSI_type_node */
RS6000_BTI_UINTSI, /* unsigned_intSI_type_node */
RS6000_BTI_INTDI, /* intDI_type_node */
RS6000_BTI_UINTDI, /* unsigned_intDI_type_node */
RS6000_BTI_float, /* float_type_node */
RS6000_BTI_double, /* double_type_node */
RS6000_BTI_void, /* void_type_node */
RS6000_BTI_MAX
};
@@ -3185,6 +3445,8 @@ enum rs6000_builtin_type_index
#define opaque_p_V2SI_type_node (rs6000_builtin_types[RS6000_BTI_opaque_p_V2SI])
#define opaque_V4SI_type_node (rs6000_builtin_types[RS6000_BTI_opaque_V4SI])
#define V16QI_type_node (rs6000_builtin_types[RS6000_BTI_V16QI])
#define V2DI_type_node (rs6000_builtin_types[RS6000_BTI_V2DI])
#define V2DF_type_node (rs6000_builtin_types[RS6000_BTI_V2DF])
#define V2SI_type_node (rs6000_builtin_types[RS6000_BTI_V2SI])
#define V2SF_type_node (rs6000_builtin_types[RS6000_BTI_V2SF])
#define V4HI_type_node (rs6000_builtin_types[RS6000_BTI_V4HI])
@@ -3194,13 +3456,16 @@ enum rs6000_builtin_type_index
#define unsigned_V16QI_type_node (rs6000_builtin_types[RS6000_BTI_unsigned_V16QI])
#define unsigned_V8HI_type_node (rs6000_builtin_types[RS6000_BTI_unsigned_V8HI])
#define unsigned_V4SI_type_node (rs6000_builtin_types[RS6000_BTI_unsigned_V4SI])
#define unsigned_V2DI_type_node (rs6000_builtin_types[RS6000_BTI_unsigned_V2DI])
#define bool_char_type_node (rs6000_builtin_types[RS6000_BTI_bool_char])
#define bool_short_type_node (rs6000_builtin_types[RS6000_BTI_bool_short])
#define bool_int_type_node (rs6000_builtin_types[RS6000_BTI_bool_int])
#define bool_long_type_node (rs6000_builtin_types[RS6000_BTI_bool_long])
#define pixel_type_node (rs6000_builtin_types[RS6000_BTI_pixel])
#define bool_V16QI_type_node (rs6000_builtin_types[RS6000_BTI_bool_V16QI])
#define bool_V8HI_type_node (rs6000_builtin_types[RS6000_BTI_bool_V8HI])
#define bool_V4SI_type_node (rs6000_builtin_types[RS6000_BTI_bool_V4SI])
#define bool_V2DI_type_node (rs6000_builtin_types[RS6000_BTI_bool_V2DI])
#define pixel_V8HI_type_node (rs6000_builtin_types[RS6000_BTI_pixel_V8HI])
#define long_integer_type_internal_node (rs6000_builtin_types[RS6000_BTI_long])
@@ -3211,7 +3476,10 @@ enum rs6000_builtin_type_index
#define uintHI_type_internal_node (rs6000_builtin_types[RS6000_BTI_UINTHI])
#define intSI_type_internal_node (rs6000_builtin_types[RS6000_BTI_INTSI])
#define uintSI_type_internal_node (rs6000_builtin_types[RS6000_BTI_UINTSI])
#define intDI_type_internal_node (rs6000_builtin_types[RS6000_BTI_INTDI])
#define uintDI_type_internal_node (rs6000_builtin_types[RS6000_BTI_UINTDI])
#define float_type_internal_node (rs6000_builtin_types[RS6000_BTI_float])
#define double_type_internal_node (rs6000_builtin_types[RS6000_BTI_double])
#define void_type_internal_node (rs6000_builtin_types[RS6000_BTI_void])
extern GTY(()) tree rs6000_builtin_types[RS6000_BTI_MAX];


@@ -15322,6 +15322,7 @@
(include "sync.md")
(include "vector.md")
(include "altivec.md")
(include "spe.md")
(include "dfp.md")


@@ -119,6 +119,38 @@ mvsx
Target Report Mask(VSX)
Use vector/scalar (VSX) instructions
mvsx-scalar-double
Target Undocumented Report Var(TARGET_VSX_SCALAR_DOUBLE) Init(-1)
; If -mvsx, use VSX arithmetic instructions for scalar double (on by default)
mvsx-scalar-memory
Target Undocumented Report Var(TARGET_VSX_SCALAR_MEMORY)
; If -mvsx, use VSX scalar memory reference instructions for scalar double (off by default)
mvsx-align-128
Target Undocumented Report Var(TARGET_VSX_ALIGN_128)
; If -mvsx, set alignment to 128 bits instead of 32/64
mallow-movmisalign
Target Undocumented Var(TARGET_ALLOW_MOVMISALIGN) Init(-1)
; Allow/disallow the movmisalign in DF/DI vectors
mallow-df-permute
Target Undocumented Var(TARGET_ALLOW_DF_PERMUTE)
; Allow/disallow permutation of DF/DI vectors
msched-groups
Target Undocumented Report Var(TARGET_SCHED_GROUPS) Init(-1)
; Explicitly set/unset whether rs6000_sched_groups is set
malways-hint
Target Undocumented Report Var(TARGET_ALWAYS_HINT) Init(-1)
; Explicitly set/unset whether rs6000_always_hint is set
malign-branch-targets
Target Undocumented Report Var(TARGET_ALIGN_BRANCH_TARGETS) Init(-1)
; Explicitly set/unset whether rs6000_align_branch_targets is set
mupdate
Target Report Var(TARGET_UPDATE) Init(1)
Generate load/store with update instructions


@@ -59,6 +59,7 @@ MD_INCLUDES = $(srcdir)/config/rs6000/rios1.md \
$(srcdir)/config/rs6000/constraints.md \
$(srcdir)/config/rs6000/darwin.md \
$(srcdir)/config/rs6000/sync.md \
$(srcdir)/config/rs6000/vector.md \
$(srcdir)/config/rs6000/altivec.md \
$(srcdir)/config/rs6000/spe.md \
$(srcdir)/config/rs6000/dfp.md \

gcc/config/rs6000/vector.md Normal file

@@ -0,0 +1,700 @@
;; Expander definitions for vector support. This file contains no
;; instructions; it provides the generic vector expanders, while the
;; actual vector instructions are in altivec.md.
;; Copyright (C) 2009
;; Free Software Foundation, Inc.
;; Contributed by Michael Meissner <meissner@linux.vnet.ibm.com>
;; This file is part of GCC.
;; GCC is free software; you can redistribute it and/or modify it
;; under the terms of the GNU General Public License as published
;; by the Free Software Foundation; either version 3, or (at your
;; option) any later version.
;; GCC is distributed in the hope that it will be useful, but WITHOUT
;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
;; or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
;; License for more details.
;; You should have received a copy of the GNU General Public License
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
;; Vector int modes
(define_mode_iterator VEC_I [V16QI V8HI V4SI])
;; Vector float modes
(define_mode_iterator VEC_F [V4SF])
;; Vector arithmetic modes
(define_mode_iterator VEC_A [V16QI V8HI V4SI V4SF])
;; Vector modes that need alignment via permutes
(define_mode_iterator VEC_K [V16QI V8HI V4SI V4SF])
;; Vector logical modes
(define_mode_iterator VEC_L [V16QI V8HI V4SI V2DI V4SF V2DF TI])
;; Vector modes for moves. Don't do TImode here.
(define_mode_iterator VEC_M [V16QI V8HI V4SI V2DI V4SF V2DF])
;; Vector comparison modes
(define_mode_iterator VEC_C [V16QI V8HI V4SI V4SF V2DF])
;; Vector init/extract modes
(define_mode_iterator VEC_E [V16QI V8HI V4SI V2DI V4SF V2DF])
;; Vector reload iterator
(define_mode_iterator VEC_R [V16QI V8HI V4SI V2DI V4SF V2DF DF TI])
;; Base type from vector mode
(define_mode_attr VEC_base [(V16QI "QI")
(V8HI "HI")
(V4SI "SI")
(V2DI "DI")
(V4SF "SF")
(V2DF "DF")
(TI "TI")])
;; Same size integer type for floating point data
(define_mode_attr VEC_int [(V4SF "v4si")
(V2DF "v2di")])
(define_mode_attr VEC_INT [(V4SF "V4SI")
(V2DF "V2DI")])
;; constants for unspec
(define_constants
[(UNSPEC_PREDICATE 400)])
;; Vector move instructions.
(define_expand "mov<mode>"
[(set (match_operand:VEC_M 0 "nonimmediate_operand" "")
(match_operand:VEC_M 1 "any_operand" ""))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
{
if (can_create_pseudo_p ())
{
if (CONSTANT_P (operands[1])
&& !easy_vector_constant (operands[1], <MODE>mode))
operands[1] = force_const_mem (<MODE>mode, operands[1]);
else if (!vlogical_operand (operands[0], <MODE>mode)
&& !vlogical_operand (operands[1], <MODE>mode))
operands[1] = force_reg (<MODE>mode, operands[1]);
}
})
;; Generic vector floating point load/store instructions.
(define_expand "vector_load_<mode>"
[(set (match_operand:VEC_M 0 "vfloat_operand" "")
(match_operand:VEC_M 1 "memory_operand" ""))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_store_<mode>"
[(set (match_operand:VEC_M 0 "memory_operand" "")
(match_operand:VEC_M 1 "vfloat_operand" ""))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
;; Splits if a GPR register was chosen for the move
(define_split
[(set (match_operand:VEC_L 0 "nonimmediate_operand" "")
(match_operand:VEC_L 1 "input_operand" ""))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)
&& reload_completed
&& gpr_or_gpr_p (operands[0], operands[1])"
[(pc)]
{
rs6000_split_multireg_move (operands[0], operands[1]);
DONE;
})
;; Reload patterns for vector operations. We may need an additional base
;; register to convert the reg+offset addressing to reg+reg for vector
;; registers and reg+reg or (reg+reg)&(-16) addressing to just an index
;; register for gpr registers.
(define_expand "reload_<VEC_R:mode>_<P:mptrsize>_store"
[(parallel [(match_operand:VEC_R 0 "memory_operand" "m")
(match_operand:VEC_R 1 "gpc_reg_operand" "r")
(match_operand:P 2 "register_operand" "=&b")])]
"<P:tptrsize>"
{
rs6000_secondary_reload_inner (operands[1], operands[0], operands[2], true);
DONE;
})
(define_expand "reload_<VEC_R:mode>_<P:mptrsize>_load"
[(parallel [(match_operand:VEC_R 0 "gpc_reg_operand" "=&r")
(match_operand:VEC_R 1 "memory_operand" "m")
(match_operand:P 2 "register_operand" "=&b")])]
"<P:tptrsize>"
{
rs6000_secondary_reload_inner (operands[0], operands[1], operands[2], false);
DONE;
})
;; Reload sometimes tries to move the address to a GPR, and can generate
;; invalid RTL for addresses involving AND -16. Allow addresses involving
;; reg+reg, reg+small constant, or just reg, all wrapped in an AND -16.
(define_insn_and_split "*vec_reload_and_plus_<mptrsize>"
[(set (match_operand:P 0 "gpc_reg_operand" "=b")
(and:P (plus:P (match_operand:P 1 "gpc_reg_operand" "r")
(match_operand:P 2 "reg_or_cint_operand" "rI"))
(const_int -16)))]
"TARGET_ALTIVEC && (reload_in_progress || reload_completed)"
"#"
"&& reload_completed"
[(set (match_dup 0)
(plus:P (match_dup 1)
(match_dup 2)))
(parallel [(set (match_dup 0)
(and:P (match_dup 0)
(const_int -16)))
(clobber:CC (scratch:CC))])])
;; The normal ANDSI3/ANDDI3 won't match if reload decides to move an AND -16
;; address to a register because there is no clobber of a (scratch), so we add
;; it here.
(define_insn_and_split "*vec_reload_and_reg_<mptrsize>"
[(set (match_operand:P 0 "gpc_reg_operand" "=b")
(and:P (match_operand:P 1 "gpc_reg_operand" "r")
(const_int -16)))]
"TARGET_ALTIVEC && (reload_in_progress || reload_completed)"
"#"
"&& reload_completed"
[(parallel [(set (match_dup 0)
(and:P (match_dup 1)
(const_int -16)))
(clobber:CC (scratch:CC))])])
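The AND -16 in the two patterns above masks an address down to its 16-byte-aligned base: AltiVec vector loads and stores ignore the low four bits of the effective address, and the reload paths have to keep that masking representable. A minimal sketch of the address arithmetic (plain Python, not part of the patch):

```python
def altivec_effective_address(base, offset):
    """Model the (reg+reg) & -16 address an AltiVec lvx/stvx actually
    uses: the low four bits of the computed address are discarded."""
    return (base + offset) & -16

# 0x1003 + 0x21 = 0x1024, which lies in the 16-byte block at 0x1020
print(hex(altivec_effective_address(0x1003, 0x21)))
```

Reload may pull the whole `(and (plus reg reg) -16)` expression into a GPR; the splitters rewrite it as an add followed by an AND with a CC scratch so it matches the normal integer patterns.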
;; Generic floating point vector arithmetic support
(define_expand "add<mode>3"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(plus:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")
(match_operand:VEC_F 2 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "sub<mode>3"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(minus:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")
(match_operand:VEC_F 2 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "mul<mode>3"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(mult:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")
(match_operand:VEC_F 2 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode) && TARGET_FUSED_MADD"
"
{
emit_insn (gen_altivec_mulv4sf3 (operands[0], operands[1], operands[2]));
DONE;
}")
(define_expand "neg<mode>2"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(neg:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_negv4sf2 (operands[0], operands[1]));
DONE;
}")
(define_expand "abs<mode>2"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(abs:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_absv4sf2 (operands[0], operands[1]));
DONE;
}")
(define_expand "smin<mode>3"
[(set (match_operand:VEC_F 0 "register_operand" "")
(smin:VEC_F (match_operand:VEC_F 1 "register_operand" "")
(match_operand:VEC_F 2 "register_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "smax<mode>3"
[(set (match_operand:VEC_F 0 "register_operand" "")
(smax:VEC_F (match_operand:VEC_F 1 "register_operand" "")
(match_operand:VEC_F 2 "register_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "ftrunc<mode>2"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(fix:VEC_F (match_operand:VEC_F 1 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
;; Vector comparisons
(define_expand "vcond<mode>"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(if_then_else:VEC_F
(match_operator 3 "comparison_operator"
[(match_operand:VEC_F 4 "vfloat_operand" "")
(match_operand:VEC_F 5 "vfloat_operand" "")])
(match_operand:VEC_F 1 "vfloat_operand" "")
(match_operand:VEC_F 2 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
if (rs6000_emit_vector_cond_expr (operands[0], operands[1], operands[2],
operands[3], operands[4], operands[5]))
DONE;
else
FAIL;
}")
(define_expand "vcond<mode>"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(if_then_else:VEC_I
(match_operator 3 "comparison_operator"
[(match_operand:VEC_I 4 "vint_operand" "")
(match_operand:VEC_I 5 "vint_operand" "")])
(match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
if (rs6000_emit_vector_cond_expr (operands[0], operands[1], operands[2],
operands[3], operands[4], operands[5]))
DONE;
else
FAIL;
}")
(define_expand "vcondu<mode>"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(if_then_else:VEC_I
(match_operator 3 "comparison_operator"
[(match_operand:VEC_I 4 "vint_operand" "")
(match_operand:VEC_I 5 "vint_operand" "")])
(match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
if (rs6000_emit_vector_cond_expr (operands[0], operands[1], operands[2],
operands[3], operands[4], operands[5]))
DONE;
else
FAIL;
}")
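The vcond/vcondu expanders above implement an elementwise select: compare operands 4 and 5 with operator 3, then take each lane from operand 1 where the comparison holds and from operand 2 where it does not. A rough model of that semantics (hypothetical Python, assuming 4-lane vectors):

```python
import operator

def vcond(cmp, a, b, if_true, if_false):
    """Elementwise: result[i] = if_true[i] if cmp(a[i], b[i]) else if_false[i]."""
    return [t if cmp(x, y) else f
            for x, y, t, f in zip(a, b, if_true, if_false)]

print(vcond(operator.gt, [1, 5, 3, 7], [4, 2, 3, 0],
            [10, 11, 12, 13], [20, 21, 22, 23]))
```

rs6000_emit_vector_cond_expr lowers this to a vector compare plus a vsel-style select; when the comparison cannot be synthesized, the expander FAILs and the middle end falls back to scalar code.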
(define_expand "vector_eq<mode>"
[(set (match_operand:VEC_C 0 "vlogical_operand" "")
(eq:VEC_C (match_operand:VEC_C 1 "vlogical_operand" "")
(match_operand:VEC_C 2 "vlogical_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_gt<mode>"
[(set (match_operand:VEC_C 0 "vlogical_operand" "")
(gt:VEC_C (match_operand:VEC_C 1 "vlogical_operand" "")
(match_operand:VEC_C 2 "vlogical_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_ge<mode>"
[(set (match_operand:VEC_C 0 "vlogical_operand" "")
(ge:VEC_C (match_operand:VEC_C 1 "vlogical_operand" "")
(match_operand:VEC_C 2 "vlogical_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_gtu<mode>"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(gtu:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_geu<mode>"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(geu:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
;; Note the arguments for __builtin_altivec_vsel are op2, op1, mask,
;; which is the reverse of the order we want
(define_expand "vector_select_<mode>"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(if_then_else:VEC_L
(ne:CC (match_operand:VEC_L 3 "vlogical_operand" "")
(const_int 0))
(match_operand:VEC_L 2 "vlogical_operand" "")
(match_operand:VEC_L 1 "vlogical_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_select_<mode>_uns"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(if_then_else:VEC_L
(ne:CCUNS (match_operand:VEC_L 3 "vlogical_operand" "")
(const_int 0))
(match_operand:VEC_L 2 "vlogical_operand" "")
(match_operand:VEC_L 1 "vlogical_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
;; Expansions that compare vectors producing a vector result and a predicate,
;; setting CR6 to indicate a combined status
(define_expand "vector_eq_<mode>_p"
[(parallel
[(set (reg:CC 74)
(unspec:CC [(eq:CC (match_operand:VEC_A 1 "vlogical_operand" "")
(match_operand:VEC_A 2 "vlogical_operand" ""))]
UNSPEC_PREDICATE))
(set (match_operand:VEC_A 0 "vlogical_operand" "")
(eq:VEC_A (match_dup 1)
(match_dup 2)))])]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_gt_<mode>_p"
[(parallel
[(set (reg:CC 74)
(unspec:CC [(gt:CC (match_operand:VEC_A 1 "vlogical_operand" "")
(match_operand:VEC_A 2 "vlogical_operand" ""))]
UNSPEC_PREDICATE))
(set (match_operand:VEC_A 0 "vlogical_operand" "")
(gt:VEC_A (match_dup 1)
(match_dup 2)))])]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_ge_<mode>_p"
[(parallel
[(set (reg:CC 74)
(unspec:CC [(ge:CC (match_operand:VEC_F 1 "vfloat_operand" "")
(match_operand:VEC_F 2 "vfloat_operand" ""))]
UNSPEC_PREDICATE))
(set (match_operand:VEC_F 0 "vfloat_operand" "")
(ge:VEC_F (match_dup 1)
(match_dup 2)))])]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "vector_gtu_<mode>_p"
[(parallel
[(set (reg:CC 74)
(unspec:CC [(gtu:CC (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" ""))]
UNSPEC_PREDICATE))
(set (match_operand:VEC_I 0 "vlogical_operand" "")
(gtu:VEC_I (match_dup 1)
(match_dup 2)))])]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"")
;; AltiVec predicates.
(define_expand "cr6_test_for_zero"
[(set (match_operand:SI 0 "register_operand" "=r")
(eq:SI (reg:CC 74)
(const_int 0)))]
"TARGET_ALTIVEC"
"")
(define_expand "cr6_test_for_zero_reverse"
[(set (match_operand:SI 0 "register_operand" "=r")
(eq:SI (reg:CC 74)
(const_int 0)))
(set (match_dup 0) (minus:SI (const_int 1) (match_dup 0)))]
"TARGET_ALTIVEC"
"")
(define_expand "cr6_test_for_lt"
[(set (match_operand:SI 0 "register_operand" "=r")
(lt:SI (reg:CC 74)
(const_int 0)))]
"TARGET_ALTIVEC"
"")
(define_expand "cr6_test_for_lt_reverse"
[(set (match_operand:SI 0 "register_operand" "=r")
(lt:SI (reg:CC 74)
(const_int 0)))
(set (match_dup 0) (minus:SI (const_int 1) (match_dup 0)))]
"TARGET_ALTIVEC"
"")
;; Vector logical instructions
(define_expand "xor<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(xor:VEC_L (match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:VEC_L 2 "vlogical_operand" "")))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "ior<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(ior:VEC_L (match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:VEC_L 2 "vlogical_operand" "")))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "and<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(and:VEC_L (match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:VEC_L 2 "vlogical_operand" "")))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "one_cmpl<mode>2"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(not:VEC_L (match_operand:VEC_L 1 "vlogical_operand" "")))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "nor<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(not:VEC_L (ior:VEC_L (match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:VEC_L 2 "vlogical_operand" ""))))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
(define_expand "andc<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
(and:VEC_L (not:VEC_L (match_operand:VEC_L 2 "vlogical_operand" ""))
(match_operand:VEC_L 1 "vlogical_operand" "")))]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
"")
;; Same size conversions
(define_expand "float<VEC_int><mode>2"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(float:VEC_F (match_operand:<VEC_INT> 1 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_vcfsx (operands[0], operands[1], const0_rtx));
DONE;
}")
(define_expand "unsigned_float<VEC_int><mode>2"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
(unsigned_float:VEC_F (match_operand:<VEC_INT> 1 "vint_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_vcfux (operands[0], operands[1], const0_rtx));
DONE;
}")
(define_expand "fix_trunc<mode><VEC_int>2"
[(set (match_operand:<VEC_INT> 0 "vint_operand" "")
(fix:<VEC_INT> (match_operand:VEC_F 1 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_vctsxs (operands[0], operands[1], const0_rtx));
DONE;
}")
(define_expand "fixuns_trunc<mode><VEC_int>2"
[(set (match_operand:<VEC_INT> 0 "vint_operand" "")
(unsigned_fix:<VEC_INT> (match_operand:VEC_F 1 "vfloat_operand" "")))]
"VECTOR_UNIT_ALTIVEC_P (<MODE>mode)"
"
{
emit_insn (gen_altivec_vctuxs (operands[0], operands[1], const0_rtx));
DONE;
}")
;; Vector initialization, set, extract
(define_expand "vec_init<mode>"
[(match_operand:VEC_E 0 "vlogical_operand" "")
(match_operand:VEC_E 1 "" "")]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
{
rs6000_expand_vector_init (operands[0], operands[1]);
DONE;
})
(define_expand "vec_set<mode>"
[(match_operand:VEC_E 0 "vlogical_operand" "")
(match_operand:<VEC_base> 1 "register_operand" "")
(match_operand 2 "const_int_operand" "")]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
{
rs6000_expand_vector_set (operands[0], operands[1], INTVAL (operands[2]));
DONE;
})
(define_expand "vec_extract<mode>"
[(match_operand:<VEC_base> 0 "register_operand" "")
(match_operand:VEC_E 1 "vlogical_operand" "")
(match_operand 2 "const_int_operand" "")]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
{
rs6000_expand_vector_extract (operands[0], operands[1],
INTVAL (operands[2]));
DONE;
})
;; Interleave patterns
(define_expand "vec_interleave_highv4sf"
[(set (match_operand:V4SF 0 "vfloat_operand" "")
(vec_merge:V4SF
(vec_select:V4SF (match_operand:V4SF 1 "vfloat_operand" "")
(parallel [(const_int 0)
(const_int 2)
(const_int 1)
(const_int 3)]))
(vec_select:V4SF (match_operand:V4SF 2 "vfloat_operand" "")
(parallel [(const_int 2)
(const_int 0)
(const_int 3)
(const_int 1)]))
(const_int 5)))]
"VECTOR_UNIT_ALTIVEC_P (V4SFmode)"
"")
(define_expand "vec_interleave_lowv4sf"
[(set (match_operand:V4SF 0 "vfloat_operand" "")
(vec_merge:V4SF
(vec_select:V4SF (match_operand:V4SF 1 "vfloat_operand" "")
(parallel [(const_int 2)
(const_int 0)
(const_int 3)
(const_int 1)]))
(vec_select:V4SF (match_operand:V4SF 2 "vfloat_operand" "")
(parallel [(const_int 0)
(const_int 2)
(const_int 1)
(const_int 3)]))
(const_int 5)))]
"VECTOR_UNIT_ALTIVEC_P (V4SFmode)"
"")
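In the two interleave patterns above, `(vec_merge x y 5)` takes lanes 0 and 2 from the first operand and lanes 1 and 3 from the second (a set mask bit selects the first operand), and the inner `vec_select`s pre-shuffle the inputs so the net effect is the usual high/low interleave. A small Python model of the RTL semantics (a sketch, not part of the patch):

```python
def vec_select(v, sel):
    # (vec_select v (parallel [sel...]))
    return [v[i] for i in sel]

def vec_merge(x, y, mask):
    # (vec_merge x y mask): lane i from x when bit i of mask is set
    return [x[i] if (mask >> i) & 1 else y[i] for i in range(len(x))]

def interleave_high_v4sf(a, b):
    return vec_merge(vec_select(a, [0, 2, 1, 3]),
                     vec_select(b, [2, 0, 3, 1]), 5)

def interleave_low_v4sf(a, b):
    return vec_merge(vec_select(a, [2, 0, 3, 1]),
                     vec_select(b, [0, 2, 1, 3]), 5)

a, b = [0, 1, 2, 3], [10, 11, 12, 13]
print(interleave_high_v4sf(a, b))  # pairs up lanes 0 and 1 of each input
print(interleave_low_v4sf(a, b))   # pairs up lanes 2 and 3 of each input
```

The high form yields [a0, b0, a1, b1] and the low form [a2, b2, a3, b3], matching what the AltiVec vmrghw/vmrglw instructions produce.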
;; Align vector loads with a permute.
(define_expand "vec_realign_load_<mode>"
[(match_operand:VEC_K 0 "vlogical_operand" "")
(match_operand:VEC_K 1 "vlogical_operand" "")
(match_operand:VEC_K 2 "vlogical_operand" "")
(match_operand:V16QI 3 "vlogical_operand" "")]
"VECTOR_MEM_ALTIVEC_P (<MODE>mode)"
{
emit_insn (gen_altivec_vperm_<mode> (operands[0], operands[1], operands[2],
operands[3]));
DONE;
})
;; Vector shift left in bits. Currently supported only for shift
;; amounts that can be expressed as byte shifts (divisible by 8).
;; General shift amounts can be supported using vslo + vsl. We're
;; not expecting to see these yet (the vectorizer currently
;; generates only shifts divisible by byte_size).
(define_expand "vec_shl_<mode>"
[(match_operand:VEC_L 0 "vlogical_operand" "")
(match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:QI 2 "reg_or_short_operand" "")]
"TARGET_ALTIVEC"
"
{
rtx bitshift = operands[2];
rtx shift;
rtx insn;
HOST_WIDE_INT bitshift_val;
HOST_WIDE_INT byteshift_val;
if (! CONSTANT_P (bitshift))
FAIL;
bitshift_val = INTVAL (bitshift);
if (bitshift_val & 0x7)
FAIL;
byteshift_val = bitshift_val >> 3;
shift = gen_rtx_CONST_INT (QImode, byteshift_val);
insn = gen_altivec_vsldoi_<mode> (operands[0], operands[1], operands[1],
shift);
emit_insn (insn);
DONE;
}")
;; Vector shift right in bits. Currently supported only for shift
;; amounts that can be expressed as byte shifts (divisible by 8).
;; General shift amounts can be supported using vsro + vsr. We're
;; not expecting to see these yet (the vectorizer currently
;; generates only shifts divisible by byte_size).
(define_expand "vec_shr_<mode>"
[(match_operand:VEC_L 0 "vlogical_operand" "")
(match_operand:VEC_L 1 "vlogical_operand" "")
(match_operand:QI 2 "reg_or_short_operand" "")]
"TARGET_ALTIVEC"
"
{
rtx bitshift = operands[2];
rtx shift;
rtx insn;
HOST_WIDE_INT bitshift_val;
HOST_WIDE_INT byteshift_val;
if (! CONSTANT_P (bitshift))
FAIL;
bitshift_val = INTVAL (bitshift);
if (bitshift_val & 0x7)
FAIL;
byteshift_val = 16 - (bitshift_val >> 3);
shift = gen_rtx_CONST_INT (QImode, byteshift_val);
insn = gen_altivec_vsldoi_<mode> (operands[0], operands[1], operands[1],
shift);
emit_insn (insn);
DONE;
}")
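Both expanders above lower a whole-vector bit shift to vsldoi, which selects 16 contiguous bytes from the 32-byte concatenation of its two inputs: a left shift by 8*n bits starts the selection at byte n, a right shift at byte 16-n. Since the expander passes operand 1 as both vsldoi inputs, the vacated lanes hold rotated data rather than zeros, which the vectorizer's uses of these patterns do not inspect. A Python sketch of the reduction:

```python
def vsldoi(a, b, n):
    # Select 16 bytes of a||b starting at byte offset n (big-endian view)
    return (a + b)[n:n + 16]

def vec_shl_bytes(v, bits):
    assert bits % 8 == 0          # the expander FAILs otherwise
    return vsldoi(v, v, bits >> 3)

def vec_shr_bytes(v, bits):
    assert bits % 8 == 0
    return vsldoi(v, v, 16 - (bits >> 3))

v = list(range(16))
print(vec_shl_bytes(v, 32)[:4])   # left by 4 bytes: selection starts at byte 4
print(vec_shr_bytes(v, 32)[4:8])  # right by 4 bytes: byte 0 lands in lane 4
```

This mirrors the expanders' byteshift_val computation: bits >> 3 for vec_shl, and 16 - (bits >> 3) for vec_shr.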
;; Expanders for rotate each element in a vector
(define_expand "vrotl<mode>3"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(rotate:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"TARGET_ALTIVEC"
"")
;; Expanders for arithmetic shift left on each vector element
(define_expand "vashl<mode>3"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(ashift:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"TARGET_ALTIVEC"
"")
;; Expanders for logical shift right on each vector element
(define_expand "vlshr<mode>3"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(lshiftrt:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"TARGET_ALTIVEC"
"")
;; Expanders for arithmetic shift right on each vector element
(define_expand "vashr<mode>3"
[(set (match_operand:VEC_I 0 "vint_operand" "")
(ashiftrt:VEC_I (match_operand:VEC_I 1 "vint_operand" "")
(match_operand:VEC_I 2 "vint_operand" "")))]
"TARGET_ALTIVEC"
"")