backport: thumb2.md: New file.

2007-01-03  Paul Brook  <paul@codesourcery.com>

	Merge from sourcerygxx-4_1.
	gcc/
	* config/arm/thumb2.md: New file.
	* config/arm/elf.h (JUMP_TABLES_IN_TEXT_SECTION): Return True for
	Thumb-2.
	* config/arm/coff.h (JUMP_TABLES_IN_TEXT_SECTION): Ditto.
	* config/arm/aout.h (ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
	(ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump tables.
	* config/arm/aof.h (ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump
	tables.
	(ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
	* config/arm/ieee754-df.S: Use macros for Thumb-2/Unified asm
	compatibility.
	* config/arm/ieee754-sf.S: Ditto.
	* config/arm/arm.c (thumb_base_register_rtx_p): Rename...
	(thumb1_base_register_rtx_p): ... to this.
	(thumb_index_register_rtx_p): Rename...
	(thumb1_index_register_rtx_p): ... to this.
	(thumb_output_function_prologue): Rename...
	(thumb1_output_function_prologue): ... to this.
	(thumb_legitimate_address_p): Rename...
	(thumb1_legitimate_address_p): ... to this.
	(thumb_rtx_costs): Rename...
	(thumb1_rtx_costs): ... to this.
	(thumb_compute_save_reg_mask): Rename...
	(thumb1_compute_save_reg_mask): ... to this.
	(thumb_final_prescan_insn): Rename...
	(thumb1_final_prescan_insn): ... to this.
	(thumb_expand_epilogue): Rename...
	(thumb1_expand_epilogue): ... to this.
	(arm_unwind_emit_stm): Rename...
	(arm_unwind_emit_sequence): ... to this.
	(thumb2_legitimate_index_p, thumb2_legitimate_address_p,
	thumb1_compute_save_reg_mask, arm_dwarf_handle_frame_unspec,
	thumb2_index_mul_operand, output_move_vfp, arm_shift_nmem,
	arm_save_coproc_regs, thumb_set_frame_pointer, arm_print_condition,
	thumb2_final_prescan_insn, thumb2_asm_output_opcode, arm_output_shift,
	thumb2_output_casesi): New functions.
	(TARGET_DWARF_HANDLE_FRAME_UNSPEC): Define.
	(FL_THUMB2, FL_NOTM, FL_DIV, FL_FOR_ARCH6T2, FL_FOR_ARCH7,
	FL_FOR_ARCH7A, FL_FOR_ARCH7R, FL_FOR_ARCH7M, ARM_LSL_NAME,
	THUMB2_WORK_REGS): Define.
	(arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv, arm_condexec_count,
	arm_condexec_mask, arm_condexec_masklen): New variables.
	(all_architectures): Add armv6t2, armv7, armv7a, armv7r and armv7m.
	(arm_override_options): Check new CPU capabilities.
	Set new architecture flag variables.
	(arm_isr_value): Handle v7m interrupt functions.
	(use_return_insn): Return 0 for v7m interrupt functions.  Handle
	Thumb-2.
	(const_ok_for_arm): Handle Thumb-2 constants.
	(arm_gen_constant): Ditto.  Use movw when available.
	(arm_function_ok_for_sibcall): Return false for v7m interrupt
	functions.
	(legitimize_pic_address, arm_call_tls_get_addr): Handle Thumb-2.
	(thumb_find_work_register, arm_load_pic_register,
	legitimize_tls_address, arm_address_cost, load_multiple_sequence,
	emit_ldm_seq, emit_stm_seq, arm_select_cc_mode, get_jump_table_size,
	print_multi_reg, output_mov_long_double_fpa_from_arm,
	output_mov_long_double_arm_from_fpa, output_mov_double_fpa_from_arm,
	output_mov_double_fpa_from_arm, output_move_double,
	arm_compute_save_reg_mask, arm_compute_save_reg0_reg12_mask,
	output_return_instruction, arm_output_function_prologue,
	arm_output_epilogue, arm_get_frame_offsets, arm_regno_class,
	arm_output_mi_thunk, thumb_set_return_address): Ditto.
	(arm_expand_prologue): Handle Thumb-2.  Use arm_save_coproc_regs.
	(arm_coproc_mem_operand): Allow POST_INC/PRE_DEC.
	(arithmetic_instr, shift_op): Use arm_shift_nmem.
	(arm_print_operand): Use arm_print_condition.  Handle '(', ')', '.',
	'!' and 'L'.
	(arm_final_prescan_insn): Use extract_constrain_insn_cached.
	(thumb_expand_prologue): Use thumb_set_frame_pointer.
	(arm_file_start): Output directive for unified syntax.
	(arm_unwind_emit_set): Handle stack alignment instruction.
	* config/arm/lib1funcs.asm: Remove default for __ARM_ARCH__.
	Add v6t2, v7, v7a, v7r and v7m.
	(RETLDM): Add Thumb-2 code.
	(do_it, shift1, do_push, do_pop, COND, THUMB_SYNTAX): New macros.
	* config/arm/arm.h (TARGET_CPU_CPP_BUILTINS): Define __thumb2__.
	(TARGET_THUMB1, TARGET_32BIT, TARGET_THUMB2, TARGET_DSP_MULTIPLY,
	TARGET_INT_SIMD, TARGET_UNIFIED_ASM, ARM_FT_STACKALIGN, IS_STACKALIGN,
	THUMB2_TRAMPOLINE_TEMPLATE, TRAMPOLINE_ADJUST_ADDRESS,
	ASM_OUTPUT_OPCODE, THUMB2_GO_IF_LEGITIMATE_ADDRESS,
	THUMB2_LEGITIMIZE_ADDRESS, CASE_VECTOR_PC_RELATIVE,
	CASE_VECTOR_SHORTEN_MODE, ADDR_VEC_ALIGN, ASM_OUTPUT_CASE_END,
	ADJUST_INSN_LENGTH): Define.
	(TARGET_REALLY_IWMMXT, TARGET_IWMMXT_ABI, CONDITIONAL_REGISTER_USAGE,
	STATIC_CHAIN_REGNUM, HARD_REGNO_NREGS, INDEX_REG_CLASS,
	BASE_REG_CLASS, MODE_BASE_REG_CLASS, SMALL_REGISTER_CLASSES,
	PREFERRED_RELOAD_CLASS, SECONDARY_OUTPUT_RELOAD_CLASS,
	SECONDARY_INPUT_RELOAD_CLASS, LIBCALL_VALUE, FUNCTION_VALUE_REGNO_P,
	TRAMPOLINE_SIZE, INITIALIZE_TRAMPOLINE, HAVE_PRE_INCREMENT,
	HAVE_POST_DECREMENT, HAVE_PRE_DECREMENT, HAVE_PRE_MODIFY_DISP,
	HAVE_POST_MODIFY_DISP, HAVE_PRE_MODIFY_REG, HAVE_POST_MODIFY_REG,
	REGNO_MODE_OK_FOR_BASE_P, LEGITIMATE_CONSTANT_P,
	REG_MODE_OK_FOR_BASE_P, REG_OK_FOR_INDEX_P, GO_IF_LEGITIMATE_ADDRESS,
	LEGITIMIZE_ADDRESS, THUMB2_LEGITIMIZE_ADDRESS,
	GO_IF_MODE_DEPENDENT_ADDRESS, MEMORY_MOVE_COST, BRANCH_COST,
	ASM_APP_OFF, ASM_OUTPUT_CASE_LABEL, ARM_DECLARE_FUNCTION_NAME,
	FINAL_PRESCAN_INSN, PRINT_OPERAND_PUNCT_VALID_P,
	PRINT_OPERAND_ADDRESS): Adjust for Thumb-2.
	(arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv): New declarations.
	* config/arm/arm-cores.def: Add arm1156t2-s, cortex-a8, cortex-r4 and
	cortex-m3.
	* config/arm/arm-tune.md: Regenerate.
	* config/arm/arm-protos.h: Update prototypes.
	* config/arm/vfp.md: Enable patterns for Thumb-2.
	(arm_movsi_vfp): Add movw alternative.  Use output_move_vfp.
	(arm_movdi_vfp, movsf_vfp, movdf_vfp): Use output_move_vfp.
	(thumb2_movsi_vfp, thumb2_movdi_vfp, thumb2_movsf_vfp,
	thumb2_movdf_vfp, thumb2_movsfcc_vfp, thumb2_movdfcc_vfp): New.
	* config/arm/libunwind.S: Add Thumb-2 code.
	* config/arm/constraints.md: Update to include Thumb-2.
	* config/arm/ieee754-sf.S: Add Thumb-2/Unified asm support.
	* config/arm/ieee754-df.S: Ditto.
	* config/arm/bpabi.S: Ditto.
	* config/arm/t-arm (MD_INCLUDES): Add thumb2.md.
	* config/arm/predicates.md (low_register_operand,
	low_reg_or_int_operand, thumb_16bit_operator): New.
	(thumb_cmp_operand, thumb_cmpneg_operand): Rename...
	(thumb1_cmp_operand, thumb1_cmpneg_operand): ... to this.
	* config/arm/t-arm-elf: Add armv7 multilib.
	* config/arm/arm.md: Update patterns for Thumb-2 and Unified asm.
	Include thumb2.md.
	(UNSPEC_STACK_ALIGN, ce_count): New.
	(arm_incscc, arm_decscc, arm_umaxsi3, arm_uminsi3,
	arm_zero_extendsidi2, arm_zero_extendqidi2): New
	insns/expanders.
	* config/arm/fpa.md: Update patterns for Thumb-2 and Unified asm.
	(thumb2_movsf_fpa, thumb2_movdf_fpa, thumb2_movxf_fpa,
	thumb2_movsfcc_fpa, thumb2_movdfcc_fpa): New insns.
	* config/arm/cirrus.md: Update patterns for Thumb-2 and Unified asm.
	(cirrus_thumb2_movdi, cirrus_thumb2_movsi_insn,
	thumb2_cirrus_movsf_hard_insn, thumb2_cirrus_movdf_hard_insn): New
	insns.
	* doc/extend.texi: Document ARMv7-M interrupt functions.
	* doc/invoke.texi: Document Thumb-2 new cores+architectures.

From-SVN: r120408
Paul Brook 2007-01-03 23:48:10 +00:00, committed by Paul Brook
parent f8e7718c6f
commit 5b3e666315
27 changed files with 4644 additions and 1369 deletions


@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler, for Advanced RISC Machines
ARM compilation, AOF Assembler.
Copyright (C) 1995, 1996, 1997, 2000, 2003, 2004
Copyright (C) 1995, 1996, 1997, 2000, 2003, 2004, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@armltd.co.uk)
@@ -258,18 +258,47 @@ do { \
#define ARM_MCOUNT_NAME "_mcount"
/* Output of Dispatch Tables. */
#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
do \
{ \
if (TARGET_ARM) \
fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE)); \
else \
fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL)); \
} \
#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
do \
{ \
if (TARGET_ARM) \
fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE)); \
else if (TARGET_THUMB1) \
fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL)); \
else /* Thumb-2 */ \
{ \
switch (GET_MODE(body)) \
{ \
case QImode: /* TBB */ \
asm_fprintf (STREAM, "\tDCB\t(|L..%d| - |L..%d|)/2\n", \
VALUE, REL); \
break; \
case HImode: /* TBH */ \
asm_fprintf (STREAM, "\tDCW\t(|L..%d| - |L..%d|)/2\n", \
VALUE, REL); \
break; \
case SImode: \
if (flag_pic) \
asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1 - |L..%d|\n", \
VALUE, REL); \
else \
asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1\n", VALUE); \
break; \
default: \
gcc_unreachable(); \
} \
} \
} \
while (0)
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE))
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
do \
{ \
gcc_assert (!TARGET_THUMB2); \
fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE)); \
} \
while (0)
/* A label marking the start of a jump table is a data label. */
#define ASM_OUTPUT_CASE_LABEL(STREAM, PREFIX, NUM, TABLE) \


@@ -1,5 +1,5 @@
/* Definitions of target machine for GNU compiler, for ARM with a.out
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2004
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2004, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@armltd.co.uk).
@@ -214,16 +214,47 @@
#endif
/* Output an element of a dispatch table. */
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE)
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
do \
{ \
gcc_assert (!TARGET_THUMB2); \
asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE); \
} \
while (0)
/* Thumb-2 always uses addr_diff_elt so that the Table Branch instructions
can be used.  For non-pic code where the offsets are not suitable for
TBB/TBH the elements are output as absolute labels. */
#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
do \
{ \
if (TARGET_ARM) \
asm_fprintf (STREAM, "\tb\t%LL%d\n", VALUE); \
else \
else if (TARGET_THUMB1) \
asm_fprintf (STREAM, "\t.word\t%LL%d-%LL%d\n", VALUE, REL); \
else /* Thumb-2 */ \
{ \
switch (GET_MODE(body)) \
{ \
case QImode: /* TBB */ \
asm_fprintf (STREAM, "\t.byte\t(%LL%d-%LL%d)/2\n", \
VALUE, REL); \
break; \
case HImode: /* TBH */ \
asm_fprintf (STREAM, "\t.2byte\t(%LL%d-%LL%d)/2\n", \
VALUE, REL); \
break; \
case SImode: \
if (flag_pic) \
asm_fprintf (STREAM, "\t.word\t%LL%d+1-%LL%d\n", VALUE, REL); \
else \
asm_fprintf (STREAM, "\t.word\t%LL%d+1\n", VALUE); \
break; \
default: \
gcc_unreachable(); \
} \
} \
} \
while (0)
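The Thumb-2 branch of this macro encodes each dispatch-table entry as a halved label difference: TBB indexes a table of byte offsets and TBH a table of halfword offsets, both counted in 2-byte units, while the SImode case emits a full address with bit 0 set so a load into the PC stays in Thumb state. A rough Python sketch of that arithmetic (the label addresses below are hypothetical integers, not GCC data structures):

```python
def thumb2_table_entry(mode, label_addr, base_addr, pic=False):
    """Sketch of the Thumb-2 entry arithmetic in ASM_OUTPUT_ADDR_DIFF_ELT.

    QImode (TBB) and HImode (TBH) entries store (label - base) / 2,
    since Thumb-2 instructions always occupy a multiple of 2 bytes.
    SImode entries carry label + 1 (the Thumb bit), made PC-relative
    when compiling position-independent code."""
    if mode in ("QI", "HI"):      # TBB / TBH halved offsets
        return (label_addr - base_addr) // 2
    if mode == "SI":              # full-word entry with Thumb bit
        return label_addr + 1 - (base_addr if pic else 0)
    raise ValueError("unhandled mode: " + mode)

# A case label 12 bytes past the table base becomes a TBB entry of 6.
print(thumb2_table_entry("QI", 0x8010, 0x8004))            # -> 6
print(thumb2_table_entry("SI", 0x8010, 0x8004, pic=True))  # -> 13
```

This also shows why the assertion in ASM_OUTPUT_ADDR_VEC_ELT rejects Thumb-2: plain absolute vectors cannot express the halved, base-relative form TBB/TBH require.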


@@ -1,5 +1,5 @@
/* ARM CPU Cores
Copyright (C) 2003, 2005 Free Software Foundation, Inc.
Copyright (C) 2003, 2005, 2006, 2007 Free Software Foundation, Inc.
Written by CodeSourcery, LLC
This file is part of GCC.
@@ -115,3 +115,7 @@ ARM_CORE("arm1176jz-s", arm1176jzs, 6ZK, FL_LDSCHED, 9e)
ARM_CORE("arm1176jzf-s", arm1176jzfs, 6ZK, FL_LDSCHED | FL_VFPV2, 9e)
ARM_CORE("mpcorenovfp", mpcorenovfp, 6K, FL_LDSCHED, 9e)
ARM_CORE("mpcore", mpcore, 6K, FL_LDSCHED | FL_VFPV2, 9e)
ARM_CORE("arm1156t2-s", arm1156t2s, 6T2, FL_LDSCHED, 9e)
ARM_CORE("cortex-a8", cortexa8, 7A, FL_LDSCHED, 9e)
ARM_CORE("cortex-r4", cortexr4, 7R, FL_LDSCHED, 9e)
ARM_CORE("cortex-m3", cortexm3, 7M, FL_LDSCHED, 9e)


@@ -1,5 +1,5 @@
/* Prototypes for exported functions defined in arm.c and pe.c
Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005
Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@arm.com)
Minor hacks by Nick Clifton (nickc@cygnus.com)
@@ -33,6 +33,7 @@ extern const char *arm_output_epilogue (rtx);
extern void arm_expand_prologue (void);
extern const char *arm_strip_name_encoding (const char *);
extern void arm_asm_output_labelref (FILE *, const char *);
extern void thumb2_asm_output_opcode (FILE *);
extern unsigned long arm_current_func_type (void);
extern HOST_WIDE_INT arm_compute_initial_elimination_offset (unsigned int,
unsigned int);
@@ -58,7 +59,8 @@ extern int legitimate_pic_operand_p (rtx);
extern rtx legitimize_pic_address (rtx, enum machine_mode, rtx);
extern rtx legitimize_tls_address (rtx, rtx);
extern int arm_legitimate_address_p (enum machine_mode, rtx, RTX_CODE, int);
extern int thumb_legitimate_address_p (enum machine_mode, rtx, int);
extern int thumb1_legitimate_address_p (enum machine_mode, rtx, int);
extern int thumb2_legitimate_address_p (enum machine_mode, rtx, int);
extern int thumb_legitimate_offset_p (enum machine_mode, HOST_WIDE_INT);
extern rtx arm_legitimize_address (rtx, rtx, enum machine_mode);
extern rtx thumb_legitimize_address (rtx, rtx, enum machine_mode);
@ -108,6 +110,7 @@ extern const char *output_mov_long_double_arm_from_arm (rtx *);
extern const char *output_mov_double_fpa_from_arm (rtx *);
extern const char *output_mov_double_arm_from_fpa (rtx *);
extern const char *output_move_double (rtx *);
extern const char *output_move_vfp (rtx *operands);
extern const char *output_add_immediate (rtx *);
extern const char *arithmetic_instr (rtx, int);
extern void output_ascii_pseudo_op (FILE *, const unsigned char *, int);
@@ -116,7 +119,6 @@ extern void arm_poke_function_name (FILE *, const char *);
extern void arm_print_operand (FILE *, rtx, int);
extern void arm_print_operand_address (FILE *, rtx);
extern void arm_final_prescan_insn (rtx);
extern int arm_go_if_legitimate_address (enum machine_mode, rtx);
extern int arm_debugger_arg_offset (int, rtx);
extern int arm_is_longcall_p (rtx, int, int);
extern int arm_emit_vector_const (FILE *, rtx);
@@ -124,6 +126,7 @@ extern const char * arm_output_load_gr (rtx *);
extern const char *vfp_output_fstmd (rtx *);
extern void arm_set_return_address (rtx, rtx);
extern int arm_eliminable_register (rtx);
extern const char *arm_output_shift(rtx *, int);
extern bool arm_output_addr_const_extra (FILE *, rtx);
@@ -151,23 +154,24 @@ extern int arm_float_words_big_endian (void);
/* Thumb functions. */
extern void arm_init_expanders (void);
extern const char *thumb_unexpanded_epilogue (void);
extern void thumb_expand_prologue (void);
extern void thumb_expand_epilogue (void);
extern void thumb1_expand_prologue (void);
extern void thumb1_expand_epilogue (void);
#ifdef TREE_CODE
extern int is_called_in_ARM_mode (tree);
#endif
extern int thumb_shiftable_const (unsigned HOST_WIDE_INT);
#ifdef RTX_CODE
extern void thumb_final_prescan_insn (rtx);
extern void thumb1_final_prescan_insn (rtx);
extern void thumb2_final_prescan_insn (rtx);
extern const char *thumb_load_double_from_address (rtx *);
extern const char *thumb_output_move_mem_multiple (int, rtx *);
extern const char *thumb_call_via_reg (rtx);
extern void thumb_expand_movmemqi (rtx *);
extern int thumb_go_if_legitimate_address (enum machine_mode, rtx);
extern rtx arm_return_addr (int, rtx);
extern void thumb_reload_out_hi (rtx *);
extern void thumb_reload_in_hi (rtx *);
extern void thumb_set_return_address (rtx, rtx);
extern const char *thumb2_output_casesi(rtx *);
#endif
/* Defined in pe.c. */


@@ -1,5 +1,5 @@
;; -*- buffer-read-only: t -*-
;; Generated automatically by gentune.sh from arm-cores.def
(define_attr "tune"
"arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore"
"arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore,arm1156t2s,cortexa8,cortexr4,cortexm3"
(const (symbol_ref "arm_tune")))

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler, for ARM.
Copyright (C) 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
and Martin Simmons (@harleqn.co.uk).
More major hacks by Richard Earnshaw (rearnsha@arm.com)
@@ -39,6 +39,8 @@ extern char arm_arch_name[];
builtin_define ("__APCS_32__"); \
if (TARGET_THUMB) \
builtin_define ("__thumb__"); \
if (TARGET_THUMB2) \
builtin_define ("__thumb2__"); \
\
if (TARGET_BIG_END) \
{ \
@@ -181,8 +183,8 @@ extern GTY(()) rtx aof_pic_label;
#define TARGET_MAVERICK (arm_fp_model == ARM_FP_MODEL_MAVERICK)
#define TARGET_VFP (arm_fp_model == ARM_FP_MODEL_VFP)
#define TARGET_IWMMXT (arm_arch_iwmmxt)
#define TARGET_REALLY_IWMMXT (TARGET_IWMMXT && TARGET_ARM)
#define TARGET_IWMMXT_ABI (TARGET_ARM && arm_abi == ARM_ABI_IWMMXT)
#define TARGET_REALLY_IWMMXT (TARGET_IWMMXT && TARGET_32BIT)
#define TARGET_IWMMXT_ABI (TARGET_32BIT && arm_abi == ARM_ABI_IWMMXT)
#define TARGET_ARM (! TARGET_THUMB)
#define TARGET_EITHER 1 /* (TARGET_ARM | TARGET_THUMB) */
#define TARGET_BACKTRACE (leaf_function_p () \
@@ -195,6 +197,25 @@ extern GTY(()) rtx aof_pic_label;
#define TARGET_HARD_TP (target_thread_pointer == TP_CP15)
#define TARGET_SOFT_TP (target_thread_pointer == TP_SOFT)
/* Only 16-bit thumb code. */
#define TARGET_THUMB1 (TARGET_THUMB && !arm_arch_thumb2)
/* Arm or Thumb-2 32-bit code. */
#define TARGET_32BIT (TARGET_ARM || arm_arch_thumb2)
/* 32-bit Thumb-2 code. */
#define TARGET_THUMB2 (TARGET_THUMB && arm_arch_thumb2)
/* "DSP" multiply instructions, eg. SMULxy. */
#define TARGET_DSP_MULTIPLY \
(TARGET_32BIT && arm_arch5e && arm_arch_notm)
/* Integer SIMD instructions, and extend-accumulate instructions. */
#define TARGET_INT_SIMD \
(TARGET_32BIT && arm_arch6 && arm_arch_notm)
/* We could use unified syntax for arm mode, but for now we just use it
for Thumb-2. */
#define TARGET_UNIFIED_ASM TARGET_THUMB2
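The predicates above derive three regimes from two inputs: whether Thumb code is being generated (TARGET_THUMB) and whether the architecture supports Thumb-2 (arm_arch_thumb2). A minimal Python model of the classification (a sketch of the macro logic, not GCC code):

```python
def classify(target_thumb, arch_thumb2):
    """Model the new arm.h mode predicates (sketch only).

    TARGET_ARM is !TARGET_THUMB; Thumb then splits into 16-bit
    Thumb-1 and 32-bit Thumb-2, and TARGET_32BIT covers ARM mode
    plus Thumb-2."""
    target_arm = not target_thumb
    return {
        "TARGET_THUMB1": target_thumb and not arch_thumb2,
        "TARGET_THUMB2": target_thumb and arch_thumb2,
        "TARGET_32BIT": target_arm or arch_thumb2,
    }

# Thumb code on a Thumb-2-capable core is 32-bit Thumb-2, not Thumb-1.
print(classify(True, True))
```

This is why so many tests below change from TARGET_ARM to TARGET_32BIT: features that used to mean "not Thumb" now also apply to Thumb-2, while TARGET_THUMB1 keeps the old 16-bit-only restrictions.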
/* True iff the full BPABI is being used. If TARGET_BPABI is true,
then TARGET_AAPCS_BASED must be true -- but the converse does not
hold. TARGET_BPABI implies the use of the BPABI runtime library,
@@ -320,6 +341,9 @@ extern int arm_arch5e;
/* Nonzero if this chip supports the ARM Architecture 6 extensions. */
extern int arm_arch6;
/* Nonzero if instructions not present in the 'M' profile can be used. */
extern int arm_arch_notm;
/* Nonzero if this chip can benefit from load scheduling. */
extern int arm_ld_sched;
@@ -351,6 +375,12 @@ extern int arm_tune_wbuf;
interworking clean. */
extern int arm_cpp_interwork;
/* Nonzero if chip supports Thumb 2. */
extern int arm_arch_thumb2;
/* Nonzero if chip supports integer division instruction. */
extern int arm_arch_hwdiv;
#ifndef TARGET_DEFAULT
#define TARGET_DEFAULT (MASK_APCS_FRAME)
#endif
@@ -648,7 +678,7 @@ extern int arm_structure_size_boundary;
{ \
int regno; \
\
if (TARGET_SOFT_FLOAT || TARGET_THUMB || !TARGET_FPA) \
if (TARGET_SOFT_FLOAT || TARGET_THUMB1 || !TARGET_FPA) \
{ \
for (regno = FIRST_FPA_REGNUM; \
regno <= LAST_FPA_REGNUM; ++regno) \
@@ -660,6 +690,7 @@ extern int arm_structure_size_boundary;
/* When optimizing for size, it's better not to use \
the HI regs, because of the overhead of stacking \
them. */ \
/* ??? Is this still true for thumb2? */ \
for (regno = FIRST_HI_REGNUM; \
regno <= LAST_HI_REGNUM; ++regno) \
fixed_regs[regno] = call_used_regs[regno] = 1; \
@@ -668,10 +699,10 @@ extern int arm_structure_size_boundary;
/* The link register can be clobbered by any branch insn, \
but we have no way to track that at present, so mark \
it as unavailable. */ \
if (TARGET_THUMB) \
if (TARGET_THUMB1) \
fixed_regs[LR_REGNUM] = call_used_regs[LR_REGNUM] = 1; \
\
if (TARGET_ARM && TARGET_HARD_FLOAT) \
if (TARGET_32BIT && TARGET_HARD_FLOAT) \
{ \
if (TARGET_MAVERICK) \
{ \
@@ -807,7 +838,7 @@ extern int arm_structure_size_boundary;
/* The native (Norcroft) Pascal compiler for the ARM passes the static chain
as an invisible last argument (possible since varargs don't exist in
Pascal), so the following is not true. */
#define STATIC_CHAIN_REGNUM (TARGET_ARM ? 12 : 9)
#define STATIC_CHAIN_REGNUM 12
/* Define this to be where the real frame pointer is if it is not possible to
work out the offset between the frame pointer and the automatic variables
@@ -901,7 +932,7 @@ extern int arm_structure_size_boundary;
On the ARM regs are UNITS_PER_WORD bits wide; FPA regs can hold any FP
mode. */
#define HARD_REGNO_NREGS(REGNO, MODE) \
((TARGET_ARM \
((TARGET_32BIT \
&& REGNO >= FIRST_FPA_REGNUM \
&& REGNO != FRAME_POINTER_REGNUM \
&& REGNO != ARG_POINTER_REGNUM) \
@@ -1042,14 +1073,14 @@ enum reg_class
|| (CLASS) == CC_REG)
/* The class value for index registers, and the one for base regs. */
#define INDEX_REG_CLASS (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
#define BASE_REG_CLASS (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
#define INDEX_REG_CLASS (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
#define BASE_REG_CLASS (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
/* For the Thumb the high registers cannot be used as base registers
when addressing quantities in QI or HI mode; if we don't know the
mode, then we must be conservative. */
#define MODE_BASE_REG_CLASS(MODE) \
(TARGET_ARM ? GENERAL_REGS : \
(TARGET_32BIT ? GENERAL_REGS : \
(((MODE) == SImode) ? BASE_REGS : LO_REGS))
/* For Thumb we can not support SP+reg addressing, so we return LO_REGS
@@ -1060,15 +1091,16 @@ enum reg_class
registers explicitly used in the rtl to be used as spill registers
but prevents the compiler from extending the lifetime of these
registers. */
#define SMALL_REGISTER_CLASSES TARGET_THUMB
#define SMALL_REGISTER_CLASSES TARGET_THUMB1
/* Given an rtx X being reloaded into a reg required to be
in class CLASS, return the class of reg to actually use.
In general this is just CLASS, but for the Thumb we prefer
a LO_REGS class or a subset. */
#define PREFERRED_RELOAD_CLASS(X, CLASS) \
(TARGET_ARM ? (CLASS) : \
((CLASS) == BASE_REGS ? (CLASS) : LO_REGS))
In general this is just CLASS, but for the Thumb core registers and
immediate constants we prefer a LO_REGS class or a subset. */
#define PREFERRED_RELOAD_CLASS(X, CLASS) \
(TARGET_ARM ? (CLASS) : \
((CLASS) == GENERAL_REGS || (CLASS) == HI_REGS \
|| (CLASS) == NO_REGS ? LO_REGS : (CLASS)))
/* Must leave BASE_REGS reloads alone */
#define THUMB_SECONDARY_INPUT_RELOAD_CLASS(CLASS, MODE, X) \
@@ -1093,7 +1125,7 @@ enum reg_class
((TARGET_VFP && TARGET_HARD_FLOAT \
&& (CLASS) == VFP_REGS) \
? vfp_secondary_reload_class (MODE, X) \
: TARGET_ARM \
: TARGET_32BIT \
? (((MODE) == HImode && ! arm_arch4 && true_regnum (X) == -1) \
? GENERAL_REGS : NO_REGS) \
: THUMB_SECONDARY_OUTPUT_RELOAD_CLASS (CLASS, MODE, X))
@@ -1109,7 +1141,7 @@ enum reg_class
&& (CLASS) == CIRRUS_REGS \
&& (CONSTANT_P (X) || GET_CODE (X) == SYMBOL_REF)) \
? GENERAL_REGS : \
(TARGET_ARM ? \
(TARGET_32BIT ? \
(((CLASS) == IWMMXT_REGS || (CLASS) == IWMMXT_GR_REGS) \
&& CONSTANT_P (X)) \
? GENERAL_REGS : \
@@ -1188,6 +1220,7 @@ enum reg_class
/* We could probably achieve better results by defining PROMOTE_MODE to help
cope with the variances between the Thumb's signed and unsigned byte and
halfword load instructions. */
/* ??? This should be safe for thumb2, but we may be able to do better. */
#define THUMB_LEGITIMIZE_RELOAD_ADDRESS(X, MODE, OPNUM, TYPE, IND_L, WIN) \
do { \
rtx new_x = thumb_legitimize_reload_address (&X, MODE, OPNUM, TYPE, IND_L); \
@@ -1215,7 +1248,7 @@ do { \
/* Moves between FPA_REGS and GENERAL_REGS are two memory insns. */
#define REGISTER_MOVE_COST(MODE, FROM, TO) \
(TARGET_ARM ? \
(TARGET_32BIT ? \
((FROM) == FPA_REGS && (TO) != FPA_REGS ? 20 : \
(FROM) != FPA_REGS && (TO) == FPA_REGS ? 20 : \
(FROM) == VFP_REGS && (TO) != VFP_REGS ? 10 : \
@@ -1289,10 +1322,10 @@ do { \
/* Define how to find the value returned by a library function
assuming the value has mode MODE. */
#define LIBCALL_VALUE(MODE) \
(TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_FPA \
(TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_FPA \
&& GET_MODE_CLASS (MODE) == MODE_FLOAT \
? gen_rtx_REG (MODE, FIRST_FPA_REGNUM) \
: TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK \
: TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK \
&& GET_MODE_CLASS (MODE) == MODE_FLOAT \
? gen_rtx_REG (MODE, FIRST_CIRRUS_FP_REGNUM) \
: TARGET_IWMMXT_ABI && arm_vector_mode_supported_p (MODE) \
@@ -1311,10 +1344,10 @@ do { \
/* On a Cirrus chip, mvf0 can return results. */
#define FUNCTION_VALUE_REGNO_P(REGNO) \
((REGNO) == ARG_REGISTER (1) \
|| (TARGET_ARM && ((REGNO) == FIRST_CIRRUS_FP_REGNUM) \
|| (TARGET_32BIT && ((REGNO) == FIRST_CIRRUS_FP_REGNUM) \
&& TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK) \
|| ((REGNO) == FIRST_IWMMXT_REGNUM && TARGET_IWMMXT_ABI) \
|| (TARGET_ARM && ((REGNO) == FIRST_FPA_REGNUM) \
|| (TARGET_32BIT && ((REGNO) == FIRST_FPA_REGNUM) \
&& TARGET_HARD_FLOAT_ABI && TARGET_FPA))
/* Amount of memory needed for an untyped call to save all possible return
@@ -1362,6 +1395,7 @@ do { \
#define ARM_FT_NAKED (1 << 3) /* No prologue or epilogue. */
#define ARM_FT_VOLATILE (1 << 4) /* Does not return. */
#define ARM_FT_NESTED (1 << 5) /* Embedded inside another func. */
#define ARM_FT_STACKALIGN (1 << 6) /* Called with misaligned stack. */
/* Some macros to test these flags. */
#define ARM_FUNC_TYPE(t) (t & ARM_FT_TYPE_MASK)
@@ -1369,6 +1403,7 @@ do { \
#define IS_VOLATILE(t) (t & ARM_FT_VOLATILE)
#define IS_NAKED(t) (t & ARM_FT_NAKED)
#define IS_NESTED(t) (t & ARM_FT_NESTED)
#define IS_STACKALIGN(t) (t & ARM_FT_STACKALIGN)
/* Structure used to hold the function stack frame layout. Offsets are
@@ -1570,6 +1605,8 @@ typedef struct
/* Determine if the epilogue should be output as RTL.
You should override this if you define FUNCTION_EXTRA_EPILOGUE. */
/* This is disabled for Thumb-2 because it will confuse the
conditional insn counter. */
#define USE_RETURN_INSN(ISCOND) \
(TARGET_ARM ? use_return_insn (ISCOND, NULL) : 0)
@@ -1646,37 +1683,57 @@ typedef struct
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
}
/* On the Thumb we always switch into ARM mode to execute the trampoline.
Why - because it is easier. This code will always be branched to via
a BX instruction and since the compiler magically generates the address
of the function the linker has no opportunity to ensure that the
bottom bit is set. Thus the processor will be in ARM mode when it
reaches this code. So we duplicate the ARM trampoline code and add
a switch into Thumb mode as well. */
#define THUMB_TRAMPOLINE_TEMPLATE(FILE) \
/* The Thumb-2 trampoline is similar to the arm implementation.
Unlike 16-bit Thumb, we enter the stub in thumb mode. */
#define THUMB2_TRAMPOLINE_TEMPLATE(FILE) \
{ \
asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n", \
STATIC_CHAIN_REGNUM, PC_REGNUM); \
asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n", \
PC_REGNUM, PC_REGNUM); \
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
}
#define THUMB1_TRAMPOLINE_TEMPLATE(FILE) \
{ \
fprintf (FILE, "\t.code 32\n"); \
ASM_OUTPUT_ALIGN(FILE, 2); \
fprintf (FILE, "\t.code\t16\n"); \
fprintf (FILE, ".Ltrampoline_start:\n"); \
asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n", \
STATIC_CHAIN_REGNUM, PC_REGNUM); \
asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n", \
IP_REGNUM, PC_REGNUM); \
asm_fprintf (FILE, "\torr\t%r, %r, #1\n", \
IP_REGNUM, IP_REGNUM); \
asm_fprintf (FILE, "\tbx\t%r\n", IP_REGNUM); \
fprintf (FILE, "\t.word\t0\n"); \
fprintf (FILE, "\t.word\t0\n"); \
fprintf (FILE, "\t.code 16\n"); \
asm_fprintf (FILE, "\tpush\t{r0, r1}\n"); \
asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
PC_REGNUM); \
asm_fprintf (FILE, "\tmov\t%r, r0\n", \
STATIC_CHAIN_REGNUM); \
asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
PC_REGNUM); \
asm_fprintf (FILE, "\tstr\tr0, [%r, #4]\n", \
SP_REGNUM); \
asm_fprintf (FILE, "\tpop\t{r0, %r}\n", \
PC_REGNUM); \
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
}
#define TRAMPOLINE_TEMPLATE(FILE) \
if (TARGET_ARM) \
ARM_TRAMPOLINE_TEMPLATE (FILE) \
else if (TARGET_THUMB2) \
THUMB2_TRAMPOLINE_TEMPLATE (FILE) \
else \
THUMB_TRAMPOLINE_TEMPLATE (FILE)
THUMB1_TRAMPOLINE_TEMPLATE (FILE)
/* Thumb trampolines should be entered in thumb mode, so set the bottom bit
of the address. */
#define TRAMPOLINE_ADJUST_ADDRESS(ADDR) do \
{ \
if (TARGET_THUMB) \
(ADDR) = expand_simple_binop (Pmode, IOR, (ADDR), GEN_INT(1), \
gen_reg_rtx (Pmode), 0, OPTAB_LIB_WIDEN); \
} while(0)
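The TRAMPOLINE_ADJUST_ADDRESS macro above ORs 1 into the trampoline address so that a BX to it enters Thumb state. A minimal C stand-in for the same bit manipulation (the function name is illustrative; GCC performs this on RTL via expand_simple_binop, not on plain integers):

```c
#include <assert.h>
#include <stdint.h>

/* Set the low bit of a code address for Thumb targets, leaving ARM
   addresses untouched.  On ARM, a BX to an address with bit 0 set
   switches the processor into Thumb state.  */
static uintptr_t
adjust_trampoline_address (uintptr_t addr, int target_thumb)
{
  return target_thumb ? (addr | 1) : addr;
}
```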
/* Length in units of the trampoline for entering a nested function. */
#define TRAMPOLINE_SIZE (TARGET_ARM ? 16 : 24)
#define TRAMPOLINE_SIZE (TARGET_32BIT ? 16 : 20)
/* Alignment required for a trampoline in bits. */
#define TRAMPOLINE_ALIGNMENT 32
@@ -1690,11 +1747,11 @@ typedef struct
{ \
emit_move_insn (gen_rtx_MEM (SImode, \
plus_constant (TRAMP, \
TARGET_ARM ? 8 : 16)), \
TARGET_32BIT ? 8 : 12)), \
CXT); \
emit_move_insn (gen_rtx_MEM (SImode, \
plus_constant (TRAMP, \
TARGET_ARM ? 12 : 20)), \
TARGET_32BIT ? 12 : 16)), \
FNADDR); \
emit_library_call (gen_rtx_SYMBOL_REF (Pmode, "__clear_cache"), \
0, VOIDmode, 2, TRAMP, Pmode, \
@@ -1705,13 +1762,13 @@ typedef struct
/* Addressing modes, and classification of registers for them. */
#define HAVE_POST_INCREMENT 1
#define HAVE_PRE_INCREMENT TARGET_ARM
#define HAVE_POST_DECREMENT TARGET_ARM
#define HAVE_PRE_DECREMENT TARGET_ARM
#define HAVE_PRE_MODIFY_DISP TARGET_ARM
#define HAVE_POST_MODIFY_DISP TARGET_ARM
#define HAVE_PRE_MODIFY_REG TARGET_ARM
#define HAVE_POST_MODIFY_REG TARGET_ARM
#define HAVE_PRE_INCREMENT TARGET_32BIT
#define HAVE_POST_DECREMENT TARGET_32BIT
#define HAVE_PRE_DECREMENT TARGET_32BIT
#define HAVE_PRE_MODIFY_DISP TARGET_32BIT
#define HAVE_POST_MODIFY_DISP TARGET_32BIT
#define HAVE_PRE_MODIFY_REG TARGET_32BIT
#define HAVE_POST_MODIFY_REG TARGET_32BIT
/* Macros to check register numbers against specific register classes. */
@@ -1723,20 +1780,20 @@ typedef struct
#define TEST_REGNO(R, TEST, VALUE) \
((R TEST VALUE) || ((unsigned) reg_renumber[R] TEST VALUE))
/* On the ARM, don't allow the pc to be used. */
/* Don't allow the pc to be used. */
#define ARM_REGNO_OK_FOR_BASE_P(REGNO) \
(TEST_REGNO (REGNO, <, PC_REGNUM) \
|| TEST_REGNO (REGNO, ==, FRAME_POINTER_REGNUM) \
|| TEST_REGNO (REGNO, ==, ARG_POINTER_REGNUM))
#define THUMB_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
#define THUMB1_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
(TEST_REGNO (REGNO, <=, LAST_LO_REGNUM) \
|| (GET_MODE_SIZE (MODE) >= 4 \
&& TEST_REGNO (REGNO, ==, STACK_POINTER_REGNUM)))
#define REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
(TARGET_THUMB \
? THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE) \
(TARGET_THUMB1 \
? THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE) \
: ARM_REGNO_OK_FOR_BASE_P (REGNO))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
@@ -1763,6 +1820,7 @@ typedef struct
#else
/* ??? Should the TARGET_ARM here also apply to thumb2? */
#define CONSTANT_ADDRESS_P(X) \
(GET_CODE (X) == SYMBOL_REF \
&& (CONSTANT_POOL_ADDRESS_P (X) \
@@ -1788,8 +1846,8 @@ typedef struct
#define LEGITIMATE_CONSTANT_P(X) \
(!arm_tls_referenced_p (X) \
&& (TARGET_ARM ? ARM_LEGITIMATE_CONSTANT_P (X) \
: THUMB_LEGITIMATE_CONSTANT_P (X)))
&& (TARGET_32BIT ? ARM_LEGITIMATE_CONSTANT_P (X) \
: THUMB_LEGITIMATE_CONSTANT_P (X)))
/* Special characters prefixed to function names
in order to encode attribute like information.
@@ -1823,6 +1881,11 @@ typedef struct
#define ASM_OUTPUT_LABELREF(FILE, NAME) \
arm_asm_output_labelref (FILE, NAME)
/* Output IT instructions for conditionally executed Thumb-2 instructions. */
#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
if (TARGET_THUMB2) \
thumb2_asm_output_opcode (STREAM);
/* The EABI specifies that constructors should go in .init_array.
Other targets use .ctors for compatibility. */
#ifndef ARM_EABI_CTORS_SECTION_OP
@@ -1898,7 +1961,8 @@ typedef struct
We have two alternate definitions for each of them.
The usual definition accepts all pseudo regs; the other rejects
them unless they have been allocated suitable hard regs.
The symbol REG_OK_STRICT causes the latter definition to be used. */
The symbol REG_OK_STRICT causes the latter definition to be used.
Thumb-2 has the same restrictions as ARM. */
#ifndef REG_OK_STRICT
#define ARM_REG_OK_FOR_BASE_P(X) \
@@ -1907,7 +1971,7 @@ typedef struct
|| REGNO (X) == FRAME_POINTER_REGNUM \
|| REGNO (X) == ARG_POINTER_REGNUM)
#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE) \
#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE) \
(REGNO (X) <= LAST_LO_REGNUM \
|| REGNO (X) >= FIRST_PSEUDO_REGISTER \
|| (GET_MODE_SIZE (MODE) >= 4 \
@@ -1922,8 +1986,8 @@ typedef struct
#define ARM_REG_OK_FOR_BASE_P(X) \
ARM_REGNO_OK_FOR_BASE_P (REGNO (X))
#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE) \
THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE) \
THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
#define REG_STRICT_P 1
@@ -1932,22 +1996,23 @@ typedef struct
/* Now define some helpers in terms of the above. */
#define REG_MODE_OK_FOR_BASE_P(X, MODE) \
(TARGET_THUMB \
? THUMB_REG_MODE_OK_FOR_BASE_P (X, MODE) \
(TARGET_THUMB1 \
? THUMB1_REG_MODE_OK_FOR_BASE_P (X, MODE) \
: ARM_REG_OK_FOR_BASE_P (X))
#define ARM_REG_OK_FOR_INDEX_P(X) ARM_REG_OK_FOR_BASE_P (X)
/* For Thumb, a valid index register is anything that can be used in
/* For 16-bit Thumb, a valid index register is anything that can be used in
a byte load instruction. */
#define THUMB_REG_OK_FOR_INDEX_P(X) THUMB_REG_MODE_OK_FOR_BASE_P (X, QImode)
#define THUMB1_REG_OK_FOR_INDEX_P(X) \
THUMB1_REG_MODE_OK_FOR_BASE_P (X, QImode)
/* Nonzero if X is a hard reg that can be used as an index
or if it is a pseudo reg. On the Thumb, the stack pointer
is not suitable. */
#define REG_OK_FOR_INDEX_P(X) \
(TARGET_THUMB \
? THUMB_REG_OK_FOR_INDEX_P (X) \
(TARGET_THUMB1 \
? THUMB1_REG_OK_FOR_INDEX_P (X) \
: ARM_REG_OK_FOR_INDEX_P (X))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
@@ -1972,17 +2037,25 @@ typedef struct
goto WIN; \
}
#define THUMB_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
#define THUMB2_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
{ \
if (thumb_legitimate_address_p (MODE, X, REG_STRICT_P)) \
if (thumb2_legitimate_address_p (MODE, X, REG_STRICT_P)) \
goto WIN; \
}
#define THUMB1_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
{ \
if (thumb1_legitimate_address_p (MODE, X, REG_STRICT_P)) \
goto WIN; \
}
#define GO_IF_LEGITIMATE_ADDRESS(MODE, X, WIN) \
if (TARGET_ARM) \
ARM_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN) \
else /* if (TARGET_THUMB) */ \
THUMB_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
else if (TARGET_THUMB2) \
THUMB2_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN) \
else /* if (TARGET_THUMB1) */ \
THUMB1_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
/* Try machine-dependent ways of modifying an illegitimate address
@@ -1992,7 +2065,12 @@ do { \
X = arm_legitimize_address (X, OLDX, MODE); \
} while (0)
#define THUMB_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
/* ??? Implement LEGITIMIZE_ADDRESS for thumb2. */
#define THUMB2_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
do { \
} while (0)
#define THUMB1_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
do { \
X = thumb_legitimize_address (X, OLDX, MODE); \
} while (0)
@@ -2001,8 +2079,10 @@ do { \
do { \
if (TARGET_ARM) \
ARM_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
else if (TARGET_THUMB2) \
THUMB2_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
else \
THUMB_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
THUMB1_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
\
if (memory_address_p (MODE, X)) \
goto WIN; \
@@ -2019,7 +2099,7 @@ do { \
/* Nothing helpful to do for the Thumb */
#define GO_IF_MODE_DEPENDENT_ADDRESS(ADDR, LABEL) \
if (TARGET_ARM) \
if (TARGET_32BIT) \
ARM_GO_IF_MODE_DEPENDENT_ADDRESS (ADDR, LABEL)
@@ -2027,6 +2107,13 @@ do { \
for the index in the tablejump instruction. */
#define CASE_VECTOR_MODE Pmode
#define CASE_VECTOR_PC_RELATIVE TARGET_THUMB2
#define CASE_VECTOR_SHORTEN_MODE(min, max, body) \
((min < 0 || max >= 0x2000 || !TARGET_THUMB2) ? SImode \
: (max >= 0x200) ? HImode \
: QImode)
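The CASE_VECTOR_SHORTEN_MODE macro above picks the narrowest jump-table entry mode that can hold every case offset. A hedged C sketch of the same decision (the function name and the 1/2/4 byte-width return convention are illustrative, not from GCC):

```c
#include <assert.h>

/* Mirror of the CASE_VECTOR_SHORTEN_MODE logic above: given the signed
   offset range [min, max] of the jump-table entries, return the entry
   width in bytes.  SImode (4) is forced for backward branches, large
   tables, or non-Thumb-2 targets; otherwise halfword (TBH) or byte
   (TBB) tables are used.  */
static int
case_vector_entry_size (long min, long max, int target_thumb2)
{
  if (min < 0 || max >= 0x2000 || !target_thumb2)
    return 4;               /* SImode */
  else if (max >= 0x200)
    return 2;               /* HImode, used by TBH */
  else
    return 1;               /* QImode, used by TBB */
}
```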
/* signed 'char' is most compatible, but RISC OS wants it unsigned.
unsigned is probably best, but may break some code. */
#ifndef DEFAULT_SIGNED_CHAR
@@ -2085,14 +2172,14 @@ do { \
/* Moves to and from memory are quite expensive */
#define MEMORY_MOVE_COST(M, CLASS, IN) \
(TARGET_ARM ? 10 : \
(TARGET_32BIT ? 10 : \
((GET_MODE_SIZE (M) < 4 ? 8 : 2 * GET_MODE_SIZE (M)) \
* (CLASS == LO_REGS ? 1 : 2)))
/* Try to generate sequences that don't involve branches, we can then use
conditional instructions */
#define BRANCH_COST \
(TARGET_ARM ? 4 : (optimize > 0 ? 2 : 0))
(TARGET_32BIT ? 4 : (optimize > 0 ? 2 : 0))
/* Position Independent Code. */
/* We decide which register to use based on the compilation options and
@@ -2160,7 +2247,8 @@ extern int making_const_table;
#define CLZ_DEFINED_VALUE_AT_ZERO(MODE, VALUE) ((VALUE) = 32, 1)
#undef ASM_APP_OFF
#define ASM_APP_OFF (TARGET_THUMB ? "\t.code\t16\n" : "")
#define ASM_APP_OFF (TARGET_THUMB1 ? "\t.code\t16\n" : \
TARGET_THUMB2 ? "\t.thumb\n" : "")
/* Output a push or a pop instruction (only used when profiling). */
#define ASM_OUTPUT_REG_PUSH(STREAM, REGNO) \
@@ -2184,16 +2272,28 @@ extern int making_const_table;
asm_fprintf (STREAM, "\tpop {%r}\n", REGNO); \
} while (0)
/* Jump table alignment is explicit in ASM_OUTPUT_CASE_LABEL. */
#define ADDR_VEC_ALIGN(JUMPTABLE) 0
/* This is how to output a label which precedes a jumptable. Since
Thumb instructions are 2 bytes, we may need explicit alignment here. */
#undef ASM_OUTPUT_CASE_LABEL
#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE) \
do \
{ \
if (TARGET_THUMB) \
ASM_OUTPUT_ALIGN (FILE, 2); \
(*targetm.asm_out.internal_label) (FILE, PREFIX, NUM); \
} \
#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE) \
do \
{ \
if (TARGET_THUMB && GET_MODE (PATTERN (JUMPTABLE)) == SImode) \
ASM_OUTPUT_ALIGN (FILE, 2); \
(*targetm.asm_out.internal_label) (FILE, PREFIX, NUM); \
} \
while (0)
/* Make sure subsequent insns are aligned after a TBB. */
#define ASM_OUTPUT_CASE_END(FILE, NUM, JUMPTABLE) \
do \
{ \
if (GET_MODE (PATTERN (JUMPTABLE)) == QImode) \
ASM_OUTPUT_ALIGN (FILE, 1); \
} \
while (0)
#define ARM_DECLARE_FUNCTION_NAME(STREAM, NAME, DECL) \
@@ -2201,11 +2301,13 @@ extern int making_const_table;
{ \
if (TARGET_THUMB) \
{ \
if (is_called_in_ARM_mode (DECL) \
|| current_function_is_thunk) \
if (is_called_in_ARM_mode (DECL) \
|| (TARGET_THUMB1 && current_function_is_thunk)) \
fprintf (STREAM, "\t.code 32\n") ; \
else if (TARGET_THUMB1) \
fprintf (STREAM, "\t.code\t16\n\t.thumb_func\n") ; \
else \
fprintf (STREAM, "\t.code 16\n\t.thumb_func\n") ; \
fprintf (STREAM, "\t.thumb\n\t.thumb_func\n") ; \
} \
if (TARGET_POKE_FUNCTION_NAME) \
arm_poke_function_name (STREAM, (char *) NAME); \
@@ -2247,17 +2349,28 @@ extern int making_const_table;
}
#endif
/* Add two bytes to the length of conditionally executed Thumb-2
instructions for the IT instruction. */
#define ADJUST_INSN_LENGTH(insn, length) \
if (TARGET_THUMB2 && GET_CODE (PATTERN (insn)) == COND_EXEC) \
length += 2;
/* Only perform branch elimination (by making instructions conditional) if
we're optimizing. Otherwise it's of no use anyway. */
we're optimizing. For Thumb-2 check if any IT instructions need
outputting. */
#define FINAL_PRESCAN_INSN(INSN, OPVEC, NOPERANDS) \
if (TARGET_ARM && optimize) \
arm_final_prescan_insn (INSN); \
else if (TARGET_THUMB) \
thumb_final_prescan_insn (INSN)
else if (TARGET_THUMB2) \
thumb2_final_prescan_insn (INSN); \
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
#define PRINT_OPERAND_PUNCT_VALID_P(CODE) \
(CODE == '@' || CODE == '|' \
|| (TARGET_ARM && (CODE == '?')) \
(CODE == '@' || CODE == '|' || CODE == '.' \
|| CODE == '(' || CODE == ')' \
|| (TARGET_32BIT && (CODE == '?')) \
|| (TARGET_THUMB2 && (CODE == '!')) \
|| (TARGET_THUMB && (CODE == '_')))
/* Output an operand of an instruction. */
@@ -2390,7 +2503,7 @@ extern int making_const_table;
}
#define PRINT_OPERAND_ADDRESS(STREAM, X) \
if (TARGET_ARM) \
if (TARGET_32BIT) \
ARM_PRINT_OPERAND_ADDRESS (STREAM, X) \
else \
THUMB_PRINT_OPERAND_ADDRESS (STREAM, X)

(File diff suppressed because it is too large.)

@@ -1,6 +1,6 @@
/* Miscellaneous BPABI functions.
Copyright (C) 2003, 2004 Free Software Foundation, Inc.
Copyright (C) 2003, 2004, 2007 Free Software Foundation, Inc.
Contributed by CodeSourcery, LLC.
This file is free software; you can redistribute it and/or modify it
@@ -44,7 +44,8 @@
ARM_FUNC_START aeabi_lcmp
subs ip, xxl, yyl
sbcs ip, xxh, yyh
subeqs ip, xxl, yyl
do_it eq
COND(sub,s,eq) ip, xxl, yyl
mov r0, ip
RET
FUNC_END aeabi_lcmp
@@ -55,12 +56,18 @@ ARM_FUNC_START aeabi_lcmp
ARM_FUNC_START aeabi_ulcmp
cmp xxh, yyh
do_it lo
movlo r0, #-1
do_it hi
movhi r0, #1
do_it ne
RETc(ne)
cmp xxl, yyl
do_it lo
movlo r0, #-1
do_it hi
movhi r0, #1
do_it eq
moveq r0, #0
RET
FUNC_END aeabi_ulcmp
@@ -71,11 +78,16 @@ ARM_FUNC_START aeabi_ulcmp
ARM_FUNC_START aeabi_ldivmod
sub sp, sp, #8
stmfd sp!, {sp, lr}
#if defined(__thumb2__)
mov ip, sp
push {ip, lr}
#else
do_push {sp, lr}
#endif
bl SYM(__gnu_ldivmod_helper) __PLT__
ldr lr, [sp, #4]
add sp, sp, #8
ldmfd sp!, {r2, r3}
do_pop {r2, r3}
RET
#endif /* L_aeabi_ldivmod */
@@ -84,11 +96,16 @@ ARM_FUNC_START aeabi_ldivmod
ARM_FUNC_START aeabi_uldivmod
sub sp, sp, #8
stmfd sp!, {sp, lr}
#if defined(__thumb2__)
mov ip, sp
push {ip, lr}
#else
do_push {sp, lr}
#endif
bl SYM(__gnu_uldivmod_helper) __PLT__
ldr lr, [sp, #4]
add sp, sp, #8
ldmfd sp!, {r2, r3}
do_pop {r2, r3}
RET
#endif /* L_aeabi_divmod */


@@ -1,5 +1,5 @@
;; Cirrus EP9312 "Maverick" ARM floating point co-processor description.
;; Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
;; Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Red Hat.
;; Written by Aldy Hernandez (aldyh@redhat.com)
@@ -34,7 +34,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(plus:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:DI 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfadd64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -44,7 +44,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(plus:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfadd32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -54,7 +54,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(plus:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfadds%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -64,7 +64,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(plus:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfaddd%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -74,7 +74,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(minus:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:DI 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsub64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -84,7 +84,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(minus:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsub32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -94,7 +94,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(minus:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsubs%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -104,7 +104,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(minus:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsubd%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -114,7 +114,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(mult:SI (match_operand:SI 2 "cirrus_fp_register" "v")
(match_operand:SI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfmul32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -124,7 +124,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(mult:DI (match_operand:DI 2 "cirrus_fp_register" "v")
(match_operand:DI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmul64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_dmult")
(set_attr "cirrus" "normal")]
@@ -136,7 +136,7 @@
(mult:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v"))
(match_operand:SI 3 "cirrus_fp_register" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfmac32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -149,7 +149,7 @@
(match_operand:SI 1 "cirrus_fp_register" "0")
(mult:SI (match_operand:SI 2 "cirrus_fp_register" "v")
(match_operand:SI 3 "cirrus_fp_register" "v"))))]
"0 && TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"0 && TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmsc32%?\\t%V0, %V2, %V3"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -159,7 +159,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(mult:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmuls%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -169,7 +169,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(mult:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmuld%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_dmult")
(set_attr "cirrus" "normal")]
@@ -179,7 +179,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashift:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsh32%?\\t%V0, %V1, #%s2"
[(set_attr "cirrus" "normal")]
)
@@ -188,7 +188,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashiftrt:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsh32%?\\t%V0, %V1, #-%s2"
[(set_attr "cirrus" "normal")]
)
@@ -197,7 +197,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashift:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "register_operand" "r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfrshl32%?\\t%V1, %V0, %s2"
[(set_attr "cirrus" "normal")]
)
@@ -206,7 +206,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashift:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "register_operand" "r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfrshl64%?\\t%V1, %V0, %s2"
[(set_attr "cirrus" "normal")]
)
@@ -215,7 +215,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashift:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsh64%?\\t%V0, %V1, #%s2"
[(set_attr "cirrus" "normal")]
)
@@ -224,7 +224,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashiftrt:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsh64%?\\t%V0, %V1, #-%s2"
[(set_attr "cirrus" "normal")]
)
@@ -232,7 +232,7 @@
(define_insn "*cirrus_absdi2"
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(abs:DI (match_operand:DI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabs64%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -242,7 +242,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(neg:DI (match_operand:DI 1 "cirrus_fp_register" "v")))
(clobber (reg:CC CC_REGNUM))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfneg64%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -250,7 +250,7 @@
(define_insn "*cirrus_negsi2"
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(neg:SI (match_operand:SI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfneg32%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -258,7 +258,7 @@
(define_insn "*cirrus_negsf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(neg:SF (match_operand:SF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfnegs%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -266,7 +266,7 @@
(define_insn "*cirrus_negdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(neg:DF (match_operand:DF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfnegd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -276,7 +276,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(abs:SI (match_operand:SI 1 "cirrus_fp_register" "v")))
(clobber (reg:CC CC_REGNUM))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfabs32%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -284,7 +284,7 @@
(define_insn "*cirrus_abssf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(abs:SF (match_operand:SF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabss%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -292,7 +292,7 @@
(define_insn "*cirrus_absdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(abs:DF (match_operand:DF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabsd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -302,7 +302,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float:SF (match_operand:SI 1 "s_register_operand" "r")))
(clobber (match_scratch:DF 2 "=v"))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmv64lr%?\\t%Z2, %1\;cfcvt32s%?\\t%V0, %Y2"
[(set_attr "length" "8")
(set_attr "cirrus" "move")]
@@ -312,7 +312,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float:DF (match_operand:SI 1 "s_register_operand" "r")))
(clobber (match_scratch:DF 2 "=v"))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmv64lr%?\\t%Z2, %1\;cfcvt32d%?\\t%V0, %Y2"
[(set_attr "length" "8")
(set_attr "cirrus" "move")]
@@ -321,14 +321,14 @@
(define_insn "floatdisf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float:SF (match_operand:DI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvt64s%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")])
(define_insn "floatdidf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float:DF (match_operand:DI 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvt64d%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")])
@@ -336,7 +336,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:SF (match_operand:SF 1 "cirrus_fp_register" "v"))))
(clobber (match_scratch:DF 2 "=v"))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cftruncs32%?\\t%Y2, %V1\;cfmvr64l%?\\t%0, %Z2"
[(set_attr "length" "8")
(set_attr "cirrus" "normal")]
@@ -346,7 +346,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:DF (match_operand:DF 1 "cirrus_fp_register" "v"))))
(clobber (match_scratch:DF 2 "=v"))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cftruncd32%?\\t%Y2, %V1\;cfmvr64l%?\\t%0, %Z2"
[(set_attr "length" "8")]
)
@@ -355,7 +355,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float_truncate:SF
(match_operand:DF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvtds%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -363,7 +363,7 @@
(define_insn "*cirrus_extendsfdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float_extend:DF (match_operand:SF 1 "cirrus_fp_register" "v")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvtsd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -478,3 +478,111 @@
(set_attr "cirrus" " not, not,not, not, not,normal,double,move,normal,double")]
)
(define_insn "*cirrus_thumb2_movdi"
[(set (match_operand:DI 0 "nonimmediate_di_operand" "=r,r,o<>,v,r,v,m,v")
(match_operand:DI 1 "di_operand" "rIK,mi,r,r,v,mi,v,v"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"*
{
switch (which_alternative)
{
case 0:
case 1:
case 2:
return (output_move_double (operands));
case 3: return \"cfmv64lr%?\\t%V0, %Q1\;cfmv64hr%?\\t%V0, %R1\";
case 4: return \"cfmvr64l%?\\t%Q0, %V1\;cfmvr64h%?\\t%R0, %V1\";
case 5: return \"cfldr64%?\\t%V0, %1\";
case 6: return \"cfstr64%?\\t%V1, %0\";
/* Shifting by 0 will just copy %1 into %0. */
case 7: return \"cfsh64%?\\t%V0, %V1, #0\";
default: abort ();
}
}"
[(set_attr "length" " 8, 8, 8, 8, 8, 4, 4, 4")
(set_attr "type" " *,load2,store2, *, *, load2,store2, *")
(set_attr "pool_range" " *,4096, *, *, *, 1020, *, *")
(set_attr "neg_pool_range" " *, 0, *, *, *, 1008, *, *")
(set_attr "cirrus" "not, not, not,move,normal,double,double,normal")]
)
;; Cirrus SI values have been outlawed. Look in arm.h for the comment
;; on HARD_REGNO_MODE_OK.
(define_insn "*cirrus_thumb2_movsi_insn"
[(set (match_operand:SI 0 "general_operand" "=r,r,r,m,*v,r,*v,T,*v")
(match_operand:SI 1 "general_operand" "rI,K,mi,r,r,*v,T,*v,*v"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0
&& (register_operand (operands[0], SImode)
|| register_operand (operands[1], SImode))"
"@
mov%?\\t%0, %1
mvn%?\\t%0, #%B1
ldr%?\\t%0, %1
str%?\\t%1, %0
cfmv64lr%?\\t%Z0, %1
cfmvr64l%?\\t%0, %Z1
cfldr32%?\\t%V0, %1
cfstr32%?\\t%V1, %0
cfsh32%?\\t%V0, %V1, #0"
[(set_attr "type" "*, *, load1,store1, *, *, load1,store1, *")
(set_attr "pool_range" "*, *, 4096, *, *, *, 1024, *, *")
(set_attr "neg_pool_range" "*, *, 0, *, *, *, 1012, *, *")
(set_attr "cirrus" "not,not, not, not,move,normal,normal,normal,normal")]
)
(define_insn "*thumb2_cirrus_movsf_hard_insn"
[(set (match_operand:SF 0 "nonimmediate_operand" "=v,v,v,r,m,r,r,m")
(match_operand:SF 1 "general_operand" "v,mE,r,v,v,r,mE,r"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK
&& (GET_CODE (operands[0]) != MEM
|| register_operand (operands[1], SFmode))"
"@
cfcpys%?\\t%V0, %V1
cfldrs%?\\t%V0, %1
cfmvsr%?\\t%V0, %1
cfmvrs%?\\t%0, %V1
cfstrs%?\\t%V1, %0
mov%?\\t%0, %1
ldr%?\\t%0, %1\\t%@ float
str%?\\t%1, %0\\t%@ float"
[(set_attr "length" " *, *, *, *, *, 4, 4, 4")
(set_attr "type" " *, load1, *, *,store1, *,load1,store1")
(set_attr "pool_range" " *, 1020, *, *, *, *,4096, *")
(set_attr "neg_pool_range" " *, 1008, *, *, *, *, 0, *")
(set_attr "cirrus" "normal,normal,move,normal,normal,not, not, not")]
)
(define_insn "*thumb2_cirrus_movdf_hard_insn"
[(set (match_operand:DF 0 "nonimmediate_operand" "=r,Q,r,m,r,v,v,v,r,m")
(match_operand:DF 1 "general_operand" "Q,r,r,r,mF,v,mF,r,v,v"))]
"TARGET_THUMB2
&& TARGET_HARD_FLOAT && TARGET_MAVERICK
&& (GET_CODE (operands[0]) != MEM
|| register_operand (operands[1], DFmode))"
"*
{
switch (which_alternative)
{
case 0: return \"ldm%?ia\\t%m1, %M0\\t%@ double\";
case 1: return \"stm%?ia\\t%m0, %M1\\t%@ double\";
case 2: case 3: case 4: return output_move_double (operands);
case 5: return \"cfcpyd%?\\t%V0, %V1\";
case 6: return \"cfldrd%?\\t%V0, %1\";
case 7: return \"cfmvdlr\\t%V0, %Q1\;cfmvdhr%?\\t%V0, %R1\";
case 8: return \"cfmvrdl%?\\t%Q0, %V1\;cfmvrdh%?\\t%R0, %V1\";
case 9: return \"cfstrd%?\\t%V1, %0\";
default: abort ();
}
}"
[(set_attr "type" "load1,store2, *,store2,load1, *, load1, *, *,store2")
(set_attr "length" " 4, 4, 8, 8, 8, 4, 4, 8, 8, 4")
(set_attr "pool_range" " *, *, *, *,4092, *, 1020, *, *, *")
(set_attr "neg_pool_range" " *, *, *, *, 0, *, 1008, *, *, *")
(set_attr "cirrus" " not, not,not, not, not,normal,double,move,normal,double")]
)


@@ -1,7 +1,7 @@
/* Definitions of target machine for GNU compiler.
For ARM with COFF object format.
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005
Free Software Foundation, Inc.
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005,
2007 Free Software Foundation, Inc.
Contributed by Doug Evans (devans@cygnus.com).
This file is part of GCC.
@@ -59,9 +59,10 @@
/* Define this macro if jump tables (for `tablejump' insns) should be
output in the text section, along with the assembler instructions.
Otherwise, the readonly data section is used. */
/* We put ARM jump tables in the text section, because it makes the code
more efficient, but for Thumb it's better to put them out of band. */
#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_ARM)
/* We put ARM and Thumb-2 jump tables in the text section, because it makes
the code more efficient, but for Thumb-1 it's better to put them out of
band. */
#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_32BIT)
#undef READONLY_DATA_SECTION_ASM_OP
#define READONLY_DATA_SECTION_ASM_OP "\t.section .rdata"


@@ -1,5 +1,5 @@
;; Constraint definitions for ARM and Thumb
;; Copyright (C) 2006 Free Software Foundation, Inc.
;; Copyright (C) 2006, 2007 Free Software Foundation, Inc.
;; Contributed by ARM Ltd.
;; This file is part of GCC.
@@ -20,20 +20,21 @@
;; Boston, MA 02110-1301, USA.
;; The following register constraints have been used:
;; - in ARM state: f, v, w, y, z
;; - in ARM/Thumb-2 state: f, v, w, y, z
;; - in Thumb state: h, k, b
;; - in both states: l, c
;; In ARM state, 'l' is an alias for 'r'
;; The following normal constraints have been used:
;; in ARM state: G, H, I, J, K, L, M
;; in Thumb state: I, J, K, L, M, N, O
;; in ARM/Thumb-2 state: G, H, I, J, K, L, M
;; in Thumb-1 state: I, J, K, L, M, N, O
;; The following multi-letter normal constraints have been used:
;; in ARM state: Da, Db, Dc
;; in ARM/Thumb-2 state: Da, Db, Dc
;; The following memory constraints have been used:
;; in ARM state: Q, Uq, Uv, Uy
;; in ARM/Thumb-2 state: Q, Uv, Uy
;; in ARM state: Uq
(define_register_constraint "f" "TARGET_ARM ? FPA_REGS : NO_REGS"
@@ -70,99 +71,103 @@
"@internal The condition code register.")
(define_constraint "I"
"In ARM state a constant that can be used as an immediate value in a Data
Processing instruction. In Thumb state a constant in the range 0-255."
"In ARM/Thumb-2 state a constant that can be used as an immediate value in a
Data Processing instruction. In Thumb-1 state a constant in the range
0-255."
(and (match_code "const_int")
(match_test "TARGET_ARM ? const_ok_for_arm (ival)
(match_test "TARGET_32BIT ? const_ok_for_arm (ival)
: ival >= 0 && ival <= 255")))
(define_constraint "J"
"In ARM state a constant in the range @minus{}4095-4095. In Thumb state
a constant in the range @minus{}255-@minus{}1."
"In ARM/Thumb-2 state a constant in the range @minus{}4095-4095. In Thumb-1
state a constant in the range @minus{}255-@minus{}1."
(and (match_code "const_int")
(match_test "TARGET_ARM ? (ival >= -4095 && ival <= 4095)
(match_test "TARGET_32BIT ? (ival >= -4095 && ival <= 4095)
: (ival >= -255 && ival <= -1)")))
(define_constraint "K"
"In ARM state a constant that satisfies the @code{I} constraint if inverted.
In Thumb state a constant that satisfies the @code{I} constraint multiplied
by any power of 2."
"In ARM/Thumb-2 state a constant that satisfies the @code{I} constraint if
inverted. In Thumb-1 state a constant that satisfies the @code{I}
constraint multiplied by any power of 2."
(and (match_code "const_int")
(match_test "TARGET_ARM ? const_ok_for_arm (~ival)
(match_test "TARGET_32BIT ? const_ok_for_arm (~ival)
: thumb_shiftable_const (ival)")))
(define_constraint "L"
"In ARM state a constant that satisfies the @code{I} constraint if negated.
In Thumb state a constant in the range @minus{}7-7."
"In ARM/Thumb-2 state a constant that satisfies the @code{I} constraint if
negated. In Thumb-1 state a constant in the range @minus{}7-7."
(and (match_code "const_int")
(match_test "TARGET_ARM ? const_ok_for_arm (-ival)
(match_test "TARGET_32BIT ? const_ok_for_arm (-ival)
: (ival >= -7 && ival <= 7)")))
;; The ARM state version is internal...
;; @internal In ARM state a constant in the range 0-32 or any power of 2.
;; @internal In ARM/Thumb-2 state a constant in the range 0-32 or any
;; power of 2.
(define_constraint "M"
"In Thumb state a constant that is a multiple of 4 in the range 0-1020."
"In Thumb-1 state a constant that is a multiple of 4 in the range 0-1020."
(and (match_code "const_int")
(match_test "TARGET_ARM ? ((ival >= 0 && ival <= 32)
(match_test "TARGET_32BIT ? ((ival >= 0 && ival <= 32)
|| ((ival & (ival - 1)) == 0))
: ((ival >= 0 && ival <= 1020) && ((ival & 3) == 0))")))
(define_constraint "N"
"In Thumb state a constant in the range 0-31."
"In ARM/Thumb-2 state a constant suitable for a MOVW instruction.
In Thumb-1 state a constant in the range 0-31."
(and (match_code "const_int")
(match_test "TARGET_THUMB && ival >= 0 && ival <= 31")))
(match_test "TARGET_32BIT ? arm_arch_thumb2 && ((ival & 0xffff0000) == 0)
: (ival >= 0 && ival <= 31)")))
(define_constraint "O"
"In Thumb state a constant that is a multiple of 4 in the range
"In Thumb-1 state a constant that is a multiple of 4 in the range
@minus{}508-508."
(and (match_code "const_int")
(match_test "TARGET_THUMB && ival >= -508 && ival <= 508
(match_test "TARGET_THUMB1 && ival >= -508 && ival <= 508
&& ((ival & 3) == 0)")))
(define_constraint "G"
"In ARM state a valid FPA immediate constant."
"In ARM/Thumb-2 state a valid FPA immediate constant."
(and (match_code "const_double")
(match_test "TARGET_ARM && arm_const_double_rtx (op)")))
(match_test "TARGET_32BIT && arm_const_double_rtx (op)")))
(define_constraint "H"
"In ARM state a valid FPA immediate constant when negated."
"In ARM/Thumb-2 state a valid FPA immediate constant when negated."
(and (match_code "const_double")
(match_test "TARGET_ARM && neg_const_double_rtx_ok_for_fpa (op)")))
(match_test "TARGET_32BIT && neg_const_double_rtx_ok_for_fpa (op)")))
(define_constraint "Da"
"@internal
In ARM state a const_int, const_double or const_vector that can
In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with two Data Processing insns."
(and (match_code "const_double,const_int,const_vector")
(match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 2")))
(match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 2")))
(define_constraint "Db"
"@internal
In ARM state a const_int, const_double or const_vector that can
In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with three Data Processing insns."
(and (match_code "const_double,const_int,const_vector")
(match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 3")))
(match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 3")))
(define_constraint "Dc"
"@internal
In ARM state a const_int, const_double or const_vector that can
In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with four Data Processing insns. This pattern is disabled
if optimizing for space or when we have load-delay slots to fill."
(and (match_code "const_double,const_int,const_vector")
(match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 4
(match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 4
&& !(optimize_size || arm_ld_sched)")))
(define_memory_constraint "Uv"
"@internal
In ARM state a valid VFP load/store address."
In ARM/Thumb-2 state a valid VFP load/store address."
(and (match_code "mem")
(match_test "TARGET_ARM && arm_coproc_mem_operand (op, FALSE)")))
(match_test "TARGET_32BIT && arm_coproc_mem_operand (op, FALSE)")))
(define_memory_constraint "Uy"
"@internal
In ARM state a valid iWMMX load/store address."
In ARM/Thumb-2 state a valid iWMMX load/store address."
(and (match_code "mem")
(match_test "TARGET_ARM && arm_coproc_mem_operand (op, TRUE)")))
(match_test "TARGET_32BIT && arm_coproc_mem_operand (op, TRUE)")))
(define_memory_constraint "Uq"
"@internal
@@ -174,7 +179,7 @@
(define_memory_constraint "Q"
"@internal
In ARM state an address that is a single base register."
In ARM/Thumb-2 state an address that is a single base register."
(and (match_code "mem")
(match_test "REG_P (XEXP (op, 0))")))


@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler.
For ARM with ELF obj format.
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2004, 2005
Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2004, 2005, 2007
Free Software Foundation, Inc.
Contributed by Philip Blundell <philb@gnu.org> and
Catherine Moore <clm@cygnus.com>
@@ -96,9 +96,10 @@
/* Define this macro if jump tables (for `tablejump' insns) should be
output in the text section, along with the assembler instructions.
Otherwise, the readonly data section is used. */
/* We put ARM jump tables in the text section, because it makes the code
more efficient, but for Thumb it's better to put them out of band. */
#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_ARM)
/* We put ARM and Thumb-2 jump tables in the text section, because it makes
the code more efficient, but for Thumb-1 it's better to put them out of
band. */
#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_32BIT)
#ifndef LINK_SPEC
#define LINK_SPEC "%{mbig-endian:-EB} %{mlittle-endian:-EL} -X"


@@ -1,6 +1,6 @@
;;- Machine description for FPA co-processor for ARM cpus.
;; Copyright 1991, 1993, 1994, 1995, 1996, 1996, 1997, 1998, 1999, 2000,
;; 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
;; 2001, 2002, 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
;; and Martin Simmons (@harleqn.co.uk).
;; More major hacks by Richard Earnshaw (rearnsha@arm.com).
@@ -22,6 +22,10 @@
;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
;; Boston, MA 02110-1301, USA.
;; Some FPA mnemonics are ambiguous between conditional infixes and
;; conditional suffixes. All instructions use a conditional infix,
;; even in unified assembly mode.
;; FPA automaton.
(define_automaton "armfp")
@@ -101,7 +105,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(plus:SF (match_operand:SF 1 "s_register_operand" "%f,f")
(match_operand:SF 2 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?s\\t%0, %1, %2
suf%?s\\t%0, %1, #%N2"
@@ -113,7 +117,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(plus:DF (match_operand:DF 1 "s_register_operand" "%f,f")
(match_operand:DF 2 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?d\\t%0, %1, %2
suf%?d\\t%0, %1, #%N2"
@@ -126,7 +130,7 @@
(plus:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f,f"))
(match_operand:DF 2 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?d\\t%0, %1, %2
suf%?d\\t%0, %1, #%N2"
@@ -139,7 +143,7 @@
(plus:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"adf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -151,7 +155,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"adf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -161,7 +165,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(minus:SF (match_operand:SF 1 "arm_float_rhs_operand" "f,G")
(match_operand:SF 2 "arm_float_rhs_operand" "fG,f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?s\\t%0, %1, %2
rsf%?s\\t%0, %2, %1"
@@ -172,7 +176,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(match_operand:DF 2 "arm_float_rhs_operand" "fG,f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?d\\t%0, %1, %2
rsf%?d\\t%0, %2, %1"
@@ -185,7 +189,7 @@
(minus:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"suf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -196,7 +200,7 @@
(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f,f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?d\\t%0, %1, %2
rsf%?d\\t%0, %2, %1"
@@ -210,7 +214,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"suf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -220,7 +224,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(mult:SF (match_operand:SF 1 "s_register_operand" "f")
(match_operand:SF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fml%?s\\t%0, %1, %2"
[(set_attr "type" "ffmul")
(set_attr "predicable" "yes")]
@@ -230,7 +234,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(mult:DF (match_operand:DF 1 "s_register_operand" "f")
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -241,7 +245,7 @@
(mult:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -252,7 +256,7 @@
(mult:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -263,7 +267,7 @@
(mult:DF
(float_extend:DF (match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF (match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -275,7 +279,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(div:SF (match_operand:SF 1 "arm_float_rhs_operand" "f,G")
(match_operand:SF 2 "arm_float_rhs_operand" "fG,f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
fdv%?s\\t%0, %1, %2
frd%?s\\t%0, %2, %1"
@@ -287,7 +291,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(match_operand:DF 2 "arm_float_rhs_operand" "fG,f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
dvf%?d\\t%0, %1, %2
rdf%?d\\t%0, %2, %1"
@@ -300,7 +304,7 @@
(div:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"dvf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -311,7 +315,7 @@
(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "fG")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rdf%?d\\t%0, %2, %1"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -323,7 +327,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"dvf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -333,7 +337,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(mod:SF (match_operand:SF 1 "s_register_operand" "f")
(match_operand:SF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?s\\t%0, %1, %2"
[(set_attr "type" "fdivs")
(set_attr "predicable" "yes")]
@@ -343,7 +347,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(mod:DF (match_operand:DF 1 "s_register_operand" "f")
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -354,7 +358,7 @@
(mod:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -365,7 +369,7 @@
(mod:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -377,7 +381,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -386,7 +390,7 @@
(define_insn "*negsf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(neg:SF (match_operand:SF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -395,7 +399,7 @@
(define_insn "*negdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(neg:DF (match_operand:DF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -405,7 +409,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(neg:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -414,7 +418,7 @@
(define_insn "*abssf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(abs:SF (match_operand:SF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -423,7 +427,7 @@
(define_insn "*absdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(abs:DF (match_operand:DF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -433,7 +437,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(abs:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -442,7 +446,7 @@
(define_insn "*sqrtsf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?s\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -451,7 +455,7 @@
(define_insn "*sqrtdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(sqrt:DF (match_operand:DF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?d\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -461,7 +465,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(sqrt:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?d\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -470,7 +474,7 @@
(define_insn "*floatsisf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(float:SF (match_operand:SI 1 "s_register_operand" "r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"flt%?s\\t%0, %1"
[(set_attr "type" "r_2_f")
(set_attr "predicable" "yes")]
@@ -479,7 +483,7 @@
(define_insn "*floatsidf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(float:DF (match_operand:SI 1 "s_register_operand" "r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"flt%?d\\t%0, %1"
[(set_attr "type" "r_2_f")
(set_attr "predicable" "yes")]
@@ -488,7 +492,7 @@
(define_insn "*fix_truncsfsi2_fpa"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fix%?z\\t%0, %1"
[(set_attr "type" "f_2_r")
(set_attr "predicable" "yes")]
@@ -497,7 +501,7 @@
(define_insn "*fix_truncdfsi2_fpa"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fix%?z\\t%0, %1"
[(set_attr "type" "f_2_r")
(set_attr "predicable" "yes")]
@@ -507,7 +511,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(float_truncate:SF
(match_operand:DF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mvf%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -516,7 +520,7 @@
(define_insn "*extendsfdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(float_extend:DF (match_operand:SF 1 "s_register_operand" "f")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mvf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -561,8 +565,8 @@
switch (which_alternative)
{
default:
case 0: return \"ldm%?ia\\t%m1, %M0\\t%@ double\";
case 1: return \"stm%?ia\\t%m0, %M1\\t%@ double\";
case 0: return \"ldm%(ia%)\\t%m1, %M0\\t%@ double\";
case 1: return \"stm%(ia%)\\t%m0, %M1\\t%@ double\";
case 2: return \"#\";
case 3: case 4: return output_move_double (operands);
case 5: return \"mvf%?d\\t%0, %1\";
@@ -609,11 +613,102 @@
(set_attr "type" "ffarith,f_load,f_store")]
)
;; stfs/ldfs always use a conditional infix. This works around the
;; ambiguity between "stf pl s" and "stfp ls".
(define_insn "*thumb2_movsf_fpa"
[(set (match_operand:SF 0 "nonimmediate_operand" "=f,f,f, m,f,r,r,r, m")
(match_operand:SF 1 "general_operand" "fG,H,mE,f,r,f,r,mE,r"))]
"TARGET_THUMB2
&& TARGET_HARD_FLOAT && TARGET_FPA
&& (GET_CODE (operands[0]) != MEM
|| register_operand (operands[1], SFmode))"
"@
mvf%?s\\t%0, %1
mnf%?s\\t%0, #%N1
ldf%?s\\t%0, %1
stf%?s\\t%1, %0
str%?\\t%1, [%|sp, #-4]!\;ldf%?s\\t%0, [%|sp], #4
stf%?s\\t%1, [%|sp, #-4]!\;ldr%?\\t%0, [%|sp], #4
mov%?\\t%0, %1 @bar
ldr%?\\t%0, %1\\t%@ float
str%?\\t%1, %0\\t%@ float"
[(set_attr "length" "4,4,4,4,8,8,4,4,4")
(set_attr "ce_count" "1,1,1,1,2,2,1,1,1")
(set_attr "predicable" "yes")
(set_attr "type"
"ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r,*,load1,store1")
(set_attr "pool_range" "*,*,1024,*,*,*,*,4096,*")
(set_attr "neg_pool_range" "*,*,1012,*,*,*,*,0,*")]
)
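The conditional-infix rule noted above avoids a real parsing ambiguity: with condition codes as suffixes, a mnemonic like `stfpls` could be read either as `stf` + `pl` + `s` (store single if PL) or as `stf` + `p` + `ls` (store packed if LS). A toy tokenizer makes the double reading concrete; the function and its parse labels are hypothetical, for illustration only.

```python
# ARM condition codes and FPA precision suffix letters.
CONDS = {"eq", "ne", "cs", "cc", "mi", "pl", "vs", "vc",
         "hi", "ls", "ge", "lt", "gt", "le", "al"}
PRECISIONS = {"s", "d", "e", "p"}  # single, double, extended, packed

def suffix_parses(rest):
    # Enumerate every way to read `rest` (the part after the base mnemonic)
    # as cond+precision or precision+cond under suffix ordering.
    parses = []
    for c in CONDS:
        for p in PRECISIONS:
            if rest == c + p:
                parses.append(("cond-then-prec", c, p))
            if rest == p + c:
                parses.append(("prec-then-cond", p, c))
    return parses
```

Running it on `"pls"` yields both parses, which is exactly why these patterns always place the condition infix between `stf`/`ldf` and the precision letter.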
;; Not predicable because we don't know the number of instructions.
(define_insn "*thumb2_movdf_fpa"
[(set (match_operand:DF 0 "nonimmediate_operand"
"=r,Q,r,m,r, f, f,f, m,!f,!r")
(match_operand:DF 1 "general_operand"
"Q, r,r,r,mF,fG,H,mF,f,r, f"))]
"TARGET_THUMB2
&& TARGET_HARD_FLOAT && TARGET_FPA
&& (GET_CODE (operands[0]) != MEM
|| register_operand (operands[1], DFmode))"
"*
{
switch (which_alternative)
{
default:
case 0: return \"ldm%(ia%)\\t%m1, %M0\\t%@ double\";
case 1: return \"stm%(ia%)\\t%m0, %M1\\t%@ double\";
case 2: case 3: case 4: return output_move_double (operands);
case 5: return \"mvf%?d\\t%0, %1\";
case 6: return \"mnf%?d\\t%0, #%N1\";
case 7: return \"ldf%?d\\t%0, %1\";
case 8: return \"stf%?d\\t%1, %0\";
case 9: return output_mov_double_fpa_from_arm (operands);
case 10: return output_mov_double_arm_from_fpa (operands);
}
}
"
[(set_attr "length" "4,4,8,8,8,4,4,4,4,8,8")
(set_attr "type"
"load1,store2,*,store2,load1,ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r")
(set_attr "pool_range" "*,*,*,*,4092,*,*,1024,*,*,*")
(set_attr "neg_pool_range" "*,*,*,*,0,*,*,1020,*,*,*")]
)
;; Saving and restoring the floating point registers in the prologue should
;; be done in XFmode, even though we don't support that for anything else
;; (Well, strictly it's 'internal representation', but that's effectively
;; XFmode).
;; Not predicable because we don't know the number of instructions.
(define_insn "*thumb2_movxf_fpa"
[(set (match_operand:XF 0 "nonimmediate_operand" "=f,f,f,m,f,r,r")
(match_operand:XF 1 "general_operand" "fG,H,m,f,r,f,r"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA && reload_completed"
"*
switch (which_alternative)
{
default:
case 0: return \"mvf%?e\\t%0, %1\";
case 1: return \"mnf%?e\\t%0, #%N1\";
case 2: return \"ldf%?e\\t%0, %1\";
case 3: return \"stf%?e\\t%1, %0\";
case 4: return output_mov_long_double_fpa_from_arm (operands);
case 5: return output_mov_long_double_arm_from_fpa (operands);
case 6: return output_mov_long_double_arm_from_arm (operands);
}
"
[(set_attr "length" "4,4,4,4,8,8,12")
(set_attr "type" "ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r,*")
(set_attr "pool_range" "*,*,1024,*,*,*,*")
(set_attr "neg_pool_range" "*,*,1004,*,*,*,*")]
)
(define_insn "*cmpsf_fpa"
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "f,f")
(match_operand:SF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -625,7 +720,7 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "f,f")
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -638,7 +733,7 @@
(compare:CCFP (float_extend:DF
(match_operand:SF 0 "s_register_operand" "f,f"))
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -651,7 +746,7 @@
(compare:CCFP (match_operand:DF 0 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"cmf%?\\t%0, %1"
[(set_attr "conds" "set")
(set_attr "type" "f_2_r")]
@@ -661,7 +756,7 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "f,f")
(match_operand:SF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -673,7 +768,7 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "f,f")
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -686,7 +781,7 @@
(compare:CCFPE (float_extend:DF
(match_operand:SF 0 "s_register_operand" "f,f"))
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -699,7 +794,7 @@
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"cmf%?e\\t%0, %1"
[(set_attr "conds" "set")
(set_attr "type" "f_2_r")]
@@ -748,3 +843,48 @@
(set_attr "type" "ffarith")
(set_attr "conds" "use")]
)
(define_insn "*thumb2_movsfcc_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f,f,f,f,f,f,f,f")
(if_then_else:SF
(match_operator 3 "arm_comparison_operator"
[(match_operand 4 "cc_register" "") (const_int 0)])
(match_operand:SF 1 "arm_float_add_operand" "0,0,fG,H,fG,fG,H,H")
(match_operand:SF 2 "arm_float_add_operand" "fG,H,0,0,fG,H,fG,H")))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA"
"@
it\\t%D3\;mvf%D3s\\t%0, %2
it\\t%D3\;mnf%D3s\\t%0, #%N2
it\\t%d3\;mvf%d3s\\t%0, %1
it\\t%d3\;mnf%d3s\\t%0, #%N1
ite\\t%d3\;mvf%d3s\\t%0, %1\;mvf%D3s\\t%0, %2
ite\\t%d3\;mvf%d3s\\t%0, %1\;mnf%D3s\\t%0, #%N2
ite\\t%d3\;mnf%d3s\\t%0, #%N1\;mvf%D3s\\t%0, %2
ite\\t%d3\;mnf%d3s\\t%0, #%N1\;mnf%D3s\\t%0, #%N2"
[(set_attr "length" "6,6,6,6,10,10,10,10")
(set_attr "type" "ffarith")
(set_attr "conds" "use")]
)
(define_insn "*thumb2_movdfcc_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f,f,f,f,f,f,f,f")
(if_then_else:DF
(match_operator 3 "arm_comparison_operator"
[(match_operand 4 "cc_register" "") (const_int 0)])
(match_operand:DF 1 "arm_float_add_operand" "0,0,fG,H,fG,fG,H,H")
(match_operand:DF 2 "arm_float_add_operand" "fG,H,0,0,fG,H,fG,H")))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA"
"@
it\\t%D3\;mvf%D3d\\t%0, %2
it\\t%D3\;mnf%D3d\\t%0, #%N2
it\\t%d3\;mvf%d3d\\t%0, %1
it\\t%d3\;mnf%d3d\\t%0, #%N1
ite\\t%d3\;mvf%d3d\\t%0, %1\;mvf%D3d\\t%0, %2
ite\\t%d3\;mvf%d3d\\t%0, %1\;mnf%D3d\\t%0, #%N2
ite\\t%d3\;mnf%d3d\\t%0, #%N1\;mvf%D3d\\t%0, %2
ite\\t%d3\;mnf%d3d\\t%0, #%N1\;mnf%D3d\\t%0, #%N2"
[(set_attr "length" "6,6,6,6,10,10,10,10")
(set_attr "type" "ffarith")
(set_attr "conds" "use")]
)
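The `it`/`ite` prefixes in the `*thumb2_movsfcc_fpa` and `*thumb2_movdfcc_fpa` templates above are Thumb-2 IT (If-Then) instructions: a 2-byte opcode that predicates the following instructions (`ite` predicates two, the second with the inverted condition). The `length` attribute values follow directly from that encoding: 2 bytes for the IT plus 4 bytes per predicated FPA instruction. A minimal sketch (Python, not part of the patch; the 4-byte FPA encoding size is an assumption) of that length arithmetic:

```python
IT_BYTES = 2        # Thumb-2 IT instruction is a 16-bit opcode
FPA_INSN_BYTES = 4  # assumption: each predicated FPA insn encodes in 32 bits

def it_block_length(predicated_insns):
    """Length in bytes of an IT block: the IT opcode plus its predicated insns."""
    return IT_BYTES + predicated_insns * FPA_INSN_BYTES

# Matches (set_attr "length" "6,6,6,6,10,10,10,10") in the patterns above:
assert it_block_length(1) == 6   # "it ...; mvf..." alternatives
assert it_block_length(2) == 10  # "ite ...; mvf...; mvf..." alternatives
```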


@ -1,6 +1,6 @@
/* ieee754-df.S double-precision floating point support for ARM
Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Nicolas Pitre (nico@cam.org)
This file is free software; you can redistribute it and/or modify it
@ -88,23 +88,26 @@ ARM_FUNC_ALIAS aeabi_dsub subdf3
ARM_FUNC_START adddf3
ARM_FUNC_ALIAS aeabi_dadd adddf3
1: stmfd sp!, {r4, r5, lr}
1: do_push {r4, r5, lr}
@ Look for zeroes, equal values, INF, or NAN.
mov r4, xh, lsl #1
mov r5, yh, lsl #1
shift1 lsl, r4, xh, #1
shift1 lsl, r5, yh, #1
teq r4, r5
do_it eq
teqeq xl, yl
orrnes ip, r4, xl
orrnes ip, r5, yl
mvnnes ip, r4, asr #21
mvnnes ip, r5, asr #21
do_it ne, ttt
COND(orr,s,ne) ip, r4, xl
COND(orr,s,ne) ip, r5, yl
COND(mvn,s,ne) ip, r4, asr #21
COND(mvn,s,ne) ip, r5, asr #21
beq LSYM(Lad_s)
@ Compute exponent difference. Make largest exponent in r4,
@ corresponding arg in xh-xl, and positive exponent difference in r5.
mov r4, r4, lsr #21
shift1 lsr, r4, r4, #21
rsbs r5, r4, r5, lsr #21
do_it lt
rsblt r5, r5, #0
ble 1f
add r4, r4, r5
@ -119,6 +122,7 @@ ARM_FUNC_ALIAS aeabi_dadd adddf3
@ already in xh-xl. We need up to 54 bit to handle proper rounding
@ of 0x1p54 - 1.1.
cmp r5, #54
do_it hi
RETLDM "r4, r5" hi
@ Convert mantissa to signed integer.
@ -127,15 +131,25 @@ ARM_FUNC_ALIAS aeabi_dadd adddf3
mov ip, #0x00100000
orr xh, ip, xh, lsr #12
beq 1f
#if defined(__thumb2__)
negs xl, xl
sbc xh, xh, xh, lsl #1
#else
rsbs xl, xl, #0
rsc xh, xh, #0
#endif
1:
tst yh, #0x80000000
mov yh, yh, lsl #12
orr yh, ip, yh, lsr #12
beq 1f
#if defined(__thumb2__)
negs yl, yl
sbc yh, yh, yh, lsl #1
#else
rsbs yl, yl, #0
rsc yh, yh, #0
#endif
1:
@ If exponent == difference, one or both args were denormalized.
@ Since this is not common case, rescale them off line.
@ -149,27 +163,35 @@ LSYM(Lad_x):
@ Shift yh-yl right per r5, add to xh-xl, keep leftover bits into ip.
rsbs lr, r5, #32
blt 1f
mov ip, yl, lsl lr
adds xl, xl, yl, lsr r5
shift1 lsl, ip, yl, lr
shiftop adds xl xl yl lsr r5 yl
adc xh, xh, #0
adds xl, xl, yh, lsl lr
adcs xh, xh, yh, asr r5
shiftop adds xl xl yh lsl lr yl
shiftop adcs xh xh yh asr r5 yh
b 2f
1: sub r5, r5, #32
add lr, lr, #32
cmp yl, #1
mov ip, yh, lsl lr
shift1 lsl,ip, yh, lr
do_it cs
orrcs ip, ip, #2 @ 2 not 1, to allow lsr #1 later
adds xl, xl, yh, asr r5
shiftop adds xl xl yh asr r5 yh
adcs xh, xh, yh, asr #31
2:
@ We now have a result in xh-xl-ip.
@ Keep absolute value in xh-xl-ip, sign in r5 (the n bit was set above)
and r5, xh, #0x80000000
bpl LSYM(Lad_p)
#if defined(__thumb2__)
mov lr, #0
negs ip, ip
sbcs xl, lr, xl
sbc xh, lr, xh
#else
rsbs ip, ip, #0
rscs xl, xl, #0
rsc xh, xh, #0
#endif
@ Determine how to normalize the result.
LSYM(Lad_p):
@ -195,7 +217,8 @@ LSYM(Lad_p):
@ Pack final result together.
LSYM(Lad_e):
cmp ip, #0x80000000
moveqs ip, xl, lsr #1
do_it eq
COND(mov,s,eq) ip, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
orr xh, xh, r5
@ -238,9 +261,11 @@ LSYM(Lad_l):
#else
teq xh, #0
do_it eq, t
moveq xh, xl
moveq xl, #0
clz r3, xh
do_it eq
addeq r3, r3, #32
sub r3, r3, #11
@ -256,20 +281,29 @@ LSYM(Lad_l):
@ since a register switch happened above.
add ip, r2, #20
rsb r2, r2, #12
mov xl, xh, lsl ip
mov xh, xh, lsr r2
shift1 lsl, xl, xh, ip
shift1 lsr, xh, xh, r2
b 3f
@ actually shift value left 1 to 20 bits, which might also represent
@ 32 to 52 bits if counting the register switch that happened earlier.
1: add r2, r2, #20
2: rsble ip, r2, #32
mov xh, xh, lsl r2
2: do_it le
rsble ip, r2, #32
shift1 lsl, xh, xh, r2
#if defined(__thumb2__)
lsr ip, xl, ip
itt le
orrle xh, xh, ip
lslle xl, xl, r2
#else
orrle xh, xh, xl, lsr ip
movle xl, xl, lsl r2
#endif
@ adjust exponent accordingly.
3: subs r4, r4, r3
do_it ge, tt
addge xh, xh, r4, lsl #20
orrge xh, xh, r5
RETLDM "r4, r5" ge
@ -285,23 +319,23 @@ LSYM(Lad_l):
@ shift result right of 1 to 20 bits, sign is in r5.
add r4, r4, #20
rsb r2, r4, #32
mov xl, xl, lsr r4
orr xl, xl, xh, lsl r2
orr xh, r5, xh, lsr r4
shift1 lsr, xl, xl, r4
shiftop orr xl xl xh lsl r2 yh
shiftop orr xh r5 xh lsr r4 yh
RETLDM "r4, r5"
@ shift result right of 21 to 31 bits, or left 11 to 1 bits after
@ a register switch from xh to xl.
1: rsb r4, r4, #12
rsb r2, r4, #32
mov xl, xl, lsr r2
orr xl, xl, xh, lsl r4
shift1 lsr, xl, xl, r2
shiftop orr xl xl xh lsl r4 yh
mov xh, r5
RETLDM "r4, r5"
@ Shift value right of 32 to 64 bits, or 0 to 32 bits after a switch
@ from xh to xl.
2: mov xl, xh, lsr r4
2: shift1 lsr, xl, xh, r4
mov xh, r5
RETLDM "r4, r5"
@ -310,6 +344,7 @@ LSYM(Lad_l):
LSYM(Lad_d):
teq r4, #0
eor yh, yh, #0x00100000
do_it eq, te
eoreq xh, xh, #0x00100000
addeq r4, r4, #1
subne r5, r5, #1
@ -318,15 +353,18 @@ LSYM(Lad_d):
LSYM(Lad_s):
mvns ip, r4, asr #21
mvnnes ip, r5, asr #21
do_it ne
COND(mvn,s,ne) ip, r5, asr #21
beq LSYM(Lad_i)
teq r4, r5
do_it eq
teqeq xl, yl
beq 1f
@ Result is x + 0.0 = x or 0.0 + y = y.
teq r4, #0
do_it eq, t
moveq xh, yh
moveq xl, yl
RETLDM "r4, r5"
@ -334,6 +372,7 @@ LSYM(Lad_s):
1: teq xh, yh
@ Result is x - x = 0.
do_it ne, tt
movne xh, #0
movne xl, #0
RETLDM "r4, r5" ne
@ -343,9 +382,11 @@ LSYM(Lad_s):
bne 2f
movs xl, xl, lsl #1
adcs xh, xh, xh
do_it cs
orrcs xh, xh, #0x80000000
RETLDM "r4, r5"
2: adds r4, r4, #(2 << 21)
do_it cc, t
addcc xh, xh, #(1 << 20)
RETLDM "r4, r5" cc
and r5, xh, #0x80000000
@ -365,13 +406,16 @@ LSYM(Lad_o):
@ otherwise return xh-xl (which is INF or -INF)
LSYM(Lad_i):
mvns ip, r4, asr #21
do_it ne, te
movne xh, yh
movne xl, yl
mvneqs ip, r5, asr #21
COND(mvn,s,eq) ip, r5, asr #21
do_it ne, t
movne yh, xh
movne yl, xl
orrs r4, xl, xh, lsl #12
orreqs r5, yl, yh, lsl #12
do_it eq, te
COND(orr,s,eq) r5, yl, yh, lsl #12
teqeq xh, yh
orrne xh, xh, #0x00080000 @ quiet NAN
RETLDM "r4, r5"
@ -385,9 +429,10 @@ ARM_FUNC_START floatunsidf
ARM_FUNC_ALIAS aeabi_ui2d floatunsidf
teq r0, #0
do_it eq, t
moveq r1, #0
RETc(eq)
stmfd sp!, {r4, r5, lr}
do_push {r4, r5, lr}
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
mov r5, #0 @ sign bit is 0
@ -404,12 +449,14 @@ ARM_FUNC_START floatsidf
ARM_FUNC_ALIAS aeabi_i2d floatsidf
teq r0, #0
do_it eq, t
moveq r1, #0
RETc(eq)
stmfd sp!, {r4, r5, lr}
do_push {r4, r5, lr}
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
ands r5, r0, #0x80000000 @ sign bit in r5
do_it mi
rsbmi r0, r0, #0 @ absolute value
.ifnc xl, r0
mov xl, r0
@ -427,17 +474,19 @@ ARM_FUNC_ALIAS aeabi_f2d extendsfdf2
mov xh, r2, asr #3 @ stretch exponent
mov xh, xh, rrx @ retrieve sign bit
mov xl, r2, lsl #28 @ retrieve remaining bits
andnes r3, r2, #0xff000000 @ isolate exponent
do_it ne, ttt
COND(and,s,ne) r3, r2, #0xff000000 @ isolate exponent
teqne r3, #0xff000000 @ if not 0, check if INF or NAN
eorne xh, xh, #0x38000000 @ fixup exponent otherwise.
RETc(ne) @ and return it.
teq r2, #0 @ if actually 0
do_it ne, e
teqne r3, #0xff000000 @ or INF or NAN
RETc(eq) @ we are done already.
@ value was denormalized. We can normalize it now.
stmfd sp!, {r4, r5, lr}
do_push {r4, r5, lr}
mov r4, #0x380 @ setup corresponding exponent
and r5, xh, #0x80000000 @ move sign bit in r5
bic xh, xh, #0x80000000
@ -451,7 +500,10 @@ ARM_FUNC_ALIAS aeabi_ul2d floatundidf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
do_it eq, t
mvfeqd f0, #0.0
#else
do_it eq
#endif
RETc(eq)
@ -460,9 +512,9 @@ ARM_FUNC_ALIAS aeabi_ul2d floatundidf
@ we can return the result in f0 as well as in r0/r1 for backwards
@ compatibility.
adr ip, LSYM(f0_ret)
stmfd sp!, {r4, r5, ip, lr}
do_push {r4, r5, ip, lr}
#else
stmfd sp!, {r4, r5, lr}
do_push {r4, r5, lr}
#endif
mov r5, #0
@ -473,7 +525,10 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
do_itt eq
mvfeqd f0, #0.0
#else
do_it eq
#endif
RETc(eq)
@ -482,15 +537,20 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ we can return the result in f0 as well as in r0/r1 for backwards
@ compatibility.
adr ip, LSYM(f0_ret)
stmfd sp!, {r4, r5, ip, lr}
do_push {r4, r5, ip, lr}
#else
stmfd sp!, {r4, r5, lr}
do_push {r4, r5, lr}
#endif
ands r5, ah, #0x80000000 @ sign bit in r5
bpl 2f
#if defined(__thumb2__)
negs al, al
sbc ah, ah, ah, lsl #1
#else
rsbs al, al, #0
rsc ah, ah, #0
#endif
2:
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
@ -508,16 +568,18 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ The value is too big. Scale it down a bit...
mov r2, #3
movs ip, ip, lsr #3
do_it ne
addne r2, r2, #3
movs ip, ip, lsr #3
do_it ne
addne r2, r2, #3
add r2, r2, ip, lsr #3
rsb r3, r2, #32
mov ip, xl, lsl r3
mov xl, xl, lsr r2
orr xl, xl, xh, lsl r3
mov xh, xh, lsr r2
shift1 lsl, ip, xl, r3
shift1 lsr, xl, xl, r2
shiftop orr xl xl xh lsl r3 lr
shift1 lsr, xh, xh, r2
add r4, r4, r2
b LSYM(Lad_p)
@ -526,7 +588,7 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ Legacy code expects the result to be returned in f0. Copy it
@ there as well.
LSYM(f0_ret):
stmfd sp!, {r0, r1}
do_push {r0, r1}
ldfd f0, [sp], #8
RETLDM
@ -543,13 +605,14 @@ LSYM(f0_ret):
ARM_FUNC_START muldf3
ARM_FUNC_ALIAS aeabi_dmul muldf3
stmfd sp!, {r4, r5, r6, lr}
do_push {r4, r5, r6, lr}
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
orr ip, ip, #0x700
ands r4, ip, xh, lsr #20
andnes r5, ip, yh, lsr #20
do_it ne, tte
COND(and,s,ne) r5, ip, yh, lsr #20
teqne r4, ip
teqne r5, ip
bleq LSYM(Lml_s)
@ -565,7 +628,8 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
bic xh, xh, ip, lsl #21
bic yh, yh, ip, lsl #21
orrs r5, xl, xh, lsl #12
orrnes r5, yl, yh, lsl #12
do_it ne
COND(orr,s,ne) r5, yl, yh, lsl #12
orr xh, xh, #0x00100000
orr yh, yh, #0x00100000
beq LSYM(Lml_1)
@ -646,6 +710,7 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
@ The LSBs in ip are only significant for the final rounding.
@ Fold them into lr.
teq ip, #0
do_it ne
orrne lr, lr, #1
@ Adjust result upon the MSB position.
@ -666,12 +731,14 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
@ Check exponent range for under/overflow.
subs ip, r4, #(254 - 1)
do_it hi
cmphi ip, #0x700
bhi LSYM(Lml_u)
@ Round the result, merge final exponent.
cmp lr, #0x80000000
moveqs lr, xl, lsr #1
do_it eq
COND(mov,s,eq) lr, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
RETLDM "r4, r5, r6"
@ -683,7 +750,8 @@ LSYM(Lml_1):
orr xl, xl, yl
eor xh, xh, yh
subs r4, r4, ip, lsr #1
rsbgts r5, r4, ip
do_it gt, tt
COND(rsb,s,gt) r5, r4, ip
orrgt xh, xh, r4, lsl #20
RETLDM "r4, r5, r6" gt
@ -698,6 +766,7 @@ LSYM(Lml_u):
@ Check if denormalized result is possible, otherwise return signed 0.
cmn r4, #(53 + 1)
do_it le, tt
movle xl, #0
bicle xh, xh, #0x7fffffff
RETLDM "r4, r5, r6" le
@ -712,14 +781,15 @@ LSYM(Lml_u):
@ shift result right of 1 to 20 bits, preserve sign bit, round, etc.
add r4, r4, #20
rsb r5, r4, #32
mov r3, xl, lsl r5
mov xl, xl, lsr r4
orr xl, xl, xh, lsl r5
shift1 lsl, r3, xl, r5
shift1 lsr, xl, xl, r4
shiftop orr xl xl xh lsl r5 r2
and r2, xh, #0x80000000
bic xh, xh, #0x80000000
adds xl, xl, r3, lsr #31
adc xh, r2, xh, lsr r4
shiftop adc xh r2 xh lsr r4 r6
orrs lr, lr, r3, lsl #1
do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@ -727,27 +797,29 @@ LSYM(Lml_u):
@ a register switch from xh to xl. Then round.
1: rsb r4, r4, #12
rsb r5, r4, #32
mov r3, xl, lsl r4
mov xl, xl, lsr r5
orr xl, xl, xh, lsl r4
shift1 lsl, r3, xl, r4
shift1 lsr, xl, xl, r5
shiftop orr xl xl xh lsl r4 r2
bic xh, xh, #0x7fffffff
adds xl, xl, r3, lsr #31
adc xh, xh, #0
orrs lr, lr, r3, lsl #1
do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@ Shift value right of 32 to 64 bits, or 0 to 32 bits after a switch
@ from xh to xl. Leftover bits are in r3-r6-lr for rounding.
2: rsb r5, r4, #32
orr lr, lr, xl, lsl r5
mov r3, xl, lsr r4
orr r3, r3, xh, lsl r5
mov xl, xh, lsr r4
shiftop orr lr lr xl lsl r5 r2
shift1 lsr, r3, xl, r4
shiftop orr r3 r3 xh lsl r5 r2
shift1 lsr, xl, xh, r4
bic xh, xh, #0x7fffffff
bic xl, xl, xh, lsr r4
shiftop bic xl xl xh lsr r4 r2
add xl, xl, r3, lsr #31
orrs lr, lr, r3, lsl #1
do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@ -760,15 +832,18 @@ LSYM(Lml_d):
1: movs xl, xl, lsl #1
adc xh, xh, xh
tst xh, #0x00100000
do_it eq
subeq r4, r4, #1
beq 1b
orr xh, xh, r6
teq r5, #0
do_it ne
movne pc, lr
2: and r6, yh, #0x80000000
3: movs yl, yl, lsl #1
adc yh, yh, yh
tst yh, #0x00100000
do_it eq
subeq r5, r5, #1
beq 3b
orr yh, yh, r6
@ -778,26 +853,29 @@ LSYM(Lml_s):
@ Isolate the INF and NAN cases away
teq r4, ip
and r5, ip, yh, lsr #20
do_it ne
teqne r5, ip
beq 1f
@ Here, one or more arguments are either denormalized or zero.
orrs r6, xl, xh, lsl #1
orrnes r6, yl, yh, lsl #1
do_it ne
COND(orr,s,ne) r6, yl, yh, lsl #1
bne LSYM(Lml_d)
@ Result is 0, but determine sign anyway.
LSYM(Lml_z):
eor xh, xh, yh
bic xh, xh, #0x7fffffff
and xh, xh, #0x80000000
mov xl, #0
RETLDM "r4, r5, r6"
1: @ One or both args are INF or NAN.
orrs r6, xl, xh, lsl #1
do_it eq, te
moveq xl, yl
moveq xh, yh
orrnes r6, yl, yh, lsl #1
COND(orr,s,ne) r6, yl, yh, lsl #1
beq LSYM(Lml_n) @ 0 * INF or INF * 0 -> NAN
teq r4, ip
bne 1f
@ -806,6 +884,7 @@ LSYM(Lml_z):
1: teq r5, ip
bne LSYM(Lml_i)
orrs r6, yl, yh, lsl #12
do_it ne, t
movne xl, yl
movne xh, yh
bne LSYM(Lml_n) @ <anything> * NAN -> NAN
@ -834,13 +913,14 @@ LSYM(Lml_n):
ARM_FUNC_START divdf3
ARM_FUNC_ALIAS aeabi_ddiv divdf3
stmfd sp!, {r4, r5, r6, lr}
do_push {r4, r5, r6, lr}
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
orr ip, ip, #0x700
ands r4, ip, xh, lsr #20
andnes r5, ip, yh, lsr #20
do_it ne, tte
COND(and,s,ne) r5, ip, yh, lsr #20
teqne r4, ip
teqne r5, ip
bleq LSYM(Ldv_s)
@ -871,6 +951,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
@ Ensure result will land to known bit position.
@ Apply exponent bias accordingly.
cmp r5, yh
do_it eq
cmpeq r6, yl
adc r4, r4, #(255 - 2)
add r4, r4, #0x300
@ -889,6 +970,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
@ The actual division loop.
1: subs lr, r6, yl
sbcs lr, r5, yh
do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip
@ -896,6 +978,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #1
@ -903,6 +986,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #2
@ -910,6 +994,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #3
@ -936,18 +1021,21 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
2:
@ Be sure result starts in the high word.
tst xh, #0x00100000
do_it eq, t
orreq xh, xh, xl
moveq xl, #0
3:
@ Check exponent range for under/overflow.
subs ip, r4, #(254 - 1)
do_it hi
cmphi ip, #0x700
bhi LSYM(Lml_u)
@ Round the result, merge final exponent.
subs ip, r5, yh
subeqs ip, r6, yl
moveqs ip, xl, lsr #1
do_it eq, t
COND(sub,s,eq) ip, r6, yl
COND(mov,s,eq) ip, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
RETLDM "r4, r5, r6"
@ -957,7 +1045,8 @@ LSYM(Ldv_1):
and lr, lr, #0x80000000
orr xh, lr, xh, lsr #12
adds r4, r4, ip, lsr #1
rsbgts r5, r4, ip
do_it gt, tt
COND(rsb,s,gt) r5, r4, ip
orrgt xh, xh, r4, lsl #20
RETLDM "r4, r5, r6" gt
@ -976,6 +1065,7 @@ LSYM(Ldv_u):
LSYM(Ldv_s):
and r5, ip, yh, lsr #20
teq r4, ip
do_it eq
teqeq r5, ip
beq LSYM(Lml_n) @ INF/NAN / INF/NAN -> NAN
teq r4, ip
@ -996,7 +1086,8 @@ LSYM(Ldv_s):
b LSYM(Lml_n) @ <anything> / NAN -> NAN
2: @ If both are nonzero, we need to normalize and resume above.
orrs r6, xl, xh, lsl #1
orrnes r6, yl, yh, lsl #1
do_it ne
COND(orr,s,ne) r6, yl, yh, lsl #1
bne LSYM(Lml_d)
@ One or both arguments are 0.
orrs r4, xl, xh, lsl #1
@ -1035,14 +1126,17 @@ ARM_FUNC_ALIAS eqdf2 cmpdf2
mov ip, xh, lsl #1
mvns ip, ip, asr #21
mov ip, yh, lsl #1
mvnnes ip, ip, asr #21
do_it ne
COND(mvn,s,ne) ip, ip, asr #21
beq 3f
@ Test for equality.
@ Note that 0.0 is equal to -0.0.
2: orrs ip, xl, xh, lsl #1 @ if x == 0.0 or -0.0
orreqs ip, yl, yh, lsl #1 @ and y == 0.0 or -0.0
do_it eq, e
COND(orr,s,eq) ip, yl, yh, lsl #1 @ and y == 0.0 or -0.0
teqne xh, yh @ or xh == yh
do_it eq, tt
teqeq xl, yl @ and xl == yl
moveq r0, #0 @ then equal.
RETc(eq)
@ -1054,10 +1148,13 @@ ARM_FUNC_ALIAS eqdf2 cmpdf2
teq xh, yh
@ Compare values if same sign
do_it pl
cmppl xh, yh
do_it eq
cmpeq xl, yl
@ Result:
do_it cs, e
movcs r0, yh, asr #31
mvncc r0, yh, asr #31
orr r0, r0, #1
@ -1100,14 +1197,15 @@ ARM_FUNC_ALIAS aeabi_cdcmple aeabi_cdcmpeq
@ The status-returning routines are required to preserve all
@ registers except ip, lr, and cpsr.
6: stmfd sp!, {r0, lr}
6: do_push {r0, lr}
ARM_CALL cmpdf2
@ Set the Z flag correctly, and the C flag unconditionally.
cmp r0, #0
cmp r0, #0
@ Clear the C flag if the return value was -1, indicating
@ that the first operand was smaller than the second.
cmnmi r0, #0
RETLDM "r0"
do_it mi
cmnmi r0, #0
RETLDM "r0"
FUNC_END aeabi_cdcmple
FUNC_END aeabi_cdcmpeq
@ -1117,6 +1215,7 @@ ARM_FUNC_START aeabi_dcmpeq
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
do_it eq, e
moveq r0, #1 @ Equal to.
movne r0, #0 @ Less than, greater than, or unordered.
RETLDM
@ -1127,6 +1226,7 @@ ARM_FUNC_START aeabi_dcmplt
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
do_it cc, e
movcc r0, #1 @ Less than.
movcs r0, #0 @ Equal to, greater than, or unordered.
RETLDM
@ -1137,6 +1237,7 @@ ARM_FUNC_START aeabi_dcmple
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
do_it ls, e
movls r0, #1 @ Less than or equal to.
movhi r0, #0 @ Greater than or unordered.
RETLDM
@ -1147,6 +1248,7 @@ ARM_FUNC_START aeabi_dcmpge
str lr, [sp, #-8]!
ARM_CALL aeabi_cdrcmple
do_it ls, e
movls r0, #1 @ Operand 2 is less than or equal to operand 1.
movhi r0, #0 @ Operand 2 greater than operand 1, or unordered.
RETLDM
@ -1157,6 +1259,7 @@ ARM_FUNC_START aeabi_dcmpgt
str lr, [sp, #-8]!
ARM_CALL aeabi_cdrcmple
do_it cc, e
movcc r0, #1 @ Operand 2 is less than operand 1.
movcs r0, #0 @ Operand 2 is greater than or equal to operand 1,
@ or they are unordered.
@ -1211,7 +1314,8 @@ ARM_FUNC_ALIAS aeabi_d2iz fixdfsi
orr r3, r3, #0x80000000
orr r3, r3, xl, lsr #21
tst xh, #0x80000000 @ the sign bit
mov r0, r3, lsr r2
shift1 lsr, r0, r3, r2
do_it ne
rsbne r0, r0, #0
RET
@ -1221,6 +1325,7 @@ ARM_FUNC_ALIAS aeabi_d2iz fixdfsi
2: orrs xl, xl, xh, lsl #12
bne 4f @ x is NAN.
3: ands r0, xh, #0x80000000 @ the sign bit
do_it eq
moveq r0, #0x7fffffff @ maximum signed positive si
RET
@ -1251,7 +1356,7 @@ ARM_FUNC_ALIAS aeabi_d2uiz fixunsdfsi
mov r3, xh, lsl #11
orr r3, r3, #0x80000000
orr r3, r3, xl, lsr #21
mov r0, r3, lsr r2
shift1 lsr, r0, r3, r2
RET
1: mov r0, #0
@ -1278,8 +1383,9 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
@ check exponent range.
mov r2, xh, lsl #1
subs r3, r2, #((1023 - 127) << 21)
subcss ip, r3, #(1 << 21)
rsbcss ip, ip, #(254 << 21)
do_it cs, t
COND(sub,s,cs) ip, r3, #(1 << 21)
COND(rsb,s,cs) ip, ip, #(254 << 21)
bls 2f @ value is out of range
1: @ shift and round mantissa
@ -1288,6 +1394,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
orr xl, ip, xl, lsr #29
cmp r2, #0x80000000
adc r0, xl, r3, lsl #2
do_it eq
biceq r0, r0, #1
RET
@ -1297,6 +1404,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
@ check if denormalized value is possible
adds r2, r3, #(23 << 21)
do_it lt, t
andlt r0, xh, #0x80000000 @ too small, return signed 0.
RETc(lt)
@ -1305,13 +1413,18 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
mov r2, r2, lsr #21
rsb r2, r2, #24
rsb ip, r2, #32
#if defined(__thumb2__)
lsls r3, xl, ip
#else
movs r3, xl, lsl ip
mov xl, xl, lsr r2
#endif
shift1 lsr, xl, xl, r2
do_it ne
orrne xl, xl, #1 @ fold r3 for rounding considerations.
mov r3, xh, lsl #11
mov r3, r3, lsr #11
orr xl, xl, r3, lsl ip
mov r3, r3, lsr r2
shiftop orr xl xl r3 lsl ip ip
shift1 lsr, r3, r3, r2
mov r3, r3, lsl #1
b 1b
@ -1319,6 +1432,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
mvns r3, r2, asr #21
bne 5f @ simple overflow
orrs r3, xl, xh, lsl #12
do_it ne, tt
movne r0, #0x7f000000
orrne r0, r0, #0x00c00000
RETc(ne) @ return NAN
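Several hunks above guard a new path with `#if defined(__thumb2__)`: Thumb-2 has no RSC (reverse subtract with carry), so the ARM-mode 64-bit negation `rsbs xl, xl, #0; rsc xh, xh, #0` is replaced by `negs xl; sbc xh, xh, xh, lsl #1`. A sketch (Python, not part of the patch) modelling the 32-bit carry behaviour to check that the Thumb-2 sequence computes the same two's-complement negation:

```python
MASK32 = 0xFFFFFFFF

def neg64_thumb2(xh, xl):
    """Model of: negs xl; sbc xh, xh, xh, lsl #1 (ARM carry = "no borrow")."""
    carry = 1 if xl == 0 else 0          # negs sets C only when xl == 0
    xl = (-xl) & MASK32
    # sbc rd, rn, op2: rd = rn - op2 - (1 - C); here op2 = (xh << 1) mod 2^32
    xh = (xh - ((xh << 1) & MASK32) - (1 - carry)) & MASK32
    return xh, xl

for x in (0, 1, 0xFFFFFFFF, 0x100000000, 0x123456789ABCDEF0, 2**64 - 1):
    nh, nl = neg64_thumb2(x >> 32, x & MASK32)
    assert (nh << 32) | nl == (-x) & (2**64 - 1)
```

The trick is that `xh - (xh << 1)` equals `-xh` modulo 2^32, so the borrow from the low word is the only extra input needed.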


@ -1,6 +1,6 @@
/* ieee754-sf.S single-precision floating point support for ARM
Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Nicolas Pitre (nico@cam.org)
This file is free software; you can redistribute it and/or modify it
@ -71,36 +71,42 @@ ARM_FUNC_ALIAS aeabi_fadd addsf3
1: @ Look for zeroes, equal values, INF, or NAN.
movs r2, r0, lsl #1
movnes r3, r1, lsl #1
do_it ne, ttt
COND(mov,s,ne) r3, r1, lsl #1
teqne r2, r3
mvnnes ip, r2, asr #24
mvnnes ip, r3, asr #24
COND(mvn,s,ne) ip, r2, asr #24
COND(mvn,s,ne) ip, r3, asr #24
beq LSYM(Lad_s)
@ Compute exponent difference. Make largest exponent in r2,
@ corresponding arg in r0, and positive exponent difference in r3.
mov r2, r2, lsr #24
rsbs r3, r2, r3, lsr #24
do_it gt, ttt
addgt r2, r2, r3
eorgt r1, r0, r1
eorgt r0, r1, r0
eorgt r1, r0, r1
do_it lt
rsblt r3, r3, #0
@ If exponent difference is too large, return largest argument
@ already in r0. We need up to 25 bit to handle proper rounding
@ of 0x1p25 - 1.1.
cmp r3, #25
do_it hi
RETc(hi)
@ Convert mantissa to signed integer.
tst r0, #0x80000000
orr r0, r0, #0x00800000
bic r0, r0, #0xff000000
do_it ne
rsbne r0, r0, #0
tst r1, #0x80000000
orr r1, r1, #0x00800000
bic r1, r1, #0xff000000
do_it ne
rsbne r1, r1, #0
@ If exponent == difference, one or both args were denormalized.
@ -114,15 +120,20 @@ LSYM(Lad_x):
@ Shift and add second arg to first arg in r0.
@ Keep leftover bits into r1.
adds r0, r0, r1, asr r3
shiftop adds r0 r0 r1 asr r3 ip
rsb r3, r3, #32
mov r1, r1, lsl r3
shift1 lsl, r1, r1, r3
@ Keep absolute value in r0-r1, sign in r3 (the n bit was set above)
and r3, r0, #0x80000000
bpl LSYM(Lad_p)
#if defined(__thumb2__)
negs r1, r1
sbc r0, r0, r0, lsl #1
#else
rsbs r1, r1, #0
rsc r0, r0, #0
#endif
@ Determine how to normalize the result.
LSYM(Lad_p):
@ -147,6 +158,7 @@ LSYM(Lad_p):
LSYM(Lad_e):
cmp r1, #0x80000000
adc r0, r0, r2, lsl #23
do_it eq
biceq r0, r0, #1
orr r0, r0, r3
RET
@ -185,16 +197,23 @@ LSYM(Lad_l):
clz ip, r0
sub ip, ip, #8
subs r2, r2, ip
mov r0, r0, lsl ip
shift1 lsl, r0, r0, ip
#endif
@ Final result with sign
@ If exponent negative, denormalize result.
do_it ge, et
addge r0, r0, r2, lsl #23
rsblt r2, r2, #0
orrge r0, r0, r3
#if defined(__thumb2__)
do_it lt, t
lsrlt r0, r0, r2
orrlt r0, r3, r0
#else
orrlt r0, r3, r0, lsr r2
#endif
RET
@ Fixup and adjust bit position for denormalized arguments.
@ -202,6 +221,7 @@ LSYM(Lad_l):
LSYM(Lad_d):
teq r2, #0
eor r1, r1, #0x00800000
do_it eq, te
eoreq r0, r0, #0x00800000
addeq r2, r2, #1
subne r3, r3, #1
@ -211,7 +231,8 @@ LSYM(Lad_s):
mov r3, r1, lsl #1
mvns ip, r2, asr #24
mvnnes ip, r3, asr #24
do_it ne
COND(mvn,s,ne) ip, r3, asr #24
beq LSYM(Lad_i)
teq r2, r3
@ -219,12 +240,14 @@ LSYM(Lad_s):
@ Result is x + 0.0 = x or 0.0 + y = y.
teq r2, #0
do_it eq
moveq r0, r1
RET
1: teq r0, r1
@ Result is x - x = 0.
do_it ne, t
movne r0, #0
RETc(ne)
@ -232,9 +255,11 @@ LSYM(Lad_s):
tst r2, #0xff000000
bne 2f
movs r0, r0, lsl #1
do_it cs
orrcs r0, r0, #0x80000000
RET
2: adds r2, r2, #(2 << 24)
do_it cc, t
addcc r0, r0, #(1 << 23)
RETc(cc)
and r3, r0, #0x80000000
@ -253,11 +278,13 @@ LSYM(Lad_o):
@ otherwise return r0 (which is INF or -INF)
LSYM(Lad_i):
mvns r2, r2, asr #24
do_it ne, et
movne r0, r1
mvneqs r3, r3, asr #24
COND(mvn,s,eq) r3, r3, asr #24
movne r1, r0
movs r2, r0, lsl #9
moveqs r3, r1, lsl #9
do_it eq, te
COND(mov,s,eq) r3, r1, lsl #9
teqeq r0, r1
orrne r0, r0, #0x00400000 @ quiet NAN
RET
@ -278,9 +305,11 @@ ARM_FUNC_START floatsisf
ARM_FUNC_ALIAS aeabi_i2f floatsisf
ands r3, r0, #0x80000000
do_it mi
rsbmi r0, r0, #0
1: movs ip, r0
do_it eq
RETc(eq)
@ Add initial exponent to sign
@ -302,7 +331,10 @@ ARM_FUNC_ALIAS aeabi_ul2f floatundisf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
do_itt eq
mvfeqs f0, #0.0
#else
do_it eq
#endif
RETc(eq)
@ -314,14 +346,22 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
do_it eq, t
mvfeqs f0, #0.0
#else
do_it eq
#endif
RETc(eq)
ands r3, ah, #0x80000000 @ sign bit in r3
bpl 1f
#if defined(__thumb2__)
negs al, al
sbc ah, ah, ah, lsl #1
#else
rsbs al, al, #0
rsc ah, ah, #0
#endif
1:
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
@ For hard FPA code we want to return via the tail below so that
@ -332,12 +372,14 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
#endif
movs ip, ah
do_it eq, tt
moveq ip, al
moveq ah, al
moveq al, #0
@ Add initial exponent to sign
orr r3, r3, #((127 + 23 + 32) << 23)
do_it eq
subeq r3, r3, #(32 << 23)
2: sub r3, r3, #(1 << 23)
@ -345,15 +387,19 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
mov r2, #23
cmp ip, #(1 << 16)
do_it hs, t
movhs ip, ip, lsr #16
subhs r2, r2, #16
cmp ip, #(1 << 8)
do_it hs, t
movhs ip, ip, lsr #8
subhs r2, r2, #8
cmp ip, #(1 << 4)
do_it hs, t
movhs ip, ip, lsr #4
subhs r2, r2, #4
cmp ip, #(1 << 2)
do_it hs, e
subhs r2, r2, #2
sublo r2, r2, ip, lsr #1
subs r2, r2, ip, lsr #3
@ -368,19 +414,21 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
sub r3, r3, r2, lsl #23
blt 3f
add r3, r3, ah, lsl r2
mov ip, al, lsl r2
shiftop add r3 r3 ah lsl r2 ip
shift1 lsl, ip, al, r2
rsb r2, r2, #32
cmp ip, #0x80000000
adc r0, r3, al, lsr r2
shiftop adc r0 r3 al lsr r2 r2
do_it eq
biceq r0, r0, #1
RET
3: add r2, r2, #32
mov ip, ah, lsl r2
shift1 lsl, ip, ah, r2
rsb r2, r2, #32
orrs al, al, ip, lsl #1
adc r0, r3, ah, lsr r2
shiftop adc r0 r3 ah lsr r2 r2
do_it eq
biceq r0, r0, ip, lsr #31
RET
@ -408,7 +456,8 @@ ARM_FUNC_ALIAS aeabi_fmul mulsf3
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
ands r2, ip, r0, lsr #23
andnes r3, ip, r1, lsr #23
do_it ne, tt
COND(and,s,ne) r3, ip, r1, lsr #23
teqne r2, ip
teqne r3, ip
beq LSYM(Lml_s)
@ -424,7 +473,8 @@ LSYM(Lml_x):
@ If power of two, branch to a separate path.
@ Make up for final alignment.
movs r0, r0, lsl #9
movnes r1, r1, lsl #9
do_it ne
COND(mov,s,ne) r1, r1, lsl #9
beq LSYM(Lml_1)
mov r3, #0x08000000
orr r0, r3, r0, lsr #5
@ -436,7 +486,7 @@ LSYM(Lml_x):
and r3, ip, #0x80000000
@ Well, no way to make it shorter without the umull instruction.
stmfd sp!, {r3, r4, r5}
do_push {r3, r4, r5}
mov r4, r0, lsr #16
mov r5, r1, lsr #16
bic r0, r0, r4, lsl #16
@ -447,7 +497,7 @@ LSYM(Lml_x):
mla r0, r4, r1, r0
adds r3, r3, r0, lsl #16
adc r1, ip, r0, lsr #16
ldmfd sp!, {r0, r4, r5}
do_pop {r0, r4, r5}
#else
@ -461,6 +511,7 @@ LSYM(Lml_x):
@ Adjust result upon the MSB position.
cmp r1, #(1 << 23)
do_it cc, tt
movcc r1, r1, lsl #1
orrcc r1, r1, r3, lsr #31
movcc r3, r3, lsl #1
@ -476,6 +527,7 @@ LSYM(Lml_x):
@ Round the result, merge final exponent.
cmp r3, #0x80000000
adc r0, r0, r2, lsl #23
do_it eq
biceq r0, r0, #1
RET
@ -483,11 +535,13 @@ LSYM(Lml_x):
LSYM(Lml_1):
teq r0, #0
and ip, ip, #0x80000000
do_it eq
moveq r1, r1, lsl #9
orr r0, ip, r0, lsr #9
orr r0, r0, r1, lsr #9
subs r2, r2, #127
rsbgts r3, r2, #255
do_it gt, tt
COND(rsb,s,gt) r3, r2, #255
orrgt r0, r0, r2, lsl #23
RETc(gt)
@ -502,18 +556,20 @@ LSYM(Lml_u):
@ Check if denormalized result is possible, otherwise return signed 0.
cmn r2, #(24 + 1)
do_it le, t
bicle r0, r0, #0x7fffffff
RETc(le)
@ Shift value right, round, etc.
rsb r2, r2, #0
movs r1, r0, lsl #1
mov r1, r1, lsr r2
shift1 lsr, r1, r1, r2
rsb r2, r2, #32
mov ip, r0, lsl r2
shift1 lsl, ip, r0, r2
movs r0, r1, rrx
adc r0, r0, #0
orrs r3, r3, ip, lsl #1
do_it eq
biceq r0, r0, ip, lsr #31
RET
@ -522,14 +578,16 @@ LSYM(Lml_u):
LSYM(Lml_d):
teq r2, #0
and ip, r0, #0x80000000
1: moveq r0, r0, lsl #1
1: do_it eq, tt
moveq r0, r0, lsl #1
tsteq r0, #0x00800000
subeq r2, r2, #1
beq 1b
orr r0, r0, ip
teq r3, #0
and ip, r1, #0x80000000
2: moveq r1, r1, lsl #1
2: do_it eq, tt
moveq r1, r1, lsl #1
tsteq r1, #0x00800000
subeq r3, r3, #1
beq 2b
@ -540,12 +598,14 @@ LSYM(Lml_s):
@ Isolate the INF and NAN cases away
and r3, ip, r1, lsr #23
teq r2, ip
do_it ne
teqne r3, ip
beq 1f
@ Here, one or more arguments are either denormalized or zero.
bics ip, r0, #0x80000000
bicnes ip, r1, #0x80000000
do_it ne
COND(bic,s,ne) ip, r1, #0x80000000
bne LSYM(Lml_d)
@ Result is 0, but determine sign anyway.
@ -556,6 +616,7 @@ LSYM(Lml_z):
1: @ One or both args are INF or NAN.
teq r0, #0x0
do_it ne, ett
teqne r0, #0x80000000
moveq r0, r1
teqne r1, #0x0
@ -568,6 +629,7 @@ LSYM(Lml_z):
1: teq r3, ip
bne LSYM(Lml_i)
movs r3, r1, lsl #9
do_it ne
movne r0, r1
bne LSYM(Lml_n) @ <anything> * NAN -> NAN
@ -597,7 +659,8 @@ ARM_FUNC_ALIAS aeabi_fdiv divsf3
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
ands r2, ip, r0, lsr #23
andnes r3, ip, r1, lsr #23
do_it ne, tt
COND(and,s,ne) r3, ip, r1, lsr #23
teqne r2, ip
teqne r3, ip
beq LSYM(Ldv_s)
@ -624,25 +687,31 @@ LSYM(Ldv_x):
@ Ensure result will land to known bit position.
@ Apply exponent bias accordingly.
cmp r3, r1
do_it cc
movcc r3, r3, lsl #1
adc r2, r2, #(127 - 2)
@ The actual division loop.
mov ip, #0x00800000
1: cmp r3, r1
do_it cs, t
subcs r3, r3, r1
orrcs r0, r0, ip
cmp r3, r1, lsr #1
do_it cs, t
subcs r3, r3, r1, lsr #1
orrcs r0, r0, ip, lsr #1
cmp r3, r1, lsr #2
do_it cs, t
subcs r3, r3, r1, lsr #2
orrcs r0, r0, ip, lsr #2
cmp r3, r1, lsr #3
do_it cs, t
subcs r3, r3, r1, lsr #3
orrcs r0, r0, ip, lsr #3
movs r3, r3, lsl #4
movnes ip, ip, lsr #4
do_it ne
COND(mov,s,ne) ip, ip, lsr #4
bne 1b
@ Check exponent for under/overflow.
@ -652,6 +721,7 @@ LSYM(Ldv_x):
@ Round the result, merge final exponent.
cmp r3, r1
adc r0, r0, r2, lsl #23
do_it eq
biceq r0, r0, #1
RET
@ -660,7 +730,8 @@ LSYM(Ldv_1):
and ip, ip, #0x80000000
orr r0, ip, r0, lsr #9
adds r2, r2, #127
rsbgts r3, r2, #255
do_it gt, tt
COND(rsb,s,gt) r3, r2, #255
orrgt r0, r0, r2, lsl #23
RETc(gt)
@ -674,14 +745,16 @@ LSYM(Ldv_1):
LSYM(Ldv_d):
teq r2, #0
and ip, r0, #0x80000000
1: moveq r0, r0, lsl #1
1: do_it eq, tt
moveq r0, r0, lsl #1
tsteq r0, #0x00800000
subeq r2, r2, #1
beq 1b
orr r0, r0, ip
teq r3, #0
and ip, r1, #0x80000000
2: moveq r1, r1, lsl #1
2: do_it eq, tt
moveq r1, r1, lsl #1
tsteq r1, #0x00800000
subeq r3, r3, #1
beq 2b
@ -707,7 +780,8 @@ LSYM(Ldv_s):
b LSYM(Lml_n) @ <anything> / NAN -> NAN
2: @ If both are nonzero, we need to normalize and resume above.
bics ip, r0, #0x80000000
bicnes ip, r1, #0x80000000
do_it ne
COND(bic,s,ne) ip, r1, #0x80000000
bne LSYM(Ldv_d)
@ One or both arguments are zero.
bics r2, r0, #0x80000000
@ -759,18 +833,24 @@ ARM_FUNC_ALIAS eqsf2 cmpsf2
mov r2, r0, lsl #1
mov r3, r1, lsl #1
mvns ip, r2, asr #24
mvnnes ip, r3, asr #24
do_it ne
COND(mvn,s,ne) ip, r3, asr #24
beq 3f
@ Compare values.
@ Note that 0.0 is equal to -0.0.
2: orrs ip, r2, r3, lsr #1 @ test if both are 0, clear C flag
do_it ne
teqne r0, r1 @ if not 0 compare sign
subpls r0, r2, r3 @ if same sign compare values, set r0
do_it pl
COND(sub,s,pl) r0, r2, r3 @ if same sign compare values, set r0
@ Result:
do_it hi
movhi r0, r1, asr #31
do_it lo
mvnlo r0, r1, asr #31
do_it ne
orrne r0, r0, #1
RET
@ -806,14 +886,15 @@ ARM_FUNC_ALIAS aeabi_cfcmple aeabi_cfcmpeq
@ The status-returning routines are required to preserve all
@ registers except ip, lr, and cpsr.
6: stmfd sp!, {r0, r1, r2, r3, lr}
6: do_push {r0, r1, r2, r3, lr}
ARM_CALL cmpsf2
@ Set the Z flag correctly, and the C flag unconditionally.
cmp r0, #0
cmp r0, #0
@ Clear the C flag if the return value was -1, indicating
@ that the first operand was smaller than the second.
cmnmi r0, #0
RETLDM "r0, r1, r2, r3"
do_it mi
cmnmi r0, #0
RETLDM "r0, r1, r2, r3"
FUNC_END aeabi_cfcmple
FUNC_END aeabi_cfcmpeq
@ -823,6 +904,7 @@ ARM_FUNC_START aeabi_fcmpeq
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
do_it eq, e
moveq r0, #1 @ Equal to.
movne r0, #0 @ Less than, greater than, or unordered.
RETLDM
@ -833,6 +915,7 @@ ARM_FUNC_START aeabi_fcmplt
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
do_it cc, e
movcc r0, #1 @ Less than.
movcs r0, #0 @ Equal to, greater than, or unordered.
RETLDM
@ -843,6 +926,7 @@ ARM_FUNC_START aeabi_fcmple
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
do_it ls, e
movls r0, #1 @ Less than or equal to.
movhi r0, #0 @ Greater than or unordered.
RETLDM
@ -853,6 +937,7 @@ ARM_FUNC_START aeabi_fcmpge
str lr, [sp, #-8]!
ARM_CALL aeabi_cfrcmple
do_it ls, e
movls r0, #1 @ Operand 2 is less than or equal to operand 1.
movhi r0, #0 @ Operand 2 greater than operand 1, or unordered.
RETLDM
@ -863,6 +948,7 @@ ARM_FUNC_START aeabi_fcmpgt
str lr, [sp, #-8]!
ARM_CALL aeabi_cfrcmple
do_it cc, e
movcc r0, #1 @ Operand 2 is less than operand 1.
movcs r0, #0 @ Operand 2 is greater than or equal to operand 1,
@ or they are unordered.
@ -914,7 +1000,8 @@ ARM_FUNC_ALIAS aeabi_f2iz fixsfsi
mov r3, r0, lsl #8
orr r3, r3, #0x80000000
tst r0, #0x80000000 @ the sign bit
mov r0, r3, lsr r2
shift1 lsr, r0, r3, r2
do_it ne
rsbne r0, r0, #0
RET
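The fast path of `fixsfsi` above reinserts the implicit mantissa bit at the top of the word and shifts right by an exponent-derived amount held in r2. A sketch in C (hypothetical helper name `fix_bits`; the exponent arithmetic shown here is an assumption, since that part of the routine lies outside this hunk, and the overflow/NaN paths are omitted):

```c
#include <stdint.h>

/* Sketch of the fixsfsi fast path: r3 = (a << 8) | 0x80000000 places the
   1.f mantissa at bit 31; the lsr by the shift count then truncates
   toward zero, and the sign is applied afterwards (rsbne in the asm).  */
static int32_t fix_bits(uint32_t a)
{
    int exp = (a >> 23) & 0xff;          /* biased exponent */
    int shift = 127 + 31 - exp;          /* distance from bit 31 */
    if (shift <= 0 || shift > 31)
        return 0;                        /* outside the fast-path range */
    uint32_t m = (a << 8) | 0x80000000u; /* mantissa with implicit bit */
    int32_t r = (int32_t)(m >> shift);
    return (a & 0x80000000u) ? -r : r;   /* apply the sign bit */
}
```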
@ -926,6 +1013,7 @@ ARM_FUNC_ALIAS aeabi_f2iz fixsfsi
movs r2, r0, lsl #9
bne 4f @ r0 is NAN.
3: ands r0, r0, #0x80000000 @ the sign bit
do_it eq
moveq r0, #0x7fffffff @ the maximum signed positive si
RET
@ -954,7 +1042,7 @@ ARM_FUNC_ALIAS aeabi_f2uiz fixunssfsi
@ scale the value
mov r3, r0, lsl #8
orr r3, r3, #0x80000000
mov r0, r3, lsr r2
shift1 lsr, r0, r3, r2
RET
1: mov r0, #0


@ -1,5 +1,6 @@
;; ??? This file needs auditing for thumb2
;; Patterns for the Intel Wireless MMX technology architecture.
;; Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
;; Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Red Hat.
;; This file is part of GCC.


@ -1,7 +1,7 @@
@ libgcc routines for ARM cpu.
@ Division routines, written by Richard Earnshaw, (rearnsha@armltd.co.uk)
/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005
/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005, 2007
Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it
@ -69,31 +69,30 @@ Boston, MA 02110-1301, USA. */
/* Function end macros. Variants for interworking. */
@ This selects the minimum architecture level required.
#define __ARM_ARCH__ 3
#if defined(__ARM_ARCH_3M__) || defined(__ARM_ARCH_4__) \
|| defined(__ARM_ARCH_4T__)
/* We use __ARM_ARCH__ set to 4 here, but in reality it's any processor with
long multiply instructions. That includes v3M. */
# undef __ARM_ARCH__
# define __ARM_ARCH__ 4
#endif
#if defined(__ARM_ARCH_5__) || defined(__ARM_ARCH_5T__) \
|| defined(__ARM_ARCH_5E__) || defined(__ARM_ARCH_5TE__) \
|| defined(__ARM_ARCH_5TEJ__)
# undef __ARM_ARCH__
# define __ARM_ARCH__ 5
#endif
#if defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) \
|| defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) \
|| defined(__ARM_ARCH_6ZK__)
# undef __ARM_ARCH__
|| defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__)
# define __ARM_ARCH__ 6
#endif
#if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) \
|| defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__)
# define __ARM_ARCH__ 7
#endif
#ifndef __ARM_ARCH__
#error Unable to determine architecture.
#endif
@ -193,7 +192,11 @@ LSYM(Lend_fde):
.ifc "\regs",""
ldr\cond lr, [sp], #8

.else
# if defined(__thumb2__)
pop\cond {\regs, lr}
# else
ldm\cond\dirn sp!, {\regs, lr}
# endif
.endif
.ifnc "\unwind", ""
/* Mark LR as restored. */
@ -201,14 +204,51 @@ LSYM(Lend_fde):
.endif
bx\cond lr
#else
/* Caller is responsible for providing IT instruction. */
.ifc "\regs",""
ldr\cond pc, [sp], #8
.else
ldm\cond\dirn sp!, {\regs, pc}
# if defined(__thumb2__)
pop\cond {\regs, pc}
# else
ldm\cond\dirn sp!, {\regs, pc}
# endif
.endif
#endif
.endm
/* The Unified assembly syntax allows the same code to be assembled for both
ARM and Thumb-2. However this is only supported by recent gas, so define
a set of macros to allow ARM code on older assemblers. */
#if defined(__thumb2__)
.macro do_it cond, suffix=""
it\suffix \cond
.endm
.macro shift1 op, arg0, arg1, arg2
\op \arg0, \arg1, \arg2
.endm
#define do_push push
#define do_pop pop
#define COND(op1, op2, cond) op1 ## op2 ## cond
/* Perform an arithmetic operation with a variable shift operand. This
requires two instructions and a scratch register on Thumb-2. */
.macro shiftop name, dest, src1, src2, shiftop, shiftreg, tmp
\shiftop \tmp, \src2, \shiftreg
\name \dest, \src1, \tmp
.endm
#else
.macro do_it cond, suffix=""
.endm
.macro shift1 op, arg0, arg1, arg2
mov \arg0, \arg1, \op \arg2
.endm
#define do_push stmfd sp!,
#define do_pop ldmfd sp!,
#define COND(op1, op2, cond) op1 ## cond ## op2
.macro shiftop name, dest, src1, src2, shiftop, shiftreg, tmp
\name \dest, \src1, \src2, \shiftop \shiftreg
.endm
#endif
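These `.S` sources are run through the C preprocessor, so the `COND()` token-pasting can be demonstrated directly in C. A sketch (the `_T2`/`_ARM` suffixed names are hypothetical, added only to show both definitions side by side):

```c
#include <string.h>

/* Stringification probes, so the pasted opcode can be inspected. */
#define STR_(x) #x
#define STR(x) STR_(x)

/* Thumb-2 / unified syntax: the flag suffix is pasted before the
   condition, e.g. COND(sub,s,pl) -> subspl.  */
#define COND_T2(op1, op2, cond) op1 ## op2 ## cond

/* ARM / divided syntax: the condition is pasted before the flag
   suffix, e.g. COND(sub,s,pl) -> subpls.  */
#define COND_ARM(op1, op2, cond) op1 ## cond ## op2
```

So the single source line `COND(sub,s,pl) r0, r2, r3` in the compare routine assembles as `subspl` under unified syntax but `subpls` under divided syntax, which is the whole point of the macro layer.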
.macro ARM_LDIV0 name
str lr, [sp, #-8]!
@ -260,11 +300,17 @@ SYM (\name):
#ifdef __thumb__
#define THUMB_FUNC .thumb_func
#define THUMB_CODE .force_thumb
# if defined(__thumb2__)
#define THUMB_SYNTAX .syntax divided
# else
#define THUMB_SYNTAX
# endif
#else
#define THUMB_FUNC
#define THUMB_CODE
#define THUMB_SYNTAX
#endif
.macro FUNC_START name
.text
.globl SYM (__\name)
@ -272,13 +318,27 @@ SYM (\name):
.align 0
THUMB_CODE
THUMB_FUNC
THUMB_SYNTAX
SYM (__\name):
.endm
/* Special function that will always be coded in ARM assembly, even if
in Thumb-only compilation. */
#if defined(__INTERWORKING_STUBS__)
#if defined(__thumb2__)
/* For Thumb-2 we build everything in thumb mode. */
.macro ARM_FUNC_START name
FUNC_START \name
.syntax unified
.endm
#define EQUIV .thumb_set
.macro ARM_CALL name
bl __\name
.endm
#elif defined(__INTERWORKING_STUBS__)
.macro ARM_FUNC_START name
FUNC_START \name
bx pc
@ -294,7 +354,9 @@ _L__\name:
.macro ARM_CALL name
bl _L__\name
.endm
#else
#else /* !(__INTERWORKING_STUBS__ || __thumb2__) */
.macro ARM_FUNC_START name
.text
.globl SYM (__\name)
@ -307,6 +369,7 @@ SYM (__\name):
.macro ARM_CALL name
bl __\name
.endm
#endif
.macro FUNC_ALIAS new old
@ -1183,6 +1246,10 @@ LSYM(Lover12):
#endif /* L_call_via_rX */
/* Don't bother with the old interworking routines for Thumb-2. */
/* ??? Maybe only omit these on v7m. */
#ifndef __thumb2__
#if defined L_interwork_call_via_rX
/* These labels & instructions are used by the Arm/Thumb interworking code,
@ -1307,6 +1374,7 @@ LSYM(Lchange_\register):
SIZE (_interwork_call_via_lr)
#endif /* L_interwork_call_via_rX */
#endif /* !__thumb2__ */
#endif /* Arch supports thumb. */
#ifndef __symbian__


@ -1,5 +1,5 @@
/* Support functions for the unwinder.
Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Paul Brook
This file is free software; you can redistribute it and/or modify it
@ -49,7 +49,14 @@ ARM_FUNC_START restore_core_regs
this. */
add r1, r0, #52
ldmia r1, {r3, r4, r5} /* {sp, lr, pc}. */
#ifdef __INTERWORKING__
#if defined(__thumb2__)
/* Thumb-2 doesn't allow sp in a load-multiple instruction, so push
the target address onto the target stack. This is safe as
we're always returning to somewhere further up the call stack. */
mov ip, r3
mov lr, r4
str r5, [ip, #-4]!
#elif defined(__INTERWORKING__)
/* Restore pc into ip. */
mov r2, r5
stmfd sp!, {r2, r3, r4}
@ -58,8 +65,12 @@ ARM_FUNC_START restore_core_regs
#endif
/* Don't bother restoring ip. */
ldmia r0, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp}
#if defined(__thumb2__)
/* Pop the return address off the target stack. */
mov sp, ip
pop {pc}
#elif defined(__INTERWORKING__)
/* Pop the three registers we pushed earlier. */
#ifdef __INTERWORKING__
ldmfd sp, {ip, sp, lr}
bx ip
#else
@ -114,7 +125,13 @@ ARM_FUNC_START gnu_Unwind_Save_VFP_D_16_to_31
ARM_FUNC_START \name
/* Create a phase2_vrs structure. */
/* Split reg push in two to ensure the correct value for sp. */
#if defined(__thumb2__)
mov ip, sp
push {lr} /* PC is ignored. */
push {ip, lr} /* Push original SP and LR. */
#else
stmfd sp!, {sp, lr, pc}
#endif
stmfd sp!, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
/* Demand-save flags, plus an extra word for alignment. */
@ -123,7 +140,7 @@ ARM_FUNC_START gnu_Unwind_Save_VFP_D_16_to_31
/* Point r1 at the block. Pass r[0..nargs) unchanged. */
add r\nargs, sp, #4
#if defined(__thumb__)
#if defined(__thumb__) && !defined(__thumb2__)
/* Switch back to thumb mode to avoid interworking hassle. */
adr ip, .L1_\name
orr ip, ip, #1


@ -1,5 +1,5 @@
;; Predicate definitions for ARM and Thumb
;; Copyright (C) 2004 Free Software Foundation, Inc.
;; Copyright (C) 2004, 2007 Free Software Foundation, Inc.
;; Contributed by ARM Ltd.
;; This file is part of GCC.
@ -39,6 +39,16 @@
return REGNO (op) < FIRST_PSEUDO_REGISTER;
})
;; A low register.
(define_predicate "low_register_operand"
(and (match_code "reg")
(match_test "REGNO (op) <= LAST_LO_REGNUM")))
;; A low register or const_int.
(define_predicate "low_reg_or_int_operand"
(ior (match_code "const_int")
(match_operand 0 "low_register_operand")))
;; Any core register, or any pseudo. */
(define_predicate "arm_general_register_operand"
(match_code "reg,subreg")
@ -174,6 +184,10 @@
(match_code "ashift,ashiftrt,lshiftrt,rotatert"))
(match_test "mode == GET_MODE (op)")))
;; True for operators that have 16-bit thumb variants. */
(define_special_predicate "thumb_16bit_operator"
(match_code "plus,minus,and,ior,xor"))
;; True for EQ & NE
(define_special_predicate "equality_operator"
(match_code "eq,ne"))
@ -399,13 +413,13 @@
;; Thumb predicates
;;
(define_predicate "thumb_cmp_operand"
(define_predicate "thumb1_cmp_operand"
(ior (and (match_code "reg,subreg")
(match_operand 0 "s_register_operand"))
(and (match_code "const_int")
(match_test "((unsigned HOST_WIDE_INT) INTVAL (op)) < 256"))))
(define_predicate "thumb_cmpneg_operand"
(define_predicate "thumb1_cmpneg_operand"
(and (match_code "const_int")
(match_test "INTVAL (op) < 0 && INTVAL (op) > -256")))


@ -10,7 +10,8 @@ MD_INCLUDES= $(srcdir)/config/arm/arm-tune.md \
$(srcdir)/config/arm/cirrus.md \
$(srcdir)/config/arm/fpa.md \
$(srcdir)/config/arm/iwmmxt.md \
$(srcdir)/config/arm/vfp.md
$(srcdir)/config/arm/vfp.md \
$(srcdir)/config/arm/thumb2.md
s-config s-conditions s-flags s-codes s-constants s-emit s-recog s-preds \
s-opinit s-extract s-peep s-attr s-attrtab s-output: $(MD_INCLUDES)


@ -11,6 +11,16 @@ MULTILIB_DIRNAMES = arm thumb
MULTILIB_EXCEPTIONS =
MULTILIB_MATCHES =
#MULTILIB_OPTIONS += march=armv7
#MULTILIB_DIRNAMES += thumb2
#MULTILIB_EXCEPTIONS += march=armv7* marm/*march=armv7*
#MULTILIB_MATCHES += march?armv7=march?armv7-a
#MULTILIB_MATCHES += march?armv7=march?armv7-r
#MULTILIB_MATCHES += march?armv7=march?armv7-m
#MULTILIB_MATCHES += march?armv7=mcpu?cortex-a8
#MULTILIB_MATCHES += march?armv7=mcpu?cortex-r4
#MULTILIB_MATCHES += march?armv7=mcpu?cortex-m3
# MULTILIB_OPTIONS += mcpu=ep9312
# MULTILIB_DIRNAMES += ep9312
# MULTILIB_EXCEPTIONS += *mthumb/*mcpu=ep9312*

gcc/config/arm/thumb2.md: new file (1188 lines; diff not shown because it is too large).


@ -1,5 +1,5 @@
;; ARM VFP coprocessor Machine Description
;; Copyright (C) 2003, 2005 Free Software Foundation, Inc.
;; Copyright (C) 2003, 2005, 2006, 2007 Free Software Foundation, Inc.
;; Written by CodeSourcery, LLC.
;;
;; This file is part of GCC.
@ -121,25 +121,77 @@
;; ??? For now do not allow loading constants into vfp regs. This causes
;; problems because small constants get converted into adds.
(define_insn "*arm_movsi_vfp"
[(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r ,m,*w,r,*w,*w, *Uv")
(match_operand:SI 1 "general_operand" "rI,K,mi,r,r,*w,*w,*Uvi,*w"))]
[(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r ,m,*w,r,*w,*w, *Uv")
(match_operand:SI 1 "general_operand" "rI,K,N,mi,r,r,*w,*w,*Uvi,*w"))]
"TARGET_ARM && TARGET_VFP && TARGET_HARD_FLOAT
&& ( s_register_operand (operands[0], SImode)
|| s_register_operand (operands[1], SImode))"
"@
mov%?\\t%0, %1
mvn%?\\t%0, #%B1
ldr%?\\t%0, %1
str%?\\t%1, %0
fmsr%?\\t%0, %1\\t%@ int
fmrs%?\\t%0, %1\\t%@ int
fcpys%?\\t%0, %1\\t%@ int
flds%?\\t%0, %1\\t%@ int
fsts%?\\t%1, %0\\t%@ int"
"*
switch (which_alternative)
{
case 0:
return \"mov%?\\t%0, %1\";
case 1:
return \"mvn%?\\t%0, #%B1\";
case 2:
return \"movw%?\\t%0, %1\";
case 3:
return \"ldr%?\\t%0, %1\";
case 4:
return \"str%?\\t%1, %0\";
case 5:
return \"fmsr%?\\t%0, %1\\t%@ int\";
case 6:
return \"fmrs%?\\t%0, %1\\t%@ int\";
case 7:
return \"fcpys%?\\t%0, %1\\t%@ int\";
case 8: case 9:
return output_move_vfp (operands);
default:
gcc_unreachable ();
}
"
[(set_attr "predicable" "yes")
(set_attr "type" "*,*,load1,store1,r_2_f,f_2_r,ffarith,f_loads,f_stores")
(set_attr "pool_range" "*,*,4096,*,*,*,*,1020,*")
(set_attr "neg_pool_range" "*,*,4084,*,*,*,*,1008,*")]
(set_attr "type" "*,*,*,load1,store1,r_2_f,f_2_r,ffarith,f_loads,f_stores")
(set_attr "pool_range" "*,*,*,4096,*,*,*,*,1020,*")
(set_attr "neg_pool_range" "*,*,*,4084,*,*,*,*,1008,*")]
)
(define_insn "*thumb2_movsi_vfp"
[(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r,m,*w,r,*w,*w, *Uv")
(match_operand:SI 1 "general_operand" "rI,K,N,mi,r,r,*w,*w,*Uvi,*w"))]
"TARGET_THUMB2 && TARGET_VFP && TARGET_HARD_FLOAT
&& ( s_register_operand (operands[0], SImode)
|| s_register_operand (operands[1], SImode))"
"*
switch (which_alternative)
{
case 0:
return \"mov%?\\t%0, %1\";
case 1:
return \"mvn%?\\t%0, #%B1\";
case 2:
return \"movw%?\\t%0, %1\";
case 3:
return \"ldr%?\\t%0, %1\";
case 4:
return \"str%?\\t%1, %0\";
case 5:
return \"fmsr%?\\t%0, %1\\t%@ int\";
case 6:
return \"fmrs%?\\t%0, %1\\t%@ int\";
case 7:
return \"fcpys%?\\t%0, %1\\t%@ int\";
case 8: case 9:
return output_move_vfp (operands);
default:
gcc_unreachable ();
}
"
[(set_attr "predicable" "yes")
(set_attr "type" "*,*,*,load1,store1,r_2_f,f_2_r,ffarith,f_load,f_store")
(set_attr "pool_range" "*,*,*,4096,*,*,*,*,1020,*")
(set_attr "neg_pool_range" "*,*,*, 0,*,*,*,*,1008,*")]
)
@ -165,10 +217,8 @@
return \"fmrrd%?\\t%Q0, %R0, %P1\\t%@ int\";
case 5:
return \"fcpyd%?\\t%P0, %P1\\t%@ int\";
case 6:
return \"fldd%?\\t%P0, %1\\t%@ int\";
case 7:
return \"fstd%?\\t%P1, %0\\t%@ int\";
case 6: case 7:
return output_move_vfp (operands);
default:
gcc_unreachable ();
}
@ -179,6 +229,33 @@
(set_attr "neg_pool_range" "*,1008,*,*,*,*,1008,*")]
)
(define_insn "*thumb2_movdi_vfp"
[(set (match_operand:DI 0 "nonimmediate_di_operand" "=r, r,m,w,r,w,w, Uv")
(match_operand:DI 1 "di_operand" "rIK,mi,r,r,w,w,Uvi,w"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
"*
switch (which_alternative)
{
case 0: case 1: case 2:
return (output_move_double (operands));
case 3:
return \"fmdrr%?\\t%P0, %Q1, %R1\\t%@ int\";
case 4:
return \"fmrrd%?\\t%Q0, %R0, %P1\\t%@ int\";
case 5:
return \"fcpyd%?\\t%P0, %P1\\t%@ int\";
case 6: case 7:
return output_move_vfp (operands);
default:
abort ();
}
"
[(set_attr "type" "*,load2,store2,r_2_f,f_2_r,ffarith,f_load,f_store")
(set_attr "length" "8,8,8,4,4,4,4,4")
(set_attr "pool_range" "*,4096,*,*,*,*,1020,*")
(set_attr "neg_pool_range" "*, 0,*,*,*,*,1008,*")]
)
;; SFmode moves
;; Disparage the w<->r cases because reloading an invalid address is
@ -190,21 +267,66 @@
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP
&& ( s_register_operand (operands[0], SFmode)
|| s_register_operand (operands[1], SFmode))"
"@
fmsr%?\\t%0, %1
fmrs%?\\t%0, %1
flds%?\\t%0, %1
fsts%?\\t%1, %0
ldr%?\\t%0, %1\\t%@ float
str%?\\t%1, %0\\t%@ float
fcpys%?\\t%0, %1
mov%?\\t%0, %1\\t%@ float"
"*
switch (which_alternative)
{
case 0:
return \"fmsr%?\\t%0, %1\";
case 1:
return \"fmrs%?\\t%0, %1\";
case 2: case 3:
return output_move_vfp (operands);
case 4:
return \"ldr%?\\t%0, %1\\t%@ float\";
case 5:
return \"str%?\\t%1, %0\\t%@ float\";
case 6:
return \"fcpys%?\\t%0, %1\";
case 7:
return \"mov%?\\t%0, %1\\t%@ float\";
default:
gcc_unreachable ();
}
"
[(set_attr "predicable" "yes")
(set_attr "type" "r_2_f,f_2_r,ffarith,*,f_loads,f_stores,load1,store1")
(set_attr "pool_range" "*,*,1020,*,4096,*,*,*")
(set_attr "neg_pool_range" "*,*,1008,*,4080,*,*,*")]
)
(define_insn "*thumb2_movsf_vfp"
[(set (match_operand:SF 0 "nonimmediate_operand" "=w,?r,w ,Uv,r ,m,w,r")
(match_operand:SF 1 "general_operand" " ?r,w,UvE,w, mE,r,w,r"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP
&& ( s_register_operand (operands[0], SFmode)
|| s_register_operand (operands[1], SFmode))"
"*
switch (which_alternative)
{
case 0:
return \"fmsr%?\\t%0, %1\";
case 1:
return \"fmrs%?\\t%0, %1\";
case 2: case 3:
return output_move_vfp (operands);
case 4:
return \"ldr%?\\t%0, %1\\t%@ float\";
case 5:
return \"str%?\\t%1, %0\\t%@ float\";
case 6:
return \"fcpys%?\\t%0, %1\";
case 7:
return \"mov%?\\t%0, %1\\t%@ float\";
default:
gcc_unreachable ();
}
"
[(set_attr "predicable" "yes")
(set_attr "type" "r_2_f,f_2_r,ffarith,*,f_load,f_store,load1,store1")
(set_attr "pool_range" "*,*,1020,*,4092,*,*,*")
(set_attr "neg_pool_range" "*,*,1008,*,0,*,*,*")]
)
;; DFmode moves
@ -224,10 +346,8 @@
return \"fmrrd%?\\t%Q0, %R0, %P1\";
case 2: case 3:
return output_move_double (operands);
case 4:
return \"fldd%?\\t%P0, %1\";
case 5:
return \"fstd%?\\t%P1, %0\";
case 4: case 5:
return output_move_vfp (operands);
case 6:
return \"fcpyd%?\\t%P0, %P1\";
case 7:
@ -243,6 +363,35 @@
(set_attr "neg_pool_range" "*,*,1008,*,1008,*,*,*")]
)
(define_insn "*thumb2_movdf_vfp"
[(set (match_operand:DF 0 "nonimmediate_soft_df_operand" "=w,?r,r, m,w ,Uv,w,r")
(match_operand:DF 1 "soft_df_operand" " ?r,w,mF,r,UvF,w, w,r"))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
"*
{
switch (which_alternative)
{
case 0:
return \"fmdrr%?\\t%P0, %Q1, %R1\";
case 1:
return \"fmrrd%?\\t%Q0, %R0, %P1\";
case 2: case 3: case 7:
return output_move_double (operands);
case 4: case 5:
return output_move_vfp (operands);
case 6:
return \"fcpyd%?\\t%P0, %P1\";
default:
abort ();
}
}
"
[(set_attr "type" "r_2_f,f_2_r,ffarith,*,load2,store2,f_load,f_store")
(set_attr "length" "4,4,8,8,4,4,4,8")
(set_attr "pool_range" "*,*,4096,*,1020,*,*,*")
(set_attr "neg_pool_range" "*,*,0,*,1008,*,*,*")]
)
;; Conditional move patterns
@ -269,6 +418,29 @@
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
(define_insn "*thumb2_movsfcc_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
(if_then_else:SF
(match_operator 3 "arm_comparison_operator"
[(match_operand 4 "cc_register" "") (const_int 0)])
(match_operand:SF 1 "s_register_operand" "0,w,w,0,?r,?r,0,w,w")
(match_operand:SF 2 "s_register_operand" "w,0,w,?r,0,?r,w,0,w")))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
"@
it\\t%D3\;fcpys%D3\\t%0, %2
it\\t%d3\;fcpys%d3\\t%0, %1
ite\\t%D3\;fcpys%D3\\t%0, %2\;fcpys%d3\\t%0, %1
it\\t%D3\;fmsr%D3\\t%0, %2
it\\t%d3\;fmsr%d3\\t%0, %1
ite\\t%D3\;fmsr%D3\\t%0, %2\;fmsr%d3\\t%0, %1
it\\t%D3\;fmrs%D3\\t%0, %2
it\\t%d3\;fmrs%d3\\t%0, %1
ite\\t%D3\;fmrs%D3\\t%0, %2\;fmrs%d3\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "6,6,10,6,6,10,6,6,10")
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
(define_insn "*movdfcc_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
(if_then_else:DF
@ -292,13 +464,36 @@
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
(define_insn "*thumb2_movdfcc_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
(if_then_else:DF
(match_operator 3 "arm_comparison_operator"
[(match_operand 4 "cc_register" "") (const_int 0)])
(match_operand:DF 1 "s_register_operand" "0,w,w,0,?r,?r,0,w,w")
(match_operand:DF 2 "s_register_operand" "w,0,w,?r,0,?r,w,0,w")))]
"TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
"@
it\\t%D3\;fcpyd%D3\\t%P0, %P2
it\\t%d3\;fcpyd%d3\\t%P0, %P1
ite\\t%D3\;fcpyd%D3\\t%P0, %P2\;fcpyd%d3\\t%P0, %P1
it\\t%D3\;fmdrr%D3\\t%P0, %Q2, %R2
it\\t%d3\;fmdrr%d3\\t%P0, %Q1, %R1
ite\\t%D3\;fmdrr%D3\\t%P0, %Q2, %R2\;fmdrr%d3\\t%P0, %Q1, %R1
it\\t%D3\;fmrrd%D3\\t%Q0, %R0, %P2
it\\t%d3\;fmrrd%d3\\t%Q0, %R0, %P1
ite\\t%D3\;fmrrd%D3\\t%Q0, %R0, %P2\;fmrrd%d3\\t%Q0, %R0, %P1"
[(set_attr "conds" "use")
(set_attr "length" "6,6,10,6,6,10,6,6,10")
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
;; Sign manipulation functions
(define_insn "*abssf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(abs:SF (match_operand:SF 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fabss%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "ffarith")]
@ -307,7 +502,7 @@
(define_insn "*absdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(abs:DF (match_operand:DF 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fabsd%?\\t%P0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "ffarith")]
@ -316,7 +511,7 @@
(define_insn "*negsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w,?r")
(neg:SF (match_operand:SF 1 "s_register_operand" "w,r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fnegs%?\\t%0, %1
eor%?\\t%0, %1, #-2147483648"
@ -327,12 +522,12 @@
(define_insn_and_split "*negdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w,?r,?r")
(neg:DF (match_operand:DF 1 "s_register_operand" "w,0,r")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fnegd%?\\t%P0, %P1
#
#"
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP && reload_completed
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP && reload_completed
&& arm_general_register_operand (operands[0], DFmode)"
[(set (match_dup 0) (match_dup 1))]
"
@ -377,7 +572,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=w")
(plus:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fadds%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -387,7 +582,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=w")
(plus:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"faddd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -398,7 +593,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=w")
(minus:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsubs%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -408,7 +603,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=w")
(minus:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsubd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -421,7 +616,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(div:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fdivs%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivs")]
@ -431,7 +626,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(div:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fdivd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivd")]
@ -444,7 +639,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(mult:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmuls%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -454,7 +649,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(mult:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmuld%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -465,7 +660,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(mult:SF (neg:SF (match_operand:SF 1 "s_register_operand" "w"))
(match_operand:SF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmuls%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -475,7 +670,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(mult:DF (neg:DF (match_operand:DF 1 "s_register_operand" "w"))
(match_operand:DF 2 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmuld%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -490,7 +685,7 @@
(plus:SF (mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmacs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -501,7 +696,7 @@
(plus:DF (mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmacd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -513,7 +708,7 @@
(minus:SF (mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmscs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -524,7 +719,7 @@
(minus:DF (mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmscd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -536,7 +731,7 @@
(minus:SF (match_operand:SF 1 "s_register_operand" "0")
(mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmacs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -547,7 +742,7 @@
(minus:DF (match_operand:DF 1 "s_register_operand" "0")
(mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmacd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -561,7 +756,7 @@
(neg:SF (match_operand:SF 2 "s_register_operand" "w"))
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmscs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@ -573,7 +768,7 @@
(neg:DF (match_operand:DF 2 "s_register_operand" "w"))
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmscd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@ -585,7 +780,7 @@
(define_insn "*extendsfdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(float_extend:DF (match_operand:SF 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fcvtds%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -594,7 +789,7 @@
(define_insn "*truncdfsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(float_truncate:SF (match_operand:DF 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fcvtsd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -603,7 +798,7 @@
(define_insn "*truncsisf2_vfp"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftosizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -612,7 +807,7 @@
(define_insn "*truncsidf2_vfp"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftosizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -622,7 +817,7 @@
(define_insn "fixuns_truncsfsi2"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(unsigned_fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftouizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -631,7 +826,7 @@
(define_insn "fixuns_truncdfsi2"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(unsigned_fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "w"))))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftouizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -641,7 +836,7 @@
(define_insn "*floatsisf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(float:SF (match_operand:SI 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -650,7 +845,7 @@
(define_insn "*floatsidf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(float:DF (match_operand:SI 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -660,7 +855,7 @@
(define_insn "floatunssisf2"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(unsigned_float:SF (match_operand:SI 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fuitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -669,7 +864,7 @@
(define_insn "floatunssidf2"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(unsigned_float:DF (match_operand:SI 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fuitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@ -681,7 +876,7 @@
(define_insn "*sqrtsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "w")))]
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsqrts%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivs")]
@ -690,7 +885,7 @@
(define_insn "*sqrtdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(sqrt:DF (match_operand:DF 1 "s_register_operand" "w")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsqrtd%?\\t%P0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivd")]
@@ -702,7 +897,7 @@
(define_insn "*movcc_vfp"
[(set (reg CC_REGNUM)
(reg VFPCC_REGNUM))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmstat%?"
[(set_attr "conds" "set")
(set_attr "type" "f_flag")]
@@ -712,9 +907,9 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "w")
(match_operand:SF 1 "vfp_compare_operand" "wG")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_dup 0)
(match_dup 1)))
@@ -727,9 +922,9 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "w")
(match_operand:SF 1 "vfp_compare_operand" "wG")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_dup 0)
(match_dup 1)))
@@ -742,9 +937,9 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "w")
(match_operand:DF 1 "vfp_compare_operand" "wG")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_dup 0)
(match_dup 1)))
@@ -757,9 +952,9 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "w")
(match_operand:DF 1 "vfp_compare_operand" "wG")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_dup 0)
(match_dup 1)))
@@ -775,7 +970,7 @@
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "w,w")
(match_operand:SF 1 "vfp_compare_operand" "w,G")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmps%?\\t%0, %1
fcmpzs%?\\t%0"
@@ -787,7 +982,7 @@
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "w,w")
(match_operand:SF 1 "vfp_compare_operand" "w,G")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmpes%?\\t%0, %1
fcmpezs%?\\t%0"
@@ -799,7 +994,7 @@
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "w,w")
(match_operand:DF 1 "vfp_compare_operand" "w,G")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmpd%?\\t%P0, %P1
fcmpzd%?\\t%P0"
@@ -811,7 +1006,7 @@
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "w,w")
(match_operand:DF 1 "vfp_compare_operand" "w,G")))]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmped%?\\t%P0, %P1
fcmpezd%?\\t%P0"
@@ -827,7 +1022,7 @@
[(set (match_operand:BLK 0 "memory_operand" "=m")
(unspec:BLK [(match_operand:DF 1 "s_register_operand" "w")]
UNSPEC_PUSH_MULT))])]
-"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"* return vfp_output_fstmd (operands);"
[(set_attr "type" "f_stored")]
)


@@ -1,5 +1,5 @@
@c Copyright (C) 1988, 1989, 1992, 1993, 1994, 1996, 1998, 1999, 2000,
-@c 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+@c 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.
@@ -1965,6 +1965,9 @@ void f () __attribute__ ((interrupt ("IRQ")));
Permissible values for this parameter are: IRQ, FIQ, SWI, ABORT and UNDEF@.
+On ARMv7-M the interrupt type is ignored, and the attribute means the function
+may be called with a word aligned stack pointer.
@item interrupt_handler
@cindex interrupt handler functions on the Blackfin, m68k, H8/300 and SH processors
Use this attribute on the Blackfin, m68k, H8/300, H8/300H, H8S, and SH to


@@ -1,5 +1,6 @@
@c Copyright (C) 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
-@c 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+@c 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+@c Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.
@@ -7605,8 +7606,9 @@ assembly code. Permissible names are: @samp{arm2}, @samp{arm250},
@samp{arm10tdmi}, @samp{arm1020t}, @samp{arm1026ej-s},
@samp{arm10e}, @samp{arm1020e}, @samp{arm1022e},
@samp{arm1136j-s}, @samp{arm1136jf-s}, @samp{mpcore}, @samp{mpcorenovfp},
-@samp{arm1176jz-s}, @samp{arm1176jzf-s}, @samp{xscale}, @samp{iwmmxt},
-@samp{ep9312}.
+@samp{arm1156t2-s}, @samp{arm1176jz-s}, @samp{arm1176jzf-s},
+@samp{cortex-a8}, @samp{cortex-r4}, @samp{cortex-m3},
+@samp{xscale}, @samp{iwmmxt}, @samp{ep9312}.
@itemx -mtune=@var{name}
@opindex mtune
@@ -7627,7 +7629,8 @@ assembly code. This option can be used in conjunction with or instead
of the @option{-mcpu=} option. Permissible names are: @samp{armv2},
@samp{armv2a}, @samp{armv3}, @samp{armv3m}, @samp{armv4}, @samp{armv4t},
@samp{armv5}, @samp{armv5t}, @samp{armv5te}, @samp{armv6}, @samp{armv6j},
-@samp{iwmmxt}, @samp{ep9312}.
+@samp{armv6t2}, @samp{armv6z}, @samp{armv6zk}, @samp{armv7}, @samp{armv7-a},
+@samp{armv7-r}, @samp{armv7-m}, @samp{iwmmxt}, @samp{ep9312}.
@item -mfpu=@var{name}
@itemx -mfpe=@var{number}
@@ -7745,8 +7748,11 @@ and has length @code{((pc[-3]) & 0xff000000)}.
@item -mthumb
@opindex mthumb
-Generate code for the 16-bit Thumb instruction set. The default is to
+Generate code for the Thumb instruction set. The default is to
use the 32-bit ARM instruction set.
+This option automatically enables either 16-bit Thumb-1 or
+mixed 16/32-bit Thumb-2 instructions based on the @option{-mcpu=@var{name}}
+and @option{-march=@var{name}} options.
@item -mtpcs-frame
@opindex mtpcs-frame