builtins.c (get_memory_rtx): Fix comment.

2019-06-27  Aaron Sawdey  <acsawdey@linux.ibm.com>

	* builtins.c (get_memory_rtx): Fix comment.
	* optabs.def (movmem_optab): Change to cpymem_optab.
	* expr.c (emit_block_move_via_cpymem): Change movmem to cpymem.
	(emit_block_move_hints): Change movmem to cpymem.
	* defaults.h: Change movmem to cpymem.
	* targhooks.c (get_move_ratio): Change movmem to cpymem.
	(default_use_by_pieces_infrastructure_p): Ditto.
	* config/aarch64/aarch64-protos.h: Change movmem to cpymem.
	* config/aarch64/aarch64.c (aarch64_expand_movmem): Change movmem
	to cpymem.
	* config/aarch64/aarch64.h: Change movmem to cpymem.
	* config/aarch64/aarch64.md (movmemdi): Change name to cpymemdi.
	* config/alpha/alpha.h: Change movmem to cpymem in comment.
	* config/alpha/alpha.md (movmemqi, movmemdi, *movmemdi_1): Change
	movmem to cpymem.
	* config/arc/arc-protos.h: Change movmem to cpymem.
	* config/arc/arc.c (arc_expand_movmem): Change movmem to cpymem.
	* config/arc/arc.h: Change movmem to cpymem in comment.
	* config/arc/arc.md (movmemsi): Change movmem to cpymem.
	* config/arm/arm-protos.h: Change movmem to cpymem in names.
	* config/arm/arm.c (arm_movmemqi_unaligned, arm_gen_movmemqi,
	gen_movmem_ldrd_strd, thumb_expand_movmemqi): Change movmem to cpymem.
	* config/arm/arm.md (movmemqi): Change movmem to cpymem.
	* config/arm/thumb1.md (movmem12b, movmem8b): Change movmem to cpymem.
	* config/avr/avr-protos.h: Change movmem to cpymem.
	* config/avr/avr.c (avr_adjust_insn_length, avr_emit_movmemhi,
	avr_out_movmem): Change movmem to cpymem.
	* config/avr/avr.md (movmemhi, movmem_<mode>, movmemx_<mode>):
	Change movmem to cpymem.
	* config/bfin/bfin-protos.h: Change movmem to cpymem.
	* config/bfin/bfin.c (single_move_for_movmem, bfin_expand_movmem):
	Change movmem to cpymem.
	* config/bfin/bfin.h: Change movmem to cpymem in comment.
	* config/bfin/bfin.md (movmemsi): Change name to cpymemsi.
	* config/c6x/c6x-protos.h: Change movmem to cpymem.
	* config/c6x/c6x.c (c6x_expand_movmem): Change movmem to cpymem.
	* config/c6x/c6x.md (movmemsi): Change name to cpymemsi.
	* config/frv/frv.md (movmemsi): Change name to cpymemsi.
	* config/ft32/ft32.md (movmemsi): Change name to cpymemsi.
	* config/h8300/h8300.md (movmemsi): Change name to cpymemsi.
	* config/i386/i386-expand.c (expand_set_or_movmem_via_loop,
	expand_set_or_movmem_via_rep, expand_movmem_epilogue,
	expand_setmem_epilogue_via_loop, expand_set_or_cpymem_prologue,
	expand_small_cpymem_or_setmem,
	expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves,
	expand_set_or_cpymem_constant_prologue, 
	ix86_expand_set_or_cpymem): Change movmem to cpymem.
	* config/i386/i386-protos.h: Change movmem to cpymem.
	* config/i386/i386.h: Change movmem to cpymem in comment.
	* config/i386/i386.md (movmem<mode>): Change name to cpymem<mode>.
	(setmem<mode>): Change expansion function name.
	* config/lm32/lm32.md (movmemsi): Change name to cpymemsi.
	* config/m32c/blkmov.md (movmemhi, movmemhi_bhi_op, movmemhi_bpsi_op,
	movmemhi_whi_op, movmemhi_wpsi_op): Change movmem to cpymem.
	* config/m32c/m32c-protos.h: Change movmem to cpymem.
	* config/m32c/m32c.c (m32c_expand_movmemhi): Change movmem to cpymem.
	* config/m32r/m32r.c (m32r_expand_block_move): Change movmem to cpymem.
	* config/m32r/m32r.md (movmemsi, movmemsi_internal): Change movmem
	to cpymem.
	* config/mcore/mcore.md (movmemsi): Change name to cpymemsi.
	* config/microblaze/microblaze.c: Change movmem to cpymem in comment.
	* config/microblaze/microblaze.md (movmemsi): Change name to cpymemsi.
	* config/mips/mips.c (mips_use_by_pieces_infrastructure_p):
	Change movmem to cpymem.
	* config/mips/mips.h: Change movmem to cpymem.
	* config/mips/mips.md (movmemsi): Change name to cpymemsi.
	* config/nds32/nds32-memory-manipulation.c
	(nds32_expand_movmemsi_loop_unknown_size,
	nds32_expand_movmemsi_loop_known_size, nds32_expand_movmemsi_loop,
	nds32_expand_movmemsi_unroll,
	nds32_expand_movmemsi): Change movmem to cpymem.
	* config/nds32/nds32-multiple.md (movmemsi): Change name to cpymemsi.
	* config/nds32/nds32-protos.h: Change movmem to cpymem.
	* config/pa/pa.c (compute_movmem_length): Change movmem to cpymem.
	(pa_adjust_insn_length): Change call to compute_movmem_length.
	* config/pa/pa.md (movmemsi, movmemsi_prereload, movmemsi_postreload,
	movmemdi, movmemdi_prereload, movmemdi_postreload): Change movmem
	to cpymem.
	* config/pdp11/pdp11.md (movmemhi, movmemhi1, movmemhi_nocc,
	UNSPEC_MOVMEM): Change movmem to cpymem.
	* config/riscv/riscv.c: Change movmem to cpymem in comment.
	* config/riscv/riscv.h: Change movmem to cpymem.
	* config/riscv/riscv.md (movmemsi): Change name to cpymemsi.
	* config/rs6000/rs6000.md (movmemsi): Change name to cpymemsi.
	* config/rx/rx.md (UNSPEC_MOVMEM, movmemsi, rx_movmem): Change
	movmem to cpymem.
	* config/s390/s390-protos.h: Change movmem to cpymem.
	* config/s390/s390.c (s390_expand_movmem, s390_expand_setmem,
	s390_expand_insv): Change movmem to cpymem.
	* config/s390/s390.md (movmem<mode>, movmem_short, *movmem_short,
	movmem_long, *movmem_long, *movmem_long_31z): Change movmem to cpymem.
	* config/sh/sh.md (movmemsi): Change name to cpymemsi.
	* config/sparc/sparc.h: Change movmem to cpymem in comment.
	* config/vax/vax-protos.h (vax_output_movmemsi): Remove prototype
	for nonexistent function.
	* config/vax/vax.h: Change movmem to cpymem in comment.
	* config/vax/vax.md (movmemhi, movmemhi1): Change movmem to cpymem.
	* config/visium/visium.h: Change movmem to cpymem in comment.
	* config/visium/visium.md (movmemsi): Change name to cpymemsi.
	* config/xtensa/xtensa.md (movmemsi): Change name to cpymemsi.
	* doc/md.texi: Change movmem to cpymem and update description to match.
	* doc/rtl.texi: Change movmem to cpymem.
	* target.def (use_by_pieces_infrastructure_p): Change movmem to cpymem.
	* doc/tm.texi: Regenerate.

From-SVN: r272755
Aaron Sawdey 2019-06-27 14:45:36 +00:00 committed by Aaron Sawdey
parent 00e72aa462
commit 76715c3216
75 changed files with 352 additions and 244 deletions

@ -1416,7 +1416,7 @@ expand_builtin_prefetch (tree exp)
}
/* Get a MEM rtx for expression EXP which is the address of an operand
to be used in a string instruction (cmpstrsi, movmemsi, ..). LEN is
to be used in a string instruction (cmpstrsi, cpymemsi, ..). LEN is
the maximum length of the block of memory that might be accessed or
NULL if unknown. */

@ -424,12 +424,12 @@ bool aarch64_constant_address_p (rtx);
bool aarch64_emit_approx_div (rtx, rtx, rtx);
bool aarch64_emit_approx_sqrt (rtx, rtx, bool);
void aarch64_expand_call (rtx, rtx, bool);
bool aarch64_expand_movmem (rtx *);
bool aarch64_expand_cpymem (rtx *);
bool aarch64_float_const_zero_rtx_p (rtx);
bool aarch64_float_const_rtx_p (rtx);
bool aarch64_function_arg_regno_p (unsigned);
bool aarch64_fusion_enabled_p (enum aarch64_fusion_pairs);
bool aarch64_gen_movmemqi (rtx *);
bool aarch64_gen_cpymemqi (rtx *);
bool aarch64_gimple_fold_builtin (gimple_stmt_iterator *);
bool aarch64_is_extend_from_extract (scalar_int_mode, rtx, rtx);
bool aarch64_is_long_call_p (rtx);

@ -17386,11 +17386,11 @@ aarch64_copy_one_block_and_progress_pointers (rtx *src, rtx *dst,
*dst = aarch64_progress_pointer (*dst);
}
/* Expand movmem, as if from a __builtin_memcpy. Return true if
/* Expand cpymem, as if from a __builtin_memcpy. Return true if
we succeed, otherwise return false. */
bool
aarch64_expand_movmem (rtx *operands)
aarch64_expand_cpymem (rtx *operands)
{
int n, mode_bits;
rtx dst = operands[0];

@ -855,7 +855,7 @@ typedef struct
/* MOVE_RATIO dictates when we will use the move_by_pieces infrastructure.
move_by_pieces will continually copy the largest safe chunks. So a
7-byte copy is a 4-byte + 2-byte + byte copy. This proves inefficient
for both size and speed of copy, so we will instead use the "movmem"
for both size and speed of copy, so we will instead use the "cpymem"
standard name to implement the copy. This logic does not apply when
targeting -mstrict-align, so keep a sensible default in that case. */
#define MOVE_RATIO(speed) \

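The by-pieces strategy mentioned in the comment above splits a copy into the largest safe power-of-two chunks, so a 7-byte copy becomes a 4-byte + 2-byte + 1-byte sequence. A rough sketch of that decomposition in C (illustrative only; this is not GCC's actual move_by_pieces code):

```c
#include <stddef.h>

/* Split LEN into descending power-of-two chunk sizes no larger than
   MAX_PIECE, the way a by-pieces copy walks a block.  Writes the chunk
   sizes to OUT and returns how many chunks were produced.
   Illustrative sketch only, not the GCC implementation.  */
static int
split_by_pieces (size_t len, size_t max_piece, size_t *out)
{
  int n = 0;
  for (size_t piece = max_piece; piece > 0; piece >>= 1)
    while (len >= piece)
      {
        out[n++] = piece;
        len -= piece;
      }
  return n;
}
```

With len = 7 and max_piece = 4 this yields chunks of 4, 2 and 1: three separate move-instruction pairs, which is why the comment prefers the cpymem standard name once the chunk count gets inefficient.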
@ -1375,17 +1375,17 @@
;; 0 is dst
;; 1 is src
;; 2 is size of move in bytes
;; 2 is size of copy in bytes
;; 3 is alignment
(define_expand "movmemdi"
(define_expand "cpymemdi"
[(match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand")
(match_operand:DI 2 "immediate_operand")
(match_operand:DI 3 "immediate_operand")]
"!STRICT_ALIGNMENT"
{
if (aarch64_expand_movmem (operands))
if (aarch64_expand_cpymem (operands))
DONE;
FAIL;
}

@ -759,7 +759,7 @@ do { \
#define MOVE_MAX 8
/* If a memory-to-memory move would take MOVE_RATIO or more simple
move-instruction pairs, we will do a movmem or libcall instead.
move-instruction pairs, we will do a cpymem or libcall instead.
Without byte/word accesses, we want no more than four instructions;
with, several single byte accesses are better. */

@ -4673,7 +4673,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemqi"
(define_expand "cpymemqi"
[(parallel [(set (match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand"))
(use (match_operand:DI 2 "immediate_operand"))
@ -4686,7 +4686,7 @@
FAIL;
})
(define_expand "movmemdi"
(define_expand "cpymemdi"
[(parallel [(set (match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand"))
(use (match_operand:DI 2 "immediate_operand"))
@ -4703,7 +4703,7 @@
"TARGET_ABI_OPEN_VMS"
"operands[4] = gen_rtx_SYMBOL_REF (Pmode, \"OTS$MOVE\");")
(define_insn "*movmemdi_1"
(define_insn "*cpymemdi_1"
[(set (match_operand:BLK 0 "memory_operand" "=m,m")
(match_operand:BLK 1 "memory_operand" "m,m"))
(use (match_operand:DI 2 "nonmemory_operand" "r,i"))

@ -35,7 +35,7 @@ extern void arc_final_prescan_insn (rtx_insn *, rtx *, int);
extern const char *arc_output_libcall (const char *);
extern int arc_output_addsi (rtx *operands, bool, bool);
extern int arc_output_commutative_cond_exec (rtx *operands, bool);
extern bool arc_expand_movmem (rtx *operands);
extern bool arc_expand_cpymem (rtx *operands);
extern bool prepare_move_operands (rtx *operands, machine_mode mode);
extern void emit_shift (enum rtx_code, rtx, rtx, rtx);
extern void arc_expand_atomic_op (enum rtx_code, rtx, rtx, rtx, rtx, rtx);

@ -8778,7 +8778,7 @@ arc_output_commutative_cond_exec (rtx *operands, bool output_p)
return 8;
}
/* Helper function of arc_expand_movmem. ADDR points to a chunk of memory.
/* Helper function of arc_expand_cpymem. ADDR points to a chunk of memory.
Emit code and return an potentially modified address such that offsets
up to SIZE are can be added to yield a legitimate address.
if REUSE is set, ADDR is a register that may be modified. */
@ -8812,7 +8812,7 @@ force_offsettable (rtx addr, HOST_WIDE_INT size, bool reuse)
offset ranges. Return true on success. */
bool
arc_expand_movmem (rtx *operands)
arc_expand_cpymem (rtx *operands)
{
rtx dst = operands[0];
rtx src = operands[1];
@ -10322,7 +10322,7 @@ arc_use_by_pieces_infrastructure_p (unsigned HOST_WIDE_INT size,
enum by_pieces_operation op,
bool speed_p)
{
/* Let the movmem expander handle small block moves. */
/* Let the cpymem expander handle small block moves. */
if (op == MOVE_BY_PIECES)
return false;

@ -1423,7 +1423,7 @@ do { \
in one reasonably fast instruction. */
#define MOVE_MAX 4
/* Undo the effects of the movmem pattern presence on STORE_BY_PIECES_P . */
/* Undo the effects of the cpymem pattern presence on STORE_BY_PIECES_P . */
#define MOVE_RATIO(SPEED) ((SPEED) ? 15 : 3)
/* Define this to be nonzero if shift instructions ignore all but the

@ -5126,13 +5126,13 @@ core_3, archs4x, archs4xd, archs4xd_slow"
(set_attr "type" "loop_end")
(set_attr "length" "4,20")])
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" "")
(match_operand:SI 2 "nonmemory_operand" "")
(match_operand 3 "immediate_operand" "")]
""
"if (arc_expand_movmem (operands)) DONE; else FAIL;")
"if (arc_expand_cpymem (operands)) DONE; else FAIL;")
;; Close http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35803 if this works
;; to the point that we can generate cmove instructions.

@ -126,8 +126,8 @@ extern bool offset_ok_for_ldrd_strd (HOST_WIDE_INT);
extern bool operands_ok_ldrd_strd (rtx, rtx, rtx, HOST_WIDE_INT, bool, bool);
extern bool gen_operands_ldrd_strd (rtx *, bool, bool, bool);
extern bool valid_operands_ldrd_strd (rtx *, bool);
extern int arm_gen_movmemqi (rtx *);
extern bool gen_movmem_ldrd_strd (rtx *);
extern int arm_gen_cpymemqi (rtx *);
extern bool gen_cpymem_ldrd_strd (rtx *);
extern machine_mode arm_select_cc_mode (RTX_CODE, rtx, rtx);
extern machine_mode arm_select_dominance_cc_mode (rtx, rtx,
HOST_WIDE_INT);
@ -203,7 +203,7 @@ extern void thumb2_final_prescan_insn (rtx_insn *);
extern const char *thumb_load_double_from_address (rtx *);
extern const char *thumb_output_move_mem_multiple (int, rtx *);
extern const char *thumb_call_via_reg (rtx);
extern void thumb_expand_movmemqi (rtx *);
extern void thumb_expand_cpymemqi (rtx *);
extern rtx arm_return_addr (int, rtx);
extern void thumb_reload_out_hi (rtx *);
extern void thumb_set_return_address (rtx, rtx);

@ -14385,7 +14385,7 @@ arm_block_move_unaligned_loop (rtx dest, rtx src, HOST_WIDE_INT length,
core type, optimize_size setting, etc. */
static int
arm_movmemqi_unaligned (rtx *operands)
arm_cpymemqi_unaligned (rtx *operands)
{
HOST_WIDE_INT length = INTVAL (operands[2]);
@ -14422,7 +14422,7 @@ arm_movmemqi_unaligned (rtx *operands)
}
int
arm_gen_movmemqi (rtx *operands)
arm_gen_cpymemqi (rtx *operands)
{
HOST_WIDE_INT in_words_to_go, out_words_to_go, last_bytes;
HOST_WIDE_INT srcoffset, dstoffset;
@ -14436,7 +14436,7 @@ arm_gen_movmemqi (rtx *operands)
return 0;
if (unaligned_access && (INTVAL (operands[3]) & 3) != 0)
return arm_movmemqi_unaligned (operands);
return arm_cpymemqi_unaligned (operands);
if (INTVAL (operands[3]) & 3)
return 0;
@ -14570,7 +14570,7 @@ arm_gen_movmemqi (rtx *operands)
return 1;
}
/* Helper for gen_movmem_ldrd_strd. Increase the address of memory rtx
/* Helper for gen_cpymem_ldrd_strd. Increase the address of memory rtx
by mode size. */
inline static rtx
next_consecutive_mem (rtx mem)
@ -14585,7 +14585,7 @@ next_consecutive_mem (rtx mem)
/* Copy using LDRD/STRD instructions whenever possible.
Returns true upon success. */
bool
gen_movmem_ldrd_strd (rtx *operands)
gen_cpymem_ldrd_strd (rtx *operands)
{
unsigned HOST_WIDE_INT len;
HOST_WIDE_INT align;
@ -14629,7 +14629,7 @@ gen_movmem_ldrd_strd (rtx *operands)
/* If we cannot generate any LDRD/STRD, try to generate LDM/STM. */
if (!(dst_aligned || src_aligned))
return arm_gen_movmemqi (operands);
return arm_gen_cpymemqi (operands);
/* If the either src or dst is unaligned we'll be accessing it as pairs
of unaligned SImode accesses. Otherwise we can generate DImode
@ -26395,7 +26395,7 @@ thumb_call_via_reg (rtx reg)
/* Routines for generating rtl. */
void
thumb_expand_movmemqi (rtx *operands)
thumb_expand_cpymemqi (rtx *operands)
{
rtx out = copy_to_mode_reg (SImode, XEXP (operands[0], 0));
rtx in = copy_to_mode_reg (SImode, XEXP (operands[1], 0));
@ -26404,13 +26404,13 @@ thumb_expand_movmemqi (rtx *operands)
while (len >= 12)
{
emit_insn (gen_movmem12b (out, in, out, in));
emit_insn (gen_cpymem12b (out, in, out, in));
len -= 12;
}
if (len >= 8)
{
emit_insn (gen_movmem8b (out, in, out, in));
emit_insn (gen_cpymem8b (out, in, out, in));
len -= 8;
}

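The Thumb expander above peels the copy length into 12-byte chunks, then one 8-byte chunk, leaving any remaining tail for single moves. The same chunking logic can be sketched in plain C (memcpy stands in for the generated multi-register insns; illustrative only):

```c
#include <string.h>

/* Mirror thumb_expand_cpymemqi's chunking: consume LEN in 12-byte
   chunks, then at most one 8-byte chunk, then a byte-at-a-time tail.
   memcpy here stands in for the cpymem12b/cpymem8b insn patterns.  */
static void
copy_in_chunks (char *out, const char *in, unsigned len)
{
  while (len >= 12)
    {
      memcpy (out, in, 12);     /* gen_cpymem12b equivalent */
      out += 12; in += 12; len -= 12;
    }
  if (len >= 8)
    {
      memcpy (out, in, 8);      /* gen_cpymem8b equivalent */
      out += 8; in += 8; len -= 8;
    }
  while (len--)                 /* sub-8-byte tail */
    *out++ = *in++;
}
```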
@ -7250,7 +7250,7 @@
;; We could let this apply for blocks of less than this, but it clobbers so
;; many registers that there is then probably a better way.
(define_expand "movmemqi"
(define_expand "cpymemqi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "const_int_operand" "")
@ -7262,12 +7262,12 @@
if (TARGET_LDRD && current_tune->prefer_ldrd_strd
&& !optimize_function_for_size_p (cfun))
{
if (gen_movmem_ldrd_strd (operands))
if (gen_cpymem_ldrd_strd (operands))
DONE;
FAIL;
}
if (arm_gen_movmemqi (operands))
if (arm_gen_cpymemqi (operands))
DONE;
FAIL;
}
@ -7277,7 +7277,7 @@
|| INTVAL (operands[2]) > 48)
FAIL;
thumb_expand_movmemqi (operands);
thumb_expand_cpymemqi (operands);
DONE;
}
"

@ -928,7 +928,7 @@
;; Thumb block-move insns
(define_insn "movmem12b"
(define_insn "cpymem12b"
[(set (mem:SI (match_operand:SI 2 "register_operand" "0"))
(mem:SI (match_operand:SI 3 "register_operand" "1")))
(set (mem:SI (plus:SI (match_dup 2) (const_int 4)))
@ -950,7 +950,7 @@
(set_attr "type" "store_12")]
)
(define_insn "movmem8b"
(define_insn "cpymem8b"
[(set (mem:SI (match_operand:SI 2 "register_operand" "0"))
(mem:SI (match_operand:SI 3 "register_operand" "1")))
(set (mem:SI (plus:SI (match_dup 2) (const_int 4)))

@ -82,7 +82,7 @@ extern rtx avr_to_int_mode (rtx);
extern void avr_expand_prologue (void);
extern void avr_expand_epilogue (bool);
extern bool avr_emit_movmemhi (rtx*);
extern bool avr_emit_cpymemhi (rtx*);
extern int avr_epilogue_uses (int regno);
extern void avr_output_addr_vec (rtx_insn*, rtx);
@ -92,7 +92,7 @@ extern const char* avr_out_plus (rtx, rtx*, int* =NULL, int* =NULL, bool =true);
extern const char* avr_out_round (rtx_insn *, rtx*, int* =NULL);
extern const char* avr_out_addto_sp (rtx*, int*);
extern const char* avr_out_xload (rtx_insn *, rtx*, int*);
extern const char* avr_out_movmem (rtx_insn *, rtx*, int*);
extern const char* avr_out_cpymem (rtx_insn *, rtx*, int*);
extern const char* avr_out_insert_bits (rtx*, int*);
extern bool avr_popcount_each_byte (rtx, int, int);
extern bool avr_has_nibble_0xf (rtx);

@ -9404,7 +9404,7 @@ avr_adjust_insn_length (rtx_insn *insn, int len)
case ADJUST_LEN_MOV16: output_movhi (insn, op, &len); break;
case ADJUST_LEN_MOV24: avr_out_movpsi (insn, op, &len); break;
case ADJUST_LEN_MOV32: output_movsisf (insn, op, &len); break;
case ADJUST_LEN_MOVMEM: avr_out_movmem (insn, op, &len); break;
case ADJUST_LEN_CPYMEM: avr_out_cpymem (insn, op, &len); break;
case ADJUST_LEN_XLOAD: avr_out_xload (insn, op, &len); break;
case ADJUST_LEN_SEXT: avr_out_sign_extend (insn, op, &len); break;
@ -13321,7 +13321,7 @@ avr_emit3_fix_outputs (rtx (*gen)(rtx,rtx,rtx), rtx *op,
}
/* Worker function for movmemhi expander.
/* Worker function for cpymemhi expander.
XOP[0] Destination as MEM:BLK
XOP[1] Source " "
XOP[2] # Bytes to copy
@ -13330,7 +13330,7 @@ avr_emit3_fix_outputs (rtx (*gen)(rtx,rtx,rtx), rtx *op,
Return FALSE if the operand compination is not supported. */
bool
avr_emit_movmemhi (rtx *xop)
avr_emit_cpymemhi (rtx *xop)
{
HOST_WIDE_INT count;
machine_mode loop_mode;
@ -13407,14 +13407,14 @@ avr_emit_movmemhi (rtx *xop)
Do the copy-loop inline. */
rtx (*fun) (rtx, rtx, rtx)
= QImode == loop_mode ? gen_movmem_qi : gen_movmem_hi;
= QImode == loop_mode ? gen_cpymem_qi : gen_cpymem_hi;
insn = fun (xas, loop_reg, loop_reg);
}
else
{
rtx (*fun) (rtx, rtx)
= QImode == loop_mode ? gen_movmemx_qi : gen_movmemx_hi;
= QImode == loop_mode ? gen_cpymemx_qi : gen_cpymemx_hi;
emit_move_insn (gen_rtx_REG (QImode, 23), a_hi8);
@ -13428,7 +13428,7 @@ avr_emit_movmemhi (rtx *xop)
}
/* Print assembler for movmem_qi, movmem_hi insns...
/* Print assembler for cpymem_qi, cpymem_hi insns...
$0 : Address Space
$1, $2 : Loop register
Z : Source address
@ -13436,7 +13436,7 @@ avr_emit_movmemhi (rtx *xop)
*/
const char*
avr_out_movmem (rtx_insn *insn ATTRIBUTE_UNUSED, rtx *op, int *plen)
avr_out_cpymem (rtx_insn *insn ATTRIBUTE_UNUSED, rtx *op, int *plen)
{
addr_space_t as = (addr_space_t) INTVAL (op[0]);
machine_mode loop_mode = GET_MODE (op[1]);

@ -70,7 +70,7 @@
(define_c_enum "unspec"
[UNSPEC_STRLEN
UNSPEC_MOVMEM
UNSPEC_CPYMEM
UNSPEC_INDEX_JMP
UNSPEC_FMUL
UNSPEC_FMULS
@ -158,7 +158,7 @@
tsthi, tstpsi, tstsi, compare, compare64, call,
mov8, mov16, mov24, mov32, reload_in16, reload_in24, reload_in32,
ufract, sfract, round,
xload, movmem,
xload, cpymem,
ashlqi, ashrqi, lshrqi,
ashlhi, ashrhi, lshrhi,
ashlsi, ashrsi, lshrsi,
@ -992,20 +992,20 @@
;;=========================================================================
;; move string (like memcpy)
(define_expand "movmemhi"
(define_expand "cpymemhi"
[(parallel [(set (match_operand:BLK 0 "memory_operand" "")
(match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:HI 2 "const_int_operand" ""))
(use (match_operand:HI 3 "const_int_operand" ""))])]
""
{
if (avr_emit_movmemhi (operands))
if (avr_emit_cpymemhi (operands))
DONE;
FAIL;
})
(define_mode_attr MOVMEM_r_d [(QI "r")
(define_mode_attr CPYMEM_r_d [(QI "r")
(HI "wd")])
;; $0 : Address Space
@ -1013,23 +1013,23 @@
;; R30 : source address
;; R26 : destination address
;; "movmem_qi"
;; "movmem_hi"
(define_insn "movmem_<mode>"
;; "cpymem_qi"
;; "cpymem_hi"
(define_insn "cpymem_<mode>"
[(set (mem:BLK (reg:HI REG_X))
(mem:BLK (reg:HI REG_Z)))
(unspec [(match_operand:QI 0 "const_int_operand" "n")]
UNSPEC_MOVMEM)
(use (match_operand:QIHI 1 "register_operand" "<MOVMEM_r_d>"))
UNSPEC_CPYMEM)
(use (match_operand:QIHI 1 "register_operand" "<CPYMEM_r_d>"))
(clobber (reg:HI REG_X))
(clobber (reg:HI REG_Z))
(clobber (reg:QI LPM_REGNO))
(clobber (match_operand:QIHI 2 "register_operand" "=1"))]
""
{
return avr_out_movmem (insn, operands, NULL);
return avr_out_cpymem (insn, operands, NULL);
}
[(set_attr "adjust_len" "movmem")
[(set_attr "adjust_len" "cpymem")
(set_attr "cc" "clobber")])
@ -1039,14 +1039,14 @@
;; R23:Z : 24-bit source address
;; R26 : 16-bit destination address
;; "movmemx_qi"
;; "movmemx_hi"
(define_insn "movmemx_<mode>"
;; "cpymemx_qi"
;; "cpymemx_hi"
(define_insn "cpymemx_<mode>"
[(set (mem:BLK (reg:HI REG_X))
(mem:BLK (lo_sum:PSI (reg:QI 23)
(reg:HI REG_Z))))
(unspec [(match_operand:QI 0 "const_int_operand" "n")]
UNSPEC_MOVMEM)
UNSPEC_CPYMEM)
(use (reg:QIHI 24))
(clobber (reg:HI REG_X))
(clobber (reg:HI REG_Z))

@ -81,7 +81,7 @@ extern bool expand_move (rtx *, machine_mode);
extern void bfin_expand_call (rtx, rtx, rtx, rtx, int);
extern bool bfin_longcall_p (rtx, int);
extern bool bfin_dsp_memref_p (rtx);
extern bool bfin_expand_movmem (rtx, rtx, rtx, rtx);
extern bool bfin_expand_cpymem (rtx, rtx, rtx, rtx);
extern enum reg_class secondary_input_reload_class (enum reg_class,
machine_mode,

@ -3208,7 +3208,7 @@ output_pop_multiple (rtx insn, rtx *operands)
/* Adjust DST and SRC by OFFSET bytes, and generate one move in mode MODE. */
static void
single_move_for_movmem (rtx dst, rtx src, machine_mode mode, HOST_WIDE_INT offset)
single_move_for_cpymem (rtx dst, rtx src, machine_mode mode, HOST_WIDE_INT offset)
{
rtx scratch = gen_reg_rtx (mode);
rtx srcmem, dstmem;
@ -3224,7 +3224,7 @@ single_move_for_movmem (rtx dst, rtx src, machine_mode mode, HOST_WIDE_INT offse
back on a different method. */
bool
bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
bfin_expand_cpymem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
{
rtx srcreg, destreg, countreg;
HOST_WIDE_INT align = 0;
@ -3269,7 +3269,7 @@ bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
{
if ((count & ~3) == 4)
{
single_move_for_movmem (dst, src, SImode, offset);
single_move_for_cpymem (dst, src, SImode, offset);
offset = 4;
}
else if (count & ~3)
@ -3282,7 +3282,7 @@ bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
}
if (count & 2)
{
single_move_for_movmem (dst, src, HImode, offset);
single_move_for_cpymem (dst, src, HImode, offset);
offset += 2;
}
}
@ -3290,7 +3290,7 @@ bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
{
if ((count & ~1) == 2)
{
single_move_for_movmem (dst, src, HImode, offset);
single_move_for_cpymem (dst, src, HImode, offset);
offset = 2;
}
else if (count & ~1)
@ -3304,7 +3304,7 @@ bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
}
if (count & 1)
{
single_move_for_movmem (dst, src, QImode, offset);
single_move_for_cpymem (dst, src, QImode, offset);
}
return true;
}

@ -793,7 +793,7 @@ typedef struct {
#define MOVE_MAX UNITS_PER_WORD
/* If a memory-to-memory move would take MOVE_RATIO or more simple
move-instruction pairs, we will do a movmem or libcall instead. */
move-instruction pairs, we will do a cpymem or libcall instead. */
#define MOVE_RATIO(speed) 5

@ -2316,14 +2316,14 @@
(set_attr "length" "16")
(set_attr "seq_insns" "multi")])
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "const_int_operand" "")
(match_operand:SI 3 "const_int_operand" "")]
""
{
if (bfin_expand_movmem (operands[0], operands[1], operands[2], operands[3]))
if (bfin_expand_cpymem (operands[0], operands[1], operands[2], operands[3]))
DONE;
FAIL;
})

@ -35,7 +35,7 @@ extern bool c6x_long_call_p (rtx);
extern void c6x_expand_call (rtx, rtx, bool);
extern rtx c6x_expand_compare (rtx, machine_mode);
extern bool c6x_force_op_for_comparison_p (enum rtx_code, rtx);
extern bool c6x_expand_movmem (rtx, rtx, rtx, rtx, rtx, rtx);
extern bool c6x_expand_cpymem (rtx, rtx, rtx, rtx, rtx, rtx);
extern rtx c6x_subword (rtx, bool);
extern void split_di (rtx *, int, rtx *, rtx *);

@ -1686,10 +1686,10 @@ c6x_valid_mask_p (HOST_WIDE_INT val)
return true;
}
/* Expand a block move for a movmemM pattern. */
/* Expand a block move for a cpymemM pattern. */
bool
c6x_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
c6x_expand_cpymem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
rtx expected_align_exp ATTRIBUTE_UNUSED,
rtx expected_size_exp ATTRIBUTE_UNUSED)
{

@ -2844,7 +2844,7 @@
;; Block moves
;; -------------------------------------------------------------------------
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(use (match_operand:BLK 0 "memory_operand" ""))
(use (match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:SI 2 "nonmemory_operand" ""))
@ -2853,7 +2853,7 @@
(use (match_operand:SI 5 "const_int_operand" ""))]
""
{
if (c6x_expand_movmem (operands[0], operands[1], operands[2], operands[3],
if (c6x_expand_cpymem (operands[0], operands[1], operands[2], operands[3],
operands[4], operands[5]))
DONE;
else


@@ -1887,7 +1887,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(use (match_operand:SI 2 "" ""))


@@ -851,7 +851,7 @@
"stpcpy %b1,%b2 # %0 %b1 %b2"
)
(define_insn "movmemsi"
(define_insn "cpymemsi"
[(set (match_operand:BLK 0 "memory_operand" "=W,BW")
(match_operand:BLK 1 "memory_operand" "W,BW"))
(use (match_operand:SI 2 "ft32_imm_operand" "KA,KA"))


@@ -474,11 +474,11 @@
(set_attr "length_table" "*,movl")
(set_attr "cc" "set_zn,set_znv")])
;; Implement block moves using movmd. Defining movmemsi allows the full
;; Implement block copies using movmd. Defining cpymemsi allows the full
;; range of constant lengths (up to 0x40000 bytes when using movmd.l).
;; See h8sx_emit_movmd for details.
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(use (match_operand:BLK 0 "memory_operand" ""))
(use (match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:SI 2 "" ""))


@@ -5801,7 +5801,7 @@ counter_mode (rtx count_exp)
static void
expand_set_or_movmem_via_loop (rtx destmem, rtx srcmem,
expand_set_or_cpymem_via_loop (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value,
rtx count, machine_mode mode, int unroll,
int expected_size, bool issetmem)
@@ -5954,7 +5954,7 @@ scale_counter (rtx countreg, int scale)
Other arguments have same meaning as for previous function. */
static void
expand_set_or_movmem_via_rep (rtx destmem, rtx srcmem,
expand_set_or_cpymem_via_rep (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value, rtx orig_value,
rtx count,
machine_mode mode, bool issetmem)
@@ -6121,7 +6121,7 @@ ix86_expand_aligntest (rtx variable, int value, bool epilogue)
/* Output code to copy at most count & (max_size - 1) bytes from SRC to DEST. */
static void
expand_movmem_epilogue (rtx destmem, rtx srcmem,
expand_cpymem_epilogue (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx count, int max_size)
{
rtx src, dest;
@@ -6146,7 +6146,7 @@ expand_movmem_epilogue (rtx destmem, rtx srcmem,
{
count = expand_simple_binop (GET_MODE (count), AND, count, GEN_INT (max_size - 1),
count, 1, OPTAB_DIRECT);
expand_set_or_movmem_via_loop (destmem, srcmem, destptr, srcptr, NULL,
expand_set_or_cpymem_via_loop (destmem, srcmem, destptr, srcptr, NULL,
count, QImode, 1, 4, false);
return;
}
@@ -6295,7 +6295,7 @@ expand_setmem_epilogue_via_loop (rtx destmem, rtx destptr, rtx value,
{
count = expand_simple_binop (counter_mode (count), AND, count,
GEN_INT (max_size - 1), count, 1, OPTAB_DIRECT);
expand_set_or_movmem_via_loop (destmem, NULL, destptr, NULL,
expand_set_or_cpymem_via_loop (destmem, NULL, destptr, NULL,
gen_lowpart (QImode, value), count, QImode,
1, max_size / 2, true);
}
@@ -6416,7 +6416,7 @@ ix86_adjust_counter (rtx countreg, HOST_WIDE_INT value)
Return value is updated DESTMEM. */
static rtx
expand_set_or_movmem_prologue (rtx destmem, rtx srcmem,
expand_set_or_cpymem_prologue (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value,
rtx vec_value, rtx count, int align,
int desired_alignment, bool issetmem)
@@ -6449,7 +6449,7 @@ expand_set_or_movmem_prologue (rtx destmem, rtx srcmem,
or setmem sequence that is valid for SIZE..2*SIZE-1 bytes
and jump to DONE_LABEL. */
static void
expand_small_movmem_or_setmem (rtx destmem, rtx srcmem,
expand_small_cpymem_or_setmem (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr,
rtx value, rtx vec_value,
rtx count, int size,
@@ -6575,7 +6575,7 @@ expand_small_movmem_or_setmem (rtx destmem, rtx srcmem,
done_label:
*/
static void
expand_set_or_movmem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx srcmem,
expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx srcmem,
rtx *destptr, rtx *srcptr,
machine_mode mode,
rtx value, rtx vec_value,
@@ -6616,7 +6616,7 @@ expand_set_or_movmem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx src
/* Handle sizes > 3. */
for (;size2 > 2; size2 >>= 1)
expand_small_movmem_or_setmem (destmem, srcmem,
expand_small_cpymem_or_setmem (destmem, srcmem,
*destptr, *srcptr,
value, vec_value,
*count,
@@ -6771,7 +6771,7 @@ expand_set_or_movmem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx src
is returned, but also of SRC, which is passed as a pointer for that
reason. */
static rtx
expand_set_or_movmem_constant_prologue (rtx dst, rtx *srcp, rtx destreg,
expand_set_or_cpymem_constant_prologue (rtx dst, rtx *srcp, rtx destreg,
rtx srcreg, rtx value, rtx vec_value,
int desired_align, int align_bytes,
bool issetmem)
@@ -7214,7 +7214,7 @@ ix86_copy_addr_to_reg (rtx addr)
3) Main body: the copying loop itself, copying in SIZE_NEEDED chunks
with specified algorithm. */
bool
ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
ix86_expand_set_or_cpymem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
rtx align_exp, rtx expected_align_exp,
rtx expected_size_exp, rtx min_size_exp,
rtx max_size_exp, rtx probable_max_size_exp,
@@ -7436,7 +7436,7 @@ ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
if (misaligned_prologue_used)
{
/* Misaligned move prologue handled small blocks by itself. */
expand_set_or_movmem_prologue_epilogue_by_misaligned_moves
expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves
(dst, src, &destreg, &srcreg,
move_mode, promoted_val, vec_promoted_val,
&count_exp,
@@ -7553,7 +7553,7 @@ ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
dst = change_address (dst, BLKmode, destreg);
if (!issetmem)
src = change_address (src, BLKmode, srcreg);
dst = expand_set_or_movmem_prologue (dst, src, destreg, srcreg,
dst = expand_set_or_cpymem_prologue (dst, src, destreg, srcreg,
promoted_val, vec_promoted_val,
count_exp, align, desired_align,
issetmem);
@@ -7567,7 +7567,7 @@ ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
{
/* If we know how many bytes need to be stored before dst is
sufficiently aligned, maintain aliasing info accurately. */
dst = expand_set_or_movmem_constant_prologue (dst, &src, destreg,
dst = expand_set_or_cpymem_constant_prologue (dst, &src, destreg,
srcreg,
promoted_val,
vec_promoted_val,
@@ -7626,19 +7626,19 @@ ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
case loop_1_byte:
case loop:
case unrolled_loop:
expand_set_or_movmem_via_loop (dst, src, destreg, srcreg, promoted_val,
expand_set_or_cpymem_via_loop (dst, src, destreg, srcreg, promoted_val,
count_exp, move_mode, unroll_factor,
expected_size, issetmem);
break;
case vector_loop:
expand_set_or_movmem_via_loop (dst, src, destreg, srcreg,
expand_set_or_cpymem_via_loop (dst, src, destreg, srcreg,
vec_promoted_val, count_exp, move_mode,
unroll_factor, expected_size, issetmem);
break;
case rep_prefix_8_byte:
case rep_prefix_4_byte:
case rep_prefix_1_byte:
expand_set_or_movmem_via_rep (dst, src, destreg, srcreg, promoted_val,
expand_set_or_cpymem_via_rep (dst, src, destreg, srcreg, promoted_val,
val_exp, count_exp, move_mode, issetmem);
break;
}
@@ -7691,7 +7691,7 @@ ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
vec_promoted_val, count_exp,
epilogue_size_needed);
else
expand_movmem_epilogue (dst, src, destreg, srcreg, count_exp,
expand_cpymem_epilogue (dst, src, destreg, srcreg, count_exp,
epilogue_size_needed);
}
}


@@ -66,7 +66,7 @@ extern int avx_vpermilp_parallel (rtx par, machine_mode mode);
extern int avx_vperm2f128_parallel (rtx par, machine_mode mode);
extern bool ix86_expand_strlen (rtx, rtx, rtx, rtx);
extern bool ix86_expand_set_or_movmem (rtx, rtx, rtx, rtx, rtx, rtx,
extern bool ix86_expand_set_or_cpymem (rtx, rtx, rtx, rtx, rtx, rtx,
rtx, rtx, rtx, rtx, bool);
extern bool constant_address_p (rtx);


@@ -1901,7 +1901,7 @@ typedef struct ix86_args {
? GET_MODE_SIZE (TImode) : UNITS_PER_WORD)
/* If a memory-to-memory move would take MOVE_RATIO or more simple
move-instruction pairs, we will do a movmem or libcall instead.
move-instruction pairs, we will do a cpymem or libcall instead.
Increasing the value will always make code faster, but eventually
incurs high cost in increased code size.


@@ -16580,7 +16580,7 @@
(set_attr "length_immediate" "0")
(set_attr "modrm" "0")])
(define_expand "movmem<mode>"
(define_expand "cpymem<mode>"
[(use (match_operand:BLK 0 "memory_operand"))
(use (match_operand:BLK 1 "memory_operand"))
(use (match_operand:SWI48 2 "nonmemory_operand"))
@@ -16592,7 +16592,7 @@
(use (match_operand:SI 8 ""))]
""
{
if (ix86_expand_set_or_movmem (operands[0], operands[1],
if (ix86_expand_set_or_cpymem (operands[0], operands[1],
operands[2], NULL, operands[3],
operands[4], operands[5],
operands[6], operands[7],
@@ -16807,7 +16807,7 @@
(use (match_operand:SI 8 ""))]
""
{
if (ix86_expand_set_or_movmem (operands[0], NULL,
if (ix86_expand_set_or_cpymem (operands[0], NULL,
operands[1], operands[2],
operands[3], operands[4],
operands[5], operands[6],


@@ -216,7 +216,7 @@
}
}")
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" ""))
(use (match_operand:SI 2 "" ""))


@@ -40,14 +40,14 @@
;; 1 = source (mem:BLK ...)
;; 2 = count
;; 3 = alignment
(define_expand "movmemhi"
(define_expand "cpymemhi"
[(match_operand 0 "ap_operand" "")
(match_operand 1 "ap_operand" "")
(match_operand 2 "m32c_r3_operand" "")
(match_operand 3 "" "")
]
""
"if (m32c_expand_movmemhi(operands)) DONE; FAIL;"
"if (m32c_expand_cpymemhi(operands)) DONE; FAIL;"
)
;; We can't use mode iterators for these because M16C uses r1h to extend
@@ -60,7 +60,7 @@
;; 3 = dest (in)
;; 4 = src (in)
;; 5 = count (in)
(define_insn "movmemhi_bhi_op"
(define_insn "cpymemhi_bhi_op"
[(set (mem:QI (match_operand:HI 3 "ap_operand" "0"))
(mem:QI (match_operand:HI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
@@ -75,7 +75,7 @@
"TARGET_A16"
"mov.b:q\t#0,r1h\n\tsmovf.b\t; %0[0..%2-1]=r1h%1[]"
)
(define_insn "movmemhi_bpsi_op"
(define_insn "cpymemhi_bpsi_op"
[(set (mem:QI (match_operand:PSI 3 "ap_operand" "0"))
(mem:QI (match_operand:PSI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
@@ -89,7 +89,7 @@
"TARGET_A24"
"smovf.b\t; %0[0..%2-1]=%1[]"
)
(define_insn "movmemhi_whi_op"
(define_insn "cpymemhi_whi_op"
[(set (mem:HI (match_operand:HI 3 "ap_operand" "0"))
(mem:HI (match_operand:HI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
@@ -104,7 +104,7 @@
"TARGET_A16"
"mov.b:q\t#0,r1h\n\tsmovf.w\t; %0[0..%2-1]=r1h%1[]"
)
(define_insn "movmemhi_wpsi_op"
(define_insn "cpymemhi_wpsi_op"
[(set (mem:HI (match_operand:PSI 3 "ap_operand" "0"))
(mem:HI (match_operand:PSI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")


@@ -43,7 +43,7 @@ void m32c_emit_eh_epilogue (rtx);
int m32c_expand_cmpstr (rtx *);
int m32c_expand_insv (rtx *);
int m32c_expand_movcc (rtx *);
int m32c_expand_movmemhi (rtx *);
int m32c_expand_cpymemhi (rtx *);
int m32c_expand_movstr (rtx *);
void m32c_expand_neg_mulpsi3 (rtx *);
int m32c_expand_setmemhi (rtx *);


@@ -3592,7 +3592,7 @@ m32c_expand_setmemhi(rtx *operands)
addresses, not [mem] syntax. $0 is the destination (MEM:BLK), $1
is the source (MEM:BLK), and $2 the count (HI). */
int
m32c_expand_movmemhi(rtx *operands)
m32c_expand_cpymemhi(rtx *operands)
{
rtx desta, srca, count;
rtx desto, srco, counto;
@@ -3620,9 +3620,9 @@ m32c_expand_movmemhi(rtx *operands)
{
count = copy_to_mode_reg (HImode, GEN_INT (INTVAL (count) / 2));
if (TARGET_A16)
emit_insn (gen_movmemhi_whi_op (desto, srco, counto, desta, srca, count));
emit_insn (gen_cpymemhi_whi_op (desto, srco, counto, desta, srca, count));
else
emit_insn (gen_movmemhi_wpsi_op (desto, srco, counto, desta, srca, count));
emit_insn (gen_cpymemhi_wpsi_op (desto, srco, counto, desta, srca, count));
return 1;
}
@@ -3632,9 +3632,9 @@ m32c_expand_movmemhi(rtx *operands)
count = copy_to_mode_reg (HImode, count);
if (TARGET_A16)
emit_insn (gen_movmemhi_bhi_op (desto, srco, counto, desta, srca, count));
emit_insn (gen_cpymemhi_bhi_op (desto, srco, counto, desta, srca, count));
else
emit_insn (gen_movmemhi_bpsi_op (desto, srco, counto, desta, srca, count));
emit_insn (gen_cpymemhi_bpsi_op (desto, srco, counto, desta, srca, count));
return 1;
}


@@ -2598,7 +2598,7 @@ m32r_expand_block_move (rtx operands[])
to the word after the end of the source block, and dst_reg to point
to the last word of the destination block, provided that the block
is MAX_MOVE_BYTES long. */
emit_insn (gen_movmemsi_internal (dst_reg, src_reg, at_a_time,
emit_insn (gen_cpymemsi_internal (dst_reg, src_reg, at_a_time,
new_dst_reg, new_src_reg));
emit_move_insn (dst_reg, new_dst_reg);
emit_move_insn (src_reg, new_src_reg);
@@ -2612,7 +2612,7 @@
}
if (leftover)
emit_insn (gen_movmemsi_internal (dst_reg, src_reg, GEN_INT (leftover),
emit_insn (gen_cpymemsi_internal (dst_reg, src_reg, GEN_INT (leftover),
gen_reg_rtx (SImode),
gen_reg_rtx (SImode)));
return 1;


@@ -2195,7 +2195,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" ""))
(use (match_operand:SI 2 "immediate_operand" ""))
@@ -2214,7 +2214,7 @@
;; Insn generated by block moves
(define_insn "movmemsi_internal"
(define_insn "cpymemsi_internal"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "r")) ;; destination
(mem:BLK (match_operand:SI 1 "register_operand" "r"))) ;; source
(use (match_operand:SI 2 "m32r_block_immediate_operand" "J"));; # bytes to move


@@ -2552,7 +2552,7 @@
;; Block move - adapted from m88k.md
;; ------------------------------------------------------------------------
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (mem:BLK (match_operand:BLK 0 "" ""))
(mem:BLK (match_operand:BLK 1 "" "")))
(use (match_operand:SI 2 "general_operand" ""))


@@ -1250,7 +1250,7 @@ microblaze_block_move_loop (rtx dest, rtx src, HOST_WIDE_INT length)
microblaze_block_move_straight (dest, src, leftover);
}
/* Expand a movmemsi instruction. */
/* Expand a cpymemsi instruction. */
bool
microblaze_expand_block_move (rtx dest, rtx src, rtx length, rtx align_rtx)


@@ -1144,7 +1144,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))


@@ -7938,15 +7938,15 @@ mips_use_by_pieces_infrastructure_p (unsigned HOST_WIDE_INT size,
{
if (op == STORE_BY_PIECES)
return mips_store_by_pieces_p (size, align);
if (op == MOVE_BY_PIECES && HAVE_movmemsi)
if (op == MOVE_BY_PIECES && HAVE_cpymemsi)
{
/* movmemsi is meant to generate code that is at least as good as
move_by_pieces. However, movmemsi effectively uses a by-pieces
/* cpymemsi is meant to generate code that is at least as good as
move_by_pieces. However, cpymemsi effectively uses a by-pieces
implementation both for moves smaller than a word and for
word-aligned moves of no more than MIPS_MAX_MOVE_BYTES_STRAIGHT
bytes. We should allow the tree-level optimisers to do such
moves by pieces, as it often exposes other optimization
opportunities. We might as well continue to use movmemsi at
opportunities. We might as well continue to use cpymemsi at
the rtl level though, as it produces better code when
scheduling is disabled (such as at -O). */
if (currently_expanding_to_rtl)
@@ -8165,7 +8165,7 @@ mips_block_move_loop (rtx dest, rtx src, HOST_WIDE_INT length,
emit_insn (gen_nop ());
}
/* Expand a movmemsi instruction, which copies LENGTH bytes from
/* Expand a cpymemsi instruction, which copies LENGTH bytes from
memory reference SRC to memory reference DEST. */
bool


@@ -3099,12 +3099,12 @@ while (0)
#define MIPS_MIN_MOVE_MEM_ALIGN 16
/* The maximum number of bytes that can be copied by one iteration of
a movmemsi loop; see mips_block_move_loop. */
a cpymemsi loop; see mips_block_move_loop. */
#define MIPS_MAX_MOVE_BYTES_PER_LOOP_ITER \
(UNITS_PER_WORD * 4)
/* The maximum number of bytes that can be copied by a straight-line
implementation of movmemsi; see mips_block_move_straight. We want
implementation of cpymemsi; see mips_block_move_straight. We want
to make sure that any loop-based implementation will iterate at
least twice. */
#define MIPS_MAX_MOVE_BYTES_STRAIGHT \
@@ -3119,11 +3119,11 @@ while (0)
#define MIPS_CALL_RATIO 8
/* Any loop-based implementation of movmemsi will have at least
/* Any loop-based implementation of cpymemsi will have at least
MIPS_MAX_MOVE_BYTES_STRAIGHT / UNITS_PER_WORD memory-to-memory
moves, so allow individual copies of fewer elements.
When movmemsi is not available, use a value approximating
When cpymemsi is not available, use a value approximating
the length of a memcpy call sequence, so that move_by_pieces
will generate inline code if it is shorter than a function call.
Since move_by_pieces_ninsns counts memory-to-memory moves, but
@@ -3131,7 +3131,7 @@ while (0)
value of MIPS_CALL_RATIO to take that into account. */
#define MOVE_RATIO(speed) \
(HAVE_movmemsi \
(HAVE_cpymemsi \
? MIPS_MAX_MOVE_BYTES_STRAIGHT / MOVE_MAX \
: MIPS_CALL_RATIO / 2)


@@ -5638,7 +5638,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))


@@ -1,4 +1,4 @@
/* Auxiliary functions for expand movmem, setmem, cmpmem, load_multiple
/* Auxiliary functions for expand cpymem, setmem, cmpmem, load_multiple
and store_multiple pattern of Andes NDS32 cpu for GNU compiler
Copyright (C) 2012-2019 Free Software Foundation, Inc.
Contributed by Andes Technology Corporation.
@@ -120,14 +120,14 @@ nds32_emit_mem_move_block (int base_regno, int count,
/* ------------------------------------------------------------------------ */
/* Auxiliary function for expand movmem pattern. */
/* Auxiliary function for expand cpymem pattern. */
static bool
nds32_expand_movmemsi_loop_unknown_size (rtx dstmem, rtx srcmem,
nds32_expand_cpymemsi_loop_unknown_size (rtx dstmem, rtx srcmem,
rtx size,
rtx alignment)
{
/* Emit loop version of movmem.
/* Emit loop version of cpymem.
andi $size_least_3_bit, $size, #~7
add $dst_end, $dst, $size
@@ -254,7 +254,7 @@ nds32_expand_movmemsi_loop_unknown_size (rtx dstmem, rtx srcmem,
}
static bool
nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
nds32_expand_cpymemsi_loop_known_size (rtx dstmem, rtx srcmem,
rtx size, rtx alignment)
{
rtx dst_base_reg, src_base_reg;
@@ -288,7 +288,7 @@ nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
if (total_bytes < 8)
{
/* Emit total_bytes less than 8 loop version of movmem.
/* Emit total_bytes less than 8 loop version of cpymem.
add $dst_end, $dst, $size
move $dst_itr, $dst
.Lbyte_mode_loop:
@@ -321,7 +321,7 @@ nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
}
else if (total_bytes % 8 == 0)
{
/* Emit multiple of 8 loop version of movmem.
/* Emit multiple of 8 loop version of cpymem.
add $dst_end, $dst, $size
move $dst_itr, $dst
@@ -370,7 +370,7 @@ nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
else
{
/* Handle size greater than 8, and not a multiple of 8. */
return nds32_expand_movmemsi_loop_unknown_size (dstmem, srcmem,
return nds32_expand_cpymemsi_loop_unknown_size (dstmem, srcmem,
size, alignment);
}
@@ -378,19 +378,19 @@ nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
}
static bool
nds32_expand_movmemsi_loop (rtx dstmem, rtx srcmem,
nds32_expand_cpymemsi_loop (rtx dstmem, rtx srcmem,
rtx size, rtx alignment)
{
if (CONST_INT_P (size))
return nds32_expand_movmemsi_loop_known_size (dstmem, srcmem,
return nds32_expand_cpymemsi_loop_known_size (dstmem, srcmem,
size, alignment);
else
return nds32_expand_movmemsi_loop_unknown_size (dstmem, srcmem,
return nds32_expand_cpymemsi_loop_unknown_size (dstmem, srcmem,
size, alignment);
}
static bool
nds32_expand_movmemsi_unroll (rtx dstmem, rtx srcmem,
nds32_expand_cpymemsi_unroll (rtx dstmem, rtx srcmem,
rtx total_bytes, rtx alignment)
{
rtx dst_base_reg, src_base_reg;
@@ -533,13 +533,13 @@ nds32_expand_movmemsi_unroll (rtx dstmem, rtx srcmem,
This is auxiliary extern function to help create rtx template.
Check nds32-multiple.md file for the patterns. */
bool
nds32_expand_movmemsi (rtx dstmem, rtx srcmem, rtx total_bytes, rtx alignment)
nds32_expand_cpymemsi (rtx dstmem, rtx srcmem, rtx total_bytes, rtx alignment)
{
if (nds32_expand_movmemsi_unroll (dstmem, srcmem, total_bytes, alignment))
if (nds32_expand_cpymemsi_unroll (dstmem, srcmem, total_bytes, alignment))
return true;
if (!optimize_size && optimize > 2)
return nds32_expand_movmemsi_loop (dstmem, srcmem, total_bytes, alignment);
return nds32_expand_cpymemsi_loop (dstmem, srcmem, total_bytes, alignment);
return false;
}


@@ -3751,14 +3751,14 @@
;; operands[3] is the known shared alignment.
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "nds32_reg_constant_operand" "")
(match_operand:SI 3 "const_int_operand" "")]
""
{
if (nds32_expand_movmemsi (operands[0],
if (nds32_expand_cpymemsi (operands[0],
operands[1],
operands[2],
operands[3]))


@@ -78,7 +78,7 @@ extern rtx nds32_di_low_part_subreg(rtx);
extern rtx nds32_expand_load_multiple (int, int, rtx, rtx, bool, rtx *);
extern rtx nds32_expand_store_multiple (int, int, rtx, rtx, bool, rtx *);
extern bool nds32_expand_movmemsi (rtx, rtx, rtx, rtx);
extern bool nds32_expand_cpymemsi (rtx, rtx, rtx, rtx);
extern bool nds32_expand_setmem (rtx, rtx, rtx, rtx, rtx, rtx);
extern bool nds32_expand_strlen (rtx, rtx, rtx, rtx);


@@ -107,7 +107,7 @@ static int pa_can_combine_p (rtx_insn *, rtx_insn *, rtx_insn *, int, rtx,
static bool forward_branch_p (rtx_insn *);
static void compute_zdepwi_operands (unsigned HOST_WIDE_INT, unsigned *);
static void compute_zdepdi_operands (unsigned HOST_WIDE_INT, unsigned *);
static int compute_movmem_length (rtx_insn *);
static int compute_cpymem_length (rtx_insn *);
static int compute_clrmem_length (rtx_insn *);
static bool pa_assemble_integer (rtx, unsigned int, int);
static void remove_useless_addtr_insns (int);
@@ -2985,7 +2985,7 @@ pa_output_block_move (rtx *operands, int size_is_constant ATTRIBUTE_UNUSED)
count insns rather than emit them. */
static int
compute_movmem_length (rtx_insn *insn)
compute_cpymem_length (rtx_insn *insn)
{
rtx pat = PATTERN (insn);
unsigned int align = INTVAL (XEXP (XVECEXP (pat, 0, 7), 0));
@@ -5060,7 +5060,7 @@ pa_adjust_insn_length (rtx_insn *insn, int length)
&& GET_CODE (XEXP (XVECEXP (pat, 0, 0), 1)) == MEM
&& GET_MODE (XEXP (XVECEXP (pat, 0, 0), 0)) == BLKmode
&& GET_MODE (XEXP (XVECEXP (pat, 0, 0), 1)) == BLKmode)
length += compute_movmem_length (insn) - 4;
length += compute_cpymem_length (insn) - 4;
/* Block clear pattern. */
else if (NONJUMP_INSN_P (insn)
&& GET_CODE (pat) == PARALLEL


@@ -3162,9 +3162,9 @@
;; The definition of this insn does not really explain what it does,
;; but it should suffice that anything generated as this insn will be
;; recognized as a movmemsi operation, and that it will not successfully
;; recognized as a cpymemsi operation, and that it will not successfully
;; combine with anything.
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(clobber (match_dup 4))
@@ -3244,7 +3244,7 @@
;; operands 0 and 1 are both equivalent to symbolic MEMs. Thus, we are
;; forced to internally copy operands 0 and 1 to operands 7 and 8,
;; respectively. We then split or peephole optimize after reload.
(define_insn "movmemsi_prereload"
(define_insn "cpymemsi_prereload"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "r,r"))
(mem:BLK (match_operand:SI 1 "register_operand" "r,r")))
(clobber (match_operand:SI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
@@ -3337,7 +3337,7 @@
}
}")
(define_insn "movmemsi_postreload"
(define_insn "cpymemsi_postreload"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "+r,r"))
(mem:BLK (match_operand:SI 1 "register_operand" "+r,r")))
(clobber (match_operand:SI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
@@ -3352,7 +3352,7 @@
"* return pa_output_block_move (operands, !which_alternative);"
[(set_attr "type" "multi,multi")])
(define_expand "movmemdi"
(define_expand "cpymemdi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(clobber (match_dup 4))
@@ -3432,7 +3432,7 @@
;; operands 0 and 1 are both equivalent to symbolic MEMs. Thus, we are
;; forced to internally copy operands 0 and 1 to operands 7 and 8,
;; respectively. We then split or peephole optimize after reload.
(define_insn "movmemdi_prereload"
(define_insn "cpymemdi_prereload"
[(set (mem:BLK (match_operand:DI 0 "register_operand" "r,r"))
(mem:BLK (match_operand:DI 1 "register_operand" "r,r")))
(clobber (match_operand:DI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
@@ -3525,7 +3525,7 @@
}
}")
(define_insn "movmemdi_postreload"
(define_insn "cpymemdi_postreload"
[(set (mem:BLK (match_operand:DI 0 "register_operand" "+r,r"))
(mem:BLK (match_operand:DI 1 "register_operand" "+r,r")))
(clobber (match_operand:DI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp


@@ -26,7 +26,7 @@
UNSPECV_BLOCKAGE
UNSPECV_SETD
UNSPECV_SETI
UNSPECV_MOVMEM
UNSPECV_CPYMEM
])
(define_constants
@@ -664,8 +664,8 @@
[(set_attr "length" "2,2,4,4,2")])
;; Expand a block move. We turn this into a move loop.
(define_expand "movmemhi"
[(parallel [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
(define_expand "cpymemhi"
[(parallel [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:BLK 0 "general_operand" "=g")
(match_operand:BLK 1 "general_operand" "g")
(match_operand:HI 2 "immediate_operand" "i")
@@ -694,8 +694,8 @@
}")
;; Expand a block move. We turn this into a move loop.
(define_insn_and_split "movmemhi1"
[(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
(define_insn_and_split "cpymemhi1"
[(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:HI 0 "register_operand" "+r")
(match_operand:HI 1 "register_operand" "+r")
(match_operand:HI 2 "register_operand" "+r")
@@ -707,7 +707,7 @@
""
"#"
"reload_completed"
[(parallel [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
[(parallel [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_dup 0)
(match_dup 1)
(match_dup 2)
@@ -719,8 +719,8 @@
(clobber (reg:CC CC_REGNUM))])]
"")
(define_insn "movmemhi_nocc"
[(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
(define_insn "cpymemhi_nocc"
[(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:HI 0 "register_operand" "+r")
(match_operand:HI 1 "register_operand" "+r")
(match_operand:HI 2 "register_operand" "+r")


@@ -3050,7 +3050,7 @@ riscv_block_move_loop (rtx dest, rtx src, HOST_WIDE_INT length,
emit_insn(gen_nop ());
}
/* Expand a movmemsi instruction, which copies LENGTH bytes from
/* Expand a cpymemsi instruction, which copies LENGTH bytes from
memory reference SRC to memory reference DEST. */
bool


@@ -840,20 +840,20 @@ while (0)
#undef PTRDIFF_TYPE
#define PTRDIFF_TYPE (POINTER_SIZE == 64 ? "long int" : "int")
/* The maximum number of bytes copied by one iteration of a movmemsi loop. */
/* The maximum number of bytes copied by one iteration of a cpymemsi loop. */
#define RISCV_MAX_MOVE_BYTES_PER_LOOP_ITER (UNITS_PER_WORD * 4)
/* The maximum number of bytes that can be copied by a straight-line
movmemsi implementation. */
cpymemsi implementation. */
#define RISCV_MAX_MOVE_BYTES_STRAIGHT (RISCV_MAX_MOVE_BYTES_PER_LOOP_ITER * 3)
/* If a memory-to-memory move would take MOVE_RATIO or more simple
move-instruction pairs, we will do a movmem or libcall instead.
move-instruction pairs, we will do a cpymem or libcall instead.
Do not use move_by_pieces at all when strict alignment is not
in effect but the target has slow unaligned accesses; in this
case, movmem or libcall is more efficient. */
case, cpymem or libcall is more efficient. */
#define MOVE_RATIO(speed) \
(!STRICT_ALIGNMENT && riscv_slow_unaligned_access_p ? 1 : \


@@ -1498,7 +1498,7 @@
DONE;
})
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))


@@ -9113,7 +9113,7 @@
;; Argument 2 is the length
;; Argument 3 is the alignment
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "")
(match_operand:BLK 1 ""))
(use (match_operand:SI 2 ""))


@@ -46,7 +46,7 @@
(UNSPEC_CONST 13)
(UNSPEC_MOVSTR 20)
(UNSPEC_MOVMEM 21)
(UNSPEC_CPYMEM 21)
(UNSPEC_SETMEM 22)
(UNSPEC_STRLEN 23)
(UNSPEC_CMPSTRN 24)
@@ -2449,13 +2449,13 @@
(set_attr "timings" "1111")] ;; The timing is a guesstimate.
)
(define_expand "movmemsi"
(define_expand "cpymemsi"
[(parallel
[(set (match_operand:BLK 0 "memory_operand") ;; Dest
(match_operand:BLK 1 "memory_operand")) ;; Source
(use (match_operand:SI 2 "register_operand")) ;; Length in bytes
(match_operand 3 "immediate_operand") ;; Align
(unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_MOVMEM)]
(unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_CPYMEM)]
)]
"rx_allow_string_insns"
{
@@ -2486,16 +2486,16 @@
emit_move_insn (len, force_operand (operands[2], NULL_RTX));
operands[0] = replace_equiv_address_nv (operands[0], addr1);
operands[1] = replace_equiv_address_nv (operands[1], addr2);
emit_insn (gen_rx_movmem ());
emit_insn (gen_rx_cpymem ());
DONE;
}
)
(define_insn "rx_movmem"
(define_insn "rx_cpymem"
[(set (mem:BLK (reg:SI 1))
(mem:BLK (reg:SI 2)))
(use (reg:SI 3))
(unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_MOVMEM)
(unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_CPYMEM)
(clobber (reg:SI 1))
(clobber (reg:SI 2))
(clobber (reg:SI 3))]


@@ -104,7 +104,7 @@ extern void s390_reload_symref_address (rtx , rtx , rtx , bool);
extern void s390_expand_plus_operand (rtx, rtx, rtx);
extern void emit_symbolic_move (rtx *);
extern void s390_load_address (rtx, rtx);
extern bool s390_expand_movmem (rtx, rtx, rtx);
extern bool s390_expand_cpymem (rtx, rtx, rtx);
extern void s390_expand_setmem (rtx, rtx, rtx);
extern bool s390_expand_cmpmem (rtx, rtx, rtx, rtx);
extern void s390_expand_vec_strlen (rtx, rtx, rtx);


@@ -5394,7 +5394,7 @@ legitimize_reload_address (rtx ad, machine_mode mode ATTRIBUTE_UNUSED,
/* Emit code to move LEN bytes from DST to SRC. */
bool
s390_expand_movmem (rtx dst, rtx src, rtx len)
s390_expand_cpymem (rtx dst, rtx src, rtx len)
{
/* When tuning for z10 or higher we rely on the Glibc functions to
do the right thing. Only for constant lengths below 64k we will
@@ -5419,14 +5419,14 @@ s390_expand_movmem (rtx dst, rtx src, rtx len)
{
rtx newdst = adjust_address (dst, BLKmode, o);
rtx newsrc = adjust_address (src, BLKmode, o);
emit_insn (gen_movmem_short (newdst, newsrc,
emit_insn (gen_cpymem_short (newdst, newsrc,
GEN_INT (l > 256 ? 255 : l - 1)));
}
}
else if (TARGET_MVCLE)
{
emit_insn (gen_movmem_long (dst, src, convert_to_mode (Pmode, len, 1)));
emit_insn (gen_cpymem_long (dst, src, convert_to_mode (Pmode, len, 1)));
}
else
@@ -5488,7 +5488,7 @@ s390_expand_movmem (rtx dst, rtx src, rtx len)
emit_insn (prefetch);
}
emit_insn (gen_movmem_short (dst, src, GEN_INT (255)));
emit_insn (gen_cpymem_short (dst, src, GEN_INT (255)));
s390_load_address (dst_addr,
gen_rtx_PLUS (Pmode, dst_addr, GEN_INT (256)));
s390_load_address (src_addr,
@@ -5505,7 +5505,7 @@ s390_expand_movmem (rtx dst, rtx src, rtx len)
emit_jump (loop_start_label);
emit_label (loop_end_label);
emit_insn (gen_movmem_short (dst, src,
emit_insn (gen_cpymem_short (dst, src,
convert_to_mode (Pmode, count, 1)));
emit_label (end_label);
}
@@ -5557,7 +5557,7 @@ s390_expand_setmem (rtx dst, rtx len, rtx val)
 if (l > 1)
 {
 rtx newdstp1 = adjust_address (dst, BLKmode, o + 1);
-emit_insn (gen_movmem_short (newdstp1, newdst,
+emit_insn (gen_cpymem_short (newdstp1, newdst,
 GEN_INT (l > 257 ? 255 : l - 2)));
 }
 }
@@ -5664,7 +5664,7 @@ s390_expand_setmem (rtx dst, rtx len, rtx val)
 /* Set the first byte in the block to the value and use an
 overlapping mvc for the block. */
 emit_move_insn (adjust_address (dst, QImode, 0), val);
-emit_insn (gen_movmem_short (dstp1, dst, GEN_INT (254)));
+emit_insn (gen_cpymem_short (dstp1, dst, GEN_INT (254)));
 }
 s390_load_address (dst_addr,
 gen_rtx_PLUS (Pmode, dst_addr, GEN_INT (256)));
@@ -5688,7 +5688,7 @@ s390_expand_setmem (rtx dst, rtx len, rtx val)
 emit_move_insn (adjust_address (dst, QImode, 0), val);
 /* execute only uses the lowest 8 bits of count that's
 exactly what we need here. */
-emit_insn (gen_movmem_short (dstp1, dst,
+emit_insn (gen_cpymem_short (dstp1, dst,
 convert_to_mode (Pmode, count, 1)));
 }
@@ -6330,7 +6330,7 @@ s390_expand_insv (rtx dest, rtx op1, rtx op2, rtx src)
 dest = adjust_address (dest, BLKmode, 0);
 set_mem_size (dest, size);
-s390_expand_movmem (dest, src_mem, GEN_INT (size));
+s390_expand_cpymem (dest, src_mem, GEN_INT (size));
 return true;
 }


@@ -3196,17 +3196,17 @@
 ;
-; movmemM instruction pattern(s).
+; cpymemM instruction pattern(s).
 ;
-(define_expand "movmem<mode>"
+(define_expand "cpymem<mode>"
 [(set (match_operand:BLK 0 "memory_operand" "") ; destination
 (match_operand:BLK 1 "memory_operand" "")) ; source
 (use (match_operand:GPR 2 "general_operand" "")) ; count
 (match_operand 3 "" "")]
 ""
 {
-if (s390_expand_movmem (operands[0], operands[1], operands[2]))
+if (s390_expand_cpymem (operands[0], operands[1], operands[2]))
 DONE;
 else
 FAIL;
@@ -3215,7 +3215,7 @@
 ; Move a block that is up to 256 bytes in length.
 ; The block length is taken as (operands[2] % 256) + 1.
-(define_expand "movmem_short"
+(define_expand "cpymem_short"
 [(parallel
 [(set (match_operand:BLK 0 "memory_operand" "")
 (match_operand:BLK 1 "memory_operand" ""))
@@ -3225,7 +3225,7 @@
 ""
 "operands[3] = gen_rtx_SCRATCH (Pmode);")
-(define_insn "*movmem_short"
+(define_insn "*cpymem_short"
 [(set (match_operand:BLK 0 "memory_operand" "=Q,Q,Q,Q")
 (match_operand:BLK 1 "memory_operand" "Q,Q,Q,Q"))
 (use (match_operand 2 "nonmemory_operand" "n,a,a,a"))
@@ -3293,7 +3293,7 @@
 ; Move a block of arbitrary length.
-(define_expand "movmem_long"
+(define_expand "cpymem_long"
 [(parallel
 [(clobber (match_dup 2))
 (clobber (match_dup 3))
@@ -3327,7 +3327,7 @@
 operands[3] = reg1;
 })
-(define_insn "*movmem_long"
+(define_insn "*cpymem_long"
 [(clobber (match_operand:<DBL> 0 "register_operand" "=d"))
 (clobber (match_operand:<DBL> 1 "register_operand" "=d"))
 (set (mem:BLK (subreg:P (match_operand:<DBL> 2 "register_operand" "0") 0))
@@ -3340,7 +3340,7 @@
 [(set_attr "length" "8")
 (set_attr "type" "vs")])
-(define_insn "*movmem_long_31z"
+(define_insn "*cpymem_long_31z"
 [(clobber (match_operand:TI 0 "register_operand" "=d"))
 [(clobber (match_operand:TI 1 "register_operand" "=d"))
 (set (mem:BLK (subreg:SI (match_operand:TI 2 "register_operand" "0") 4))


@@ -8906,7 +8906,7 @@
 ;; String/block move insn.
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
 [(parallel [(set (mem:BLK (match_operand:BLK 0))
 (mem:BLK (match_operand:BLK 1)))
 (use (match_operand:SI 2 "nonmemory_operand"))


@@ -1412,7 +1412,7 @@ do { \
 #define MOVE_MAX 8
 /* If a memory-to-memory move would take MOVE_RATIO or more simple
-move-instruction pairs, we will do a movmem or libcall instead. */
+move-instruction pairs, we will do a cpymem or libcall instead. */
 #define MOVE_RATIO(speed) ((speed) ? 8 : 3)


@@ -31,7 +31,6 @@ extern void vax_expand_addsub_di_operands (rtx *, enum rtx_code);
 extern const char * vax_output_int_move (rtx, rtx *, machine_mode);
 extern const char * vax_output_int_add (rtx_insn *, rtx *, machine_mode);
 extern const char * vax_output_int_subtract (rtx_insn *, rtx *, machine_mode);
-extern const char * vax_output_movmemsi (rtx, rtx *);
 #endif /* RTX_CODE */
 #ifdef REAL_VALUE_TYPE


@@ -430,7 +430,7 @@ enum reg_class { NO_REGS, ALL_REGS, LIM_REG_CLASSES };
 #define MOVE_MAX 8
 /* If a memory-to-memory move would take MOVE_RATIO or more simple
-move-instruction pairs, we will do a movmem or libcall instead. */
+move-instruction pairs, we will do a cpymem or libcall instead. */
 #define MOVE_RATIO(speed) ((speed) ? 6 : 3)
 #define CLEAR_RATIO(speed) ((speed) ? 6 : 2)


@@ -206,8 +206,8 @@
 }")
 ;; This is here to accept 4 arguments and pass the first 3 along
-;; to the movmemhi1 pattern that really does the work.
-(define_expand "movmemhi"
+;; to the cpymemhi1 pattern that really does the work.
+(define_expand "cpymemhi"
 [(set (match_operand:BLK 0 "general_operand" "=g")
 (match_operand:BLK 1 "general_operand" "g"))
 (use (match_operand:HI 2 "general_operand" "g"))
@@ -215,7 +215,7 @@
 ""
 "
 {
-emit_insn (gen_movmemhi1 (operands[0], operands[1], operands[2]));
+emit_insn (gen_cpymemhi1 (operands[0], operands[1], operands[2]));
 DONE;
 }")
@@ -224,7 +224,7 @@
 ;; that anything generated as this insn will be recognized as one
 ;; and that it won't successfully combine with anything.
-(define_insn "movmemhi1"
+(define_insn "cpymemhi1"
 [(set (match_operand:BLK 0 "memory_operand" "=o")
 (match_operand:BLK 1 "memory_operand" "o"))
 (use (match_operand:HI 2 "general_operand" "g"))


@@ -1138,8 +1138,8 @@ do \
 always make code faster, but eventually incurs high cost in
 increased code size.
-Since we have a movmemsi pattern, the default MOVE_RATIO is 2, which
-is too low given that movmemsi will invoke a libcall. */
+Since we have a cpymemsi pattern, the default MOVE_RATIO is 2, which
+is too low given that cpymemsi will invoke a libcall. */
 #define MOVE_RATIO(speed) ((speed) ? 9 : 3)
 /* `CLEAR_RATIO (SPEED)`


@@ -3006,7 +3006,7 @@
 ;; Argument 2 is the length
 ;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
 [(parallel [(set (match_operand:BLK 0 "memory_operand" "")
 (match_operand:BLK 1 "memory_operand" ""))
 (use (match_operand:SI 2 "general_operand" ""))


@@ -1026,7 +1026,7 @@
 ;; Block moves
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
 [(parallel [(set (match_operand:BLK 0 "" "")
 (match_operand:BLK 1 "" ""))
 (use (match_operand:SI 2 "arith_operand" ""))


@@ -1318,10 +1318,10 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
 #endif
 /* If a memory-to-memory move would take MOVE_RATIO or more simple
-move-instruction sequences, we will do a movmem or libcall instead. */
+move-instruction sequences, we will do a cpymem or libcall instead. */
 #ifndef MOVE_RATIO
-#if defined (HAVE_movmemqi) || defined (HAVE_movmemhi) || defined (HAVE_movmemsi) || defined (HAVE_movmemdi) || defined (HAVE_movmemti)
+#if defined (HAVE_cpymemqi) || defined (HAVE_cpymemhi) || defined (HAVE_cpymemsi) || defined (HAVE_cpymemdi) || defined (HAVE_cpymemti)
 #define MOVE_RATIO(speed) 2
 #else
 /* If we are optimizing for space (-Os), cut down the default move ratio. */
@@ -1342,7 +1342,7 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
 #endif
 /* If a memory set (to value other than zero) operation would take
-SET_RATIO or more simple move-instruction sequences, we will do a movmem
+SET_RATIO or more simple move-instruction sequences, we will do a setmem
 or libcall instead. */
 #ifndef SET_RATIO
 #define SET_RATIO(speed) MOVE_RATIO (speed)


@@ -6200,13 +6200,13 @@ This pattern is not allowed to @code{FAIL}.
 @item @samp{one_cmpl@var{m}2}
 Store the bitwise-complement of operand 1 into operand 0.
-@cindex @code{movmem@var{m}} instruction pattern
-@item @samp{movmem@var{m}}
-Block move instruction. The destination and source blocks of memory
+@cindex @code{cpymem@var{m}} instruction pattern
+@item @samp{cpymem@var{m}}
+Block copy instruction. The destination and source blocks of memory
 are the first two operands, and both are @code{mem:BLK}s with an
 address in mode @code{Pmode}.
-The number of bytes to move is the third operand, in mode @var{m}.
+The number of bytes to copy is the third operand, in mode @var{m}.
 Usually, you specify @code{Pmode} for @var{m}. However, if you can
 generate better code knowing the range of valid lengths is smaller than
 those representable in a full Pmode pointer, you should provide
@@ -6226,14 +6226,16 @@ in a way that the blocks are not required to be aligned according to it in
 all cases. This expected alignment is also in bytes, just like operand 4.
 Expected size, when unknown, is set to @code{(const_int -1)}.
-Descriptions of multiple @code{movmem@var{m}} patterns can only be
+Descriptions of multiple @code{cpymem@var{m}} patterns can only be
 beneficial if the patterns for smaller modes have fewer restrictions
 on their first, second and fourth operands. Note that the mode @var{m}
-in @code{movmem@var{m}} does not impose any restriction on the mode of
-individually moved data units in the block.
+in @code{cpymem@var{m}} does not impose any restriction on the mode of
+individually copied data units in the block.
-These patterns need not give special consideration to the possibility
-that the source and destination strings might overlap.
+The @code{cpymem@var{m}} patterns need not give special consideration
+to the possibility that the source and destination strings might
+overlap. These patterns are used to do inline expansion of
+@code{__builtin_memcpy}.
@cindex @code{movstr} instruction pattern
@item @samp{movstr}
@@ -6254,7 +6256,7 @@ given as a @code{mem:BLK} whose address is in mode @code{Pmode}. The
 number of bytes to set is the second operand, in mode @var{m}. The value to
 initialize the memory with is the third operand. Targets that only support the
 clearing of memory should reject any value that is not the constant 0. See
-@samp{movmem@var{m}} for a discussion of the choice of mode.
+@samp{cpymem@var{m}} for a discussion of the choice of mode.
 The fourth operand is the known alignment of the destination, in the form
 of a @code{const_int} rtx. Thus, if the compiler knows that the
@@ -6272,13 +6274,13 @@ Operand 9 is the probable maximal size (i.e.@: we cannot rely on it for
 correctness, but it can be used for choosing proper code sequence for a
 given size).
-The use for multiple @code{setmem@var{m}} is as for @code{movmem@var{m}}.
+The use for multiple @code{setmem@var{m}} is as for @code{cpymem@var{m}}.
 @cindex @code{cmpstrn@var{m}} instruction pattern
 @item @samp{cmpstrn@var{m}}
 String compare instruction, with five operands. Operand 0 is the output;
 it has mode @var{m}. The remaining four operands are like the operands
-of @samp{movmem@var{m}}. The two memory blocks specified are compared
+of @samp{cpymem@var{m}}. The two memory blocks specified are compared
 byte by byte in lexicographic order starting at the beginning of each
 string. The instruction is not allowed to prefetch more than one byte
 at a time since either string may end in the first byte and reading past


@@ -3341,7 +3341,7 @@ that the register is live. You should think twice before adding
 instead. The @code{use} RTX is most commonly useful to describe that
 a fixed register is implicitly used in an insn. It is also safe to use
 in patterns where the compiler knows for other reasons that the result
-of the whole pattern is variable, such as @samp{movmem@var{m}} or
+of the whole pattern is variable, such as @samp{cpymem@var{m}} or
 @samp{call} patterns.
 During the reload phase, an insn that has a @code{use} as pattern


@@ -6661,7 +6661,7 @@ two areas of memory, or to set, clear or store to memory, for example
 when copying a @code{struct}. The @code{by_pieces} infrastructure
 implements such memory operations as a sequence of load, store or move
 insns. Alternate strategies are to expand the
-@code{movmem} or @code{setmem} optabs, to emit a library call, or to emit
+@code{cpymem} or @code{setmem} optabs, to emit a library call, or to emit
 unit-by-unit, loop-based operations.
 This target hook should return true if, for a memory operation with a
@@ -6680,7 +6680,7 @@ optimized for speed rather than size.
 Returning true for higher values of @var{size} can improve code generation
 for speed if the target does not provide an implementation of the
-@code{movmem} or @code{setmem} standard names, if the @code{movmem} or
+@code{cpymem} or @code{setmem} standard names, if the @code{cpymem} or
 @code{setmem} implementation would be more expensive than a sequence of
 insns, or if the overhead of a library call would dominate that of
 the body of the memory operation.


@@ -73,7 +73,7 @@ along with GCC; see the file COPYING3. If not see
 int cse_not_expected;
 static bool block_move_libcall_safe_for_call_parm (void);
-static bool emit_block_move_via_movmem (rtx, rtx, rtx, unsigned, unsigned, HOST_WIDE_INT,
+static bool emit_block_move_via_cpymem (rtx, rtx, rtx, unsigned, unsigned, HOST_WIDE_INT,
 unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
 unsigned HOST_WIDE_INT);
 static void emit_block_move_via_loop (rtx, rtx, rtx, unsigned);
@@ -1624,7 +1624,7 @@ emit_block_move_hints (rtx x, rtx y, rtx size, enum block_op_methods method,
 if (CONST_INT_P (size) && can_move_by_pieces (INTVAL (size), align))
 move_by_pieces (x, y, INTVAL (size), align, RETURN_BEGIN);
-else if (emit_block_move_via_movmem (x, y, size, align,
+else if (emit_block_move_via_cpymem (x, y, size, align,
 expected_align, expected_size,
 min_size, max_size, probable_max_size))
 ;
@@ -1722,11 +1722,11 @@ block_move_libcall_safe_for_call_parm (void)
 return true;
 }
-/* A subroutine of emit_block_move. Expand a movmem pattern;
+/* A subroutine of emit_block_move. Expand a cpymem pattern;
 return true if successful. */
 static bool
-emit_block_move_via_movmem (rtx x, rtx y, rtx size, unsigned int align,
+emit_block_move_via_cpymem (rtx x, rtx y, rtx size, unsigned int align,
 unsigned int expected_align, HOST_WIDE_INT expected_size,
 unsigned HOST_WIDE_INT min_size,
 unsigned HOST_WIDE_INT max_size,
@@ -1755,7 +1755,7 @@ emit_block_move_via_movmem (rtx x, rtx y, rtx size, unsigned int align,
 FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
 {
 scalar_int_mode mode = mode_iter.require ();
-enum insn_code code = direct_optab_handler (movmem_optab, mode);
+enum insn_code code = direct_optab_handler (cpymem_optab, mode);
 if (code != CODE_FOR_nothing
 /* We don't need MODE to be narrower than BITS_PER_HOST_WIDE_INT


@@ -256,7 +256,7 @@ OPTAB_D (umul_highpart_optab, "umul$a3_highpart")
 OPTAB_D (cmpmem_optab, "cmpmem$a")
 OPTAB_D (cmpstr_optab, "cmpstr$a")
 OPTAB_D (cmpstrn_optab, "cmpstrn$a")
-OPTAB_D (movmem_optab, "movmem$a")
+OPTAB_D (cpymem_optab, "cpymem$a")
 OPTAB_D (setmem_optab, "setmem$a")
 OPTAB_D (strlen_optab, "strlen$a")


@@ -3531,7 +3531,7 @@ two areas of memory, or to set, clear or store to memory, for example\n\
 when copying a @code{struct}. The @code{by_pieces} infrastructure\n\
 implements such memory operations as a sequence of load, store or move\n\
 insns. Alternate strategies are to expand the\n\
-@code{movmem} or @code{setmem} optabs, to emit a library call, or to emit\n\
+@code{cpymem} or @code{setmem} optabs, to emit a library call, or to emit\n\
 unit-by-unit, loop-based operations.\n\
 \n\
 This target hook should return true if, for a memory operation with a\n\
@@ -3550,7 +3550,7 @@ optimized for speed rather than size.\n\
 \n\
 Returning true for higher values of @var{size} can improve code generation\n\
 for speed if the target does not provide an implementation of the\n\
-@code{movmem} or @code{setmem} standard names, if the @code{movmem} or\n\
+@code{cpymem} or @code{setmem} standard names, if the @code{cpymem} or\n\
 @code{setmem} implementation would be more expensive than a sequence of\n\
 insns, or if the overhead of a library call would dominate that of\n\
 the body of the memory operation.\n\


@@ -1746,9 +1746,9 @@ get_move_ratio (bool speed_p ATTRIBUTE_UNUSED)
 #ifdef MOVE_RATIO
 move_ratio = (unsigned int) MOVE_RATIO (speed_p);
 #else
-#if defined (HAVE_movmemqi) || defined (HAVE_movmemhi) || defined (HAVE_movmemsi) || defined (HAVE_movmemdi) || defined (HAVE_movmemti)
+#if defined (HAVE_cpymemqi) || defined (HAVE_cpymemhi) || defined (HAVE_cpymemsi) || defined (HAVE_cpymemdi) || defined (HAVE_cpymemti)
 move_ratio = 2;
-#else /* No movmem patterns, pick a default. */
+#else /* No cpymem patterns, pick a default. */
 move_ratio = ((speed_p) ? 15 : 3);
 #endif
 #endif
@@ -1756,7 +1756,7 @@ get_move_ratio (bool speed_p ATTRIBUTE_UNUSED)
 }
 /* Return TRUE if the move_by_pieces/set_by_pieces infrastructure should be
-used; return FALSE if the movmem/setmem optab should be expanded, or
+used; return FALSE if the cpymem/setmem optab should be expanded, or
 a call to memcpy emitted. */
 bool