target/arm: Simplify op_smlawx for SMLAW*

By shifting the 16-bit input left by 16 bits, the desired portion of the
48-bit product, bits <47:16>, moves up to bits <63:32>, so the result can
be produced with tcg_gen_muls2_i32 and no 64-bit intermediate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
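
To see the identity the patch relies on in isolation, here is a minimal
standalone C sketch (not QEMU code; the function names and test values are
hypothetical, and it assumes the usual two's-complement arithmetic right
shift). Taking the high half of a signed 32x32->64 multiply of Rn by Rm's
low halfword placed in the top 16 bits gives exactly bits <47:16> of the
48-bit product Rn * sxth(Rm), which is what SMLAWB needs:

#include <assert.h>
#include <stdint.h>

/* Reference: SMLAWB's multiply result is bits <47:16> of the
 * 48-bit product rn * sign_extend(rm<15:0>). */
static int32_t smlawb_product_ref(int32_t rn, int32_t rm)
{
    int64_t prod = (int64_t)rn * (int16_t)rm;
    return (int32_t)(prod >> 16);               /* product<47:16> */
}

/* Trick from the patch: place rm<15:0> in the top half of a 32-bit
 * value (the shli by 16), then take the high half of the signed
 * 32x32->64 multiply, which is what tcg_gen_muls2_i32 writes to its
 * second output. */
static int32_t smlawb_product_muls2(int32_t rn, int32_t rm)
{
    int32_t t1 = (int32_t)((uint32_t)rm << 16); /* low halfword at top */
    int64_t prod = (int64_t)rn * t1;
    return (int32_t)(prod >> 32);               /* high half = product<63:32> */
}

int main(void)
{
    assert(smlawb_product_ref(0x12345678, 0x00009abc) ==
           smlawb_product_muls2(0x12345678, 0x00009abc));
    assert(smlawb_product_ref(-123456789, 0x00008000) ==
           smlawb_product_muls2(-123456789, 0x00008000));
    return 0;
}

The old sequence, gen_muls_i64_i32 followed by a 16-bit right shift,
computed the same value; the pre-shift lets the high output of
tcg_gen_muls2_i32 be used directly.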
Author: Richard Henderson, 2019-09-04 12:30:01 -07:00 (committed by Peter Maydell)
Parent: ea96b37464
Commit: 485b607d4f
1 changed file with 8 additions and 8 deletions

@@ -8242,7 +8242,6 @@ DO_SMLAX(SMLALTT, 2, 1, 1)
 static bool op_smlawx(DisasContext *s, arg_rrrr *a, bool add, bool mt)
 {
     TCGv_i32 t0, t1;
-    TCGv_i64 t64;
 
     if (!ENABLE_ARCH_5TE) {
         return false;
@@ -8250,16 +8249,17 @@ static bool op_smlawx(DisasContext *s, arg_rrrr *a, bool add, bool mt)
     t0 = load_reg(s, a->rn);
     t1 = load_reg(s, a->rm);
+    /*
+     * Since the nominal result is product<47:16>, shift the 16-bit
+     * input up by 16 bits, so that the result is at product<63:32>.
+     */
     if (mt) {
-        tcg_gen_sari_i32(t1, t1, 16);
+        tcg_gen_andi_i32(t1, t1, 0xffff0000);
     } else {
-        gen_sxth(t1);
+        tcg_gen_shli_i32(t1, t1, 16);
     }
-    t64 = gen_muls_i64_i32(t0, t1);
-    tcg_gen_shri_i64(t64, t64, 16);
-    t1 = tcg_temp_new_i32();
-    tcg_gen_extrl_i64_i32(t1, t64);
-    tcg_temp_free_i64(t64);
+    tcg_gen_muls2_i32(t0, t1, t0, t1);
+    tcg_temp_free_i32(t0);
     if (add) {
         t0 = load_reg(s, a->ra);
         gen_helper_add_setq(t1, cpu_env, t1, t0);
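
The SMLAWT (mt) path uses the same idea: the single andi with 0xffff0000
leaves Rm's top halfword in place with the low bits cleared, which is the
old sari-by-16 value shifted back up by 16. A second standalone sketch,
again with hypothetical names and test values rather than QEMU code, and
again assuming two's-complement arithmetic right shifts:

#include <assert.h>
#include <stdint.h>

/* Reference: SMLAWT's multiply result is bits <47:16> of the
 * 48-bit product rn * sign_extend(rm<31:16>). */
static int32_t smlawt_product_ref(int32_t rn, int32_t rm)
{
    int64_t prod = (int64_t)rn * (int16_t)(rm >> 16);
    return (int32_t)(prod >> 16);               /* product<47:16> */
}

/* Patch's SMLAWT path: keep rm's top halfword where it is and clear
 * the low 16 bits (the single andi), then take the high half of the
 * signed 32x32->64 multiply, as tcg_gen_muls2_i32 produces. */
static int32_t smlawt_product_muls2(int32_t rn, int32_t rm)
{
    int32_t t1 = (int32_t)(rm & 0xffff0000);    /* top halfword, low bits zero */
    int64_t prod = (int64_t)rn * t1;
    return (int32_t)(prod >> 32);               /* high half = product<63:32> */
}

int main(void)
{
    assert(smlawt_product_ref(0x7fffffff, (int32_t)0x8000ffff) ==
           smlawt_product_muls2(0x7fffffff, (int32_t)0x8000ffff));
    assert(smlawt_product_ref(-40000, 0x7fff0000) ==
           smlawt_product_muls2(-40000, 0x7fff0000));
    return 0;
}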