rs6000: Add optimizations for _mm_sad_epu8

The Power9 ISA added the `vabsdub` instruction, which is realized by the
`vec_absd` intrinsic.

Use `vec_absd` in the `_mm_sad_epu8` compatibility intrinsic when
`_ARCH_PWR9` is defined.
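
As a standalone sketch (the helper name is illustrative; the header
open-codes this inside `_mm_sad_epu8`), the absolute-difference step is a
single `vabsdub` when `_ARCH_PWR9` is defined, and a min/max/subtract
sequence otherwise:

```c
#include <altivec.h>

/* Absolute difference of unsigned bytes, both ways.  */
static inline __vector unsigned char
absd_u8 (__vector unsigned char a, __vector unsigned char b)
{
#ifdef _ARCH_PWR9
  /* Power9: one instruction (vabsdub) via vec_absd.  */
  return vec_absd (a, b);
#else
  /* Pre-Power9 fallback: vmaxub + vminub + vsububm.  */
  return vec_sub (vec_max (a, b), vec_min (a, b));
#endif
}
```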

Also, the little-endian implementation of `vec_sum2s` includes two
rotates that position the input and output to match the semantics of
`vec_sum2s`:
- Rotate the second input vector left 12 bytes. In the current usage,
  that vector is `{0}`, so this rotate is unnecessary, but it is not
  currently eliminated under optimization.
- Rotate the vector produced by the `vsum2sws` instruction left 4 bytes.
  The two words within each doubleword of this (rotated) result must then
  be explicitly swapped to match the semantics of `_mm_sad_epu8`,
  effectively reversing this rotate.  So this rotate (and the subsequent
  swap) are unnecessary, but they are not currently removed under
  optimization.  (See the scalar reference sketch after this list for the
  required result layout.)
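
To make the required layout concrete, here is a scalar reference for the
`_mm_sad_epu8` semantics (an illustrative sketch, not code from the
header): each 64-bit half of the result holds the 16-bit sum of absolute
differences of the corresponding eight bytes, so the sums belong in the
low word of each doubleword, which on little-endian is word elements [0]
and [2].

```c
#include <stdint.h>
#include <stdlib.h>

/* Scalar reference for the result layout _mm_sad_epu8 must produce:
   each 64-bit half of the result is the sum of absolute differences
   of the corresponding eight bytes (a 16-bit value, zero-extended).  */
static void
sad_epu8_ref (const uint8_t a[16], const uint8_t b[16], uint64_t out[2])
{
  for (int half = 0; half < 2; half++)
    {
      uint64_t sum = 0;
      for (int i = 0; i < 8; i++)
        sum += (uint64_t) abs ((int) a[half * 8 + i] - (int) b[half * 8 + i]);
      /* The sum lands in the low 32 bits of the 64-bit lane.  */
      out[half] = sum;
    }
}
```

Since `vsum2sws` already leaves its two sums in word elements [0] and [2]
on little-endian (the other word of each doubleword is zeroed), no further
shift or swap is needed there.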

Using `__builtin_altivec_vsum2sws` retains both rotates, so it is not an
option for removing them.

For little-endian, use the `vsum2sws` instruction directly (via inline
asm) and eliminate the explicit rotate (swap).
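
A usage sketch (assumed test scaffolding, not part of this commit): on
powerpc64le the compatibility header can be exercised directly, assuming
`NO_WARN_X86_INTRINSICS` is defined to permit including it.

```c
#define NO_WARN_X86_INTRINSICS 1
#include <emmintrin.h>

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  uint8_t a[16], b[16];
  uint64_t sums[2];
  __m128i va, vb, vr;

  for (int i = 0; i < 16; i++)
    {
      a[i] = (uint8_t) (i * 7);        /* Arbitrary test data.  */
      b[i] = (uint8_t) (250 - i * 3);
    }

  memcpy (&va, a, sizeof va);
  memcpy (&vb, b, sizeof vb);

  /* Each 64-bit lane holds the sum of absolute differences of the
     corresponding eight bytes.  */
  vr = _mm_sad_epu8 (va, vb);
  memcpy (sums, &vr, sizeof sums);

  printf ("lane0=%" PRIu64 " lane1=%" PRIu64 "\n", sums[0], sums[1]);
  return 0;
}
```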

2021-11-19  Paul A. Clarke  <pc@us.ibm.com>

gcc
	* config/rs6000/emmintrin.h (_mm_sad_epu8): Use vec_absd when
	_ARCH_PWR9, optimize vec_sum2s when LE.

@@ -2189,27 +2189,37 @@ extern __inline __m128i __attribute__((__gnu_inline__, __always_inline__, __artificial__))
 _mm_sad_epu8 (__m128i __A, __m128i __B)
 {
   __v16qu a, b;
-  __v16qu vmin, vmax, vabsdiff;
+  __v16qu vabsdiff;
   __v4si vsum;
   const __v4su zero = { 0, 0, 0, 0 };
   __v4si result;
 
   a = (__v16qu) __A;
   b = (__v16qu) __B;
-  vmin = vec_min (a, b);
-  vmax = vec_max (a, b);
+#ifndef _ARCH_PWR9
+  __v16qu vmin = vec_min (a, b);
+  __v16qu vmax = vec_max (a, b);
   vabsdiff = vec_sub (vmax, vmin);
+#else
+  vabsdiff = vec_absd (a, b);
+#endif
   /* Sum four groups of bytes into integers. */
   vsum = (__vector signed int) vec_sum4s (vabsdiff, zero);
+#ifdef __LITTLE_ENDIAN__
+  /* Sum across four integers with two integer results. */
+  asm ("vsum2sws %0,%1,%2" : "=v" (result) : "v" (vsum), "v" (zero));
+  /* Note: vec_sum2s could be used here, but on little-endian, vector
+     shifts are added that are not needed for this use-case.
+     A vector shift to correctly position the 32-bit integer results
+     (currently at [0] and [2]) to [1] and [3] would then need to be
+     swapped back again since the desired results are two 64-bit
+     integers ([1]|[0] and [3]|[2]). Thus, no shift is performed. */
+#else
   /* Sum across four integers with two integer results. */
   result = vec_sum2s (vsum, (__vector signed int) zero);
   /* Rotate the sums into the correct position. */
-#ifdef __LITTLE_ENDIAN__
-  result = vec_sld (result, result, 4);
-#else
   result = vec_sld (result, result, 6);
 #endif
-  /* Rotate the sums into the correct position. */
   return (__m128i) result;
 }