Commit Graph

29 Commits

Author SHA1 Message Date
H.J. Lu d2cf37c0a2 x86-64: Use _dl_runtime_resolve_opt only with AVX512F [BZ #21871]
On AVX machines with XGETBV (ECX == 1) like Skylake processors,

(gdb) disass _dl_runtime_resolve_avx_opt
Dump of assembler code for function _dl_runtime_resolve_avx_opt:
   0x0000000000015890 <+0>:	push   %rax
   0x0000000000015891 <+1>:	push   %rcx
   0x0000000000015892 <+2>:	push   %rdx
   0x0000000000015893 <+3>:	mov    $0x1,%ecx
   0x0000000000015898 <+8>:	xgetbv
   0x000000000001589b <+11>:	mov    %eax,%r11d
   0x000000000001589e <+14>:	pop    %rdx
   0x000000000001589f <+15>:	pop    %rcx
   0x00000000000158a0 <+16>:	pop    %rax
   0x00000000000158a1 <+17>:	and    $0x4,%r11d
   0x00000000000158a5 <+21>:	bnd je 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.

is slower than:

(gdb) disass _dl_runtime_resolve_avx_slow
Dump of assembler code for function _dl_runtime_resolve_avx_slow:
   0x0000000000015850 <+0>:	vorpd  %ymm0,%ymm1,%ymm8
   0x0000000000015854 <+4>:	vorpd  %ymm2,%ymm3,%ymm9
   0x0000000000015858 <+8>:	vorpd  %ymm4,%ymm5,%ymm10
   0x000000000001585c <+12>:	vorpd  %ymm6,%ymm7,%ymm11
   0x0000000000015860 <+16>:	vorpd  %ymm8,%ymm9,%ymm9
   0x0000000000015865 <+21>:	vorpd  %ymm10,%ymm11,%ymm10
   0x000000000001586a <+26>:	vpcmpeqd %xmm8,%xmm8,%xmm8
   0x000000000001586f <+31>:	vorpd  %ymm9,%ymm10,%ymm10
   0x0000000000015874 <+36>:	vptest %ymm10,%ymm8
   0x0000000000015879 <+41>:	bnd jae 0x158b0 <_dl_runtime_resolve_avx>
   0x000000000001587c <+44>:	vzeroupper
   0x000000000001587f <+47>:	bnd jmpq 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.
(gdb)

since xgetbv takes many more cycles than single-cycle operations like
vorpd/vpcmpeqd/vptest.  _dl_runtime_resolve_opt should be used only with
AVX512F, where AVX512 instructions lead to lower CPU frequency on Skylake
server.

	[BZ #21871]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	bit_arch_Use_dl_runtime_resolve_opt only with AVX512F.
2017-08-04 11:14:33 -07:00
H.J. Lu 03feacb562 x86: Rename glibc.tune.ifunc to glibc.tune.hwcaps
Rename glibc.tune.ifunc to glibc.tune.hwcaps and move it to
sysdeps/x86/dl-tunables.list since it is x86 specific.  Also
change the type of data_cache_size, shared_cache_size and
non_temporal_threshold to unsigned long int to match size_t.
Remove the usage of DEFAULT_STRLEN from cpu-tunables.c.

	* elf/dl-tunables.list (glibc.tune.ifunc): Removed.
	* sysdeps/x86/dl-tunables.list (glibc.tune.hwcaps): New.
	Remove security_level on all fields.
	* manual/tunables.texi: Replace ifunc with hwcaps.
	* sysdeps/x86/cpu-features.c (TUNABLE_CALLBACK (set_ifunc)):
	Renamed to ...
	(TUNABLE_CALLBACK (set_hwcaps)): This.
	(init_cpu_features): Updated.
	* sysdeps/x86/cpu-features.h (cpu_features): Change type of
	data_cache_size, shared_cache_size and non_temporal_threshold to
	unsigned long int.
	* sysdeps/x86/cpu-tunables.c (DEFAULT_STRLEN): Removed.
	(TUNABLE_CALLBACK (set_ifunc)): Renamed to ...
	(TUNABLE_CALLBACK (set_hwcaps)): This.  Update comments.  Don't
	use DEFAULT_STRLEN.
2017-06-21 10:21:37 -07:00
H.J. Lu 905947c304 tunables: Add IFUNC selection and cache sizes
The current IFUNC selection is based on microbenchmarks in glibc.  It
should give the best performance for most workloads.  But other choices
may have better performance for a particular workload or on hardware
which wasn't available when the selection was made.  The environment
variable, GLIBC_TUNABLES=glibc.tune.ifunc=-xxx,yyy,-zzz...., can be used
to enable CPU/ARCH feature yyy and disable CPU/ARCH features xxx and zzz,
where the feature names are case-sensitive and have to match the ones in
cpu-features.h.  It can be used by glibc developers to override the
IFUNC selection to tune for a new processor or improve performance for
a particular workload.  It isn't intended for normal end users.

NOTE: the IFUNC selection may change over time.  Please check all
multiarch implementations when experimenting.

Also, GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=NUMBER is
provided to set the threshold for using non-temporal stores to NUMBER,
GLIBC_TUNABLES=glibc.tune.x86_data_cache_size=NUMBER to set the data
cache size, and GLIBC_TUNABLES=glibc.tune.x86_shared_cache_size=NUMBER
to set the shared cache size.
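
For example, hypothetical invocations might look like this (the feature
name, threshold value and program name are illustrative only; the valid
feature names are the ones in cpu-features.h):

    GLIBC_TUNABLES=glibc.tune.ifunc=-AVX2_Usable ./app
    GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=1000000 ./app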

	* elf/dl-tunables.list (tune): Add ifunc,
	x86_non_temporal_threshold,
	x86_data_cache_size and x86_shared_cache_size.
	* manual/tunables.texi: Document glibc.tune.ifunc,
	glibc.tune.x86_data_cache_size, glibc.tune.x86_shared_cache_size
	and glibc.tune.x86_non_temporal_threshold.
	* sysdeps/unix/sysv/linux/x86/dl-sysdep.c: New file.
	* sysdeps/x86/cpu-tunables.c: Likewise.
	* sysdeps/x86/cacheinfo.c
	(init_cacheinfo): Check and get data cache size, shared cache
	size and non temporal threshold from cpu_features.
	* sysdeps/x86/cpu-features.c [HAVE_TUNABLES] (TUNABLE_NAMESPACE):
	New.
	[HAVE_TUNABLES] Include <unistd.h>.
	[HAVE_TUNABLES] Include <elf/dl-tunables.h>.
	[HAVE_TUNABLES] (TUNABLE_CALLBACK (set_ifunc)): Likewise.
	[HAVE_TUNABLES] (init_cpu_features): Use TUNABLE_GET to set
	IFUNC selection, data cache size, shared cache size and non
	temporal threshold.
	* sysdeps/x86/cpu-features.h (cpu_features): Add data_cache_size,
	shared_cache_size and non_temporal_threshold.
2017-06-20 08:37:28 -07:00
Siddhesh Poyarekar 511c5a1087 Make LD_HWCAP_MASK usable for static binaries
The LD_HWCAP_MASK environment variable was ignored in static binaries,
which is inconsistent with the behaviour of dynamically linked
binaries.  This seems to have been because ld_hwcap_mask could not be
read early enough to influence anything, but now that it is in tunables,
the mask is usable in static binaries as well.

This feature is important for aarch64, which relies on HWCAP_CPUID
being masked out to disable multiarch.  A sanity test on x86_64 shows
that there are no failures.  Likewise for aarch64.

	* elf/dl-hwcaps.h [HAVE_TUNABLES]: Always read hwcap_mask.
	* sysdeps/sparc/sparc32/dl-machine.h [HAVE_TUNABLES]:
	Likewise.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Always set
	up hwcap and hwcap_mask.
2017-06-07 11:11:40 +05:30
Siddhesh Poyarekar ff08fc59e3 tunables: Use glibc.tune.hwcap_mask tunable instead of _dl_hwcap_mask
Drop _dl_hwcap_mask when building with tunables.  This completes the
transition of hwcap_mask reading from _dl_hwcap_mask to tunables.

	* elf/dl-hwcaps.h: New file.
	* elf/dl-hwcaps.c: Include it.
	(_dl_important_hwcaps)[HAVE_TUNABLES]: Read and update
	glibc.tune.hwcap_mask.
	* elf/dl-cache.c: Include dl-hwcaps.h.
	(_dl_load_cache_lookup)[HAVE_TUNABLES]: Read
	glibc.tune.hwcap_mask.
	* sysdeps/sparc/sparc32/dl-machine.h: Likewise.
	* elf/dl-support.c (_dl_hwcap2)[HAVE_TUNABLES]: Drop
	_dl_hwcap_mask.
	* elf/rtld.c (rtld_global_ro)[HAVE_TUNABLES]: Drop
	_dl_hwcap_mask.
	(process_envvars)[HAVE_TUNABLES]: Likewise.
	* sysdeps/generic/ldsodefs.h (rtld_global_ro)[HAVE_TUNABLES]:
	Likewise.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Don't
	initialize dl_hwcap_mask when tunables are enabled.
2017-06-07 11:11:38 +05:30
H.J. Lu 1432d38ea0 x86: Set dl_platform and dl_hwcap from CPU features [BZ #21391]
dl_platform and dl_hwcap are set from AT_PLATFORM and AT_HWCAP very
early during startup.  They are used by the dynamic linker to determine
the platform and to build an array of hardware capability names, which
are added to the search path when loading shared objects.  dl_platform
and dl_hwcap are unused on x86-64.  On i386, the i386, i486, i586 and
i686 platforms were supported and only the SSE2 capability was used.

On x86, usage of AT_PLATFORM and AT_HWCAP to determine platform and
processor capabilities is obsolete since all information is available
in dl_x86_cpu_features.  This patch sets dl_platform and dl_hwcap from
dl_x86_cpu_features in the dynamic linker.  On i386, the available
platforms are changed to i586 and i686 since i386 has been deprecated.
On x86-64, the available platforms are haswell, which is for Haswell
class processors with BMI1, BMI2, LZCNT, MOVBE, POPCNT, AVX2 and FMA,
and xeon_phi, which is for Xeon Phi class processors with AVX512F,
AVX512CD, AVX512ER and AVX512PF.  A capability, avx512_1, is also added
to x86-64 for the AVX512 ISAs AVX512F, AVX512CD, AVX512BW, AVX512DQ and
AVX512VL.
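
As a sketch, the new tables in sysdeps/x86/dl-procinfo.c would then look
roughly like this (contents reconstructed from the description above,
not verified against the source):

    /* Hardware capability names; avx512_1 is the new AVX512 capability.  */
    static const char _dl_x86_hwcap_flags[3][9] =
      { "sse2", "x86_64", "avx512_1" };

    /* Platform names matched against the CPU at startup.  */
    static const char _dl_x86_platforms[4][9] =
      { "i586", "i686", "haswell", "xeon_phi" };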

	[BZ #21391]
	* sysdeps/i386/dl-machine.h (dl_platform_init) [IS_IN (rtld)]:
	Only call init_cpu_features.
	[!IS_IN (rtld)]: Only set GLRO(dl_platform) to NULL if needed.
	* sysdeps/x86_64/dl-machine.h (dl_platform_init): Likewise.
	* sysdeps/i386/dl-procinfo.h: Removed.
	* sysdeps/unix/sysv/linux/i386/dl-procinfo.h: Don't include
	<sysdeps/i386/dl-procinfo.h> nor <ldsodefs.h>.  Include
	<sysdeps/x86/dl-procinfo.h>.
	(_dl_procinfo): Replace _DL_HWCAP_COUNT with 32.
	* sysdeps/unix/sysv/linux/x86_64/dl-procinfo.h [!IS_IN (ldconfig)]:
	Include <sysdeps/x86/dl-procinfo.h> instead of
	 <sysdeps/generic/dl-procinfo.h>.
	* sysdeps/x86/cpu-features.c: Include <dl-hwcap.h>.
	(init_cpu_features): Set dl_platform, dl_hwcap and dl_hwcap_mask.
	* sysdeps/x86/cpu-features.h (bit_cpu_LZCNT): New.
	(bit_cpu_MOVBE): Likewise.
	(bit_cpu_BMI1): Likewise.
	(bit_cpu_BMI2): Likewise.
	(index_cpu_BMI1): Likewise.
	(index_cpu_BMI2): Likewise.
	(index_cpu_LZCNT): Likewise.
	(index_cpu_MOVBE): Likewise.
	(index_cpu_POPCNT): Likewise.
	(reg_BMI1): Likewise.
	(reg_BMI2): Likewise.
	(reg_LZCNT): Likewise.
	(reg_MOVBE): Likewise.
	(reg_POPCNT): Likewise.
	* sysdeps/x86/dl-hwcap.h: New file.
	* sysdeps/x86/dl-procinfo.h: Likewise.
	* sysdeps/x86/dl-procinfo.c (_dl_x86_hwcap_flags): New.
	(_dl_x86_platforms): Likewise.
2017-05-03 13:44:35 -07:00
H.J. Lu 4cb334c4d6 x86: Use AVX2 memcpy/memset on Skylake server [BZ #21396]
On Skylake server, AVX512 load/store instructions in memcpy/memset may
lead to lower CPU turbo frequency in certain situations.  Use of AVX2
in memcpy/memset has been observed to have improved overall performance
in many workloads due to the higher frequency.

Since AVX512ER is unique to Xeon Phi, this patch sets Prefer_No_AVX512
if AVX512ER isn't available so that AVX2 versions of memcpy/memset are
used on Skylake server.
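
As a sketch, the memcpy IFUNC selector then prefers AVX2 unless AVX512
is both usable and not disfavored (names are taken from the ChangeLog
below; the real resolver handles more cases):

    /* Sketch: pick the AVX512 memcpy only when AVX512F is usable and
       Prefer_No_AVX512 is not set; otherwise fall back to AVX2/SSE2.  */
    if (HAS_ARCH_FEATURE (AVX512F_Usable)
        && !HAS_ARCH_FEATURE (Prefer_No_AVX512))
      return __memcpy_avx512_no_vzeroupper;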

	[BZ #21396]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Prefer_No_AVX512 if AVX512ER isn't available.
	* sysdeps/x86/cpu-features.h (bit_arch_Prefer_No_AVX512): New.
	(index_arch_Prefer_No_AVX512): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Don't use
	AVX512 version if Prefer_No_AVX512 is set.
	* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.S (__memmove_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/memset.S (memset): Likewise.
	* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk):
	Likewise.
2017-04-18 14:01:45 -07:00
H.J. Lu 1c53cb49de x86: Set Prefer_No_VZEROUPPER if AVX512ER is available
AVX512ER won't be implemented in any Xeon processors and will be in
all Xeon Phi processors.  Don't check CPU model number when setting
Prefer_No_VZEROUPPER for Xeon Phi.  Instead, set Prefer_No_VZEROUPPER
if AVX512ER is available.  It works with current and future Xeon Phi
and non-Xeon Phi processors.
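
A minimal sketch of the new logic in init_cpu_features, using the names
from the ChangeLog below:

    /* Sketch: identify Xeon Phi by the AVX512ER feature bit instead of
       by CPU model number.  */
    if (CPU_FEATURES_CPU_P (cpu_features, AVX512ER))
      cpu_features->feature[index_arch_Prefer_No_VZEROUPPER]
        |= bit_arch_Prefer_No_VZEROUPPER;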

	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Prefer_No_VZEROUPPER if AVX512ER is available.
	* sysdeps/x86/cpu-features.h
	(bit_cpu_AVX512PF): New.
	(bit_cpu_AVX512ER): Likewise.
	(bit_cpu_AVX512CD): Likewise.
	(bit_cpu_AVX512BW): Likewise.
	(bit_cpu_AVX512VL): Likewise.
	(index_cpu_AVX512PF): Likewise.
	(index_cpu_AVX512ER): Likewise.
	(index_cpu_AVX512CD): Likewise.
	(index_cpu_AVX512BW): Likewise.
	(index_cpu_AVX512VL): Likewise.
	(reg_AVX512PF): Likewise.
	(reg_AVX512ER): Likewise.
	(reg_AVX512CD): Likewise.
	(reg_AVX512BW): Likewise.
	(reg_AVX512VL): Likewise.
2017-04-18 08:27:32 -07:00
H.J. Lu b170d2e7ab Use CPU_FEATURES_CPU_P to check if AVX is available
Don't use bit_cpu_AVX directly.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Check AVX with
	CPU_FEATURES_CPU_P.
2017-03-17 11:38:13 -07:00
H.J. Lu 52ac22365a Use index_cpu_RTM and reg_RTM to clear the bit_cpu_RTM bit
* sysdeps/x86/cpu-features.c (init_cpu_features): Use
	index_cpu_RTM and reg_RTM to clear the bit_cpu_RTM bit.
2017-02-17 11:53:26 -08:00
Joseph Myers bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Andrew Senkevich 2702856bf4 Disable TSX on some Haswell processors.
This patch disables Intel TSX on some Haswell processors to avoid TSX
on kernels that weren't updated with the latest microcode package
(which disables the broken feature by default).
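
A hedged sketch of the mechanism, following the RTM-bit idiom used
elsewhere in this log (the model and stepping values below are
placeholders, not the actual list of affected parts):

    /* Sketch: clear the RTM CPUID bit on affected Haswell steppings so
       TSX is never reported as available.  Values are illustrative.  */
    if (model == 0x3c && stepping <= 3)  /* hypothetical cutoff */
      cpu_features->cpuid[index_cpu_RTM].reg_RTM &= ~bit_cpu_RTM;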

    * sysdeps/x86/cpu-features.c (get_common_indeces): Add
    stepping identification.
    (init_cpu_features): Add handling of Haswell.
2016-12-19 14:15:57 +03:00
Carlos O'Donell b3d17c1cf2 Bug 20689: Fix FMA and AVX2 detection on Intel
In the Intel Architecture Instruction Set Extensions Programming
reference the recommended way to test for FMA in section
'2.2.1 Detection of FMA' is:

"Application Software must identify that hardware supports AVX as
explained in ... after that it must also detect support for FMA..."

We don't do that in glibc. We use osxsave to detect the use of xgetbv,
and after that we check for AVX and FMA orthogonally. It is conceivable
that you could have the AVX bit clear and the FMA bit in an undefined
state.

This commit fixes FMA and AVX2 detection to depend on usable AVX
as required by the recommended Intel sequences.
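
In code, the fixed sequence looks roughly like this sketch (using the
CPU_FEATURES_* accessors seen elsewhere in this log; simplified from
the real change):

    /* Sketch: derive FMA/AVX2 usability only once AVX is usable.  */
    if (CPU_FEATURES_ARCH_P (cpu_features, AVX_Usable))
      {
        if (CPU_FEATURES_CPU_P (cpu_features, FMA))
          cpu_features->feature[index_arch_FMA_Usable]
            |= bit_arch_FMA_Usable;
        if (CPU_FEATURES_CPU_P (cpu_features, AVX2))
          cpu_features->feature[index_arch_AVX2_Usable]
            |= bit_arch_AVX2_Usable;
      }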

v1: https://www.sourceware.org/ml/libc-alpha/2016-10/msg00241.html
v2: https://www.sourceware.org/ml/libc-alpha/2016-10/msg00265.html
2016-10-17 19:39:54 -04:00
H.J. Lu fb0f7a6755 X86-64: Add _dl_runtime_resolve_avx[512]_{opt|slow} [BZ #20508]
There is a transition penalty when SSE instructions are mixed with
256-bit AVX or 512-bit AVX512 load instructions.  Since
_dl_runtime_resolve_avx and _dl_runtime_profile_avx512 save/restore
256-bit YMM/512-bit ZMM registers, there is a transition penalty when
SSE instructions are used with lazy binding on AVX and AVX512
processors.

To avoid SSE transition penalty, if only the lower 128 bits of the first
8 vector registers are non-zero, we can preserve %xmm0 - %xmm7 registers
with the zero upper bits.

For AVX and AVX512 processors which support XGETBV with ECX == 1, we can
use XGETBV with ECX == 1 to check if the upper 128 bits of YMM registers
or the upper 256 bits of ZMM registers are zero.  We can restore only the
non-zero portion of vector registers with AVX/AVX512 load instructions
which will zero-extend upper bits of vector registers.

This patch adds _dl_runtime_resolve_sse_vex which saves and restores
XMM registers with 128-bit AVX store/load instructions.  It is used to
preserve YMM/ZMM registers when only the lower 128 bits are non-zero.
_dl_runtime_resolve_avx_opt and _dl_runtime_resolve_avx512_opt are added
and used on AVX/AVX512 processors supporting XGETBV with ECX == 1 so
that we store and load only the non-zero portion of vector registers.
This avoids SSE transition penalty caused by _dl_runtime_resolve_avx and
_dl_runtime_profile_avx512 when only the lower 128 bits of vector
registers are used.

_dl_runtime_resolve_avx_slow is added and used for AVX processors which
don't support XGETBV with ECX == 1.  Since there is no SSE transition
penalty on AVX512 processors which don't support XGETBV with ECX == 1,
_dl_runtime_resolve_avx512_slow isn't provided.
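
For reference, a minimal C sketch of the XGETBV (ECX == 1) query the
*_opt variants rely on (inline assembly; assumes a compiler without an
xgetbv intrinsic):

    /* Return the XINUSE state.  Bit 2 set means the upper halves of
       some YMM registers are in use, so the full vector state must be
       preserved; compare the 'and $0x4,%r11d' in the disassembly near
       the top of this log.  */
    static unsigned int
    xgetbv_xinuse (void)
    {
      unsigned int eax, edx;
      __asm__ ("xgetbv" : "=a" (eax), "=d" (edx) : "c" (1));
      return eax;
    }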

	[BZ #20495]
	[BZ #20508]
	* sysdeps/x86/cpu-features.c (init_cpu_features): For Intel
	processors, set Use_dl_runtime_resolve_slow and set
	Use_dl_runtime_resolve_opt if XGETBV supports ECX == 1.
	* sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt):
	New.
	(bit_arch_Use_dl_runtime_resolve_slow): Likewise.
	(index_arch_Use_dl_runtime_resolve_opt): Likewise.
	(index_arch_Use_dl_runtime_resolve_slow): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Use
	_dl_runtime_resolve_avx512_opt and _dl_runtime_resolve_avx_opt
	if Use_dl_runtime_resolve_opt is set.  Use
	_dl_runtime_resolve_slow if Use_dl_runtime_resolve_slow is set.
	* sysdeps/x86_64/dl-trampoline.S: Include <cpu-features.h>.
	(_dl_runtime_resolve_opt): New.  Defined for AVX and AVX512.
	(_dl_runtime_resolve): Add one for _dl_runtime_resolve_sse_vex.
	* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx_slow):
	New.
	(_dl_runtime_resolve_opt): Likewise.
	(_dl_runtime_profile): Define only if _dl_runtime_profile is
	defined.
2016-09-06 08:51:07 -07:00
H.J. Lu 91655fc307 Check FMA after COMMON_CPUID_INDEX_80000001
Since the FMA4 bit is in COMMON_CPUID_INDEX_80000001 and FMA4 requires
AVX, determine if FMA4 is usable after COMMON_CPUID_INDEX_80000001 is
available and if AVX is usable.
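
A sketch of the moved check, using names consistent with the rest of
this log (simplified):

    /* Sketch: FMA4 requires both the CPUID bit from leaf 0x80000001
       and usable AVX.  */
    if (CPU_FEATURES_ARCH_P (cpu_features, AVX_Usable)
        && CPU_FEATURES_CPU_P (cpu_features, FMA4))
      cpu_features->feature[index_arch_FMA4_Usable]
        |= bit_arch_FMA4_Usable;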

	[BZ #20195]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Move FMA4
	check to ...
	(init_cpu_features): Here.
2016-06-07 08:00:40 -07:00
H.J. Lu 2e2d9796da Detect Intel Goldmont and Airmont processors
Updated based on the model numbers of Goldmont and Airmont processors
in the Intel 64 and IA-32 Architectures Software Developer's Manual,
Volume 3, Revision 058.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Detect Intel
	Goldmont and Airmont processors.
2016-04-15 05:23:06 -07:00
H.J. Lu 27d3ce1467 Remove Fast_Copy_Backward from Intel Core processors
Intel Core i3, i5 and i7 processors have fast unaligned copy, and the
copy-backward path is ignored.  Remove Fast_Copy_Backward from Intel
Core processors to avoid confusion.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Don't set
	bit_arch_Fast_Copy_Backward for Intel Core processors.
2016-04-01 15:09:14 -07:00
H.J. Lu e41b395523 [x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE load is
slower than memcpy optimized with aligned SSSE3, while other string
functions are faster with unaligned SSE load.  A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE load.
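
As a sketch, the memcpy selector then keys on the new bit (names from
the ChangeLog below; simplified):

    /* Sketch: memcpy now tests Fast_Unaligned_Copy rather than
       Fast_Unaligned_Load, so AMD keeps the aligned SSSE3 memcpy while
       other string functions still use unaligned SSE loads.  */
    if (HAS_ARCH_FEATURE (Fast_Unaligned_Copy))
      return __memcpy_sse2_unaligned;
    return __memcpy_ssse3;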

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
2016-03-28 04:40:03 -07:00
H.J. Lu f781a9e961 Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors
Since only Intel processors with AVX2 have fast unaligned load, we
should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors.

Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces
and call get_common_indeces for other processors.

Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
GLRO(dl_x86_cpu_features) in cpu-features.c.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Remove
	inline.  Check family before setting family, model and
	extended_model.  Set AVX, AVX2, AVX512, FMA and FMA4 usable
	bits here.
	(init_cpu_features): Replace HAS_CPU_FEATURE and
	HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and
	CPU_FEATURES_ARCH_P.  Set index_arch_AVX_Fast_Unaligned_Load
	for Intel processors with usable AVX2.  Call get_common_indeces
	for other processors with family == NULL.
	* sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
	(CPU_FEATURES_ARCH_P): Likewise.
	(HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
	(HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
2016-03-22 07:47:20 -07:00
H.J. Lu 6aa3e97e25 Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h
index_* and bit_* macros are used to access the cpuid and feature
arrays of struct cpu_features.  It is very easy to mistakenly use bits
and indices of the cpuid array on the feature array, especially in
assembly code.  For example, sysdeps/i386/i686/multiarch/bcopy.S has
sysdeps/i386/i686/multiarch/bcopy.S has

	HAS_CPU_FEATURE (Fast_Rep_String)

which should be

	HAS_ARCH_FEATURE (Fast_Rep_String)

We change index_* and bit_* to index_cpu_*/index_arch_* and
bit_cpu_*/bit_arch_* so that we can catch such errors at build time.

	[BZ #19762]
	* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
	(EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
	* sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
	(bit_arch_*): This for feature array.
	(bit_*): Renamed to ...
	(bit_cpu_*): This for cpu array.
	(index_*): Renamed to ...
	(index_arch_*): This for feature array.
	(index_*): Renamed to ...
	(index_cpu_*): This for cpu array.
	[__ASSEMBLER__] (HAS_FEATURE): Add and use field.
	[__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE.
	[__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE.
	[!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and
	bit_##name with index_cpu_##name and bit_cpu_##name.
	[!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and
	bit_##name with index_arch_##name and bit_arch_##name.
2016-03-10 05:27:07 -08:00
Amit Pawar d7890e6947 Set index_Fast_Unaligned_Load for Excavator family CPUs
The glibc benchtest test cases show that SSE2_Unaligned based
implementations perform faster than SSE2 based implementations for the
routines strcmp, strcat, strncat, stpcpy, stpncpy, strcpy, strncpy
and strstr.  The index_Fast_Unaligned_Load flag is set for Excavator
family 0x15 CPUs.  This makes SSE2_Unaligned based implementations the
default for these routines.
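
A sketch of the change, using the pre-rename macro names in effect at
the time (the real change may also restrict this to specific model
ranges within the family; that detail is omitted here):

    /* Sketch: AMD family 0x15 (Excavator) gets Fast_Unaligned_Load.  */
    if (family == 0x15)
      cpu_features->feature[index_Fast_Unaligned_Load]
        |= bit_Fast_Unaligned_Load;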

	[BZ #19467]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	index_Fast_Unaligned_Load flag for Excavator family CPUs.
2016-01-14 08:14:31 -08:00
Joseph Myers f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Andrew Senkevich 83d776f979 Added memset optimized with AVX512 for KNL hardware.
It shows an improvement of up to 28% over AVX2 memset (performance
results attached at <https://sourceware.org/ml/libc-alpha/2015-12/msg00052.html>).

    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: New file.
    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new file.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
    * sysdeps/x86_64/multiarch/memset.S: Added new IFUNC branch.
    * sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
    * sysdeps/x86/cpu-features.h (bit_Prefer_No_VZEROUPPER,
    index_Prefer_No_VZEROUPPER): New.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Set the
    Prefer_No_VZEROUPPER for Knights Landing.
2015-12-19 02:47:28 +03:00
H.J. Lu c9afcaaafa Enable Silvermont optimizations for Knights Landing
Knights Landing processor is based on Silvermont.  This patch enables
Silvermont optimizations for Knights Landing.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Enable
	Silvermont optimizations for Knights Landing.
2015-12-15 11:46:54 -08:00
H.J. Lu 9627da32ec Update family and model detection for AMD CPUs
AMD CPUs use a similar encoding scheme for extended family and model
as Intel CPUs, as shown in:

http://support.amd.com/TechDocs/25481.pdf

This patch updates get_common_indeces to get family and model for both
Intel and AMD CPUs when family == 0x0f.
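
Concretely, the shared decoding follows the standard CPUID encoding
(variable names here are illustrative):

    /* CPUID leaf 1, EAX: the extended family and extended model fields
       only apply when the base family is 0x0f.  */
    unsigned int eax = cpu_features->cpuid[COMMON_CPUID_INDEX_1].eax;
    unsigned int family = (eax >> 8) & 0x0f;
    unsigned int model = (eax >> 4) & 0x0f;
    if (family == 0x0f)
      {
        family += (eax >> 20) & 0xff;          /* extended family */
        model += ((eax >> 16) & 0x0f) << 4;    /* extended model */
      }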

	[BZ #19214]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Add an
	argument to return extended model.  Update family and model
	with extended family and model when family == 0x0f.
	(init_cpu_features): Updated.
2015-11-30 09:01:31 -08:00
H.J. Lu 75dd0a8f3d Detect and select i586/i686 implementation at run-time
We detect i586 and i686 features at run time by checking the CX8 and
CMOV CPUID feature bits.  We can use this information to select the
best implementation in ix86 multiarch.  HAS_I586/HAS_I686 is true if
i586/i686 instructions are available on the processor.

Due to the reordering and the other nifty extensions in i686, it is not
really good to use heavily i586 optimized code on an i686.  It's better
to use i486 code if it isn't an i586.  USE_I586/USE_I686 is true if
i586/i686 implementation should be used for the processor.  USE_I586
is true only if i686 instructions aren't available.  If i686 instructions
are available, we always choose i686 or i486 implementation, in that order,
and we never choose i586 implementation for i686-class processors.
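
A sketch of the run-time detection (macro names as they were before the
later bit_cpu_*/bit_arch_* rename):

    /* Sketch: CX8 (cmpxchg8b) implies the i586 ISA subset; CMOV
       implies i686.  */
    if (HAS_CPU_FEATURE (CX8))
      cpu_features->feature[index_I586] |= bit_I586;
    if (HAS_CPU_FEATURE (CMOV))
      cpu_features->feature[index_I686] |= bit_I686;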

	* sysdeps/i386/init-arch.h: New file.
	* sysdeps/i386/i586/init-arch.h: Likewise.
	* sysdeps/i386/i686/init-arch.h: Likewise.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_I586
	bit if CX8 is available.  Set bit_I686 bit if CMOV is available.
	* sysdeps/x86/cpu-features.h (bit_I586): New.
	(bit_I686): Likewise.
	(bit_CX8): Likewise.
	(bit_CMOV): Likewise.
	(index_CX8): Likewise.
	(index_CMOV): Likewise.
	(index_I586): Likewise.
	(index_I686): Likewise.
	(reg_CX8): Likewise.
	(reg_CMOV): Likewise.
	(HAS_I586): Defined as HAS_ARCH_FEATURE (I586) if i586 isn't
	available at compile-time.
	(HAS_I686): Defined as HAS_ARCH_FEATURE (I686) if i686 isn't
	available at compile-time.
	* sysdeps/x86/init-arch.h (USE_I586): New macro.
	(USE_I686): Likewise.
2015-08-27 09:06:44 -07:00
H.J. Lu 1814df5b02 Define HAS_CPUID/HAS_I586/HAS_I686 from -march=
cpuid, i586 and i686 instructions are available if the processor
specified by -march= supports them.  We can use this information
to determine whether those instructions can be used safely.
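
A plausible sketch of the compile-time defaults (the exact list of
-march= predefines checked is an assumption):

    /* Sketch: assume CPUID exists when -march= targets i586 or later.  */
    #if defined __i586__ || defined __i686__ || defined __x86_64__
    # define HAS_CPUID 1
    #else
    # define HAS_CPUID 0
    #endif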

	* sysdeps/x86/cpu-features.c (init_cpu_features): Check
	whether cpuid is available only if HAS_CPUID is 0.
	* sysdeps/x86/cpu-features.h (HAS_CPUID): New.
	(HAS_I586): Likewise.
	(HAS_I686): Likewise.
2015-08-18 08:00:00 -07:00
H.J. Lu a5cf909b8f Check if cpuid is available in init_cpu_features
Since not all i486 processors support cpuid, we call __get_cpuid_max to
check if cpuid is available before using it when not compiling for i586,
i686 or x86-64.
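
In code, the guard looks roughly like this sketch (__get_cpuid_max is
from GCC's <cpuid.h> and returns 0 when CPUID is unsupported):

    #include <cpuid.h>

    /* Sketch: skip CPUID-based detection when CPUID may be absent.  */
    #if !HAS_CPUID
      if (__get_cpuid_max (0, NULL) == 0)
        return;   /* leave cpu_features in its default state */
    #endif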

	* sysdeps/x86/cpu-features.c (init_cpu_features): Call
	__get_cpuid_max if not compiling for i586, i686 nor x86-64.
2015-08-13 04:53:03 -07:00
H.J. Lu e2e4f56056 Add _dl_x86_cpu_features to rtld_global
This patch adds _dl_x86_cpu_features to rtld_global in x86 ld.so
and initializes it early before __libc_start_main is called so that
cpu_features is always available when it is used and we can avoid
calling __init_cpu_features in IFUNC selectors.

	* sysdeps/i386/dl-machine.h: Include <cpu-features.c>.
	(dl_platform_init): Call init_cpu_features.
	* sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New.
	* sysdeps/i386/i686/cacheinfo.c
	(DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed.
	* sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch.
	* sysdeps/i386/i686/multiarch/Versions: Removed.
	* sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET):
	Removed.
	* sysdeps/i386/ldsodefs.h: Include <cpu-features.h>.
	* sysdeps/unix/sysv/linux/x86/Makefile
	(libpthread-sysdep_routines): Remove init-arch.
	* sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include
	<sysdeps/x86_64/dl-procinfo.c> instead of
	<sysdeps/generic/dl-procinfo.c>.
	* sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers):
	Add cpu-features-offsets.sym and rtld-global-offsets.sym.
	[$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features.
	[$(subdir) == elf] (tests): Add tst-get-cpu-features.
	[$(subdir) == elf] (tests-static): Add
	tst-get-cpu-features-static.
	* sysdeps/x86/Versions: New file.
	* sysdeps/x86/cpu-features-offsets.sym: Likewise.
	* sysdeps/x86/cpu-features.c: Likewise.
	* sysdeps/x86/cpu-features.h: Likewise.
	* sysdeps/x86/dl-get-cpu-features.c: Likewise.
	* sysdeps/x86/libc-start.c: Likewise.
	* sysdeps/x86/rtld-global-offsets.sym: Likewise.
	* sysdeps/x86/tst-get-cpu-features-static.c: Likewise.
	* sysdeps/x86/tst-get-cpu-features.c: Likewise.
	* sysdeps/x86_64/dl-procinfo.c: Likewise.
	* sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed.
	Assume USE_MULTIARCH is defined and don't check it.
	(is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features).
	(is_amd): Likewise.
	(max_cpuid): Likewise.
	(intel_check_word): Likewise.
	(__cache_sysconf): Don't call __init_cpu_features.
	(__x86_preferred_memory_instruction): Removed.
	(init_cacheinfo): Don't call __init_cpu_features. Replace
	__cpu_features with GLRO(dl_x86_cpu_features).
	* sysdeps/x86_64/dl-machine.h: Include <cpu-features.c>.
	(dl_platform_init): Call init_cpu_features.
	* sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>.
	* sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch.
	* sysdeps/x86_64/multiarch/Versions: Removed.
	* sysdeps/x86_64/multiarch/cacheinfo.c: Likewise.
	* sysdeps/x86_64/multiarch/init-arch.c: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET):
	Removed.
	* sysdeps/x86_64/multiarch/init-arch.h: Rewrite.
2015-08-13 03:41:22 -07:00