Directory listing (Linux kernel, arch/x86/crypto). Where recorded, each file name is followed by the date and subject of the most recent commit touching that file.

ablk_helper.c
aes_glue.c
aes-i586-asm_32.S
aes-x86_64-asm_64.S
aesni-intel_asm.S
    2013-06-13 14:57:42 +08:00  crypto: aesni_intel - fix accessing of unaligned memory
aesni-intel_glue.c
    2013-04-25 21:01:53 +08:00  crypto: aesni_intel - add more optimized XTS mode for x86-64
blowfish_glue.c
    2013-06-21 14:44:28 +08:00  Revert "crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher"
blowfish-x86_64-asm_64.S
camellia_aesni_avx2_glue.c
    2013-04-25 21:09:07 +08:00  crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher
camellia_aesni_avx_glue.c
    2013-04-25 21:09:07 +08:00  crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher
camellia_glue.c
camellia-aesni-avx2-asm_64.S
    2013-06-21 14:44:23 +08:00  crypto: camellia-aesni-avx2 - tune assembly code for more performance
camellia-aesni-avx-asm_64.S
    2013-04-25 21:01:52 +08:00  crypto: x86/camellia-aesni-avx - add more optimized XTS code
camellia-x86_64-asm_64.S
cast5_avx_glue.c
cast5-avx-x86_64-asm_64.S
cast6_avx_glue.c
    2013-04-25 21:01:52 +08:00  crypto: cast6-avx: use new optimized XTS code
cast6-avx-x86_64-asm_64.S
    2013-04-25 21:01:52 +08:00  crypto: cast6-avx: use new optimized XTS code
crc32-pclmul_asm.S
    2013-05-30 16:36:23 -07:00  x86, crc32-pclmul: Fix build with older binutils
crc32-pclmul_glue.c
crc32c-intel_glue.c
crc32c-pcl-intel-asm_64.S
    2013-04-25 21:01:44 +08:00  crypto: crc32-pclmul - Use gas macro for pclmulqdq
fpu.c
ghash-clmulni-intel_asm.S
ghash-clmulni-intel_glue.c
glue_helper-asm-avx2.S
    2013-04-25 21:09:05 +08:00  crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher
glue_helper-asm-avx.S
    2013-04-25 21:01:51 +08:00  crypto: x86 - add more optimized XTS-mode for serpent-avx
glue_helper.c
    2013-04-25 21:01:51 +08:00  crypto: x86 - add more optimized XTS-mode for serpent-avx
Makefile
    2013-07-24 17:04:16 +10:00  Revert "crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework"
salsa20_glue.c
    2013-01-20 10:16:50 +11:00  crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_*
salsa20-i586-asm_32.S
    2013-01-20 10:16:50 +11:00  crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_*
salsa20-x86_64-asm_64.S
    2013-01-20 10:16:50 +11:00  crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_*
serpent_avx2_glue.c
    2013-04-25 21:09:07 +08:00  crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher
serpent_avx_glue.c
    2013-04-25 21:09:07 +08:00  crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher
serpent_sse2_glue.c
serpent-avx2-asm_64.S
    2013-04-25 21:09:07 +08:00  crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher
serpent-avx-x86_64-asm_64.S
    2013-04-25 21:01:51 +08:00  crypto: x86 - add more optimized XTS-mode for serpent-avx
serpent-sse2-i586-asm_32.S
    2013-01-20 10:16:50 +11:00  crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets
serpent-sse2-x86_64-asm_64.S
    2013-01-20 10:16:50 +11:00  crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets
sha1_ssse3_asm.S
    2013-01-20 10:16:51 +11:00  crypto: x86/sha1 - assembler clean-ups: use ENTRY/ENDPROC
sha1_ssse3_glue.c
sha256_ssse3_glue.c
    2013-05-28 15:43:05 +08:00  crypto: sha256_ssse3 - add sha224 support
sha256-avx2-asm.S
    2013-04-03 09:06:32 +08:00  crypto: sha256 - Optimized sha256 x86_64 routine using AVX2's RORX instructions
sha256-avx-asm.S
    2013-05-28 13:46:47 +08:00  crypto: sha256_ssse3 - fix stack corruption with SSSE3 and AVX implementations
sha256-ssse3-asm.S
    2013-05-28 13:46:47 +08:00  crypto: sha256_ssse3 - fix stack corruption with SSSE3 and AVX implementations
sha512_ssse3_glue.c
    2013-05-28 15:43:05 +08:00  crypto: sha512_ssse3 - add sha384 support
sha512-avx2-asm.S
    2013-04-25 21:00:58 +08:00  crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX2 RORX instruction.
sha512-avx-asm.S
    2013-04-25 21:00:58 +08:00  crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX instructions.
sha512-ssse3-asm.S
    2013-04-25 21:00:58 +08:00  crypto: sha512 - Optimized SHA512 x86_64 assembly routine using Supplemental SSE3 instructions.
twofish_avx_glue.c
    2013-06-21 14:44:29 +08:00  Revert "crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher"
twofish_glue_3way.c
twofish_glue.c
twofish-avx-x86_64-asm_64.S
    2013-04-25 21:01:51 +08:00  crypto: x86/twofish-avx - use optimized XTS code
twofish-i586-asm_32.S
    2013-01-20 10:16:51 +11:00  crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels
twofish-x86_64-asm_64-3way.S
    2013-01-20 10:16:51 +11:00  crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels
twofish-x86_64-asm_64.S
    2013-01-20 10:16:51 +11:00  crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels