x86, pthread_cond_*wait: Do not depend on %eax not being clobbered

The x86-specific versions of both pthread_cond_wait and
pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
paths) calls to __pthread_mutex_cond_lock_adjust followed by
__pthread_mutex_unlock_usercnt, which load the parameters before the
first call but then assume that the first parameter, in %eax, will
survive unaffected.  This happens to have been true before now, but %eax
is a call-clobbered register, and this assumption is not safe: it could
change at any time, at GCC's whim, and indeed the stack-protector canary
checking code clobbers %eax while checking that the canary is
uncorrupted.
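
A minimal sketch of the hazard: with -fstack-protector, the canary
check GCC emits on i386 for a function with no return value may use
%eax as scratch.  The frame offset and label below are illustrative,
not actual compiler output; the stack guard does live at %gs:20 on
this target:

	movl	-12(%ebp), %eax		/* load saved canary (offset illustrative) */
	xorl	%gs:20, %eax		/* compare with the stack guard in the TCB */
	jne	.Lstack_chk_fail	/* mismatch: canary corrupted */
	leave
	ret				/* %eax is 0 here on the intact path, not
					   whatever the caller loaded before the call */
.Lstack_chk_fail:
	call	__stack_chk_fail

If __pthread_mutex_cond_lock_adjust returns through an epilogue like
this, %eax comes back holding zero, not the dep_mutex pointer that was
loaded before the call.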

So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
unconditionally, even when stack-protection is not in use, because it's
the right thing to do, it's a slow path, and anything else is dicing
with death.)
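
For reference, the fixed retry-path sequence (the same instructions as
in the hunks below, annotated here; argument registers follow glibc's
internal regparm calling convention on i386):

	movl	dep_mutex(%ebx), %eax	/* first argument: the mutex */
	call	__pthread_mutex_cond_lock_adjust
	movl	dep_mutex(%ebx), %eax	/* reload: %eax is call-clobbered */
	xorl	%edx, %edx		/* second argument (decr) = 0 */
	call	__pthread_mutex_unlock_usercnt
	jmp	8b			/* back to local label 8, the retry path */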

	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
	call-clobbered %eax on retry path.
	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
Author:    Nick Alcock
Date:      2016-03-23 13:40:14 +01:00
Committer: Florian Weimer
Commit:    7a25d6a84d
Parent:    3c9a4cd16c
3 changed files with 8 additions and 0 deletions

ChangeLog

@@ -1,3 +1,9 @@
+2016-03-23  Nick Alcock  <nick.alcock@oracle.com>
+
+	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
+	call-clobbered %eax on retry path.
+	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
+
 2016-03-22  H.J. Lu  <hongjiu.lu@intel.com>
 
 	* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY):

sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S

@@ -297,6 +297,7 @@ __pthread_cond_timedwait:
 	   correctly.  */
 	movl	dep_mutex(%ebx), %eax
 	call	__pthread_mutex_cond_lock_adjust
+	movl	dep_mutex(%ebx), %eax
 	xorl	%edx, %edx
 	call	__pthread_mutex_unlock_usercnt
 	jmp	8b

sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S

@@ -309,6 +309,7 @@ __pthread_cond_wait:
 	   correctly.  */
 	movl	dep_mutex(%ebx), %eax
 	call	__pthread_mutex_cond_lock_adjust
+	movl	dep_mutex(%ebx), %eax
 	xorl	%edx, %edx
 	call	__pthread_mutex_unlock_usercnt
 	jmp	8b