93e79ed391
The current implementation of std::call_once uses pthread_once, which only
meets the C++ requirements when compiled with support for exceptions. For
most glibc targets and all non-glibc targets, pthread_once does not work
correctly if the init_routine exits via an exception. The pthread_once_t
object is left in the "active" state, and any later attempts to run another
init_routine will block forever.

This change makes std::call_once work correctly for Linux targets, by
replacing the use of pthread_once with a futex, based on the code from
__cxa_guard_acquire. For both glibc and musl, the Linux implementation of
pthread_once is already based on futexes, and pthread_once_t is just a
typedef for int, so this change does not alter the layout of std::once_flag.
By choosing the values for the int appropriately, the new code is even ABI
compatible. Code that calls the old implementation of std::call_once will use
pthread_once to manipulate the int, while new code will use the new
std::once_flag members to manipulate it, but they should interoperate
correctly. In both cases, the int is initially zero, has the lowest bit set
when there is an active execution, and equals 2 after a successful returning
execution. The difference with the new code is that exceptional executions
are correctly detected and the int is reset to zero.

The __cxa_guard_acquire code (and musl's pthread_once) use an additional
state to say there are other threads waiting. This allows the futex wake
syscall to be skipped if there is no contention. Glibc doesn't use a waiter
bit, so we have to unconditionally issue the wake in order to be compatible
with code calling the old std::call_once that uses glibc's pthread_once. If
we know that we're using musl (and musl's pthread_once doesn't change) it
would be possible to set a waiting state and check for it in
std::once_flag::_M_finish(bool), but this patch doesn't do that.

This doesn't fix the bug for non-Linux targets. A similar approach could be
used for targets where we know the definition of pthread_once_t is a mutex
and an integer. We could make once_flag::_M_activate() use pthread_mutex_lock
on the mutex member within the pthread_once_t, only set the integer if the
execution finishes, and then unlock the mutex. That would require careful
study of each target's pthread_once implementation, so that work is left for
a later date.

This also fixes PR 55394 because pthread_once is no longer needed, and
PR 84323 because the fast path is now just an atomic load.

As a consequence of the new implementation that doesn't use pthread_once, we
can also make std::call_once work for targets with no gthreads support. The
code for the single-threaded implementation follows the same approach as the
Linux one, but with no need for atomics or futexes.

libstdc++-v3/ChangeLog:

	PR libstdc++/55394
	PR libstdc++/66146
	PR libstdc++/84323
	* config/abi/pre/gnu.ver (GLIBCXX_3.4.29): Add new symbols.
	* include/std/mutex [!_GLIBCXX_HAS_GTHREADS] (once_flag): Define
	even when gthreads is not supported.
	(once_flag::_M_once) [_GLIBCXX_HAVE_LINUX_FUTEX]: Change type from
	__gthread_once_t to int.
	(once_flag::_M_passive(), once_flag::_M_activate())
	(once_flag::_M_finish(bool), once_flag::_Active_execution): Define
	new members for futex and non-threaded implementation.
	[_GLIBCXX_HAS_GTHREADS] (once_flag::_Prepare_execution): New RAII
	helper type.
	(call_once): Use new members of once_flag.
	* src/c++11/mutex.cc (std::once_flag::_M_activate): Define.
	(std::once_flag::_M_finish): Define.
	* testsuite/30_threads/call_once/39909.cc: Do not require gthreads.
	* testsuite/30_threads/call_once/49668.cc: Likewise.
	* testsuite/30_threads/call_once/60497.cc: Likewise.
	* testsuite/30_threads/call_once/call_once1.cc: Likewise.
	* testsuite/30_threads/call_once/dr2442.cc: Likewise.
	* testsuite/30_threads/call_once/once_flag.cc: Add test for
	constexpr constructor.
	* testsuite/30_threads/call_once/66146.cc: New test.
	* testsuite/30_threads/call_once/constexpr.cc: Removed.
	* testsuite/30_threads/once_flag/cons/constexpr.cc: Removed.
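The futex-based definitions of once_flag::_M_activate() and
once_flag::_M_finish(bool) live in src/c++11/mutex.cc and are not shown on
this page. The sketch below only illustrates the state machine described
above (0 = no execution yet, 1 = active execution in progress, 2 = returning
execution completed); the raw futex wrappers, the once_flag_sketch name and
the exact memory orders are assumptions made for this example, not the
committed code.

// Illustrative, Linux-only sketch -- not the code from src/c++11/mutex.cc.
#include <climits>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct once_flag_sketch            // hypothetical stand-in for std::once_flag
{
  int _M_once = 0;                 // same layout as pthread_once_t on Linux

  enum _Bits : int { _Init = 0, _Active = 1, _Done = 2 };

  // Assumed raw futex wrappers; the real library uses its own helpers.
  static void
  _S_wait(int* __addr, int __val)
  { syscall(SYS_futex, __addr, FUTEX_WAIT, __val, nullptr); }

  static void
  _S_wake(int* __addr)
  { syscall(SYS_futex, __addr, FUTEX_WAKE, INT_MAX); }

  // Fast path: true once a returning execution has completed.
  bool
  _M_passive() const noexcept
  { return __atomic_load_n(&_M_once, __ATOMIC_ACQUIRE) == _Done; }

  // Returns true if this thread should run the callable (active execution),
  // false if another thread already completed a returning execution.
  bool
  _M_activate()
  {
    int __expected = _Init;
    while (!__atomic_compare_exchange_n(&_M_once, &__expected, _Active,
                                        false, __ATOMIC_ACQ_REL,
                                        __ATOMIC_ACQUIRE))
      {
        if (__expected == _Done)
          return false;
        // Another thread holds the active execution: block until it finishes.
        _S_wait(&_M_once, __expected);
        __expected = _Init;
      }
    return true;
  }

  // Called when the active execution ends. A returning execution stores 2,
  // an exceptional execution resets to 0 so a later call can try again.
  void
  _M_finish(bool __returning) noexcept
  {
    __atomic_store_n(&_M_once, __returning ? _Done : _Init, __ATOMIC_RELEASE);
    // glibc's pthread_once keeps no "waiters" bit, so wake unconditionally
    // to interoperate with old code still driving this int via pthread_once.
    _S_wake(&_M_once);
  }
};

A minimal user-level example of the behaviour the rewrite guarantees, in the
spirit of the new 66146.cc test (written here as an illustration, not copied
from the testsuite): a callable that throws leaves the flag reusable, so a
later std::call_once runs instead of blocking forever.

#include <iostream>
#include <mutex>
#include <stdexcept>

int main()
{
  std::once_flag flag;
  int runs = 0;

  // First attempt: the callable throws, so this is an "exceptional
  // execution" and the flag must be left ready for another attempt.
  try
    {
      std::call_once(flag, [&] { ++runs; throw std::runtime_error("fail"); });
    }
  catch (const std::runtime_error&)
    {
    }

  // Second attempt: with the old pthread_once-based implementation this
  // could block forever; with the futex implementation it runs normally.
  std::call_once(flag, [&] { ++runs; });

  std::cout << "callable ran " << runs << " times\n"; // prints 2
}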
include/std/mutex | 882 lines | 24 KiB | C++
// <mutex> -*- C++ -*-

// Copyright (C) 2003-2020 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library.  This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.

// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.

// Under Section 7 of GPL version 3, you are granted additional
// permissions described in the GCC Runtime Library Exception, version
// 3.1, as published by the Free Software Foundation.

// You should have received a copy of the GNU General Public License and
// a copy of the GCC Runtime Library Exception along with this program;
// see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
// <http://www.gnu.org/licenses/>.

/** @file include/mutex
 *  This is a Standard C++ Library header.
 */

#ifndef _GLIBCXX_MUTEX
#define _GLIBCXX_MUTEX 1

#pragma GCC system_header

#if __cplusplus < 201103L
# include <bits/c++0x_warning.h>
#else

#include <tuple>
#include <chrono>
#include <exception>
#include <type_traits>
#include <system_error>
#include <bits/std_mutex.h>
#include <bits/unique_lock.h>
#if ! _GTHREAD_USE_MUTEX_TIMEDLOCK
# include <condition_variable>
# include <thread>
#endif
#include <ext/atomicity.h>     // __gnu_cxx::__is_single_threaded

#if defined _GLIBCXX_HAS_GTHREADS && ! defined _GLIBCXX_HAVE_TLS
# include <bits/std_function.h>  // std::function
#endif

namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION

  /**
   * @addtogroup mutexes
   * @{
   */

#ifdef _GLIBCXX_HAS_GTHREADS

  // Common base class for std::recursive_mutex and std::recursive_timed_mutex
  class __recursive_mutex_base
  {
  protected:
    typedef __gthread_recursive_mutex_t __native_type;

    __recursive_mutex_base(const __recursive_mutex_base&) = delete;
    __recursive_mutex_base& operator=(const __recursive_mutex_base&) = delete;

#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT
    __native_type _M_mutex = __GTHREAD_RECURSIVE_MUTEX_INIT;

    __recursive_mutex_base() = default;
#else
    __native_type _M_mutex;

    __recursive_mutex_base()
    {
      // XXX EAGAIN, ENOMEM, EPERM, EBUSY(may), EINVAL(may)
      __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION(&_M_mutex);
    }

    ~__recursive_mutex_base()
    { __gthread_recursive_mutex_destroy(&_M_mutex); }
#endif
  };

  /// The standard recursive mutex type.
  class recursive_mutex : private __recursive_mutex_base
  {
  public:
    typedef __native_type* native_handle_type;

    recursive_mutex() = default;
    ~recursive_mutex() = default;

    recursive_mutex(const recursive_mutex&) = delete;
    recursive_mutex& operator=(const recursive_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_recursive_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_recursive_mutex_trylock(&_M_mutex);
    }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_recursive_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }
  };

#if _GTHREAD_USE_MUTEX_TIMEDLOCK
  template<typename _Derived>
    class __timed_mutex_impl
    {
    protected:
      template<typename _Rep, typename _Period>
        bool
        _M_try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
        {
#if _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
          using __clock = chrono::steady_clock;
#else
          using __clock = chrono::system_clock;
#endif

          auto __rt = chrono::duration_cast<__clock::duration>(__rtime);
          if (ratio_greater<__clock::period, _Period>())
            ++__rt;
          return _M_try_lock_until(__clock::now() + __rt);
        }

      template<typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<chrono::system_clock,
                                                   _Duration>& __atime)
        {
          auto __s = chrono::time_point_cast<chrono::seconds>(__atime);
          auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);

          __gthread_time_t __ts = {
            static_cast<std::time_t>(__s.time_since_epoch().count()),
            static_cast<long>(__ns.count())
          };

          return static_cast<_Derived*>(this)->_M_timedlock(__ts);
        }

#ifdef _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
      template<typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<chrono::steady_clock,
                                                   _Duration>& __atime)
        {
          auto __s = chrono::time_point_cast<chrono::seconds>(__atime);
          auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);

          __gthread_time_t __ts = {
            static_cast<std::time_t>(__s.time_since_epoch().count()),
            static_cast<long>(__ns.count())
          };

          return static_cast<_Derived*>(this)->_M_clocklock(CLOCK_MONOTONIC,
                                                            __ts);
        }
#endif

      template<typename _Clock, typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
        {
#if __cplusplus > 201703L
          static_assert(chrono::is_clock_v<_Clock>);
#endif
          // The user-supplied clock may not tick at the same rate as
          // steady_clock, so we must loop in order to guarantee that
          // the timeout has expired before returning false.
          auto __now = _Clock::now();
          do {
            auto __rtime = __atime - __now;
            if (_M_try_lock_for(__rtime))
              return true;
            __now = _Clock::now();
          } while (__atime > __now);
          return false;
        }
    };

  /// The standard timed mutex type.
  class timed_mutex
  : private __mutex_base, public __timed_mutex_impl<timed_mutex>
  {
  public:
    typedef __native_type* native_handle_type;

    timed_mutex() = default;
    ~timed_mutex() = default;

    timed_mutex(const timed_mutex&) = delete;
    timed_mutex& operator=(const timed_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_mutex_trylock(&_M_mutex);
    }

    template <class _Rep, class _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      { return _M_try_lock_for(__rtime); }

    template <class _Clock, class _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      { return _M_try_lock_until(__atime); }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }

  private:
    friend class __timed_mutex_impl<timed_mutex>;

    bool
    _M_timedlock(const __gthread_time_t& __ts)
    { return !__gthread_mutex_timedlock(&_M_mutex, &__ts); }

#if _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
    bool
    _M_clocklock(clockid_t clockid, const __gthread_time_t& __ts)
    { return !pthread_mutex_clocklock(&_M_mutex, clockid, &__ts); }
#endif
  };

  /// recursive_timed_mutex
  class recursive_timed_mutex
  : private __recursive_mutex_base,
    public __timed_mutex_impl<recursive_timed_mutex>
  {
  public:
    typedef __native_type* native_handle_type;

    recursive_timed_mutex() = default;
    ~recursive_timed_mutex() = default;

    recursive_timed_mutex(const recursive_timed_mutex&) = delete;
    recursive_timed_mutex& operator=(const recursive_timed_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_recursive_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_recursive_mutex_trylock(&_M_mutex);
    }

    template <class _Rep, class _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      { return _M_try_lock_for(__rtime); }

    template <class _Clock, class _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      { return _M_try_lock_until(__atime); }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_recursive_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }

  private:
    friend class __timed_mutex_impl<recursive_timed_mutex>;

    bool
    _M_timedlock(const __gthread_time_t& __ts)
    { return !__gthread_recursive_mutex_timedlock(&_M_mutex, &__ts); }

#ifdef _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
    bool
    _M_clocklock(clockid_t clockid, const __gthread_time_t& __ts)
    { return !pthread_mutex_clocklock(&_M_mutex, clockid, &__ts); }
#endif
  };

#else // !_GTHREAD_USE_MUTEX_TIMEDLOCK

  /// timed_mutex
  class timed_mutex
  {
    mutex _M_mut;
    condition_variable _M_cv;
    bool _M_locked = false;

  public:

    timed_mutex() = default;
    ~timed_mutex() { __glibcxx_assert( !_M_locked ); }

    timed_mutex(const timed_mutex&) = delete;
    timed_mutex& operator=(const timed_mutex&) = delete;

    void
    lock()
    {
      unique_lock<mutex> __lk(_M_mut);
      _M_cv.wait(__lk, [&]{ return !_M_locked; });
      _M_locked = true;
    }

    bool
    try_lock()
    {
      lock_guard<mutex> __lk(_M_mut);
      if (_M_locked)
        return false;
      _M_locked = true;
      return true;
    }

    template<typename _Rep, typename _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      {
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_for(__lk, __rtime, [&]{ return !_M_locked; }))
          return false;
        _M_locked = true;
        return true;
      }

    template<typename _Clock, typename _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      {
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_until(__lk, __atime, [&]{ return !_M_locked; }))
          return false;
        _M_locked = true;
        return true;
      }

    void
    unlock()
    {
      lock_guard<mutex> __lk(_M_mut);
      __glibcxx_assert( _M_locked );
      _M_locked = false;
      _M_cv.notify_one();
    }
  };

  /// recursive_timed_mutex
  class recursive_timed_mutex
  {
    mutex _M_mut;
    condition_variable _M_cv;
    thread::id _M_owner;
    unsigned _M_count = 0;

    // Predicate type that tests whether the current thread can lock a mutex.
    struct _Can_lock
    {
      // Returns true if the mutex is unlocked or is locked by _M_caller.
      bool
      operator()() const noexcept
      { return _M_mx->_M_count == 0 || _M_mx->_M_owner == _M_caller; }

      const recursive_timed_mutex* _M_mx;
      thread::id _M_caller;
    };

  public:

    recursive_timed_mutex() = default;
    ~recursive_timed_mutex() { __glibcxx_assert( _M_count == 0 ); }

    recursive_timed_mutex(const recursive_timed_mutex&) = delete;
    recursive_timed_mutex& operator=(const recursive_timed_mutex&) = delete;

    void
    lock()
    {
      auto __id = this_thread::get_id();
      _Can_lock __can_lock{this, __id};
      unique_lock<mutex> __lk(_M_mut);
      _M_cv.wait(__lk, __can_lock);
      if (_M_count == -1u)
        __throw_system_error(EAGAIN); // [thread.timedmutex.recursive]/3
      _M_owner = __id;
      ++_M_count;
    }

    bool
    try_lock()
    {
      auto __id = this_thread::get_id();
      _Can_lock __can_lock{this, __id};
      lock_guard<mutex> __lk(_M_mut);
      if (!__can_lock())
        return false;
      if (_M_count == -1u)
        return false;
      _M_owner = __id;
      ++_M_count;
      return true;
    }

    template<typename _Rep, typename _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      {
        auto __id = this_thread::get_id();
        _Can_lock __can_lock{this, __id};
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_for(__lk, __rtime, __can_lock))
          return false;
        if (_M_count == -1u)
          return false;
        _M_owner = __id;
        ++_M_count;
        return true;
      }

    template<typename _Clock, typename _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      {
        auto __id = this_thread::get_id();
        _Can_lock __can_lock{this, __id};
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_until(__lk, __atime, __can_lock))
          return false;
        if (_M_count == -1u)
          return false;
        _M_owner = __id;
        ++_M_count;
        return true;
      }

    void
    unlock()
    {
      lock_guard<mutex> __lk(_M_mut);
      __glibcxx_assert( _M_owner == this_thread::get_id() );
      __glibcxx_assert( _M_count > 0 );
      if (--_M_count == 0)
        {
          _M_owner = {};
          _M_cv.notify_one();
        }
    }
  };

#endif
#endif // _GLIBCXX_HAS_GTHREADS

  /// @cond undocumented
  template<typename _Lock>
    inline unique_lock<_Lock>
    __try_to_lock(_Lock& __l)
    { return unique_lock<_Lock>{__l, try_to_lock}; }

  template<int _Idx, bool _Continue = true>
    struct __try_lock_impl
    {
      template<typename... _Lock>
        static void
        __do_try_lock(tuple<_Lock&...>& __locks, int& __idx)
        {
          __idx = _Idx;
          auto __lock = std::__try_to_lock(std::get<_Idx>(__locks));
          if (__lock.owns_lock())
            {
              constexpr bool __cont = _Idx + 2 < sizeof...(_Lock);
              using __try_locker = __try_lock_impl<_Idx + 1, __cont>;
              __try_locker::__do_try_lock(__locks, __idx);
              if (__idx == -1)
                __lock.release();
            }
        }
    };

  template<int _Idx>
    struct __try_lock_impl<_Idx, false>
    {
      template<typename... _Lock>
        static void
        __do_try_lock(tuple<_Lock&...>& __locks, int& __idx)
        {
          __idx = _Idx;
          auto __lock = std::__try_to_lock(std::get<_Idx>(__locks));
          if (__lock.owns_lock())
            {
              __idx = -1;
              __lock.release();
            }
        }
    };
  /// @endcond

  /** @brief Generic try_lock.
   *  @param __l1 Meets Lockable requirements (try_lock() may throw).
   *  @param __l2 Meets Lockable requirements (try_lock() may throw).
   *  @param __l3 Meets Lockable requirements (try_lock() may throw).
   *  @return Returns -1 if all try_lock() calls return true. Otherwise returns
   *          a 0-based index corresponding to the argument that returned false.
   *  @post Either all arguments are locked, or none will be.
   *
   *  Sequentially calls try_lock() on each argument.
   */
  template<typename _Lock1, typename _Lock2, typename... _Lock3>
    int
    try_lock(_Lock1& __l1, _Lock2& __l2, _Lock3&... __l3)
    {
      int __idx;
      auto __locks = std::tie(__l1, __l2, __l3...);
      __try_lock_impl<0>::__do_try_lock(__locks, __idx);
      return __idx;
    }

  /** @brief Generic lock.
   *  @param __l1 Meets Lockable requirements (try_lock() may throw).
   *  @param __l2 Meets Lockable requirements (try_lock() may throw).
   *  @param __l3 Meets Lockable requirements (try_lock() may throw).
   *  @throw An exception thrown by an argument's lock() or try_lock() member.
   *  @post All arguments are locked.
   *
   *  All arguments are locked via a sequence of calls to lock(), try_lock()
   *  and unlock().  If the call exits via an exception any locks that were
   *  obtained will be released.
   */
  template<typename _L1, typename _L2, typename... _L3>
    void
    lock(_L1& __l1, _L2& __l2, _L3&... __l3)
    {
      while (true)
        {
          using __try_locker = __try_lock_impl<0, sizeof...(_L3) != 0>;
          unique_lock<_L1> __first(__l1);
          int __idx;
          auto __locks = std::tie(__l2, __l3...);
          __try_locker::__do_try_lock(__locks, __idx);
          if (__idx == -1)
            {
              __first.release();
              return;
            }
        }
    }

#if __cplusplus >= 201703L
#define __cpp_lib_scoped_lock 201703
  /** @brief A scoped lock type for multiple lockable objects.
   *
   * A scoped_lock controls mutex ownership within a scope, releasing
   * ownership in the destructor.
   */
  template<typename... _MutexTypes>
    class scoped_lock
    {
    public:
      explicit scoped_lock(_MutexTypes&... __m) : _M_devices(std::tie(__m...))
      { std::lock(__m...); }

      explicit scoped_lock(adopt_lock_t, _MutexTypes&... __m) noexcept
      : _M_devices(std::tie(__m...))
      { } // calling thread owns mutex

      ~scoped_lock()
      { std::apply([](auto&... __m) { (__m.unlock(), ...); }, _M_devices); }

      scoped_lock(const scoped_lock&) = delete;
      scoped_lock& operator=(const scoped_lock&) = delete;

    private:
      tuple<_MutexTypes&...> _M_devices;
    };

  template<>
    class scoped_lock<>
    {
    public:
      explicit scoped_lock() = default;
      explicit scoped_lock(adopt_lock_t) noexcept { }
      ~scoped_lock() = default;

      scoped_lock(const scoped_lock&) = delete;
      scoped_lock& operator=(const scoped_lock&) = delete;
    };

  template<typename _Mutex>
    class scoped_lock<_Mutex>
    {
    public:
      using mutex_type = _Mutex;

      explicit scoped_lock(mutex_type& __m) : _M_device(__m)
      { _M_device.lock(); }

      explicit scoped_lock(adopt_lock_t, mutex_type& __m) noexcept
      : _M_device(__m)
      { } // calling thread owns mutex

      ~scoped_lock()
      { _M_device.unlock(); }

      scoped_lock(const scoped_lock&) = delete;
      scoped_lock& operator=(const scoped_lock&) = delete;

    private:
      mutex_type& _M_device;
    };
#endif // C++17

  /// Flag type used by std::call_once
  struct once_flag
  {
    constexpr once_flag() noexcept = default;

    /// Deleted copy constructor
    once_flag(const once_flag&) = delete;
    /// Deleted assignment operator
    once_flag& operator=(const once_flag&) = delete;

  private:
    // There are two different std::once_flag interfaces, abstracting four
    // different implementations.
    // The preferred interface uses the _M_activate() and _M_finish(bool)
    // member functions (introduced in GCC 11), which start and finish an
    // active execution respectively. See [thread.once.callonce] in C++11
    // for the definition of active/passive/returning/exceptional executions.
    // This interface is supported for Linux (using atomics and futexes) and
    // for single-threaded targets with no gthreads support.
    // For other targets a pthread_once_t is used with pthread_once, but that
    // doesn't work correctly for exceptional executions. That interface
    // uses an object of type _Prepare_execution and a lambda expression.
#if defined _GLIBCXX_HAVE_LINUX_FUTEX || ! defined _GLIBCXX_HAS_GTHREADS
    enum _Bits : int { _Init = 0, _Active = 1, _Done = 2 };

    int _M_once = _Bits::_Init;

    // Non-blocking check to see if all executions will be passive now.
    bool
    _M_passive() const noexcept;

    // Attempts to begin an active execution. Blocks until it either:
    // - returns true if an active execution has started on this thread, or
    // - returns false if a returning execution happens on another thread.
    bool _M_activate();

    // Must be called to complete an active execution.
    void _M_finish(bool __returning) noexcept;

    // RAII helper to call _M_finish.
    struct _Active_execution
    {
      explicit _Active_execution(once_flag& __flag) : _M_flag(__flag) { }

      ~_Active_execution() { _M_flag._M_finish(_M_returning); }

      _Active_execution(const _Active_execution&) = delete;
      _Active_execution& operator=(const _Active_execution&) = delete;

      once_flag& _M_flag;
      bool _M_returning = false;
    };
#else
    __gthread_once_t _M_once = __GTHREAD_ONCE_INIT;

    struct _Prepare_execution;
#endif // ! GTHREADS

    template<typename _Callable, typename... _Args>
      friend void
      call_once(once_flag& __once, _Callable&& __f, _Args&&... __args);
  };

#if ! defined _GLIBCXX_HAS_GTHREADS
  // Inline definitions of std::once_flag members for single-threaded targets.

  inline bool
  once_flag::_M_passive() const noexcept
  { return _M_once == _Bits::_Done; }

  inline bool
  once_flag::_M_activate()
  {
    if (_M_once == _Bits::_Init)
      {
        _M_once = _Bits::_Active;
        return true;
      }
    else if (_M_passive())
      return false; // A returning execution already happened, nothing to do.
    else
      __throw_system_error(EDEADLK);
  }

  inline void
  once_flag::_M_finish(bool __returning) noexcept
  { _M_once = __returning ? _Bits::_Done : _Bits::_Init; }

#elif defined _GLIBCXX_HAVE_LINUX_FUTEX

  // Define this inline to make passive executions fast.
  inline bool
  once_flag::_M_passive() const noexcept
  {
    if (__gnu_cxx::__is_single_threaded())
      return _M_once == _Bits::_Done;
    else
      return __atomic_load_n(&_M_once, __ATOMIC_ACQUIRE) == _Bits::_Done;
  }

#else // GTHREADS && ! FUTEX

  /// @cond undocumented
# ifdef _GLIBCXX_HAVE_TLS
  // If TLS is available use thread-local state for the type-erased callable
  // that is being run by std::call_once in the current thread.
  extern __thread void* __once_callable;
  extern __thread void (*__once_call)();
# else
  // Without TLS use a global std::mutex and store the callable in a
  // global std::function.
  extern function<void()> __once_functor;

  extern void
  __set_once_functor_lock_ptr(unique_lock<mutex>*);

  extern mutex&
  __get_once_mutex();
# endif

  // This function is passed to pthread_once by std::call_once.
  // It runs __once_call() or __once_functor().
  extern "C" void __once_proxy(void);

  // RAII type to set up state for pthread_once call.
  struct once_flag::_Prepare_execution
  {
#ifdef _GLIBCXX_HAVE_TLS
    template<typename _Callable>
      explicit
      _Prepare_execution(_Callable& __c)
      {
        // Store address in thread-local pointer:
        __once_callable = std::__addressof(__c);
        // Trampoline function to invoke the closure via thread-local pointer:
        __once_call = [] { (*static_cast<_Callable*>(__once_callable))(); };
      }

      ~_Prepare_execution()
      {
        // PR libstdc++/82481
        __once_callable = nullptr;
        __once_call = nullptr;
      }
#else // ! TLS
    template<typename _Callable>
      explicit
      _Prepare_execution(_Callable& __c)
      {
        // Store the callable in the global std::function
        __once_functor = __c;
        __set_once_functor_lock_ptr(&_M_functor_lock);
      }

      ~_Prepare_execution()
      {
        if (_M_functor_lock)
          __set_once_functor_lock_ptr(nullptr);
      }

  private:
    unique_lock<mutex> _M_functor_lock{__get_once_mutex()};
#endif // ! TLS

    _Prepare_execution(const _Prepare_execution&) = delete;
    _Prepare_execution& operator=(const _Prepare_execution&) = delete;
  };
  /// @endcond
#endif

  /// Invoke a callable and synchronize with other calls using the same flag
  template<typename _Callable, typename... _Args>
    void
    call_once(once_flag& __once, _Callable&& __f, _Args&&... __args)
    {
#if defined _GLIBCXX_HAVE_LINUX_FUTEX || ! defined _GLIBCXX_HAS_GTHREADS
      if (__once._M_passive())
        return;
      else if (__once._M_activate())
        {
          once_flag::_Active_execution __exec(__once);

          // _GLIBCXX_RESOLVE_LIB_DEFECTS
          // 2442. call_once() shouldn't DECAY_COPY()
          std::__invoke(std::forward<_Callable>(__f),
                        std::forward<_Args>(__args)...);

          // __f(__args...) did not throw
          __exec._M_returning = true;
        }
#else
      // Closure type that runs the function
      auto __callable = [&] {
        std::__invoke(std::forward<_Callable>(__f),
                      std::forward<_Args>(__args)...);
      };

      once_flag::_Prepare_execution __exec(__callable);

      // XXX pthread_once does not reset the flag if an exception is thrown.
      if (int __e = __gthread_once(&__once._M_once, &__once_proxy))
        __throw_system_error(__e);
#endif
    }

  // @} group mutexes
_GLIBCXX_END_NAMESPACE_VERSION
} // namespace

#endif // C++11

#endif // _GLIBCXX_MUTEX