// <mutex> -*- C++ -*-

// Copyright (C) 2003-2022 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library.  This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.

// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.

// Under Section 7 of GPL version 3, you are granted additional
// permissions described in the GCC Runtime Library Exception, version
// 3.1, as published by the Free Software Foundation.

// You should have received a copy of the GNU General Public License and
// a copy of the GCC Runtime Library Exception along with this program;
// see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
// <http://www.gnu.org/licenses/>.

/** @file include/mutex
 *  This is a Standard C++ Library header.
 */

#ifndef _GLIBCXX_MUTEX
#define _GLIBCXX_MUTEX 1

#pragma GCC system_header

#if __cplusplus < 201103L
# include <bits/c++0x_warning.h>
#else

#include <tuple>
#include <exception>
#include <type_traits>
#include <system_error>
#include <bits/chrono.h>
#include <bits/std_mutex.h>
#include <bits/unique_lock.h>
#if ! _GTHREAD_USE_MUTEX_TIMEDLOCK
# include <condition_variable>
# include <thread>
#endif
#include <ext/atomicity.h>     // __gnu_cxx::__is_single_threaded

#if defined _GLIBCXX_HAS_GTHREADS && ! defined _GLIBCXX_HAVE_TLS
# include <bits/std_function.h>  // std::function
#endif

namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION

  /**
   * @addtogroup mutexes
   * @{
   */

#ifdef _GLIBCXX_HAS_GTHREADS

  // Common base class for std::recursive_mutex and std::recursive_timed_mutex
  class __recursive_mutex_base
  {
  protected:
    typedef __gthread_recursive_mutex_t	__native_type;

    __recursive_mutex_base(const __recursive_mutex_base&) = delete;
    __recursive_mutex_base& operator=(const __recursive_mutex_base&) = delete;

#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT
    __native_type  _M_mutex = __GTHREAD_RECURSIVE_MUTEX_INIT;

    __recursive_mutex_base() = default;
#else
    __native_type  _M_mutex;

    __recursive_mutex_base()
    {
      // XXX EAGAIN, ENOMEM, EPERM, EBUSY(may), EINVAL(may)
      __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION(&_M_mutex);
    }

    ~__recursive_mutex_base()
    { __gthread_recursive_mutex_destroy(&_M_mutex); }
#endif
  };

  /// The standard recursive mutex type.
  class recursive_mutex : private __recursive_mutex_base
  {
  public:
    typedef __native_type*		native_handle_type;

    recursive_mutex() = default;
    ~recursive_mutex() = default;

    recursive_mutex(const recursive_mutex&) = delete;
    recursive_mutex& operator=(const recursive_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_recursive_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_recursive_mutex_trylock(&_M_mutex);
    }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_recursive_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }
  };
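
  // Illustrative usage sketch (added commentary, not part of the library
  // source): a recursive_mutex may be re-locked by the thread that already
  // owns it, so a locking function can safely call itself or another
  // function that takes the same lock.  Hypothetical user code:
  //
  //   std::recursive_mutex m;
  //
  //   void visit(node* n)               // 'node' is a made-up user type
  //   {
  //     std::lock_guard<std::recursive_mutex> l(m); // re-locks on recursion
  //     if (n->left)
  //       visit(n->left);
  //   }
  //
  // Every successful lock() must still be balanced by a matching unlock(),
  // which the nested lock_guard objects provide here.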

#if _GTHREAD_USE_MUTEX_TIMEDLOCK
  template<typename _Derived>
    class __timed_mutex_impl
    {
    protected:
      template<typename _Rep, typename _Period>
        bool
        _M_try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
        {
#if _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
          using __clock = chrono::steady_clock;
#else
          using __clock = chrono::system_clock;
#endif

          auto __rt = chrono::duration_cast<__clock::duration>(__rtime);
          if (ratio_greater<__clock::period, _Period>())
            ++__rt;
          return _M_try_lock_until(__clock::now() + __rt);
        }

      template<typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<chrono::system_clock,
                                                   _Duration>& __atime)
        {
          auto __s = chrono::time_point_cast<chrono::seconds>(__atime);
          auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);

          __gthread_time_t __ts = {
            static_cast<std::time_t>(__s.time_since_epoch().count()),
            static_cast<long>(__ns.count())
          };

          return static_cast<_Derived*>(this)->_M_timedlock(__ts);
        }

#ifdef _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
      template<typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<chrono::steady_clock,
                                                   _Duration>& __atime)
        {
          auto __s = chrono::time_point_cast<chrono::seconds>(__atime);
          auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);

          __gthread_time_t __ts = {
            static_cast<std::time_t>(__s.time_since_epoch().count()),
            static_cast<long>(__ns.count())
          };

          return static_cast<_Derived*>(this)->_M_clocklock(CLOCK_MONOTONIC,
                                                            __ts);
        }
#endif

      template<typename _Clock, typename _Duration>
        bool
        _M_try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
        {
#if __cplusplus > 201703L
          static_assert(chrono::is_clock_v<_Clock>);
#endif
          // The user-supplied clock may not tick at the same rate as
          // steady_clock, so we must loop in order to guarantee that
          // the timeout has expired before returning false.
          auto __now = _Clock::now();
          do {
            auto __rtime = __atime - __now;
            if (_M_try_lock_for(__rtime))
              return true;
            __now = _Clock::now();
          } while (__atime > __now);
          return false;
        }
    };
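
  // Note (added commentary, not library documentation): __timed_mutex_impl
  // is a CRTP base, so a derived mutex only has to provide _M_timedlock
  // (and, where supported, _M_clocklock); timed_mutex below forwards those
  // to __gthread_mutex_timedlock / pthread_mutex_clocklock.  The generic
  // _M_try_lock_until overload above converts a deadline on a user-supplied
  // clock into repeated relative waits, so it only reports failure once
  // _Clock::now() really has passed the requested time_point.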

  /// The standard timed mutex type.
  class timed_mutex
  : private __mutex_base, public __timed_mutex_impl<timed_mutex>
  {
  public:
    typedef __native_type*		native_handle_type;

    timed_mutex() = default;
    ~timed_mutex() = default;

    timed_mutex(const timed_mutex&) = delete;
    timed_mutex& operator=(const timed_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_mutex_trylock(&_M_mutex);
    }

    template <class _Rep, class _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      { return _M_try_lock_for(__rtime); }

    template <class _Clock, class _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      { return _M_try_lock_until(__atime); }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }

  private:
    friend class __timed_mutex_impl<timed_mutex>;

    bool
    _M_timedlock(const __gthread_time_t& __ts)
    { return !__gthread_mutex_timedlock(&_M_mutex, &__ts); }

#if _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
    bool
    _M_clocklock(clockid_t clockid, const __gthread_time_t& __ts)
    { return !pthread_mutex_clocklock(&_M_mutex, clockid, &__ts); }
#endif
  };
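
  // Illustrative usage sketch (added commentary, not part of the library
  // source): the timed locking functions report timeout by returning false
  // rather than throwing.  Hypothetical user code:
  //
  //   std::timed_mutex m;
  //   if (m.try_lock_for(std::chrono::milliseconds(100)))
  //     {
  //       // ... critical section ...
  //       m.unlock();
  //     }
  //   else
  //     {
  //       // could not acquire the lock within 100 milliseconds
  //     }
  //
  // A std::unique_lock<std::timed_mutex> constructed with std::defer_lock
  // and then lk.try_lock_for(...) gives the same behaviour with automatic
  // release.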

  /// recursive_timed_mutex
  class recursive_timed_mutex
  : private __recursive_mutex_base,
    public __timed_mutex_impl<recursive_timed_mutex>
  {
  public:
    typedef __native_type*		native_handle_type;

    recursive_timed_mutex() = default;
    ~recursive_timed_mutex() = default;

    recursive_timed_mutex(const recursive_timed_mutex&) = delete;
    recursive_timed_mutex& operator=(const recursive_timed_mutex&) = delete;

    void
    lock()
    {
      int __e = __gthread_recursive_mutex_lock(&_M_mutex);

      // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
      if (__e)
        __throw_system_error(__e);
    }

    bool
    try_lock() noexcept
    {
      // XXX EINVAL, EAGAIN, EBUSY
      return !__gthread_recursive_mutex_trylock(&_M_mutex);
    }

    template <class _Rep, class _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      { return _M_try_lock_for(__rtime); }

    template <class _Clock, class _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      { return _M_try_lock_until(__atime); }

    void
    unlock()
    {
      // XXX EINVAL, EAGAIN, EBUSY
      __gthread_recursive_mutex_unlock(&_M_mutex);
    }

    native_handle_type
    native_handle() noexcept
    { return &_M_mutex; }

  private:
    friend class __timed_mutex_impl<recursive_timed_mutex>;

    bool
    _M_timedlock(const __gthread_time_t& __ts)
    { return !__gthread_recursive_mutex_timedlock(&_M_mutex, &__ts); }

#ifdef _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
    bool
    _M_clocklock(clockid_t clockid, const __gthread_time_t& __ts)
    { return !pthread_mutex_clocklock(&_M_mutex, clockid, &__ts); }
#endif
  };
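
  // Illustrative usage sketch (added commentary, not part of the library
  // source): recursive_timed_mutex combines recursive ownership with
  // deadlines.  Hypothetical user code:
  //
  //   std::recursive_timed_mutex m;
  //   auto deadline = std::chrono::steady_clock::now()
  //                     + std::chrono::seconds(2);
  //   if (m.try_lock_until(deadline))
  //     {
  //       m.lock();     // same thread: acquires again, ownership count is 2
  //       m.unlock();
  //       m.unlock();   // released only by this matching final unlock
  //     }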

#else // !_GTHREAD_USE_MUTEX_TIMEDLOCK

  /// timed_mutex
  class timed_mutex
  {
    mutex		_M_mut;
    condition_variable	_M_cv;
    bool		_M_locked = false;

  public:

    timed_mutex() = default;
    ~timed_mutex() { __glibcxx_assert( !_M_locked ); }

    timed_mutex(const timed_mutex&) = delete;
    timed_mutex& operator=(const timed_mutex&) = delete;

    void
    lock()
    {
      unique_lock<mutex> __lk(_M_mut);
      _M_cv.wait(__lk, [&]{ return !_M_locked; });
      _M_locked = true;
    }

    bool
    try_lock()
    {
      lock_guard<mutex> __lk(_M_mut);
      if (_M_locked)
        return false;
      _M_locked = true;
      return true;
    }

    template<typename _Rep, typename _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      {
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_for(__lk, __rtime, [&]{ return !_M_locked; }))
          return false;
        _M_locked = true;
        return true;
      }

    template<typename _Clock, typename _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      {
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_until(__lk, __atime, [&]{ return !_M_locked; }))
          return false;
        _M_locked = true;
        return true;
      }

    void
    unlock()
    {
      lock_guard<mutex> __lk(_M_mut);
      __glibcxx_assert( _M_locked );
      _M_locked = false;
      _M_cv.notify_one();
    }
  };
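
  // Note (added commentary, not library documentation): this fallback
  // emulates a timed mutex with a plain mutex, a condition_variable and a
  // bool, using the standard predicate-wait pattern, roughly:
  //
  //   std::unique_lock<std::mutex> lk(mut);
  //   if (!cv.wait_for(lk, rel_time, [&]{ return !locked; }))
  //     return false;            // predicate still false when time ran out
  //   locked = true;
  //   return true;
  //
  // wait_for and wait_until return the predicate's value on timeout, which
  // is why try_lock_for and try_lock_until above can test the result
  // directly.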

  /// recursive_timed_mutex
  class recursive_timed_mutex
  {
    mutex		_M_mut;
    condition_variable	_M_cv;
    thread::id		_M_owner;
    unsigned		_M_count = 0;

    // Predicate type that tests whether the current thread can lock a mutex.
    struct _Can_lock
    {
      // Returns true if the mutex is unlocked or is locked by _M_caller.
      bool
      operator()() const noexcept
      { return _M_mx->_M_count == 0 || _M_mx->_M_owner == _M_caller; }

      const recursive_timed_mutex* _M_mx;
      thread::id _M_caller;
    };

  public:

    recursive_timed_mutex() = default;
    ~recursive_timed_mutex() { __glibcxx_assert( _M_count == 0 ); }

    recursive_timed_mutex(const recursive_timed_mutex&) = delete;
    recursive_timed_mutex& operator=(const recursive_timed_mutex&) = delete;

    void
    lock()
    {
      auto __id = this_thread::get_id();
      _Can_lock __can_lock{this, __id};
      unique_lock<mutex> __lk(_M_mut);
      _M_cv.wait(__lk, __can_lock);
      if (_M_count == -1u)
        __throw_system_error(EAGAIN); // [thread.timedmutex.recursive]/3
      _M_owner = __id;
      ++_M_count;
    }

    bool
    try_lock()
    {
      auto __id = this_thread::get_id();
      _Can_lock __can_lock{this, __id};
      lock_guard<mutex> __lk(_M_mut);
      if (!__can_lock())
        return false;
      if (_M_count == -1u)
        return false;
      _M_owner = __id;
      ++_M_count;
      return true;
    }

    template<typename _Rep, typename _Period>
      bool
      try_lock_for(const chrono::duration<_Rep, _Period>& __rtime)
      {
        auto __id = this_thread::get_id();
        _Can_lock __can_lock{this, __id};
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_for(__lk, __rtime, __can_lock))
          return false;
        if (_M_count == -1u)
          return false;
        _M_owner = __id;
        ++_M_count;
        return true;
      }

    template<typename _Clock, typename _Duration>
      bool
      try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime)
      {
        auto __id = this_thread::get_id();
        _Can_lock __can_lock{this, __id};
        unique_lock<mutex> __lk(_M_mut);
        if (!_M_cv.wait_until(__lk, __atime, __can_lock))
          return false;
        if (_M_count == -1u)
          return false;
        _M_owner = __id;
        ++_M_count;
        return true;
      }

    void
    unlock()
    {
      lock_guard<mutex> __lk(_M_mut);
      __glibcxx_assert( _M_owner == this_thread::get_id() );
      __glibcxx_assert( _M_count > 0 );
      if (--_M_count == 0)
        {
          _M_owner = {};
          _M_cv.notify_one();
        }
    }
  };

#endif
#endif // _GLIBCXX_HAS_GTHREADS
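
  // Illustrative usage sketch (added commentary, not part of the library
  // source): the __detail helpers below are used to implement std::try_lock
  // and std::lock.  Typical use of the public API (hypothetical user code):
  //
  //   std::mutex a, b;
  //   std::lock(a, b);                       // locks both without deadlock
  //   std::lock_guard<std::mutex> la(a, std::adopt_lock);
  //   std::lock_guard<std::mutex> lb(b, std::adopt_lock);
  //
  // std::try_lock(a, b) instead returns -1 when every lockable was locked,
  // or the zero-based index of the first one that could not be locked.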

  /// @cond undocumented
  namespace __detail
  {
    // Lock the last lockable, after all previous ones are locked.
    template<typename _Lockable>
      inline int
      __try_lock_impl(_Lockable& __l)
      {
        if (unique_lock<_Lockable> __lock{__l, try_to_lock})
          {
            __lock.release();
            return -1;
          }
        else
          return 0;
|
2021-06-21 13:35:18 +01:00
|
|
|
}
|
2010-12-04 02:37:46 +00:00
|
|
|
|
2021-06-22 13:35:19 +01:00
|
|
|
// Lock each lockable in turn.
|
|
|
|
// Use iteration if all lockables are the same type, recursion otherwise.
|
|
|
|
template<typename _L0, typename... _Lockables>
|
|
|
|
inline int
|
|
|
|
__try_lock_impl(_L0& __l0, _Lockables&... __lockables)
|
2021-06-21 13:35:18 +01:00
|
|
|
{
|
2021-06-22 13:35:19 +01:00
|
|
|
#if __cplusplus >= 201703L
|
|
|
|
if constexpr ((is_same_v<_L0, _Lockables> && ...))
|
2021-06-21 13:35:18 +01:00
|
|
|
{
|
2021-06-22 13:35:19 +01:00
|
|
|
constexpr int _Np = 1 + sizeof...(_Lockables);
|
|
|
|
unique_lock<_L0> __locks[_Np] = {
|
|
|
|
{__l0, defer_lock}, {__lockables, defer_lock}...
|
|
|
|
};
|
|
|
|
for (int __i = 0; __i < _Np; ++__i)
|
|
|
|
{
|
|
|
|
if (!__locks[__i].try_lock())
|
|
|
|
{
|
|
|
|
const int __failed = __i;
|
|
|
|
while (__i--)
|
|
|
|
__locks[__i].unlock();
|
|
|
|
return __failed;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
for (auto& __l : __locks)
|
|
|
|
__l.release();
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
#endif
|
|
|
|
if (unique_lock<_L0> __lock{__l0, try_to_lock})
|
|
|
|
{
|
|
|
|
int __idx = __detail::__try_lock_impl(__lockables...);
|
2021-06-21 13:35:18 +01:00
|
|
|
if (__idx == -1)
|
2021-06-22 13:35:19 +01:00
|
|
|
{
|
|
|
|
__lock.release();
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
return __idx + 1;
|
2021-06-21 13:35:18 +01:00
|
|
|
}
|
|
|
|
else
|
2021-06-22 13:35:19 +01:00
|
|
|
return 0;
|
2021-06-21 13:35:18 +01:00
|
|
|
}
|
2009-02-10 08:29:57 +00:00
|
|
|
|
2021-06-21 13:35:18 +01:00
|
|
|
} // namespace __detail
|
2019-05-02 16:46:42 +01:00
|
|
|
/// @endcond
|
2009-02-10 08:29:57 +00:00
|
|
|
|
2008-09-28 09:05:07 +00:00
|
|
|
/** @brief Generic try_lock.
|
2017-05-17 17:02:33 +01:00
|
|
|
* @param __l1 Meets Lockable requirements (try_lock() may throw).
|
|
|
|
* @param __l2 Meets Lockable requirements (try_lock() may throw).
|
|
|
|
* @param __l3 Meets Lockable requirements (try_lock() may throw).
|
2009-02-10 08:29:57 +00:00
|
|
|
* @return Returns -1 if all try_lock() calls return true. Otherwise returns
|
2008-09-28 09:05:07 +00:00
|
|
|
* a 0-based index corresponding to the argument that returned false.
|
|
|
|
* @post Either all arguments are locked, or none will be.
|
|
|
|
*
|
|
|
|
* Sequentially calls try_lock() on each argument.
|
|
|
|
*/
|
2021-06-22 13:35:19 +01:00
|
|
|
template<typename _L1, typename _L2, typename... _L3>
|
2021-06-23 11:05:51 +01:00
|
|
|
inline int
|
2021-06-22 13:35:19 +01:00
|
|
|
try_lock(_L1& __l1, _L2& __l2, _L3&... __l3)
|
2008-09-28 09:05:07 +00:00
|
|
|
{
|
2021-06-22 13:35:19 +01:00
|
|
|
return __detail::__try_lock_impl(__l1, __l2, __l3...);
|
2008-09-28 09:05:07 +00:00
|
|
|
}
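A usage sketch for std::try_lock (illustrative only, not part of the header):
the call returns -1 when every argument was locked, otherwise the 0-based
index of the argument whose try_lock() failed, in which case nothing is left
locked.

    #include <mutex>

    std::mutex a, b;

    bool try_both()
    {
      if (std::try_lock(a, b) == -1)
        {
          // Both are locked; unlock them when the work is done.
          a.unlock();
          b.unlock();
          return true;
        }
      return false;   // the return value named the argument that failed
    }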
|
2008-05-06 21:11:47 +00:00
|
|
|
|
2021-06-21 13:35:18 +01:00
|
|
|
/// @cond undocumented
|
|
|
|
namespace __detail
|
|
|
|
{
|
|
|
|
// This function can recurse up to N levels deep, for N = 1+sizeof...(L1).
|
|
|
|
// On each recursion the lockables are rotated left one position,
|
|
|
|
// e.g. depth 0: l0, l1, l2; depth 1: l1, l2, l0; depth 2: l2, l0, l1.
|
|
|
|
// When a call to l_i.try_lock() fails it recurses/returns to depth=i
|
|
|
|
// so that l_i is the first argument, and then blocks until l_i is locked.
|
|
|
|
template<typename _L0, typename... _L1>
|
|
|
|
void
|
|
|
|
__lock_impl(int& __i, int __depth, _L0& __l0, _L1&... __l1)
|
|
|
|
{
|
|
|
|
while (__i >= __depth)
|
|
|
|
{
|
|
|
|
if (__i == __depth)
|
|
|
|
{
|
|
|
|
int __failed = 1; // index that couldn't be locked
|
|
|
|
{
|
|
|
|
unique_lock<_L0> __first(__l0);
|
2021-06-22 13:35:19 +01:00
|
|
|
__failed += __detail::__try_lock_impl(__l1...);
|
2021-06-21 13:35:18 +01:00
|
|
|
if (!__failed)
|
|
|
|
{
|
|
|
|
__i = -1; // finished
|
|
|
|
__first.release();
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2021-07-09 15:13:38 +01:00
|
|
|
#if defined _GLIBCXX_HAS_GTHREADS && defined _GLIBCXX_USE_SCHED_YIELD
|
2021-06-21 13:35:18 +01:00
|
|
|
__gthread_yield();
|
|
|
|
#endif
|
|
|
|
constexpr auto __n = 1 + sizeof...(_L1);
|
|
|
|
__i = (__depth + __failed) % __n;
|
|
|
|
}
|
|
|
|
else // rotate left until l_i is first.
|
|
|
|
__detail::__lock_impl(__i, __depth + 1, __l1..., __l0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
} // namespace __detail
|
|
|
|
/// @endcond
|
|
|
|
|
2010-12-04 02:37:46 +00:00
|
|
|
/** @brief Generic lock.
|
2017-05-17 17:02:33 +01:00
|
|
|
* @param __l1 Meets Lockable requirements (try_lock() may throw).
|
|
|
|
* @param __l2 Meets Lockable requirements (try_lock() may throw).
|
|
|
|
* @param __l3 Meets Lockable requirements (try_lock() may throw).
|
2010-12-04 02:37:46 +00:00
|
|
|
* @throw An exception thrown by an argument's lock() or try_lock() member.
|
|
|
|
* @post All arguments are locked.
|
|
|
|
*
|
|
|
|
* All arguments are locked via a sequence of calls to lock(), try_lock()
|
2021-06-22 13:35:19 +01:00
|
|
|
* and unlock(). If this function exits via an exception any locks that
|
|
|
|
* were obtained will be released.
|
2010-12-04 02:37:46 +00:00
|
|
|
*/
|
2014-05-13 18:22:08 +01:00
|
|
|
template<typename _L1, typename _L2, typename... _L3>
|
2008-05-07 00:55:51 +00:00
|
|
|
void
|
2010-12-04 02:37:46 +00:00
|
|
|
lock(_L1& __l1, _L2& __l2, _L3&... __l3)
|
|
|
|
{
|
2021-06-22 13:35:19 +01:00
|
|
|
#if __cplusplus >= 201703L
|
|
|
|
if constexpr (is_same_v<_L1, _L2> && (is_same_v<_L1, _L3> && ...))
|
|
|
|
{
|
|
|
|
constexpr int _Np = 2 + sizeof...(_L3);
|
|
|
|
unique_lock<_L1> __locks[] = {
|
|
|
|
{__l1, defer_lock}, {__l2, defer_lock}, {__l3, defer_lock}...
|
|
|
|
};
|
|
|
|
int __first = 0;
|
|
|
|
do {
|
|
|
|
__locks[__first].lock();
|
|
|
|
for (int __j = 1; __j < _Np; ++__j)
|
|
|
|
{
|
|
|
|
const int __idx = (__first + __j) % _Np;
|
|
|
|
if (!__locks[__idx].try_lock())
|
|
|
|
{
|
|
|
|
for (int __k = __j; __k != 0; --__k)
|
|
|
|
__locks[(__first + __k - 1) % _Np].unlock();
|
|
|
|
__first = __idx;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} while (!__locks[__first].owns_lock());
|
|
|
|
|
|
|
|
for (auto& __l : __locks)
|
|
|
|
__l.release();
|
|
|
|
}
|
|
|
|
else
|
|
|
|
#endif
|
|
|
|
{
|
|
|
|
int __i = 0;
|
|
|
|
__detail::__lock_impl(__i, 0, __l1, __l2, __l3...);
|
|
|
|
}
|
2010-12-04 02:37:46 +00:00
|
|
|
}
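A usage sketch for std::lock (illustrative only, not part of the header):
acquire two mutexes without risking deadlock, then hand ownership to
adopt_lock guards so both are released on scope exit.

    #include <mutex>

    std::mutex a, b;

    void transfer()
    {
      std::lock(a, b);                                    // deadlock-free
      std::lock_guard<std::mutex> ga(a, std::adopt_lock);
      std::lock_guard<std::mutex> gb(b, std::adopt_lock);
      // ... critical section; both unlocked when the guards are destroyed.
    }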
|
2008-05-06 21:11:47 +00:00
|
|
|
|
2017-09-12 15:02:59 +01:00
|
|
|
#if __cplusplus >= 201703L
|
|
|
|
#define __cpp_lib_scoped_lock 201703
|
2017-03-05 18:38:35 +00:00
|
|
|
/** @brief A scoped lock type for multiple lockable objects.
|
|
|
|
*
|
|
|
|
* A scoped_lock controls mutex ownership within a scope, releasing
|
|
|
|
* ownership in the destructor.
|
|
|
|
*/
|
|
|
|
template<typename... _MutexTypes>
|
|
|
|
class scoped_lock
|
|
|
|
{
|
|
|
|
public:
|
|
|
|
explicit scoped_lock(_MutexTypes&... __m) : _M_devices(std::tie(__m...))
|
|
|
|
{ std::lock(__m...); }
|
|
|
|
|
2017-07-15 16:43:22 +01:00
|
|
|
explicit scoped_lock(adopt_lock_t, _MutexTypes&... __m) noexcept
|
2017-03-05 18:38:35 +00:00
|
|
|
: _M_devices(std::tie(__m...))
|
|
|
|
{ } // calling thread owns mutex
|
|
|
|
|
|
|
|
~scoped_lock()
|
2019-06-12 15:52:06 +01:00
|
|
|
{ std::apply([](auto&... __m) { (__m.unlock(), ...); }, _M_devices); }
|
2017-03-05 18:38:35 +00:00
|
|
|
|
|
|
|
scoped_lock(const scoped_lock&) = delete;
|
|
|
|
scoped_lock& operator=(const scoped_lock&) = delete;
|
|
|
|
|
|
|
|
private:
|
|
|
|
tuple<_MutexTypes&...> _M_devices;
|
|
|
|
};
|
|
|
|
|
|
|
|
template<>
|
|
|
|
class scoped_lock<>
|
|
|
|
{
|
|
|
|
public:
|
|
|
|
explicit scoped_lock() = default;
|
|
|
|
explicit scoped_lock(adopt_lock_t) noexcept { }
|
|
|
|
~scoped_lock() = default;
|
|
|
|
|
|
|
|
scoped_lock(const scoped_lock&) = delete;
|
|
|
|
scoped_lock& operator=(const scoped_lock&) = delete;
|
|
|
|
};
|
|
|
|
|
|
|
|
template<typename _Mutex>
|
|
|
|
class scoped_lock<_Mutex>
|
|
|
|
{
|
|
|
|
public:
|
|
|
|
using mutex_type = _Mutex;
|
|
|
|
|
|
|
|
explicit scoped_lock(mutex_type& __m) : _M_device(__m)
|
|
|
|
{ _M_device.lock(); }
|
|
|
|
|
2017-07-15 16:43:22 +01:00
|
|
|
explicit scoped_lock(adopt_lock_t, mutex_type& __m) noexcept
|
2017-03-05 18:38:35 +00:00
|
|
|
: _M_device(__m)
|
|
|
|
{ } // calling thread owns mutex
|
|
|
|
|
|
|
|
~scoped_lock()
|
|
|
|
{ _M_device.unlock(); }
|
|
|
|
|
|
|
|
scoped_lock(const scoped_lock&) = delete;
|
|
|
|
scoped_lock& operator=(const scoped_lock&) = delete;
|
|
|
|
|
|
|
|
private:
|
|
|
|
mutex_type& _M_device;
|
|
|
|
};
|
|
|
|
#endif // C++17
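A usage sketch for scoped_lock (illustrative only, C++17): class template
argument deduction picks the mutex types, the constructor acquires them all
via std::lock, and the destructor unlocks them.

    #include <mutex>

    std::mutex m1;
    std::timed_mutex m2;

    void update()
    {
      std::scoped_lock lock(m1, m2);   // locks both, deadlock-free
      // ... critical section; both unlocked when 'lock' is destroyed.
    }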
|
|
|
|
|
2021-03-12 11:47:20 +00:00
|
|
|
#ifdef _GLIBCXX_HAS_GTHREADS
|
2019-05-02 16:46:42 +01:00
|
|
|
/// Flag type used by std::call_once
|
2008-05-07 00:55:51 +00:00
|
|
|
struct once_flag
|
2008-05-06 21:11:47 +00:00
|
|
|
{
|
2011-11-05 13:33:29 +00:00
|
|
|
constexpr once_flag() noexcept = default;
|
2009-02-10 08:29:57 +00:00
|
|
|
|
2011-05-25 23:49:11 +00:00
|
|
|
/// Deleted copy constructor
|
2008-09-03 17:46:09 +00:00
|
|
|
once_flag(const once_flag&) = delete;
|
2011-05-25 23:49:11 +00:00
|
|
|
/// Deleted assignment operator
|
2008-09-03 17:46:09 +00:00
|
|
|
once_flag& operator=(const once_flag&) = delete;
|
2008-05-06 21:11:47 +00:00
|
|
|
|
libstdc++: Rewrite std::call_once to use futexes [PR 66146]
The current implementation of std::call_once uses pthread_once, which
only meets the C++ requirements when compiled with support for
exceptions. For most glibc targets and all non-glibc targets,
pthread_once does not work correctly if the init_routine exits via an
exception. The pthread_once_t object is left in the "active" state, and
any later attempts to run another init_routine will block forever.
This change makes std::call_once work correctly for Linux targets, by
replacing the use of pthread_once with a futex, based on the code from
__cxa_guard_acquire. For both glibc and musl, the Linux implementation
of pthread_once is already based on futexes, and pthread_once_t is just
a typedef for int, so this change does not alter the layout of
std::once_flag. By choosing the values for the int appropriately, the
new code is even ABI compatible. Code that calls the old implementation
of std::call_once will use pthread_once to manipulate the int, while new
code will use the new std::once_flag members to manipulate it, but they
should interoperate correctly. In both cases, the int is initially zero,
has the lowest bit set when there is an active execution, and equals 2
after a successful returning execution. The difference with the new code
is that exceptional executions are correctly detected and the int is
reset to zero.
The __cxa_guard_acquire code (and musl's pthread_once) use an additional
state to say there are other threads waiting. This allows the futex wake
syscall to be skipped if there is no contention. Glibc doesn't use a
waiter bit, so we have to unconditionally issue the wake in order to be
compatible with code calling the old std::call_once that uses Glibc's
pthread_once. If we know that we're using musl (and musl's pthread_once
doesn't change) it would be possible to set a waiting state and check
for it in std::once_flag::_M_finish(bool), but this patch doesn't do
that.
This doesn't fix the bug for non-linux targets. A similar approach could
be used for targets where we know the definition of pthread_once_t is a
mutex and an integer. We could make once_flag._M_activate() use
pthread_mutex_lock on the mutex member within the pthread_once_t, and
then only set the integer if the execution finishes, and then unlock the
mutex. That would require careful study of each target's pthread_once
implementation and that work is left for a later date.
This also fixes PR 55394 because pthread_once is no longer needed, and
PR 84323 because the fast path is now just an atomic load.
As a consequence of the new implementation that doesn't use
pthread_once, we can also make std::call_once work for targets with no
gthreads support. The code for the single-threaded implementation
follows the same methods as on Linux, but with no need for atomics or
futexes.
libstdc++-v3/ChangeLog:
PR libstdc++/55394
PR libstdc++/66146
PR libstdc++/84323
* config/abi/pre/gnu.ver (GLIBCXX_3.4.29): Add new symbols.
* include/std/mutex [!_GLIBCXX_HAS_GTHREADS] (once_flag): Define
even when gthreads is not supported.
(once_flag::_M_once) [_GLIBCXX_HAVE_LINUX_FUTEX]: Change type
from __gthread_once_t to int.
(once_flag::_M_passive(), once_flag::_M_activate())
(once_flag::_M_finish(bool), once_flag::_Active_execution):
Define new members for futex and non-threaded implementation.
[_GLIBCXX_HAS_GTHREADS] (once_flag::_Prepare_execution): New
RAII helper type.
(call_once): Use new members of once_flag.
* src/c++11/mutex.cc (std::once_flag::_M_activate): Define.
(std::once_flag::_M_finish): Define.
* testsuite/30_threads/call_once/39909.cc: Do not require
gthreads.
* testsuite/30_threads/call_once/49668.cc: Likewise.
* testsuite/30_threads/call_once/60497.cc: Likewise.
* testsuite/30_threads/call_once/call_once1.cc: Likewise.
* testsuite/30_threads/call_once/dr2442.cc: Likewise.
* testsuite/30_threads/call_once/once_flag.cc: Add test for
constexpr constructor.
* testsuite/30_threads/call_once/66146.cc: New test.
* testsuite/30_threads/call_once/constexpr.cc: Removed.
* testsuite/30_threads/once_flag/cons/constexpr.cc: Removed.
2020-11-03 18:44:32 +00:00
|
|
|
private:
|
2021-03-12 11:47:20 +00:00
|
|
|
// For gthreads targets a pthread_once_t is used with pthread_once, but
|
|
|
|
// for most targets this doesn't work correctly for exceptional executions.
|
2020-11-03 18:44:32 +00:00
|
|
|
__gthread_once_t _M_once = __GTHREAD_ONCE_INIT;
|
|
|
|
|
|
|
|
struct _Prepare_execution;
|
|
|
|
|
2008-09-03 17:46:09 +00:00
|
|
|
template<typename _Callable, typename... _Args>
|
|
|
|
friend void
|
2010-10-08 00:44:12 +00:00
|
|
|
call_once(once_flag& __once, _Callable&& __f, _Args&&... __args);
|
2008-05-06 21:11:47 +00:00
|
|
|
};
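A usage sketch for once_flag with std::call_once (illustrative only): the
initialisation runs exactly once even when several threads race, and if it
exits via an exception another thread gets to retry.

    #include <mutex>

    struct Resource { /* ... */ };

    std::once_flag init_flag;
    Resource* resource = nullptr;

    Resource& get_resource()
    {
      std::call_once(init_flag, [] { resource = new Resource; });
      return *resource;
    }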
|
|
|
|
|
2019-05-02 16:46:42 +01:00
|
|
|
/// @cond undocumented
|
2020-11-03 18:44:32 +00:00
|
|
|
# ifdef _GLIBCXX_HAVE_TLS
|
|
|
|
// If TLS is available use thread-local state for the type-erased callable
|
|
|
|
// that is being run by std::call_once in the current thread.
|
2008-09-03 17:46:09 +00:00
|
|
|
extern __thread void* __once_callable;
|
|
|
|
extern __thread void (*__once_call)();
|
2020-11-03 18:44:32 +00:00
|
|
|
|
|
|
|
  // RAII type to set up state for pthread_once call.
  struct once_flag::_Prepare_execution
  {
    template<typename _Callable>
      explicit
      _Prepare_execution(_Callable& __c)
      {
        // Store address in thread-local pointer:
        __once_callable = std::__addressof(__c);
        // Trampoline function to invoke the closure via thread-local pointer:
        __once_call = [] { (*static_cast<_Callable*>(__once_callable))(); };
      }

    ~_Prepare_execution()
    {
      // PR libstdc++/82481
      __once_callable = nullptr;
      __once_call = nullptr;
    }

    _Prepare_execution(const _Prepare_execution&) = delete;
    _Prepare_execution& operator=(const _Prepare_execution&) = delete;
  };

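  // Illustrative sketch (not part of this header) of the trampoline pattern
  // used above: a captureless lambda converts to a plain function pointer,
  // and the thread-local pointer carries the state the lambda cannot capture,
  // so a stateful callable can be run through a C interface such as
  // pthread_once.  The names below are hypothetical stand-ins for
  // __once_callable and __once_call:
  //
  //   thread_local void* callable_ptr;
  //   thread_local void (*callable_trampoline)();
  //
  //   template<typename F>
  //     void prepare(F& f)
  //     {
  //       callable_ptr = std::addressof(f);
  //       callable_trampoline = [] { (*static_cast<F*>(callable_ptr))(); };
  //     }
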
# else
  // Without TLS use a global std::mutex and store the callable in a
  // global std::function.
  extern function<void()> __once_functor;

  extern void
  __set_once_functor_lock_ptr(unique_lock<mutex>*);

  extern mutex&
  __get_once_mutex();

  // RAII type to set up state for pthread_once call.
  struct once_flag::_Prepare_execution
  {
    template<typename _Callable>
      explicit
      _Prepare_execution(_Callable& __c)
      {
        // Store the callable in the global std::function
        __once_functor = __c;
        __set_once_functor_lock_ptr(&_M_functor_lock);
      }

    ~_Prepare_execution()
    {
      if (_M_functor_lock)
        __set_once_functor_lock_ptr(nullptr);
    }

  private:
    // XXX This deadlocks if used recursively (PR 97949)
    unique_lock<mutex> _M_functor_lock{__get_once_mutex()};

    _Prepare_execution(const _Prepare_execution&) = delete;
    _Prepare_execution& operator=(const _Prepare_execution&) = delete;
  };
# endif

  /// @endcond

  // This function is passed to pthread_once by std::call_once.
  // It runs __once_call() or __once_functor().
  extern "C" void __once_proxy(void);

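  // A rough sketch (an assumption, not the actual out-of-line definition in
  // the library) of what __once_proxy is expected to do, based on the
  // comment above:
  //
  //   extern "C" void __once_proxy(void)
  //   {
  //     // With TLS: run the trampoline installed by _Prepare_execution.
  //     // Without TLS: take a local copy of __once_functor, drop the global
  //     // once mutex, then invoke the copy.
  //   }
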
  /// Invoke a callable and synchronize with other calls using the same flag
  template<typename _Callable, typename... _Args>
    void
    call_once(once_flag& __once, _Callable&& __f, _Args&&... __args)
    {
      // Closure type that runs the function
      auto __callable = [&] {
        std::__invoke(std::forward<_Callable>(__f),
                      std::forward<_Args>(__args)...);
      };

      once_flag::_Prepare_execution __exec(__callable);

      // XXX pthread_once does not reset the flag if an exception is thrown.
      if (int __e = __gthread_once(&__once._M_once, &__once_proxy))
        __throw_system_error(__e);
    }

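  // Usage sketch (illustrative, not part of this header); init_resource is a
  // hypothetical one-time setup function:
  //
  //   std::once_flag flag;
  //   void init_resource();
  //
  //   void get_resource()
  //   {
  //     std::call_once(flag, init_resource);  // the first caller runs it,
  //                                           // concurrent callers wait,
  //                                           // later calls return at once
  //   }
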
#else // _GLIBCXX_HAS_GTHREADS

  /// Flag type used by std::call_once
  struct once_flag
  {
    constexpr once_flag() noexcept = default;

    /// Deleted copy constructor
    once_flag(const once_flag&) = delete;
    /// Deleted assignment operator
    once_flag& operator=(const once_flag&) = delete;

  private:
    // There are two different std::once_flag interfaces, abstracting four
    // different implementations.
    // The single-threaded interface uses the _M_activate() and _M_finish(bool)
    // functions, which start and finish an active execution respectively.
    // See [thread.once.callonce] in C++11 for the definition of
    // active/passive/returning/exceptional executions.
    enum _Bits : int { _Init = 0, _Active = 1, _Done = 2 };

    int _M_once = _Bits::_Init;

    // Check to see if all executions will be passive now.
    bool
    _M_passive() const noexcept;

    // Attempts to begin an active execution.
    bool _M_activate();

    // Must be called to complete an active execution.
    // The argument is true if the active execution was a returning execution,
    // false if it was an exceptional execution.
    void _M_finish(bool __returning) noexcept;

    // RAII helper to call _M_finish.
    struct _Active_execution
    {
      explicit _Active_execution(once_flag& __flag) : _M_flag(__flag) { }

      ~_Active_execution() { _M_flag._M_finish(_M_returning); }

      _Active_execution(const _Active_execution&) = delete;
      _Active_execution& operator=(const _Active_execution&) = delete;

      once_flag& _M_flag;
      bool _M_returning = false;
    };

    template<typename _Callable, typename... _Args>
      friend void
      call_once(once_flag& __once, _Callable&& __f, _Args&&... __args);
  };

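  // Illustrative summary of the state machine implied by the members above:
  //
  //   _Init   --_M_activate()-->    _Active
  //   _Active --_M_finish(true)-->  _Done    (returning execution)
  //   _Active --_M_finish(false)--> _Init    (exceptional execution)
  //
  // _M_passive() is true only in the _Done state, where every subsequent
  // execution is passive.
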
  // Inline definitions of std::once_flag members for single-threaded targets.

  inline bool
  once_flag::_M_passive() const noexcept
  { return _M_once == _Bits::_Done; }

  inline bool
  once_flag::_M_activate()
  {
    if (_M_once == _Bits::_Init) [[__likely__]]
      {
        _M_once = _Bits::_Active;
        return true;
      }
    else if (_M_passive()) // Caller should have checked this already.
      return false;
    else
      __throw_system_error(EDEADLK);
  }

  inline void
  once_flag::_M_finish(bool __returning) noexcept
  { _M_once = __returning ? _Bits::_Done : _Bits::_Init; }

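  // One observable consequence of _M_activate() above (illustrative): on a
  // single-threaded target, re-entering call_once with the same flag from
  // inside the callable finds _M_once == _Active and throws
  // std::system_error(EDEADLK) instead of blocking:
  //
  //   std::once_flag f;
  //   std::call_once(f, [&] { std::call_once(f, []{}); });  // inner call throws
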
  /// Invoke a callable and synchronize with other calls using the same flag
  template<typename _Callable, typename... _Args>
    inline void
    call_once(once_flag& __once, _Callable&& __f, _Args&&... __args)
    {
      if (__once._M_passive())
        return;
      else if (__once._M_activate())
        {
          once_flag::_Active_execution __exec(__once);

          // _GLIBCXX_RESOLVE_LIB_DEFECTS
          // 2442. call_once() shouldn't DECAY_COPY()
          std::__invoke(std::forward<_Callable>(__f),
                        std::forward<_Args>(__args)...);

          // __f(__args...) did not throw
          __exec._M_returning = true;
        }
    }

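  // Illustrative behaviour of the definition above: an exceptional execution
  // leaves _M_returning false, so ~_Active_execution calls _M_finish(false)
  // and resets the flag, and a later call runs the callable again.
  //
  //   std::once_flag f;
  //   int runs = 0;
  //   try { std::call_once(f, [&] { ++runs; throw 1; }); } catch (int) { }
  //   std::call_once(f, [&] { ++runs; });   // runs again; runs == 2
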
#endif // _GLIBCXX_HAS_GTHREADS

  /// @} group mutexes
_GLIBCXX_END_NAMESPACE_VERSION
} // namespace

#endif // C++11

#endif // _GLIBCXX_MUTEX