This moves the "classic" contents of <chrono> to a new header, so that
<future>, <thread> etc. can use durations and clocks without
calendar types, time zones, and chrono I/O.
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new header.
* include/Makefile.in: Regenerate.
* include/std/chrono (duration, time_point, system_clock)
(steady_clock, high_resolution_clock, chrono_literals, sys_time)
(file_clock, file_time): Move to ...
* include/bits/chrono.h: New file.
* include/bits/atomic_futex.h: Include new header instead of
<chrono>.
* include/bits/atomic_timed_wait.h: Likewise.
* include/bits/fs_fwd.h: Likewise.
* include/bits/semaphore_base.h: Likewise.
* include/bits/this_thread_sleep.h: Likewise.
* include/bits/unique_lock.h: Likewise.
* include/experimental/bits/fs_fwd.h: Likewise.
* include/experimental/chrono: Likewise.
* include/experimental/io_context: Likewise.
* include/experimental/netfwd: Likewise.
* include/experimental/timer: Likewise.
* include/std/condition_variable: Likewise.
* include/std/mutex: Likewise.
* include/std/shared_mutex: Likewise.
The std::try_lock and std::lock algorithms can use iteration instead of
recursion when all lockables have the same type and can be held by an
array of unique_lock<L> objects.
Making this change in __detail::__try_lock_impl also benefits
__detail::__lock_impl, which uses it. For std::lock we can just put the
iterative version directly in std::lock, to avoid making any call to
__detail::__lock_impl.
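A minimal sketch of the iterative idea for the std::try_lock side in the
homogeneous case (hypothetical helper name; the library code is
structured differently):
  #include <mutex>
  // When every lockable has the same type L, deferred unique_lock<L>
  // objects can be held in an array and locked in a loop.  Returns -1 on
  // success, otherwise the index of the lockable that could not be locked
  // (everything locked before it is unlocked again by the destructors).
  template<typename L, typename... Ls>
    int
    try_lock_all_sketch(L& l0, Ls&... ls)
    {
      std::unique_lock<L> locks[] = { {l0, std::defer_lock},
                                      {ls, std::defer_lock}... };
      for (auto& l : locks)
        if (!l.try_lock())
          return int(&l - locks);   // earlier locks unlock on return
      for (auto& l : locks)
        l.release();                // success: keep everything locked
      return -1;
    }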
Signed-off-by: Matthias Kretz <m.kretz@gsi.de>
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
Co-authored-by: Matthias Kretz <m.kretz@gsi.de>
libstdc++-v3/ChangeLog:
* include/std/mutex (lock): Replace recursion with iteration
when lockables all have the same type.
(__detail::__try_lock_impl): Likewise. Pass lockables as
parameters, instead of a tuple. Always lock the first one, and
recurse for the rest.
(__detail::__lock_impl): Adjust call to __try_lock_impl.
(__detail::__try_to_lock): Remove.
* testsuite/30_threads/lock/3.cc: Check that mutexes are locked.
* testsuite/30_threads/lock/4.cc: Also test non-heterogeneous
arguments.
* testsuite/30_threads/unique_lock/cons/60497.cc: Also check
std::try_lock.
* testsuite/30_threads/try_lock/5.cc: New test.
The current std::lock algorithm is the one called "persistent" in Howard
Hinnant's https://howardhinnant.github.io/dining_philosophers.html post.
While it tends to be acceptably fast, it wastes a lot of CPU cycles
by continuously locking and unlocking the uncontended mutexes.
Effectively, it's a spin lock with no back-off.
This replaces it with the one Howard calls "smart and polite". It's
smart, because when a Mi.try_lock() call fails because mutex Mi is
contended, the algorithm reorders the mutexes until Mi is first, then
calls Mi.lock(), to block until Mi is no longer contended. It's
polite because it uses std::this_thread::yield() between the failed
Mi.try_lock() call and the Mi.lock() call. (In reality it uses
__gthread_yield() directly, because using this_thread::yield() would
require shuffling code around to avoid a circular dependency.)
This version of the algorithm is inspired by some hints from Howard, so
that it has strictly bounded stack usage. As the comment in the code
says:
// This function can recurse up to N levels deep, for N = 1+sizeof...(L1).
// On each recursion the lockables are rotated left one position,
// e.g. depth 0: l0, l1, l2; depth 1: l1, l2, l0; depth 2: l2, l0, l1.
// When a call to l_i.try_lock() fails it recurses/returns to depth=i
// so that l_i is the first argument, and then blocks until l_i is locked.
The 'i' parameter is the desired permutation of the lockables, and the
'depth' parameter is the depth in the call stack of the current
instantiation of the function template. If i == depth then the function
calls l0.lock() and then l1.try_lock()... for each lockable in the
parameter pack l1. If i > depth then the function rotates the lockables
to the left one place, and calls itself again to go one level deeper.
Finally, if i < depth then the function returns to a shallower depth,
equivalent to a right rotate of the lockables. When a call to
try_lock() fails, i is set to the index of the contended lockable, so
that the next call to l0.lock() will use the contended lockable as l0.
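A simplified, self-contained sketch of that scheme (hypothetical names;
the real code lives in namespace __detail, uses its own try-lock helper
and calls __gthread_yield() directly):
  #include <mutex>
  #include <thread>
  // Try to lock l0, l1... in order.  Returns -1 if all succeeded, else the
  // index of the first lockable whose try_lock() failed (anything locked
  // before it is unlocked again).
  inline int try_all_sketch() { return -1; }
  template<typename L0, typename... L1>
    int try_all_sketch(L0& l0, L1&... l1)
    {
      std::unique_lock<L0> u(l0, std::try_to_lock);
      if (!u.owns_lock())
        return 0;
      int failed = try_all_sketch(l1...);
      if (failed == -1)
        u.release();                // success: keep l0 locked
      return failed == -1 ? -1 : failed + 1;
    }
  // 'i' is the index of the lockable to lock (blocking) first; 'depth' is
  // how many left rotations have been applied to the argument list so far.
  template<typename L0, typename... L1>
    void lock_sketch(int& i, int depth, L0& l0, L1&... l1)
    {
      while (i >= depth)
        {
          if (i == depth)
            {
              int failed;
              {
                std::unique_lock<L0> first(l0); // block until l0 is locked
                failed = try_all_sketch(l1...); // then try all the rest
                if (failed == -1)
                  {
                    i = -1;                     // done: stop unwinding
                    first.release();            // keep l0 locked too
                    return;
                  }
              }                                 // unlock l0 before yielding
              std::this_thread::yield();        // be polite, then retry
              // Map the failed position back to the unrotated order so the
              // contended lockable becomes l0 on the next blocking lock.
              i = (depth + 1 + failed) % int(1 + sizeof...(L1));
            }
          else
            // i > depth: rotate the arguments left one place, go one deeper.
            lock_sketch(i, depth + 1, l1..., l0);
        }
      // i < depth: return to a shallower depth (a right rotation).
    }
In this sketch, int i = 0; lock_sketch(i, 0, m1, m2, m3); corresponds to
std::lock(m1, m2, m3).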
This commit also replaces the std::try_lock implementation details. The
new code is identical in behaviour, but uses a pair of constrained
function templates. This avoids instantiating a class template, and is a
little simpler to call where used in std::__detail::__lock_impl and
std::try_lock.
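Roughly, the pair of helpers has the following shape (illustrative names
and spelling; the real ones are in namespace __detail):
  #include <cstddef>
  #include <mutex>
  #include <tuple>
  #include <type_traits>
  // The first overload is selected once Idx has walked past the last
  // element; the second tries element Idx and recurses.  Returns -1 if
  // everything was locked, otherwise the index that failed.
  template<std::size_t Idx, typename... Ls>
    std::enable_if_t<Idx == sizeof...(Ls), int>
    try_lock_sketch(std::tuple<Ls&...>&)
    { return -1; }
  template<std::size_t Idx, typename... Ls>
    std::enable_if_t<(Idx < sizeof...(Ls)), int>
    try_lock_sketch(std::tuple<Ls&...>& locks)
    {
      // C++17 CTAD used here for brevity.
      std::unique_lock u(std::get<Idx>(locks), std::try_to_lock);
      if (!u.owns_lock())
        return int(Idx);      // contended: earlier locks unlock on return
      int failed = try_lock_sketch<Idx + 1>(locks);
      if (failed == -1)
        u.release();          // success all the way down: keep this one
      return failed;
    }
In this sketch std::try_lock would wrap its arguments with std::tie and
call the Idx == 0 instantiation.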
Signed-off-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++-v3/ChangeLog:
* include/std/mutex (__try_to_lock): Move to __detail namespace.
(struct __try_lock_impl): Replace with ...
(__detail::__try_lock_impl<Idx>(tuple<Lockables...>&)): New
function templates to implement std::try_lock.
(try_lock): Use new __try_lock_impl.
(__detail::__lock_impl(int, int&, L0&, L1&...)): New function
template to implement std::lock.
(lock): Use __lock_impl.
The new std::call_once implementation is not backwards compatible,
contrary to my intention. Because std::once_flag::_M_activate() doesn't
write glibc's "fork generation" into the pthread_once_t object, it's
possible for glibc and libstdc++ to run two active executions
concurrently. This violates the primary invariant of the feature!
This patch reverts std::once_flag and std::call_once to the old
implementation that uses pthread_once. This means PR 66146 is a problem
again, but glibc has been changed to solve that. A new API similar to
pthread_once but supporting failure and resetting the pthread_once_t
will be proposed for inclusion in glibc and other C libraries.
This change doesn't simply revert r11-4691 because I want to retain the
new implementation for non-gthreads targets (which didn't previously
support std::call_once at all, so there's no backwards compatibility
concern). This also leaves the new std::once_flag::_M_activate() and
std::once_flag::_M_finish(bool) symbols present in libstdc++.so.6 so
that code already compiled against GCC 11 can still use them. Those
symbols will be removed in a subsequent commit (which distros can choose
to temporarily revert if needed).
libstdc++-v3/ChangeLog:
PR libstdc++/99341
* include/std/mutex [_GLIBCXX_HAVE_LINUX_FUTEX] (once_flag):
Revert to pthread_once_t implementation.
[_GLIBCXX_HAVE_LINUX_FUTEX] (call_once): Likewise.
* src/c++11/mutex.cc [_GLIBCXX_HAVE_LINUX_FUTEX]
(struct __once_flag_compat): New type matching the reverted
implementation of once_flag using futexes.
(once_flag::_M_activate): Remove, replace with ...
(_ZNSt9once_flag11_M_activateEv): ... alias symbol.
(once_flag::_M_finish): Remove, replace with ...
(_ZNSt9once_flag9_M_finishEb): ... alias symbol.
* testsuite/30_threads/call_once/66146.cc: Removed.
The once_flag::_M_activate() function is only ever called immediately
after a call to once_flag::_M_passive() returns false, so in the non-gthreads case
it is impossible for _M_passive() to be true in the body of
_M_activate(). Add a check for it anyway, to avoid warnings about
missing return.
Also replace a non-reserved name with a reserved one.
libstdc++-v3/ChangeLog:
* include/std/mutex (once_flag::_M_activate()): Add explicit
return statement for passive case.
(once_flag::_M_finish(bool)): Use reserved name for parameter.
The current implementation of std::call_once uses pthread_once, which
only meets the C++ requirements when compiled with support for
exceptions. For most glibc targets and all non-glibc targets,
pthread_once does not work correctly if the init_routine exits via an
exception. The pthread_once_t object is left in the "active" state, and
any later attempts to run another init_routine will block forever.
This change makes std::call_once work correctly for Linux targets, by
replacing the use of pthread_once with a futex, based on the code from
__cxa_guard_acquire. For both glibc and musl, the Linux implementation
of pthread_once is already based on futexes, and pthread_once_t is just
a typedef for int, so this change does not alter the layout of
std::once_flag. By choosing the values for the int appropriately, the
new code is even ABI compatible. Code that calls the old implementation
of std::call_once will use pthread_once to manipulate the int, while new
code will use the new std::once_flag members to manipulate it, but they
should interoperate correctly. In both cases, the int is initially zero,
has the lowest bit set when there is an active execution, and equals 2
after a successful returning execution. The difference with the new code
is that exceptional exits are correctly detected and the int is
reset to zero.
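A much-simplified sketch of that state machine (std::atomic and a spin
loop are used purely for illustration; the real once_flag stores a plain
int and blocks on a futex instead of spinning):
  #include <atomic>
  struct once_flag_sketch
  {
    std::atomic<int> state{0};    // 0 = never run, 1 = active, 2 = done
    bool passive() const noexcept // fast path: has an execution succeeded?
    { return state.load(std::memory_order_acquire) == 2; }
    // Returns true if the caller should run the callable.
    bool activate()
    {
      int expected = 0;
      while (!state.compare_exchange_strong(expected, 1,
                                            std::memory_order_acq_rel,
                                            std::memory_order_acquire))
        {
          if (expected == 2)
            return false;         // another execution already returned
          expected = 0;           // an execution is active: wait and retry
          // (the real code blocks on a futex here instead of spinning)
        }
      return true;
    }
    // Called after the callable returns (true) or throws (false).
    void finish(bool returning) noexcept
    {
      state.store(returning ? 2 : 0, std::memory_order_release);
      // (the real code issues a futex wake here to unblock waiters)
    }
  };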
The __cxa_guard_acquire code (and musl's pthread_once) use an additional
state to say there are other threads waiting. This allows the futex wake
syscall to be skipped if there is no contention. Glibc doesn't use a
waiter bit, so we have to unconditionally issue the wake in order to be
compatible with code calling the old std::call_once that uses Glibc's
pthread_once. If we know that we're using musl (and musl's pthread_once
doesn't change) it would be possible to set a waiting state and check
for it in std::once_flag::_M_finish(bool), but this patch doesn't do
that.
This doesn't fix the bug for non-linux targets. A similar approach could
be used for targets where we know the definition of pthread_once_t is a
mutex and an integer. We could make once_flag::_M_activate() use
pthread_mutex_lock on the mutex member within the pthread_once_t, and
then only set the integer if the execution finishes, and then unlock the
mutex. That would require careful study of each target's pthread_once
implementation and that work is left for a later date.
This also fixes PR 55394 because pthread_once is no longer needed, and
PR 84323 because the fast path is now just an atomic load.
As a consequence of the new implementation that doesn't use
pthread_once, we can also make std::call_once work for targets with no
gthreads support. The code for the single-threaded implementation
follows the same methods as on Linux, but with no need for atomics or
futexes.
libstdc++-v3/ChangeLog:
PR libstdc++/55394
PR libstdc++/66146
PR libstdc++/84323
* config/abi/pre/gnu.ver (GLIBCXX_3.4.29): Add new symbols.
* include/std/mutex [!_GLIBCXX_HAS_GTHREADS] (once_flag): Define
even when gthreads is not supported.
(once_flag::_M_once) [_GLIBCXX_HAVE_LINUX_FUTEX]: Change type
from __gthread_once_t to int.
(once_flag::_M_passive(), once_flag::_M_activate())
(once_flag::_M_finish(bool), once_flag::_Active_execution):
Define new members for futex and non-threaded implementation.
[_GLIBCXX_HAS_GTHREADS] (once_flag::_Prepare_execution): New
RAII helper type.
(call_once): Use new members of once_flag.
* src/c++11/mutex.cc (std::once_flag::_M_activate): Define.
(std::once_flag::_M_finish): Define.
* testsuite/30_threads/call_once/39909.cc: Do not require
gthreads.
* testsuite/30_threads/call_once/49668.cc: Likewise.
* testsuite/30_threads/call_once/60497.cc: Likewise.
* testsuite/30_threads/call_once/call_once1.cc: Likewise.
* testsuite/30_threads/call_once/dr2442.cc: Likewise.
* testsuite/30_threads/call_once/once_flag.cc: Add test for
constexpr constructor.
* testsuite/30_threads/call_once/66146.cc: New test.
* testsuite/30_threads/call_once/constexpr.cc: Removed.
* testsuite/30_threads/once_flag/cons/constexpr.cc: Removed.
For C++20 the wait_until members of mutexes and condition variables are
required to be ill-formed if given a clock that doesn't meet the
requirements for a clock type. To implement that requirement this patch
adds static assertions using the chrono::is_clock trait, and defines
that trait.
To avoid expensive checks for the common cases, the trait (and
associated variable template) are explicitly specialized for the
standard clock types.
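For illustration, a wait_until overload can now reject a non-conforming
clock at compile time along these lines (hypothetical free function,
shown only for the shape of the check):
  #include <chrono>
  template<typename Clock, typename Duration>
    bool
    wait_until_sketch(const std::chrono::time_point<Clock, Duration>& atime)
    {
      static_assert(std::chrono::is_clock_v<Clock>,
                    "wait_until requires a type that meets the Clock requirements");
      // ... the real code performs the timed wait here ...
      return Clock::now() < atime;    // placeholder body
    }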
This also moves the filesystem::__file_clock type from <filesystem> to
<chrono>, so that chrono::file_clock and chrono::file_time can be
defined in <chrono> as required.
* include/bits/fs_fwd.h (filesystem::__file_clock): Move to ...
* include/std/chrono (filesystem::__file_clock): Here.
(filesystem::__file_clock::from_sys, filesystem::__file_clock::to_sys):
Define public member functions for C++20.
(is_clock, is_clock_v): Define traits for C++20.
* include/std/condition_variable (condition_variable::wait_until): Add
check for valid clock.
* include/std/future (_State_baseV2::wait_until): Likewise.
* include/std/mutex (__timed_mutex_impl::_M_try_lock_until): Likewise.
* include/std/shared_mutex (shared_timed_mutex::try_lock_shared_until):
Likewise.
* include/std/thread (this_thread::sleep_until): Likewise.
* testsuite/30_threads/condition_variable/members/2.cc: Qualify
slow_clock with new namespace.
* testsuite/30_threads/condition_variable/members/clock_neg.cc: New
test.
* testsuite/30_threads/condition_variable_any/members/clock_neg.cc:
New test.
* testsuite/30_threads/future/members/clock_neg.cc: New test.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc:
Qualify slow_clock with new namespace.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/
clock_neg.cc: New test.
* testsuite/30_threads/shared_future/members/clock_neg.cc: New
test.
* testsuite/30_threads/shared_lock/locking/clock_neg.cc: New test.
* testsuite/30_threads/shared_timed_mutex/try_lock_until/clock_neg.cc:
New test.
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: Qualify
slow_clock with new namespace.
* testsuite/30_threads/timed_mutex/try_lock_until/4.cc: Likewise.
* testsuite/30_threads/timed_mutex/try_lock_until/clock_neg.cc: New
test.
* testsuite/30_threads/unique_lock/locking/clock_neg.cc: New test.
* testsuite/std/time/traits/is_clock.cc: New test.
* testsuite/util/slow_clock.h (slow_clock): Move to __gnu_test
namespace.
A non-standard clock may tick more slowly than
std::chrono::steady_clock. This means that we risk returning false
early when the specified timeout may not have expired. This can be
avoided by looping until the timeout time as reported by the
non-standard clock has been reached.
Unfortunately, we have no way to tell whether the non-standard clock
ticks more quickly than std::chrono::steady_clock. If it does then we
risk returning later than would be expected, but that is unavoidable and
permitted by the standard.
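The resulting loop looks roughly like this (hypothetical free function
standing in for __timed_mutex_impl::_M_try_lock_until, assuming the
mutex provides try_lock_for):
  #include <chrono>
  // Keep retrying with a relative wait until the user's clock agrees that
  // the deadline has passed.  If Clock ticks more slowly than steady_clock
  // we simply wait again; if it ticks more quickly we may return a little
  // late, which the standard permits.
  template<typename TimedMutex, typename Clock, typename Duration>
    bool
    try_lock_until_sketch(TimedMutex& m,
                          const std::chrono::time_point<Clock, Duration>& atime)
    {
      auto now = Clock::now();
      do
        {
          auto rtime = atime - now;   // remaining time on the user's clock
          if (m.try_lock_for(rtime))
            return true;
          now = Clock::now();
        }
      while (atime > now);
      return false;
    }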
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/91906 Fix timed_mutex::try_lock_until on arbitrary clock
* include/std/mutex (__timed_mutex_impl::_M_try_lock_until): Loop
until the absolute timeout time is reached as measured against the
appropriate clock.
* testsuite/util/slow_clock.h: New file. Move implementation of
slow_clock test class.
* testsuite/30_threads/condition_variable/members/2.cc: Include
slow_clock from header.
* testsuite/30_threads/shared_timed_mutex/try_lock/3.cc: Convert
existing test to templated function so that it can be called with
both system_clock and steady_clock.
* testsuite/30_threads/timed_mutex/try_lock_until/3.cc: Also run test
using slow_clock to test above fix.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/3.cc:
Likewise.
* testsuite/30_threads/recursive_timed_mutex/try_lock_until/4.cc: Add
new test that try_lock_until behaves as try_lock if the timeout has
already expired or exactly matches the current time.
From-SVN: r278902
The pthread_mutex_clocklock function is available in glibc since the
2.30 release. If this function is available in the C library it can be
used to fix PR libstdc++/78237 by supporting steady_clock properly with
timed_mutex.
This means that code using timed_mutex::try_lock_for or
timed_mutex::wait_until with steady_clock is no longer subject to timing
out early or potentially waiting for much longer if the system clock is
warped at an inopportune moment.
If pthread_mutex_clocklock is available then steady_clock is deemed to
be the "best" clock available which means that it is used for the
relative try_lock_for calls and absolute try_lock_until calls using
steady_clock and user-defined clocks. Calls explicitly using
system_clock (aka high_resolution_clock) continue to use CLOCK_REALTIME
via __gthread_cond_timedwait.
If pthread_mutex_clocklock is not available then system_clock is deemed
to be the "best" clock available which means that the previous
suboptimal behaviour remains.
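The clock selection can be sketched as follows (illustrative alias and
function names; in the library the choice is made inside
__timed_mutex_impl):
  #include <chrono>
  #include <mutex>
  #if defined _GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK
  using best_clock = std::chrono::steady_clock;  // waits use CLOCK_MONOTONIC
  #else
  using best_clock = std::chrono::system_clock;  // only CLOCK_REALTIME waits
  #endif
  // Relative timeouts become an absolute time point on best_clock, so when
  // pthread_mutex_clocklock is available they are immune to warps of the
  // system clock.
  template<typename Rep, typename Period>
    bool
    try_lock_for_sketch(std::timed_mutex& m,
                        const std::chrono::duration<Rep, Period>& rtime)
    {
      auto atime = best_clock::now() + rtime;
      return m.try_lock_until(atime);
    }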
2019-12-02 Mike Crowe <mac@mcrowe.com>
PR libstdc++/78237 Add full steady_clock support to timed_mutex
* acinclude.m4 (GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK): Define to
detect presence of pthread_mutex_clocklock function.
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac: Call GLIBCXX_CHECK_PTHREAD_MUTEX_CLOCKLOCK.
* include/std/mutex (__timed_mutex_impl): Remove unnecessary __clock_t.
(__timed_mutex_impl::_M_try_lock_for): Use best clock to turn relative
timeout into absolute timeout.
(__timed_mutex_impl::_M_try_lock_until): Keep existing implementation
for system_clock. Add new implementation for steady_clock that calls
_M_clocklock. Modify overload for user-defined clock to use a relative
wait so that it automatically uses the best clock.
[_GLIBCXX_USE_PTHREAD_MUTEX_CLOCKLOCK] (timed_mutex::_M_clocklock):
New member function.
(recursive_timed_mutex::_M_clocklock): Likewise.
From-SVN: r278901
This will allow std::mutex and std::lock_guard to be used elsewhere in
the library without pulling in the whole of <chrono>.
Previously the whole of <bits/std_mutex.h> was conditional on the
_GLIBCXX_USE_C99_STDINT_TR1 macro, but only the std::unique_lock members
that use <chrono> facilities should depend on that. std::mutex only
needs to depend on _GLIBCXX_HAS_GTHREADS and std::lock_guard can be
defined unconditionally.
Some parts of <bits/std_mutex.h> and <mutex> are based on code in
<ext/concurrence.h> which dates from 2003. However, the std::unique_lock
implementation was added in 2008 by r135007, without using any earlier
code. Therefore the new header file has copyright years 2008-2018.
* include/Makefile.am: Add new <bits/unique_lock.h> header.
* include/Makefile.in: Regenerate.
* include/bits/std_mutex.h [!_GLIBCXX_USE_C99_STDINT_TR1] (mutex)
(lock_guard): Define independent of _GLIBCXX_USE_C99_STDINT_TR1.
(unique_lock): Move definition to ...
* include/bits/unique_lock.h: New header.
[!_GLIBCXX_USE_C99_STDINT_TR1] (unique_lock): Define unconditionally.
[_GLIBCXX_USE_C99_STDINT_TR1] (unique_lock(mutex_type&, time_point))
(unique_lock(mutex_type&, duration), unique_lock::try_lock_until)
(unique_lock::try_lock_for): Define only when <chrono> is usable.
* include/std/condition_variable: Include <bits/unique_lock.h>.
* include/std/mutex: Likewise.
From-SVN: r262963
PR libstdc++/79433
* doc/xml/manual/status_cxx2017.xml: Update feature-test macros.
* doc/html/*: Regenerate.
* include/Makefile.am: Remove <bits/c++17_warning.h>.
* include/Makefile.in: Regenerate.
* include/bits/c++17_warning.h: Remove.
* include/bits/string_view.tcc: Do not include <bits/c++17_warning.h>
for pre-C++17 modes.
* include/std/any: Likewise.
(__cpp_lib_any): Define.
* include/std/mutex (__cpp_lib_scoped_lock): Adjust value as per new
SD-6 draft.
* include/std/numeric (__cpp_lib_gcd_lcm): Define as per new SD-6
draft.
* include/std/optional: Do not include <bits/c++17_warning.h>.
(__cpp_lib_optional): Define.
* include/std/shared_mutex: Do not include <bits/c++14_warning.h>.
* include/std/string_view: Do not include <bits/c++17_warning.h>.
(__cpp_lib_string_view): Define.
* include/std/variant: Do not include <bits/c++17_warning.h>.
(__cpp_lib_variant): Define.
* testsuite/20_util/optional/cons/value_neg.cc: Adjust dg-error line
numbers.
* testsuite/26_numerics/gcd/1.cc: Test for __cpp_lib_gcd_lcm.
* testsuite/26_numerics/gcd/gcd_neg.cc: Adjust dg-error line
numbers.
* testsuite/26_numerics/lcm/1.cc: Test for __cpp_lib_gcd_lcm.
* testsuite/26_numerics/lcm/lcm_neg.cc: Adjust dg-error line
numbers.
* testsuite/30_threads/scoped_lock/requirements/typedefs.cc: Adjust
expected value of __cpp_lib_scoped_lock.
From-SVN: r252018
* doc/xml/manual/intro.xml: Document LWG 2442 status.
* include/std/mutex [_GLIBCXX_HAVE_TLS] (__once_call_impl): Remove.
[_GLIBCXX_HAVE_TLS] (_Once_call): Declare primary template and define
partial specialization to unpack args and forward to std::invoke.
(call_once) [_GLIBCXX_HAVE_TLS]: Use forward_as_tuple and _Once_call
instead of __bind_simple and __once_call_impl.
(call_once) [!_GLIBCXX_HAVE_TLS]: Use __invoke instead of
__bind_simple.
* testsuite/30_threads/call_once/dr2442.cc: New test.
From-SVN: r241031
2015-09-02 Sebastian Huber <sebastian.huber@embedded-brains.de>
PR libstdc++/67408
* include/std/mutex (__timed_mutex_impl::_M_try_lock_until): Use
_Derived::_M_timedlock().
(timed_mutex): Add _M_timedlock() and make base class a friend.
(recursive_timed_mutex): Likewise.
From-SVN: r227400
PR libstdc++/54562
* include/std/mutex (__timed_mutex_impl::__clock_t): Use
high_resolution_clock for absolute timeouts, because
pthread_mutex_timedlock uses CLOCK_REALTIME not CLOCK_MONOTONIC.
(__timed_mutex_impl::_M_try_lock_for): Use steady_clock for relative
timeouts as per [thread.req.timing].
(__timed_mutex_impl::_M_try_lock_until<Clock,Duration>): Convert to
__clock_t time point instead of using _M_try_lock_for.
From-SVN: r204672
PR libstdc++/57641
* include/std/mutex (timed_mutex, recursive_timed_mutex): Move common
functionality to new __timed_mutex_impl mixin. Overload try_lock_until
to handle conversion between different clocks. Replace constrained
__try_lock_for_impl overloads with conditional increment.
* include/std/shared_mutex (shared_mutex::_Mutex): Use the new mixin.
* testsuite/30_threads/timed_mutex/try_lock_until/57641.cc: New.
From-SVN: r200180
* include/std/mutex (call_once): Remove parentheses to fix error in
c++1y and gnu++1y mode.
* testsuite/30_threads/mutex/try_lock/2.cc: Call try_lock() in new
thread to avoid undefined behaviour.
From-SVN: r199875
PR libstdc++/56002
* include/std/mutex (lock_guard, unique_lock, lock): Define without
depending on _GLIBCXX_HAS_GTHREADS.
* testsuite/30_threads/lock_guard/cons/1.cc: Run on all targets.
From-SVN: r196706