Auto merge of #77727 - thomcc:mach-info-order, r=Amanieu

Avoid SeqCst or static mut in mach_timebase_info and QueryPerformanceFrequency caches

This patch went through a couple of iterations, but the end result is replacing a pattern in which an `AtomicUsize` (updated with many `SeqCst` operations) guards a `static mut`, with a single `AtomicU64` for which 0 is known to be an invalid value and can therefore mark the cache as uninitialized.

The code in both places exists to cache values used in the conversion of Instants to Durations on macOS, iOS, and Windows.
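
For illustration only, here is a minimal sketch of the caching pattern the patch moves to (simplified names, not the actual std code): a value that is known to be nonzero once computed is packed into an `AtomicU64`, with 0 standing for "not initialized yet", and accessed with `Relaxed` loads and stores.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch only: `CACHE` holds either 0 ("not initialized yet") or the cached
// nonzero value. The names and the queried value are made up for illustration.
static CACHE: AtomicU64 = AtomicU64::new(0);

fn cached_value() -> u64 {
    // Relaxed is enough: we only care about this single memory location, and
    // if two threads race, both compute the same value and the duplicate
    // store is harmless.
    let v = CACHE.load(Ordering::Relaxed);
    if v != 0 {
        return v;
    }
    let computed = expensive_query();
    CACHE.store(computed, Ordering::Relaxed);
    computed
}

// Stand-in for mach_timebase_info / QueryPerformanceFrequency; assumed to
// never return 0.
fn expensive_query() -> u64 {
    1_000_000_007
}

fn main() {
    assert_eq!(cached_value(), cached_value());
}
```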

I have no numbers to prove that this improves performance (it seems a little futile to benchmark something like this), but it's much simpler and safer, and in practice we'd expect it to be faster anywhere `Relaxed` operations on `AtomicU64` are cheaper than `SeqCst` operations on `AtomicUsize`, which is a lot of places.

Anyway, it also removes a bunch of unsafe code and greatly simplifies the logic, so IMO that alone would be worth it unless it were a regression.

If you want to take a look at the assembly output, see https://godbolt.org/z/rbr6vn for x86_64 and https://godbolt.org/z/cqcbqv for aarch64. (Note that this is just the output of the macOS side, but I'd expect the Windows part to be the same and I don't feel like doing another godbolt for it.) There are several versions of this function in the godbolt:

- `info_new`: version in the current patch
- `info_less_new`: version in initial PR
- `info_original`: version currently in the tree
- `info_orig_but_better_orderings`: a version that just tries to change the original code's orderings from SeqCst to the (probably) minimal orderings required for soundness/correctness (roughly sketched below).
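
For reference, the `info_orig_but_better_orderings` variant presumably looks something like the following sketch (my reconstruction, not the exact godbolt source): keep the original two-static scheme, but use `Acquire` on the fast-path load and `Release` on the final store, which is probably the minimum needed for the read of the `static mut` to be sound.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

#[derive(Copy, Clone)]
struct TimebaseInfo {
    numer: u32,
    denom: u32,
}

static mut INFO: TimebaseInfo = TimebaseInfo { numer: 0, denom: 0 };
static STATE: AtomicUsize = AtomicUsize::new(0);

fn info() -> TimebaseInfo {
    unsafe {
        // Acquire pairs with the Release store below: if we observe state 2,
        // we also observe the write to INFO.
        if STATE.load(Ordering::Acquire) == 2 {
            return INFO;
        }
        // Stand-in for the mach_timebase_info syscall.
        let info = query_timebase();
        // The CAS itself can be Relaxed: winning it only grants the exclusive
        // right to write INFO before publishing it with the Release store.
        if STATE.compare_exchange(0, 1, Ordering::Relaxed, Ordering::Relaxed).is_ok() {
            INFO = info;
            STATE.store(2, Ordering::Release);
        }
        info
    }
}

fn query_timebase() -> TimebaseInfo {
    TimebaseInfo { numer: 125, denom: 3 }
}

fn main() {
    let _ = info();
}
```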

The biggest concern I have here is whether we can use `AtomicU64` at all, i.e. whether any of the targets this code supports lack it. AFAICT: no. (If that changes in the future, it's easy enough to do something different for them.)
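
If such a target ever did show up, a (hypothetical, not part of this patch) way to handle it would be to select the implementation with the compiler's built-in atomic-width cfg and keep the old scheme as a fallback:

```rust
// Hypothetical sketch (not part of the patch): if a supported target ever
// lacked 64-bit atomics, the implementation could be selected with the
// built-in cfg for atomic widths.
#[cfg(target_has_atomic = "64")]
fn has_wide_atomics() -> bool {
    true
}

#[cfg(not(target_has_atomic = "64"))]
fn has_wide_atomics() -> bool {
    // Here one could keep the old AtomicUsize + static mut scheme instead.
    false
}

fn main() {
    println!("AtomicU64 available: {}", has_wide_atomics());
}
```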

r? `@Amanieu` because he caught a couple of issues the last time I tried to do a patch reducing orderings 😅

---

<details>
<summary>I rewrote this whole message so the original is inside here</summary>

I happened to notice the code we use for caching the result of mach_timebase_info uses SeqCst exclusively.

However, thinking a little more, it's actually pretty easy to avoid the static mut by packing the timebase info into an AtomicU64.

This entirely avoids needing to do the compare_exchange. The AtomicU64 can be read/written using Relaxed ops, which on current macOS/iOS platforms (x86_64/aarch64) have no overhead compared to direct loads/stores. This simplifies the code and makes it a lot safer too.

I have no numbers to prove that this improves performance (it seems a little futile to benchmark something like this), although it should do that on both targets it applies to.

That said, it also removes a bunch of unsafe code and simplifies the logic (arguably at least — there are only two states now, initialized or not), so I think it's a net win even without concrete numbers.

If you want to take a look at the assembly output though, see below. It has the new version, the original, and a version of the original with weaker orderings (which is still worse than the version in this PR).

- godbolt.org/z/obfqf9 x86_64-apple-darwin

- godbolt.org/z/Wz5cWc aarch64-unknown-linux-gnu (godbolt can't do aarch64-apple-ios but that doesn't matter here)

A different (and more efficient) option than this would be to just use the AtomicU64 and rely on the knowledge that after initialization the denominator should be nonzero... That felt like it was relying on too many things I'm not confident in, so I didn't want to do that.
</details>
bors 2020-10-11 14:06:04 +00:00
commit bc74dd711f
2 changed files with 53 additions and 45 deletions

@@ -117,8 +117,7 @@ impl Hash for Timespec {
 #[cfg(any(target_os = "macos", target_os = "ios"))]
 mod inner {
     use crate::fmt;
-    use crate::mem;
-    use crate::sync::atomic::{AtomicUsize, Ordering::SeqCst};
+    use crate::sync::atomic::{AtomicU64, Ordering};
     use crate::sys::cvt;
     use crate::sys_common::mul_div_u64;
     use crate::time::Duration;
@@ -233,31 +232,42 @@ mod inner {
     }

     fn info() -> mach_timebase_info {
-        static mut INFO: mach_timebase_info = mach_timebase_info { numer: 0, denom: 0 };
-        static STATE: AtomicUsize = AtomicUsize::new(0);
-
-        unsafe {
-            // If a previous thread has filled in this global state, use that.
-            if STATE.load(SeqCst) == 2 {
-                return INFO;
-            }
-
-            // ... otherwise learn for ourselves ...
-            let mut info = mem::zeroed();
-            extern "C" {
-                fn mach_timebase_info(info: mach_timebase_info_t) -> kern_return_t;
-            }
-
-            mach_timebase_info(&mut info);
-
-            // ... and attempt to be the one thread that stores it globally for
-            // all other threads
-            if STATE.compare_exchange(0, 1, SeqCst, SeqCst).is_ok() {
-                INFO = info;
-                STATE.store(2, SeqCst);
-            }
-            return info;
-        }
+        // INFO_BITS conceptually is an `Option<mach_timebase_info>`. We can do
+        // this in 64 bits because we know 0 is never a valid value for the
+        // `denom` field.
+        //
+        // Encoding this as a single `AtomicU64` allows us to use `Relaxed`
+        // operations, as we are only interested in the effects on a single
+        // memory location.
+        static INFO_BITS: AtomicU64 = AtomicU64::new(0);
+
+        // If a previous thread has initialized `INFO_BITS`, use it.
+        let info_bits = INFO_BITS.load(Ordering::Relaxed);
+        if info_bits != 0 {
+            return info_from_bits(info_bits);
+        }
+
+        // ... otherwise learn for ourselves ...
+        extern "C" {
+            fn mach_timebase_info(info: mach_timebase_info_t) -> kern_return_t;
+        }
+
+        let mut info = info_from_bits(0);
+        unsafe {
+            mach_timebase_info(&mut info);
+        }
+        INFO_BITS.store(info_to_bits(info), Ordering::Relaxed);
+        info
     }
+
+    #[inline]
+    fn info_to_bits(info: mach_timebase_info) -> u64 {
+        ((info.denom as u64) << 32) | (info.numer as u64)
+    }
+
+    #[inline]
+    fn info_from_bits(bits: u64) -> mach_timebase_info {
+        mach_timebase_info { numer: bits as u32, denom: (bits >> 32) as u32 }
+    }
 }
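
As a quick sanity check on the bit-packing above (not part of the diff, and using a stand-in struct rather than the real `mach_timebase_info` type): the two helpers round-trip, and the packed value can only be 0 when `denom` is 0, which is exactly the sentinel the cache relies on.

```rust
#[derive(Copy, Clone, PartialEq, Debug)]
struct MachTimebaseInfo {
    numer: u32,
    denom: u32,
}

fn info_to_bits(info: MachTimebaseInfo) -> u64 {
    ((info.denom as u64) << 32) | (info.numer as u64)
}

fn info_from_bits(bits: u64) -> MachTimebaseInfo {
    MachTimebaseInfo { numer: bits as u32, denom: (bits >> 32) as u32 }
}

fn main() {
    // Example values only; the exact numbers don't matter for the
    // round-trip property.
    let info = MachTimebaseInfo { numer: 125, denom: 3 };
    let bits = info_to_bits(info);
    assert_ne!(bits, 0, "any value with denom != 0 packs to a nonzero u64");
    assert_eq!(info_from_bits(bits), info);
}
```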

@@ -165,7 +165,7 @@ fn intervals2dur(intervals: u64) -> Duration {
 mod perf_counter {
     use super::NANOS_PER_SEC;
-    use crate::sync::atomic::{AtomicUsize, Ordering::SeqCst};
+    use crate::sync::atomic::{AtomicU64, Ordering};
     use crate::sys::c;
     use crate::sys::cvt;
     use crate::sys_common::mul_div_u64;
@@ -197,27 +197,25 @@ mod perf_counter {
     }

     fn frequency() -> c::LARGE_INTEGER {
-        static mut FREQUENCY: c::LARGE_INTEGER = 0;
-        static STATE: AtomicUsize = AtomicUsize::new(0);
-
-        unsafe {
-            // If a previous thread has filled in this global state, use that.
-            if STATE.load(SeqCst) == 2 {
-                return FREQUENCY;
-            }
-
-            // ... otherwise learn for ourselves ...
-            let mut frequency = 0;
-            cvt(c::QueryPerformanceFrequency(&mut frequency)).unwrap();
-
-            // ... and attempt to be the one thread that stores it globally for
-            // all other threads
-            if STATE.compare_exchange(0, 1, SeqCst, SeqCst).is_ok() {
-                FREQUENCY = frequency;
-                STATE.store(2, SeqCst);
-            }
-            frequency
-        }
+        // Either the cached result of `QueryPerformanceFrequency` or `0` for
+        // uninitialized. Storing this as a single `AtomicU64` allows us to use
+        // `Relaxed` operations, as we are only interested in the effects on a
+        // single memory location.
+        static FREQUENCY: AtomicU64 = AtomicU64::new(0);
+
+        let cached = FREQUENCY.load(Ordering::Relaxed);
+        // If a previous thread has filled in this global state, use that.
+        if cached != 0 {
+            return cached as c::LARGE_INTEGER;
+        }
+        // ... otherwise learn for ourselves ...
+        let mut frequency = 0;
+        unsafe {
+            cvt(c::QueryPerformanceFrequency(&mut frequency)).unwrap();
+        }
+        FREQUENCY.store(frequency as u64, Ordering::Relaxed);
+        frequency
     }

     fn query() -> c::LARGE_INTEGER {