Address review comments

Gabriel Majeri 2018-10-05 08:50:17 +03:00
parent 7e921aa590
commit 6ba5584712


@@ -12,11 +12,11 @@
//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```rust
//! static mut A: u32 = 0;
@@ -35,8 +35,10 @@
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, result is stored in `A` and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//!
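The body of the example is elided by the hunk above; a minimal self-contained sketch along the same lines (the initial values 3 and 4 are an assumption, chosen to match the `7 4 4` output mentioned below) could look like this:

```rust
static mut A: u32 = 0;
static mut B: u32 = 0;
static mut C: u32 = 0;

fn main() {
    // Any access to a `static mut` requires an `unsafe` block.
    unsafe {
        A = 3;
        B = 4;
        A = A + B;
        C = B;

        // Copy into locals before printing: recent editions reject taking
        // references to a `static mut`, which `println!` would otherwise do.
        let (a, b, c) = (A, B, C);
        println!("{a} {b} {c}"); // a single-threaded run prints "7 4 4"

        C = A;
    }
}
```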
@@ -50,17 +52,19 @@
//!   in a temporary location until it gets printed, with the global variable
//!   never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
@@ -74,7 +78,7 @@
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads at the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
@@ -83,20 +87,20 @@
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e. multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
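As a rough sketch of the second kind of primitive, the snippet below uses atomic operations with release/acquire ordering to publish a value from one thread to another; an explicit `std::sync::atomic::fence` could be used instead to get the same pairing with relaxed accesses:

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        // `Release` guarantees the store to DATA above is visible to any
        // thread that observes `READY == true` with an `Acquire` load.
        READY.store(true, Ordering::Release);
    });

    let consumer = thread::spawn(|| {
        // Spin until the flag is set; `Acquire` pairs with the `Release`
        // store above and orders the read of DATA after it.
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}
```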
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: crate::sync::atomic::compiler_fence
@@ -111,29 +115,49 @@
//! inconvenient to use, which is why the standard library also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in the standard library are usually
//! implemented with help from the operating system's kernel, which is
//! able to reschedule the threads while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Condvar`]: Condition Variable, providing the ability to block
//!   a thread while waiting for an event to occur.
//!
//! - [`mpsc`]: Multi-producer, single-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-thread synchronization mechanism, at the cost of some
//!   extra memory.
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one thread at a time is able to access some data.
//!
//! - [`Once`]: Used for thread-safe, one-time initialization of a
//!   global variable.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
//!
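A small usage sketch for two of the objects listed above, `Arc` and `Mutex`: several threads share one counter, and the mutex guarantees the increments don't race:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: `Arc` keeps it alive across threads,
    // `Mutex` ensures only one thread mutates it at a time.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```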
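And a similar sketch for `mpsc`: multiple producer threads send messages over a channel to a single consumer:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender, receiver) = mpsc::channel();

    // Each producer gets its own clone of the sending half.
    for id in 0..3 {
        let sender = sender.clone();
        thread::spawn(move || {
            sender.send(format!("hello from thread {id}")).unwrap();
        });
    }
    // Drop the original sender so `recv` returns an error (ending the
    // loop) once all the producers are done.
    drop(sender);

    while let Ok(message) = receiver.recv() {
        println!("{message}");
    }
}
```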
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`mpsc`]: crate::sync::mpsc
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`RwLock`]: crate::sync::RwLock
#![stable(feature = "rust1", since = "1.0.0")]