Auto merge of #25243 - Manishearth:rollup, r=Manishearth

- Successful merges: #25216, #25227
- Failed merges:
bors 2015-05-09 13:10:43 +00:00
commit 497942332f
75 changed files with 203 additions and 198 deletions


@ -226,7 +226,7 @@ pub fn run_tests(config: &Config) {
}
// android debug-info test uses remote debugger
// so, we test 1 task at once.
// so, we test 1 thread at once.
// also trying to isolate problems with adb_run_wrapper.sh looping
env::set_var("RUST_TEST_THREADS","1");
}
@ -234,7 +234,7 @@ pub fn run_tests(config: &Config) {
match config.mode {
DebugInfoLldb => {
// Some older versions of LLDB seem to have problems with multiple
// instances running in parallel, so only run one test task at a
// instances running in parallel, so only run one test thread at a
// time.
env::set_var("RUST_TEST_THREADS", "1");
}


@ -96,7 +96,7 @@ code should need to run is a stack.
possibility is covered by the `match`, adding further variants to the `enum`
in the future will prompt a compilation failure, rather than runtime panic.
Second, it makes cost explicit. In general, the only safe way to have a
non-exhaustive match would be to panic the task if nothing is matched, though
non-exhaustive match would be to panic the thread if nothing is matched, though
it could fall through if the type of the `match` expression is `()`. This sort
of hidden cost and special casing is against the language's philosophy. It's
easy to ignore certain cases by using the `_` wildcard:


@ -62,15 +62,15 @@ Data values in the language can only be constructed through a fixed set of initi
* There is no global inter-crate namespace; all name management occurs within a crate.
* Using another crate binds the root of _its_ namespace into the user's namespace.
## Why is panic unwinding non-recoverable within a task? Why not try to "catch exceptions"?
## Why is panic unwinding non-recoverable within a thread? Why not try to "catch exceptions"?
In short, because too few guarantees could be made about the dynamic environment of the catch block, as well as invariants holding in the unwound heap, to be able to safely resume; we believe that other methods of signalling and logging errors are more appropriate, with tasks playing the role of a "hard" isolation boundary between separate heaps.
In short, because too few guarantees could be made about the dynamic environment of the catch block, as well as invariants holding in the unwound heap, to be able to safely resume; we believe that other methods of signalling and logging errors are more appropriate, with threads playing the role of a "hard" isolation boundary between separate heaps.
Rust provides, instead, three predictable and well-defined options for handling any combination of the three main categories of "catch" logic:
* Failure _logging_ is done by the integrated logging subsystem.
* _Recovery_ after a panic is done by trapping a task panic from _outside_
the task, where other tasks are known to be unaffected.
* _Recovery_ after a panic is done by trapping a thread panic from _outside_
the thread, where other threads are known to be unaffected.
* _Cleanup_ of resources is done by RAII-style objects with destructors.
Cleanup through RAII-style destructors is more likely to work than in catch blocks anyways, since it will be better tested (part of the non-error control paths, so executed all the time).
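On modern Rust (a sketch using the stable `std::thread` API, which postdates this FAQ's task terminology), trapping a panic at the thread boundary looks like this: `join` surfaces the outcome as a `Result`, carrying the panic payload in the `Err` case, while the parent thread continues unaffected.

```rust
use std::thread;

// Runs `f` on a fresh thread; returns Err if it panicked, with the
// panic payload as the Err value. The calling thread is unaffected.
fn run_isolated<F, T>(f: F) -> thread::Result<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::spawn(f).join()
}

fn main() {
    assert!(run_isolated(|| 1 + 1).is_ok());
    assert!(run_isolated(|| -> () { panic!("boom") }).is_err());
    println!("parent thread survived the child's panic");
}
```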
@ -91,8 +91,8 @@ We don't know if there's an obvious, easy, efficient, stock-textbook way of supp
There's a lot of debate on this topic; it's easy to find a proponent of default-sync or default-async communication, and there are good reasons for either. Our choice rests on the following arguments:
* Part of the point of isolating tasks is to decouple tasks from one another, such that assumptions in one task do not cause undue constraints (or bugs, if violated!) in another. Temporal coupling is as real as any other kind; async-by-default relaxes the default case to only _causal_ coupling.
* Default-async supports buffering and batching communication, reducing the frequency and severity of task-switching and inter-task / inter-domain synchronization.
* Part of the point of isolating threads is to decouple threads from one another, such that assumptions in one thread do not cause undue constraints (or bugs, if violated!) in another. Temporal coupling is as real as any other kind; async-by-default relaxes the default case to only _causal_ coupling.
* Default-async supports buffering and batching communication, reducing the frequency and severity of thread-switching and inter-thread / inter-domain synchronization.
* Default-async with transmittable channels is the lowest-level building block on which more-complex synchronization topologies and strategies can be built; it is not clear to us that the majority of cases fit the 2-party full-synchronization pattern rather than some more complex multi-party or multi-stage scenario. We did not want to force all programs to pay for wiring the former assumption into all communications.
## Why are channels half-duplex (one-way)?


@ -789,8 +789,8 @@ bound := path | lifetime
### Boxes
## Tasks
## Threads
### Communication between tasks
### Communication between threads
### Task lifecycle
### Thread lifecycle


@ -3635,7 +3635,7 @@ that have since been removed):
* ML Kit, Cyclone: region based memory management
* Haskell (GHC): typeclasses, type families
* Newsqueak, Alef, Limbo: channels, concurrency
* Erlang: message passing, task failure, ~~linked task failure~~,
* Erlang: message passing, thread failure, ~~linked thread failure~~,
~~lightweight concurrency~~
* Swift: optional bindings
* Scheme: hygienic macros


@ -1,7 +1,7 @@
% Handling errors
### Use task isolation to cope with failure. [FIXME]
### Use thread isolation to cope with failure. [FIXME]
> **[FIXME]** Explain how to isolate tasks and detect task failure for recovery.
> **[FIXME]** Explain how to isolate threads and detect thread failure for recovery.
### Consuming `Result` [FIXME]


@ -11,13 +11,13 @@ Errors fall into one of three categories:
The basic principle of the convention is that:
* Catastrophic errors and programming errors (bugs) can and should only be
recovered at a *coarse grain*, i.e. a task boundary.
recovered at a *coarse grain*, i.e. a thread boundary.
* Obstructions preventing an operation should be reported at a maximally *fine
grain* -- to the immediate invoker of the operation.
## Catastrophic errors
An error is _catastrophic_ if there is no meaningful way for the current task to
An error is _catastrophic_ if there is no meaningful way for the current thread to
continue after the error occurs.
Catastrophic errors are _extremely_ rare, especially outside of `libstd`.
@ -28,7 +28,7 @@ Catastrophic errors are _extremely_ rare, especially outside of `libstd`.
For errors like stack overflow, Rust currently aborts the process, but
could in principle panic, which (in the best case) would allow
reporting and recovery from a supervisory task.
reporting and recovery from a supervisory thread.
## Contract violations
@ -44,7 +44,7 @@ existing borrows have been relinquished.
A contract violation is always a bug, and for bugs we follow the Erlang
philosophy of "let it crash": we assume that software *will* have bugs, and we
design coarse-grained task boundaries to report, and perhaps recover, from these
design coarse-grained thread boundaries to report, and perhaps recover, from these
bugs.
### Contract design


@ -23,7 +23,7 @@ If `T` is such a data structure, consider introducing a `T` _builder_:
4. The builder should provide one or more "_terminal_" methods for actually building a `T`.
The builder pattern is especially appropriate when building a `T` involves side
effects, such as spawning a task or launching a process.
effects, such as spawning a thread or launching a process.
In Rust, there are two variants of the builder pattern, differing in the
treatment of ownership, as described below.
@ -115,24 +115,24 @@ Sometimes builders must transfer ownership when constructing the final type
`T`, meaning that the terminal methods must take `self` rather than `&self`:
```rust
// A simplified excerpt from std::task::TaskBuilder
// A simplified excerpt from std::thread::Builder
impl TaskBuilder {
/// Name the task-to-be. Currently the name is used for identification
impl ThreadBuilder {
/// Name the thread-to-be. Currently the name is used for identification
/// only in failure messages.
pub fn named(mut self, name: String) -> TaskBuilder {
pub fn named(mut self, name: String) -> ThreadBuilder {
self.name = Some(name);
self
}
/// Redirect task-local stdout.
pub fn stdout(mut self, stdout: Box<Writer + Send>) -> TaskBuilder {
/// Redirect thread-local stdout.
pub fn stdout(mut self, stdout: Box<Writer + Send>) -> ThreadBuilder {
self.stdout = Some(stdout);
// ^~~~~~ this is owned and cannot be cloned/re-used
self
}
/// Creates and executes a new child task.
/// Creates and executes a new child thread.
pub fn spawn(self, f: proc():Send) {
// consume self
...
@ -141,7 +141,7 @@ impl TaskBuilder {
```
Here, the `stdout` configuration involves passing ownership of a `Writer`,
which must be transferred to the task upon construction (in `spawn`).
which must be transferred to the thread upon construction (in `spawn`).
When the terminal methods of the builder require ownership, there is a basic tradeoff:
@ -158,17 +158,17 @@ builder methods for a consuming builder should take and return an owned
```rust
// One-liners
TaskBuilder::new().named("my_task").spawn(proc() { ... });
ThreadBuilder::new().named("my_thread").spawn(proc() { ... });
// Complex configuration
let mut task = TaskBuilder::new();
task = task.named("my_task_2"); // must re-assign to retain ownership
let mut thread = ThreadBuilder::new();
thread = thread.named("my_thread_2"); // must re-assign to retain ownership
if reroute {
task = task.stdout(mywriter);
thread = thread.stdout(mywriter);
}
task.spawn(proc() { ... });
thread.spawn(proc() { ... });
```
One-liners work as before, because ownership is threaded through each of the
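`TaskBuilder` and `proc()` are pre-1.0 APIs. As a hedged modern sketch (not the text's API), the same ownership-threading pattern with today's `std::thread::Builder`, whose `name` and terminal `spawn` likewise take `self` by value:

```rust
use std::thread;

// Spawns a named thread and reports the name the child observes.
fn spawn_named(name: &str) -> Option<String> {
    thread::Builder::new()
        .name(name.to_string()) // takes and returns the builder by value
        .spawn(|| thread::current().name().map(str::to_owned))
        .expect("failed to spawn thread") // spawn returns io::Result<JoinHandle<T>>
        .join()
        .unwrap()
}

fn main() {
    // One-liner: ownership is threaded through each call.
    assert_eq!(spawn_named("my_thread").as_deref(), Some("my_thread"));

    // Complex configuration must re-assign to retain ownership.
    let mut builder = thread::Builder::new();
    builder = builder.name("my_thread_2".to_string());
    let handle = builder.spawn(|| ()).expect("failed to spawn thread");
    handle.join().unwrap();
}
```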


@ -8,7 +8,7 @@ go out of scope.
### Destructors should not fail. [FIXME: needs RFC]
Destructors are executed on task failure, and in that context a failing
Destructors are executed on thread failure, and in that context a failing
destructor causes the program to abort.
Instead of failing in a destructor, provide a separate method for checking for
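A hedged sketch of that guideline with a hypothetical `Connection` type: teardown failure is reported through an explicit `close(self) -> Result`, while `Drop` only performs silent best-effort cleanup.

```rust
// Hypothetical resource type illustrating the guideline.
struct Connection {
    closed: bool,
}

impl Connection {
    fn new() -> Connection {
        Connection { closed: false }
    }

    /// Explicit, checkable teardown: failures surface as an Err
    /// instead of panicking inside a destructor.
    fn close(mut self) -> Result<(), String> {
        self.closed = true;
        Ok(()) // a real implementation could fail here
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Never panic here: drop may run during unwinding, where a
        // second panic aborts the process. Best-effort only.
        if !self.closed {
            self.closed = true;
        }
    }
}

fn main() {
    let conn = Connection::new();
    assert!(conn.close().is_ok());
}
```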


@ -5,7 +5,7 @@
Use line comments:
``` rust
// Wait for the main task to return, and set the process error code
// Wait for the main thread to return, and set the process error code
// appropriately.
```
@ -13,7 +13,7 @@ Instead of:
``` rust
/*
* Wait for the main task to return, and set the process error code
* Wait for the main thread to return, and set the process error code
* appropriately.
*/
```
@ -55,7 +55,7 @@ For example:
/// Sets up a default runtime configuration, given compiler-supplied arguments.
///
/// This function will block until the entire pool of M:N schedulers has
/// exited. This function also requires a local task to be available.
/// exited. This function also requires a local thread to be available.
///
/// # Arguments
///
@ -64,7 +64,7 @@ For example:
/// * `main` - The initial procedure to run inside of the M:N scheduling pool.
/// Once this procedure exits, the scheduling pool will begin to shut
/// down. The entire pool (and this function) will only return once
/// all child tasks have finished executing.
/// all child threads have finished executing.
///
/// # Return value
///


@ -5,7 +5,7 @@ they enclose. Accessor methods often have variants to access the data
by value, by reference, and by mutable reference.
In general, the `get` family of methods is used to access contained
data without any risk of task failure; they return `Option` as
data without any risk of thread failure; they return `Option` as
appropriate. This name is chosen rather than names like `find` or
`lookup` because it is appropriate for a wider range of container types.
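For example, with today's `std::collections::HashMap`: `get` reports absence through `Option`, whereas indexing panics on a missing key.

```rust
use std::collections::HashMap;

// Looks a key up without any risk of panicking.
fn lookup(map: &HashMap<&str, i32>, key: &str) -> Option<i32> {
    map.get(key).copied()
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("alice", 10);

    // `get` reports absence through Option...
    assert_eq!(lookup(&scores, "alice"), Some(10));
    assert_eq!(lookup(&scores, "bob"), None);

    // ...whereas indexing would panic on a missing key.
    assert_eq!(scores["alice"], 10);
}
```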


@ -6,7 +6,7 @@ and more cores, yet many programmers aren't prepared to fully utilize them.
Rust's memory safety features also apply to its concurrency story too. Even
concurrent Rust programs must be memory safe, having no data races. Rust's type
system is up to the task, and gives you powerful ways to reason about
concurrent code at compile time.
Before we talk about the concurrency features that come with Rust, it's important


@ -42,7 +42,7 @@ loop is just a handy way to write this `loop`/`match`/`break` construct.
`for` loops aren't the only thing that uses iterators, however. Writing your
own iterator involves implementing the `Iterator` trait. While doing that is
outside of the scope of this guide, Rust provides a number of useful iterators
to accomplish various tasks. Before we talk about those, we should talk about a
Rust anti-pattern. And that's using ranges like this.
Yes, we just talked about how ranges are cool. But ranges are also very


@ -31,7 +31,7 @@
//!
//! # Examples
//!
//! Sharing some immutable data between tasks:
//! Sharing some immutable data between threads:
//!
//! ```no_run
//! use std::sync::Arc;
@ -48,7 +48,7 @@
//! }
//! ```
//!
//! Sharing mutable data safely between tasks with a `Mutex`:
//! Sharing mutable data safely between threads with a `Mutex`:
//!
//! ```no_run
//! use std::sync::{Arc, Mutex};
@ -89,9 +89,9 @@ use heap::deallocate;
///
/// # Examples
///
/// In this example, a large vector of floats is shared between several tasks.
/// In this example, a large vector of floats is shared between several threads.
/// With simple pipes, without `Arc`, a copy would have to be made for each
/// task.
/// thread.
///
/// When you clone an `Arc<T>`, it will create another pointer to the data and
/// increase the reference counter.
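A runnable sketch of that behavior on current stable Rust (the old pipes API is gone; this assumes plain `thread::spawn` plus `Arc::clone` sharing one allocation):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let numbers: Arc<Vec<f64>> = Arc::new(vec![1.0, 2.0, 3.0]);
    assert_eq!(Arc::strong_count(&numbers), 1);

    // Each clone copies only the pointer and bumps the reference
    // counter; the vector itself is never duplicated.
    let handles: Vec<_> = (0..3)
        .map(|_| {
            let shared = Arc::clone(&numbers);
            thread::spawn(move || shared.iter().sum::<f64>())
        })
        .collect();

    for handle in handles {
        assert_eq!(handle.join().unwrap(), 6.0);
    }
}
```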


@ -26,14 +26,14 @@
//! There can only be one owner of a `Box`, and the owner can decide to mutate
//! the contents, which live on the heap.
//!
//! This type can be sent among tasks efficiently as the size of a `Box` value
//! This type can be sent among threads efficiently as the size of a `Box` value
//! is the same as that of a pointer. Tree-like data structures are often built
//! with boxes because each node often has only one owner, the parent.
//!
//! ## Reference counted pointers
//!
//! The [`Rc`](rc/index.html) type is a non-threadsafe reference-counted pointer
//! type intended for sharing memory within a task. An `Rc` pointer wraps a
//! type intended for sharing memory within a thread. An `Rc` pointer wraps a
//! type, `T`, and only allows access to `&T`, a shared reference.
//!
//! This type is useful when inherited mutability (such as using `Box`) is too
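A small sketch of `Rc` sharing within a single thread, using the stable `Rc::strong_count` accessor to observe the reference count:

```rust
use std::rc::Rc;

fn main() {
    let node = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&node);

    // Both handles point at the same allocation; the count is 2.
    assert_eq!(Rc::strong_count(&node), 2);
    assert_eq!(*alias, vec![1, 2, 3]);

    // Dropping one handle decrements the count.
    drop(alias);
    assert_eq!(Rc::strong_count(&node), 1);
}
```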


@ -644,6 +644,10 @@ impl BitVec {
/// Splits the `BitVec` into two at the given bit,
/// retaining the first half in-place and returning the second one.
///
/// # Panics
///
/// Panics if `at` is out of bounds.
///
/// # Examples
///
/// ```


@ -52,20 +52,20 @@
//! spinlock_clone.store(0, Ordering::SeqCst);
//! });
//!
//! // Wait for the other task to release the lock
//! // Wait for the other thread to release the lock
//! while spinlock.load(Ordering::SeqCst) != 0 {}
//! }
//! ```
//!
//! Keep a global count of live tasks:
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};
//!
//! static GLOBAL_TASK_COUNT: AtomicUsize = ATOMIC_USIZE_INIT;
//! static GLOBAL_THREAD_COUNT: AtomicUsize = ATOMIC_USIZE_INIT;
//!
//! let old_task_count = GLOBAL_TASK_COUNT.fetch_add(1, Ordering::SeqCst);
//! println!("live tasks: {}", old_task_count + 1);
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::SeqCst);
//! println!("live threads: {}", old_thread_count + 1);
//! ```
#![stable(feature = "rust1", since = "1.0.0")]
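`ATOMIC_USIZE_INIT` was deprecated after this was written; as a hedged modern sketch, the same counter uses `AtomicUsize::new`, which is a `const fn` and therefore usable in a `static`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                // fetch_add returns the previous value.
                GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::SeqCst);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(GLOBAL_THREAD_COUNT.load(Ordering::SeqCst), 4);
}
```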


@ -24,7 +24,8 @@
//! claim temporary, exclusive, mutable access to the inner value. Borrows for `RefCell<T>`s are
//! tracked 'at runtime', unlike Rust's native reference types which are entirely tracked
//! statically, at compile time. Because `RefCell<T>` borrows are dynamic it is possible to attempt
//! to borrow a value that is already mutably borrowed; when this happens it results in task panic.
//! to borrow a value that is already mutably borrowed; when this happens it results in thread
//! panic.
//!
//! # When to choose interior mutability
//!
@ -100,7 +101,7 @@
//! // Recursive call to return the just-cached value.
//! // Note that if we had not let the previous borrow
//! // of the cache fall out of scope then the subsequent
//! // recursive borrow would cause a dynamic task panic.
//! // recursive borrow would cause a dynamic thread panic.
//! // This is the major hazard of using `RefCell`.
//! self.minimum_spanning_tree()
//! }
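The dynamic-borrow rules can be exercised without a panic via `try_borrow_mut` (stabilized later than this text was written; a sketch):

```rust
use std::cell::RefCell;

fn main() {
    let cache = RefCell::new(vec![1, 2, 3]);

    {
        let _writer = cache.borrow_mut();
        // A second mutable borrow while `_writer` is live would panic
        // through `borrow_mut`; `try_borrow_mut` reports it as Err.
        assert!(cache.try_borrow_mut().is_err());
    } // `_writer` falls out of scope here

    // The dynamic borrow has been released, so borrowing succeeds.
    assert!(cache.try_borrow_mut().is_ok());
}
```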


@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/// Entry point of task panic, for details, see std::macros
/// Entry point of thread panic, for details, see std::macros
#[macro_export]
macro_rules! panic {
() => (


@ -228,7 +228,7 @@ thread_local! {
}
}
/// A trait used to represent an interface to a task-local logger. Each task
/// A trait used to represent an interface to a thread-local logger. Each thread
/// can have its own custom logger which can respond to logging messages
/// however it likes.
pub trait Logger {
@ -324,7 +324,7 @@ pub fn log(level: u32, loc: &'static LogLocation, args: fmt::Arguments) {
#[inline(always)]
pub fn log_level() -> u32 { unsafe { LOG_LEVEL } }
/// Replaces the task-local logger with the specified logger, returning the old
/// Replaces the thread-local logger with the specified logger, returning the old
/// logger.
pub fn set_logger(logger: Box<Logger + Send>) -> Option<Box<Logger + Send>> {
let mut l = Some(logger);


@ -4630,7 +4630,7 @@ pub fn expr_ty_opt<'tcx>(cx: &ctxt<'tcx>, expr: &ast::Expr) -> Option<Ty<'tcx>>
/// require serializing and deserializing the type and, although that's not
/// hard to do, I just hate that code so much I didn't want to touch it
/// unless it was to fix it properly, which seemed a distraction from the
/// task at hand! -nmatsakis
pub fn expr_ty_adjusted<'tcx>(cx: &ctxt<'tcx>, expr: &ast::Expr) -> Ty<'tcx> {
adjust_ty(cx, expr.span, expr.id, expr_ty(cx, expr),
cx.adjustments.borrow().get(&expr.id),


@ -131,7 +131,7 @@ impl<'a> PluginLoader<'a> {
// Intentionally leak the dynamic library. We can't ever unload it
// since the library can make things that will live arbitrarily long
// (e.g. an @-box cycle or a task).
// (e.g. an @-box cycle or a thread).
mem::forget(lib);
registrar


@ -86,8 +86,8 @@ struct Diagnostic {
}
// We use an Arc instead of just returning a list of diagnostics from the
// child task because we need to make sure that the messages are seen even
// if the child task panics (for example, when `fatal` is called).
// child thread because we need to make sure that the messages are seen even
// if the child thread panics (for example, when `fatal` is called).
#[derive(Clone)]
struct SharedEmitter {
buffer: Arc<Mutex<Vec<Diagnostic>>>,
@ -637,7 +637,7 @@ pub fn run_passes(sess: &Session,
metadata_config.set_flags(sess, trans);
// Populate a buffer with a list of codegen tasks. Items are processed in
// LIFO order, just because it's a tiny bit simpler that way. (The order
// doesn't actually matter.)
let mut work_items = Vec::with_capacity(1 + trans.modules.len());


@ -147,7 +147,7 @@ pub fn type_is_fat_ptr<'tcx>(cx: &ty::ctxt<'tcx>, ty: Ty<'tcx>) -> bool {
}
// Some things don't need cleanups during unwinding because the
// task can free them all at once later. Currently only things
// thread can free them all at once later. Currently only things
// that only contain scalars and shared boxes can avoid unwind
// cleanups.
pub fn type_needs_unwind_cleanup<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ty: Ty<'tcx>) -> bool {


@ -477,7 +477,7 @@ impl LangString {
/// By default this markdown renderer generates anchors for each header in the
/// rendered document. The anchor name is the contents of the header separated
/// by hyphens, and a task-local map is used to disambiguate among duplicate
/// by hyphens, and a thread-local map is used to disambiguate among duplicate
/// headers (numbers are appended).
///
/// This method will reset the local table for these headers. This is typically


@ -17,10 +17,10 @@
//!
//! The rendering process is largely driven by the `Context` and `Cache`
//! structures. The cache is pre-populated by crawling the crate in question,
//! and then it is shared among the various rendering tasks. The cache is meant
//! and then it is shared among the various rendering threads. The cache is meant
//! to be a fairly large structure not implementing `Clone` (because it's shared
//! among tasks). The context, however, should be a lightweight structure. This
//! is cloned per-task and contains information about what is currently being
//! among threads). The context, however, should be a lightweight structure. This
//! is cloned per-thread and contains information about what is currently being
//! rendered.
//!
//! In order to speed up rendering (mostly because of markdown rendering), the
@ -30,7 +30,7 @@
//!
//! In addition to rendering the crate itself, this module is also responsible
//! for creating the corresponding search index and source file renderings.
//! These tasks are not parallelized (they haven't been a bottleneck yet), and
//! both occur before the crate is rendered.
pub use self::ExternalLocation::*;
@ -154,7 +154,7 @@ impl Impl {
/// This structure purposefully does not implement `Clone` because it's intended
/// to be a fairly large and expensive structure to clone. Instead this adheres
/// to `Send` so it may be stored in a `Arc` instance and shared among the various
/// rendering tasks.
/// rendering threads.
#[derive(Default)]
pub struct Cache {
/// Mapping of typaram ids to the name of the type parameter. This is used
@ -688,7 +688,7 @@ fn write(dst: PathBuf, contents: &[u8]) -> io::Result<()> {
try!(File::create(&dst)).write_all(contents)
}
/// Makes a directory on the filesystem, failing the task if an error occurs and
/// Makes a directory on the filesystem, failing the thread if an error occurs and
/// skipping if the directory already exists.
fn mkdir(path: &Path) -> io::Result<()> {
if !path.exists() {


@ -180,9 +180,9 @@ fn runtest(test: &str, cratename: &str, libs: SearchPaths,
// an explicit handle into rustc to collect output messages, but we also
// want to catch the error message that rustc prints when it fails.
//
// We take our task-local stderr (likely set by the test runner) and replace
// We take our thread-local stderr (likely set by the test runner) and replace
// it with a sink that is also passed to rustc itself. When this function
// returns the output of the sink is copied onto the output of our own task.
// returns the output of the sink is copied onto the output of our own thread.
//
// The basic idea is to not use a default_handler() for rustc, and then also
// not print things by default to the actual stderr.


@ -205,7 +205,7 @@ fn test_resize_policy() {
/// A hash map implementation which uses linear probing with Robin
/// Hood bucket stealing.
///
/// The hashes are all keyed by the task-local random number generator
/// The hashes are all keyed by the thread-local random number generator
/// on creation by default. This means that the ordering of the keys is
/// randomized, but makes the tables more resistant to
/// denial-of-service attacks (Hash DoS). This behaviour can be


@ -271,7 +271,7 @@
//! ```
//!
//! Iterators also provide a series of *adapter* methods for performing common
//! tasks to sequences. Among the adapters are functional favorites like `map`,
//! `fold`, `skip`, and `take`. Of particular interest to collections is the
//! `rev` adapter, that reverses any iterator that supports this operation. Most
//! collections provide reversible iterators as the way to iterate over them in


@ -457,8 +457,8 @@ static EXIT_STATUS: AtomicIsize = ATOMIC_ISIZE_INIT;
/// Sets the process exit code
///
/// Sets the exit code returned by the process if all supervised tasks
/// terminate successfully (without panicking). If the current root task panics
/// Sets the exit code returned by the process if all supervised threads
/// terminate successfully (without panicking). If the current root thread panics
/// and is supervised by the scheduler then any user-specified exit status is
/// ignored and the process exits with the default panic status.
///


@ -355,13 +355,13 @@ impl<'a> Write for StderrLock<'a> {
}
}
/// Resets the task-local stderr handle to the specified writer
/// Resets the thread-local stderr handle to the specified writer
///
/// This will replace the current task's stderr handle, returning the old
/// This will replace the current thread's stderr handle, returning the old
/// handle. All future calls to `panic!` and friends will emit their output to
/// this specified handle.
///
/// Note that this does not need to be called for all new tasks; the default
/// Note that this does not need to be called for all new threads; the default
/// output handle is to the process's stderr stream.
#[unstable(feature = "set_stdio",
reason = "this function may disappear completely or be replaced \
@ -378,13 +378,13 @@ pub fn set_panic(sink: Box<Write + Send>) -> Option<Box<Write + Send>> {
})
}
/// Resets the task-local stdout handle to the specified writer
/// Resets the thread-local stdout handle to the specified writer
///
/// This will replace the current task's stdout handle, returning the old
/// This will replace the current thread's stdout handle, returning the old
/// handle. All future calls to `print!` and friends will emit their output to
/// this specified handle.
///
/// Note that this does not need to be called for all new tasks; the default
/// Note that this does not need to be called for all new threads; the default
/// output handle is to the process's stdout stream.
#[unstable(feature = "set_stdio",
reason = "this function may disappear completely or be replaced \


@ -16,10 +16,10 @@
#![unstable(feature = "std_misc")]
/// The entry point for panic of Rust tasks.
/// The entry point for panic of Rust threads.
///
/// This macro is used to inject panic into a Rust task, causing the task to
/// unwind and panic entirely. Each task's panic can be reaped as the
/// This macro is used to inject panic into a Rust thread, causing the thread to
/// unwind and panic entirely. Each thread's panic can be reaped as the
/// `Box<Any>` type, and the single-argument form of the `panic!` macro will be
/// the value which is transmitted.
///
@ -38,10 +38,10 @@
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
/// The entry point for panic of Rust tasks.
/// The entry point for panic of Rust threads.
///
/// This macro is used to inject panic into a Rust task, causing the task to
/// unwind and panic entirely. Each task's panic can be reaped as the
/// This macro is used to inject panic into a Rust thread, causing the thread to
/// unwind and panic entirely. Each thread's panic can be reaped as the
/// `Box<Any>` type, and the single-argument form of the `panic!` macro will be
/// the value which is transmitted.
///
@ -143,17 +143,17 @@ macro_rules! try {
/// use std::sync::mpsc;
///
/// // two placeholder functions for now
/// fn long_running_task() {}
/// fn long_running_thread() {}
/// fn calculate_the_answer() -> u32 { 42 }
///
/// let (tx1, rx1) = mpsc::channel();
/// let (tx2, rx2) = mpsc::channel();
///
/// thread::spawn(move|| { long_running_task(); tx1.send(()).unwrap(); });
/// thread::spawn(move|| { long_running_thread(); tx1.send(()).unwrap(); });
/// thread::spawn(move|| { tx2.send(calculate_the_answer()).unwrap(); });
///
/// select! {
/// _ = rx1.recv() => println!("the long running task finished first"),
/// _ = rx1.recv() => println!("the long running thread finished first"),
/// answer = rx2.recv() => {
/// println!("the answer was: {}", answer.unwrap());
/// }
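Since `select!` was unstable (and later removed from `std`), a hedged sketch of the same idea with plain `mpsc` receives:

```rust
use std::sync::mpsc;
use std::thread;

fn calculate_the_answer() -> u32 {
    42
}

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        tx.send(calculate_the_answer()).unwrap();
    });

    // recv blocks until the spawned thread sends its result.
    let answer = rx.recv().unwrap();
    assert_eq!(answer, 42);
    println!("the answer was: {}", answer);
}
```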


@ -444,7 +444,7 @@ mod tests {
let _t = thread::spawn(move|| {
let acceptor = acceptor;
for (i, stream) in acceptor.incoming().enumerate().take(MAX) {
// Start another task to handle the connection
// Start another thread to handle the connection
let _t = thread::spawn(move|| {
let mut stream = t!(stream);
let mut buf = [0];
@ -478,7 +478,7 @@ mod tests {
let _t = thread::spawn(move|| {
for stream in acceptor.incoming().take(MAX) {
// Start another task to handle the connection
// Start another thread to handle the connection
let _t = thread::spawn(move|| {
let mut stream = t!(stream);
let mut buf = [0];
@ -738,7 +738,7 @@ mod tests {
assert_eq!(t!(s2.read(&mut [0])), 0);
tx.send(()).unwrap();
});
// this should wake up the child task
// this should wake up the child thread
t!(s.shutdown(Shutdown::Read));
// this test will never finish if the child doesn't wake up
@ -752,7 +752,7 @@ mod tests {
each_ip(&mut |addr| {
let accept = t!(TcpListener::bind(&addr));
// Enqueue a task to write to a socket
// Enqueue a thread to write to a socket
let (tx, rx) = channel();
let (txdone, rxdone) = channel();
let txdone2 = txdone.clone();


@ -19,7 +19,7 @@
//! ```
//!
//! This means that the contents of std can be accessed from any context
//! with the `std::` path prefix, as in `use std::vec`, `use std::task::spawn`,
//! with the `std::` path prefix, as in `use std::vec`, `use std::thread::spawn`,
//! etc.
//!
//! Additionally, `std` contains a `prelude` module that reexports many of the


@ -374,7 +374,7 @@ mod tests {
txs.push(tx);
thread::spawn(move|| {
// wait until all the tasks are ready to go.
// wait until all the threads are ready to go.
rx.recv().unwrap();
// deschedule to attempt to interleave things as much
@ -394,7 +394,7 @@ mod tests {
});
}
// start all the tasks
// start all the threads
for tx in &txs {
tx.send(()).unwrap();
}


@ -590,7 +590,7 @@ fn begin_unwind_inner(msg: Box<Any + Send>,
/// This is an unsafe and experimental API which allows for an arbitrary
/// callback to be invoked when a thread panics. This callback is invoked on both
/// the initial unwinding and a double unwinding if one occurs. Additionally,
/// the local `Task` will be in place for the duration of the callback, and
/// the local `Thread` will be in place for the duration of the callback, and
/// the callback must ensure that it remains in place once the callback returns.
///
/// Only a limited number of callbacks can be registered, and this function


@ -10,7 +10,7 @@
use sync::{Mutex, Condvar};
/// A barrier enables multiple tasks to synchronize the beginning
/// A barrier enables multiple threads to synchronize the beginning
/// of some computation.
///
/// ```
@ -128,7 +128,7 @@ mod tests {
});
}
// At this point, all spawned tasks should be blocked,
// At this point, all spawned threads should be blocked,
// so we shouldn't get anything from the port
assert!(match rx.try_recv() {
Err(TryRecvError::Empty) => true,
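A self-contained sketch of the same `Barrier` behavior on stable Rust: every thread blocks at `wait` until all have arrived, and exactly one `BarrierWaitResult` reports being the leader.

```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn main() {
    let n = 4;
    let barrier = Arc::new(Barrier::new(n));

    let handles: Vec<_> = (0..n)
        .map(|_| {
            let barrier = Arc::clone(&barrier);
            thread::spawn(move || {
                // Blocks until all n threads have reached this point.
                barrier.wait().is_leader()
            })
        })
        .collect();

    // Exactly one thread is designated the leader per barrier cycle.
    let leaders = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&is_leader| is_leader)
        .count();
    assert_eq!(leaders, 1);
}
```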


@ -107,7 +107,7 @@
//!
//! let (tx, rx) = sync_channel::<i32>(0);
//! thread::spawn(move|| {
//! // This will wait for the parent task to start receiving
//! // This will wait for the parent thread to start receiving
//! tx.send(53).unwrap();
//! });
//! rx.recv().unwrap();
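The doc-comment above shows a zero-capacity `sync_channel`, which acts as a rendezvous point: `send` blocks until the other side is receiving. A self-contained version of that example (the `rendezvous` wrapper is added here for illustration):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// With capacity 0, `send` blocks until the receiver is ready, so the
// two threads meet in lock-step.
fn rendezvous() -> i32 {
    let (tx, rx) = sync_channel::<i32>(0);
    let sender = thread::spawn(move || {
        // This will wait for the parent thread to start receiving.
        tx.send(53).unwrap();
    });
    let v = rx.recv().unwrap();
    sender.join().unwrap();
    v
}
```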
@ -253,7 +253,7 @@
// blocking. The implementation is essentially the entire blocking procedure
// followed by an increment as soon as its woken up. The cancellation procedure
// involves an increment and swapping out of to_wake to acquire ownership of the
// task to unblock.
// thread to unblock.
//
// Sadly this current implementation requires multiple allocations, so I have
// seen the throughput of select() be much worse than it should be. I do not
@ -289,7 +289,7 @@ mod mpsc_queue;
mod spsc_queue;
/// The receiving-half of Rust's channel type. This half can only be owned by
/// one task
/// one thread
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Receiver<T> {
inner: UnsafeCell<Flavor<T>>,
@ -316,7 +316,7 @@ pub struct IntoIter<T> {
}
/// The sending-half of Rust's asynchronous channel type. This half can only be
/// owned by one task, but it can be cloned to send to other tasks.
/// owned by one thread, but it can be cloned to send to other threads.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Sender<T> {
inner: UnsafeCell<Flavor<T>>,
@ -327,7 +327,7 @@ pub struct Sender<T> {
unsafe impl<T: Send> Send for Sender<T> { }
/// The sending-half of Rust's synchronous channel type. This half can only be
/// owned by one task, but it can be cloned to send to other tasks.
/// owned by one thread, but it can be cloned to send to other threads.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct SyncSender<T> {
inner: Arc<UnsafeCell<sync::Packet<T>>>,
@ -421,7 +421,7 @@ impl<T> UnsafeFlavor<T> for Receiver<T> {
/// Creates a new asynchronous channel, returning the sender/receiver halves.
///
/// All data sent on the sender will become available on the receiver, and no
/// send will block the calling task (this channel has an "infinite buffer").
/// send will block the calling thread (this channel has an "infinite buffer").
///
/// # Examples
///
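The asynchronous `channel()` documented above never blocks the sender; a worker can queue all of its output before anything is received. A small sketch (function name and workload invented for the example):

```rust
use std::sync::mpsc::channel;
use std::thread;

// Sends never block the caller ("infinite buffer"), so the worker can
// push every result before the parent starts draining the channel.
fn collect_squares(n: u64) -> u64 {
    let (tx, rx) = channel();
    thread::spawn(move || {
        for i in 0..n {
            tx.send(i * i).unwrap(); // returns immediately
        }
        // `tx` is dropped here, which closes the channel.
    });
    rx.iter().sum() // iteration ends when the sender hangs up
}
```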
@ -1596,7 +1596,7 @@ mod tests {
drop(rx); // destroy a shared
tx2.send(()).unwrap();
});
// make sure the other task has gone to sleep
// make sure the other thread has gone to sleep
for _ in 0..5000 { thread::yield_now(); }
// upgrade to a shared chan and send a message
@ -1604,7 +1604,7 @@ mod tests {
drop(tx);
t.send(()).unwrap();
// wait for the child task to exit before we exit
// wait for the child thread to exit before we exit
rx2.recv().unwrap();
}
}
@ -2060,7 +2060,7 @@ mod sync_tests {
drop(rx); // destroy a shared
tx2.send(()).unwrap();
});
// make sure the other task has gone to sleep
// make sure the other thread has gone to sleep
for _ in 0..5000 { thread::yield_now(); }
// upgrade to a shared chan and send a message
@ -2068,7 +2068,7 @@ mod sync_tests {
drop(tx);
t.send(()).unwrap();
// wait for the child task to exit before we exit
// wait for the child thread to exit before we exit
rx2.recv().unwrap();
}

View File

@ -28,7 +28,7 @@
//! A mostly lock-free multi-producer, single consumer queue.
//!
//! This module contains an implementation of a concurrent MPSC queue. This
//! queue can be used to share data between tasks, and is also used as the
//! queue can be used to share data between threads, and is also used as the
//! building block of channels in rust.
//!
//! Note that the current implementation of this queue has a caveat of the `pop`

View File

@ -23,7 +23,7 @@
/// # Implementation
///
/// Oneshots are implemented around one atomic usize variable. This variable
/// indicates both the state of the port/chan but also contains any tasks
/// indicates both the state of the port/chan but also contains any threads
/// blocked on the port. All atomic operations happen on this one word.
///
/// In order to upgrade a oneshot channel, an upgrade is considered a disconnect
@ -55,7 +55,7 @@ const DISCONNECTED: usize = 2; // channel is disconnected OR upgraded
// whoever changed the state.
pub struct Packet<T> {
// Internal state of the chan/port pair (stores the blocked task as well)
// Internal state of the chan/port pair (stores the blocked thread as well)
state: AtomicUsize,
// One-shot data slot location
data: Option<T>,
@ -139,7 +139,7 @@ impl<T> Packet<T> {
}
pub fn recv(&mut self) -> Result<T, Failure<T>> {
// Attempt to not block the task (it's a little expensive). If it looks
// Attempt to not block the thread (it's a little expensive). If it looks
// like we're not empty, then immediately go through to `try_recv`.
if self.state.load(Ordering::SeqCst) == EMPTY {
let (wait_token, signal_token) = blocking::tokens();
@ -317,8 +317,8 @@ impl<T> Packet<T> {
}
}
// Remove a previous selecting task from this port. This ensures that the
// blocked task will no longer be visible to any other threads.
// Remove a previous selecting thread from this port. This ensures that the
// blocked thread will no longer be visible to any other threads.
//
// The return value indicates whether there's data on this port.
pub fn abort_selection(&mut self) -> Result<bool, Receiver<T>> {
@ -329,7 +329,7 @@ impl<T> Packet<T> {
s @ DATA |
s @ DISCONNECTED => s,
// If we've got a blocked task, then use an atomic to gain ownership
// If we've got a blocked thread, then use an atomic to gain ownership
// of it (may fail)
ptr => self.state.compare_and_swap(ptr, EMPTY, Ordering::SeqCst)
};
@ -338,7 +338,7 @@ impl<T> Packet<T> {
// about it.
match state {
EMPTY => unreachable!(),
// our task used for select was stolen
// our thread used for select was stolen
DATA => Ok(true),
// If the other end has hung up, then we have complete ownership

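The oneshot protocol above packs both the EMPTY/DATA/DISCONNECTED state and a blocked-thread token into a single atomic word. A loose, safe approximation of just the state machine (the blocking half and the pointer-packing are omitted, and the `OneShot` type is invented for this sketch, not taken from std):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

const EMPTY: usize = 0;        // no data, no one waiting
const DATA: usize = 1;         // a value is ready in the slot
const DISCONNECTED: usize = 2; // channel consumed or hung up

// Toy one-shot slot: the real implementation stores the blocked
// thread in the same atomic word; here the word only tracks state.
struct OneShot<T> {
    state: AtomicUsize,
    slot: Mutex<Option<T>>,
}

impl<T> OneShot<T> {
    fn new() -> Self {
        OneShot { state: AtomicUsize::new(EMPTY), slot: Mutex::new(None) }
    }

    fn send(&self, t: T) -> Result<(), T> {
        if self.state.load(Ordering::SeqCst) == DISCONNECTED {
            return Err(t); // receiver already gone
        }
        *self.slot.lock().unwrap() = Some(t);
        self.state.store(DATA, Ordering::SeqCst);
        Ok(())
    }

    fn try_recv(&self) -> Option<T> {
        // Use an atomic swap of the state word to gain ownership of
        // the data (may fail if there is no data yet).
        if self
            .state
            .compare_exchange(DATA, DISCONNECTED, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
        {
            self.slot.lock().unwrap().take()
        } else {
            None
        }
    }
}
```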
View File

@ -229,7 +229,7 @@ impl Select {
// woken us up (although the wakeup is guaranteed to fail).
//
// This situation happens in the window of where a sender invokes
// increment(), sees -1, and then decides to wake up the task. After
// increment(), sees -1, and then decides to wake up the thread. After
// all this is done, the sending thread will set `selecting` to
// `false`. Until this is done, we cannot return. If we were to
// return, then a sender could wake up a receiver which has gone

View File

@ -91,8 +91,8 @@ impl<T> Packet<T> {
}
// This function is used at the creation of a shared packet to inherit a
// previously blocked task. This is done to prevent spurious wakeups of
// tasks in select().
// previously blocked thread. This is done to prevent spurious wakeups of
// threads in select().
//
// This can only be called at channel-creation time
pub fn inherit_blocker(&mut self,
@ -424,7 +424,7 @@ impl<T> Packet<T> {
}
}
// Cancels a previous task waiting on this port, returning whether there's
// Cancels a previous thread waiting on this port, returning whether there's
// data on the port.
//
// This is similar to the stream implementation (hence fewer comments), but

View File

@ -30,7 +30,7 @@
//! A single-producer single-consumer concurrent queue
//!
//! This module contains the implementation of an SPSC queue which can be used
//! concurrently between two tasks. This data structure is safe to use and
//! concurrently between two threads. This data structure is safe to use and
//! enforces the semantics that there is one pusher and one popper.
#![unstable(feature = "std_misc")]

View File

@ -181,7 +181,7 @@ impl<T> Packet<T> {
data => return data,
}
// Welp, our channel has no data. Deschedule the current task and
// Welp, our channel has no data. Deschedule the current thread and
// initiate the blocking protocol.
let (wait_token, signal_token) = blocking::tokens();
if self.decrement(signal_token).is_ok() {
@ -385,7 +385,7 @@ impl<T> Packet<T> {
}
}
// Removes a previous task from being blocked in this port
// Removes a previous thread from being blocked in this port
pub fn abort_selection(&mut self,
was_upgrade: bool) -> Result<bool, Receiver<T>> {
// If we're aborting selection after upgrading from a oneshot, then
@ -414,7 +414,7 @@ impl<T> Packet<T> {
let prev = self.bump(steals + 1);
// If we were previously disconnected, then we know for sure that there
// is no task in to_wake, so just keep going
// is no thread in to_wake, so just keep going
let has_data = if prev == DISCONNECTED {
assert_eq!(self.to_wake.load(Ordering::SeqCst), 0);
true // there is data, that data is that we're disconnected
@ -428,7 +428,7 @@ impl<T> Packet<T> {
//
// If the previous count was positive then we're in a tougher
// situation. A possible race is that a sender just incremented
// through -1 (meaning it's going to try to wake a task up), but it
// through -1 (meaning it's going to try to wake a thread up), but it
// hasn't yet read the to_wake. In order to prevent a future recv()
// from waking up too early (this sender picking up the plastered
// over to_wake), we spin loop here waiting for to_wake to be 0.

View File

@ -19,7 +19,7 @@
/// which means that every successful send is paired with a successful recv.
///
/// This flavor of channels defines a new `send_opt` method for channels which
/// is the method by which a message is sent but the task does not panic if it
/// is the method by which a message is sent but the thread does not panic if it
/// cannot be delivered.
///
/// Another major difference is that send() will *always* return back the data
@ -62,12 +62,12 @@ unsafe impl<T: Send> Sync for Packet<T> { }
struct State<T> {
disconnected: bool, // Is the channel disconnected yet?
queue: Queue, // queue of senders waiting to send data
blocker: Blocker, // currently blocked task on this channel
blocker: Blocker, // currently blocked thread on this channel
buf: Buffer<T>, // storage for buffered messages
cap: usize, // capacity of this channel
/// A curious flag used to indicate whether a sender failed or succeeded in
/// blocking. This is used to transmit information back to the task that it
/// blocking. This is used to transmit information back to the thread that it
/// must dequeue its message from the buffer because it was not received.
/// This is only relevant in the 0-buffer case. This obviously cannot be
/// safely constructed, but it's guaranteed to always have a valid pointer
@ -84,7 +84,7 @@ enum Blocker {
NoneBlocked
}
/// Simple queue for threading tasks together. Nodes are stack-allocated, so
/// Simple queue for chaining blocked threads together. Nodes are stack-allocated, so
/// this structure is not safe at all
struct Queue {
head: *mut Node,
@ -130,7 +130,7 @@ fn wait<'a, 'b, T>(lock: &'a Mutex<State<T>>,
/// Wakes up a thread, dropping the lock at the correct time
fn wakeup<T>(token: SignalToken, guard: MutexGuard<State<T>>) {
// We need to be careful to wake up the waiting task *outside* of the mutex
// We need to be careful to wake up the waiting thread *outside* of the mutex
// in case it incurs a context switch.
drop(guard);
token.signal();
@ -298,7 +298,7 @@ impl<T> Packet<T> {
};
mem::drop(guard);
// only outside of the lock do we wake up the pending tasks
// only outside of the lock do we wake up the pending threads
pending_sender1.map(|t| t.signal());
pending_sender2.map(|t| t.signal());
}
@ -394,8 +394,8 @@ impl<T> Packet<T> {
}
}
// Remove a previous selecting task from this port. This ensures that the
// blocked task will no longer be visible to any other threads.
// Remove a previous selecting thread from this port. This ensures that the
// blocked thread will no longer be visible to any other threads.
//
// The return value indicates whether there's data on this port.
pub fn abort_selection(&self) -> bool {
@ -446,7 +446,7 @@ impl<T> Buffer<T> {
}
////////////////////////////////////////////////////////////////////////////////
// Queue, a simple queue to enqueue tasks with (stack-allocated nodes)
// Queue, a simple queue to enqueue threads with (stack-allocated nodes)
////////////////////////////////////////////////////////////////////////////////
impl Queue {

View File

@ -30,7 +30,7 @@ use sys_common::poison::{self, TryLockError, TryLockResult, LockResult};
///
/// The mutexes in this module implement a strategy called "poisoning" where a
/// mutex is considered poisoned whenever a thread panics while holding the
/// lock. Once a mutex is poisoned, all other tasks are unable to access the
/// lock. Once a mutex is poisoned, all other threads are unable to access the
/// data by default as it is likely tainted (some invariant is not being
/// upheld).
///
@ -56,7 +56,7 @@ use sys_common::poison::{self, TryLockError, TryLockResult, LockResult};
/// // Spawn a few threads to increment a shared variable (non-atomically), and
/// // let the main thread know once all increments are done.
/// //
/// // Here we're using an Arc to share memory among tasks, and the data inside
/// // Here we're using an Arc to share memory among threads, and the data inside
/// // the Arc is protected with a mutex.
/// let data = Arc::new(Mutex::new(0));
///
@ -69,7 +69,7 @@ use sys_common::poison::{self, TryLockError, TryLockResult, LockResult};
/// // which can access the shared state when the lock is held.
/// //
/// // We unwrap() the return value to assert that we are not expecting
/// // tasks to ever fail while holding the lock.
/// // threads to ever fail while holding the lock.
/// let mut data = data.lock().unwrap();
/// *data += 1;
/// if *data == N {
@ -195,10 +195,10 @@ impl<T> Mutex<T> {
}
impl<T: ?Sized> Mutex<T> {
/// Acquires a mutex, blocking the current task until it is able to do so.
/// Acquires a mutex, blocking the current thread until it is able to do so.
///
/// This function will block the local task until it is available to acquire
/// the mutex. Upon returning, the task is the only task with the mutex
/// This function will block the local thread until it is available to acquire
/// the mutex. Upon returning, the thread is the only thread with the mutex
/// held. An RAII guard is returned to allow scoped unlock of the lock. When
/// the guard goes out of scope, the mutex will be unlocked.
///
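The `lock()` contract described above (block the calling thread, then hand back an RAII guard) is the pattern the earlier Arc-plus-Mutex doc example relies on. A compact sketch, with names invented for illustration:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads increment a shared counter; `lock()` blocks the
// calling thread until the mutex is available, and the guard unlocks
// it again when dropped.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let data = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let data = data.clone();
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // unwrap(): we don't expect any thread to panic
                    // while holding the lock (which would poison it).
                    *data.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *data.lock().unwrap();
    total
}
```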

View File

@ -55,13 +55,13 @@ impl Once {
/// will be executed if this is the first time `call_once` has been called,
/// and otherwise the routine will *not* be invoked.
///
/// This method will block the calling task if another initialization
/// This method will block the calling thread if another initialization
/// routine is currently running.
///
/// When this function returns, it is guaranteed that some initialization
/// has run and completed (it may not be the closure specified). It is also
/// guaranteed that any memory writes performed by the executed closure can
/// be reliably observed by other tasks at this point (there is a
/// be reliably observed by other threads at this point (there is a
/// happens-before relation between the closure and code executing after the
/// return).
#[stable(feature = "rust1", since = "1.0.0")]
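The `call_once` guarantee above (exactly one execution, with a happens-before edge for later callers) can be sketched as follows; note this uses the modern `Once::new()` constructor, where the 2015-era code spelled the initializer `ONCE_INIT`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;

static INIT: Once = Once::new();
static RUNS: AtomicUsize = AtomicUsize::new(0);

// However many times this is called (from however many threads), the
// closure passed to `call_once` executes exactly once, and its writes
// are visible to every caller after `call_once` returns.
fn init_counter() -> usize {
    INIT.call_once(|| {
        RUNS.fetch_add(1, Ordering::SeqCst);
    });
    RUNS.load(Ordering::SeqCst)
}
```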

View File

@ -25,7 +25,7 @@ use sys_common::rwlock as sys;
/// typically allows for read-only access (shared access).
///
/// The type parameter `T` represents the data that this lock protects. It is
/// required that `T` satisfies `Send` to be shared across tasks and `Sync` to
/// required that `T` satisfies `Send` to be shared across threads and `Sync` to
/// allow concurrent access through readers. The RAII guards returned from the
/// locking methods implement `Deref` (and `DerefMut` for the `write` methods)
/// to allow access to the contents of the lock.

View File

@ -55,7 +55,7 @@ pub struct Guard {
/// A type of error which can be returned whenever a lock is acquired.
///
/// Both Mutexes and RwLocks are poisoned whenever a task fails while the lock
/// Both Mutexes and RwLocks are poisoned whenever a thread panics while the lock
/// is held. The precise semantics for when a lock is poisoned is documented on
/// each lock, but once a lock is poisoned then all future acquisitions will
/// return this error.
@ -68,7 +68,7 @@ pub struct PoisonError<T> {
/// `try_lock` method.
#[stable(feature = "rust1", since = "1.0.0")]
pub enum TryLockError<T> {
/// The lock could not be acquired because another task failed while holding
/// The lock could not be acquired because another thread panicked while holding
/// the lock.
#[stable(feature = "rust1", since = "1.0.0")]
Poisoned(PoisonError<T>),

View File

@ -11,13 +11,13 @@
//! Rust stack-limit management
//!
//! Currently Rust uses a segmented-stack-like scheme in order to detect stack
//! overflow for rust tasks. In this scheme, the prologue of all functions are
//! overflow for Rust threads. In this scheme, the prologue of each function is
//! preceded with a check to see whether the current stack limits are being
//! exceeded.
//!
//! This module provides the functionality necessary in order to manage these
//! stack limits (which are stored in platform-specific locations). The
//! functions here are used at the borders of the task lifetime in order to
//! functions here are used at the borders of the thread lifetime in order to
//! manage these limits.
//!
//! This function is an unstable module because this scheme for stack overflow

View File

@ -22,7 +22,7 @@
/// getting both accurate backtraces and accurate symbols across platforms.
/// This route was not chosen in favor of the next option, however.
///
/// * We're already using libgcc_s for exceptions in rust (triggering task
/// * We're already using libgcc_s for exceptions in rust (triggering thread
/// unwinding and running destructors on the stack), and it turns out that it
/// conveniently comes with a function that also gives us a backtrace. All of
/// these functions look like _Unwind_*, but it's not quite the full
@ -116,7 +116,7 @@ pub fn write(w: &mut Write) -> io::Result<()> {
// while it doesn't require a lock to work, as everything is
// local, it still displays much nicer backtraces when a
// couple of tasks panic simultaneously
// couple of threads panic simultaneously
static LOCK: StaticMutex = MUTEX_INIT;
let _g = LOCK.lock();

View File

@ -32,7 +32,7 @@ pub type Dtor = unsafe extern fn(*mut u8);
// somewhere to run arbitrary code on thread termination. With this in place
// we'll be able to run anything we like, including all TLS destructors!
//
// To accomplish this feat, we perform a number of tasks, all contained
// To accomplish this feat, we perform a number of steps, all contained
// within this module:
//
// * All TLS destructors are tracked by *us*, not the windows runtime. This

View File

@ -32,7 +32,7 @@ pub mod __impl {
/// primary method is the `with` method.
///
/// The `with` method yields a reference to the contained value which cannot be
/// sent across tasks or escape the given closure.
/// sent across threads or escape the given closure.
///
/// # Initialization and Destruction
///
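The `with` method described above yields a reference that cannot escape the closure or cross threads. A minimal sketch using today's `thread_local!` macro (the `COUNTER`/`bump` names are invented for the example):

```rust
use std::cell::Cell;

thread_local! {
    // Each thread gets its own independent COUNTER; the value never
    // crosses thread boundaries.
    static COUNTER: Cell<u32> = Cell::new(0);
}

// `with` hands the closure a reference to this thread's value; the
// reference cannot escape the closure body.
fn bump() -> u32 {
    COUNTER.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}
```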

View File

@ -873,8 +873,8 @@ mod tests {
#[test]
fn test_child_doesnt_ref_parent() {
// If the child refcounts the parent task, this will stack overflow when
// climbing the task tree to dereference each ancestor. (See #1789)
// If the child refcounts the parent thread, this will stack overflow when
// climbing the thread tree to dereference each ancestor. (See #1789)
// (well, it would if the constant were 8000+ - I lowered it to be more
// valgrind-friendly. try this at home, instead..!)
const GENERATIONS: u32 = 16;
@ -983,6 +983,6 @@ mod tests {
thread::sleep_ms(2);
}
// NOTE: the corresponding test for stderr is in run-pass/task-stderr, due
// NOTE: the corresponding test for stderr is in run-pass/thread-stderr, due
// to the test harness apparently interfering with stderr configuration.
}

View File

@ -77,7 +77,7 @@
//!
//! The `cs_...` functions ("combine substructure) are designed to
//! make life easier by providing some pre-made recipes for common
//! tasks; mostly calling the function being derived on all the
//! tasks; mostly calling the function being derived on all the
//! arguments and then combining them back together in some way (or
//! letting the user chose that). They are not meant to be the only
//! way to handle the structures that this code creates.

View File

@ -601,7 +601,7 @@ pub type IdentInterner = StrInterner;
// if an interner exists in TLS, return it. Otherwise, prepare a
// fresh one.
// FIXME(eddyb) #8726 This should probably use a task-local reference.
// FIXME(eddyb) #8726 This should probably use a thread-local reference.
pub fn get_ident_interner() -> Rc<IdentInterner> {
thread_local!(static KEY: Rc<::parse::token::IdentInterner> = {
Rc::new(mk_fresh_ident_interner())
@ -615,14 +615,14 @@ pub fn reset_ident_interner() {
interner.reset(mk_fresh_ident_interner());
}
/// Represents a string stored in the task-local interner. Because the
/// interner lives for the life of the task, this can be safely treated as an
/// immortal string, as long as it never crosses between tasks.
/// Represents a string stored in the thread-local interner. Because the
/// interner lives for the life of the thread, this can be safely treated as an
/// immortal string, as long as it never crosses between threads.
///
/// FIXME(pcwalton): You must be careful about what you do in the destructors
/// of objects stored in TLS, because they may run after the interner is
/// destroyed. In particular, they must not access string contents. This can
/// be fixed in the future by just leaking all strings until task death
/// be fixed in the future by just leaking all strings until thread death
/// somehow.
#[derive(Clone, PartialEq, Hash, PartialOrd, Eq, Ord)]
pub struct InternedString {
@ -697,14 +697,14 @@ impl Encodable for InternedString {
}
}
/// Returns the string contents of a name, using the task-local interner.
/// Returns the string contents of a name, using the thread-local interner.
#[inline]
pub fn get_name(name: ast::Name) -> InternedString {
let interner = get_ident_interner();
InternedString::new_from_rc_str(interner.get(name))
}
/// Returns the string contents of an identifier, using the task-local
/// Returns the string contents of an identifier, using the thread-local
/// interner.
#[inline]
pub fn get_ident(ident: ast::Ident) -> InternedString {
@ -712,7 +712,7 @@ pub fn get_ident(ident: ast::Ident) -> InternedString {
}
/// Interns and returns the string contents of an identifier, using the
/// task-local interner.
/// thread-local interner.
#[inline]
pub fn intern_and_get_ident(s: &str) -> InternedString {
get_name(intern(s))

View File

@ -146,7 +146,7 @@ pub trait TDynBenchFn: Send {
// A function that runs a test. If the function returns successfully,
// the test succeeds; if the function panics then the test fails. We
// may need to come up with a more clever definition of test in order
// to support isolation of tests into tasks.
// to support isolation of tests into threads.
pub enum TestFn {
StaticTestFn(fn()),
StaticBenchFn(fn(&mut Bencher)),

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// This test creates a bunch of tasks that simultaneously send to each
// This test creates a bunch of threads that simultaneously send to each
// other in a ring. The messages should all be basically
// independent.
// This is like msgsend-ring-pipes but adapted to use Arcs.
@ -52,7 +52,7 @@ fn thread_ring(i: usize, count: usize, num_chan: pipe, num_port: pipe) {
let mut num_port = Some(num_port);
// Send/Receive lots of messages.
for j in 0..count {
//println!("task %?, iter %?", i, j);
//println!("thread %?, iter %?", i, j);
let num_chan2 = num_chan.take().unwrap();
let num_port2 = num_port.take().unwrap();
send(&num_chan2, i * j);

View File

@ -21,14 +21,14 @@ use std::sync::mpsc::channel;
use std::env;
use std::thread;
// This is a simple bench that creates M pairs of tasks. These
// tasks ping-pong back and forth over a pair of streams. This is a
// This is a simple bench that creates M pairs of threads. These
// threads ping-pong back and forth over a pair of streams. This is a
// canonical message-passing benchmark as it heavily strains message
// passing and almost nothing else.
fn ping_pong_bench(n: usize, m: usize) {
// Create pairs of tasks that pingpong back and forth.
// Create pairs of threads that pingpong back and forth.
fn run_pair(n: usize) {
// Create a channel: A->B
let (atx, arx) = channel();

View File

@ -13,7 +13,7 @@ use std::env;
use std::thread;
// A simple implementation of parfib. One subtree is found in a new
// task and communicated over a oneshot pipe, the other is found
// thread and communicated over a oneshot pipe, the other is found
// locally. There is no sequential-mode threshold.
fn parfib(n: u64) -> u64 {

View File

@ -11,7 +11,7 @@
// ignore-android: FIXME(#10393) hangs without output
// ignore-pretty very bad with line comments
// multi tasking k-nucleotide
// multithreaded k-nucleotide
use std::ascii::AsciiExt;
use std::cmp::Ordering::{self, Less, Greater, Equal};

View File

@ -8,9 +8,9 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Test performance of a task "spawn ladder", in which children task have
// many ancestor taskgroups, but with only a few such groups alive at a time.
// Each child task has to enlist as a descendant in each of its ancestor
// Test performance of a thread "spawn ladder", in which child threads have
// many ancestor threadgroups, but with only a few such groups alive at a time.
// Each child thread has to enlist as a descendant in each of its ancestor
// groups, but that shouldn't have to happen for already-dead groups.
//
// The filename is a song reference; google it in quotes.
@ -23,7 +23,7 @@ use std::thread;
fn child_generation(gens_left: usize, tx: Sender<()>) {
// This used to be O(n^2) in the number of generations that ever existed.
// With this code, only as many generations are alive at a time as tasks
// With this code, only as many generations are alive at a time as threads
// alive at a time,
thread::spawn(move|| {
if gens_left & 1 == 1 {

View File

@ -9,7 +9,7 @@
// except according to those terms.
// ignore-test
// error-pattern: task '<main>' has overflowed its stack
// error-pattern: thread '<main>' has overflowed its stack
struct R {
b: isize,

View File

@ -17,7 +17,7 @@ use std::env;
fn main() {
error!("whatever");
// 101 is the code the runtime uses on task panic and the value
// 101 is the code the runtime uses on thread panic and the value
// compiletest expects run-fail tests to return.
env::set_exit_status(101);
}

View File

@ -8,12 +8,12 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// error-pattern:Ensure that the child task runs by panicking
// error-pattern:Ensure that the child thread runs by panicking
use std::thread;
fn main() {
// the purpose of this test is to make sure that task::spawn()
// the purpose of this test is to make sure that thread::spawn()
// works when provided with a bare function:
let r = thread::spawn(startfn).join();
if r.is_err() {
@ -22,5 +22,5 @@ fn main() {
}
fn startfn() {
assert!("Ensure that the child task runs by panicking".is_empty());
assert!("Ensure that the child thread runs by panicking".is_empty());
}

View File

@ -11,7 +11,7 @@
// ignore-test leaks
// error-pattern:ran out of stack
// Test that the task panicks after hitting the recursion limit
// Test that the thread panics after hitting the recursion limit
// during unwinding
fn recurse() {

View File

@ -40,7 +40,7 @@ fn count(n: libc::uintptr_t) -> libc::uintptr_t {
}
pub fn main() {
// Make sure we're on a task with small Rust stacks (main currently
// Make sure we're on a thread with small Rust stacks (main currently
// has a large stack)
thread::spawn(move|| {
let result = count(1000);

View File

@ -44,7 +44,7 @@ fn count(n: libc::uintptr_t) -> libc::uintptr_t {
}
pub fn main() {
// Make sure we're on a task with small Rust stacks (main currently
// Make sure we're on a thread with small Rust stacks (main currently
// has a large stack)
thread::spawn(move|| {
let result = count(12);

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// calling pin_task and that's having weird side-effects.
// calling pin_thread and that's having weird side-effects.
// pretty-expanded FIXME #23616

View File

@ -68,7 +68,7 @@ mod map_reduce {
pub fn map_reduce(inputs: Vec<String>) {
let (tx, rx) = channel();
// This task becomes the master control task. It spawns others
// This thread becomes the master control thread. It spawns others
// to do the rest.
let mut reducers: HashMap<String, isize>;

View File

@ -22,14 +22,14 @@ fn test00() {
start(i)
});
// Sleep long enough for the task to finish.
// Sleep long enough for the thread to finish.
let mut i = 0_usize;
while i < 10000 {
thread::yield_now();
i += 1;
}
// Try joining tasks that have already finished.
// Try joining threads that have already finished.
result.join();
println!("Joined task.");

View File

@ -16,7 +16,7 @@ use std::thread;
pub fn main() {
let (tx, rx) = channel();
// Spawn 10 tasks each sending us back one isize.
// Spawn 10 threads each sending us back one isize.
let mut i = 10;
while (i > 0) {
println!("{}", i);
@ -25,7 +25,7 @@ pub fn main() {
i = i - 1;
}
// Spawned tasks are likely killed before they get a chance to send
// Spawned threads are likely killed before they get a chance to send
// anything back, so we deadlock here.
i = 10;

View File

@ -24,7 +24,7 @@ fn start(tx: &Sender<isize>, i0: isize) {
}
pub fn main() {
// Spawn a task that sends us back messages. The parent task
// Spawn a thread that sends us back messages. The parent thread
// is likely to terminate before the child completes, so from
// the child's point of view the receiver may die. We should
// drop messages on the floor in this case, and not crash!

View File

@ -38,7 +38,7 @@ fn test00() {
let mut i: isize = 0;
// Create and spawn tasks...
// Create and spawn threads...
let mut results = Vec::new();
while i < number_of_tasks {
let tx = tx.clone();
@ -51,7 +51,7 @@ fn test00() {
i = i + 1;
}
// Read from spawned tasks...
// Read from spawned threads...
let mut sum = 0;
for _r in &results {
i = 0;
@ -62,12 +62,12 @@ fn test00() {
}
}
// Join spawned tasks...
// Join spawned threads...
for r in results { r.join(); }
println!("Completed: Final number is: ");
println!("{}", sum);
// assert (sum == (((number_of_tasks * (number_of_tasks - 1)) / 2) *
// assert (sum == (((number_of_tasks * (number_of_tasks - 1)) / 2) *
// number_of_messages));
assert_eq!(sum, 480);
}

View File

@ -9,7 +9,7 @@
// except according to those terms.
// Tests that a heterogeneous list of existential types can be put inside an Arc
// and shared between tasks as long as all types fulfill Send.
// and shared between threads as long as all types fulfill Send.
// ignore-pretty