I was benchmarking rust-http recently, and I saw that 50% of its time was
spent creating buffered readers/writers. Although rust-http wasn't using
std::rt::io::buffered, the same idea applies here: it's much cheaper to malloc
a large region and leave it uninitialized than to set it all to 0. Buffered
readers/writers never read uninitialized data, and their internal buffers are
encapsulated, so any use of an uninitialized slot is an implementation bug in
the readers/writers.
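As a rough illustration in modern Rust (the 2013 `std::rt::io::buffered` internals used unsafe uninitialized slices, which this sketch does not reproduce), reserving capacity skips the zero-fill that a zeroed allocation pays for:

```rust
fn main() {
    const CAP: usize = 64 * 1024;

    // Cheap: one allocation, no initialization. The vector's length is
    // 0, so safe code can never observe the uninitialized bytes.
    let reserved: Vec<u8> = Vec::with_capacity(CAP);

    // More expensive: every one of the CAP bytes must be set to 0
    // before the buffer is usable.
    let zeroed: Vec<u8> = vec![0u8; CAP];

    assert_eq!(reserved.len(), 0);
    assert_eq!(zeroed.len(), CAP);
}
```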
The UvProcess exit callback is called with a zero exit status and a non-zero termination signal when a child is terminated by a signal.
If a parent checks only the exit status (for example, only checks the return value from `wait()`), it may believe the process completed successfully when it actually failed.
Helpers for common use-cases are in `std::rt::io::process`.
Should resolve https://github.com/mozilla/rust/issues/10062.
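A minimal sketch with today's `std::process` (the 2013 `std::rt::io::process` helpers differed; a Unix shell is assumed here): check `success()` and the termination signal rather than just the exit code.

```rust
use std::process::Command;

fn main() {
    // Spawn a child that terminates itself with SIGTERM (Unix only).
    let status = Command::new("sh")
        .arg("-c")
        .arg("kill -TERM $$")
        .status()
        .expect("failed to spawn child");

    // A signal-terminated child has no exit code, so checking only
    // code() (or a raw wait() return value) can be misleading.
    println!("success: {}", status.success()); // false
    println!("exit code: {:?}", status.code()); // None

    #[cfg(unix)]
    {
        use std::os::unix::process::ExitStatusExt;
        println!("signal: {:?}", status.signal()); // Some(15)
    }
}
```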
As we move more runtime components into the crate map, it's becoming harder
and harder to start the runtime from a C function when Rust is embedded in
another application. Right now, if you compile a Rust crate as a dynamic
library and then link it into another application, std::rt::start provides no
local I/O services, even though rustuv was linked against and requested. The
reason is that there is no top-level crate map available specifying where to
find the libuv I/O implementation.
This option is not meant to be used regularly, but rather when compiling a
final library crate that will be linked into another application. It lifts
the requirement that the final destination be an executable in order to get a
crate map.
precise_time_ns
The QueryPerformance* functions take a LARGE_INTEGER, which is a signed
64-bit integer rather than an unsigned 64-bit integer. `ts.tv_sec` is also a
signed integer, so `ns_per_s` has been changed to an int64_t.
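A small sketch of the arithmetic in question (the helper name is illustrative, not the function from the commit): keeping every operand signed 64-bit avoids mixing the signed counter values with an unsigned constant, and converting seconds and remainder separately limits overflow.

```rust
// Convert performance-counter ticks to nanoseconds using only i64,
// mirroring LARGE_INTEGER being a signed 64-bit integer.
fn counter_to_ns(ticks: i64, ticks_per_s: i64) -> u64 {
    let ns_per_s: i64 = 1_000_000_000;
    let secs = ticks / ticks_per_s;
    let rem = ticks % ticks_per_s;
    (secs * ns_per_s + rem * ns_per_s / ticks_per_s) as u64
}

fn main() {
    // e.g. a 10 MHz performance counter: 10M ticks == 1 second.
    assert_eq!(counter_to_ns(10_000_000, 10_000_000), 1_000_000_000);
}
```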
The commit messages have more details, but this removes all analysis and usage related to the fixed_stack_segment and rust_stack attributes. The assumption is now that we always have "enough stack", and detection of stack overflow will be implemented through other means.
Stack overflow detection is currently implemented for Rust functions, but it remains unimplemented for C functions (we still don't have guard pages).
I increased this to 4MB when I implemented abort-on-stack-overflow for Rust
functions. Now that the fixed_stack_segment attribute is removed, no Rust
function will ever reasonably request 2MB of stack just because it calls an
FFI function.
The default size of 2MB should be plenty for everyday use-cases, and tasks can
still request more stack via the spawning API.
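For illustration, this is how the analogous request looks with today's `std::thread` API (the 2013 task-spawning API differed, but the idea of opting into a larger stack per task is the same):

```rust
use std::thread;

fn main() {
    // Opt into a larger stack only for the task that needs it,
    // rather than raising the default for everyone.
    let handle = thread::Builder::new()
        .stack_size(4 * 1024 * 1024) // 4MB instead of the default
        .spawn(|| {
            // deep recursion or large stack frames go here
        })
        .expect("failed to spawn thread");
    handle.join().unwrap();
}
```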
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the Rust task's stack is always
large enough to make an FFI call (because the stack is very large).
There is still the case of stack overflow to consider, however. This does not
change the behavior of stack overflow in Rust: it is still normally detected
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of
every Rust stack is still unimplemented and is intended to be the mechanism
through which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
This test was marked xfail-test.
According to Rust issue #912, this behavior will not be fixed, so there's no
point in keeping a test that is never intended to pass.
The logging macros all use libuv-based I/O, and there was one stray debug
statement in task::spawn which was executing before the I/O context was ready.
Remove it and add a test to make sure that we can continue to debug this sort of
code.
Closes #10405
Added two new rules to build EPUBs from the tutorial and reference manual source files. This is useful and doesn't add any new dependencies to the build process.
The major impetus for this pull request was to remove all usage of `~fn()` in `librustuv`. This construct is going away as a language feature, and additionally it imposes the requirement that all I/O operations have at least one allocation. This allocation has been seen to have a fairly high performance impact in profiles of I/O benchmarks.
I've migrated `librustuv` away from all usage of `~fn()`, and at the same time it no longer allocates on every I/O operation anywhere. The scheduler is now much more tightly integrated with all of the libuv bindings and most of the uv callbacks are specialized functions for a certain procedure. This is a step backwards in terms of making `librustuv` usable anywhere else, but I think that the performance gains are a big win here.
In just a simple benchmark of reading/writing 4k of 0s at a time between a tcp client/server in separate processes on the same system, I have witnessed the throughput increase from ~750MB/s to ~1200MB/s with this change applied.
I'm still in the process of testing this change, although all the major bugs (to my knowledge) have been flushed out and removed. There are still a few spurious segfaults, and that's what I'm currently investigating. In the meantime, I wanted to put this up for review to get some eyes on it other than mine. I'll update this once I've got all the tests passing reliably again.
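To illustrate the per-operation cost being removed (in modern syntax; `Box<dyn FnOnce()>` stands in for the old `~fn()`):

```rust
// Every I/O operation used to heap-allocate a boxed closure for its
// callback, roughly like this:
fn run_boxed(cb: Box<dyn FnOnce()>) {
    cb();
}

// A specialized callback (here a plain function pointer) needs no
// per-operation allocation.
fn run_static(cb: fn()) {
    cb();
}

fn main() {
    run_boxed(Box::new(|| println!("boxed callback"))); // allocates
    run_static(|| println!("static callback")); // no allocation
}
```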
When a channel is destroyed, it may attempt scheduler operations that could
move a task off of its I/O scheduler. This is obviously a bad interaction, and
some finesse is required to make it work (namely, making the destructors run
at the right time).
Closes #10375
It appears that uv's support for interacting with a stdio stream as a tty
when it's actually a pipe is pretty problematic. To get around this, hoist the
check for whether the stream is a tty to the top of the tty constructor, and
bail out quickly if it's not identified as one.
Closes #10237
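The same guard expressed with today's standard library (the 2013 bindings asked libuv instead; `IsTerminal` is just a modern stand-in):

```rust
use std::io::{self, IsTerminal};

fn main() {
    // Bail out early if stdout is actually a pipe or file, before
    // doing any tty-specific setup.
    if !io::stdout().is_terminal() {
        eprintln!("stdout is not a tty; skipping tty setup");
        return;
    }
    println!("stdout is a tty");
}
```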
In the ideal world, uv I/O could be canceled safely at any time. In reality,
however, we are unable to do this. Right now linked failure is fairly flaky as
implemented in the runtime, making it very difficult to test whether the linked
failure mechanisms inside of the uv bindings are ready for this kind of
interaction.
Right now, all constructors will execute in a task::unkillable block, and all
homing I/O operations will prevent linked failure for the duration of the
homing operation. What this means is that tasks which perform I/O are still
susceptible to linked failure, but the I/O operations themselves will never be
interrupted. Instead, the linked failure will be received at the edge of the
I/O operation.
It turns out that the uv implementation would cause a use-after-free if the
idle callback was used after the call to `close`, and additionally nothing
really worked well if `start()` was called twice. To fix this, the `start` and
`close` methods were removed in favor of specifying the callback at creation
and letting destruction take care of closing the watcher.
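A hypothetical, self-contained sketch of the shape of the new API (no real libuv handle; names are illustrative): with the callback fixed at construction and closing done in the destructor, neither a second `start()` nor a use-after-`close` is expressible.

```rust
struct IdleWatcher {
    callback: Box<dyn FnMut()>,
}

impl IdleWatcher {
    // The callback is supplied once, at creation.
    fn new(callback: Box<dyn FnMut()>) -> IdleWatcher {
        IdleWatcher { callback }
    }

    // Stand-in for the event loop invoking the idle callback.
    fn fire(&mut self) {
        (self.callback)();
    }
}

impl Drop for IdleWatcher {
    fn drop(&mut self) {
        // Closing the underlying handle happens exactly once, here.
        println!("watcher closed");
    }
}

fn main() {
    let mut w = IdleWatcher::new(Box::new(|| println!("idle tick")));
    w.fire();
    // Dropping `w` closes the watcher; there is no separate close().
}
```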