This is preparation for removing `@fn`.
This does *not* use default methods yet, because I don't know
whether they work. If they do, a forthcoming PR will use them.
This also changes the precedence of `as`.
Added functions to cryptoutil.rs that perform an addition after shifting
the 2nd parameter by a specified constant. These functions fail!() if integer
overflow would result. Updated the Sha2 implementation to use these functions.
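A minimal sketch of the idea, written in present-day Rust with checked arithmetic standing in for `fail!()`; the name and signature here are assumptions for illustration, not the actual cryptoutil API:

```rust
// Hypothetical helper (not the real cryptoutil function): add `bytes << shift`
// to a running length counter, panicking if the shift or the addition would
// overflow. For a hash that tracks its length in bits, `shift` would be 3,
// converting a byte count into a bit count.
fn shift_add_check_overflow(count: u64, bytes: u64, shift: u32) -> u64 {
    let shifted = bytes.checked_shl(shift).expect("message length overflow");
    // checked_shl only rejects oversized shift amounts, so also verify that
    // no high bits were discarded by the shift
    assert!(shifted >> shift == bytes, "message length overflow");
    count.checked_add(shifted).expect("message length overflow")
}
```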
Created a helper function in cryptoutil.rs which feeds 1,000,000 'a's into
a Digest with varying input sizes and then checks the result. This is
essentially the same as one of Sha1's existing tests, so that test was
re-implemented using this helper. New tests were added using it for
Sha512 and Sha256.
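The shape of that helper might look roughly like this; the pared-down `Digest` trait is assumed purely so the example is self-contained, and all names are illustrative rather than the real API:

```rust
// Assumed, minimal Digest interface for the sake of the example.
trait Digest {
    fn input(&mut self, data: &[u8]);
    fn result_str(&mut self) -> String;
}

// Feed 1,000,000 'a' bytes into the digest using a chunk size that varies on
// every call, so the internal buffering logic gets exercised, then compare
// the final hex digest against the expected value.
fn test_digest_million_a<D: Digest>(digest: &mut D, expected_hex: &str) {
    let total = 1_000_000usize;
    let data = [b'a'; 257];
    let mut fed = 0;
    let mut chunk = 1;
    while fed < total {
        let len = chunk.min(total - fed);
        digest.input(&data[..len]);
        fed += len;
        chunk = chunk % 257 + 1; // cycle the chunk size through 1..=257
    }
    assert_eq!(digest.result_str(), expected_hex);
}
```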
The Sha2 compression functions were rewritten to execute the message
scheduling calculations in the same loop as the rest of the compression
function, which allows the compiler to generate much better code. Additionally,
the innermost parts of the compression functions were turned into macros to
reduce code duplication and to make the functions more concise.
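To illustrate the structure (a from-scratch sketch of the general technique, not the code in this patch), a SHA-256 round macro plus an on-the-fly message-schedule step could look like this, with the caller rotating the register names between invocations:

```rust
// One SHA-256 round; the caller passes the working registers already rotated,
// so only two of them need to be written back.
macro_rules! sha256_round {
    ($a:ident, $b:ident, $c:ident, $d:ident,
     $e:ident, $f:ident, $g:ident, $h:ident, $k:expr, $w:expr) => {{
        let t1 = $h
            .wrapping_add($e.rotate_right(6) ^ $e.rotate_right(11) ^ $e.rotate_right(25))
            .wrapping_add(($e & $f) ^ (!$e & $g))
            .wrapping_add($k)
            .wrapping_add($w);
        let t2 = ($a.rotate_right(2) ^ $a.rotate_right(13) ^ $a.rotate_right(22))
            .wrapping_add(($a & $b) ^ ($a & $c) ^ ($b & $c));
        $d = $d.wrapping_add(t1);
        $h = t1.wrapping_add(t2);
    }};
}

// Schedule word w[t] = sigma1(w[t-2]) + w[t-7] + sigma0(w[t-15]) + w[t-16],
// computed inside the round loop instead of in a separate pass that fills a
// 64-entry array up front.
macro_rules! sha256_schedule {
    ($w2:expr, $w7:expr, $w15:expr, $w16:expr) => {
        ($w2.rotate_right(17) ^ $w2.rotate_right(19) ^ ($w2 >> 10))
            .wrapping_add($w7)
            .wrapping_add($w15.rotate_right(7) ^ $w15.rotate_right(18) ^ ($w15 >> 3))
            .wrapping_add($w16)
    };
}
```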
There are two main pieces of functionality in cryptoutil.rs:
* A set of unsafe functions for efficiently reading and writing u32 and u64
  values. All of these functions are fairly easy to audit to confirm that
  they do what they are supposed to.
* A FixedBuffer struct. This struct keeps track of input data until there
  is enough of it to run a function which expects a fixed block of data
  (sketched below).
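A minimal sketch of the FixedBuffer idea (field and method names are illustrative, not the actual cryptoutil API): accumulate input until a full block is available, then hand each full block to a caller-supplied compression function.

```rust
struct FixedBuffer64 {
    buf: [u8; 64],
    len: usize,
}

impl FixedBuffer64 {
    fn new() -> FixedBuffer64 {
        FixedBuffer64 { buf: [0u8; 64], len: 0 }
    }

    fn input<F: FnMut(&[u8; 64])>(&mut self, mut data: &[u8], mut process_block: F) {
        // top up a partially filled buffer first
        if self.len > 0 {
            let take = (64 - self.len).min(data.len());
            self.buf[self.len..self.len + take].copy_from_slice(&data[..take]);
            self.len += take;
            data = &data[take..];
            if self.len < 64 {
                return; // still no full block; wait for more input
            }
            process_block(&self.buf);
            self.len = 0;
        }
        // process as many full blocks as possible straight from the input
        while data.len() >= 64 {
            process_block(data[..64].try_into().expect("slice is 64 bytes"));
            data = &data[64..];
        }
        // stash whatever is left for the next call
        self.buf[..data.len()].copy_from_slice(data);
        self.len = data.len();
    }
}
```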
The Sha2 module was rewritten to take advantage of the new functions in
cryptoutil as well as FixedBuffer. The result is that the duplicate code
for maintaining a buffer of input data is removed from the Sha512 and
Sha256 implementations. Additionally, the FixedBuffer code is much more
efficient than the previous code was.
The result_X() methods just calculate an output of a fixed size. They don't
really have much to do with running the actual hash algorithm until the very
last step - the output. It makes much more sense to put all of this logic into
the Digest impls for each specific variation of the hash function.
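A hypothetical sketch of what that arrangement looks like (names are illustrative, and details such as SHA-224's distinct initial constants are omitted): the shared engine runs the compression function, and each Digest impl decides how much of the final state it writes out.

```rust
// Shared engine used by both variants; buffering and the bit count omitted.
struct Engine256 {
    state: [u32; 8],
}

trait Digest {
    fn output_bytes(&self) -> usize;
    fn result(&mut self, out: &mut [u8]);
}

struct Sha256 { engine: Engine256 }
struct Sha224 { engine: Engine256 }

impl Digest for Sha256 {
    fn output_bytes(&self) -> usize { 32 }
    fn result(&mut self, out: &mut [u8]) {
        // all eight state words, big-endian
        for (chunk, word) in out[..32].chunks_mut(4).zip(self.engine.state.iter()) {
            chunk.copy_from_slice(&word.to_be_bytes());
        }
    }
}

impl Digest for Sha224 {
    fn output_bytes(&self) -> usize { 28 }
    fn result(&mut self, out: &mut [u8]) {
        // same engine, but only the first seven state words are emitted
        for (chunk, word) in out[..28].chunks_mut(4).zip(self.engine.state.iter().take(7)) {
            chunk.copy_from_slice(&word.to_be_bytes());
        }
    }
}
```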
The code was arranged so that the core Sha2 code came first, and then
all of the various implementations of Digest followed later. The
problem is that the Sha512 compression function code is far away from
the Sha512 Digest implementation, so if you are trying to read over
the code, you need to scroll all around the file for no good reason. The
code was rearranged so that all of the Sha512 code is in one place, all
of the Sha256 code is in another, and all impls for a struct
are near the definition of that struct.
In the first commit it is obvious why some of the barriers can be changed to `Relaxed`, but it is not as obvious for the ones I changed in `kill.rs`. The rationale for those is documented as part of the documenting commit.
Also, the last commit is a temporary hack to prevent kill signals from being received in taskgroup cleanup code; this could be fixed in a more principled way once the old runtime is gone.
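As a generic illustration of the "obvious" case (this is not the actual runtime code), a counter that is only incremented concurrently and only read after the worker threads have been joined can use `Relaxed` throughout, because the join provides all the synchronization the final read needs:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn count_relaxed() -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // no ordering with other memory accesses is needed here,
                    // only atomicity of the increment itself
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // join() already synchronizes with the finished threads
    counter.load(Ordering::Relaxed)
}
```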
* All globals marked as `pub` won't have the `internal` linkage type set
* All global references across crates are forced to use the address of the
global in the other crate via an external reference.
r? @graydon
Closes #8179
In the old design the TLS held the scheduler struct, and the scheduler struct
held the active task. This posed all sorts of weird problems due to
how we wanted to use the contents of TLS. The cleaner approach is to
leave the active task in TLS and have the task hold the scheduler. To
make this work out the scheduler has to run inside a regular task, and
then once that is the case the context switching code is massively
simplified, as instead of three possible paths there is only one. The
logical flow is also easier to follow, as the scheduler struct acts
somewhat like a "token" indicating what is active.
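A purely illustrative sketch of the new arrangement (the real runtime types are far more involved, and these names are made up): TLS holds the active task, and the task owns the scheduler, rather than the other way around.

```rust
use std::cell::RefCell;

struct Scheduler; // run queue, event loop, ... omitted

struct Task {
    sched: Option<Scheduler>, // the task holds the scheduler
}

thread_local! {
    // TLS holds the active task; the scheduler is reached through it.
    static ACTIVE_TASK: RefCell<Option<Box<Task>>> = RefCell::new(None);
}

fn with_active_scheduler<R>(f: impl FnOnce(&mut Scheduler) -> R) -> Option<R> {
    ACTIVE_TASK.with(|t| {
        t.borrow_mut()
            .as_mut()
            .and_then(|task| task.sched.as_mut())
            .map(f)
    })
}
```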
These changes also necessitated changing a large number of runtime
tests, and rewriting most of the runtime testing helpers.
Polish level is "low", as I will very soon start on more scheduler
changes that will require wiping the polish off. That being said there
should be sufficient comments around anything complex to make this
entirely respectable as a standalone commit.
The pipes compiler produced data types that encoded efficient and safe
bounded message passing protocols between two endpoints. It was also
capable of producing unbounded protocols.
It was useful research but was arguably done before its proper time.
I am removing it for the following reasons:
* In practice we used it only for producing the `oneshot` protocol and
the unbounded `stream` protocol, and all communication in Rust uses those.
* The interface between the proto! macro and the standard library
has a large surface area and was difficult to maintain through
language and library changes.
* It is now written in an old dialect of Rust and generates code
which would likely be considered non-idiomatic.
* Both the compiler and the runtime are difficult to understand,
and likewise the relationship between the generated code and
the library is hard to understand. Debugging is difficult.
* The new scheduler implements `stream` and `oneshot` by hand
in a way that will be significantly easier to maintain.
This shouldn't be taken as an indication that 'channel protocols'
for Rust are not worth pursuing again in the future.
Concerned parties may include: @graydon, @pcwalton, @eholk, @bblum
The most likely candidates for closing are #7666, #3018, #3020, #7021, #7667, #7303, #3658, #3295.
Change the former repetition:

    for 5.times { }

to:

    do 5.times { }

.times() cannot be broken with `break` or `return` anymore; for those
cases, use a numerical range loop instead.
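For reference, a minimal example of the numerical-range form (written in present-day Rust syntax purely for illustration): unlike a `.times()` closure, a range loop supports early exit.

```rust
// Returns the first multiple of 7 below `limit`, demonstrating the kind of
// early exit (here via `return`) that a `.times()` body cannot express.
fn first_multiple_of_7(limit: u32) -> Option<u32> {
    for i in 1..limit {
        if i % 7 == 0 {
            return Some(i);
        }
    }
    None
}
```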