This plugs a leak where resolve was treating enums defined in parent modules as
in-scope for all child modules when resolving a pattern identifier. This
eliminates that code path in resolve entirely.
If this breaks any existing code, it indicates that the variants need to be
explicitly imported into the module.
Closes #14221
[breaking-change]
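For illustration, a minimal sketch (hypothetical names, modern syntax, not code from this patch) of the kind of code affected: a child module that matches on a parent enum's variants now has to import them explicitly.
```rust
enum Flag { On, Off }

mod child {
    // Previously the parent's variants resolved in patterns here without an
    // import; after this change they must be brought into scope explicitly
    // (or written with a full path).
    use super::Flag::{On, Off};

    pub fn describe(f: super::Flag) -> &'static str {
        match f {
            On => "on",
            Off => "off",
        }
    }
}

fn main() {
    assert_eq!(child::describe(Flag::On), "on");
}
```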
This changes the previously naive string searching algorithm to a two-way search (as in glibc), which should be faster on average while still maintaining worst-case linear time complexity. This fixes #14107. Note that I don't think this should be merged yet, as this is the only approach to speeding up search I've tried; it's worth considering options like Boyer-Moore, or adding a bad-character shift table to this. However, the benchmarks look quite good so far:
```
test str::bench::bench_contains_bad_naive ... bench: 290 ns/iter (+/- 12) from 1309 ns/iter (+/- 36)
test str::bench::bench_contains_equal ... bench: 479 ns/iter (+/- 10) from 137 ns/iter (+/- 2)
test str::bench::bench_contains_short_long ... bench: 2844 ns/iter (+/- 105) from 5473 ns/iter (+/- 14)
test str::bench::bench_contains_short_short ... bench: 55 ns/iter (+/- 4) from 57 ns/iter (+/- 6)
```
Except for the case specifically designed to be optimal for the naive algorithm (`bench_contains_equal`), this achieves performance as good as or better than the previous code.
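As a rough illustration (not the actual benchmark code), this is the shape of input where a naive search does quadratic work while a worst-case-linear algorithm does not:
```rust
fn main() {
    // A long run of 'a's with a needle that almost matches at every offset:
    // a naive scan re-compares ~50 characters at each of ~1000 positions,
    // whereas a worst-case-linear search keeps total work proportional to
    // the haystack length.
    let haystack = "a".repeat(1_000);
    let needle = "a".repeat(50) + "b";
    assert!(!haystack.contains(&needle));
}
```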
Use sync::one::Once to fetch the mach_timebase_info only once when
running precise_time_ns(). This helps because mach_timebase_info() is
surprisingly inefficient. Also fix the order of operations when applying
the timebase to the mach absolute time value.
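A minimal sketch of the caching approach, assuming the usual mach FFI declarations and written against today's std::sync::Once rather than the old sync::one::Once; this is not the actual libstd code.
```rust
use std::ptr::addr_of_mut;
use std::sync::Once;

#[repr(C)]
struct MachTimebaseInfo { numer: u32, denom: u32 }

extern "C" {
    fn mach_timebase_info(info: *mut MachTimebaseInfo) -> i32;
    fn mach_absolute_time() -> u64;
}

static ONCE: Once = Once::new();
static mut TIMEBASE: MachTimebaseInfo = MachTimebaseInfo { numer: 0, denom: 0 };

pub fn precise_time_ns() -> u64 {
    // Query the (surprisingly expensive) timebase only once per process.
    ONCE.call_once(|| unsafe {
        mach_timebase_info(addr_of_mut!(TIMEBASE));
    });
    unsafe {
        // Multiply before dividing so integer truncation does not lose
        // precision when applying the timebase.
        mach_absolute_time() * TIMEBASE.numer as u64 / TIMEBASE.denom as u64
    }
}
```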
This improves the time on my machine from
```
test tests::bench_precise_time_ns ... bench: 157 ns/iter (+/- 4)
```
to
```
test tests::bench_precise_time_ns ... bench: 38 ns/iter (+/- 3)
```
and it will get even faster once #14174 lands.
By default, jemalloc builds itself with -g3 if the local compiler supports it.
This generates a good deal of debug info (on the order of 18MB) that gcc/ld on
Windows does not optimize away, causing hello world to be 18MB in size.
There's currently no real need to debug jemalloc in great depth, so this commit
manually passes -g1 to override the -g3 that jemalloc uses. This is confirmed
to drop the size of executables on Windows back to a more reasonable 2.0MB, as
they were before.
Closes #14144
This was a more difficult change than I thought it would be, and it is unfortunately a breaking change rather than a drop-in replacement. Most of the rationale can be found in the third commit.
cc #13851
In an attempt to phase out the std::num::strconv module's string formatting
functionality, this commit reimplements some provided methods for formatting
integers on top of format!() instead of the custom (and slower) implementation
inside of num::strconv.
Primarily, this deprecates int_to_str_bytes_common.
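For a sense of what this covers, the radix output previously handled by the custom code maps onto the standard format specifiers built on format!() (a small illustration in today's syntax, not the deprecated API itself):
```rust
fn main() {
    let n = 255u32;
    assert_eq!(format!("{}", n), "255");        // decimal
    assert_eq!(format!("{:b}", n), "11111111"); // binary
    assert_eq!(format!("{:o}", n), "377");      // octal
    assert_eq!(format!("{:x}", n), "ff");       // hexadecimal
}
```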
This is a migration of the std::{f32, f64}::to_str* functionality to the core
library. This replaces the growable `Vec` previously used with a large stack buffer.
The maximum base 10 exponent for f64 is 308, so a stack buffer of 512 bytes
should be sufficient to store all floats.
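A minimal sketch of the idea, using a hypothetical fixed-capacity writer rather than the actual core implementation:
```rust
use core::fmt::Write;

// A made-up fixed-capacity writer illustrating the stack-buffer approach;
// 512 bytes comfortably holds the longest decimal rendering of an f64.
struct StackBuf { buf: [u8; 512], len: usize }

impl Write for StackBuf {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > self.buf.len() {
            return Err(core::fmt::Error);
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}

fn main() {
    let mut out = StackBuf { buf: [0u8; 512], len: 0 };
    // Even the largest finite f64 fits in the fixed buffer.
    write!(out, "{}", f64::MAX).unwrap();
    println!("{}", core::str::from_utf8(&out.buf[..out.len]).unwrap());
}
```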
1. Wherever the `buf` field of a `Formatter` was used, the `Formatter` is used
instead.
2. The usage of `write_fmt` is minimized as much as possible; the `write!` macro
is preferred wherever possible (see the sketch after this list).
3. Usage of `fmt::write` is minimized, favoring the `write!` macro instead.
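A small sketch of the preferred style from points 1 and 2 (an ordinary Display impl, not code from this patch):
```rust
use std::fmt;

struct Point { x: i32, y: i32 }

impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Write through the Formatter itself with write!, rather than
        // reaching for an internal buffer or calling write_fmt/fmt::write
        // directly.
        write!(f, "({}, {})", self.x, self.y)
    }
}

fn main() {
    println!("{}", Point { x: 3, y: 4 }); // prints "(3, 4)"
}
```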
Currently, the format_args!() macro takes as its first argument an expression
which is the callee of an ExprCall. This means that a closure must be used if
format_args!() is paired with a method call. Consider this code, however:
```
format_args!(|args| { foo.writer.write_fmt(args) }, "{}", foo.field)
```
The closure borrows the entire `foo` structure, disallowing the later borrow of
`foo.field`. To preserve the semantics of the `write!` macro, it is also
impossible to borrow specifically the `writer` field of the `foo` structure
because it must be borrowed mutably, but the `foo` structure is not guaranteed
to be mutable itself.
This new macro is invoked like:
```
format_args_method!(foo.writer, write_fmt, "{}", foo.field)
```
This macro will generate an ExprMethodCall which allows the borrow checker to
understand that `writer` and `field` should be borrowed separately.
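A rough illustration of the borrow-splitting this enables, using today's `write!` and a made-up `Foo` type (not the macro expansion itself):
```rust
use std::io::Write;

struct Foo { writer: Vec<u8>, field: u32 }

fn demo(foo: &mut Foo) {
    // A closure of the form |args| foo.writer.write_fmt(args) captures all of
    // `foo` mutably, conflicting with the later use of `foo.field`. A direct
    // method-call expression lets the borrow checker borrow `writer` mutably
    // and `field` immutably at the same time.
    write!(foo.writer, "{}", foo.field).unwrap();
}

fn main() {
    let mut foo = Foo { writer: Vec::new(), field: 7 };
    demo(&mut foo);
    assert_eq!(foo.writer, b"7".to_vec());
}
```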
This macro is not strictly necessary; with DST or possibly UFCS, other
workarounds could be used. For now, though, it looks like this is required to
implement the `write!` macro.
This new method, write_fmt(), is the one way to write a formatted list of
arguments into a Writer stream. This has a special adaptor to preserve errors
which occur on the writer.
All macros will be updated to use this method explicitly.
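A small illustration of the resulting shape, written against today's std::io::Write (the descendant of the old Writer trait) rather than the 2014 API:
```rust
use std::io::Write;

// write! expands to a call to w.write_fmt(format_args!(...)), so any error
// raised by the writer is preserved and returned to the caller.
fn log_value<W: Write>(w: &mut W, value: u32) -> std::io::Result<()> {
    write!(w, "value = {}", value)
}

fn main() -> std::io::Result<()> {
    let mut out = Vec::new();
    log_value(&mut out, 42)?;
    assert_eq!(out, b"value = 42".to_vec());
    Ok(())
}
```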