Commit Graph

32288 Commits

Author SHA1 Message Date
Alex Crichton
2703fcf988 rollup merge of : nathantypanski/generic-lifetime-trait-impl 2014-09-09 12:07:12 -07:00
Alex Crichton
e8e62393bb rollup merge of : nathantypanski/test-borrowck-trait 2014-09-09 12:07:12 -07:00
Alex Crichton
8158463122 rollup merge of : pcwalton/subslice-syntax 2014-09-09 12:07:12 -07:00
Alex Crichton
2c66c296db rollup merge of : pcwalton/feature-gate-subslices 2014-09-09 12:07:11 -07:00
Alex Crichton
fb3c67a65c rollup merge of : kmcallister/borrow-extctxt 2014-09-09 12:07:11 -07:00
Alex Crichton
d1d9d195c9 rollup merge of : nodakai/libnative-c_int 2014-09-09 12:07:11 -07:00
Alex Crichton
a0b3701a21 rollup merge of : rgawdzik/literal_int 2014-09-09 12:07:11 -07:00
Alex Crichton
f48b701213 rollup merge of : nick29581/impl2 2014-09-09 12:07:11 -07:00
Alex Crichton
679b4e1b38 rollup merge of : treeman/json-decode 2014-09-09 12:07:11 -07:00
Stuart Pernsteiner
ba43f7bc8c ignore uninitialized submodules when checking if ./configure should be re-run 2014-09-09 11:33:53 -07:00
Jakub Wieczorek
28bc56828f Change method lookup to require invariance for mutable references
Fixes .
Fixes .
2014-09-09 20:25:31 +02:00
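For context, a minimal sketch (present-day Rust, illustrative names, not the method-lookup change itself) of why `&mut T` has to be treated as invariant in `T`:

```rust
// Illustrative sketch: `&mut T` must be invariant in `T`, otherwise the
// commented-out call below would let a short-lived reference escape into a
// longer-lived vector.
fn store<'a>(dest: &mut Vec<&'a str>, s: &'a str) {
    dest.push(s);
}

fn main() {
    let mut statics: Vec<&'static str> = vec!["fine"];
    store(&mut statics, "also fine"); // ok: both references are 'static

    {
        let short = String::from("temporary");
        // store(&mut statics, &short);
        // ^ rejected: invariance forces 'a = 'static here, and `short`
        //   does not live that long, so the compiler reports a lifetime error.
        let _ = &short;
    }
    println!("{:?}", statics);
}
```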
bors
b625d43f8f auto merge of : dotdash/rust/llvmup, r=alexcrichton 2014-09-09 17:16:18 +00:00
bors
504ed55775 auto merge of : steveklabnik/rust/fix_doc_index, r=brson
Fixes 
2014-09-09 13:26:16 +00:00
bors
3884f5fc8e auto merge of : huonw/rust/isaac-oob--, r=alexcrichton
rand: inform the optimiser that indexing is never out-of-bounds.

This uses a bitwise mask to ensure that there's no bounds checking for
the array accesses when generating the next random number. This isn't
costless, but the single instruction is nothing compared to the branch.

A `debug_assert` for "bounds check" is preserved to ensure that
refactoring doesn't accidentally break it (i.e. create values of `cnt`
that are out of bounds with the masking causing it to silently wrap-
around).

Before:

    test test::rand_isaac   ... bench: 990 ns/iter (+/- 24) = 808 MB/s
    test test::rand_isaac64 ... bench: 614 ns/iter (+/- 25) = 1302 MB/s

After:

    test test::rand_isaac   ... bench: 877 ns/iter (+/- 134) = 912 MB/s
    test test::rand_isaac64 ... bench: 470 ns/iter (+/- 30) = 1702 MB/s

(It also removes the unsafe code in Isaac64Rng.next_u64, with a *gain*
in performance; today is a good day.)
2014-09-09 11:11:17 +00:00
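The masking trick described in this commit is a general one; below is a minimal sketch (illustrative `TinyRng`/`RNG_LEN` names, not the real isaac code) of how indexing a power-of-two-sized array with `i & (LEN - 1)` lets the optimiser drop the bounds check while a `debug_assert!` still documents the invariant.

```rust
// Minimal sketch of the bounds-check-elision idea from the commit above.
const RNG_LEN: usize = 256; // must be a power of two for the mask to be exact

struct TinyRng {
    state: [u32; RNG_LEN],
    cnt: usize,
}

impl TinyRng {
    fn next_u32(&mut self) -> u32 {
        // The debug assertion documents the invariant; the mask below makes
        // the index provably in-bounds, so release builds have no branch.
        debug_assert!(self.cnt < RNG_LEN, "rng: counter out of bounds");
        let value = self.state[self.cnt & (RNG_LEN - 1)];
        self.cnt = (self.cnt + 1) % RNG_LEN;
        value
    }
}

fn main() {
    let mut rng = TinyRng { state: [7; RNG_LEN], cnt: 0 };
    println!("{}", rng.next_u32());
}
```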
Jonas Hietala
947a1b923b Remove some test warnings. 2014-09-09 11:32:58 +02:00
bors
7ab58f67d1 auto merge of : thestinger/rust/jemalloc, r=alexcrichton
The performance hit from these checks is significant, but unoptimized
builds are already incredibly slow. Enabling these checks results in
better test coverage since there are bots doing unoptimized builds, and
the cost is relatively small in the context of an unoptimized build.
This also allows using `JEMALLOC_FLAGS` to override the default
configure flags.
2014-09-09 07:56:19 +00:00
Nick Cameron
b1916288bf Handle Sized? in type items.
Resolves bounds for `type` items and adds the warning for 'unbounds' (`?` bounds) that we already have for other bounds.

Closes 
2014-09-09 18:22:20 +12:00
Jonas Hietala
4f4a3dfb1a Decoding json now defaults Option<_> to None.
Closes 
2014-09-09 07:28:59 +02:00
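The behavior can be sketched with the present-day serde/serde_json crates (assumed dependencies) rather than the 2014 libserialize API this commit actually touches; the point is the same: a field of type `Option<_>` that is missing from the input decodes to `None` instead of erroring.

```rust
// Sketch of the decoding behavior using serde/serde_json, not the 2014
// libserialize API from the commit.
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct Config {
    name: String,
    retries: Option<u32>, // absent from the input => decodes to None
}

fn main() {
    let cfg: Config = serde_json::from_str(r#"{ "name": "demo" }"#).unwrap();
    assert!(cfg.retries.is_none());
    println!("{:?}", cfg);
}
```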
bors
641b1980a4 auto merge of : steveklabnik/rust/fix_manual_array_terms, r=brson
fixes 
2014-09-09 04:26:18 +00:00
Patrick Walton
eb678ff87f librustc: Change the syntax of subslice matching to use postfix ..
instead of prefix `..`.

This breaks code that looked like:

    match foo {
        [ first, ..middle, last ] => { ... }
    }

Change this code to:

    match foo {
        [ first, middle.., last ] => { ... }
    }

RFC .

Closes .

[breaking-change]
2014-09-08 16:12:13 -07:00
Steve Klabnik
18f1f5a06e guide: Remove reference to uninitialized bindings
There isn't a good way to fit this in, so let's just not
mention it.

Fixes .
2014-09-08 18:50:08 -04:00
Nick Cameron
c2fcd4ca72 Check traits for built-in bounds in impls 2014-09-09 10:41:27 +12:00
bors
325808a33d auto merge of : alexcrichton/rust/windows-large-console-write, r=brson
I've found that 64k is still too much and continue to see the errors as reported
in . I've locally found that 32k fails, and 24k succeeds, so I've trimmed
the size down to 10000, which the links included in the added comment end up
recommending.

It sounds like the limit can still be hit with many threads in play, but I have
yet to reproduce this, so I figure we can wait until that's hit (if it's
possible) and then take action.
2014-09-08 20:51:14 +00:00
Alex Crichton
198030fadf std: Turn down the stdout chunk size
I've found that 64k is still too much and continue to see the errors as reported
in . I've locally found that 32k fails, and 24k succeeds, so I've trimmed
the size down to 8192 which libuv happens to use as well.

It sounds like the limit can still be hit with many threads in play, but I have
yet to reproduce this, so I figure we can wait until that's hit (if it's
possible) and then take action.
2014-09-08 12:54:32 -07:00
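The actual change just lowers an internal chunk-size constant, but the underlying idea is simply to split large console writes into bounded chunks. A minimal sketch of that idea in present-day Rust (the 8192 constant mirrors the commit; `write_chunked` is a hypothetical helper):

```rust
use std::io::{self, Write};

// Write `buf` in chunks of at most `chunk` bytes, so a single huge write
// never hits the console size limit described in the commits above.
fn write_chunked<W: Write>(out: &mut W, mut buf: &[u8], chunk: usize) -> io::Result<()> {
    while !buf.is_empty() {
        let n = buf.len().min(chunk);
        out.write_all(&buf[..n])?;
        buf = &buf[n..];
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let big = "x".repeat(100_000);
    write_chunked(&mut io::stdout(), big.as_bytes(), 8192)
}
```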
Keegan McAllister
2b3619412f quote: Explicitly borrow the ExtCtxt
Fixes .
2014-09-08 11:30:55 -07:00
Patrick Walton
22179f49e5 librustc: Feature gate subslice matching in non-tail positions.
This breaks code that uses the `..xs` form anywhere but at the end of a
slice. For example:

    match foo {
        [ 1, ..xs, 2 ] => { ... }
        [ ..xs, 1, 2 ] => { ... }
    }

Add the `#![feature(advanced_slice_patterns)]` gate to reenable the
syntax.

RFC .

Closes .

[breaking-change]
2014-09-08 11:04:14 -07:00
Patrick Walton
3ca53d3a10 librustc: Make sure lifetimes in for loop heads outlive the for loop
itself.

This breaks code like:

    for &x in my_vector.iter() {
        my_vector[2] = "wibble";
        ...
    }

Change this code to not invalidate iterators. For example:

    for i in range(0, my_vector.len()) {
        my_vector[2] = "wibble";
        ...
    }

The `for-loop-does-not-borrow-iterators` test for  was incorrect
and has been removed.

Closes .

[breaking-change]
2014-09-08 10:50:34 -07:00
bors
0c73e5fc5f auto merge of : eddyb/rust/ty-arena, r=cmr
This was inspired by seeing an LLVM flatline of **~600MB** when running rustc with jemalloc (each type's `t_box_` is allocated on the heap, creating a lot of fragmentation, which jemalloc can deal with, unlike glibc).
2014-09-08 15:36:13 +00:00
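The fix is to carve type allocations out of an arena so they live in large contiguous chunks and are freed together. A minimal sketch of the pattern using the present-day `typed_arena` crate (an assumed stand-in for the in-tree `TypedArena`; names are illustrative):

```rust
// Sketch of arena-allocating interned values, in the spirit of the change
// above; `typed_arena::Arena` stands in for rustc's TypedArena.
use typed_arena::Arena;

#[derive(Debug)]
struct Ty {
    id: u32,
}

fn main() {
    let arena: Arena<Ty> = Arena::new();
    // Each `alloc` returns a reference that lives as long as the arena,
    // so values can be shared freely without per-value heap allocations.
    let a: &Ty = arena.alloc(Ty { id: 1 });
    let b: &Ty = arena.alloc(Ty { id: 2 });
    println!("{:?} {:?}", a, b);
}
```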
Björn Steinbrink
cdfa637dad Update LLVM to fix a crash in the MergeFunc pass 2014-09-08 17:07:03 +02:00
bors
6f34760e41 auto merge of : mahkoh/rust/move_items_unwrap, r=aturon
Closes 
2014-09-08 13:46:15 +00:00
Eduard Burtescu
8bfbcddf53 rustdoc: fix fallout from the addition of a 'tcx lifetime on tcx. 2014-09-08 15:28:25 +03:00
Eduard Burtescu
f7a997be05 rustc: fix fallout from the addition of a 'tcx lifetime on trans::Block. 2014-09-08 15:28:24 +03:00
Eduard Burtescu
28be695b2c rustc: fix fallout from the addition of a 'tcx lifetime on tcx. 2014-09-08 15:28:23 +03:00
Eduard Burtescu
22f8b8462e rustc: use a TypedArena to allocate types in the type context. 2014-09-08 15:14:10 +03:00
bors
5c3987985e auto merge of : thestinger/rust/large_address_aware, r=sfackler,cmr
By default, 32-bit Windows executables are restricted to 2GiB of address
space even when running on 64-bit Windows when 4GiB is available.

Closes 
2014-09-08 11:56:12 +00:00
bors
ab7b1c896d auto merge of : huonw/rust/snap, r=alexcrichton
Closes .
2014-09-08 10:06:15 +00:00
Huon Wilson
cc6a4877a4 rand: inform the optimiser that indexing is never out-of-bounds.
This uses a bitwise mask to ensure that there's no bounds checking for
the array accesses when generating the next random number. This isn't
costless, but the single instruction is nothing compared to the branch.

A `debug_assert` for "bounds check" is preserved to ensure that
refactoring doesn't accidentally break it (i.e. create values of `cnt`
that are out of bounds with the masking causing it to silently wrap-
around).

Before:

    test test::rand_isaac   ... bench: 990 ns/iter (+/- 24) = 808 MB/s
    test test::rand_isaac64 ... bench: 614 ns/iter (+/- 25) = 1302 MB/s

After:

    test test::rand_isaac   ... bench: 877 ns/iter (+/- 134) = 912 MB/s
    test test::rand_isaac64 ... bench: 470 ns/iter (+/- 30) = 1702 MB/s

(It also removes the unsafe code in Isaac64Rng.next_u64, with a *gain*
in performance; today is a good day.)
2014-09-08 19:33:37 +10:00
bors
a39f69f91d auto merge of : pczarn/rust/issue-15913-ICE-with-call-trans, r=alexcrichton
A match in callee.rs was recognizing some foreign fns as named tuple constructors. A reproducible test case for this is nearly impossible since it depends on the way NodeIds happen to be assigned in different crates.

Fixes 
2014-09-08 08:06:18 +00:00
Guillaume Pinot
13013d8f91 Relicense shootout-chameneos-redux.rs to the shootout license.
Everyone agreed. fix 
2014-09-08 08:47:26 +02:00
Nathan Typanski
a1d9010f51 Add ICE regression test with unboxed closures
This code used to produce the following ICE:

   error: internal compiler error: get_unique_type_id_of_type() -
   unexpected type: closure,
   ty_unboxed_closure(syntax::ast::DefId{krate: 0u32, node: 66u32},
   ReScope(63u32))

This is a regression test for issue .
2014-09-07 23:45:27 -04:00
bors
dd626b48c4 auto merge of : nick29581/rust/dst-rvalue, r=nikomatsakis
Closes  

r? @nikomatsakis I feel like I should be checking more things in check_rvalues, but I'm not sure what; I don't properly understand expr_use_visitor.
2014-09-08 02:36:15 +00:00
NODA, Kai
52e99cbcaa libnative/io: generic retry() for Unix 64 bit read/write().
Win32/WinSock APIs never call WSASetLastError() with WSAEINTR
unless a programmer specifically cancels the ongoing blocking call via
the deprecated WinSock 1 API WSACancelBlockingCall().
So the errno check was simply removed and retry() became an identity
function on Windows.
Note: Windows' equivalent of SIGINT is always handled in a separate thread:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682541%28v=vs.85%29.aspx
"CTRL+C and CTRL+BREAK Signals"

Also, incidentally rename a type parameter and clean up some module imports.
2014-09-08 08:17:13 +08:00
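The Unix half of this is the classic retry-on-EINTR loop; a hedged sketch of the pattern follows (present-day Rust, using the `libc` crate only for the `EINTR` constant; this is not the actual libnative code).

```rust
// Sketch of retry-on-EINTR for a Unix syscall wrapper; on Windows, as the
// commit explains, the wrapper can just call `f` once.
use std::io;

fn retry<F: FnMut() -> i64>(mut f: F) -> i64 {
    loop {
        let ret = f();
        if ret != -1 {
            return ret;
        }
        match io::Error::last_os_error().raw_os_error() {
            Some(code) if code == libc::EINTR => continue, // interrupted: try again
            _ => return ret,                               // a real error: report it
        }
    }
}

fn main() {
    // Trivial usage: the closure "succeeds" immediately, so retry returns 0.
    let ret = retry(|| 0);
    println!("ret = {}", ret);
}
```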
Alex Crichton
6c5c1ee34f rustdoc: Don't strip #-lines if notrust
Other languages may not want leading #-lines to be stripped.
2014-09-07 17:01:16 -07:00
bors
aaf141d399 auto merge of : alexcrichton/rust/remove-net-assert, r=brson
This assert was likely inherited from some point, but it's not quite valid as a
no-timeout read may enter this loop, but data could be stolen by any other read
after the socket is deemed readable.

I saw this fail in a recent bors run where the assertion was tripped.
2014-09-07 23:01:34 +00:00
Dan Albert
8c3db5bc53 Allow Rust to be built with LLVM trunk (3.6). 2014-09-07 14:42:48 -07:00
Nick Cameron
742f49c961 Forbid unsized rvalues
Closes 
2014-09-08 09:32:52 +12:00
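For context, a minimal sketch (present-day Rust, illustrative) of the kind of code this forbids: producing a value of unsized type as an rvalue, e.g. by moving out of a `Box<[T]>`.

```rust
// Sketch of an unsized rvalue being rejected; the commented-out line is the
// operation the commit forbids.
fn main() {
    let boxed: Box<[i32]> = Box::new([1, 2, 3]);
    // let moved = *boxed;
    // ^ error: the size for values of type `[i32]` cannot be known at
    //   compile time, so this unsized rvalue is rejected.
    println!("{:?}", &boxed[..]);
}
```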
bors
19dc574890 auto merge of : huonw/rust/moar-jquery, r=alexcrichton
Sometimes (e.g. on Rust CI) the "expand description" text of the
collapse toggle was displayed when a page was first loaded (even though
the description was already expanded), because some
Content-Security-Policy settings disable inline CSS.

Setting the style with the `.css` method instead allows the output to be
used in more places.
2014-09-07 21:06:29 +00:00
Jakub Wieczorek
c98a80e472 Fix casts in constant expressions
Fixes .
2014-09-07 21:24:18 +02:00
bors
86730e43c0 auto merge of : inrustwetrust/rust/link-path-order, r=alexcrichton
The issue can be reproduced with the following:
```
$ cat main.rs
fn main() {}
$ rustc -Z print-link-args -Lfoo -Lbar main.rs
```
Run the rustc command a few times and observe that the order of the '-L' 'foo' '-L' 'bar' options randomly changes.

I actually hit this issue in practice on Windows when specifying two -L directories to rustc, one containing rust-sdl2 and one containing the C SDL2.dll. Since Windows file systems aren't case-sensitive, gcc would randomly attempt to link against the Rust sdl2.dll instead of SDL2.dll whenever that -L directory happened to come first.

The randomness was due to addl_lib_search_paths being a HashSet; it is now a Vec, which maintains the ordering.
I'm unsure how to test this, though, since it is random by nature; suggestions are very welcome.
2014-09-07 19:06:28 +00:00
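The underlying issue is the usual one: `HashSet` iterates in an unspecified (per-process randomized) order, while `Vec` preserves insertion order. A small illustrative sketch of the difference:

```rust
// Sketch of why switching from HashSet to Vec fixes the -L ordering:
// HashSet iterates in an unspecified (randomized per process) order,
// whereas Vec keeps the order in which the paths were pushed.
use std::collections::HashSet;

fn main() {
    let search_paths = ["foo", "bar", "baz"];

    let as_set: HashSet<&str> = search_paths.iter().copied().collect();
    let as_vec: Vec<&str> = search_paths.to_vec();

    // Run this a few times: the set's order can change between runs, the
    // vec's never does, which is what deterministic '-L' emission needs.
    println!("set: {:?}", as_set);
    println!("vec: {:?}", as_vec);
}
```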
Daniel Micay
1ee099da36 enable jemalloc debugging in unoptimized builds
The performance hit from these checks is significant, but unoptimized
builds are already incredibly slow. Enabling these checks results in
better test coverage since there are bots doing unoptimized builds, and
the cost is relatively small in the context of an unoptimized build.
This also allows using `JEMALLOC_FLAGS` to override the default
configure flags.
2014-09-07 14:23:48 -04:00