Auto merge of #49669 - SimonSapin:global-alloc, r=alexcrichton

Add GlobalAlloc trait + tweaks for initial stabilization

This is the outcome of discussion at the Rust All Hands in Berlin. The high-level goal is to stabilize, sooner rather than later, the ability to [change the global allocator](https://github.com/rust-lang/rust/issues/27389), as well as a way to allocate memory without abusing `Vec::with_capacity` + `mem::forget`.

Since we’re not ready to settle every detail of the `Alloc` trait for the purpose of collections that are generic over the allocator type (for example the possibility of a separate trait for deallocation only, and what that would look like exactly), we propose introducing separately **a new `GlobalAlloc` trait**, for use with the `#[global_allocator]` attribute.

We also propose a number of changes to existing APIs. They are batched in this one PR in order to minimize disruption to Nightly users.

The plan for initial stabilization is detailed in the tracking issue https://github.com/rust-lang/rust/issues/49668.

CC @rust-lang/libs, @glandium

## Immediate breaking changes to unstable features

* For pointers to allocated memory, change the pointee type from `u8` to `Opaque`, a new public [extern type](https://github.com/rust-lang/rust/issues/43467). Since extern types are not `Sized`, `<*mut _>::offset` cannot be used without first casting to another pointer type. (We hope that extern types can also be stabilized soon.)
* In the `Alloc` trait, change these pointers to `ptr::NonNull` and change the `AllocErr` type to a zero-size struct. This makes return types such as `Result<ptr::NonNull<Opaque>, AllocErr>` pointer-sized. (A call-site sketch follows this list.)
* Instead of a new `Layout`, `realloc` takes only a new size (in addition to the pointer and old `Layout`). Changing the alignment is not supported with `realloc`.
* Change the return type of `Layout::from_size_align` from `Option<Self>` to `Result<Self, LayoutErr>`, with `LayoutErr` a new opaque struct.
* A `static` item registered as the global allocator with the `#[global_allocator]` **must now implement the new `GlobalAlloc` trait** instead of `Alloc`.
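
Taken together, a call site looks roughly like this under the new APIs (a minimal sketch against the nightly interfaces proposed here; `demo` is just an illustrative name, and error handling is reduced to `unwrap`/`oom` for brevity):

```rust
#![feature(allocator_api)]

use std::alloc::{Alloc, Global, Layout, Opaque};
use std::ptr::NonNull;

unsafe fn demo() {
    // `from_size_align` now returns `Result<Layout, LayoutErr>` instead of `Option<Layout>`.
    let layout = Layout::from_size_align(1024, 8).unwrap();

    // `alloc` now returns the pointer-sized `Result<NonNull<Opaque>, AllocErr>`.
    let ptr: NonNull<Opaque> = Global.alloc(layout).unwrap_or_else(|_| Global.oom());

    // `realloc` takes the old `Layout` plus only a new *size*; alignment cannot change.
    let ptr = Global.realloc(ptr, layout, 2048).unwrap_or_else(|_| Global.oom());

    // `Opaque` is an extern type (not `Sized`), so cast before pointer arithmetic.
    let second_byte = (ptr.as_ptr() as *mut u8).offset(1);
    let _ = second_byte;

    Global.dealloc(ptr, Layout::from_size_align(2048, 8).unwrap());
}
```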

## Eventually-breaking changes to unstable features, with a deprecation period

* Rename the respective `heap` modules to `alloc` in the `core`, `alloc`, and `std` crates. (Yes, this means that `::alloc::alloc::Alloc::alloc` is a valid path to a trait method if you have `extern crate alloc;`.)
* Rename the `Heap` type to `Global`, since it is the entry point for what’s registered with `#[global_allocator]`.

Old names remain available for now, as deprecated `pub use` reexports.
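
Concretely, during the deprecation period both spellings resolve to the same items (a sketch; the old path compiles with a deprecation warning):

```rust
#![feature(allocator_api)]

// Deprecated path, kept as a `pub use` reexport for now (warns):
use std::heap::Heap;

// New path after the rename:
use std::alloc::Global;

fn main() {
    let _old = Heap;   // `Heap` is now a deprecated alias for `Global`
    let _new = Global;
}
```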

## Backward-compatible changes

* Add a new [extern type](https://github.com/rust-lang/rust/issues/43467) `Opaque`, for use in pointers to allocated memory.
* Add a new `GlobalAlloc` trait shown below. Unlike `Alloc`, it uses bare `*mut Opaque` without `NonNull` or `Result`. NULL in return values indicates an error (of unspecified nature). This is easier to implement on top of `malloc`-like APIs.
* Add impls of `GlobalAlloc` for both the `Global` and `System` types, in addition to the existing impls of `Alloc`. This enables calling `GlobalAlloc` methods on the stable channel before `Alloc` is stable. Implementing two traits with identical method names can make some calls ambiguous, but most code is expected to have at most one of the two traits in scope. Erroneous code like `use std::alloc::Global; #[global_allocator] static A: Global = Global;` (which defines the global allocator in terms of itself, causing infinite recursion) is not statically prevented by the type system, but we count on it being hard enough to write accidentally and easy enough to diagnose.

```rust
use core::alloc::Layout;
use core::{cmp, intrinsics, ptr};

extern {
    pub type Opaque;
}

pub unsafe trait GlobalAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque;
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout);

    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
        // Default impl: self.alloc() and ptr::write_bytes()
        let size = layout.size();
        let ptr = self.alloc(layout);
        if !ptr.is_null() {
            ptr::write_bytes(ptr as *mut u8, 0, size);
        }
        ptr
    }

    unsafe fn realloc(&self, ptr: *mut Opaque, old_layout: Layout, new_size: usize) -> *mut Opaque {
        // Default impl: self.alloc() and ptr::copy_nonoverlapping() and self.dealloc(),
        // keeping the old alignment.
        let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
        let new_ptr = self.alloc(new_layout);
        if !new_ptr.is_null() {
            ptr::copy_nonoverlapping(ptr as *const u8,
                                     new_ptr as *mut u8,
                                     cmp::min(old_layout.size(), new_size));
            self.dealloc(ptr, old_layout);
        }
        new_ptr
    }

    fn oom(&self) -> ! {
        // Default impl: intrinsics::abort
        unsafe { intrinsics::abort() }
    }

    // More methods with default impls may be added in the future
}
```
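
To illustrate the “easy to implement on top of `malloc`-like APIs” point, here is a hedged sketch of a `#[global_allocator]` built directly on libc (not part of this PR; the `libc` crate usage is an assumption, and a real implementation must honor `layout.align()`):

```rust
#![feature(global_allocator, allocator_api)]

extern crate libc;

use std::alloc::{GlobalAlloc, Layout, Opaque};

struct Malloc;

unsafe impl GlobalAlloc for Malloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        // NOTE: malloc only guarantees the platform's default alignment;
        // a real allocator must respect `layout.align()`.
        libc::malloc(layout.size()) as *mut Opaque
    }

    unsafe fn dealloc(&self, ptr: *mut Opaque, _layout: Layout) {
        // No `Result` here: a null pointer returned from `alloc` is the error signal.
        libc::free(ptr as *mut libc::c_void)
    }
}

#[global_allocator]
static A: Malloc = Malloc;

fn main() {
    let v = vec![1u8, 2, 3]; // served by `Malloc` via the `Global` entry point
    assert_eq!(v.len(), 3);
}
```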

## Bikeshed

The tracking issue https://github.com/rust-lang/rust/issues/49668 lists some open questions. If consensus is reached before this PR is merged, changes can be integrated.
Merged by bors on 2018-04-13 10:33:51 +00:00 as commit 99d4886ead. 56 changed files with 1058 additions and 1516 deletions. The diff, as captured, follows.

### src/Cargo.lock (generated)

@@ -19,7 +19,6 @@ dependencies = [
 name = "alloc_jemalloc"
 version = "0.0.0"
 dependencies = [
- "alloc 0.0.0",
  "alloc_system 0.0.0",
  "build_helper 0.1.0",
  "cc 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -32,7 +31,6 @@ dependencies = [
 name = "alloc_system"
 version = "0.0.0"
 dependencies = [
- "alloc 0.0.0",
  "compiler_builtins 0.0.0",
  "core 0.0.0",
  "dlmalloc 0.0.0",
@@ -542,7 +540,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 name = "dlmalloc"
 version = "0.0.0"
 dependencies = [
- "alloc 0.0.0",
  "compiler_builtins 0.0.0",
  "core 0.0.0",
 ]

### Submodule updates

@@ -1 +1 @@
-Subproject commit 9b2dcac06c3e23235f8997b3c5f2325a6d3382df
+Subproject commit c99638dc2ecfc750cc1656f6edb2bd062c1e0981

@@ -1 +1 @@
-Subproject commit 6a8f0a27e9a58c55c89d07bc43a176fdae5e051c
+Subproject commit 3c56329d1bd9038e5341f1962bcd8d043312a712

### Documentation example for `#[global_allocator]`

@@ -29,16 +29,17 @@ looks like:
 #![feature(global_allocator, allocator_api, heap_api)]

-use std::heap::{Alloc, System, Layout, AllocErr};
+use std::alloc::{GlobalAlloc, System, Layout, Opaque};
+use std::ptr::NonNull;

 struct MyAllocator;

-unsafe impl<'a> Alloc for &'a MyAllocator {
-    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+unsafe impl GlobalAlloc for MyAllocator {
+    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
         System.alloc(layout)
     }

-    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
+    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
         System.dealloc(ptr, layout)
     }
 }

### src/liballoc/alloc.rs (new file, 215 lines)

// Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

#![unstable(feature = "allocator_api",
            reason = "the precise API and guarantees it provides may be tweaked \
                      slightly, especially to possibly take into account the \
                      types being stored to make room for a future \
                      tracing garbage collector",
            issue = "32838")]

use core::intrinsics::{min_align_of_val, size_of_val};
use core::ptr::NonNull;
use core::usize;

#[doc(inline)]
pub use core::alloc::*;

#[cfg(stage0)]
extern "Rust" {
    #[allocator]
    #[rustc_allocator_nounwind]
    fn __rust_alloc(size: usize, align: usize, err: *mut u8) -> *mut u8;
    #[cold]
    #[rustc_allocator_nounwind]
    fn __rust_oom(err: *const u8) -> !;
    #[rustc_allocator_nounwind]
    fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
    #[rustc_allocator_nounwind]
    fn __rust_realloc(ptr: *mut u8,
                      old_size: usize,
                      old_align: usize,
                      new_size: usize,
                      new_align: usize,
                      err: *mut u8) -> *mut u8;
    #[rustc_allocator_nounwind]
    fn __rust_alloc_zeroed(size: usize, align: usize, err: *mut u8) -> *mut u8;
}

#[cfg(not(stage0))]
extern "Rust" {
    #[allocator]
    #[rustc_allocator_nounwind]
    fn __rust_alloc(size: usize, align: usize) -> *mut u8;
    #[cold]
    #[rustc_allocator_nounwind]
    fn __rust_oom() -> !;
    #[rustc_allocator_nounwind]
    fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
    #[rustc_allocator_nounwind]
    fn __rust_realloc(ptr: *mut u8,
                      old_size: usize,
                      align: usize,
                      new_size: usize) -> *mut u8;
    #[rustc_allocator_nounwind]
    fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8;
}

#[derive(Copy, Clone, Default, Debug)]
pub struct Global;

#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "type renamed to `Global`")]
pub type Heap = Global;

#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "type renamed to `Global`")]
#[allow(non_upper_case_globals)]
pub const Heap: Global = Global;

unsafe impl GlobalAlloc for Global {
    #[inline]
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        #[cfg(not(stage0))]
        let ptr = __rust_alloc(layout.size(), layout.align());
        #[cfg(stage0)]
        let ptr = __rust_alloc(layout.size(), layout.align(), &mut 0);
        ptr as *mut Opaque
    }

    #[inline]
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
        __rust_dealloc(ptr as *mut u8, layout.size(), layout.align())
    }

    #[inline]
    unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
        #[cfg(not(stage0))]
        let ptr = __rust_realloc(ptr as *mut u8, layout.size(), layout.align(), new_size);
        #[cfg(stage0)]
        let ptr = __rust_realloc(ptr as *mut u8, layout.size(), layout.align(),
                                 new_size, layout.align(), &mut 0);
        ptr as *mut Opaque
    }

    #[inline]
    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
        #[cfg(not(stage0))]
        let ptr = __rust_alloc_zeroed(layout.size(), layout.align());
        #[cfg(stage0)]
        let ptr = __rust_alloc_zeroed(layout.size(), layout.align(), &mut 0);
        ptr as *mut Opaque
    }

    #[inline]
    fn oom(&self) -> ! {
        unsafe {
            #[cfg(not(stage0))]
            __rust_oom();
            #[cfg(stage0)]
            __rust_oom(&mut 0);
        }
    }
}

unsafe impl Alloc for Global {
    #[inline]
    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
        NonNull::new(GlobalAlloc::alloc(self, layout)).ok_or(AllocErr)
    }

    #[inline]
    unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
        GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
    }

    #[inline]
    unsafe fn realloc(&mut self,
                      ptr: NonNull<Opaque>,
                      layout: Layout,
                      new_size: usize)
                      -> Result<NonNull<Opaque>, AllocErr>
    {
        NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
    }

    #[inline]
    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
        NonNull::new(GlobalAlloc::alloc_zeroed(self, layout)).ok_or(AllocErr)
    }

    #[inline]
    fn oom(&mut self) -> ! {
        GlobalAlloc::oom(self)
    }
}

/// The allocator for unique pointers.
// This function must not unwind. If it does, MIR trans will fail.
#[cfg(not(test))]
#[lang = "exchange_malloc"]
#[inline]
unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {
    if size == 0 {
        align as *mut u8
    } else {
        let layout = Layout::from_size_align_unchecked(size, align);
        let ptr = Global.alloc(layout);
        if !ptr.is_null() {
            ptr as *mut u8
        } else {
            Global.oom()
        }
    }
}

#[cfg_attr(not(test), lang = "box_free")]
#[inline]
pub(crate) unsafe fn box_free<T: ?Sized>(ptr: *mut T) {
    let size = size_of_val(&*ptr);
    let align = min_align_of_val(&*ptr);
    // We do not allocate for Box<T> when T is ZST, so deallocation is also not necessary.
    if size != 0 {
        let layout = Layout::from_size_align_unchecked(size, align);
        Global.dealloc(ptr as *mut Opaque, layout);
    }
}

#[cfg(test)]
mod tests {
    extern crate test;
    use self::test::Bencher;
    use boxed::Box;
    use alloc::{Global, Alloc, Layout};

    #[test]
    fn allocate_zeroed() {
        unsafe {
            let layout = Layout::from_size_align(1024, 1).unwrap();
            let ptr = Global.alloc_zeroed(layout.clone())
                .unwrap_or_else(|_| Global.oom());

            let mut i = ptr.cast::<u8>().as_ptr();
            let end = i.offset(layout.size() as isize);
            while i < end {
                assert_eq!(*i, 0);
                i = i.offset(1);
            }
            Global.dealloc(ptr, layout);
        }
    }

    #[bench]
    fn alloc_owned_small(b: &mut Bencher) {
        b.iter(|| {
            let _: Box<_> = box 10;
        })
    }
}

### Arc and Weak (liballoc)

@@ -21,7 +21,6 @@ use core::sync::atomic::Ordering::{Acquire, Relaxed, Release, SeqCst};
 use core::borrow;
 use core::fmt;
 use core::cmp::Ordering;
-use core::heap::{Alloc, Layout};
 use core::intrinsics::abort;
 use core::mem::{self, align_of_val, size_of_val, uninitialized};
 use core::ops::Deref;
@@ -32,7 +31,7 @@ use core::hash::{Hash, Hasher};
 use core::{isize, usize};
 use core::convert::From;

-use heap::{Heap, box_free};
+use alloc::{Global, Alloc, Layout, box_free};
 use boxed::Box;
 use string::String;
 use vec::Vec;
@@ -513,15 +512,13 @@ impl<T: ?Sized> Arc<T> {
     // Non-inlined part of `drop`.
     #[inline(never)]
     unsafe fn drop_slow(&mut self) {
-        let ptr = self.ptr.as_ptr();
-
         // Destroy the data at this time, even though we may not free the box
         // allocation itself (there may still be weak pointers lying around).
         ptr::drop_in_place(&mut self.ptr.as_mut().data);

         if self.inner().weak.fetch_sub(1, Release) == 1 {
             atomic::fence(Acquire);
-            Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr))
+            Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()))
         }
     }
@@ -555,11 +552,11 @@
         let layout = Layout::for_value(&*fake_ptr);

-        let mem = Heap.alloc(layout)
-            .unwrap_or_else(|e| Heap.oom(e));
+        let mem = Global.alloc(layout)
+            .unwrap_or_else(|_| Global.oom());

         // Initialize the real ArcInner
-        let inner = set_data_ptr(ptr as *mut T, mem) as *mut ArcInner<T>;
+        let inner = set_data_ptr(ptr as *mut T, mem.as_ptr() as *mut u8) as *mut ArcInner<T>;

         ptr::write(&mut (*inner).strong, atomic::AtomicUsize::new(1));
         ptr::write(&mut (*inner).weak, atomic::AtomicUsize::new(1));
@@ -626,7 +623,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
         // In the event of a panic, elements that have been written
         // into the new ArcInner will be dropped, then the memory freed.
         struct Guard<T> {
-            mem: *mut u8,
+            mem: NonNull<u8>,
             elems: *mut T,
             layout: Layout,
             n_elems: usize,
@@ -640,7 +637,7 @@
                 let slice = from_raw_parts_mut(self.elems, self.n_elems);
                 ptr::drop_in_place(slice);

-                Heap.dealloc(self.mem, self.layout.clone());
+                Global.dealloc(self.mem.as_opaque(), self.layout.clone());
             }
         }
@@ -656,7 +653,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
             let elems = &mut (*ptr).data as *mut [T] as *mut T;

             let mut guard = Guard{
-                mem: mem,
+                mem: NonNull::new_unchecked(mem),
                 elems: elems,
                 layout: layout,
                 n_elems: 0,
@@ -1148,8 +1145,6 @@ impl<T: ?Sized> Drop for Weak<T> {
     /// assert!(other_weak_foo.upgrade().is_none());
     /// ```
     fn drop(&mut self) {
-        let ptr = self.ptr.as_ptr();
-
         // If we find out that we were the last weak pointer, then its time to
         // deallocate the data entirely. See the discussion in Arc::drop() about
         // the memory orderings
@@ -1161,7 +1156,7 @@
         if self.inner().weak.fetch_sub(1, Release) == 1 {
             atomic::fence(Acquire);
             unsafe {
-                Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr))
+                Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()))
             }
         }
     }

### BTree node code (liballoc)

@@ -41,14 +41,13 @@
 // - A node of length `n` has `n` keys, `n` values, and (in an internal node) `n + 1` edges.
 //   This implies that even an empty internal node has at least one edge.

-use core::heap::{Alloc, Layout};
 use core::marker::PhantomData;
 use core::mem;
 use core::ptr::{self, Unique, NonNull};
 use core::slice;

+use alloc::{Global, Alloc, Layout};
 use boxed::Box;
-use heap::Heap;

 const B: usize = 6;
 pub const MIN_LEN: usize = B - 1;
@@ -237,7 +236,7 @@ impl<K, V> Root<K, V> {
     pub fn pop_level(&mut self) {
         debug_assert!(self.height > 0);

-        let top = self.node.ptr.as_ptr() as *mut u8;
+        let top = self.node.ptr;

         self.node = unsafe {
             BoxedNode::from_ptr(self.as_mut()
@@ -250,7 +249,7 @@
         self.as_mut().as_leaf_mut().parent = ptr::null();

         unsafe {
-            Heap.dealloc(top, Layout::new::<InternalNode<K, V>>());
+            Global.dealloc(NonNull::from(top).as_opaque(), Layout::new::<InternalNode<K, V>>());
         }
     }
 }
@@ -434,9 +433,9 @@ impl<K, V> NodeRef<marker::Owned, K, V, marker::Leaf> {
             marker::Edge
         >
     > {
-        let ptr = self.as_leaf() as *const LeafNode<K, V> as *const u8 as *mut u8;
+        let node = self.node;
         let ret = self.ascend().ok();
-        Heap.dealloc(ptr, Layout::new::<LeafNode<K, V>>());
+        Global.dealloc(node.as_opaque(), Layout::new::<LeafNode<K, V>>());
         ret
     }
 }
@@ -455,9 +454,9 @@ impl<K, V> NodeRef<marker::Owned, K, V, marker::Internal> {
             marker::Edge
         >
     > {
-        let ptr = self.as_internal() as *const InternalNode<K, V> as *const u8 as *mut u8;
+        let node = self.node;
         let ret = self.ascend().ok();
-        Heap.dealloc(ptr, Layout::new::<InternalNode<K, V>>());
+        Global.dealloc(node.as_opaque(), Layout::new::<InternalNode<K, V>>());
         ret
     }
 }
@@ -1239,13 +1238,13 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
             ).correct_parent_link();
         }

-        Heap.dealloc(
-            right_node.node.as_ptr() as *mut u8,
+        Global.dealloc(
+            right_node.node.as_opaque(),
             Layout::new::<InternalNode<K, V>>(),
         );
     } else {
-        Heap.dealloc(
-            right_node.node.as_ptr() as *mut u8,
+        Global.dealloc(
+            right_node.node.as_opaque(),
             Layout::new::<LeafNode<K, V>>(),
         );
     }

### The deprecated `heap` module in liballoc (rewritten)

@@ -8,282 +8,103 @@

The module's old contents (the `__rust_*` extern declarations, the `Heap` type and its `Alloc` impl, plus `exchange_malloc`, `box_free`, and the tests, which now live in the new `alloc.rs` shown above) are deleted wholesale. The replacement is a bootstrap-compatibility shim:

#![allow(deprecated)]

pub use alloc::{Layout, AllocErr, CannotReallocInPlace, Opaque};
use core::alloc::Alloc as CoreAlloc;
use core::ptr::NonNull;

#[doc(hidden)]
pub mod __core {
    pub use core::*;
}

#[derive(Debug)]
pub struct Excess(pub *mut u8, pub usize);

/// Compatibility with older versions of #[global_allocator] during bootstrap
pub unsafe trait Alloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
    fn oom(&mut self, err: AllocErr) -> !;
    fn usable_size(&self, layout: &Layout) -> (usize, usize);
    unsafe fn realloc(&mut self,
                      ptr: *mut u8,
                      layout: Layout,
                      new_layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_excess(&mut self,
                             ptr: *mut u8,
                             layout: Layout,
                             new_layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn grow_in_place(&mut self,
                            ptr: *mut u8,
                            layout: Layout,
                            new_layout: Layout) -> Result<(), CannotReallocInPlace>;
    unsafe fn shrink_in_place(&mut self,
                              ptr: *mut u8,
                              layout: Layout,
                              new_layout: Layout) -> Result<(), CannotReallocInPlace>;
}

unsafe impl<T> Alloc for T where T: CoreAlloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
        CoreAlloc::alloc(self, layout).map(|ptr| ptr.cast().as_ptr())
    }

    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
        let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
        CoreAlloc::dealloc(self, ptr, layout)
    }

    fn oom(&mut self, _: AllocErr) -> ! {
        CoreAlloc::oom(self)
    }

    fn usable_size(&self, layout: &Layout) -> (usize, usize) {
        CoreAlloc::usable_size(self, layout)
    }

    unsafe fn realloc(&mut self,
                      ptr: *mut u8,
                      layout: Layout,
                      new_layout: Layout) -> Result<*mut u8, AllocErr> {
        let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
        CoreAlloc::realloc(self, ptr, layout, new_layout.size()).map(|ptr| ptr.cast().as_ptr())
    }

    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
        CoreAlloc::alloc_zeroed(self, layout).map(|ptr| ptr.cast().as_ptr())
    }

    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
        CoreAlloc::alloc_excess(self, layout)
            .map(|e| Excess(e.0 .cast().as_ptr(), e.1))
    }

    unsafe fn realloc_excess(&mut self,
                             ptr: *mut u8,
                             layout: Layout,
                             new_layout: Layout) -> Result<Excess, AllocErr> {
        let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
        CoreAlloc::realloc_excess(self, ptr, layout, new_layout.size())
            .map(|e| Excess(e.0 .cast().as_ptr(), e.1))
    }

    unsafe fn grow_in_place(&mut self,
                            ptr: *mut u8,
                            layout: Layout,
                            new_layout: Layout) -> Result<(), CannotReallocInPlace> {
        let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
        CoreAlloc::grow_in_place(self, ptr, layout, new_layout.size())
    }

    unsafe fn shrink_in_place(&mut self,
                              ptr: *mut u8,
                              layout: Layout,
                              new_layout: Layout) -> Result<(), CannotReallocInPlace> {
        let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
        CoreAlloc::shrink_in_place(self, ptr, layout, new_layout.size())
    }
}

### liballoc crate root

@@ -57,7 +57,7 @@
 //!
 //! ## Heap interfaces
 //!
-//! The [`heap`](heap/index.html) module defines the low-level interface to the
+//! The [`alloc`](alloc/index.html) module defines the low-level interface to the
 //! default global allocator. It is not compatible with the libc allocator API.

 #![allow(unused_attributes)]
@@ -97,7 +97,9 @@
 #![feature(from_ref)]
 #![feature(fundamental)]
 #![feature(lang_items)]
+#![feature(libc)]
 #![feature(needs_allocator)]
+#![feature(nonnull_cast)]
 #![feature(nonzero)]
 #![feature(optin_builtin_traits)]
 #![feature(pattern)]
@@ -141,10 +143,26 @@ mod macros;
 #[rustc_deprecated(since = "1.27.0", reason = "use the heap module in core, alloc, or std instead")]
 #[unstable(feature = "allocator_api", issue = "32838")]
-pub use core::heap as allocator;
+/// Use the `alloc` module instead.
+pub mod allocator {
+    pub use alloc::*;
+}

 // Heaps provided for low-level allocation strategies

+pub mod alloc;
+
+#[unstable(feature = "allocator_api", issue = "32838")]
+#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
+/// Use the `alloc` module instead.
+#[cfg(not(stage0))]
+pub mod heap {
+    pub use alloc::*;
+}
+
+#[unstable(feature = "allocator_api", issue = "32838")]
+#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
+#[cfg(stage0)]
 pub mod heap;

 // Primitive types using the heaps above

### RawVec (liballoc)

@@ -8,13 +8,12 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.

+use alloc::{Alloc, Layout, Global};
 use core::cmp;
-use core::heap::{Alloc, Layout};
 use core::mem;
 use core::ops::Drop;
-use core::ptr::{self, Unique};
+use core::ptr::{self, NonNull, Unique};
 use core::slice;
-use heap::Heap;

 use super::boxed::Box;
 use super::allocator::CollectionAllocErr;
 use super::allocator::CollectionAllocErr::*;
@@ -47,7 +46,7 @@
 /// field. This allows zero-sized types to not be special-cased by consumers of
 /// this type.
 #[allow(missing_debug_implementations)]
-pub struct RawVec<T, A: Alloc = Heap> {
+pub struct RawVec<T, A: Alloc = Global> {
     ptr: Unique<T>,
     cap: usize,
     a: A,
@@ -91,7 +90,7 @@ impl<T, A: Alloc> RawVec<T, A> {
         // handles ZSTs and `cap = 0` alike
         let ptr = if alloc_size == 0 {
-            mem::align_of::<T>() as *mut u8
+            NonNull::<T>::dangling().as_opaque()
         } else {
             let align = mem::align_of::<T>();
             let result = if zeroed {
@@ -101,12 +100,12 @@
             };
             match result {
                 Ok(ptr) => ptr,
-                Err(err) => a.oom(err),
+                Err(_) => a.oom(),
             }
         };

         RawVec {
-            ptr: Unique::new_unchecked(ptr as *mut _),
+            ptr: ptr.cast().into(),
             cap,
             a,
         }
@@ -114,14 +113,14 @@
     }
 }

-impl<T> RawVec<T, Heap> {
+impl<T> RawVec<T, Global> {
     /// Creates the biggest possible RawVec (on the system heap)
     /// without allocating. If T has positive size, then this makes a
     /// RawVec with capacity 0. If T has 0 size, then it makes a
     /// RawVec with capacity `usize::MAX`. Useful for implementing
     /// delayed allocation.
     pub fn new() -> Self {
-        Self::new_in(Heap)
+        Self::new_in(Global)
     }

     /// Creates a RawVec (on the system heap) with exactly the
@@ -141,13 +140,13 @@
     /// Aborts on OOM
     #[inline]
     pub fn with_capacity(cap: usize) -> Self {
-        RawVec::allocate_in(cap, false, Heap)
+        RawVec::allocate_in(cap, false, Global)
     }

     /// Like `with_capacity` but guarantees the buffer is zeroed.
     #[inline]
     pub fn with_capacity_zeroed(cap: usize) -> Self {
-        RawVec::allocate_in(cap, true, Heap)
+        RawVec::allocate_in(cap, true, Global)
     }
 }
@@ -168,7 +167,7 @@
     }
 }

-impl<T> RawVec<T, Heap> {
+impl<T> RawVec<T, Global> {
     /// Reconstitutes a RawVec from a pointer, capacity.
     ///
     /// # Undefined Behavior
@@ -180,7 +179,7 @@
         RawVec {
             ptr: Unique::new_unchecked(ptr),
             cap,
-            a: Heap,
+            a: Global,
         }
     }
@@ -310,14 +309,13 @@ impl<T, A: Alloc> RawVec<T, A> {
                     // `from_size_align_unchecked`.
                     let new_cap = 2 * self.cap;
                     let new_size = new_cap * elem_size;
-                    let new_layout = Layout::from_size_align_unchecked(new_size, cur.align());
                     alloc_guard(new_size).expect("capacity overflow");
-                    let ptr_res = self.a.realloc(self.ptr.as_ptr() as *mut u8,
+                    let ptr_res = self.a.realloc(NonNull::from(self.ptr).as_opaque(),
                                                  cur,
-                                                 new_layout);
+                                                 new_size);
                     match ptr_res {
-                        Ok(ptr) => (new_cap, Unique::new_unchecked(ptr as *mut T)),
-                        Err(e) => self.a.oom(e),
+                        Ok(ptr) => (new_cap, ptr.cast().into()),
+                        Err(_) => self.a.oom(),
                     }
                 }
                 None => {
@@ -326,7 +324,7 @@
                     let new_cap = if elem_size > (!0) / 8 { 1 } else { 4 };
                     match self.a.alloc_array::<T>(new_cap) {
                         Ok(ptr) => (new_cap, ptr.into()),
-                        Err(e) => self.a.oom(e),
+                        Err(_) => self.a.oom(),
                     }
                 }
             };
@@ -371,9 +369,7 @@
             let new_cap = 2 * self.cap;
             let new_size = new_cap * elem_size;
             alloc_guard(new_size).expect("capacity overflow");
-            let ptr = self.ptr() as *mut _;
-            let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
-            match self.a.grow_in_place(ptr, old_layout, new_layout) {
+            match self.a.grow_in_place(NonNull::from(self.ptr).as_opaque(), old_layout, new_size) {
                 Ok(_) => {
                     // We can't directly divide `size`.
                     self.cap = new_cap;
@@ -423,19 +419,19 @@
         // Nothing we can really do about these checks :(
         let new_cap = used_cap.checked_add(needed_extra_cap).ok_or(CapacityOverflow)?;
-        let new_layout = Layout::array::<T>(new_cap).ok_or(CapacityOverflow)?;
+        let new_layout = Layout::array::<T>(new_cap).map_err(|_| CapacityOverflow)?;

         alloc_guard(new_layout.size())?;

         let res = match self.current_layout() {
             Some(layout) => {
-                let old_ptr = self.ptr.as_ptr() as *mut u8;
-                self.a.realloc(old_ptr, layout, new_layout)
+                debug_assert!(new_layout.align() == layout.align());
+                self.a.realloc(NonNull::from(self.ptr).as_opaque(), layout, new_layout.size())
             }
             None => self.a.alloc(new_layout),
         };

-        self.ptr = Unique::new_unchecked(res? as *mut T);
+        self.ptr = res?.cast().into();
         self.cap = new_cap;

         Ok(())
@@ -445,7 +441,7 @@
     pub fn reserve_exact(&mut self, used_cap: usize, needed_extra_cap: usize) {
         match self.try_reserve_exact(used_cap, needed_extra_cap) {
             Err(CapacityOverflow) => panic!("capacity overflow"),
-            Err(AllocErr(e)) => self.a.oom(e),
+            Err(AllocErr) => self.a.oom(),
             Ok(()) => { /* yay */ }
         }
     }
@@ -531,20 +527,20 @@ impl<T, A: Alloc> RawVec<T, A> {
         }

         let new_cap = self.amortized_new_size(used_cap, needed_extra_cap)?;
-        let new_layout = Layout::array::<T>(new_cap).ok_or(CapacityOverflow)?;
+        let new_layout = Layout::array::<T>(new_cap).map_err(|_| CapacityOverflow)?;

         // FIXME: may crash and burn on over-reserve
         alloc_guard(new_layout.size())?;

         let res = match self.current_layout() {
             Some(layout) => {
-                let old_ptr = self.ptr.as_ptr() as *mut u8;
-                self.a.realloc(old_ptr, layout, new_layout)
+                debug_assert!(new_layout.align() == layout.align());
+                self.a.realloc(NonNull::from(self.ptr).as_opaque(), layout, new_layout.size())
             }
             None => self.a.alloc(new_layout),
         };

-        self.ptr = Unique::new_unchecked(res? as *mut T);
+        self.ptr = res?.cast().into();
         self.cap = new_cap;

         Ok(())
@@ -555,7 +551,7 @@
     pub fn reserve(&mut self, used_cap: usize, needed_extra_cap: usize) {
         match self.try_reserve(used_cap, needed_extra_cap) {
             Err(CapacityOverflow) => panic!("capacity overflow"),
-            Err(AllocErr(e)) => self.a.oom(e),
+            Err(AllocErr) => self.a.oom(),
             Ok(()) => { /* yay */ }
         }
     }
@@ -601,11 +597,12 @@
             // (regardless of whether `self.cap - used_cap` wrapped).
             // Therefore we can safely call grow_in_place.

-            let ptr = self.ptr() as *mut _;
             let new_layout = Layout::new::<T>().repeat(new_cap).unwrap().0;

             // FIXME: may crash and burn on over-reserve
             alloc_guard(new_layout.size()).expect("capacity overflow");
-            match self.a.grow_in_place(ptr, old_layout, new_layout) {
+            match self.a.grow_in_place(
+                NonNull::from(self.ptr).as_opaque(), old_layout, new_layout.size(),
+            ) {
                 Ok(_) => {
                     self.cap = new_cap;
                     true
@@ -665,12 +662,11 @@
                 let new_size = elem_size * amount;
                 let align = mem::align_of::<T>();
                 let old_layout = Layout::from_size_align_unchecked(old_size, align);
-                let new_layout = Layout::from_size_align_unchecked(new_size, align);
-                match self.a.realloc(self.ptr.as_ptr() as *mut u8,
+                match self.a.realloc(NonNull::from(self.ptr).as_opaque(),
                                      old_layout,
-                                     new_layout) {
-                    Ok(p) => self.ptr = Unique::new_unchecked(p as *mut T),
-                    Err(err) => self.a.oom(err),
+                                     new_size) {
+                    Ok(p) => self.ptr = p.cast().into(),
+                    Err(_) => self.a.oom(),
                 }
             }
             self.cap = amount;
@@ -678,7 +674,7 @@
     }
 }

-impl<T> RawVec<T, Heap> {
+impl<T> RawVec<T, Global> {
     /// Converts the entire buffer into `Box<[T]>`.
     ///
     /// While it is not *strictly* Undefined Behavior to call
@@ -702,8 +698,7 @@
         let elem_size = mem::size_of::<T>();
         if elem_size != 0 {
             if let Some(layout) = self.current_layout() {
-                let ptr = self.ptr() as *mut u8;
-                self.a.dealloc(ptr, layout);
+                self.a.dealloc(NonNull::from(self.ptr).as_opaque(), layout);
             }
         }
     }
@@ -739,6 +734,7 @@
 #[cfg(test)]
 mod tests {
     use super::*;
+    use alloc::Opaque;

     #[test]
     fn allocator_param() {
@@ -758,18 +754,18 @@ mod tests {
         // before allocation attempts start failing.
         struct BoundedAlloc { fuel: usize }
         unsafe impl Alloc for BoundedAlloc {
-            unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+            unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
                 let size = layout.size();
                 if size > self.fuel {
-                    return Err(AllocErr::Unsupported { details: "fuel exhausted" });
+                    return Err(AllocErr);
                 }
-                match Heap.alloc(layout) {
+                match Global.alloc(layout) {
                     ok @ Ok(_) => { self.fuel -= size; ok }
                     err @ Err(_) => err,
                 }
             }
-            unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
-                Heap.dealloc(ptr, layout)
+            unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
+                Global.dealloc(ptr, layout)
             }
         }

### Rc and Weak (liballoc)

@@ -250,7 +250,6 @@ use core::cell::Cell;
 use core::cmp::Ordering;
 use core::fmt;
 use core::hash::{Hash, Hasher};
-use core::heap::{Alloc, Layout};
 use core::intrinsics::abort;
 use core::marker;
 use core::marker::{Unsize, PhantomData};
@@ -260,7 +259,7 @@ use core::ops::CoerceUnsized;
 use core::ptr::{self, NonNull};
 use core::convert::From;

-use heap::{Heap, box_free};
+use alloc::{Global, Alloc, Layout, Opaque, box_free};
 use string::String;
 use vec::Vec;
@@ -668,11 +667,11 @@ impl<T: ?Sized> Rc<T> {
         let layout = Layout::for_value(&*fake_ptr);

-        let mem = Heap.alloc(layout)
-            .unwrap_or_else(|e| Heap.oom(e));
+        let mem = Global.alloc(layout)
+            .unwrap_or_else(|_| Global.oom());

         // Initialize the real RcBox
-        let inner = set_data_ptr(ptr as *mut T, mem) as *mut RcBox<T>;
+        let inner = set_data_ptr(ptr as *mut T, mem.as_ptr() as *mut u8) as *mut RcBox<T>;

         ptr::write(&mut (*inner).strong, Cell::new(1));
         ptr::write(&mut (*inner).weak, Cell::new(1));
@@ -738,7 +737,7 @@
         // In the event of a panic, elements that have been written
         // into the new RcBox will be dropped, then the memory freed.
         struct Guard<T> {
-            mem: *mut u8,
+            mem: NonNull<Opaque>,
             elems: *mut T,
             layout: Layout,
             n_elems: usize,
@@ -752,7 +751,7 @@
                 let slice = from_raw_parts_mut(self.elems, self.n_elems);
                 ptr::drop_in_place(slice);

-                Heap.dealloc(self.mem, self.layout.clone());
+                Global.dealloc(self.mem, self.layout.clone());
             }
         }
@@ -761,14 +760,14 @@
         let v_ptr = v as *const [T];
         let ptr = Self::allocate_for_ptr(v_ptr);

-        let mem = ptr as *mut _ as *mut u8;
+        let mem = ptr as *mut _ as *mut Opaque;
         let layout = Layout::for_value(&*ptr);

         // Pointer to first element
         let elems = &mut (*ptr).value as *mut [T] as *mut T;

         let mut guard = Guard{
-            mem: mem,
+            mem: NonNull::new_unchecked(mem),
             elems: elems,
             layout: layout,
             n_elems: 0,
@@ -835,8 +834,6 @@ unsafe impl<#[may_dangle] T: ?Sized> Drop for Rc<T> {
     /// ```
     fn drop(&mut self) {
         unsafe {
-            let ptr = self.ptr.as_ptr();
-
             self.dec_strong();
             if self.strong() == 0 {
                 // destroy the contained object
@@ -847,7 +844,7 @@
                 self.dec_weak();

                 if self.weak() == 0 {
-                    Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr));
+                    Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()));
                 }
             }
         }
@@ -1267,13 +1264,11 @@ impl<T: ?Sized> Drop for Weak<T> {
     /// ```
     fn drop(&mut self) {
         unsafe {
-            let ptr = self.ptr.as_ptr();
-
             self.dec_weak();
             // the weak count starts at 1, and will only go to zero if all
             // the strong pointers have disappeared.
             if self.weak() == 0 {
-                Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr));
+                Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()));
             }
         }
     }

### Overaligned-allocation tests

@@ -9,7 +9,7 @@
 // except according to those terms.

 use alloc_system::System;
-use std::heap::{Heap, Alloc, Layout};
+use std::alloc::{Global, Alloc, Layout};

 /// https://github.com/rust-lang/rust/issues/45955
 ///
@@ -22,7 +22,7 @@ fn alloc_system_overaligned_request() {
 #[test]
 fn std_heap_overaligned_request() {
-    check_overalign_requests(Heap)
+    check_overalign_requests(Global)
 }

 fn check_overalign_requests<T: Alloc>(mut allocator: T) {
@@ -34,7 +34,8 @@ fn check_overalign_requests<T: Alloc>(mut allocator: T) {
         allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap()
     }).collect();
     for &ptr in &pointers {
-        assert_eq!((ptr as usize) % align, 0, "Got a pointer less aligned than requested")
+        assert_eq!((ptr.as_ptr() as usize) % align, 0,
+                   "Got a pointer less aligned than requested")
     }
     // Clean up

### String `try_reserve` tests

@@ -575,11 +575,11 @@ fn test_try_reserve() {
         } else { panic!("usize::MAX should trigger an overflow!") }
     } else {
         // Check isize::MAX + 1 is an OOM
-        if let Err(AllocErr(_)) = empty_string.try_reserve(MAX_CAP + 1) {
+        if let Err(AllocErr) = empty_string.try_reserve(MAX_CAP + 1) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }

         // Check usize::MAX is an OOM
-        if let Err(AllocErr(_)) = empty_string.try_reserve(MAX_USIZE) {
+        if let Err(AllocErr) = empty_string.try_reserve(MAX_USIZE) {
         } else { panic!("usize::MAX should trigger an OOM!") }
     }
 }
@@ -599,7 +599,7 @@ fn test_try_reserve() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
+        if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     // Should always overflow in the add-to-len
@@ -637,10 +637,10 @@ fn test_try_reserve_exact() {
     if let Err(CapacityOverflow) = empty_string.try_reserve_exact(MAX_USIZE) {
     } else { panic!("usize::MAX should trigger an overflow!") }
     } else {
-        if let Err(AllocErr(_)) = empty_string.try_reserve_exact(MAX_CAP + 1) {
+        if let Err(AllocErr) = empty_string.try_reserve_exact(MAX_CAP + 1) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
-        if let Err(AllocErr(_)) = empty_string.try_reserve_exact(MAX_USIZE) {
+        if let Err(AllocErr) = empty_string.try_reserve_exact(MAX_USIZE) {
         } else { panic!("usize::MAX should trigger an OOM!") }
     }
 }
@@ -659,7 +659,7 @@ fn test_try_reserve_exact() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
+        if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {

### Vec `try_reserve` tests

@@ -1016,11 +1016,11 @@ fn test_try_reserve() {
         } else { panic!("usize::MAX should trigger an overflow!") }
     } else {
         // Check isize::MAX + 1 is an OOM
-        if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_CAP + 1) {
+        if let Err(AllocErr) = empty_bytes.try_reserve(MAX_CAP + 1) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }

         // Check usize::MAX is an OOM
-        if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_USIZE) {
+        if let Err(AllocErr) = empty_bytes.try_reserve(MAX_USIZE) {
         } else { panic!("usize::MAX should trigger an OOM!") }
     }
 }
@@ -1040,7 +1040,7 @@ fn test_try_reserve() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
+        if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     // Should always overflow in the add-to-len
@@ -1063,7 +1063,7 @@ fn test_try_reserve() {
         if let Err(CapacityOverflow) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
+        if let Err(AllocErr) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     // Should fail in the mul-by-size
@@ -1103,10 +1103,10 @@ fn test_try_reserve_exact() {
     if let Err(CapacityOverflow) = empty_bytes.try_reserve_exact(MAX_USIZE) {
     } else { panic!("usize::MAX should trigger an overflow!") }
     } else {
-        if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_CAP + 1) {
+        if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_CAP + 1) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
-        if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_USIZE) {
+        if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_USIZE) {
         } else { panic!("usize::MAX should trigger an OOM!") }
     }
 }
@@ -1125,7 +1125,7 @@ fn test_try_reserve_exact() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
+        if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {
@@ -1146,7 +1146,7 @@ fn test_try_reserve_exact() {
         if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
         } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
     } else {
-        if let Err(AllocErr(_)) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
+        if let Err(AllocErr) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
     if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_USIZE - 20) {
View File

@@ -1073,7 +1073,7 @@ fn test_try_reserve() {
     // VecDeque starts with capacity 7, always adds 1 to the capacity
     // and also rounds the number to next power of 2 so this is the
     // furthest we can go without triggering CapacityOverflow
-    if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_CAP) {
+    if let Err(AllocErr) = empty_bytes.try_reserve(MAX_CAP) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 }
@@ -1093,7 +1093,7 @@ fn test_try_reserve() {
     if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
     } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
 } else {
-    if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
+    if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 // Should always overflow in the add-to-len
@@ -1116,7 +1116,7 @@ fn test_try_reserve() {
     if let Err(CapacityOverflow) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
     } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
 } else {
-    if let Err(AllocErr(_)) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
+    if let Err(AllocErr) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 // Should fail in the mul-by-size
@@ -1160,7 +1160,7 @@ fn test_try_reserve_exact() {
     // VecDeque starts with capacity 7, always adds 1 to the capacity
     // and also rounds the number to next power of 2 so this is the
     // furthest we can go without triggering CapacityOverflow
-    if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_CAP) {
+    if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_CAP) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 }
@@ -1179,7 +1179,7 @@ fn test_try_reserve_exact() {
     if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
     } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
 } else {
-    if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
+    if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {
@@ -1200,7 +1200,7 @@ fn test_try_reserve_exact() {
     if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
     } else { panic!("isize::MAX + 1 should trigger an overflow!"); }
 } else {
-    if let Err(AllocErr(_)) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
+    if let Err(AllocErr) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
     } else { panic!("isize::MAX + 1 should trigger an OOM!") }
 }
 if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_USIZE - 20) {
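
For reference, the calling pattern these tests exercise now looks like this from user code. A minimal sketch, assuming a Nightly with the `try_reserve` feature and the `std::collections::CollectionAllocErr` re-export the test files above use:

```rust
#![feature(try_reserve)]

use std::collections::CollectionAllocErr::{CapacityOverflow, AllocErr};

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    match buf.try_reserve(1024) {
        Ok(()) => println!("reserved"),
        // The requested capacity overflowed the collection's limits.
        Err(CapacityOverflow) => println!("capacity overflow"),
        // The allocator failed; `AllocErr` is now a payload-free variant,
        // so there is no inner error value to destructure.
        Err(AllocErr) => println!("allocation failed"),
    }
}
```
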


@@ -12,7 +12,6 @@ test = false
 doc = false
 [dependencies]
-alloc = { path = "../liballoc" }
 alloc_system = { path = "../liballoc_system" }
 core = { path = "../libcore" }
 libc = { path = "../rustc/libc_shim" }


@@ -30,9 +30,7 @@ extern crate libc;
 pub use contents::*;
 #[cfg(not(dummy_jemalloc))]
 mod contents {
-    use core::ptr;
-    use core::heap::{Alloc, AllocErr, Layout};
+    use core::alloc::GlobalAlloc;
     use alloc_system::System;
     use libc::{c_int, c_void, size_t};
@@ -52,18 +50,10 @@ mod contents {
                    target_os = "dragonfly", target_os = "windows", target_env = "musl"),
                link_name = "je_rallocx")]
     fn rallocx(ptr: *mut c_void, size: size_t, flags: c_int) -> *mut c_void;
-    #[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
-                   target_os = "dragonfly", target_os = "windows", target_env = "musl"),
-               link_name = "je_xallocx")]
-    fn xallocx(ptr: *mut c_void, size: size_t, extra: size_t, flags: c_int) -> size_t;
     #[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
                    target_os = "dragonfly", target_os = "windows", target_env = "musl"),
                link_name = "je_sdallocx")]
     fn sdallocx(ptr: *mut c_void, size: size_t, flags: c_int);
-    #[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
-                   target_os = "dragonfly", target_os = "windows", target_env = "musl"),
-               link_name = "je_nallocx")]
-    fn nallocx(size: size_t, flags: c_int) -> size_t;
 }
 const MALLOCX_ZERO: c_int = 0x40;
@@ -104,23 +94,16 @@ mod contents {
     #[no_mangle]
     #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_alloc(size: usize,
-                                     align: usize,
-                                     err: *mut u8) -> *mut u8 {
+    pub unsafe extern fn __rde_alloc(size: usize, align: usize) -> *mut u8 {
         let flags = align_to_flags(align, size);
         let ptr = mallocx(size as size_t, flags) as *mut u8;
-        if ptr.is_null() {
-            let layout = Layout::from_size_align_unchecked(size, align);
-            ptr::write(err as *mut AllocErr,
-                       AllocErr::Exhausted { request: layout });
-        }
         ptr
     }
     #[no_mangle]
     #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_oom(err: *const u8) -> ! {
-        System.oom((*(err as *const AllocErr)).clone())
+    pub unsafe extern fn __rde_oom() -> ! {
+        System.oom()
     }
     #[no_mangle]
@@ -132,118 +115,26 @@ mod contents {
         sdallocx(ptr as *mut c_void, size, flags);
     }
-    #[no_mangle]
-    #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_usable_size(layout: *const u8,
-                                           min: *mut usize,
-                                           max: *mut usize) {
-        let layout = &*(layout as *const Layout);
-        let flags = align_to_flags(layout.align(), layout.size());
-        let size = nallocx(layout.size(), flags) as usize;
-        *min = layout.size();
-        if size > 0 {
-            *max = size;
-        } else {
-            *max = layout.size();
-        }
-    }
     #[no_mangle]
     #[rustc_std_internal_symbol]
     pub unsafe extern fn __rde_realloc(ptr: *mut u8,
                                        _old_size: usize,
-                                       old_align: usize,
-                                       new_size: usize,
-                                       new_align: usize,
-                                       err: *mut u8) -> *mut u8 {
-        if new_align != old_align {
-            ptr::write(err as *mut AllocErr,
-                       AllocErr::Unsupported { details: "can't change alignments" });
-            return 0 as *mut u8
-        }
-        let flags = align_to_flags(new_align, new_size);
+                                       align: usize,
+                                       new_size: usize) -> *mut u8 {
+        let flags = align_to_flags(align, new_size);
         let ptr = rallocx(ptr as *mut c_void, new_size, flags) as *mut u8;
-        if ptr.is_null() {
-            let layout = Layout::from_size_align_unchecked(new_size, new_align);
-            ptr::write(err as *mut AllocErr,
-                       AllocErr::Exhausted { request: layout });
-        }
         ptr
     }
     #[no_mangle]
     #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_alloc_zeroed(size: usize,
-                                            align: usize,
-                                            err: *mut u8) -> *mut u8 {
+    pub unsafe extern fn __rde_alloc_zeroed(size: usize, align: usize) -> *mut u8 {
         let ptr = if align <= MIN_ALIGN && align <= size {
             calloc(size as size_t, 1) as *mut u8
         } else {
             let flags = align_to_flags(align, size) | MALLOCX_ZERO;
             mallocx(size as size_t, flags) as *mut u8
         };
-        if ptr.is_null() {
-            let layout = Layout::from_size_align_unchecked(size, align);
-            ptr::write(err as *mut AllocErr,
-                       AllocErr::Exhausted { request: layout });
-        }
         ptr
     }
-    #[no_mangle]
-    #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_alloc_excess(size: usize,
-                                            align: usize,
-                                            excess: *mut usize,
-                                            err: *mut u8) -> *mut u8 {
-        let p = __rde_alloc(size, align, err);
-        if !p.is_null() {
-            let flags = align_to_flags(align, size);
-            *excess = nallocx(size, flags) as usize;
-        }
-        return p
-    }
-    #[no_mangle]
-    #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_realloc_excess(ptr: *mut u8,
-                                              old_size: usize,
-                                              old_align: usize,
-                                              new_size: usize,
-                                              new_align: usize,
-                                              excess: *mut usize,
-                                              err: *mut u8) -> *mut u8 {
-        let p = __rde_realloc(ptr, old_size, old_align, new_size, new_align, err);
-        if !p.is_null() {
-            let flags = align_to_flags(new_align, new_size);
-            *excess = nallocx(new_size, flags) as usize;
-        }
-        p
-    }
-    #[no_mangle]
-    #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_grow_in_place(ptr: *mut u8,
-                                             old_size: usize,
-                                             old_align: usize,
-                                             new_size: usize,
-                                             new_align: usize) -> u8 {
-        __rde_shrink_in_place(ptr, old_size, old_align, new_size, new_align)
-    }
-    #[no_mangle]
-    #[rustc_std_internal_symbol]
-    pub unsafe extern fn __rde_shrink_in_place(ptr: *mut u8,
-                                               _old_size: usize,
-                                               old_align: usize,
-                                               new_size: usize,
-                                               new_align: usize) -> u8 {
-        if old_align == new_align {
-            let flags = align_to_flags(new_align, new_size);
-            (xallocx(ptr as *mut c_void, new_size, 0, flags) == new_size) as u8
-        } else {
-            0
-        }
-    }
 }
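
The surviving shim functions now mirror `GlobalAlloc`'s contract: `(size, align)` passed by value, failure reported with a null pointer, and no `err: *mut u8` out-parameter. A standalone sketch of that convention, with hypothetical symbol names (not the real `__rde_*` symbols), alignment handling elided, and a `libc` dependency assumed:

```rust
extern crate libc;

// Shaped like the new __rde_alloc: a null return signals failure,
// exactly as GlobalAlloc::alloc does.
#[no_mangle]
pub unsafe extern "C" fn demo_alloc(size: usize, _align: usize) -> *mut u8 {
    libc::malloc(size) as *mut u8
}

// Shaped like the new __rde_dealloc: size and align arrive by value, so a
// jemalloc-style sized deallocation (sdallocx) remains possible.
#[no_mangle]
pub unsafe extern "C" fn demo_dealloc(ptr: *mut u8, _size: usize, _align: usize) {
    libc::free(ptr as *mut libc::c_void)
}
```
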


@@ -10,7 +10,6 @@ test = false
 doc = false
 [dependencies]
-alloc = { path = "../liballoc" }
 core = { path = "../libcore" }
 libc = { path = "../rustc/libc_shim" }
 compiler_builtins = { path = "../rustc/compiler_builtins_shim" }


@@ -41,7 +41,8 @@ const MIN_ALIGN: usize = 8;
 #[allow(dead_code)]
 const MIN_ALIGN: usize = 16;
-use core::heap::{Alloc, AllocErr, Layout, Excess, CannotReallocInPlace};
+use core::alloc::{Alloc, GlobalAlloc, AllocErr, Layout, Opaque};
+use core::ptr::NonNull;
 #[unstable(feature = "allocator_api", issue = "32838")]
 pub struct System;
@@ -49,66 +50,86 @@ pub struct System;
 #[unstable(feature = "allocator_api", issue = "32838")]
 unsafe impl Alloc for System {
     #[inline]
-    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
-        (&*self).alloc(layout)
+    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::alloc(self, layout)).ok_or(AllocErr)
     }
     #[inline]
-    unsafe fn alloc_zeroed(&mut self, layout: Layout)
-        -> Result<*mut u8, AllocErr>
-    {
-        (&*self).alloc_zeroed(layout)
+    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::alloc_zeroed(self, layout)).ok_or(AllocErr)
     }
     #[inline]
-    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
-        (&*self).dealloc(ptr, layout)
+    unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
+        GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
     }
     #[inline]
     unsafe fn realloc(&mut self,
-                      ptr: *mut u8,
-                      old_layout: Layout,
-                      new_layout: Layout) -> Result<*mut u8, AllocErr> {
-        (&*self).realloc(ptr, old_layout, new_layout)
-    }
-    fn oom(&mut self, err: AllocErr) -> ! {
-        (&*self).oom(err)
+                      ptr: NonNull<Opaque>,
+                      layout: Layout,
+                      new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
     }
     #[inline]
-    fn usable_size(&self, layout: &Layout) -> (usize, usize) {
-        (&self).usable_size(layout)
-    }
-    #[inline]
-    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
-        (&*self).alloc_excess(layout)
-    }
-    #[inline]
-    unsafe fn realloc_excess(&mut self,
-                             ptr: *mut u8,
-                             layout: Layout,
-                             new_layout: Layout) -> Result<Excess, AllocErr> {
-        (&*self).realloc_excess(ptr, layout, new_layout)
-    }
-    #[inline]
-    unsafe fn grow_in_place(&mut self,
-                            ptr: *mut u8,
-                            layout: Layout,
-                            new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-        (&*self).grow_in_place(ptr, layout, new_layout)
-    }
-    #[inline]
-    unsafe fn shrink_in_place(&mut self,
-                              ptr: *mut u8,
-                              layout: Layout,
-                              new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-        (&*self).shrink_in_place(ptr, layout, new_layout)
+    fn oom(&mut self) -> ! {
+        ::oom()
+    }
+}
+
+#[cfg(stage0)]
+#[unstable(feature = "allocator_api", issue = "32838")]
+unsafe impl<'a> Alloc for &'a System {
+    #[inline]
+    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::alloc(*self, layout)).ok_or(AllocErr)
+    }
+    #[inline]
+    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::alloc_zeroed(*self, layout)).ok_or(AllocErr)
+    }
+    #[inline]
+    unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
+        GlobalAlloc::dealloc(*self, ptr.as_ptr(), layout)
+    }
+    #[inline]
+    unsafe fn realloc(&mut self,
+                      ptr: NonNull<Opaque>,
+                      layout: Layout,
+                      new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
+        NonNull::new(GlobalAlloc::realloc(*self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
+    }
+    #[inline]
+    fn oom(&mut self) -> ! {
+        ::oom()
+    }
+}
+
+#[cfg(any(windows, unix, target_os = "cloudabi", target_os = "redox"))]
+mod realloc_fallback {
+    use core::alloc::{GlobalAlloc, Opaque, Layout};
+    use core::cmp;
+    use core::ptr;
+    impl super::System {
+        pub(crate) unsafe fn realloc_fallback(&self, ptr: *mut Opaque, old_layout: Layout,
+                                              new_size: usize) -> *mut Opaque {
+            // Docs for GlobalAlloc::realloc require this to be valid:
+            let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
+            let new_ptr = GlobalAlloc::alloc(self, new_layout);
+            if !new_ptr.is_null() {
+                let size = cmp::min(old_layout.size(), new_size);
+                ptr::copy_nonoverlapping(ptr as *mut u8, new_ptr as *mut u8, size);
+                GlobalAlloc::dealloc(self, ptr, old_layout);
+            }
+            new_ptr
+        }
+    }
 }
@@ -116,132 +137,62 @@ unsafe impl Alloc for System {
 mod platform {
     extern crate libc;
-    use core::cmp;
     use core::ptr;
     use MIN_ALIGN;
     use System;
-    use core::heap::{Alloc, AllocErr, Layout};
+    use core::alloc::{GlobalAlloc, Layout, Opaque};
     #[unstable(feature = "allocator_api", issue = "32838")]
-    unsafe impl<'a> Alloc for &'a System {
+    unsafe impl GlobalAlloc for System {
         #[inline]
-        unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
-            let ptr = if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
-                libc::malloc(layout.size()) as *mut u8
+        unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
+            if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
+                libc::malloc(layout.size()) as *mut Opaque
             } else {
                 #[cfg(target_os = "macos")]
                 {
                     if layout.align() > (1 << 31) {
-                        return Err(AllocErr::Unsupported {
-                            details: "requested alignment too large"
-                        })
+                        // FIXME: use Opaque::null_mut
+                        // https://github.com/rust-lang/rust/issues/49659
+                        return 0 as *mut Opaque
                     }
                 }
                 aligned_malloc(&layout)
-            };
-            if !ptr.is_null() {
-                Ok(ptr)
-            } else {
-                Err(AllocErr::Exhausted { request: layout })
             }
         }
         #[inline]
-        unsafe fn alloc_zeroed(&mut self, layout: Layout)
-            -> Result<*mut u8, AllocErr>
-        {
+        unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
             if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
-                let ptr = libc::calloc(layout.size(), 1) as *mut u8;
-                if !ptr.is_null() {
-                    Ok(ptr)
-                } else {
-                    Err(AllocErr::Exhausted { request: layout })
-                }
+                libc::calloc(layout.size(), 1) as *mut Opaque
             } else {
-                let ret = self.alloc(layout.clone());
-                if let Ok(ptr) = ret {
-                    ptr::write_bytes(ptr, 0, layout.size());
+                let ptr = self.alloc(layout.clone());
+                if !ptr.is_null() {
+                    ptr::write_bytes(ptr as *mut u8, 0, layout.size());
                 }
-                ret
+                ptr
             }
         }
         #[inline]
-        unsafe fn dealloc(&mut self, ptr: *mut u8, _layout: Layout) {
+        unsafe fn dealloc(&self, ptr: *mut Opaque, _layout: Layout) {
             libc::free(ptr as *mut libc::c_void)
         }
         #[inline]
-        unsafe fn realloc(&mut self,
-                          ptr: *mut u8,
-                          old_layout: Layout,
-                          new_layout: Layout) -> Result<*mut u8, AllocErr> {
-            if old_layout.align() != new_layout.align() {
-                return Err(AllocErr::Unsupported {
-                    details: "cannot change alignment on `realloc`",
-                })
-            }
-            if new_layout.align() <= MIN_ALIGN && new_layout.align() <= new_layout.size(){
-                let ptr = libc::realloc(ptr as *mut libc::c_void, new_layout.size());
-                if !ptr.is_null() {
-                    Ok(ptr as *mut u8)
-                } else {
-                    Err(AllocErr::Exhausted { request: new_layout })
-                }
+        unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
+            if layout.align() <= MIN_ALIGN && layout.align() <= new_size {
+                libc::realloc(ptr as *mut libc::c_void, new_size) as *mut Opaque
             } else {
-                let res = self.alloc(new_layout.clone());
-                if let Ok(new_ptr) = res {
-                    let size = cmp::min(old_layout.size(), new_layout.size());
-                    ptr::copy_nonoverlapping(ptr, new_ptr, size);
-                    self.dealloc(ptr, old_layout);
-                }
-                res
-            }
-        }
-        fn oom(&mut self, err: AllocErr) -> ! {
-            use core::fmt::{self, Write};
-            // Print a message to stderr before aborting to assist with
-            // debugging. It is critical that this code does not allocate any
-            // memory since we are in an OOM situation. Any errors are ignored
-            // while printing since there's nothing we can do about them and we
-            // are about to exit anyways.
-            drop(writeln!(Stderr, "fatal runtime error: {}", err));
-            unsafe {
-                ::core::intrinsics::abort();
-            }
-            struct Stderr;
-            impl Write for Stderr {
-                #[cfg(target_os = "cloudabi")]
-                fn write_str(&mut self, _: &str) -> fmt::Result {
-                    // CloudABI does not have any reserved file descriptor
-                    // numbers. We should not attempt to write to file
-                    // descriptor #2, as it may be associated with any kind of
-                    // resource.
-                    Ok(())
-                }
-                #[cfg(not(target_os = "cloudabi"))]
-                fn write_str(&mut self, s: &str) -> fmt::Result {
-                    unsafe {
-                        libc::write(libc::STDERR_FILENO,
-                                    s.as_ptr() as *const libc::c_void,
-                                    s.len());
-                    }
-                    Ok(())
-                }
+                self.realloc_fallback(ptr, layout, new_size)
             }
         }
     }
     #[cfg(any(target_os = "android", target_os = "redox", target_os = "solaris"))]
     #[inline]
-    unsafe fn aligned_malloc(layout: &Layout) -> *mut u8 {
+    unsafe fn aligned_malloc(layout: &Layout) -> *mut Opaque {
         // On android we currently target API level 9 which unfortunately
         // doesn't have the `posix_memalign` API used below. Instead we use
         // `memalign`, but this unfortunately has the property on some systems
@@ -259,18 +210,19 @@ mod platform {
         // [3]: https://bugs.chromium.org/p/chromium/issues/detail?id=138579
         // [4]: https://chromium.googlesource.com/chromium/src/base/+/master/
         //      /memory/aligned_memory.cc
-        libc::memalign(layout.align(), layout.size()) as *mut u8
+        libc::memalign(layout.align(), layout.size()) as *mut Opaque
     }
     #[cfg(not(any(target_os = "android", target_os = "redox", target_os = "solaris")))]
     #[inline]
-    unsafe fn aligned_malloc(layout: &Layout) -> *mut u8 {
+    unsafe fn aligned_malloc(layout: &Layout) -> *mut Opaque {
         let mut out = ptr::null_mut();
         let ret = libc::posix_memalign(&mut out, layout.align(), layout.size());
         if ret != 0 {
-            ptr::null_mut()
+            // FIXME: use Opaque::null_mut https://github.com/rust-lang/rust/issues/49659
+            0 as *mut Opaque
         } else {
-            out as *mut u8
+            out as *mut Opaque
         }
     }
 }
@@ -278,22 +230,15 @@ mod platform {
 #[cfg(windows)]
 #[allow(bad_style)]
 mod platform {
-    use core::cmp;
-    use core::ptr;
     use MIN_ALIGN;
     use System;
-    use core::heap::{Alloc, AllocErr, Layout, CannotReallocInPlace};
+    use core::alloc::{GlobalAlloc, Opaque, Layout};
     type LPVOID = *mut u8;
     type HANDLE = LPVOID;
     type SIZE_T = usize;
     type DWORD = u32;
     type BOOL = i32;
-    type LPDWORD = *mut DWORD;
-    type LPOVERLAPPED = *mut u8;
-    const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
     extern "system" {
         fn GetProcessHeap() -> HANDLE;
@@ -301,20 +246,12 @@ mod platform {
         fn HeapReAlloc(hHeap: HANDLE, dwFlags: DWORD, lpMem: LPVOID, dwBytes: SIZE_T) -> LPVOID;
         fn HeapFree(hHeap: HANDLE, dwFlags: DWORD, lpMem: LPVOID) -> BOOL;
         fn GetLastError() -> DWORD;
-        fn WriteFile(hFile: HANDLE,
-                     lpBuffer: LPVOID,
-                     nNumberOfBytesToWrite: DWORD,
-                     lpNumberOfBytesWritten: LPDWORD,
-                     lpOverlapped: LPOVERLAPPED)
-                     -> BOOL;
-        fn GetStdHandle(which: DWORD) -> HANDLE;
     }
     #[repr(C)]
     struct Header(*mut u8);
     const HEAP_ZERO_MEMORY: DWORD = 0x00000008;
-    const HEAP_REALLOC_IN_PLACE_ONLY: DWORD = 0x00000010;
     unsafe fn get_header<'a>(ptr: *mut u8) -> &'a mut Header {
         &mut *(ptr as *mut Header).offset(-1)
@@ -327,9 +264,7 @@ mod platform {
     }
     #[inline]
-    unsafe fn allocate_with_flags(layout: Layout, flags: DWORD)
-        -> Result<*mut u8, AllocErr>
-    {
+    unsafe fn allocate_with_flags(layout: Layout, flags: DWORD) -> *mut Opaque {
         let ptr = if layout.align() <= MIN_ALIGN {
             HeapAlloc(GetProcessHeap(), flags, layout.size())
         } else {
@@ -341,35 +276,29 @@ mod platform {
                 align_ptr(ptr, layout.align())
             }
         };
-        if ptr.is_null() {
-            Err(AllocErr::Exhausted { request: layout })
-        } else {
-            Ok(ptr as *mut u8)
-        }
+        ptr as *mut Opaque
     }
     #[unstable(feature = "allocator_api", issue = "32838")]
-    unsafe impl<'a> Alloc for &'a System {
+    unsafe impl GlobalAlloc for System {
         #[inline]
-        unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+        unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
             allocate_with_flags(layout, 0)
         }
         #[inline]
-        unsafe fn alloc_zeroed(&mut self, layout: Layout)
-            -> Result<*mut u8, AllocErr>
-        {
+        unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
             allocate_with_flags(layout, HEAP_ZERO_MEMORY)
         }
         #[inline]
-        unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
+        unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
             if layout.align() <= MIN_ALIGN {
                 let err = HeapFree(GetProcessHeap(), 0, ptr as LPVOID);
                 debug_assert!(err != 0, "Failed to free heap memory: {}",
                               GetLastError());
             } else {
-                let header = get_header(ptr);
+                let header = get_header(ptr as *mut u8);
                 let err = HeapFree(GetProcessHeap(), 0, header.0 as LPVOID);
                 debug_assert!(err != 0, "Failed to free heap memory: {}",
                               GetLastError());
@@ -377,98 +306,11 @@ mod platform {
         }
         #[inline]
-        unsafe fn realloc(&mut self,
-                          ptr: *mut u8,
-                          old_layout: Layout,
-                          new_layout: Layout) -> Result<*mut u8, AllocErr> {
-            if old_layout.align() != new_layout.align() {
-                return Err(AllocErr::Unsupported {
-                    details: "cannot change alignment on `realloc`",
-                })
-            }
-            if new_layout.align() <= MIN_ALIGN {
-                let ptr = HeapReAlloc(GetProcessHeap(),
-                                      0,
-                                      ptr as LPVOID,
-                                      new_layout.size());
-                if !ptr.is_null() {
-                    Ok(ptr as *mut u8)
-                } else {
-                    Err(AllocErr::Exhausted { request: new_layout })
-                }
+        unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
+            if layout.align() <= MIN_ALIGN {
+                HeapReAlloc(GetProcessHeap(), 0, ptr as LPVOID, new_size) as *mut Opaque
             } else {
-                let res = self.alloc(new_layout.clone());
-                if let Ok(new_ptr) = res {
-                    let size = cmp::min(old_layout.size(), new_layout.size());
-                    ptr::copy_nonoverlapping(ptr, new_ptr, size);
-                    self.dealloc(ptr, old_layout);
-                }
-                res
-            }
-        }
-        #[inline]
-        unsafe fn grow_in_place(&mut self,
-                                ptr: *mut u8,
-                                layout: Layout,
-                                new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-            self.shrink_in_place(ptr, layout, new_layout)
-        }
-        #[inline]
-        unsafe fn shrink_in_place(&mut self,
-                                  ptr: *mut u8,
-                                  old_layout: Layout,
-                                  new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-            if old_layout.align() != new_layout.align() {
-                return Err(CannotReallocInPlace)
-            }
-            let new = if new_layout.align() <= MIN_ALIGN {
-                HeapReAlloc(GetProcessHeap(),
-                            HEAP_REALLOC_IN_PLACE_ONLY,
-                            ptr as LPVOID,
-                            new_layout.size())
-            } else {
-                let header = get_header(ptr);
-                HeapReAlloc(GetProcessHeap(),
-                            HEAP_REALLOC_IN_PLACE_ONLY,
-                            header.0 as LPVOID,
-                            new_layout.size() + new_layout.align())
-            };
-            if new.is_null() {
-                Err(CannotReallocInPlace)
-            } else {
-                Ok(())
-            }
-        }
-        fn oom(&mut self, err: AllocErr) -> ! {
-            use core::fmt::{self, Write};
-            // Same as with unix we ignore all errors here
-            drop(writeln!(Stderr, "fatal runtime error: {}", err));
-            unsafe {
-                ::core::intrinsics::abort();
-            }
-            struct Stderr;
-            impl Write for Stderr {
-                fn write_str(&mut self, s: &str) -> fmt::Result {
-                    unsafe {
-                        // WriteFile silently fails if it is passed an invalid
-                        // handle, so there is no need to check the result of
-                        // GetStdHandle.
-                        WriteFile(GetStdHandle(STD_ERROR_HANDLE),
-                                  s.as_ptr() as LPVOID,
-                                  s.len() as DWORD,
-                                  ptr::null_mut(),
-                                  ptr::null_mut());
-                    }
-                    Ok(())
-                }
+                self.realloc_fallback(ptr, layout, new_size)
             }
         }
     }
 }
@@ -495,69 +337,92 @@ mod platform {
 mod platform {
     extern crate dlmalloc;
-    use core::heap::{Alloc, AllocErr, Layout, Excess, CannotReallocInPlace};
+    use core::alloc::{GlobalAlloc, Layout, Opaque};
     use System;
-    use self::dlmalloc::GlobalDlmalloc;
+    // No need for synchronization here as wasm is currently single-threaded
+    static mut DLMALLOC: dlmalloc::Dlmalloc = dlmalloc::DLMALLOC_INIT;
     #[unstable(feature = "allocator_api", issue = "32838")]
-    unsafe impl<'a> Alloc for &'a System {
+    unsafe impl GlobalAlloc for System {
         #[inline]
-        unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
-            GlobalDlmalloc.alloc(layout)
+        unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
+            DLMALLOC.malloc(layout.size(), layout.align()) as *mut Opaque
         }
         #[inline]
-        unsafe fn alloc_zeroed(&mut self, layout: Layout)
-            -> Result<*mut u8, AllocErr>
-        {
-            GlobalDlmalloc.alloc_zeroed(layout)
+        unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
+            DLMALLOC.calloc(layout.size(), layout.align()) as *mut Opaque
         }
         #[inline]
-        unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
-            GlobalDlmalloc.dealloc(ptr, layout)
+        unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
+            DLMALLOC.free(ptr as *mut u8, layout.size(), layout.align())
        }
        #[inline]
-        unsafe fn realloc(&mut self,
-                          ptr: *mut u8,
-                          old_layout: Layout,
-                          new_layout: Layout) -> Result<*mut u8, AllocErr> {
-            GlobalDlmalloc.realloc(ptr, old_layout, new_layout)
-        }
-        #[inline]
-        fn usable_size(&self, layout: &Layout) -> (usize, usize) {
-            GlobalDlmalloc.usable_size(layout)
-        }
-        #[inline]
-        unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
-            GlobalDlmalloc.alloc_excess(layout)
-        }
-        #[inline]
-        unsafe fn realloc_excess(&mut self,
-                                 ptr: *mut u8,
-                                 layout: Layout,
-                                 new_layout: Layout) -> Result<Excess, AllocErr> {
-            GlobalDlmalloc.realloc_excess(ptr, layout, new_layout)
-        }
-        #[inline]
-        unsafe fn grow_in_place(&mut self,
-                                ptr: *mut u8,
-                                layout: Layout,
-                                new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-            GlobalDlmalloc.grow_in_place(ptr, layout, new_layout)
-        }
-        #[inline]
-        unsafe fn shrink_in_place(&mut self,
-                                  ptr: *mut u8,
-                                  layout: Layout,
-                                  new_layout: Layout) -> Result<(), CannotReallocInPlace> {
-            GlobalDlmalloc.shrink_in_place(ptr, layout, new_layout)
+        unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
+            DLMALLOC.realloc(ptr as *mut u8, layout.size(), layout.align(), new_size) as *mut Opaque
         }
     }
 }
+
+#[inline]
+fn oom() -> ! {
+    write_to_stderr("fatal runtime error: memory allocation failed");
+    unsafe {
+        ::core::intrinsics::abort();
+    }
+}
+
+#[cfg(any(unix, target_os = "redox"))]
+#[inline]
+fn write_to_stderr(s: &str) {
+    extern crate libc;
+    unsafe {
+        libc::write(libc::STDERR_FILENO,
+                    s.as_ptr() as *const libc::c_void,
+                    s.len());
+    }
+}
+
+#[cfg(windows)]
+#[inline]
+fn write_to_stderr(s: &str) {
+    use core::ptr;
+    type LPVOID = *mut u8;
+    type HANDLE = LPVOID;
+    type DWORD = u32;
+    type BOOL = i32;
+    type LPDWORD = *mut DWORD;
+    type LPOVERLAPPED = *mut u8;
+    const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
+    extern "system" {
+        fn WriteFile(hFile: HANDLE,
+                     lpBuffer: LPVOID,
+                     nNumberOfBytesToWrite: DWORD,
+                     lpNumberOfBytesWritten: LPDWORD,
+                     lpOverlapped: LPOVERLAPPED)
+                     -> BOOL;
+        fn GetStdHandle(which: DWORD) -> HANDLE;
+    }
+    unsafe {
+        // WriteFile silently fails if it is passed an invalid
+        // handle, so there is no need to check the result of
+        // GetStdHandle.
+        WriteFile(GetStdHandle(STD_ERROR_HANDLE),
+                  s.as_ptr() as LPVOID,
+                  s.len() as DWORD,
+                  ptr::null_mut(),
+                  ptr::null_mut());
+    }
+}
+
+#[cfg(not(any(windows, unix, target_os = "redox")))]
+#[inline]
+fn write_to_stderr(_: &str) {}


@@ -21,10 +21,30 @@ use mem;
 use usize;
 use ptr::{self, NonNull};
+
+extern {
+    /// An opaque, unsized type. Used for pointers to allocated memory.
+    ///
+    /// This type can only be used behind a pointer like `*mut Opaque` or `ptr::NonNull<Opaque>`.
+    /// Such pointers are similar to C's `void*` type.
+    pub type Opaque;
+}
+
+impl Opaque {
+    /// Similar to `std::ptr::null`, which requires `T: Sized`.
+    pub fn null() -> *const Self {
+        0 as _
+    }
+
+    /// Similar to `std::ptr::null_mut`, which requires `T: Sized`.
+    pub fn null_mut() -> *mut Self {
+        0 as _
+    }
+}
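
Since `Opaque` is an extern (hence unsized) type, `<*mut _>::offset` is unavailable until the pointer is cast to a sized pointee, and the helpers above stand in for `ptr::null`/`ptr::null_mut`, which require `T: Sized`. A small usage sketch, assuming the `std::alloc` re-export and the unstable `allocator_api` feature:

```rust
#![feature(allocator_api)]

use std::alloc::Opaque;

// `Opaque` is unsized, so cast to a sized pointee before pointer arithmetic.
#[allow(dead_code)]
unsafe fn second_byte(ptr: *mut Opaque) -> *mut u8 {
    (ptr as *mut u8).offset(1)
}

fn main() {
    // Stand-in for `ptr::null_mut::<T>()`, which requires `T: Sized`.
    let p: *mut Opaque = Opaque::null_mut();
    assert!((p as *mut u8).is_null());
}
```
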
 /// Represents the combination of a starting address and
 /// a total capacity of the returned block.
 #[derive(Debug)]
-pub struct Excess(pub *mut u8, pub usize);
+pub struct Excess(pub NonNull<Opaque>, pub usize);
 fn size_align<T>() -> (usize, usize) {
     (mem::size_of::<T>(), mem::align_of::<T>())
@@ -74,9 +94,9 @@ impl Layout {
 /// must not overflow (i.e. the rounded value must be less than
 /// `usize::MAX`).
 #[inline]
-pub fn from_size_align(size: usize, align: usize) -> Option<Layout> {
+pub fn from_size_align(size: usize, align: usize) -> Result<Self, LayoutErr> {
     if !align.is_power_of_two() {
-        return None;
+        return Err(LayoutErr { private: () });
     }
     // (power-of-two implies align != 0.)
@@ -94,11 +114,11 @@ impl Layout {
     // Above implies that checking for summation overflow is both
     // necessary and sufficient.
     if size > usize::MAX - (align - 1) {
-        return None;
+        return Err(LayoutErr { private: () });
     }
     unsafe {
-        Some(Layout::from_size_align_unchecked(size, align))
+        Ok(Layout::from_size_align_unchecked(size, align))
     }
 }
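
Callers now propagate layout construction failures with `?` instead of unwrapping an `Option`. A minimal sketch against the new signature (paths as re-exported in `std::alloc`, `allocator_api` feature assumed):

```rust
#![feature(allocator_api)]

use std::alloc::{Layout, LayoutErr};

// Both failure modes (non-power-of-two alignment, size overflowing when
// rounded up to the alignment) surface as the opaque LayoutErr.
fn block_layout() -> Result<Layout, LayoutErr> {
    Layout::from_size_align(64, 16)
}

fn main() {
    assert!(block_layout().is_ok());
    assert!(Layout::from_size_align(64, 3).is_err()); // 3 is not a power of two
}
```
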
@@ -110,7 +130,7 @@ impl Layout {
 /// a power-of-two nor `size` aligned to `align` fits within the
 /// address space (i.e. the `Layout::from_size_align` preconditions).
 #[inline]
-pub unsafe fn from_size_align_unchecked(size: usize, align: usize) -> Layout {
+pub unsafe fn from_size_align_unchecked(size: usize, align: usize) -> Self {
     Layout { size: size, align: align }
 }
@@ -209,15 +229,17 @@ impl Layout {
 ///
 /// On arithmetic overflow, returns `None`.
 #[inline]
-pub fn repeat(&self, n: usize) -> Option<(Self, usize)> {
-    let padded_size = self.size.checked_add(self.padding_needed_for(self.align))?;
-    let alloc_size = padded_size.checked_mul(n)?;
+pub fn repeat(&self, n: usize) -> Result<(Self, usize), LayoutErr> {
+    let padded_size = self.size.checked_add(self.padding_needed_for(self.align))
+        .ok_or(LayoutErr { private: () })?;
+    let alloc_size = padded_size.checked_mul(n)
+        .ok_or(LayoutErr { private: () })?;
     // We can assume that `self.align` is a power-of-two.
     // Furthermore, `alloc_size` has already been rounded up
     // to a multiple of `self.align`; therefore, the call to
     // `Layout::from_size_align` below should never panic.
-    Some((Layout::from_size_align(alloc_size, self.align).unwrap(), padded_size))
+    Ok((Layout::from_size_align(alloc_size, self.align).unwrap(), padded_size))
 }
 /// Creates a layout describing the record for `self` followed by
@@ -231,17 +253,19 @@ impl Layout {
 /// (assuming that the record itself starts at offset 0).
 ///
 /// On arithmetic overflow, returns `None`.
-pub fn extend(&self, next: Self) -> Option<(Self, usize)> {
+pub fn extend(&self, next: Self) -> Result<(Self, usize), LayoutErr> {
     let new_align = cmp::max(self.align, next.align);
     let realigned = Layout::from_size_align(self.size, new_align)?;
     let pad = realigned.padding_needed_for(next.align);
-    let offset = self.size.checked_add(pad)?;
-    let new_size = offset.checked_add(next.size)?;
+    let offset = self.size.checked_add(pad)
+        .ok_or(LayoutErr { private: () })?;
+    let new_size = offset.checked_add(next.size)
+        .ok_or(LayoutErr { private: () })?;
     let layout = Layout::from_size_align(new_size, new_align)?;
-    Some((layout, offset))
+    Ok((layout, offset))
 }
 /// Creates a layout describing the record for `n` instances of
@@ -256,8 +280,8 @@ impl Layout {
 /// aligned.
 ///
 /// On arithmetic overflow, returns `None`.
-pub fn repeat_packed(&self, n: usize) -> Option<Self> {
-    let size = self.size().checked_mul(n)?;
+pub fn repeat_packed(&self, n: usize) -> Result<Self, LayoutErr> {
+    let size = self.size().checked_mul(n).ok_or(LayoutErr { private: () })?;
     Layout::from_size_align(size, self.align)
 }
@@ -276,16 +300,17 @@ impl Layout {
 /// `extend`.)
 ///
 /// On arithmetic overflow, returns `None`.
-pub fn extend_packed(&self, next: Self) -> Option<(Self, usize)> {
-    let new_size = self.size().checked_add(next.size())?;
+pub fn extend_packed(&self, next: Self) -> Result<(Self, usize), LayoutErr> {
+    let new_size = self.size().checked_add(next.size())
+        .ok_or(LayoutErr { private: () })?;
     let layout = Layout::from_size_align(new_size, self.align)?;
-    Some((layout, self.size()))
+    Ok((layout, self.size()))
 }
 /// Creates a layout describing the record for a `[T; n]`.
 ///
 /// On arithmetic overflow, returns `None`.
-pub fn array<T>(n: usize) -> Option<Self> {
+pub fn array<T>(n: usize) -> Result<Self, LayoutErr> {
     Layout::new::<T>()
         .repeat(n)
         .map(|(k, offs)| {
@@ -295,55 +320,31 @@ impl Layout {
     }
 }
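
With every combinator returning `Result`, layout computations compose as a single fallible expression. For example, the layout of a header followed by an array of elements (a sketch under the same feature assumptions as above; the `Header` type is illustrative):

```rust
#![feature(allocator_api)]

use std::alloc::{Layout, LayoutErr};

#[allow(dead_code)]
struct Header { len: usize }

// Combined layout of a Header followed by `n` u64 elements; `extend`
// also yields the byte offset at which the array part begins.
fn header_array_layout(n: usize) -> Result<(Layout, usize), LayoutErr> {
    let array = Layout::array::<u64>(n)?;
    Layout::new::<Header>().extend(array)
}

fn main() {
    let (layout, offset) = header_array_layout(4).unwrap();
    assert_eq!(offset, 8);                // u64 array starts right after Header
    assert_eq!(layout.size(), 8 + 4 * 8); // header plus 4 elements
}
```
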
+/// The parameters given to `Layout::from_size_align` do not satisfy
+/// its documented constraints.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct LayoutErr {
+    private: ()
+}
+
+// (we need this for downstream impl of trait Error)
+impl fmt::Display for LayoutErr {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        f.write_str("invalid parameters to Layout::from_size_align")
+    }
+}
 /// The `AllocErr` error specifies whether an allocation failure is
 /// specifically due to resource exhaustion or if it is due to
 /// something wrong when combining the given input arguments with this
 /// allocator.
 #[derive(Clone, PartialEq, Eq, Debug)]
-pub enum AllocErr {
-    /// Error due to hitting some resource limit or otherwise running
-    /// out of memory. This condition strongly implies that *some*
-    /// series of deallocations would allow a subsequent reissuing of
-    /// the original allocation request to succeed.
-    Exhausted { request: Layout },
-    /// Error due to allocator being fundamentally incapable of
-    /// satisfying the original request. This condition implies that
-    /// such an allocation request will never succeed on the given
-    /// allocator, regardless of environment, memory pressure, or
-    /// other contextual conditions.
-    ///
-    /// For example, an allocator that does not support requests for
-    /// large memory blocks might return this error variant.
-    Unsupported { details: &'static str },
-}
-impl AllocErr {
-    #[inline]
-    pub fn invalid_input(details: &'static str) -> Self {
-        AllocErr::Unsupported { details: details }
-    }
-    #[inline]
-    pub fn is_memory_exhausted(&self) -> bool {
-        if let AllocErr::Exhausted { .. } = *self { true } else { false }
-    }
-    #[inline]
-    pub fn is_request_unsupported(&self) -> bool {
-        if let AllocErr::Unsupported { .. } = *self { true } else { false }
-    }
-    #[inline]
-    pub fn description(&self) -> &str {
-        match *self {
-            AllocErr::Exhausted { .. } => "allocator memory exhausted",
-            AllocErr::Unsupported { .. } => "unsupported allocator request",
-        }
-    }
-}
+pub struct AllocErr;
 // (we need this for downstream impl of trait Error)
 impl fmt::Display for AllocErr {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "{}", self.description())
+        f.write_str("memory allocation failed")
     }
 }
@@ -374,13 +375,85 @@ pub enum CollectionAllocErr {
     /// (usually `isize::MAX` bytes).
     CapacityOverflow,
     /// Error due to the allocator (see the `AllocErr` type's docs).
-    AllocErr(AllocErr),
+    AllocErr,
 }
 #[unstable(feature = "try_reserve", reason = "new API", issue="48043")]
 impl From<AllocErr> for CollectionAllocErr {
-    fn from(err: AllocErr) -> Self {
-        CollectionAllocErr::AllocErr(err)
+    fn from(AllocErr: AllocErr) -> Self {
+        CollectionAllocErr::AllocErr
     }
 }
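
This `From` impl is what keeps `?` working in fallible collection code: a `Result<_, AllocErr>` converts straight into the payload-free `CollectionAllocErr::AllocErr`. A sketch of such a helper (hypothetical function, written against the unstable `Alloc` API and the `Global` allocator; the `CapacityOverflow` mapping is illustrative, not how the standard collections do it internally):

```rust
#![feature(allocator_api, try_reserve)]

use std::alloc::{Alloc, Global, Layout};
use std::collections::CollectionAllocErr;

// `?` on the alloc() result goes through the From<AllocErr> impl above.
fn allocate_block(size: usize) -> Result<(), CollectionAllocErr> {
    let layout = Layout::from_size_align(size, 8)
        .map_err(|_| CollectionAllocErr::CapacityOverflow)?;
    unsafe {
        let ptr = Global.alloc(layout.clone())?;
        Global.dealloc(ptr, layout);
    }
    Ok(())
}

fn main() {
    assert!(allocate_block(64).is_ok());
}
```
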
+/// A memory allocator that can be registered to be the one backing `std::alloc::Global`
+/// through the `#[global_allocator]` attribute.
+pub unsafe trait GlobalAlloc {
+    /// Allocate memory as described by the given `layout`.
+    ///
+    /// Returns a pointer to newly-allocated memory,
+    /// or NULL to indicate allocation failure.
+    ///
+    /// # Safety
+    ///
+    /// **FIXME:** what are the exact requirements?
+    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque;
+
+    /// Deallocate the block of memory at the given `ptr` pointer with the given `layout`.
+    ///
+    /// # Safety
+    ///
+    /// **FIXME:** what are the exact requirements?
+    /// In particular around layout *fit*. (See docs for the `Alloc` trait.)
+    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout);
+
+    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
+        let size = layout.size();
+        let ptr = self.alloc(layout);
+        if !ptr.is_null() {
+            ptr::write_bytes(ptr as *mut u8, 0, size);
+        }
+        ptr
+    }
+
+    /// Shrink or grow a block of memory to the given `new_size`.
+    /// The block is described by the given `ptr` pointer and `layout`.
+    ///
+    /// Return a new pointer (which may or may not be the same as `ptr`),
+    /// or NULL to indicate reallocation failure.
+    ///
+    /// If reallocation is successful, the old `ptr` pointer is considered
+    /// to have been deallocated.
+    ///
+    /// # Safety
+    ///
+    /// `new_size`, when rounded up to the nearest multiple of `old_layout.align()`,
+    /// must not overflow (i.e. the rounded value must be less than `usize::MAX`).
+    ///
+    /// **FIXME:** what are the exact requirements?
+    /// In particular around layout *fit*. (See docs for the `Alloc` trait.)
+    unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
+        let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
+        let new_ptr = self.alloc(new_layout);
+        if !new_ptr.is_null() {
+            ptr::copy_nonoverlapping(
+                ptr as *const u8,
+                new_ptr as *mut u8,
+                cmp::min(layout.size(), new_size),
+            );
+            self.dealloc(ptr, layout);
+        }
+        new_ptr
+    }
+
+    /// Aborts the thread or process, optionally performing
+    /// cleanup or logging diagnostic information before panicking or
+    /// aborting.
+    ///
+    /// `oom` is meant to be used by clients unable to cope with an
+    /// unsatisfied allocation request, and wish to abandon
+    /// computation rather than attempt to recover locally.
+    fn oom(&self) -> ! {
+        unsafe { ::intrinsics::abort() }
+    }
+}
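
Putting the new trait to use: a minimal sketch of a custom global allocator that forwards to `System` while counting allocations, registered with `#[global_allocator]` (unstable feature names as of this PR; the counter is illustrative):

```rust
#![feature(global_allocator, allocator_api)]

use std::alloc::{GlobalAlloc, Layout, Opaque, System};
use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};

static ALLOCATIONS: AtomicUsize = ATOMIC_USIZE_INIT;

// Forwards to the system allocator; only `alloc` and `dealloc` are
// required, the other methods have default implementations.
struct Counting;

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        ALLOCATIONS.fetch_add(1, Ordering::SeqCst);
        // A null return propagates allocation failure, per the trait contract.
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: Counting = Counting;

fn main() {
    let v = vec![1u8, 2, 3];
    drop(v);
    println!("allocations so far: {}", ALLOCATIONS.load(Ordering::SeqCst));
}
```
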
@@ -515,7 +588,7 @@ pub unsafe trait Alloc {
 /// Clients wishing to abort computation in response to an
 /// allocation error are encouraged to call the allocator's `oom`
 /// method, rather than directly invoking `panic!` or similar.
-unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
+unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr>;
 /// Deallocate the memory referenced by `ptr`.
 ///
@@ -532,7 +605,7 @@ pub unsafe trait Alloc {
 /// * In addition to fitting the block of memory `layout`, the
 ///   alignment of the `layout` must match the alignment used
 ///   to allocate that block of memory.
-unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
+unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout);
 /// Allocator-specific method for signaling an out-of-memory
 /// condition.
@@ -542,12 +615,8 @@ pub unsafe trait Alloc {
 /// aborting.
 ///
 /// `oom` is meant to be used by clients unable to cope with an
-/// unsatisfied allocation request (signaled by an error such as
-/// `AllocErr::Exhausted`), and wish to abandon computation rather
-/// than attempt to recover locally. Such clients should pass the
-/// signaling error value back into `oom`, where the allocator
-/// may incorporate that error value into its diagnostic report
-/// before aborting.
+/// unsatisfied allocation request, and wish to abandon
+/// computation rather than attempt to recover locally.
 ///
 /// Implementations of the `oom` method are discouraged from
 /// infinitely regressing in nested calls to `oom`. In
@@ -560,7 +629,7 @@ pub unsafe trait Alloc {
 /// instead they should return an appropriate error from the
 /// invoked method, and let the client decide whether to invoke
 /// this `oom` method in response.
-fn oom(&mut self, _: AllocErr) -> ! {
+fn oom(&mut self) -> ! {
     unsafe { ::intrinsics::abort() }
 }
@@ -602,9 +671,10 @@ pub unsafe trait Alloc {
 // realloc. alloc_excess, realloc_excess
 /// Returns a pointer suitable for holding data described by
-/// `new_layout`, meeting its size and alignment guarantees. To
+/// a new layout with `layout`'s alignment and a size given
+/// by `new_size`. To
 /// accomplish this, this may extend or shrink the allocation
-/// referenced by `ptr` to fit `new_layout`.
+/// referenced by `ptr` to fit the new layout.
 ///
 /// If this returns `Ok`, then ownership of the memory block
 /// referenced by `ptr` has been transferred to this
@@ -617,12 +687,6 @@ pub unsafe trait Alloc {
 /// block has not been transferred to this allocator, and the
 /// contents of the memory block are unaltered.
 ///
-/// For best results, `new_layout` should not impose a different
-/// alignment constraint than `layout`. (In other words,
-/// `new_layout.align()` should equal `layout.align()`.) However,
-/// behavior is well-defined (though underspecified) when this
-/// constraint is violated; further discussion below.
-///
 /// # Safety
 ///
 /// This function is unsafe because undefined behavior can result
@@ -630,12 +694,13 @@ pub unsafe trait Alloc {
 ///
 /// * `ptr` must be currently allocated via this allocator,
 ///
-/// * `layout` must *fit* the `ptr` (see above). (The `new_layout`
+/// * `layout` must *fit* the `ptr` (see above). (The `new_size`
 ///   argument need not fit it.)
 ///
-/// * `new_layout` must have size greater than zero.
+/// * `new_size` must be greater than zero.
 ///
-/// * the alignment of `new_layout` is non-zero.
+/// * `new_size`, when rounded up to the nearest multiple of `layout.align()`,
+///   must not overflow (i.e. the rounded value must be less than `usize::MAX`).
 ///
 /// (Extension subtraits might provide more specific bounds on
 /// behavior, e.g. guarantee a sentinel address or a null pointer
@@ -643,18 +708,11 @@ pub unsafe trait Alloc {
 ///
 /// # Errors
 ///
-/// Returns `Err` only if `new_layout` does not match the
-/// alignment of `layout`, or does not meet the allocator's size
+/// Returns `Err` only if the new layout
+/// does not meet the allocator's size
 /// and alignment constraints of the allocator, or if reallocation
 /// otherwise fails.
 ///
-/// (Note the previous sentence did not say "if and only if" -- in
-/// particular, an implementation of this method *can* return `Ok`
-/// if `new_layout.align() != old_layout.align()`; or it can
-/// return `Err` in that scenario, depending on whether this
-/// allocator can dynamically adjust the alignment constraint for
-/// the block.)
-///
 /// Implementations are encouraged to return `Err` on memory
 /// exhaustion rather than panicking or aborting, but this is not
 /// a strict requirement. (Specifically: it is *legal* to
@@ -665,27 +723,28 @@ pub unsafe trait Alloc {
 /// reallocation error are encouraged to call the allocator's `oom`
 /// method, rather than directly invoking `panic!` or similar.
 unsafe fn realloc(&mut self,
-                  ptr: *mut u8,
+                  ptr: NonNull<Opaque>,
                   layout: Layout,
-                  new_layout: Layout) -> Result<*mut u8, AllocErr> {
-    let new_size = new_layout.size();
+                  new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
     let old_size = layout.size();
-    let aligns_match = layout.align == new_layout.align;
-    if new_size >= old_size && aligns_match {
-        if let Ok(()) = self.grow_in_place(ptr, layout.clone(), new_layout.clone()) {
+    if new_size >= old_size {
+        if let Ok(()) = self.grow_in_place(ptr, layout.clone(), new_size) {
             return Ok(ptr);
         }
-    } else if new_size < old_size && aligns_match {
-        if let Ok(()) = self.shrink_in_place(ptr, layout.clone(), new_layout.clone()) {
+    } else if new_size < old_size {
+        if let Ok(()) = self.shrink_in_place(ptr, layout.clone(), new_size) {
             return Ok(ptr);
         }
     }
     // otherwise, fall back on alloc + copy + dealloc.
+    let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
     let result = self.alloc(new_layout);
     if let Ok(new_ptr) = result {
-        ptr::copy_nonoverlapping(ptr as *const u8, new_ptr, cmp::min(old_size, new_size));
+        ptr::copy_nonoverlapping(ptr.as_ptr() as *const u8,
+                                 new_ptr.as_ptr() as *mut u8,
+                                 cmp::min(old_size, new_size));
         self.dealloc(ptr, layout);
     }
     result
@@ -707,11 +766,11 @@ pub unsafe trait Alloc {
 /// Clients wishing to abort computation in response to an
 /// allocation error are encouraged to call the allocator's `oom`
 /// method, rather than directly invoking `panic!` or similar.
-unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
     let size = layout.size();
     let p = self.alloc(layout);
     if let Ok(p) = p {
-        ptr::write_bytes(p, 0, size);
+        ptr::write_bytes(p.as_ptr() as *mut u8, 0, size);
     }
     p
 }
@@ -756,19 +815,21 @@ pub unsafe trait Alloc {
 /// reallocation error are encouraged to call the allocator's `oom`
 /// method, rather than directly invoking `panic!` or similar.
 unsafe fn realloc_excess(&mut self,
-                         ptr: *mut u8,
+                         ptr: NonNull<Opaque>,
                          layout: Layout,
-                         new_layout: Layout) -> Result<Excess, AllocErr> {
+                         new_size: usize) -> Result<Excess, AllocErr> {
+    let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
     let usable_size = self.usable_size(&new_layout);
-    self.realloc(ptr, layout, new_layout)
+    self.realloc(ptr, layout, new_size)
         .map(|p| Excess(p, usable_size.1))
 }
-/// Attempts to extend the allocation referenced by `ptr` to fit `new_layout`.
+/// Attempts to extend the allocation referenced by `ptr` to fit `new_size`.
 ///
 /// If this returns `Ok`, then the allocator has asserted that the
-/// memory block referenced by `ptr` now fits `new_layout`, and thus can
-/// be used to carry data of that layout. (The allocator is allowed to
+/// memory block referenced by `ptr` now fits `new_size`, and thus can
+/// be used to carry data of a layout of that size and same alignment as
+/// `layout`. (The allocator is allowed to
 /// expend effort to accomplish this, such as extending the memory block to
 /// include successor blocks, or virtual memory tricks.)
 ///
@@ -784,11 +845,9 @@ pub unsafe trait Alloc {
 /// * `ptr` must be currently allocated via this allocator,
 ///
 /// * `layout` must *fit* the `ptr` (see above); note the
-///   `new_layout` argument need not fit it,
+///   `new_size` argument need not fit it,
 ///
-/// * `new_layout.size()` must not be less than `layout.size()`,
-///
-/// * `new_layout.align()` must equal `layout.align()`.
+/// * `new_size` must not be less than `layout.size()`,
 ///
 /// # Errors
 ///
@@ -801,26 +860,25 @@ pub unsafe trait Alloc {
 /// `grow_in_place` failures without aborting, or to fall back on
 /// another reallocation method before resorting to an abort.
 unsafe fn grow_in_place(&mut self,
-                        ptr: *mut u8,
+                        ptr: NonNull<Opaque>,
                         layout: Layout,
-                        new_layout: Layout) -> Result<(), CannotReallocInPlace> {
+                        new_size: usize) -> Result<(), CannotReallocInPlace> {
     let _ = ptr; // this default implementation doesn't care about the actual address.
-    debug_assert!(new_layout.size >= layout.size);
-    debug_assert!(new_layout.align == layout.align);
+    debug_assert!(new_size >= layout.size);
     let (_l, u) = self.usable_size(&layout);
     // _l <= layout.size()                       [guaranteed by usable_size()]
     // layout.size() <= new_layout.size()        [required by this method]
-    if new_layout.size <= u {
+    if new_size <= u {
         return Ok(());
     } else {
         return Err(CannotReallocInPlace);
     }
 }
-/// Attempts to shrink the allocation referenced by `ptr` to fit `new_layout`.
+/// Attempts to shrink the allocation referenced by `ptr` to fit `new_size`.
 ///
 /// If this returns `Ok`, then the allocator has asserted that the
-/// memory block referenced by `ptr` now fits `new_layout`, and
+/// memory block referenced by `ptr` now fits `new_size`, and
 /// thus can only be used to carry data of that smaller
 /// layout. (The allocator is allowed to take advantage of this,
 /// carving off portions of the block for reuse elsewhere.) The
@@ -841,13 +899,11 @@ pub unsafe trait Alloc {
 /// * `ptr` must be currently allocated via this allocator,
 ///
 /// * `layout` must *fit* the `ptr` (see above); note the
-///   `new_layout` argument need not fit it,
+///   `new_size` argument need not fit it,
 ///
-/// * `new_layout.size()` must not be greater than `layout.size()`
+/// * `new_size` must not be greater than `layout.size()`
 ///   (and must be greater than zero),
 ///
-/// * `new_layout.align()` must equal `layout.align()`.
-///
 /// # Errors
 ///
 /// Returns `Err(CannotReallocInPlace)` when the allocator is
@@ -859,16 +915,15 @@ pub unsafe trait Alloc {
 /// `shrink_in_place` failures without aborting, or to fall back
 /// on another reallocation method before resorting to an abort.
 unsafe fn shrink_in_place(&mut self,
-                          ptr: *mut u8,
+                          ptr: NonNull<Opaque>,
                           layout: Layout,
-                          new_layout: Layout) -> Result<(), CannotReallocInPlace> {
+                          new_size: usize) -> Result<(), CannotReallocInPlace> {
     let _ = ptr; // this default implementation doesn't care about the actual address.
-    debug_assert!(new_layout.size <= layout.size);
-    debug_assert!(new_layout.align == layout.align);
+    debug_assert!(new_size <= layout.size);
     let (l, _u) = self.usable_size(&layout);
     // layout.size() <= _u                       [guaranteed by usable_size()]
     // new_layout.size() <= layout.size()        [required by this method]
-    if l <= new_layout.size {
+    if l <= new_size {
         return Ok(());
     } else {
         return Err(CannotReallocInPlace);
@ -911,9 +966,9 @@ pub unsafe trait Alloc {
{ {
let k = Layout::new::<T>(); let k = Layout::new::<T>();
if k.size() > 0 { if k.size() > 0 {
unsafe { self.alloc(k).map(|p| NonNull::new_unchecked(p as *mut T)) } unsafe { self.alloc(k).map(|p| p.cast()) }
} else { } else {
Err(AllocErr::invalid_input("zero-sized type invalid for alloc_one")) Err(AllocErr)
} }
} }
@ -937,10 +992,9 @@ pub unsafe trait Alloc {
unsafe fn dealloc_one<T>(&mut self, ptr: NonNull<T>) unsafe fn dealloc_one<T>(&mut self, ptr: NonNull<T>)
where Self: Sized where Self: Sized
{ {
let raw_ptr = ptr.as_ptr() as *mut u8;
let k = Layout::new::<T>(); let k = Layout::new::<T>();
if k.size() > 0 { if k.size() > 0 {
self.dealloc(raw_ptr, k); self.dealloc(ptr.as_opaque(), k);
} }
} }
@ -978,15 +1032,12 @@ pub unsafe trait Alloc {
where Self: Sized where Self: Sized
{ {
match Layout::array::<T>(n) { match Layout::array::<T>(n) {
Some(ref layout) if layout.size() > 0 => { Ok(ref layout) if layout.size() > 0 => {
unsafe { unsafe {
self.alloc(layout.clone()) self.alloc(layout.clone()).map(|p| p.cast())
.map(|p| {
NonNull::new_unchecked(p as *mut T)
})
} }
} }
_ => Err(AllocErr::invalid_input("invalid layout for alloc_array")), _ => Err(AllocErr),
} }
} }
@ -1028,13 +1079,13 @@ pub unsafe trait Alloc {
n_new: usize) -> Result<NonNull<T>, AllocErr> n_new: usize) -> Result<NonNull<T>, AllocErr>
where Self: Sized where Self: Sized
{ {
match (Layout::array::<T>(n_old), Layout::array::<T>(n_new), ptr.as_ptr()) { match (Layout::array::<T>(n_old), Layout::array::<T>(n_new)) {
(Some(ref k_old), Some(ref k_new), ptr) if k_old.size() > 0 && k_new.size() > 0 => { (Ok(ref k_old), Ok(ref k_new)) if k_old.size() > 0 && k_new.size() > 0 => {
self.realloc(ptr as *mut u8, k_old.clone(), k_new.clone()) debug_assert!(k_old.align() == k_new.align());
.map(|p| NonNull::new_unchecked(p as *mut T)) self.realloc(ptr.as_opaque(), k_old.clone(), k_new.size()).map(NonNull::cast)
} }
_ => { _ => {
Err(AllocErr::invalid_input("invalid layout for realloc_array")) Err(AllocErr)
} }
} }
} }
@ -1062,13 +1113,12 @@ pub unsafe trait Alloc {
unsafe fn dealloc_array<T>(&mut self, ptr: NonNull<T>, n: usize) -> Result<(), AllocErr> unsafe fn dealloc_array<T>(&mut self, ptr: NonNull<T>, n: usize) -> Result<(), AllocErr>
where Self: Sized where Self: Sized
{ {
let raw_ptr = ptr.as_ptr() as *mut u8;
match Layout::array::<T>(n) { match Layout::array::<T>(n) {
Some(ref k) if k.size() > 0 => { Ok(ref k) if k.size() > 0 => {
Ok(self.dealloc(raw_ptr, k.clone())) Ok(self.dealloc(ptr.as_opaque(), k.clone()))
} }
_ => { _ => {
Err(AllocErr::invalid_input("invalid layout for dealloc_array")) Err(AllocErr)
} }
} }
} }
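For orientation, here is a minimal sketch (not part of the diff) of calling the reworked convenience methods through the renamed `Global` handle, assuming a nightly with this PR's `allocator_api` feature:

```rust
#![feature(allocator_api)]

use std::alloc::{Alloc, Global};

fn main() {
    unsafe {
        // `alloc_one` now returns Result<NonNull<T>, AllocErr>, and the
        // zero-size AllocErr carries no message to destructure.
        let p = Global.alloc_one::<i32>().unwrap_or_else(|_| Global.oom());
        *p.as_ptr() = 42;
        assert_eq!(*p.as_ptr(), 42);
        // `dealloc_one` takes the typed NonNull<T> and converts to
        // NonNull<Opaque> internally via `as_opaque`.
        Global.dealloc_one(p);
    }
}
```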


@@ -75,6 +75,7 @@
 #![feature(custom_attribute)]
 #![feature(doc_cfg)]
 #![feature(doc_spotlight)]
+#![feature(extern_types)]
 #![feature(fn_must_use)]
 #![feature(fundamental)]
 #![feature(intrinsics)]
@@ -184,7 +185,14 @@ pub mod unicode;
 /* Heap memory allocator trait */
 #[allow(missing_docs)]
-pub mod heap;
+pub mod alloc;
+#[unstable(feature = "allocator_api", issue = "32838")]
+#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
+/// Use the `alloc` module instead.
+pub mod heap {
+    pub use alloc::*;
+}
 // note: does not need to be public
 mod iter_private;
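During the deprecation window both paths resolve to the same items; a sketch (assuming a nightly with this change, and ignoring the deprecation lint):

```rust
#![feature(allocator_api)]
#![allow(deprecated)]

extern crate core;

use core::alloc::Layout;          // new canonical path
use core::heap::Layout as Legacy; // deprecated `pub use` reexport

fn identity(l: Layout) -> Legacy {
    l // same type: the reexport aliases, it does not duplicate
}

fn main() {
    assert_eq!(identity(Layout::new::<u32>()).size(), 4);
}
```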


@@ -2750,6 +2750,14 @@ impl<T: ?Sized> NonNull<T> {
             NonNull::new_unchecked(self.as_ptr() as *mut U)
         }
     }
+    /// Cast to an `Opaque` pointer
+    #[unstable(feature = "allocator_api", issue = "32838")]
+    pub fn as_opaque(self) -> NonNull<::alloc::Opaque> {
+        unsafe {
+            NonNull::new_unchecked(self.as_ptr() as _)
+        }
+    }
 }
 #[stable(feature = "nonnull", since = "1.25.0")]


@@ -11,7 +11,7 @@
 use rustc::middle::allocator::AllocatorKind;
 use rustc_errors;
 use syntax::abi::Abi;
-use syntax::ast::{Crate, Attribute, LitKind, StrStyle, ExprKind};
+use syntax::ast::{Crate, Attribute, LitKind, StrStyle};
 use syntax::ast::{Unsafety, Constness, Generics, Mutability, Ty, Mac, Arg};
 use syntax::ast::{self, Ident, Item, ItemKind, TyKind, VisibilityKind, Expr};
 use syntax::attr;
@@ -88,7 +88,7 @@ impl<'a> Folder for ExpandAllocatorDirectives<'a> {
             span,
             kind: AllocatorKind::Global,
             global: item.ident,
-            alloc: Ident::from_str("alloc"),
+            core: Ident::from_str("core"),
             cx: ExtCtxt::new(self.sess, ecfg, self.resolver),
         };
         let super_path = f.cx.path(f.span, vec![
@@ -96,7 +96,7 @@ impl<'a> Folder for ExpandAllocatorDirectives<'a> {
             f.global,
         ]);
         let mut items = vec![
-            f.cx.item_extern_crate(f.span, f.alloc),
+            f.cx.item_extern_crate(f.span, f.core),
             f.cx.item_use_simple(
                 f.span,
                 respan(f.span.shrink_to_lo(), VisibilityKind::Inherited),
@@ -126,7 +126,7 @@ struct AllocFnFactory<'a> {
     span: Span,
     kind: AllocatorKind,
     global: Ident,
-    alloc: Ident,
+    core: Ident,
     cx: ExtCtxt<'a>,
 }
@@ -143,8 +143,7 @@ impl<'a> AllocFnFactory<'a> {
             self.arg_ty(ty, &mut abi_args, mk)
         }).collect();
         let result = self.call_allocator(method.name, args);
-        let (output_ty, output_expr) =
-            self.ret_ty(&method.output, &mut abi_args, mk, result);
+        let (output_ty, output_expr) = self.ret_ty(&method.output, result);
         let kind = ItemKind::Fn(self.cx.fn_decl(abi_args, ast::FunctionRetTy::Ty(output_ty)),
                                 Unsafety::Unsafe,
                                 dummy_spanned(Constness::NotConst),
@@ -159,16 +158,15 @@ impl<'a> AllocFnFactory<'a> {
     fn call_allocator(&self, method: &str, mut args: Vec<P<Expr>>) -> P<Expr> {
         let method = self.cx.path(self.span, vec![
-            self.alloc,
-            Ident::from_str("heap"),
-            Ident::from_str("Alloc"),
+            self.core,
+            Ident::from_str("alloc"),
+            Ident::from_str("GlobalAlloc"),
             Ident::from_str(method),
         ]);
         let method = self.cx.expr_path(method);
         let allocator = self.cx.path_ident(self.span, self.global);
         let allocator = self.cx.expr_path(allocator);
         let allocator = self.cx.expr_addr_of(self.span, allocator);
-        let allocator = self.cx.expr_mut_addr_of(self.span, allocator);
         args.insert(0, allocator);
         self.cx.expr_call(self.span, method, args)
@@ -205,8 +203,8 @@ impl<'a> AllocFnFactory<'a> {
                 args.push(self.cx.arg(self.span, align, ty_usize));
                 let layout_new = self.cx.path(self.span, vec![
-                    self.alloc,
-                    Ident::from_str("heap"),
+                    self.core,
+                    Ident::from_str("alloc"),
                     Ident::from_str("Layout"),
                     Ident::from_str("from_size_align_unchecked"),
                 ]);
@@ -219,240 +217,38 @@ impl<'a> AllocFnFactory<'a> {
                 layout
             }
-            AllocatorTy::LayoutRef => {
-                let ident = ident();
-                args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
-                // Convert our `arg: *const u8` via:
-                //
-                // &*(arg as *const Layout)
-                let expr = self.cx.expr_ident(self.span, ident);
-                let expr = self.cx.expr_cast(self.span, expr, self.layout_ptr());
-                let expr = self.cx.expr_deref(self.span, expr);
-                self.cx.expr_addr_of(self.span, expr)
-            }
-            AllocatorTy::AllocErr => {
-                // We're creating:
-                //
-                // (*(arg as *const AllocErr)).clone()
-                let ident = ident();
-                args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
-                let expr = self.cx.expr_ident(self.span, ident);
-                let expr = self.cx.expr_cast(self.span, expr, self.alloc_err_ptr());
-                let expr = self.cx.expr_deref(self.span, expr);
-                self.cx.expr_method_call(
-                    self.span,
-                    expr,
-                    Ident::from_str("clone"),
-                    Vec::new()
-                )
-            }
             AllocatorTy::Ptr => {
                 let ident = ident();
                 args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
+                let arg = self.cx.expr_ident(self.span, ident);
+                self.cx.expr_cast(self.span, arg, self.ptr_opaque())
+            }
+            AllocatorTy::Usize => {
+                let ident = ident();
+                args.push(self.cx.arg(self.span, ident, self.usize()));
                 self.cx.expr_ident(self.span, ident)
             }
             AllocatorTy::ResultPtr |
-            AllocatorTy::ResultExcess |
-            AllocatorTy::ResultUnit |
             AllocatorTy::Bang |
-            AllocatorTy::UsizePair |
             AllocatorTy::Unit => {
                 panic!("can't convert AllocatorTy to an argument")
             }
         }
     }
-    fn ret_ty(&self,
-              ty: &AllocatorTy,
-              args: &mut Vec<Arg>,
-              ident: &mut FnMut() -> Ident,
-              expr: P<Expr>) -> (P<Ty>, P<Expr>)
-    {
+    fn ret_ty(&self, ty: &AllocatorTy, expr: P<Expr>) -> (P<Ty>, P<Expr>) {
         match *ty {
-            AllocatorTy::UsizePair => {
-                // We're creating:
-                //
-                // let arg = #expr;
-                // *min = arg.0;
-                // *max = arg.1;
-                let min = ident();
-                let max = ident();
-                args.push(self.cx.arg(self.span, min, self.ptr_usize()));
-                args.push(self.cx.arg(self.span, max, self.ptr_usize()));
-                let ident = ident();
-                let stmt = self.cx.stmt_let(self.span, false, ident, expr);
-                let min = self.cx.expr_ident(self.span, min);
-                let max = self.cx.expr_ident(self.span, max);
-                let layout = self.cx.expr_ident(self.span, ident);
-                let assign_min = self.cx.expr(self.span, ExprKind::Assign(
-                    self.cx.expr_deref(self.span, min),
-                    self.cx.expr_tup_field_access(self.span, layout.clone(), 0),
-                ));
-                let assign_min = self.cx.stmt_semi(assign_min);
-                let assign_max = self.cx.expr(self.span, ExprKind::Assign(
-                    self.cx.expr_deref(self.span, max),
-                    self.cx.expr_tup_field_access(self.span, layout.clone(), 1),
-                ));
-                let assign_max = self.cx.stmt_semi(assign_max);
-                let stmts = vec![stmt, assign_min, assign_max];
-                let block = self.cx.block(self.span, stmts);
-                let ty_unit = self.cx.ty(self.span, TyKind::Tup(Vec::new()));
-                (ty_unit, self.cx.expr_block(block))
-            }
-            AllocatorTy::ResultExcess => {
-                // We're creating:
-                //
-                // match #expr {
-                //     Ok(ptr) => {
-                //         *excess = ptr.1;
-                //         ptr.0
-                //     }
-                //     Err(e) => {
-                //         ptr::write(err_ptr, e);
-                //         0 as *mut u8
-                //     }
-                // }
-                let excess_ptr = ident();
-                args.push(self.cx.arg(self.span, excess_ptr, self.ptr_usize()));
-                let excess_ptr = self.cx.expr_ident(self.span, excess_ptr);
-                let err_ptr = ident();
-                args.push(self.cx.arg(self.span, err_ptr, self.ptr_u8()));
-                let err_ptr = self.cx.expr_ident(self.span, err_ptr);
-                let err_ptr = self.cx.expr_cast(self.span,
-                                                err_ptr,
-                                                self.alloc_err_ptr());
-                let name = ident();
-                let ok_expr = {
-                    let ptr = self.cx.expr_ident(self.span, name);
-                    let write = self.cx.expr(self.span, ExprKind::Assign(
-                        self.cx.expr_deref(self.span, excess_ptr),
-                        self.cx.expr_tup_field_access(self.span, ptr.clone(), 1),
-                    ));
-                    let write = self.cx.stmt_semi(write);
-                    let ret = self.cx.expr_tup_field_access(self.span,
-                                                            ptr.clone(),
-                                                            0);
-                    let ret = self.cx.stmt_expr(ret);
-                    let block = self.cx.block(self.span, vec![write, ret]);
-                    self.cx.expr_block(block)
-                };
-                let pat = self.cx.pat_ident(self.span, name);
-                let ok = self.cx.path_ident(self.span, Ident::from_str("Ok"));
-                let ok = self.cx.pat_tuple_struct(self.span, ok, vec![pat]);
-                let ok = self.cx.arm(self.span, vec![ok], ok_expr);
-                let name = ident();
-                let err_expr = {
-                    let err = self.cx.expr_ident(self.span, name);
-                    let write = self.cx.path(self.span, vec![
-                        self.alloc,
-                        Ident::from_str("heap"),
-                        Ident::from_str("__core"),
-                        Ident::from_str("ptr"),
-                        Ident::from_str("write"),
-                    ]);
-                    let write = self.cx.expr_path(write);
-                    let write = self.cx.expr_call(self.span, write,
-                                                  vec![err_ptr, err]);
-                    let write = self.cx.stmt_semi(write);
-                    let null = self.cx.expr_usize(self.span, 0);
-                    let null = self.cx.expr_cast(self.span, null, self.ptr_u8());
-                    let null = self.cx.stmt_expr(null);
-                    let block = self.cx.block(self.span, vec![write, null]);
-                    self.cx.expr_block(block)
-                };
-                let pat = self.cx.pat_ident(self.span, name);
-                let err = self.cx.path_ident(self.span, Ident::from_str("Err"));
-                let err = self.cx.pat_tuple_struct(self.span, err, vec![pat]);
-                let err = self.cx.arm(self.span, vec![err], err_expr);
-                let expr = self.cx.expr_match(self.span, expr, vec![ok, err]);
-                (self.ptr_u8(), expr)
-            }
             AllocatorTy::ResultPtr => {
                 // We're creating:
                 //
-                // match #expr {
-                //     Ok(ptr) => ptr,
-                //     Err(e) => {
-                //         ptr::write(err_ptr, e);
-                //         0 as *mut u8
-                //     }
-                // }
-                let err_ptr = ident();
-                args.push(self.cx.arg(self.span, err_ptr, self.ptr_u8()));
-                let err_ptr = self.cx.expr_ident(self.span, err_ptr);
-                let err_ptr = self.cx.expr_cast(self.span,
-                                                err_ptr,
-                                                self.alloc_err_ptr());
-                let name = ident();
-                let ok_expr = self.cx.expr_ident(self.span, name);
-                let pat = self.cx.pat_ident(self.span, name);
-                let ok = self.cx.path_ident(self.span, Ident::from_str("Ok"));
-                let ok = self.cx.pat_tuple_struct(self.span, ok, vec![pat]);
-                let ok = self.cx.arm(self.span, vec![ok], ok_expr);
-                let name = ident();
-                let err_expr = {
-                    let err = self.cx.expr_ident(self.span, name);
-                    let write = self.cx.path(self.span, vec![
-                        self.alloc,
-                        Ident::from_str("heap"),
-                        Ident::from_str("__core"),
-                        Ident::from_str("ptr"),
-                        Ident::from_str("write"),
-                    ]);
-                    let write = self.cx.expr_path(write);
-                    let write = self.cx.expr_call(self.span, write,
-                                                  vec![err_ptr, err]);
-                    let write = self.cx.stmt_semi(write);
-                    let null = self.cx.expr_usize(self.span, 0);
-                    let null = self.cx.expr_cast(self.span, null, self.ptr_u8());
-                    let null = self.cx.stmt_expr(null);
-                    let block = self.cx.block(self.span, vec![write, null]);
-                    self.cx.expr_block(block)
-                };
-                let pat = self.cx.pat_ident(self.span, name);
-                let err = self.cx.path_ident(self.span, Ident::from_str("Err"));
-                let err = self.cx.pat_tuple_struct(self.span, err, vec![pat]);
-                let err = self.cx.arm(self.span, vec![err], err_expr);
-                let expr = self.cx.expr_match(self.span, expr, vec![ok, err]);
+                // #expr as *mut u8
+                let expr = self.cx.expr_cast(self.span, expr, self.ptr_u8());
                 (self.ptr_u8(), expr)
             }
-            AllocatorTy::ResultUnit => {
-                // We're creating:
-                //
-                // #expr.is_ok() as u8
-                let cast = self.cx.expr_method_call(
-                    self.span,
-                    expr,
-                    Ident::from_str("is_ok"),
-                    Vec::new()
-                );
-                let u8 = self.cx.path_ident(self.span, Ident::from_str("u8"));
-                let u8 = self.cx.ty_path(u8);
-                let cast = self.cx.expr_cast(self.span, cast, u8.clone());
-                (u8, cast)
-            }
             AllocatorTy::Bang => {
                 (self.cx.ty(self.span, TyKind::Never), expr)
             }
@@ -461,44 +257,32 @@ impl<'a> AllocFnFactory<'a> {
                 (self.cx.ty(self.span, TyKind::Tup(Vec::new())), expr)
             }
-            AllocatorTy::AllocErr |
             AllocatorTy::Layout |
-            AllocatorTy::LayoutRef |
+            AllocatorTy::Usize |
             AllocatorTy::Ptr => {
                 panic!("can't convert AllocatorTy to an output")
             }
         }
     }
+    fn usize(&self) -> P<Ty> {
+        let usize = self.cx.path_ident(self.span, Ident::from_str("usize"));
+        self.cx.ty_path(usize)
+    }
     fn ptr_u8(&self) -> P<Ty> {
         let u8 = self.cx.path_ident(self.span, Ident::from_str("u8"));
         let ty_u8 = self.cx.ty_path(u8);
         self.cx.ty_ptr(self.span, ty_u8, Mutability::Mutable)
     }
-    fn ptr_usize(&self) -> P<Ty> {
-        let usize = self.cx.path_ident(self.span, Ident::from_str("usize"));
-        let ty_usize = self.cx.ty_path(usize);
-        self.cx.ty_ptr(self.span, ty_usize, Mutability::Mutable)
-    }
-    fn layout_ptr(&self) -> P<Ty> {
-        let layout = self.cx.path(self.span, vec![
-            self.alloc,
-            Ident::from_str("heap"),
-            Ident::from_str("Layout"),
-        ]);
-        let layout = self.cx.ty_path(layout);
-        self.cx.ty_ptr(self.span, layout, Mutability::Mutable)
-    }
-    fn alloc_err_ptr(&self) -> P<Ty> {
-        let err = self.cx.path(self.span, vec![
-            self.alloc,
-            Ident::from_str("heap"),
-            Ident::from_str("AllocErr"),
-        ]);
-        let err = self.cx.ty_path(err);
-        self.cx.ty_ptr(self.span, err, Mutability::Mutable)
+    fn ptr_opaque(&self) -> P<Ty> {
+        let opaque = self.cx.path(self.span, vec![
+            self.core,
+            Ident::from_str("alloc"),
+            Ident::from_str("Opaque"),
+        ]);
+        let ty_opaque = self.cx.ty_path(opaque);
+        self.cx.ty_ptr(self.span, ty_opaque, Mutability::Mutable)
     }
 }


@@ -25,7 +25,7 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
     },
     AllocatorMethod {
         name: "oom",
-        inputs: &[AllocatorTy::AllocErr],
+        inputs: &[],
         output: AllocatorTy::Bang,
     },
     AllocatorMethod {
@@ -33,14 +33,9 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
         inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout],
         output: AllocatorTy::Unit,
     },
-    AllocatorMethod {
-        name: "usable_size",
-        inputs: &[AllocatorTy::LayoutRef],
-        output: AllocatorTy::UsizePair,
-    },
     AllocatorMethod {
         name: "realloc",
-        inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
+        inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Usize],
         output: AllocatorTy::ResultPtr,
     },
     AllocatorMethod {
@@ -48,26 +43,6 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
         inputs: &[AllocatorTy::Layout],
         output: AllocatorTy::ResultPtr,
     },
-    AllocatorMethod {
-        name: "alloc_excess",
-        inputs: &[AllocatorTy::Layout],
-        output: AllocatorTy::ResultExcess,
-    },
-    AllocatorMethod {
-        name: "realloc_excess",
-        inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
-        output: AllocatorTy::ResultExcess,
-    },
-    AllocatorMethod {
-        name: "grow_in_place",
-        inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
-        output: AllocatorTy::ResultUnit,
-    },
-    AllocatorMethod {
-        name: "shrink_in_place",
-        inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
-        output: AllocatorTy::ResultUnit,
-    },
 ];
 pub struct AllocatorMethod {
@@ -77,14 +52,10 @@ pub struct AllocatorMethod {
 }
 pub enum AllocatorTy {
-    AllocErr,
     Bang,
     Layout,
-    LayoutRef,
     Ptr,
-    ResultExcess,
     ResultPtr,
-    ResultUnit,
     Unit,
-    UsizePair,
+    Usize,
 }
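To make the flattened ABI concrete, here is a hand-written sketch (hypothetical symbol name) of the shape this table implies for the generated `realloc` shim; the real `__rdl_realloc` in src/libstd/alloc.rs below has exactly this signature:

```rust
#![feature(allocator_api)]

use std::alloc::{GlobalAlloc, Layout, Opaque, System};

// `Ptr` lowers to *mut u8, `Layout` flattens to (size, align),
// `Usize` passes through, and `ResultPtr` comes back as a bare
// pointer with null meaning failure.
#[no_mangle]
pub unsafe extern fn example_realloc(ptr: *mut u8,
                                     old_size: usize,
                                     align: usize,
                                     new_size: usize) -> *mut u8 {
    let old_layout = Layout::from_size_align_unchecked(old_size, align);
    System.realloc(ptr as *mut Opaque, old_layout, new_size) as *mut u8
}
```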


@@ -30,7 +30,6 @@ pub(crate) unsafe fn trans(tcx: TyCtxt, mods: &ModuleLlvm, kind: AllocatorKind)
     };
     let i8 = llvm::LLVMInt8TypeInContext(llcx);
     let i8p = llvm::LLVMPointerType(i8, 0);
-    let usizep = llvm::LLVMPointerType(usize, 0);
     let void = llvm::LLVMVoidTypeInContext(llcx);
     for method in ALLOCATOR_METHODS {
@@ -41,40 +40,21 @@ pub(crate) unsafe fn trans(tcx: TyCtxt, mods: &ModuleLlvm, kind: AllocatorKind)
                     args.push(usize); // size
                     args.push(usize); // align
                 }
-                AllocatorTy::LayoutRef => args.push(i8p),
                 AllocatorTy::Ptr => args.push(i8p),
-                AllocatorTy::AllocErr => args.push(i8p),
+                AllocatorTy::Usize => args.push(usize),
                 AllocatorTy::Bang |
-                AllocatorTy::ResultExcess |
                 AllocatorTy::ResultPtr |
-                AllocatorTy::ResultUnit |
-                AllocatorTy::UsizePair |
                 AllocatorTy::Unit => panic!("invalid allocator arg"),
             }
         }
         let output = match method.output {
-            AllocatorTy::UsizePair => {
-                args.push(usizep); // min
-                args.push(usizep); // max
-                None
-            }
             AllocatorTy::Bang => None,
-            AllocatorTy::ResultExcess => {
-                args.push(i8p); // excess_ptr
-                args.push(i8p); // err_ptr
-                Some(i8p)
-            }
-            AllocatorTy::ResultPtr => {
-                args.push(i8p); // err_ptr
-                Some(i8p)
-            }
-            AllocatorTy::ResultUnit => Some(i8),
+            AllocatorTy::ResultPtr => Some(i8p),
             AllocatorTy::Unit => None,
-            AllocatorTy::AllocErr |
             AllocatorTy::Layout |
-            AllocatorTy::LayoutRef |
+            AllocatorTy::Usize |
             AllocatorTy::Ptr => panic!("invalid allocator output"),
         };
         let ty = llvm::LLVMFunctionType(output.unwrap_or(void),

src/libstd/alloc.rs (new file, 121 lines)

@@ -0,0 +1,121 @@
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! dox
#![unstable(issue = "32838", feature = "allocator_api")]
#[doc(inline)] #[allow(deprecated)] pub use alloc_crate::alloc::Heap;
#[doc(inline)] pub use alloc_crate::alloc::Global;
#[doc(inline)] pub use alloc_system::System;
#[doc(inline)] pub use core::alloc::*;
#[cfg(not(test))]
#[doc(hidden)]
#[allow(unused_attributes)]
pub mod __default_lib_allocator {
use super::{System, Layout, GlobalAlloc, Opaque};
// for symbol names src/librustc/middle/allocator.rs
// for signatures src/librustc_allocator/lib.rs
// linkage directives are provided as part of the current compiler allocator
// ABI
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc(size: usize, align: usize) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
System.alloc(layout) as *mut u8
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_oom() -> ! {
System.oom()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_dealloc(ptr: *mut u8,
size: usize,
align: usize) {
System.dealloc(ptr as *mut Opaque, Layout::from_size_align_unchecked(size, align))
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc(ptr: *mut u8,
old_size: usize,
align: usize,
new_size: usize) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, align);
System.realloc(ptr as *mut Opaque, old_layout, new_size) as *mut u8
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_zeroed(size: usize, align: usize) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
System.alloc_zeroed(layout) as *mut u8
}
#[cfg(stage0)]
pub mod stage0 {
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_usable_size(_layout: *const u8,
_min: *mut usize,
_max: *mut usize) {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_excess(_size: usize,
_align: usize,
_excess: *mut usize,
_err: *mut u8) -> *mut u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc_excess(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize,
_excess: *mut usize,
_err: *mut u8) -> *mut u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_grow_in_place(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize) -> u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_shrink_in_place(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize) -> u8 {
unimplemented!()
}
}
}
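And the user-facing side of the same machinery: a minimal pass-through `#[global_allocator]` written against the new trait (illustrative names, assuming a nightly with this PR):

```rust
#![feature(allocator_api)]

use std::alloc::{GlobalAlloc, Layout, Opaque, System};

struct Passthrough;

unsafe impl GlobalAlloc for Passthrough {
    // Bare pointers, no Result: a null return signals failure.
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: Passthrough = Passthrough;

fn main() {
    let v = vec![1u8, 2, 3]; // served by `Passthrough`
    assert_eq!(v.len(), 3);
}
```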


@@ -11,10 +11,8 @@
 use self::Entry::*;
 use self::VacantEntryState::*;
-use alloc::heap::Heap;
-use alloc::allocator::CollectionAllocErr;
+use alloc::{Global, Alloc, CollectionAllocErr};
 use cell::Cell;
-use core::heap::Alloc;
 use borrow::Borrow;
 use cmp::max;
 use fmt::{self, Debug};
@@ -786,7 +784,7 @@ impl<K, V, S> HashMap<K, V, S>
     pub fn reserve(&mut self, additional: usize) {
         match self.try_reserve(additional) {
             Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
-            Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
+            Err(CollectionAllocErr::AllocErr) => Global.oom(),
             Ok(()) => { /* yay */ }
         }
     }
@@ -3636,7 +3634,7 @@ mod test_map {
         if let Err(CapacityOverflow) = empty_bytes.try_reserve(max_no_ovf) {
         } else { panic!("isize::MAX + 1 should trigger a CapacityOverflow!") }
     } else {
-        if let Err(AllocErr(_)) = empty_bytes.try_reserve(max_no_ovf) {
+        if let Err(AllocErr) = empty_bytes.try_reserve(max_no_ovf) {
         } else { panic!("isize::MAX + 1 should trigger an OOM!") }
     }
 }
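Caller-side handling follows the same shape; a sketch using the unstable `try_reserve` feature, now that the `AllocErr` variant carries no payload:

```rust
#![feature(try_reserve)]

use std::collections::CollectionAllocErr;

fn grow(v: &mut Vec<u8>, extra: usize) -> Result<(), &'static str> {
    match v.try_reserve(extra) {
        Ok(()) => Ok(()),
        Err(CollectionAllocErr::CapacityOverflow) => Err("capacity overflow"),
        // Unit variant: nothing to destructure anymore.
        Err(CollectionAllocErr::AllocErr) => Err("allocation failed"),
    }
}

fn main() {
    let mut v = Vec::new();
    grow(&mut v, 16).unwrap();
    assert!(v.capacity() >= 16);
}
```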


@@ -8,9 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::heap::Heap;
-use core::heap::{Alloc, Layout};
+use alloc::{Global, Alloc, Layout, CollectionAllocErr};
 use cmp;
 use hash::{BuildHasher, Hash, Hasher};
 use marker;
@@ -18,7 +16,6 @@ use mem::{align_of, size_of, needs_drop};
 use mem;
 use ops::{Deref, DerefMut};
 use ptr::{self, Unique, NonNull};
-use alloc::allocator::CollectionAllocErr;
 use self::BucketState::*;
@@ -757,15 +754,13 @@ impl<K, V> RawTable<K, V> {
             return Err(CollectionAllocErr::CapacityOverflow);
         }
-        let buffer = Heap.alloc(Layout::from_size_align(size, alignment)
-            .ok_or(CollectionAllocErr::CapacityOverflow)?)?;
-        let hashes = buffer as *mut HashUint;
+        let buffer = Global.alloc(Layout::from_size_align(size, alignment)
+            .map_err(|_| CollectionAllocErr::CapacityOverflow)?)?;
         Ok(RawTable {
             capacity_mask: capacity.wrapping_sub(1),
             size: 0,
-            hashes: TaggedHashUintPtr::new(hashes),
+            hashes: TaggedHashUintPtr::new(buffer.cast().as_ptr()),
             marker: marker::PhantomData,
         })
     }
@@ -775,7 +770,7 @@ impl<K, V> RawTable<K, V> {
     unsafe fn new_uninitialized(capacity: usize) -> RawTable<K, V> {
         match Self::try_new_uninitialized(capacity) {
             Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
-            Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
+            Err(CollectionAllocErr::AllocErr) => Global.oom(),
             Ok(table) => { table }
         }
     }
@@ -814,7 +809,7 @@ impl<K, V> RawTable<K, V> {
     pub fn new(capacity: usize) -> RawTable<K, V> {
         match Self::try_new(capacity) {
             Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
-            Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
+            Err(CollectionAllocErr::AllocErr) => Global.oom(),
             Ok(table) => { table }
         }
     }
@@ -1188,8 +1183,8 @@ unsafe impl<#[may_dangle] K, #[may_dangle] V> Drop for RawTable<K, V> {
         debug_assert!(!oflo, "should be impossible");
         unsafe {
-            Heap.dealloc(self.hashes.ptr() as *mut u8,
+            Global.dealloc(NonNull::new_unchecked(self.hashes.ptr()).as_opaque(),
                          Layout::from_size_align(size, align).unwrap());
             // Remember how everything was allocated out of one buffer
             // during initialization? We only need one call to free here.
         }


@@ -424,13 +424,13 @@
 #[doc(hidden)]
 pub use ops::Bound;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::{BinaryHeap, BTreeMap, BTreeSet};
+pub use alloc_crate::{BinaryHeap, BTreeMap, BTreeSet};
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::{LinkedList, VecDeque};
+pub use alloc_crate::{LinkedList, VecDeque};
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::{binary_heap, btree_map, btree_set};
+pub use alloc_crate::{binary_heap, btree_map, btree_set};
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::{linked_list, vec_deque};
+pub use alloc_crate::{linked_list, vec_deque};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use self::hash_map::HashMap;
@@ -446,7 +446,7 @@ pub mod range {
 }
 #[unstable(feature = "try_reserve", reason = "new API", issue="48043")]
-pub use alloc::allocator::CollectionAllocErr;
+pub use heap::CollectionAllocErr;
 mod hash;


@@ -51,13 +51,13 @@
 // coherence challenge (e.g., specialization, neg impls, etc) we can
 // reconsider what crate these items belong in.
-use alloc::allocator;
 use any::TypeId;
 use borrow::Cow;
 use cell;
 use char;
 use core::array;
 use fmt::{self, Debug, Display};
+use heap::{AllocErr, LayoutErr, CannotReallocInPlace};
 use mem::transmute;
 use num;
 use str;
@@ -241,18 +241,27 @@ impl Error for ! {
 #[unstable(feature = "allocator_api",
            reason = "the precise API and guarantees it provides may be tweaked.",
            issue = "32838")]
-impl Error for allocator::AllocErr {
+impl Error for AllocErr {
     fn description(&self) -> &str {
-        allocator::AllocErr::description(self)
+        "memory allocation failed"
     }
 }
 #[unstable(feature = "allocator_api",
            reason = "the precise API and guarantees it provides may be tweaked.",
            issue = "32838")]
-impl Error for allocator::CannotReallocInPlace {
+impl Error for LayoutErr {
     fn description(&self) -> &str {
-        allocator::CannotReallocInPlace::description(self)
+        "invalid parameters to Layout::from_size_align"
+    }
+}
+#[unstable(feature = "allocator_api",
+           reason = "the precise API and guarantees it provides may be tweaked.",
+           issue = "32838")]
+impl Error for CannotReallocInPlace {
+    fn description(&self) -> &str {
+        CannotReallocInPlace::description(self)
     }
 }


@@ -1,176 +0,0 @@
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! dox
#![unstable(issue = "32838", feature = "allocator_api")]
pub use alloc::heap::Heap;
pub use alloc_system::System;
pub use core::heap::*;
#[cfg(not(test))]
#[doc(hidden)]
#[allow(unused_attributes)]
pub mod __default_lib_allocator {
use super::{System, Layout, Alloc, AllocErr};
use ptr;
// for symbol names src/librustc/middle/allocator.rs
// for signatures src/librustc_allocator/lib.rs
// linkage directives are provided as part of the current compiler allocator
// ABI
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc(layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_oom(err: *const u8) -> ! {
System.oom((*(err as *const AllocErr)).clone())
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_dealloc(ptr: *mut u8,
size: usize,
align: usize) {
System.dealloc(ptr, Layout::from_size_align_unchecked(size, align))
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_usable_size(layout: *const u8,
min: *mut usize,
max: *mut usize) {
let pair = System.usable_size(&*(layout as *const Layout));
*min = pair.0;
*max = pair.1;
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
err: *mut u8) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.realloc(ptr, old_layout, new_layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_zeroed(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc_zeroed(layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_excess(size: usize,
align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc_excess(layout) {
Ok(p) => {
*excess = p.1;
p.0
}
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc_excess(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.realloc_excess(ptr, old_layout, new_layout) {
Ok(p) => {
*excess = p.1;
p.0
}
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_grow_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.grow_in_place(ptr, old_layout, new_layout) {
Ok(()) => 1,
Err(_) => 0,
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_shrink_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.shrink_in_place(ptr, old_layout, new_layout) {
Ok(()) => 1,
Err(_) => 0,
}
}
}


@@ -275,6 +275,7 @@
 #![feature(macro_reexport)]
 #![feature(macro_vis_matcher)]
 #![feature(needs_panic_runtime)]
+#![feature(nonnull_cast)]
 #![feature(exhaustive_patterns)]
 #![feature(nonzero)]
 #![feature(num_bits_bytes)]
@@ -351,7 +352,7 @@ extern crate core as __core;
 #[macro_use]
 #[macro_reexport(vec, format)]
-extern crate alloc;
+extern crate alloc as alloc_crate;
 extern crate alloc_system;
 #[doc(masked)]
 extern crate libc;
@@ -437,21 +438,21 @@ pub use core::u32;
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::u64;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::boxed;
+pub use alloc_crate::boxed;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::rc;
+pub use alloc_crate::rc;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::borrow;
+pub use alloc_crate::borrow;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::fmt;
+pub use alloc_crate::fmt;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::slice;
+pub use alloc_crate::slice;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::str;
+pub use alloc_crate::str;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::string;
+pub use alloc_crate::string;
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::vec;
+pub use alloc_crate::vec;
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::char;
 #[stable(feature = "i128", since = "1.26.0")]
@@ -477,7 +478,14 @@ pub mod path;
 pub mod process;
 pub mod sync;
 pub mod time;
-pub mod heap;
+pub mod alloc;
+#[unstable(feature = "allocator_api", issue = "32838")]
+#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
+/// Use the `alloc` module instead.
+pub mod heap {
+    pub use alloc::*;
+}
 // Platform-abstraction modules
 #[macro_use]


@@ -18,7 +18,7 @@
 #![stable(feature = "rust1", since = "1.0.0")]
 #[stable(feature = "rust1", since = "1.0.0")]
-pub use alloc::arc::{Arc, Weak};
+pub use alloc_crate::arc::{Arc, Weak};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::sync::atomic;


@@ -23,10 +23,9 @@
 pub use self::PopResult::*;
-use alloc::boxed::Box;
 use core::ptr;
 use core::cell::UnsafeCell;
+use boxed::Box;
 use sync::atomic::{AtomicPtr, Ordering};
 /// A result of the `pop` function.


@@ -16,7 +16,7 @@
 // http://www.1024cores.net/home/lock-free-algorithms/queues/unbounded-spsc-queue
-use alloc::boxed::Box;
+use boxed::Box;
 use core::ptr;
 use core::cell::UnsafeCell;


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use cmp;
 use ffi::CStr;
 use io;


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use ffi::CStr;
 use io;
 use mem;


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use cmp;
 use ffi::CStr;
 use io;


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use ffi::CStr;
 use io;
 use sys::{unsupported, Void};


@@ -31,7 +31,7 @@ use sys::stdio;
 use sys::cvt;
 use sys_common::{AsInner, FromInner, IntoInner};
 use sys_common::process::{CommandEnv, EnvKey};
-use alloc::borrow::Borrow;
+use borrow::Borrow;
 ////////////////////////////////////////////////////////////////////////////////
 // Command


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use io;
 use ffi::CStr;
 use mem;


@@ -12,7 +12,7 @@
 //!
 //! Documentation can be found on the `rt::at_exit` function.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use ptr;
 use sys_common::mutex::Mutex;


@@ -14,7 +14,7 @@
 use ffi::{OsStr, OsString};
 use env;
 use collections::BTreeMap;
-use alloc::borrow::Borrow;
+use borrow::Borrow;
 pub trait EnvKey:
     From<OsString> + Into<OsString> +


@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-use alloc::boxed::FnBox;
+use boxed::FnBox;
 use env;
 use sync::atomic::{self, Ordering};
 use sys::stack_overflow;

@@ -1 +1 @@
-Subproject commit 6ceaaa4b0176a200e4bbd347d6a991ab6c776ede
+Subproject commit 7243155b1c3da0a980c868a87adebf00e0b33989


@@ -12,4 +12,3 @@ doc = false
 [dependencies]
 core = { path = "../../libcore" }
 compiler_builtins = { path = "../../rustc/compiler_builtins_shim" }
-alloc = { path = "../../liballoc" }


@@ -1,4 +1,4 @@
 # If this file is modified, then llvm will be (optionally) cleaned and then rebuilt.
 # The actual contents of this file do not matter, but to trigger a change on the
 # build bots then the contents should be changed so git updates the mtime.
-2018-03-10
+2018-04-05


@@ -12,15 +12,10 @@
 #[global_allocator]
 static A: usize = 0;
-//~^ the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
-//~| the trait bound `&usize:
+//~^ the trait bound `usize:
+//~| the trait bound `usize:
+//~| the trait bound `usize:
+//~| the trait bound `usize:
+//~| the trait bound `usize:
 fn main() {}


@@ -11,16 +11,16 @@
 #![feature(allocator_api)]
 #![crate_type = "rlib"]
-use std::heap::*;
+use std::alloc::*;
 pub struct A;
-unsafe impl<'a> Alloc for &'a A {
-    unsafe fn alloc(&mut self, _: Layout) -> Result<*mut u8, AllocErr> {
+unsafe impl GlobalAlloc for A {
+    unsafe fn alloc(&self, _: Layout) -> *mut Opaque {
         loop {}
     }
-    unsafe fn dealloc(&mut self, _ptr: *mut u8, _: Layout) {
+    unsafe fn dealloc(&self, _ptr: *mut Opaque, _: Layout) {
         loop {}
     }
 }


@@ -14,8 +14,8 @@ use std::heap::{Heap, Alloc};
 fn main() {
     unsafe {
-        let ptr = Heap.alloc_one::<i32>().unwrap_or_else(|e| {
-            Heap.oom(e)
+        let ptr = Heap.alloc_one::<i32>().unwrap_or_else(|_| {
+            Heap.oom()
         });
         *ptr.as_ptr() = 4;
         assert_eq!(*ptr.as_ptr(), 4);


@@ -13,18 +13,18 @@
 #![feature(heap_api, allocator_api)]
 #![crate_type = "rlib"]
-use std::heap::{Alloc, System, AllocErr, Layout};
+use std::heap::{GlobalAlloc, System, Layout, Opaque};
 use std::sync::atomic::{AtomicUsize, Ordering};
 pub struct A(pub AtomicUsize);
-unsafe impl<'a> Alloc for &'a A {
-    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+unsafe impl GlobalAlloc for A {
+    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
         self.0.fetch_add(1, Ordering::SeqCst);
         System.alloc(layout)
     }
-    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
+    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
         self.0.fetch_add(1, Ordering::SeqCst);
         System.dealloc(ptr, layout)
     }
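A sketch of registering the counter above as the global allocator from a downstream crate; elsewhere in these tests the auxiliary crate is referenced as `custom`:

```rust
extern crate custom;

use std::sync::atomic::{Ordering, ATOMIC_USIZE_INIT};

#[global_allocator]
static GLOBAL: custom::A = custom::A(ATOMIC_USIZE_INIT);

fn main() {
    let n = GLOBAL.0.load(Ordering::SeqCst);
    let _s = String::with_capacity(10); // allocates through GLOBAL
    assert!(GLOBAL.0.load(Ordering::SeqCst) > n);
}
```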


@@ -15,20 +15,20 @@
 extern crate helper;
-use std::heap::{Heap, Alloc, System, Layout, AllocErr};
+use std::alloc::{self, Global, Alloc, System, Layout, Opaque};
 use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};
 static HITS: AtomicUsize = ATOMIC_USIZE_INIT;
 struct A;
-unsafe impl<'a> Alloc for &'a A {
-    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
+unsafe impl alloc::GlobalAlloc for A {
+    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
         HITS.fetch_add(1, Ordering::SeqCst);
         System.alloc(layout)
     }
-    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
+    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
         HITS.fetch_add(1, Ordering::SeqCst);
         System.dealloc(ptr, layout)
     }
@@ -45,10 +45,10 @@ fn main() {
     unsafe {
         let layout = Layout::from_size_align(4, 2).unwrap();
-        let ptr = Heap.alloc(layout.clone()).unwrap();
+        let ptr = Global.alloc(layout.clone()).unwrap();
         helper::work_with(&ptr);
         assert_eq!(HITS.load(Ordering::SeqCst), n + 1);
-        Heap.dealloc(ptr, layout.clone());
+        Global.dealloc(ptr, layout.clone());
         assert_eq!(HITS.load(Ordering::SeqCst), n + 2);
         let s = String::with_capacity(10);


@@ -17,7 +17,7 @@
 extern crate custom;
 extern crate helper;
-use std::heap::{Heap, Alloc, System, Layout};
+use std::alloc::{Global, Alloc, System, Layout};
 use std::sync::atomic::{Ordering, ATOMIC_USIZE_INIT};
 #[global_allocator]
@@ -28,10 +28,10 @@ fn main() {
     let n = GLOBAL.0.load(Ordering::SeqCst);
     let layout = Layout::from_size_align(4, 2).unwrap();
-    let ptr = Heap.alloc(layout.clone()).unwrap();
+    let ptr = Global.alloc(layout.clone()).unwrap();
     helper::work_with(&ptr);
     assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 1);
-    Heap.dealloc(ptr, layout.clone());
+    Global.dealloc(ptr, layout.clone());
     assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 2);
     let ptr = System.alloc(layout.clone()).unwrap();


@@ -19,7 +19,7 @@ extern crate custom;
 extern crate custom_as_global;
 extern crate helper;
-use std::heap::{Heap, Alloc, System, Layout};
+use std::alloc::{Global, Alloc, GlobalAlloc, System, Layout};
 use std::sync::atomic::{Ordering, ATOMIC_USIZE_INIT};
 static GLOBAL: custom::A = custom::A(ATOMIC_USIZE_INIT);
@@ -30,25 +30,25 @@ fn main() {
     let layout = Layout::from_size_align(4, 2).unwrap();
     // Global allocator routes to the `custom_as_global` global
-    let ptr = Heap.alloc(layout.clone()).unwrap();
+    let ptr = Global.alloc(layout.clone());
     helper::work_with(&ptr);
     assert_eq!(custom_as_global::get(), n + 1);
-    Heap.dealloc(ptr, layout.clone());
+    Global.dealloc(ptr, layout.clone());
     assert_eq!(custom_as_global::get(), n + 2);
     // Usage of the system allocator avoids all globals
-    let ptr = System.alloc(layout.clone()).unwrap();
+    let ptr = System.alloc(layout.clone());
     helper::work_with(&ptr);
     assert_eq!(custom_as_global::get(), n + 2);
     System.dealloc(ptr, layout.clone());
     assert_eq!(custom_as_global::get(), n + 2);
     // Usage of our personal allocator doesn't affect other instances
-    let ptr = (&GLOBAL).alloc(layout.clone()).unwrap();
+    let ptr = GLOBAL.alloc(layout.clone());
     helper::work_with(&ptr);
     assert_eq!(custom_as_global::get(), n + 2);
     assert_eq!(GLOBAL.0.load(Ordering::SeqCst), 1);
-    (&GLOBAL).dealloc(ptr, layout);
+    GLOBAL.dealloc(ptr, layout);
     assert_eq!(custom_as_global::get(), n + 2);
     assert_eq!(GLOBAL.0.load(Ordering::SeqCst), 2);
 }


@@ -13,10 +13,10 @@
 // Ideally this would be revised to use no_std, but for now it serves
 // well enough to reproduce (and illustrate) the bug from #16687.
-#![feature(heap_api, allocator_api)]
-use std::heap::{Heap, Alloc, Layout};
-use std::ptr;
+#![feature(heap_api, allocator_api, nonnull_cast)]
+use std::alloc::{Global, Alloc, Layout};
+use std::ptr::{self, NonNull};
 fn main() {
     unsafe {
@@ -50,13 +50,13 @@ unsafe fn test_triangle() -> bool {
             println!("allocate({:?})", layout);
         }
-        let ret = Heap.alloc(layout.clone()).unwrap_or_else(|e| Heap.oom(e));
+        let ret = Global.alloc(layout.clone()).unwrap_or_else(|_| Global.oom());
         if PRINT {
             println!("allocate({:?}) = {:?}", layout, ret);
         }
-        ret
+        ret.cast().as_ptr()
     }
     unsafe fn deallocate(ptr: *mut u8, layout: Layout) {
@@ -64,7 +64,7 @@ unsafe fn test_triangle() -> bool {
             println!("deallocate({:?}, {:?}", ptr, layout);
         }
-        Heap.dealloc(ptr, layout);
+        Global.dealloc(NonNull::new_unchecked(ptr).as_opaque(), layout);
     }
     unsafe fn reallocate(ptr: *mut u8, old: Layout, new: Layout) -> *mut u8 {
@@ -72,14 +72,14 @@ unsafe fn test_triangle() -> bool {
             println!("reallocate({:?}, old={:?}, new={:?})", ptr, old, new);
         }
-        let ret = Heap.realloc(ptr, old.clone(), new.clone())
-                      .unwrap_or_else(|e| Heap.oom(e));
+        let ret = Global.realloc(NonNull::new_unchecked(ptr).as_opaque(), old.clone(), new.size())
+                        .unwrap_or_else(|_| Global.oom());
         if PRINT {
             println!("reallocate({:?}, old={:?}, new={:?}) = {:?}",
                      ptr, old, new, ret);
         }
-        ret
+        ret.cast().as_ptr()
     }
     fn idx_to_size(i: usize) -> usize { (i+1) * 10 }


@@ -13,6 +13,7 @@
 #![feature(allocator_api)]
 use std::heap::{Alloc, Heap, Layout};
+use std::ptr::NonNull;
 struct arena(());
@@ -32,8 +33,8 @@ struct Ccx {
 fn alloc<'a>(_bcx : &'a arena) -> &'a Bcx<'a> {
     unsafe {
         let ptr = Heap.alloc(Layout::new::<Bcx>())
-                      .unwrap_or_else(|e| Heap.oom(e));
-        &*(ptr as *const _)
+                      .unwrap_or_else(|_| Heap.oom());
+        &*(ptr.as_ptr() as *const _)
     }
 }
@@ -45,7 +46,7 @@ fn g(fcx : &Fcx) {
     let bcx = Bcx { fcx: fcx };
     let bcx2 = h(&bcx);
     unsafe {
-        Heap.dealloc(bcx2 as *const _ as *mut _, Layout::new::<Bcx>());
+        Heap.dealloc(NonNull::new_unchecked(bcx2 as *const _ as *mut _), Layout::new::<Bcx>());
     }
 }