Auto merge of #49669 - SimonSapin:global-alloc, r=alexcrichton

Add GlobalAlloc trait + tweaks for initial stabilization

This is the outcome of discussion at the Rust All Hands in Berlin. The high-level goal is to stabilize, sooner rather than later, the ability to [change the global allocator](https://github.com/rust-lang/rust/issues/27389), as well as the ability to allocate memory without abusing `Vec::with_capacity` + `mem::forget`.

Since we’re not ready to settle every detail of the `Alloc` trait for collections that are generic over the allocator type (for example, whether there should be a separate trait for deallocation only, and what that would look like exactly), we propose separately introducing **a new `GlobalAlloc` trait**, for use with the `#[global_allocator]` attribute.

We also propose a number of changes to existing APIs. They are batched in this one PR in order to minimize disruption to Nightly users.

The plan for initial stabilization is detailed in the tracking issue https://github.com/rust-lang/rust/issues/49668.

CC @rust-lang/libs, @glandium

## Immediate breaking changes to unstable features

* For pointers to allocated memory, change the pointee type from `u8` to `Opaque`, a new public [extern type](https://github.com/rust-lang/rust/issues/43467). Since extern types are not `Sized`, `<*mut _>::offset` cannot be used without first casting to another pointer type. (We hope that extern types can also be stabilized soon.)
* In the `Alloc` trait, change these pointers to `ptr::NonNull` and change the `AllocErr` type to a zero-size struct. Thanks to the non-null niche, a return type like `Result<ptr::NonNull<Opaque>, AllocErr>` is pointer-sized (see the sketch after this list).
* Instead of a new `Layout`, `realloc` takes only a new size (in addition to the pointer and old `Layout`). Changing the alignment is not supported with `realloc`.
* Change the return type of `Layout::from_size_align` from `Option<Self>` to `Result<Self, LayoutErr>`, with `LayoutErr` a new opaque struct.
* A `static` item registered as the global allocator with the `#[global_allocator]` **must now implement the new `GlobalAlloc` trait** instead of `Alloc`.
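
Taken together, nightly code that uses these APIs directly now looks roughly like the sketch below. This is illustrative only, written against the post-PR `std::alloc` reexports; it is not code from this PR:

```rust
#![feature(allocator_api)]

use std::alloc::{Alloc, Global, Layout, Opaque};
use std::ptr::NonNull;

fn main() {
    // `from_size_align` now returns `Result<Self, LayoutErr>`, not `Option<Self>`.
    let layout = Layout::from_size_align(64, 8).unwrap();
    unsafe {
        // `alloc` now returns the pointer-sized `Result<NonNull<Opaque>, AllocErr>`.
        let ptr: NonNull<Opaque> = Global.alloc(layout.clone())
            .unwrap_or_else(|_| Global.oom());

        // `Opaque` is an unsized extern type: cast before doing pointer arithmetic.
        let byte = (ptr.as_ptr() as *mut u8).offset(1);
        *byte = 42;

        // `realloc` takes the old `Layout` plus only a new *size*;
        // changing the alignment is not supported.
        let ptr = Global.realloc(ptr, layout, 128)
            .unwrap_or_else(|_| Global.oom());

        Global.dealloc(ptr, Layout::from_size_align(128, 8).unwrap());
    }
}
```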

## Eventually-breaking changes to unstable features, with a deprecation period

* Rename the respective `heap` modules to `alloc` in the `core`, `alloc`, and `std` crates. (Yes, this does mean that `::alloc::alloc::Alloc::alloc` is a valid path to a trait method if you have `extern crate alloc;`.)
* Rename the `Heap` type to `Global`, since it is the entry point for what’s registered with `#[global_allocator]`.

Old names remain available for now, as deprecated `pub use` reexports.
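
For example, in a hypothetical nightly crate relying on the deprecated reexports described above:

```rust
#![feature(alloc, allocator_api)]
#![allow(deprecated)]

extern crate alloc;

// New canonical path: `alloc::alloc::Alloc::alloc` really is a valid
// path to a trait method now that the module shares the crate's name.
use alloc::alloc::Global;

// Old names still resolve through the deprecated `pub use` reexports:
use alloc::heap::Heap; // `Heap` is now a deprecated alias for `Global`

fn main() {
    // Both names refer to the same zero-size type.
    let _allocator: Heap = Global;
}
```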

## Backward-compatible changes

* Add a new [extern type](https://github.com/rust-lang/rust/issues/43467) `Opaque`, for use in pointers to allocated memory.
* Add a new `GlobalAlloc` trait, shown below. Unlike `Alloc`, it uses bare `*mut Opaque` without `NonNull` or `Result`: a null return value indicates an error (of unspecified nature). This is easier to implement on top of `malloc`-like APIs.
* Add impls of `GlobalAlloc` for both the `Global` and `System` types, in addition to existing impls of `Alloc`. This enables calling `GlobalAlloc` methods on the stable channel before `Alloc` is stable. Implementing two traits with identical method names can make some calls ambiguous, but most code is expected to have no more than one of the two traits in scope. Erroneous code like `use std::alloc::Global; #[global_allocator] static A: Global = Global;` (where `Global` is defined to call itself, causing infinite recursion) is not statically prevented by the type system, but we count on it being hard enough to do accidentally and easy enough to diagnose.

```rust
extern {
    pub type Opaque;
}

pub unsafe trait GlobalAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque;
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout);

    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
        // Default impl: self.alloc(), then ptr::write_bytes() on success.
        let size = layout.size();
        let ptr = self.alloc(layout);
        if !ptr.is_null() {
            core::ptr::write_bytes(ptr as *mut u8, 0, size);
        }
        ptr
    }

    unsafe fn realloc(&self, ptr: *mut Opaque, old_layout: Layout, new_size: usize) -> *mut Opaque {
        // Default impl: self.alloc() at the old alignment, then
        // ptr::copy_nonoverlapping(), then self.dealloc() of the old block.
        let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
        let new_ptr = self.alloc(new_layout);
        if !new_ptr.is_null() {
            core::ptr::copy_nonoverlapping(
                ptr as *const u8,
                new_ptr as *mut u8,
                core::cmp::min(old_layout.size(), new_size),
            );
            self.dealloc(ptr, old_layout);
        }
        new_ptr
    }

    fn oom(&self) -> ! {
        // Default impl: intrinsics::abort
        unsafe { core::intrinsics::abort() }
    }

    // More methods with default impls may be added in the future
}
```
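
As a usage sketch (essentially the shape of the updated `#[global_allocator]` documentation in this PR; the `Passthrough` name is ours), an implementation forwards raw pointers instead of `Result`s:

```rust
#![feature(global_allocator, allocator_api)]

use std::alloc::{GlobalAlloc, System, Layout, Opaque};

struct Passthrough;

unsafe impl GlobalAlloc for Passthrough {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        // A null return signals failure; no `Result`, no `NonNull`.
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: Passthrough = Passthrough;

fn main() {
    // Every heap allocation in the program now goes through `GLOBAL`.
    let v = vec![1, 2, 3];
    assert_eq!(v.iter().sum::<i32>(), 6);
}
```

Since `System` implements both traits, code with `Alloc` and `GlobalAlloc` in scope at once can disambiguate with fully qualified syntax, e.g. `GlobalAlloc::alloc(&System, layout)`, which is how the standard library’s own `Alloc` impls forward to `GlobalAlloc` in this PR.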

## Bikeshed

The tracking issue https://github.com/rust-lang/rust/issues/49668 lists some open questions. If consensus is reached before this PR is merged, changes can be integrated.
Merged by bors on 2018-04-13 10:33:51 +00:00 as commit 99d4886ead.
56 changed files with 1058 additions and 1516 deletions.

src/Cargo.lock (generated, 3 changed lines)

@ -19,7 +19,6 @@ dependencies = [
name = "alloc_jemalloc"
version = "0.0.0"
dependencies = [
"alloc 0.0.0",
"alloc_system 0.0.0",
"build_helper 0.1.0",
"cc 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)",
@ -32,7 +31,6 @@ dependencies = [
name = "alloc_system"
version = "0.0.0"
dependencies = [
"alloc 0.0.0",
"compiler_builtins 0.0.0",
"core 0.0.0",
"dlmalloc 0.0.0",
@ -542,7 +540,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
name = "dlmalloc"
version = "0.0.0"
dependencies = [
"alloc 0.0.0",
"compiler_builtins 0.0.0",
"core 0.0.0",
]

@ -1 +1 @@
Subproject commit 9b2dcac06c3e23235f8997b3c5f2325a6d3382df
Subproject commit c99638dc2ecfc750cc1656f6edb2bd062c1e0981

@ -1 +1 @@
Subproject commit 6a8f0a27e9a58c55c89d07bc43a176fdae5e051c
Subproject commit 3c56329d1bd9038e5341f1962bcd8d043312a712


@ -29,16 +29,17 @@ looks like:
```rust
#![feature(global_allocator, allocator_api, heap_api)]
use std::heap::{Alloc, System, Layout, AllocErr};
use std::alloc::{GlobalAlloc, System, Layout, Opaque};
use std::ptr::NonNull;
struct MyAllocator;
unsafe impl<'a> Alloc for &'a MyAllocator {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe impl GlobalAlloc for MyAllocator {
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
System.alloc(layout)
}
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
System.dealloc(ptr, layout)
}
}

src/liballoc/alloc.rs (new file, 215 lines)

@ -0,0 +1,215 @@
// Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![unstable(feature = "allocator_api",
reason = "the precise API and guarantees it provides may be tweaked \
slightly, especially to possibly take into account the \
types being stored to make room for a future \
tracing garbage collector",
issue = "32838")]
use core::intrinsics::{min_align_of_val, size_of_val};
use core::ptr::NonNull;
use core::usize;
#[doc(inline)]
pub use core::alloc::*;
#[cfg(stage0)]
extern "Rust" {
#[allocator]
#[rustc_allocator_nounwind]
fn __rust_alloc(size: usize, align: usize, err: *mut u8) -> *mut u8;
#[cold]
#[rustc_allocator_nounwind]
fn __rust_oom(err: *const u8) -> !;
#[rustc_allocator_nounwind]
fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
#[rustc_allocator_nounwind]
fn __rust_realloc(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
err: *mut u8) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_alloc_zeroed(size: usize, align: usize, err: *mut u8) -> *mut u8;
}
#[cfg(not(stage0))]
extern "Rust" {
#[allocator]
#[rustc_allocator_nounwind]
fn __rust_alloc(size: usize, align: usize) -> *mut u8;
#[cold]
#[rustc_allocator_nounwind]
fn __rust_oom() -> !;
#[rustc_allocator_nounwind]
fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
#[rustc_allocator_nounwind]
fn __rust_realloc(ptr: *mut u8,
old_size: usize,
align: usize,
new_size: usize) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8;
}
#[derive(Copy, Clone, Default, Debug)]
pub struct Global;
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "type renamed to `Global`")]
pub type Heap = Global;
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "type renamed to `Global`")]
#[allow(non_upper_case_globals)]
pub const Heap: Global = Global;
unsafe impl GlobalAlloc for Global {
#[inline]
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
#[cfg(not(stage0))]
let ptr = __rust_alloc(layout.size(), layout.align());
#[cfg(stage0)]
let ptr = __rust_alloc(layout.size(), layout.align(), &mut 0);
ptr as *mut Opaque
}
#[inline]
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
__rust_dealloc(ptr as *mut u8, layout.size(), layout.align())
}
#[inline]
unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
#[cfg(not(stage0))]
let ptr = __rust_realloc(ptr as *mut u8, layout.size(), layout.align(), new_size);
#[cfg(stage0)]
let ptr = __rust_realloc(ptr as *mut u8, layout.size(), layout.align(),
new_size, layout.align(), &mut 0);
ptr as *mut Opaque
}
#[inline]
unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
#[cfg(not(stage0))]
let ptr = __rust_alloc_zeroed(layout.size(), layout.align());
#[cfg(stage0)]
let ptr = __rust_alloc_zeroed(layout.size(), layout.align(), &mut 0);
ptr as *mut Opaque
}
#[inline]
fn oom(&self) -> ! {
unsafe {
#[cfg(not(stage0))]
__rust_oom();
#[cfg(stage0)]
__rust_oom(&mut 0);
}
}
}
unsafe impl Alloc for Global {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc(self, layout)).ok_or(AllocErr)
}
#[inline]
unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
}
#[inline]
unsafe fn realloc(&mut self,
ptr: NonNull<Opaque>,
layout: Layout,
new_size: usize)
-> Result<NonNull<Opaque>, AllocErr>
{
NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc_zeroed(self, layout)).ok_or(AllocErr)
}
#[inline]
fn oom(&mut self) -> ! {
GlobalAlloc::oom(self)
}
}
/// The allocator for unique pointers.
// This function must not unwind. If it does, MIR trans will fail.
#[cfg(not(test))]
#[lang = "exchange_malloc"]
#[inline]
unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {
if size == 0 {
align as *mut u8
} else {
let layout = Layout::from_size_align_unchecked(size, align);
let ptr = Global.alloc(layout);
if !ptr.is_null() {
ptr as *mut u8
} else {
Global.oom()
}
}
}
#[cfg_attr(not(test), lang = "box_free")]
#[inline]
pub(crate) unsafe fn box_free<T: ?Sized>(ptr: *mut T) {
let size = size_of_val(&*ptr);
let align = min_align_of_val(&*ptr);
// We do not allocate for Box<T> when T is ZST, so deallocation is also not necessary.
if size != 0 {
let layout = Layout::from_size_align_unchecked(size, align);
Global.dealloc(ptr as *mut Opaque, layout);
}
}
#[cfg(test)]
mod tests {
extern crate test;
use self::test::Bencher;
use boxed::Box;
use alloc::{Global, Alloc, Layout};
#[test]
fn allocate_zeroed() {
unsafe {
let layout = Layout::from_size_align(1024, 1).unwrap();
let ptr = Global.alloc_zeroed(layout.clone())
.unwrap_or_else(|_| Global.oom());
let mut i = ptr.cast::<u8>().as_ptr();
let end = i.offset(layout.size() as isize);
while i < end {
assert_eq!(*i, 0);
i = i.offset(1);
}
Global.dealloc(ptr, layout);
}
}
#[bench]
fn alloc_owned_small(b: &mut Bencher) {
b.iter(|| {
let _: Box<_> = box 10;
})
}
}


@ -21,7 +21,6 @@ use core::sync::atomic::Ordering::{Acquire, Relaxed, Release, SeqCst};
use core::borrow;
use core::fmt;
use core::cmp::Ordering;
use core::heap::{Alloc, Layout};
use core::intrinsics::abort;
use core::mem::{self, align_of_val, size_of_val, uninitialized};
use core::ops::Deref;
@ -32,7 +31,7 @@ use core::hash::{Hash, Hasher};
use core::{isize, usize};
use core::convert::From;
use heap::{Heap, box_free};
use alloc::{Global, Alloc, Layout, box_free};
use boxed::Box;
use string::String;
use vec::Vec;
@ -513,15 +512,13 @@ impl<T: ?Sized> Arc<T> {
// Non-inlined part of `drop`.
#[inline(never)]
unsafe fn drop_slow(&mut self) {
let ptr = self.ptr.as_ptr();
// Destroy the data at this time, even though we may not free the box
// allocation itself (there may still be weak pointers lying around).
ptr::drop_in_place(&mut self.ptr.as_mut().data);
if self.inner().weak.fetch_sub(1, Release) == 1 {
atomic::fence(Acquire);
Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr))
Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()))
}
}
@ -555,11 +552,11 @@ impl<T: ?Sized> Arc<T> {
let layout = Layout::for_value(&*fake_ptr);
let mem = Heap.alloc(layout)
.unwrap_or_else(|e| Heap.oom(e));
let mem = Global.alloc(layout)
.unwrap_or_else(|_| Global.oom());
// Initialize the real ArcInner
let inner = set_data_ptr(ptr as *mut T, mem) as *mut ArcInner<T>;
let inner = set_data_ptr(ptr as *mut T, mem.as_ptr() as *mut u8) as *mut ArcInner<T>;
ptr::write(&mut (*inner).strong, atomic::AtomicUsize::new(1));
ptr::write(&mut (*inner).weak, atomic::AtomicUsize::new(1));
@ -626,7 +623,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
// In the event of a panic, elements that have been written
// into the new ArcInner will be dropped, then the memory freed.
struct Guard<T> {
mem: *mut u8,
mem: NonNull<u8>,
elems: *mut T,
layout: Layout,
n_elems: usize,
@ -640,7 +637,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
let slice = from_raw_parts_mut(self.elems, self.n_elems);
ptr::drop_in_place(slice);
Heap.dealloc(self.mem, self.layout.clone());
Global.dealloc(self.mem.as_opaque(), self.layout.clone());
}
}
}
@ -656,7 +653,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
let elems = &mut (*ptr).data as *mut [T] as *mut T;
let mut guard = Guard{
mem: mem,
mem: NonNull::new_unchecked(mem),
elems: elems,
layout: layout,
n_elems: 0,
@ -1148,8 +1145,6 @@ impl<T: ?Sized> Drop for Weak<T> {
/// assert!(other_weak_foo.upgrade().is_none());
/// ```
fn drop(&mut self) {
let ptr = self.ptr.as_ptr();
// If we find out that we were the last weak pointer, then its time to
// deallocate the data entirely. See the discussion in Arc::drop() about
// the memory orderings
@ -1161,7 +1156,7 @@ impl<T: ?Sized> Drop for Weak<T> {
if self.inner().weak.fetch_sub(1, Release) == 1 {
atomic::fence(Acquire);
unsafe {
Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr))
Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()))
}
}
}


@ -41,14 +41,13 @@
// - A node of length `n` has `n` keys, `n` values, and (in an internal node) `n + 1` edges.
// This implies that even an empty internal node has at least one edge.
use core::heap::{Alloc, Layout};
use core::marker::PhantomData;
use core::mem;
use core::ptr::{self, Unique, NonNull};
use core::slice;
use alloc::{Global, Alloc, Layout};
use boxed::Box;
use heap::Heap;
const B: usize = 6;
pub const MIN_LEN: usize = B - 1;
@ -237,7 +236,7 @@ impl<K, V> Root<K, V> {
pub fn pop_level(&mut self) {
debug_assert!(self.height > 0);
let top = self.node.ptr.as_ptr() as *mut u8;
let top = self.node.ptr;
self.node = unsafe {
BoxedNode::from_ptr(self.as_mut()
@ -250,7 +249,7 @@ impl<K, V> Root<K, V> {
self.as_mut().as_leaf_mut().parent = ptr::null();
unsafe {
Heap.dealloc(top, Layout::new::<InternalNode<K, V>>());
Global.dealloc(NonNull::from(top).as_opaque(), Layout::new::<InternalNode<K, V>>());
}
}
}
@ -434,9 +433,9 @@ impl<K, V> NodeRef<marker::Owned, K, V, marker::Leaf> {
marker::Edge
>
> {
let ptr = self.as_leaf() as *const LeafNode<K, V> as *const u8 as *mut u8;
let node = self.node;
let ret = self.ascend().ok();
Heap.dealloc(ptr, Layout::new::<LeafNode<K, V>>());
Global.dealloc(node.as_opaque(), Layout::new::<LeafNode<K, V>>());
ret
}
}
@ -455,9 +454,9 @@ impl<K, V> NodeRef<marker::Owned, K, V, marker::Internal> {
marker::Edge
>
> {
let ptr = self.as_internal() as *const InternalNode<K, V> as *const u8 as *mut u8;
let node = self.node;
let ret = self.ascend().ok();
Heap.dealloc(ptr, Layout::new::<InternalNode<K, V>>());
Global.dealloc(node.as_opaque(), Layout::new::<InternalNode<K, V>>());
ret
}
}
@ -1239,13 +1238,13 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
).correct_parent_link();
}
Heap.dealloc(
right_node.node.as_ptr() as *mut u8,
Global.dealloc(
right_node.node.as_opaque(),
Layout::new::<InternalNode<K, V>>(),
);
} else {
Heap.dealloc(
right_node.node.as_ptr() as *mut u8,
Global.dealloc(
right_node.node.as_opaque(),
Layout::new::<LeafNode<K, V>>(),
);
}


@ -8,282 +8,103 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![unstable(feature = "allocator_api",
reason = "the precise API and guarantees it provides may be tweaked \
slightly, especially to possibly take into account the \
types being stored to make room for a future \
tracing garbage collector",
issue = "32838")]
#![allow(deprecated)]
use core::intrinsics::{min_align_of_val, size_of_val};
use core::mem::{self, ManuallyDrop};
use core::usize;
pub use alloc::{Layout, AllocErr, CannotReallocInPlace, Opaque};
use core::alloc::Alloc as CoreAlloc;
use core::ptr::NonNull;
pub use core::heap::*;
#[doc(hidden)]
pub mod __core {
pub use core::*;
}
extern "Rust" {
#[allocator]
#[rustc_allocator_nounwind]
fn __rust_alloc(size: usize, align: usize, err: *mut u8) -> *mut u8;
#[cold]
#[rustc_allocator_nounwind]
fn __rust_oom(err: *const u8) -> !;
#[rustc_allocator_nounwind]
fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
#[rustc_allocator_nounwind]
fn __rust_usable_size(layout: *const u8,
min: *mut usize,
max: *mut usize);
#[rustc_allocator_nounwind]
fn __rust_realloc(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
err: *mut u8) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_alloc_zeroed(size: usize, align: usize, err: *mut u8) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_alloc_excess(size: usize,
align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_realloc_excess(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8;
#[rustc_allocator_nounwind]
fn __rust_grow_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8;
#[rustc_allocator_nounwind]
fn __rust_shrink_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8;
}
#[derive(Debug)]
pub struct Excess(pub *mut u8, pub usize);
#[derive(Copy, Clone, Default, Debug)]
pub struct Heap;
unsafe impl Alloc for Heap {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
let mut err = ManuallyDrop::new(mem::uninitialized::<AllocErr>());
let ptr = __rust_alloc(layout.size(),
layout.align(),
&mut *err as *mut AllocErr as *mut u8);
if ptr.is_null() {
Err(ManuallyDrop::into_inner(err))
} else {
Ok(ptr)
}
}
#[inline]
#[cold]
fn oom(&mut self, err: AllocErr) -> ! {
unsafe {
__rust_oom(&err as *const AllocErr as *const u8)
}
}
#[inline]
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
__rust_dealloc(ptr, layout.size(), layout.align())
}
#[inline]
fn usable_size(&self, layout: &Layout) -> (usize, usize) {
let mut min = 0;
let mut max = 0;
unsafe {
__rust_usable_size(layout as *const Layout as *const u8,
&mut min,
&mut max);
}
(min, max)
}
#[inline]
/// Compatibility with older versions of #[global_allocator] during bootstrap
pub unsafe trait Alloc {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
fn oom(&mut self, err: AllocErr) -> !;
fn usable_size(&self, layout: &Layout) -> (usize, usize);
unsafe fn realloc(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout)
-> Result<*mut u8, AllocErr>
{
let mut err = ManuallyDrop::new(mem::uninitialized::<AllocErr>());
let ptr = __rust_realloc(ptr,
layout.size(),
layout.align(),
new_layout.size(),
new_layout.align(),
&mut *err as *mut AllocErr as *mut u8);
if ptr.is_null() {
Err(ManuallyDrop::into_inner(err))
} else {
mem::forget(err);
Ok(ptr)
}
new_layout: Layout) -> Result<*mut u8, AllocErr>;
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
unsafe fn realloc_excess(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<Excess, AllocErr>;
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace>;
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace>;
}
unsafe impl<T> Alloc for T where T: CoreAlloc {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
CoreAlloc::alloc(self, layout).map(|ptr| ptr.cast().as_ptr())
}
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
CoreAlloc::dealloc(self, ptr, layout)
}
fn oom(&mut self, _: AllocErr) -> ! {
CoreAlloc::oom(self)
}
fn usable_size(&self, layout: &Layout) -> (usize, usize) {
CoreAlloc::usable_size(self, layout)
}
unsafe fn realloc(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
CoreAlloc::realloc(self, ptr, layout, new_layout.size()).map(|ptr| ptr.cast().as_ptr())
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
let mut err = ManuallyDrop::new(mem::uninitialized::<AllocErr>());
let ptr = __rust_alloc_zeroed(layout.size(),
layout.align(),
&mut *err as *mut AllocErr as *mut u8);
if ptr.is_null() {
Err(ManuallyDrop::into_inner(err))
} else {
Ok(ptr)
}
CoreAlloc::alloc_zeroed(self, layout).map(|ptr| ptr.cast().as_ptr())
}
#[inline]
unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
let mut err = ManuallyDrop::new(mem::uninitialized::<AllocErr>());
let mut size = 0;
let ptr = __rust_alloc_excess(layout.size(),
layout.align(),
&mut size,
&mut *err as *mut AllocErr as *mut u8);
if ptr.is_null() {
Err(ManuallyDrop::into_inner(err))
} else {
Ok(Excess(ptr, size))
}
CoreAlloc::alloc_excess(self, layout)
.map(|e| Excess(e.0 .cast().as_ptr(), e.1))
}
#[inline]
unsafe fn realloc_excess(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<Excess, AllocErr> {
let mut err = ManuallyDrop::new(mem::uninitialized::<AllocErr>());
let mut size = 0;
let ptr = __rust_realloc_excess(ptr,
layout.size(),
layout.align(),
new_layout.size(),
new_layout.align(),
&mut size,
&mut *err as *mut AllocErr as *mut u8);
if ptr.is_null() {
Err(ManuallyDrop::into_inner(err))
} else {
Ok(Excess(ptr, size))
}
let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
CoreAlloc::realloc_excess(self, ptr, layout, new_layout.size())
.map(|e| Excess(e.0 .cast().as_ptr(), e.1))
}
#[inline]
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout)
-> Result<(), CannotReallocInPlace>
{
debug_assert!(new_layout.size() >= layout.size());
debug_assert!(new_layout.align() == layout.align());
let ret = __rust_grow_in_place(ptr,
layout.size(),
layout.align(),
new_layout.size(),
new_layout.align());
if ret != 0 {
Ok(())
} else {
Err(CannotReallocInPlace)
}
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
CoreAlloc::grow_in_place(self, ptr, layout, new_layout.size())
}
#[inline]
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
debug_assert!(new_layout.size() <= layout.size());
debug_assert!(new_layout.align() == layout.align());
let ret = __rust_shrink_in_place(ptr,
layout.size(),
layout.align(),
new_layout.size(),
new_layout.align());
if ret != 0 {
Ok(())
} else {
Err(CannotReallocInPlace)
}
}
}
/// The allocator for unique pointers.
// This function must not unwind. If it does, MIR trans will fail.
#[cfg(not(test))]
#[lang = "exchange_malloc"]
#[inline]
unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {
if size == 0 {
align as *mut u8
} else {
let layout = Layout::from_size_align_unchecked(size, align);
Heap.alloc(layout).unwrap_or_else(|err| {
Heap.oom(err)
})
}
}
#[cfg_attr(not(test), lang = "box_free")]
#[inline]
pub(crate) unsafe fn box_free<T: ?Sized>(ptr: *mut T) {
let size = size_of_val(&*ptr);
let align = min_align_of_val(&*ptr);
// We do not allocate for Box<T> when T is ZST, so deallocation is also not necessary.
if size != 0 {
let layout = Layout::from_size_align_unchecked(size, align);
Heap.dealloc(ptr as *mut u8, layout);
}
}
#[cfg(test)]
mod tests {
extern crate test;
use self::test::Bencher;
use boxed::Box;
use heap::{Heap, Alloc, Layout};
#[test]
fn allocate_zeroed() {
unsafe {
let layout = Layout::from_size_align(1024, 1).unwrap();
let ptr = Heap.alloc_zeroed(layout.clone())
.unwrap_or_else(|e| Heap.oom(e));
let end = ptr.offset(layout.size() as isize);
let mut i = ptr;
while i < end {
assert_eq!(*i, 0);
i = i.offset(1);
}
Heap.dealloc(ptr, layout);
}
}
#[bench]
fn alloc_owned_small(b: &mut Bencher) {
b.iter(|| {
let _: Box<_> = box 10;
})
let ptr = NonNull::new_unchecked(ptr as *mut Opaque);
CoreAlloc::shrink_in_place(self, ptr, layout, new_layout.size())
}
}


@ -57,7 +57,7 @@
//!
//! ## Heap interfaces
//!
//! The [`heap`](heap/index.html) module defines the low-level interface to the
//! The [`alloc`](alloc/index.html) module defines the low-level interface to the
//! default global allocator. It is not compatible with the libc allocator API.
#![allow(unused_attributes)]
@ -97,7 +97,9 @@
#![feature(from_ref)]
#![feature(fundamental)]
#![feature(lang_items)]
#![feature(libc)]
#![feature(needs_allocator)]
#![feature(nonnull_cast)]
#![feature(nonzero)]
#![feature(optin_builtin_traits)]
#![feature(pattern)]
@ -141,10 +143,26 @@ mod macros;
#[rustc_deprecated(since = "1.27.0", reason = "use the heap module in core, alloc, or std instead")]
#[unstable(feature = "allocator_api", issue = "32838")]
pub use core::heap as allocator;
/// Use the `alloc` module instead.
pub mod allocator {
pub use alloc::*;
}
// Heaps provided for low-level allocation strategies
pub mod alloc;
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
/// Use the `alloc` module instead.
#[cfg(not(stage0))]
pub mod heap {
pub use alloc::*;
}
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
#[cfg(stage0)]
pub mod heap;
// Primitive types using the heaps above


@ -8,13 +8,12 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::{Alloc, Layout, Global};
use core::cmp;
use core::heap::{Alloc, Layout};
use core::mem;
use core::ops::Drop;
use core::ptr::{self, Unique};
use core::ptr::{self, NonNull, Unique};
use core::slice;
use heap::Heap;
use super::boxed::Box;
use super::allocator::CollectionAllocErr;
use super::allocator::CollectionAllocErr::*;
@ -47,7 +46,7 @@ use super::allocator::CollectionAllocErr::*;
/// field. This allows zero-sized types to not be special-cased by consumers of
/// this type.
#[allow(missing_debug_implementations)]
pub struct RawVec<T, A: Alloc = Heap> {
pub struct RawVec<T, A: Alloc = Global> {
ptr: Unique<T>,
cap: usize,
a: A,
@ -91,7 +90,7 @@ impl<T, A: Alloc> RawVec<T, A> {
// handles ZSTs and `cap = 0` alike
let ptr = if alloc_size == 0 {
mem::align_of::<T>() as *mut u8
NonNull::<T>::dangling().as_opaque()
} else {
let align = mem::align_of::<T>();
let result = if zeroed {
@ -101,12 +100,12 @@ impl<T, A: Alloc> RawVec<T, A> {
};
match result {
Ok(ptr) => ptr,
Err(err) => a.oom(err),
Err(_) => a.oom(),
}
};
RawVec {
ptr: Unique::new_unchecked(ptr as *mut _),
ptr: ptr.cast().into(),
cap,
a,
}
@ -114,14 +113,14 @@ impl<T, A: Alloc> RawVec<T, A> {
}
}
impl<T> RawVec<T, Heap> {
impl<T> RawVec<T, Global> {
/// Creates the biggest possible RawVec (on the system heap)
/// without allocating. If T has positive size, then this makes a
/// RawVec with capacity 0. If T has 0 size, then it makes a
/// RawVec with capacity `usize::MAX`. Useful for implementing
/// delayed allocation.
pub fn new() -> Self {
Self::new_in(Heap)
Self::new_in(Global)
}
/// Creates a RawVec (on the system heap) with exactly the
@ -141,13 +140,13 @@ impl<T> RawVec<T, Heap> {
/// Aborts on OOM
#[inline]
pub fn with_capacity(cap: usize) -> Self {
RawVec::allocate_in(cap, false, Heap)
RawVec::allocate_in(cap, false, Global)
}
/// Like `with_capacity` but guarantees the buffer is zeroed.
#[inline]
pub fn with_capacity_zeroed(cap: usize) -> Self {
RawVec::allocate_in(cap, true, Heap)
RawVec::allocate_in(cap, true, Global)
}
}
@ -168,7 +167,7 @@ impl<T, A: Alloc> RawVec<T, A> {
}
}
impl<T> RawVec<T, Heap> {
impl<T> RawVec<T, Global> {
/// Reconstitutes a RawVec from a pointer, capacity.
///
/// # Undefined Behavior
@ -180,7 +179,7 @@ impl<T> RawVec<T, Heap> {
RawVec {
ptr: Unique::new_unchecked(ptr),
cap,
a: Heap,
a: Global,
}
}
@ -310,14 +309,13 @@ impl<T, A: Alloc> RawVec<T, A> {
// `from_size_align_unchecked`.
let new_cap = 2 * self.cap;
let new_size = new_cap * elem_size;
let new_layout = Layout::from_size_align_unchecked(new_size, cur.align());
alloc_guard(new_size).expect("capacity overflow");
let ptr_res = self.a.realloc(self.ptr.as_ptr() as *mut u8,
let ptr_res = self.a.realloc(NonNull::from(self.ptr).as_opaque(),
cur,
new_layout);
new_size);
match ptr_res {
Ok(ptr) => (new_cap, Unique::new_unchecked(ptr as *mut T)),
Err(e) => self.a.oom(e),
Ok(ptr) => (new_cap, ptr.cast().into()),
Err(_) => self.a.oom(),
}
}
None => {
@ -326,7 +324,7 @@ impl<T, A: Alloc> RawVec<T, A> {
let new_cap = if elem_size > (!0) / 8 { 1 } else { 4 };
match self.a.alloc_array::<T>(new_cap) {
Ok(ptr) => (new_cap, ptr.into()),
Err(e) => self.a.oom(e),
Err(_) => self.a.oom(),
}
}
};
@ -371,9 +369,7 @@ impl<T, A: Alloc> RawVec<T, A> {
let new_cap = 2 * self.cap;
let new_size = new_cap * elem_size;
alloc_guard(new_size).expect("capacity overflow");
let ptr = self.ptr() as *mut _;
let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
match self.a.grow_in_place(ptr, old_layout, new_layout) {
match self.a.grow_in_place(NonNull::from(self.ptr).as_opaque(), old_layout, new_size) {
Ok(_) => {
// We can't directly divide `size`.
self.cap = new_cap;
@ -423,19 +419,19 @@ impl<T, A: Alloc> RawVec<T, A> {
// Nothing we can really do about these checks :(
let new_cap = used_cap.checked_add(needed_extra_cap).ok_or(CapacityOverflow)?;
let new_layout = Layout::array::<T>(new_cap).ok_or(CapacityOverflow)?;
let new_layout = Layout::array::<T>(new_cap).map_err(|_| CapacityOverflow)?;
alloc_guard(new_layout.size())?;
let res = match self.current_layout() {
Some(layout) => {
let old_ptr = self.ptr.as_ptr() as *mut u8;
self.a.realloc(old_ptr, layout, new_layout)
debug_assert!(new_layout.align() == layout.align());
self.a.realloc(NonNull::from(self.ptr).as_opaque(), layout, new_layout.size())
}
None => self.a.alloc(new_layout),
};
self.ptr = Unique::new_unchecked(res? as *mut T);
self.ptr = res?.cast().into();
self.cap = new_cap;
Ok(())
@ -445,7 +441,7 @@ impl<T, A: Alloc> RawVec<T, A> {
pub fn reserve_exact(&mut self, used_cap: usize, needed_extra_cap: usize) {
match self.try_reserve_exact(used_cap, needed_extra_cap) {
Err(CapacityOverflow) => panic!("capacity overflow"),
Err(AllocErr(e)) => self.a.oom(e),
Err(AllocErr) => self.a.oom(),
Ok(()) => { /* yay */ }
}
}
@ -531,20 +527,20 @@ impl<T, A: Alloc> RawVec<T, A> {
}
let new_cap = self.amortized_new_size(used_cap, needed_extra_cap)?;
let new_layout = Layout::array::<T>(new_cap).ok_or(CapacityOverflow)?;
let new_layout = Layout::array::<T>(new_cap).map_err(|_| CapacityOverflow)?;
// FIXME: may crash and burn on over-reserve
alloc_guard(new_layout.size())?;
let res = match self.current_layout() {
Some(layout) => {
let old_ptr = self.ptr.as_ptr() as *mut u8;
self.a.realloc(old_ptr, layout, new_layout)
debug_assert!(new_layout.align() == layout.align());
self.a.realloc(NonNull::from(self.ptr).as_opaque(), layout, new_layout.size())
}
None => self.a.alloc(new_layout),
};
self.ptr = Unique::new_unchecked(res? as *mut T);
self.ptr = res?.cast().into();
self.cap = new_cap;
Ok(())
@ -555,7 +551,7 @@ impl<T, A: Alloc> RawVec<T, A> {
pub fn reserve(&mut self, used_cap: usize, needed_extra_cap: usize) {
match self.try_reserve(used_cap, needed_extra_cap) {
Err(CapacityOverflow) => panic!("capacity overflow"),
Err(AllocErr(e)) => self.a.oom(e),
Err(AllocErr) => self.a.oom(),
Ok(()) => { /* yay */ }
}
}
@ -601,11 +597,12 @@ impl<T, A: Alloc> RawVec<T, A> {
// (regardless of whether `self.cap - used_cap` wrapped).
// Therefore we can safely call grow_in_place.
let ptr = self.ptr() as *mut _;
let new_layout = Layout::new::<T>().repeat(new_cap).unwrap().0;
// FIXME: may crash and burn on over-reserve
alloc_guard(new_layout.size()).expect("capacity overflow");
match self.a.grow_in_place(ptr, old_layout, new_layout) {
match self.a.grow_in_place(
NonNull::from(self.ptr).as_opaque(), old_layout, new_layout.size(),
) {
Ok(_) => {
self.cap = new_cap;
true
@ -665,12 +662,11 @@ impl<T, A: Alloc> RawVec<T, A> {
let new_size = elem_size * amount;
let align = mem::align_of::<T>();
let old_layout = Layout::from_size_align_unchecked(old_size, align);
let new_layout = Layout::from_size_align_unchecked(new_size, align);
match self.a.realloc(self.ptr.as_ptr() as *mut u8,
match self.a.realloc(NonNull::from(self.ptr).as_opaque(),
old_layout,
new_layout) {
Ok(p) => self.ptr = Unique::new_unchecked(p as *mut T),
Err(err) => self.a.oom(err),
new_size) {
Ok(p) => self.ptr = p.cast().into(),
Err(_) => self.a.oom(),
}
}
self.cap = amount;
@ -678,7 +674,7 @@ impl<T, A: Alloc> RawVec<T, A> {
}
}
impl<T> RawVec<T, Heap> {
impl<T> RawVec<T, Global> {
/// Converts the entire buffer into `Box<[T]>`.
///
/// While it is not *strictly* Undefined Behavior to call
@ -702,8 +698,7 @@ impl<T, A: Alloc> RawVec<T, A> {
let elem_size = mem::size_of::<T>();
if elem_size != 0 {
if let Some(layout) = self.current_layout() {
let ptr = self.ptr() as *mut u8;
self.a.dealloc(ptr, layout);
self.a.dealloc(NonNull::from(self.ptr).as_opaque(), layout);
}
}
}
@ -739,6 +734,7 @@ fn alloc_guard(alloc_size: usize) -> Result<(), CollectionAllocErr> {
#[cfg(test)]
mod tests {
use super::*;
use alloc::Opaque;
#[test]
fn allocator_param() {
@ -758,18 +754,18 @@ mod tests {
// before allocation attempts start failing.
struct BoundedAlloc { fuel: usize }
unsafe impl Alloc for BoundedAlloc {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
let size = layout.size();
if size > self.fuel {
return Err(AllocErr::Unsupported { details: "fuel exhausted" });
return Err(AllocErr);
}
match Heap.alloc(layout) {
match Global.alloc(layout) {
ok @ Ok(_) => { self.fuel -= size; ok }
err @ Err(_) => err,
}
}
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
Heap.dealloc(ptr, layout)
unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
Global.dealloc(ptr, layout)
}
}


@ -250,7 +250,6 @@ use core::cell::Cell;
use core::cmp::Ordering;
use core::fmt;
use core::hash::{Hash, Hasher};
use core::heap::{Alloc, Layout};
use core::intrinsics::abort;
use core::marker;
use core::marker::{Unsize, PhantomData};
@ -260,7 +259,7 @@ use core::ops::CoerceUnsized;
use core::ptr::{self, NonNull};
use core::convert::From;
use heap::{Heap, box_free};
use alloc::{Global, Alloc, Layout, Opaque, box_free};
use string::String;
use vec::Vec;
@ -668,11 +667,11 @@ impl<T: ?Sized> Rc<T> {
let layout = Layout::for_value(&*fake_ptr);
let mem = Heap.alloc(layout)
.unwrap_or_else(|e| Heap.oom(e));
let mem = Global.alloc(layout)
.unwrap_or_else(|_| Global.oom());
// Initialize the real RcBox
let inner = set_data_ptr(ptr as *mut T, mem) as *mut RcBox<T>;
let inner = set_data_ptr(ptr as *mut T, mem.as_ptr() as *mut u8) as *mut RcBox<T>;
ptr::write(&mut (*inner).strong, Cell::new(1));
ptr::write(&mut (*inner).weak, Cell::new(1));
@ -738,7 +737,7 @@ impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
// In the event of a panic, elements that have been written
// into the new RcBox will be dropped, then the memory freed.
struct Guard<T> {
mem: *mut u8,
mem: NonNull<Opaque>,
elems: *mut T,
layout: Layout,
n_elems: usize,
@ -752,7 +751,7 @@ impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
let slice = from_raw_parts_mut(self.elems, self.n_elems);
ptr::drop_in_place(slice);
Heap.dealloc(self.mem, self.layout.clone());
Global.dealloc(self.mem, self.layout.clone());
}
}
}
@ -761,14 +760,14 @@ impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
let v_ptr = v as *const [T];
let ptr = Self::allocate_for_ptr(v_ptr);
let mem = ptr as *mut _ as *mut u8;
let mem = ptr as *mut _ as *mut Opaque;
let layout = Layout::for_value(&*ptr);
// Pointer to first element
let elems = &mut (*ptr).value as *mut [T] as *mut T;
let mut guard = Guard{
mem: mem,
mem: NonNull::new_unchecked(mem),
elems: elems,
layout: layout,
n_elems: 0,
@ -835,8 +834,6 @@ unsafe impl<#[may_dangle] T: ?Sized> Drop for Rc<T> {
/// ```
fn drop(&mut self) {
unsafe {
let ptr = self.ptr.as_ptr();
self.dec_strong();
if self.strong() == 0 {
// destroy the contained object
@ -847,7 +844,7 @@ unsafe impl<#[may_dangle] T: ?Sized> Drop for Rc<T> {
self.dec_weak();
if self.weak() == 0 {
Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr));
Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()));
}
}
}
@ -1267,13 +1264,11 @@ impl<T: ?Sized> Drop for Weak<T> {
/// ```
fn drop(&mut self) {
unsafe {
let ptr = self.ptr.as_ptr();
self.dec_weak();
// the weak count starts at 1, and will only go to zero if all
// the strong pointers have disappeared.
if self.weak() == 0 {
Heap.dealloc(ptr as *mut u8, Layout::for_value(&*ptr));
Global.dealloc(self.ptr.as_opaque(), Layout::for_value(self.ptr.as_ref()));
}
}
}


@ -9,7 +9,7 @@
// except according to those terms.
use alloc_system::System;
use std::heap::{Heap, Alloc, Layout};
use std::alloc::{Global, Alloc, Layout};
/// https://github.com/rust-lang/rust/issues/45955
///
@ -22,7 +22,7 @@ fn alloc_system_overaligned_request() {
#[test]
fn std_heap_overaligned_request() {
check_overalign_requests(Heap)
check_overalign_requests(Global)
}
fn check_overalign_requests<T: Alloc>(mut allocator: T) {
@ -34,7 +34,8 @@ fn check_overalign_requests<T: Alloc>(mut allocator: T) {
allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap()
}).collect();
for &ptr in &pointers {
assert_eq!((ptr as usize) % align, 0, "Got a pointer less aligned than requested")
assert_eq!((ptr.as_ptr() as usize) % align, 0,
"Got a pointer less aligned than requested")
}
// Clean up


@ -575,11 +575,11 @@ fn test_try_reserve() {
} else { panic!("usize::MAX should trigger an overflow!") }
} else {
// Check isize::MAX + 1 is an OOM
if let Err(AllocErr(_)) = empty_string.try_reserve(MAX_CAP + 1) {
if let Err(AllocErr) = empty_string.try_reserve(MAX_CAP + 1) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
// Check usize::MAX is an OOM
if let Err(AllocErr(_)) = empty_string.try_reserve(MAX_USIZE) {
if let Err(AllocErr) = empty_string.try_reserve(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an OOM!") }
}
}
@ -599,7 +599,7 @@ fn test_try_reserve() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
// Should always overflow in the add-to-len
@ -637,10 +637,10 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = empty_string.try_reserve_exact(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an overflow!") }
} else {
if let Err(AllocErr(_)) = empty_string.try_reserve_exact(MAX_CAP + 1) {
if let Err(AllocErr) = empty_string.try_reserve_exact(MAX_CAP + 1) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
if let Err(AllocErr(_)) = empty_string.try_reserve_exact(MAX_USIZE) {
if let Err(AllocErr) = empty_string.try_reserve_exact(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an OOM!") }
}
}
@ -659,7 +659,7 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {


@ -1016,11 +1016,11 @@ fn test_try_reserve() {
} else { panic!("usize::MAX should trigger an overflow!") }
} else {
// Check isize::MAX + 1 is an OOM
if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_CAP + 1) {
if let Err(AllocErr) = empty_bytes.try_reserve(MAX_CAP + 1) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
// Check usize::MAX is an OOM
if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_USIZE) {
if let Err(AllocErr) = empty_bytes.try_reserve(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an OOM!") }
}
}
@ -1040,7 +1040,7 @@ fn test_try_reserve() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
// Should always overflow in the add-to-len
@ -1063,7 +1063,7 @@ fn test_try_reserve() {
if let Err(CapacityOverflow) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
if let Err(AllocErr) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
// Should fail in the mul-by-size
@ -1103,10 +1103,10 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = empty_bytes.try_reserve_exact(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an overflow!") }
} else {
if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_CAP + 1) {
if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_CAP + 1) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_USIZE) {
if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_USIZE) {
} else { panic!("usize::MAX should trigger an OOM!") }
}
}
@ -1125,7 +1125,7 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {
@ -1146,7 +1146,7 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
if let Err(AllocErr) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_USIZE - 20) {


@ -1073,7 +1073,7 @@ fn test_try_reserve() {
// VecDeque starts with capacity 7, always adds 1 to the capacity
// and also rounds the number to next power of 2 so this is the
// furthest we can go without triggering CapacityOverflow
if let Err(AllocErr(_)) = empty_bytes.try_reserve(MAX_CAP) {
if let Err(AllocErr) = empty_bytes.try_reserve(MAX_CAP) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
}
@ -1093,7 +1093,7 @@ fn test_try_reserve() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
// Should always overflow in the add-to-len
@ -1116,7 +1116,7 @@ fn test_try_reserve() {
if let Err(CapacityOverflow) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
if let Err(AllocErr) = ten_u32s.try_reserve(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
// Should fail in the mul-by-size
@ -1160,7 +1160,7 @@ fn test_try_reserve_exact() {
// VecDeque starts with capacity 7, always adds 1 to the capacity
// and also rounds the number to next power of 2 so this is the
// furthest we can go without triggering CapacityOverflow
if let Err(AllocErr(_)) = empty_bytes.try_reserve_exact(MAX_CAP) {
if let Err(AllocErr) = empty_bytes.try_reserve_exact(MAX_CAP) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
}
@ -1179,7 +1179,7 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
if let Err(AllocErr) = ten_bytes.try_reserve_exact(MAX_CAP - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
if let Err(CapacityOverflow) = ten_bytes.try_reserve_exact(MAX_USIZE) {
@ -1200,7 +1200,7 @@ fn test_try_reserve_exact() {
if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an overflow!"); }
} else {
if let Err(AllocErr(_)) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
if let Err(AllocErr) = ten_u32s.try_reserve_exact(MAX_CAP/4 - 9) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
if let Err(CapacityOverflow) = ten_u32s.try_reserve_exact(MAX_USIZE - 20) {


@ -12,7 +12,6 @@ test = false
doc = false
[dependencies]
alloc = { path = "../liballoc" }
alloc_system = { path = "../liballoc_system" }
core = { path = "../libcore" }
libc = { path = "../rustc/libc_shim" }


@ -30,9 +30,7 @@ extern crate libc;
pub use contents::*;
#[cfg(not(dummy_jemalloc))]
mod contents {
use core::ptr;
use core::heap::{Alloc, AllocErr, Layout};
use core::alloc::GlobalAlloc;
use alloc_system::System;
use libc::{c_int, c_void, size_t};
@ -52,18 +50,10 @@ mod contents {
target_os = "dragonfly", target_os = "windows", target_env = "musl"),
link_name = "je_rallocx")]
fn rallocx(ptr: *mut c_void, size: size_t, flags: c_int) -> *mut c_void;
#[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
target_os = "dragonfly", target_os = "windows", target_env = "musl"),
link_name = "je_xallocx")]
fn xallocx(ptr: *mut c_void, size: size_t, extra: size_t, flags: c_int) -> size_t;
#[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
target_os = "dragonfly", target_os = "windows", target_env = "musl"),
link_name = "je_sdallocx")]
fn sdallocx(ptr: *mut c_void, size: size_t, flags: c_int);
#[cfg_attr(any(target_os = "macos", target_os = "android", target_os = "ios",
target_os = "dragonfly", target_os = "windows", target_env = "musl"),
link_name = "je_nallocx")]
fn nallocx(size: size_t, flags: c_int) -> size_t;
}
const MALLOCX_ZERO: c_int = 0x40;
@ -104,23 +94,16 @@ mod contents {
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_alloc(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
pub unsafe extern fn __rde_alloc(size: usize, align: usize) -> *mut u8 {
let flags = align_to_flags(align, size);
let ptr = mallocx(size as size_t, flags) as *mut u8;
if ptr.is_null() {
let layout = Layout::from_size_align_unchecked(size, align);
ptr::write(err as *mut AllocErr,
AllocErr::Exhausted { request: layout });
}
ptr
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_oom(err: *const u8) -> ! {
System.oom((*(err as *const AllocErr)).clone())
pub unsafe extern fn __rde_oom() -> ! {
System.oom()
}
#[no_mangle]
@ -132,118 +115,26 @@ mod contents {
sdallocx(ptr as *mut c_void, size, flags);
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_usable_size(layout: *const u8,
min: *mut usize,
max: *mut usize) {
let layout = &*(layout as *const Layout);
let flags = align_to_flags(layout.align(), layout.size());
let size = nallocx(layout.size(), flags) as usize;
*min = layout.size();
if size > 0 {
*max = size;
} else {
*max = layout.size();
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_realloc(ptr: *mut u8,
_old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
err: *mut u8) -> *mut u8 {
if new_align != old_align {
ptr::write(err as *mut AllocErr,
AllocErr::Unsupported { details: "can't change alignments" });
return 0 as *mut u8
}
let flags = align_to_flags(new_align, new_size);
align: usize,
new_size: usize) -> *mut u8 {
let flags = align_to_flags(align, new_size);
let ptr = rallocx(ptr as *mut c_void, new_size, flags) as *mut u8;
if ptr.is_null() {
let layout = Layout::from_size_align_unchecked(new_size, new_align);
ptr::write(err as *mut AllocErr,
AllocErr::Exhausted { request: layout });
}
ptr
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_alloc_zeroed(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
pub unsafe extern fn __rde_alloc_zeroed(size: usize, align: usize) -> *mut u8 {
let ptr = if align <= MIN_ALIGN && align <= size {
calloc(size as size_t, 1) as *mut u8
} else {
let flags = align_to_flags(align, size) | MALLOCX_ZERO;
mallocx(size as size_t, flags) as *mut u8
};
if ptr.is_null() {
let layout = Layout::from_size_align_unchecked(size, align);
ptr::write(err as *mut AllocErr,
AllocErr::Exhausted { request: layout });
}
ptr
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_alloc_excess(size: usize,
align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let p = __rde_alloc(size, align, err);
if !p.is_null() {
let flags = align_to_flags(align, size);
*excess = nallocx(size, flags) as usize;
}
return p
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_realloc_excess(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let p = __rde_realloc(ptr, old_size, old_align, new_size, new_align, err);
if !p.is_null() {
let flags = align_to_flags(new_align, new_size);
*excess = nallocx(new_size, flags) as usize;
}
p
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_grow_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
__rde_shrink_in_place(ptr, old_size, old_align, new_size, new_align)
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rde_shrink_in_place(ptr: *mut u8,
_old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
if old_align == new_align {
let flags = align_to_flags(new_align, new_size);
(xallocx(ptr as *mut c_void, new_size, 0, flags) == new_size) as u8
} else {
0
}
}
}


@ -10,7 +10,6 @@ test = false
doc = false
[dependencies]
alloc = { path = "../liballoc" }
core = { path = "../libcore" }
libc = { path = "../rustc/libc_shim" }
compiler_builtins = { path = "../rustc/compiler_builtins_shim" }


@ -41,7 +41,8 @@ const MIN_ALIGN: usize = 8;
#[allow(dead_code)]
const MIN_ALIGN: usize = 16;
use core::heap::{Alloc, AllocErr, Layout, Excess, CannotReallocInPlace};
use core::alloc::{Alloc, GlobalAlloc, AllocErr, Layout, Opaque};
use core::ptr::NonNull;
#[unstable(feature = "allocator_api", issue = "32838")]
pub struct System;
@ -49,66 +50,86 @@ pub struct System;
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl Alloc for System {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
(&*self).alloc(layout)
unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc(self, layout)).ok_or(AllocErr)
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout)
-> Result<*mut u8, AllocErr>
{
(&*self).alloc_zeroed(layout)
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc_zeroed(self, layout)).ok_or(AllocErr)
}
#[inline]
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
(&*self).dealloc(ptr, layout)
unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
}
#[inline]
unsafe fn realloc(&mut self,
ptr: *mut u8,
old_layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
(&*self).realloc(ptr, old_layout, new_layout)
}
fn oom(&mut self, err: AllocErr) -> ! {
(&*self).oom(err)
ptr: NonNull<Opaque>,
layout: Layout,
new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
}
#[inline]
fn usable_size(&self, layout: &Layout) -> (usize, usize) {
(&self).usable_size(layout)
fn oom(&mut self) -> ! {
::oom()
}
}
#[cfg(stage0)]
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl<'a> Alloc for &'a System {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc(*self, layout)).ok_or(AllocErr)
}
#[inline]
unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
(&*self).alloc_excess(layout)
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::alloc_zeroed(*self, layout)).ok_or(AllocErr)
}
#[inline]
unsafe fn realloc_excess(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<Excess, AllocErr> {
(&*self).realloc_excess(ptr, layout, new_layout)
unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout) {
GlobalAlloc::dealloc(*self, ptr.as_ptr(), layout)
}
#[inline]
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
(&*self).grow_in_place(ptr, layout, new_layout)
unsafe fn realloc(&mut self,
ptr: NonNull<Opaque>,
layout: Layout,
new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
NonNull::new(GlobalAlloc::realloc(*self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
}
#[inline]
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
(&*self).shrink_in_place(ptr, layout, new_layout)
fn oom(&mut self) -> ! {
::oom()
}
}
#[cfg(any(windows, unix, target_os = "cloudabi", target_os = "redox"))]
mod realloc_fallback {
use core::alloc::{GlobalAlloc, Opaque, Layout};
use core::cmp;
use core::ptr;
impl super::System {
pub(crate) unsafe fn realloc_fallback(&self, ptr: *mut Opaque, old_layout: Layout,
new_size: usize) -> *mut Opaque {
// Docs for GlobalAlloc::realloc require this to be valid:
let new_layout = Layout::from_size_align_unchecked(new_size, old_layout.align());
let new_ptr = GlobalAlloc::alloc(self, new_layout);
if !new_ptr.is_null() {
let size = cmp::min(old_layout.size(), new_size);
ptr::copy_nonoverlapping(ptr as *mut u8, new_ptr as *mut u8, size);
GlobalAlloc::dealloc(self, ptr, old_layout);
}
new_ptr
}
}
}
@ -116,132 +137,62 @@ unsafe impl Alloc for System {
mod platform {
extern crate libc;
use core::cmp;
use core::ptr;
use MIN_ALIGN;
use System;
use core::heap::{Alloc, AllocErr, Layout};
use core::alloc::{GlobalAlloc, Layout, Opaque};
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl<'a> Alloc for &'a System {
unsafe impl GlobalAlloc for System {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
let ptr = if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
libc::malloc(layout.size()) as *mut u8
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
libc::malloc(layout.size()) as *mut Opaque
} else {
#[cfg(target_os = "macos")]
{
if layout.align() > (1 << 31) {
return Err(AllocErr::Unsupported {
details: "requested alignment too large"
})
// FIXME: use Opaque::null_mut
// https://github.com/rust-lang/rust/issues/49659
return 0 as *mut Opaque
}
}
aligned_malloc(&layout)
};
if !ptr.is_null() {
Ok(ptr)
} else {
Err(AllocErr::Exhausted { request: layout })
}
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout)
-> Result<*mut u8, AllocErr>
{
unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
let ptr = libc::calloc(layout.size(), 1) as *mut u8;
if !ptr.is_null() {
Ok(ptr)
} else {
Err(AllocErr::Exhausted { request: layout })
}
libc::calloc(layout.size(), 1) as *mut Opaque
} else {
let ret = self.alloc(layout.clone());
if let Ok(ptr) = ret {
ptr::write_bytes(ptr, 0, layout.size());
let ptr = self.alloc(layout.clone());
if !ptr.is_null() {
ptr::write_bytes(ptr as *mut u8, 0, layout.size());
}
ret
ptr
}
}
#[inline]
unsafe fn dealloc(&mut self, ptr: *mut u8, _layout: Layout) {
unsafe fn dealloc(&self, ptr: *mut Opaque, _layout: Layout) {
libc::free(ptr as *mut libc::c_void)
}
#[inline]
unsafe fn realloc(&mut self,
ptr: *mut u8,
old_layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
if old_layout.align() != new_layout.align() {
return Err(AllocErr::Unsupported {
details: "cannot change alignment on `realloc`",
})
}
if new_layout.align() <= MIN_ALIGN && new_layout.align() <= new_layout.size(){
let ptr = libc::realloc(ptr as *mut libc::c_void, new_layout.size());
if !ptr.is_null() {
Ok(ptr as *mut u8)
} else {
Err(AllocErr::Exhausted { request: new_layout })
}
unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
if layout.align() <= MIN_ALIGN && layout.align() <= new_size {
libc::realloc(ptr as *mut libc::c_void, new_size) as *mut Opaque
} else {
let res = self.alloc(new_layout.clone());
if let Ok(new_ptr) = res {
let size = cmp::min(old_layout.size(), new_layout.size());
ptr::copy_nonoverlapping(ptr, new_ptr, size);
self.dealloc(ptr, old_layout);
}
res
}
}
fn oom(&mut self, err: AllocErr) -> ! {
use core::fmt::{self, Write};
// Print a message to stderr before aborting to assist with
// debugging. It is critical that this code does not allocate any
// memory since we are in an OOM situation. Any errors are ignored
// while printing since there's nothing we can do about them and we
// are about to exit anyways.
drop(writeln!(Stderr, "fatal runtime error: {}", err));
unsafe {
::core::intrinsics::abort();
}
struct Stderr;
impl Write for Stderr {
#[cfg(target_os = "cloudabi")]
fn write_str(&mut self, _: &str) -> fmt::Result {
// CloudABI does not have any reserved file descriptor
// numbers. We should not attempt to write to file
// descriptor #2, as it may be associated with any kind of
// resource.
Ok(())
}
#[cfg(not(target_os = "cloudabi"))]
fn write_str(&mut self, s: &str) -> fmt::Result {
unsafe {
libc::write(libc::STDERR_FILENO,
s.as_ptr() as *const libc::c_void,
s.len());
}
Ok(())
}
self.realloc_fallback(ptr, layout, new_size)
}
}
}
#[cfg(any(target_os = "android", target_os = "redox", target_os = "solaris"))]
#[inline]
unsafe fn aligned_malloc(layout: &Layout) -> *mut u8 {
unsafe fn aligned_malloc(layout: &Layout) -> *mut Opaque {
// On android we currently target API level 9 which unfortunately
// doesn't have the `posix_memalign` API used below. Instead we use
// `memalign`, but this unfortunately has the property on some systems
@ -259,18 +210,19 @@ mod platform {
// [3]: https://bugs.chromium.org/p/chromium/issues/detail?id=138579
// [4]: https://chromium.googlesource.com/chromium/src/base/+/master/
// /memory/aligned_memory.cc
libc::memalign(layout.align(), layout.size()) as *mut u8
libc::memalign(layout.align(), layout.size()) as *mut Opaque
}
#[cfg(not(any(target_os = "android", target_os = "redox", target_os = "solaris")))]
#[inline]
unsafe fn aligned_malloc(layout: &Layout) -> *mut u8 {
unsafe fn aligned_malloc(layout: &Layout) -> *mut Opaque {
let mut out = ptr::null_mut();
let ret = libc::posix_memalign(&mut out, layout.align(), layout.size());
if ret != 0 {
ptr::null_mut()
// FIXME: use Opaque::null_mut https://github.com/rust-lang/rust/issues/49659
0 as *mut Opaque
} else {
out as *mut u8
out as *mut Opaque
}
}
}
@ -278,22 +230,15 @@ mod platform {
#[cfg(windows)]
#[allow(bad_style)]
mod platform {
use core::cmp;
use core::ptr;
use MIN_ALIGN;
use System;
use core::heap::{Alloc, AllocErr, Layout, CannotReallocInPlace};
use core::alloc::{GlobalAlloc, Opaque, Layout};
type LPVOID = *mut u8;
type HANDLE = LPVOID;
type SIZE_T = usize;
type DWORD = u32;
type BOOL = i32;
type LPDWORD = *mut DWORD;
type LPOVERLAPPED = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn GetProcessHeap() -> HANDLE;
@ -301,20 +246,12 @@ mod platform {
fn HeapReAlloc(hHeap: HANDLE, dwFlags: DWORD, lpMem: LPVOID, dwBytes: SIZE_T) -> LPVOID;
fn HeapFree(hHeap: HANDLE, dwFlags: DWORD, lpMem: LPVOID) -> BOOL;
fn GetLastError() -> DWORD;
fn WriteFile(hFile: HANDLE,
lpBuffer: LPVOID,
nNumberOfBytesToWrite: DWORD,
lpNumberOfBytesWritten: LPDWORD,
lpOverlapped: LPOVERLAPPED)
-> BOOL;
fn GetStdHandle(which: DWORD) -> HANDLE;
}
#[repr(C)]
struct Header(*mut u8);
const HEAP_ZERO_MEMORY: DWORD = 0x00000008;
const HEAP_REALLOC_IN_PLACE_ONLY: DWORD = 0x00000010;
unsafe fn get_header<'a>(ptr: *mut u8) -> &'a mut Header {
&mut *(ptr as *mut Header).offset(-1)
@ -327,9 +264,7 @@ mod platform {
}
#[inline]
unsafe fn allocate_with_flags(layout: Layout, flags: DWORD)
-> Result<*mut u8, AllocErr>
{
unsafe fn allocate_with_flags(layout: Layout, flags: DWORD) -> *mut Opaque {
let ptr = if layout.align() <= MIN_ALIGN {
HeapAlloc(GetProcessHeap(), flags, layout.size())
} else {
@ -341,35 +276,29 @@ mod platform {
align_ptr(ptr, layout.align())
}
};
if ptr.is_null() {
Err(AllocErr::Exhausted { request: layout })
} else {
Ok(ptr as *mut u8)
}
ptr as *mut Opaque
}
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl<'a> Alloc for &'a System {
unsafe impl GlobalAlloc for System {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
allocate_with_flags(layout, 0)
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout)
-> Result<*mut u8, AllocErr>
{
unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
allocate_with_flags(layout, HEAP_ZERO_MEMORY)
}
#[inline]
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
if layout.align() <= MIN_ALIGN {
let err = HeapFree(GetProcessHeap(), 0, ptr as LPVOID);
debug_assert!(err != 0, "Failed to free heap memory: {}",
GetLastError());
} else {
let header = get_header(ptr);
let header = get_header(ptr as *mut u8);
let err = HeapFree(GetProcessHeap(), 0, header.0 as LPVOID);
debug_assert!(err != 0, "Failed to free heap memory: {}",
GetLastError());
@ -377,98 +306,11 @@ mod platform {
}
#[inline]
unsafe fn realloc(&mut self,
ptr: *mut u8,
old_layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
if old_layout.align() != new_layout.align() {
return Err(AllocErr::Unsupported {
details: "cannot change alignment on `realloc`",
})
}
if new_layout.align() <= MIN_ALIGN {
let ptr = HeapReAlloc(GetProcessHeap(),
0,
ptr as LPVOID,
new_layout.size());
if !ptr.is_null() {
Ok(ptr as *mut u8)
} else {
Err(AllocErr::Exhausted { request: new_layout })
}
unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
if layout.align() <= MIN_ALIGN {
HeapReAlloc(GetProcessHeap(), 0, ptr as LPVOID, new_size) as *mut Opaque
} else {
let res = self.alloc(new_layout.clone());
if let Ok(new_ptr) = res {
let size = cmp::min(old_layout.size(), new_layout.size());
ptr::copy_nonoverlapping(ptr, new_ptr, size);
self.dealloc(ptr, old_layout);
}
res
}
}
#[inline]
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
self.shrink_in_place(ptr, layout, new_layout)
}
#[inline]
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
old_layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
if old_layout.align() != new_layout.align() {
return Err(CannotReallocInPlace)
}
let new = if new_layout.align() <= MIN_ALIGN {
HeapReAlloc(GetProcessHeap(),
HEAP_REALLOC_IN_PLACE_ONLY,
ptr as LPVOID,
new_layout.size())
} else {
let header = get_header(ptr);
HeapReAlloc(GetProcessHeap(),
HEAP_REALLOC_IN_PLACE_ONLY,
header.0 as LPVOID,
new_layout.size() + new_layout.align())
};
if new.is_null() {
Err(CannotReallocInPlace)
} else {
Ok(())
}
}
fn oom(&mut self, err: AllocErr) -> ! {
use core::fmt::{self, Write};
// Same as with unix we ignore all errors here
drop(writeln!(Stderr, "fatal runtime error: {}", err));
unsafe {
::core::intrinsics::abort();
}
struct Stderr;
impl Write for Stderr {
fn write_str(&mut self, s: &str) -> fmt::Result {
unsafe {
// WriteFile silently fails if it is passed an invalid
// handle, so there is no need to check the result of
// GetStdHandle.
WriteFile(GetStdHandle(STD_ERROR_HANDLE),
s.as_ptr() as LPVOID,
s.len() as DWORD,
ptr::null_mut(),
ptr::null_mut());
}
Ok(())
}
self.realloc_fallback(ptr, layout, new_size)
}
}
}
@ -495,69 +337,92 @@ mod platform {
mod platform {
extern crate dlmalloc;
use core::heap::{Alloc, AllocErr, Layout, Excess, CannotReallocInPlace};
use core::alloc::{GlobalAlloc, Layout, Opaque};
use System;
use self::dlmalloc::GlobalDlmalloc;
// No need for synchronization here as wasm is currently single-threaded
static mut DLMALLOC: dlmalloc::Dlmalloc = dlmalloc::DLMALLOC_INIT;
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl<'a> Alloc for &'a System {
unsafe impl GlobalAlloc for System {
#[inline]
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
GlobalDlmalloc.alloc(layout)
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
DLMALLOC.malloc(layout.size(), layout.align()) as *mut Opaque
}
#[inline]
unsafe fn alloc_zeroed(&mut self, layout: Layout)
-> Result<*mut u8, AllocErr>
{
GlobalDlmalloc.alloc_zeroed(layout)
unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
DLMALLOC.calloc(layout.size(), layout.align()) as *mut Opaque
}
#[inline]
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
GlobalDlmalloc.dealloc(ptr, layout)
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
DLMALLOC.free(ptr as *mut u8, layout.size(), layout.align())
}
#[inline]
unsafe fn realloc(&mut self,
ptr: *mut u8,
old_layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
GlobalDlmalloc.realloc(ptr, old_layout, new_layout)
}
#[inline]
fn usable_size(&self, layout: &Layout) -> (usize, usize) {
GlobalDlmalloc.usable_size(layout)
}
#[inline]
unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
GlobalDlmalloc.alloc_excess(layout)
}
#[inline]
unsafe fn realloc_excess(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<Excess, AllocErr> {
GlobalDlmalloc.realloc_excess(ptr, layout, new_layout)
}
#[inline]
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
GlobalDlmalloc.grow_in_place(ptr, layout, new_layout)
}
#[inline]
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
GlobalDlmalloc.shrink_in_place(ptr, layout, new_layout)
unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
DLMALLOC.realloc(ptr as *mut u8, layout.size(), layout.align(), new_size) as *mut Opaque
}
}
}
#[inline]
fn oom() -> ! {
write_to_stderr("fatal runtime error: memory allocation failed");
unsafe {
::core::intrinsics::abort();
}
}
#[cfg(any(unix, target_os = "redox"))]
#[inline]
fn write_to_stderr(s: &str) {
extern crate libc;
unsafe {
libc::write(libc::STDERR_FILENO,
s.as_ptr() as *const libc::c_void,
s.len());
}
}
#[cfg(windows)]
#[inline]
fn write_to_stderr(s: &str) {
use core::ptr;
type LPVOID = *mut u8;
type HANDLE = LPVOID;
type DWORD = u32;
type BOOL = i32;
type LPDWORD = *mut DWORD;
type LPOVERLAPPED = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn WriteFile(hFile: HANDLE,
lpBuffer: LPVOID,
nNumberOfBytesToWrite: DWORD,
lpNumberOfBytesWritten: LPDWORD,
lpOverlapped: LPOVERLAPPED)
-> BOOL;
fn GetStdHandle(which: DWORD) -> HANDLE;
}
unsafe {
// WriteFile silently fails if it is passed an invalid
// handle, so there is no need to check the result of
// GetStdHandle.
WriteFile(GetStdHandle(STD_ERROR_HANDLE),
s.as_ptr() as LPVOID,
s.len() as DWORD,
ptr::null_mut(),
ptr::null_mut());
}
}
#[cfg(not(any(windows, unix, target_os = "redox")))]
#[inline]
fn write_to_stderr(_: &str) {}


@ -21,10 +21,30 @@ use mem;
use usize;
use ptr::{self, NonNull};
extern {
/// An opaque, unsized type. Used for pointers to allocated memory.
///
/// This type can only be used behind a pointer like `*mut Opaque` or `ptr::NonNull<Opaque>`.
/// Such pointers are similar to C's `void*` type.
pub type Opaque;
}
impl Opaque {
/// Similar to `std::ptr::null`, which requires `T: Sized`.
pub fn null() -> *const Self {
0 as _
}
/// Similar to `std::ptr::null_mut`, which requires `T: Sized`.
pub fn null_mut() -> *mut Self {
0 as _
}
}
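As an aside, here is a minimal sketch (not part of this diff) of the cast dance that `Opaque` forces: since it is an extern type and therefore not `Sized`, `<*mut _>::offset` cannot be called on `*mut Opaque` until the pointer is cast to a sized pointee such as `*mut u8`.

```rust
use core::alloc::Opaque;

// Illustrative only: cast to a sized pointee before pointer arithmetic.
unsafe fn second_byte(ptr: *mut Opaque) -> *mut u8 {
    let bytes = ptr as *mut u8; // make the pointee sized
    bytes.offset(1)             // `offset` is now allowed
}
```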
/// Represents the combination of a starting address and
/// a total capacity of the returned block.
#[derive(Debug)]
pub struct Excess(pub *mut u8, pub usize);
pub struct Excess(pub NonNull<Opaque>, pub usize);
fn size_align<T>() -> (usize, usize) {
(mem::size_of::<T>(), mem::align_of::<T>())
@ -74,9 +94,9 @@ impl Layout {
/// must not overflow (i.e. the rounded value must be less than
/// `usize::MAX`).
#[inline]
pub fn from_size_align(size: usize, align: usize) -> Option<Layout> {
pub fn from_size_align(size: usize, align: usize) -> Result<Self, LayoutErr> {
if !align.is_power_of_two() {
return None;
return Err(LayoutErr { private: () });
}
// (power-of-two implies align != 0.)
@ -94,11 +114,11 @@ impl Layout {
// Above implies that checking for summation overflow is both
// necessary and sufficient.
if size > usize::MAX - (align - 1) {
return None;
return Err(LayoutErr { private: () });
}
unsafe {
Some(Layout::from_size_align_unchecked(size, align))
Ok(Layout::from_size_align_unchecked(size, align))
}
}
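A small sketch of the calling-convention change: failures now carry a `LayoutErr` that `?` can propagate directly, where the old `Option` return forced ad-hoc `ok_or` conversions at each call site.

```rust
use core::alloc::{Layout, LayoutErr};

fn buffer_layout(len: usize) -> Result<Layout, LayoutErr> {
    // Errs if the alignment is not a power of two or the padded size
    // would overflow; previously this returned None.
    Layout::from_size_align(len, 16)
}
```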
@ -110,7 +130,7 @@ impl Layout {
/// a power-of-two nor `size` aligned to `align` fits within the
/// address space (i.e. the `Layout::from_size_align` preconditions).
#[inline]
pub unsafe fn from_size_align_unchecked(size: usize, align: usize) -> Layout {
pub unsafe fn from_size_align_unchecked(size: usize, align: usize) -> Self {
Layout { size: size, align: align }
}
@ -209,15 +229,17 @@ impl Layout {
///
/// On arithmetic overflow, returns `LayoutErr`.
#[inline]
pub fn repeat(&self, n: usize) -> Option<(Self, usize)> {
let padded_size = self.size.checked_add(self.padding_needed_for(self.align))?;
let alloc_size = padded_size.checked_mul(n)?;
pub fn repeat(&self, n: usize) -> Result<(Self, usize), LayoutErr> {
let padded_size = self.size.checked_add(self.padding_needed_for(self.align))
.ok_or(LayoutErr { private: () })?;
let alloc_size = padded_size.checked_mul(n)
.ok_or(LayoutErr { private: () })?;
// We can assume that `self.align` is a power-of-two.
// Furthermore, `alloc_size` has already been rounded up
// to a multiple of `self.align`; therefore, the call to
// `Layout::from_size_align` below should never panic.
Some((Layout::from_size_align(alloc_size, self.align).unwrap(), padded_size))
Ok((Layout::from_size_align(alloc_size, self.align).unwrap(), padded_size))
}
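A worked example (illustrative) of the stride arithmetic above: with an element layout of size 5 and alignment 4, each element is padded to a stride of 8, so `repeat(3)` yields an allocation layout of size 24 and alignment 4, together with the stride.

```rust
use core::alloc::Layout;

fn repeat_demo() {
    let elem = Layout::from_size_align(5, 4).unwrap();
    let (array, stride) = elem.repeat(3).unwrap();
    // padded_size = 5 + 3 = 8; alloc_size = 8 * 3 = 24
    assert_eq!((array.size(), array.align(), stride), (24, 4, 8));
}
```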
/// Creates a layout describing the record for `self` followed by
@ -231,17 +253,19 @@ impl Layout {
/// (assuming that the record itself starts at offset 0).
///
/// On arithmetic overflow, returns `LayoutErr`.
pub fn extend(&self, next: Self) -> Option<(Self, usize)> {
pub fn extend(&self, next: Self) -> Result<(Self, usize), LayoutErr> {
let new_align = cmp::max(self.align, next.align);
let realigned = Layout::from_size_align(self.size, new_align)?;
let pad = realigned.padding_needed_for(next.align);
let offset = self.size.checked_add(pad)?;
let new_size = offset.checked_add(next.size)?;
let offset = self.size.checked_add(pad)
.ok_or(LayoutErr { private: () })?;
let new_size = offset.checked_add(next.size)
.ok_or(LayoutErr { private: () })?;
let layout = Layout::from_size_align(new_size, new_align)?;
Some((layout, offset))
Ok((layout, offset))
}
/// Creates a layout describing the record for `n` instances of
@ -256,8 +280,8 @@ impl Layout {
/// aligned.
///
/// On arithmetic overflow, returns `LayoutErr`.
pub fn repeat_packed(&self, n: usize) -> Option<Self> {
let size = self.size().checked_mul(n)?;
pub fn repeat_packed(&self, n: usize) -> Result<Self, LayoutErr> {
let size = self.size().checked_mul(n).ok_or(LayoutErr { private: () })?;
Layout::from_size_align(size, self.align)
}
@ -276,16 +300,17 @@ impl Layout {
/// `extend`.)
///
/// On arithmetic overflow, returns `LayoutErr`.
pub fn extend_packed(&self, next: Self) -> Option<(Self, usize)> {
let new_size = self.size().checked_add(next.size())?;
pub fn extend_packed(&self, next: Self) -> Result<(Self, usize), LayoutErr> {
let new_size = self.size().checked_add(next.size())
.ok_or(LayoutErr { private: () })?;
let layout = Layout::from_size_align(new_size, self.align)?;
Some((layout, self.size()))
Ok((layout, self.size()))
}
/// Creates a layout describing the record for a `[T; n]`.
///
/// On arithmetic overflow, returns `LayoutErr`.
pub fn array<T>(n: usize) -> Option<Self> {
pub fn array<T>(n: usize) -> Result<Self, LayoutErr> {
Layout::new::<T>()
.repeat(n)
.map(|(k, offs)| {
@ -295,55 +320,31 @@ impl Layout {
}
}
/// The parameters given to `Layout::from_size_align` do not satisfy
/// its documented constraints.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct LayoutErr {
private: ()
}
// (we need this for downstream impl of trait Error)
impl fmt::Display for LayoutErr {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str("invalid parameters to Layout::from_size_align")
}
}
/// The `AllocErr` error specifies whether an allocation failure is
/// specifically due to resource exhaustion or if it is due to
/// something wrong when combining the given input arguments with this
/// allocator.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum AllocErr {
/// Error due to hitting some resource limit or otherwise running
/// out of memory. This condition strongly implies that *some*
/// series of deallocations would allow a subsequent reissuing of
/// the original allocation request to succeed.
Exhausted { request: Layout },
/// Error due to allocator being fundamentally incapable of
/// satisfying the original request. This condition implies that
/// such an allocation request will never succeed on the given
/// allocator, regardless of environment, memory pressure, or
/// other contextual conditions.
///
/// For example, an allocator that does not support requests for
/// large memory blocks might return this error variant.
Unsupported { details: &'static str },
}
impl AllocErr {
#[inline]
pub fn invalid_input(details: &'static str) -> Self {
AllocErr::Unsupported { details: details }
}
#[inline]
pub fn is_memory_exhausted(&self) -> bool {
if let AllocErr::Exhausted { .. } = *self { true } else { false }
}
#[inline]
pub fn is_request_unsupported(&self) -> bool {
if let AllocErr::Unsupported { .. } = *self { true } else { false }
}
#[inline]
pub fn description(&self) -> &str {
match *self {
AllocErr::Exhausted { .. } => "allocator memory exhausted",
AllocErr::Unsupported { .. } => "unsupported allocator request",
}
}
}
pub struct AllocErr;
// (we need this for downstream impl of trait Error)
impl fmt::Display for AllocErr {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.description())
f.write_str("memory allocation failed")
}
}
@ -374,13 +375,85 @@ pub enum CollectionAllocErr {
/// (usually `isize::MAX` bytes).
CapacityOverflow,
/// Error due to the allocator (see the `AllocErr` type's docs).
AllocErr(AllocErr),
AllocErr,
}
#[unstable(feature = "try_reserve", reason = "new API", issue="48043")]
impl From<AllocErr> for CollectionAllocErr {
fn from(err: AllocErr) -> Self {
CollectionAllocErr::AllocErr(err)
fn from(AllocErr: AllocErr) -> Self {
CollectionAllocErr::AllocErr
}
}
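A sketch of why the unit-struct `From` impl matters: `?` now lifts allocator failures into `CollectionAllocErr` with no payload to thread through. (The names below assume the `Global` and `Alloc` items from elsewhere in this diff are in scope.)

```rust
fn try_grow(n: usize) -> Result<NonNull<Opaque>, CollectionAllocErr> {
    // Layout errors map to CapacityOverflow, mirroring the collection code
    // later in this diff; AllocErr converts via the `From` impl above.
    // Assumes n > 0 so the layout has a nonzero size.
    let layout = Layout::array::<u64>(n).map_err(|_| CollectionAllocErr::CapacityOverflow)?;
    unsafe { Ok(Global.alloc(layout)?) }
}
```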
/// A memory allocator that can be registered as the one backing `std::alloc::Global`
/// through the `#[global_allocator]` attribute.
pub unsafe trait GlobalAlloc {
/// Allocate memory as described by the given `layout`.
///
/// Returns a pointer to newly-allocated memory,
/// or NULL to indicate allocation failure.
///
/// # Safety
///
/// **FIXME:** what are the exact requirements?
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque;
/// Deallocate the block of memory at the given `ptr` pointer with the given `layout`.
///
/// # Safety
///
/// **FIXME:** what are the exact requirements?
/// In particular around layout *fit*. (See docs for the `Alloc` trait.)
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout);
unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
let size = layout.size();
let ptr = self.alloc(layout);
if !ptr.is_null() {
ptr::write_bytes(ptr as *mut u8, 0, size);
}
ptr
}
/// Shrink or grow a block of memory to the given `new_size`.
/// The block is described by the given `ptr` pointer and `layout`.
///
/// Returns a new pointer (which may or may not be the same as `ptr`),
/// or NULL to indicate reallocation failure.
///
/// If reallocation is successful, the old `ptr` pointer is considered
/// to have been deallocated.
///
/// # Safety
///
/// `new_size`, when rounded up to the nearest multiple of `layout.align()`,
/// must not overflow (i.e. the rounded value must be less than `usize::MAX`).
///
/// **FIXME:** what are the exact requirements?
/// In particular around layout *fit*. (See docs for the `Alloc` trait.)
unsafe fn realloc(&self, ptr: *mut Opaque, layout: Layout, new_size: usize) -> *mut Opaque {
let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
let new_ptr = self.alloc(new_layout);
if !new_ptr.is_null() {
ptr::copy_nonoverlapping(
ptr as *const u8,
new_ptr as *mut u8,
cmp::min(layout.size(), new_size),
);
self.dealloc(ptr, layout);
}
new_ptr
}
/// Aborts the thread or process, optionally performing
/// cleanup or logging diagnostic information first.
///
/// `oom` is meant to be used by clients unable to cope with an
/// unsatisfied allocation request, and wish to abandon
/// computation rather than attempt to recover locally.
fn oom(&self) -> ! {
unsafe { ::intrinsics::abort() }
}
}
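For illustration, a minimal sketch of implementing this trait on top of `malloc`/`free` and registering the result. This assumes a crate that can depend on `libc`, and it deliberately ignores over-aligned requests, which a real implementation must handle (as the platform impls earlier in this diff do); the `Malloc` type is hypothetical.

```rust
extern crate libc;
use core::alloc::{GlobalAlloc, Layout, Opaque};

struct Malloc;

unsafe impl GlobalAlloc for Malloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        // Under this trait, NULL (not Err) signals allocation failure.
        // Real impls must also honor layout.align() above the platform minimum.
        libc::malloc(layout.size()) as *mut Opaque
    }

    unsafe fn dealloc(&self, ptr: *mut Opaque, _layout: Layout) {
        libc::free(ptr as *mut libc::c_void)
    }
}

// Registering it routes `Global` (and hence std's containers) through it.
#[global_allocator]
static A: Malloc = Malloc;
```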
@ -515,7 +588,7 @@ pub unsafe trait Alloc {
/// Clients wishing to abort computation in response to an
/// allocation error are encouraged to call the allocator's `oom`
/// method, rather than directly invoking `panic!` or similar.
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr>;
/// Deallocate the memory referenced by `ptr`.
///
@ -532,7 +605,7 @@ pub unsafe trait Alloc {
/// * In addition to fitting the block of memory `layout`, the
/// alignment of the `layout` must match the alignment used
/// to allocate that block of memory.
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
unsafe fn dealloc(&mut self, ptr: NonNull<Opaque>, layout: Layout);
/// Allocator-specific method for signaling an out-of-memory
/// condition.
@ -542,12 +615,8 @@ pub unsafe trait Alloc {
/// aborting.
///
/// `oom` is meant to be used by clients unable to cope with an
/// unsatisfied allocation request (signaled by an error such as
/// `AllocErr::Exhausted`), and wish to abandon computation rather
/// than attempt to recover locally. Such clients should pass the
/// signaling error value back into `oom`, where the allocator
/// may incorporate that error value into its diagnostic report
/// before aborting.
/// unsatisfied allocation request, and wish to abandon
/// computation rather than attempt to recover locally.
///
/// Implementations of the `oom` method are discouraged from
/// infinitely regressing in nested calls to `oom`. In
@ -560,7 +629,7 @@ pub unsafe trait Alloc {
/// instead they should return an appropriate error from the
/// invoked method, and let the client decide whether to invoke
/// this `oom` method in response.
fn oom(&mut self, _: AllocErr) -> ! {
fn oom(&mut self) -> ! {
unsafe { ::intrinsics::abort() }
}
@ -602,9 +671,10 @@ pub unsafe trait Alloc {
// realloc. alloc_excess, realloc_excess
/// Returns a pointer suitable for holding data described by
/// `new_layout`, meeting its size and alignment guarantees. To
/// a new layout with `layout`'s alignment and a size given
/// by `new_size`. To
/// accomplish this, this may extend or shrink the allocation
/// referenced by `ptr` to fit `new_layout`.
/// referenced by `ptr` to fit the new layout.
///
/// If this returns `Ok`, then ownership of the memory block
/// referenced by `ptr` has been transferred to this
@ -617,12 +687,6 @@ pub unsafe trait Alloc {
/// block has not been transferred to this allocator, and the
/// contents of the memory block are unaltered.
///
/// For best results, `new_layout` should not impose a different
/// alignment constraint than `layout`. (In other words,
/// `new_layout.align()` should equal `layout.align()`.) However,
/// behavior is well-defined (though underspecified) when this
/// constraint is violated; further discussion below.
///
/// # Safety
///
/// This function is unsafe because undefined behavior can result
@ -630,12 +694,13 @@ pub unsafe trait Alloc {
///
/// * `ptr` must be currently allocated via this allocator,
///
/// * `layout` must *fit* the `ptr` (see above). (The `new_layout`
/// * `layout` must *fit* the `ptr` (see above). (The `new_size`
/// argument need not fit it.)
///
/// * `new_layout` must have size greater than zero.
/// * `new_size` must be greater than zero.
///
/// * the alignment of `new_layout` is non-zero.
/// * `new_size`, when rounded up to the nearest multiple of `layout.align()`,
/// must not overflow (i.e. the rounded value must be less than `usize::MAX`).
///
/// (Extension subtraits might provide more specific bounds on
/// behavior, e.g. guarantee a sentinel address or a null pointer
@ -643,18 +708,11 @@ pub unsafe trait Alloc {
///
/// # Errors
///
/// Returns `Err` only if `new_layout` does not match the
/// alignment of `layout`, or does not meet the allocator's size
/// Returns `Err` only if the new layout
/// does not meet the allocator's size
/// and alignment constraints, or if reallocation
/// otherwise fails.
///
/// (Note the previous sentence did not say "if and only if" -- in
/// particular, an implementation of this method *can* return `Ok`
/// if `new_layout.align() != old_layout.align()`; or it can
/// return `Err` in that scenario, depending on whether this
/// allocator can dynamically adjust the alignment constraint for
/// the block.)
///
/// Implementations are encouraged to return `Err` on memory
/// exhaustion rather than panicking or aborting, but this is not
/// a strict requirement. (Specifically: it is *legal* to
@ -665,27 +723,28 @@ pub unsafe trait Alloc {
/// reallocation error are encouraged to call the allocator's `oom`
/// method, rather than directly invoking `panic!` or similar.
unsafe fn realloc(&mut self,
ptr: *mut u8,
ptr: NonNull<Opaque>,
layout: Layout,
new_layout: Layout) -> Result<*mut u8, AllocErr> {
let new_size = new_layout.size();
new_size: usize) -> Result<NonNull<Opaque>, AllocErr> {
let old_size = layout.size();
let aligns_match = layout.align == new_layout.align;
if new_size >= old_size && aligns_match {
if let Ok(()) = self.grow_in_place(ptr, layout.clone(), new_layout.clone()) {
if new_size >= old_size {
if let Ok(()) = self.grow_in_place(ptr, layout.clone(), new_size) {
return Ok(ptr);
}
} else if new_size < old_size && aligns_match {
if let Ok(()) = self.shrink_in_place(ptr, layout.clone(), new_layout.clone()) {
} else if new_size < old_size {
if let Ok(()) = self.shrink_in_place(ptr, layout.clone(), new_size) {
return Ok(ptr);
}
}
// otherwise, fall back on alloc + copy + dealloc.
let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
let result = self.alloc(new_layout);
if let Ok(new_ptr) = result {
ptr::copy_nonoverlapping(ptr as *const u8, new_ptr, cmp::min(old_size, new_size));
ptr::copy_nonoverlapping(ptr.as_ptr() as *const u8,
new_ptr.as_ptr() as *mut u8,
cmp::min(old_size, new_size));
self.dealloc(ptr, layout);
}
result
@ -707,11 +766,11 @@ pub unsafe trait Alloc {
/// Clients wishing to abort computation in response to an
/// allocation error are encouraged to call the allocator's `oom`
/// method, rather than directly invoking `panic!` or similar.
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<Opaque>, AllocErr> {
let size = layout.size();
let p = self.alloc(layout);
if let Ok(p) = p {
ptr::write_bytes(p, 0, size);
ptr::write_bytes(p.as_ptr() as *mut u8, 0, size);
}
p
}
@ -756,19 +815,21 @@ pub unsafe trait Alloc {
/// reallocation error are encouraged to call the allocator's `oom`
/// method, rather than directly invoking `panic!` or similar.
unsafe fn realloc_excess(&mut self,
ptr: *mut u8,
ptr: NonNull<Opaque>,
layout: Layout,
new_layout: Layout) -> Result<Excess, AllocErr> {
new_size: usize) -> Result<Excess, AllocErr> {
let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
let usable_size = self.usable_size(&new_layout);
self.realloc(ptr, layout, new_layout)
self.realloc(ptr, layout, new_size)
.map(|p| Excess(p, usable_size.1))
}
/// Attempts to extend the allocation referenced by `ptr` to fit `new_layout`.
/// Attempts to extend the allocation referenced by `ptr` to fit `new_size`.
///
/// If this returns `Ok`, then the allocator has asserted that the
/// memory block referenced by `ptr` now fits `new_layout`, and thus can
/// be used to carry data of that layout. (The allocator is allowed to
/// memory block referenced by `ptr` now fits `new_size`, and thus can
/// be used to carry data of a layout of that size and the same alignment as
/// `layout`. (The allocator is allowed to
/// expend effort to accomplish this, such as extending the memory block to
/// include successor blocks, or virtual memory tricks.)
///
@ -784,11 +845,9 @@ pub unsafe trait Alloc {
/// * `ptr` must be currently allocated via this allocator,
///
/// * `layout` must *fit* the `ptr` (see above); note the
/// `new_layout` argument need not fit it,
/// `new_size` argument need not fit it,
///
/// * `new_layout.size()` must not be less than `layout.size()`,
///
/// * `new_layout.align()` must equal `layout.align()`.
/// * `new_size` must not be less than `layout.size()`,
///
/// # Errors
///
@ -801,26 +860,25 @@ pub unsafe trait Alloc {
/// `grow_in_place` failures without aborting, or to fall back on
/// another reallocation method before resorting to an abort.
unsafe fn grow_in_place(&mut self,
ptr: *mut u8,
ptr: NonNull<Opaque>,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
new_size: usize) -> Result<(), CannotReallocInPlace> {
let _ = ptr; // this default implementation doesn't care about the actual address.
debug_assert!(new_layout.size >= layout.size);
debug_assert!(new_layout.align == layout.align);
debug_assert!(new_size >= layout.size);
let (_l, u) = self.usable_size(&layout);
// _l <= layout.size() [guaranteed by usable_size()]
// layout.size() <= new_layout.size() [required by this method]
if new_layout.size <= u {
if new_size <= u {
return Ok(());
} else {
return Err(CannotReallocInPlace);
}
}
/// Attempts to shrink the allocation referenced by `ptr` to fit `new_layout`.
/// Attempts to shrink the allocation referenced by `ptr` to fit `new_size`.
///
/// If this returns `Ok`, then the allocator has asserted that the
/// memory block referenced by `ptr` now fits `new_layout`, and
/// memory block referenced by `ptr` now fits `new_size`, and
/// thus can only be used to carry data of that smaller
/// layout. (The allocator is allowed to take advantage of this,
/// carving off portions of the block for reuse elsewhere.) The
@ -841,13 +899,11 @@ pub unsafe trait Alloc {
/// * `ptr` must be currently allocated via this allocator,
///
/// * `layout` must *fit* the `ptr` (see above); note the
/// `new_layout` argument need not fit it,
/// `new_size` argument need not fit it,
///
/// * `new_layout.size()` must not be greater than `layout.size()`
/// * `new_size` must not be greater than `layout.size()`
/// (and must be greater than zero),
///
/// * `new_layout.align()` must equal `layout.align()`.
///
/// # Errors
///
/// Returns `Err(CannotReallocInPlace)` when the allocator is
@ -859,16 +915,15 @@ pub unsafe trait Alloc {
/// `shrink_in_place` failures without aborting, or to fall back
/// on another reallocation method before resorting to an abort.
unsafe fn shrink_in_place(&mut self,
ptr: *mut u8,
ptr: NonNull<Opaque>,
layout: Layout,
new_layout: Layout) -> Result<(), CannotReallocInPlace> {
new_size: usize) -> Result<(), CannotReallocInPlace> {
let _ = ptr; // this default implementation doesn't care about the actual address.
debug_assert!(new_layout.size <= layout.size);
debug_assert!(new_layout.align == layout.align);
debug_assert!(new_size <= layout.size);
let (l, _u) = self.usable_size(&layout);
// layout.size() <= _u [guaranteed by usable_size()]
// new_layout.size() <= layout.size() [required by this method]
if l <= new_layout.size {
if l <= new_size {
return Ok(());
} else {
return Err(CannotReallocInPlace);
@ -911,9 +966,9 @@ pub unsafe trait Alloc {
{
let k = Layout::new::<T>();
if k.size() > 0 {
unsafe { self.alloc(k).map(|p| NonNull::new_unchecked(p as *mut T)) }
unsafe { self.alloc(k).map(|p| p.cast()) }
} else {
Err(AllocErr::invalid_input("zero-sized type invalid for alloc_one"))
Err(AllocErr)
}
}
@ -937,10 +992,9 @@ pub unsafe trait Alloc {
unsafe fn dealloc_one<T>(&mut self, ptr: NonNull<T>)
where Self: Sized
{
let raw_ptr = ptr.as_ptr() as *mut u8;
let k = Layout::new::<T>();
if k.size() > 0 {
self.dealloc(raw_ptr, k);
self.dealloc(ptr.as_opaque(), k);
}
}
@ -978,15 +1032,12 @@ pub unsafe trait Alloc {
where Self: Sized
{
match Layout::array::<T>(n) {
Some(ref layout) if layout.size() > 0 => {
Ok(ref layout) if layout.size() > 0 => {
unsafe {
self.alloc(layout.clone())
.map(|p| {
NonNull::new_unchecked(p as *mut T)
})
self.alloc(layout.clone()).map(|p| p.cast())
}
}
_ => Err(AllocErr::invalid_input("invalid layout for alloc_array")),
_ => Err(AllocErr),
}
}
@ -1028,13 +1079,13 @@ pub unsafe trait Alloc {
n_new: usize) -> Result<NonNull<T>, AllocErr>
where Self: Sized
{
match (Layout::array::<T>(n_old), Layout::array::<T>(n_new), ptr.as_ptr()) {
(Some(ref k_old), Some(ref k_new), ptr) if k_old.size() > 0 && k_new.size() > 0 => {
self.realloc(ptr as *mut u8, k_old.clone(), k_new.clone())
.map(|p| NonNull::new_unchecked(p as *mut T))
match (Layout::array::<T>(n_old), Layout::array::<T>(n_new)) {
(Ok(ref k_old), Ok(ref k_new)) if k_old.size() > 0 && k_new.size() > 0 => {
debug_assert!(k_old.align() == k_new.align());
self.realloc(ptr.as_opaque(), k_old.clone(), k_new.size()).map(NonNull::cast)
}
_ => {
Err(AllocErr::invalid_input("invalid layout for realloc_array"))
Err(AllocErr)
}
}
}
@ -1062,13 +1113,12 @@ pub unsafe trait Alloc {
unsafe fn dealloc_array<T>(&mut self, ptr: NonNull<T>, n: usize) -> Result<(), AllocErr>
where Self: Sized
{
let raw_ptr = ptr.as_ptr() as *mut u8;
match Layout::array::<T>(n) {
Some(ref k) if k.size() > 0 => {
Ok(self.dealloc(raw_ptr, k.clone()))
Ok(ref k) if k.size() > 0 => {
Ok(self.dealloc(ptr.as_opaque(), k.clone()))
}
_ => {
Err(AllocErr::invalid_input("invalid layout for dealloc_array"))
Err(AllocErr)
}
}
}
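Taken together, the typed helpers now read like this to a caller — a sketch under the new signatures, with `Alloc` and `AllocErr` as defined above and a generic allocator standing in for `Global`:

```rust
unsafe fn array_round_trip<A: Alloc>(a: &mut A) -> Result<(), AllocErr> {
    let p = a.alloc_array::<u32>(16)?;   // NonNull<u32>
    let p = a.realloc_array(p, 16, 32)?; // same alignment, new length
    a.dealloc_array(p, 32)               // Result<(), AllocErr>
}
```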


@ -75,6 +75,7 @@
#![feature(custom_attribute)]
#![feature(doc_cfg)]
#![feature(doc_spotlight)]
#![feature(extern_types)]
#![feature(fn_must_use)]
#![feature(fundamental)]
#![feature(intrinsics)]
@ -184,7 +185,14 @@ pub mod unicode;
/* Heap memory allocator trait */
#[allow(missing_docs)]
pub mod heap;
pub mod alloc;
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
/// Use the `alloc` module instead.
pub mod heap {
pub use alloc::*;
}
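A sketch of the deprecation shim's effect: during the rename's grace period both paths name the same items, and only the old path warns.

```rust
#[allow(deprecated)]
use core::heap::Layout as ViaOldPath;
use core::alloc::Layout as ViaNewPath;

// Both aliases refer to the same type; the re-export keeps old code compiling.
fn same(l: ViaOldPath) -> ViaNewPath { l }
```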
// note: does not need to be public
mod iter_private;


@ -2750,6 +2750,14 @@ impl<T: ?Sized> NonNull<T> {
NonNull::new_unchecked(self.as_ptr() as *mut U)
}
}
/// Cast to an `Opaque` pointer
#[unstable(feature = "allocator_api", issue = "32838")]
pub fn as_opaque(self) -> NonNull<::alloc::Opaque> {
unsafe {
NonNull::new_unchecked(self.as_ptr() as _)
}
}
}
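A sketch of the intended use (assuming the `Global`, `Alloc`, and `Layout` items from this diff are in scope): deallocation paths erase the typed pointer with `as_opaque` instead of casting through `*mut u8`.

```rust
unsafe fn dealloc_one_val<T>(ptr: NonNull<T>) {
    // Assumes T is not zero-sized; zero-size layouts must not reach dealloc.
    Global.dealloc(ptr.as_opaque(), Layout::new::<T>());
}
```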
#[stable(feature = "nonnull", since = "1.25.0")]


@ -11,7 +11,7 @@
use rustc::middle::allocator::AllocatorKind;
use rustc_errors;
use syntax::abi::Abi;
use syntax::ast::{Crate, Attribute, LitKind, StrStyle, ExprKind};
use syntax::ast::{Crate, Attribute, LitKind, StrStyle};
use syntax::ast::{Unsafety, Constness, Generics, Mutability, Ty, Mac, Arg};
use syntax::ast::{self, Ident, Item, ItemKind, TyKind, VisibilityKind, Expr};
use syntax::attr;
@ -88,7 +88,7 @@ impl<'a> Folder for ExpandAllocatorDirectives<'a> {
span,
kind: AllocatorKind::Global,
global: item.ident,
alloc: Ident::from_str("alloc"),
core: Ident::from_str("core"),
cx: ExtCtxt::new(self.sess, ecfg, self.resolver),
};
let super_path = f.cx.path(f.span, vec![
@ -96,7 +96,7 @@ impl<'a> Folder for ExpandAllocatorDirectives<'a> {
f.global,
]);
let mut items = vec![
f.cx.item_extern_crate(f.span, f.alloc),
f.cx.item_extern_crate(f.span, f.core),
f.cx.item_use_simple(
f.span,
respan(f.span.shrink_to_lo(), VisibilityKind::Inherited),
@ -126,7 +126,7 @@ struct AllocFnFactory<'a> {
span: Span,
kind: AllocatorKind,
global: Ident,
alloc: Ident,
core: Ident,
cx: ExtCtxt<'a>,
}
@ -143,8 +143,7 @@ impl<'a> AllocFnFactory<'a> {
self.arg_ty(ty, &mut abi_args, mk)
}).collect();
let result = self.call_allocator(method.name, args);
let (output_ty, output_expr) =
self.ret_ty(&method.output, &mut abi_args, mk, result);
let (output_ty, output_expr) = self.ret_ty(&method.output, result);
let kind = ItemKind::Fn(self.cx.fn_decl(abi_args, ast::FunctionRetTy::Ty(output_ty)),
Unsafety::Unsafe,
dummy_spanned(Constness::NotConst),
@ -159,16 +158,15 @@ impl<'a> AllocFnFactory<'a> {
fn call_allocator(&self, method: &str, mut args: Vec<P<Expr>>) -> P<Expr> {
let method = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
Ident::from_str("Alloc"),
self.core,
Ident::from_str("alloc"),
Ident::from_str("GlobalAlloc"),
Ident::from_str(method),
]);
let method = self.cx.expr_path(method);
let allocator = self.cx.path_ident(self.span, self.global);
let allocator = self.cx.expr_path(allocator);
let allocator = self.cx.expr_addr_of(self.span, allocator);
let allocator = self.cx.expr_mut_addr_of(self.span, allocator);
args.insert(0, allocator);
self.cx.expr_call(self.span, method, args)
@ -205,8 +203,8 @@ impl<'a> AllocFnFactory<'a> {
args.push(self.cx.arg(self.span, align, ty_usize));
let layout_new = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
self.core,
Ident::from_str("alloc"),
Ident::from_str("Layout"),
Ident::from_str("from_size_align_unchecked"),
]);
@ -219,240 +217,38 @@ impl<'a> AllocFnFactory<'a> {
layout
}
AllocatorTy::LayoutRef => {
let ident = ident();
args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
// Convert our `arg: *const u8` via:
//
// &*(arg as *const Layout)
let expr = self.cx.expr_ident(self.span, ident);
let expr = self.cx.expr_cast(self.span, expr, self.layout_ptr());
let expr = self.cx.expr_deref(self.span, expr);
self.cx.expr_addr_of(self.span, expr)
}
AllocatorTy::AllocErr => {
// We're creating:
//
// (*(arg as *const AllocErr)).clone()
let ident = ident();
args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
let expr = self.cx.expr_ident(self.span, ident);
let expr = self.cx.expr_cast(self.span, expr, self.alloc_err_ptr());
let expr = self.cx.expr_deref(self.span, expr);
self.cx.expr_method_call(
self.span,
expr,
Ident::from_str("clone"),
Vec::new()
)
}
AllocatorTy::Ptr => {
let ident = ident();
args.push(self.cx.arg(self.span, ident, self.ptr_u8()));
let arg = self.cx.expr_ident(self.span, ident);
self.cx.expr_cast(self.span, arg, self.ptr_opaque())
}
AllocatorTy::Usize => {
let ident = ident();
args.push(self.cx.arg(self.span, ident, self.usize()));
self.cx.expr_ident(self.span, ident)
}
AllocatorTy::ResultPtr |
AllocatorTy::ResultExcess |
AllocatorTy::ResultUnit |
AllocatorTy::Bang |
AllocatorTy::UsizePair |
AllocatorTy::Unit => {
panic!("can't convert AllocatorTy to an argument")
}
}
}
fn ret_ty(&self,
ty: &AllocatorTy,
args: &mut Vec<Arg>,
ident: &mut FnMut() -> Ident,
expr: P<Expr>) -> (P<Ty>, P<Expr>)
{
fn ret_ty(&self, ty: &AllocatorTy, expr: P<Expr>) -> (P<Ty>, P<Expr>) {
match *ty {
AllocatorTy::UsizePair => {
// We're creating:
//
// let arg = #expr;
// *min = arg.0;
// *max = arg.1;
let min = ident();
let max = ident();
args.push(self.cx.arg(self.span, min, self.ptr_usize()));
args.push(self.cx.arg(self.span, max, self.ptr_usize()));
let ident = ident();
let stmt = self.cx.stmt_let(self.span, false, ident, expr);
let min = self.cx.expr_ident(self.span, min);
let max = self.cx.expr_ident(self.span, max);
let layout = self.cx.expr_ident(self.span, ident);
let assign_min = self.cx.expr(self.span, ExprKind::Assign(
self.cx.expr_deref(self.span, min),
self.cx.expr_tup_field_access(self.span, layout.clone(), 0),
));
let assign_min = self.cx.stmt_semi(assign_min);
let assign_max = self.cx.expr(self.span, ExprKind::Assign(
self.cx.expr_deref(self.span, max),
self.cx.expr_tup_field_access(self.span, layout.clone(), 1),
));
let assign_max = self.cx.stmt_semi(assign_max);
let stmts = vec![stmt, assign_min, assign_max];
let block = self.cx.block(self.span, stmts);
let ty_unit = self.cx.ty(self.span, TyKind::Tup(Vec::new()));
(ty_unit, self.cx.expr_block(block))
}
AllocatorTy::ResultExcess => {
// We're creating:
//
// match #expr {
// Ok(ptr) => {
// *excess = ptr.1;
// ptr.0
// }
// Err(e) => {
// ptr::write(err_ptr, e);
// 0 as *mut u8
// }
// }
let excess_ptr = ident();
args.push(self.cx.arg(self.span, excess_ptr, self.ptr_usize()));
let excess_ptr = self.cx.expr_ident(self.span, excess_ptr);
let err_ptr = ident();
args.push(self.cx.arg(self.span, err_ptr, self.ptr_u8()));
let err_ptr = self.cx.expr_ident(self.span, err_ptr);
let err_ptr = self.cx.expr_cast(self.span,
err_ptr,
self.alloc_err_ptr());
let name = ident();
let ok_expr = {
let ptr = self.cx.expr_ident(self.span, name);
let write = self.cx.expr(self.span, ExprKind::Assign(
self.cx.expr_deref(self.span, excess_ptr),
self.cx.expr_tup_field_access(self.span, ptr.clone(), 1),
));
let write = self.cx.stmt_semi(write);
let ret = self.cx.expr_tup_field_access(self.span,
ptr.clone(),
0);
let ret = self.cx.stmt_expr(ret);
let block = self.cx.block(self.span, vec![write, ret]);
self.cx.expr_block(block)
};
let pat = self.cx.pat_ident(self.span, name);
let ok = self.cx.path_ident(self.span, Ident::from_str("Ok"));
let ok = self.cx.pat_tuple_struct(self.span, ok, vec![pat]);
let ok = self.cx.arm(self.span, vec![ok], ok_expr);
let name = ident();
let err_expr = {
let err = self.cx.expr_ident(self.span, name);
let write = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
Ident::from_str("__core"),
Ident::from_str("ptr"),
Ident::from_str("write"),
]);
let write = self.cx.expr_path(write);
let write = self.cx.expr_call(self.span, write,
vec![err_ptr, err]);
let write = self.cx.stmt_semi(write);
let null = self.cx.expr_usize(self.span, 0);
let null = self.cx.expr_cast(self.span, null, self.ptr_u8());
let null = self.cx.stmt_expr(null);
let block = self.cx.block(self.span, vec![write, null]);
self.cx.expr_block(block)
};
let pat = self.cx.pat_ident(self.span, name);
let err = self.cx.path_ident(self.span, Ident::from_str("Err"));
let err = self.cx.pat_tuple_struct(self.span, err, vec![pat]);
let err = self.cx.arm(self.span, vec![err], err_expr);
let expr = self.cx.expr_match(self.span, expr, vec![ok, err]);
(self.ptr_u8(), expr)
}
AllocatorTy::ResultPtr => {
// We're creating:
//
// match #expr {
// Ok(ptr) => ptr,
// Err(e) => {
// ptr::write(err_ptr, e);
// 0 as *mut u8
// }
// }
// #expr as *mut u8
let err_ptr = ident();
args.push(self.cx.arg(self.span, err_ptr, self.ptr_u8()));
let err_ptr = self.cx.expr_ident(self.span, err_ptr);
let err_ptr = self.cx.expr_cast(self.span,
err_ptr,
self.alloc_err_ptr());
let name = ident();
let ok_expr = self.cx.expr_ident(self.span, name);
let pat = self.cx.pat_ident(self.span, name);
let ok = self.cx.path_ident(self.span, Ident::from_str("Ok"));
let ok = self.cx.pat_tuple_struct(self.span, ok, vec![pat]);
let ok = self.cx.arm(self.span, vec![ok], ok_expr);
let name = ident();
let err_expr = {
let err = self.cx.expr_ident(self.span, name);
let write = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
Ident::from_str("__core"),
Ident::from_str("ptr"),
Ident::from_str("write"),
]);
let write = self.cx.expr_path(write);
let write = self.cx.expr_call(self.span, write,
vec![err_ptr, err]);
let write = self.cx.stmt_semi(write);
let null = self.cx.expr_usize(self.span, 0);
let null = self.cx.expr_cast(self.span, null, self.ptr_u8());
let null = self.cx.stmt_expr(null);
let block = self.cx.block(self.span, vec![write, null]);
self.cx.expr_block(block)
};
let pat = self.cx.pat_ident(self.span, name);
let err = self.cx.path_ident(self.span, Ident::from_str("Err"));
let err = self.cx.pat_tuple_struct(self.span, err, vec![pat]);
let err = self.cx.arm(self.span, vec![err], err_expr);
let expr = self.cx.expr_match(self.span, expr, vec![ok, err]);
let expr = self.cx.expr_cast(self.span, expr, self.ptr_u8());
(self.ptr_u8(), expr)
}
AllocatorTy::ResultUnit => {
// We're creating:
//
// #expr.is_ok() as u8
let cast = self.cx.expr_method_call(
self.span,
expr,
Ident::from_str("is_ok"),
Vec::new()
);
let u8 = self.cx.path_ident(self.span, Ident::from_str("u8"));
let u8 = self.cx.ty_path(u8);
let cast = self.cx.expr_cast(self.span, cast, u8.clone());
(u8, cast)
}
AllocatorTy::Bang => {
(self.cx.ty(self.span, TyKind::Never), expr)
}
@ -461,44 +257,32 @@ impl<'a> AllocFnFactory<'a> {
(self.cx.ty(self.span, TyKind::Tup(Vec::new())), expr)
}
AllocatorTy::AllocErr |
AllocatorTy::Layout |
AllocatorTy::LayoutRef |
AllocatorTy::Usize |
AllocatorTy::Ptr => {
panic!("can't convert AllocatorTy to an output")
}
}
}
fn usize(&self) -> P<Ty> {
let usize = self.cx.path_ident(self.span, Ident::from_str("usize"));
self.cx.ty_path(usize)
}
fn ptr_u8(&self) -> P<Ty> {
let u8 = self.cx.path_ident(self.span, Ident::from_str("u8"));
let ty_u8 = self.cx.ty_path(u8);
self.cx.ty_ptr(self.span, ty_u8, Mutability::Mutable)
}
fn ptr_usize(&self) -> P<Ty> {
let usize = self.cx.path_ident(self.span, Ident::from_str("usize"));
let ty_usize = self.cx.ty_path(usize);
self.cx.ty_ptr(self.span, ty_usize, Mutability::Mutable)
}
fn layout_ptr(&self) -> P<Ty> {
let layout = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
Ident::from_str("Layout"),
fn ptr_opaque(&self) -> P<Ty> {
let opaque = self.cx.path(self.span, vec![
self.core,
Ident::from_str("alloc"),
Ident::from_str("Opaque"),
]);
let layout = self.cx.ty_path(layout);
self.cx.ty_ptr(self.span, layout, Mutability::Mutable)
}
fn alloc_err_ptr(&self) -> P<Ty> {
let err = self.cx.path(self.span, vec![
self.alloc,
Ident::from_str("heap"),
Ident::from_str("AllocErr"),
]);
let err = self.cx.ty_path(err);
self.cx.ty_ptr(self.span, err, Mutability::Mutable)
let ty_opaque = self.cx.ty_path(opaque);
self.cx.ty_ptr(self.span, ty_opaque, Mutability::Mutable)
}
}


@ -25,7 +25,7 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
},
AllocatorMethod {
name: "oom",
inputs: &[AllocatorTy::AllocErr],
inputs: &[],
output: AllocatorTy::Bang,
},
AllocatorMethod {
@ -33,14 +33,9 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout],
output: AllocatorTy::Unit,
},
AllocatorMethod {
name: "usable_size",
inputs: &[AllocatorTy::LayoutRef],
output: AllocatorTy::UsizePair,
},
AllocatorMethod {
name: "realloc",
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Usize],
output: AllocatorTy::ResultPtr,
},
AllocatorMethod {
@ -48,26 +43,6 @@ pub static ALLOCATOR_METHODS: &[AllocatorMethod] = &[
inputs: &[AllocatorTy::Layout],
output: AllocatorTy::ResultPtr,
},
AllocatorMethod {
name: "alloc_excess",
inputs: &[AllocatorTy::Layout],
output: AllocatorTy::ResultExcess,
},
AllocatorMethod {
name: "realloc_excess",
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
output: AllocatorTy::ResultExcess,
},
AllocatorMethod {
name: "grow_in_place",
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
output: AllocatorTy::ResultUnit,
},
AllocatorMethod {
name: "shrink_in_place",
inputs: &[AllocatorTy::Ptr, AllocatorTy::Layout, AllocatorTy::Layout],
output: AllocatorTy::ResultUnit,
},
];
pub struct AllocatorMethod {
@ -77,14 +52,10 @@ pub struct AllocatorMethod {
}
pub enum AllocatorTy {
AllocErr,
Bang,
Layout,
LayoutRef,
Ptr,
ResultExcess,
ResultPtr,
ResultUnit,
Unit,
UsizePair,
Usize,
}


@ -30,7 +30,6 @@ pub(crate) unsafe fn trans(tcx: TyCtxt, mods: &ModuleLlvm, kind: AllocatorKind)
};
let i8 = llvm::LLVMInt8TypeInContext(llcx);
let i8p = llvm::LLVMPointerType(i8, 0);
let usizep = llvm::LLVMPointerType(usize, 0);
let void = llvm::LLVMVoidTypeInContext(llcx);
for method in ALLOCATOR_METHODS {
@ -41,40 +40,21 @@ pub(crate) unsafe fn trans(tcx: TyCtxt, mods: &ModuleLlvm, kind: AllocatorKind)
args.push(usize); // size
args.push(usize); // align
}
AllocatorTy::LayoutRef => args.push(i8p),
AllocatorTy::Ptr => args.push(i8p),
AllocatorTy::AllocErr => args.push(i8p),
AllocatorTy::Usize => args.push(usize),
AllocatorTy::Bang |
AllocatorTy::ResultExcess |
AllocatorTy::ResultPtr |
AllocatorTy::ResultUnit |
AllocatorTy::UsizePair |
AllocatorTy::Unit => panic!("invalid allocator arg"),
}
}
let output = match method.output {
AllocatorTy::UsizePair => {
args.push(usizep); // min
args.push(usizep); // max
None
}
AllocatorTy::Bang => None,
AllocatorTy::ResultExcess => {
args.push(i8p); // excess_ptr
args.push(i8p); // err_ptr
Some(i8p)
}
AllocatorTy::ResultPtr => {
args.push(i8p); // err_ptr
Some(i8p)
}
AllocatorTy::ResultUnit => Some(i8),
AllocatorTy::ResultPtr => Some(i8p),
AllocatorTy::Unit => None,
AllocatorTy::AllocErr |
AllocatorTy::Layout |
AllocatorTy::LayoutRef |
AllocatorTy::Usize |
AllocatorTy::Ptr => panic!("invalid allocator output"),
};
let ty = llvm::LLVMFunctionType(output.unwrap_or(void),

src/libstd/alloc.rs (new file)

@ -0,0 +1,121 @@
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! dox
#![unstable(issue = "32838", feature = "allocator_api")]
#[doc(inline)] #[allow(deprecated)] pub use alloc_crate::alloc::Heap;
#[doc(inline)] pub use alloc_crate::alloc::Global;
#[doc(inline)] pub use alloc_system::System;
#[doc(inline)] pub use core::alloc::*;
#[cfg(not(test))]
#[doc(hidden)]
#[allow(unused_attributes)]
pub mod __default_lib_allocator {
use super::{System, Layout, GlobalAlloc, Opaque};
// for symbol names src/librustc/middle/allocator.rs
// for signatures src/librustc_allocator/lib.rs
// linkage directives are provided as part of the current compiler allocator
// ABI
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc(size: usize, align: usize) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
System.alloc(layout) as *mut u8
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_oom() -> ! {
System.oom()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_dealloc(ptr: *mut u8,
size: usize,
align: usize) {
System.dealloc(ptr as *mut Opaque, Layout::from_size_align_unchecked(size, align))
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc(ptr: *mut u8,
old_size: usize,
align: usize,
new_size: usize) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, align);
System.realloc(ptr as *mut Opaque, old_layout, new_size) as *mut u8
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_zeroed(size: usize, align: usize) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
System.alloc_zeroed(layout) as *mut u8
}
#[cfg(stage0)]
pub mod stage0 {
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_usable_size(_layout: *const u8,
_min: *mut usize,
_max: *mut usize) {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_excess(_size: usize,
_align: usize,
_excess: *mut usize,
_err: *mut u8) -> *mut u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc_excess(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize,
_excess: *mut usize,
_err: *mut u8) -> *mut u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_grow_in_place(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize) -> u8 {
unimplemented!()
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_shrink_in_place(_ptr: *mut u8,
_old_size: usize,
_old_align: usize,
_new_size: usize,
_new_align: usize) -> u8 {
unimplemented!()
}
}
}
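End to end, the new `std::alloc` module makes the following possible — a sketch assuming this branch's re-exports, where `System` implements both traits so its `GlobalAlloc` methods are callable directly:

```rust
use std::alloc::{GlobalAlloc, Layout, System};

fn main() {
    unsafe {
        let layout = Layout::from_size_align(64, 8).unwrap();
        let p = System.alloc(layout); // *mut Opaque; NULL on failure
        if !p.is_null() {
            System.dealloc(p, layout);
        }
    }
}
```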


@ -11,10 +11,8 @@
use self::Entry::*;
use self::VacantEntryState::*;
use alloc::heap::Heap;
use alloc::allocator::CollectionAllocErr;
use alloc::{Global, Alloc, CollectionAllocErr};
use cell::Cell;
use core::heap::Alloc;
use borrow::Borrow;
use cmp::max;
use fmt::{self, Debug};
@ -786,7 +784,7 @@ impl<K, V, S> HashMap<K, V, S>
pub fn reserve(&mut self, additional: usize) {
match self.try_reserve(additional) {
Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
Err(CollectionAllocErr::AllocErr) => Global.oom(),
Ok(()) => { /* yay */ }
}
}
@ -3636,7 +3634,7 @@ mod test_map {
if let Err(CapacityOverflow) = empty_bytes.try_reserve(max_no_ovf) {
} else { panic!("isize::MAX + 1 should trigger a CapacityOverflow!") }
} else {
if let Err(AllocErr(_)) = empty_bytes.try_reserve(max_no_ovf) {
if let Err(AllocErr) = empty_bytes.try_reserve(max_no_ovf) {
} else { panic!("isize::MAX + 1 should trigger an OOM!") }
}
}


@ -8,9 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::heap::Heap;
use core::heap::{Alloc, Layout};
use alloc::{Global, Alloc, Layout, CollectionAllocErr};
use cmp;
use hash::{BuildHasher, Hash, Hasher};
use marker;
@ -18,7 +16,6 @@ use mem::{align_of, size_of, needs_drop};
use mem;
use ops::{Deref, DerefMut};
use ptr::{self, Unique, NonNull};
use alloc::allocator::CollectionAllocErr;
use self::BucketState::*;
@ -757,15 +754,13 @@ impl<K, V> RawTable<K, V> {
return Err(CollectionAllocErr::CapacityOverflow);
}
let buffer = Heap.alloc(Layout::from_size_align(size, alignment)
.ok_or(CollectionAllocErr::CapacityOverflow)?)?;
let hashes = buffer as *mut HashUint;
let buffer = Global.alloc(Layout::from_size_align(size, alignment)
.map_err(|_| CollectionAllocErr::CapacityOverflow)?)?;
Ok(RawTable {
capacity_mask: capacity.wrapping_sub(1),
size: 0,
hashes: TaggedHashUintPtr::new(hashes),
hashes: TaggedHashUintPtr::new(buffer.cast().as_ptr()),
marker: marker::PhantomData,
})
}
@ -775,7 +770,7 @@ impl<K, V> RawTable<K, V> {
unsafe fn new_uninitialized(capacity: usize) -> RawTable<K, V> {
match Self::try_new_uninitialized(capacity) {
Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
Err(CollectionAllocErr::AllocErr) => Global.oom(),
Ok(table) => { table }
}
}
@ -814,7 +809,7 @@ impl<K, V> RawTable<K, V> {
pub fn new(capacity: usize) -> RawTable<K, V> {
match Self::try_new(capacity) {
Err(CollectionAllocErr::CapacityOverflow) => panic!("capacity overflow"),
Err(CollectionAllocErr::AllocErr(e)) => Heap.oom(e),
Err(CollectionAllocErr::AllocErr) => Global.oom(),
Ok(table) => { table }
}
}
@ -1188,8 +1183,8 @@ unsafe impl<#[may_dangle] K, #[may_dangle] V> Drop for RawTable<K, V> {
debug_assert!(!oflo, "should be impossible");
unsafe {
Heap.dealloc(self.hashes.ptr() as *mut u8,
Layout::from_size_align(size, align).unwrap());
Global.dealloc(NonNull::new_unchecked(self.hashes.ptr()).as_opaque(),
Layout::from_size_align(size, align).unwrap());
// Remember how everything was allocated out of one buffer
// during initialization? We only need one call to free here.
}
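The pattern above recurs throughout the collections: `Layout::from_size_align` now returns a `Result`, `Alloc::alloc` hands back `NonNull<Opaque>`, and raw pointers travel back through `NonNull::new_unchecked(..).as_opaque()` for deallocation. A condensed sketch of the round trip (helper names are ours):

```rust
#![feature(allocator_api, nonnull_cast)]

use std::alloc::{Alloc, Global, Layout};
use std::ptr::NonNull;

unsafe fn alloc_words(len: usize) -> NonNull<u64> {
    // Result<Layout, LayoutErr> now, instead of Option<Layout>.
    let layout = Layout::from_size_align(len * 8, 8).expect("bad layout");
    match Global.alloc(layout) {
        Ok(ptr) => ptr.cast(),  // NonNull<Opaque> -> NonNull<u64>
        Err(_) => Global.oom(), // AllocErr is a zero-size struct
    }
}

unsafe fn free_words(ptr: NonNull<u64>, len: usize) {
    let layout = Layout::from_size_align(len * 8, 8).expect("bad layout");
    Global.dealloc(ptr.as_opaque(), layout); // back to NonNull<Opaque>
}
```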

View File

@ -424,13 +424,13 @@
#[doc(hidden)]
pub use ops::Bound;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::{BinaryHeap, BTreeMap, BTreeSet};
pub use alloc_crate::{BinaryHeap, BTreeMap, BTreeSet};
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::{LinkedList, VecDeque};
pub use alloc_crate::{LinkedList, VecDeque};
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::{binary_heap, btree_map, btree_set};
pub use alloc_crate::{binary_heap, btree_map, btree_set};
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::{linked_list, vec_deque};
pub use alloc_crate::{linked_list, vec_deque};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::hash_map::HashMap;
@ -446,7 +446,7 @@ pub mod range {
}
#[unstable(feature = "try_reserve", reason = "new API", issue="48043")]
pub use alloc::allocator::CollectionAllocErr;
pub use heap::CollectionAllocErr;
mod hash;

View File

@ -51,13 +51,13 @@
// coherence challenge (e.g., specialization, neg impls, etc) we can
// reconsider what crate these items belong in.
use alloc::allocator;
use any::TypeId;
use borrow::Cow;
use cell;
use char;
use core::array;
use fmt::{self, Debug, Display};
use heap::{AllocErr, LayoutErr, CannotReallocInPlace};
use mem::transmute;
use num;
use str;
@ -241,18 +241,27 @@ impl Error for ! {
#[unstable(feature = "allocator_api",
reason = "the precise API and guarantees it provides may be tweaked.",
issue = "32838")]
impl Error for allocator::AllocErr {
impl Error for AllocErr {
fn description(&self) -> &str {
allocator::AllocErr::description(self)
"memory allocation failed"
}
}
#[unstable(feature = "allocator_api",
reason = "the precise API and guarantees it provides may be tweaked.",
issue = "32838")]
impl Error for allocator::CannotReallocInPlace {
impl Error for LayoutErr {
fn description(&self) -> &str {
allocator::CannotReallocInPlace::description(self)
"invalid parameters to Layout::from_size_align"
}
}
#[unstable(feature = "allocator_api",
reason = "the precise API and guarantees it provides may be tweaked.",
issue = "32838")]
impl Error for CannotReallocInPlace {
fn description(&self) -> &str {
CannotReallocInPlace::description(self)
}
}
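With the former `allocator` module gone, the error types implement `Error` directly with fixed description strings. A quick sketch of observing the new `LayoutErr` (module path per this PR's rename):

```rust
#![feature(allocator_api)]

use std::alloc::Layout;
use std::error::Error;

fn main() {
    // Alignment must be a power of two, so this now yields Err(LayoutErr)
    // where it used to yield None.
    let err = Layout::from_size_align(16, 3).unwrap_err();
    assert_eq!(err.description(), "invalid parameters to Layout::from_size_align");
}
```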

View File

@ -1,176 +0,0 @@
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! dox
#![unstable(issue = "32838", feature = "allocator_api")]
pub use alloc::heap::Heap;
pub use alloc_system::System;
pub use core::heap::*;
#[cfg(not(test))]
#[doc(hidden)]
#[allow(unused_attributes)]
pub mod __default_lib_allocator {
use super::{System, Layout, Alloc, AllocErr};
use ptr;
// for symbol names src/librustc/middle/allocator.rs
// for signatures src/librustc_allocator/lib.rs
// linkage directives are provided as part of the current compiler allocator
// ABI
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc(layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_oom(err: *const u8) -> ! {
System.oom((*(err as *const AllocErr)).clone())
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_dealloc(ptr: *mut u8,
size: usize,
align: usize) {
System.dealloc(ptr, Layout::from_size_align_unchecked(size, align))
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_usable_size(layout: *const u8,
min: *mut usize,
max: *mut usize) {
let pair = System.usable_size(&*(layout as *const Layout));
*min = pair.0;
*max = pair.1;
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
err: *mut u8) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.realloc(ptr, old_layout, new_layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_zeroed(size: usize,
align: usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc_zeroed(layout) {
Ok(p) => p,
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_alloc_excess(size: usize,
align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let layout = Layout::from_size_align_unchecked(size, align);
match System.alloc_excess(layout) {
Ok(p) => {
*excess = p.1;
p.0
}
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_realloc_excess(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize,
excess: *mut usize,
err: *mut u8) -> *mut u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.realloc_excess(ptr, old_layout, new_layout) {
Ok(p) => {
*excess = p.1;
p.0
}
Err(e) => {
ptr::write(err as *mut AllocErr, e);
0 as *mut u8
}
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_grow_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.grow_in_place(ptr, old_layout, new_layout) {
Ok(()) => 1,
Err(_) => 0,
}
}
#[no_mangle]
#[rustc_std_internal_symbol]
pub unsafe extern fn __rdl_shrink_in_place(ptr: *mut u8,
old_size: usize,
old_align: usize,
new_size: usize,
new_align: usize) -> u8 {
let old_layout = Layout::from_size_align_unchecked(old_size, old_align);
let new_layout = Layout::from_size_align_unchecked(new_size, new_align);
match System.shrink_in_place(ptr, old_layout, new_layout) {
Ok(()) => 1,
Err(_) => 0,
}
}
}
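The deleted module above shows why the old ABI was awkward: every fallible entry point threaded an `err: *mut u8` out-pointer just to smuggle an `AllocErr` payload back to the caller. Under `GlobalAlloc`, a null return is itself the error report, so the equivalent shim shrinks to something like the following (a hedged sketch, not the literal replacement code; the function name and attribute set are illustrative):

```rust
#![feature(allocator_api)]

use std::alloc::{GlobalAlloc, Layout, System};

// Sketch: no out-pointer parameter, no AllocErr payload to write.
pub unsafe extern fn rdl_alloc_sketch(size: usize, align: usize) -> *mut u8 {
    let layout = Layout::from_size_align_unchecked(size, align);
    System.alloc(layout) as *mut u8 // null on failure
}

fn main() {}
```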

View File

@ -275,6 +275,7 @@
#![feature(macro_reexport)]
#![feature(macro_vis_matcher)]
#![feature(needs_panic_runtime)]
#![feature(nonnull_cast)]
#![feature(exhaustive_patterns)]
#![feature(nonzero)]
#![feature(num_bits_bytes)]
@ -351,7 +352,7 @@ extern crate core as __core;
#[macro_use]
#[macro_reexport(vec, format)]
extern crate alloc;
extern crate alloc as alloc_crate;
extern crate alloc_system;
#[doc(masked)]
extern crate libc;
@ -437,21 +438,21 @@ pub use core::u32;
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::u64;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::boxed;
pub use alloc_crate::boxed;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::rc;
pub use alloc_crate::rc;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::borrow;
pub use alloc_crate::borrow;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::fmt;
pub use alloc_crate::fmt;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::slice;
pub use alloc_crate::slice;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::str;
pub use alloc_crate::str;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::string;
pub use alloc_crate::string;
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::vec;
pub use alloc_crate::vec;
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::char;
#[stable(feature = "i128", since = "1.26.0")]
@ -477,7 +478,14 @@ pub mod path;
pub mod process;
pub mod sync;
pub mod time;
pub mod heap;
pub mod alloc;
#[unstable(feature = "allocator_api", issue = "32838")]
#[rustc_deprecated(since = "1.27.0", reason = "module renamed to `alloc`")]
/// Use the `alloc` module instead.
pub mod heap {
pub use alloc::*;
}
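This deprecated wrapper module is how the renames in this PR keep old paths compiling: the old name becomes a `pub use` of the new one plus a deprecation attribute. Outside the standard library the same pattern looks like this (a sketch; `#[deprecated]` stands in for the internal `#[rustc_deprecated]`):

```rust
pub mod new_api {
    pub fn run() {}
}

#[deprecated(since = "1.27.0", note = "module renamed to `new_api`")]
pub mod old_api {
    pub use super::new_api::*;
}

fn main() {
    // Still compiles, but emits a deprecation warning at the use site.
    old_api::run();
}
```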
// Platform-abstraction modules
#[macro_use]

View File

@ -18,7 +18,7 @@
#![stable(feature = "rust1", since = "1.0.0")]
#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc::arc::{Arc, Weak};
pub use alloc_crate::arc::{Arc, Weak};
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::sync::atomic;

View File

@ -23,10 +23,9 @@
pub use self::PopResult::*;
use alloc::boxed::Box;
use core::ptr;
use core::cell::UnsafeCell;
use boxed::Box;
use sync::atomic::{AtomicPtr, Ordering};
/// A result of the `pop` function.

View File

@ -16,7 +16,7 @@
// http://www.1024cores.net/home/lock-free-algorithms/queues/unbounded-spsc-queue
use alloc::boxed::Box;
use boxed::Box;
use core::ptr;
use core::cell::UnsafeCell;

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use cmp;
use ffi::CStr;
use io;

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use ffi::CStr;
use io;
use mem;

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use cmp;
use ffi::CStr;
use io;

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use ffi::CStr;
use io;
use sys::{unsupported, Void};

View File

@ -31,7 +31,7 @@ use sys::stdio;
use sys::cvt;
use sys_common::{AsInner, FromInner, IntoInner};
use sys_common::process::{CommandEnv, EnvKey};
use alloc::borrow::Borrow;
use borrow::Borrow;
////////////////////////////////////////////////////////////////////////////////
// Command

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use io;
use ffi::CStr;
use mem;

View File

@ -12,7 +12,7 @@
//!
//! Documentation can be found on the `rt::at_exit` function.
use alloc::boxed::FnBox;
use boxed::FnBox;
use ptr;
use sys_common::mutex::Mutex;

View File

@ -14,7 +14,7 @@
use ffi::{OsStr, OsString};
use env;
use collections::BTreeMap;
use alloc::borrow::Borrow;
use borrow::Borrow;
pub trait EnvKey:
From<OsString> + Into<OsString> +

View File

@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use alloc::boxed::FnBox;
use boxed::FnBox;
use env;
use sync::atomic::{self, Ordering};
use sys::stack_overflow;

@ -1 +1 @@
Subproject commit 6ceaaa4b0176a200e4bbd347d6a991ab6c776ede
Subproject commit 7243155b1c3da0a980c868a87adebf00e0b33989

View File

@ -12,4 +12,3 @@ doc = false
[dependencies]
core = { path = "../../libcore" }
compiler_builtins = { path = "../../rustc/compiler_builtins_shim" }
alloc = { path = "../../liballoc" }

View File

@ -1,4 +1,4 @@
# If this file is modified, then llvm will be (optionally) cleaned and then rebuilt.
# The actual contents of this file do not matter, but to trigger a change on the
# build bots then the contents should be changed so git updates the mtime.
2018-03-10
2018-04-05

View File

@ -12,15 +12,10 @@
#[global_allocator]
static A: usize = 0;
//~^ the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~| the trait bound `&usize:
//~^ the trait bound `usize:
//~| the trait bound `usize:
//~| the trait bound `usize:
//~| the trait bound `usize:
//~| the trait bound `usize:
fn main() {}

View File

@ -11,16 +11,16 @@
#![feature(allocator_api)]
#![crate_type = "rlib"]
use std::heap::*;
use std::alloc::*;
pub struct A;
unsafe impl<'a> Alloc for &'a A {
unsafe fn alloc(&mut self, _: Layout) -> Result<*mut u8, AllocErr> {
unsafe impl GlobalAlloc for A {
unsafe fn alloc(&self, _: Layout) -> *mut Opaque {
loop {}
}
unsafe fn dealloc(&mut self, _ptr: *mut u8, _: Layout) {
unsafe fn dealloc(&self, _ptr: *mut Opaque, _: Layout) {
loop {}
}
}

View File

@ -14,8 +14,8 @@ use std::heap::{Heap, Alloc};
fn main() {
unsafe {
let ptr = Heap.alloc_one::<i32>().unwrap_or_else(|e| {
Heap.oom(e)
let ptr = Heap.alloc_one::<i32>().unwrap_or_else(|_| {
Heap.oom()
});
*ptr.as_ptr() = 4;
assert_eq!(*ptr.as_ptr(), 4);

View File

@ -13,18 +13,18 @@
#![feature(heap_api, allocator_api)]
#![crate_type = "rlib"]
use std::heap::{Alloc, System, AllocErr, Layout};
use std::heap::{GlobalAlloc, System, Layout, Opaque};
use std::sync::atomic::{AtomicUsize, Ordering};
pub struct A(pub AtomicUsize);
unsafe impl<'a> Alloc for &'a A {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe impl GlobalAlloc for A {
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
self.0.fetch_add(1, Ordering::SeqCst);
System.alloc(layout)
}
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
self.0.fetch_add(1, Ordering::SeqCst);
System.dealloc(ptr, layout)
}

View File

@ -15,20 +15,20 @@
extern crate helper;
use std::heap::{Heap, Alloc, System, Layout, AllocErr};
use std::alloc::{self, Global, Alloc, System, Layout, Opaque};
use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};
static HITS: AtomicUsize = ATOMIC_USIZE_INIT;
struct A;
unsafe impl<'a> Alloc for &'a A {
unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
unsafe impl alloc::GlobalAlloc for A {
unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
HITS.fetch_add(1, Ordering::SeqCst);
System.alloc(layout)
}
unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
HITS.fetch_add(1, Ordering::SeqCst);
System.dealloc(ptr, layout)
}
@ -45,10 +45,10 @@ fn main() {
unsafe {
let layout = Layout::from_size_align(4, 2).unwrap();
let ptr = Heap.alloc(layout.clone()).unwrap();
let ptr = Global.alloc(layout.clone()).unwrap();
helper::work_with(&ptr);
assert_eq!(HITS.load(Ordering::SeqCst), n + 1);
Heap.dealloc(ptr, layout.clone());
Global.dealloc(ptr, layout.clone());
assert_eq!(HITS.load(Ordering::SeqCst), n + 2);
let s = String::with_capacity(10);

View File

@ -17,7 +17,7 @@
extern crate custom;
extern crate helper;
use std::heap::{Heap, Alloc, System, Layout};
use std::alloc::{Global, Alloc, System, Layout};
use std::sync::atomic::{Ordering, ATOMIC_USIZE_INIT};
#[global_allocator]
@ -28,10 +28,10 @@ fn main() {
let n = GLOBAL.0.load(Ordering::SeqCst);
let layout = Layout::from_size_align(4, 2).unwrap();
let ptr = Heap.alloc(layout.clone()).unwrap();
let ptr = Global.alloc(layout.clone()).unwrap();
helper::work_with(&ptr);
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 1);
Heap.dealloc(ptr, layout.clone());
Global.dealloc(ptr, layout.clone());
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 2);
let ptr = System.alloc(layout.clone()).unwrap();

View File

@ -19,7 +19,7 @@ extern crate custom;
extern crate custom_as_global;
extern crate helper;
use std::heap::{Heap, Alloc, System, Layout};
use std::alloc::{Global, Alloc, GlobalAlloc, System, Layout};
use std::sync::atomic::{Ordering, ATOMIC_USIZE_INIT};
static GLOBAL: custom::A = custom::A(ATOMIC_USIZE_INIT);
@ -30,25 +30,25 @@ fn main() {
let layout = Layout::from_size_align(4, 2).unwrap();
// Global allocator routes to the `custom_as_global` global
let ptr = Heap.alloc(layout.clone()).unwrap();
let ptr = Global.alloc(layout.clone());
helper::work_with(&ptr);
assert_eq!(custom_as_global::get(), n + 1);
Heap.dealloc(ptr, layout.clone());
Global.dealloc(ptr, layout.clone());
assert_eq!(custom_as_global::get(), n + 2);
// Usage of the system allocator avoids all globals
let ptr = System.alloc(layout.clone()).unwrap();
let ptr = System.alloc(layout.clone());
helper::work_with(&ptr);
assert_eq!(custom_as_global::get(), n + 2);
System.dealloc(ptr, layout.clone());
assert_eq!(custom_as_global::get(), n + 2);
// Usage of our personal allocator doesn't affect other instances
let ptr = (&GLOBAL).alloc(layout.clone()).unwrap();
let ptr = GLOBAL.alloc(layout.clone());
helper::work_with(&ptr);
assert_eq!(custom_as_global::get(), n + 2);
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), 1);
(&GLOBAL).dealloc(ptr, layout);
GLOBAL.dealloc(ptr, layout);
assert_eq!(custom_as_global::get(), n + 2);
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), 2);
}
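With both traits implemented for `System`, calls like `System.alloc(layout)` above lost their `.unwrap()`: in our reading, method probing tries the `&self` receiver before `&mut self`, so `GlobalAlloc::alloc` is selected even with both traits in scope. When the choice needs to be explicit, UFCS spells it out; a sketch contrasting the two signatures:

```rust
#![feature(allocator_api)]

use std::alloc::{Alloc, GlobalAlloc, Layout, System};

fn main() {
    unsafe {
        let layout = Layout::from_size_align(4, 2).unwrap();

        // Alloc: Result<NonNull<Opaque>, AllocErr>, &mut self receiver.
        let a = Alloc::alloc(&mut System, layout.clone()).unwrap();
        Alloc::dealloc(&mut System, a, layout.clone());

        // GlobalAlloc: bare *mut Opaque, null means failure, &self receiver.
        let g = GlobalAlloc::alloc(&System, layout.clone());
        assert!(!(g as *mut u8).is_null());
        GlobalAlloc::dealloc(&System, g, layout);
    }
}
```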

View File

@ -13,10 +13,10 @@
// Ideally this would be revised to use no_std, but for now it serves
// well enough to reproduce (and illustrate) the bug from #16687.
#![feature(heap_api, allocator_api)]
#![feature(heap_api, allocator_api, nonnull_cast)]
use std::heap::{Heap, Alloc, Layout};
use std::ptr;
use std::alloc::{Global, Alloc, Layout};
use std::ptr::{self, NonNull};
fn main() {
unsafe {
@ -50,13 +50,13 @@ unsafe fn test_triangle() -> bool {
println!("allocate({:?})", layout);
}
let ret = Heap.alloc(layout.clone()).unwrap_or_else(|e| Heap.oom(e));
let ret = Global.alloc(layout.clone()).unwrap_or_else(|_| Global.oom());
if PRINT {
println!("allocate({:?}) = {:?}", layout, ret);
}
ret
ret.cast().as_ptr()
}
unsafe fn deallocate(ptr: *mut u8, layout: Layout) {
@ -64,7 +64,7 @@ unsafe fn test_triangle() -> bool {
println!("deallocate({:?}, {:?}", ptr, layout);
}
Heap.dealloc(ptr, layout);
Global.dealloc(NonNull::new_unchecked(ptr).as_opaque(), layout);
}
unsafe fn reallocate(ptr: *mut u8, old: Layout, new: Layout) -> *mut u8 {
@ -72,14 +72,14 @@ unsafe fn test_triangle() -> bool {
println!("reallocate({:?}, old={:?}, new={:?})", ptr, old, new);
}
let ret = Heap.realloc(ptr, old.clone(), new.clone())
.unwrap_or_else(|e| Heap.oom(e));
let ret = Global.realloc(NonNull::new_unchecked(ptr).as_opaque(), old.clone(), new.size())
.unwrap_or_else(|_| Global.oom());
if PRINT {
println!("reallocate({:?}, old={:?}, new={:?}) = {:?}",
ptr, old, new, ret);
}
ret
ret.cast().as_ptr()
}
fn idx_to_size(i: usize) -> usize { (i+1) * 10 }
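Note that the `realloc` call above now passes `new.size()` rather than a whole new `Layout`: the allocation keeps its original alignment, and only the size may change. A sketch of the new shape (the `regrow` helper is ours):

```rust
#![feature(allocator_api, nonnull_cast)]

use std::alloc::{Alloc, Global, Layout};
use std::ptr::NonNull;

unsafe fn regrow(ptr: NonNull<u8>, old: Layout, new_size: usize) -> NonNull<u8> {
    // Same alignment as `old`; only the size changes.
    Global.realloc(ptr.as_opaque(), old, new_size)
        .unwrap_or_else(|_| Global.oom())
        .cast()
}

fn main() {
    unsafe {
        let layout = Layout::from_size_align(16, 8).unwrap();
        let p = Global.alloc(layout.clone()).unwrap_or_else(|_| Global.oom());
        let bigger = regrow(p.cast(), layout, 32);
        Global.dealloc(bigger.as_opaque(), Layout::from_size_align(32, 8).unwrap());
    }
}
```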

View File

@ -13,6 +13,7 @@
#![feature(allocator_api)]
use std::heap::{Alloc, Heap, Layout};
use std::ptr::NonNull;
struct arena(());
@ -32,8 +33,8 @@ struct Ccx {
fn alloc<'a>(_bcx : &'a arena) -> &'a Bcx<'a> {
unsafe {
let ptr = Heap.alloc(Layout::new::<Bcx>())
.unwrap_or_else(|e| Heap.oom(e));
&*(ptr as *const _)
.unwrap_or_else(|_| Heap.oom());
&*(ptr.as_ptr() as *const _)
}
}
@ -45,7 +46,7 @@ fn g(fcx : &Fcx) {
let bcx = Bcx { fcx: fcx };
let bcx2 = h(&bcx);
unsafe {
Heap.dealloc(bcx2 as *const _ as *mut _, Layout::new::<Bcx>());
Heap.dealloc(NonNull::new_unchecked(bcx2 as *const _ as *mut _), Layout::new::<Bcx>());
}
}