
Commit 8b32d04

Authored by qinsoon, wks, and bgamari

Add alloc_with_options (#1218)
This PR adds an alternative allocation function, `alloc_with_options`. The attached `AllocationOptions` changes the allocation behavior, for example by avoiding GC or allowing overcommit. Our current `alloc` functions assume they are GC safepoints and will trigger GC internally, but a runtime may have different assumptions. GHC has `allocateMightFail` (https://gitlab.haskell.org/ghc/ghc/-/blob/90746a591919fc51a0ec9dec58d8f1c8397040e3/rts/sm/Storage.c?page=2#L1089), and Julia assumes perm alloc will not trigger a GC (mmtk/mmtk-julia#172). Having a variant of `alloc` that is not a GC safepoint could be generally useful.

Co-authored-by: Kunshan Wang <wks1986@gmail.com>
Co-authored-by: Ben Gamari <ben@smart-cactus.org>
1 parent df7a1f1 · commit 8b32d04
18 files changed: +544 −61 lines
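The behavioral difference the PR introduces can be sketched with a toy model. Everything below is an illustrative stand-in, not MMTk's real API; only the general shape (an allocation call taking an options value, failure reported as a zero address) follows the PR:

```rust
// Toy model of `alloc` vs `alloc_with_options`: the default path treats the
// call site as a GC safepoint, while the new option reports failure instead.
#[derive(Clone, Copy)]
enum OnFail {
    TriggerGc,  // default `alloc`: failure would trigger GC / block the thread
    ReturnZero, // new option: report failure with a zero address, no GC
}

#[derive(Clone, Copy)]
struct AllocationOptions {
    on_fail: OnFail,
}

struct ToyHeap {
    remaining: usize,
}

impl ToyHeap {
    fn alloc_with_options(&mut self, size: usize, options: AllocationOptions) -> usize {
        if size > self.remaining {
            match options.on_fail {
                OnFail::TriggerGc => panic!("would call block_for_gc / out_of_memory here"),
                OnFail::ReturnZero => return 0, // a successful alloc never returns zero
            }
        }
        let addr = 0x1000_0000 + self.remaining; // fake non-zero address
        self.remaining -= size;
        addr
    }
}

fn main() {
    let mut heap = ToyHeap { remaining: 64 };
    let opts = AllocationOptions { on_fail: OnFail::ReturnZero };
    assert_ne!(heap.alloc_with_options(64, opts), 0); // fits
    assert_eq!(heap.alloc_with_options(1, opts), 0); // exhausted: zero, no GC
    println!("ok");
}
```

A binding that opts out of GC at a call site takes on the obligation to check the returned address for zero, which the default `alloc` callers never need to do.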

src/memory_manager.rs (87 additions, 19 deletions)
@@ -17,8 +17,9 @@ use crate::plan::AllocationSemantics;
 use crate::plan::{Mutator, MutatorContext};
 use crate::scheduler::WorkBucketStage;
 use crate::scheduler::{GCWork, GCWorker};
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::alloc::allocators::AllocatorSelector;
-use crate::util::constants::{LOG_BYTES_IN_PAGE, MIN_OBJECT_SIZE};
+use crate::util::constants::LOG_BYTES_IN_PAGE;
 use crate::util::heap::layout::vm_layout::vm_layout;
 use crate::util::opaque_pointer::*;
 use crate::util::{Address, ObjectReference};
@@ -139,11 +140,28 @@ pub fn flush_mutator<VM: VMBinding>(mutator: &mut Mutator<VM>) {
     mutator.flush()
 }
 
-/// Allocate memory for an object. For performance reasons, a VM should
-/// implement the allocation fast-path on their side rather than just calling this function.
+/// Allocate memory for an object.
 ///
-/// If the VM provides a non-zero `offset` parameter, then the returned address will be
-/// such that the `RETURNED_ADDRESS + offset` is aligned to the `align` parameter.
+/// When the allocation is successful, it returns the starting address of the new object. The
+/// memory range for the new object is `size` bytes starting from the returned address, and
+/// `RETURNED_ADDRESS + offset` is guaranteed to be aligned to the `align` parameter. The returned
+/// address of a successful allocation will never be zero.
+///
+/// If MMTk fails to allocate memory, it will attempt a GC to free up some memory and retry the
+/// allocation. After triggering GC, it will call [`crate::vm::Collection::block_for_gc`] to suspend
+/// the current thread that is allocating. Callers of `alloc` must be aware of this behavior.
+/// For example, JIT compilers that support precise stack scanning need to make the call site of
+/// `alloc` a GC-safe point by generating stack maps. See [`alloc_with_options`] if it is
+/// undesirable to trigger GC at this allocation site.
+///
+/// If MMTk has attempted at least one GC and still cannot free up enough memory, it will call
+/// [`crate::vm::Collection::out_of_memory`] to inform the binding. The VM binding
+/// can implement that method to handle the out-of-memory event in a VM-specific way, including but
+/// not limited to throwing exceptions or errors. If [`crate::vm::Collection::out_of_memory`] returns
+/// normally without panicking or throwing exceptions, this function will return zero.
+///
+/// For performance reasons, a VM should implement the allocation fast-path on their side rather
+/// than just calling this function.
 ///
 /// Arguments:
 /// * `mutator`: The mutator to perform this allocation request.
@@ -158,24 +176,46 @@ pub fn alloc<VM: VMBinding>(
     offset: usize,
     semantics: AllocationSemantics,
 ) -> Address {
-    // MMTk has assumptions about minimal object size.
-    // We need to make sure that all allocations comply with the min object size.
-    // Ideally, we would check the allocation size and, if it is smaller, transparently allocate
-    // the min object size (the VM would not need to know this). However, for the VM bindings we
-    // support at the moment, their object sizes are all larger than MMTk's min object size, so we
-    // simply put an assertion here. If you plan to use MMTk with a VM whose object size is smaller
-    // than MMTk's min object size, you should meet the min object size in the fastpath.
-    debug_assert!(size >= MIN_OBJECT_SIZE);
-    // Assert alignment
-    debug_assert!(align >= VM::MIN_ALIGNMENT);
-    debug_assert!(align <= VM::MAX_ALIGNMENT);
-    // Assert offset
-    debug_assert!(VM::USE_ALLOCATION_OFFSET || offset == 0);
+    #[cfg(debug_assertions)]
+    crate::util::alloc::allocator::assert_allocation_args::<VM>(size, align, offset);
 
     mutator.alloc(size, align, offset, semantics)
 }
 
-/// Invoke the allocation slow path. This is only intended for use when a binding implements the fastpath on
+/// Allocate memory for an object.
+///
+/// This allocation function allows alterations to the allocation behavior, specified by the
+/// [`crate::util::alloc::AllocationOptions`]. For example, one can allow
+/// overcommitting the memory to go beyond the heap size without triggering a GC. This function
+/// can be used in certain cases where the runtime needs a different allocation behavior from
+/// what the default [`alloc`] provides.
+///
+/// Arguments:
+/// * `mutator`: The mutator to perform this allocation request.
+/// * `size`: The number of bytes required for the object.
+/// * `align`: Required alignment for the object.
+/// * `offset`: Offset associated with the alignment.
+/// * `semantics`: The allocation semantic required for the allocation.
+/// * `options`: The allocation options to change the default allocation behavior for this request.
+pub fn alloc_with_options<VM: VMBinding>(
+    mutator: &mut Mutator<VM>,
+    size: usize,
+    align: usize,
+    offset: usize,
+    semantics: AllocationSemantics,
+    options: crate::util::alloc::allocator::AllocationOptions,
+) -> Address {
+    #[cfg(debug_assertions)]
+    crate::util::alloc::allocator::assert_allocation_args::<VM>(size, align, offset);
+
+    mutator.alloc_with_options(size, align, offset, semantics, options)
+}
+
+/// Invoke the allocation slow path of [`alloc`].
+/// Like [`alloc`], this function may trigger GC and call [`crate::vm::Collection::block_for_gc`] or
+/// [`crate::vm::Collection::out_of_memory`]. The caller needs to be aware of that.
+///
+/// *Notes*: This is only intended for use when a binding implements the fastpath on
 /// the binding side. When the binding handles fast path allocation and the fast path fails, it can use this
 /// method for slow path allocation. Calling it before exhausting the fast path allocation buffer will lead to bad
 /// performance.
@@ -196,6 +236,34 @@ pub fn alloc_slow<VM: VMBinding>(
     mutator.alloc_slow(size, align, offset, semantics)
 }
 
+/// Invoke the allocation slow path of [`alloc_with_options`].
+///
+/// Like [`alloc_with_options`], this function allows alterations to the allocation behavior,
+/// specified by the [`crate::util::alloc::AllocationOptions`]. For example, one can allow
+/// overcommitting the memory to go beyond the heap size without triggering a GC.
+///
+/// Like [`alloc_slow`], this function is also only intended for use when a binding implements the
+/// fastpath on the binding side.
+///
+/// Arguments:
+/// * `mutator`: The mutator to perform this allocation request.
+/// * `size`: The number of bytes required for the object.
+/// * `align`: Required alignment for the object.
+/// * `offset`: Offset associated with the alignment.
+/// * `semantics`: The allocation semantic required for the allocation.
+/// * `options`: The allocation options to change the default allocation behavior for this request.
+pub fn alloc_slow_with_options<VM: VMBinding>(
+    mutator: &mut Mutator<VM>,
+    size: usize,
+    align: usize,
+    offset: usize,
+    semantics: AllocationSemantics,
+    options: AllocationOptions,
+) -> Address {
+    mutator.alloc_slow_with_options(size, align, offset, semantics, options)
+}
+
 /// Perform post-allocation actions, usually initializing object metadata. For many allocators none are
 /// required. For performance reasons, a VM should implement the post alloc fast-path on their side
 /// rather than just calling this function.
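The inline `debug_assert!`s that this diff removes from `alloc` are consolidated into `assert_allocation_args`. A self-contained sketch of those checks, with illustrative constants standing in for the `VMBinding` associated constants:

```rust
// Illustrative stand-ins for VMBinding's associated constants; the real
// values come from the binding, not from these literals.
const MIN_OBJECT_SIZE: usize = 8;
const MIN_ALIGNMENT: usize = 4;
const MAX_ALIGNMENT: usize = 64;
const USE_ALLOCATION_OFFSET: bool = false;

fn assert_allocation_args(size: usize, align: usize, offset: usize) {
    // All allocations must meet MMTk's minimal object size.
    debug_assert!(size >= MIN_OBJECT_SIZE);
    // Alignment must fall within the binding's declared bounds.
    debug_assert!(align >= MIN_ALIGNMENT && align <= MAX_ALIGNMENT);
    // A non-zero offset is only legal when the binding opts in.
    debug_assert!(USE_ALLOCATION_OFFSET || offset == 0);
}

fn main() {
    assert_allocation_args(16, 8, 0); // passes in debug builds
    println!("ok");
}
```

Guarding the call with `#[cfg(debug_assertions)]`, as the diff does, keeps the checks out of release builds entirely, since both `alloc` and `alloc_with_options` now share them.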

src/plan/mutator_context.rs (78 additions, 8 deletions)
@@ -4,6 +4,7 @@ use crate::plan::barriers::Barrier;
 use crate::plan::global::Plan;
 use crate::plan::AllocationSemantics;
 use crate::policy::space::Space;
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::alloc::allocators::{AllocatorSelector, Allocators};
 use crate::util::alloc::Allocator;
 use crate::util::{Address, ObjectReference};
@@ -158,11 +159,30 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
         offset: usize,
         allocator: AllocationSemantics,
     ) -> Address {
-        unsafe {
+        let allocator = unsafe {
             self.allocators
                 .get_allocator_mut(self.config.allocator_mapping[allocator])
-        }
-        .alloc(size, align, offset)
+        };
+        // The value should be default/unset at the beginning of an allocation request.
+        debug_assert!(allocator.get_context().get_alloc_options().is_default());
+        allocator.alloc(size, align, offset)
+    }
+
+    fn alloc_with_options(
+        &mut self,
+        size: usize,
+        align: usize,
+        offset: usize,
+        allocator: AllocationSemantics,
+        options: AllocationOptions,
+    ) -> Address {
+        let allocator = unsafe {
+            self.allocators
+                .get_allocator_mut(self.config.allocator_mapping[allocator])
+        };
+        // The value should be default/unset at the beginning of an allocation request.
+        debug_assert!(allocator.get_context().get_alloc_options().is_default());
+        allocator.alloc_with_options(size, align, offset, options)
     }
 
     fn alloc_slow(
@@ -172,11 +192,30 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
         offset: usize,
         allocator: AllocationSemantics,
     ) -> Address {
-        unsafe {
+        let allocator = unsafe {
             self.allocators
                 .get_allocator_mut(self.config.allocator_mapping[allocator])
-        }
-        .alloc_slow(size, align, offset)
+        };
+        // The value should be default/unset at the beginning of an allocation request.
+        debug_assert!(allocator.get_context().get_alloc_options().is_default());
+        allocator.alloc_slow(size, align, offset)
+    }
+
+    fn alloc_slow_with_options(
+        &mut self,
+        size: usize,
+        align: usize,
+        offset: usize,
+        allocator: AllocationSemantics,
+        options: AllocationOptions,
+    ) -> Address {
+        let allocator = unsafe {
+            self.allocators
+                .get_allocator_mut(self.config.allocator_mapping[allocator])
+        };
+        // The value should be default/unset at the beginning of an allocation request.
+        debug_assert!(allocator.get_context().get_alloc_options().is_default());
+        allocator.alloc_slow_with_options(size, align, offset, options)
     }
 
     // Note that this method is slow, and we expect VM bindings that care about performance to implement the allocation fastpath sequence in their bindings.
@@ -308,7 +347,7 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
     fn prepare(&mut self, tls: VMWorkerThread);
     /// Do the release work for this mutator.
     fn release(&mut self, tls: VMWorkerThread);
-    /// Allocate memory for an object.
+    /// Allocate memory for an object. This function will trigger a GC on a failed allocation.
     ///
     /// Arguments:
     /// * `size`: the number of bytes required for the object.
@@ -322,7 +361,25 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
         offset: usize,
         allocator: AllocationSemantics,
     ) -> Address;
-    /// The slow path allocation. This is only useful when the binding
+    /// Allocate memory for an object, with more options to control this allocation request,
+    /// e.g. not triggering a GC on failure.
+    ///
+    /// Arguments:
+    /// * `size`: the number of bytes required for the object.
+    /// * `align`: required alignment for the object.
+    /// * `offset`: offset associated with the alignment. The result plus the offset will be aligned to the given alignment.
+    /// * `allocator`: the allocation semantic used for this object.
+    /// * `options`: the allocation options to change the default allocation behavior for this request.
+    fn alloc_with_options(
+        &mut self,
+        size: usize,
+        align: usize,
+        offset: usize,
+        allocator: AllocationSemantics,
+        options: AllocationOptions,
+    ) -> Address;
+    /// The slow path allocation for [`MutatorContext::alloc`]. This function will trigger a GC on a failed allocation.
+    ///
+    /// This is only useful when the binding
     /// implements the fast path allocation, and would like to explicitly
     /// call the slow path after the fast path allocation fails.
     fn alloc_slow(
@@ -332,6 +389,19 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
         offset: usize,
         allocator: AllocationSemantics,
     ) -> Address;
+    /// The slow path allocation for [`MutatorContext::alloc_with_options`].
+    ///
+    /// This is only useful when the binding
+    /// implements the fast path allocation, and would like to explicitly
+    /// call the slow path after the fast path allocation fails.
+    fn alloc_slow_with_options(
+        &mut self,
+        size: usize,
+        align: usize,
+        offset: usize,
+        allocator: AllocationSemantics,
+        options: AllocationOptions,
+    ) -> Address;
     /// Perform post-allocation actions. For many allocators none are
     /// required.
    ///
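The mutator methods above all share one dispatch pattern: map the requested `AllocationSemantics` to a concrete allocator, then forward the request. A self-contained toy model of that pattern (all types here are illustrative; the real code indexes `config.allocator_mapping` and uses an unsafe accessor):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum AllocationSemantics {
    Default,
    Los, // large object space
}

// A toy bump allocator standing in for MMTk's Allocator implementations.
struct ToyAllocator {
    base: usize,
    cursor: usize,
}

impl ToyAllocator {
    fn alloc(&mut self, size: usize) -> usize {
        let addr = self.base + self.cursor;
        self.cursor += size;
        addr
    }
}

struct ToyMutator {
    allocators: Vec<ToyAllocator>,
    // Stand-in for `config.allocator_mapping`.
    mapping: HashMap<AllocationSemantics, usize>,
}

impl ToyMutator {
    fn alloc(&mut self, size: usize, semantics: AllocationSemantics) -> usize {
        // Resolve the semantics to an allocator, then forward the request.
        let idx = self.mapping[&semantics];
        self.allocators[idx].alloc(size)
    }
}

fn main() {
    let mut m = ToyMutator {
        allocators: vec![
            ToyAllocator { base: 0x1000, cursor: 0 },
            ToyAllocator { base: 0x8000, cursor: 0 },
        ],
        mapping: HashMap::from([
            (AllocationSemantics::Default, 0),
            (AllocationSemantics::Los, 1),
        ]),
    };
    assert_eq!(m.alloc(16, AllocationSemantics::Default), 0x1000);
    assert_eq!(m.alloc(32, AllocationSemantics::Los), 0x8000);
    println!("ok");
}
```

The diff's refactoring from a chained `unsafe { ... }.alloc(...)` to a named `let allocator = unsafe { ... };` binding makes room for the `debug_assert!` on the allocator's options state before each forwarded call.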

src/policy/immix/immixspace.rs (8 additions, 2 deletions)
@@ -7,6 +7,7 @@ use crate::policy::sft::GCWorkerMutRef;
 use crate::policy::sft::SFT;
 use crate::policy::sft_map::SFTMap;
 use crate::policy::space::{CommonSpace, Space};
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::alloc::allocator::AllocatorContext;
 use crate::util::constants::LOG_BYTES_IN_PAGE;
 use crate::util::heap::chunk_map::*;
@@ -532,8 +533,13 @@ impl<VM: VMBinding> ImmixSpace<VM> {
     }
 
     /// Allocate a clean block.
-    pub fn get_clean_block(&self, tls: VMThread, copy: bool) -> Option<Block> {
-        let block_address = self.acquire(tls, Block::PAGES);
+    pub fn get_clean_block(
+        &self,
+        tls: VMThread,
+        copy: bool,
+        alloc_options: AllocationOptions,
+    ) -> Option<Block> {
+        let block_address = self.acquire(tls, Block::PAGES, alloc_options);
         if block_address.is_zero() {
             return None;
         }
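Since `acquire` now signals failure with a zero address instead of always triggering GC or panicking, `get_clean_block` translates that sentinel into an `Option`. A minimal sketch of the conversion (using a bare `usize` where the real code wraps the address in a `Block`):

```rust
// Zero is the failure sentinel: a successful acquire never returns zero.
fn get_clean_block(acquired: usize) -> Option<usize> {
    if acquired == 0 {
        None // allocation failed under the given AllocationOptions
    } else {
        Some(acquired)
    }
}

fn main() {
    assert_eq!(get_clean_block(0), None);
    assert_eq!(get_clean_block(0x2000), Some(0x2000));
    println!("ok");
}
```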

src/policy/largeobjectspace.rs (8 additions, 2 deletions)
@@ -5,6 +5,7 @@ use crate::plan::VectorObjectQueue;
 use crate::policy::sft::GCWorkerMutRef;
 use crate::policy::sft::SFT;
 use crate::policy::space::{CommonSpace, Space};
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::constants::BYTES_IN_PAGE;
 use crate::util::heap::{FreeListPageResource, PageResource};
 use crate::util::metadata;
@@ -309,8 +310,13 @@ impl<VM: VMBinding> LargeObjectSpace<VM> {
     }
 
     /// Allocate an object.
-    pub fn allocate_pages(&self, tls: VMThread, pages: usize) -> Address {
-        self.acquire(tls, pages)
+    pub fn allocate_pages(
+        &self,
+        tls: VMThread,
+        pages: usize,
+        alloc_options: AllocationOptions,
+    ) -> Address {
+        self.acquire(tls, pages, alloc_options)
     }
 
     /// Test if the object's mark bit is the same as the given value. If it is not the same,

src/policy/lockfreeimmortalspace.rs (7 additions, 2 deletions)
@@ -8,6 +8,7 @@ use crate::policy::sft::SFT;
 use crate::policy::space::{CommonSpace, Space};
 use crate::util::address::Address;
 
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::conversions;
 use crate::util::heap::gc_trigger::GCTrigger;
 use crate::util::heap::layout::vm_layout::vm_layout;
@@ -140,7 +141,7 @@ impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM> {
         data_pages + meta_pages
     }
 
-    fn acquire(&self, _tls: VMThread, pages: usize) -> Address {
+    fn acquire(&self, _tls: VMThread, pages: usize, alloc_options: AllocationOptions) -> Address {
         trace!("LockFreeImmortalSpace::acquire");
         let bytes = conversions::pages_to_bytes(pages);
         let start = self
@@ -150,7 +151,11 @@ impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM> {
             })
             .expect("update cursor failed");
         if start + bytes > self.limit {
-            panic!("OutOfMemory")
+            if alloc_options.on_fail.allow_oom_call() {
+                panic!("OutOfMemory");
+            } else {
+                return Address::ZERO;
+            }
         }
         if self.slow_path_zeroing {
             crate::util::memory::zero(start, bytes);
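The acquire path shown above, a lock-free bump of the cursor followed by a limit check that either panics (when an OOM call is allowed) or returns a zero address, can be modeled as a small self-contained program. Names and sizes are illustrative; the real code works with `Address` and page counts:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct ToySpace {
    cursor: AtomicUsize, // next free byte, bumped lock-free
    limit: usize,        // end of the space
}

impl ToySpace {
    fn acquire(&self, bytes: usize, allow_oom_panic: bool) -> usize {
        // Lock-free bump: atomically advance the cursor and keep the old value.
        let start = self
            .cursor
            .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |c| Some(c + bytes))
            .expect("update cursor failed");
        if start + bytes > self.limit {
            if allow_oom_panic {
                panic!("OutOfMemory");
            } else {
                return 0; // Address::ZERO in the real code
            }
        }
        start
    }
}

fn main() {
    let space = ToySpace {
        cursor: AtomicUsize::new(0x1000),
        limit: 0x1000 + 4096,
    };
    let a = space.acquire(4096, false);
    assert_eq!(a, 0x1000); // fits exactly
    let b = space.acquire(1, false);
    assert_eq!(b, 0); // over the limit: zero instead of panicking
    println!("ok");
}
```

Note that the cursor is bumped even on a failed acquire, exactly as in the diffed code; for an immortal space that never reclaims, this wasted cursor motion on failure is tolerable.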

src/policy/marksweepspace/native_ms/global.rs (9 additions, 2 deletions)
@@ -24,6 +24,7 @@ use crate::plan::ObjectQueue;
 use crate::plan::VectorObjectQueue;
 use crate::policy::sft::SFT;
 use crate::policy::space::{CommonSpace, Space};
+use crate::util::alloc::allocator::AllocationOptions;
 use crate::util::constants::LOG_BYTES_IN_PAGE;
 use crate::util::heap::chunk_map::*;
 use crate::util::linear_scan::Region;
@@ -462,7 +463,13 @@ impl<VM: VMBinding> MarkSweepSpace<VM> {
         crate::util::metadata::vo_bit::bzero_vo_bit(block.start(), Block::BYTES);
     }
 
-    pub fn acquire_block(&self, tls: VMThread, size: usize, align: usize) -> BlockAcquireResult {
+    pub fn acquire_block(
+        &self,
+        tls: VMThread,
+        size: usize,
+        align: usize,
+        alloc_options: AllocationOptions,
+    ) -> BlockAcquireResult {
         {
             let mut abandoned = self.abandoned.lock().unwrap();
             let bin = mi_bin::<VM>(size, align);
@@ -484,7 +491,7 @@ impl<VM: VMBinding> MarkSweepSpace<VM> {
             }
         }
 
-        let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE);
+        let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE, alloc_options);
         if acquired.is_zero() {
            BlockAcquireResult::Exhausted
        } else {
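When the abandoned-block lists yield nothing, `acquire_block` falls through to acquiring a fresh block, sized in pages as `Block::BYTES >> LOG_BYTES_IN_PAGE`, and maps a zero address to an "exhausted" result. A sketch of that fallback (constants are illustrative: 64 KiB blocks, 4 KiB pages; the page resource is abstracted as a closure):

```rust
const BLOCK_BYTES: usize = 64 * 1024;
const LOG_BYTES_IN_PAGE: usize = 12; // 4 KiB pages

#[derive(Debug, PartialEq)]
enum BlockAcquireResult {
    Exhausted,
    Fresh(usize),
}

// `acquire` stands in for Space::acquire: takes a page count, returns an
// address, with zero meaning the request failed under the allocation options.
fn acquire_block(mut acquire: impl FnMut(usize) -> usize) -> BlockAcquireResult {
    let pages = BLOCK_BYTES >> LOG_BYTES_IN_PAGE; // 16 pages per block
    let acquired = acquire(pages);
    if acquired == 0 {
        BlockAcquireResult::Exhausted
    } else {
        BlockAcquireResult::Fresh(acquired)
    }
}

fn main() {
    // A page resource that fails (e.g. a no-GC request on an exhausted heap).
    assert_eq!(acquire_block(|_pages| 0), BlockAcquireResult::Exhausted);
    // A page resource that succeeds.
    assert_eq!(acquire_block(|_pages| 0x4000), BlockAcquireResult::Fresh(0x4000));
    println!("ok");
}
```

Threading `alloc_options` down to this call is what lets a no-GC allocation request surface as `Exhausted` to the allocator rather than triggering a collection inside the space.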