Typed access to metadata #647

Merged (38 commits, Sep 13, 2022)
Commits (38)
729d990
Use u64 instead usize for 8bytes side metadata
qinsoon Aug 23, 2022
1d1a9bc
Merge branch 'master' into fix-side-metadata-usize
qinsoon Aug 23, 2022
4a0d8ff
Use u64 instead usize for 8bytes side metadata
qinsoon Aug 23, 2022
76debb8
Apply same change to header metadata
qinsoon Aug 23, 2022
39fb2c1
MetadataValue Prototype
qinsoon Aug 25, 2022
241c6df
Refactor side metadata methods with MetadataValue
qinsoon Aug 26, 2022
5079d01
Refactor Metadata and HeaderMetadata with MetadataValue
qinsoon Aug 29, 2022
4296583
Provide a default impl for header metadata access in ObjectModel
qinsoon Aug 29, 2022
5c28dc2
Tidy up side metadata sanity: use verify_update instead verify_<op>
qinsoon Aug 29, 2022
c432b50
Add fetch_and/or/update
qinsoon Aug 30, 2022
ec838ab
Fix current tests
qinsoon Aug 30, 2022
4b63bf2
Move a few methods to SideMetadata
qinsoon Aug 30, 2022
4078e90
WIP: Add some tests
qinsoon Aug 30, 2022
b19e242
WIP: more tests
qinsoon Aug 31, 2022
44a21c0
More tests on SideMetadata
qinsoon Aug 31, 2022
cf3d204
Metadata ops should call ObjectModel methods
qinsoon Sep 1, 2022
967fe5f
Add test for header metadata
qinsoon Sep 6, 2022
fe11527
Separate atomic/non-atomic load/store for metadata and header metadata
qinsoon Sep 6, 2022
21ba305
Fix test/style
qinsoon Sep 6, 2022
9ecb707
Use u64 for side metadata sanity map
qinsoon Sep 6, 2022
01054e7
Remove code that was commented out
qinsoon Sep 6, 2022
43188f4
Merge branch 'master' into fix-side-metadata-usize
qinsoon Sep 6, 2022
067d6e5
Fix build and test on 32 bits
qinsoon Sep 6, 2022
d07d7e8
Clean up
qinsoon Sep 6, 2022
3b6d365
Merge branch 'master' into fix-side-metadata-usize
qinsoon Sep 6, 2022
53a568d
inline(always) for a few places
qinsoon Sep 7, 2022
8526afc
Use atomic load for object barrier
qinsoon Sep 8, 2022
0c6ff4a
Move return description to the end of the description.
qinsoon Sep 8, 2022
ed6e905
Outdated comments
qinsoon Sep 8, 2022
7c51e72
Assert input value for side metadata access. Allow serial_test lock
qinsoon Sep 9, 2022
9e58b7b
Use fetch_update for atomic_store and fetch_ops for bits. Use
qinsoon Sep 9, 2022
32355d2
Use fetch_update for fetch_update_atomic of bits. cargo fmt
qinsoon Sep 9, 2022
ff95893
Use fetch_update over compare_and_exchange on header metadata as well.
qinsoon Sep 9, 2022
02d7b99
Use fetch_update for HeaderMetadataSpec.fetch_ops_on_bits.
qinsoon Sep 9, 2022
06d6395
compare_exchange returns Result
qinsoon Sep 12, 2022
1c4cc86
Merge branch 'master' into fix-side-metadata-usize
qinsoon Sep 12, 2022
99a9840
Use relaxed atomic load for mark sweep trace_object. Use Option.map for
qinsoon Sep 12, 2022
24bb41b
Use explicit type parameter for fetch_update
qinsoon Sep 13, 2022
2 changes: 2 additions & 0 deletions Cargo.toml
@@ -34,6 +34,7 @@ enum-map = "=2.1"
downcast-rs = "1.1.1"
atomic-traits = "0.2.0"
atomic = "0.4.6"
num-traits = "0.2"
spin = "0.5.2"
env_logger = "0.8.2"
pfm = { version = "0.1.0-beta.1", optional = true }
@@ -43,6 +44,7 @@ strum_macros = "0.24"

[dev-dependencies]
rand = "0.7.3"
paste = "1.0.8"

[build-dependencies]
built = { version = "0.5.1", features = ["git2"] }
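For context on the new num-traits dependency: the PR's typed accessors are generic over a metadata value type, and num-traits is a natural way to bound such a type. The sketch below is purely illustrative; it is not the MetadataValue trait actually introduced by this PR.

// Hypothetical sketch only. The real trait in mmtk-core may use different bounds,
// methods, and a different name.
use num_traits::{PrimInt, Unsigned};

pub trait MetadataValueSketch: PrimInt + Unsigned {
    // Convert to and from the raw u64 representation used for wide side metadata.
    fn to_raw(self) -> u64;
    fn from_raw(raw: u64) -> Self;
}

impl MetadataValueSketch for u8 {
    fn to_raw(self) -> u64 {
        self as u64
    }
    fn from_raw(raw: u64) -> Self {
        raw as u8
    }
}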
38 changes: 24 additions & 14 deletions src/plan/barriers.rs
@@ -4,8 +4,7 @@ use atomic::Ordering;

use crate::scheduler::gc_work::*;
use crate::scheduler::WorkBucketStage;
use crate::util::metadata::load_metadata;
use crate::util::metadata::{compare_exchange_metadata, MetadataSpec};
use crate::util::metadata::MetadataSpec;
use crate::util::*;
use crate::MMTK;

@@ -69,20 +68,24 @@ impl<E: ProcessEdgesWork> ObjectRememberingBarrier<E> {
// Try set the bit from 1 to 0 (log object). This may fail, if
// 1. the bit is cleared by others, or
// 2. other bits in the same byte may get modified if we use side metadata
if compare_exchange_metadata::<E::VM>(
&self.meta,
object,
1,
0,
None,
Ordering::SeqCst,
Ordering::SeqCst,
) {
if self
.meta
.compare_exchange_metadata::<E::VM, u8>(
object,
1,
0,
None,
Ordering::SeqCst,
Ordering::SeqCst,
)
.is_ok()
{
// We just logged the object
return true;
} else {
let old_value =
load_metadata::<E::VM>(&self.meta, object, None, Some(Ordering::SeqCst));
let old_value = self
.meta
.load_atomic::<E::VM, u8>(object, None, Ordering::SeqCst);
Comment on lines +86 to +88

Collaborator:

Usually cmpxchg returns the old value regardless of success or failure, so it should not be necessary to load again. Consider changing the API of cmpxchg.

Alternatively, try fetch_and_atomic. It should be faster on architectures that have special instructions that never fail.

Collaborator:

This is the "redundant load" I was talking about. It can be replaced by the return value of the cmpxchg:

match self.meta.compare_exchange_metadata(...) {
    Ok(_) => { return true; }, // We just logged the object
    Err(old_value) => {
        ... // The else branch goes here.
    },
}

We may fix this in another PR.

Member Author:

Yeah, I will do it in another PR.
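A minimal sketch of the fetch_and alternative suggested above, for illustration only. It assumes the typed metadata API offers a fetch_and_atomic accessor taking the same (object, value, mask, ordering) arguments as the other typed calls in this diff; the actual name and signature may differ.

// Hypothetical sketch, not part of this PR: clear the log bit with a single
// fetch_and and use the returned old value, avoiding both the retry on failure
// and the extra load. The accessor name and parameters are assumptions.
let old_value = self
    .meta
    .fetch_and_atomic::<E::VM, u8>(object, 0, None, Ordering::SeqCst);
// An old value of 1 means we just logged the object; 0 means someone else did.
old_value == 1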

// If the bit is cleared before, someone else has logged the object. Return false.
if old_value == 0 {
return false;
@@ -104,7 +107,14 @@ impl<E: ProcessEdgesWork> ObjectRememberingBarrier<E> {

#[inline(always)]
fn barrier(&mut self, obj: ObjectReference) {
if load_metadata::<E::VM>(&self.meta, obj, None, None) == 0 {
// Perform a relaxed load for performance.
// It is okay if this check fails occasionally and
// the execution goes to the slowpath, we can take care of that in the slowpath.
if self
.meta
.load_atomic::<E::VM, u8>(obj, None, Ordering::Relaxed)
== 0
{
return;
}
self.barrier_slow(obj);
9 changes: 3 additions & 6 deletions src/policy/copyspace.rs
@@ -13,7 +13,7 @@ use crate::util::heap::HeapMeta;
use crate::util::heap::VMRequest;
use crate::util::heap::{MonotonePageResource, PageResource};
use crate::util::metadata::side_metadata::{SideMetadataContext, SideMetadataSpec};
use crate::util::metadata::{extract_side_metadata, side_metadata, MetadataSpec};
use crate::util::metadata::{extract_side_metadata, MetadataSpec};
use crate::util::object_forwarding;
use crate::util::{Address, ObjectReference};
use crate::vm::*;
@@ -182,11 +182,8 @@ impl<VM: VMBinding> CopySpace<VM> {
if let MetadataSpec::OnSide(side_forwarding_status_table) =
*<VM::VMObjectModel as ObjectModel<VM>>::LOCAL_FORWARDING_BITS_SPEC
{
side_metadata::bzero_metadata(
&side_forwarding_status_table,
self.common.start,
self.pr.cursor() - self.common.start,
);
side_forwarding_status_table
.bzero_metadata(self.common.start, self.pr.cursor() - self.common.start);
}
}

33 changes: 9 additions & 24 deletions src/policy/immix/block.rs
@@ -4,7 +4,7 @@ use super::line::Line;
use super::ImmixSpace;
use crate::util::constants::*;
use crate::util::linear_scan::{Region, RegionIterator};
use crate::util::metadata::side_metadata::{self, *};
use crate::util::metadata::side_metadata::{MetadataByteArrayRef, SideMetadataSpec};
use crate::util::Address;
use crate::vm::*;
use spin::{Mutex, MutexGuard};
@@ -125,16 +125,15 @@ impl Block {
/// Get block mark state.
#[inline(always)]
pub fn get_state(&self) -> BlockState {
let byte =
side_metadata::load_atomic(&Self::MARK_TABLE, self.start(), Ordering::SeqCst) as u8;
let byte = Self::MARK_TABLE.load_atomic::<u8>(self.start(), Ordering::SeqCst);
byte.into()
}

/// Set block mark state.
#[inline(always)]
pub fn set_state(&self, state: BlockState) {
let state = u8::from(state) as usize;
side_metadata::store_atomic(&Self::MARK_TABLE, self.start(), state, Ordering::SeqCst);
let state = u8::from(state);
Self::MARK_TABLE.store_atomic::<u8>(self.start(), state, Ordering::SeqCst);
}

// Defrag byte
@@ -144,9 +143,7 @@ impl Block {
/// Test if the block is marked for defragmentation.
#[inline(always)]
pub fn is_defrag_source(&self) -> bool {
let byte =
side_metadata::load_atomic(&Self::DEFRAG_STATE_TABLE, self.start(), Ordering::SeqCst)
as u8;
let byte = Self::DEFRAG_STATE_TABLE.load_atomic::<u8>(self.start(), Ordering::SeqCst);
debug_assert!(byte == 0 || byte == Self::DEFRAG_SOURCE_STATE);
byte == Self::DEFRAG_SOURCE_STATE
}
@@ -155,31 +152,19 @@ impl Block {
#[inline(always)]
pub fn set_as_defrag_source(&self, defrag: bool) {
let byte = if defrag { Self::DEFRAG_SOURCE_STATE } else { 0 };
side_metadata::store_atomic(
&Self::DEFRAG_STATE_TABLE,
self.start(),
byte as usize,
Ordering::SeqCst,
);
Self::DEFRAG_STATE_TABLE.store_atomic::<u8>(self.start(), byte, Ordering::SeqCst);
}

/// Record the number of holes in the block.
#[inline(always)]
pub fn set_holes(&self, holes: usize) {
side_metadata::store_atomic(
&Self::DEFRAG_STATE_TABLE,
self.start(),
holes,
Ordering::SeqCst,
);
Self::DEFRAG_STATE_TABLE.store_atomic::<u8>(self.start(), holes as u8, Ordering::SeqCst);
}

/// Get the number of holes.
#[inline(always)]
pub fn get_holes(&self) -> usize {
let byte =
side_metadata::load_atomic(&Self::DEFRAG_STATE_TABLE, self.start(), Ordering::SeqCst)
as u8;
let byte = Self::DEFRAG_STATE_TABLE.load_atomic::<u8>(self.start(), Ordering::SeqCst);
debug_assert_ne!(byte, Self::DEFRAG_SOURCE_STATE);
byte as usize
}
@@ -192,7 +177,7 @@ impl Block {
} else {
BlockState::Unmarked
});
side_metadata::store_atomic(&Self::DEFRAG_STATE_TABLE, self.start(), 0, Ordering::SeqCst);
Self::DEFRAG_STATE_TABLE.store_atomic::<u8>(self.start(), 0, Ordering::SeqCst);
}

/// Deinitalize a block before releasing.
6 changes: 3 additions & 3 deletions src/policy/immix/chunk.rs
@@ -2,7 +2,7 @@ use super::block::{Block, BlockState};
use super::defrag::Histogram;
use super::immixspace::ImmixSpace;
use crate::util::linear_scan::{Region, RegionIterator};
use crate::util::metadata::side_metadata::{self, SideMetadataSpec};
use crate::util::metadata::side_metadata::SideMetadataSpec;
use crate::{
scheduler::*,
util::{heap::layout::vm_layout_constants::LOG_BYTES_IN_CHUNK, Address},
@@ -111,7 +111,7 @@ impl ChunkMap {
return;
}
// Update alloc byte
unsafe { side_metadata::store(&Self::ALLOC_TABLE, chunk.start(), state as u8 as _) };
unsafe { Self::ALLOC_TABLE.store::<u8>(chunk.start(), state as u8) };
// If this is a newly allcoated chunk, then expand the chunk range.
if state == ChunkState::Allocated {
debug_assert!(!chunk.start().is_zero());
@@ -129,7 +129,7 @@

/// Get chunk state
pub fn get(&self, chunk: Chunk) -> ChunkState {
let byte = unsafe { side_metadata::load(&Self::ALLOC_TABLE, chunk.start()) as u8 };
let byte = unsafe { Self::ALLOC_TABLE.load::<u8>(chunk.start()) };
match byte {
0 => ChunkState::Free,
1 => ChunkState::Allocated,
49 changes: 23 additions & 26 deletions src/policy/immix/immixspace.rs
@@ -14,10 +14,8 @@ use crate::util::heap::HeapMeta;
use crate::util::heap::PageResource;
use crate::util::heap::VMRequest;
use crate::util::linear_scan::{Region, RegionIterator};
use crate::util::metadata::side_metadata::{self, *};
use crate::util::metadata::{
self, compare_exchange_metadata, load_metadata, store_metadata, MetadataSpec,
};
use crate::util::metadata::side_metadata::{SideMetadataContext, SideMetadataSpec};
use crate::util::metadata::{self, MetadataSpec};
use crate::util::object_forwarding as ForwardingWord;
use crate::util::{Address, ObjectReference};
use crate::vm::*;
@@ -509,25 +507,26 @@ impl<VM: VMBinding> ImmixSpace<VM> {
#[inline(always)]
fn attempt_mark(&self, object: ObjectReference, mark_state: u8) -> bool {
loop {
let old_value = load_metadata::<VM>(
&VM::VMObjectModel::LOCAL_MARK_BIT_SPEC,
let old_value = VM::VMObjectModel::LOCAL_MARK_BIT_SPEC.load_atomic::<VM, u8>(
Collaborator:

Depending on mark_state, fetch_and or fetch_or can be more efficient.

object,
None,
Some(Ordering::SeqCst),
) as u8;
Ordering::SeqCst,
);
if old_value == mark_state {
return false;
}

if compare_exchange_metadata::<VM>(
&VM::VMObjectModel::LOCAL_MARK_BIT_SPEC,
object,
old_value as usize,
mark_state as usize,
None,
Ordering::SeqCst,
Ordering::SeqCst,
) {
if VM::VMObjectModel::LOCAL_MARK_BIT_SPEC
.compare_exchange_metadata::<VM, u8>(
object,
old_value,
mark_state,
None,
Ordering::SeqCst,
Ordering::SeqCst,
)
.is_ok()
{
break;
}
}
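Picking up the reviewer's fetch_and/fetch_or note above: a hypothetical sketch for attempt_mark when mark_state is 1, assuming a fetch_or_atomic accessor with (object, value, mask, ordering) parameters. Neither the name nor the signature is taken from this PR; when mark_state is 0, a fetch_and with 0 would play the same role.

// Hypothetical sketch, not part of this PR: set the mark bit with one fetch_or
// and use the returned old value instead of the load + compare_exchange loop.
let old_value = VM::VMObjectModel::LOCAL_MARK_BIT_SPEC
    .fetch_or_atomic::<VM, u8>(object, mark_state, None, Ordering::SeqCst);
// We are the marking thread iff the bit was not already set to mark_state.
old_value != mark_state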
@@ -537,12 +536,11 @@ impl<VM: VMBinding> ImmixSpace<VM> {
/// Check if an object is marked.
#[inline(always)]
fn is_marked(&self, object: ObjectReference, mark_state: u8) -> bool {
let old_value = load_metadata::<VM>(
&VM::VMObjectModel::LOCAL_MARK_BIT_SPEC,
let old_value = VM::VMObjectModel::LOCAL_MARK_BIT_SPEC.load_atomic::<VM, u8>(
object,
None,
Some(Ordering::SeqCst),
) as u8;
Ordering::SeqCst,
);
old_value == mark_state
}

@@ -618,7 +616,7 @@ impl<VM: VMBinding> PrepareBlockState<VM> {
#[inline(always)]
fn reset_object_mark(chunk: Chunk) {
if let MetadataSpec::OnSide(side) = *VM::VMObjectModel::LOCAL_MARK_BIT_SPEC {
side_metadata::bzero_metadata(&side, chunk.start(), Chunk::BYTES);
side.bzero_metadata(chunk.start(), Chunk::BYTES);
}
}
}
@@ -689,12 +687,11 @@ impl<VM: VMBinding> PolicyCopyContext for ImmixCopyContext<VM> {
#[inline(always)]
fn post_copy(&mut self, obj: ObjectReference, _bytes: usize) {
// Mark the object
store_metadata::<VM>(
&VM::VMObjectModel::LOCAL_MARK_BIT_SPEC,
VM::VMObjectModel::LOCAL_MARK_BIT_SPEC.store_atomic::<VM, u8>(
obj,
self.get_space().mark_state as usize,
self.get_space().mark_state,
None,
Some(Ordering::SeqCst),
Ordering::SeqCst,
);
// Mark the line
if !super::MARK_LINE_AT_SCAN_TIME {
6 changes: 3 additions & 3 deletions src/policy/immix/line.rs
@@ -1,6 +1,6 @@
use super::block::Block;
use crate::util::linear_scan::{Region, RegionIterator};
use crate::util::metadata::side_metadata::{self, *};
use crate::util::metadata::side_metadata::SideMetadataSpec;
use crate::{
util::{Address, ObjectReference},
vm::*,
@@ -60,15 +60,15 @@ impl Line {
pub fn mark(&self, state: u8) {
debug_assert!(!super::BLOCK_ONLY);
unsafe {
side_metadata::store(&Self::MARK_TABLE, self.start(), state as _);
Self::MARK_TABLE.store::<u8>(self.start(), state);
}
}

/// Test line mark state.
#[inline(always)]
pub fn is_marked(&self, state: u8) -> bool {
debug_assert!(!super::BLOCK_ONLY);
unsafe { side_metadata::load(&Self::MARK_TABLE, self.start()) as u8 == state }
unsafe { Self::MARK_TABLE.load::<u8>(self.start()) == state }
}

/// Mark all lines the object is spanned to.