Global allocation bit #390

Merged · 20 commits · Aug 19, 2021
Conversation

YiluoWei (Collaborator)

  • Create a new module util::alloc_bit.

  • Create a new feature global_alloc_bit.

  • Create a new global side metadata spec for the alloc bit (1 bit per minimum object size).

  • Add the alloc bit spec to plans.

  • Set the alloc bit in initialize_object_metadata() for each policy.

  • Unset/zero the alloc bits for each policy.

  • Remove the alloc bit spec in MallocSpace, and change all related functions to use the functions from the new alloc bit module.

  • Issue:
    We need to set the alloc bit in the allocation fastpath in OpenJDK, since we do not call post_alloc in the fastpath.

  • issue:
    https://github.com/mmtk/mmtk-core/blob/master/src/policy/space.rs#L238

pub fn is_in_space(&self, object: ObjectReference) -> bool {
    let not_in_space = object.to_address().chunk_index() >= self.sft.len()
        || self.get(object.to_address()).name() == EMPTY_SPACE_SFT.name();

    if not_in_space {
        // special case - we do not yet have SFT entries for malloc space
        use crate::policy::mallocspace::is_alloced_by_malloc;
        is_alloced_by_malloc(object)
    } else {
        true
    }
}

This function works for now, because we only use it to check whether an address is mapped, but it might be problematic.
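For readers unfamiliar with the scheme described in the bullet list above, the "1 bit per minimum object size" mapping can be sketched as a standalone bitmap. All names here (AllocBitmap, LOG_MIN_OBJECT_SIZE) are illustrative; mmtk-core implements this through its side-metadata framework, not this code.

```rust
// Hypothetical sketch of 1-bit-per-minimum-object-size alloc metadata.
const LOG_MIN_OBJECT_SIZE: usize = 3; // assume an 8-byte minimum object size

struct AllocBitmap {
    base: usize,   // lowest address the bitmap covers
    bits: Vec<u8>, // 1 bit per minimum-object-size slot
}

impl AllocBitmap {
    fn new(base: usize, bytes_covered: usize) -> Self {
        let num_bits = bytes_covered >> LOG_MIN_OBJECT_SIZE;
        AllocBitmap { base, bits: vec![0; (num_bits + 7) / 8] }
    }

    // Map an address to its bit index: one bit per 2^LOG_MIN_OBJECT_SIZE bytes.
    fn bit_index(&self, addr: usize) -> usize {
        (addr - self.base) >> LOG_MIN_OBJECT_SIZE
    }

    fn set_alloc_bit(&mut self, addr: usize) {
        let i = self.bit_index(addr);
        self.bits[i / 8] |= 1 << (i % 8);
    }

    fn unset_alloc_bit(&mut self, addr: usize) {
        let i = self.bit_index(addr);
        self.bits[i / 8] &= !(1 << (i % 8));
    }

    fn is_alloced(&self, addr: usize) -> bool {
        let i = self.bit_index(addr);
        self.bits[i / 8] & (1 << (i % 8)) != 0
    }
}
```

The real implementation stores these bits in a shared side-metadata region rather than a per-space `Vec`, which is why the unmapped-metadata panics discussed below can occur.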


qinsoon commented Jul 28, 2021

Can you merge with master and resolve conflicts?


qinsoon commented Aug 2, 2021

The build still fails. You can probably try run .github/scripts/ci-build.sh locally to figure out the issue. @YiluoWei


YiluoWei commented Aug 2, 2021

This should fix the problem. I ran .github/scripts/ci-build.sh locally and it built successfully.

@qinsoon qinsoon added the PR-testing Run binding tests for the pull request (deprecated: use PR-extended-testing instead) label Aug 3, 2021

qinsoon commented Aug 4, 2021

The JikesRVM tests still failed. I don't think it is the same as the known issue mmtk/mmtk-jikesrvm#78, so it is probably worth looking into. The error message about 'not in RVM space' generally indicates a possible segfault in the Rust code. @YiluoWei


YiluoWei commented Aug 4, 2021

yiluow@rat:~/mmtk-dev/mmtk-jikesrvm/repos/jikesrvm$ export RUST_BACKTRACE=1; ./dist/RBaseBaseMarkSweep_x86_64_m32-linux/rvm -X:gc:threads=16 -Xms75M -Xmx75M -jar /usr/share/benchmarks/dacapo/dacapo-2006-10-MR2.jar bloat
[2021-08-04T05:34:46Z INFO  mmtk::memory_manager] Initialized MMTk with MarkSweep
thread '<unnamed>' panicked at '0x2b400000 of size 4096 is not mapped', /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/util/memory.rs:121:18
stack backtrace:
   0: rust_begin_unwind
             at /rustc/952fdf2a1119affa1b37bcacb0c49cf9f0168ac8/library/std/src/panicking.rs:515:5
   1: std::panicking::begin_panic_fmt
             at /rustc/952fdf2a1119affa1b37bcacb0c49cf9f0168ac8/library/std/src/panicking.rs:457:5
   2: mmtk::util::memory::panic_if_unmapped
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/util/memory.rs:121:18
   3: mmtk::util::metadata::side_metadata::global::ensure_metadata_is_mapped
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/util/metadata/side_metadata/global.rs:325:5
   4: mmtk::util::metadata::side_metadata::global::store_atomic
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/util/metadata/side_metadata/global.rs:377:9
   5: mmtk::policy::mallocspace::metadata::set_page_mark
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/policy/mallocspace/metadata.rs:204:5
   6: <mmtk::policy::mallocspace::global::MallocSpace<VM> as mmtk::policy::space::SFT>::initialize_object_metadata
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/policy/mallocspace/global.rs:77:9
   7: <mmtk::plan::mutator_context::Mutator<VM> as mmtk::plan::mutator_context::MutatorContext<VM>>::post_alloc
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/plan/mutator_context.rs:75:9
   8: mmtk::memory_manager::post_alloc
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/repos/mmtk-core/src/memory_manager.rs:147:5
   9: post_alloc
             at /home/yiluow/mmtk-dev/mmtk-jikesrvm/mmtk/src/api.rs:80:5
  10: <unknown>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
fatal runtime error: failed to initiate panic, error 5
Aborted (core dumped)

It seems that sometimes the page_mark metadata (probably for the first page) is not mapped. On rat, the run passes about 50% of the time.

yiluow@rat:~/mmtk-dev/mmtk-jikesrvm/repos/jikesrvm$ export RUST_BACKTRACE=1; ./dist/RFastAdaptiveMarkSweep_x86_64_m32-linux/rvm -X:gc:threads=16 -Xms75M -Xmx75M -jar /usr/share/benchmarks/dacapo/dacapo-2006-10-MR2.jar bloat
===== DaCapo bloat starting =====
Optimized with: EDU.purdue.cs.bloat.optimize.Main -only EDU.purdue.cs.bloat.trans -pre -dce -diva -prop -stack-alloc -peel-loops all -f EDU.purdue.cs.bloat.trans.CompactArrayInitializer ./scratch/optimizedcode
===== DaCapo bloat PASSED in 15909 msec =====
yiluow@rat:~/mmtk-dev/mmtk-jikesrvm/repos/jikesrvm$ export RUST_BACKTRACE=1; ./dist/RFastAdaptiveMarkSweep_x86_64_m32-linux/rvm -X:gc:threads=16 -Xms75M -Xmx75M -jar /usr/share/benchmarks/dacapo/dacapo-2006-10-MR2.jar bloat
===== DaCapo bloat starting =====
Optimized with: EDU.purdue.cs.bloat.optimize.Main -only EDU.purdue.cs.bloat.trans -pre -dce -diva -prop -stack-alloc -peel-loops all -f EDU.purdue.cs.bloat.trans.CompactArrayInitializer ./scratch/optimizedcode
===== DaCapo bloat PASSED in 14435 msec =====


qinsoon commented Aug 4, 2021


I see. Then it is possible that this is the same issue as mmtk/mmtk-jikesrvm#78.

@qinsoon left a comment (Member)

There are two things missing.

  1. The alloc bit is not implemented for the Immix policy, which was added recently, nor for LockFreeImmortalSpace.
  2. There is no test/assertion to check whether the alloc bit is implemented correctly, which is why (1) could happen. As we are not using the alloc bit right now (except in mark-sweep and the malloc space), testing it directly would not be easy, but we can add assertions to check the alloc bit (guarded by extreme_assertions).

I would suggest adding some assertions to check that the alloc bit is correct. These are just some examples:

  • when setting the alloc bit for an object, the bit should be 0 beforehand (add the assertion to set_alloc_bit()).
  • when unsetting the alloc bit for an object, the bit should be 1 beforehand (add the assertion to unset_alloc_bit()).
  • in trace_object() for each policy, if global_alloc_bit is enabled, the bit should be 1 before tracing.

Then I would suggest adding an alloc bit implementation for the Immix space and the lock-free immortal space.
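The first two suggested assertions could look roughly like the following standalone sketch. In mmtk-core they would be guarded by the extreme_assertions feature and operate on side metadata; the plain `bits` slice here is a hypothetical stand-in so the sketch can run on its own.

```rust
// Minimal sketch of the suggested checks. `bits` is a hypothetical
// side-metadata bitmap indexed by object slot; in mmtk-core these
// assertions would be gated behind the `extreme_assertions` feature.
fn set_alloc_bit(bits: &mut [u8], i: usize) {
    // The bit must be 0 before we set it: setting it twice indicates a bug.
    assert!(bits[i / 8] & (1 << (i % 8)) == 0, "alloc bit already set");
    bits[i / 8] |= 1 << (i % 8);
}

fn unset_alloc_bit(bits: &mut [u8], i: usize) {
    // The bit must be 1 before we unset it: the object must have been allocated.
    assert!(bits[i / 8] & (1 << (i % 8)) != 0, "alloc bit not set");
    bits[i / 8] &= !(1 << (i % 8));
}
```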

@@ -38,7 +40,11 @@ impl<VM: VMBinding> SFT for CopySpace<VM> {
fn is_sane(&self) -> bool {
!self.from_space()
}
fn initialize_object_metadata(&self, _object: ObjectReference, _alloc: bool) {}
fn initialize_object_metadata(&self, _object: ObjectReference, _alloc: bool) {
// This is only effective in slow path. We do not call post_alloc in fast path
qinsoon (Member):

I don't understand this comment. Do you mean that, for OpenJDK, we do not call post_alloc in the fastpath? If so, this is irrelevant to mmtk-core and the line should be removed.

@@ -136,8 +138,14 @@ impl<VM: VMBinding> MarkSweep<VM> {
options: Arc<UnsafeOptionsWrapper>,
) -> Self {
let heap = HeapMeta::new(HEAP_START, HEAP_END);
#[cfg(feature = "global_alloc_bit")]
qinsoon (Member):

Some comments are needed here, because this looks counter-intuitive. Normally people would expect that, with global_alloc_bit, we need to add the alloc bit spec here. But we do not add it for global_alloc_bit, because in that case the spec is already added to the global specs in new_global_specs().
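The shape qinsoon describes can be sketched as follows. The string-based spec representation and both function names here are illustrative stand-ins, not mmtk-core's actual API; the point is only which side of the feature gate adds the spec.

```rust
// With `global_alloc_bit`, the alloc bit spec is contributed once by the
// global spec list, so the per-plan list must NOT add it again.
fn new_global_specs(extra: &[&'static str]) -> Vec<&'static str> {
    let mut specs: Vec<&'static str> = Vec::new();
    // When the feature is enabled, the alloc bit spec lives here, globally.
    #[cfg(feature = "global_alloc_bit")]
    specs.push("ALLOC_BIT_SPEC");
    specs.extend_from_slice(extra);
    specs
}

fn plan_local_specs() -> Vec<&'static str> {
    let mut specs: Vec<&'static str> = Vec::new();
    // The counter-intuitive cfg: the plan adds the spec only when the
    // feature is OFF, because otherwise new_global_specs() already did.
    #[cfg(not(feature = "global_alloc_bit"))]
    specs.push("ALLOC_BIT_SPEC");
    specs
}
```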

@@ -172,7 +172,7 @@ impl<VM: VMBinding> MallocSpace<VM> {
metadata: SideMetadataContext {
global: global_side_metadata_specs,
local: metadata::extract_side_metadata(&[
MetadataSpec::OnSide(ALLOC_SIDE_METADATA_SPEC),
// MetadataSpec::OnSide(ALLOC_SIDE_METADATA_SPEC),
qinsoon (Member):

Remove this instead of commenting it out.

log_num_of_bits: 0,
log_min_obj_size: constants::LOG_MIN_OBJECT_SIZE as usize,
};
// pub(crate) const ALLOC_SIDE_METADATA_SPEC: SideMetadataSpec = SideMetadataSpec {
qinsoon (Member):

Remove this instead of commenting it out.

@@ -153,11 +153,11 @@ pub fn is_alloced(object: ObjectReference) -> bool {
}

pub fn is_alloced_object(address: Address) -> bool {
qinsoon (Member):

I feel most of these functions can be removed. You can call alloc_bit::... directly.

@@ -106,6 +113,7 @@ pub const GLOBAL_SIDE_METADATA_VM_BASE_OFFSET: SideMetadataOffset =
// - Offset after Immix block mark byte
// 6 - Immix chumk-map mark byte:
// - Offset after Immix chumk-map mark byte
// 1 - MarkSweep Active Page byte
qinsoon (Member):

This seems to be a merging problem.

log_num_of_bits: 0,
log_min_obj_size: constants::LOG_MIN_OBJECT_SIZE as usize,
};

qinsoon (Member):

Comments are needed for the functions below. For unsafe functions, we also need comments explaining why they are unsafe and how to call them correctly.
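The requested documentation style for an unsafe function might look like the sketch below. The function itself (`load_byte`) is hypothetical, not from mmtk-core; the point is the `# Safety` section stating the caller's obligations.

```rust
/// Loads the byte at `addr` (illustrative example of documenting `unsafe`).
///
/// # Safety
///
/// The caller must ensure that `addr` is non-null, properly aligned, and
/// points to mapped, readable memory for the duration of the call;
/// otherwise this raw-pointer dereference is undefined behavior.
unsafe fn load_byte(addr: *const u8) -> u8 {
    // Safe only under the contract documented above.
    *addr
}
```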


qinsoon commented Aug 19, 2021

Is there any other change you would like to add to this PR? Otherwise, I think we can merge it. @YiluoWei

YiluoWei (Collaborator, Author)

> Is there any other change you would like to add to this PR? Otherwise, I think we can merge it. @YiluoWei

I think we can merge it.

@qinsoon qinsoon merged commit d2324cf into mmtk:master Aug 19, 2021
@qinsoon qinsoon mentioned this pull request Jun 21, 2023