
elastic scaling: add core selector to cumulus #5372

Merged
merged 33 commits into master from alindima/cumulus-open-collator-set on Sep 23, 2024

Conversation

@alindima (Contributor) commented Aug 15, 2024

Partially implements #5048

  • adds a core selection runtime API to cumulus and a generic way of configuring it for a parachain
  • modifies the slot-based collator to utilise the claim queue and the generic core selection

What's left to be implemented (in a follow-up PR):

  • add the UMP signal for core selection into the parachain-system pallet

View the RFC for more context: polkadot-fellows/RFCs#103

@alindima added the I5-enhancement (An additional feature request.) and T9-cumulus (This PR/Issue is related to cumulus.) labels on Aug 15, 2024
@@ -332,6 +332,10 @@ pub mod rpsr_digest {
	}
}

/// The default claim queue offset to be used if it's not configured/accessible in the parachain
/// runtime
pub const DEFAULT_CLAIM_QUEUE_OFFSET: u8 = 1;
Contributor Author (alindima):

Open question: should this be 0 or 1?

0 would preserve backwards compatibility with the current collators (they are all building right at the top of the claim queue).
1 would be what core-sharing parachains should use.

Contributor:

1 should be the default value. With 0 they can't fully use the 2s of execution if the core is shared, and I can't find any reason why this would be useful in practice.

Member:

I think the default needs to be configurable by the parachain ... or we derive it from async backing settings. In other words, either we can make a default that does the right thing for all chains, or we should not provide one but make it configurable, with simple docs explaining when to use what (async backing/6s block times yet or not?).

@bkchr (Member) commented Aug 15, 2024

> adds a claim queue offset storage item to the parachain-system pallet

Can you please explain why we need this? AFAIR the parachain runtime does not really care on which core it is built; it is more about ensuring that someone is not sending a block built for core X to be included on core Y.

@bkchr (Member) commented Aug 15, 2024

IMO passing a pre-digest that contains the core would be a better way here. This would also make it quite easy to find out which core a block was built for.

@alindima (Contributor Author)

> adds a claim queue offset storage item to the parachain-system pallet
>
> Can you please explain why we need this? AFAIR the parachain runtime does not really care on which core it is built; it is more about ensuring that someone is not sending a block built for core X to be included on core Y.

The RFC has more details about it. @sandreim may have more inside info, but from my understanding:

The parachain runtime will indirectly specify the core on which the candidate is valid (to prevent the situation you mentioned). The way it specifies the core is by sending a UMP message with a sequence number (which can be the parachain block number) and a claim queue offset. The relay chain and the collator node will then use this info to determine the core index. This was done to avoid passing the claim queue snapshot into the parachain's inherent data.

So indeed the parachain doesn't really care on which core it is built, but it needs to somehow make sure that it commits to one core. That's one reason why we don't specify the core in the UMP message.
But this sequence number is not enough, because the claim queue has multiple entries for each core. So the parachain should also pick at which depth of the claim queue it's building.
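To make the mechanism above concrete, here is a minimal illustrative Rust sketch, loosely following RFC 103 — not the PR's actual types; `UmpSignal`, `resolve_core_index` and the plain slice snapshot are assumptions made for the example. The parachain commits to a selector plus a claim queue offset, and the relay chain/collator resolves that pair to a concrete core index against the claim queue.

```rust
/// Selector the parachain commits to (e.g. derived from its block number).
#[derive(Clone, Copy, Debug)]
pub struct CoreSelector(pub u8);

/// Depth in the claim queue the selector refers to.
#[derive(Clone, Copy, Debug)]
pub struct ClaimQueueOffset(pub u8);

/// The commitment sent via UMP: no concrete core index, just the pair above.
pub enum UmpSignal {
    SelectCore(CoreSelector, ClaimQueueOffset),
}

/// How the relay chain / collator node resolves the commitment: take the
/// para's core assignments at the given claim queue offset and index into
/// them with the selector, modulo the number of assignments.
pub fn resolve_core_index(
    assigned_cores_at_offset: &[u32],
    selector: CoreSelector,
) -> Option<u32> {
    if assigned_cores_at_offset.is_empty() {
        return None;
    }
    Some(assigned_cores_at_offset[selector.0 as usize % assigned_cores_at_offset.len()])
}
```

The parachain only commits to two small numbers; the claim queue lookup needed to turn them into a concrete core index stays on the relay chain/collator side, which is why the snapshot doesn't have to go into the parachain's inherent data.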


@sandreim (Contributor) commented Aug 15, 2024

> IMO passing a pre-digest that contains the core would be a better way here. This would also make it quite easy to find out which core a block was built for.

Can you detail how it would work? The only way the runtime could know the exact core index would require seeing the claim queue assignments, and that is more expensive in terms of storage proof compared to the current proposal.

@bkchr (Member) commented Aug 15, 2024

> IMO passing a pre-digest that contains the core would be a better way here. This would also make it quite easy to find out which core a block was built for.
>
> Can you detail how it would work? The only way the runtime could know the exact core index would require seeing the claim queue assignments, and that is more expensive in terms of storage proof compared to the current proposal.

The block author just provides a pre-digest, like, for example, the BABE randomness or the AURA author index. The runtime can read it from there and will then be able to expose the information at validation. I don't see the need for the parachain runtime to know anything about the claim queue.

@sandreim (Contributor)

> IMO passing a pre-digest that contains the core would be a better way here. This would also make it quite easy to find out which core a block was built for.
>
> Can you detail how it would work? The only way the runtime could know the exact core index would require seeing the claim queue assignments, and that is more expensive in terms of storage proof compared to the current proposal.
>
> The block author just provides a pre-digest, like, for example, the BABE randomness or the AURA author index. The runtime can read it from there and will then be able to expose the information at validation. I don't see the need for the parachain runtime to know anything about the claim queue.

I am not familiar with that code, but I will look into it. If we do this, does the collator actually have any control over the core index? Also, how would the parachain runtime verify if the core index is correct?

@bkchr (Member) commented Aug 15, 2024

> Also, how would the parachain runtime verify if the core index is correct?

I first thought it doesn't need to, but we need to have at least the selector. I mean, the runtime doesn't need to care about the index as long as it outputs the correct one. I left some questions on the RFC. I think once they are answered, I'll have a better understanding. 👍

@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable 1/3
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7055802

@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable 2/3
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7055803

@skunert (Contributor) commented Aug 16, 2024

I am also having some problems fully wrapping my head around what the impact of a claim queue offset is.
Let's say we look at para A with elastic scaling and 2s block times, and a claim queue offset of 2. If I have the following claim queue:

| Core | Pos 0  | Pos 1  | Pos 2  | Pos 3  |
|------|--------|--------|--------|--------|
| 0    | Para B | Para B | Para A | Para A |
| 1    | Para A | Para A | Para A | Para A |
| 2    | Para B | Para B | Para A | Para A |

Let's imagine an offset of 2. During authoring I check the claim queue at position 2 and see that I have cores 0, 1 and 2. Now I submit 3 blocks at this relay parent. If, for some reason, previous collators skipped a block, would the claims for core 1 at positions 0 and 1 interfere with the candidate chain? Basically, collators before me skipped some blocks, and one of the blocks I submit on core 1 gets an earlier claim but might depend on blocks I submitted on cores 0 and 2? Or does my mental model not match what is actually happening?

@alindima (Contributor Author)

> I am also having some problems fully wrapping my head around what the impact of a claim queue offset is. Let's say we look at para A with elastic scaling and 2s block times, and a claim queue offset of 2. If I have the following claim queue:
>
> | Core | Pos 0  | Pos 1  | Pos 2  | Pos 3  |
> |------|--------|--------|--------|--------|
> | 0    | Para B | Para B | Para A | Para A |
> | 1    | Para A | Para A | Para A | Para A |
> | 2    | Para B | Para B | Para A | Para A |
>
> Let's imagine an offset of 2. During authoring I check the claim queue at position 2 and see that I have cores 0, 1 and 2. Now I submit 3 blocks at this relay parent. If, for some reason, previous collators skipped a block, would the claims for core 1 at positions 0 and 1 interfere with the candidate chain? Basically, collators before me skipped some blocks, and one of the blocks I submit on core 1 gets an earlier claim but might depend on blocks I submitted on cores 0 and 2? Or does my mental model not match what is actually happening?

Collators don't submit to a specific core. They submit to the nth assigned core in the claim queue.
But the relay chain block author only backs a candidate on claims in the claim queue which are in the top position (offset 0).
So the candidates you authored for cores 0 and 2 will be backed sooner than you expected (both on core 1).

So if the previous collators skipped authoring, you can use their slots in the claim queue.

@sandreim correct me if I'm wrong.
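To make the "nth assigned core" behaviour concrete with the claim queue from the example above, here is a small, self-contained sketch (illustrative only; the integer type aliases and the plain `BTreeMap` snapshot are simplifications, not cumulus types):

```rust
use std::collections::BTreeMap;

type CoreIndex = u32;
type ParaId = u32;

/// Collect the cores assigned to `para` at `offset` in a claim queue
/// snapshot (core index -> queued para ids). The collator then builds its
/// n-th candidate for the n-th entry of this list, not for a fixed core.
fn assigned_cores_at_offset(
    claim_queue: &BTreeMap<CoreIndex, Vec<ParaId>>,
    para: ParaId,
    offset: usize,
) -> Vec<CoreIndex> {
    claim_queue
        .iter()
        .filter(|(_, queue)| queue.get(offset) == Some(&para))
        .map(|(core, _)| *core)
        .collect()
}

fn main() {
    // Claim queue from the example above; Para A = 1, Para B = 2.
    let claim_queue = BTreeMap::from([
        (0, vec![2, 2, 1, 1]),
        (1, vec![1, 1, 1, 1]),
        (2, vec![2, 2, 1, 1]),
    ]);
    // At offset 2, Para A is assigned cores 0, 1 and 2.
    let cores = assigned_cores_at_offset(&claim_queue, 1, 2);
    assert_eq!(cores, vec![0, 1, 2]);
}
```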

@@ -395,4 +399,10 @@ sp_api::decl_runtime_apis! {
		/// we are collecting the collation info for.
		fn collect_collation_info(header: &Block::Header) -> CollationInfo;
	}

	/// Runtime api to fetch the claim queue offset.
	pub trait FetchClaimQueueOffset {
Member:

IMO we should probably tag the UnincludedSegment API as deprecated and then merge both of the functions into a new ParachainConsensusExt or similar. Or is this a dumb idea?

Contributor:

Sounds good to me

Contributor Author (alindima):

I don't think this would be very useful. I think it's nice to have this separation.
Also, there is this

if we need a trait that groups all runtime APIs
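For reference, a trimmed-down sketch of how a runtime API like the FetchClaimQueueOffset one in the diff above is declared with `sp_api::decl_runtime_apis!` (the method name and return type here are assumptions for illustration; the PR's final API surface moved towards a core selector, as the title change below shows):

```rust
// Illustrative declaration only; requires the `sp-api` crate as a dependency.
sp_api::decl_runtime_apis! {
	/// Runtime API exposing the claim queue offset the parachain builds at.
	pub trait FetchClaimQueueOffset {
		fn claim_queue_offset() -> u8;
	}
}
```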

@alindima changed the title from "elastic scaling: add claim queue offset to cumulus" to "elastic scaling: add core selector to cumulus" on Sep 16, 2024
@burdges commented Sep 17, 2024

> all elastic scaling cores are handled by the same collator

That'd be a serious limit on elastic scaling. You can make a parachain with faster collators, but you cannot speed up validators so easily. I suppose the conversation moved on from there though.

@sandreim (Contributor) left a comment

LGTM but left a few comments.

Comment on lines +470 to +471
self.last_data = Some((relay_parent, data));
Ok(&mut self.last_data.as_mut().expect("last_data was just set above").1)
Contributor:

This feels a bit clunky. Can we just check/update if needed beforehand, and then in this match just return a mutable reference to the inner value?

Contributor Author (alindima):

I've tried making this look nicer, but this was the best I could do :D

- name: rococo-parachain-runtime
  bump: minor
- name: polkadot-parachain-bin
  bump: major
Contributor:

How is this a major bump? Is anyone using the binary broken?

Contributor Author (alindima):

It's a major bump on polkadot-parachain-lib because we add a new trait bound to a public trait.
When I initially created this prdoc, polkadot-parachain-lib was part of the bin crate, so that's why I bumped the bin crate. Fixed it now.

@@ -186,6 +191,25 @@ pub mod ump_constants {
	pub const MESSAGE_SIZE_FEE_BASE: FixedU128 = FixedU128::from_rational(1, 1000); // 0.001
}

/// Trait for selecting the next core to build the candidate for.
pub trait SelectCore {
	fn select_core_for_child() -> (CoreSelector, ClaimQueueOffset);
Contributor:

Why the for_child suffix?

Contributor Author (alindima):

So that it better expresses what this API returns: it doesn't return the core selector for the current block, it returns the core selector for the next (child) block.
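As an illustration of the trait in the diff above, here is a minimal sketch assuming a block-number-based rotation (RotateByBlockNumber and its hard-coded block number are made up for the example; this is not necessarily the PR's default implementation):

```rust
pub struct CoreSelector(pub u8);
pub struct ClaimQueueOffset(pub u8);
pub const DEFAULT_CLAIM_QUEUE_OFFSET: u8 = 1;

pub trait SelectCore {
    /// Core selector and claim queue offset for the *next* (child) block.
    fn select_core_for_child() -> (CoreSelector, ClaimQueueOffset);
}

/// Hypothetical implementor: rotate the selector with the child block's
/// number and use the default claim queue offset.
pub struct RotateByBlockNumber;

impl RotateByBlockNumber {
    // Stand-in for reading the current block number from the runtime.
    fn current_block_number() -> u64 {
        42
    }
}

impl SelectCore for RotateByBlockNumber {
    fn select_core_for_child() -> (CoreSelector, ClaimQueueOffset) {
        let child = Self::current_block_number() + 1;
        // The low byte is enough: the selector is only used modulo the
        // number of cores assigned to the para in the claim queue.
        (CoreSelector(child as u8), ClaimQueueOffset(DEFAULT_CLAIM_QUEUE_OFFSET))
    }
}
```

Deriving the selector from the child block's number keeps consecutive blocks cycling through whatever cores the para is assigned at the chosen offset, without the runtime ever needing the concrete core indices.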

@skunert (Contributor) left a comment

Looks really good, only nits. I like the trait-based config approach we have here now.

cumulus/pallets/parachain-system/src/lib.rs (outdated; resolved)
if !claimed_cores.insert(*core_index) {
	tracing::debug!(
		target: LOG_TARGET,
		"Core {:?} was already claimed at this relay chain slot",
Contributor:

This is basically the condition described above, when we don't have enough cores scheduled to support the para's slot duration (at least without merging of para blocks).

Contributor Author (alindima):

Yes

cumulus/client/consensus/aura/src/collators/mod.rs (outdated; resolved)
@@ -363,4 +374,11 @@ where
	async fn version(&self, relay_parent: PHash) -> RelayChainResult<RuntimeVersion> {
		(**self).version(relay_parent).await
	}

	async fn claim_queue(
Contributor:

With the introduction of this, can we remove availability_cores from the interface? I did not check, but it should not be used anymore now, right?

Contributor Author (alindima):

Yes, we could. But is there a good reason for that? There's still information that the availability-cores API contains that you cannot get elsewhere, like whether a core is occupied or not. Maybe this will come in handy at some point in the future.

@alindima added this pull request to the merge queue Sep 23, 2024
Merged via the queue into master with commit b9eb68b Sep 23, 2024
199 of 209 checks passed
@alindima deleted the alindima/cumulus-open-collator-set branch September 23, 2024 08:17