
Ensure outgoing XCMP respects unincluded segment #2471

Closed
rphmeier opened this issue Apr 20, 2023 · 8 comments

Comments

@rphmeier
Contributor

/// A means of figuring out what outbound XCMP messages should be being sent.
pub trait XcmpMessageSource {
	/// Take outbound XCMP messages from the queues of up to `maximum_channels` channels,
	/// returning the destination and message data for each.
	fn take_outbound_messages(maximum_channels: usize) -> Vec<(ParaId, Vec<u8>)>;
}

I believe this trait should account not only for the maximum number of messages taken from storage, but also for the segment limitations, if we want to make this panic correct.
The segment primitives should probably be extracted into a common crate so they are accessible from both pallets.

Originally posted by @slumber in #2438 (comment)
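
As a purely illustrative sketch of the accounting described above (the SegmentBandwidth type, the closure parameter, and the trait name are hypothetical, not part of the actual API), the message source could be handed the per-channel bandwidth that the unincluded segment still leaves available:

use cumulus_primitives_core::ParaId;

/// Hypothetical per-channel bandwidth still available across the whole
/// unincluded segment (illustrative only, not the real primitives).
pub struct SegmentBandwidth {
	/// Messages that may still be sent on the channel.
	pub remaining_messages: u32,
	/// Bytes that may still be sent on the channel.
	pub remaining_bytes: u32,
}

/// Sketch of a segment-aware variant of `XcmpMessageSource`.
pub trait SegmentAwareXcmpMessageSource {
	/// Take outbound messages for up to `maximum_channels` channels, never
	/// exceeding the `SegmentBandwidth` reported for each destination.
	fn take_outbound_messages<F: Fn(ParaId) -> SegmentBandwidth>(
		maximum_channels: usize,
		remaining: F,
	) -> Vec<(ParaId, Vec<u8>)>;
}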

@rphmeier
Contributor Author

rphmeier commented May 2, 2023

OK, UMP is fully addressed as of #2501.

HRMP will require some tweaks to the ChannelInfo implementation in parachain-system. We need to compute and store the remaining bandwidth for all channels during inherent processing, and then use that stored value to compute max_size_now correctly in the ChannelInfo implementation.
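
For illustration, a minimal sketch of that split, with made-up names standing in for the real FRAME storage and ChannelInfo API:

use std::collections::BTreeMap;

// Illustrative stand-ins only; the real code lives in parachain-system and
// uses FRAME storage plus the cumulus `ParaId` type.
type ParaId = u32;

/// Hypothetical per-channel bandwidth snapshot stored while processing the
/// `set_validation_data` inherent.
#[derive(Clone, Copy)]
struct RemainingChannelBandwidth {
	/// Bytes the channel can still take once everything already promised by
	/// the unincluded segment is subtracted.
	remaining_size: u32,
	/// Per-message cap from the relay-chain configuration.
	max_message_size: u32,
}

/// Sketch of the lookup a `ChannelInfo`-style implementation would do: the
/// largest message we may send right now is bounded by both limits.
fn max_size_now(
	stored: &BTreeMap<ParaId, RemainingChannelBandwidth>,
	recipient: ParaId,
) -> Option<u32> {
	stored.get(&recipient).map(|b| b.max_message_size.min(b.remaining_size))
}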

Investigating this also raised another issue, which is weight calculation. We need to calculate the weight of on_finalize in on_initialize, but the code executed in on_finalize depends on the values read out of the host config/storage proof in the set_validation_data inherent. See this comment for an explanation of what was done before asynchronous backing:

// Here, in `on_initialize` we must report the weight for both `on_initialize` and
// `on_finalize`.
//
// One complication here is that the `host_configuration` is updated by an inherent
// and those are processed after the block initialization phase. Therefore, we have to
// be content with the configuration as per the previous block. That means that
// the configuration can be either stale (or absent altogether at the beginning of the
// chain).
//
// In order to mitigate this, we do the following. For now, we are only concerned
// about `hrmp_max_message_num_per_candidate`. We reserve the amount of weight needed
// to process the number of HRMP messages allowed by the potentially stale
// configuration. In `on_finalize` we will process only the minimum of the announced
// number of messages and the actual number allowed by the fresh configuration.
//
// In the common case, they will be the same. If the actual value is smaller than the
// announced one, we waste some of the reserved weight. If the actual value is greater
// than the announced one, we miss the opportunity to send a couple of messages.

We may need to adopt a similar strategy of "predicting" the amount of bandwidth we can use in on_initialize before we get the inherent data, and then using that as a maximum in on_finalize, although it's really inconvenient and stupid. It makes me think we need to come up with ways for the relay chain storage proof to be processed during on_initialize instead. cc @bkchr @skunert @ggwpez any ideas?
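
A minimal sketch of that "predict, then cap" pattern, assuming a fixed per-message weight and using plain integers in place of the real Weight type (names are illustrative):

// Illustrative sketch only: reserve weight in `on_initialize` from the
// possibly-stale configuration, then never exceed that budget in `on_finalize`.

/// `on_initialize`: reserve weight for as many HRMP messages as the stale
/// (previous-block) configuration allows. This is the announced budget.
fn reserved_weight(stale_max_messages: u32, per_message_weight: u64) -> u64 {
	u64::from(stale_max_messages) * per_message_weight
}

/// `on_finalize`: by now the inherent has delivered the fresh configuration,
/// but we may only process what was announced (and paid for) up front.
fn messages_to_process(announced_max: u32, fresh_max_messages: u32) -> u32 {
	announced_max.min(fresh_max_messages)
}

fn main() {
	// Example: the stale config allowed 5 messages, the fresh config allows 3,
	// so we process 3 and waste the weight reserved for the other 2.
	let budget = reserved_weight(5, 1_000);
	assert_eq!(budget, 5_000);
	assert_eq!(messages_to_process(5, 3), 3);
}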

@xlc
Contributor

xlc commented May 2, 2023

Or use on_post_inherent to specify the weight: paritytech/polkadot-sdk#312

@bkchr
Member

bkchr commented May 2, 2023

It makes me think we need to come up with ways for the relay chain storage proof to be processed during on_initialize instead. cc @bkchr @skunert @ggwpez any ideas?

This would be: paritytech/polkadot-sdk#82

However, I don't think that we need this here. In general I would assume that the host configuration is changed in a way that allows "more"; reducing something is probably never going to happen. Nevertheless, we can use the "old" host configuration to calculate the weight. We then just need to ensure that we only process min(old_config.hrmp_max_message_num_per_candidate, new_config.hrmp_max_message_num_per_candidate) messages. Then we stay within the weight we set in on_initialize and also honor potential config updates.

@ggwpez
Member

ggwpez commented May 2, 2023

Looks like the implementation of XcmpMessageSource in the XcmpQueue does respect the relay and receiver suspension logic.
So in theory that expect should be fine, but we obviously need a test to ensure that the implementation is correct.

About the HostConfig/weight thing:
Why is this being set in every block? The HostConfig only changes on epoch changes, right? So most of the time your weight/bandwidth calculations should be correct when using the current config.
The proper solution would probably be the on_post_inherent mentioned above…

@slumber
Contributor

slumber commented May 2, 2023

Looks like the implementation of XcmpMessageSource in the XcmpQueue does respect the relay and receiver suspension logic. So in theory that expect should be fine, but we obviously need a test to ensure that the implementation is correct.

How is it ensured for the whole segment of unincluded blocks?

@rphmeier
Contributor Author

rphmeier commented May 2, 2023

It may be that only the number of channels we can send messages on impacts the weight, so that logic would remain correct. Still, we need to determine the maximum sizes at every block and then load that maximum repeatedly from within the ChannelInfo implementation.

It would be cleaner for the XcmpQueue pallet to take an &impl ChannelInfo as an argument, so we could make it stateful and not have to load from storage on every request.
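
As a rough illustration of that shape (the trait and function below are simplified stand-ins, not the actual XcmpQueue code), the queue logic would receive the channel info by reference instead of reading storage on every call:

/// Simplified stand-in for the channel info lookup (not the real trait).
trait ChannelInfoLike {
	/// Largest message the channel to `recipient` can take right now, if any.
	fn max_size_now(&self, recipient: u32) -> Option<u32>;
}

/// The queue logic takes the info by reference, so the caller can load the
/// state once (and cache it) rather than hitting storage per request.
fn take_outbound_messages_with(
	info: &impl ChannelInfoLike,
	maximum_channels: usize,
	queued: &[(u32, Vec<u8>)],
) -> Vec<(u32, Vec<u8>)> {
	let mut out = Vec::new();
	for (recipient, msg) in queued {
		if out.len() == maximum_channels {
			break;
		}
		// Only take the message if it fits the channel's current limit.
		let fits = info
			.max_size_now(*recipient)
			.map_or(false, |max| msg.len() as u32 <= max);
		if fits {
			out.push((*recipient, msg.clone()));
		}
	}
	out
}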

@rphmeier
Contributor Author

Turned out the simplest way to do this was to update the RelevantMessagingState in storage in on_finalize.
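
Conceptually (illustrative types only, not the pallet's actual storage items), the idea is to write back the messaging state with this block's sends already accounted for, so the next block in the unincluded segment starts from the reduced limits:

/// Illustrative stand-in for one outbound channel's state as seen by the
/// stored messaging snapshot.
#[derive(Clone, Copy)]
struct EgressChannelState {
	/// Bytes currently occupying the channel.
	total_size: u32,
	/// Messages currently occupying the channel.
	msg_count: u32,
}

/// Called (conceptually) from `on_finalize` for every message sent in this
/// block, so the persisted snapshot already reflects the segment's usage.
fn note_sent(state: &mut EgressChannelState, sent_bytes: u32) {
	state.total_size = state.total_size.saturating_add(sent_bytes);
	state.msg_count = state.msg_count.saturating_add(1);
}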

@slumber
Contributor

slumber commented Aug 2, 2023

Closed by #2948

@slumber closed this as completed Aug 2, 2023