This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

XCM configs review within release process #1682

Open
muharem opened this issue Sep 26, 2022 · 6 comments
Labels
A3-in_progress Pull request is in progress. No review needed at this stage.

Comments

@muharem
Contributor

muharem commented Sep 26, 2022

Problem

XCM configs require a high confidence level for every release. The possible issues and the review process are not well defined.

Context

XCM(P), due to its power and API flexibility, has a rich and complex runtime configuration. In addition, the functionality of use cases such as teleport depends on the XCM configuration and call weights of every chain involved in the communication.

The risk of misconfiguration, and its potentially high cost, requires a high confidence level in the correctness of the XCM configs for every release.

In this document (and the comments) we will propose how we can achieve:

  • a high confidence level in the correctness of the XCM configs
  • reasonable check times (up to 4 hours for a human check, up to 12 hours for a machine check)

Part of the initiative #1657.

Proposal 1, automation

There are currently two main ways to verify the configs during the release process: a manual review of the config files, and integration tests.
The first has some drawbacks:

  • it requires a person who is proficient with XCM configs and familiar with all involved chains and their possible cases;
  • it is time-consuming;
  • it is prone to human error.

The second has its own:

  • they must cover all expected and unexpected (fail) cases to provide high confidence,
  • those 'all cases' are hard to identify,
  • tests get outdated.

If the drawbacks of the integration-test approach can be solved, or at least controlled, the tests can help us achieve all the goals we defined and will require no manpower within the release process.

First we should try to list 'all cases' and see whether that is achievable:

  • receive non-native assets via teleport

TBD @muharem
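One way to keep the case list from going stale is to encode it in code. The sketch below is purely illustrative (the `XcmCase` enum, its variants, and `covered_by_integration_test` are hypothetical names, not part of any existing crate): the enumerated cases become enum variants, and an exhaustive `match` with no wildcard arm acts as the coverage "flag" discussed above, so adding a new case without handling it is a compile error rather than a silently outdated test suite.

```rust
// Hypothetical sketch: encode the enumerated XCM cases as an enum so the
// compiler enforces that the integration-test coverage list stays in sync.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum XcmCase {
    /// Receiving non-native assets via teleport.
    ReceiveNonNativeAssetsViaTeleport,
    // TBD: further success and fail cases go here; adding a variant breaks
    // compilation of `covered_by_integration_test` until it is handled.
}

/// Returns whether a given case is exercised by an integration test.
/// The exhaustive `match` (no `_` arm) is the coverage flag: a new,
/// unhandled case fails to compile instead of going unnoticed.
fn covered_by_integration_test(case: XcmCase) -> bool {
    match case {
        XcmCase::ReceiveNonNativeAssetsViaTeleport => true,
    }
}

fn main() {
    let all_cases = [XcmCase::ReceiveNonNativeAssetsViaTeleport];
    let uncovered: Vec<_> = all_cases
        .iter()
        .filter(|c| !covered_by_integration_test(**c))
        .collect();
    println!("uncovered cases: {:?}", uncovered);
}
```

A check like this could run in CI so a release is blocked while any listed case is marked uncovered.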

Proposal 2, manual review

Of the drawbacks of the manual review listed above, there is one we cannot solve: possible human error.
The other two require team members to be proficient with XCM configs and to get familiar with all valid XCM cases,
which brings us back to the definition of 'all cases'.

@muharem muharem added the A3-in_progress Pull request is in progress. No review needed at this stage. label Sep 26, 2022
@muharem
Contributor Author

muharem commented Oct 4, 2022

After writing down these thoughts, it feels to me that if scoping all possible success and fail cases is a doable task, then automating this step of the release process via integration tests is a good idea.

All possible cases could be written down in the XCM config file docs, together with a flag which tells whether all cases are covered by integration tests.

@NachoPal
Contributor

NachoPal commented Oct 4, 2022

All possible cases can be written down in xcm config file docs and have some flag which tells if all cases covered via integration tests.

I would say a README is a better place than the xcm_config.rs file

it feels to me if scoping all possible success and fail cases is a doable task

It is doable, and it should be done. If we are not able to identify all cases, then we have a problem. On the other hand, it is true that it is difficult to identify what SHOULD NOT happen. In fact, that list can be "endless", so we will have to identify the most important cases.

If we are going to trust the integration tests, everybody should get involved in maintaining and reviewing them. I have the feeling that they are not properly reviewed at the moment.

@NachoPal
Contributor

NachoPal commented Oct 4, 2022

In addition, I think that an xcm_config.rs diff between releases, for both the Polkadot and Statemint runtimes, would also be valuable.

It is still a manual process, but it can quickly spotlight obvious expected changes that might have an impact on the integration test results.
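The diff step itself is easy to script around `git diff` between two release tags. The sketch below is a hypothetical helper (the function name, the tag names, and the `path/to/xcm_config.rs` path are all placeholders): it builds the `git diff <old>..<new> -- <path>` invocation without running it, so the command can be inspected, logged, or spawned with `.status()` as part of a release checklist.

```rust
// Hypothetical sketch: build a `git diff` command that compares an
// xcm_config.rs between two release tags. Tags and path are placeholders.
use std::process::Command;

/// Builds `git diff <old_tag>..<new_tag> -- <path>` without executing it.
fn xcm_config_diff(old_tag: &str, new_tag: &str, path: &str) -> Command {
    let mut cmd = Command::new("git");
    cmd.arg("diff")
        .arg(format!("{old_tag}..{new_tag}"))
        .arg("--") // pathspec separator: everything after is a file path
        .arg(path);
    cmd
}

fn main() {
    // Placeholder tags; in practice these would be the previous release
    // tag and the release-candidate tag of the runtime repository.
    let cmd = xcm_config_diff("release-old", "release-new", "path/to/xcm_config.rs");
    let args: Vec<_> = cmd.get_args().map(|a| a.to_string_lossy()).collect();
    println!("git {}", args.join(" "));
}
```

Running the built command from inside a checkout of the repository would print the config diff for human review.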

@muharem
Contributor Author

muharem commented Oct 4, 2022

@NachoPal do you think that, to review it, a person should be proficient with XCM configs and familiar with all involved chains (like Statemint) in order to identify whether a given case is valid?

@NachoPal
Contributor

NachoPal commented Oct 4, 2022

@NachoPal do you think that, to review it, a person should be proficient with XCM configs and familiar with all involved chains (like Statemint) in order to identify whether a given case is valid?

Not really. I think that person should understand the expected functionality/behaviour of Statemint and its interaction with Polkadot and other parachains. Knowing how that behaviour is achieved (the XCM config) is not necessary. Obviously, some XCM knowledge is going to be a plus, mainly to understand the Event outcomes.

I am talking here about reviewing the integration tests. As for reviewing the xcm_config.rs diffs: yes, that person should be proficient with XCM.
