Ensure parachain blocks are taking the PVF validation memory limit into account #71
Comments

When relay chain validators validate a PVF, the wasm execution gets a certain amount of memory allocated. If a collator builds a block using a higher memory limit, the block may fail validation. Thus, we should ensure that an honest collator respects the memory limits set on the relay chain. Collators should actually run at 50% (the exact number is debatable) of the memory limit set for validation. The reason is that validation needs to keep more in memory (the entire PoV, the data that is written, etc.) than block building does.

The memory limit is stored in the `ExecutorParams`.
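A minimal sketch of the 50% rule, using mirror types for illustration only (the real definitions live in `polkadot-primitives`; the `MaxMemoryPages` variant is assumed here to be the relevant one):

```rust
/// Mirror types for illustration -- not the actual `polkadot-primitives`
/// definitions. `MaxMemoryPages` is assumed to carry the validation memory
/// limit in 64 KiB wasm pages.
enum ExecutorParam {
    MaxMemoryPages(u32),
}

struct ExecutorParams(Vec<ExecutorParam>);

impl ExecutorParams {
    /// The validation memory limit in wasm pages, if set on-chain.
    fn max_memory_pages(&self) -> Option<u32> {
        self.0.iter().find_map(|p| match p {
            ExecutorParam::MaxMemoryPages(pages) => Some(*pages),
        })
    }
}

/// Heap-pages budget an honest collator would use while authoring: half of
/// the validation limit, leaving headroom for the extra data (full PoV,
/// written values, etc.) that validation must hold in memory.
fn authoring_heap_pages(params: &ExecutorParams) -> Option<u32> {
    params.max_memory_pages().map(|pages| pages / 2)
}

fn main() {
    // E.g. 8192 pages (512 MiB) on validators -> 4096 pages (256 MiB) authoring.
    let params = ExecutorParams(vec![ExecutorParam::MaxMemoryPages(8192)]);
    assert_eq!(authoring_heap_pages(&params), Some(4096));
}
```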
Hi. We currently have an on-chain heap pages value that is too high (…)
The full solution will require some more work that I currently cannot fully foresee. I have some ideas, but nothing concrete. However, to move forward you could open a PR against Substrate that adds a method to …
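The suggested method is cut off above, so purely as a hypothetical sketch (invented names, not the real Substrate API), such an addition could be a setter that overrides the heap pages a locally built executor uses:

```rust
/// Hypothetical executor configuration -- the concrete type the truncated
/// comment refers to is unknown.
struct ExecutorConfig {
    /// If set, used instead of the (currently too high) on-chain heap pages.
    heap_pages_override: Option<u64>,
}

impl ExecutorConfig {
    /// The kind of method the comment suggests adding to Substrate: let the
    /// caller pin the number of heap pages explicitly.
    fn set_heap_pages(&mut self, pages: u64) {
        self.heap_pages_override = Some(pages);
    }
}
```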
For the proper solution as outlined in this issue, we will require more work to support setting the heap pages per instance (which currently doesn't work that well, as we cache instances based on the heap pages). Then we need support in the upper layers (runtime API) to forward the requested heap pages down to the executor. However, I'm still preparing the changes to the runtime API to make this possible. Once we have that, we need to think about how to forward this information to the block builder so that it respects the heap pages.
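To illustrate the caching constraint mentioned above (hypothetical types, not the actual `sc-executor` internals): if instances are cached keyed by their heap-pages setting, every distinct per-call limit misses the cache and pays for a fresh instantiation.

```rust
use std::collections::HashMap;

/// Stand-in for a compiled runtime instance (hypothetical).
struct WasmInstance {
    heap_pages: u32,
}

/// Sketch of an instance cache keyed by heap pages. A different heap-pages
/// value per call bypasses reuse, which is why proper per-instance heap
/// pages need deeper executor support.
struct InstanceCache {
    instances: HashMap<u32, WasmInstance>,
}

impl InstanceCache {
    fn instance_for(&mut self, heap_pages: u32) -> &WasmInstance {
        self.instances
            .entry(heap_pages)
            // Expensive path: build an instance with this linear memory size.
            .or_insert_with(|| WasmInstance { heap_pages })
    }
}

fn main() {
    let mut cache = InstanceCache { instances: HashMap::new() };
    // Same limit reuses the cached instance; a new limit instantiates again.
    assert_eq!(cache.instance_for(2048).heap_pages, 2048);
    assert_eq!(cache.instance_for(1024).heap_pages, 1024);
    assert_eq!(cache.instances.len(), 2);
}
```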
Thanks for the reply! I've created a new PR, paritytech/substrate#14508, following your suggestion.