Fix wrong columns getting processed on a CGC change #7792
base: unstable
Conversation
I've labeled it as
Some required checks have failed. Could you please take a look @jimmygchen? 🙏
…l verify and propagate them.
force-pushed from ab0e887 to fb08ab2
…consistency with RPC and better separation of concerns
.map_err(|e| format!("Failed to compute custody groups: {:?}", e))?;
chain
    .data_availability_checker
    .custody_context()
TODO: we can probably remove the `clone` in `custody_context()`.
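A possible shape for that cleanup, assuming `CustodyContext` is held behind an `Arc` (the field and method names here are stand-ins, not the actual Lighthouse code):

```rust
use std::sync::Arc;

// Hypothetical layout only: if the checker keeps the context behind an `Arc`,
// the getter can hand out a cheap `Arc` clone, or a plain borrow, instead of
// deep-cloning the whole context.
pub struct CustodyContext {
    // custody state elided
}

pub struct DataAvailabilityChecker {
    custody_context: Arc<CustodyContext>,
}

impl DataAvailabilityChecker {
    /// Cheap: only bumps the `Arc` reference count.
    pub fn custody_context(&self) -> Arc<CustodyContext> {
        self.custody_context.clone()
    }

    /// Or avoid cloning entirely when the caller only needs a reference.
    pub fn custody_context_ref(&self) -> &CustodyContext {
        &self.custody_context
    }
}
```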
all_custody_groups_ordered: Vec<CustodyIndex>,
spec: &ChainSpec,
) -> Result<(), String> {
    let mut ordered_custody_groups = vec![];
I don't understand this. We basically want to store an ordering of data columns for a given `NodeId`. Why are we storing columns for every custody index here? Can't we just compute the ordering for the node id once and take a slice `[..cgc]` for the required CGC value?
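A minimal sketch of that suggestion, with assumed type and method names:

```rust
// Illustrative sketch only: keep the full node-specific ordering of custody
// groups, and derive the active set by taking the first `cgc` entries
// whenever the CGC changes.
type CustodyIndex = u64;

struct OrderedCustody {
    /// Full ordering of custody groups for this node, computed once from its node id.
    all_custody_groups_ordered: Vec<CustodyIndex>,
}

impl OrderedCustody {
    /// The custody groups for a given custody group count (CGC) are just a
    /// prefix of the precomputed ordering.
    fn custody_groups(&self, cgc: usize) -> &[CustodyIndex] {
        let end = cgc.min(self.all_custody_groups_ordered.len());
        &self.all_custody_groups_ordered[..end]
    }
}
```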
I like the direction of this, it's less invasive than I thought originally.
Issue Addressed
This PR fixes a bug where the wrong columns could get processed immediately after a CGC (custody group count) increase.
Scenario:
Proposed fix
In the `handle_gossip` function, we compute `should_process` for each data column that arrived, and pass it to the beacon processor - this column will get verified, propagated, but NOT processed / imported. I've moved this logic to `DataAvailabilityChecker` for consistency with RPC and better separation of concerns. `should_process` is based on `globals.sampling_columns` (already updated to the new CGC).
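A minimal sketch of this flow, with illustrative identifiers only (the real types and functions in Lighthouse differ):

```rust
// The availability checker decides per column whether it should be imported,
// and the gossip handler forwards that flag to the beacon processor, so a
// column is still verified and propagated even when it is not processed.
type ColumnIndex = u64;

struct DataAvailabilityChecker {
    /// Columns this node currently samples, already updated to the new CGC.
    sampling_columns: Vec<ColumnIndex>,
}

impl DataAvailabilityChecker {
    /// True only for columns in the node's current sampling set.
    fn should_process_column(&self, column_index: ColumnIndex) -> bool {
        self.sampling_columns.contains(&column_index)
    }
}

fn handle_gossip_data_column(
    checker: &DataAvailabilityChecker,
    column_index: ColumnIndex,
) -> bool {
    // The returned `should_process` flag is passed to the beacon processor:
    // the column is always verified and propagated, but only imported when true.
    checker.should_process_column(column_index)
}
```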
The trade-off here is we'd have to initialise the `CustodyContext` with the results from `get_custody_groups` after the network is initialised, which isn't ideal, but it will allow us to simplify and clean things up a lot by keeping custody-related logic in `CustodyContext`. With this approach, we could potentially remove `network_globals.sampling_columns` too - I haven't done that in this PR though; I wanted to see how this approach looks and get some feedback first.
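A rough sketch of that initialisation order, under the assumption that the context starts empty and is only populated once the network knows the node's custody groups (all identifiers are illustrative, not the actual Lighthouse code):

```rust
type CustodyIndex = u64;

#[derive(Default)]
struct CustodyContext {
    custody_groups: Vec<CustodyIndex>,
}

impl CustodyContext {
    fn set_custody_groups(&mut self, groups: Vec<CustodyIndex>) {
        self.custody_groups = groups;
    }
}

fn example_startup_order() {
    // 1. The beacon chain (and its CustodyContext) is constructed first,
    //    without any custody information.
    let mut custody_context = CustodyContext::default();

    // 2. The network is initialised, the node id becomes known, and the custody
    //    groups are computed (in this PR, via something like `get_custody_groups`).
    let groups_from_network: Vec<CustodyIndex> = vec![3, 17, 42];

    // 3. The results are handed to the context, which then owns custody decisions.
    custody_context.set_custody_groups(groups_from_network);
}
```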
The other options considered:

- Compute the `node_id` before `BeaconChain` is constructed, and have the beacon chain also compute the custody columns. I think this leaks networking types into the `BeaconChain`, so IMO it isn't great for separation of concerns, and it also introduces a circular dependency which will bite us later.