beacon/engine, core/txpool, eth/catalyst: add engine_getBlobsV1 API #30537
Conversation
```go
// Map all the blobs to the transaction hash
for _, vhash := range tx.vhashes {
	if _, ok := l.blobIndex[vhash]; !ok {
		l.blobIndex[vhash] = make(map[common.Hash]struct{})
```
Do I understand this correctly: for every blob hash, you create a map. So if there are 50 blob txs, each with 2 blobs, you'll create a blobIndex with 100 entries, each referencing a map which has one single entry, the tx which references it?

Seems a bit overkill, to me. Instead of map[common.Hash]map[common.Hash]struct{}, why not make it e.g. map[common.Hash][]byte, and you can just shove any number of tx hashes into the []byte slice. Or, for that matter, a slice of hashes if you want to be a bit less low-level. A concrete sketch of that alternative is below.
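To make the suggestion concrete, here is a minimal sketch of the slice-backed shape; the blobIndex type and addTx helper are illustrative, not code from this PR:

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

// blobIndex is the suggested slice-backed shape: each blob hash maps
// to the transaction hashes referencing that blob (illustrative only).
type blobIndex map[common.Hash][]common.Hash

// addTx records that txHash references each of its blob hashes.
// Appends are cheap, and the common case is a single-element slice.
func (idx blobIndex) addTx(txHash common.Hash, vhashes []common.Hash) {
	for _, vhash := range vhashes {
		idx[vhash] = append(idx[vhash], txHash)
	}
}
```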
Because then deleting would require iterating that thing. A simpler construct of the same shape would be a []common.Hash instead of a []byte, but it still requires iteration, which I don't want to do. Sure, in the normal case it doesn't matter, but in an adversarial case I'd rather avoid linear overhead. The removal sketch below shows the difference.
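For contrast, a sketch of removal under both shapes; removeTxSlice and removeTxMap are hypothetical helpers written for this comparison, not the PR's code. This is the linear-versus-constant tradeoff being debated:

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

// removeTxSlice drops txHash from a slice-backed index: for every blob
// hash it must scan the sibling transactions sharing that blob, so
// removal cost is linear in the number of referencing transactions.
func removeTxSlice(idx map[common.Hash][]common.Hash, txHash common.Hash, vhashes []common.Hash) {
	for _, vhash := range vhashes {
		txs := idx[vhash]
		for i, h := range txs {
			if h == txHash {
				idx[vhash] = append(txs[:i], txs[i+1:]...)
				break
			}
		}
		if len(idx[vhash]) == 0 {
			delete(idx, vhash)
		}
	}
}

// removeTxMap drops txHash from the nested-map index: a constant-time
// delete per blob hash, no matter how many transactions share the blob.
func removeTxMap(idx map[common.Hash]map[common.Hash]struct{}, txHash common.Hash, vhashes []common.Hash) {
	for _, vhash := range vhashes {
		delete(idx[vhash], txHash)
		if len(idx[vhash]) == 0 {
			delete(idx, vhash)
		}
	}
}
```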
Iterating a slice would only be slower than looking up in a map if the slice is fairly large. And making a fairly large slice there means that the adversary would have to create maybe ~100 transactions referencing the same blob.
Myeah, maybe you have a point
Terence says:

To be exact, I'm always hitting this line, but I'm fairly sure the error is on my side. I don't see anything wrong in this PR. Will report back once I figure it out.

```go
id, exists := p.lookup.storeidOfBlob(vhash)
if !exists {
	p.lock.RUnlock()
	continue
}
```
Sorry for the noise. The issue was on our end. I can confirm this PR is working as expected with Prysm on mainnet.
Implements engine_getBlobsV1.
The getBlobs method requests blobs via their own (versioned) hashes, not via transaction hashes: presumably the consensus client does not yet know the transaction hashes at the point it wants to call this method. Because of that, a new index is needed in the blobpool that maps blob hashes to a set of transaction hashes (the same blob can be present in multiple transactions).

This PR does essentially that: in addition to everything the blob pool tracks now, it starts tracking the blob-hash-to-tx-hash mappings. With that added, it becomes possible to look up transactions from blob hashes, then look up the billy storage items from those, and finally extract the actual blob data from disk. A rough sketch of that flow is below.
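The following sketch traces that lookup flow end to end. storeidOfBlob matches the snippet quoted earlier in this thread; the BlobPool fields, the lookupTable interface, and the GetBlobs shape are illustrative stand-ins, not the PR's exact code:

```go
package blobpool

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto/kzg4844"
	"github.com/ethereum/go-ethereum/rlp"
	"github.com/holiman/billy"
)

// Minimal stand-ins so the sketch is self-contained; the real pool
// holds far more state, and these names are illustrative.
type lookupTable interface {
	storeidOfBlob(vhash common.Hash) (uint64, bool)
}

type BlobPool struct {
	lookup lookupTable
	store  billy.Database
}

// GetBlobs resolves each versioned blob hash to its blob and proof,
// leaving nil entries for hashes the pool does not know about.
func (p *BlobPool) GetBlobs(vhashes []common.Hash) ([]*kzg4844.Blob, []*kzg4844.Proof) {
	blobs := make([]*kzg4844.Blob, len(vhashes))
	proofs := make([]*kzg4844.Proof, len(vhashes))
	for i, vhash := range vhashes {
		// Blob hash -> billy storage id of a transaction carrying it.
		id, exists := p.lookup.storeidOfBlob(vhash)
		if !exists {
			continue
		}
		// Storage id -> raw transaction bytes on disk.
		data, err := p.store.Get(id)
		if err != nil {
			continue
		}
		// Decode the transaction and pick out the matching sidecar blob.
		var tx types.Transaction
		if err := rlp.DecodeBytes(data, &tx); err != nil {
			continue
		}
		sidecar := tx.BlobTxSidecar()
		if sidecar == nil {
			continue
		}
		for j, hash := range tx.BlobHashes() {
			if hash == vhash {
				blobs[i] = &sidecar.Blobs[j]
				proofs[i] = &sidecar.Proofs[j]
				break
			}
		}
	}
	return blobs, proofs
}
```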