beacon/engine, core/txpool, eth/catalyst: add engine_getBlobsV1 API #30537

Merged: 2 commits into ethereum:master on Oct 17, 2024

Conversation

@karalabe (Member) commented Oct 2, 2024

Implements engine_getBlobsV1.

The getBlobs method requests blobs via their own hashes, not via the transaction hashes (presumably the consensus client does not yet know the transaction hashes at the point it wants to call this method). Because of that, a new index is needed in the blobpool that maps blob hashes to a set of transaction hashes (the same blob can be present in multiple transactions).

This PR does essentially that: in addition to everything the blob pool already tracks, it starts maintaining blob hash to transaction hash mappings. With that index in place, transactions can be looked up from blob hashes, the billy storage items can be looked up from those, and the actual blob data can be extracted from disk.
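To make the lookup chain concrete, here is a minimal, hedged sketch of the index the description refers to, going blob hash to transaction hashes to billy store id to blob data on disk. The field names (txIndex, blobIndex) are assumptions made for this illustration, not necessarily the PR's actual identifiers; storeidOfBlob is the accessor quoted later in the thread.

// Sketch only: a nested index from a blob's versioned hash to the set of
// transaction hashes carrying it, next to a tx hash to billy store id index.
package blobpool

import "github.com/ethereum/go-ethereum/common"

type lookup struct {
	txIndex   map[common.Hash]uint64                   // tx hash -> billy store id
	blobIndex map[common.Hash]map[common.Hash]struct{} // blob hash -> tx hashes containing it
}

// storeidOfBlob resolves a blob's versioned hash to a billy store id by
// picking any transaction known to include that blob.
func (l *lookup) storeidOfBlob(vhash common.Hash) (uint64, bool) {
	for txhash := range l.blobIndex[vhash] {
		if id, ok := l.txIndex[txhash]; ok {
			return id, true
		}
	}
	return 0, false
}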

@karalabe added this to the 1.14.12 milestone Oct 2, 2024
// Map all the blobs to the transaction hash
for _, vhash := range tx.vhashes {
	if _, ok := l.blobIndex[vhash]; !ok {
		l.blobIndex[vhash] = make(map[common.Hash]struct{})
	}
	l.blobIndex[vhash][tx.hash] = struct{}{}
}
Contributor:

Do I understand this correctly: for every blob hash, you create a map. So if there are 50 blob txs, each with 2 blobs, you'll create a blobIndex with 100 entries, each referencing a map that itself has a single entry, the tx that references it?

Seems a bit overkill to me. Instead of map[common.Hash]map[common.Hash]struct{}, why not make it e.g. map[common.Hash][]byte, so you can just shove any number of tx hashes into the []byte slice? Or, for that matter, a slice of hashes if you want to be a bit less low-level.
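For reference, a rough sketch of the flat alternative suggested here; this is illustrative only, not code from the PR, and reuses the common.Hash type from go-ethereum's common package as in the sketch above.

// Sketch of the suggested flat index: one append-only slice of tx hashes per blob hash.
type flatLookup struct {
	blobIndex map[common.Hash][]common.Hash // blob hash -> tx hashes referencing it
}

// track records that txhash carries the blob identified by vhash.
func (l *flatLookup) track(vhash, txhash common.Hash) {
	l.blobIndex[vhash] = append(l.blobIndex[vhash], txhash)
}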

Member Author (karalabe):

Because then deleting would require iterating that thing. A simpler equivalent construct would be a []common.Hash instead of a []byte, but it still requires iteration, which I don't want to do. Sure, in the normal case it doesn't matter, but in an adversarial case I'd rather avoid the linear overhead.
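A hedged illustration of the deletion-cost argument (standalone sketch, not PR code, again assuming go-ethereum's common.Hash): removing one transaction's reference is a constant-time map delete with the nested index, but requires a linear scan with the slice-based one.

// Nested-map index: dropping one tx's reference to a blob is an O(1) delete.
func untrackNested(index map[common.Hash]map[common.Hash]struct{}, vhash, txhash common.Hash) {
	delete(index[vhash], txhash)
	if len(index[vhash]) == 0 {
		delete(index, vhash) // last reference gone, forget the blob hash
	}
}

// Flat-slice index: the tx hash has to be searched for first, linear in the
// number of transactions referencing the same blob.
func untrackFlat(index map[common.Hash][]common.Hash, vhash, txhash common.Hash) {
	txs := index[vhash]
	for i, h := range txs {
		if h == txhash {
			index[vhash] = append(txs[:i], txs[i+1:]...)
			break
		}
	}
	if len(index[vhash]) == 0 {
		delete(index, vhash)
	}
}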

Contributor:

Iterating a slice would only be slower than looking up in a map if the slice is fairly large. And making that slice fairly large means the adversary would have to create maybe ~100 transactions referencing the same blob.

Myeah, maybe you have a point

@lightclient (Member):

Terence says:

Hello everyone,

Is #30537 ready for testing? I tried the branch with Prysm, but it always seems to continue at this point. I'm not sure if my local blobpool isn’t receiving enough blob transactions or if there’s another issue : )

@terencechain:

Hello everyone,

Is #30537 ready for testing? I tried the branch with Prysm, but it always seems to continue at this point. I'm not sure if my local blobpool isn’t receiving enough blob transactions or if there’s another issue : )

To be exact, I'm always hitting this line, but I'm fairly sure the error is on my side. I don't see anything wrong in this PR. Will report back once I figure it out.

		id, exists := p.lookup.storeidOfBlob(vhash)
		if !exists {
			p.lock.RUnlock()
			continue
		}
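For context, a hedged sketch of what the loop around these quoted lines presumably does; apart from storeidOfBlob and the RUnlock call, the names, locking and return shape here are assumptions. The point is that a versioned hash the local pool does not know about simply leaves its slot in the response empty (the engine API returns null for such entries), so reaching this continue just means the blob never made it into the node's blobpool.

// Sketch only: per requested versioned hash, resolve a billy store id or
// leave the result slot nil.
func (p *BlobPool) getBlobsSketch(vhashes []common.Hash) [][]byte {
	blobs := make([][]byte, len(vhashes)) // nil entries mean "not in the local pool"
	for i, vhash := range vhashes {
		p.lock.RLock()
		id, exists := p.lookup.storeidOfBlob(vhash)
		if !exists {
			p.lock.RUnlock()
			continue // the quoted line: blob unknown locally, slot stays nil
		}
		data, err := p.store.Get(id) // assumed billy store accessor returning the stored tx payload
		p.lock.RUnlock()
		if err != nil {
			continue
		}
		blobs[i] = data // the matching blob (and proof) is then decoded out of this payload
	}
	return blobs
}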

@terencechain:

Sorry for the noise. The issue was on our end. I can confirm this PR is working as expected with Prysm on mainnet

@karalabe merged commit afea3bd into ethereum:master on Oct 17, 2024. 3 checks passed.