Add support for besu's non-unique logIndex #7072
Conversation
The besu client doesn't strictly conform to the eth jsonrpc spec. Instead of logIndex being unique within a block, it's only unique within a transaction. This adds TxIndex to the key we use to track duplicate logs, making the chainlink node robust enough to support besu or any other clients which behave this way.

Also:
- Update vrf's custom deduper with the same change
- Update tests
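The change described above can be sketched as follows. This is a hypothetical simplification, not the actual chainlink implementation: the type and function names (`logKey`, `deduper`, `shouldBroadcast`) are invented for illustration, and the real key includes more fields. The point is only that adding the transaction index to the key keeps two logs from different transactions distinct even when a client reuses logIndex values per transaction.

```go
package main

import "fmt"

// logKey is a hypothetical sketch of the deduplication key described in
// this PR. Before the fix the key was effectively (blockHash, logIndex);
// adding txIndex distinguishes logs from different transactions that a
// client like besu reports with the same per-transaction logIndex.
type logKey struct {
	blockHash string
	txIndex   uint // added by this PR
	logIndex  uint
}

// deduper tracks which logs have already been broadcast.
type deduper struct {
	seen map[logKey]bool
}

func newDeduper() *deduper { return &deduper{seen: make(map[logKey]bool)} }

// shouldBroadcast returns true the first time a given log key is observed,
// and false for any subsequent duplicate.
func (d *deduper) shouldBroadcast(blockHash string, txIndex, logIndex uint) bool {
	k := logKey{blockHash, txIndex, logIndex}
	if d.seen[k] {
		return false
	}
	d.seen[k] = true
	return true
}

func main() {
	d := newDeduper()
	// Two logs from different transactions in the same block, both with
	// logIndex 0, as besu emits them. With txIndex in the key, both pass.
	fmt.Println(d.shouldBroadcast("0xabc", 0, 0)) // true
	fmt.Println(d.shouldBroadcast("0xabc", 1, 0)) // true
	// A true duplicate is still filtered.
	fmt.Println(d.shouldBroadcast("0xabc", 1, 0)) // false
}
```

Without the `txIndex` field, the second call above would return false and the second log would be dropped, which is the keeper malfunction described in the PR.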
I see that you haven't updated any CHANGELOG files. Would it make sense to do so?
I am concerned that uniqueness is not sufficient to work around this issue.
That's a good point; in principle it could. I'm not sure whether that is causing any problem, but it might be. What I know for sure is a problem is when two different log events we care about get mined in the same block, and we miss the second one because they both have the same log index. (At least, this is what's currently causing the keepers to malfunction with besu... not sure about vrf, but I'm assuming it's the same problem there.)

This will fix that problem without introducing any negative side effects. However, with or without this fix, a single log event received from both besu and another client could potentially get interpreted as two different log events. At least, the log broadcaster would see it that way, receiving and re-broadcasting them both.

In the case of keepers, this definitely wouldn't hurt anything, because receiving the same log event twice just updates the same db table twice to the same value. In the more general case, where a log consumer might want to behave differently after receiving two of the same events in sequence rather than one, it could hypothetically be a problem.

If there were a contract which can emit the same log event multiple times within the same transaction, then there would be no way to tell the difference (I'm not sure if that behavior on the part of a smart contract is something we use, or even possible though?). Assuming that's not the case, it wouldn't be difficult for the consumer to tell the difference, since in one case the txids will match and in the other case they won't. But I'm not sure if we make that check in practice, and it doesn't seem like it ought to be the responsibility of the log consumer. I'll have to think about this more, thanks for pointing this out.
```sql
AND log_index = $2
AND job_id = $3
AND evm_chain_id = $4
AND (tx_index is NULL OR tx_index = $2) -- NULL-check only for safe upgrade from 1.6.0
```
Do we need to clean up NULL `tx_index` columns along the way?

Consider if one (`A`) of two previously conflicting logs (`A`, `B`) was consumed, and then both are replayed after upgrading. This check would match both `A` and `B`, which results in `B` never actually being consumed.

Is it possible to fill in these null columns with the value from the first log? (Is that even an invariant that holds? Or could it be any of them that was actually consumed?)
This reverts commit 0189506.
The besu client doesn't strictly conform to the eth jsonrpc spec (see hyperledger/besu#4114, hyperledger/besu#4164). Instead of logIndex being unique within a block, it's only unique within a transaction.
This PR adds TxIndex to the key we use to track duplicate logs, making the chainlink node robust enough to support besu or any other clients which behave this way. There is one implementation of log deduplication in LogBroadcaster, and a similar one in vrf; both have been updated.