Tracking: some upstream test fixtures fail when updating to v7.0.0-beta.1 #1857
Comments
That is, sequentially call precompiles […]. All the precompiles have […]. The comment shows that it's expected to OOG-fail. Variants 0 through 3 have […]. Variants 0 and 3 clear the nonce on precompile 3 (`0x3`, RIPEMD-160).

Candidate 1: the difference between […].

Candidate 2: very telling is the fact that variants 0 and 3 of the test say that precompile 3 (`0x3`) […].
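For readers without the context, here is a minimal sketch of the EIP-161 behaviour being probed, with illustrative names that are not py-evm's API: an account is "empty" when it has zero nonce, zero balance, and no code, and every empty account *touched* during a transaction is deleted at the end of it. Whether a touch of a precompile sticks or gets reverted therefore changes the final state root.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    nonce: int = 0
    balance: int = 0
    code: bytes = b""

    @property
    def is_empty(self) -> bool:
        # EIP-161 definition of "empty": zero nonce, zero balance, no code.
        return self.nonce == 0 and self.balance == 0 and self.code == b""


@dataclass
class State:
    accounts: dict = field(default_factory=dict)
    touched: set = field(default_factory=set)

    def touch(self, address: bytes) -> None:
        # CALLs (including calls to precompiles) mark the target as touched.
        self.touched.add(address)

    def clear_empty_touched(self) -> None:
        # Run once at the end of a transaction: every account that was
        # touched and is still empty gets deleted from the trie.
        for address in list(self.touched):
            account = self.accounts.get(address)
            if account is not None and account.is_empty:
                del self.accounts[address]
        self.touched.clear()
```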
Relating to candidate 2: this all sounds very familiar, only this time it's just the […].

From ethereum/aleth#5664 (good summary with lots of links): […]

To clarify: when all the conditions in italics (in the passage quoted there) are true, then the bolded part should still happen. We do try to catch this, here (and a bit above, too): `eth/vm/forks/spurious_dragon/_utils.py`, lines 44 to 52 at `727f2ca`.

... but it seems that […].
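A rough sketch of the kind of guard those lines implement; this is not the actual py-evm source, and `computation`'s attributes here (`is_error`, `msg_to`) are hypothetical stand-ins. The point: even when a frame or one of its ancestors errored, a touch of precompile 3 is still supposed to surface, reproducing the historical client behaviour discussed in ethereum/aleth#5664.

```python
FORCED_DELETION_ADDRESS = b"\x00" * 19 + b"\x03"  # precompile 3 (RIPEMD-160)


def touched_by_frame(computation, ancestor_had_error):
    # A failed frame's touches are normally reverted along with the rest
    # of its side effects...
    if computation.is_error or ancestor_had_error:
        # ... *except* a touch of 0x3, which historically survived the revert.
        if computation.msg_to == FORCED_DELETION_ADDRESS:
            yield computation.msg_to
    else:
        yield computation.msg_to
```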
So, in short: in […]. This is because we recurse into nested computations only if the current one is successful: `eth/vm/forks/spurious_dragon/_utils.py`, lines 54 to 57 at `727f2ca`.

... but now we have to do it regardless, just to see if someone crumbled […]. I can't figure out how to do this without a double pass through the nested computations, or ugly hacks - not today, at least, and I've got to run anyway.
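Continuing the hypothetical sketch from above, the recursion pattern being described looks roughly like this; the `if not computation.is_error` guard is what loses a 0x3 touch buried inside a failed subtree:

```python
def collect_touched(computation, ancestor_had_error=False):
    # Account touched by this frame itself (see the earlier sketch).
    yield from touched_by_frame(computation, ancestor_had_error)
    # The guard in question: children of a failed computation are never
    # visited, so any 0x3 touch inside them is never reported.
    if not computation.is_error:
        for child in computation.children:
            yield from collect_touched(child, ancestor_had_error)
```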
This commit fixes the first four failing cases.
For test cases […], insufficient gas (69K) is provided, with the expectation that they should OOG-fail. The precompile called, […]. This seems consistent to me. The call that OOG-fails is nested, as shown in the trace from […].

Solution: need to relax the rules further, and not consider […]. Implemented in the last three commits of the work branch. Diff of changes to the file, to cover both types of failures described.
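One way the relaxation could look, continuing the earlier hypothetical sketches (an illustration of the idea, not the actual diff from the work branch): recurse into children unconditionally, propagating an error flag down, so that inside a failed subtree only the 0x3 special case can still surface.

```python
def collect_touched_relaxed(computation, ancestor_had_error=False):
    had_error = ancestor_had_error or computation.is_error
    yield from touched_by_frame(computation, had_error)
    # Recurse regardless of success or failure; `had_error` makes
    # touched_by_frame() drop everything but the 0x3 touch in subtrees
    # below an error.
    for child in computation.children:
        yield from collect_touched_relaxed(child, had_error)
```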
Slowest tests: https://gist.github.com/veox/02336c518f049cb749608657622d127f

Also includes […]. Seems that the initial sampling was too limited, printing only the 200 slowest durations, which skewed the results heavily to Istanbul's side. Re-running...

Re-ran with 10000 as the cut-off: same result. The list at […] is almost entirely populated with Istanbul entries. Still, […].
Closing: PR #1858 was superseded by PR #1871, which addressed this; the issue was just left hanging.

JIC: I was reminded of this because a discussion arose recently on gitter about […]. That said, I don't remember if […].

The sherlocking is now split across several issues/PRs pertaining to different network upgrades, and tests filled by two different implementations. So, just dropping this here for future archeologists. Pinging @holiman, too, as a participant in the gitter discussion. (There are likely no actionables here; just a notification.)
@veox, I finally found the way to fix it: openethereum/openethereum@74eabcc. In short, this test has a preexistent […]. OE was not "reverting the deletions", so there was a problem in the state trie root. Take care with the current gas parameters defined in the […].
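A minimal sketch of that bug class, in Python rather than OE's Rust and with made-up names: if account deletions recorded in a state journal are not undone when reverting, a preexistent account stays missing from the trie and the computed state root diverges.

```python
class Journal:
    """Toy state journal: records deletions so they can be reverted."""

    def __init__(self, accounts):
        self.accounts = accounts     # address -> account data
        self._undo = []              # stack of (address, previous_value)

    def delete(self, address):
        self._undo.append((address, self.accounts.get(address)))
        self.accounts.pop(address, None)

    def revert(self):
        # Restoring deletions here is exactly the step described as
        # missing: without it, a preexistent account deleted in a
        # reverted frame never comes back.
        while self._undo:
            address, previous = self._undo.pop()
            if previous is None:
                self.accounts.pop(address, None)
            else:
                self.accounts[address] = previous
```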
... Except if it's pre-compile number 3. Toodly-doo!.. 🎷

In all seriousness: always mention the exception. This is all OT to the issue(s) you've found; still!

A "proper" way to address the above would be† to have an irregular state transition (IST) to "right" this previous "accidental" IST that we let slip through, oh, because it was a secondary failure in an already debilitating net-wide consensus failure.

The common reasoning against such "right the wrong" ISTs is "the tech debt only increases with that"; to which I can only reply that the cognitive debt keeps piling on: with each new node implementation, with each new developer who is about to find out the hard way. 🤯 Doesn't matter if you start from a snapshot, doesn't matter if you're running a testnet. I swear, we'll take this […].

Perhaps I should write an EIP for that, if only to have it filed as a formal complaint. Maybe next March 32nd, instead of dealing with integer overflows. Remind me a few days in advance if you could, please!

† Or "would have been", I gather, considering this is years old.
Haha, it has been an interesting issue to fix as my first one 😅. I agree with you that this TriePrunning/0x3-specific issue may be one of the cases that deserves an IST, because it happens that the embedded logic to fix this irregular stuff is also deployed to other blockchains, not only mainnet. I've put it on the agenda to ping you next March 32nd (with integer overflows 😜), hehe.
This issue is to track findings on why upstream fixtures fail in PR #1852, so I don't trash the PR discussion.

Working in the `update-fixtures-to-v7.0.0-beta.1` branch of mine, rolling off the one in PR #1852. There are 6 failures for `--fork Istanbul`, all in `fixtures/BlockchainTests/GeneralStateTests/stRevertTest`: […]

These were added in ethereum/tests#580, then updated/refilled in ethereum/tests#609 and ethereum/tests#610.

For the first four, data indices 0 and 3 produce the same result upstream, so it's no surprise that 0 and 3 fail while 1 and 2 do not. The last two are also grouped upstream.