perf(forge test): memory leak in latest foundry #8014
Comments
@grandizzy I wonder if this is fuzzing-related. @0xTimepunk were you able to narrow this down to a specific test?
Invariants are excluded from project tests, so they're not the culprit. Going to check the regular fuzz tests. The build that does not reproduce it is the one right before the Cancun update (the project is set to Paris), but I don't have any evidence that this causes it.
I'm not sure if the nightly I posted is the last well-operating nightly, as I haven't tested. The memory usage increase seems independent of test type; however, we do have fuzzing in some scenario files which are part of make ftest or make coverage.
@0xTimepunk I was under the impression you're excluding invariant tests when you hit this (here's the CI failure I was pointing to: https://github.com/superform-xyz/superform-core/actions/runs/9289996043/job/25566018918), but looking at …
So, to give extra context: I thought using … I think this is independent of invariant tests, because it happens also with … I think it may have to do with fuzzing. Please note that our fuzz tests involve running actions repeatedly between 2 or more forks with different input parameters. The scenarios / assertions are quite large, but this didn't happen before.
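To illustrate the kind of test being described, here is a minimal sketch of a fuzz test that repeatedly switches between forks. The contract name, RPC aliases, and bounds are assumptions for illustration, not the project's actual code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "forge-std/Test.sol";

// Hypothetical sketch of a multi-fork fuzz test; the RPC aliases assume
// matching entries under [rpc_endpoints] in foundry.toml.
contract MultiForkFuzzSketch is Test {
    uint256 internal ethFork;
    uint256 internal opFork;

    function setUp() public {
        ethFork = vm.createFork(vm.rpcUrl("mainnet"));
        opFork = vm.createFork(vm.rpcUrl("optimism"));
    }

    function testFuzz_crossChainAction(uint256 amount) public {
        amount = bound(amount, 1, 1e24);

        vm.selectFork(ethFork);
        // ... perform the source-chain action with `amount` ...

        vm.selectFork(opFork);
        // ... assert the destination-chain state ...

        // Every fuzz run repeats this fork switching, which is the access
        // pattern the reporter associates with growing memory usage.
    }
}
```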
This is desired; it forces the script folder not to be built, as it's not needed for the coverage run.
@mattsse @grandizzy (also please read the conclusions at the bottom)

Scenario 1 - the worst (with all latest versions)

Settings
On forge-std @52715a2 (latest), latest foundry (foundryup). Note: this checks out a commit with the latest forge-std and the import of ds-test removed.
This issue is extra tricky to solve with a temporary fix because using the latest forge allowed us to overcome a (totally unrelated) problem in #7852 (which was fixed this month), and by having to use an old one we'll probably need to maintain two forge versions to continue operating. (Or perhaps try the nightly associated with a87faf6, which has that fix and where the memory leak is not as bad, together with the old forge-std + ds-test.) Edit: the nightly associated with a87faf6 has the memory leak, so we will operate with two foundry versions as needed for now.
Update: While this works on our main/develop branches (scenario 6, forge-std 1.7.3 and forge f625d0f), it breaks something in an upstream branch where tests on mainnet are pinned to a more recent block compared to main/develop. In this block a USDC upgrade was already made, which only works with forge-std 1.8.0+ (see the release notes of https://github.com/foundry-rs/forge-std/releases/tag/v1.8.0 and foundry-rs/forge-std#505).

Some testing
Testing forge-std 1.8.0 with f625d0f does not work due to the crash in scenario 3 above. Then I upgraded foundry again to cafc260. Testing once again (1.8.0 + cafc260) to see if the memory leak is there by running … Results: 13 GB, the memory leak is back.

Conclusion: the memory leak seems to have been introduced by forge-std versions >=1.8.0 in combination with forge. For awareness @mattsse @grandizzy, and tagging @mds1. There is urgency in solving this situation, as it is almost completely halting our development. We need more advice on how we can fix this short term to bring down the memory leak while keeping the USDC fix introduced in 1.8.0 (is there a commit we can cherry-pick for this?).
@0xTimepunk running with trace enabled I can see memory grows when tests fork chains (there are many tests with many chains forked running at the same time, so eventually the OS kicks in and kills the task). What worked for me locally (and I think you can apply it for now as a workaround to get you going) is to split the test run into smaller batches (see the sketch after this comment), so memory never grows to the point where the process needs to be killed (this also helped me avoid getting rate limited by Infura). Looking more into this issue, but I think you can use this for now?
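The exact split used was not captured above; a minimal sketch of this kind of workaround, with illustrative directory names rather than the project's actual layout, could look like:

```sh
# Run the suite in separate forge invocations so each process exits (and
# releases its memory) before the next batch starts, instead of one large
# `make ftest` run over everything. The paths below are placeholders.
forge test --match-path "test/unit/**"
forge test --match-path "test/fuzz/**"
forge test --match-path "test/scenario/**"
```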
Hey @grandizzy, unfortunately it's super cumbersome to run a large test suite like ours in that suggested way. Is there any further help we can give on this problem? Thanks.
Yeah, I'm actively looking into this; I just provided some thoughts while debugging.
@mattsse @grandizzy requesting to re-open the issue, as something got merged into foundry around 2 days ago or more that made the issue appear once again. CI exits once again with error 137 (out of memory) when running … This is blocking our test/coverage checks mainly on CI (because it auto-kills with 137) and impeding progress.
@klkvr @grandizzy does this ring a bell?
forge 0.2.0 (fb86e5d 2024-06-05T00:17:33.761343000Z) was one of the first versions that contained the initial fix to the memory issue and was working. However, given that nightlies are removed after 3 days, we can no longer access this nightly in CI. You can probably diff this version against the current one to see what changed.
Nothing I can think of, but I will do some tests shortly and try to figure out if something changed that could affect this. @0xTimepunk, to rule out the possibility of your PR 516 exposing a new issue (as it adds new code and tests), could you please check if you see the same coverage problem running the latest forge against your master (or v1) branch? (Also, if I understand correctly, the issue now is with forge coverage, while the failure initially described in this issue was with forge test, so I'm pretty sure it's not caused by the same thing.)
I am not able to locally spot any difference between the June 5 forge build mentioned as working OK (fb86e5d) and the latest (fb86e5d), and the same holds when testing the v1 branch with both forge versions, so I am pretty sure there's no regression. @0xTimepunk maybe there's an issue with the RPC URLs used by your CI that could somehow slow down tests / make memory accumulate?
I'm seeing a variant of this when running …

Foundry Version
forge 0.2.0 (c4a984f 2024-06-28T00:53:16.491646468Z)

Reproducer
Running …
Etching a really large runtime code seems to trigger the issue. I use an empty buffer for illustrative purposes, but the run still consumes a lot of memory even when a valid runtime bytecode is used. Reducing the size of the runtime code seems to work, with minimal memory usage. I'm sure there's some buffer-size threshold where memory usage spikes, but I haven't narrowed it down.
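The reproducer itself was not captured above; a minimal sketch of the pattern being described, assuming an illustrative target address and buffer size (the actual threshold is unknown), might look like:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "forge-std/Test.sol";

// Hypothetical reproducer sketch: etch a large, zero-filled "runtime code"
// into an arbitrary address and touch it. Size and address are illustrative.
contract LargeEtchSketch is Test {
    function test_etchLargeRuntimeCode() public {
        // A few MB of zero bytes standing in for a huge runtime bytecode.
        bytes memory hugeCode = new bytes(4 * 1024 * 1024);
        vm.etch(address(0xBEEF), hugeCode);

        // The etched code is kept in the in-memory backend state, which is
        // where memory usage was reported to spike for large buffers.
        assertEq(address(0xBEEF).code.length, hugeCode.length);
    }
}
```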
@Inphi yes, please open a new issue for etch; going to use this one to optimize coverage report creation. Thank you.
@0xTimepunk can we now close this issue? If I remember correctly, it's not the same as the original report but rather high memory usage in forge coverage.
@0xTimepunk going to close this for now per the above comment; please open a new one if there are still issues. Thank you!
Component
Forge
Have you ensured that all of these are up to date?
What version of Foundry are you on?
forge 0.2.0 (5494c33 2024-05-30T00:18:10.251370000Z)
What command(s) is the bug in?
forge test (make test)
Operating System
macOS (Apple Silicon)
Describe the bug
With the following PR as an example: superform-xyz/superform-core#541
To reproduce
Part 1
foundryup (latest foundry)
make ftest
Exit code 137 in https://github.com/superform-xyz/superform-core/actions/runs/9289996043/job/25566018918
Please note that this is a powerful dedicated GitHub Actions runner, so the memory limits shouldn't actually be reached.
Part 2
foundryup -v nightly-cafc2606a2187a42b236df4aa65f4e8cdfcea970
make ftest
A whopping difference of 36 GB.