perf(anvil): Memory consumption steadily increases during prolonged transaction replays #6017
Comments
I think the offloading to disk could definitely use some work. Do you have some kind of script I can use to repro this, by any chance? That would help; I don't think this is complex to fix, but having something available for debugging would really help here.
Hey @mattsse, thanks for the reply. The backtester is still private, so I just created this repo as a minimal reproduction of the issue. It would be great if the anvil node had a very lightweight configuration that drops all historical/archive state; in most cases (including mine, on an MBP M1 with 16GB) it would likely not even need to offload to disk. I thought the
thanks!
Another issue I noticed is that the anvil node slows down significantly as more and more transactions are replayed: it took about 1 hour to replay the first 50k logs and then about 5 hours to replay the next 50k logs, and it will likely only get worse with more logs. Edit: it does get worse; replaying the next 50k logs, i.e. from 100k to 150k, took about 10 hours.
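To put those batch times in perspective, a back-of-envelope sketch of the average per-log replay time (using only the hour figures reported above):

```python
# Average replay throughput from the reported batch times:
# ~1h for logs 0-50k, ~5h for 50k-100k, ~10h for 100k-150k.
BATCH_SIZE = 50_000

def ms_per_log(hours: float, batch_size: int = BATCH_SIZE) -> float:
    """Average milliseconds spent per replayed log in a batch."""
    return hours * 3600 * 1000 / batch_size

for label, hours in [("0-50k", 1), ("50k-100k", 5), ("100k-150k", 10)]:
    print(f"{label}: {ms_per_log(hours):.0f} ms/log")
```

This works out to roughly 72, 360, and 720 ms per log for the three batches, i.e. a 10x slowdown within the first 150k logs.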
Hi @mattsse, I don't want to come off as impatient, but could you provide a rough ETA on this?
+1. Hi guys! We've been building a general smart contract testing framework called dojo, but we're heavily limited by the throughput. In particular, Uniswap mint transactions appear to take a long time to process, presumably because they are complex transactions (x-axis is transaction index, y-axis is seconds). How far do you think it would be feasible to cut this throughput time down? Chaos Labs appear to have a private fork of
If it's useful, @scheuclu and I are more than happy to help with this if we could get some onboarding guidance as well :)
@09tangriro thanks for sharing. Could you share a graph of the processing times for the Uniswap mint transactions (and perhaps even for swap transactions) for the first 10,000 logs, and compare how much faster that is than the later average of ~2.2s? If I had to guess, it probably slows down as much as 100x: from ~20ms to 2s.
@09tangriro thanks, are these scatter plots over a similar time frame/number of transactions as the initial line plot you shared? I would guess not, as the transaction_mint time doesn't eventually end up around 2s. Do the times for the other transactions also end up around 2s?
The other plot uses a forked chain, which increases transaction times a lot. To better isolate anvil, it should be noted that the scatter plots obtained recently were not run against a forked chain, but purely on local development. Ideally, I'd like these times to be O(1), and also to cut times in general by a factor of 10, so a transaction takes on the order of 1ms. I'm curious: on the surface of it, the Ethereum opcodes shouldn't require milliseconds to process; it's just addition, subtraction, and memory access. According to this benchmark, though,
@09tangriro thanks for clarifying that.
I agree, I see no reason why the processing time for a transaction should increase so drastically. Your scatter plots seem to cover only a few thousand txs, so the effect isn't as pronounced. In any case, the plots appear to be linear, so the processing time per transaction would eventually be much greater than the initial processing time, which is unacceptable imo.
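The concern above can be made concrete: if per-transaction time grows linearly with the number of prior transactions, total replay time grows quadratically. A quick sketch (the `base` and `growth` constants are illustrative placeholders, not measured values):

```python
# If the i-th transaction costs base + growth*i seconds (a linear trend),
# the cumulative cost of replaying n transactions is quadratic in n:
#   sum_{i=0}^{n-1} (base + growth*i) = base*n + growth*n*(n-1)/2
def total_replay_seconds(n: int, base: float = 0.02, growth: float = 1e-5) -> float:
    return base * n + growth * n * (n - 1) / 2

# Doubling the workload more than doubles the runtime:
assert total_replay_seconds(100_000) > 2 * total_replay_seconds(50_000)
```

With any positive `growth`, the quadratic term eventually dominates, which matches the 1h/5h/10h batch times reported earlier in the thread.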
I'm unable to run the example after adding the required env vars:
what's the fix here? |
@mattsse not sure what's causing your issue, I just confirmed there's no issue following these steps:
Can you make sure that
Hi @mattsse, have you managed to make any progress?
@mshakeg please try running with
We would also appreciate it if private optimizations to Anvil were upstreamed rather than kept private :)
@gakonst thanks, that works; memory is still increasing, but very significantly more slowly. I want to run a few more tests using methods
@09tangriro I would appreciate it if you could also try this and share your findings, especially regarding the processing time for transactions.
Thanks @gakonst :) Unfortunately, wrt speed there still seems to be a positive linear trend, although arguably shallower:
@09tangriro could you please give it another try with the latest anvil and share results? Mind also commenting here #4399 (comment)? What happens is that if
Hey @grandizzy, I've had a chance to test with this example repo. Execution performance has improved by about 2x since I initially created this issue and no longer degrades over time; additionally, memory use & history pruning seem much better. If @09tangriro agrees, this issue can be closed.
Agreed, let's close it :)
Closing per above comments, thank you
Component
Anvil
Describe the feature you would like
Description
I've been utilizing Anvil 0.2.0 (5be158b 2023-10-02T00:23:45.472182000Z) as a local Ethereum node for a Uniswap V3 backtester project. The backtester replays all transactions for a specific Uniswap V3 Pool. However, I've noticed a consistent and steady increase in memory usage over time as more transactions are replayed, even with the `--prune-history` flag enabled. Below is the exact command I'm using to start the anvil node:

```shell
anvil --prune-history --timestamp 1619820000 --order fifo --code-size-limit 4294967296 -m "test test test test test test test test test test test junk" --gas-limit 100000000000 --gas-price 0 --base-fee 0
```
Observations

For each block, the backtester calls `evm_setNextBlockTimestamp` and then utilizes `evm_mine` to mine the block.

Expected Behavior
Stable memory consumption or a slower, more controlled growth in memory usage over time when replaying transactions.
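For reference, the timestamp/mine sequence described in the Observations can be driven over plain JSON-RPC. A minimal sketch that only builds the request bodies (the endpoint URL and timestamp value are placeholder assumptions; a running anvil node is needed to actually send them):

```python
import json

def rpc_payload(method: str, params: list, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request body for anvil's HTTP endpoint."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

# Per replayed block: pin the next block's timestamp, then mine it.
set_ts = rpc_payload("evm_setNextBlockTimestamp", [1619820000])
mine = rpc_payload("evm_mine", [])
# Each body would be POSTed (Content-Type: application/json) to the node,
# e.g. http://127.0.0.1:8545, using urllib, requests, or curl.
```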
Possible Solutions

While I'm not certain about the root cause, I'd appreciate it if the team could investigate whether the `--prune-history` flag could be further optimized, or if there's a possibility of introducing additional pruning or memory management features.

Additional context
Environment Details

Anvil version: 0.2.0