Pruned or corrupt world state > Shutdown due to a VM fatal error. #1162
@AionMiner Thanks for the issue report; we will investigate. There is another mode called "TOP" that also saves DB space. Unlike SPREAD mode, which prunes data continuously, TOP mode keeps the full DB state for the top few hundred blocks and prunes everything older. You could switch to this mode and see if the node runs well.
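For reference, a minimal sketch of where this is configured, assuming the pruning mode is set via a state-storage element in the db section of the kernel's config.xml (the element name and accepted values here are inferred from this thread; check the config.xml shipped with your release):

```xml
<db>
  <!-- Location on disk where chain data is stored. -->
  <path>database</path>
  <!-- World state pruning mode (assumed element name), per this thread:
       FULL   - keep all historical state; largest disk footprint
       TOP    - keep full state for the top few hundred blocks, prune older
       SPREAD - prune continuously, saving the most disk space -->
  <state-storage>TOP</state-storage>
</db>
```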
Have switched to TOP mode, working well 👍
Have encountered a similar issue with TOP mode today (fresh 1.6.2 install, syncing the DB from genesis online for ~3 days before the issue). The kernel was running fine until block height 7446583, then it output a wall of WARN logs. A ctrl+c graceful shutdown was successful, but I am now unable to get the kernel to launch; I attempted a DB revert but got a similar notice as before.
The WARN output continues for many lines until interrupted with a ctrl+c graceful shutdown:
Attempting to launch the kernel after the DB revert (tried a few reverts to different heights; all report a corrupt world state, then a RuntimeException follows):
Update: tried reverting even further back, but now the kernel just hangs looking for the ancestor block:
@AionMiner The WARN log about the transaction is fine; someone sent an invalid transaction to the network, so it was reverted. You can raise the logger level to suppress those messages. Do you still have the log from when you saw the launch issue after shutting down the kernel at 7446583?
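A sketch of that logger change, assuming config.xml takes per-module levels in a log section and that CONS (seen in the "WARN CONS" lines later in this thread) is the module emitting the noise; the module names and defaults below are assumptions:

```xml
<log>
  <!-- Write logs to files under log-path in addition to stdout. -->
  <log-file>true</log-file>
  <log-path>log</log-path>
  <!-- Raising CONS from INFO to ERROR hides the repeated invalid
       block/tx WARN lines while keeping real errors visible. -->
  <CONS>ERROR</CONS>
  <SYNC>INFO</SYNC>
  <GEN>INFO</GEN>
</log>
```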
@AionJayT The WARN log was repeating itself over hundreds of thousands of lines; many log files were created containing nothing but the WARN notices. I noticed the TOP node was out of sync via its connection to my FULL node and logged in to find the kernel just spitting out those WARN notices before I manually shut it down. The launch log starting at 20-12-18 07:25:30.498 was captured directly after these actions: shutdown, launch (constant WARN logs again), shutdown, revert to 7440000 (reported success), launch (exception). Unfortunately I no longer have access to the logs, but I will spin up another server and see if it reproduces.
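For anyone reproducing this, a sketch of the shutdown/revert/relaunch sequence described above, assuming the launcher exposes a revert flag of this shape (the -r flag is an assumption; confirm against ./aion.sh -h for your kernel version):

```sh
# Gracefully stop the kernel (ctrl+c), then revert the database
# to a height safely below the corruption point and relaunch.
./aion.sh -r 7440000   # revert flag assumed; see ./aion.sh -h
./aion.sh              # relaunch and watch for the RuntimeException
```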
It has been over 10 days of uptime and the TOP mode kernel has yet to reproduce the sync halt issue experienced previously. However, no invalid block/tx has been logged since it became fully synced (from block 7464349 until now, 7550424). I suspect the invalid block/tx event is what led to the previous sync halt, given that the kernel was still running yet out of sync for ~7 hours (since the invalid block), with the only output in this state being the invalid block/tx WARN CONS lines until I intervened with a shutdown.
Description
Since v1.6 I cannot keep a SPREAD node alive: a non-graceful shutdown occurs, restarting requires dealing with the rocksDB lock and reverting several thousand blocks, and the node then stays in sync for some hours before another non-graceful shutdown occurs (observed 3 times over 3 days). Have since swapped to a FULL node and have had no issues.

System Information
I'm running: