Why is my chaindata size up to 400GB? #15797
After its initial sync, Geth switches to "full sync", where all historical state from that point onward is retained. If you resync, only the latest state is downloaded. The latest state plus the blockchain data is worth about 50GB, but since we don't have state pruning yet, after a sync the data just keeps accumulating.
So the longer the node runs, the larger the chaindata grows compared to a fresh fast-sync chaindata size? And how can I get the real "full sync" chaindata size from the beginning until now? Thanks @karalabe ~
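For readers wondering how to measure this themselves: a minimal sketch, assuming the default Linux datadir (~/.ethereum); adjust the path if you run with a custom --datadir.
# Report the on-disk size of the chain database (default Linux location assumed).
du -sh ~/.ethereum/geth/chaindata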
@karalabe What do you recommend doing to reduce the size but stay in sync? I have limited SSD storage and an always-on node. I let it run 24/7, but it keeps getting too big for my SSD, so I have to removedb and re-sync from scratch every once in a while, which of course takes at least a few hours. Is there a better method? Like some kind of Export -> Quick Import we could do? I have storage space available, just not SSD storage 😄
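On the Export -> Quick Import idea: Geth does ship export and import subcommands, but import re-validates and re-executes every block, so it is not quick. A sketch of the invocation, with chain.rlp as an illustrative file name:
# Dump the whole chain to an RLP file, then replay it into a (fresh) datadir.
# Note: import re-executes all transactions, so this is slow, not a quick import.
geth export chain.rlp
geth import chain.rlp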
@MysticRyuujin We're working on a memory cache to reduce database writes quite significantly (PoC tests show about 60-70% less data written to disk). That will hopefully land in Geth 1.8.0 and make this problem a rarer one (#15857). Geth also supports "fast syncing" against itself, which you can use to synchronize an existing chain into a fresh data directory and then swap out the old one for the fresh one:
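A minimal sketch of the copydb invocation being described, with /path/to/new and /path/to/old as placeholder directories:
# Fast-sync a fresh datadir from the old local database instead of from the network.
geth --datadir /path/to/new copydb /path/to/old/geth/chaindata
# When it finishes, stop Geth and swap the old datadir for the new, smaller one.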
Please do the above manually, I just wrote up the
Thanks @karalabe ~
Any workaround so far? This makes it hard to run a production node with high availability without having a large disk (unnecessarily expensive...).
Hi,
Hi, as far as I know, no.
Partially. The chain still grows, but at a much slower rate than before. Doing a full sync on mainnet (i.e. not fast) results in roughly a 2x database size compared to a pruned node. We're still working on getting final pruning implemented to slow the growth even further.
Hi,
Regarding the https://github.com/ethereum/go-ethereum/wiki/command-line-options:
If you do not want to spend time syncing an Ethereum node
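As a hypothetical illustration of the sync-mode flags that wiki page documents (the modes listed are those of the era; which one the commenter meant is an assumption):
geth --syncmode fast    # download blocks plus recent state (the then-default)
geth --syncmode full    # re-execute every block from genesis
geth --syncmode light   # headers only; minimal disk usage and sync time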
I was led here for the copydb help. Just for future reference for people landing here: the copydb interface has apparently also been updated with the ancient-folder feature. You have to specify the ancient folder as the second argument. For example:
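A sketch of the two-argument form, with placeholder paths; by default the ancient store lives inside the chaindata folder, so adjust to your own layout:
# The second argument is the ancient folder (default location: <chaindata>/ancient).
geth --datadir /path/to/new copydb /path/to/old/geth/chaindata /path/to/old/geth/chaindata/ancient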
I was assuming that it would search for the ancient folder inside the chaindata folder if it wasn't specified, so I was hung up on this for a minute. #22203
Hi @oleksiikoshlatyi, thanks very much for your explanation! I'm referring to @karalabe's reply here, where he points out that syncing the chain on an HDD is extremely slow, if not impossible.
@dorianhenning - Yes, it's fast enough
I am confused about the chaindata size; it seems to be growing unreasonably.
I have set up several servers running geth full nodes, and the chaindata sizes are not the same: 394GB, 212GB, etc.
I tried the commands:
geth removedb
nohup ./geth --fast --cache=1024 --rpc --rpcapi "db,eth,net,web3,personal" &
and downloaded the chaindata again. It took several hours and reached about 50GB, and everything was OK: my wallet data and the block data were up to date.
So, how can I cut down the chaindata size? Is 400GB normal?
Thanks.
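A side note for anyone retrying these commands on a current Geth release: several of the flags above were later renamed or removed, so a rough modern equivalent, assuming a Geth 1.10+ binary, might be:
# --fast became --syncmode (fast sync was itself superseded by snap sync),
# and --rpc/--rpcapi were renamed to --http/--http.api in Geth 1.10.
nohup ./geth --syncmode snap --cache=1024 --http --http.api "eth,net,web3" &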