Assuming that we will launch stateless validation with RPC nodes tracking all shards using memtrie, analyze the memory requirements from RPC nodes in this setup.
As a rule of thumb, once stateless validation launches, any node that tracks all shards and needs to keep up with the network must enable memtries: the network will run faster because validators use memtries and track fewer shards (that is the premise of stateless validation), so a node tracking all shards without memtries will fall behind.
Thus, we need to understand how much memory RPC and archival nodes will need once they use memtries while tracking all shards. This can be done by spinning up an RPC node with a fresh neard and measuring the delta between running with and without memtries:
- RAM usage (should increase, but by how much?)
- Disk usage (should not change)
- Chunk-processing metrics such as apply-chunk latency (by how much does it change?)
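One way to capture the RAM delta is to periodically sample the node's resident set size from `/proc` while it runs with and without memtries. A minimal sketch, assuming a Linux host; the process name `neard` comes from the issue, while the helper names and the sampling approach are illustrative, not an established tool:

```python
import re
import subprocess

def parse_vmrss_kb(status_text: str) -> int:
    """Extract the resident set size (VmRSS, in kB) from the
    contents of /proc/<pid>/status."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    if m is None:
        raise ValueError("VmRSS not found in status text")
    return int(m.group(1))

def read_rss_kb(pid: int) -> int:
    """Read the current RSS of a running process, in kB."""
    with open(f"/proc/{pid}/status") as f:
        return parse_vmrss_kb(f.read())

def neard_rss_kb() -> int:
    """Locate the neard process via pgrep and report its RSS.
    Assumes exactly one neard instance is running."""
    pid = int(subprocess.check_output(["pgrep", "-x", "neard"]).split()[0])
    return read_rss_kb(pid)
```

Sampling `neard_rss_kb()` every few minutes during each run and comparing the steady-state values would give the RAM delta; the disk delta can be read directly from the data directory size.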
tayfunelmas changed the title from "Profile memory usage requirements of RPC nodes in stateless validation" to "Profile memory usage requirements of RPC and archival nodes in stateless validation" on May 9, 2024.
I tried to design a way to estimate the requirements, but there are too many unknowns.
Let's have a look at the existing requirements: https://near-nodes.io/rpc/hardware-rpc
Are they for mainnet? For testnet? What about localnet? What if my localnet is 10 times more congested than mainnet?
What if I want to make millions of queries per second? Will that change the requirements?
The doc says the recommended configuration is 8 cores and 20 GB RAM; the minimum is 8 cores and 12 GB RAM.
TBH, I find these numbers useless: for a tiny localnet, 1 core and 4 GB RAM would be more than enough.
Let's assume it's for mainnet, and let's look at the use cases.
In reality, Pagoda runs each mainnet regular node on 32 vCPU and 128 GB memory.
I know that some of our partners run even beefier machines.
Should we update the recommended configuration with these numbers? Why don't we use the recommended configuration?
I can suggest two options:
1. Redesign this doc from scratch, suggesting different configurations for different use cases.
2. Leave everything as it is and bump the numbers if users complain.
I personally vote for the second option, because otherwise we would have to update the doc after every network congestion.
If we decide to redesign the doc from scratch, we need to start by defining its audience. It's currently unclear to me whom we're trying to help.
Closing this one, as we have identified that nodes tracking all shards will need more than 64 GB of memory (RPC and archival nodes, as well as validators before the transition from stateful to stateless validation).