With the decentralized bonder role, anyone can bring their own RPC node. There is no standard across providers for the maximum getLogs block range or response size.
The bonder needs to handle all of these cases without being too inefficient.
Note
Taking a step back, is there a world where this limitation is not a concern of ours? Can we rearchitect in such a way where this does not matter?
Consideration
AFAICT, if the response data is too large, most providers silently truncate it without returning an error. It is not clear how to handle this case, but it may need to be handled.
An RPC provider may change their requirements without notice, though this is likely a rare case.
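One way to sanity-check for the silent truncation described above, assuming the provider at least returns complete results for sufficiently small ranges, is to re-query the same range in halves and compare totals. This is a sketch with hypothetical helper names, not the bonder's actual code; `countLogs` stands in for an eth_getLogs call:

```typescript
// countLogs is a stand-in for an RPC call returning the number of logs
// eth_getLogs yields for blocks [from, to] -- hypothetical signature.
type CountLogs = (from: number, to: number) => number;

// Query the whole range, then the two halves; a shortfall in the
// whole-range count suggests the provider silently truncated it.
function looksTruncated(countLogs: CountLogs, from: number, to: number): boolean {
  const whole = countLogs(from, to);
  if (from === to) return false; // cannot split a single block
  const mid = Math.floor((from + to) / 2);
  const halves = countLogs(from, mid) + countLogs(mid + 1, to);
  return whole < halves;
}

// Mock provider: 10 logs per block, any single response capped at 100 logs.
const mock: CountLogs = (from, to) => Math.min((to - from + 1) * 10, 100);

console.log(looksTruncated(mock, 1, 20)); // true: whole capped at 100, halves total 200
console.log(looksTruncated(mock, 1, 5));  // false: 50 logs, under the cap
```

A real check would be probabilistic (run occasionally, not on every query) since it triples the request count for the checked range.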
Reference Conversation
ideally there’s a dynamic way to determine the block range to use. Maybe by doing test queries on node start, that use a large range and known short range but with large response size and store the limits of that provider in db or cache
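The suggestion above could be prototyped as a binary search over candidate ranges, run once at node start and the result cached per provider. This is a sketch under those assumptions, not the bonder's actual code; `rangeSucceeds` stands in for an eth_getLogs test query against the provider:

```typescript
// rangeSucceeds is a stand-in for issuing an eth_getLogs test query
// spanning `range` blocks and reporting whether the provider accepted it.
type RangeProbe = (range: number) => boolean;

// Binary-search the largest block range the provider tolerates, up to
// maxCandidate. The result would be stored in a db or cache on start.
function findMaxBlockRange(rangeSucceeds: RangeProbe, maxCandidate: number): number {
  if (!rangeSucceeds(1)) return 0; // provider rejects even a single block
  let low = 1;
  let high = maxCandidate;
  while (low < high) {
    const mid = Math.ceil((low + high) / 2);
    if (rangeSucceeds(mid)) {
      low = mid;      // mid works: the limit is at least mid
    } else {
      high = mid - 1; // mid fails: the limit is below mid
    }
  }
  return low;
}

// Mock provider that rejects ranges over 2000 blocks.
const mockProbe: RangeProbe = (range) => range <= 2000;

console.log(findMaxBlockRange(mockProbe, 10000)); // 2000
```

A real implementation would make `rangeSucceeds` async (treating the provider's error or timeout as failure) and, per the quote, probe response-size limits separately from block-range limits, since a short range over busy blocks can still produce an oversized response.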