feat!: improves Credit Pool to protect from cumulative increasing amount of gaps #5520
Conversation
also, pls rebase on top of the most recent

pls see 1c9e21e
I updated based on your diff and added a couple more changes:
UdjinM6 left a comment
LGTM, utACK.
This pull request has conflicts, please rebase.
resolved conflict due to #5526 by rebasing on top of develop
UdjinM6 left a comment
re-utACK
Couldn't we just also remove the limit or something like that? I don't understand why ranges are better than "skipped"?
PastaPastaPasta left a comment
Asked questions
We can't just remove limit. If by any reason the platform will issue Asset Unlock transaction with index 1e9 (or -1_32bit = 0xFFFFFFFF or 0xFFFFFFFFFFFFFFFF), all Dash Core clients will immediately allocate 8Gb RAM (or more depends which disruptive value) to enumerate billions of meaningless continuous numbers. For range will be allocated only a pair of 2 integers. It looks like Denial-of-Service for me with too high consequence: most nodes may die after that. That's why limit of capacity has been introduced at first, to prevent dangerous consequence. |
UdjinM6 left a comment
re-utACK
utACK for merge via merge commit
This data structure provides efficient storage for numbers if the amount of gaps between them is not too big. It works similarly to CSkipSet, but the amount of gaps is now unlimited.
By design we can have more and more gaps in the index list, since we cannot re-sign an expired asset-unlock transaction. CRangesSet is protected from this situation.
force-pushed from 01be514 to aeef1a6
Issue being fixed or feature implemented
Each Asset Unlock transaction requires a unique index. This index can't be re-used, as far as I know, because no quorum can re-sign a message with the same requestID, and 2 txes with the same index would have the same requestID.
Normally, most asset-unlock txes should be mined, but some of them won't be, for various reasons: the fee is too low, the unlock is for more than the current per-block limits, or anything else. Asset unlock transactions also permanently expire by design once the blockchain reaches a certain height after the unlock-tx is signed. But the specific index stays occupied. This means that the amount of gaps in the data structure that keeps all indexes will keep increasing over time, and sooner or later the limit of 10k gaps in indexes could be reached. Maybe it will take 10 years to reach that day, maybe it will take just a day due to some bug in Platform.
Let's take a precaution and update the storage in this PR.
Instead of the old way with limited storage capacity (no more than 10k for the total size of all gaps), let's keep indexes as an ordered list of ranges. For example, the set of indexes {1, 2, 3, 5, 7, 8, 9, 1000, 1001} would be represented like this in memory and in the evo db:
{[1, 3], [5, 5], [7, 9], [1000, 1001]}
For reference, CSkipSet keeps this list as (a sketch of the ranges idea follows below):
{all indexes below 1001 are mined except: 4, 6, 10, 11, 12, 13, 14, ..., 999}
NOTE: these changes may become useless if Platform generates Asset Unlock transactions in some other way that allows indexes of expired transactions to be re-used. For now that is impossible, as far as I know.
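Here is a minimal toy sketch of that ranges idea, written for this description; it is my illustration under assumed semantics, not the PR's actual CRangesSet code. It stores disjoint inclusive [begin, end] pairs in an ordered map and merges neighbours on insert:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Toy ranges set: disjoint, inclusive [begin, end] pairs keyed by begin.
class RangesSetSketch
{
private:
    std::map<uint64_t, uint64_t> m_ranges; // begin -> end

public:
    bool Contains(uint64_t value) const
    {
        auto it = m_ranges.upper_bound(value); // first range starting after value
        if (it == m_ranges.begin()) return false;
        return value <= std::prev(it)->second;
    }

    void Add(uint64_t value)
    {
        if (Contains(value)) return;
        uint64_t begin = value;
        uint64_t end = value;
        // Merge with the range that starts right after `value`, if any.
        if (value + 1 != 0) { // guard against uint64 overflow
            auto next = m_ranges.find(value + 1);
            if (next != m_ranges.end()) {
                end = next->second;
                m_ranges.erase(next);
            }
        }
        // Merge with the range that ends right before `value`, if any.
        auto it = m_ranges.upper_bound(value);
        if (it != m_ranges.begin()) {
            auto prev = std::prev(it);
            if (prev->second + 1 == value) {
                begin = prev->first;
                m_ranges.erase(prev);
            }
        }
        m_ranges[begin] = end;
    }

    size_t RangeCount() const { return m_ranges.size(); }
};

int main()
{
    RangesSetSketch set;
    for (uint64_t v : {1, 2, 3, 5, 7, 8, 9, 1000, 1001}) set.Add(v);
    // Stored as {[1,3], [5,5], [7,9], [1000,1001]}: four pairs,
    // however large the gaps between them are.
    assert(set.RangeCount() == 4);
    assert(set.Contains(8) && !set.Contains(6));
    return 0;
}
```

With this layout, one hostile far-away index costs one extra pair instead of gigabytes of enumerated numbers.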
What was done?
Implemented a new data structure CRangesSet to replace CSkipSet and used it in the CreditPool implementation in core. Also updated the related unit tests.
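For illustration only, here is a hypothetical Boost-style test in the spirit of those updates, exercising the RangesSetSketch toy from above (the PR's real unit tests target CRangesSet and differ):

```cpp
// Hypothetical test: one far-away index must cost one extra range,
// not a billion enumerated entries. Uses RangesSetSketch from the sketch above.
#define BOOST_TEST_MODULE ranges_set_sketch_tests
#include <boost/test/included/unit_test.hpp>
#include <cstdint>

BOOST_AUTO_TEST_CASE(far_index_adds_single_range)
{
    RangesSetSketch set;
    for (uint64_t v = 0; v < 100; ++v) set.Add(v); // one range: [0, 99]
    set.Add(1'000'000'000);                        // hostile gap -> second range
    BOOST_CHECK_EQUAL(set.RangeCount(), 2);
    BOOST_CHECK(set.Contains(99));
    BOOST_CHECK(!set.Contains(100));
    BOOST_CHECK(set.Contains(1'000'000'000));
}
```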
How Has This Been Tested?
feature_asset_locks.py -> it failed as expected, because the transaction that should generate a too-big gap was successfully mined once CSkipSet was replaced.
Breaking Changes
It changes the behavior of the Credit Pool in the case of big gaps. It won't affect anything at the moment, because that is a theoretical future situation.
Checklist: