feat: withdrawalQueue and deallocationQueue logic into rewards calculation #476
base: seri.choi/withdrawal-deallocation-queue
Conversation
qsw.strategy,
qsw.shares_to_withdraw as shares,
b_queued.block_time as queued_timestamp,
date(b_queued.block_time) as queued_date,
Why is rounding being done here (casting block_time to a date)? My thinking is that the purpose of this table is to keep track of shares that are in the queue; then, once they are completable, we insert a diff back into the staker_shares table.
This is date rounding, not allocation rounding: it converts block timestamps to dates to figure out what day a withdrawal was queued, compares that with snapshot_date, and calculates the +14-day withdrawable_date.
I don't think we need to do date rounding either. Ideally we just insert the share difference, once the withdrawal is completable, into the staker_shares table. That way, we let the logic we've already written handle the date rounding (staker_share_snapshots)
A design principle that makes sense is that we should do as little rounding as possible. For this functionality, I'd like us to ideally just insert into the staker_shares table. With that in mind, maybe we should have a parallel staker shares table just for rewards?
A design principle that makes sense is that we should do as little rounding as possible
+1 to this. Don't round; treat this table as a time-series table, like the staker shares table.
I'd like us to ideally just insert into the staker_shares table.
Don't do this. The resulting values for the snapshots table should just query the staker shares table and whatever other tables are needed to derive the values. Since we need to account for balances based on an effective timestamp, there's no clean way to trigger an insert for that, so querying and deriving is much cleaner and doesn't pollute the staker_shares table, which is sourced entirely from on-chain events.
Good point. We should just join a parallel withdrawal_queue table with staker_shares and then have that processed into staker_share_snapshots.
operator,
avs,
strategy,
magnitude_decrease,
Why store the diff? It's fine to just store the magnitude itself and then take the latest magnitude for the given day when calculating the slashable stake.
It would also work to store prev_magnitude, but I think tracking the diff makes it easier to handle multiple pending deallocations and to aggregate.
One invariant of the core protocol is that there can be at most one pending allocation or deallocation at a time. Unsure if we can take advantage of this. It also seems like we're not using the diff in the operatorAllocationSnapshots file.
The withdrawals are still processed immediately instead of 14 days later. The stakerShares file also has to be edited to reflect this. One thing to think about here: we would (presumably) want the sidecar API to return the user's actual shares in the core protocol, which should be immediately decremented on a queue. For rewards, however, we do not want the shares to be immediately decremented.
@ypatil12 This is the current architecture:
That's why it would be better not to edit stakerShares but to have an additional deltas table to correctly calculate rewards.
ypatil12 left a comment
I think it would be helpful to have sequence diagrams showing how events are processed, including the propagation of an event through each file, for allocation/deallocation and withdrawals.
func (r *RewardsCalculator) AdjustStakerShareSnapshotsForWithdrawalQueue(snapshotDate string) error {
	adjustQuery := `
	insert into staker_share_snapshots(staker, strategy, shares, snapshot)
I'm not sure we can actually do this. My hunch is that we can run into issues with the snapshotting that's already been done and overwrite state. For example, say the staker shares table looked like:
- Day 0: Alice queues a withdrawal for 30 shares
- Day 1: Alice has 35 shares
- Day 2: Alice has 40 shares (deposits 5 extra)
Then on Day 2, we also apply the queued withdrawal, decrementing her shares down to 10. How do we handle this diff appropriately?
Furthermore, staker_shares keeps a running sum of shares, and I'm not sure this handles the shares that were previously credited to the staker.
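The running-sum behavior under discussion can be made concrete with a toy model. This is illustrative only (the real tables are SQL, and `delta`/`snapshotAt` are hypothetical names): if the queued 30-share withdrawal is applied as a delta on the day it completes, the day-1 snapshot is derived unchanged rather than overwritten.

```go
package main

import "fmt"

// delta is one share change in a running-sum shares table.
type delta struct {
	day    int
	shares int64
}

// snapshotAt returns the cumulative share balance at the end of a day,
// i.e. the running sum a snapshot table would materialize.
func snapshotAt(deltas []delta, day int) int64 {
	var total int64
	for _, d := range deltas {
		if d.day <= day {
			total += d.shares
		}
	}
	return total
}

func main() {
	// Alice's history from the comment: 35 shares on day 1, +5 deposit on day 2,
	// and the queued 30-share withdrawal applied as a delta on day 2.
	deltas := []delta{{1, 35}, {2, 5}, {2, -30}}
	fmt.Println(snapshotAt(deltas, 1)) // 35: day-1 snapshot is untouched
	fmt.Println(snapshotAt(deltas, 2)) // 10: withdrawal lands on day 2
}
```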
ypatil12 left a comment
I think it might be worth writing down a diagram or flow of how the staker shares propagate on a withdrawal, and of the tables at each step of the process.
primary key (staker, strategy, snapshot),
constraint uniq_withdrawal_queue_share_snapshots unique (staker, strategy, snapshot)
)`,
`create index if not exists idx_withdrawal_queue_share_snapshots_snapshot on withdrawal_queue_share_snapshots(snapshot)`,
I don't think the withdrawal queue shares need to be snapshotted (have daily rounding). We should just insert them into staker shares.
My thinking here is that:
1. Alice queues a withdrawal. The staker shares table gets a decrement of 100 shares
2. We add back 100 shares into the staker shares table to cancel out the withdrawal
3. 14 days later, we add a decrement of 100 shares into the staker shares table so that it's processed properly
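The three steps above can be sketched with a tiny running-sum model (illustrative types and numbers only; `balanceAt` is a hypothetical stand-in for the SQL snapshot query):

```go
package main

import "fmt"

// delta is one share change inserted into a running-sum shares table.
type delta struct {
	day    int
	shares int64
}

// balanceAt sums all deltas effective on or before the given day.
func balanceAt(deltas []delta, day int) int64 {
	var total int64
	for _, d := range deltas {
		if d.day <= day {
			total += d.shares
		}
	}
	return total
}

func main() {
	const escrowDays = 14
	deltas := []delta{
		{0, 100},          // Alice's original 100 shares (illustrative)
		{0, -100},         // step 1: queueing the withdrawal decrements 100
		{0, 100},          // step 2: add back 100 to cancel it for rewards
		{escrowDays, -100}, // step 3: real decrement 14 days later
	}
	fmt.Println(balanceAt(deltas, 0))  // 100: still earning while queued
	fmt.Println(balanceAt(deltas, 14)) // 0: decrement lands once completable
}
```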
Unless the above is overly complex? But I do think it is clean to only snapshot on top of the staker shares table and not withdrawals.
The other option, for rewards, is to have an if case where we don't add withdrawals immediately to staker shares. We just add them to the withdrawal_queue table. Then we do step 3 above. This is likely the cleanest.
Rather than thrashing the table and having to subtract, then add, then subtract again, why not create a new table to record these cases that you can then sum together with the staker shares table?
Agreed. But you still need to make sure that the staker_shares table doesn't process withdrawals right when they're queued. So, you'd have both:
- A queued withdrawal table that has the share decrement with the block 14 days from now. This is joined with staker shares
- Staker shares doesn't have withdrawals added to it immediately
zap.Uint64("maxBlock", maxBlock),
zap.Uint64("sabineForkBlock", sabineFork.BlockNumber),
)
return nil
I think we should insert this into the staker_shares table.
IMO it's best for us to rely on the snapshotting done only on the staker_shares table. We simply need to be smart about when to insert a withdrawal into that table.
So, when a withdrawal happens, we don't insert the share decrement. Once the withdrawal is completable, we do insert it.
func (m *Migration) Up(db *sql.DB, grm *gorm.DB, cfg *config.Config) error {
	queries := []string{
		// VIEW combines base shares + withdrawal queue adjustments
		`create or replace view staker_share_snapshots_final as
Creating this as a view is going to be problematic, since views are basically just aliases for queries. Materializing this data into a proper snapshot table should be the solution: a snapshot row shouldn't ever change once it's calculated, and I'd worry about a view where that might not be the case.
coalesce(base.strategy, wq.strategy) as strategy,
(coalesce(base.shares::numeric, 0) + coalesce(wq.shares::numeric, 0))::text as shares,
coalesce(base.snapshot, wq.snapshot) as snapshot
from staker_share_snapshots base
Why not adjust the query that populates the staker_share_snapshots table to account for this case?
ROW_NUMBER() OVER (PARTITION BY operator, avs, strategy, operator_set_id, cast(block_time AS DATE) ORDER BY block_time DESC, log_index DESC) AS rn
FROM operator_allocations oa
INNER JOIN blocks b ON oa.block_number = b.number
INNER JOIN blocks b ON oa.effective_block = b.number
How are you dealing with backwards compatibility?
-- First allocation: round down to current day (conservative default)
date_trunc('day', block_time)
-- First allocation: round up to next day
date_trunc('day', block_time) + INTERVAL '1' day
How are you dealing with backwards compatibility?
Key Benefits:
✅ Clean Separation of Concerns
✅ Performance
✅ Data Integrity
✅ Backward Compatibility
My main point on the above design is that, from a design perspective, we should strive to insert share increments/decrements into base bronze tables and then have the snapshot table process these share changes. It's cleaner than inserting a snapshotted record into an already snapshotted table... unless I'm missing something: So... you have
Description
Operators were not earning rewards during the deallocation queue period, even though their allocations remain at risk until the effective_date.

Current behavior:
- The operator_allocations table immediately shows the reduced allocation

Expected behavior:
- Operators keep earning on the old allocation until the effective_date
- After the effective_date, earn on the NEW (lower) allocation

Solution
Added an AdjustOperatorShareSnapshotsForDeallocationQueue() function that:
- adds back the magnitude_decrease from deallocation_queue_snapshots to operator_share_snapshots during the queue period

This mirrors the existing withdrawal queue adjustment for stakers.
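The queue-period adjustment for operators can be sketched as follows. Names and numbers are hypothetical (`effectiveMagnitude` is not in this PR, and magnitudes here are small integers rather than 1e18-scaled values):

```go
package main

import "fmt"

// effectiveMagnitude returns the magnitude to use for rewards on a given day.
// During the deallocation queue window the pending magnitude_decrease is
// added back, so the operator keeps earning on the old (higher) allocation.
func effectiveMagnitude(current, magnitudeDecrease uint64, day, effectiveDay int) uint64 {
	if day < effectiveDay {
		return current + magnitudeDecrease // queue period: old allocation
	}
	return current // at/after effective_date: new, lower allocation
}

func main() {
	// Hypothetical deallocation from 1000 down to 600, effective on day 17.
	current, decrease := uint64(600), uint64(400)
	fmt.Println(effectiveMagnitude(current, decrease, 10, 17)) // 1000 during the queue
	fmt.Println(effectiveMagnitude(current, decrease, 17, 17)) // 600 once effective
}
```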
Changes
- deallocationQueueShareSnapshots.go
- rewards.go (line 684-689)
- Adjustments expire at the effective_date - no manual cleanup needed

Type of change
How Has This Been Tested?
Checklist: