
Conversation

@serichoi65
Contributor

@serichoi65 serichoi65 commented Nov 17, 2025

Description

Operators were not earning rewards during the deallocation queue period, even though their allocations remain at risk until the effective_date.

Current behavior:

  • Operator deallocates 100 from allocation on Day 1 (effective_date = Day 15)
  • operator_allocations table immediately shows reduced allocation
  • Rewards calculation uses reduced amount → ❌ Operator loses 14 days of rewards

Expected behavior:

  • Operator should continue earning on OLD (higher) allocation until effective_date
  • After effective_date, earn on NEW (lower) allocation

Solution

Added AdjustOperatorShareSnapshotsForDeallocationQueue() function that:

  1. Reads magnitude_decrease from deallocation_queue_snapshots
  2. Aggregates across all AVSs per (operator, strategy)
  3. Adds the decrease back to operator_share_snapshots during queue period

This mirrors the existing withdrawal queue adjustment for stakers.
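Roughly, the adjustment boils down to something like the sketch below. This is illustrative only: the table and column names follow the description above (deallocation_queue_snapshots, operator_share_snapshots), the shares column is assumed to be stored as text as elsewhere in the schema, and the magnitude-to-shares conversion is elided; the real query lives in deallocationQueueShareSnapshots.go.

```sql
-- Sketch: add the pending deallocation magnitude back to operator share snapshots
-- while the queue is still open (snapshot < effective_date). Leaving avs out of
-- the GROUP BY sums the decrease across all AVSs per (operator, strategy).
update operator_share_snapshots oss
set shares = (oss.shares::numeric + adj.total_decrease)::text
from (
    select operator, strategy, snapshot, sum(magnitude_decrease::numeric) as total_decrease
    from deallocation_queue_snapshots
    where snapshot < effective_date
    group by operator, strategy, snapshot
) adj
where oss.operator = adj.operator
  and oss.strategy = adj.strategy
  and oss.snapshot = adj.snapshot;
```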

Changes

  • Added adjustment function in deallocationQueueShareSnapshots.go
  • Integrated into rewards calculation pipeline in rewards.go (line 684-689)
  • Automatically filters by effective_date - no manual cleanup needed

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 53e12db to ef95a04 Compare November 20, 2025 14:18
@serichoi65 serichoi65 changed the base branch from master to seri.choi/withdrawal-deallocation-queue November 20, 2025 14:19
@serichoi65 serichoi65 marked this pull request as ready for review November 20, 2025 14:23
@serichoi65 serichoi65 requested a review from a team as a code owner November 20, 2025 14:23
@serichoi65 serichoi65 requested review from 0xrajath and seanmcgary and removed request for a team November 20, 2025 14:23
qsw.strategy,
qsw.shares_to_withdraw as shares,
b_queued.block_time as queued_timestamp,
date(b_queued.block_time) as queued_date,
Contributor

Why is rounding being done here (cast block_time to date)? My thinking is that the purpose of this table is to be used to keep track of shares that were in the queue and then once they are completable we insert a diff back into the staker_shares table.

Contributor Author

This is date rounding, not allocation rounding. It converts block timestamps to dates so we can figure out what day a withdrawal was queued, compare that with snapshot_date, and calculate the +14-day withdrawable_date.
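For illustration, the conversion looks roughly like this. Sketch only: the source table and join column are assumptions based on the aliases in the diff (queued_slashing_withdrawals appears elsewhere in this PR), and the 14-day escrow length is also an assumption.

```sql
-- Sketch: derive queued_date and withdrawable_date from the queue block's timestamp
select
    qsw.staker,
    qsw.strategy,
    qsw.shares_to_withdraw         as shares,
    b_queued.block_time            as queued_timestamp,
    date(b_queued.block_time)      as queued_date,        -- calendar day the withdrawal entered the queue
    date(b_queued.block_time) + 14 as withdrawable_date   -- first day it can be completed (assumed 14-day escrow)
from queued_slashing_withdrawals qsw
join blocks b_queued
    on b_queued.number = qsw.block_number;
```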

Contributor

I don't think we need to do date rounding either. Ideally we just insert the share difference, once the withdrawal is completable, into the staker_shares table. That way, we let the logic we've already written handle the date rounding (staker_share_snapshots)

Contributor

A design principle that makes sense is that we should do as little rounding as possible. For this functionality, I'd like us to ideally just insert into the staker_shares table. With that in mind, maybe we should have a parallel staker shares table just for rewards?

Member

@seanmcgary seanmcgary Dec 4, 2025

> A design principle that makes sense is that we should do as little rounding as possible

+1 to this. Don't round; treat this table as a time-series table like the staker shares table.

> I'd like us to ideally just insert into the staker_shares table.

Don't do this. The resulting values for the snapshots table should just query the staker shares and whatever other tables are needed to derive the values. Since we need to account for some kind of balances based on an effective timestamp, there's no clean way to trigger an insert for that, so querying and deriving is much cleaner and doesn't pollute the staker_shares table, which is entirely sourced from on-chain events.

Contributor

Good point. We should just join a parallel withdrawal_queue table with staker_shares and then have that processed into staker_share_snapshots.

operator,
avs,
strategy,
magnitude_decrease,
Contributor

@ypatil12 ypatil12 Nov 28, 2025

Why store the diff? It seems fine to just store the magnitude itself and then take the latest magnitude for the given day when calculating the slashable stake.

Contributor Author

It would also work to store prev_magnitude, but I think tracking the diff is easier when there are multiple pending deallocations, and it makes aggregation simpler.

Contributor

One invariant of the core protocol is that there can be at most one pending allocation or deallocation at a time. Unsure if we can take advantage of this. It also seems like we're not using the diff in the operatorAllocationSnapshots file.

@ypatil12
Contributor

ypatil12 commented Nov 29, 2025

The withdrawals are still processed immediately instead of 14 days later. The stakerShares file also has to be edited to reflect this. One thing to think about here is that we would (presumably) want the sidecar API to return the staker's actual shares in the core protocol, which should be immediately decremented on a queue. However, for rewards we do not want the shares to be immediately decremented.

@serichoi65 serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch 2 times, most recently from f51270b to c6ae65a Compare December 1, 2025 09:44
@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch 2 times, most recently from 11b9e55 to d06c153 Compare December 1, 2025 10:24
@serichoi65
Contributor Author

serichoi65 commented Dec 1, 2025

@ypatil12 This is the current architecture:

Blockchain Event (SlashingWithdrawalQueued)
  ↓
eigenState: Create negative delta (protocol truth)
  ↓
staker_share_deltas table (single source of truth)
  ↓
Rewards: Read deltas + apply queue adjustment

  • eigenState should always reflect protocol state
    • When SlashingWithdrawalQueued fires, shares ARE immediately decremented on-chain
    • eigenState indexer mirrors this exactly - that's its job
    • Changing this breaks the "state mirror" principle
  • Single source of truth
    • All downstream consumers read from staker_share_deltas
    • Rewards calculator applies its domain logic on top
    • Clean data flow: raw state → business logic

That's why it would be better not to edit stakerShares, but instead to add an additional deltas table so that rewards are calculated correctly.

@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from d06c153 to 6d252e7 Compare December 1, 2025 10:38
@serichoi65 serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch 2 times, most recently from bfc21ac to 4d8299b Compare December 1, 2025 17:08
@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 6d252e7 to ef0aba3 Compare December 1, 2025 22:11
Contributor

@ypatil12 ypatil12 left a comment

I think it would be helpful to have sequence diagrams showing how events are processed, including the propagation of an event through each file, for allocation/deallocation and withdrawals.

func (r *RewardsCalculator) AdjustStakerShareSnapshotsForWithdrawalQueue(snapshotDate string) error {
adjustQuery := `
insert into staker_share_snapshots(staker, strategy, shares, snapshot)
Contributor

I'm not sure if we can actually do this. My hunch is that we can run into issues with the snapshotting that's already been done and overwrite state. For example, say the staker shares table looked like:

Day 0: Alice queues a withdrawal for 30 shares.

  1. Day 1: Alice has 35 shares
  2. Day 2: Alice has 40 shares (Deposits 5 extra)

Then on Day 2, we also apply a queued withdrawal decrementing her shares down to 10. How do we handle this diff appropriately?

Furthermore, staker_shares keeps track of a running sum of shares, and I'm not sure this handles the shares that were previously credited to the staker.

@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from ef0aba3 to 9faf22a Compare December 2, 2025 15:31
Contributor

@ypatil12 ypatil12 left a comment

I think it might be worth writing down a diagram or flow showing how the staker shares propagate after a withdrawal, and the tables involved at each step of the process.

primary key (staker, strategy, snapshot),
constraint uniq_withdrawal_queue_share_snapshots unique (staker, strategy, snapshot)
)`,
`create index if not exists idx_withdrawal_queue_share_snapshots_snapshot on withdrawal_queue_share_snapshots(snapshot)`,
Contributor

I don't think the withdrawal queue shares need to be snapshotted (have daily rounding). We should just insert them into staker shares.

My thinking here is that:

  1. Alice queues a withdrawal. The staker shares table has a decrement for 100 shares
  2. We add back 100 shares into the staker shares table to cancel out the withdrawal
  3. 14 days later, we add a decrement of 100 shares into the staker shares table so that it's processed properly

Contributor

Unless the above is overly complex? But I do think it is clean to only snapshot on top of the staker shares table and not withdrawals.

Contributor

The other option is, for rewards, to have an if case where we don't add withdrawals immediately to staker shares; we just add them to the withdrawal_queue table. Then we do step 3 above. This is likely the cleanest.

Member

Rather than thrashing the table and having to subtract, then add, then subtract again, why not create a new table to record these cases that you can then sum together with the staker shares table?

Contributor

@ypatil12 ypatil12 Dec 6, 2025

Agreed. But you still need to make sure that the staker_shares table doesn't process withdrawals right when they're queued. So, you'd have both:

  1. A queued withdrawal table that has the share decrement with the block 14 days from now. This is joined with staker shares
  2. Staker shares doesn't have withdrawals added to it immediately

zap.Uint64("maxBlock", maxBlock),
zap.Uint64("sabineForkBlock", sabineFork.BlockNumber),
)
return nil
Contributor

Think we should insert this into staker_shares table

Contributor

IMO it's best for us to rely on the snapshotting only done on the staker_shares table. We simply need to be smart about when to insert a withdrawal into that table

Contributor

So, when a withdrawal happens, we don't insert the share decrement. After a withdrawal is completable we do insert the share decrement

@serichoi65 serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch from 41d0383 to 691afe7 Compare December 3, 2025 06:39
func (m *Migration) Up(db *sql.DB, grm *gorm.DB, cfg *config.Config) error {
queries := []string{
// VIEW combines base shares + withdrawal queue adjustments
`create or replace view staker_share_snapshots_final as
Member

Creating this as a view is going to be problematic since views are basically just aliases to queries. Materializing this data into a proper snapshot table should be the solution. A snapshot row shouldn't ever change in the future once it's calculated, so I'd worry about a view where that might not be the case.
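For illustration, materializing the same combined rows might look roughly like the sketch below instead of the view. The table definition, column types, and the full outer join shape are assumptions; the coalesce expressions mirror the view under review.

```sql
-- Sketch: write the combined rows once per snapshot run instead of exposing a view.
create table if not exists staker_share_snapshots_final (
    staker   varchar,
    strategy varchar,
    shares   text,
    snapshot date
);

insert into staker_share_snapshots_final (staker, strategy, shares, snapshot)
select
    coalesce(base.staker, wq.staker)     as staker,
    coalesce(base.strategy, wq.strategy) as strategy,
    (coalesce(base.shares::numeric, 0) + coalesce(wq.shares::numeric, 0))::text as shares,
    coalesce(base.snapshot, wq.snapshot) as snapshot
from staker_share_snapshots base
full outer join withdrawal_queue_share_snapshots wq
    on  wq.staker   = base.staker
    and wq.strategy = base.strategy
    and wq.snapshot = base.snapshot;
```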

coalesce(base.strategy, wq.strategy) as strategy,
(coalesce(base.shares::numeric, 0) + coalesce(wq.shares::numeric, 0))::text as shares,
coalesce(base.snapshot, wq.snapshot) as snapshot
from staker_share_snapshots base
Member

Why not adjust the query that populates the staker_share_snapshots table to account for this case?

ROW_NUMBER() OVER (PARTITION BY operator, avs, strategy, operator_set_id, cast(block_time AS DATE) ORDER BY block_time DESC, log_index DESC) AS rn
FROM operator_allocations oa
INNER JOIN blocks b ON oa.block_number = b.number
INNER JOIN blocks b ON oa.effective_block = b.number
Member

How are you dealing with backwards compatibility?

-- First allocation: round down to current day (conservative default)
date_trunc('day', block_time)
-- First allocation: round up to next day
date_trunc('day', block_time) + INTERVAL '1' day
Member

How are you dealing with backwards compatibility?

@serichoi65
Contributor Author

serichoi65 commented Dec 4, 2025

┌─────────────────────────────────────────────────────────────┐
│ DATA FLOW: Blockchain Events → Snapshots → Rewards         │
└─────────────────────────────────────────────────────────────┘

1. BLOCKCHAIN EVENTS (Pure, Immutable)
   ├─> staker_shares (time-series, protocol truth)
   └─> queued_slashing_withdrawals (time-series, protocol truth)

2. SNAPSHOT GENERATION (Materialized, Immutable)
   ├─> withdrawal_queue_share_snapshots 
   │   └─ Tracks shares in 14-day queue (audit trail)
   │
   └─> staker_share_snapshots ✨ MATERIALIZED HERE ✨
       └─ Base shares + Withdrawal queue adjustments
       └─ Immutable once calculated
       └─ No VIEW overhead at query time

3. GOLD TABLES (Rewards Calculation)
   └─> Read staker_share_snapshots directly
       └─ Already includes all adjustments
       └─ Simple, fast queries

Key Benefits:

✅ Clean Separation of Concerns

  • staker_shares: Pure blockchain state
  • withdrawal_queue_share_snapshots: Audit trail
  • staker_share_snapshots: Materialized business logic

✅ Performance

  • No VIEW overhead at query time
  • Data materialized once during generation
  • Gold queries stay simple

✅ Data Integrity

  • Snapshots immutable once calculated
  • Easy to audit and debug
  • Clear data lineage

✅ Backward Compatibility

  • Old snapshots won't be recalculated
  • Graceful fallbacks for missing data
  • Fork-based logic for behavioral changes

@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 4ddeab6 to 89ab4b5 Compare December 4, 2025 22:43
@ypatil12
Contributor

ypatil12 commented Dec 6, 2025

My main point on the above is that, from a design perspective, we should strive to insert share increments/decrements into base bronze tables and then have the snapshot table process those share changes. It's cleaner than inserting a snapshotted record into an already snapshotted table... unless I'm missing something:

So... you have withdrawal_queue_shares, which is joined with staker_shares, and that new table is eventually processed into staker_share_snapshots.
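To make that flow concrete, a rough sketch is below. It assumes delta semantics (staker_share_deltas rather than the running-sum staker_shares table) and assumes withdrawal_queue_shares carries queued/completable block columns; none of these names are final.

```sql
-- Sketch: combine on-chain share deltas with an offsetting series derived from the
-- withdrawal queue, so the queue-time decrement is cancelled while in escrow and
-- re-applied once the withdrawal becomes completable.
with reward_deltas as (
    select staker, strategy, shares::numeric as shares, block_number
    from staker_share_deltas

    union all

    -- cancel the decrement while the withdrawal is still in the escrow window ...
    select staker, strategy, shares_to_withdraw::numeric, queued_block_number
    from withdrawal_queue_shares

    union all

    -- ... and re-apply it once the withdrawal becomes completable
    select staker, strategy, -(shares_to_withdraw::numeric), completable_block_number
    from withdrawal_queue_shares
)
select staker, strategy, sum(shares) as shares, block_number
from reward_deltas
group by staker, strategy, block_number;
-- this combined series would then be rolled up into staker_share_snapshots by the
-- existing snapshot logic, so no extra rounding is introduced here
```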

@serichoi65 serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch from 691afe7 to 3174111 Compare December 7, 2025 10:41
@serichoi65 serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch from 3174111 to 9f3db12 Compare December 7, 2025 13:24
@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 89ab4b5 to 5d945b9 Compare December 7, 2025 13:26
@serichoi65 serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 5d945b9 to cc6120e Compare December 7, 2025 13:27