
Conversation

Contributor

@def- def- commented Feb 5, 2026

Checklist

  • This PR has adequate test coverage / QA involvement has been duly considered. (trigger-ci for additional test/nightly runs)
  • This PR has an associated up-to-date design doc, is a design doc (template), or is sufficiently small to not require a design.
  • If this PR evolves an existing $T ⇔ Proto$T mapping (possibly in a backwards-incompatible way), then it is tagged with a T-proto label.
  • If this PR will require changes to cloud orchestration or tests, there is a companion cloud PR to account for those changes that is tagged with the release-blocker label (example).
  • If this PR includes major user-facing behavior changes, I have pinged the relevant PM to schedule a changelog post.

@def- def- force-pushed the pr-iceberg-sink-testing branch 14 times, most recently from 4281285 to a9b6c61 on February 5, 2026 at 22:57
Snapshot batches can contain millions of rows, causing the DeltaWriter's
seen_rows HashMap to grow unbounded and consume excessive memory.

For snapshots, disable position delete tracking by setting max_seen_rows=0.
All deletes will use equality deletes instead, eliminating the memory
overhead at the cost of slightly slower reads (acceptable for snapshots).

Normal post-snapshot batches continue using position deletes as usual.

Requires iceberg-rust 1b01c099, which adds the disable feature.
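The memory trade-off above can be illustrated with a toy model. This is not the real DeltaWriter from iceberg-rust; the names here (`ToyDeltaWriter`, `DeleteKind`) and the exact semantics of `max_seen_rows` are illustrative assumptions, sketching only the idea that a cap of 0 disables row tracking and forces equality deletes:

```rust
// Toy model of the seen-rows cap: with max_seen_rows == 0 (the snapshot
// setting), no positions are remembered, so every delete falls back to an
// equality delete and memory stays flat regardless of snapshot size.
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum DeleteKind {
    Position, // requires remembering where each row landed in its data file
    Equality, // no per-row state; matched by key at read time
}

struct ToyDeltaWriter {
    max_seen_rows: usize,
    seen_rows: HashMap<u64, usize>, // row key -> position in data file
    next_pos: usize,
}

impl ToyDeltaWriter {
    fn new(max_seen_rows: usize) -> Self {
        Self { max_seen_rows, seen_rows: HashMap::new(), next_pos: 0 }
    }

    fn insert(&mut self, key: u64) {
        // Only track positions while under the cap; a cap of 0 tracks nothing.
        if self.seen_rows.len() < self.max_seen_rows {
            self.seen_rows.insert(key, self.next_pos);
        }
        self.next_pos += 1;
    }

    fn delete(&mut self, key: u64) -> DeleteKind {
        match self.seen_rows.remove(&key) {
            Some(_pos) => DeleteKind::Position,
            None => DeleteKind::Equality, // row was never tracked
        }
    }
}

fn main() {
    // Snapshot configuration: tracking disabled, memory does not grow.
    let mut snapshot_writer = ToyDeltaWriter::new(0);
    for key in 0..1_000_000u64 {
        snapshot_writer.insert(key);
    }
    assert_eq!(snapshot_writer.seen_rows.len(), 0);
    assert_eq!(snapshot_writer.delete(42), DeleteKind::Equality);

    // Normal post-snapshot configuration: position deletes still work.
    let mut normal_writer = ToyDeltaWriter::new(1 << 20);
    normal_writer.insert(7);
    assert_eq!(normal_writer.delete(7), DeleteKind::Position);
}
```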
For fresh sinks, the catch-up batch was incorrectly starting from
Timestamp::minimum() instead of as_of, causing it to cover a range
where no data exists.

Use max(resume_upper, as_of) as the batch lower bound to handle both:
- Fresh sinks: start from as_of (where data actually begins)
- Resuming sinks: start from resume_upper (where we left off)
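The fix reduces to a single `max`. A minimal sketch, assuming plain `u64` timestamps in place of Materialize's real frontier types (the function name is illustrative):

```rust
// Lower bound for the catch-up batch: max(resume_upper, as_of).
fn catchup_lower(resume_upper: u64, as_of: u64) -> u64 {
    // Fresh sink: resume_upper is still at the minimum timestamp,
    // so the batch starts at as_of, where data actually begins.
    // Resuming sink: resume_upper has advanced past as_of, so resume there.
    resume_upper.max(as_of)
}

fn main() {
    let as_of = 100;
    assert_eq!(catchup_lower(0, as_of), 100);   // fresh sink
    assert_eq!(catchup_lower(250, as_of), 250); // resuming sink
}
```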
Add debug! and trace! logging at key points to help diagnose issues:
- Batch description minting (catch-up and future batches)
- Waiting for first batch description before processing data
- Batch descriptions received by write operator
- Stashed rows (trace level) and periodic stash size warnings
- Batch closing with frontier positions
- Files written per batch

This will help debug snapshot processing issues and frontier advancement.
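The "periodic stash size warnings" item can be sketched as follows. This is a standalone toy, not the sink's actual code: it uses `eprintln!` where the real operator would use the tracing crate's `warn!`, and the `Stash` type and threshold are assumptions:

```rust
// Warn each time the stash grows by another warn_every rows, so a stuck
// frontier shows up in the logs instead of silently accumulating memory.
struct Stash {
    rows: Vec<String>,
    warn_every: usize, // must be > 0
}

impl Stash {
    // Returns true when this push emitted a warning.
    fn push(&mut self, row: String) -> bool {
        self.rows.push(row);
        let warned = self.rows.len() % self.warn_every == 0;
        if warned {
            eprintln!(
                "stashed {} rows while waiting for a batch description",
                self.rows.len()
            );
        }
        warned
    }
}

fn main() {
    let mut stash = Stash { rows: Vec::new(), warn_every: 3 };
    let warns: Vec<bool> = (0..6).map(|i| stash.push(format!("row {}", i))).collect();
    // A warning fires on every third stashed row.
    assert_eq!(warns, vec![false, false, true, false, false, true]);
}
```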
Frontier handling fixes:
- Track max observed timestamps before init to synthesize an upper when a bounded input closes.
- Exit cleanly once the frontier is empty after init.
- Start minting once the frontier reaches as_of/resume_upper instead of waiting past them.
- Close write batches when the input frontier reaches the batch upper, and only rescan when the batch or frontier advances.
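The "synthesize an upper" piece can be shown in miniature. A sketch under assumptions: `u64` timestamps stand in for Materialize's frontier types, and the function name is illustrative:

```rust
// When a bounded input closes before init, the frontier goes empty without
// ever announcing an upper. Use one past the largest timestamp actually
// observed, so the final batch can still be minted and closed.
fn synthesized_upper(max_observed: Option<u64>) -> Option<u64> {
    max_observed.map(|ts| ts + 1)
}

fn main() {
    assert_eq!(synthesized_upper(Some(41)), Some(42));
    assert_eq!(synthesized_upper(None), None); // no data seen: nothing to flush
}
```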
Contributor Author

def- commented Feb 10, 2026

@def- def- force-pushed the pr-iceberg-sink-testing branch 3 times, most recently from 4e747b2 to 0f020f8 on February 10, 2026 at 19:54
@def- def- force-pushed the pr-iceberg-sink-testing branch from 0f020f8 to 790431b on February 11, 2026 at 02:46
