Since we increment valid reads, and tests assert that num produced == valid reads, don't we have this covered already?
If we consume a record twice then valid reads will be larger than num produced. If we are missing records then valid reads will be less than num produced.
As mentioned, there are a lot of cases where it is acceptable for offset continuity to be violated. We can't easily verify from the kgo-verifier side whether continuity is required, unless we add a flag and ask the user to specify whether it is expected.
However, users can easily check offset continuity in their tests by asserting on the number of valid reads and the maximum consumed offset (see the sketch below). If you get 10 valid reads and 9 is the highest consumed offset, then the offsets are contiguous.
There was only one exception to that check: if offsets rewound and then jumped over a gap equal to the rewind distance, you could still get 10 valid reads with 9 as the highest offset.
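A minimal sketch of that test-side assertion, assuming a single partition whose offsets start at 0 and hypothetical stat names (`valid_reads`, `max_offset_consumed`) rather than kgo-verifier's actual output schema:

```python
def assert_offsets_contiguous(num_produced: int, valid_reads: int,
                              max_offset_consumed: int) -> None:
    # Duplicate consumption would make valid_reads exceed num_produced;
    # lost records would make it smaller.
    assert valid_reads == num_produced, \
        f"expected {num_produced} valid reads, got {valid_reads}"
    # With offsets starting at 0, contiguous consumption implies the highest
    # consumed offset is exactly valid_reads - 1 (e.g. 10 reads -> offset 9).
    assert max_offset_consumed == valid_reads - 1, \
        f"offset gap or rewind: max offset {max_offset_consumed}, reads {valid_reads}"

# Example: 10 records produced, 10 valid reads, highest offset 9 -> passes.
assert_offsets_contiguous(num_produced=10, valid_reads=10, max_offset_consumed=9)
```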
We should default to expecting that Kafka offsets on consumed data will be contiguous, unless transactions and/or compaction are in use.
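As an illustration (not the actual kgo-verifier implementation), a consumer-side check for this default could track the last offset seen per partition and flag any gap, gated by a hypothetical "expect contiguous offsets" option so that workloads using transactions or compaction can opt out:

```python
class ContinuityChecker:
    """Sketch of per-partition offset-gap detection for consumed records."""

    def __init__(self, expect_contiguous: bool = True):
        self.expect_contiguous = expect_contiguous
        self.last_offset = {}  # (topic, partition) -> last consumed offset

    def on_record(self, topic: str, partition: int, offset: int) -> None:
        key = (topic, partition)
        prev = self.last_offset.get(key)
        if prev is not None and self.expect_contiguous and offset != prev + 1:
            # Transactions and compaction legitimately create gaps, so this
            # check only applies when the user expects contiguous offsets.
            raise AssertionError(
                f"non-contiguous offsets on {topic}/{partition}: {prev} -> {offset}")
        self.last_offset[key] = offset
```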
This would help to detect issues like #10782 when combined with fault injection.
JIRA Link: CORE-1331