
Prevent the consumption of messages for topics paused while fetch is in-flight #397

Merged · 9 commits merged on Jun 18, 2019
src/consumer/__tests__/consumeMessages.spec.js (51 additions, 0 deletions)

@@ -609,6 +609,57 @@ describe('Consumer', () => {
})
})

it('discards messages received when pausing while fetch is in-flight', async () => {
consumer = createConsumer({
cluster: createCluster(),
groupId,
maxWaitTimeInMs: 1000,
logger: newLogger(),
})

const messages = Array(10)
.fill()
.map(() => {
const value = secureRandom()
return { key: `key-${value}`, value: `value-${value}` }
})
await producer.connect()
await producer.send({ acks: 1, topic: topicName, messages })

await consumer.connect()

await consumer.subscribe({ topic: topicName, fromBeginning: true })

const sleep = value => waitFor(delay => delay >= value)
let offsetsConsumed = []

const eachBatch = async ({ batch, heartbeat }) => {
for (const message of batch.messages) {
offsetsConsumed.push(message.offset)
}

await heartbeat()
}

consumer.run({
eachBatch,
})

await waitForConsumerToJoinGroup(consumer)

await waitFor(() => offsetsConsumed.length === messages.length, { delay: 50 })
await sleep(50)

// Hope that we're now in an active fetch state? Something like FETCH_START might help
Collaborator: This looks like it's going to be extremely flaky. We should indeed add a new instrumentation event.

Collaborator (author): An instrumentation event would definitely be better. However, in all the test runs since the first version of this was introduced (in #367), I haven't found a trace of it failing once. That said, if Kafka responds much more slowly because it's doing some compaction or whatever, the timing is most likely hosed.

Collaborator: These things tend to become an issue in CI, even if they're reasonably reliable locally.

Collaborator (author): I added FETCH_START, so far without a payload, as the only use case for it right now doesn't require one to function. It's mirrored after GROUP_JOIN, with subject first and action second. That was a bit of a judgement call, as the BATCH events list the verb first, but I reckoned subject-first should help group them nicely when listing them, which seems as good a reason for something rather arbitrary as any :)

const seekedOffset = offsetsConsumed[Math.floor(messages.length / 2)]
consumer.pause([{ topic: topicName }])
await producer.send({ acks: 1, topic: topicName, messages }) // trigger completion of fetch

await sleep(200)
Collaborator: Rather than sleeping for an arbitrary amount and hoping we've consumed the message by then, use the waitFor utility.

Collaborator (author): The point is that no message has been consumed, that nothing has happened, so that makes a waitFor a bit trickier! Happy to hear any suggestions for that :)

Collaborator (@Nevon, Jun 17, 2019): Ah, I should have read it more carefully. Perhaps there's an event you could wait for having happened, such as END_BATCH_PROCESS, to make sure a fetch has happened without necessarily having to wait 200ms.

Collaborator (author): Ended up going with FETCH, which seems to capture the use case correctly. I looked into END_BATCH_PROCESS, which worked for the unfixed implementation, but it never triggered in the fixed one because the filtered response was an empty batch, which gets skipped!


expect(offsetsConsumed.length).toEqual(messages.length)
})

describe('transactions', () => {
testIfKafka_0_11('accepts messages from an idempotent producer', async () => {
cluster = createCluster({ allowExperimentalV011: true })
src/consumer/consumerGroup.js (21 additions, 17 deletions)

@@ -369,23 +369,27 @@ module.exports = class ConsumerGroup {
topics: requestsPerLeader[nodeId],
})

-      const batchesPerPartition = responses.map(({ topicName, partitions }) => {
-        const topicRequestData = requestsPerLeader[nodeId].find(
-          ({ topic }) => topic === topicName
-        )
-
-        return partitions
-          .filter(partitionData => !this.seekOffset.has(topicName, partitionData.partition))
-          .map(partitionData => {
-            const partitionRequestData = topicRequestData.partitions.find(
-              ({ partition }) => partition === partitionData.partition
-            )
-
-            const fetchedOffset = partitionRequestData.fetchOffset
-
-            return new Batch(topicName, fetchedOffset, partitionData)
-          })
-      })
+      const pausedAtResponse = this.subscriptionState.paused()
+
+      const batchesPerPartition = responses
+        .filter(({ topicName }) => !pausedAtResponse.includes(topicName))
+        .map(({ topicName, partitions }) => {
+          const topicRequestData = requestsPerLeader[nodeId].find(
+            ({ topic }) => topic === topicName
+          )
+
+          return partitions
+            .filter(partitionData => !this.seekOffset.has(topicName, partitionData.partition))
+            .map(partitionData => {
+              const partitionRequestData = topicRequestData.partitions.find(
+                ({ partition }) => partition === partitionData.partition
+              )
+
+              const fetchedOffset = partitionRequestData.fetchOffset
+
+              return new Batch(topicName, fetchedOffset, partitionData)
+            })
+        })

return flatten(batchesPerPartition)
})
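The essence of the fix above — discarding fetch responses for any topic that was paused while the request was in flight — can be sketched as a pure function. The object shapes here are simplified assumptions for illustration, not the actual kafkajs internals:

```javascript
// Drop fetch responses for topics that were paused after the fetch request
// went out, so their messages are never turned into batches.
const discardPausedTopics = (responses, pausedTopics) =>
  responses.filter(({ topicName }) => !pausedTopics.includes(topicName))

const responses = [
  { topicName: 'orders', partitions: [{ partition: 0 }] },
  { topicName: 'payments', partitions: [{ partition: 0 }] },
]

// 'orders' was paused while the fetch was in flight; its response is discarded.
console.log(discardPausedTopics(responses, ['orders']).map(r => r.topicName))
// → [ 'payments' ]
```

Checking the paused set at response time, rather than only at request time, is what closes the window where a pause issued mid-fetch would previously still deliver messages.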