feat: add max_processing_time config to Kafka Consumer input #9988
Conversation
Not sure what I'm doing wrong.
Weird, I didn't get that output on a fresh clone mounted to a clean golang Docker container instance. I assumed if it wanted to remove the spaces from …
What should be done with these linting issues?
I added imports of In…
Is refactoring the tests, renaming that …
As we try to clean up lint issues with each PR, you should feel free to make changes to get things lint-clean in your PR. I would add a comment to your initial PR message explaining why you made those changes, though.
After a closer look, I'm not knowledgeable enough to be comfortable fixing the …
Hey, thanks for taking the time to put this together, with tests and all. I do not know Kafka very well, but based on the bug reports it looks like the underlying issue is that Telegraf does not process events as fast as Kafka wants and, as a result, Kafka closes the connection? How would a user know to increase this setting, and to what value?
If I understand correctly (and I very well may not), the consumer needs to acknowledge completion of processing a message, not just ack receiving it. Sarama sets a default timeout of 100ms to complete processing (due to the implementation, that can be as much as 200ms).
Either the Telegraf debug log or Kafka broker log showing frequent reconnects.
A safe general rule should be to set … And I don't know how aggregator plugins would come into play here.
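For readers unfamiliar with sarama, here is a minimal Go sketch of the knob being discussed; the 500ms value is only an example, not a recommendation from this PR:

```go
package main

import (
	"fmt"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()

	// MaxProcessingTime is how long sarama expects the consumer to take
	// between messages before it assumes processing has stalled. If the
	// Telegraf output/flush path takes longer than this, the consumer is
	// torn down and resubscribed, which shows up as the frequent
	// reconnects mentioned above.
	cfg.Consumer.MaxProcessingTime = 500 * time.Millisecond // example value

	fmt.Println(cfg.Consumer.MaxProcessingTime)
}
```

The timeout is enforced with a ticker internally, which is roughly why the effective limit can be up to twice the configured value, as noted above.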
Thanks for the details, that helps a lot. I think if we update the README to guide the user just a little bit more, I am +1 on this.
Thanks again!
🥳 This pull request decreases the Telegraf binary size by 0.01 % for linux amd64 (new size: 132.1 MB, nightly size 132.1 MB). 📦 Looks like new artifacts were built from this PR.
Required for all PRs:
Resolves #7622, where the kafka_consumer input plugin continually abandons and resubscribes to a topic partition when the Telegraf output takes longer than the 100ms sarama default.
Added the ability to set sarama `Consumer.MaxProcessingTime` in the Kafka Consumer input plugin configuration by defining a new `max_processing_time` configuration parameter. Documentation is based on the sarama source. Created tests for default and custom configuration values. Did not create a test for an invalid configuration, because any valid Duration (which already has test coverage) should be a valid configuration.
This change locally hard-codes the sarama default of 100ms. The alternative, not setting a default here and falling back on the sarama default (the existing behavior), could result in the local documentation being incorrect if the sarama default is changed in the future.
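A rough sketch of the wiring described above, with the default hard-coded locally; the struct and method names here are illustrative stand-ins, not the actual Telegraf plugin code (the real plugin field would use Telegraf's config.Duration and a `toml:"max_processing_time"` tag):

```go
package main

import (
	"time"

	"github.com/Shopify/sarama"
)

// defaultMaxProcessingTime hard-codes the current sarama default locally so the
// plugin's documented default stays accurate even if sarama changes its own.
const defaultMaxProcessingTime = 100 * time.Millisecond

// kafkaConsumer is an illustrative stand-in for the plugin struct.
type kafkaConsumer struct {
	MaxProcessingTime time.Duration
}

// saramaConfig applies the configured (or default) value to the sarama config.
func (k *kafkaConsumer) saramaConfig() *sarama.Config {
	if k.MaxProcessingTime == 0 {
		k.MaxProcessingTime = defaultMaxProcessingTime
	}
	cfg := sarama.NewConfig()
	cfg.Consumer.MaxProcessingTime = k.MaxProcessingTime
	return cfg
}

func main() {
	k := &kafkaConsumer{} // no value configured: falls back to the 100ms default
	_ = k.saramaConfig()
}
```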
I see the issue from #7622 on my system, verified it can be reproduced with this build at the default `max_processing_time` of `100ms`, and verified it was resolved with a larger `max_processing_time` of `500ms`.
Full disclosure: first-time contributor & golang beginner. I tried to emulate existing code patterns. If I did anything wrong here, please let me know.