Airflow Kafka Provider "commit_cadence" Not Working as Expected #34213
Comments
Do you know which part of the documentation should be changed?
It seems like you found a solution and know exactly what should be done when using Confluent Kafka. So maybe you could contribute this part to the provider documentation? It can be done easily by clicking Suggest a change on this page at https://airflow.apache.org/docs/apache-airflow-providers-apache-kafka/stable/operators/index.html
I would like to take this :)
@Taragolis I do not know which part should be changed, but I feel as if something should. Basically, while the "enable.auto.commit" option is on, it doesn't really matter what you put in the "commit_cadence" option of the operator, because it's going to commit the offset every 5 seconds by default.
Or maybe a solution already exists and you could provide the required parameters to the Consumer through the connection?
@Taragolis but if the user did not specify that, "enable.auto.commit" would be on by default, and in that case the commit_cadence selection would be redundant.
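For illustration, a minimal sketch of passing the setting through an Airflow connection — assuming the provider reads the consumer config from the connection's extra JSON; the connection id and values here are placeholders:

import json

from airflow.models import Connection

# Hypothetical example: a Kafka connection whose "extra" JSON carries the
# consumer config, with auto-commit disabled so commit_cadence is honored.
conn = Connection(
    conn_id="kafka_default",
    conn_type="kafka",
    extra=json.dumps(
        {
            "bootstrap.servers": "broker:9092",
            "group.id": "my-consumers",
            "auto.offset.reset": "earliest",
            "enable.auto.commit": "false",
        }
    ),
)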
Apache Airflow version
Other Airflow 2 version (please specify below)
What happened
When running the Airflow Kafka provider operator "ConsumeFromTopicOperator", one of my runs failed. Since I have the "commit_cadence" option set to "end_of_operator", I naturally expected duplicate records: the offset should not have been committed because the operator failed. But when the day ended my counts were off, and when I looked in my DB I found that the missing messages were exactly the ones from the window in which the run failed. So when the DAG run failed, the offset was for some reason still committed, even though I had set commit_cadence to "end_of_operator".
What you think should happen instead
Based on the provider's description of commit_cadence, the offset should not be committed until the operator has completed successfully. If the DAG run fails, the consumer should resume from the offset the operator started at.
How to reproduce
Run the Kafka provider on a topic, fail the task mid DAG run, and check whether it goes back and picks up the messages it missed. The connection information I used is below; a sketch of the operator call follows it:
{
"bootstrap.servers": SERVERS,
"group.id": GROUPID,
"auto.offset.reset": "earliest",
"security.protocol": "SSL",
"ssl.ca.location": "CA",
"ssl.certificate.location": "CERT",
"ssl.key.location": "KEY",
"ssl.key.password": "PW"
}
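A minimal sketch of the operator call — assuming the config above is stored in a connection named "kafka_default"; the topic and apply_function here are placeholders:

from airflow.providers.apache.kafka.operators.consume import ConsumeFromTopicOperator

# Sketch only: connection id, topic, and callable path are placeholders.
consume = ConsumeFromTopicOperator(
    task_id="consume_topic",
    kafka_config_id="kafka_default",            # connection holding the config above
    topics=["my-topic"],
    apply_function="my_module.process_message",
    commit_cadence="end_of_operator",           # expect a commit only on success
)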
Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
Versions of Apache Airflow Providers
apache-airflow-providers-apache-kafka==1.1.2
Deployment
Official Apache Airflow Helm Chart
Deployment details
No response
Anything else
Looking through the Confluent Kafka documentation, I suspect what is happening is that Confluent's consumers have an option "enable.auto.commit" that defaults to true and commits the offset every 5 seconds (https://docs.confluent.io/platform/current/clients/consumer.html#id1). When I set this option to false, the operator worked as expected and I got duplicate messages on failures.
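For reference, the same connection config as above with that one line added — the version that behaved as expected:

{
    "bootstrap.servers": SERVERS,
    "group.id": GROUPID,
    "auto.offset.reset": "earliest",
    "enable.auto.commit": "false",  # disables the background 5-second auto-commit
    "security.protocol": "SSL",
    "ssl.ca.location": "CA",
    "ssl.certificate.location": "CERT",
    "ssl.key.location": "KEY",
    "ssl.key.password": "PW"
}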
I don't really know what the expected behavior here is, but either 1) the source code should be changed to turn this option off, or 2) the documentation should state explicitly that you need to set this option to false for the commit_cadence option to work. A sketch of option 1 follows.
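To illustrate option 1, a hedged sketch of what the provider could do — this is not the actual provider code, and the function name is hypothetical:

# Hypothetical sketch: default auto-commit to off unless the user explicitly
# opted in, so that commit_cadence alone controls when offsets advance.
def build_consumer_config(user_config: dict) -> dict:
    config = dict(user_config)
    if "enable.auto.commit" not in config:
        # Without this, librdkafka commits every ~5 seconds in the
        # background, advancing the offset even if the task later fails.
        config["enable.auto.commit"] = False
    return config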
Are you willing to submit PR?
Code of Conduct