Kafka Extension upgrade from 3.6.0 to 3.7.0 & Fixing the null attribute issue #1232
Conversation
@kshyju - can you please approve this PR, as it is pending for the out-of-proc worker extension NuGet package release?
Noticed some public methods/types are missing XML doc comments/summaries. Could we add those as well? e.g. https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Kafka/src/KafkaTriggerAttribute.cs#L15
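For illustration, the kind of XML doc comment being asked for might look like the sketch below; the attribute class and property name are placeholders for this example, not taken from the actual KafkaTriggerAttribute source.

```csharp
using System;

// Illustrative only: a public property documented with an XML summary, the style
// requested for the remaining public members of the extension attributes.
public sealed class ExampleTriggerAttribute : Attribute
{
    /// <summary>
    /// Gets or sets the name of the Kafka topic to listen to.
    /// </summary>
    public string Topic { get; set; } = string.Empty;
}
```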
@@ -32,38 +32,38 @@ public KafkaOutputAttribute(string brokerList, string topic)
  /// Gets or sets the Avro schema.
  /// Should be used only if a generic record should be generated
  /// </summary>
- public string? AvroSchema { get; set; }
+ public string AvroSchema { get; set; }
Why change from nullable to non-nullable? Does tooling/static analysis show a warning about this (non-nullable) prop not being initialized? I think it is best to embrace the nullable type feature the language offers.
Will customers start seeing warnings about props not being initialized after they upgrade from 3.6.0 to 3.7.0?
The reason for the change is the compilation error with the use of nullable attributes, as discussed here.
Agreed that we should embrace nullable types - something we will definitely look into going forward. But for now, in order to maintain parity with the Kafka extension, we are keeping the attributes non-nullable.
Regarding warnings to customers - no, customers will not see any warnings about the property not being initialized, since it is an optional attribute parameter supported by C#.
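To illustrate the optional-parameter point, a usage sketch (the broker setting name, topic, and function shape are placeholders): the property is only a named argument on the attribute, so omitting it produces no "not initialized" warning for the customer.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class KafkaOutputExample
{
    // AvroSchema is left unset here; because it is an optional named argument on the
    // attribute, this compiles without any warning about the property not being
    // initialized. It could also be supplied explicitly, e.g. AvroSchema = "<schema json>".
    [Function("ProduceMessage")]
    [KafkaOutput("BrokerList", "myTopic")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
        => "message payload";
}
```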
@@ -32,38 +32,38 @@ public KafkaOutputAttribute(string brokerList, string topic)
  /// Gets or sets the Avro schema.
  /// Should be used only if a generic record should be generated
  /// </summary>
- public string? AvroSchema { get; set; }
+ public string AvroSchema { get; set; }

  /// <summary>
  /// Gets or sets the Maximum transmit message size. Default: 1MB
nit:
- Use lowercase "maximum".
- Consider using consistent casing across the file (ex: "default" or "Default"?).
- Consider rephrasing to include the unit as well (like "in bytes").
Done.
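For reference, a wording along the lines of the nit above might look like the sketch below; the property name, default value, and the final wording merged in the PR are assumptions for illustration, not taken from this thread.

```csharp
// Hypothetical rewording of the summary (property name and default are assumed):
public sealed class WordingExample
{
    /// <summary>
    /// Gets or sets the maximum transmit message size in bytes. Default: 1048576 (1 MB).
    /// </summary>
    public int MaxMessageBytes { get; set; } = 1_048_576;
}
```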
  /// <summary>
  /// How many times to retry sending a failing Message. **Note:** default: 2
  /// </summary>
  /// <remarks>Retrying may cause reordering unless <c>EnableIdempotence</c> is set to <c>true</c>.</remarks>
- public int? MaxRetries { get; set; }
+ public int MaxRetries { get; set; } = int.MaxValue;
2 billion max retries? 😵
This is in line with the extension and the underlying Confluent Kafka SDK. Even when set to null, this is the value used by the underlying SDK. Explicitly setting it as the default will help customers.
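To make that concrete, a usage sketch (broker setting, topic, and function names are placeholders): a customer who wants a bounded retry count can still override the int.MaxValue default on the binding, and per the remarks above, pairing retries with idempotence avoids reordering where ordering matters.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class BoundedRetryExample
{
    // Overrides the int.MaxValue default discussed above with an explicit, small retry
    // count and a shorter local delivery timeout for this particular output binding.
    [Function("ProduceWithBoundedRetries")]
    [KafkaOutput("BrokerList", "myTopic", MaxRetries = 5, MessageTimeoutMs = 30_000)]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
        => "message payload";
}
```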
  /// <summary>
  /// Path to client's certificate.
  /// ssl.certificate.location in librdkafka
  /// </summary>
- public string? SslCertificateLocation { get; set; }
+ public string SslCertificateLocation { get; set; }
  /// <summary>
  /// Password for client's certificate.
  /// ssl.key.password in librdkafka
Q: Is this a valid comment to present here?
  /// <summary>
  /// Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time used to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. default: 300000
  /// </summary>
- public int? MessageTimeoutMs { get; set; }
+ public int MessageTimeoutMs { get; set; } = 300_000;
Question: Would this change be considered a breaking change? Is the code which consumes this property (maybe in the host/Kafka WebJobs extension) updated to handle the default value change? Earlier this was sent as null, and now the serialized version of this class instance will send 300000.
No, this would not be a breaking change. When set to null, the value used by the underlying librdkafka library was 300000 anyway.
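A minimal sketch of that equivalence, assuming the Confluent.Kafka client package that the Kafka extension builds on: leaving MessageTimeoutMs unset and setting it to 300000 yield the same effective message.timeout.ms in librdkafka, which is why the explicit default is not a behavior change.

```csharp
using System;
using Confluent.Kafka;

class MessageTimeoutDefaultCheck
{
    static void Main()
    {
        // Unset: the property is null and librdkafka falls back to its own
        // default of 300000 ms for message.timeout.ms.
        var implicitConfig = new ProducerConfig();

        // Explicit: the same value spelled out, matching the new attribute default.
        var explicitConfig = new ProducerConfig { MessageTimeoutMs = 300_000 };

        Console.WriteLine(implicitConfig.MessageTimeoutMs?.ToString() ?? "unset (librdkafka default: 300000 ms)");
        Console.WriteLine(explicitConfig.MessageTimeoutMs); // 300000
    }
}
```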
@raorugan @shrohilla Ping on this PR.
@shrohilla / @raorugan can you take a look at the comments above? Thanks!
This pull request has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 7 days. It will be closed if no further activity occurs within 7 days of this comment.
@kshyju The Kafka Extension pipeline seems to be waiting for some permission. Can you please help with that?
All good now. https://azfunc.visualstudio.com/Azure%20Functions/_build/results?buildId=113680&view=results
Closing in favor of #1336