Kafka Extension upgrade from 3.6.0 to 3.7.0 & Fixing the null attribute issue #1232


Closed
wants to merge 6 commits

Conversation

shrohilla
Contributor

@shrohilla shrohilla commented Dec 14, 2022

Issue describing the changes in this PR

resolves #issue_for_this_pr

Pull request checklist

  • My changes do not require documentation changes
    • Otherwise: Documentation issue linked to PR
  • My changes should not be added to the release notes for the next release
    • Otherwise: I've added my notes to release_notes.md
  • My changes do not need to be backported to a previous version
    • Otherwise: Backport tracked by issue/PR #issue_or_pr
  • I have added all required tests (Unit tests, E2E tests)

Additional information

Additional PR information

@shrohilla shrohilla marked this pull request as draft December 14, 2022 13:13
@shrohilla shrohilla marked this pull request as ready for review December 25, 2022 13:14
@shrohilla shrohilla requested a review from kshyju December 25, 2022 13:14
@raorugan

raorugan commented Jan 2, 2023

@kshyju - can you please approve this PR? It is blocking the out-of-proc worker extension NuGet package release.

Member

@kshyju kshyju left a comment


Noticed some public methods/types are missing comments/summary. Could we add that as well? ex: https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Kafka/src/KafkaTriggerAttribute.cs#L15
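A minimal sketch of the kind of XML documentation being requested for public members. The type and property names below mirror the linked KafkaTriggerAttribute source, but the summary wording itself is illustrative, not the actual extension code:

```csharp
using System;

// Illustrative only: shows the /// <summary> style requested on public
// members. The real KafkaTriggerAttribute lives in
// Worker.Extensions.Kafka; this stand-in derives directly from Attribute.
public sealed class KafkaTriggerAttribute : Attribute
{
    /// <summary>
    /// Gets or sets the broker list (comma-separated host:port pairs).
    /// </summary>
    public string BrokerList { get; set; }
}
```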

@@ -32,38 +32,38 @@ public KafkaOutputAttribute(string brokerList, string topic)
/// Gets or sets the Avro schema.
/// Should be used only if a generic record should be generated
/// </summary>
public string? AvroSchema { get; set; }
public string AvroSchema { get; set; }
Member


Why change from nullable to non-nullable? Does tooling/static analysis show a warning about this (non-nullable) prop not being initialized? I think it is best to embrace the nullable type feature the language is offering.

Will customers start seeing warnings about props not being initialized after they upgrade from 3.6.0 to 3.7.0?

Contributor

@jainharsh98 jainharsh98 Feb 6, 2023


The reason for the change is the compilation error with the use of nullable attributes, as discussed here.
Agreed that we should embrace nullable types; that is something we will definitely look into going forward. But for now, in order to maintain parity with the Kafka extension, we are keeping the attributes non-nullable.
Regarding warnings to customers: no, customers will not see any warnings about the property not being initialized, since it is an optional parameter on the attribute, which C# supports.
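A minimal sketch of the constraint discussed above, using a hypothetical SampleOutputAttribute rather than the real KafkaOutputAttribute. Declaring a nullable value-type property on an attribute class compiles, but assigning it at an attribute usage site is rejected by the compiler (error CS0655), which is why the worker attribute keeps plain value types with explicit defaults:

```csharp
using System;

[AttributeUsage(AttributeTargets.Method)]
public sealed class SampleOutputAttribute : Attribute
{
    // Non-nullable with an explicit default: optional at the usage site,
    // so callers see no "uninitialized" warning when they omit it.
    public int MaxRetries { get; set; } = int.MaxValue;

    // public int? Timeout { get; set; }
    // [SampleOutput(Timeout = 5)]  // would not compile: CS0655
}

public static class Demo
{
    [SampleOutput]                    // property omitted entirely
    public static void Produce() { }

    [SampleOutput(MaxRetries = 2)]    // or set explicitly
    public static void ProduceWithRetries() { }
}
```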

@@ -32,38 +32,38 @@ public KafkaOutputAttribute(string brokerList, string topic)
/// Gets or sets the Avro schema.
/// Should be used only if a generic record should be generated
/// </summary>
public string? AvroSchema { get; set; }
public string AvroSchema { get; set; }

/// <summary>
/// Gets or sets the Maximum transmit message size. Default: 1MB
Member


nit:
  • Use lowercase "maximum".
  • Consider using consistent casing across the file (ex: "default" or "Default"?).
  • Consider rephrasing to include the unit as well (like "in bytes").

Contributor


Done.


/// <summary>
/// How many times to retry sending a failing Message. **Note:** default: 2
/// </summary>
/// <remarks>Retrying may cause reordering unless <c>EnableIdempotence</c> is set to <c>true</c>.</remarks>
public int? MaxRetries { get; set; }
public int MaxRetries { get; set; } = int.MaxValue;
Member


2 billion max retries? 😵

Contributor

@jainharsh98 jainharsh98 Feb 6, 2023


This is in line with the extension and the underlying Confluent Kafka SDK. Even when the property was set to null, this is the value the underlying SDK used. Explicitly setting it as the default makes the behavior visible to customers.


/// <summary>
/// Path to client's certificate.
/// ssl.certificate.location in librdkafka
/// </summary>
public string? SslCertificateLocation { get; set; }
public string SslCertificateLocation { get; set; }

/// <summary>
/// Password for client's certificate.
/// ssl.key.password in librdkafka
Member


Q: Is this a valid comment to present here?



/// <summary>
/// Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time used to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded. default: 300000
/// </summary>
public int? MessageTimeoutMs { get; set; }
public int MessageTimeoutMs { get; set; } = 300_000;
Member


Question: Would this change be considered a breaking change? Is the code which consumes this property (maybe in the host/Kafka WebJobs extension) updated to handle the default value change? Earlier this was sent as null, and now it will send 300000 in the serialized version of this class instance.

Contributor


No, this would not be a breaking change. When the property was set to null, the value used by the underlying librdkafka library was 300000 anyway.
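A small sketch of why the change is observable only in the serialized payload, not in effective behavior. The two settings classes below are hypothetical stand-ins for the attribute before and after this PR (the real type is KafkaOutputAttribute); previously the nullable property serialized as null and librdkafka fell back to its own message.timeout.ms default of 300000, while now the same value is emitted explicitly:

```csharp
using System;
using System.Text.Json;

// Hypothetical stand-ins, not the real extension types.
public class OldSettings { public int? MessageTimeoutMs { get; set; } }
public class NewSettings { public int MessageTimeoutMs { get; set; } = 300_000; }

public static class Program
{
    public static void Main()
    {
        // Before: the host received null and librdkafka applied 300000 itself.
        Console.WriteLine(JsonSerializer.Serialize(new OldSettings()));
        // {"MessageTimeoutMs":null}

        // After: the default is sent explicitly; the effective timeout is unchanged.
        Console.WriteLine(JsonSerializer.Serialize(new NewSettings()));
        // {"MessageTimeoutMs":300000}
    }
}
```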

@kshyju
Member

kshyju commented Jan 18, 2023

@raorugan @shrohilla Ping on this PR.

@fabiocav
Member

@shrohilla / @raorugan can you take a look at the comments above? Thanks!

@ghost ghost added the no-recent-activity label Jan 26, 2023
@ghost

ghost commented Jan 26, 2023

This pull request has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 7 days. It will be closed if no further activity occurs within 7 days of this comment.

@ghost ghost closed this Feb 2, 2023
@jainharsh98 jainharsh98 reopened this Feb 6, 2023
@ghost ghost removed the no-recent-activity label Feb 6, 2023
@jainharsh98
Contributor

@kshyju The Kafka Extension pipeline seems to be waiting for some permission. Can you please help with that?

@kshyju kshyju requested a review from satvu February 6, 2023 17:07
@kshyju
Member

kshyju commented Feb 6, 2023

> @kshyju The Kafka Extension pipeline seems to be waiting for some permission. Can you please help with that?

All good now. https://azfunc.visualstudio.com/Azure%20Functions/_build/results?buildId=113680&view=results

@jainharsh98
Contributor

Closing in favor of #1336
