[ServiceBus] revert link credit calc to cumulative #40300
Conversation
/azp run python - servicebus - tests

Azure Pipelines successfully started running 1 pipeline(s).

API change check: API changes are not detected in this pull request.

/azp run python - eventhub - tests

Azure Pipelines successfully started running 1 pipeline(s).
Pull Request Overview
This PR reverts the previous changes to link credit handling in both the ServiceBus and EventHub packages to address expired message issues (#40156). The key changes include reverting the decrements and recalculations of total link credit and updating associated tests and client logic to use only current link credit.
- Revert removal of cumulative link credit updates in _incoming_transfer methods.
- Update flow methods in link implementations to reset the current link credit.
- Adjust tests and client calls to align with the reverted behavior.
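To make the reverted behavior concrete, here is a hypothetical, simplified model of the link-credit handling this PR restores. The attribute names (`link_credit`, `current_link_credit`) mirror the pyamqp link attributes, but the class itself is illustrative only, not the actual SDK code:

```python
# Simplified sketch (not the real pyamqp link class) of the reverted
# credit handling: each flow frame RESETS the outstanding credit rather
# than cumulatively adding to a separate total_link_credit counter.

class LinkCreditModel:
    def __init__(self, link_credit: int):
        self.link_credit = link_credit      # configured prefetch count
        self.current_link_credit = 0        # credit outstanding on the wire

    def flow(self, link_credit=None):
        # Reverted behavior: reset current_link_credit on every flow frame.
        self.current_link_credit = (
            link_credit if link_credit is not None else self.link_credit
        )

    def incoming_transfer(self):
        # Each delivered message consumes one unit of link credit.
        self.current_link_credit -= 1


link = LinkCreditModel(link_credit=100)
link.flow()
link.incoming_transfer()
print(link.current_link_credit)  # 99: one message received against 100 credits
```

This matches the updated test expectation (`link.current_link_credit == 99` after one transfer) noted in the review comments below.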
Reviewed Changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/receiver.py | Removed decrement of `total_link_credit` during message transfer handling. |
| sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/link.py | Reverted flow logic to reset `current_link_credit` instead of cumulatively adding to `total_link_credit`. |
| sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/client.py | Updated keep-alive and client run methods to check `current_link_credit`. |
| sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/aio/* | Similar changes for the async implementations in the receiver, link, and client modules. |
| sdk/eventhub/azure-eventhub/... | Equivalent reversion of link credit handling in the receiver, link, client, and async modules. |
| sdk/eventhub/azure-eventhub/tests/pyamqp_tests/unittest/test_link.py | Adjusted test expectations to match the updated link credit handling. |
Comments suppressed due to low confidence (3)

**sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/link.py:94** — `self.total_link_credit = self.link_credit`
- Since `total_link_credit` is no longer used in the current flow calculations, consider removing the attribute entirely from the class to simplify the code and avoid future confusion.

**sdk/eventhub/azure-eventhub/tests/pyamqp_tests/unittest/test_link.py:149** — `assert link.current_link_credit == 99`
- Review the test assertions to ensure that the new expectations for `current_link_credit` are fully aligned with the intended production behavior after reverting to cumulative credit calculations.

**sdk/servicebus/azure-servicebus/azure/servicebus/_pyamqp/link.py:275** — `self.current_link_credit = link_credit if link_credit is not None else self.link_credit`
- [nitpick] Confirm that resetting `current_link_credit` directly in the flow method accurately captures the desired cumulative behavior without unintended side effects in scenarios with intermittent message transfers.
addresses #40156
Reverting #37427, as it is causing the issue in #40156. After comparing against .NET using an AMQP proxy to inspect traffic, we found that .NET exhibits the same decreasing lock time on large messages: it likewise requests credit without releasing extra messages in the background or draining extra link credit. The client is receiving expired messages; whether this is due to service behavior or network latency is yet to be determined.
As a workaround for the decreasing message `locked_until`, users should increase the lock duration on the queue and set `max_message_count` to a lower value when receiving large messages.
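As a rough illustration of why lowering `max_message_count` helps (all numbers below are hypothetical, not taken from the SDK or the service): every message in a batch is locked when the batch is handed to the client, so the last message in the batch must still be within its lock window after all earlier messages have been processed.

```python
# Back-of-the-envelope check (hypothetical numbers) for the workaround:
# smaller batches keep the last message's lock from expiring before it is
# processed.

LOCK_DURATION_S = 60          # queue-level lock duration (assumed)
PER_MESSAGE_PROCESS_S = 8     # time to handle one large message (assumed)

def batch_fits_in_lock(max_message_count: int) -> bool:
    # Worst case: the last message in the batch waits for every earlier
    # message to finish before it is completed.
    return max_message_count * PER_MESSAGE_PROCESS_S <= LOCK_DURATION_S

print(batch_fits_in_lock(10))  # False: 80s of processing exceeds the 60s lock
print(batch_fits_in_lock(5))   # True: 40s of processing fits within the lock
```

Under these assumed numbers, a batch of 10 large messages would see its tail messages expire, while a batch of 5 completes within the lock window; increasing the queue's lock duration shifts the threshold the other way.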