Fix scalers leaking #684
Conversation
Force-pushed from 9dcc955 to 08d4757
Good catch. Thanks @holyketzer for the fix! There is one minor comment about also fixing the existing error message, which would be good to address in this change.
Other than that, the change LGTM.
Signed-off-by: Alex Emelyanov <holyketzer@gmail.com>
Force-pushed from d44f0bf to bf9c408
I renamed
Good catch, thank you!
We are using Keda with RabbitMQ. After the keda-operator pod had been running for 30 days, we had more than 1000 open connections in the RabbitMQ console. It wasn't our app or anything else; we checked by restarting components one by one, and the connections were dropped only after a keda-operator restart.

Fixes: in case of error, GetDeploymentScalers and getJobScalers return the scalers acquired so far, but the callers of these methods neither use nor close them. So it doesn't make sense to return scalers at all on error; instead, these methods should close all already-opened scalers to avoid leaking.
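For context, here is a minimal Go sketch of the fix pattern described above. The types and names (Scaler, buildScaler, getScalers) are hypothetical stand-ins, not KEDA's actual signatures; it only illustrates closing every already-opened scaler before returning an error, so callers can never leak connections they were never handed.

```go
package example

import "fmt"

// Scaler is a stand-in for KEDA's scaler interface; Close releases any
// underlying connection (e.g. to RabbitMQ).
type Scaler interface {
	Close() error
}

// buildScaler is a hypothetical per-trigger constructor.
func buildScaler(trigger string) (Scaler, error) {
	// ... construct the concrete scaler for this trigger ...
	return nil, fmt.Errorf("not implemented")
}

// getScalers mirrors the corrected behaviour: on error it closes every
// scaler opened so far and returns nil instead of a half-built slice.
func getScalers(triggers []string) ([]Scaler, error) {
	scalersList := []Scaler{}
	for _, trigger := range triggers {
		scaler, err := buildScaler(trigger)
		if err != nil {
			// Close everything acquired before the failure to avoid leaking.
			for _, s := range scalersList {
				s.Close()
			}
			return nil, fmt.Errorf("error getting scaler for trigger %q: %w", trigger, err)
		}
		scalersList = append(scalersList, scaler)
	}
	return scalersList, nil
}
```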