RabbitMQ connections are not closed when a cron trigger is active in the same ScaledObject #1413
I'm not in any way a RabbitMQ expert, but it seems like we are closing only the connection and never the channel. See keda/pkg/scalers/rabbitmq_scaler.go, lines 142 to 151 in 76fb858.
If you get the chance to try this out by doing a
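For illustration only, here is a minimal sketch (not the actual KEDA code; the names are invented) of what explicitly closing the channel in addition to the connection could look like with streadway/amqp:

```go
package main

import (
	"log"

	"github.com/streadway/amqp"
)

// closeAll closes the channel first, then the connection.
// Per the streadway/amqp docs, closing the connection alone also
// closes all of its channels, so the explicit channel close is
// defensive rather than strictly required.
func closeAll(conn *amqp.Connection, ch *amqp.Channel) error {
	if ch != nil {
		if err := ch.Close(); err != nil {
			return err
		}
	}
	if conn != nil {
		return conn.Close()
	}
	return nil
}

func main() {
	// Hypothetical local broker URI, for demonstration only.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	if err := closeAll(conn, ch); err != nil {
		log.Fatal(err)
	}
}
```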
According to https://pkg.go.dev/github.com/streadway/amqp#Connection.Close, one does not need to manually close the channels: closing the connection also closes all of the underlying channels. That matches my experience working with this library. There may instead be one or more code paths where the connection itself is never closed. Two places where I could find that the scalers are created and perhaps not closed are:

keda/pkg/scaling/scale_handler.go, line 145 in 76fb858
keda/pkg/scaling/scale_handler.go, line 181 in 76fb858

The second one seems more suspect to me because it is called multiple times within the scale loop, whereas the first is only called once at startup (if I understood correctly).

EDIT: Never mind, the scalers are closed in:

keda/pkg/scaling/scale_handler.go, line 200 in 76fb858
keda/pkg/scaling/scale_handler.go, line 223 in 76fb858
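To make the lifecycle under discussion concrete, here is a hypothetical, simplified sketch of the pattern; the Scaler interface and checkScalers function are illustrative stand-ins, not KEDA's actual code. The point is that every code path out of a loop iteration must close the scalers, or each pass leaks a connection:

```go
package scaling

import "log"

// Scaler is a simplified stand-in for KEDA's scaler interface;
// only the parts relevant to the leak are shown.
type Scaler interface {
	IsActive() (bool, error)
	Close() error
}

// checkScalers runs once per scale-loop iteration. The deferred
// Close calls are the crucial part: if any code path returns
// without closing the scalers, every iteration leaves its
// AMQP connection open.
func checkScalers(scalers []Scaler) bool {
	defer func() {
		for _, s := range scalers {
			if err := s.Close(); err != nil {
				log.Printf("error closing scaler: %v", err)
			}
		}
	}()

	for _, s := range scalers {
		active, err := s.IsActive()
		if err != nil {
			continue
		}
		if active {
			return true
		}
	}
	return false
}
```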
@mboutet thanks for the report and the investigation!
The RabbitMQ scaler does not seem to properly close connections, leading to KEDA being OOM-killed and to unnecessary memory/CPU pressure on RabbitMQ. This happens when a ScaledObject has both an active cron trigger and a RabbitMQ trigger (active or not).
Expected Behavior
The connection should be closed after use. In an ideal scenario, connections should be reused, as suggested in #1121. Initiating an AMQP connection is really expensive because the client and the server have to exchange 7 TCP packets.
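As a rough illustration of the reuse idea from #1121 (a hypothetical sketch, not KEDA's implementation; getConnection and the cache are invented names), a per-host connection cache could look like this:

```go
package scalers

import (
	"sync"

	"github.com/streadway/amqp"
)

// A hypothetical per-host connection cache: dial once, then hand the
// same connection back on subsequent checks instead of paying the
// full TCP + AMQP handshake on every polling interval.
var (
	mu    sync.Mutex
	conns = map[string]*amqp.Connection{}
)

func getConnection(hostURI string) (*amqp.Connection, error) {
	mu.Lock()
	defer mu.Unlock()
	if c, ok := conns[hostURI]; ok && !c.IsClosed() {
		return c, nil
	}
	c, err := amqp.Dial(hostURI)
	if err != nil {
		return nil, err
	}
	conns[hostURI] = c
	return c, nil
}
```

A real implementation would also need to handle broken connections and concurrent channel use, but even this much avoids redialing on every check.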
Actual Behavior
More connections are being opened than closed.
Here's an excerpt of a tcpdump. As you can see, many connections are opened but rarely closed. In fact, I think the connections are being closed by RabbitMQ rather than by KEDA.
Memory usage of KEDA:
Connection usage of RabbitMQ:
Steps to Reproduce the Problem
Deploy a ScaledObject using the RabbitMQ trigger and a cron trigger, such as the example sketched below.
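The original example manifest was not captured above; the following is an illustrative reconstruction (the name, target, queue, host URI, and schedule are made up) of a ScaledObject combining both trigger types:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject        # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment        # hypothetical target
  triggers:
    - type: rabbitmq
      metadata:
        host: amqp://guest:guest@rabbitmq.default.svc:5672/  # hypothetical broker URI
        queueName: my-queue                                  # hypothetical queue
        queueLength: "10"
    - type: cron
      metadata:
        timezone: America/Montreal   # hypothetical schedule values
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "1"
```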
Additional Info
There have been some related issues in the past. In particular, #318 seemed to have been addressed by #361 using a defer scaler.Close() in the scale loop. However, I cannot find this deferred call in the v2 codebase.

I initially introduced a connection pooler to sit between RabbitMQ and the clients. However, this did not help, as the connection pooler was then also being OOM-killed and was not reusing connections. I eventually identified KEDA as the client opening so many connections. You can read the discussion in the issue I opened against the connection pooler: cloudamqp/amqproxy#43 (comment).
Specifications
ScaledObject and 20 ScaledJob. Each ScaledObject has 3 RabbitMQ triggers on average (i.e. scaled based on 3 different queues), whereas each ScaledJob has only 1 RabbitMQ trigger.