Description
Version: 4.6.0
Platform: Python 3.10 / Docker or Linux
Server: Redis 7.0.11
We have been running an application that uses redis-py in an async context for months. After upgrading redis-py from 4.5.x to 4.6.0, our app on the staging environment started failing with `ConnectionError: max number of clients reached`.
I first reviewed the code to add the `await redis_conn.close()` statements that were missing, but the issue still occurs.
It seems we reach the 10,000 `maxclients` limit in roughly one hour, which never used to happen. Unfortunately, I don't track this metric on this environment.
On staging last week, I worked around the issue by setting a client timeout (`CONFIG SET timeout 60`) to get past the one-hour failure. Doing it again a few minutes ago, the connection count dropped from about 8,000 to 160, and it then stays stable around 160.
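For anyone wanting to reproduce the workaround, it can be applied and monitored from redis-cli; the commands below assume a local Redis instance with default connection settings:

```shell
# Tell Redis to drop client connections idle for more than 60 seconds
redis-cli CONFIG SET timeout 60

# Watch the connected client count recover
redis-cli INFO clients | grep connected_clients

# Revert to the default (idle clients are never dropped)
redis-cli CONFIG SET timeout 0
```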
On the production environment (Python 3.9, redis-py 4.5.5), we have had 25 connected clients on average over the last 6 months, with activity similar to our staging environment. No timeout is configured there, so it's disabled (the default configuration).
On staging, after reverting to redis-py 4.5.5 with the timeout disabled (i.e. set to 0), the connection count stabilizes around 20 after a few minutes, whereas it would climb to 1000+ in the same amount of time with 4.6.0.
My code used to be basically:

    from redis.asyncio import Redis
    from app import settings

    async def get_redis_connection():
        redis = Redis.from_url(settings.REDIS_URL)
        return redis

    async def list_queued_job_ids():
        redis_conn = await get_redis_connection()
        # ... do something with redis data and related code ...
        await redis_conn.close()
        return something
So I don't know if this is a regression in 4.6.0 (related to the abstract class refactoring, maybe?) or something to add/change in my code, but I didn't find any documentation describing changes that would need to be made.
Let me know if you need further details.