Describe the bug
We are facing this issue intermittently: Celery raises the following error on a scheduled run. I believe it is caused by a race condition related to asyncio. We run a single-pod setup only, and even with that configuration the issue pops up at random.
{"stackTrace": "Traceback (most recent call last):\n File \"/code/ops/tasks/anomalyDetectionTasks.py\", line 85, in
anomalyDetectionJob\n result = _detectionJobs.get()\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 680, in get\n on_interval=on_interval,\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 799, in
join_native\n on_message, on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\",
line 150, in iter_native\n for _ in self._wait_for_pending(result, no_ack=no_ack, **kwargs):\n File
\"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 267, in _wait_for_pending\n
on_interval=on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 54, in
drain_events_until\n yield self.wait_for(p, wait, timeout=interval)\n File \"/opt/venv/lib/python3.7/site-
packages/celery/backends/asynchronous.py\", line 63, in wait_for\n wait(timeout=timeout)\n File
\"/opt/venv/lib/python3.7/site-packages/celery/backends/redis.py\", line 152, in drain_events\n message =
self._pubsub.get_message(timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3617, in
get_message\n response = self.parse_response(block=False, timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-
packages/redis/client.py\", line 3505, in parse_response\n response = self._execute(conn, conn.read_response)\n File
\"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3479, in _execute\n return command(*args, **kwargs)\n File
\"/opt/venv/lib/python3.7/site-packages/redis/connection.py\", line 756, in read_response\n raise
response\nredis.exceptions.ResponseError: wrong number of arguments for 'subscribe' command\n", "message": "wrong
number of arguments for 'subscribe' command"}
To Reproduce
Steps to reproduce the behavior:
Create an anomaly definition
Schedule it to run at a specific time
Some runs succeed, while others fail with the error above (a rough sketch of the failing call pattern follows this list)
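From the traceback, the failure happens on a blocking `.get()` over the sub-job results inside the scheduled task. A rough reconstruction of that pattern is below; only `anomalyDetectionJob` and `_detectionJobs` appear in the trace, everything else is an assumption for illustration:

```python
# Rough reconstruction of the failing call pattern from the traceback.
# Only anomalyDetectionJob and _detectionJobs come from the trace; the app,
# subtask, and arguments are placeholders.
from celery import group
from celery.result import allow_join_result

from app import celeryApp                # hypothetical Celery app
from ops.tasks import detectionSubTask   # hypothetical per-dimension subtask

@celeryApp.task
def anomalyDetectionJob(anomalyDefId):
    # Fan out one subtask per dimension and block on the combined result
    # from inside the task itself.
    _detectionJobs = group(
        detectionSubTask.s(anomalyDefId, dim) for dim in range(10)
    )()
    with allow_join_result():            # needed to call .get() inside a task
        result = _detectionJobs.get()    # <- intermittently raises
                                         #    redis.exceptions.ResponseError here
    return result
```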
Expected behavior
Is there any workaround we can use to avoid this issue? Please help.
@SHARANTANGEDA
I believe this is an internal issue between Celery and redis-py at these versions, and it may or may not be related to Azure Cache.
I found a recent discussion of the same issue: it looks like a parameter mismatch when Celery tries to subscribe to the Redis server via redis-py, and that report shows the same error. I will try to find out more; I've attached the link below.
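For reference, the Redis server returns exactly this error when a SUBSCRIBE command arrives with no channel arguments, so one plausible reading is that the race leaves the backend re-subscribing with an empty channel set. A standalone sketch of that server-side behaviour (assumes redis-py 3.x and a local Redis purely for illustration; this is not the project's code):

```python
# Illustration only: a bare SUBSCRIBE (no channels) is rejected by the Redis
# server with the same error seen in the traceback. Assumes redis-py 3.x and
# a reachable Redis at localhost:6379; neither is taken from the issue.
import redis

client = redis.Redis(host="localhost", port=6379)
pubsub = client.pubsub()

try:
    pubsub.subscribe()              # empty channel set -> bare SUBSCRIBE on the wire
    pubsub.get_message(timeout=1)   # the server's error reply is read here
except redis.exceptions.ResponseError as exc:
    print(exc)  # wrong number of arguments for 'subscribe' command
```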
@ankitkpandey Thanks for sharing this. So I assume this is happening because Celery doesn't support asyncio yet and the call is being made in an async fashion.
On a separate note, is there any workaround you can suggest for this in the meantime?
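Would letting Celery auto-retry the job when this error surfaces be a reasonable stop-gap, assuming the sub-jobs are idempotent? Rough sketch of what I mean, with placeholder names:

```python
# Possible stop-gap, not a fix: transparently retry the scheduled job when the
# transient Redis pubsub error appears. App/task names are placeholders.
from redis.exceptions import ResponseError
from app import celeryApp   # hypothetical Celery app

@celeryApp.task(
    autoretry_for=(ResponseError,),  # the error from the traceback
    retry_backoff=True,              # exponential backoff between attempts
    max_retries=3,
)
def anomalyDetectionJob(anomalyDefId):
    ...  # existing fan-out + blocking .get() body
```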