Search before asking
Description
Context: https://ray-distributed.slack.com/archives/C02GFQ82JPM/p1678805965595599?thread_ts=1677186556.706449&cid=C02GFQ82JPM
If someone is using Redis for GCS fault tolerance (FT), the keys written by their Ray clusters currently pile up forever. The user is responsible for removing them, either by knowing their names and deleting them or setting expirations themselves, or by configuring the Redis cluster with a key eviction policy, and neither option may be desirable depending on the user's particular needs.
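For illustration only, assuming your cluster's GCS FT keys share a known prefix (for example, the external storage namespace you configured), the manual workaround with redis-py might look roughly like the sketch below; the prefix and connection details are placeholders, not anything Ray defines:

```python
# Rough sketch of the manual workaround described above (not part of Ray).
# Assumes redis-py is installed and that the cluster's GCS FT keys share a
# known prefix -- here a hypothetical storage-namespace value.
import redis

r = redis.Redis(host="redis.example.com", port=6379)
PREFIX = "my-ray-cluster"  # placeholder; substitute your own storage namespace

for key in r.scan_iter(match=f"{PREFIX}*"):
    r.delete(key)  # or r.expire(key, 86400) to let Redis reclaim it later
```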
The suggested feature is that Ray/KubeRay should manage the expiration/cleanup of its own GCS FT keys, possibly by setting (and refreshing as needed) expiration times on them, or by whatever mechanism is appropriate.
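As a sketch of what that could look like (not Ray's or KubeRay's actual implementation): while the cluster is alive, some component periodically re-applies a TTL to its keys, so they only expire once the cluster stops refreshing them. The prefix, TTL, and interval below are made-up values:

```python
# Sketch of the proposed TTL set-and-refresh mechanism, assuming redis-py.
# While this loop runs, the keys never expire; once it stops (cluster gone),
# Redis evicts them after TTL_SECONDS.
import time
import redis

r = redis.Redis(host="redis.example.com", port=6379)
PREFIX = "my-ray-cluster"   # hypothetical storage-namespace prefix
TTL_SECONDS = 600           # how long keys outlive the last refresh
REFRESH_INTERVAL = 60

while True:
    for key in r.scan_iter(match=f"{PREFIX}*"):
        r.expire(key, TTL_SECONDS)  # reset the countdown on every pass
    time.sleep(REFRESH_INTERVAL)
```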
Use case
I work with a Redis cluster for FT that we can't configure to evict keys, and I want to make sure I don't fill it up with stale Ray GCS data.
Related issues
No response
Are you willing to submit a PR?
Yes I am willing to submit a PR!