[WIP] Implementing GCS Sharding #2281
Conversation
Test PASSed.
src/ray/gcs/client.cc
We are using /* */ in the legacy Ray codebase, but this should be converted to //.
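For illustration only (a hypothetical comment, not a line from this diff), the requested change is purely a matter of comment style:

```cpp
/* Connect to the primary Redis shard. */  // legacy style used in the old codebase
// Connect to the primary Redis shard.     // style requested for this file
```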
src/ray/gcs/tables.cc
I am removing the std::move. Still, this part looks questionable.
src/ray/gcs/client.cc
I more or less left out the "use_task_shards" part of @stephanie-wang's original commit. What does that part do?
Test FAILed.
Basically a re-implementation of #2281, with the modifications of #2298 (a fix of #2334, for rebasing issues).
[+] Implement sharding for the GCS tables.
[+] Keep ClientTable and ErrorTable managed by the primary_shard. TaskTable is also managed by the primary_shard for now, until a good hashing scheme for tasks is implemented.
[+] Move AsyncGcsClient's initialization into its Connect function.
[-] Move GetRedisShard and the bool sharding flag from RedisContext's Connect into AsyncGcsClient. This may make the interface cleaner.
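As a rough illustration of the routing described above, here is a minimal sketch (assumed class and type names, not the actual Ray code) of keeping some tables on the primary shard while hashing other table keys across the shard contexts:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Placeholder for the real RedisContext; only here to keep the sketch self-contained.
struct RedisContext {
  std::string address;
  int port = 0;
};

class ShardRouter {
 public:
  ShardRouter(std::shared_ptr<RedisContext> primary,
              std::vector<std::shared_ptr<RedisContext>> shards)
      : primary_(std::move(primary)), shards_(std::move(shards)) {}

  // ClientTable, ErrorTable (and, for now, TaskTable) stay on the primary shard.
  std::shared_ptr<RedisContext> Primary() const { return primary_; }

  // Other tables pick a shard by hashing the entry's ID.
  std::shared_ptr<RedisContext> ForKey(const std::string &id) const {
    if (shards_.empty()) {
      return primary_;  // Sharding disabled: everything goes to the primary.
    }
    const std::size_t index = std::hash<std::string>{}(id) % shards_.size();
    return shards_[index];
  }

 private:
  std::shared_ptr<RedisContext> primary_;
  std::vector<std::shared_ptr<RedisContext>> shards_;
};
```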
Implementing GCS sharding initial commit.
What do these changes do?
First commit: transferring the getRedisShards function, which shall be called on the primary shard. The idea is to call this function on the primary shard after connecting, push all of the shards onto the client's vector, and then do the sharding in the tables' functions.
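A minimal sketch of that flow, using hypothetical stand-ins (RedisContext, GetRedisShards, and AsyncGcsClient here are placeholders, not the real Ray interfaces):

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-in for the real RedisContext; Connect would open an actual Redis connection.
struct RedisContext {
  std::string address;
  int port = 0;
  bool Connect(const std::string &addr, int p) {
    address = addr;
    port = p;
    return true;
  }
};

// Stand-in for getRedisShards: asks the primary shard for the other shard
// addresses, returned as (address, port) pairs. Stubbed out here.
std::vector<std::pair<std::string, int>> GetRedisShards(RedisContext &primary) {
  (void)primary;
  return {};
}

class AsyncGcsClient {
 public:
  bool Connect(const std::string &primary_address, int primary_port) {
    primary_context_ = std::make_shared<RedisContext>();
    if (!primary_context_->Connect(primary_address, primary_port)) {
      return false;
    }
    // Push one context per shard onto the client's vector; table functions
    // can later route their requests across shard_contexts_.
    for (const auto &shard : GetRedisShards(*primary_context_)) {
      auto context = std::make_shared<RedisContext>();
      if (context->Connect(shard.first, shard.second)) {
        shard_contexts_.push_back(std::move(context));
      }
    }
    return true;
  }

 private:
  std::shared_ptr<RedisContext> primary_context_;
  std::vector<std::shared_ptr<RedisContext>> shard_contexts_;
};
```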
PS: This PR currently serves as an "ongoing" branch. It may break the build or do something weird.
Related issue number
GCS sharding implementation.