grpcproxy: efficiently handle millions of unique watches #7624
Comments
Does the current stream coalescing model work like the following?
Would what you're proposing work like this, assuming the clients want to watch millions of keys that happen to be mostly unique? If so, are we essentially removing load from the etcd server by transmitting all events to the gRPC proxy, without discriminating by whether any client actually wants to receive updates on a specific key, and then letting the gRPC proxy handle that discrimination?
Yes. The proxy would stream events from the server into its own mvcc backend to service watches. The diagram should have an mvcc hanging off the gRPC proxy, though.
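A minimal sketch of that fan-out idea, assuming the clientv3 package from the etcd client library; a plain subscriber map stands in for the proxy's own mvcc backend, and names like fanoutWatcher are hypothetical:

```go
// Sketch: one upstream watch over the whole keyspace, fanned out to local
// per-key subscribers, instead of one upstream substream per client watch.
package proxy

import (
	"context"
	"sync"

	"github.com/coreos/etcd/clientv3"
)

type fanoutWatcher struct {
	mu   sync.RWMutex
	subs map[string][]chan *clientv3.Event // watched key -> subscriber channels
}

// newFanoutWatcher opens a single upstream watch over the entire keyspace
// ("\x00" with WithFromKey matches every key) and dispatches events locally,
// so a reconnect reregisters one watch instead of millions.
func newFanoutWatcher(ctx context.Context, c *clientv3.Client) *fanoutWatcher {
	fw := &fanoutWatcher{subs: make(map[string][]chan *clientv3.Event)}
	wch := c.Watch(ctx, "\x00", clientv3.WithFromKey())
	go func() {
		for wresp := range wch {
			fw.mu.RLock()
			for _, ev := range wresp.Events {
				for _, ch := range fw.subs[string(ev.Kv.Key)] {
					select {
					case ch <- ev:
					default: // drop on a slow subscriber; a real proxy would buffer or resync
					}
				}
			}
			fw.mu.RUnlock()
		}
	}()
	return fw
}

// subscribe registers interest in a single key; adding a subscriber costs the
// etcd server nothing beyond the one keyspace-wide watch already open.
func (fw *fanoutWatcher) subscribe(key string) <-chan *clientv3.Event {
	ch := make(chan *clientv3.Event, 16)
	fw.mu.Lock()
	fw.subs[key] = append(fw.subs[key], ch)
	fw.mu.Unlock()
	return ch
}
```

Whatever backs the dispatch (this map, or a real mvcc store as described above), the server sees exactly one watch no matter how many local subscribers exist.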
moving to 3.4
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
The current watch proxy scales for many clients watching one key or one key range by coalescing similar watch streams. However, there are cases with a single client per key and millions of clients. The current grpcproxy would help a little by multiplexing everything over a single gRPC watch stream, but it would still open many substreams on the etcd server. This wastes server resources and makes disconnects expensive, since the proxy must (serially) reregister all the watches, as the sketch below illustrates.
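To make the substream cost concrete, here is a rough, intentionally naive illustration (the endpoint and key names are hypothetical): every unique-key watch multiplexes over the client's single gRPC stream, yet each one still registers its own substream on the server.

```go
// Illustration of the cost: one client, one gRPC watch stream, but a million
// unique keys still mean a million server-side watcher substreams.
package main

import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	for i := 0; i < 1000000; i++ {
		// Each Watch call shares the client's gRPC stream, but the server
		// must track one watcher apiece; on disconnect, a proxy in this
		// position has to reregister every one of these serially.
		_ = cli.Watch(context.Background(), fmt.Sprintf("key-%d", i))
	}
}
```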
Two approaches to scaling:
The backend watcher proxy won't replace the current stream coalescing proxy; the two should be able to work together or independently.