General: use client-side round-robin load balancing for grpc #5225
Conversation
Thank you for your contribution! 🙏 We will review your PR as soon as possible.
/run-e2e
This LGTM, but I have really poor knowledge of gRPC.
WDYT @zroubalik?
/run-e2e
The E2E test failure seems unrelated? It looks like it failed on the cleanup steps.
The e2e tests are unstable right now; I'm working on a PR to fix them. Once that fix is merged, I'll trigger the e2e tests again here.
LGTM
@BojanZelic please rebase and resolve conflicts; let's merge this if e2e passes. Thanks!
Force-pushed from 5dd293a to c4387da
/run-e2e
…e#5225)
Signed-off-by: Bojan Zelic <bnzelic@gmail.com>
Signed-off-by: anton.lysina <alysina@gmail.com>
Use client-side gRPC round-robin load balancing to better spread the load when using the external scaler; a minimal configuration sketch follows the scenarios below.
Scenario A:
The KEDA operator makes a gRPC call to a Service with a ClusterIP. There is only one ClusterIP, so this PR does not affect this setup: load balancing (within Kubernetes/iptables) only happens when a new connection is opened or an existing connection fails, so requests keep hitting the same underlying pod. Users should migrate to a headless Service.
Scenario B:
The KEDA operator makes a gRPC call to a headless Service and DNS randomizes the pod IP order. Load balancing happens only when there is a connection failure or when the server sets a MaxConnectionAge; otherwise one pod receives all of the load. This PR fixes this.
Scenario C:
The KEDA operator makes a gRPC call to a headless Service and DNS does not randomize the pod IP order. Load balancing currently never happens, so one pod receives all of the load. This PR fixes this.
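For reference, here is a minimal sketch of the kind of client-side round-robin configuration this PR describes, using grpc-go. The Service name, namespace, and port are placeholders, not values taken from this PR's diff; the key pieces are the `dns:///` target and the `round_robin` load-balancing policy in the default service config.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// "dns:///" forces gRPC's DNS resolver. Pointed at a headless Service,
	// it returns every backing pod IP instead of a single ClusterIP.
	// The Service name, namespace, and port below are placeholders.
	target := "dns:///external-scaler.keda.svc.cluster.local:9090"

	conn, err := grpc.Dial(target,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Switch the client from the default pick_first policy to
		// round_robin so RPCs are spread across all resolved addresses.
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("failed to dial %s: %v", target, err)
	}
	defer conn.Close()

	// conn can now back an external-scaler gRPC client; calls are balanced
	// round-robin across the pods behind the headless Service.
}
```

Note that with a plain ClusterIP Service the resolver only ever sees one address, so the round_robin policy has nothing to balance across (Scenario A); the policy only helps when the target resolves to all pod IPs, which is why a headless Service is recommended.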
Checklist
Fixes #5224