[Client] LB across multiple endpoints? #3
I have been thinking about this for some time now. While I liked the idea of sending data to all proxies configured / assigned up front, I think this could become a problem for players who do not have a good enough internet connection. Many of us have the luxury of connections around 50 Mbit/s or better, but this is not the case in many places around the world, where ADSL is still the most common type of connection. I have not finished all my testing yet, but I will do some testing on bandwidth usage for different FPS games to build a better understanding of how many Mbit/s a typical FPS game uses. A suggested solution for this would be that the client only sends data to the proxy closest to it, but keeps two (or more) sessions open to other proxies, which then act as fallbacks if the connection to the primary proxy dies. This would keep us from sending 2-3 times the data and risking oversaturating the player's internet connection. Thoughts on this?
Updated the ticket with the new client/server nomenclature. Thanks for your thoughts and research, @suom1!
Is the assumption here that the client proxy would send to every endpoint? My assumption was that it would be something like a round robin, so that a single packet would not get repeated, just sent to a different endpoint each time. If I'm following what you said above, the concern is the extra bandwidth of duplicating packets? Is that correct? In that case, I'm not advocating for duplicate packets (unless that was something a user specifically configured/requested for their specific use case)
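The round-robin behaviour described here, where each packet goes to the next endpoint in turn rather than being duplicated, can be sketched roughly as follows (a minimal illustration only; the struct and method names are invented, not the project's actual API):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical sketch of round-robin endpoint selection. An atomic counter
// lets concurrent sessions share one selector without locking.
struct RoundRobin {
    counter: AtomicUsize,
}

impl RoundRobin {
    fn new() -> Self {
        RoundRobin { counter: AtomicUsize::new(0) }
    }

    // Return the index of the next endpoint. Each call advances by one, so a
    // given packet is sent to exactly one endpoint and never duplicated.
    fn next_index(&self, num_endpoints: usize) -> usize {
        self.counter.fetch_add(1, Ordering::Relaxed) % num_endpoints
    }
}

fn main() {
    let rr = RoundRobin::new();
    let endpoints = ["10.0.0.1:7000", "10.0.0.2:7000", "10.0.0.3:7000"];
    for _ in 0..6 {
        let i = rr.next_index(endpoints.len());
        println!("send packet to {}", endpoints[i]);
    }
}
```

With this scheme the client's upstream bandwidth stays the same as with a single proxy, which addresses the ADSL concern above.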
My assumption here is that people would set up redundant proxies in the same region that would be essentially identical from a latency / network-connection perspective (although it is really up to the user). So in a GCP case, I would have several Server proxies in the same GCP zone, all pointing to Game Servers hosted in the same zone; sending data to any of the Server proxies is therefore essentially the same for all intents and purposes. Does that make sense? That being said, down the line we could do some kind of weighted load balancing / detection of ping time / something else for more advanced load-balancing options like you described above. What do you think?
Sorry for late reply!
It was based on previous discussions (in meeting) where it was mentioned that we would send data on X endpoints in order to have redundancy. And I think that's where my thought process picked it up.
That was exactly my concern!
That's absolutely how I would assume most implementations would look. My use case comes from the possibility of using these proxies to forward traffic to any provider. Another potential setup would be to deploy proxies in all datacenters (even those you don't have servers in) and then use Google's internal network to transport the traffic. That's where you would want the client to check latency. This might be a very specific solution, but one that I think most creators of latency-sensitive games would like to have.
It does make sense!
This commit adds RoundRobin and Random load balancing support to a proxy, as well as config support for multiple addresses and a load balancer policy on the client proxy config. The default behavior, if no policy is set or if the proxy is a server proxy, is to send packets to all endpoints. This also introduces the `rand` crate as a dependency, used by the random load balancing implementation. Resolves #3
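The three behaviours the commit describes can be sketched as follows. This is an illustrative outline only, not the crate's real API: the enum and function names are invented, and a simple multiplicative hash stands in for the `rand` crate so the sketch stays dependency-free.

```rust
// Hypothetical sketch of the policies described in the commit message.
#[derive(Clone, Copy)]
enum Policy {
    Broadcast,  // default: every packet goes to all endpoints
    RoundRobin, // one endpoint per packet, cycling in order
    Random,     // one endpoint per packet, chosen at random
}

// `seq` is a per-session packet counter. The actual commit uses the `rand`
// crate for Random; a Knuth-style multiplicative hash stands in here.
fn choose<'a>(policy: Policy, endpoints: &'a [&'a str], seq: usize) -> Vec<&'a str> {
    match policy {
        Policy::Broadcast => endpoints.to_vec(),
        Policy::RoundRobin => vec![endpoints[seq % endpoints.len()]],
        Policy::Random => {
            let i = seq.wrapping_mul(2654435761) % endpoints.len();
            vec![endpoints[i]]
        }
    }
}

fn main() {
    let eps = ["10.0.0.1:7000", "10.0.0.2:7000"];
    for seq in 0..4 {
        println!("{:?}", choose(Policy::RoundRobin, &eps, seq));
    }
}
```

Note that only Broadcast multiplies the client's upstream bandwidth; RoundRobin and Random send each packet exactly once.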
Should a Client proxy be able to send packets to multiple Server proxy endpoints, probably in some sort of load-balanced way, such as round-robin or random order?
This provides another layer of redundancy in case a single Server proxy goes down and it takes time to realise this and move to a new one. At least this way, some traffic keeps going through -- seemingly at a slightly lower latency.
Maybe a configuration like:
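(The snippet that originally followed was not captured in this copy. Purely as a hypothetical illustration, with every field name invented rather than taken from the project's actual schema, a client proxy config with multiple endpoints and a load-balancer policy might look like:)

```yaml
# Hypothetical illustration only -- field names are invented.
client:
  addresses:
    - 192.168.0.10:7000
    - 192.168.0.11:7000
    - 192.168.0.12:7000
  lb_policy: ROUND_ROBIN   # or RANDOM; omit to send to all endpoints
```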