Ingress-nginx unbalanced traffic #10061
This issue is currently awaiting triage. If ingress-nginx contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@simonemilano Please provide answers to the questions asked in the new issue template. You have not even copy/pasted the output of the kubectl commands that describe the controller, Ingress, Service, curl request etc., so any discussion here is going to be based on guesswork. Some way to reproduce a break in round-robin load balancing, or at least a deep understanding of your requests and Ingress, coupled with the complexity of response time depending on a call by the backend pod to some internet endpoint etc., needs to be actionable by a developer if there is a problem in the code. /remove-kind bug
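To make the report actionable, the kind of outputs requested above can be gathered with standard kubectl commands. The namespaces and resource names below are placeholders to substitute for your own:

```shell
# Placeholder names: adjust the namespaces, Ingress, and Service to your setup.
kubectl -n ingress-nginx get pods -o wide
kubectl -n ingress-nginx describe deployment ingress-nginx-controller
kubectl -n <app-namespace> describe ingress <your-ingress>
kubectl -n <app-namespace> describe service <your-service>

# Check whether the controller Service uses externalTrafficPolicy Cluster or Local
kubectl -n ingress-nginx get service ingress-nginx-controller \
  -o jsonpath='{.spec.externalTrafficPolicy}'

# A sample request through the controller, with verbose output
curl -v https://<your-host>/<your-path>
```

These commands require access to a live cluster, so they are shown here only as a diagnostic checklist.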
Can you provide the exact network setup, the externalTrafficPolicy, and the worker configuration (is it the default)? Can you explain this a little more so we can understand the traffic routing?
/triage needs-information
Hi, the problem seems very similar to the one described here: https://technology.lastminute.com/ingress-nginx-bug-makes-comeback/. @strongjz Ingress-nginx is in a dedicated namespace. Microservice1 and Microservice2 are in the same namespace. We have a bunch of VMs located beside the cluster where we run JMeter. From those VMs we call Microservice1 through ingress-nginx. In the same situation EWMA seems to behave correctly, apart from the instantaneous CPU usage that swings up and down (but I think that's normal). Overall, the pods are balanced.
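For context on why EWMA can behave differently from round robin: instead of cycling through endpoints, it keeps a smoothed estimate of each endpoint's response time and prefers the currently fastest one. A minimal Python sketch of the idea (a toy illustration only, not the actual ingress-nginx Lua implementation, which also uses techniques such as two-choice sampling and time decay):

```python
def ewma_update(old: float, sample: float, alpha: float = 0.3) -> float:
    """Exponentially weighted moving average of observed latency."""
    return alpha * sample + (1 - alpha) * old


class EwmaBalancer:
    """Toy latency-aware balancer: route to the endpoint with the
    lowest smoothed latency estimate."""

    def __init__(self, endpoints):
        # Each endpoint starts with a score of 0.0 (no samples yet).
        self.scores = {e: 0.0 for e in endpoints}

    def pick(self) -> str:
        # Choose the endpoint with the lowest smoothed latency.
        return min(self.scores, key=self.scores.get)

    def report(self, endpoint: str, latency: float) -> None:
        # Fold the newly observed latency into the running estimate.
        self.scores[endpoint] = ewma_update(self.scores[endpoint], latency)


balancer = EwmaBalancer(["pod-a", "pod-b"])
balancer.report("pod-a", 100.0)  # slow endpoint
balancer.report("pod-b", 10.0)   # fast endpoint
print(balancer.pick())           # prefers the faster "pod-b"
```

Note the cold-start property of this toy model: an endpoint with no samples scores 0.0 and therefore looks fastest until it reports latencies, which is consistent with newly added pods briefly attracting extra traffic.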
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
@simonemilano Sorry for no action from the project on this for so long. We just did not have enough resources to research and experiment with such a complex issue. Now, after so long, my first thought is that more unconventional data gathering is needed to even think of possibilities, along with specific test cases.
If this is not going to be worked on, then please close the issue. This update comes in light of the fact that the project had to make some required decisions owing to a shortage of resources. We even had to deprecate popular features because we cannot support/maintain them (needless to say, the load-balancing algorithm is not in that category, as load balancing is a direct implication of the Kubernetes Ingress API).
/remove-kind support |
Hi,
we are experiencing unbalanced traffic using ingress-nginx on Google Kubernetes Engine. We are using ingress-nginx v1.1.1 to expose a deployment that, at the moment, makes an HTTPS call to an external service and then returns the answer to the caller.
Using round robin (the default for ingress-nginx) we see very unbalanced traffic. Initially the load is symmetrical between the pods, but when the deployment scales, ingress-nginx sends increasing traffic to the new pod. On some occasions the new pod sits at 90% CPU usage while the others are at 30%.
Using EWMA seems to fix this, but CPU usage goes up and down when observed on a scale of seconds.
Any idea of why round robin behaves like that?
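For reference, the balancing algorithm can be switched away from the default round robin with the documented `nginx.ingress.kubernetes.io/load-balance` annotation on a single Ingress, or globally via the `load-balance` key in the controller ConfigMap. The resource names and host below are hypothetical:

```yaml
# Per-Ingress: use EWMA instead of the default round robin
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservice1          # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "ewma"
spec:
  rules:
    - host: example.internal   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: microservice1   # hypothetical Service
                port:
                  number: 80
```

Setting `load-balance: "ewma"` in the ingress-nginx ConfigMap applies the same change to every Ingress served by the controller.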