Pending pod triggers new node instead of evicting a pod with lower priority #1410
Comments
Thanks for reporting that. Are the "paused" pods actually being created?
Thanks for helping. The "paused" pods are getting created and they take up 6000m CPU and 10000Mi memory each. By the way, we are using kops 1.10 and cluster-autoscaler 1.3.0.
I don't know :) It depends on how big the resource requests of the applications are. Also, could you please check if …
I am not sure if that is possible, but you may also try to run the scheduler with log verbosity set to at least … Btw, which version of k8s and CA are you using?
We are using kops 1.10 (with Kubernetes 1.10.6) and cluster-autoscaler 1.3.0.
Maybe that's the reason why my "paused" pods are not being rescheduled. I will enable this field and let you know.
Regarding the application pods' priority, we have defined a default priorityClass in our cluster, so all new pods get this class by default.
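For illustration (not from this thread), a cluster-wide default priority class is just a PriorityClass with globalDefault: true. A minimal sketch with a made-up name and value, using the scheduling.k8s.io/v1 API; clusters of the 1.10/1.11 era exposed this under scheduling.k8s.io/v1alpha1 instead:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: app-default            # hypothetical name
value: 1000                    # priority assigned to pods that do not set priorityClassName
globalDefault: true            # makes this the cluster-wide default class
description: "Default priority for application pods."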
I found out how to make it work. There are a couple more parameters to set than what the cluster-autoscaler docs describe.
That will enable PodPriority and preemption in your cluster. Thank you for helping me!
Link AWS kops setup instructions from #1410
@mmingorance-dh thanks for posting the solution!
@mmingorance-dh there are a few typos in your yaml (missing colon, duplicated keys and inconsistent case), it should be:

kubeAPIServer:
  runtimeConfig:
    scheduling.k8s.io/v1alpha1: "true"
    admissionregistration.k8s.io/v1beta1: "true"
    autoscaling/v2beta1: "true"
  admissionControl:
  - Priority
  featureGates:
    PodPriority: "true"
kubelet:
  featureGates:
    PodPriority: "true"
kubeScheduler:
  featureGates:
    PodPriority: "true"
kubeControllerManager:
  horizontalPodAutoscalerUseRestClients: true
  featureGates:
    PodPriority: "true"
@aarongorka thanks for catching that. I just updated my comment as well.
Updated.
@mmingorance-dh Which config file do I set that config in? The kops config or the cluster-autoscaler config?
@njfix6 In the kops cluster config directly.
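For reference, a minimal sketch of the usual kops workflow for applying that snippet, assuming $NAME points at your cluster and the kops state store is already configured:

kops edit cluster $NAME                    # paste the snippet above under spec:
kops update cluster $NAME --yes            # apply the new cluster spec
kops rolling-update cluster $NAME --yes    # roll masters/nodes so the new flags take effect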
OK cool, sounds good. Is there a plan to enable this by default in 1.12 or 1.13? It would be really nice.
It's already enabled by default as a beta feature in Kubernetes 1.12.
Ok awesome! Thanks for the help!
You're welcome. Give it a try!
@mmingorance-dh I tried to install this Helm chart on a cluster created with kops-1.18.0-beta1, but I didn't apply any of the kops configuration changes listed above. It is not working: nothing happens, no pause containers are created. What is the status of that snippet with the latest kops versions? Do we still need it?
@linecolumn you shouldn't need the snippet configuration anymore. That configuration is only required on Kubernetes clusters running version 1.11 or earlier; priority and preemption are enabled by default from Kubernetes 1.12 onwards. This means the chart should work out of the box.
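As a quick sanity check (just a common way to verify, not something from this thread), you can confirm that the priority API is served and that the chart's priority classes exist:

kubectl api-resources | grep -i priorityclass
kubectl get priorityclasses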
Something is not okay: when deploying the default Helm chart on a fresh cluster, no pods are created. Any ideas how to debug this chart?
@linecolumn The …
This means no Deployment is being created: by default the chart only creates 2 priority classes. Please create a deployment following this example: https://github.com/helm/charts/blob/master/stable/cluster-overprovisioner/ci/additional-deploys-values.yaml
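For reference, a values file along those lines might look roughly like the sketch below; the field names follow the linked CI example from memory and may differ between chart versions, so treat the linked file as authoritative:

deployments:
  - name: spare-capacity        # hypothetical deployment name
    replicaCount: 1
    resources:
      requests:
        cpu: 100m               # size these requests to the headroom you want to reserve
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi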
@mmingorance-dh thank you! When I put:
the deployment and pod are up:
But I fail to understand how this can be utilized, because … Should I increase the number of replicas and/or set some resource requests so it kicks off new node creation?
@linecolumn The way you can overprovision a cluster with this chart is by taking advantage of the low priority of the "paused" pods. This way, those pods are running and occupying space in the cluster, and every time a new pod with a higher priority is created and goes into Pending state, the scheduler can evict one of the low-priority pods to make room for it, as shown in the sketch below.
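To make that concrete, here is a minimal sketch of the pattern from the cluster-autoscaler FAQ; the class name, replica count, image and resource sizes are illustrative, not taken from this thread:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                        # lower than any normal workload, so these pods are preempted first
globalDefault: false
description: "Placeholder pods that reserve spare capacity."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: reserve-resources
        image: k8s.gcr.io/pause   # does nothing; it only holds the requested resources
        resources:
          requests:
            cpu: "1"
            memory: 1Gi

When an application pod with a higher priority goes Pending, the scheduler preempts one of these pause pods instead of waiting for a new node; the evicted pause pod then goes Pending itself and triggers the scale-up in the background.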
Hi all,
Yesterday we started to test cluster-autoscaler with priorityClasses and podPriority so that we always have some extra capacity available in our cluster. However, whenever a new pod comes up and is in Pending state, it triggers a new node via cluster-autoscaler instead of replacing a pod from the "paused" deployment that runs with a lower priorityClass.
This is the configuration I added to my cluster:
As I could see on my masters, these features seem to be enabled:
--feature-gates=PodPriority=true
--enable-admission-plugins=Priority,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota
Is there anything else that I'm missing in my config?
The overprovisioning deployment is the same one you can find in the cluster-autoscaler FAQ.