Description
Hi all,
Yesterday we started testing the cluster-autoscaler with PriorityClasses and pod priority so that we always have some spare capacity available in our cluster. However, whenever a new pod comes up and goes Pending, it triggers a new node through the cluster-autoscaler instead of preempting a pod from the "paused" placeholder deployment that runs with the lower PriorityClass.
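The PriorityClass for those placeholder pods mirrors the FAQ's example; a minimal sketch (the name and value are the FAQ's defaults, nothing custom on our side):

apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1
globalDefault: false
description: "Priority class for the spare-capacity placeholder pods."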
This is the configuration I added to my cluster spec:
authorizationMode: RBAC
authorizationRbacSuperUser: admin
runtimeConfig:
  scheduling.k8s.io/v1alpha1: "true"
  admissionregistration.k8s.io/v1beta1: "true"
  autoscaling/v2beta1: "true"
kubelet:
  featureGates:
    PodPriority: "true"
kubeAPIServer:
  featureGates:
    PodPriority: "true"
  admissionControl:
  - Priority
  - NamespaceLifecycle
  - LimitRanger
  - ServiceAccount
  - PersistentVolumeLabel
  - DefaultStorageClass
  - ResourceQuota
  - DefaultTolerationSeconds
kubeControllerManager:
  horizontalPodAutoscalerDownscaleDelay: 1h0m0s
  horizontalPodAutoscalerUseRestClients: true
From what I can see on my masters, these features seem to be enabled:
--feature-gates=PodPriority=true
--enable-admission-plugins=Priority,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota
Is there anything else that I'm missing in my config?
The overprovisioning deployment is the same one you can find in the cluster-autoscaler FAQ.
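For reference, this is roughly its shape; the replica count and CPU request below are illustrative rather than our exact values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: reserve-resources
        image: k8s.gcr.io/pause
        resources:
          requests:
            cpu: "1"

My understanding from the FAQ is that these pause pods should be preempted as soon as a real pod goes Pending, and it is their replacement pods that should then trigger the scale-up, which is not what we are seeing.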