Description
Hi there,
and thanks for your great work! ❤️
While debugging githubixx/ansible-role-kubernetes-ca#14 I noticed that the kube-controller-manager certificate generated by the githubixx.ansible-role-kubernetes-ca Ansible role is not used in the kube-controller-manager systemd unit file.
See the default value of `k8s_controller_manager_settings` in the current release:
```yaml
k8s_controller_manager_settings:
  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
  "secure-port": "10257"
  "cluster-cidr": "10.200.0.0/16"
  "allocate-node-cidrs": "true"
  "cluster-name": "kubernetes"
  "authentication-kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "authorization-kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "leader-elect": "true"
  "service-cluster-ip-range": "10.32.0.0/16"
  "cluster-signing-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
  "cluster-signing-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver-key.pem"
  "root-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
  "requestheader-client-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
  "service-account-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-sa-key.pem"
  "use-service-account-credentials": "true"
```
The kube-controller-manager docs state for the `--tls-cert-file` flag:

> File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
When following your fantastic blog series Kubernetes the not so hard way with Ansible, a user would very likely use the githubixx.ansible-role-kubernetes-ca role to generate all certificates.
By default a kube-controller-manager certificate is generated by that role, but it is not used with the default values of githubixx.ansible-role-kubernetes-controller, so kube-controller-manager falls back to a self-signed serving certificate.
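This is easy to verify by looking at the certificate served on the secure port. Below is a minimal ad-hoc check (a sketch only, not part of the role; the playbook name and the `k8s_controller` host group are hypothetical, and it assumes `openssl` is installed on the controller nodes; the address expression and port 10257 match the `bind-address` and `secure-port` defaults above):

```yaml
# check_kcm_cert.yml - sketch for illustration, not part of the role.
- hosts: k8s_controller   # hypothetical group name
  tasks:
    - name: Show subject and issuer of the certificate served on port 10257
      ansible.builtin.shell: |
        echo | openssl s_client \
          -connect {{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}:10257 \
          2>/dev/null | openssl x509 -noout -subject -issuer
      register: kcm_cert
      changed_when: false

    - name: Print the result (subject == issuer indicates a self-signed cert)
      ansible.builtin.debug:
        var: kcm_cert.stdout_lines
```

With the change suggested below, the issuer should instead be the CA that signed cert-k8s-controller-manager.pem.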
Therefore I suggest the following change to the default values:
```yaml
k8s_controller_manager_settings:
  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
  "secure-port": "10257"
  "cluster-cidr": "10.200.0.0/16"
  "allocate-node-cidrs": "true"
  "cluster-name": "kubernetes"
  "authentication-kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "authorization-kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "kubeconfig": "{{ k8s_controller_manager_conf_dir }}/kubeconfig"
  "leader-elect": "true"
  "service-cluster-ip-range": "10.32.0.0/16"
  "cluster-signing-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
  "cluster-signing-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver-key.pem"
  "root-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
  "requestheader-client-ca-file": "{{ k8s_ctl_pki_dir }}/ca-k8s-apiserver.pem"
  "service-account-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-sa-key.pem"
  "use-service-account-credentials": "true"
  "client-ca-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"                    # <--- ADDED
  "tls-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager.pem"            # <--- ADDED
  "tls-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-key.pem" # <--- ADDED
```
These added lines:
"client-ca-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-apiserver.pem"
"tls-cert-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager.pem"
"tls-private-key-file": "{{ k8s_ctl_pki_dir }}/cert-k8s-controller-manager-key.pem"
are also required to correctly scrape the metrics of kube-controller-manager; see githubixx/ansible-role-kubernetes-ca#14.
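For illustration, here is the kind of Prometheus scrape job this enables (a sketch only; the job name, file paths and target address are assumptions, and the client certificate must be one that passes the authentication/authorization configured above):

```yaml
# prometheus.yml (excerpt) - sketch only; all paths and the target
# address are assumptions for illustration.
scrape_configs:
  - job_name: "kube-controller-manager"
    scheme: https
    metrics_path: /metrics
    tls_config:
      # CA that issued cert-k8s-controller-manager.pem, so Prometheus can
      # verify the serving certificate instead of resorting to
      # insecure_skip_verify: true
      ca_file: /etc/prometheus/ssl/ca-k8s-apiserver.pem
      # Client certificate/key for authenticating against the
      # proposed "client-ca-file" setting
      cert_file: /etc/prometheus/ssl/cert-prometheus.pem
      key_file: /etc/prometheus/ssl/cert-prometheus-key.pem
    static_configs:
      - targets: ["10.8.0.2:10257"]   # hypothetical controller address
```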
ℹ️ I successfully tested these changes without any problems.