Instructions for creating an HA Kubernetes cluster based on kubeadm's single-master deployment
cd /var/tmp
git clone https://github.com/cgilmour/kubernetes-ha
cd kubernetes-ha
./make_root_ca
./make_apiserver_certs --dns-name your-apiserver.external.dns.name 192.168.99.10 192.168.99.11 192.168.99.12
./make_sa_key
./make_discovery_config your-apiserver.external.dns.name:6443
This will emit the discovery token. You'll need this to add minions later on.
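If you want a copy of the token on disk, one option is to pipe the script's output through tee when you run it (a sketch: it assumes the token is printed to stdout, and the file path is arbitrary):
./make_discovery_config your-apiserver.external.dns.name:6443 | tee /var/tmp/discovery-token.txt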
./make_kubelet_conf your-apiserver.external.dns.name:6443
./make_manifests 192.168.99.10 192.168.99.11 192.168.99.12
mkdir /var/tmp/ha-cluster
cd /var/tmp/ha-cluster
wget https://raw.githubusercontent.com/cgilmour/kubernetes-ha/master/install_files
chmod +x install_files
./install_files 192.168.99.10:/var/tmp/kubernetes-ha host-ip-of-node
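For example, assuming the generated files live on 192.168.99.10 and install_files copies them to the node whose IP is given as the second argument, a sketch of running it once per master:
# Hypothetical loop over the three master IPs used throughout this guide.
for ip in 192.168.99.10 192.168.99.11 192.168.99.12; do
  ./install_files 192.168.99.10:/var/tmp/kubernetes-ha "$ip"
done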
Connect to each master node and run the command below.
sudo mv /etc/kubernetes/disabled-manifests/etcd-bootstrap.yaml /etc/kubernetes/manifests
Check that it is launching appropriately with docker ps and docker logs.
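For example (a sketch: the kubelet typically gives static-pod containers a k8s_ name prefix, so a name filter on etcd should match, though the pod's pause container may match as well):
docker ps --filter name=etcd
# Pick the etcd container's ID from the list above, or grab the first match:
docker logs $(docker ps -q --filter name=etcd | head -n 1)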
sudo mv /etc/kubernetes/disabled-manifests/kube-*.yaml /etc/kubernetes/manifests
Once the Kubernetes cluster is up and stable on all three nodes, replace the etcd bootstrap configuration with the stable config. Run the steps below on one node at a time, waiting for the instance to recover and rejoin before moving on to the next node.
sudo rm /etc/kubernetes/manifests/etcd-bootstrap.yaml
Wait for the node to drop from the cluster.
sudo mv /etc/kubernetes/disabled-manifests/etcd.yaml /etc/kubernetes/manifests
Wait for the node to rejoin the cluster.
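One way to watch members drop and rejoin is to ask etcd itself; a sketch, assuming etcdctl is available inside the etcd image and the name filter picks the right container:
docker exec $(docker ps -q --filter name=etcd | head -n 1) etcdctl cluster-health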
For each master node, apply the taint and role label as follows:
kubectl taint node node-name dedicated=master:NoSchedule
kubectl label node node-name kubeadm.alpha.kubernetes.io/role=master
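For example, with hypothetical node names master-1 through master-3 (use kubectl get nodes to see the real names):
for n in master-1 master-2 master-3; do
  kubectl taint node "$n" dedicated=master:NoSchedule
  kubectl label node "$n" kubeadm.alpha.kubernetes.io/role=master
done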
kubectl apply -f kube-proxy-daemonset.yaml
kubectl apply -f clusterinfo-secret.yaml
kubectl apply -f kube-discovery-deployment.yaml
kubectl apply -f kube-dns-deployment.yaml
kubectl apply -f kube-dns-service.yaml
Only one node will actually be running kube-discovery. Use kubectl -n kube-system get pods -o wide | grep kube-discovery to find where it is.
For example purposes, the commands below assume it is running on node #3, 192.168.99.12. Find the discovery token from the earlier step; the example uses afa67b.b5f052ecc18d8f8c.
kubeadm join --token=afa67b.b5f052ecc18d8f8c 192.168.99.12
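Once the join completes, the new minion should appear in the node list and eventually report Ready:
kubectl get nodes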
./make_romana_manifests 192.168.99.10 192.168.99.11 192.168.99.12
wget https://raw.githubusercontent.com/cgilmour/kubernetes-ha/master/install_romana_files
chmod +x install_romana_files
./install_romana_files 192.168.99.10:/var/tmp/kubernetes-ha host-ip-of-node
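As with install_files above, run this once per master, substituting each master's IP for host-ip-of-node.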
Connect to each master node and run the command below.
sudo mv /etc/kubernetes/disabled-manifests/romana-etcd-bootstrap.yaml /etc/kubernetes/manifests
Once the romana etcd cluster is up and stable on all three nodes, replace its bootstrap configuration with the stable config. Run the steps below on one node at a time, waiting for the instance to recover and rejoin before moving on to the next node.
sudo rm /etc/kubernetes/manifests/romana-etcd-bootstrap.yaml
Wait for the node to drop from the cluster.
sudo mv /etc/kubernetes/disabled-manifests/romana-etcd.yaml /etc/kubernetes/manifests
Wait for the node to rejoin the cluster.
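The same health check as before applies, pointed at the romana etcd container instead (a sketch: the name filter assumes the container name includes romana-etcd, matching the manifest name):
docker exec $(docker ps -q --filter name=romana-etcd | head -n 1) etcdctl cluster-health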
kubectl apply -f romana-datastore-secret.yaml
NOTE: This must be done in order, one node at a time, because of the way MariaDB initializes a cluster. The first master node has a different configuration from the other members.
sudo mv /etc/kubernetes/disabled-manifests/romana-datastore-bootstrap.yaml /etc/kubernetes/manifests
Wait for this to initialize completely before running it on the other nodes.
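One way to tell that the bootstrap instance has finished initializing is to watch its logs (a sketch: the name filter assumes the container name includes romana-datastore, matching the manifest name):
docker logs -f $(docker ps -q --filter name=romana-datastore | head -n 1)
# Wait until mysqld reports it is ready for connections before proceeding.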
sudo rm /etc/kubernetes/manifests/romana-datastore-bootstrap.yaml
Wait for the node to drop from the cluster.
sudo mv /etc/kubernetes/disabled-manifests/romana-datastore.yaml /etc/kubernetes/manifests
Wait for the node to rejoin the cluster.
kubectl apply -f romana-cluster-kubeadm.yaml