
Commit 1ecdeca

Jayendra Patil committed

Added Kubeadm Cluster Upgrade scenario
1 parent 20b1ee9 commit 1ecdeca

File tree

3 files changed (+330 -1 lines)

cka/1.cluster_architecture_installation_configuration.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ TBD

 <br />

-Refer [Upgrading Kubeadm Clusters](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+Refer [Upgrading Kubeadm Clusters](../topics/cluster_upgrade.md)

 <br />

topics/README.md

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ Topics cover test exercises for each topics
 - [Auditing](./auditing.md)
 - [Authentication](../authentication.md)
 - [Platform Binary Verfication](./binary_verification.md)
+- [Cluster Upgrade](./cluster_upgrade.md)
 - [ConfigMaps](./configmaps.md)
 - [Deployments](./deployments.md)
 - [Falco](./falco.md)

topics/cluster_upgrade.md

Lines changed: 328 additions & 0 deletions
@@ -0,0 +1,328 @@
# [Cluster Upgrade](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)

**NOTE** - This was performed on the [katacoda playground](https://www.katacoda.com/courses/kubernetes/playground) with a two-node cluster at v1.18.0, which was upgraded to v1.19.3. Version 1.19.4 was not used because it had issues upgrading on the worker node.

<br />

### Upgrade Control Plane nodes

<br />

#### Check current version

```bash
kubectl get nodes

# NAME           STATUS   ROLES    AGE     VERSION
# controlplane   Ready    master   4m53s   v1.18.0
# node01         Ready    <none>   4m25s   v1.18.0
```

#### Determine which version to upgrade to - Choosing 1.19.3

```bash
apt update
apt-cache madison kubeadm
# kubeadm | 1.19.3-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
```
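
`apt-cache madison` prints one line per version available in the configured repositories. If you only want the newest patch release of a given minor, a quick filter does it; this one-liner is an illustrative addition, not part of the original walkthrough (madison columns are `package | version | source`):

```bash
# Newest 1.19 patch available in the repository
apt-cache madison kubeadm | awk '{print $3}' | grep '^1\.19' | sort -V | tail -n 1
# 1.19.x-00   (illustrative output)
```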

#### Upgrading control plane nodes

```bash
# upgrade kubeadm
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.3-00

# Setting up kubernetes-cni (0.8.7-00) ...
# Setting up kubeadm (1.19.3-00) ...
```
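
The `--allow-change-held-packages` flag is needed because, on clusters installed per the official docs, kubeadm, kubelet and kubectl are pinned with `apt-mark hold` so a routine `apt-get upgrade` cannot bump them accidentally. A quick way to check the holds, and an equivalent unhold/install/re-hold flow, assuming the packages were held at install time:

```bash
# Show which packages are currently held
apt-mark showhold
# kubeadm
# kubectl
# kubelet

# Equivalent alternative to --allow-change-held-packages
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.19.3-00 && \
apt-mark hold kubeadm
```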

```bash
# Verify that the download works and has the expected version:
kubeadm version

# kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
```

```bash
sudo kubeadm upgrade plan

# [upgrade/config] Making sure the configuration is correct:
# [upgrade/config] Reading configuration from the cluster...
# [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [preflight] Running pre-flight checks.
# [upgrade] Running cluster health checks
# [upgrade] Fetching available versions to upgrade to
# [upgrade/versions] Cluster version: v1.18.0
# [upgrade/versions] kubeadm version: v1.19.3
# I1217 07:14:14.966727 9206 version.go:252] remote version is much newer: v1.23.1; falling back to: stable-1.19
# [upgrade/versions] Latest stable version: v1.19.16
# [upgrade/versions] Latest stable version: v1.19.16
# [upgrade/versions] Latest version in the v1.18 series: v1.18.20
# [upgrade/versions] Latest version in the v1.18 series: v1.18.20

# Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
# COMPONENT   CURRENT       AVAILABLE
# kubelet     2 x v1.18.0   v1.18.20

# Upgrade to the latest version in the v1.18 series:

# COMPONENT                 CURRENT   AVAILABLE
# kube-apiserver            v1.18.0   v1.18.20
# kube-controller-manager   v1.18.0   v1.18.20
# kube-scheduler            v1.18.0   v1.18.20
# kube-proxy                v1.18.0   v1.18.20
# CoreDNS                   1.6.7     1.7.0
# etcd                      3.4.3-0   3.4.3-0

# You can now apply the upgrade by executing the following command:

# kubeadm upgrade apply v1.18.20

# _____________________________________________________________________

# Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
# COMPONENT   CURRENT       AVAILABLE
# kubelet     2 x v1.18.0   v1.19.16

# Upgrade to the latest stable version:

# COMPONENT                 CURRENT   AVAILABLE
# kube-apiserver            v1.18.0   v1.19.16
# kube-controller-manager   v1.18.0   v1.19.16
# kube-scheduler            v1.18.0   v1.19.16
# kube-proxy                v1.18.0   v1.19.16
# CoreDNS                   1.6.7     1.7.0
# etcd                      3.4.3-0   3.4.13-0

# You can now apply the upgrade by executing the following command:

# kubeadm upgrade apply v1.19.16

# Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.16.

# _____________________________________________________________________

# The table below shows the current state of component configs as understood by this version of kubeadm.
# Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
# resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
# upgrade to is denoted in the "PREFERRED VERSION" column.

# API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
# kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
# kubelet.config.k8s.io     v1beta1           v1beta1             no
# _____________________________________________________________________
```
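
The plan offers two targets: the latest v1.18 patch (v1.18.20) and the latest stable release (v1.19.16). `kubeadm upgrade apply` cannot target a version newer than the kubeadm binary itself (hence the note about updating kubeadm first), and the installed kubeadm here is v1.19.3, so v1.19.3 is applied below. If you want a preview first, `kubeadm upgrade apply` also supports a dry run:

```bash
# Preview the upgrade without changing any cluster state
sudo kubeadm upgrade apply v1.19.3 --dry-run
```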

```bash
sudo kubeadm upgrade apply v1.19.3

# [upgrade/config] Making sure the configuration is correct:
# [upgrade/config] Reading configuration from the cluster...
# [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [preflight] Running pre-flight checks.
# [upgrade] Running cluster health checks
# [upgrade/version] You have chosen to change the cluster version to "v1.19.3"
# [upgrade/versions] Cluster version: v1.18.0
# [upgrade/versions] kubeadm version: v1.19.3
# [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
# [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
# [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
# [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
# [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.3"...
# Static pod: kube-apiserver-controlplane hash: 32d269b25126efaf2f4d5b79beada591
# Static pod: kube-controller-manager-controlplane hash: f9b9c6969be80756638e9cf4927b5881
# Static pod: kube-scheduler-controlplane hash: 5795d0c442cb997ff93c49feeb9f6386
# [upgrade/etcd] Upgrading to TLS for etcd
# Static pod: etcd-controlplane hash: 7831b536f3a79e96fe34049ff61c499b
# [upgrade/staticpods] Preparing for "etcd" upgrade
# [upgrade/staticpods] Renewing etcd-server certificate
# [upgrade/staticpods] Renewing etcd-peer certificate
# [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
# [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-17-07-15-37/etcd.yaml"
# [upgrade/staticpods] Waiting for the kubelet to restart the component
# [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
# Static pod: etcd-controlplane hash: 7831b536f3a79e96fe34049ff61c499b
# Static pod: etcd-controlplane hash: 7831b536f3a79e96fe34049ff61c499b
# Static pod: etcd-controlplane hash: f291ed490602f9995ce3fae0c7278fde
# [apiclient] Found 1 Pods for label selector component=etcd
# [upgrade/staticpods] Component "etcd" upgraded successfully!
# [upgrade/etcd] Waiting for etcd to become available
# [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests684409789"
# [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
# [upgrade/staticpods] Renewing apiserver certificate
# [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
# [upgrade/staticpods] Renewing front-proxy-client certificate
# [upgrade/staticpods] Renewing apiserver-etcd-client certificate
# [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-17-07-15-37/kube-apiserver.yaml"
# [upgrade/staticpods] Waiting for the kubelet to restart the component
# [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
# Static pod: kube-apiserver-controlplane hash: 32d269b25126efaf2f4d5b79beada591
# Static pod: kube-apiserver-controlplane hash: 5bd0c975123753bb782dc1caf5ae2380
# [apiclient] Found 1 Pods for label selector component=kube-apiserver
# [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
# [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
# [upgrade/staticpods] Renewing controller-manager.conf certificate
# [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-17-07-15-37/kube-controller-manager.yaml"
# [upgrade/staticpods] Waiting for the kubelet to restart the component
# [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
# Static pod: kube-controller-manager-controlplane hash: f9b9c6969be80756638e9cf4927b5881
# Static pod: kube-controller-manager-controlplane hash: 27ef001ee9e1781a258a9c2a188cd888
# [apiclient] Found 1 Pods for label selector component=kube-controller-manager
# [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
# [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
# [upgrade/staticpods] Renewing scheduler.conf certificate
# [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-17-07-15-37/kube-scheduler.yaml"
# [upgrade/staticpods] Waiting for the kubelet to restart the component
# [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
# Static pod: kube-scheduler-controlplane hash: 5795d0c442cb997ff93c49feeb9f6386
# Static pod: kube-scheduler-controlplane hash: c4e7975f4329949f35219b973dfc69c5
# [apiclient] Found 1 Pods for label selector component=kube-scheduler
# [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
# [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
# [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
# [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
# [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
# [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
# [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
# [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
# [addons] Applied essential addon: CoreDNS
# [addons] Applied essential addon: kube-proxy

# [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.3". Enjoy!

# [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
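
At this point only the control plane components run v1.19.3; the kubelets (and kubectl) stay at v1.18.0 until the package upgrades below. A quick check should show roughly:

```bash
kubectl version --short
# Client Version: v1.18.0   (kubectl is upgraded in a later step)
# Server Version: v1.19.3
```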

#### Upgrade additional control plane nodes

```bash
# for any additional control plane nodes (if any) - currently none
sudo kubeadm upgrade node
```
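
On each additional control plane node the flow mirrors the first one, except that `kubeadm upgrade node` replaces `kubeadm upgrade apply`, since the cluster-wide upgrade has already been applied. A sketch, using the hypothetical node name `controlplane2`:

```bash
# On controlplane2: upgrade the kubeadm package, then upgrade the node
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.3-00
sudo kubeadm upgrade node

# From any node with kubectl access: drain it
kubectl drain controlplane2 --ignore-daemonsets

# Back on controlplane2: upgrade kubelet/kubectl and restart the kubelet
apt-get install -y --allow-change-held-packages kubelet=1.19.3-00 kubectl=1.19.3-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Finally, make it schedulable again
kubectl uncordon controlplane2
```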

#### Drain the control plane node

```bash
kubectl drain controlplane --ignore-daemonsets

# node/controlplane cordoned
# WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-jjlnt, kube-system/kube-proxy-np9zl
# evicting pod kube-system/coredns-f9fd979d6-j6w5s
# pod/coredns-f9fd979d6-j6w5s evicted
# node/controlplane evicted
```
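
`drain` is a cordon (mark the node unschedulable) followed by eviction of the evictable pods; DaemonSet-managed pods cannot be evicted, which is why `--ignore-daemonsets` is required here. Until the node is uncordoned, it should show up roughly as:

```bash
kubectl get nodes
# NAME           STATUS                     ROLES    AGE   VERSION
# controlplane   Ready,SchedulingDisabled   master   ...   v1.18.0
# node01         Ready                      <none>   ...   v1.18.0
```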

#### Upgrade kubelet and kubectl

```bash
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.19.3-00 kubectl=1.19.3-00

# Unpacking kubelet (1.19.3-00) over (1.18.0-00) ...
# Setting up kubelet (1.19.3-00) ...
# Setting up kubectl (1.19.3-00) ...

sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
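
To confirm the node agent itself picked up the new binary after the restart:

```bash
kubelet --version
# Kubernetes v1.19.3
```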

#### Uncordon the control plane node

```bash
kubectl uncordon controlplane
# node/controlplane uncordoned
```

#### Check the nodes

The control plane now reports v1.19.3; node01 still reports v1.18.0 because the worker has not been upgraded yet (the VERSION column shows each node's kubelet version).

```bash
kubectl get nodes

# NAME           STATUS   ROLES    AGE   VERSION
# controlplane   Ready    master   15m   v1.19.3
# node01         Ready    <none>   14m   v1.18.0
```

<br />

### Upgrade worker nodes

<br />

#### Upgrade kubeadm

Run the package commands in this section on the worker node (node01) itself; the drain and uncordon steps run from the control plane, as noted below.

```bash
apt update
apt-cache madison kubeadm

apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.3-00
# Unpacking kubeadm (1.19.3-00) over (1.18.0-00) ...
# Setting up kubernetes-cni (0.8.7-00) ...
# Setting up kubeadm (1.19.3-00) ...
```

#### Upgrade the kubelet configuration

```bash
sudo kubeadm upgrade node

# [upgrade] Reading configuration from the cluster...
# [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [preflight] Running pre-flight checks
# [preflight] Skipping prepull. Not a control plane node.
# [upgrade] Skipping phase. Not a control plane node.
# [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
# [upgrade] The configuration for this node was successfully updated!
# [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
```

#### Drain the node - Execute this on the master/control plane node

```bash
kubectl drain node01 --ignore-daemonsets

# node/node01 cordoned
# WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-26gz5, kube-system/kube-keepalived-vip-dskqw, kube-system/kube-proxy-jwpgs
# evicting pod kube-system/coredns-f9fd979d6-gjfpn
# evicting pod kube-system/coredns-f9fd979d6-xvh8h
# evicting pod kube-system/katacoda-cloud-provider-5f5fc5786f-565r6
# pod/katacoda-cloud-provider-5f5fc5786f-565r6 evicted
# pod/coredns-f9fd979d6-gjfpn evicted
# pod/coredns-f9fd979d6-xvh8h evicted
# node/node01 evicted
```
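
The evicted pods are recreated on the remaining schedulable nodes; with only two nodes, the coredns replicas should land on the control plane. An illustrative check:

```bash
kubectl get pods -n kube-system -o wide | grep coredns
# coredns-...   1/1   Running   0   ...   controlplane
```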

#### Upgrade kubelet and kubectl

```bash
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.19.3-00 kubectl=1.19.3-00

# ....
# kubectl is already the newest version (1.19.3-00).
# kubelet is already the newest version (1.19.3-00).
# The following packages were automatically installed and are no longer required:
# libc-ares2 libhttp-parser2.7.1 libnetplan0 libuv1 nodejs-doc python3-netifaces
# Use 'apt autoremove' to remove them.
# 0 upgraded, 0 newly installed, 0 to remove and 201 not upgraded.

sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

#### Uncordon the node - Execute this on the master/control plane node

```bash
kubectl uncordon node01
# node/node01 uncordoned
```

#### Verify nodes are upgraded

```bash
kubectl get nodes
# NAME           STATUS   ROLES    AGE   VERSION
# controlplane   Ready    master   22m   v1.19.3
# node01         Ready    <none>   22m   v1.19.3
```
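
As a final sanity check, the client/server versions and the kube-system pods can be verified; expected output, roughly:

```bash
kubectl version --short
# Client Version: v1.19.3
# Server Version: v1.19.3

kubectl get pods -n kube-system
# all pods should be Running
```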
