Commit 3059a87

Merge pull request #1240 from dgoodwin/downgrade

Add documentation for OSE 3.1 to 3.0 downgrade.

2 parents 4f2fbf6 + 42e1079

2 files changed: +254 -0 lines changed
_build_cfg.yml

Lines changed: 3 additions & 0 deletions
@@ -245,6 +245,9 @@ Topics:
     File: building_dependency_trees
   - Name: Troubleshooting Networking
     File: sdn_troubleshooting
+  - Name: Downgrading to 3.0
+    File: downgrade
+    Distros: openshift-enterprise
 
 ---
 Name: CLI Reference

admin_guide/downgrade.adoc

Lines changed: 251 additions & 0 deletions
@@ -0,0 +1,251 @@
= Downgrading OpenShift Enterprise 3.1 to 3.0
{product-author}
{product-version}
:icons: font
:experimental:
:toc: macro
:toc-title:
:prewrap!:
:description: Manual steps to revert to OpenShift 3.0 after an upgrade to 3.1.
:keywords: yum

toc::[]

== Overview

In extreme cases, it may be desirable to downgrade to 3.0 following an upgrade performed with atomic-openshift-installer or the openshift-ansible playbooks. The following sections describe the steps required on each system in the cluster.

== Step 1: Verify Backups Are In Place

The openshift-ansible upgrade playbooks should have created a backup of master-config.yaml and of the etcd data directory. Ensure these backups exist on your masters and etcd members. In the case of a separate etcd cluster, the backup is likely created on all etcd members, though only one is needed to recover.

====
----
/etc/openshift/master/master-config.yaml.[TIMESTAMP]
/var/lib/openshift/etcd-backup-[TIMESTAMP]
----
====
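
For example, a quick way to confirm the backups are present (a sketch; the [TIMESTAMP] suffixes will vary on your systems):

====
----
$ ls -l /etc/openshift/master/master-config.yaml.*
$ ls -ld /var/lib/openshift/etcd-backup-*
----
====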

The RPM downgrade will likely create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy of them regardless:

====
----
/etc/sysconfig/openshift-master
/etc/etcd/etcd.conf (if using a separate etcd cluster)
----
====
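
For example, one way to keep separate copies (a sketch; the destination names are arbitrary):

====
----
$ cp -p /etc/sysconfig/openshift-master /etc/sysconfig/openshift-master.downgrade-save
$ cp -p /etc/etcd/etcd.conf /etc/etcd/etcd.conf.downgrade-save    # separate etcd cluster only
----
====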


== Step 2: Shutdown Cluster

On all masters, nodes, and etcd members (if using a separate etcd cluster), ensure the relevant services are stopped:

====
----
$ systemctl stop atomic-openshift-master
$ systemctl stop atomic-openshift-node
$ systemctl stop etcd
----
====
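
Optionally, confirm that the services are no longer running (a sketch; units that are not installed on a given host will simply report as inactive or unknown):

====
----
$ systemctl is-active atomic-openshift-master atomic-openshift-node etcd
----
====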

== Step 3: Remove 3.1 Atomic OpenShift RPMs

On each master, node, and etcd member:

====
----
$ yum remove atomic-openshift atomic-openshift-clients atomic-openshift-node atomic-openshift-master etcd openvswitch atomic-openshift-sdn-ovs tuned-profiles-atomic-openshift-node
----
====
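
Before re-installing, you can confirm that the 3.1 packages are gone (a sketch):

====
----
$ rpm -qa | grep atomic-openshift    # should produce no output
----
====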


== Step 4: Re-install 3.0 RPMs

Disable the 3.1 repositories and re-enable the 3.0 repositories:

====
----
$ subscription-manager repos --disable=rhel-7-server-ose-3.1-rpms --enable=rhel-7-server-ose-3.0-rpms
----
====
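
To verify that only the 3.0 repository is now enabled, something like the following can be used (a sketch):

====
----
$ yum repolist enabled | grep ose
----
====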

On each OpenShift master:

====
----
$ yum install openshift openshift-master openshift-node openshift-sdn-ovs
----
====

On each OpenShift node:

====
----
$ yum install openshift openshift-node openshift-sdn-ovs
----
====

If using a separate etcd cluster, on each etcd member:

====
----
$ yum install etcd
----
====


== Step 5: Restore etcd

=== Create New Etcd Cluster From Backup

For both embedded etcd and separate etcd clusters, the first step is to restore the backup by creating a new single-node etcd cluster.

Choose a system to be the initial etcd member and restore its backup and configuration.

WARNING: If you are using embedded, non-clustered etcd, use /var/lib/openshift/openshift.local.etcd for ETCD_DIR in the commands below. If you are using a separate etcd cluster, use /var/lib/etcd/ for ETCD_DIR.
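
For example, for the embedded etcd case you might set ETCD_DIR like this before running the commands below (a sketch):

====
----
$ ETCD_DIR=/var/lib/openshift/openshift.local.etcd    # or ETCD_DIR=/var/lib/etcd/ for a separate etcd cluster
----
====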

====
----
$ mv $ETCD_DIR /var/lib/etcd.orig
$ cp -Rp /var/lib/openshift/etcd-backup-20151120093517/ $ETCD_DIR
$ chcon -R --reference /var/lib/etcd.orig/ $ETCD_DIR
$ chown -R etcd:etcd $ETCD_DIR
----
====

If you are using a separate etcd cluster, you should also restore /etc/etcd/etcd.conf from backup or from the .rpmsave copy.
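
For example (a sketch; assumes the RPM downgrade left an .rpmsave copy in place):

====
----
$ cp /etc/etcd/etcd.conf.rpmsave /etc/etcd/etcd.conf
----
====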

We now create the new single-node cluster using etcd's --force-new-cluster option. We can do this with a long, complex command using the values from /etc/etcd/etcd.conf, or we can temporarily modify the systemd unit file and start the service normally.

Edit /usr/lib/systemd/system/etcd.service and add --force-new-cluster:

====
----
$ sed -i '/ExecStart/s/"$/ --force-new-cluster"/' /usr/lib/systemd/system/etcd.service
$ cat /usr/lib/systemd/system/etcd.service | grep ExecStart
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --force-new-cluster"
$ systemctl daemon-reload
$ systemctl start etcd
----
====

Verify the etcd service started correctly, then re-edit /usr/lib/systemd/system/etcd.service and remove the --force-new-cluster option.
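
One way to check that etcd came up cleanly before re-editing the unit file (a sketch):

====
----
$ systemctl is-active etcd
active
$ journalctl -u etcd --no-pager -n 20    # review recent log output for errors
----
====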

====
----
$ sed -i '/ExecStart/s/ --force-new-cluster//' /usr/lib/systemd/system/etcd.service
$ cat /usr/lib/systemd/system/etcd.service | grep ExecStart
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd"
$ systemctl daemon-reload
$ systemctl restart etcd
----
====

Etcd should now be running correctly and will display OpenShift's configuration:

====
----
$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.16.4.18:2379,https://172.16.4.27:2379" ls /
----
====


=== Add Additional Etcd Members

If you are using a separate etcd cluster, additional steps are necessary.

Adjust the default localhost peerURL for the first member so we can add additional members to the cluster.

Get the member ID for the first member:

====
----
$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" member list
----
====

Update the peerURL. In etcd 2.2 and beyond, this can be done with etcdctl member update. On etcd 2.1 and below, we must use curl:

====
----
$ curl --cacert /etc/etcd/ca.crt --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key https://172.18.1.18:2379/v2/members/511b7fb6cc0001 -XPUT -H "Content-Type: application/json" -d '{"peerURLs":["https://172.18.1.18:2380"]}'
----
====

Re-run member list and ensure the peerURL no longer points to localhost.
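
For example, re-running the earlier command (a sketch; the addresses and member ID are the same illustrative values used above):

====
----
$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" member list
# The peerURLs field for member 511b7fb6cc0001 should now show https://172.18.1.18:2380
# rather than the default localhost URL.
----
====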

Now we add each member to the cluster, one at a time.

WARNING: Each member must be fully added and brought online one at a time.

WARNING: When adding each member to the cluster, the peerURL list must be correct for that point in time, so it will grow by one for each member we add. The etcdctl "member add" command will output the values that need to be set in etcd.conf as you add each member.

For each member, add it to the cluster using the values that can be found in that system's etcd.conf:

====
----
$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.16.4.18:2379,https://172.16.4.27:2379" member add 10.3.9.222 https://172.16.4.27:2380
Added member named 10.3.9.222 with ID 4e1db163a21d7651 to cluster

ETCD_NAME="10.3.9.222"
ETCD_INITIAL_CLUSTER="10.3.9.221=https://172.16.4.18:2380,10.3.9.222=https://172.16.4.27:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
----
====

The output contains the environment variables we need. Edit /etc/etcd/etcd.conf on the member system itself and ensure these settings match.
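
For example, to quickly compare the relevant values on the member system (a sketch):

====
----
$ grep -E '^ETCD_NAME|^ETCD_INITIAL_CLUSTER' /etc/etcd/etcd.conf
----
====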

We are now ready to start etcd on the new member:

====
----
$ rm -rf /var/lib/etcd/member
$ systemctl enable etcd
$ systemctl start etcd
----
====

Ensure the service starts correctly and the etcd cluster is now healthy:

====
----
$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.16.4.18:2379,https://172.16.4.27:2379" member list
51251b34b80001: name=10.3.9.221 peerURLs=https://172.16.4.18:2380 clientURLs=https://172.16.4.18:2379
d266df286a41a8a4: name=10.3.9.222 peerURLs=https://172.16.4.27:2380 clientURLs=https://172.16.4.27:2379

$ etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --peers="https://172.16.4.18:2379,https://172.16.4.27:2379" cluster-health
cluster is healthy
member 51251b34b80001 is healthy
member d266df286a41a8a4 is healthy
----
====

Now repeat this process for the next member to add to the cluster.

== Step 6: Bring OpenShift Services Back Online

=== OpenShift Masters

Restore your openshift-master configuration from backup:

====
----
$ cp /etc/sysconfig/openshift-master.rpmsave /etc/sysconfig/openshift-master
$ cp /etc/openshift/master/master-config.yaml.2015-11-20\@08\:36\:51~ /etc/openshift/master/master-config.yaml
$ systemctl enable openshift-master
$ systemctl enable openshift-node
$ systemctl start openshift-master
$ systemctl start openshift-node
----
====

=== OpenShift Nodes

On each node, enable and start the openshift-node service:

====
----
$ systemctl enable openshift-node
$ systemctl start openshift-node
----
====

Your cluster should now be back online.
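
As a final check, you can confirm from a master that the nodes have registered and report Ready (a sketch; assumes the oc client and cluster-admin credentials are available on the master):

====
----
$ oc get nodes
----
====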