
Commit e81220e

OKD lab
1 parent 1028f52 commit e81220e

File tree

4 files changed: +292 −101 lines changed

README.md

Lines changed: 59 additions & 99 deletions
@@ -51,7 +51,7 @@ This lab will walk you through the deployment of our sample MicroProfile applica

## Setting up the cluster

-To setup a VM in vLaunch and install OKD, see [instructions here](https://apps.na.collabserv.com/wikis/home?lang=en-us#!/wiki/Wfe97e7c353a2_4510_8471_7148220c0bec/page/Setting%20up%20a%20vLaunch%20System%20for%20Red%20Hat%20OpenShift%20Lab). If you do not have access to IBM's vLaunch, follow [instructions here](https://github.com/gshipley/installcentos).
+To set up a VM in vLaunch and install OKD, see [instructions here](https://apps.na.collabserv.com/wikis/home?lang=en-us#!/wiki/Wfe97e7c353a2_4510_8471_7148220c0bec/page/Setting%20up%20a%20vLaunch%20System%20for%20Red%20Hat%20OpenShift%20Lab).

## Part 1A: Build the application and Docker container

@@ -105,28 +105,27 @@ The following steps will build the sample application and create a Docker image

1. Navigate into the sample application directory if you are not already:
   ```bash
-  cd kubernetes-microprofile-lab/lab-artifacts/application
+  $ cd kubernetes-microprofile-lab/lab-artifacts/application
   ```
1. Build the sample application:
   ```bash
-  mvn clean package
+  $ mvn clean package
   ```
1. Navigate into the `lab-artifacts` directory:
   ```bash
-  cd ..
+  $ cd ..
   ```
1. Build and tag the Enterprise Docker image:
   ```bash
-  cd ..
-  docker build -t microservice-enterprise-web:1.0.0 -f EnterpriseDockerfile .
+  $ docker build -t microservice-enterprise-web:1.0.0 -f EnterpriseDockerfile .
   ```
1. Build and tag the Application Docker image:
   ```bash
-  docker build -t microservice-vote:1.0.0 -f ApplicationDockerfile .
+  $ docker build -t microservice-vote:1.0.0 -f ApplicationDockerfile .
   ```
1. You can use the Docker CLI to verify that your image is built (see the sketch after this list):
   ```bash
-  docker images
+  $ docker images
   ```
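
If the build succeeded, both images should be listed. As a quick check, a sketch using the Docker CLI's standard `reference` filter (the wildcard pattern is just illustrative):

```bash
# List only the images built above, filtered by repository name
$ docker images --filter "reference=microservice-*"
```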

## Part 1B: Upload the Docker image to OKD's internal registry
@@ -135,80 +134,83 @@ OKD provides an internal, integrated container image registry. For this lab, we

1. Ensure you are logged in to OKD. You can use the OKD command line interface (CLI) to interact with the cluster. Replace `<username>`, `<password>` and `<okd_ip>` with appropriate values:
   ```bash
-  oc login --username=<username> --password=<password>
+  $ oc login --username=<username> --password=<password>
   ```
1. Create a new project to host our application:
   ```bash
-  oc new-project myproject
+  $ oc new-project myproject
   ```
1. Log into the internal registry:
   ```bash
-  oc registry login --skip-check
+  $ oc registry login --skip-check
   ```
1. Tag your Docker image:
   ```bash
-  docker tag microservice-vote:1.0.0 docker-registry.default.svc:5000/myproject/microservice-vote:1.0.0
+  $ docker tag microservice-vote:1.0.0 docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
   ```
1. Now push your tagged image into the registry:
   ```bash
-  docker push docker-registry.default.svc:5000/myproject/microservice-vote:1.0.0
+  $ docker push docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
   ```
-1. Your image is now available in the Docker registry in OKD. You can verify this through the OKD's Registry Dashboard at `https://registry-console-default.apps.<okd_ip>.nip.io/registry`. You can use the same username and password as the one used in `oc login` command.
+1. Your image is now available in the internal registry in OKD. You can verify this through OKD's Registry Dashboard, available at `https://registry-console-default.apps.<okd_ip>.nip.io/registry`. You can use the same username and password as those used in the `oc login` command. You can also verify the upload from the CLI, as sketched below.
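
To double-check the upload from the CLI, you can list the image streams in the project. A sketch using standard `oc` commands (the image stream name follows from the tag pushed above):

```bash
# The pushed image should show up as an image stream in myproject
$ oc get imagestreams -n myproject
```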

## Part 2: Deploy Open Liberty operator and CouchDB Helm chart

In this part of the lab you will install an operator and a Helm chart.

-### Deploy CouchDB
+### Deploy CouchDB Helm chart

-In this section we will deploy CouchDB Helm chart. OKD does not come with tiller. So we need to install tiller first.
+In this section we will deploy the CouchDB Helm chart. However, as OKD does not come with Tiller, we will first install Tiller on the cluster and set up the Helm CLI to communicate with it.

1. Create a project for Tiller:
   ```bash
-  oc new-project tiller
-  ```
-  If you already have `tiller` project, switch to the project:
-  ```bash
-  oc project tiller
+  $ oc new-project tiller
   ```
1. Download the Helm CLI and install the Helm client locally:

   Linux:
   ```bash
-  curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-linux-amd64.tar.gz | tar xz
-  cd linux-amd64
+  $ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-linux-amd64.tar.gz | tar xz
+  $ cd linux-amd64
   ```

   OSX:
   ```bash
-  curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-darwin-amd64.tar.gz | tar xz
-  cd darwin-amd64
+  $ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-darwin-amd64.tar.gz | tar xz
+  $ cd darwin-amd64
   ```

-  Now configure the Helm client locally:
+1. Now configure the Helm client locally:
   ```bash
-  sudo mv helm /usr/local/bin
-  sudo chmod a+x /usr/local/bin/helm
-  ./helm init --client-only
+  $ sudo mv helm /usr/local/bin
+  $ sudo chmod a+x /usr/local/bin/helm
+  $ helm init --client-only
   ```
1. Install the Tiller server:
   ```bash
-  oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller" -p HELM_VERSION=v2.14.1 | oc create -f -
-  oc rollout status deployment tiller
+  $ oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller" -p HELM_VERSION=v2.9.0 | oc create -f -
+  $ oc rollout status deployment tiller
   ```
+  The rollout process might take a few minutes to complete.
1. If things go well, the following command should run successfully (see the sketch after this list if it cannot reach Tiller):
   ```bash
-  helm version
+  $ helm version
+  ```
+1. Grant the Tiller server `edit` and `admin` access to the current project:
+  ```bash
+  $ oc policy add-role-to-user edit "system:serviceaccount:tiller:tiller"
+  $ oc policy add-role-to-user admin "system:serviceaccount:tiller:tiller"
   ```
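
If `helm version` cannot find Tiller, the Helm 2 client may be looking for it in the wrong namespace. A sketch of pointing the client at the `tiller` project, using Helm 2's standard `TILLER_NAMESPACE` environment variable:

```bash
# Tell the Helm 2 client which namespace the Tiller server runs in
$ export TILLER_NAMESPACE=tiller
$ helm version
```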

-Now that the Helm is configured locally and on OKD, you can deploy CouchDB Helm chart.
+Now that Helm is configured both locally and on OKD, you can deploy the CouchDB Helm chart.
1. Navigate to `lab-artifacts/helm/database`:
   ```bash
-  cd lab-artifacts/helm/database
+  $ cd ../helm/database
   ```
1. Deploy the CouchDB Helm chart:
   ```bash
-  helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
-  helm install incubator/couchdb -f db_values.yaml --name couchdb
+  $ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
+  $ helm install incubator/couchdb -f db_values.yaml --name couchdb
   ```
   Ensure the CouchDB pod is up and running by executing the `kubectl get pods` command. Your output will look similar to the following:
   ```bash
@@ -224,36 +226,34 @@ Now that the Helm is configured locally and on OKD, you can deploy CouchDB Helm

1. Navigate to the Open Liberty Operator artifact directory:
   ```bash
-  cd lab-artifacts/operator/open-liberty-operator
+  $ cd lab-artifacts/operator/open-liberty-operator
   ```
1. Install the Open Liberty Operator artifacts:
   ```bash
-  kubectl apply -f olm/open-liberty-crd.yaml
-  kubectl apply -f deploy/service_account.yaml
-  kubectl apply -f deploy/role.yaml
-  kubectl apply -f deploy/role_binding.yaml
-  kubectl apply -f deploy/operator.yaml
+  $ kubectl apply -f olm/open-liberty-crd.yaml
+  $ kubectl apply -f deploy/service_account.yaml
+  $ kubectl apply -f deploy/role.yaml
+  $ kubectl apply -f deploy/role_binding.yaml
+  $ kubectl apply -f deploy/operator.yaml
   ```
1. Create a custom Security Context Constraint (SCC). An SCC controls the actions that a pod can perform and what it has the ability to access:
   ```bash
-  kubectl apply -f deploy/ibm-open-liberty-scc.yaml --validate=false
+  $ kubectl apply -f deploy/ibm-open-liberty-scc.yaml --validate=false
   ```
-1. Grant the default namespace's service account access to the newly created SCC, `ibm-open-liberty-scc`. Update `<namespace>` with the appropriate namespace:
+1. Grant the default namespace's service account access to the newly created SCC, `ibm-open-liberty-scc`:
   ```bash
-  oc adm policy add-scc-to-group ibm-open-liberty-scc system:serviceaccounts:<namespace>
+  $ oc adm policy add-scc-to-group ibm-open-liberty-scc system:serviceaccounts:myproject
   ```
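
Before deploying the application, it is worth confirming that the operator itself came up. A sketch (the deployment name is an assumption based on `deploy/operator.yaml`):

```bash
# The operator deployment should report its pod as available
$ kubectl get deployment open-liberty-operator
$ kubectl get pods
```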

#### Deploy application

1. Deploy the microservice application using the provided CR:
   ```bash
-  cd ../application
-  kubectl apply -f application-cr.yaml
+  $ cd ../application
+  $ kubectl apply -f application-cr.yaml
   ```
1. You can view the status of your deployment by running `kubectl get deployments`. If the deployment is not coming up after a few minutes, one way to debug what happened is to query the pods with `kubectl get pods` and then fetch the logs of the Liberty pod with `kubectl logs <pod>`.
-1. Use `kubectl get ing | awk 'FNR == 2 {print $3;}'` to determine the address of the application. Note: If the previous command is printing out a port, such as `80`, please wait a few more minutes for the `URL` to be available.
-1. Add `/openapi/ui` to the end of URL to reach the OpenAPI User Interface. For example, `https://<IP>:<PORT>/openapi/ui`.
-1. If you find that your OKD ingress is taking too long to return the result of the invocation and you get a timeout error, you can bypass the ingress and reach the application via its NodePort layer. To do that, simply find the NodePort port by finding out your service name with `kubectl get services` and then running the command `kubectl describe service <myservice> | grep NodePort | awk 'FNR == 2 {print $3;}' | awk -F '/' '{print $1;}'` and then inserting that port in your current URL using `http`, for example `http://9.8.7.6.nip.io:30698/openapi/ui/`. If those invocations are still taking long, please wait a few minutes for the deployment to fully initiate.
+1. We will access the application using its NodePort service. Find your service name with `kubectl get services`, then extract the NodePort with `kubectl describe service <myservice> | grep NodePort | awk 'FNR == 2 {print $3;}' | awk -F '/' '{print $1;}'`, and insert that port into your current URL using `http`, for example `http://9.8.7.6.nip.io:30698/openapi/ui/` (see the sketch after this list). If the invocations take too long, please wait a few minutes for the deployment to fully initialize.
1. Congratulations! You have successfully deployed a [MicroProfile](http://microprofile.io/) container into an OKD cluster using operators!
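
As a convenience, the NodePort lookup from the step above can be captured in a short shell sketch (`<myservice>` and `<okd_ip>` are placeholders, as elsewhere in this lab):

```bash
# Extract the NodePort of the application's service and print the resulting URL
$ PORT=$(kubectl describe service <myservice> | grep NodePort | awk 'FNR == 2 {print $3;}' | awk -F '/' '{print $1;}')
$ echo "http://<okd_ip>.nip.io:${PORT}/openapi/ui/"
```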

## Part 3: Explore the application
@@ -269,62 +269,22 @@ The `vote` application is using various MicroProfile specifications. The `/open

1. Click on `Execute` and inspect that the `Response body` contains the same name that you created in step 2. You successfully triggered a fetch from our microservice into the CouchDB database. The same call can also be made from the command line, as sketched below.
1. Feel free to explore the other APIs and play around with the microservice!
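
For reference, a hypothetical `curl` equivalent of the UI call (the exact context root and `<id>` depend on your deployment, so treat the path as an assumption):

```bash
# Fetch an attendee directly over the NodePort URL used earlier
$ curl http://<okd_ip>.nip.io:<nodeport>/attendee/<id>
```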

-## Part 4: Update the Helm release
+## Part 4: Update the Liberty Operator release

-In this part of the lab you will practice how to make changes to the Helm release you just deployed on the cluster using the Helm CLI.
+In this part of the lab you will practice making changes to the Liberty deployment you just deployed on the cluster using the Open Liberty Operator.

-So far, the database you deployed stores the data inside the container running the database. This means if the container gets deleted or restarted for any reason, all the data stored in the database would be lost.
-
-In order to store the data outside of the database container, you would need to enable data persistence through the Helm chart. When you enable persistence, the database would store the data in a PersistentVolume. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or by an automatic provisioner.
-
-The steps below would guide you how to enable persistence for your database:
-
-1. In [Part 3](#Part-3-Explore-the-application), you would've observed that calling `GET /attendee/{id}` returns the `name` you specified. Calling `GET` would read the data from the database.
-1. Find the name of the pod that is running the database container:
-  ```bash
-  kubectl get pods
-  ```
-  You should see a pod name similar to `couchdb-couchdb-0`.
-1. Delete the CouchDB pod to delete the container running the database.
-  ```bash
-  kubectl delete pod couchdb-couchdb-0
-  ```
-1. Run the following command to see the state of deployments:
-  ```bash
-  kubectl get pods
-  ```
-  You should get an output similar to the following:
-  ```bash
-  NAME                                   READY     STATUS    RESTARTS   AGE
-  couchdb-couchdb-0                      2/2       Running   0          3m
-  vote-userx-ibm-open-5b44d988bd-kqrjn   1/1       Running   0          3m
-  ```
-  Again, you need to wait until the couchdb pod is ready. Wait until the value under the `READY` column becomes `2/2`.
+The update scenario is that you will increase the number of replicas for the Liberty deployment to 3, which will increase the number of Open Liberty pods to 3.

-1. Call again the `GET /attendee/{id}` endpoint from the OpenAPI UI page and see that the server does not return the attendee you created anymore. Instead, it returns 404. That's because the data was stored in the couchdb pod and was lost when the pod was deleted. Let's upgrade our release to add persistence.
-1. Now let's enable persistence for our database:
+1. In the `lab-artifacts/operator/application/application-cr.yaml` file, change the `replicaCount` value to 3.
+1. Navigate to the `lab-artifacts/operator/application` directory:
   ```bash
-  helm upgrade --recreate-pods --force --reuse-values --set persistentVolume.enabled=true couchdb incubator/couchdb
+  $ cd lab-artifacts/operator/application
   ```
-1. Let's also upgrade the Liberty release for high availability by increasing the number of replicas:
+1. Apply the changes into the cluster:
   ```bash
-  helm upgrade --recreate-pods --force --reuse-values --set replicaCount=2 <release_name> ibm-charts/ibm-open-liberty
+  $ kubectl apply -f application-cr.yaml
   ```
-1. List the deployed packages with their chart versions by running:
-  ```bash
-  helm ls
-  ```
-  You can see that the number of revision should be 2 now for couchdb and Liberty.
-1. Run the following command to see the state of deployments:
-  ```bash
-  kubectl get pods
-  ```
-  You need to wait until the couchdb and Liberty pods become ready. The old pods may be terminating while the new ones start up.
-
-  For Liberty, you will now see 2 pods, since we increased the number of replicas.
-1. Refresh the page. You may need to add the security exception again. If you get `Failed to load API definition` message then try refreshing again.
-1. Now add a new attendee through the OpenAPI UI as before.
-1. Now repeat Steps 1-5 in this section to see that even though you delete the couchdb database container, data still gets recovered from the PersistentVolume.
+1. You can view the status of your deployment by running `kubectl get deployments`. It might take a few minutes until all the pods are ready; a watch sketch follows below.
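
To watch the scale-up converge, a sketch (`<deployment-name>` is whatever `kubectl get deployments` lists for the application; the `sed` line is an optional shortcut for step 1 and assumes the CR contains a literal `replicaCount:` field):

```bash
# Optional shortcut for step 1: bump the replica count in the CR
$ sed -i.bak 's/replicaCount:.*/replicaCount: 3/' application-cr.yaml
# Watch the new pods come up, then confirm the rollout finished
$ kubectl get pods -w
$ kubectl rollout status deployment <deployment-name>
```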

In this part you were introduced to rolling updates. DevOps teams can perform zero-downtime application upgrades, which is an important consideration for production environments.
