
Commit de8d4d3

Merge pull request #22 from microservices-api/okd

OKD lab

2 parents ae7270f + ce9295a, commit de8d4d3

File tree

6 files changed: +333 −186 lines changed
README.md

Lines changed: 99 additions & 142 deletions
@@ -11,21 +11,22 @@ For questions/comments about Open Liberty Docker container or Open Liberty Opera

# Before you begin

You'll need a few different artifacts for this lab. _If you are running these commands on the same VM where you installed OKD, everything except Maven is already installed._

Check if you have these installed by running:

```console
$ git --help
$ mvn --help
$ java -help
$ docker --help
$ kubectl --help
$ oc --help
```

If any of these are not installed:

* Install [Git client](https://git-scm.com/download/mac)
* Install [Maven](https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.2.1/html/installation_on_jboss_eap/install_maven)
* Install [Docker engine](https://docs.docker.com/engine/installation/)
* Install [Java 8](https://java.com/en/download/)
* Install [kubectl](https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz)
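The prerequisite checks can also be scripted. A minimal sketch in POSIX sh (the command list mirrors the checks above; `check_cmds` is a helper introduced here, not part of the lab):

```shell
# check_cmds prints each command from its arguments that is not found on PATH
check_cmds() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

# Check the lab's prerequisites; no output means everything is installed
check_cmds git mvn java docker kubectl oc
```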
@@ -51,7 +52,7 @@ This lab will walk you through the deployment of our sample MicroProfile applica

## Setting up the cluster

To set up a VM in vLaunch and install OKD, see [instructions here](https://apps.na.collabserv.com/wikis/home?lang=en-us#!/wiki/Wfe97e7c353a2_4510_8471_7148220c0bec/page/Setting%20up%20a%20vLaunch%20System%20for%20Red%20Hat%20OpenShift%20Lab).

## Part 1A: Build the application and Docker container
@@ -63,11 +64,11 @@ You can clone the lab artifacts and explore the application:

1. Clone the project onto your machine:
   ```console
   $ git clone https://github.com/microservices-api/kubernetes-microprofile-lab.git
   ```
1. Navigate into the sample application directory:
   ```console
   $ cd kubernetes-microprofile-lab/lab-artifacts/application
   ```
1. See if you can find where the technologies described below are used in the application.
@@ -104,114 +105,123 @@ In this lab we demonstrate a best-practice pattern which separates the concerns

The following steps will build the sample application and create a Docker image that includes the vote microservice:

1. Navigate into the sample application directory if you are not already there:
   ```console
   $ cd kubernetes-microprofile-lab/lab-artifacts/application
   ```
1. Build the sample application:
   ```console
   $ mvn clean package
   ```
1. Navigate into the `lab-artifacts` directory:
   ```console
   $ cd ..
   ```
1. Build and tag the Enterprise Docker image:
   ```console
   $ docker build -t microservice-enterprise-web:1.0.0 -f EnterpriseDockerfile .
   ```
1. Build and tag the Application Docker image:
   ```console
   $ docker build -t microservice-vote:1.0.0 -f ApplicationDockerfile .
   ```
1. You can use the Docker CLI to verify that your image is built:
   ```console
   $ docker images
   ```


## Part 1B: Upload the Docker image to OKD's internal registry

OKD provides an internal, integrated container image registry. For this lab, we will use this registry to host our application image.

1. Ensure you are logged in to OKD. You can use the OKD command line interface (CLI) to interact with the cluster. Replace `<username>`, `<password>` and `<okd_ip>` with the appropriate values:
   ```console
   $ oc login --username=<username> --password=<password> https://console.<okd_ip>.nip.io:8443/
   ```
1. Create a new project to host our application:
   ```console
   $ oc new-project myproject
   ```
1. Log in to the internal registry:
   ```console
   $ docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry-default.apps.<okd_ip>.nip.io
   ```
1. Tag your Docker image:
   ```console
   $ docker tag microservice-vote:1.0.0 docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
   ```
1. Now push your tagged image into the registry:
   ```console
   $ docker push docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
   ```
1. Your image is now available in the internal registry in OKD. You can verify this through OKD's Registry Dashboard, available at `https://registry-console-default.apps.<okd_ip>.nip.io/registry`. You can use the same username and password as in the `oc login` command.
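The image reference used in the tag and push steps above follows the `<registry route>/<project>/<image>:<tag>` convention. A sketch composing it from the lab's example values (`<okd_ip>` is left as a placeholder, exactly as in the steps):

```shell
# Compose the internal-registry image reference from its parts.
# REGISTRY uses the lab's default OKD registry route; substitute your <okd_ip>.
REGISTRY="docker-registry-default.apps.<okd_ip>.nip.io"
PROJECT="myproject"
IMAGE="microservice-vote"
TAG="1.0.0"
REF="$REGISTRY/$PROJECT/$IMAGE:$TAG"
echo "$REF"
```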


## Part 2: Deploy Open Liberty operator and CouchDB Helm chart

In this part of the lab you will install an operator and a Helm chart.

### Deploy CouchDB Helm chart

In this section, we will deploy the CouchDB Helm chart. However, as OKD does not come with Tiller, we will first install Tiller on the cluster and set up the Helm CLI to communicate with it.

1. Create a project for Tiller:
   ```console
   $ oc new-project tiller
   ```
1. Download the Helm CLI and install the Helm client locally:

   Linux:
   ```console
   $ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-linux-amd64.tar.gz | tar xz
   $ cd linux-amd64
   ```

   OSX:
   ```console
   $ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-darwin-amd64.tar.gz | tar xz
   $ cd darwin-amd64
   ```

1. Now configure the Helm client locally. **Note:** _This will replace your current Helm CLI. If you want to keep it, back it up now and restore it after you are done with the lab_:
   ```console
   $ sudo mv helm /usr/local/bin
   $ sudo chmod a+x /usr/local/bin/helm
   $ helm init --client-only
   ```
1. Install the Tiller server:
   ```console
   $ oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller" -p HELM_VERSION=v2.9.0 | oc create -f -
   $ oc rollout status deployment tiller
   ```
   The rollout might take a few minutes to complete. You can check the status of the deployment using `oc get deployment`.
1. If things go well, the following command should run successfully and you will see the version of both the client and the server:
   ```console
   $ helm version
   ```
1. Grant the Tiller server `edit` and `admin` access to the current project:
   ```console
   $ oc policy add-role-to-user edit "system:serviceaccount:tiller:tiller"
   $ oc policy add-role-to-user admin "system:serviceaccount:tiller:tiller"
   ```

Now that Helm is configured both locally and on OKD, you can deploy the CouchDB Helm chart.
1. Navigate to `lab-artifacts/helm/database`:
   ```console
   $ cd ../helm/database
   ```
1. Switch to your application project:
   ```console
   $ oc project myproject
   ```
1. Allow the `myproject` namespace to run containers as any UID by changing the namespace's Security Context Constraints (SCC):
   ```console
   $ oc adm policy add-scc-to-user anyuid system:serviceaccount:myproject:default
   ```
1. Deploy the CouchDB Helm chart:
   ```console
   $ helm install couchdb-1.2.0.tgz -f db_values.yaml --name couchdb
   ```
   Ensure the CouchDB pod is up and running by executing the `kubectl get pods` command. Your output will look similar to the following:
   ```console
   NAME                READY     STATUS    RESTARTS   AGE
   couchdb-couchdb-0   2/2       Running   0          3m
   ```
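The readiness check can be scripted by parsing the `READY` column. A sketch against hardcoded sample output (in the lab you would pipe live `kubectl get pods` output instead; the pod name matches the lab's CouchDB release):

```shell
# Sample `kubectl get pods` output; replace with `kubectl get pods` on a live cluster.
SAMPLE='NAME                READY     STATUS    RESTARTS   AGE
couchdb-couchdb-0   2/2       Running   0          3m'

# Extract the READY value for the CouchDB pod and compare it to the expected 2/2.
ready=$(printf '%s\n' "$SAMPLE" | awk '$1 == "couchdb-couchdb-0" {print $2}')
if [ "$ready" = "2/2" ]; then
  echo "couchdb is ready"
else
  echo "still waiting (READY=$ready)"
fi
```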
@@ -223,37 +233,24 @@ Now that the Helm is configured locally and on OKD, you can deploy CouchDB Helm

#### Install Open Liberty artifacts

1. Navigate to the Open Liberty Operator artifact directory:
   ```console
   $ cd ../../operator/open-liberty-operator
   ```
1. Install the Open Liberty Operator artifacts:
   ```console
   $ kubectl apply -f olm/
   $ kubectl apply -f deploy/
   ```

#### Deploy application

1. Deploy the microservice application using the provided CR:
   ```console
   $ cd ../application
   $ kubectl apply -f application-cr.yaml
   ```
1. You can view the status of your deployment by running `kubectl get deployments`. If the deployment is not coming up after a few minutes, one way to debug is to query the pods with `kubectl get pods` and then fetch the logs of the Liberty pod with `kubectl logs <pod>`.
1. We will access the application using the NodePort service. Find your service name with `kubectl get services`, then extract the NodePort port by running `kubectl describe service <myservice> | grep NodePort | awk 'FNR == 2 {print $3;}' | awk -F '/' '{print $1;}'`, and insert that port into your URL using `http`, for example `http://9.8.7.6.nip.io:30698/openapi/ui/`. If the invocations take long, please wait a few minutes for the deployment to fully initialize.
1. Congratulations! You have successfully deployed a [MicroProfile](http://microprofile.io/) container into an OKD cluster using operators!
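The NodePort extraction pipeline from the step above can be tried offline against sample `kubectl describe service` output (the service name and port values here are illustrative, not from the lab cluster):

```shell
# Sample `kubectl describe service <myservice>` output; on a live cluster,
# run the real command instead of printf.
DESCRIBE='Name:                     vote-service
Type:                     NodePort
Port:                     http  9080/TCP
NodePort:                 http  30698/TCP'

# The lab's pipeline: keep NodePort lines, take the second match's third
# field ("30698/TCP"), then strip the "/TCP" suffix.
port=$(printf '%s\n' "$DESCRIBE" \
  | grep NodePort \
  | awk 'FNR == 2 {print $3;}' \
  | awk -F '/' '{print $1;}')
echo "http://9.8.7.6.nip.io:$port/openapi/ui/"
```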

## Part 3: Explore the application
@@ -269,62 +266,22 @@ The `vote` application is using various MicroProfile specifications. The `/open

1. Click on `Execute` and inspect that the `Response body` contains the same name that you created in step 2. You successfully triggered a fetch from our microservice into the CouchDB database.
1. Feel free to explore the other APIs and play around with the microservice!

## Part 4: Update the Liberty Operator release

In this part of the lab you will practice making changes to the Liberty deployment on the cluster using the Open Liberty Operator.

The update scenario: you will increase the number of replicas for the Liberty deployment to 3, which increases the number of Open Liberty pods to 3.

1. In the `lab-artifacts/operator/application/application-cr.yaml` file, change the `replicaCount` value to 3.
1. Navigate to the `lab-artifacts/operator/application` directory:
   ```console
   $ cd lab-artifacts/operator/application
   ```
1. Apply the changes to the cluster:
   ```console
   $ kubectl apply -f application-cr.yaml
   ```
1. You can view the status of your deployment by running `kubectl get deployments`. It might take a few minutes until all the pods are ready.
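The `replicaCount` edit can also be scripted with `sed`. A sketch on a hypothetical CR snippet (the field names and layout here are assumed for illustration; check the actual `application-cr.yaml` before relying on this):

```shell
# Create a throwaway file standing in for application-cr.yaml.
# The apiVersion/kind values below are placeholders, not the lab's real CR.
cr=$(mktemp)
cat > "$cr" <<'EOF'
apiVersion: example.com/v1alpha1
kind: OpenLiberty
spec:
  replicaCount: 1
EOF

# Rewrite the replicaCount line in place (keeps a .bak backup).
sed -i.bak 's/replicaCount:.*/replicaCount: 3/' "$cr"
grep replicaCount "$cr"
```

After editing, `kubectl apply -f` on the file rolls out the change, as in the step above.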

In this part you were introduced to rolling updates. DevOps teams can perform zero-downtime application upgrades, which is an important consideration for production environments.