@@ -13,7 +13,7 @@ For questions/comments about Open Liberty Docker container or Open Liberty Opera
You'll need a few different artifacts for this lab. Check if you have these installed by running:

- ``` bash
+ ``` console
git --help
mvn --help
java -help
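The prerequisite check above can also be scripted. Below is a minimal sketch of a helper that reports which tools are on the `PATH` (the tool list simply mirrors the commands shown above):

```shell
# Report whether each required tool is available on the PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: missing"
    fi
  done
}

check_tools git mvn java
```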
@@ -63,11 +63,11 @@ You can clone the lab artifacts and explore the application:
1. Clone the project onto your machine:
``` console
- git clone https://github.com/microservices-api/kubernetes-microprofile-lab.git
+ $ git clone https://github.com/microservices-api/kubernetes-microprofile-lab.git
```
1. Navigate into the sample application directory:
```console
- cd kubernetes-microprofile-lab/lab-artifacts/application
+ $ cd kubernetes-microprofile-lab/lab-artifacts/application
```
1. See if you can find where the technologies described below are used in the application.
@@ -104,27 +104,27 @@ In this lab we demonstrate a best-practice pattern which separates the concerns
The following steps will build the sample application and create a Docker image that includes the vote microservice:

1. Navigate into the sample application directory if you are not already there:
- ```bash
+ ```console
$ cd kubernetes-microprofile-lab/lab-artifacts/application
```
1. Build the sample application:
- ```bash
+ ```console
$ mvn clean package
```
1. Navigate into the `lab-artifacts` directory:
- ```bash
+ ```console
$ cd ..
```
1. Build and tag the Enterprise Docker image:
- ```bash
+ ```console
$ docker build -t microservice-enterprise-web:1.0.0 -f EnterpriseDockerfile .
```
1. Build and tag the Application Docker image:
- ```bash
+ ```console
$ docker build -t microservice-vote:1.0.0 -f ApplicationDockerfile .
```
1. You can use the Docker CLI to verify that your image is built:
- ```bash
+ ```console
$ docker images
```
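If you prefer to script the verification, a small helper can confirm that an `image:tag` pair appears in `docker images`-style output. This is a sketch: the `printf` line stands in for real `docker images` output, which lists the repository in column 1 and the tag in column 2.

```shell
# Exit 0 if the given image:tag appears in `docker images`-style output.
has_image() {
  awk -v want="$1" '{ if ($1 ":" $2 == want) found = 1 } END { exit !found }'
}

# On a real machine: docker images | has_image microservice-vote:1.0.0
printf 'microservice-vote 1.0.0\n' | has_image microservice-vote:1.0.0 \
  && echo "image present"
```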
@@ -133,23 +133,23 @@ The following steps will build the sample application and create a Docker image
OKD provides an internal, integrated container image registry. For this lab, we will use this registry to host our application image.

1. Ensure you are logged in to OKD. You can use the OKD command-line interface (CLI) to interact with the cluster. Replace `<username>`, `<password>` and `<okd_ip>` with appropriate values:
- ```bash
+ ```console
$ oc login --username=<username> --password=<password>
```
1. Create a new project to host our application:
- ```bash
+ ```console
$ oc new-project myproject
```
1. Log into the internal registry:
- ```bash
+ ```console
$ oc registry login --skip-check
```
1. Tag your Docker image:
- ```bash
+ ```console
$ docker tag microservice-vote:1.0.0 docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
```
1. Now push your tagged image into the registry:
- ```bash
+ ```console
$ docker push docker-registry-default.apps.<okd_ip>.nip.io/myproject/microservice-vote:1.0.0
```
1. Your image is now available in the internal registry in OKD. You can verify this through OKD's Registry Dashboard, available at `https://registry-console-default.apps.<okd_ip>.nip.io/registry`. You can use the same username and password as the ones used in the `oc login` command. You should see
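The registry path used in the `docker tag` and `docker push` commands follows the pattern `<registry>/<project>/<image>:<tag>`; composing it from variables makes the pieces explicit. This is a sketch with a made-up cluster IP — substitute your own `<okd_ip>` and project name.

```shell
# Illustrative placeholder values; replace with your own cluster details.
OKD_IP="192.168.99.100"
PROJECT="myproject"
IMAGE="microservice-vote:1.0.0"

# The internal registry hostname is derived from the cluster IP via nip.io.
REGISTRY="docker-registry-default.apps.${OKD_IP}.nip.io"
TARGET="${REGISTRY}/${PROJECT}/${IMAGE}"
echo "${TARGET}"

# Then: docker tag "${IMAGE}" "${TARGET}" && docker push "${TARGET}"
```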
@@ -163,57 +163,57 @@ In this part of the lab you will install an operator and a Helm chart.
In this section, we will deploy the CouchDB Helm chart. However, as OKD does not come with Tiller, we will install Tiller on the cluster and set up the Helm CLI to communicate with it.

1. Create a project for Tiller:
- ```bash
+ ```console
$ oc new-project tiller
```
1. Download the Helm CLI and install the Helm client locally:

Linux:
- ```bash
+ ```console
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-linux-amd64.tar.gz | tar xz
$ cd linux-amd64
```

OSX:
- ```bash
+ ```console
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-darwin-amd64.tar.gz | tar xz
$ cd darwin-amd64
```

1. Now configure the Helm client locally:
- ```bash
+ ```console
$ sudo mv helm /usr/local/bin
$ sudo chmod a+x /usr/local/bin/helm
$ helm init --client-only
```
1. Install the Tiller server:
- ```bash
+ ```console
$ oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="tiller" -p HELM_VERSION=v2.9.0 | oc create -f -
$ oc rollout status deployment tiller
```
The rollout process might take a few minutes to complete.
1. If things went well, the following command should run successfully:
- ```bash
+ ```console
$ helm version
```
1. Grant the Tiller server `edit` and `admin` access to the current project:
- ```bash
+ ```console
$ oc policy add-role-to-user edit "system:serviceaccount:tiller:tiller"
$ oc policy add-role-to-user admin "system:serviceaccount:tiller:tiller"
```

Now that Helm is configured both locally and on OKD, you can deploy the CouchDB Helm chart.

1. Navigate to `lab-artifacts/helm/database`:
- ```bash
+ ```console
$ cd ../helm/database
```
1. Deploy the CouchDB Helm chart:
- ```bash
+ ```console
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
$ helm install incubator/couchdb -f db_values.yaml --name couchdb
```
Ensure the CouchDB pod is up and running by executing the `kubectl get pods` command. Your output will look similar to the following:
- ```bash
+ ```console
NAME                READY     STATUS    RESTARTS   AGE
couchdb-couchdb-0   2/2       Running   0          3m
```
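The `READY` column reports ready containers versus desired containers, so the check above can be scripted by splitting that column. Below is a sketch that parses the sample output shown above; on a real cluster, pipe `kubectl get pods` into the same `awk` program instead.

```shell
# Sample `kubectl get pods` output; replace with the real command's output.
sample='NAME                READY     STATUS    RESTARTS   AGE
couchdb-couchdb-0   2/2       Running   0          3m'

# Print each couchdb pod and whether all of its containers are ready.
echo "$sample" | awk 'NR > 1 && $1 ~ /^couchdb/ {
  split($2, r, "/")
  print $1, (r[1] == r[2] ? "ready" : "not ready")
}'
```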
@@ -225,30 +225,30 @@ Now that Helm is configured both locally and on OKD, you can deploy CouchDB Helm
#### Install Open Liberty artifacts

1. Navigate to the Open Liberty Operator artifact directory:
- ```bash
+ ```console
$ cd lab-artifacts/operator/open-liberty-operator
```
1. Install the Open Liberty Operator artifacts:
- ```bash
+ ```console
$ kubectl apply -f olm/open-liberty-crd.yaml
$ kubectl apply -f deploy/service_account.yaml
$ kubectl apply -f deploy/role.yaml
$ kubectl apply -f deploy/role_binding.yaml
$ kubectl apply -f deploy/operator.yaml
```
1. Create a custom Security Context Constraint (SCC). An SCC controls the actions that a pod can perform and the resources it can access:
- ```bash
+ ```console
$ kubectl apply -f deploy/ibm-open-liberty-scc.yaml --validate=false
```
1. Grant the default namespace's service account access to the newly created SCC, `ibm-open-liberty-scc`:
- ```bash
+ ```console
$ oc adm policy add-scc-to-group ibm-open-liberty-scc system:serviceaccounts:myproject
```

#### Deploy application

1. Deploy the microservice application using the provided CR:
- ```bash
+ ```console
$ cd ../application
$ kubectl apply -f application-cr.yaml
```
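The operator manifests above are applied in dependency order: the CRD first, then the service account and RBAC objects the operator pod needs, then the operator itself. The sequence can be captured in a loop — a sketch that only prints the commands (drop the `echo` to run them for real):

```shell
# Order matters: the CRD and RBAC objects must exist before the operator
# deployment starts. Paths assume the lab's operator directory layout.
manifests="olm/open-liberty-crd.yaml
deploy/service_account.yaml
deploy/role.yaml
deploy/role_binding.yaml
deploy/operator.yaml"

for f in $manifests; do
  echo "kubectl apply -f $f"
done
```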
@@ -277,11 +277,11 @@ The update scenario is that you will increase the number of replicas for the Lib
1. In the `lab-artifacts/operator/application/application-cr.yaml` file, change the `replicaCount` value to 3.
1. Navigate to the `lab-artifacts/operator/application` directory:
- ```bash
+ ```console
$ cd lab-artifacts/operator/application
```
1. Apply the changes to the cluster:
- ```bash
+ ```console
$ kubectl apply -f application-cr.yaml
```
1. You can view the status of your deployment by running `kubectl get deployments`. It might take a few minutes until all the pods are ready.
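To script the readiness check, compare the two halves of the `READY` column in `kubectl get deployments` output. This is a sketch; the `printf` line stands in for real cluster output, and the deployment name is illustrative.

```shell
# Exit 0 only if every deployment reports matching READY counts (e.g. "3/3").
all_ready() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) bad = 1 } END { exit bad }'
}

# On a real cluster: kubectl get deployments | all_ready
printf 'NAME READY UP-TO-DATE AVAILABLE AGE\nvote 3/3 3 3 5m\n' | all_ready \
  && echo "all replicas ready"
```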