This repository contains the source code of a simple Openshift Operator to manage JWS images.
This prototype mimics the features provided by the JWS Tomcat8 Basic Template. It allows the automated deployment of Tomcat instances.
The prototype has been written in Golang. It uses dep as dependency manager and the operator-sdk as development framework and project manager. This SDK allows the generation of source code to increase productivity. It is solely used to conveniently write and build an Openshift or Kubernetes operator (the end-user does not need the operator-sdk to deploy a pre-built version of the operator).
The development workflow used in this prototype is standard to all Operator development:
- Build the operator-sdk version we need and add a Custom Resource Definition
$ make setup
$ operator-sdk add api --api-version=web.servers.org/v1alpha1 --kind=JBossWebServer
- Define its attributes (by editing the generated file jbosswebserver_types.go); a sketch of what these attributes might look like follows this list of steps
- Update the generated code. This needs to be done every time CRDs are altered
$ operator-sdk generate k8s
- Define the specifications of the CRD (by editing the generated file deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_crd.yaml) and update the generated code
- Add a Controller for that Custom Resource
$ operator-sdk add controller --api-version=web.servers.org/v1alpha1 --kind=JBossWebServer
- Write the Controller logic and adapt roles to give permissions to necessary resources
- Generate the CRDs and the CSV by doing the following (adjust the version when needed):
$ operator-sdk generate crds
$ operator-sdk generate csv --csv-version 0.1.0
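For illustration, here is a minimal sketch of what the spec attributes in jbosswebserver_types.go might look like, using the field names that appear in the Custom Resource examples later in this document. This is an assumption for illustration only; the real generated file also contains the status struct and extra operator-sdk boilerplate.

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// JBossWebServerSpec is a sketch of the attributes edited in jbosswebserver_types.go.
// Field names follow the CR examples in this README and may not match the real file exactly.
type JBossWebServerSpec struct {
	ApplicationName       string `json:"applicationName"`
	Replicas              int32  `json:"replicas"`
	ApplicationImage      string `json:"applicationImage,omitempty"`
	SourceRepositoryUrl   string `json:"sourceRepositoryUrl,omitempty"`
	SourceRepositoryRef   string `json:"sourceRepositoryRef,omitempty"`
	ContextDir            string `json:"contextDir,omitempty"`
	ImageStreamNamespace  string `json:"imageStreamNamespace,omitempty"`
	ImageStreamName       string `json:"imageStreamName,omitempty"`
	ServerReadinessScript string `json:"serverReadinessScript,omitempty"`
	ServerLivenessScript  string `json:"serverLivenessScript,omitempty"`
	JwsAdminUsername      string `json:"jwsAdminUsername,omitempty"`
	JwsAdminPassword      string `json:"jwsAdminPassword,omitempty"`
}

// JBossWebServer is the Schema for the jbosswebservers API.
type JBossWebServer struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec JBossWebServerSpec `json:"spec,omitempty"`
}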
To build the operator, you will first need to install both of these tools: dep and the operator-sdk.
Now that the tools are installed, follow these few steps to build it:
- clone the repo in $GOPATH/src/github.com/web-servers
- Start by building the project dependencies using
dep ensure
from the root directory of this project.
- Then, simply run
operator-sdk build <imagetag>
to build the operator.
You will need to push it to a Docker Registry accessible by your Openshift Server in order to deploy it. I used docker.io:
$ mkdir -p $GOPATH/src/github.com/web-servers
$ cd $GOPATH/src/github.com/web-servers
$ git clone https://github.com/web-servers/jws-image-operator.git
$ export IMAGE=docker.io/${USER}/jws-image-operator:v0.0.1
$ cd jws-image-operator
$ docker login docker.io
$ make push
Note the Makefile uses go mod tidy, go mod vendor, then go build to build the executable, and docker to build and push the image.
Download the tar.gz file, import it into docker, and then push it to your docker repo, something like:
$ wget http://download.eng.bos.redhat.com/brewroot/packages/jboss-webserver-5-webserver54-openjdk8-tomcat9-rhel8-operator-container/1.0/2/images/docker-image-sha256:a0eba0294e43b6316860bafe9250b377e6afb4ab1dae79681713fa357556f801.x86_64.tar.gz
$ docker load -i docker-image-sha256:3c424d48db2ed757c320716dc5c4c487dba8d11ea7a04df0e63d586c4a0cf760.x86_64.tar.gz
Loaded image: pprokopi/jboss-webserver-openjdk8-operator:jws-5.4-rhel-8-containers-candidate-96397-20200820162758-x86_64
The load command returns the tag of the image from the build (the internal build tag), shown above as the Loaded image and referred to below as ${TAG}. Use it to retag the image and push it:
$ export IMAGE=docker.io/${USER}/jws-image-operator:v0.0.1
$ docker tag ${TAG} ${IMAGE}
$ docker login docker.io
$ docker push $IMAGE
The operator is pre-built and containerized in a docker image. By default, the deployment has been configured to utilize that image. Therefore, deploying the operator can be done by following these simple steps:
- Define a namespace
$ export NAMESPACE="jws-operator"
- Log in to your Openshift server using
oc login
and use it to create a new project
$ oc new-project $NAMESPACE
- Install the JWS Tomcat Basic Image Stream in the openshift project. For testing purposes, this repository provides a version of the corresponding script (xpaas-streams/jws53-tomcat9-image-stream.json) using the unsecured Red Hat Registry (registry.access.redhat.com). Please make sure to use the latest version with a secured registry for production use.
$ oc create -f xpaas-streams/jws53-tomcat9-image-stream.json -n openshift
As the image stream isn't namespace-specific, creating this resource in the openshift project makes it convenient to reuse it across multiple namespaces. The following resources are more specific and will need to be created in every namespace. If you don't use -n openshift or use another ImageStream name, you will have to adjust imageStreamNamespace: to $NAMESPACE and imageStreamName: to the correct value in the Custom Resource file deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_cr.yaml.
- Create the necessary resources
$ oc create -f deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_crd.yaml -n $NAMESPACE
$ oc create -f deploy/service_account.yaml -n $NAMESPACE
$ oc create -f deploy/role.yaml -n $NAMESPACE
$ oc create -f deploy/role_binding.yaml -n $NAMESPACE
- Deploy the operator using the template (IMAGE is something like docker.io/${USER}/jws-image-operator:v0.0.1)
$ oc process -f deploy/openshift_operator.template IMAGE=${IMAGE} | oc create -f -
- Create a Tomcat instance (Custom Resource). An example has been provided in deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_cr.yaml; make sure you adjust sourceRepositoryUrl, sourceRepositoryRef (branch) and contextDir (subdirectory) to your webapp sources, branch and context, like:
sourceRepositoryUrl: https://github.com/jfclere/demo-webapp.git
sourceRepositoryRef: "master"
contextDir: /
imageStreamNamespace: openshift
imageStreamName: jboss-webserver54-openjdk8-tomcat9-ubi8-openshift:latest
Then deploy your webapp.
$ oc apply -f deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_cr.yaml
- If the DNS is not set up in your Openshift installation, you will need to add the resulting route to your local /etc/hosts file in order to resolve the URL. It has to point to the IP address of the node running the router. You can determine this address by running oc get endpoints with a cluster-admin user.
- Finally, to access the newly deployed application, simply use the created route with /demo-1.0/demo
oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jws-app jws-app-jws-operator.apps.jclere.rhmw-runtimes.net jws-app <all> None
Then go to http://jws-app-jws-operator.apps.jclere.rhmw-runtimes.net/demo-1.0/demo using a browser.
- To remove everything
oc delete jbosswebserver.web.servers.org/example-jbosswebserver
oc delete deployment.apps/jws-image-operator
Note that the first oc delete deletes what the operator created for the example-jbosswebserver application; the second oc delete deletes the operator and all the resources it needs to run. The ImageStream can be deleted manually if needed.
- What is supported?
- Changing the number of running replicas for the application: in your Custom Resource, change replicas: 2 to the value you want.
- Install the operator as described before.
Note that Kubernetes doesn't have templates, so you have to adjust deploy/kubernetes_operator.template so that
image: @OP_IMAGE_TAG@
is set to the right value, and then use kubectl apply -f deploy/kubernetes_operator.template to deploy the operator.
- Prepare your image and push it somewhere. See https://github.com/jfclere/tomcat-openshift or https://github.com/apache/tomcat/tree/master/modules/stuffed to build the images.
- Create a Tomcat instance (Custom Resource). An example has been provided in deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_cr.yaml:
applicationName: jws-app
applicationImage: docker.io/jfclere/tomcat-demo
#sourceRepositoryUrl: https://github.com/jboss-openshift/openshift-quickstarts.git
#sourceRepositoryRef: "1.2"
#contextDir: tomcat-websocket-chat
#imageStreamNamespace: openshift
#imageStreamName: jboss-webserver53-tomcat9-openshift:latest
Make sure imageStreamName is commented out, otherwise the operator will try to build from the sources.
- Then deploy your webapp.
$ oc apply -f deploy/crds/jwsservers.web.servers.org_v1alpha1_jbosswebserver_cr.yaml
- If you are on OpenShift, the operator will create the route for you and you can use it:
oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jws-app jws-app-jws-operator.apps.jclere.rhmw-runtimes.net jws-app <all> None
Then go to http://jws-app-jws-operator.apps.jclere.rhmw-runtimes.net/demo-1.0/demo using a browser.
- On Kubernetes you have to create a load balancer to expose the service, and later something depending on your cloud to expose the application:
kubectl expose deployment jws-app --type=LoadBalancer --name=jws-balancer
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jws-balancer LoadBalancer 10.100.57.140 <pending> 8080:32567/TCP 4m6s
The service jws-balancer then can be used to expose the application.
serverReadinessScript and serverLivenessScript allow you to use a custom liveness or readiness probe. We support 2 formats:
serverLivenessScript: cmd arg1 arg2 ...
serverLivenessScript: shell shellarg1 shellargv2 ... "cmd line for the shell"
Don't forget '' if you need to escape something in the cmd line. Don't use ' ' in the args: the single space is the separator we support.
In case you don't use the HealthCheckValve, you have to configure at least a serverReadinessScript.
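As an illustration of why the single space acts as the separator, a plausible reading (an assumption about the implementation, not something this README documents) is that the script string is split on spaces and wired into a Kubernetes exec probe, roughly like this:

package jbosswebserver // hypothetical package name, for illustration only

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// buildExecProbe is a hypothetical helper: it splits a serverReadinessScript or
// serverLivenessScript value on single spaces and turns it into an exec probe.
// It assumes a k8s.io/api version where Probe still embeds Handler.
func buildExecProbe(script string) *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: strings.Split(script, " "),
			},
		},
	}
}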
For example if you are using the JWS 5.3 images you need the following:
# For pre JWS-5.4 image you need to set username/password and use the following health check.
jwsAdminUsername: tomcat
jwsAdminPassword: tomcat
serverReadinessScript: /bin/bash -c "/usr/bin/curl --noproxy '*' -s -u ${JWS_ADMIN_USERNAME}:${JWS_ADMIN_PASSWORD} 'http://localhost:8080/manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName' | /usr/bin/grep -iq 'stateName *= *STARTED'"
The 5.3 images use the manager webapp and JMX to figure out whether the server is started.
For example, if you are using an openjdk:8-jre-alpine based image and /test is your health URL:
serverReadinessScript: /bin/busybox wget http://localhost:8080/test -O /dev/null
Note that HealthCheckValve requires Tomcat 9.0.38+ or 10.0.0-M8 to work as expected; it was introduced in 9.0.15.
Below are some features that may be relevant to add in the near future.
Handling Configuration Changes
The current Reconciliation loop (Controller logic) is very simple. It creates the necessary resources if they don't exist. Handling configuration changes of our Custom Resource and its Pods must be done to achieve stability.
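As a rough sketch of that create-if-missing pattern (assuming the controller-runtime client generated by the operator-sdk; the real controller handles several resource kinds such as DeploymentConfigs, Services and Routes, so this is illustrative only):

package jbosswebserver // hypothetical package name, for illustration only

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// createIfMissing shows the pattern the current loop applies to each owned resource:
// fetch it, create it when it is not found, and leave existing resources untouched.
func createIfMissing(c client.Client, service *corev1.Service) (reconcile.Result, error) {
	found := &corev1.Service{}
	err := c.Get(context.TODO(), types.NamespacedName{Name: service.Name, Namespace: service.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		// The resource does not exist yet: create it and requeue.
		if err := c.Create(context.TODO(), service); err != nil {
			return reconcile.Result{}, err
		}
		return reconcile.Result{Requeue: true}, nil
	} else if err != nil {
		return reconcile.Result{}, err
	}
	// The resource already exists: configuration changes are not yet reconciled.
	return reconcile.Result{}, nil
}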
Adding Support for Custom Configurations
The JWS Image Templates provide custom configurations using databases such as MySQL, PostgreSQL, and MongoDB. We could add support for these configurations defining a custom resource for each of these platforms and managing them in the Reconciliation loop.
Handling Image Updates
This may be tricky depending on how we decide to handle Tomcat updates. We may need to implement data migration along with backups to ensure the reliability of the process.
Adding Full Support for Kubernetes Clusters
This Operator prototype is currently using some Openshift specific resources such as DeploymentConfigs, Routes, and ImageStreams. In order to build from sources on Kubernetes Clusters, equivalent resources available on Kubernetes have to be implemented.