
# Overview:

Project 20: port the CI pipeline with Jenkins and SonarQube, but use Docker Hub as the registry. Containerize the vprofile application in a Jenkins pipeline: build, unit test, integration test, SonarQube analysis, Quality Gate (QG), then Docker build and push to Docker Hub. A kops EC2 instance acts as a Jenkins slave node to deploy the full stack, including the vprofile application image pulled from Docker Hub, to the kops Kubernetes cluster. The full stack is deployed with Helm: RabbitMQ, the database, Memcached, and the application itself running on a Tomcat pod.

Most of the Jenkinsfile stages are executed on the Jenkins server. Since the Docker image needs to be built from source code, the Docker engine has to be installed on the Jenkins server.

The last stage runs on the kops-project14-EC2 instance, which has kubectl, kops, and helm installed.
This EC2 instance provisions the Helm template files onto the kops cluster as noted above.

The kops-project14-EC2 instance has the AWS CLI configured, and the AWS configuration uses an Admin AWS user.

This uses the AWS2 account.

Provision this in us-east-1.





# Basic staging flow for bringing up the CI/CD setup with kops, Jenkins, SonarQube, and Helm, with a Docker container for the application (full stack implemented with backend server containers):

## Bring up the kops cluster from the kops-project14-EC2 instance (jenkins agent/slave node)
Name: kops-project14.**************.com
S3:  vprofile-kops-s3-state-project14

Create config cluster: 
kops create cluster --name=kops-project14.******************.com \
--state=s3://vprofile-kops-s3-state-project14 --zones=us-east-1a,us-east-1b \
--node-count=2 --node-size=t3.small --master-size=t3.medium --dns-zone=kops-project14.************.com \
--node-volume-size=8 --master-volume-size=8 --ssh-public-key=~/.ssh/kops-key-project14.pub

Deploy cluster:
kops update cluster --name=kops-project14.*****************.com --state=s3://vprofile-kops-s3-state-project14 --yes --admin



$ kops validate cluster --state=s3://vprofile-kops-s3-state-project14

Using cluster from kubectl context: kops-project14.******************.com

Validating cluster kops-project14.**********************.com

INSTANCE GROUPS
NAME                            ROLE            MACHINETYPE     MIN     MAX     SUBNETS
control-plane-us-east-1a        ControlPlane    t3.medium       1       1       us-east-1a
nodes-us-east-1a                Node            t3.small        1       1       us-east-1a
nodes-us-east-1b                Node            t3.small        1       1       us-east-1b

NODE STATUS
NAME                    ROLE            READY
i-06900f5ecacbc4430     node            True
i-0c2e6341778830d30     control-plane   True
i-0e4c27dff6605359a     node            True

ssh -i ~/.ssh/kops-key-project14 ubuntu@api.kops-project14.***************.com

Delete cluster:
kops delete cluster --name=kops-project14.*********************.com --state=s3://vprofile-kops-s3-state-project14 --yes




## Test the helm chart deployment with namespace test

kubectl create namespace test

From the local git repository on the kops-project14-EC2 instance (do a git pull from the remote GitHub repo for the latest source code), run the following command to deploy the Helm template files for the DB, Memcached, RabbitMQ, and the vprofile app on Tomcat.

The Docker image is a test image for now. For production this will be built and uploaded to my own Docker Hub repository.

In vproappdep.yml, use the dynamic image reference image: {{ .Values.appimage }} so that Helm can pick up the latest Docker image from the repository.

In production, appimage will be defined as my latest built Docker image in my repository (dynamic). For now we are using a test image.

$ helm install --namespace test vprofile-stack helm/vprofilecharts --set appimage=imranvisualpath/vproappdock:9

$ kubectl get secret -n test
NAME                                   TYPE                 DATA   AGE
app-secret                             Opaque               2      18m
sh.helm.release.v1.vprofile-stack.v1   helm.sh/release.v1   1      18m


helm delete vprofile-stack --namespace test




## Full stack for the application is deployed below on the kops cluster

helm list --namespace prod


$ kubectl get all --namespace test
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/vproapp-7595664b84-plwmr                         1/1     Running   0          41s
pod/vprodb-849f57645b-tmh24                          1/1     Running   0          41s
pod/vprofile-stack-vprofilecharts-5f48d5d59c-g5kht   1/1     Running   0          41s
pod/vpromc-5c65464866-scmsr                          1/1     Running   0          41s
pod/vpromq01-69b8ff77dc-tnfpk                        1/1     Running   0          41s

NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/vproapp-service                 LoadBalancer   100.70.117.109   afbba37dba31240e8b9827de54c23f43-1689472808.us-east-1.elb.amazonaws.com   80:31939/TCP   41s
service/vprocache01                     ClusterIP      100.66.83.132    <none>                                                                    11211/TCP      41s
service/vprodb                          ClusterIP      100.70.28.103    <none>                                                                    3306/TCP       41s
service/vprofile-stack-vprofilecharts   ClusterIP      100.71.161.13    <none>                                                                    80/TCP         41s
service/vpromq01                        ClusterIP      100.71.99.201    <none>                                                                    5672/TCP       41s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vproapp                         1/1     1            1           41s
deployment.apps/vprodb                          1/1     1            1           41s
deployment.apps/vprofile-stack-vprofilecharts   1/1     1            1           41s
deployment.apps/vpromc                          1/1     1            1           41s
deployment.apps/vpromq01                        1/1     1            1           41s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/vproapp-7595664b84                         1         1         1       41s
replicaset.apps/vprodb-849f57645b                          1         1         1       41s
replicaset.apps/vprofile-stack-vprofilecharts-5f48d5d59c   1         1         1       41s
replicaset.apps/vpromc-5c65464866                          1         1         1       41s
replicaset.apps/vpromq01-69b8ff77dc                        1         1         1       41s

## remove the stack and proceed with production setup below
helm delete vprofile-stack --namespace test


## Other commands

kubectl get pods --namespace prod

To verify the vprofile app version do:
kubectl describe pod vproapp-7595664b84-plwmr --namespace prod

To get the loadbalancer frontend URL:
kubectl get service --namespace prod








# Production: 

## Configure the kops-project14-EC2 instance as a slave node to the Jenkins server. Use SSH as the connection protocol.  
- Install OpenJDK 11 on the kops node.
- Create an /opt/jenkins-slave directory. This will hold the agent and workspace from Jenkins.
- The ubuntu user needs to own this directory; Jenkins will use the ubuntu user to run the stage.
- The kops-project14-EC2 security group needs to allow inbound SSH access from Jenkins for the ubuntu user.
- Under Manage Jenkins ---> Nodes, add the node along with the SSH private key for the EC2 instance.
- The label and name are "KOPS" and must be used in the stage in the Jenkinsfile. See below.
- Use this agent only for the stage that explicitly specifies it as its agent (see the sketch after this list).
- Disable SSH host key verification for now.
- Test the connection from Jenkins: launch the agent and it should come up.
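A minimal sketch of how the agent assignment can look in a declarative Jenkinsfile, assuming the Jenkins server itself runs the CI stages; the build stage and its shell commands are placeholders, only the KOPS label and the stage split are taken from this project:

pipeline {
    agent any   // CI stages run on the Jenkins server (built-in node)

    stages {
        stage('BUILD') {
            steps {
                sh 'mvn clean install -DskipTests'   // placeholder CI stage
            }
        }
        // ... remaining CI stages also run on the Jenkins server ...
        stage('Kubernetes Deploy') {
            agent { label 'KOPS' }   // only this stage runs on the kops-project14-EC2 slave node
            steps {
                sh 'kubectl get nodes'   // placeholder; the real stage runs the helm command shown later
            }
        }
    }
}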





## Prepare the Jenkinsfile. Stages for CI are BUILD, UNIT TEST, INTEGRATION TEST, CODE ANALYSIS WITH CHECKSTYLE, CODE ANALYSIS with SONARQUBE, and, if the Quality Gate (QG) passes: DOCKER BUILD for the application artifact (the .war packaged into a Docker image) and deploy/upload to Docker Hub. CD then deploys to the kops cluster with Helm from the slave node kops-project14-EC2 Jenkins node, similar to the example above but with the real Docker image artifact build. At this point Selenium can be used to test the frontend and backend in a browser (Selenium is not implemented yet). Messaging is to Slack. Any failed build will cause the process to start again, with developers fixing and re-checking in the code from the start of the CI pipeline (re-triggering the pipeline).

Jenkins and the SonarQube server need access to each other in their security groups.
Jenkins needs access to the slave node kops-project14-EC2 instance security group.
The registry will be defined in the Jenkinsfile (my Docker Hub account and the name of the image).
registryCredential is stored as a secret on Jenkins and holds the password to my Docker Hub account.
appimage will be set by the helm command based upon the Docker image created in earlier stages (registry and BUILD_NUMBER serve as the tag on the image for the latest build that has passed testing).
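A minimal sketch of the Docker build and upload stages, assuming the Docker Pipeline plugin is installed on the Jenkins server; the registry value and credentials ID below are placeholders for my Docker Hub account/image name and the Jenkins secret described above:

environment {
    registry = "mydockerhubaccount/vproappdock"   // placeholder: Docker Hub account/image name
    registryCredential = "dockerhub"              // placeholder: Jenkins credentials ID holding the Docker Hub password
}

// ... earlier CI stages ...

stage('BUILD APP IMAGE') {
    steps {
        script {
            // Builds the image from the Dockerfile in the workspace, tagged with the Jenkins build number
            dockerImage = docker.build("${registry}:${BUILD_NUMBER}")
        }
    }
}

stage('UPLOAD IMAGE') {
    steps {
        script {
            // Pushes the image to Docker Hub using the registryCredential secret
            docker.withRegistry('', registryCredential) {
                dockerImage.push("${BUILD_NUMBER}")
                dockerImage.push('latest')
            }
        }
    }
}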

## Create a prod namespace: kubectl create namespace prod

## Create the pipeline on Jenkins with a trigger from this repository, which is on GitHub.

## As noted above, messaging is via a Slack channel. Any failure will cause a notification, and the pipeline will have to be restarted with another check-in from development to fix the code.
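A minimal sketch of the Slack notification, assuming the Jenkins Slack Notification plugin is configured with the Slack workspace and a default token; the channel name is a placeholder:

post {
    always {
        echo 'Sending Slack notification.'
        slackSend channel: '#jenkinscicd',   // placeholder channel name
            color: currentBuild.currentResult == 'SUCCESS' ? 'good' : 'danger',
            message: "*${currentBuild.currentResult}:* Job ${env.JOB_NAME} build ${env.BUILD_NUMBER} \n More info at: ${env.BUILD_URL}"
    }
}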

## The last stage of the Jenkinsfile is the helm command below, executed from the root of the repository that contains the src and helm directories. KOPS is the name and label configured on the Jenkins server for the kops-project14-EC2 slave node that executes the stage below.

stage('Kubernetes Deploy') {
    agent { label 'KOPS' }

    steps {
        sh "helm upgrade --install --force vprofile-stack helm/vprofilecharts --set appimage=${registry}:${BUILD_NUMBER} --namespace prod"
    }
}

## To automate things further, I added the kubectl create namespace prod command to the Jenkinsfile itself.
The code below ensures that the namespace is present prior to running the helm command above, regardless of whether the namespace already exists (from a previous run) or not (first run).
The try-catch block adds the namespace if it is not present (no error thrown); if the namespace is already there, the error message prints in the Jenkins logs but the script does not abort. This is in lieu of using kubectl create namespace prod || exit 0, so that the exact error can be seen in the logs.



script {
    try {
        sh "kubectl create namespace prod"
    }
    catch (e) {
        echo "An error occurred: ${e}"
    }
}

sh "helm upgrade --install --force vprofile-stack helm/vprofilecharts --set appimage=${registry}:${BUILD_NUMBER} --namespace prod"




## For SonarQube, I created a new project on the SonarQube server. The projectKey and projectName are the only changes in the Jenkinsfile script. sonarserver and sonarscanner are the same in the Jenkins server configuration, as is the jenkinstosonar token. NOTE: for QG (Quality Gate) status, create a webhook on the SonarQube server for sonar-to-Jenkins communication so that the SonarQube server can update Jenkins with QG results.


stage('CODE ANALYSIS with SONARQUBE') {
    environment {
        scannerHome = tool 'sonarscanner'
    }

    steps {
        withSonarQubeEnv('sonarserver') {
            sh '''${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=sonar-project20 \
                -Dsonar.projectName=vprofile-repo-project20 \
                -Dsonar.projectVersion=1.0 \
                -Dsonar.sources=src/ \
                -Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/ \
                -Dsonar.junit.reportsPath=target/surefire-reports/ \
                -Dsonar.jacoco.reportsPath=target/jacoco.exec \
                -Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml'''
        }
    }
}
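
A minimal sketch of the Quality Gate stage that consumes that webhook, assuming the standard waitForQualityGate step from the SonarQube Scanner plugin for Jenkins; the timeout value is illustrative:

stage('QUALITY GATE') {
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            // Waits for the QG result that the SonarQube webhook posts back to Jenkins
            // and aborts the pipeline if the Quality Gate is not passed.
            waitForQualityGate abortPipeline: true
        }
    }
}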








# For code administration and source control in general, edit the code in VSCode locally on the Mac, push to the remote repository, and then pull the code to the slave node kops-project14-EC2 so that the last stage can be executed on the node.

## NOTE: the kops-project14-EC2 node actually fetches the code each time the pipeline runs and puts it in the /opt/jenkins-slave/workspace directory. So the pull into the /home/ubuntu directory is not necessary.
To fetch the code, SSH host key verification in the node configuration in Jenkins should be set to none. I also found that the global Jenkins security setting had to be set to SSH host key verification: none for this pipeline to work; otherwise there were failures in getting the source code from the GitHub repo.




# Gitlab to Jenkins project20 extension:

On the VPS DevOps ecosystem, configure the GitLab relay to Jenkins to run this project.

This involves configuring either a webhook or the Jenkins integration on the dummy GitLab project.

On Jenkins, a token needs to be generated for the project/pipeline, and this token needs to be put in the GitLab webhook configuration above on the GitLab dummy project for GitLab-to-Jenkins communication. Test the GitLab webhook from GitLab.

In the Jenkins project, select Pipeline script from SCM and configure the SCM to point to the GitLab repository/Docker container instance on the VPS. Note that the Jenkins server has to be added to the VPS Traefik whitelist so that Jenkins traffic can be accepted into the VPS and routed to the GitLab Docker container on the VPS.
The SCM configuration requires special handling with SSH to the GitLab repo via git, because the VPS uses a nonstandard SSH port.
The git URL syntax is the following: ssh://git@gitlab.linode.cloudnetworktesting.com:*****/dmastrop/gitlab_to_jenkins_vprofile_project20c.git

Configure the SSH private VPS key as a credential on Jenkins so that it can be used in the SCM configuration above.

Use */main as the branch specifier for now, and the Jenkinsfile.gitlabtojenkins jenkinsfile in the repository for the pipeline code for this GitLab-to-Jenkins setup. This jenkinsfile differs a bit from the Jenkinsfile used with the direct GitHub-to-Jenkins project20: there is some extra code to relay the Jenkins pipeline result back to the GitLab dummy pipeline on the GitLab web console.
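
A minimal sketch of how that relay can look with the Jenkins GitLab plugin; the connection name, status name, and build stage are illustrative, only the overall shape (gitLabConnection option plus post-build status updates) is the point:

pipeline {
    agent any

    options {
        gitLabConnection('gitlab-vps')   // placeholder: the Jenkins-to-GitLab connection name from Manage Jenkins
    }

    stages {
        stage('BUILD') {
            steps {
                sh 'mvn clean install -DskipTests'   // placeholder; same CI/CD stages as the main Jenkinsfile
            }
        }
    }

    post {
        success {
            updateGitlabCommitStatus name: 'jenkins-build', state: 'success'   // shows up on the GitLab pipeline view
        }
        failure {
            updateGitlabCommitStatus name: 'jenkins-build', state: 'failed'
        }
    }
}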

Configure the pipeline to trigger on any push, etc., to GitLab.

In Manage Jenkins, under the GitLab section, configure the Jenkins-to-GitLab connection used in the above Jenkins pipeline. This uses a token generated on GitLab with the Maintainer role and api scope. The token is used to create a credential on Jenkins, and that credential is used for the Jenkins-to-GitLab connection. Test the connection to ensure that it works.
The "Enable authentication for '/project' end-point" option can be checked off because the webhook on GitLab has the token that was generated above on Jenkins.


So basically, tokens are used in both directions: one for the GitLab-to-Jenkins webhook and one for the Jenkins-to-GitLab connection. NOTE: the Jenkins-to-GitLab connection is not used to pull the code; the SCM configuration above on Jenkins pulls the code via git over SSH as noted above.


For the project 18 GitLab-to-Jenkins extension (see the other repo), try using the GitLab integration instead of a GitLab webhook for the GitLab-to-Jenkins communication. The rest of the configuration should be the same as that used here.

The project20 Jenkinsfile and Jenkinsfile.gitlabtojenkins scripts were significantly updated to incorporate the kops bringup of the cluster if it is not already up. If the cluster is up, then kops update cluster will not be re-run, to save time. The logic is to check the status of the kops Kubernetes cluster by running the kops create cluster command inside several try and catch(error) structures. If the command errors out, that means the cluster is already up and Helm can be re-deployed with the latest source code. Then, since the code is inside a catch(error) block, set the build to SUCCESS and return so that the current stage exits with success and the next stage can be executed; otherwise the pipeline would abort.
The code to do this is in the catch(error) block:
  currentBuild.result = 'SUCCESS'
  return

If the kops create cluster command does not error out, the script proceeds to the next try and catch block to do the kops update cluster and bring up the kops cluster. After that, Helm is deployed and the pipeline proceeds to the next stage.

## The code block for this logic can be found in the Jenkinsfile.gitlabtojenkins file in the following stage:

        //Try to automate the kops setup by putting in a large delay in the shell script after each command. Otherwise
        //write code for the kops validate cluster command and parse out if up or not
        stage('Kops setup and helm kubernetes deployment') { .......
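
A condensed sketch of the logic inside that stage, based on the description above; the kops command arguments are abbreviated here (the real file carries the full flags and S3 state store shown in the staging section), and the 600-second delay is an illustrative value:

stage('Kops setup and helm kubernetes deployment') {
    agent { label 'KOPS' }
    steps {
        script {
            try {
                // Errors out if the cluster configuration already exists in the S3 state store,
                // i.e. the cluster is already up.
                sh "kops create cluster ..."   // abbreviated; full flags as in the staging section above
            }
            catch (error) {
                // Cluster already up: just redeploy helm with the latest image and exit this stage successfully.
                echo "kops create cluster errored, cluster assumed to be up: ${error}"
                sh "helm upgrade --install --force vprofile-stack helm/vprofilecharts --set appimage=${registry}:${BUILD_NUMBER} --namespace prod"
                currentBuild.result = 'SUCCESS'
                return
            }
            try {
                // First run: bring the cluster up, then wait for it before deploying.
                sh "kops update cluster ... --yes --admin"   // abbreviated
                sh "sleep 600"   // large delay after the kops command instead of parsing kops validate cluster output
            }
            catch (error) {
                echo "kops update cluster failed: ${error}"
                throw error
            }
            sh "helm upgrade --install --force vprofile-stack helm/vprofilecharts --set appimage=${registry}:${BUILD_NUMBER} --namespace prod"
        }
    }
}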




## Prerequisites
- JDK 1.8 or later
- Maven 3 or later
- MySQL 5.6 or later

## Technologies 
- Spring MVC
- Spring Security
- Spring Data JPA
- Maven
- JSP
- MySQL
## Database
Here, we used MySQL DB.
MySQL DB installation steps for Linux Ubuntu 14.04:
- $ sudo apt-get update
- $ sudo apt-get install mysql-server

Then look for the file:
- /src/main/resources/accountsdb
- accountsdb.sql is a MySQL dump file; we have to import this dump into the MySQL DB server:
- > mysql -u <user_name> -p accounts < accountsdb.sql

