
๐Ÿ“ SimpleNotes โ€“ End-to-End DevOps Pipeline

This repository contains a full-stack Flask + React three-tier web application deployed on Kubernetes using GitOps. It demonstrates modern DevOps practices with a complete CI/CD pipeline.

  • Frontend: React app (served via Kubernetes Deployment and Service).

  • Backend: Flask API (packaged as a container, deployed via Kubernetes Deployment and Service).

  • Database: PostgreSQL deployed as a StatefulSet for persistent data storage. PersistentVolumeClaims (PVCs) ensure data durability across pod restarts.

  • Ingress: Requests are routed via Ingress-NGINX, with path-based routing:

    • /api → backend service
    • / → frontend service

(Diagram: simplenotes application architecture)
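
For illustration, the path-based routing above corresponds to Ingress rules roughly like the following (a sketch only; the Service names and ports are assumptions, and the real template is helm/simplenotes-app-chart/templates/ingress.yml):

# Sketch of the path-based routing rules (Service names and ports are assumptions)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simplenotes-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: simplenotes.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend          # hypothetical backend Service name
            port:
              number: 5001
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend         # hypothetical frontend Service name
            port:
              number: 80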


🚀 Technology Stack

| Category | Technology |
| --- | --- |
| Frontend | React, Nginx |
| Backend | Python, Flask, Flask-SQLAlchemy, Flask-JWT-Extended |
| Database | PostgreSQL |
| Containerization | Docker |
| Orchestration | Kubernetes (AKS, EKS, GKE, Minikube) |
| CI/CD Automation | GitHub Actions |
| Continuous Deployment | Argo CD (GitOps) |
| IaC & Packaging | Helm |
| Monitoring | Prometheus, Grafana |
| Code Analysis | SonarQube |
| Container Registry | Docker Hub |


๐Ÿ“ Project Structure

The monorepo is organized to separate the frontend, backend, and infrastructure code, making it modular and easy to navigate.

flask-react-k8s/
├── .github/workflows/           # CI/CD pipelines for GitHub Actions
│   ├── backend-ci-cd.yaml       # Workflow for the Python backend
│   └── frontend-ci-cd.yaml      # Workflow for the React frontend
├── argocd/                      # Argo CD application manifest for GitOps
│   └── argocd-application.yaml
├── helm/simplenotes-app-chart/  # Helm chart for deploying all application components
│   ├── templates/               # Kubernetes manifest templates (Deployments, Services, Ingress, StatefulSet, etc.)
│   ├── Chart.yaml
│   └── values.yaml              # Default configuration values for the chart (image tags, replicas, etc.)
├── simplenotes/
│   ├── backend/                 # Source code for the Flask backend
│   │   ├── Dockerfile
│   │   └── sonar-project.properties # SonarQube config for the backend
│   └── frontend/                # Source code for the React frontend
│       ├── Dockerfile
│       └── sonar-project.properties # SonarQube config for the frontend
└── README.md                    # This main project guide

โš™๏ธ Automated CI/CD Workflow

The entire pipeline is automated via GitHub Actions and operates on a GitOps principle. When code is pushed to the master branch in either the simplenotes/frontend or simplenotes/backend directories, the respective workflow is triggered.

Pipeline Stages

  1. Build & Test: The workflow sets up the required environment (Python/Node.js), installs dependencies, and runs any automated tests.

  2. Code Analysis: SonarQube scans the codebase to detect bugs, vulnerabilities, and code smells, acting as an automated quality gate.

  3. Containerization: A Docker image is built using the project's Dockerfile.

  4. Push to Registry: The new image is tagged with a unique GitHub Actions run ID and pushed to Docker Hub.

  5. Helm Chart Update: The pipeline automatically checks out the repository and updates the corresponding image.tag in the Helm chart's values.yaml file.

  6. GitOps Trigger: The updated values.yaml is committed and pushed back to the GitHub repository.

  7. Automated Deployment: ArgoCD, which is watching the repository, detects the change in the Helm chart and automatically syncs the deployment to the Kubernetes cluster, triggering a zero-downtime rolling update.
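
The sketch below condenses these stages into one illustrative backend job (the step names, the image name simplenotes-backend, and the sed command are assumptions; the real workflows live in .github/workflows/backend-ci-cd.yaml and frontend-ci-cd.yaml, and the SonarQube scan step is shown in the SonarQube section below):

# Condensed, illustrative backend workflow (names and commands are assumptions)
name: backend-ci-cd
on:
  push:
    branches: [master]
    paths: ['simplenotes/backend/**']

jobs:
  build-push-update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push image tagged with the run ID
        run: |
          echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
          docker build -t "${{ secrets.DOCKERHUB_USERNAME }}/simplenotes-backend:${{ github.run_id }}" simplenotes/backend
          docker push "${{ secrets.DOCKERHUB_USERNAME }}/simplenotes-backend:${{ github.run_id }}"

      - name: Update image.tag in the Helm chart (GitOps trigger)
        run: |
          # Illustrative only: assumes the backend tag key in values.yaml is matched by this pattern
          sed -i "s/^\(\s*tag:\).*/\1 \"${{ github.run_id }}\"/" helm/simplenotes-app-chart/values.yaml
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "ci: bump backend image tag to ${{ github.run_id }}"
          git push "https://x-access-token:${{ secrets.GIT_PAT }}@github.com/${{ github.repository }}.git" HEAD:master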


1. Automated SSL/TLS with cert-manager

To ensure secure communication, the application uses cert-manager to automatically provision and manage TLS certificates. This enables HTTPS, encrypting all traffic between users and the application.

  • Self-Signed Issuer: For development and personal projects without a registered domain name, a SelfSigned ClusterIssuer is used. cert-manager automatically generates a self-signed certificate and injects it into the Ingress resource, removing the need for manual certificate management.

  • Ingress Configuration: The Ingress resource is annotated to request a certificate from our self-signed issuer, which is then stored in a Kubernetes secret and used by the NGINX Ingress controller.

    helm/simplenotes-app-chart/templates/ingress.yml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: simplenotes-ingress
      annotations:
        # Use the selfsigned-issuer to get a certificate
        cert-manager.io/cluster-issuer: "selfsigned-issuer"
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - {{ .Values.ingress.host | default "simplenotes.com" }}
        # cert-manager will store the certificate in this secret
        secretName: simplenotes-tls-selfsigned
    # ...
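
The selfsigned-issuer referenced in the annotation above is a cluster-scoped cert-manager resource. A minimal version looks like this (a sketch; whether it ships in the chart templates or is applied separately is an assumption):

# Minimal SelfSigned ClusterIssuer (sketch); the name must match the
# cert-manager.io/cluster-issuer annotation on the Ingress
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}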

2. Health Probes for Self-Healing

To build a resilient and self-healing system, the frontend and backend deployments are configured with Kubernetes health probes.

  • Liveness Probes: These probes check if the application is still running. If a probe fails, Kubernetes automatically restarts the container, recovering it from a frozen or deadlocked state.

  • Readiness Probes: These probes check if the application is ready to accept traffic. If a probe fails, Kubernetes stops sending new requests to the pod until it becomes ready again. This is crucial for preventing errors during startup or when the application is temporarily overloaded.

A dedicated /health endpoint was added to the Flask backend to provide an accurate health status.

helm/simplenotes-app-chart/templates/backend-deployment.yml

# ...
      containers:
      - name: backend
        # ...
        readinessProbe:
          httpGet:
            path: /health
            port: 5001
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 5001
          initialDelaySeconds: 15
          periodSeconds: 20
# ...

SonarQube Installation

sudo apt update
sudo apt install openjdk-17-jdk unzip wget -y
sudo adduser sonarqube
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.9.4.87374.zip
sudo unzip sonarqube-9.9.4.87374.zip
sudo mv sonarqube-9.9.4.87374 sonarqube
sudo chown -R sonarqube:sonarqube /opt/sonarqube
sudo chmod -R 775 /opt/sonarqube
sudo su -s /bin/bash sonarqube
cd /opt/sonarqube/bin/linux-x86-64/
./sonar.sh start
./sonar.sh status

Access at: http://<your_server_ip>:9000 (default credentials: admin / admin). Create a project and generate a token.
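
In CI, the analysis can then run as a workflow step along these lines (a sketch using the official SonarSource scan action; the exact step in .github/workflows/*.yaml may differ):

# Illustrative SonarQube scan step for the backend (paths and inputs are assumptions)
- name: SonarQube scan
  uses: sonarsource/sonarqube-scan-action@v4
  with:
    projectBaseDir: simplenotes/backend   # directory containing sonar-project.properties
  env:
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
    SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}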


🚀 End-to-End Deployment Guide

Step 1: Prerequisites & Setup

  • Fork this repository to your own GitHub account.

  • Kubernetes Cluster: Have access to a running Kubernetes cluster.

  • Tools: Ensure git, docker, kubectl, and helm are installed locally.

  • SonarQube Server: Set up a SonarQube instance (or use SonarCloud) and have the host URL and an access token ready.

Step 2: Configure GitHub Secrets

In your forked repository, go to Settings > Secrets and variables > Actions and create the following secrets:

| Type | Name | Description |
| --- | --- | --- |
| Secret | DOCKERHUB_USERNAME | Your Docker Hub username. |
| Secret | DOCKERHUB_PASSWORD | Your Docker Hub password or access token. |
| Secret | GIT_PAT | GitHub PAT with repo scope to update the Helm chart. |
| Secret | SONAR_TOKEN | Your SonarQube analysis token. |
| Secret | SONAR_HOST_URL | Your SonarQube server URL (e.g., http://IP:9000). |

Step 3: Kubernetes & ArgoCD Setup

a. Install NGINX Ingress Controller:

An Ingress controller is required to expose your services. For most cloud providers:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml

b. Install ArgoCD:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

c. Access ArgoCD:

By default, the Argo CD API server is not exposed with an external IP. Choose one of the following techniques to expose it.

Service Type LoadBalancer

Change the argocd-server service type to LoadBalancer:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

After a short wait, your cloud provider will assign an external IP address to the service. You can retrieve this IP with:

kubectl get svc argocd-server -n argocd -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'

ArgoCD Application Overview

(Screenshot: the ArgoCD application tree showing the health and sync status of all application components.)


Port Forwarding

Kubectl port-forwarding can also be used to connect to the API server without exposing the service.

kubectl port-forward svc/argocd-server -n argocd 8080:443

The API server can then be accessed using https://localhost:8080

Get initial password

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

Now, navigate to https://localhost:8080 and log in with username admin and the retrieved password.

If you are using a managed Kubernetes cluster, follow your provider's documentation to install these components.

Step 4: Deploy the Application with ArgoCD

The argocd/argocd-application.yaml file defines your entire application stack for ArgoCD.

Important: Before applying, edit argocd/argocd-application.yaml and change the repoURL to point to your forked repository's URL.


# argocd/argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
# ...
spec:
  source:
    # CHANGE THIS TO YOUR FORKED REPO URL
    repoURL: 'https://github.com/YOUR-USERNAME/flask-react-k8s.git'
    path: 'helm/simplenotes-app-chart'
# ...

Apply this manifest to your cluster. ArgoCD will immediately deploy the entire application suite.

kubectl apply -f argocd/argocd-application.yaml

ArgoCD will now ensure your cluster's state always matches your Git repository.
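
Automatic syncing is normally driven by a syncPolicy on the Application resource. The fields below are a sketch of what enables it (the actual argocd/argocd-application.yaml may differ; the app namespace matches the one used elsewhere in this guide):

# Sketch of the fields that enable automated, self-healing sync
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: app            # assumption: the chart deploys into the "app" namespace
  syncPolicy:
    automated:
      prune: true             # remove resources that were deleted from Git
      selfHeal: true          # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true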


Accessing the Deployed Application

1. Configure Local DNS

You need to map the application's hostname to your Ingress controller's external IP address.

a. Find your Ingress IP:

kubectl get svc -n ingress-nginx # Or the namespace where your controller is
# Look for the EXTERNAL-IP of the LoadBalancer service

b. Edit your local hosts file:

  • macOS/Linux: sudo nano /etc/hosts

  • Windows: C:\Windows\System32\drivers\etc\hosts (as Administrator)

Add the following line:

<YOUR_INGRESS_IP>   simplenotes.com prometheus.local grafana.local

2. Access the UIs


📊 Monitoring with Prometheus & Grafana

This section details how to set up a robust monitoring stack for the Flask backend using Prometheus for metrics collection and Grafana for visualization.

Overview of the Monitoring Architecture

  1. Flask Application: The backend is instrumented with prometheus-flask-exporter, which exposes an HTTP endpoint at /metrics with key performance indicators (KPIs) like request latency and counts.

  2. ServiceMonitor: This custom Kubernetes resource declaratively tells the Prometheus Operator to find our application's service (via labels) and automatically begin scraping its /metrics endpoint. This avoids manual Prometheus configuration.

  3. Prometheus: Scrapes and stores the metrics from our application as time-series data.

  4. Grafana: Visualizes the data stored in Prometheus. We will import a pre-built dashboard to get started quickly.

  5. Ingress: Exposes the Prometheus and Grafana web UIs on user-friendly hostnames.


Step-by-Step Monitoring Setup

Step 1: Deploy the Prometheus & Grafana Stack

We will use the kube-prometheus-stack Helm chart, which bundles Prometheus, Grafana, and the crucial Prometheus Operator.

a. Add the required Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

b. Install the chart:

This command installs the stack into a dedicated monitoring namespace.

helm install my-kube-prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace

Step 2: Enable Prometheus to Monitor Your App

The ServiceMonitor for the backend is already included in the Helm chart at helm/simplenotes-app-chart/templates/servicemonitor.yaml.

The crucial part is the label release: my-kube-prometheus. This label on the ServiceMonitor tells the specific Prometheus instance installed by the Helm chart to pay attention to it. Prometheus will then look for any Service in the app namespace that has the label app: backend and begin scraping its /metrics endpoint.
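
For reference, the essential shape of that ServiceMonitor is shown below (a sketch; the resource name and the metrics port name are assumptions, see the chart template for the real values):

# Sketch of the essentials of servicemonitor.yaml (name and port name are assumptions)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-servicemonitor      # hypothetical name
  labels:
    release: my-kube-prometheus     # lets the operator's Prometheus select this monitor
spec:
  selector:
    matchLabels:
      app: backend                  # matches the backend Service's label
  endpoints:
    - port: http                    # assumption: the named metrics port on the Service
      path: /metrics
      interval: 30s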


Step 3: Access Prometheus and Grafana via Ingress

The Ingress rules for Prometheus and Grafana are already included in the Helm chart. Once the main application is deployed via ArgoCD, these will be created automatically.

To access them, you must configure your local DNS.

a. Find your Ingress Controller's External IP:

# The namespace may vary based on your installation
kubectl get svc -n ingress-nginx

b. Edit your local hosts file:

Add the following line, replacing <YOUR_INGRESS_IP> with the IP from the previous step.

<YOUR_INGRESS_IP>   simplenotes.com prometheus.local grafana.local

Step 4: Access Grafana and Visualize

a. Get the Grafana admin password:

kubectl get secret --namespace monitoring my-kube-prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

b. Log in to Grafana:

Open your browser and navigate to http://grafana.local. Log in with the username admin and the password you just retrieved.

c. Import a Dashboard:

  1. In the Grafana UI, go to Dashboards, click New, and then Import.

  2. In the "Import via grafana.com" box, enter the dashboard ID: 14227 (Flask Metrics).

  3. Click Load.

  4. On the next screen, select your Prometheus data source at the bottom.

  5. Click Import.

You will now have a dashboard visualizing the live metrics from your Flask backend! Note: Data will only appear after you have generated traffic by using the web application (e.g., logging in, creating notes).


๐Ÿ› ๏ธ Local Development

To run the entire application stack locally for development, you can use the provided Docker Compose file.

# From the simplenotes/ directory
cd simplenotes
docker-compose up --build
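
The compose file wires the three tiers together roughly as follows (a sketch only; service names, ports, and environment variables are assumptions, see the actual docker-compose.yml in simplenotes/):

# Rough shape of the local three-tier stack (all values are assumptions)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: notes
      POSTGRES_PASSWORD: notes
      POSTGRES_DB: notes
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgresql://notes:notes@db:5432/notes   # hypothetical variable name
    ports:
      - "5001:5001"
    depends_on:
      - db
  frontend:
    build: ./frontend
    ports:
      - "3000:80"
    depends_on:
      - backend
volumes:
  pgdata: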

🧹 Cleanup

To avoid ongoing cloud provider costs, delete the resources when you are finished.

# Delete the application from the cluster
kubectl delete -f argocd/argocd-application.yaml

# Uninstall ArgoCD and monitoring
kubectl delete namespace argocd
kubectl delete namespace app
helm delete my-kube-prometheus -n monitoring
kubectl delete namespace monitoring

# Delete your cloud-provider Kubernetes cluster (example for AKS)
# az group delete --name "MyResourceGroup" --yes --no-wait

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

๐Ÿค Contributing

Contributions are welcome! Please open issues or submit pull requests for improvements.

