This repository contains a full-stack Flask + React three-tier web application deployed on Kubernetes using GitOps. It demonstrates modern DevOps practices with a complete CI/CD pipeline.
- Frontend: React app (served via a Kubernetes Deployment and Service).
- Backend: Flask API (packaged as a container, deployed via a Kubernetes Deployment and Service).
- Database: PostgreSQL deployed as a StatefulSet for persistent data storage. PersistentVolumeClaims (PVCs) are used to ensure data durability across pod restarts.
- Ingress: Requests are routed via Ingress-NGINX, with path-based routing:
  - `/api` → backend service
  - `/` → frontend service
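The database tier described above can be sketched as a StatefulSet with a `volumeClaimTemplate`, which is what gives each replica a PVC that survives pod restarts. This is an illustrative sketch only; resource names, the image tag, storage size, and labels are assumptions, not the chart's actual manifest:

```yaml
# Illustrative sketch -- field names follow the Kubernetes API,
# but names, labels, and sizes here are assumed.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres            # assumed headless Service name
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16       # assumed version
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one PVC per replica; outlives pod restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```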
| Category | Technology |
| --- | --- |
| Frontend | React, Nginx |
| Backend | Python, Flask, Flask-SQLAlchemy, Flask-JWT-Extended |
| Database | PostgreSQL |
| Containerization | Docker |
| Orchestration | Kubernetes (AKS, EKS, GKE, Minikube) |
| CI/CD Automation | GitHub Actions |
| Continuous Deploy | Argo CD (GitOps) |
| IaC & Packaging | Helm |
| Monitoring | Prometheus, Grafana |
| Code Analysis | SonarQube |
| Container Registry | Docker Hub |
The monorepo is organized to separate the frontend, backend, and infrastructure code, making it modular and easy to navigate.
```
flask-react-k8s/
├── .github/workflows/              # CI/CD pipelines for GitHub Actions
│   ├── backend-ci-cd.yaml          # Workflow for the Python backend
│   └── frontend-ci-cd.yaml         # Workflow for the React frontend
├── argocd/                         # Argo CD application manifest for GitOps
│   └── argocd-application.yaml
├── helm/simplenotes-app-chart/     # Helm chart for deploying all application components
│   ├── templates/                  # Kubernetes manifest templates (Deployments, Services, Ingress, StatefulSet, etc.)
│   ├── Chart.yaml
│   └── values.yaml                 # Default configuration values for the chart (image tags, replicas, etc.)
├── simplenotes/
│   ├── backend/                    # Source code for the Flask backend
│   │   ├── Dockerfile
│   │   └── sonar-project.properties  # SonarQube config for the backend
│   └── frontend/                   # Source code for the React frontend
│       ├── Dockerfile
│       └── sonar-project.properties  # SonarQube config for the frontend
└── README.md                       # This main project guide
```
The entire pipeline is automated via GitHub Actions and operates on a GitOps principle. When code is pushed to the master branch in either the simplenotes/frontend or simplenotes/backend directories, the respective workflow is triggered.
- Build & Test: The workflow sets up the required environment (Python/Node.js), installs dependencies, and runs any automated tests.
- Code Analysis: SonarQube scans the codebase to detect bugs, vulnerabilities, and code smells, acting as an automated quality gate.
- Containerization: A Docker image is built using the project's `Dockerfile`.
- Push to Registry: The new image is tagged with a unique GitHub Actions run ID and pushed to Docker Hub.
- Helm Chart Update: The pipeline automatically checks out the repository and updates the corresponding `image.tag` in the Helm chart's `values.yaml` file.
- GitOps Trigger: The updated `values.yaml` is committed and pushed back to the GitHub repository.
- Automated Deployment: Argo CD, which is watching the repository, detects the change in the Helm chart and automatically syncs the deployment to the Kubernetes cluster, triggering a zero-downtime rolling update.
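The steps above can be condensed into a workflow sketch. This is illustrative only: the job layout, image name, and the `sed`-based `values.yaml` update are assumptions, not the repository's actual workflow files (login, tests, and the SonarQube scan are elided for brevity):

```yaml
# Illustrative sketch of the backend pipeline; names and steps are assumed.
name: backend-ci-cd
on:
  push:
    branches: [master]
    paths: ['simplenotes/backend/**']
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # (tests, SonarQube scan, and docker login omitted for brevity)
      - name: Build and push image
        env:
          IMAGE: docker.io/your-user/simplenotes-backend   # assumed image name
        run: |
          docker build -t "$IMAGE:$GITHUB_RUN_ID" simplenotes/backend
          docker push "$IMAGE:$GITHUB_RUN_ID"
      - name: Bump image tag in Helm values (GitOps trigger)
        run: |
          sed -i "s/tag:.*/tag: \"$GITHUB_RUN_ID\"/" helm/simplenotes-app-chart/values.yaml
          git config user.name  ci-bot
          git config user.email ci-bot@users.noreply.github.com
          git commit -am "ci: update backend image tag to $GITHUB_RUN_ID"
          git push
```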
To ensure secure communication, the application uses cert-manager to automatically provision and manage TLS certificates. This enables HTTPS, encrypting all traffic between users and the application.
- Self-Signed Issuer: For development and personal projects without a registered domain name, a `SelfSigned` `ClusterIssuer` is used. `cert-manager` automatically generates a self-signed certificate and injects it into the Ingress resource, removing the need for manual certificate management.
- Ingress Configuration: The Ingress resource is annotated to request a certificate from our self-signed issuer, which is then stored in a Kubernetes Secret and used by the NGINX Ingress controller.
`helm/simplenotes-app-chart/templates/ingress.yml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simplenotes-ingress
  annotations:
    # Use the selfsigned-issuer to get a certificate
    cert-manager.io/cluster-issuer: "selfsigned-issuer"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - {{ .Values.ingress.host | default "simplenotes.com" }}
      # cert-manager will store the certificate in this secret
      secretName: simplenotes-tls-selfsigned
# ...
```
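For reference, the self-signed issuer itself is a tiny manifest. A minimal sketch (the metadata name must match the `cert-manager.io/cluster-issuer: "selfsigned-issuer"` annotation on the Ingress; whether it lives in this chart or is applied separately is an assumption):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # must match the Ingress annotation
spec:
  selfSigned: {}
```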
To build a resilient and self-healing system, the frontend and backend deployments are configured with Kubernetes health probes.
- Liveness Probes: These probes check if the application is still running. If a probe fails, Kubernetes automatically restarts the container, recovering it from a frozen or deadlocked state.
- Readiness Probes: These probes check if the application is ready to accept traffic. If a probe fails, Kubernetes stops sending new requests to the pod until it becomes ready again. This is crucial for preventing errors during startup or when the application is temporarily overloaded.
A dedicated /health endpoint was added to the Flask backend to provide an accurate health status.
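A minimal sketch of such an endpoint is shown below. This is illustrative only; the actual backend's handler and response shape may differ, and a production-grade check might also verify database connectivity:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Cheap check answered by both liveness and readiness probes.
    # A stricter readiness check could also ping the database here.
    return jsonify(status="healthy"), 200
```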
`helm/simplenotes-app-chart/templates/backend-deployment.yml`

```yaml
# ...
containers:
  - name: backend
    # ...
    readinessProbe:
      httpGet:
        path: /health
        port: 5001
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /health
        port: 5001
      initialDelaySeconds: 15
      periodSeconds: 20
# ...
```

Install SonarQube on a server (Ubuntu example):

```bash
sudo apt update
sudo apt install openjdk-17-jdk unzip wget -y
sudo adduser sonarqube
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.9.4.87374.zip
sudo unzip sonarqube-9.9.4.87374.zip
sudo mv sonarqube-9.9.4.87374 sonarqube
sudo chown -R sonarqube:sonarqube /opt/sonarqube
sudo chmod -R 775 /opt/sonarqube
sudo su -s /bin/bash sonarqube
cd /opt/sonarqube/bin/linux-x86-64/
./sonar.sh start
./sonar.sh status
```

Access SonarQube at http://<your_server_ip>:9000 (default credentials: admin / admin). Create a project and generate a token.
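The per-component `sonar-project.properties` files referenced in the repository layout are small configs that point the scanner at this server. A hedged sketch for the backend (the project key and source path are assumptions, not the repository's actual values):

```properties
# Illustrative sketch; the real file is simplenotes/backend/sonar-project.properties
sonar.projectKey=simplenotes-backend
sonar.sources=.
sonar.python.version=3
```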
- Fork this repository to your own GitHub account.
- Kubernetes Cluster: Have access to a running Kubernetes cluster.
- Tools: Ensure `git`, `docker`, `kubectl`, and `helm` are installed locally.
- SonarQube Server: Set up a SonarQube instance (or use SonarCloud) and have the host URL and an access token ready.
In your forked repository, go to Settings > Secrets and variables > Actions and create the following secrets:
| Type | Name | Description |
| --- | --- | --- |
| Secret | | Your Docker Hub username. |
| Secret | | Your Docker Hub password or access token. |
| Secret | | GitHub PAT with permission to push commits back to this repository (used for the automated Helm values update). |
| Secret | | Your SonarQube analysis token. |
| Secret | | Your SonarQube server URL (e.g., http://<your_server_ip>:9000). |
a. Install NGINX Ingress Controller:
An Ingress controller is required to expose your services. For most cloud providers:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
```
b. Install ArgoCD:
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
c. Access ArgoCD:
By default, the Argo CD API server is not exposed with an external IP. To access it, use one of the following techniques.

Service Type: LoadBalancer

Change the argocd-server service type to LoadBalancer:
```bash
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```
After a short wait, your cloud provider will assign an external IP address to the service. You can retrieve this IP with:
```bash
kubectl get svc argocd-server -n argocd -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
Below is a screenshot of the ArgoCD UI showing the health and sync status of all application components:
Kubectl port-forwarding can also be used to connect to the API server without exposing the service.
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
The API server can then be accessed using https://localhost:8080
Retrieve the initial admin password:

```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
```
Now, navigate to https://localhost:8080 and log in with username admin and the retrieved password.
If you are using a managed Kubernetes cluster, follow your provider's documentation to install these components.
The argocd/argocd-application.yaml file defines your entire application stack for ArgoCD.
Important: Before applying, edit argocd/argocd-application.yaml and change the repoURL to point to your forked repository's URL.
```yaml
# argocd/argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
# ...
spec:
  source:
    # CHANGE THIS TO YOUR FORKED REPO URL
    repoURL: 'https://github.com/YOUR-USERNAME/flask-react-k8s.git'
    path: 'helm/simplenotes-app-chart'
# ...
```
Apply this manifest to your cluster. ArgoCD will immediately deploy the entire application suite.
```bash
kubectl apply -f argocd/argocd-application.yaml
```
ArgoCD will now ensure your cluster's state always matches your Git repository.
You need to map the application's hostname to your Ingress controller's external IP address.
a. Find your Ingress IP:
```bash
kubectl get svc -n ingress-nginx  # Or the namespace where your controller is
# Look for the EXTERNAL-IP of the LoadBalancer service
```
b. Edit your local hosts file:

- macOS/Linux: `sudo nano /etc/hosts`
- Windows: `C:\Windows\System32\drivers\etc\hosts` (as Administrator)

Add the following line:

```
<YOUR_INGRESS_IP> simplenotes.com prometheus.local grafana.local
```
- SimpleNotes Application: http://simplenotes.com
- Prometheus: http://prometheus.local
- Grafana: http://grafana.local
This section details how to set up a robust monitoring stack for the Flask backend using Prometheus for metrics collection and Grafana for visualization.
- Flask Application: The backend is instrumented with `prometheus-flask-exporter`, which exposes an HTTP endpoint at `/metrics` with key performance indicators (KPIs) like request latency and counts.
- ServiceMonitor: This custom Kubernetes resource declaratively tells the Prometheus Operator to find our application's Service (via labels) and automatically begin scraping its `/metrics` endpoint. This avoids manual Prometheus configuration.
- Prometheus: Scrapes and stores the metrics from our application as time-series data.
- Grafana: Visualizes the data stored in Prometheus. We will import a pre-built dashboard to get started quickly.
- Ingress: Exposes the Prometheus and Grafana web UIs on user-friendly hostnames.
We will use the kube-prometheus-stack Helm chart, which bundles Prometheus, Grafana, and the crucial Prometheus Operator.
a. Add the required Helm repository:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
b. Install the chart:
This command installs the stack into a dedicated monitoring namespace.
```bash
helm install my-kube-prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
```
The ServiceMonitor for the backend is already included in the Helm chart at helm/simplenotes-app-chart/templates/servicemonitor.yaml.
The crucial part is the label release: my-kube-prometheus. This label on the ServiceMonitor tells the specific Prometheus instance installed by the Helm chart to pay attention to it. Prometheus will then look for any Service in the app namespace that has the label app: backend and begin scraping its /metrics endpoint.
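A minimal sketch of what that ServiceMonitor looks like, based on the labels described above (the resource name and the Service port name are assumptions; the authoritative version lives in the chart's `templates/servicemonitor.yaml`):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-servicemonitor    # assumed name
  labels:
    release: my-kube-prometheus   # lets this Prometheus instance discover it
spec:
  namespaceSelector:
    matchNames: ["app"]
  selector:
    matchLabels:
      app: backend                # matches the backend Service's labels
  endpoints:
    - port: http                  # assumed Service port name
      path: /metrics
```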
The Ingress rules for Prometheus and Grafana are already included in the Helm chart. Once the main application is deployed via ArgoCD, these will be created automatically.
To access them, you must configure your local DNS.
a. Find your Ingress Controller's External IP:
```bash
# The namespace may vary based on your installation
kubectl get svc -n ingress-nginx
```
b. Edit your local hosts file:
Add the following line, replacing <YOUR_INGRESS_IP> with the IP from the previous step.
```
<YOUR_INGRESS_IP> simplenotes.com prometheus.local grafana.local
```
a. Get the Grafana admin password:
```bash
kubectl get secret --namespace monitoring my-kube-prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
b. Log in to Grafana:
Open your browser and navigate to http://grafana.local. Log in with the username admin and the password you just retrieved.
c. Import a Dashboard:
- In the Grafana UI, go to Dashboards, click New, and then Import.
- In the "Import via grafana.com" box, enter the dashboard ID: 14227 (Flask Metrics).
- Click Load.
- On the next screen, select your Prometheus data source at the bottom.
- Click Import.
You will now have a dashboard visualizing the live metrics from your Flask backend! Note: Data will only appear after you have generated traffic by using the web application (e.g., logging in, creating notes).
To run the entire application stack locally for development, you can use the provided Docker Compose file.
```bash
# From the repository root, enter the simplenotes/ directory
cd simplenotes
docker-compose up --build
```
- Frontend will be available at http://localhost:3000
- Backend API will be available at http://localhost:5001
- PostgreSQL will be available on port 5432
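A hedged sketch of what such a Compose file typically contains; only the ports above are taken from this project, while the service names, build contexts, and database credentials are assumptions, not the repository's actual file:

```yaml
# Illustrative docker-compose.yml sketch; see simplenotes/ for the real file.
services:
  frontend:
    build: ./frontend
    ports: ["3000:80"]            # assumed Nginx port inside the container
    depends_on: [backend]
  backend:
    build: ./backend
    ports: ["5001:5001"]
    environment:
      DATABASE_URL: postgresql://notes:notes@db:5432/notes   # assumed credentials
    depends_on: [db]
  db:
    image: postgres:16            # assumed version
    ports: ["5432:5432"]
    environment:
      POSTGRES_USER: notes        # assumed
      POSTGRES_PASSWORD: notes    # assumed
      POSTGRES_DB: notes          # assumed
```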
To avoid ongoing cloud provider costs, delete the resources when you are finished.
```bash
# Delete the application from the cluster
kubectl delete -f argocd/argocd-application.yaml

# Uninstall ArgoCD and monitoring
kubectl delete namespace argocd
kubectl delete namespace app
helm delete my-kube-prometheus -n monitoring
kubectl delete namespace monitoring

# Delete your cloud-provider Kubernetes cluster (example for AKS)
# az group delete --name "MyResourceGroup" --yes --no-wait
```
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions are welcome! Please open issues or submit pull requests for improvements.

